Dataset schema:
id: int64 (580 to 79M)
url: string (31 to 175 chars)
text: string (9 to 245k chars)
source: string (1 to 109 chars)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
10,795,528
https://en.wikipedia.org/wiki/Cylindrospermopsin
Cylindrospermopsin (abbreviated to CYN, or CYL) is a cyanotoxin produced by a variety of freshwater cyanobacteria. CYN is a polycyclic uracil derivative containing guanidino and sulfate groups. It is also zwitterionic, making it highly water soluble. CYN is toxic to liver and kidney tissue and is thought to inhibit protein synthesis and to covalently modify DNA and/or RNA. It is not known whether cylindrospermopsin is a carcinogen, but it appears to have no tumour-initiating activity in mice. CYN was first discovered after an outbreak of a mystery disease on Palm Island, Queensland, Australia. The outbreak was traced back to a bloom of Cylindrospermopsis raciborskii in the local drinking water supply, and the toxin was subsequently identified. Analysis of the toxin led to a proposed chemical structure in 1992, which was revised after synthesis was achieved in 2000. Several analogues of CYN, both toxic and non-toxic, have been isolated or synthesised. C. raciborskii has been observed mainly in tropical areas, but has also recently been discovered in temperate regions of Australia, North and South America, New Zealand and Europe. Although a CYN-producing strain of C. raciborskii has not been identified in Europe, several other cyanobacteria species occurring across the continent are able to synthesize the toxin. Discovery In 1979, 138 inhabitants of Palm Island, Queensland, Australia, were admitted to hospital, suffering various symptoms of gastroenteritis. All of these were children; in addition, 10 adults were affected but not hospitalised. Initial symptoms, including abdominal pain and vomiting, resembled those of hepatitis; later symptoms included kidney failure and bloody diarrhoea. Urine analysis revealed high levels of proteins, ketones and sugar in many patients, along with blood and urobilinogen in a smaller number of patients. The urine analysis, along with faecal microscopy and poison screening, could not provide a statistical link to the symptoms. All patients recovered within 4 to 26 days, and at the time there was no apparent cause for the outbreak. Initial theories about the cause included poor water quality and diet, but none was conclusive, and the illness was dubbed the “Palm Island Mystery Disease”. At the time, it was noticed that this outbreak coincided with a severe algal bloom in the local drinking water supply, and soon after the focus turned to the dam in question. An epidemiological study of this “mystery disease” later confirmed that the Solomon Dam was implicated, as those who became ill had used water from the dam. It became apparent that a recent treatment of the algal bloom with copper sulfate had caused lysis of the algal cells, releasing a toxin into the water. A study of the dam revealed that periodic blooms of algae were caused predominantly by three strains of cyanobacteria: two of the genus Anabaena, and Cylindrospermopsis raciborskii, previously unknown in Australian waters. A mouse bioassay of the three demonstrated that although the two Anabaena strains were non-toxic, C. raciborskii was highly toxic. Later isolation of the compound responsible led to the identification of the toxin cylindrospermopsin. A later report alternatively proposed that the excess copper in the water was the cause of the disease; the excessive dosing followed the use of least-cost contractors, unqualified in the field, to control the algae. 
Chemistry Structure determination Isolation of the toxin using cyanobacteria cultured from the original Palm Island strain was achieved by gel filtration of an aqueous extract, followed by reverse-phase HPLC. Structure elucidation was achieved via mass spectrometry (MS) and nuclear magnetic resonance (NMR) experiments, and a structure (later proven slightly incorrect) was proposed (Figure 1). This almost-correct molecule possesses a tricyclic guanidine group (rings A, B & C), along with a uracil ring (D). The zwitterionic nature of the molecule makes it highly water-soluble, as the presence of charged areas within the molecule creates a dipole effect, suiting the polar solvent. Sensitivity of key signals in the NMR spectrum to small changes in pH suggested that the uracil ring exists in a keto/enol tautomeric relationship, where a hydrogen transfer results in two distinct structures (Figure 2). It was originally proposed that a hydrogen bond between the uracil and guanidine groups in the enol tautomer would make this the dominant form. Analogues A second metabolite of C. raciborskii was identified from extracts of the cyanobacteria after the observation of a frequently occurring peak accompanying that of CYN during UV and MS experiments. Analysis by MS and NMR methods concluded that this new compound was missing the oxygen adjacent to the uracil ring, and it was named deoxycylindrospermopsin (Figure 3). In 1999, an epimer of CYN, named 7-epicylindrospermopsin (epiCYN), was also identified as a minor metabolite from Aphanizomenon ovalisporum. This occurred whilst isolating CYN from cyanobacteria taken from Lake Kinneret in Israel. The proposed structure of this molecule differed from CYN only in the orientation of the hydroxyl group adjacent to the uracil ring (Figure 4). Total synthesis Synthetic approaches to CYN started with the piperidine ring (A), and progressed to annulation of rings B and C. The first total synthesis of CYN was reported in 2000 through a 20-step process. Improvements to synthetic methods led to a revision of the stereochemistry of CYN in 2001. A synthetic process controlling each of the six stereogenic centres of epiCYN established that the original assignments of both CYN and epiCYN were in fact a reversal of the correct structures. An alternative approach by White and Hansen supported these absolute configurations (Figure 5). At the time of this corrected assignment, it was suggested that the enol form was not dominant. Stability One of the key factors associated with the toxicity of CYN is its stability. Although the toxin has been found to degrade rapidly in an algal extract when exposed to sunlight, it is resistant to degradation by changes in pH and temperature, and shows no degradation in either the pure solid form or in pure water. As a result, in turbid and unmoving water the toxin can persist for long periods, and although boiling water will kill the cyanobacteria, it may not remove the toxin. Toxicology Toxic effects Hawkins et al. demonstrated the toxic effects of CYN by mouse bioassay, using an extract of the original Palm Island strain. Acutely poisoned mice displayed anorexia, diarrhoea and gasping respiration. Autopsy results revealed haemorrhages in the lungs, livers, kidneys, small intestines and adrenal glands. Histopathology revealed dose-related necrosis of hepatocytes, lipid accumulation, and fibrin thrombi formation in blood vessels of the liver and lungs, along with varying epithelial cell necrosis in areas of the kidneys. 
A more recent mouse bioassay of the effects of cylindrospermopsin revealed an increase in liver weight at both lethal and non-lethal doses; in addition, the livers appeared dark-coloured. Extensive necrosis of hepatocytes was visible in mice administered a lethal dose, and some localised damage was also observed in mice administered a non-lethal dose. Toxicity An initial estimate of the toxicity of CYN in 1985 was that the LD50 at 24 hours was 64±5 mg of freeze-dried culture/kg of mouse body weight on intraperitoneal injection. A further experiment in 1997 measured the LD50 as 52 mg/kg at 24 hours and 32 mg/kg at 7 days, but the data suggested that another toxic compound was present in the isolate of sonicated cells used; predictions made by Ohtani et al. about the 24-hour toxicity were considerably higher, and it was proposed that another metabolite was present to account for the relatively low 24-hour toxicity level measured. Because the most likely human route of uptake of CYN is ingestion, oral toxicity experiments were conducted on mice. The oral LD50 was found to be 4.4–6.9 mg CYN/kg, and in addition to some ulceration of the oesophageal gastric mucosa, symptoms were consistent with those of intraperitoneal dosing. Stomach contents included culture material, which indicated that these LD50 figures might be overestimated. Another means of exposure to CYN is related to alterations in the gut microbiome by artificial sweeteners. A study of aspartame users conducted at Cedars-Sinai in Los Angeles by Ruchi Mathur, MD, detected CYN in the duodenum at levels four times above baseline, along with alterations in bacterial species. Mechanism of action Pathological changes associated with CYN poisoning were reported to occur in four distinct stages: inhibition of protein synthesis, proliferation of membranes, lipid accumulation within cells, and finally cell death. Examination of mice livers removed at autopsy showed that on intraperitoneal injection of CYN, after 16 hours ribosomes from the rough endoplasmic reticulum (rER) had detached, and at 24 hours, marked proliferation of the membrane systems of the smooth ER and Golgi apparatus had occurred. At 48 hours, small lipid droplets had accumulated in the cell bodies, and at 100 hours, hepatocytes in the hepatic lobules were destroyed beyond function. The process of protein synthesis inhibition has been shown to be irreversible, but it is not conclusively the mechanism of the compound's cytotoxicity. Froscio et al. proposed that CYN has at least two separate modes of action: the previously reported protein synthesis inhibition, and an as-yet unclear method of causing cell death. It has been shown that cells can remain viable for long periods (up to 20 hours) with 90% inhibition of protein synthesis. Since CYN is cytotoxic within 16–18 hours, it has been suggested that other mechanisms are the cause of cell death. Cytochrome P450 has been implicated in the toxicity of CYN, as blocking the action of P450 reduces the toxicity of CYN. It has been proposed that an activated P450-derived metabolite (or metabolites) of CYN is the main cause of toxicity. Shaw et al. demonstrated that the toxin could be metabolised in vivo, resulting in bound metabolites in the liver tissue, and that damage was more prevalent in rat hepatocytes than other cell types. Due to the structure of CYN, which includes sulfate, guanidine and uracil groups, it has been suggested that CYN acts on DNA or RNA. Shaw et al. 
reported covalent binding of CYN or its metabolites to DNA in mice, and DNA strand breakage has also been observed. Humpage et al. also supported this, and in addition postulated that CYN (or a metabolite) acts on either the spindle or the centromeres during cell division, inducing loss of whole chromosomes. The uracil group of CYN has been identified as a pharmacophore of the toxin. In two experiments, the vinylic hydrogen atom on the uracil ring was replaced with a chlorine atom to form 5-chlorocylindrospermopsin, and the uracil group was truncated to a carboxylic acid, to form cylindrospermic acid (Figure 6). Both products were assessed as being non-toxic, even at 50 times the LD50 of CYN. In the earlier determination of the structure of deoxycylindrospermopsin, a toxicity assessment of the compound was carried out. Mice injected intraperitoneally with deoxycylindrospermopsin at four times the 5-day median lethal dose of CYN showed no toxic effects. As this compound was shown to be relatively abundant, it was concluded that this analogue was comparatively non-toxic. Given that both CYN and epiCYN are toxic while deoxycylindrospermopsin is not, the hydroxyl group on the uracil bridge can be considered necessary for toxicity, in either orientation. As yet, the relative toxicities of CYN and epiCYN have not been compared. Biosynthesis The cylindrospermopsin biosynthetic gene cluster (BGC) was described from Cylindrospermopsis raciborskii AWT205 in 2008. Related toxic blooms and their impact Since the Palm Island outbreak, several other species of cyanobacteria have been identified as producing CYN: Anabaena bergii, Anabaena lapponica, Aphanizomenon ovalisporum, Umezakia natans, Raphidiopsis curvata, and Aphanizomenon issatschenkoi. In Australia, three main toxic cyanobacteria exist: Anabaena circinalis, Microcystis species and C. raciborskii. Of these, the latter, which produces CYN, has attracted considerable attention, not only due to the Palm Island outbreak, but also because the species is spreading to more temperate areas. Previously, the alga was classed as exclusively tropical, but it has recently been discovered in temperate regions of Australia, Europe, North and South America, and also New Zealand. In August 1997, three cows and ten calves died from cylindrospermopsin poisoning on a farm in northwest Queensland. A nearby dam containing an algal bloom was tested, and C. raciborskii was identified. Analysis by HPLC/mass spectrometry revealed the presence of CYN in a sample of the biomass. An autopsy of one of the calves reported a swollen liver and gall bladder, along with haemorrhages of the heart and small intestine. Histological examination of the hepatic tissue was consistent with that reported in CYN-affected mice. This was the first report of C. raciborskii causing mortality in animals in Australia. The effect of a bloom of C. raciborskii on an aquaculture pond in Townsville, Australia, was assessed in 1997. The pond contained Redclaw crayfish, along with a population of Lake Eacham Rainbowfish to control the excess food. Analysis revealed that the water contained both extracellular and intracellular CYN, and that the crayfish had accumulated this primarily in the liver but also in the muscle tissue. Examination of the gut contents revealed cyanobacterial cells, indicating that the crayfish had ingested intracellular toxin. An experiment using an extract of the bloom showed that the crayfish could also take up extracellular toxin directly into their tissues. 
Such bioaccumulation, particularly in the aquaculture industry, is of concern, especially when humans are the end users of the product. The impact of cyanobacterial blooms has been assessed in economic terms. In December 1991, the world's largest algal bloom occurred in Australia, where 1000 km of the Darling-Barwon River was affected. One million person-days of drinking water were lost, and the direct costs incurred totalled more than A$1.3 million. Moreover, 2000 site-days of recreation were also lost, and the economic cost was estimated at A$10 million, after taking into account indirectly affected industries such as tourism, accommodation and transport. Current methods of analysis in water samples Current methods include liquid chromatography coupled to mass spectrometry (LC-MS), mouse bioassay, protein synthesis inhibition assay, and reverse-phase HPLC-PDA (photodiode array) analysis. A cell-free protein synthesis assay has been developed that appears to be comparable to HPLC-MS. See also Cyanotoxin Lyngbyatoxin Microcystin Nodularin Saxitoxin Guanitoxin References Neurotoxins Nitrogen heterocycles Bacterial alkaloids Cyanotoxins Guanidine alkaloids Zwitterions Total synthesis Uracil derivatives Protein synthesis inhibitors Sulfate esters
Cylindrospermopsin
Physics,Chemistry
3,436
75,541,557
https://en.wikipedia.org/wiki/ESO%200313-192
ESO 0313-192 (also known as PGC 97372 and LO95 0313-192) is an edge-on spiral galaxy and double-lobed radio galaxy located around 1 billion light-years away in the constellation Eridanus. Its radio jets were discovered in 2003 by NASA, and its radio lobes are an estimated 1.5 million light years in diameter. It is part of the cluster Abell 428, and it has an active galactic nucleus. Characteristics ESO 0313-192 has a spiral shape similar to that of the Milky Way. It has a large central bulge, and arms speckled with brightly glowing gas inhabited by thick lanes of dark dust. Its companion galaxy is known as [LOY2001] J031549.8-190623. Jets, outbursts of superheated gas moving at close to the speed of light, have long been associated with the cores of giant elliptical galaxies, and galaxies in the process of merging. However, in an unexpected discovery, astronomers found ESO 0313-192 to have intense radio jets spewing out from its central supermassive black hole. The galaxy appears to have two more regions that are also strongly emitting in the radio part of the spectrum, making it rarer still. The discovery of these giant jets in 2003 has been followed by the identification of a further three spiral galaxies containing radio-emitting jets in recent years. This growing class of unusual spirals continues to raise significant questions about how jets are produced within galaxies, and how they are thrown out into the cosmos. Nucleus The core of ESO 0313-192 appears rather bright in Hubble composite images. The central supermassive black hole of ESO 0313-192 is known to be highly active, accounting for the unusually luminous bulge in the center of the galaxy. A disklike emission-line structure is seen around the nucleus, inclined by ~20° to the stellar disk but nearly perpendicular to the jets. This may represent the aftermath of a galaxy encounter in which gas is photoionized by a direct view of the nuclear continuum. The SMBH has a mass of ~. Properties Nearly all classic double-lobed radio galaxies have either an elliptical galaxy or some galactic merger at their center. However, there is one remarkable exception to this rule. In 2003, astronomers confirmed that ESO 0313-192, flanked by large, bright clouds of radio emissions, is in fact an edge-on spiral galaxy. Twisted Disc ESO 0313-192's dark plane of dust is distinctly twisted. This may be due to a collision or a near pass-by with a smaller galaxy, which may have sparked the galaxy's nucleus to life. Gallery External links ESO 0313-192 on NASA References Radio galaxies Eridanus (constellation) Spiral galaxies 97372 J03155208-1906442 Astronomical objects discovered in 2003 ESO objects
ESO 0313-192
Astronomy
638
53,382,545
https://en.wikipedia.org/wiki/Robot%20tax
A robot tax is a legislative strategy to disincentivize the replacement of workers by machines and bolster the social safety net for those who are displaced. While the automation of manual labour has been contemplated since before the Industrial Revolution, the issue has received increased discussion in the 21st century due to newer developments such as machine learning. Assessments of the risk vary widely, with one study finding that 47% of the workforce is automatable in the United States, and another study finding that this figure is 9% across 21 OECD countries. The idea of taxing companies for deploying robots is controversial, with opponents arguing that such measures will stifle innovation and impede the economic growth that technology has consistently brought in the past. Proponents have pointed to the phenomenon of "income polarization" which threatens the jobs of low-income workers who lack the means to enter the knowledge-based fields in high demand. Arguments for Support for an automation tax by American politicians can be traced back to 1940, when Joseph C. O'Mahoney tabled one such bill in the Senate. In 2017, San Francisco supervisor Jane Kim made these strategies the subject of a task force, stating that income disparity attributable to robots is widely visible in her district. In 2019, New York City mayor Bill de Blasio advocated for a robot tax during and after his presidential run. While crediting Andrew Yang for drawing attention to the issue, de Blasio stated that he had different policy goals and proposed making large corporations responsible for five years of income tax from jobs that are automated away. In 2017 in the UK, Labour leader Jeremy Corbyn called for a robot tax. Francisco Ossandón argues that at this stage of development, the idea of a limited robot tax could be addressed if it meets some requirements, such as: (i) it is paid by certain taxpayers that use robots (i.e. large companies); (ii) it is related to certain activities (i.e. some industrial and/or financial activities); (iii) it has a limited definition for robots (i.e. physical smart machines, or non-physical intelligent software in the case of financial activities); and (iv) it has a low tax rate. However, he does not see a case for a general robot tax. In a 2015 Reddit discussion, Stephen Hawking criticized machine owners for initiating a "lobby against wealth redistribution". Following Elon Musk's statement that universal basic income should offset the employment effects of robots, Bill Gates gave an interview in favour of a robot tax. Mark Cuban announced his support for a robot tax in 2017, citing an essay by Quincy Larson about the accelerating pace of technological unemployment. Tax law professor Xavier Oberson has called for robots to be tax-compliant so that government spending can continue even as the pool of taxable income for human workers decreases. Oberson's proposals suggest taxing robot owners until robots themselves have the ability to pay, pending further advances in artificial intelligence. Arguments against Critics including Jim Stanford and Tshilidzi Marwala have discussed the futility of a robot tax given the malleability in the definition of "robot". In particular, autonomous elements are present in many 21st-century devices that are not normally considered robots. Economist Yanis Varoufakis has discussed the additional complication of determining how much a human worker would have hypothetically made in a labour sector that has been dominated by robots for decades. 
He has instead proposed a variation of universal basic income called the "universal basic dividend" to combat income polarization. Robotics companies including Savioke and the Advancing Automation trade group have fought robot taxes, calling them an "innovation penalty". ABB Group CEO Ulrich Spiesshofer compared taxing robots to taxing software, and pointed to the fact that countries with a low unemployment rate have a high automation rate. EU Commissioner Andrus Ansip rejected the idea of a robot tax, stating that any jurisdiction implementing one would become less competitive as technological companies are incentivized to move elsewhere. The 2019 World Development Report, prepared by Simeon Djankov and Federica Saliola of the World Bank, opposed a robot tax, arguing that it would result in reduced productivity and increased tax avoidance by large corporations and their shareholders. Existing laws On August 6, 2017, South Korea, under President Moon, passed what has been called the first robot tax. Rather than taxing entities directly, the law reduces tax breaks that were previously awarded to investments into robotics. A robot tax had previously been part of Mady Delvaux's bill imposing ethical standards for robots in the European Union. However, the European Parliament rejected this aspect when it voted on the law. See also Disruptive innovation Guaranteed basic income Income inequality Technological unemployment References Universal basic income Robotics Tax Technological change
Robot tax
Engineering
985
8,749,476
https://en.wikipedia.org/wiki/Conocybe%20rugosa
Conocybe rugosa is a common species of mushroom that is widely distributed and especially common in the Pacific Northwest of the United States. It grows in woodchips, flowerbeds and compost. It has been found in Europe, Asia and North America. It contains the same mycotoxins as the death cap mushroom. Conocybe rugosa was originally described in the genus Pholiotina, and its morphology and a 2013 molecular phylogenetics study supported its continued classification there. Description Conocybe rugosa has a conical cap that expands to flat, usually with an umbo. It is less than 3 cm across, has a smooth brown top, and the margin is often striate. The gills are rusty brown, close, and adnexed. The stalk is 2 mm thick and 1 to 6 cm long, smooth, and brown, with a prominent and movable ring. The spores are rusty brown, and it may be difficult to identify the species without a microscope. Toxicity This species is deadly poisonous, the fruiting bodies containing alpha-amanitin, a cyclic peptide that is highly toxic to the liver and is responsible for many deaths by poisoning from mushrooms in the genera Amanita and Lepiota. They are sometimes mistaken for species of the genus Psilocybe due to their similar-looking caps. See also List of deadly fungi References Bolbitiaceae Deadly fungi Fungi described in 1898 Fungi of North America Fungi of Europe Fungi of Asia Taxa named by Charles Horton Peck Fungus species
Conocybe rugosa
Biology
309
1,828,004
https://en.wikipedia.org/wiki/Bareback%20%28sexual%20act%29
Bareback sex is physical sexual activity, especially sexual penetration, without the use of a condom. The term primarily refers to anal sex between men without the use of a condom, and may be distinguished from unprotected sex because bareback sex denotes the deliberate act of forgoing condom use. Etymology An LGBT slang term, bareback sex comes from the equestrian term bareback, which is the practice of riding a horse without a saddle. It is not known when the term (as sexual slang) was first used, although its use did gain momentum in the 1960s with the first appearance in print (as analogous reference) occurring in 1968. The term was used by G.I.s during the Vietnam War when sex without the use of a condom was known as "going in" or "riding" bareback. The term was included in the 1972 publication, Playboy's Book of Forbidden Words: A Liberated Dictionary of Improper English. The term appeared occasionally in print until the 1980s and then in context to the AIDS epidemic and the discussion of sexual practices. It did not have widespread use in LGBT culture until 1997, when there was an increase in discussion regarding condomless sex (as reflected in print publications). The term bareback sex is now used less frequently among heterosexuals. A 2009 survey by the New York City Department of Health and Mental Hygiene found that heterosexual women are more likely to have unprotected anal intercourse than gay and bisexual men. History and culture Initially used for contraceptive purposes, condoms also came to be used to limit or prevent sexually transmitted infections (STIs), even after other contraceptive methods were developed. As AIDS emerged and the sexual transmission of HIV became known in the 1980s, the use of condoms to prevent infection became much more widespread, especially among men who have sex with men (MSM) who engage in anal sex. At the beginning of the AIDS crisis, in the context of the invention and development of safe sex, the uptake of condoms among Western MSM was so widespread and effective that condom use became established as a norm for sex between men. From 1995, several high-profile HIV positive men declared their refusal to wear condoms with other HIV positive men in gay publications, dubbing the practice barebacking. While these early articulations of barebacking expressed a concern for HIV prevention, in that they generally referred to dispensing with condoms in the context of sex between people of the same HIV status, the moral panic which ensued was so pronounced that barebacking came to be framed as a rebellious and transgressive erotic practice for HIV positive and HIV negative people alike, irrespective of the risks of HIV transmission. Resurgence and stigma A resurgence of barebacking in first-world gay communities during the 1990s has been a frequent topic for gay columnists and editorialists in The Advocate, Genre magazine, and Out magazine. Many of these articles express concern over bareback sex's popularity, and liken it to irresponsible and reckless behavior, despite the fact that a third of gay men take part in the practice. Some of the reasons listed for the resurgence are: the recent perception of HIV as a treatable illness one can live with, insufficient sex education, use of drugs such as methamphetamine in sexual settings, and fetishization of bareback sex on various porn and dating sites. Academic works suggest that barebacking is a way to reach for transcendence, to overcome the boredom of everyday average life in a hyper-rationalized society. 
Some men are dispensing with condoms in the context of seroconcordant sex (sex between two men of the same HIV status). Early articulations of barebacking generally refer to sex between two HIV-positive men, whereby barebacking could be considered an early harm reduction strategy, similar to serosorting, which was later endorsed by some public health authorities in the USA. Bareback sex has also become more acceptable since the introduction of pre-exposure prophylaxis (PrEP). One of the most popular forms of PrEP is Truvada, a medication previously taken for HIV treatment that, when taken properly, has been shown to prevent HIV-negative users from contracting HIV from infected partners. While these drugs do not necessarily prevent the transmission of other STIs, they have stirred a discussion on what "safe" sex without the use of condoms really entails. A 2005 study concluded that the resurgence of barebacking led to an increase in sexually transmitted infections among the MSM community. The study found that of the 448 men who were familiar with barebacking, nearly half reported they had bareback sex in the last three months. In the San Francisco study, fewer men reported engaging in barebacking when the behavior was defined as intentional unprotected anal intercourse with a non-primary partner. Using this definition, 14% of the 390 men who were aware of barebacking reported engaging in the behavior in the past two years. The study also found that HIV-positive MSM were more likely to have bareback sex than were HIV-negative MSM. Gay pornographic films Bareback gay pornography was standard in "pre-condom" films from the 1970s and early 1980s. As awareness of the risk of AIDS developed, pornography producers came under pressure to use condoms, both for the health of the performers and to serve as role models for their viewers. By the early 1990s new pornographic videos usually featured the use of condoms for anal sex. However, beginning in the 1990s, an increasing number of studios have been devoted to the production of new films featuring men engaging in unprotected sex. Mainstream gay pornographic studios have also continued to feature the occasional bareback scene. Also, mainstream studios that consistently use condoms for anal sex scenes may sometimes choose editing techniques that make the presence of condoms somewhat ambiguous and less visually evident, and thus may encourage viewers to fantasize that barebacking is taking place, even though the performers are following safer-sex protocols. (In contrast, some mainstream directors use close-up shots of condom packets being opened, etc., to help clearly establish for the viewer that the sex is not bareback.) Health risks In addition to the risk of sexually transmitted infections, the risks of mechanical trauma are the same as for anal sex generally. Unprotected anal sex is a risk factor for formation of antisperm antibodies (ASA) in the recipient. In some people, ASA may cause autoimmune infertility. Antisperm antibodies can impair fertilization, negatively affect the implantation process, and impair growth and development of the embryo. See also Creampie (sexual act) HIV superinfection Saddlebacking Sexual practices between men References Further reading Race, K. (2010) "Engaging in a Culture of Barebacking: Gay Men and the Risk of HIV Prevention". In M. Davis & C. Squire (eds.) HIV Treatment and Prevention Technologies in International Perspective. Basingstoke: Palgrave MacMillan Nicolas Sheon and Aaron Plant, "Protease Dis-Inhibitors? 
The Gay Bareback Phenomenon," managingdesire.org. With a long list of further references. Riding Bareback: A Qualitative Examination of the Subjective Meanings Attached to Condomless Sex by MSM, Bruce W. Whitehead, Journal of Sex Research (Feb 2006) Sharif Mowlabocus, Justin Harbottle and Charlie Witzel, "What We Can't See? Understanding the Representations and Meanings of UAI [unprotected anal intercourse], Barebacking, and Semen Exchange in Gay Male Pornography", Journal of Homosexuality, vol. 61, no. 10, 2014, pp. 1462–1480. Anal sex Sexual acts Sexual slang HIV/AIDS LGBTQ slang
Bareback (sexual act)
Biology
1,588
22,819,907
https://en.wikipedia.org/wiki/Acaulospora%20scrobiculata
Acaulospora scrobiculata is a species of fungus in the family Acaulosporaceae. It forms arbuscular mycorrhiza and vesicles in roots. Originally described in Mexico, it is found throughout the world. References Diversisporales Fungus species
Acaulospora scrobiculata
Biology
60
2,150,099
https://en.wikipedia.org/wiki/MUSASINO-1
The MUSASINO-1 was one of the earliest electronic digital computers built in Japan. Construction started at the Electrical Communication Laboratories of NTT at Musashino, Tokyo in 1952 and was completed in July 1957. The computer was used until July 1962. Saburo Muroga, a University of Illinois visiting scholar and member of the ILLIAC I team, returned to Japan and oversaw the construction of MUSASINO-1. Using 519 vacuum tubes and 5,400 parametrons, the MUSASINO-1 possessed a magnetic core memory, initially of 32 (later expanded to 256) words. A word was composed of 40 bits, and two instructions could be stored in a single word. Addition time was clocked at 1,350 microseconds, multiplication at 6,800 microseconds, and division time at 26.1 milliseconds. The MUSASINO-1's instruction set was a superset of the ILLIAC I's instructions, so it could generally use the latter's software. However, many of the programs for the ILLIAC used some of the unused bits in the instructions to store data, and these would be interpreted as different instructions by the MUSASINO-1 control circuitry. See also FUJIC ILLIAC I List of vacuum-tube computers References Raúl Rojas and Ulf Hashagen, ed. The First Computers: History and Architectures. 2000, MIT Press. In memory of Saburo Muroga, CS @ Illinois Alumni Magazine, Summer 2011 External links Descriptions of the MUSASINO-1 and its immediate successors at the IPSJ Computer Museum IAS architecture computers 40-bit computers Vacuum tube computers Magnetic logic computers
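To make the compatibility caveat concrete, here is a minimal sketch of how data stored in "unused" instruction bits can decode differently on a stricter machine. The two-instructions-per-40-bit-word split matches the article; the 8-bit opcode and 12-bit address field widths are an ILLIAC-style assumption for illustration only, and the helper names are hypothetical.

```python
# Hypothetical layout (assumed, not from the article): a 40-bit word holds
# two 20-bit instructions, each an 8-bit opcode plus a 12-bit address field.

def unpack_word(word40: int) -> tuple[int, int]:
    """Split a 40-bit word into its left and right 20-bit instructions."""
    return (word40 >> 20) & 0xFFFFF, word40 & 0xFFFFF

def decode(instr20: int) -> tuple[int, int]:
    """Decode a 20-bit instruction into (opcode, address)."""
    return (instr20 >> 12) & 0xFF, instr20 & 0xFFF

# A program written for a machine that ignores the top two opcode bits can
# smuggle data flags into them; a control unit that assigns meaning to those
# bits (e.g. for a superset instruction set) then sees a different opcode.
instr = (0b01 << 18) | (0x05 << 12) | 0x1AB  # flag bit hidden in the opcode field
print([hex(v) for v in decode(instr)])  # ['0x45', '0x1ab'], not the intended 0x05
```

Whatever the real field widths were, the point is the same: bits that one decoder treats as don't-care are not guaranteed to be ignored by another.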
MUSASINO-1
Technology
349
25,220,785
https://en.wikipedia.org/wiki/Ministry%20of%20Mines%20and%20Energy%20%28Brazil%29
The Ministry of Mines and Energy (Ministério de Minas e Energia, MME) is a Brazilian government ministry established in 1960. It fosters investments in mining and energy-related activities, funds research and sets out government policies. Previously, mines and energy were the responsibility of the Ministry of Agriculture. The minister of mines and energy is Alexandre Silveira. List of ministers References External links Brazil Mines and Energy Brazil Ministries established in 1960 1960 establishments in Brazil
Ministry of Mines and Energy (Brazil)
Engineering
89
24,073,726
https://en.wikipedia.org/wiki/Cooler%20Master
Cooler Master Technology Inc. is a computer hardware manufacturer based in Taipei, Taiwan. Founded in 1992, the company produces computer cases, power supplies, air and liquid CPU coolers, laptop cooling pads, and computer peripherals. Alongside its retail business, Cooler Master is also an original equipment manufacturer of cooling devices for other manufacturers, including Nvidia (VGA coolers), AMD (CPU and VGA coolers), and EVGA (motherboard heatsinks). The company has sponsored major eSports events. Some of Cooler Master's products have won awards including the iF product design award. History Facilities The company headquarters of Cooler Master is located in Neihu District, Taipei City, Taiwan and has a manufacturing facility in Huizhou, China. To support international operations, the company also has branch offices in various continents, including the United States (Fremont, California and Chino, California), the Netherlands (Eindhoven), Italy (Milan), France (Paris), Germany (Augsburg), Russia (Moscow), and Brazil (São Paulo). Products As of March 2020, the company continues to release its own gaming headsets, with products like the MH670 headphones allowing customization through the Cooler Master configuration app, named Portal. It also makes esports mice. One of its main current products is the Hyper 212 Evo CPU cooler, which PCWorld says is "arguably the most popular CPU air cooler for the budget-minded crowd wanting to upgrade from the stock options." The company also makes other air coolers, liquid coolers, PC cases, fans, and power supplies. In January 2020, the company changed its thermal paste applicator to a new wide-tipped style, with Cooler Master stating that the change was intended to alarm fewer parents than the older syringe shape had. The KFConsole, a home video game console launched in December 2020, is part of a partnership between Cooler Master and KFC. CM Storm CM Storm is a subsidiary brand created in 2008. Products are developed using research collected from partnerships with gaming organizations in eSports including Mousesports and Frag Dominant. In September 2009, CM Storm launched the Sentinel Advanced mouse with a programmable organic light-emitting diode display. In January 2012, CM Storm launched the QuickFire Ultimate keyboard. In January 2013, CM Storm launched the Sirus S gaming headset featuring an inline remote with volume control and microphone mute button. As of 2018, Cooler Master no longer sells products under the CM Storm brand. In June 2024, Cooler Master launched its 57-inch GP57ZS gaming monitor featuring DUHD resolution, mini LED, and VRR technology. See also List of companies of Taiwan List of computer hardware manufacturers Noctua (company) References External links Global home page 1992 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Computer enclosure companies Computer power supply unit manufacturers Technology companies established in 1992 Manufacturing companies based in New Taipei Electronics companies of Taiwan Computer hardware cooling Taiwanese brands Computer peripheral companies Computer keyboard companies
Cooler Master
Technology
614
75,731,313
https://en.wikipedia.org/wiki/Metamitron
Metamitron is an organic compound used as a selective pre- and post-emergence herbicide in sugar beets. It is used in the European Union for weed suppression in sugar beets. Metamitron is marketed under the trade name Goltix by ADAMA in Europe, the United Kingdom, New Zealand, and South Africa. Metamitron is a triazinone herbicide. It possesses a triazine ring like other organic compounds that use cyanuric chloride as a precursor. It is a modification of the chemical 1,2,4-triazin-5(4H)-one, with methyl, amino, and phenyl group substitutions at positions 3, 4, and 6. Metamitron is in HRAC mode of action Group 5. It functions as an inhibitor of photosystem II (PSII) by binding to serine 264 on the D1 protein. Resistance to metamitron has been found in Chenopodium album growing as a weed among sugar beet fields in Belgium, caused by a mutation at serine 264. Metamitron has moderate acute oral and inhalation toxicity. See also Atrazine Hexazinone Metribuzin References Herbicides Triazines
Metamitron
Biology
249
20,068,107
https://en.wikipedia.org/wiki/Danon%20disease
Danon disease (or glycogen storage disease Type IIb) is a metabolic disorder. Danon disease is an X-linked lysosomal and glycogen storage disorder associated with hypertrophic cardiomyopathy, skeletal muscle weakness, and intellectual disability. It is inherited in an X-linked dominant pattern. Symptoms and signs Males In males the symptoms of Danon disease are more severe. Features of Danon disease in males are: An early age of onset of muscle weakness and heart disease (onset in childhood or adolescence). Some learning problems or intellectual disability can be present. Muscle weakness can be severe and can affect endurance and the ability to walk. Heart disease (cardiomyopathy) can be severe and can lead to a need for medications. It usually progresses to heart failure, commonly complicated by atrial fibrillation and embolic strokes with severe neurological disability, leading to death unless heart transplant is performed. Cardiac conduction abnormalities can occur. Wolff–Parkinson–White syndrome is a common conduction pattern in Danon disease. Symptoms are usually gradually progressive. Some individuals may have visual disturbances and/or retinal pigment abnormalities. Danon disease is rare and unfamiliar to most physicians. It can be mistaken for other forms of heart disease and/or muscular dystrophies, including Pompe disease. Females In females the symptoms of Danon disease are less severe. Common symptoms of Danon disease in females are: A later age of onset of symptoms. Many females will not have obvious symptoms until late adolescence or even adulthood. Learning problems and intellectual disability are usually absent. Muscle weakness is often absent or subtle. Some females will tire easily with exercise. Cardiomyopathy is often absent in childhood; some women will develop this in adulthood. Cardiomyopathy can be associated with atrial fibrillation and embolic strokes. Cardiac conduction abnormalities can occur. Wolff–Parkinson–White syndrome is a common conduction pattern in Danon disease. Symptoms in females progress more slowly than in males. Some females may have visual disturbances and/or retinal pigment abnormalities. The milder and more subtle symptoms in females can make it more difficult to diagnose females with Danon disease. Causes Although the genetic cause of Danon disease is known, the mechanism of disease is not well understood. Danon disease involves a genetic defect (mutation) in a gene called LAMP2, which results in a change to the normal protein structure. While the function of the LAMP2 gene is not well understood, it is known that LAMP2 protein is primarily located in small structures within cells called lysosomes. Genetics It is associated with LAMP2. The status of this condition as a GSD has been disputed. Diagnosis Making a diagnosis for a genetic or rare disease can often be challenging. Healthcare professionals typically look at a person's medical history, symptoms, physical exam, and laboratory test results in order to make a diagnosis. Testing Resources The Genetic Testing Registry (GTR) provides information about the genetic tests for this condition; its intended audience is health care providers and researchers. 
Orphanet lists international laboratories offering diagnostic testing for this condition. Treatment RP-A501 is an AAV-based gene therapy aimed at restoring function of the LAMP2 gene, which is defective in male patients with Danon disease. Cardiac transplantation has been performed as a treatment; however, most patients die early in life. History Danon disease was characterized by Moris Danon in 1981. Dr. Danon first described the disease in 2 boys with heart and skeletal muscle disease (muscle weakness), and intellectual disability. The first case of Danon disease reported in the Middle East was a family in the eastern region of the United Arab Emirates diagnosed with a new LAMP2 mutation. The family was identified by the Egyptian cardiologist Dr. Mahmoud Ramadan, associate professor of cardiology at Mansoura University (Egypt), after genetic analysis of all the family members in Bergamo, Italy; 6 males were diagnosed as Danon disease patients and 5 females were diagnosed as carriers. As published in the Al-Bayan newspaper on 20 February 2016, this makes the family the largest known with patients and carriers of Danon disease. Danon disease has overlapping symptoms with another rare genetic condition called 'Pompe' disease. Microscopically, muscles from Danon disease patients appear similar to muscles from Pompe disease patients. However, intellectual disability is rarely, if ever, a symptom of Pompe disease. Negative enzymatic or molecular genetic testing for Pompe disease can help rule out this disorder as a differential diagnosis. See also Autophagic vacuolar myopathy Glycogen storage disease GSD-II (Pompe disease, formerly GSD-IIa) Inborn errors of carbohydrate metabolism Lysosomal storage disease Metabolic myopathies References External links AGSD - Association of Glycogen Storage Disease in the United States AGSD-UK - Association of Glycogen Storage Disease in the UK IamGSD - International Association for Muscle Glycogen Storage Disease Defects of cell structure Metabolic disorders Rare diseases
Danon disease
Chemistry
1,109
35,515,445
https://en.wikipedia.org/wiki/Bathymodiolus%20thermophilus
Bathymodiolus thermophilus is a species of large, deep water mussel, a marine bivalve mollusc in the family Mytilidae, the true mussels. The species was discovered at abyssal depths when submersible vehicles such as DSV Alvin began exploring the deep ocean. It occurs on the sea bed, often in great numbers, close to hydrothermal vents where hot, sulphur-rich water wells up through the floor of the Pacific Ocean. Description Bathymodiolus thermophilus is a very large mussel with a dark brown periostracum, growing to a length of about . It is attached to rocks on the seabed by byssus threads but it is able to detach itself and move to a different location. It is sometimes very abundant, having been recorded at densities of up to 300 individuals per square metre (270 per square yard). Distribution Bathymodiolus thermophilus is found clustered around deep sea thermal vents on the East Pacific Rise between 13°N and 22°S and in the nearby Galapagos Rift, at depths around 2800 metres (about 1.7 miles). Deep sea hydrothermal vents are frequently found along tectonic plate boundaries, and underwater mountain ranges and ridges. They are particularly well dispersed along the global mid-ocean ridge system. Specific geographical barriers exist along the mid-ocean ridge system that may impede gene flow between populations along the ridge-axis. A study sampled mussels across various topographical interruptions along the ridge system, at localities encompassing the Galapagos Rift and the East Pacific Rise. Results determined that mussel populations that were geographically isolated from one another via the Easter Microplate, known for its strong cross-axis currents, were genetically more divergent than populations from the Galapagos Rift and the East Pacific Rise, where there are no barriers to dispersal and no isolation-by-distance. Mussels in these populations were genetically homogeneous and showed high levels of unimpeded interpopulational gene flow. The environments surrounding hydrothermal vents are important for B. thermophilus to survive. It has been shown that environmental changes can impair the survival of the mussels and their symbiotic bacteria. Research has shown that when B. thermophilus are experimentally placed in a low-sulfide environment, the gill symbionts are lost, and the mussels suffer harm to the gills and body condition. The distribution of B. thermophilus along hydrothermal vents has an impact on the biodiversity of the environment. High-density mussel populations can directly inhibit the recruitment of invertebrates at deep-sea hydrothermal vents. When researchers transplanted B. thermophilus to a naturally high-density hydrothermal vent, there was lower recruitment at the vent within just 11 months. A potential reason for this phenomenon is enhanced predation or avoidance of superior competitors. Symbiosis and feeding Bathymodiolus thermophilus feeds by extracting suspended food particles from the surrounding water through its gills. This mostly consists of the bacteria that live around the vent, often forming a dense mat. In addition, B. thermophilus possesses a chemosymbiotic relationship with a gammaproteobacterium that oxidizes hydrogen sulfide in the mussel's gills. The mussel absorbs nutrients synthesized by the bacteria and is not reliant on photosynthetically produced organic matter. 
The bacterium lacks enzymes used to synthesize succinate and tricarboxylic acid cycle intermediates, but it is able to synthesize nutrients by using sulfur compounds in the environment. The genome of the bacterium reveals that it possesses pathways for using sulfide as an energy source and encodes cycles for carbon fixation. Only mussels that contain high concentrations of bacteria demonstrated the ability to perform such fixation. The bacteria also contain genes for cell surface adhesion, bacteriotoxicity, and phage immunity. These genetic characteristics may help the chemosynthetic bacteria defend against the immune system of B. thermophilus, allowing the bacteria to live in the mussel's gills without being killed. Not only does the chemosymbiotic relationship between B. thermophilus and the bacteria benefit the mussel, but there is some speculation that living inside the gills of the mussel also benefits the bacteria. It is possible that living within another host helps the bacteria withstand the harsh environment of the hydrothermal vents. Life cycle The larvae of Bathymodiolus thermophilus drift with the currents and are planktotrophic, feeding on phytoplankton and small zooplankton. This method of feeding is likely to give them good dispersal capabilities, and it has been shown by DNA analysis that there is a high rate of gene flow between populations around different vents. The genus Bathymodiolus represents some of the best-known fauna to colonize hydrothermal vents and cold seeps. In particular, B. thermophilus has been extensively studied due to its chemosynthetic symbioses and its crucial role in ecosystem productivity. In early stages of development, deep-sea mussels appear to follow growth processes of gametogenesis similar to those of shallow-water mytilids. It has been observed that bathymodiolins produce small oocytes, which may predict high fecundity levels for this species. While there are limited studies regarding fecundity of B. thermophilus, one way to improve understanding of both fecundity and spawning patterns would be to observe a spawning event with the use of yearly sampling. Adult stages of the bathymodioline species have received the most attention, especially when studying the bacterial symbionts that are fundamental to the mussels' nutritional needs. When reaching maturity, adults form close aggregations along seeps and vents. The mantle of B. thermophilus serves two physiological roles, one being the accumulation of somatic reserves, and the other being the development of the gonads. Gonads likely originate from germinal stem cells that appear in germinal stem-cell clumps around the dorsal region, between the mantle and the gill of the animal. In larger adult specimens, gonads can extend along the mantle epithelium. Gametogenesis occurs in small saclike cavities in a matrix of connective tissue supplied with seminal cells. In males, Sertoli cells deliver nutrients to the developing gametes, while follicle cells perform the analogous role in female mussels. Evolution and phylogeny Analysis of DNA sequence data posited that mussels of the subfamily Bathymodiolinae were divided into four groups. B. thermophilus belongs to Group 2, along with other species including B. septemdierum, B. brevior, and B. marisindicus. All members of Group 2 were labelled as thioautotrophs, chemoautotrophic organisms that feed on sulfides. 
Sequence data provided evidence for an outgroup of modioline species from sunken wood and whale carcasses, and COI (cytochrome c oxidase subunit I) and ND4 (NADH dehydrogenase subunit 4) sequence data indicated that Bathymodiolus mussels and their relatives derive from a single ancestor. References Molluscs described in 1985 thermophilus Chemosynthetic symbiosis Animals living on hydrothermal vents
Bathymodiolus thermophilus
Biology
1,587
54,609,406
https://en.wikipedia.org/wiki/Hugo%20Stintzing
Hugo Stintzing (10 August 1888 in Munich, Germany – 11 December 1970 in Darmstadt, West Germany) was a German university lecturer in physics at the Technische Hochschule Darmstadt. He was involved in early research on the scanning electron microscope, studied radioactive elements and developed a model for the periodic table. He was the Director of the Institute of X-Ray Physics and Technology at Darmstadt from 1936 to 1945. He was removed from his position due to his involvement in the National Socialist German Workers' Party (NSDAP, Nazi Party, joined 1933), and was interned in 1946. Childhood Hugo Stintzing was born in Munich, Germany, on 10 August 1888, the son of Roderich Stintzing, an internist and later professor of internal medicine. Education Hugo studied chemistry and metallurgy, graduating in May 1911 with the title Diplomingenieur (a degree in engineering) from the Technische Hochschule in Charlottenburg (now Technische Universität Berlin). From 1913 he was assistant at the Photochemical Department of the Physikalisch-Chemisches Institut (the Institute of Physical Chemistry) at the University of Leipzig. His dissertation on the subject of the influence of light on colloid systems was published in 1914. He received his Doctorate in Philosophy on 12 January 1915 from the University of Giessen. In 1916 he proposed a model for the periodic table, organized as a cone-like rotational body. His paper, "Eine neue Anordnung des periodischen Systems der Elemente" (A new arrangement of the periodic system of the elements) was published in Zeitschrift für Physikalische Chemie. His representation of the elements is one of several early helix-based displays. From 1918 he was an assistant at the Physikalisch-Chemisches Institut of the University of Giessen. He habilitated in 1923 with a thesis on the use of x-rays for chemical investigations: Röntgenographisch-chemische Untersuchungen. Career Hugo Stintzing then became a lecturer in physical chemistry and technology at the University of Giessen. On 4 July 1928 he was appointed an extraordinary professor of physical chemistry at the University of Giessen, lecturing on X-ray spectroscopy. During the early part of his career, he translated some of the works of Niels Bohr into German. In 1929, Stintzing filed a patent for a proposed scanning electron microscope, to be capable of automatic detection and measurement of particles using a light beam or beam of electrons. He suggested the use of crossed slits to obtain a small diameter probe. A light beam could be mechanically scanned, while an electron beam could be detected using electric or magnetic fields. Detectors would observe the beam transmitted after absorption or scattering. A chart recorder would represent the linear dimension of a particle by the width of a deflection, and its amplitude by thickness. No drawings accompany the specification, and Stintzing is presumed not to have attempted construction of such an instrument. Almost forty years later, a computer-controlled scanning electron microscope based on his specifications was built and tested. The results were presented at the Fifth International Congress on X-Ray Optics and Microanalysis at Tübingen University in 1968. Stintzing worked on the chemical analysis of x-ray spectra, developing apparatus with x-ray tubes for the measurement of secondary fluorescent emission lines. He published a textbook, Röntgenstrahlen und chemische Bindung ("X-rays and chemical bonding"), in 1931. 
In 1936, Stintzing was appointed to Technische Hochschule Darmstadt, to replace Paul Knipping, who had died unexpectedly. Knipping had founded an institute for X-ray physics and technology at Darmstadt in 1929/30. Stintzing held a lectureship (Lehrauftrag) at Darmstadt as of 1 April 1936, and was appointed Director of the Institute of X-Ray Physics and Technology as of 1 October 1936. On 10 June 1943 he received promotion to the rank of extraordinary professor of X-ray physics and technology at Darmstadt. As early as 1942, the Stintzing X-ray Institute was classified as important to the war and the state, which meant that the institute obtained funding and privileges during the Second World War. Military and political involvement During the First World War Hugo Stintzing was in the artillery with the rank of lieutenant of the reserve. On 1 May 1933, Hugo Stintzing joined the National Socialist German Workers' Party (NSDAP, Nazi Party) as well as its paramilitary wing, the Sturmabteilung. From September 1938 to June 1940, he held the position of National Socialist lecturer at the University of Darmstadt. Like Karl Lieser, Friedrich List and Jakob Sprenger, he was a strong supporter of Nazism. On 8 October 1945 Hugo Stintzing was removed from his position at the university by the American military government. In 1946 he was interned. The Institute for X-Ray Physics and X-Ray Technology was merged into the Technische Hochschule Darmstadt, under the direction of Richard Vieweg. As of 4 November 1955, Stintzing applied for a patent for a Method and apparatus for improving the effectiveness of radioactive sources, which was granted on 27 March 1958. A notice in Physik Journal in 1958 commemorated his 70th birthday. Personal life Hugo Stintzing was married to Friedel (Frieda) Gertrud Keferstein (1899–1989) on 15 October 1929. Hugo Stintzing died at the age of 82 on 11 December 1970 and was buried in the Old Cemetery in Darmstadt, Germany. References 1888 births 1970 deaths 20th-century German chemists People involved with the periodic table Technische Universität Berlin alumni Academic staff of Technische Universität Darmstadt Scientists from Darmstadt
Hugo Stintzing
Chemistry
1,178
2,567,750
https://en.wikipedia.org/wiki/Narrow%20class%20group
In algebraic number theory, the narrow class group of a number field K is a refinement of the class group of K that takes into account some information about embeddings of K into the field of real numbers. Formal definition Suppose that K is a finite extension of Q. Recall that the ordinary class group of K is defined as the quotient C_K = I_K / P_K, where I_K is the group of fractional ideals of K, and P_K is the subgroup of principal fractional ideals of K, that is, ideals of the form aO_K where a is an element of K. The narrow class group is defined to be the quotient C_K^+ = I_K / P_K^+, where now P_K^+ is the group of totally positive principal fractional ideals of K; that is, ideals of the form aO_K where a is an element of K such that σ(a) is positive for every real embedding σ : K → R. Uses The narrow class group features prominently in the theory of representing integers by quadratic forms. An example is the following result (Fröhlich and Taylor, Chapter V, Theorem 1.25). Theorem. Suppose that K = Q(√d), where d is a square-free integer, and that the narrow class group of K is trivial. Suppose that {β1, β2} is a basis for the ring of integers of K. Define a quadratic form q_K(x, y) = N_{K/Q}(β1 x + β2 y), where N_{K/Q} is the norm. Then a prime number p is of the form q_K(x, y) for some integers x and y if and only if either p divides d_K, or p = 2 and d_K ≡ 1 (mod 8), or p > 2 and (d_K/p) = 1, where d_K is the discriminant of K, and (d_K/p) denotes the Legendre symbol. Examples For example, one can prove that the quadratic fields Q(√−1), Q(√2), Q(√−3) all have trivial narrow class group. Then, by choosing appropriate bases for the integers of each of these fields, the above theorem implies the following: A prime p is of the form p = x^2 + y^2 for integers x and y if and only if p = 2 or p ≡ 1 (mod 4). (This is known as Fermat's theorem on sums of two squares.) A prime p is of the form p = x^2 − 2y^2 for integers x and y if and only if p = 2 or p ≡ 1, 7 (mod 8). A prime p is of the form p = x^2 − xy + y^2 for integers x and y if and only if p = 3 or p ≡ 1 (mod 3). (cf. Eisenstein prime) An example that illustrates the difference between the narrow class group and the usual class group is the case of Q(√6). This has trivial class group, but its narrow class group has order 2. Because the class group is trivial, the following statement is true: A prime p or its negative −p is of the form ±p = x^2 − 6y^2 for integers x and y if and only if p = 2, p = 3, or p ≡ 1, 5, 19, 23 (mod 24). However, this statement is false if we focus only on p and not −p (and is in fact even false for p = 2), because the narrow class group is nontrivial. The statement that classifies the positive p is the following: A prime p is of the form p = x^2 − 6y^2 for integers x and y if and only if p = 3 or p ≡ 1, 19 (mod 24). (Whereas the first statement allows primes p ≡ 5, 23 (mod 24), the second only allows primes p ≡ 1, 19 (mod 24).) See also Class group Quadratic form References A. Fröhlich and M. J. Taylor, Algebraic Number Theory (p. 180), Cambridge University Press, 1991. Algebraic number theory
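To make the criterion concrete, the following Python sketch (our own illustration, not part of the source; the helper names are hypothetical) numerically checks the case K = Q(√−1), where the norm form is q(x, y) = x^2 + y^2 and the discriminant is d_K = −4; for this field the theorem reduces to: p is a sum of two squares exactly when p divides d_K or the Legendre symbol (d_K/p) equals 1.

```python
from math import isqrt

def is_sum_of_two_squares(p):
    """Brute-force test: does p = x^2 + y^2 have an integer solution?
    (Hypothetical helper, for illustration only.)"""
    return any(isqrt(p - x * x) ** 2 == p - x * x
               for x in range(isqrt(p) + 1))

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

d_K = -4  # discriminant of Q(sqrt(-1))
for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
    represented = is_sum_of_two_squares(p)
    criterion = (d_K % p == 0) or (p > 2 and legendre(d_K, p) == 1)
    assert represented == criterion  # matches the theorem above
    print(p, represented)
```

Running this confirms, for the primes listed, that representability coincides with the splitting criterion (p = 2 ramifies, primes p ≡ 1 (mod 4) split, primes p ≡ 3 (mod 4) are inert).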
Narrow class group
Mathematics
695
14,724,063
https://en.wikipedia.org/wiki/ZFP36
Tristetraprolin (TTP), also known as zinc finger protein 36 homolog (ZFP36), is a protein that in humans, mice and rats is encoded by the ZFP36 gene. It is a member of the TIS11 (TPA-induced sequence) family, along with butyrate response factors 1 and 2. TTP binds to AU-rich elements (AREs) in the 3'-untranslated regions (UTRs) of the mRNAs of some cytokines and promotes their degradation. For example, TTP is a component of a negative feedback loop that interferes with TNF-alpha production by destabilizing its mRNA. Mice deficient in TTP develop a complex syndrome of inflammatory diseases. Interactions ZFP36 has been shown to interact with 14-3-3 protein family members, such as YWHAH, and with NUP214, a member of the nuclear pore complex. Regulation Post-transcriptionally, TTP is regulated in several ways. The subcellular localization of TTP is influenced by interactions with protein partners such as the 14-3-3 family of proteins. These interactions and, possibly, interactions with target mRNAs are affected by the phosphorylation state of TTP, as the protein can be posttranslationally modified by a large number of protein kinases. There is some evidence that the TTP transcript may also be targeted by microRNAs, such as miR-29a. References Further reading
ZFP36
Chemistry
313
23,877,002
https://en.wikipedia.org/wiki/Azo%20violet
Azo violet (Magneson I; p-nitrobenzeneazoresorcinol) is an azo compound with the chemical formula C12H9N3O4. It is used commercially as a violet dye and experimentally as a pH indicator, appearing yellow below pH 11 and violet above pH 13. It also turns deep blue in the presence of magnesium salts in a slightly alkaline, or basic, environment. Azo violet may also be used to test for the presence of ammonium ions. The color of an ammonium chloride or ammonium hydroxide solution will vary depending upon the concentration of azo violet used. Magneson I is also used to test for beryllium; it produces an orange-red lake with Be(II) in alkaline medium. Properties The intense color from which the compound gets its name results from irradiation and subsequent excitation and relaxation of the extended π electron system across the R−N=N−R' linked phenols. Absorption by these electrons falls in the visible region of the electromagnetic spectrum. Azo violet's intense indigo color (λmax 432 nm) approximates Pantone R: 102 G: 15 B: 240. Synthesis Azo violet can be synthesised by reacting 4-nitroaniline with nitrous acid (generated in situ with an acid and a nitrite salt) to produce a diazonium intermediate. This is then reacted with resorcinol, dissolved in a sodium hydroxide solution, via an azo coupling reaction. This is consistent with the generalized strategy for preparing azo dyes. Reactivity The chemical character of azo violet may be attributed to its azo group (−N=N−), six-membered rings, and hydroxyl side groups. Due to steric repulsion, azo violet is most stable in the trans configuration, but isomerization of azo dyes by irradiation is not uncommon. The para-position tautomerization of azo violet provides mechanistic insight into the behavior of the compound in an acidic environment, and thus its use as a basic pH indicator. The predicted 1H-NMR spectrum of pure azo violet shows the hydroxyl protons as the most deshielded and acidic protons. The electron donation of these hydroxyl groups into the conjugated π system likewise influences azo violet's λmax and pKa value. References Azo dyes 4-Nitrophenyl compounds Resorcinols PH indicators
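As a small illustration of the indicator behaviour described above (our own sketch, not from the source; the function name is hypothetical, and the handling of the pH 11–13 transition range is a simplification), one can encode the colour thresholds directly:

```python
def azo_violet_colour(ph):
    """Approximate colour of an azo violet solution at a given pH.
    Thresholds follow the text: yellow below pH 11, violet above
    pH 13, with a gradual transition in between (simplified here)."""
    if ph < 11:
        return "yellow"
    if ph > 13:
        return "violet"
    return "transitional (yellow to violet)"

for ph in (7.0, 12.0, 13.5):
    print(ph, azo_violet_colour(ph))
```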
Azo violet
Chemistry,Materials_science
525
20,580
https://en.wikipedia.org/wiki/Motion
In physics, motion is the change in an object's position with respect to a reference point over time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and frame of reference to an observer, measuring the change in position of the body relative to that frame with a change in time. The branch of physics describing the motion of objects without reference to their cause is called kinematics, while the branch studying forces and their effect on motion is called dynamics. If an object is not in motion relative to a given frame of reference, it is said to be at rest, motionless, immobile, stationary, or to have a constant or time-invariant position with reference to its surroundings. Modern physics holds that, as there is no absolute frame of reference, Newton's concept of absolute motion cannot be determined. Everything in the universe can be considered to be in motion. Motion applies to various physical systems: objects, bodies, matter particles, matter fields, radiation, radiation fields, radiation particles, curvature, and space-time. One can also speak of the motion of images, shapes, and boundaries. In general, the term motion signifies a continuous change in the position or configuration of a physical system in space. For example, one can talk about the motion of a wave or the motion of a quantum particle, where the configuration consists of the probabilities of the wave or particle occupying specific positions. Equations of motion Laws of motion In physics, the motion of bodies is described through two related sets of laws of mechanics: classical mechanics for super-atomic (larger than an atom) objects (such as cars, projectiles, planets, cells, and humans) and quantum mechanics for atomic and sub-atomic objects (such as helium, protons, and electrons). Historically, Newton and Euler formulated three laws of classical mechanics: Classical mechanics Classical mechanics is used for describing the motion of macroscopic objects moving at speeds significantly slower than the speed of light, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains and is one of the oldest and largest subjects in science, engineering, and technology. Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, which was first published on July 5, 1687. Newton's three laws are: A body at rest will remain at rest, and a body in motion will remain in motion, unless it is acted upon by an external force. (This is known as the law of inertia.) Force (F) is equal to the change in momentum per change in time: F = dp/dt. For a constant mass, force equals mass times acceleration (F = ma). For every action, there is an equal and opposite reaction. (In other words, whenever one body exerts a force F onto a second body (in some cases, one which is standing still), the second body exerts a force −F back onto the first body. F and −F are equal in magnitude and opposite in direction, so the body that exerts F will be pushed backward.) Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space. 
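As a concrete illustration of the second law, the short Python sketch below (our own example, not from the source) integrates F = ma for a projectile under constant gravity using the semi-implicit Euler method:

```python
# Semi-implicit Euler integration of Newton's second law, F = m*a,
# for a projectile acted on only by gravity (a sketch, not a
# reference implementation).

m = 1.0                    # mass in kg
g = 9.81                   # gravitational acceleration in m/s^2
dt = 0.01                  # time step in s

x, y = 0.0, 0.0            # initial position in m
vx, vy = 10.0, 10.0        # initial velocity in m/s

t = 0.0
while y >= 0.0:
    fx, fy = 0.0, -m * g   # net force: gravity only
    vx += (fx / m) * dt    # a = F/m, integrated over dt
    vy += (fy / m) * dt
    x += vx * dt           # update position with the new velocity
    y += vy * dt
    t += dt

print(f"landed after {t:.2f} s at x = {x:.2f} m")
```

With these initial conditions the analytic time of flight is 2·(10)/9.81 ≈ 2.04 s and the range about 20.4 m, which the numerical result approximates.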
These laws unified the motion of celestial bodies and the motion of objects on Earth. Relativistic mechanics Modern kinematics developed with the study of electromagnetism and refers all velocities v to their ratio to the speed of light c. Velocity is then interpreted as rapidity, the hyperbolic angle φ for which tanh φ = v/c. Acceleration, the change of velocity over time, then changes rapidity according to Lorentz transformations. This part of mechanics is special relativity. Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein. The development used differential geometry to describe a curved universe with gravity; the study is called general relativity. Quantum mechanics Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic particles (electrons, protons, neutrons, and even smaller elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy, as described in the wave–particle duality. In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity. In quantum mechanics, due to the Heisenberg uncertainty principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined. In addition to describing the motion of atomic-level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of proteins. Orders of magnitude Humans, like all known things in the universe, are in constant motion; however, aside from obvious movements of the various external body parts and locomotion, humans are in motion in a variety of ways that are more difficult to perceive. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of imperceptible motion are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third), which prevent the feeling of motion on a mass to which the observer is connected, and the lack of an obvious frame of reference that would allow individuals to easily see that they are moving. The smaller scales of these motions are too small to be detected conventionally with human senses. Universe Spacetime (the fabric of the universe) is expanding, meaning everything in the universe is stretching, like a rubber band. This motion is the most obscure, as it is not physical motion but rather a change in the very nature of the universe. The primary source of verification of this expansion was provided by Edwin Hubble, who demonstrated that all galaxies and distant astronomical objects were moving away from Earth (Hubble's law), as predicted by a universal expansion. Galaxy The Milky Way Galaxy is moving through space, and many astronomers believe its velocity to be approximately 600 km/s relative to the observed locations of other nearby galaxies. Another reference frame is provided by the cosmic microwave background. This frame of reference indicates that the Milky Way is moving at around 550–600 km/s. Sun and Solar System The Milky Way is rotating around its dense Galactic Center, and thus the Sun is moving in a circle within the galaxy's gravity. 
Away from the central bulge, on the outer rim, the typical stellar velocity is between 210 and 240 km/s. All planets and their moons move with the Sun; thus, the Solar System is in motion. Earth The Earth is rotating, or spinning, around its axis. This is evidenced by day and night; at the equator the Earth has an eastward velocity of about 0.46 km/s (roughly 1,670 km/h). The Earth is also orbiting around the Sun in an orbital revolution. A complete orbit around the Sun takes one year, or about 365 days; it averages a speed of about 30 km/s (107,000 km/h). Continents The theory of plate tectonics tells us that the continents are drifting on convection currents within the mantle, causing them to move across the surface of the planet at the slow speed of roughly a few centimetres per year. However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of about 75 mm per year and the Pacific Plate moving 52–69 mm per year. At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of about 21 mm per year. Internal body The human heart is regularly contracting to move blood throughout the body. Through larger veins and arteries in the body, blood has been found to travel at approximately 0.33 m/s. Though considerable variation exists, peak flows in the venae cavae have been found between about 0.1 and 0.45 m/s. Additionally, the smooth muscles of hollow internal organs are moving. The most familiar would be the occurrence of peristalsis, which is where digested food is forced throughout the digestive tract. Though different foods travel through the body at different rates, the average speed through the human small intestine is on the order of a few metres per hour. The human lymphatic system is also constantly causing movements of excess fluids, lipids, and immune-system-related products around the body. The lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s. Cells The cells of the human body have many structures and organelles that move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm. Various motor proteins work as molecular motors within a cell, moving along the surface of various cellular substrates such as microtubules. Motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP), converting chemical energy into mechanical work. Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s. Particles According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms that make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans, who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower-temperature objects are touched, the senses perceive the transfer of heat away from the body as a feeling of cold. Subatomic particles Within the standard atomic orbital model, electrons exist in a region around the nucleus of each atom. This region is called the electron cloud. According to Bohr's model of the atom, electrons have a high velocity, and the larger the nucleus they are orbiting, the faster they would need to move. 
If electrons were to move about the electron cloud in strict paths, the same way planets orbit the Sun, then electrons would be required to do so at speeds that would far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic objects do); rather, one can conceptualize electrons to be 'particles' that capriciously exist within the bounds of the electron cloud. Inside the atomic nucleus, the protons and neutrons are also probably moving around, due to the electrical repulsion of the protons and the presence of angular momentum of both particles. Light Light moves at a speed of 299,792,458 m/s, or approximately 300,000 km/s (186,000 mi/s), in a vacuum. The speed of light in vacuum (denoted c) is also the speed of all massless particles and associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for all physical systems. In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement unit for speed and a fundamental constant of nature. In 2019, the seven SI base units were redefined using what is termed "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. A new, but completely equivalent, wording of the metre's definition was proposed: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299,792,458 when it is expressed in the SI unit m s⁻¹." This wording was one of the changes incorporated in the 2019 revision of the SI, also termed the New SI. Superluminal motion Some motion appears to an observer to exceed the speed of light. Bursts of energy moving out along the relativistic jets emitted from certain astronomical objects can have a proper motion that appears greater than the speed of light. All of these sources are thought to contain a black hole, responsible for the ejection of mass at high velocities. Light echoes can also produce apparent superluminal motion. This occurs owing to how motion is often calculated at long distances; oftentimes calculations fail to account for the fact that the speed of light is finite. When measuring the movement of distant objects across the sky, there is a large time delay between what has been observed and what has occurred, due to the large distance the light from the distant object has to travel to reach us. The error in such a naive calculation comes from the fact that when an object has a component of velocity directed towards the Earth, that time delay becomes smaller as the object moves closer to the Earth. This means that the apparent speed as calculated is greater than the actual speed. Correspondingly, if the object is moving away from the Earth, the calculation underestimates the actual speed. (A short numerical sketch of apparent superluminal motion is given at the end of this article.) Types of motion
Simple harmonic motion – motion in which the body oscillates in such a way that the restoring force acting on it is directly proportional to the body's displacement; mathematically, F = −kx, where the negative sign signifies the restoring nature of the force (e.g., that of a pendulum).
Linear motion – motion that follows a straight path, and whose displacement is exactly the same as its trajectory (also known as rectilinear motion).
Reciprocal motion
Brownian motion – the random movement of very small particles.
Circular motion
Rotatory motion – motion about a fixed point (e.g., a Ferris wheel).
Curvilinear motion – motion along a curved path that may be planar or in three dimensions.
Rolling motion – as of the wheel of a bicycle.
Oscillatory motion – swinging from side to side.
Vibratory motion
Combination (or simultaneous) motions – a combination of two or more of the motions listed above.
Projectile motion – uniform horizontal motion combined with vertically accelerated motion.
Fundamental motions
Linear motion
Circular motion
Oscillation
Wave
Relative motion
Rotary motion
See also References External links Feynman's lecture on motion Mechanics Physical phenomena
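As referenced in the superluminal motion section above, here is a short numerical sketch (our own illustration; it assumes the standard apparent-transverse-velocity formula v_app = v·sin θ / (1 − (v/c)·cos θ) for a source moving at angle θ to the line of sight, and the helper name is hypothetical):

```python
import math

c = 299_792_458.0  # speed of light in m/s

def apparent_speed(v, theta_deg):
    """Apparent transverse speed of a blob moving at speed v at an
    angle theta to the line of sight (standard special-relativity
    result; the function name is our own)."""
    theta = math.radians(theta_deg)
    beta = v / c
    return v * math.sin(theta) / (1.0 - beta * math.cos(theta))

# A jet moving at 98% of c, seen 10 degrees from the line of sight,
# appears to move faster than light:
v_app = apparent_speed(0.98 * c, 10.0)
print(v_app / c)   # ~4.9, i.e. almost five times the speed of light
```

The effect arises purely from the shrinking light-travel delay described in the text; the true speed never exceeds c.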
Motion
Physics,Engineering
2,977
70,774,718
https://en.wikipedia.org/wiki/Erysiphe%20azerbaijanica
Erysiphe azerbaijanica is a species of powdery mildew in the family Erysiphaceae. It is found in Azerbaijan, where it grows on the leaves of sweet chestnut trees. Taxonomy The fungus was formally described as a new species in 2018 by Lamiya Abasova, Dilzara Aghayeva, and Susumu Takamatsu. The type specimen was collected near Baş Küngüt village (Shaki District), where it was found growing on sweet chestnut (Castanea sativa). Molecular phylogenetic analysis showed that the species forms its own clade in the Microsphaera lineage of genus Erysiphe. The species epithet refers to the country of the type locality. Description The fungus forms thin, white irregular patches on both sides of the leaves of its host. Foot cells are cylindrical and straight, typically measuring 31–53 by 5–7 μm. The conidia are mostly cylindrical and oblong with dimensions of 33–48 by 14–16 μm; they have a length/width ratio of 2–3.6. Erysiphe alphitoides, E. castaneae, and E. castaneigena are somewhat similar in morphology, but can be distinguished by the dimensions and form of their conidia and foot cells. References azerbaijanica Fungi described in 2018 Fungi of Europe Fungus species
Erysiphe azerbaijanica
Biology
276
44,529,150
https://en.wikipedia.org/wiki/Multirate%20filter%20bank%20and%20multidimensional%20directional%20filter%20banks
This article provides a short survey of the concepts, principles and applications of multirate filter banks and multidimensional directional filter banks. Multirate systems Linear time-invariant systems typically operate at a single sampling rate, which means that they have the same sampling rate at input and output; in other words, in an LTI system the sampling rate does not change within the system. Systems that use different sampling rates at different stages are called multirate systems. A multirate system can provide different sampling rates as desired, without destroying the signal components. Figure 1 shows a block diagram of a two-channel multirate system. Multirate filter bank A multirate filter bank divides a signal into a number of subbands, which can be analysed at different rates corresponding to the bandwidth of the frequency bands. One important fact in multirate filtering is that the signal should be filtered before decimation; otherwise aliasing and frequency folding will occur. Multirate filter designs Multirate filter design makes use of the properties of decimation and interpolation (or expansion) in the design and implementation of the filter. Decimation or downsampling by a factor of M essentially means keeping every Mth sample of a given sequence. Decimation, interpolation, and modulation Generally speaking, using decimation is very common in multirate filter designs. In the second step, after using decimation, interpolation is used to restore the sampling rate. The advantage of using decimators and interpolators is that they can reduce the computational cost by allowing processing at a lower sampling rate. Decimation by a factor of M can be mathematically defined as y[n] = x[Mn], or equivalently, in the z-domain, Y(z) = (1/M) Σ_{k=0}^{M−1} X(z^{1/M} e^{−j2πk/M}). Expansion or upsampling by a factor of M means that we insert M−1 zeros between each sample of a given signal or sequence. The expansion by a factor of M can be mathematically explained as y[n] = x[n/M] when n is an integer multiple of M and y[n] = 0 otherwise, or equivalently, Y(z) = X(z^M). Modulation is needed for different kinds of filter designs. For instance, in many communication applications we need to modulate the signal to baseband. After lowpass filtering the baseband signal, we use modulation to shift the baseband signal to the center frequency of the bandpass filter. Here we provide two examples of designing multirate narrow lowpass and narrow bandpass filters. Narrow lowpass filter We can define a narrow lowpass filter as a lowpass filter with a narrow passband. In order to create a multirate narrow lowpass FIR filter, we replace the time-invariant FIR filter with a lowpass antialiasing filter and a decimator, along with an interpolator and a lowpass anti-imaging filter. In this way the resulting multirate system is a time-varying linear-phase filter via the decimator and interpolator. This process is explained in block diagram form, where Figure 2(a) is replaced by Figure 2(b). The lowpass filter consists of two polyphase filters, one for the decimator and one for the interpolator. Multirate filter bank Filter banks have many uses in areas such as signal and image compression and processing. The main purpose of a filter bank is to divide a signal or system into several separate frequency domains. A filter bank divides the input signal x(n) into a set of signals x1(n), x2(n), ..., xK(n). In this way each of the generated signals corresponds to a different region in the spectrum of x(n). In this process the regions may overlap (or not, depending on the application). 
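A minimal Python sketch of the decimation and expansion operations just defined (our own illustration using plain lists; a real system would apply the antialiasing filter mentioned above before decimating):

```python
def decimate(x, M):
    """M-fold decimation: keep every Mth sample, y[n] = x[M*n].
    (No anti-aliasing filter is applied here; a practical system
    would lowpass-filter x first, as noted in the text.)"""
    return x[::M]

def expand(x, M):
    """M-fold expansion: insert M-1 zeros between samples,
    so that Y(z) = X(z^M)."""
    y = []
    for sample in x:
        y.append(sample)
        y.extend([0] * (M - 1))
    return y

x = [1, 2, 3, 4, 5, 6, 7, 8]
print(decimate(x, 2))     # [1, 3, 5, 7]
print(expand([1, 2], 3))  # [1, 0, 0, 2, 0, 0]
```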
Figure 4 shows an example of a three-band filter bank. The subband signals are generated via a collection of bandpass filters with bandwidths BW_k and center frequencies f_k (respectively). A multirate filter bank uses a single input signal and then produces multiple outputs of the signal by filtering and subsampling. In order to split the input signal into two or more signals (see Figure 5), an analysis-synthesis system can be used. In Figure 5, only 4 sub-signals are used. The signal is split, with the help of four filters H_k(z) for k = 0, 1, 2, 3, into 4 bands of the same bandwidth (in the analysis bank), and then each sub-signal is decimated by a factor of 4. Dividing the signal in this way exposes different signal characteristics in each band. In the synthesis section, the filter bank reconstructs the original signal: the 4 sub-signals at the output of the processing unit are first upsampled by a factor of 4 and then filtered by 4 synthesis filters F_k(z) for k = 0, 1, 2, 3. Finally, the outputs of these four filters are added. Multidimensional filter banks Multidimensional filtering, downsampling, and upsampling are the main parts of multidimensional multirate systems and filter banks. A complete filter bank consists of the analysis and synthesis sides. The analysis filter bank divides an input signal into different subbands with different frequency spectra. The synthesis part reassembles the different subband signals and generates a reconstructed signal. Two of the basic building blocks are the decimator and the expander. As illustrated in Figure 6, the input gets divided into four directional subbands, each of which covers one of the wedge-shaped frequency regions. In 1D systems, M-fold decimators keep only those samples whose indices are multiples of M and discard the rest, while in multidimensional systems the decimator is a D × D nonsingular integer matrix; it keeps only those samples that lie on the lattice generated by the decimator. A commonly used decimator is the quincunx decimator, whose lattice is generated by the quincunx matrix, defined by Q = [[1, −1], [1, 1]]. A quincunx lattice generated by this matrix is shown in the figure. It is important to analyze filter banks from a frequency-domain perspective, in terms of subband decomposition and reconstruction. However, equally important is a Hilbert space interpretation of filter banks, which plays a key role in geometrical signal representations. Consider a generic K-channel filter bank with analysis filters h_k[n], synthesis filters g_k[n], and sampling matrices M_k. On the analysis side, we can define vectors φ_{k,m}[n] = h_k[M_k m − n], each indexed by two parameters: the channel k and the shift m. Similarly, for the synthesis filters we can define ψ_{k,m}[n] = g_k[n − M_k m]. Considering the definition of the analysis/synthesis sides, we can verify that the subband samples are y_k[m] = ⟨x, φ_{k,m}⟩, and for the reconstruction part, x̂[n] = Σ_k Σ_m y_k[m] ψ_{k,m}[n]. In other words, the analysis filter bank calculates the inner products of the input signal with the vectors from the analysis set, and the reconstructed signal is a combination of the vectors from the synthesis set, with the computed inner products as combination coefficients. If there is no loss in the decomposition and the subsequent reconstruction, the filter bank is called perfect reconstruction (in that case we would have x̂[n] = x[n]). Multidimensional filter banks design 1-D filter banks have been well developed to date. However, many signals, such as images, video, 3D sound, radar, and sonar, are multidimensional and require the design of multidimensional filter banks. 
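To illustrate multidimensional decimation, the sketch below (our own example, assuming the quincunx matrix Q given above; the accompanying filters are omitted) retains only the samples of a 2D array that lie on the quincunx lattice, i.e. positions (n1, n2) whose coordinate sum is even:

```python
import numpy as np

def quincunx_sample(x):
    """Keep only the samples of a 2D array that lie on the quincunx
    lattice generated by Q = [[1, -1], [1, 1]], i.e. positions
    (n1, n2) with n1 + n2 even; all other samples are zeroed.
    (A sketch of the lattice-sampling step only.)"""
    n1, n2 = np.indices(x.shape)
    mask = (n1 + n2) % 2 == 0
    return np.where(mask, x, 0)

x = np.arange(16).reshape(4, 4)
print(quincunx_sample(x))
```

The surviving samples form the checkerboard pattern characteristic of the quincunx lattice; |det Q| = 2, so half of the samples are retained.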
With the fast development of communication technology, signal processing systems need more room to store data during processing, transmission, and reception. In order to reduce the data to be processed, save storage, and lower the complexity, multirate sampling techniques were introduced. Filter banks can be used in various areas, such as image coding, voice coding, and radar. Many 1D filter issues have been well studied, and researchers have proposed many 1D filter bank design approaches. But there are still many multidimensional filter bank design problems that need to be solved.[6] Some methods do not reconstruct the signal well; others are complex and hard to implement. Design of separable filter banks The simplest approach to designing multidimensional filter banks is to cascade 1D filter banks in the form of a tree structure, where the decimation matrix is diagonal and the data is processed in each dimension separately. Such systems are referred to as separable systems. Design of non-separable multidimensional filter banks Below are several approaches to the design of multidimensional filter banks. 2-channel multidimensional perfect reconstruction (PR) filter banks In practice, we usually want to reconstruct the divided signal back into the original one, which makes PR filter banks very important. Let H(z) be the transfer function of a filter. The size of the filter is defined as the order of the corresponding polynomial in every dimension. The symmetry or anti-symmetry of a polynomial determines the linear-phase property of the corresponding filter and is related to its size. As in the 1D case, the aliasing term A(z) and transfer function T(z) for a 2-channel filter bank are: A(z) = 1/2 (H0(−z)F0(z) + H1(−z)F1(z)); T(z) = 1/2 (H0(z)F0(z) + H1(z)F1(z)), where H0 and H1 are decomposition filters and F0 and F1 are reconstruction filters. The input signal can be perfectly reconstructed if the alias term is cancelled and T(z) equals a monomial (a pure delay); a numerical check of these conditions for a simple 1D example is sketched below. So the necessary condition is that T(z) is generally symmetric and of an odd-by-odd size. Linear-phase PR filters are very useful for image processing. This 2-channel filter bank is relatively easy to implement, but 2 channels sometimes are not enough, so 2-channel filter banks can be cascaded to generate multi-channel filter banks. To understand the working of 2-channel multidimensional filter banks, we must first understand the design process of a simple 2D two-channel filter bank. In particular, the diamond filter banks are of special interest in some image coding applications. The decimation matrix M for the diamond filter bank is usually the quincunx matrix, which is briefly discussed in the sections above. For a two-channel system there are only four filters, two analysis filters and two synthesis filters. So in some designs, two or three filters are chosen so that there is no aliasing, and the remaining filters are then optimized to achieve approximate reconstruction. The design of 2D filters is more complex than that of 1D filters, so appropriate mapping techniques are usually used to achieve perfect reconstruction. A polyphase mapping method has been proposed to design IIR analysis filters. For filter banks with FIR filters, several 1D-to-2D transformations have been considered; for example, the McClellan transformation can be used to obtain FIR filter banks. There has also been some interest in quadrant filters. 
The decimation matrix for a quadrant filter bank is a 2 × 2 nonsingular integer matrix D chosen to match the quadrant-shaped passbands. The fan filters are shifted versions of the diamond filters, and hence the diamond filter banks can be designed first and then shifted by (π, 0) in the frequency domain to obtain a fan filter. Filter banks in which the filters have parallelogram support are also of some importance. Several parallelogram supports for analysis and synthesis filters are also shown. These filters can be derived from the diamond filters by using a unimodular transformation. Tree-structured filter banks For any given subband analysis filter bank, we can split it into further subbands, as shown in Figure 8. By repeating this operation we can build a tree-structured analysis bank. An example of a 1D tree-structured filter bank is one that results in an octave stacking of the passbands. In the 2D case, tree structures based on simple two-channel modules can offer sophisticated band-splitting schemes, especially if we combine the various configurations shown above. The directional filter bank, discussed below, is one such example. Multidimensional directional filter banks M-dimensional directional filter banks (MDFB) are a family of filter banks that can achieve the directional decomposition of arbitrary M-dimensional signals with a simple and efficient tree-structured construction. They have many distinctive properties: directional decomposition, efficient tree construction, angular resolution, and perfect reconstruction. In the general M-dimensional case, the ideal frequency supports of the MDFB are hypercube-based hyperpyramids. The first level of decomposition for the MDFB is achieved by an N-channel undecimated filter bank, whose component filters are M-D "hourglass"-shaped filters aligned with the ω1, ..., ωM axes, respectively. After that, the input signal is further decomposed by a series of 2-D iteratively resampled checkerboard filter banks IRC_i^(L_i) (i = 2, 3, ..., M), where IRC_i^(L_i) operates on 2-D slices of the input signal represented by the dimension pair (n1, ni), and the superscript (L_i) denotes the number of decomposition levels for the ith-level filter bank. Note that, starting from the second level, we attach an IRC filter bank to each output channel from the previous level, and hence the entire filter bank has a total of N · 2^(L2 + ... + LM) output channels. Multidimensional oversampled filter banks Oversampled filter banks are multirate filter banks where the number of output samples at the analysis stage is larger than the number of input samples. They are proposed for robust applications. One particular class of oversampled filter banks is nonsubsampled filter banks, without downsampling or upsampling. The perfect reconstruction condition for an oversampled filter bank can be stated as a matrix inverse problem in the polyphase domain. For IIR oversampled filter banks, perfect reconstruction has been studied by Wolovich and Kailath in the context of control theory. For FIR oversampled filter banks, different strategies are needed for 1-D and M-D; FIR filters are more popular since they are easier to implement. For 1-D oversampled FIR filter banks, the Euclidean algorithm plays a key role in the matrix inverse problem. However, the Euclidean algorithm fails for multidimensional (MD) filters. For MD filters, we can convert the FIR representation into a polynomial representation and then use algebraic geometry and Gröbner bases to obtain the framework and the reconstruction conditions for multidimensional oversampled filter banks. 
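As promised in the 2-channel PR discussion above, the following numpy sketch (our own example; it uses the 1D Haar filters for simplicity rather than a 2D diamond design) verifies that the classic alias-cancelling choice of synthesis filters makes A(z) vanish and turns T(z) into a pure delay:

```python
import numpy as np

# 1D Haar analysis filters (coefficients of z^0, z^-1):
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)

def flip_odd(h):
    """Coefficients of H(-z): negate the odd powers of z^-1."""
    signs = np.array([(-1) ** n for n in range(len(h))])
    return h * signs

# Classic alias-cancelling choice of synthesis filters:
# F0(z) = H1(-z), F1(z) = -H0(-z).
f0 = flip_odd(h1)
f1 = -flip_odd(h0)

# A(z) = 1/2 (H0(-z)F0(z) + H1(-z)F1(z))  -- should be identically 0
A = 0.5 * (np.convolve(flip_odd(h0), f0) + np.convolve(flip_odd(h1), f1))

# T(z) = 1/2 (H0(z)F0(z) + H1(z)F1(z))    -- should be a pure delay
T = 0.5 * (np.convolve(h0, f0) + np.convolve(h1, f1))

print(A)  # [0. 0. 0.]
print(T)  # [0. 1. 0.]  i.e. T(z) = z^-1
```

With the alias term cancelled and T(z) a monomial, the reconstructed signal is just a one-sample delay of the input, i.e. perfect reconstruction in the sense defined in the text.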
Multidimensional filter banks using Gröbner bases The general multidimensional filter bank (Figure 7) can be represented by a pair of analysis and synthesis polyphase matrices H(z) and G(z), of size N × M and M × N respectively, where N is the number of channels and M is the absolute value of the determinant of the sampling matrix. The entries of H(z) and G(z) are the z-transforms of the polyphase components of the analysis and synthesis filters. Therefore, they are multivariate Laurent polynomials, which have the general form F(z) = Σ_k c_k z^k, a finite sum over multi-indices k whose entries may be negative. The Laurent polynomial matrix equation G(z)H(z) = I needs to be solved to design perfect reconstruction filter banks. In the multidimensional case, with multivariate polynomials, we need to use the theory and algorithms of Gröbner bases (developed by Buchberger). Gröbner bases can be used to characterize perfect reconstruction multidimensional filter banks, but the theory first needs to be extended from polynomial matrices to Laurent polynomial matrices. The Gröbner basis computation can be considered equivalently as Gaussian elimination for solving the polynomial matrix equation G(z)H(z) = I. Given a set of polynomial vectors h_1(z), ..., h_N(z), the module generated by them is the set of all combinations c_1(z)h_1(z) + ... + c_N(z)h_N(z), where the c_i(z) are polynomials. The module is analogous to the span of a set of vectors in linear algebra. The theory of Gröbner bases implies that the module has a unique reduced Gröbner basis for a given order of power products in polynomials. If we define the Gröbner basis as {b_1(z), ..., b_K(z)}, it can be obtained from {h_1(z), ..., h_N(z)} by a finite sequence of reduction (division) steps. Using reverse engineering, we can compute the basis vectors in terms of the original vectors through a transformation matrix W(z), as b_i(z) = Σ_j W_ij(z) h_j(z). (A small symbolic check of the polyphase perfect-reconstruction condition is sketched at the end of this article.) Mapping-based multidimensional filter banks Designing filters with good frequency responses is challenging via the Gröbner bases approach, so mapping-based design is popularly used to design nonseparable multidimensional filter banks with good frequency responses. The mapping approaches have certain restrictions on the kind of filters; however, they bring many important advantages, such as efficient implementation via lifting/ladder structures. Here we provide an example of a two-channel filter bank in 2D, with the quincunx sampling matrix introduced earlier. There are several possible choices of ideal frequency responses for the channel filters H0 and G0. (Note that the other two filters, H1 and G1, are supported on complementary regions.) All of these frequency regions can be critically sampled by the rectangular lattice generated by the sampling matrix. Suppose the filter bank achieves perfect reconstruction with FIR filters. Then, from the polyphase-domain characterization, it follows that the filters H1(z) and G1(z) are completely specified by H0(z) and G0(z), respectively. Therefore, we need to design H0(z) and G0(z) so that they have the desired frequency responses and satisfy the polyphase-domain conditions. There are different mapping techniques that can be used to obtain the above result. Filter bank design in the frequency domain If perfect reconstruction with FIR filters is not required, the design problem can be simplified by working in the frequency domain. Note that the frequency-domain method is not limited to the design of nonsubsampled filter banks. Directional filter banks Bamberger and Smith proposed a 2D directional filter bank (DFB). The DFB is efficiently implemented via an l-level tree-structured decomposition that leads to subbands with wedge-shaped frequency partitions. The original construction of the DFB involves modulating the input signal and using diamond-shaped filters. 
Moreover, in order to obtain the desired frequency partition, a complicated tree-expanding rule has to be followed. As a result, the frequency regions for the resulting subbands do not follow a simple ordering, as shown in Figure 9, based on the channel indices. The first advantage of the DFB is that it is a non-redundant transform that offers perfect reconstruction. Another advantage of the DFB is its directional selectivity and efficient structure. These advantages make the DFB an appropriate approach for many signal and image processing applications. Directional filter banks can be extended to higher dimensions; for example, they can be used in 3-D to achieve frequency sectioning. These kinds of filters can be used for selective filtering, to record and preserve signal information and features. Some other advantages of the NDFB can be stated as follows: directional decomposition, efficient construction, angular resolution, perfect reconstruction, and small redundancy. Multidimensional directional filter banks N-dimensional directional filter banks (NDFB) can be used to capture signal features and information. There are a number of studies regarding the capture of signal information in 2-D (e.g., the steerable pyramid, the directional filter bank, 2-D directional wavelets, curvelets, complex (dual-tree) wavelets, contourlets, and bandelets), with reviews available in the literature. Conclusion and application Filter banks play an important role in many aspects of signal processing today. They are used in many areas, such as signal and image compression and processing. The main purpose of a filter bank is to divide a signal or system into several separate frequency components. Depending on the purpose, different methods can be chosen to design the filters. This article has provided information regarding filter banks, multidimensional filter banks, and different methods for designing multidimensional filters. We also discussed the MDFB, which is built upon an efficient tree-structured construction that leads to a low redundancy ratio and refinable angular resolution. By combining the MDFB with a multiscale pyramid, one can construct the surfacelet transform, which has potential for efficiently capturing and representing surface-like singularities in multidimensional signals. The MDFB and the surfacelet transform have applications in various areas that involve the processing of multidimensional volumetric data, including video processing, seismic image processing, and medical image analysis. Some other advantages of the MDFB include directional decomposition, efficient construction, angular resolution, perfect reconstruction, and small redundancy. References Filter theory Multidimensional signal processing
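Finally, as a small symbolic check of the polyphase perfect-reconstruction condition G(z)H(z) = I from the Gröbner-bases section above (our own toy example: a Haar polyphase matrix followed by a lifting step with a Laurent entry; sympy stands in for the general machinery, since this simple square case needs only a matrix inverse rather than a full Gröbner-basis computation):

```python
import sympy as sp

z = sp.symbols('z')

# Haar analysis polyphase matrix, followed by a lifting (ladder) step
# with a Laurent-polynomial entry z**-1 -- a toy example of our own.
H = sp.Matrix([[1, 0], [z**-1, 1]]) * sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)

# Perfect reconstruction requires G(z) H(z) = I. Here H is square and
# det H is a (nonzero) monomial, so H is invertible over the Laurent
# polynomials and we may simply take G = H^{-1}.
G = H.inv().applyfunc(sp.simplify)

print((G * H).applyfunc(sp.simplify))  # Matrix([[1, 0], [0, 1]])
```

In the genuinely multidimensional, oversampled setting H(z) is rectangular and a left inverse must be found, which is where the Gröbner-basis framework described in the text comes into play.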
Multirate filter bank and multidimensional directional filter banks
Engineering
4,135
40,259,049
https://en.wikipedia.org/wiki/Ebomegobius
Ebomegobius goodi is a species of brackish water goby native to a stream in Cameroon and is known from a single specimen. This species grows to a length of SL. This species is the only known member of its genus. The genus name is a compound of Ebomé, the brackish stream where the species was found, and gobius, while the specific name honours the missionary Albert Irwin Good (1884-1975), who collected West African fishes, including the type of this species. References Endemic fauna of Cameroon Gobiidae Species known from a single specimen Fish described in 1946 Monotypic ray-finned fish genera
Ebomegobius
Biology
133
11,794,014
https://en.wikipedia.org/wiki/Alternaria%20helianthi
Alternaria helianthi is a fungal plant pathogen causing a disease in sunflowers known as Alternaria blight of sunflower. As a pathogen Alternaria spp. plant pathogens are found across the world, affecting a variety of species, including sunflowers. It is a major defoliating pathogen in warm, humid climates such as those of India and Africa. Farmers and businesses produce sunflowers (Helianthus spp.) for manufacturing oil, edible seeds, and ornamental flower displays. Several diseases threaten the production of sunflowers. Leaf blight of sunflowers is one of the most devastating of these and is caused by Alternaria helianthi (Hansford) Tubaki and Nishihara, a seed-borne pathogenic fungus. It was recorded in Japan and was found to be the same as a fungus collected earlier on sunflower in Argentina, India, Tanzania, Uganda and Zambia. Transportation of infected sunflowers and agricultural practices have spread the pathogen worldwide, causing severe losses in places, like India, where sunflowers are a main source of oil production. The pathogen that causes this disease is part of the genus Alternaria; it is ubiquitous and abundant and can pose a high mycotoxicological risk during harvest, causing devastation to entire crops. Alternaria helianthi is an ascomycete from the family Pleosporaceae. This pathogen produces simple (rarely branched) conidiophores that bear solitary conidia. These conidia are light brown, ellipsoid or broadly ovoid, and rarely form longitudinal septa. Host(s) and symptoms There are eight different species that cause yield loss of sunflowers; however, Alternaria helianthi is the primary causal agent and the most widespread. The main hosts are sunflowers (Helianthus annuus); however, it has been shown that safflower (Carthamus tinctorius), noogoora burr (Xanthium pungens), cocklebur (Xanthium strumarium), and Bathurst burr (Xanthium spinosum) can serve as alternative hosts for the pathogen. Symptoms Leaf spots start as small, dark, angular lesions that eventually turn into necrotic areas resulting in defoliation. Defoliation is most prevalent starting at the lower leaves, where the microclimate is most favorable. Dark brown lesions are found on leaves, stems, petioles, and bracts. Stem lesions are normally narrow (1–3 mm) black streaks that grow up to 3 cm long. The pathogen may cause linear spots on stems and water-soaked, sunken lesions on the back of the sunflower head. Some spots may have a yellow halo around them. Infection causes destruction of the flowers and early senescence. Disease cycle This pathogen overwinters on infected plant residues, but wild sunflowers may also serve as reservoirs. All species, including Alternaria helianthi, may also be seedborne. Alternaria spp. have no sexual or perfect stage; they multiply asexually through sporulation. The conidia germinate by producing one or more germ tubes. Disease progression heavily relies on the duration of leaf wetness following initial infection, as the germination of new spores can occur within days. Germination occurs best at temperatures below 26 °C and requires a minimum of 4 hours of leaf wetness for sporulation. The pathogen is dispersed by wind or splashing water onto the lower leaves of the sunflower. Young seedlings are more susceptible, but lower leaves on mature plants frequently are defoliated by Alternaria spp. Germ tubes are produced by the conidia and grow across the leaf surface before forming an appressorium. 
The fungus then enters the host by penetrating through the cuticle and epidermis. Penetration through wounds and stomata has also been observed. The conidiophores then develop through the collapsed stomata. Conidiophores are 12 to 50 micrometers long and arise singly or in branches. The conidia emerge through the stomata and trichomes. At this stage, the conidia produced cause secondary infection and spread to other healthy plants. Under certain conditions, micro-cyclic conidia can be produced directly from the parent conidia. Management There are three main types of control for Alternaria helianthi: cultural practices, fungicides, and resistance. Cultural practices include removing wild and volunteer sunflower hosts, minimizing dampness/wetness of leaves, and removing previous sunflower residues from the soil. The destruction of plant residue eliminates the source of inoculum. Additionally, crop rotation with non-Asteraceae crops or allowing for fallow periods can assist in management. Seed treatments are an option; however, the use of multiple applications of different fungicides is more effective. Finally, disease resistance has been found in oilseed sunflowers, so it may be possible to incorporate this resistance into other sunflower lines through hybridization. References helianthi Fungal plant pathogens and diseases Fungi described in 1943 Fungus species
Alternaria helianthi
Biology
1,088
682,482
https://en.wikipedia.org/wiki/Human
Humans (Homo sapiens) or modern humans are the most common and widespread species of primate, and the last surviving species of the genus Homo. They are great apes characterized by their hairlessness, bipedalism, and high intelligence. Humans have large brains, enabling advanced cognitive skills that allow them to thrive and adapt in varied environments, develop highly complex tools, and form complex social structures and civilizations. Humans are highly social, with individual humans tending to belong to a multi-layered network of distinct social groups, from families and peer groups to corporations and political states. As such, social interactions between humans have established a wide variety of values, social norms, languages, and traditions (collectively termed institutions), each of which bolsters human society. Humans are also highly curious: the desire to understand and influence phenomena has motivated humanity's development of science, technology, philosophy, mythology, religion, and other frameworks of knowledge; humans also study themselves through such domains as anthropology, social science, history, psychology, and medicine. As of January 2025, there are estimated to be more than 8 billion humans alive. For most of their history, humans were nomadic hunter-gatherers. Humans began exhibiting behavioral modernity about 160,000–60,000 years ago. The Neolithic Revolution, which began in Southwest Asia around 13,000 years ago (and separately in a few other places), saw the emergence of agriculture and permanent human settlement; in turn, this led to the development of civilization and kickstarted a period of continuous (and ongoing) population growth and rapid technological change. Since then, a number of civilizations have risen and fallen, while a number of sociocultural and technological developments have resulted in significant changes to the human lifestyle. Although some scientists equate the term "humans" with all members of the genus Homo, in common usage it generally refers to Homo sapiens, the only extant member. All other members of the genus Homo, which are now extinct, are known as archaic humans, and the term "modern human" is used to distinguish Homo sapiens from archaic humans. Anatomically modern humans emerged around 300,000 years ago in Africa, evolving from Homo heidelbergensis or a similar species. Migrating out of Africa, they gradually replaced and interbred with local populations of archaic humans. Multiple hypotheses for the extinction of archaic human species such as Neanderthals include competition, violence, interbreeding with Homo sapiens, or inability to adapt to climate change. Genes and the environment influence human biological variation in visible characteristics, physiology, disease susceptibility, mental abilities, body size, and life span. Though humans vary in many traits (such as genetic predispositions and physical features), humans are among the least genetically diverse primates. Any two humans are at least 99% genetically similar. Humans are sexually dimorphic: generally, males have greater body strength and females have a higher body fat percentage. At puberty, humans develop secondary sex characteristics. Females are capable of pregnancy, usually between puberty, at around 12 years old, and menopause, around the age of 50. Humans are omnivorous, capable of consuming a wide variety of plant and animal material, and have used fire and other forms of heat to prepare and cook food since the time of Homo erectus. 
Humans have had a dramatic effect on the environment. They are apex predators, being rarely preyed upon by other species. Human population growth, industrialization, land development, overconsumption and combustion of fossil fuels have led to environmental destruction and pollution that significantly contributes to the ongoing mass extinction of other forms of life. Within the last century, humans have explored challenging environments such as Antarctica, the deep sea, and outer space. Human habitation within these hostile environments is restrictive and expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions. Humans have visited the Moon and made their presence known on other celestial bodies through human-made robotic spacecraft. Since the early 20th century, there has been continuous human presence in Antarctica through research stations and, since 2000, in space through habitation on the International Space Station. Humans can survive for up to eight weeks without food and several days without water. Humans are generally diurnal, sleeping on average seven to nine hours per day. Childbirth is dangerous, with a high risk of complications and death. Often, both the mother and the father provide care for their children, who are helpless at birth. Etymology and definition All modern humans are classified into the species Homo sapiens, coined by Carl Linnaeus in his 1735 work Systema Naturae. The generic name Homo is a learned 18th-century derivation from Latin homo, which refers to humans of either sex. The word human can refer to all members of the Homo genus. The name Homo sapiens means 'wise man' or 'knowledgeable man'. There is disagreement as to whether certain extinct members of the genus, namely Neanderthals, should be included as a separate species of humans or as a subspecies of H. sapiens. Human is a loanword of Middle English from Old French humain, ultimately from Latin humanus, the adjectival form of homo ('man', in the sense of humanity). The native English term man can refer to the species generally (a synonym for humanity) as well as to human males. It may also refer to individuals of either sex. Despite the fact that the word animal is colloquially used as an antonym for human, and contrary to a common biological misconception, humans are animals. The word person is often used interchangeably with human, but philosophical debate exists as to whether personhood applies to all humans or all sentient beings, and further if a human can lose personhood (such as by going into a persistent vegetative state). Evolution Humans are apes (superfamily Hominoidea). The lineage of apes that eventually gave rise to humans first split from gibbons (family Hylobatidae) and orangutans (genus Pongo), then gorillas (genus Gorilla), and finally, chimpanzees and bonobos (genus Pan). The last split, between the human and chimpanzee–bonobo lineages, took place around 8–4 million years ago, in the late Miocene epoch. During this split, chromosome 2 was formed from the joining of two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes. Following their split with chimpanzees and bonobos, the hominins diversified into many species and at least two distinct genera. All but one of these lineages are now extinct; the surviving lineage is represented by the genus Homo and its sole extant species, Homo sapiens. The genus Homo evolved from Australopithecus. 
Though fossils from the transition are scarce, the earliest members of Homo share several key traits with Australopithecus. The earliest record of Homo is the 2.8 million-year-old specimen LD 350-1 from Ethiopia, and the earliest named species are Homo habilis and Homo rudolfensis which evolved by 2.3 million years ago. H. erectus (the African variant is sometimes called H. ergaster) evolved 2 million years ago and was the first archaic human species to leave Africa and disperse across Eurasia. H. erectus also was the first to evolve a characteristically human body plan. Homo sapiens emerged in Africa around 300,000 years ago from a species commonly designated as either H. heidelbergensis or H. rhodesiensis, the descendants of H. erectus that remained in Africa. H. sapiens migrated out of the continent, gradually replacing or interbreeding with local populations of archaic humans. Humans began exhibiting behavioral modernity about 160,000–70,000 years ago, and possibly earlier. This development was likely selected amidst natural climate change in Middle to Late Pleistocene Africa. The "out of Africa" migration took place in at least two waves, the first around 130,000 to 100,000 years ago, the second (Southern Dispersal) around 70,000 to 50,000 years ago. H. sapiens proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000 years ago, Australia around 65,000 years ago, the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand in the years 300 to 1280 CE. Human evolution was not a simple linear or branched progression but involved interbreeding between related species. Genomic research has shown that hybridization between substantially diverged lineages was common in human evolution. DNA evidence suggests that several genes of Neanderthal origin are present among all non sub-Saharan-African populations, and Neanderthals and other hominins, such as Denisovans, may have contributed up to 6% of their genome to present-day non sub-Saharan-African humans. Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are hairlessness, obligate bipedalism, increased brain size and decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate. History Prehistory Until about 12,000 years ago, all humans lived as hunter-gatherers. The Neolithic Revolution (the invention of agriculture) first took place in Southwest Asia and spread through large parts of the Old World over the following millennia. It also occurred independently in Mesoamerica (about 6,000 years ago), China, Papua New Guinea, and the Sahel and West Savanna regions of Africa. Access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history. Agriculture and sedentary lifestyle led to the emergence of early civilizations. Ancient An urban revolution took place in the 4th millennium BCE with the development of city-states, particularly Sumerian cities located in Mesopotamia. It was in these cities that the earliest known form of writing, cuneiform script, appeared around 3000 BCE. Other major civilizations to develop around this time were Ancient Egypt and the Indus Valley Civilisation. 
They eventually traded with each other and invented technology such as wheels, plows and sails. Emerging by 3000 BCE, the Caral–Supe civilization is the oldest complex civilization in the Americas. Astronomy and mathematics were also developed and the Great Pyramid of Giza was built. There is evidence of a severe drought lasting about a hundred years that may have caused the decline of these civilizations, with new ones appearing in the aftermath. The Babylonians came to dominate Mesopotamia while others, such as the Poverty Point culture, Minoans and the Shang dynasty, rose to prominence in new areas. The Late Bronze Age collapse around 1200 BCE resulted in the disappearance of a number of civilizations and the beginning of the Greek Dark Ages. During this period iron started replacing bronze, leading to the Iron Age. In the 5th century BCE, history began to be recorded as a discipline, providing a much clearer picture of life at the time. Between the 8th and 6th centuries BCE, Europe entered the age of classical antiquity, a period when ancient Greece and ancient Rome flourished. Around this time other civilizations also came to prominence. The Maya civilization started to build cities and create complex calendars. In Africa, the Kingdom of Aksum overtook the declining Kingdom of Kush and facilitated trade between India and the Mediterranean. In West Asia, the Achaemenid Empire's system of centralized governance became the precursor to many later empires, while the Gupta Empire in India and the Han dynasty in China have been described as golden ages in their respective regions. Medieval Following the fall of the Western Roman Empire in 476, Europe entered the Middle Ages. During this period, Christianity and the Church would provide centralized authority and education. In the Middle East, Islam became the prominent religion and expanded into North Africa. It led to an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. The Christian and Islamic worlds would eventually clash, with the Kingdom of England, the Kingdom of France and the Holy Roman Empire declaring a series of holy wars to regain control of the Holy Land from Muslims. In the Americas, between 200 and 900 CE Mesoamerica was in its Classic Period, while further north, complex Mississippian societies would arise starting around 800 CE. The Mongol Empire would conquer much of Eurasia in the 13th and 14th centuries. Over this same time period, the Mali Empire in Africa grew to be the largest empire on the continent, stretching from Senegambia to Ivory Coast. Oceania would see the rise of the Tuʻi Tonga Empire, which expanded across many islands in the South Pacific. By the late 15th century, the Aztecs and Inca had become the dominant powers in Mesoamerica and the Andes, respectively. Modern The early modern period in Europe and the Near East (c. 1450–1800) began with the final defeat of the Byzantine Empire and the rise of the Ottoman Empire. Meanwhile, Japan entered the Edo period, the Qing dynasty rose in China and the Mughal Empire ruled much of India. Europe underwent the Renaissance, starting in the 15th century, and the Age of Discovery began with the exploring and colonizing of new regions. This included the colonization of the Americas and the Columbian Exchange. This expansion led to the Atlantic slave trade and the genocide of Native American peoples. 
This period also marked the Scientific Revolution, with great advances in mathematics, mechanics, astronomy and physiology. The late modern period (1800–present) saw the Technological and Industrial Revolutions bring such advances as imaging technology and major innovations in transport and energy development. Influenced by Enlightenment ideals, the Americas and Europe experienced a period of political revolutions known as the Age of Revolution. The Napoleonic Wars raged through Europe in the early 1800s, and Spain lost most of its colonies in the New World, while Europeans continued their expansion into Africa (where European control went from 10% to almost 90% in less than 50 years) and Oceania. In the 19th century, the British Empire expanded to become the world's largest empire. A tenuous balance of power among European nations collapsed in 1914 with the outbreak of the First World War, one of the deadliest conflicts in history. In the 1930s, a worldwide economic crisis led to the rise of authoritarian regimes and a Second World War, involving almost all of the world's countries. The war's destruction led to the collapse of most global empires, leading to widespread decolonization. Following the conclusion of the Second World War in 1945, the United States and the USSR emerged as the remaining global superpowers. This led to a Cold War that saw a struggle for global influence, including a nuclear arms race and a space race, ending in the collapse of the Soviet Union. The current Information Age, spurred by the development of the Internet and artificial intelligence systems, sees the world becoming increasingly globalized and interconnected. Habitat and population Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock. Modern humans, however, have a great capacity for altering their habitats by means of technology, irrigation, urban planning, construction, deforestation and desertification. Human settlements continue to be vulnerable to natural disasters, especially those located in hazardous places or built with low-quality construction. Grouping and deliberate habitat alteration is often done with the goals of providing protection, accumulating comforts or material wealth, expanding the available food, improving aesthetics, increasing knowledge or enhancing the exchange of resources. Humans are one of the most adaptable species, despite having a low or narrow tolerance for many of the earth's extreme environments. Currently the species is present in all eight biogeographical realms, although its presence in the Antarctic realm is largely limited to research stations, whose population declines during the winter months. Humans have established nation-states in the other seven realms, such as South Africa, India, Russia, Australia, Fiji, the United States and Brazil (each located in a different biogeographical realm). By using advanced tools and clothing, humans have been able to extend their tolerance to a wide variety of temperatures, humidities, and altitudes. As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforest, arid desert, extremely cold arctic regions, and heavily polluted cities; in comparison, most other species are confined to a few geographical areas by their limited adaptability. 
The human population is not, however, uniformly distributed on the Earth's surface: population density varies from one region to another, and large stretches of the surface, such as Antarctica and vast swathes of the ocean, are almost completely uninhabited. Most humans (61%) live in Asia; the remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%). Estimates of the population at the time agriculture emerged in around 10,000 BC have ranged between 1 million and 15 million. Around 50–60 million people lived in the combined eastern and western Roman Empire in the 4th century AD. Bubonic plagues, first recorded in the 6th century AD, reduced the population by 50%, with the Black Death killing 75–200 million people in Eurasia and North Africa alone. The human population is believed to have reached one billion in 1800. It has since then increased exponentially, reaching two billion in 1930 and three billion in 1960, four in 1975, five in 1987 and six billion in 1999. It passed seven billion in 2011 and eight billion in November 2022. It took over two million years of human prehistory and history for the human population to reach one billion, and only 207 years more to grow to 7 billion; the average growth rates implied by these milestones are computed in the short sketch below. The combined biomass of the carbon of all the humans on Earth in 2018 was estimated at 60 million tons, about 10 times larger than that of all non-domesticated mammals. In 2018, 4.2 billion humans (55%) lived in urban areas, up from 751 million in 1950. The most urbanized regions are Northern America (82%), Latin America (81%), Europe (74%) and Oceania (68%), with Africa and Asia home to nearly 90% of the world's 3.4 billion rural population. Problems for humans living in cities include various forms of pollution and crime, especially in inner-city and suburban slums. Biology Anatomy and physiology Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The dental formula of humans is 2.1.2.3 in each quadrant of the upper and lower jaws (two incisors, one canine, two premolars, and three molars). Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their third molars, with some individuals having them congenitally absent. Humans share with chimpanzees a vestigial tail, appendix, flexible shoulder joints, grasping fingers and opposable thumbs. Humans also have a more barrel-shaped chest, in contrast to the funnel shape of other apes, an adaptation for bipedal respiration. Apart from bipedalism and brain size, humans differ from chimpanzees mostly in smelling, hearing and digesting proteins. While humans have a density of hair follicles comparable to other apes, it is predominantly vellus hair, most of which is so short and wispy as to be practically invisible. Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and are mainly located on the palms of the hands and the soles of the feet. It is estimated that the worldwide average height for an adult human male is about 171 cm (5 ft 7 in), while the worldwide average height for adult human females is about 159 cm (5 ft 3 in). Shrinkage of stature may begin in middle age in some individuals but tends to be typical in the extremely aged. Throughout history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions. 
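Returning to the population milestones in the Habitat and population passage above: their implied average annual growth rates can be computed with a short sketch in Python. The milestone years and counts are taken from that passage, and the formula r = (P2/P1)^(1/years) − 1 assumes smooth exponential growth within each interval; the calculation is an illustrative aside, not a demographic model.

```python
# Average annual growth rate implied by successive population milestones,
# using r = (P2/P1)**(1/years) - 1 for each interval.
milestones = [(1800, 1e9), (1930, 2e9), (1960, 3e9), (1975, 4e9),
              (1987, 5e9), (1999, 6e9), (2011, 7e9), (2022, 8e9)]

for (y1, p1), (y2, p2) in zip(milestones, milestones[1:]):
    rate = (p2 / p1) ** (1 / (y2 - y1)) - 1
    print(f"{y1}-{y2}: ~{rate:.2%} per year")
```

The implied rate peaks at roughly 1.9% per year for the 1960–1975 interval and declines thereafter, consistent with growth that is exponential but decelerating.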
The average mass of an adult human is about 59 kg (130 lb) for females and about 77 kg (170 lb) for males. Like many other conditions, body weight and body type are influenced by both genetic susceptibility and environment, and vary greatly among individuals. Humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but are slower over short distances. Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances. Compared to other apes, the human heart produces greater stroke volume and cardiac output, and the aorta is proportionately larger. Genetics Like most animals, humans are a diploid and eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes, there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY. Genes and environment influence human biological variation in visible characteristics, physiology, disease susceptibility and mental abilities. The exact influence of genes and environment on certain traits is not well understood. While no humans, not even monozygotic twins, are genetically identical, two humans on average will have a genetic similarity of 99.5–99.9%. This makes them more homogeneous than other great apes, including chimpanzees. This small variation in human DNA compared to many other species suggests a population bottleneck during the Late Pleistocene (around 100,000 years ago), in which the human population was reduced to a small number of breeding pairs. The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome display directional selection in the past 15,000 years. The human genome was first sequenced in 2001, and by 2020 hundreds of thousands of genomes had been sequenced. By 2012 the International HapMap Project had compared the genomes of 1,184 individuals from 11 populations and identified 1.6 million single nucleotide polymorphisms. African populations harbor the highest number of private genetic variants. While many of the common variants found in populations outside of Africa are also found on the African continent, there are still large numbers that are private to these regions, especially Oceania and the Americas. By 2010 estimates, humans have approximately 22,000 genes. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago. Life cycle Most human reproduction takes place by internal fertilization via sexual intercourse, but can also occur through assisted reproductive technology procedures. The average gestation period is 38 weeks, but a normal pregnancy can vary by up to 37 days. Embryonic development covers the first eight weeks; at the beginning of the ninth week the embryo is termed a fetus. Humans are able to induce early labor or perform a caesarean section if the child needs to be born earlier for medical reasons. In developed countries, infants are typically 3–4 kg (7–9 lb) in weight and 47–53 cm (19–21 in) in height at birth. 
However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions. Compared with other species, human childbirth is dangerous, with a much higher risk of complications and death. The size of the fetus's head is more closely matched to the pelvis than in other primates. The reason for this is not completely understood, but it contributes to a painful labor that can last 24 hours or more. The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries. Both the mother and the father provide care for human offspring, in contrast to other primates, where parental care is mostly provided by the mother. Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 15 to 17 years of age. The human life span has been split into various numbers of stages, ranging from three to twelve. Common stages include infancy, childhood, adolescence, adulthood and old age. The lengths of these stages have varied across cultures and time periods, but human development is typified by an unusually rapid growth spurt during adolescence. Human females undergo menopause and become infertile at around the age of 50. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age. The life span of an individual depends on two major factors, genetics and lifestyle choices. For various reasons, including biological/genetic causes, women live on average about four years longer than men. The global average life expectancy at birth of a girl is estimated to be 74.9 years, compared to 70.4 for a boy. There are significant geographical variations in human life expectancy, mostly correlated with economic development: for example, life expectancy at birth in Hong Kong is 87.6 years for girls and 81.8 for boys, while in the Central African Republic it is 55.0 years for girls and 50.6 for boys. The developed world is generally aging, with the median age around 40 years. In the developing world, the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older. In 2012, the United Nations estimated that there were 316,600 living centenarians (humans of age 100 or older) worldwide. Diet Humans are omnivorous, capable of consuming a wide variety of plant and animal material. Human groups have adopted a range of diets, from purely vegan to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources. The human diet is prominently reflected in human culture and has led to the development of food science. Until the development of agriculture, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. 
This involved combining stationary food sources (such as fruits, grains, tubers and mushrooms, as well as insect larvae and aquatic mollusks) with wild game, which had to be hunted and captured in order to be consumed. It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus. Human domestication of wild plants began about 11,700 years ago, leading to the development of agriculture, a gradual process called the Neolithic Revolution. These dietary changes may also have altered human biology; the spread of dairy farming provided a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults. The types of food consumed, and how they are prepared, have varied widely by time, location, and culture. In general, humans can survive for up to eight weeks without food, depending on stored body fat. Survival without water is usually limited to three or four days, with a maximum of about one week. As of 2020, an estimated 9 million humans die every year from causes directly or indirectly related to starvation. Childhood malnutrition is also common and contributes to the global burden of disease. However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed and a few developing countries. Worldwide, over one billion people are obese, while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic." Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet. Biological variation There is biological variation in the human species, with traits such as blood type, genetic diseases, cranial features, facial features, organ systems, eye color, hair color and texture, height and build, and skin color varying across the globe. The typical height of an adult human is between 1.4 and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending on sex, ethnic origin, and family bloodlines. Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns. There is evidence that populations have adapted genetically to various external factors. The genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication and are more dependent on cow milk. Sickle cell anemia, which may provide increased resistance to malaria, is frequent in populations where malaria is endemic. Populations that have for a very long time inhabited specific climates tend to have developed specific phenotypes that are beneficial for those environments: short stature and stocky build in cold regions, tall and lanky builds in hot regions, and high lung capacities or other adaptations at high altitudes. Some populations have evolved highly distinctive adaptations to very specific environmental conditions, such as those advantageous to ocean-dwelling lifestyles and freediving in the Bajau. Human hair ranges in color from red to blond to brown to black, which is the most frequent. Hair color depends on the amount of melanin, with concentrations fading with increased age, leading to grey or even white hair. Skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism. 
It tends to vary clinally and generally correlates with the level of ultraviolet radiation in a particular geographic area, with darker skin mostly around the equator. Skin darkening may have evolved as protection against ultraviolet solar radiation. Light skin pigmentation protects against depletion of vitamin D, which requires sunlight for its synthesis. Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation. There is relatively little variation between human geographical populations, and most of the variation that occurs is at the individual level. Much of human variation is continuous, often with no clear points of demarcation. Genetic data show that no matter how population groups are defined, two people from the same population group are almost as different from each other as two people from any two different population groups. Dark-skinned populations found in Africa, Australia, and South Asia are not closely related to each other. Genetic research has demonstrated that human populations native to the African continent are the most genetically diverse, and that genetic diversity decreases with migratory distance from Africa, possibly the result of bottlenecks during human migration. Non-African populations acquired new genetic inputs from local admixture with archaic populations, and so have much greater variation from Neanderthals and Denisovans than is found in Africa, though Neanderthal admixture into African populations may be underestimated. Furthermore, recent studies have found that populations in sub-Saharan Africa, and particularly West Africa, have ancestral genetic variation which predates modern humans and has been lost in most non-African populations. Some of this ancestry is thought to originate from admixture with an unknown archaic hominin that diverged before the split of Neanderthals and modern humans. Humans are a gonochoric species, meaning they are divided into male and female sexes. The greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%–0.5%, the genetic difference between males and females is between 1% and 2%. Males on average are 15% heavier and taller than females. On average, men have about 40–50% more upper-body strength and 20–30% more lower-body strength than women at the same weight, due to higher amounts of muscle and larger muscle fibers. Women generally have a higher body fat percentage than men. Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D in females during pregnancy and lactation. As there are chromosomal differences between females and males, some X and Y chromosome-related conditions and disorders only affect either men or women. After allowing for body weight and volume, the male voice is usually an octave deeper than the female voice. Women have a longer life span in almost every population around the world. There are intersex conditions in the human population, though these are rare. Psychology The human brain, the focal point of the central nervous system in humans, controls the peripheral nervous system. In addition to controlling "lower", involuntary, or primarily autonomic activities such as respiration and digestion, it is also the locus of "higher" order functioning such as thought, reasoning, and abstraction. 
These cognitive processes constitute the mind, and, along with their behavioral consequences, are studied in the field of psychology. Humans have a larger and more developed prefrontal cortex than other primates, the region of the brain associated with higher cognition. This has led humans to proclaim themselves to be more intelligent than any other known species. Objectively defining intelligence is difficult, however, as other animals have adapted senses and excel in areas where humans are unable to. There are some traits that, although not strictly unique, do set humans apart from other animals. Humans may be the only animals who have episodic memory and who can engage in "mental time travel". Even compared with other social animals, humans have an unusually high degree of flexibility in their facial expressions. Humans are the only animals known to cry emotional tears. Humans are one of the few animals able to self-recognize in mirror tests, and there is also debate over to what extent humans are the only animals with a theory of mind. Sleep and dreaming Humans are generally diurnal. The average sleep requirement is between seven and nine hours per day for an adult and nine to ten hours per day for a child; elderly people usually sleep for six to seven hours. Having less sleep than this is common among humans, even though sleep deprivation can have negative health effects. A sustained restriction of adult sleep to four hours per day has been shown to correlate with changes in physiology and mental state, including reduced memory, fatigue, aggression, and bodily discomfort. During sleep humans dream, experiencing sensory images and sounds. Dreaming is stimulated by the pons and mostly occurs during the REM phase of sleep. The length of a dream can vary, from a few seconds up to 30 minutes. Humans have three to five dreams per night, and some may have up to seven. Dreamers are more likely to remember the dream if awakened during the REM phase. The events in dreams are generally outside the control of the dreamer, with the exception of lucid dreaming, where the dreamer is self-aware. Dreams can at times spark creative thoughts or give a sense of inspiration. Consciousness and thought Human consciousness, at its simplest, is sentience or awareness of internal or external existence. Despite centuries of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being "at once the most familiar and most mysterious aspect of our lives". The only widely agreed notion about the topic is the intuition that it exists. Opinions differ about what exactly needs to be studied and explained as consciousness. Some philosophers divide consciousness into phenomenal consciousness, which is sensory experience itself, and access consciousness, which can be used for reasoning or directly controlling actions. It is sometimes synonymous with 'the mind', and at other times, an aspect of it. Historically it is associated with introspection, private thought, imagination and volition. It now often includes some kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features. The process of acquiring knowledge and understanding through thought, experience, and the senses is known as cognition. 
The human brain perceives the external world through the senses, and each individual human is influenced greatly by his or her experiences, leading to subjective views of existence and the passage of time. The nature of thought is central to psychology and related fields. Cognitive psychology studies cognition, the mental processes underlying behavior. Largely focusing on the development of the human mind through the life span, developmental psychology seeks to understand how people come to perceive, understand, and act within the world and how these processes change as they age. This may focus on intellectual, cognitive, neural, social, or moral development. Psychologists have developed intelligence tests and the concept of intelligence quotient in order to assess the relative intelligence of human beings and study its distribution among populations. Motivation and emotion Human motivation is not yet wholly understood. From a psychological perspective, Maslow's hierarchy of needs is a well-established theory that describes motivation as the process of satisfying certain needs in ascending order of complexity. From a more general, philosophical perspective, human motivation can be defined as a commitment to, or withdrawal from, various goals requiring the application of human ability. Furthermore, incentive and preference are both factors, as are any perceived links between incentives and preferences. Volition may also be involved, in which case willpower is also a factor. Ideally, both motivation and volition ensure the selection, striving for, and realization of goals in an optimal manner, a function beginning in childhood and continuing throughout a lifetime in a process known as socialization. Emotions are biological states associated with the nervous system, brought on by neurophysiological changes variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure. They are often intertwined with mood, temperament, personality, disposition, creativity, and motivation. Emotion has a significant influence on human behavior and the human ability to learn. Acting on extreme or uncontrolled emotions can lead to social disorder and crime, with studies showing criminals may have lower emotional intelligence than average. Emotional experiences perceived as pleasant, such as joy, interest or contentment, contrast with those perceived as unpleasant, like anxiety, sadness, anger, and despair. Happiness, or the state of being happy, is a human emotional condition. The definition of happiness is a common philosophical topic. Some define it as experiencing the feeling of positive emotional affect while avoiding negative ones. Others see it as an appraisal of life satisfaction or quality of life. Recent research suggests that being happy might involve experiencing some negative emotions when humans feel they are warranted. Sexuality and love For humans, sexuality involves biological, erotic, physical, emotional, social, or spiritual feelings and behaviors. Because it is a broad term, which has varied with historical contexts over time, it lacks a precise definition. The biological and physical aspects of sexuality largely concern the human reproductive functions, including the human sexual response cycle. Sexuality also affects and is affected by cultural, political, legal, philosophical, moral, ethical, and religious aspects of life. Sexual desire, or libido, is a basic mental state present at the beginning of sexual behavior. 
Studies show that men desire sex more than women and masturbate more often. Humans can fall anywhere along a continuous scale of sexual orientation, although most humans are heterosexual. While homosexual behavior occurs in some other animals, only humans and domestic sheep have so far been found to exhibit exclusive preference for same-sex relationships. Most evidence supports nonsocial, biological causes of sexual orientation, as cultures that are very tolerant of homosexuality do not have significantly higher rates of it. Research in neuroscience and genetics suggests that other aspects of human sexuality are biologically influenced as well. Love most commonly refers to a feeling of strong attraction or emotional attachment. It can be impersonal (the love of an object, ideal, or strong political or spiritual connection) or interpersonal (love between humans). When in love, dopamine, norepinephrine, serotonin and other chemicals stimulate the brain's pleasure center, leading to side effects such as increased heart rate, loss of appetite and sleep, and an intense feeling of excitement. Culture Humanity's unprecedented set of intellectual skills was a key factor in the species' eventual technological advancement and concomitant domination of the biosphere. Disregarding extinct hominids, humans are the only animals known to teach generalizable information, innately deploy recursive embedding to generate and communicate complex concepts, engage in the "folk physics" required for competent tool design, or cook food in the wild. Teaching and learning preserve the cultural and ethnographic identity of human societies. Other traits and behaviors that are mostly unique to humans include starting fires, phoneme structuring and vocal learning. Language While many species communicate, language is unique to humans, a defining feature of humanity, and a cultural universal. Unlike the limited systems of other animals, human language is open: an infinite number of meanings can be produced by combining a limited number of symbols. Human language also has the capacity of displacement, using words to represent things and happenings that are not presently or locally occurring but reside in the shared imagination of interlocutors. Language differs from other forms of communication in that it is modality independent; the same meanings can be conveyed through different media, audibly in speech, visually by sign language or writing, and through tactile media such as braille. Language is central to the communication between humans, and to the sense of identity that unites nations, cultures and ethnic groups. There are approximately six thousand different languages currently in use, including sign languages, and many thousands more that are extinct. The arts Human arts can take many forms, including visual, literary, and performing. Visual art can range from paintings and sculptures to film, fashion design, and architecture. Literary arts can include prose, poetry, and dramas. The performing arts generally involve theatre, music, and dance. Humans often combine the different forms (for example, music videos). Other entities that have been described as having artistic qualities include food preparation, video games, and medicine. As well as providing entertainment and transferring knowledge, the arts are also used for political purposes. Art is a defining characteristic of humans, and there is evidence for a relationship between creativity and language. 
The earliest evidence of art was shell engravings made by Homo erectus 300,000 years before modern humans evolved. Art attributed to H. sapiens existed at least 75,000 years ago, with jewellery and drawings found in caves in South Africa. There are various hypotheses as to why humans have adapted to the arts. These include allowing them to better solve problems, providing a means to control or influence other humans, encouraging cooperation and contribution within a society, or increasing the chance of attracting a potential mate. The use of imagination developed through art, combined with logic, may have given early humans an evolutionary advantage. Evidence of humans engaging in musical activities predates cave art, and so far music has been practiced by virtually all known human cultures. There exists a wide variety of music genres and ethnic musics, with humans' musical abilities being related to other abilities, including complex social human behaviours. It has been shown that human brains respond to music by becoming synchronized with the rhythm and beat, a process called entrainment. Dance is also a form of human expression found in all cultures and may have evolved as a way to help early humans communicate. Listening to music and observing dance stimulates the orbitofrontal cortex and other pleasure-sensing areas of the brain. Unlike speaking, reading and writing do not come naturally to humans and must be taught. Still, storytelling was present before the invention of writing, with 30,000-year-old paintings on walls inside some caves portraying a series of dramatic scenes. One of the oldest surviving works of literature is the Epic of Gilgamesh, first engraved on ancient Babylonian tablets about 4,000 years ago. Beyond simply passing down knowledge, the use and sharing of imaginative fiction through stories might have helped develop humans' capabilities for communication and increased the likelihood of securing a mate. Storytelling may also be used as a way to provide the audience with moral lessons and encourage cooperation. Tools and technologies Stone tools were used by proto-humans at least 2.5 million years ago. The use and manufacture of tools has been put forward as the ability that defines humans more than anything else and has historically been seen as an important evolutionary step. The technology became much more sophisticated about 1.8 million years ago, with the controlled use of fire beginning around 1 million years ago. The wheel and wheeled vehicles appeared simultaneously in several regions some time in the fourth millennium BCE. The development of more complex tools and technologies allowed land to be cultivated and animals to be domesticated, thus proving essential in the development of agriculture, in what is known as the Neolithic Revolution. China developed paper, the printing press, gunpowder, the compass and other important inventions. The continued improvements in smelting allowed forging of copper, bronze, iron and eventually steel, which is used in railways, skyscrapers and many other products. This coincided with the Industrial Revolution, where the invention of automated machines brought major changes to humans' lifestyles. 
Modern technology is observed as progressing exponentially, with major innovations in the 20th century including electricity, penicillin, semiconductors, internal combustion engines, the Internet, nitrogen-fixing fertilizers, airplanes, computers, automobiles, contraceptive pills, nuclear fission, the green revolution, radio, scientific plant breeding, rockets, air conditioning, television and the assembly line. Religion and spirituality Definitions of religion vary; according to one definition, a religion is a belief system concerning the supernatural, sacred or divine, and the practices, values, institutions and rituals associated with such belief. Some religions also have a moral code. The evolution and the history of the first religions have become areas of active scientific investigation. Credible evidence of religious behaviour dates to the Middle Paleolithic era (45–200 thousand years ago). It may have evolved to play a role in helping enforce and encourage cooperation between humans. Religion manifests in diverse forms. Religion can include a belief in life after death, the origin of life, the nature of the universe (religious cosmology) and its ultimate fate (eschatology), and moral or ethical teachings. Views on transcendence and immanence vary substantially; traditions variously espouse monism, deism, pantheism, and theism (including polytheism and monotheism). Although measuring religiosity is difficult, a majority of humans profess some variety of religious or spiritual belief. In 2015 the plurality were Christians, followed by Muslims, Hindus and Buddhists. As of 2015, about 16%, or slightly under 1.2 billion humans, were irreligious, including those with no religious beliefs or no identity with any religion. Science and philosophy An aspect unique to humans is their ability to transmit knowledge from one generation to the next and to continually build on this information to develop tools, scientific laws and other advances to pass on further. This accumulated knowledge can be tested to answer questions or make predictions about how the universe functions, and has been very successful in advancing human ascendancy. Aristotle has been described as the first scientist, and preceded the rise of scientific thought through the Hellenistic period. Other early advances in science came from the Han dynasty in China and during the Islamic Golden Age. The scientific revolution, near the end of the Renaissance, led to the emergence of modern science. A chain of events and influences led to the development of the scientific method, a process of observation and experimentation that is used to differentiate science from pseudoscience. An understanding of mathematics is unique to humans, although other species of animals have some numerical cognition. All of science can be divided into three major branches: the formal sciences (e.g., logic and mathematics), which are concerned with formal systems; the applied sciences (e.g., engineering, medicine), which are focused on practical applications; and the empirical sciences, which are based on empirical observation and are in turn divided into natural sciences (e.g., physics, chemistry, biology) and social sciences (e.g., psychology, economics, sociology). Philosophy is a field of study where humans seek to understand fundamental truths about themselves and the world in which they live. Philosophical inquiry has been a major feature in the development of humans' intellectual history. 
It has been described as the "no man's land" between definitive scientific knowledge and dogmatic religious teachings. Major fields of philosophy include metaphysics, epistemology, logic, and axiology (which includes ethics and aesthetics). Society Society is the system of organizations and institutions arising from interaction between humans. Humans are highly social and tend to live in large complex social groups. They can be divided into different groups according to their income, wealth, power, reputation and other factors. The structure of social stratification and the degree of social mobility differ, especially between modern and traditional societies. Human groups range from the size of families to nations. The first form of human social organization is thought to have resembled hunter-gatherer band societies. Gender Human societies typically exhibit gender identities and gender roles that distinguish between masculine and feminine characteristics and prescribe the range of acceptable behaviours and attitudes for their members based on their sex. The most common categorisation is a gender binary of men and women. Some societies recognize a third gender, or less commonly a fourth or fifth. In some other societies, non-binary is used as an umbrella term for a range of gender identities that are not solely male or female. Gender roles are often associated with a division of norms, practices, dress, behavior, rights, duties, privileges, status, and power, with men enjoying more rights and privileges than women in most societies, both today and in the past. As a social construct, gender roles are not fixed and vary historically within a society. Challenges to predominant gender norms have recurred in many societies. Little is known about gender roles in the earliest human societies. Early modern humans probably had a range of gender roles similar to that of modern cultures from at least the Upper Paleolithic, while the Neanderthals were less sexually dimorphic and there is evidence that the behavioural difference between males and females was minimal. Kinship All human societies organize, recognize and classify types of social relationships based on relations between parents, children and other descendants (consanguinity), and relations through marriage (affinity). There is also a third type applied to godparents or adoptive children (fictive kinship). These culturally defined relationships are referred to as kinship. In many societies, it is one of the most important social organizing principles and plays a role in transmitting status and inheritance. All societies have rules of incest taboo, according to which marriage between certain kinds of kin relations is prohibited, and some also have rules of preferential marriage with certain kin relations. Pair bonding is a ubiquitous feature of human sexual relationships, whether it is manifested as serial monogamy, polygyny, or polyandry. Genetic evidence indicates that humans were predominantly polygynous for most of their existence as a species, but that this began to shift during the Neolithic, when monogamy started becoming widespread concomitantly with the transition from nomadic to sedentary societies. Anatomical evidence in the form of second-to-fourth digit ratios, a biomarker for prenatal androgen effects, likewise indicates modern humans were polygynous during the Pleistocene. Ethnicity Human ethnic groups are social categories whose members identify with one another on the basis of shared attributes that distinguish them from other groups. 
These can be a common set of traditions, ancestry, language, history, society, culture, nation, religion, or social treatment within their residing area. Ethnicity is separate from the concept of race, which is based on physical characteristics, although both are socially constructed. Assigning ethnicity to a certain population is complicated, as even within common ethnic designations there can be a diverse range of subgroups, and the makeup of these ethnic groups can change over time at both the collective and individual level. Also, there is no generally accepted definition of what constitutes an ethnic group. Ethnic groupings can play a powerful role in the social identity and solidarity of ethnopolitical units. This has been closely tied to the rise of the nation state as the predominant form of political organization in the 19th and 20th centuries. Government and politics As farming populations gathered in larger and denser communities, interactions between these different groups increased. This led to the development of governance within and between the communities. Humans have evolved the ability to change affiliation with various social groups relatively easily, including previously strong political alliances, if doing so is seen as providing personal advantages. This cognitive flexibility allows individual humans to change their political ideologies, with those of higher flexibility being less likely to support authoritarian and nationalistic stances. Governments create laws and policies that affect the citizens that they govern. There have been many forms of government throughout human history, each having various means of obtaining power and the ability to exert diverse controls on the population. Approximately 47% of humans live in some form of democracy, 17% in a hybrid regime, and 37% in an authoritarian regime. Many countries belong to international organizations and alliances; the largest of these is the United Nations, with 193 member states. Trade and economics Trade, the voluntary exchange of goods and services, is seen as a characteristic that differentiates humans from other animals and has been cited as a practice that gave Homo sapiens a major advantage over other hominids. Evidence suggests early H. sapiens made use of long-distance trade routes to exchange goods and ideas, leading to cultural explosions and providing additional food sources when hunting was sparse, while such trade networks did not exist for the now extinct Neanderthals. Early trade likely involved materials for creating tools, like obsidian. The first truly international trade routes grew up around the spice trade through the Roman and medieval periods. Early human economies were more likely to be based around gift giving than a bartering system. Early money consisted of commodities, the oldest being in the form of cattle and the most widely used being cowrie shells. Money has since evolved into government-issued coins, paper money and electronic money. The human study of economics is a social science that looks at how societies distribute scarce resources among different people. There are massive inequalities in the division of wealth among humans; the eight richest humans are worth the same monetary value as the poorest half of all the human population. Conflict Humans commit violence against other humans at a rate comparable to other primates, but have an increased preference for killing adults, infanticide being more common among other primates. Phylogenetic analysis predicts that 2% of early H. 
sapiens would be murdered, rising to 12% during the medieval period, before dropping to below 2% in modern times. There is great variation in violence between human populations, with rates of homicide about 0.01% in societies that have legal systems and strong cultural attitudes against violence. The willingness of humans to kill other members of their species en masse through organized conflict (i.e., war) has long been the subject of debate. One school of thought holds that war evolved as a means to eliminate competitors, and has always been an innate human characteristic. Another suggests that war is a relatively recent phenomenon and has appeared due to changing social conditions. While not settled, current evidence indicates warlike predispositions only became common about 10,000 years ago, and in many places much more recently than that. War has had a high cost on human life; it is estimated that during the 20th century, between 167 million and 188 million people died as a result of war. War casualty data is less reliable for pre-medieval times, especially global figures. But compared with any period over the past 600 years, the last ~80 years (post-1946) have seen a very significant drop in global military and civilian death rates due to armed conflict. See also List of human evolution fossils Notes References External links Hominini Apex predators Articles containing video clips Mammals described in 1758 Taxa named by Carl Linnaeus Tool-using mammals Cosmopolitan mammals Extant Middle Pleistocene first appearances Anatomically modern humans
Human
Biology
11,586
6,244,856
https://en.wikipedia.org/wiki/Laser%20microtome
The laser microtome is an instrument used for non-contact sectioning of biological tissues or materials. It was developed by Rowiak GmbH, a spin-off of the Laser Centre, Hannover. In contrast to mechanically working microtomes, the laser microtome does not require sample preparation techniques such as freezing, dehydration or embedding. It has the ability to slice tissue in its native state. Depending on the material being processed, slice thicknesses of 10 to 100 micrometers are feasible. Principle The cutting process is performed by a femtosecond laser emitting radiation in the near-infrared range. Within this wavelength range, the laser is able to penetrate the tissue up to a certain depth without causing thermal damage. By tight focusing of the laser radiation, intensities over 1 TW/cm² (1 TW = 10¹² watts) arise inside the laser focus. These extreme intensities induce nonlinear effects, and optical breakdown occurs. This causes a disruption of the material that is limited to the focal point. The process is known as photodisruption. Due to the ultrashort pulse duration of only a few femtoseconds (1 fs = 10⁻¹⁵ seconds), only a very low energy of a few nanojoules (1 nJ = 10⁻⁹ joules) per laser pulse is deposited into the tissue. This limits the interaction range to diameters below one micrometer (1 μm = 10⁻⁶ meters); outside this range there is no thermal damage. Moved by a fast scanner, the laser beam writes a cutting plane into the sample. A positioning unit moves the sample simultaneously, so that the sample can be processed within a short time. See also Laser microdissection External links Laser microtomy: opening a new feasibility for tissue preparation by Holger Lubatschowski, Optik and Photonik 2(2):49–51, June 2007. Medical equipment
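The intensity quoted in the Principle section above can be checked with a short back-of-the-envelope computation: peak power is pulse energy divided by pulse duration, and intensity is peak power divided by the focal spot area. The sketch below (Python) uses a pulse energy of a few nanojoules, a 100-femtosecond pulse, and a focal spot of about one micrometer; all three values are illustrative assumptions consistent with the ranges given in the article, not specifications of any particular instrument.

```python
import math

# Illustrative femtosecond-laser parameters (assumptions, not specifications):
pulse_energy = 5e-9       # J   ("a few nanojoules" per pulse)
pulse_duration = 100e-15  # s   (femtosecond-scale pulse)
focus_diameter = 1e-6     # m   (focal spot below one micrometer)

peak_power = pulse_energy / pulse_duration          # W per pulse
radius_cm = (focus_diameter / 2) * 100.0            # convert m to cm
focal_area = math.pi * radius_cm ** 2               # cm^2

intensity = peak_power / focal_area                 # W/cm^2
print(f"peak power:      {peak_power:.2e} W")
print(f"focal intensity: {intensity:.2e} W/cm^2")   # ~6e12 W/cm^2, several TW/cm^2
```

Even with conservative values, the focal intensity lands in the terawatt-per-square-centimeter regime quoted above, which is what drives the nonlinear optical breakdown while the total deposited energy stays tiny.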
Laser microtome
Biology
414
44,355,934
https://en.wikipedia.org/wiki/Flow%20process
The region of space enclosed by open system boundaries is usually called a control volume. It may or may not correspond to physical walls. It is convenient to define the shape of the control volume so that all flow of matter, in or out, occurs perpendicular to its surface. One may consider a process in which the matter flowing into and out of the system is chemically homogeneous. Then the inflowing matter performs work as if it were driving a piston of fluid into the system. Also, the system performs work as if it were driving out a piston of fluid. Through the system walls that do not pass matter, heat (Q) and work (W) transfers may be defined, including shaft work. Classical thermodynamics considers processes for a system that is initially and finally in its own internal state of thermodynamic equilibrium, with no flow. This is also feasible, under some restrictions, if the system is a mass of fluid flowing at a uniform rate. Then for many purposes a process, called a flow process, may be considered in accord with classical thermodynamics as if the classical rule of no flow were effective. For the present introductory account, it is supposed that the kinetic energy of flow, and the potential energy of elevation in the gravity field, do not change, and that the walls, other than the matter inlet and outlet, are rigid and motionless. Under these conditions, the first law of thermodynamics for a flow process states: the increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system. Under these conditions, the first law for a flow process is written ΔU_cv = U_in − U_out + Q − W, where U_in and U_out respectively denote the average internal energy entering and leaving the system with the flowing matter. There are then two types of work performed: 'flow work' described above, which is performed on the fluid in the control volume (this is also often called 'PV work'), and 'shaft work', which may be performed by the fluid in the control volume on some mechanical device with a shaft. These two types of work are expressed in the equation W = W_shaft + P_out·V_out − P_in·V_in. Substitution into the equation above for the control volume (cv) yields ΔU_cv = U_in + P_in·V_in − (U_out + P_out·V_out) + Q − W_shaft. The definition of enthalpy, H = U + PV, permits us to use this thermodynamic potential to account jointly for internal energy and PV work in fluids for a flow process: ΔU_cv = H_in − H_out + Q − W_shaft. During steady-state operation of a device (see turbine, pump, and engine), any system property within the control volume is independent of time. Therefore, the internal energy of the system enclosed by the control volume remains constant, which implies that ΔU_cv in the expression above may be set equal to zero. This yields a useful expression for the power generation or requirement for these devices with chemical homogeneity in the absence of chemical reactions: W_shaft = Q + H_in − H_out. See also Process flow diagram Steady flow energy equation / Steady state single flow References Continuum mechanics Thermodynamics
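To make the steady-flow result W_shaft = Q + H_in − H_out concrete, here is a minimal numerical sketch in Python. The mass flow rate, specific enthalpies, and heat loss are illustrative assumptions chosen to resemble a steam turbine; they are not values taken from the text.

```python
# Steady-flow energy balance for a chemically homogeneous flow device
# (e.g. a turbine). All numbers below are illustrative assumptions.

m_dot = 10.0     # kg/s, mass flow rate through the control volume
h_in = 3450.0    # kJ/kg, specific enthalpy of the entering fluid
h_out = 2650.0   # kJ/kg, specific enthalpy of the leaving fluid
Q_dot = -500.0   # kW, heat transfer rate (negative = loss to surroundings)

H_in = m_dot * h_in      # kW (kJ/s), enthalpy carried in by the flow
H_out = m_dot * h_out    # kW, enthalpy carried out by the flow

# At steady state dU_cv/dt = 0, so the shaft power is:
W_shaft = Q_dot + H_in - H_out
print(f"shaft power: {W_shaft:.0f} kW")   # 7500 kW delivered by the fluid
```

With these numbers the fluid delivers about 7.5 MW of shaft power; for a pump or compressor the same balance simply yields a negative W_shaft, i.e. a power requirement.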
Flow process
Physics,Chemistry,Mathematics
615
61,095
https://en.wikipedia.org/wiki/Flight%20and%20expulsion%20of%20Germans%20%281944%E2%80%931950%29
During the later stages of World War II and the post-war period, Germans and Volksdeutsche (ethnic Germans) fled and were expelled from various Eastern and Central European countries, including Czechoslovakia, and from the former German provinces of Lower and Upper Silesia, East Prussia, and the eastern parts of Brandenburg (Neumark) and Pomerania (Hinterpommern), which were annexed by Poland and the Soviet Union. The idea of expelling the Germans from the annexed territories had been proposed by Winston Churchill, in conjunction with the Polish and Czechoslovak exile governments in London, at least since 1942. Tomasz Arciszewski, the Polish prime minister in exile, supported the annexation of German territory but opposed the idea of expulsion, wanting instead to naturalize the Germans as Polish citizens and to assimilate them. Joseph Stalin, in concert with other Communist leaders, planned to expel all ethnic Germans from east of the Oder and from lands which from May 1945 fell inside the Soviet occupation zones. In 1941, his government had already transported Germans from Crimea to Central Asia. Between 1944 and 1948, millions of people, including ethnic Germans (Volksdeutsche) and German citizens (Reichsdeutsche), were permanently or temporarily moved from Central and Eastern Europe. By 1950, a total of about 12 million Germans had fled or been expelled from east-central Europe into Allied-occupied Germany and Austria. The West German government put the total at 14.6 million, including a million ethnic Germans who had settled in territories conquered by Nazi Germany during World War II, ethnic German migrants to Germany after 1950, and the children born to expelled parents. The largest numbers came from former eastern territories of Germany ceded to the People's Republic of Poland and the Soviet Union (about seven million), and from Czechoslovakia (about three million). The areas affected included the former eastern territories of Germany, which were annexed by Poland as well as the Soviet Union after the war, and Germans who were living within the borders of the pre-war Second Polish Republic, Czechoslovakia, Hungary, Romania, Yugoslavia, and the Baltic states. The Nazis had made plans—only partially completed before the Nazi defeat—to remove Jews and many Slavic people from Eastern Europe and settle the area with Germans. The death toll attributable to the flight and expulsions is disputed, with estimates ranging from 500,000 up to 2.5 million according to the German government. The removals occurred in three overlapping phases, the first of which was the organized evacuation of ethnic Germans by the Nazi government in the face of the advancing Red Army, from mid-1944 to early 1945. The second phase was the disorganised fleeing of ethnic Germans immediately following the Wehrmacht's surrender. The third phase was a more organised expulsion following the Allied leaders' Potsdam Agreement, which redefined the Central European borders and approved expulsions of ethnic Germans from the former German territories transferred to Poland, Russia and Czechoslovakia. Many German civilians were sent to internment and labour camps, where they were used as forced labour as part of German reparations to countries in Eastern Europe. The major expulsions were completed in 1950. Estimates for the total number of people of German ancestry still living in Central and Eastern Europe in 1950 ranged from 700,000 to 2.7 million. Background Before World War II, East-Central Europe generally lacked clearly delineated ethnic settlement areas. 
There were some ethnic-majority areas, but there were also vast mixed areas and abundant smaller pockets settled by various ethnicities. Within these areas of diversity, including the major cities of Central and Eastern Europe, people from various ethnic groups had interacted on every civic and economic level for centuries, though not always harmoniously. With the rise of nationalism in the 19th century, the ethnicity of citizens became an issue in territorial claims, the self-perception and identity of states, and claims of ethnic superiority. The German Empire introduced the idea of ethnicity-based settlement in an attempt to ensure its territorial integrity. It was also the first modern European state to propose population transfers as a means of solving "nationality conflicts", intending the removal of Poles and Jews from the projected post–World War I "Polish Border Strip" and its resettlement with Christian ethnic Germans. Following the collapse of Austria-Hungary, the Russian Empire, and the German Empire at the end of World War I, the Treaty of Versailles pronounced the formation of several independent states in Central and Eastern Europe, in territories previously controlled by these imperial powers. None of the new states were ethnically homogeneous. After 1919, many ethnic Germans emigrated from the former imperial lands back to the Weimar Republic and the First Austrian Republic after losing their privileged status in those foreign lands, where they had maintained minority communities. In 1919 ethnic Germans became national minorities in Poland, Czechoslovakia, Hungary, Yugoslavia, and Romania. In the following years, Nazi ideology encouraged them to demand local autonomy. In Germany during the 1930s, Nazi propaganda claimed that Germans elsewhere were subject to persecution. Nazi supporters throughout eastern Europe (Czechoslovakia's Konrad Henlein, Poland's Deutscher Volksverband and Jungdeutsche Partei, Hungary's Volksbund der Deutschen in Ungarn) formed local Nazi political parties sponsored financially by the German Ministry of Foreign Affairs, e.g. by the Hauptamt Volksdeutsche Mittelstelle. However, by 1939 more than half of Polish Germans lived outside the formerly German territories of Poland due to improving economic opportunities. Population movements According to the national census figures, the percentage of ethnic Germans in the total population was: Poland 2.3%; Czechoslovakia 22.3%; Hungary 5.5%; Romania 4.1%; and Yugoslavia 3.6%. The West German figures are the base used to estimate losses in the expulsions. The West German figure for Poland is broken out as 939,000 monolingual Germans and 432,000 bilingual Polish/German speakers. The West German figure for Poland includes 60,000 in Trans-Olza, which was annexed by Poland in 1938; in the 1930 census, this region was included in the Czechoslovak population. A West German analysis of the wartime Deutsche Volksliste by Alfred Bohmann put the number of Polish nationals in the Polish areas annexed by Nazi Germany who identified themselves as German at 709,500, plus 1,846,000 Poles who were considered candidates for Germanisation. In addition, there were 63,000 Volksdeutsche in the General Government. Martin Broszat cited a document with different Volksliste figures: 1,001,000 identified as Germans and 1,761,000 candidates for Germanisation. The figures for the Deutsche Volksliste exclude ethnic Germans resettled in Poland during the war.
The national census figures for Germans include German-speaking Jews: Poland (7,000); the Czech territory, not including Slovakia (75,000); Hungary (10,000); and Yugoslavia (10,000). During the Nazi German occupation, many citizens of German descent in Poland registered with the Deutsche Volksliste. Some were given important positions in the hierarchy of the Nazi administration, and some participated in Nazi atrocities, causing resentment towards German speakers in general. These facts were later used by Allied politicians as one of the justifications for the expulsion of the Germans. The contemporary position of the German government is that, while the Nazi-era war crimes resulted in the expulsion of the Germans, the deaths due to the expulsions were an injustice. During the German occupation of Czechoslovakia, especially after the reprisals for the assassination of Reinhard Heydrich, most of the Czech resistance groups demanded that the "German problem" be solved by transfer or expulsion. These demands were adopted by the Czechoslovak government-in-exile, which sought the support of the Allies for this proposal, beginning in 1943. The final agreement for the transfer of the Germans was not reached until the Potsdam Conference. The expulsion policy was part of a geopolitical and ethnic reconfiguration of postwar Europe. In part, it was retribution for Nazi Germany's initiation of the war and subsequent atrocities and ethnic cleansing in Nazi-occupied Europe. Allied leaders Franklin D. Roosevelt of the United States, Winston Churchill of the United Kingdom, and Joseph Stalin of the USSR had agreed in principle before the end of the war that the border of Poland's territory would be moved west (though how far was not specified) and that the remaining ethnic German population would be subject to expulsion. They assured the leaders of the émigré governments of Poland and Czechoslovakia, both occupied by Nazi Germany, of their support on this issue. Reasons and justifications for the expulsions Given the complex history of the affected regions and the divergent interests of the victorious Allied powers, it is difficult to ascribe a definitive set of motives to the expulsions. The respective paragraph of the Potsdam Agreement states only vaguely: "The Three Governments, having considered the question in all its aspects, recognize that the transfer to Germany of German populations, or elements thereof, remaining in Poland, Czechoslovakia and Hungary, will have to be undertaken. They agree that any transfers that take place should be effected in an orderly and humane manner." The major motivations revealed were: A desire to create ethnically homogeneous nation-states: this is presented by several authors as a key issue that motivated the expulsions. View of a German minority as potentially troublesome: from the Soviet perspective, shared by the communist administrations installed in Soviet-occupied Europe, the remaining large German populations outside postwar Germany were seen as a potentially troublesome 'fifth column' that would, because of its social structure, interfere with the envisioned Sovietisation of the respective countries. The Western Allies also saw the threat of a potential German 'fifth column', especially in Poland, which was to be compensated with former German territory. In general, the Western Allies hoped to secure a more lasting peace by eliminating the German minorities, which they thought could be done in a humane manner.
The proposals from the Polish and Czechoslovak governments-in-exile to expel ethnic Germans after the war received support from Winston Churchill and Anthony Eden. Another motivation was to punish the Germans: the Allies declared them collectively guilty of German war crimes. Soviet political considerations: Stalin saw the expulsions as a means of creating antagonism between Germany and its Eastern neighbors, who would thus need Soviet protection. The expulsions served several practical purposes as well. Ethnically homogeneous nation-state The creation of ethnically homogeneous nation-states in Central and Eastern Europe was presented as the key reason for the official decisions of the Potsdam and previous Allied conferences, as well as for the resulting expulsions. The principle of every nation inhabiting its own nation-state gave rise to a series of expulsions and resettlements of Germans, Poles, Ukrainians and others who after the war found themselves outside their supposed home states. The 1923 population exchange between Greece and Turkey lent legitimacy to the concept. Churchill cited the operation as a success in a speech discussing the German expulsions. In view of the desire for ethnically homogeneous nation-states, it did not make sense to draw borders through regions that were already inhabited homogeneously by Germans without any minorities. As early as 9 September 1944, Soviet leader Joseph Stalin and Polish communist Edward Osóbka-Morawski of the Polish Committee of National Liberation signed a treaty in Lublin on population exchanges of Ukrainians and Poles living on the "wrong" side of the Curzon Line. Many of the 2.1 million Poles expelled from the Soviet-annexed Kresy, the so-called 'repatriants', were resettled to former German territories, then dubbed the 'Recovered Territories'. The Czechoslovak president Edvard Beneš, in his decree of 19 May 1945, termed ethnic Hungarians and Germans "unreliable for the state", clearing the way for confiscations and expulsions. View of German minorities as potential fifth columns Distrust and enmity One of the reasons given for the population transfer of Germans from the former eastern territories of Germany was the claim that these areas had been a stronghold of the Nazi movement. Neither Stalin nor the other influential advocates of this argument required that expellees be checked for their political attitudes or activities. Even in the few cases when this happened and expellees were proven to have been bystanders, opponents or even victims of the Nazi regime, they were rarely spared from expulsion. Polish Communist propaganda used and manipulated hatred of the Nazis to intensify the expulsions. With German communities living within the pre-war borders of Poland, there was an expressed fear of disloyalty among Germans in Eastern Upper Silesia and Pomerelia, based on wartime Nazi activities. Created on the order of Reichsführer-SS Heinrich Himmler, a Nazi ethnic German organisation called Selbstschutz carried out executions during the Intelligenzaktion alongside operational groups of the German military and police, in addition to such activities as identifying Poles for execution and illegally detaining them. To Poles, the expulsion of the Germans was seen as an effort to avoid such events in the future. As a result, Polish exile authorities proposed a population transfer of Germans as early as 1941. The Czechoslovak government-in-exile worked with the Polish government-in-exile towards this end during the war.
Preventing ethnic violence The participants at the Potsdam Conference asserted that expulsions were the only way to prevent ethnic violence. As Winston Churchill expounded in the House of Commons in 1944, "Expulsion is the method which, insofar as we have been able to see, will be the most satisfactory and lasting. There will be no mixture of populations to cause endless trouble... A clean sweep will be made. I am not alarmed by the prospect of disentanglement of populations, not even of these large transferences, which are more possible in modern conditions than they have ever been before". Polish resistance fighter, statesman and courier Jan Karski warned President Franklin D. Roosevelt in 1943 of the possibility of Polish reprisals, describing them as "unavoidable" and "an encouragement for all the Germans in Poland to go west, to Germany proper, where they belong." Punishment for Nazi crimes The expulsions were also driven by a desire for retribution, given the brutal way German occupiers treated non-German civilians in the German-occupied territories during the war. Thus, the expulsions were at least partly motivated by the animus engendered by the war crimes and atrocities perpetrated by the German belligerents and their proxies and supporters. Czechoslovak President Edvard Beneš, in the National Congress, justified the expulsions on 28 October 1945 by stating that the majority of Germans had acted in full support of Hitler; during a ceremony in remembrance of the Lidice massacre, he blamed all Germans as responsible for the actions of the German state. In Poland and Czechoslovakia, newspapers, leaflets and politicians across the political spectrum, which narrowed during the post-war Communist takeover, called for retribution for wartime German activities. The responsibility of the German population for the crimes committed in its name was also asserted by commanders of the late-war and post-war Polish military. Karol Świerczewski, commander of the Second Polish Army, briefed his soldiers to "exact on the Germans what they enacted on us, so they will flee on their own and thank God they saved their lives." In Poland, which had suffered the loss of six million citizens, including its elite and almost its entire Jewish population, to the Lebensraum policy and the Holocaust, most Germans were seen as Nazi perpetrators who could now finally be collectively punished for their past deeds. Soviet political considerations Stalin, who had earlier directed several population transfers in the Soviet Union, strongly supported the expulsions, which worked to the Soviet Union's advantage in several ways. The satellite states would now feel the need to be protected by the Soviets from German anger over the expulsions. The assets left behind by the expellees in Poland and Czechoslovakia were used to reward cooperation with the new governments, and support for the Communists was especially strong in areas that had seen significant expulsions. Settlers in these territories welcomed the opportunities presented by the fertile soils and vacated homes and enterprises, increasing their loyalty. Movements in the later stages of the war Evacuation and flight to areas within Germany Late in the war, as the Red Army advanced westward, many Germans were apprehensive about the impending Soviet occupation. Most were aware of the Soviet reprisals against German civilians; Soviet soldiers committed numerous rapes and other crimes.
News of atrocities such as the Nemmersdorf massacre was exaggerated and disseminated by the Nazi propaganda machine. Plans to evacuate the ethnic German population westward into Germany, from Poland and the eastern territories of Germany, were prepared by various Nazi authorities toward the end of the war. In most cases, implementation was delayed until Soviet and Allied forces had defeated the German forces and advanced into the areas to be evacuated. The abandonment of millions of ethnic Germans in these vulnerable areas until combat conditions overwhelmed them can be attributed directly to the measures taken by the Nazis against anyone suspected of 'defeatist' attitudes (as evacuation was considered) and the fanaticism of many Nazi functionaries in their execution of Hitler's 'no retreat' orders. The first exodus of German civilians from the eastern territories was composed of both spontaneous flight and organized evacuation, starting in mid-1944 and continuing until early 1945. Conditions turned chaotic during the winter, when kilometers-long queues of refugees pushed their carts through the snow trying to stay ahead of the advancing Red Army. Refugee treks which came within reach of the advancing Soviets suffered casualties when targeted by low-flying aircraft, and some people were crushed by tanks. The German Federal Archives has estimated that 100,000–120,000 civilians (1% of the total population) were killed during the flight and evacuations. Polish historians Witold Sienkiewicz and Grzegorz Hryciuk maintain that civilian deaths in the flight and evacuation were "between 600,000 and 1.2 million. The main causes of death were cold, stress, and bombing." The mobilized Strength Through Joy liner Wilhelm Gustloff was sunk in January 1945 by the Soviet Navy submarine S-13, killing about 9,000 civilians and military personnel escaping East Prussia in the largest loss of life in a single ship sinking in history. Many refugees tried to return home when the fighting ended. By 1 June 1945, 400,000 people had crossed back eastward over the Oder and Neisse rivers before Soviet and Polish communist authorities closed the river crossings; another 800,000 entered Silesia through Czechoslovakia. According to Hahn and Hahn, at the end of 1945 some 4.5 million Germans who had fled or been expelled were under the control of the Allied governments. In accordance with the Potsdam Agreement, from 1946 to 1950 around 4.5 million people were brought to Germany in organized mass transports from Poland, Czechoslovakia, and Hungary. An additional 2.6 million released POWs were listed as expellees. Evacuation and flight to Denmark From the Baltic coast, many soldiers and civilians were evacuated by ship in the course of Operation Hannibal. Between 23 January and 5 May 1945, up to 250,000 Germans, primarily from East Prussia, Pomerania, and the Baltic states, were evacuated to Nazi-occupied Denmark, based on an order issued by Hitler on 4 February 1945. When the war ended, the German refugee population in Denmark amounted to 5% of the total Danish population. The evacuation focused on women, the elderly and children, a third of whom were under the age of fifteen. After the war, the Germans were interned in several hundred refugee camps throughout Denmark, the largest of which was the Oksbøl Refugee Camp with 37,000 inmates. The camps were guarded by Danish Defence units.
The situation eased after 60 Danish clergymen spoke in defence of the refugees in an open letter, and Social Democrat Johannes Kjærbøl took over the administration of the refugees on 6 September 1945. On 9 May 1945, the Red Army occupied the island of Bornholm; between 9 May and 1 June 1945, the Soviets shipped 3,000 refugees and 17,000 Wehrmacht soldiers from there to Kolberg. In 1945, 13,492 German refugees died, among them 7,000 children under five years of age. According to Danish physician and historian Kirsten Lylloff, these deaths were partially due to the denial of medical care by Danish medical staff, as both the Danish Association of Doctors and the Danish Red Cross had refused medical treatment to German refugees from March 1945. The last refugees left Denmark on 15 February 1949. In the Treaty of London, signed 26 February 1953, West Germany and Denmark agreed on compensation payments of 160 million Danish kroner for Denmark's extended care of the refugees, which West Germany paid between 1953 and 1958. Following Germany's defeat The Second World War ended in Europe with Germany's defeat in May 1945. By this time, all of Eastern and much of Central Europe was under Soviet occupation. This included most of the historical German settlement areas, as well as the Soviet occupation zone in eastern Germany. The Allies settled on the terms of occupation, the territorial truncation of Germany, and the expulsion of ethnic Germans from post-war Poland, Czechoslovakia and Hungary to the Allied occupation zones in the Potsdam Agreement, drafted during the Potsdam Conference between 17 July and 2 August 1945. Article XII of the agreement concerns the expulsions to post-war Germany and reads: The Three Governments, having considered the question in all its aspects, recognize that the transfer to Germany of German populations, or elements thereof, remaining in Poland, Czechoslovakia, and Hungary, will have to be undertaken. They agree that any transfers that take place should be effected in an orderly and humane manner. The agreement further called for the equal distribution of the transferred Germans among the American, British, French and Soviet occupation zones comprising post–World War II Germany. Expulsions that took place before the Allies agreed on the terms at Potsdam are referred to as "irregular" expulsions (wilde Vertreibungen). They were conducted by military and civilian authorities in Soviet-occupied post-war Poland and Czechoslovakia in the first half of 1945. In Yugoslavia, the remaining Germans were not expelled; ethnic German villages were turned into internment camps, where over 50,000 perished from deliberate starvation and direct murder by Yugoslav guards. In late 1945, the Allies requested a temporary halt to the expulsions because of the refugee problems they had created. While expulsions from Czechoslovakia were temporarily slowed, this was not true in Poland and the former eastern territories of Germany. Sir Geoffrey Harrison, one of the drafters of the cited Potsdam article, stated that the "purpose of this article was not to encourage or legalize the expulsions, but rather to provide a basis for approaching the expelling states and requesting them to co-ordinate transfers with the Occupying Powers in Germany." After Potsdam, a series of expulsions of ethnic Germans occurred throughout the Soviet-controlled Eastern European countries.
Property and materiel in the affected territory that had belonged to Germany or to Germans was confiscated; it was either transferred to the Soviet Union, nationalised, or redistributed among the citizens. Of the many post-war forced migrations, the largest was the expulsion of ethnic Germans from Central and Eastern Europe, primarily from the territory of 1937 Czechoslovakia (which included the Sudetenland, the historically German-speaking area along the German-Czech-Polish border) and the territory that became post-war Poland. Poland's post-war borders were moved west to the Oder–Neisse line, deep into former German territory and within 80 kilometers of Berlin. Polish refugees expelled from the Soviet Union were resettled in the former German territories that were awarded to Poland after the war. During and after the war, 2,208,000 Poles fled or were expelled from the former eastern Polish regions annexed by the USSR after the 1939 Soviet invasion of Poland; 1,652,000 of these refugees were resettled in the former German territories. Czechoslovakia The final agreement for the transfer of the Germans was reached at the Potsdam Conference. According to the West German Schieder commission, there were 4.5 million German civilians present in Bohemia-Moravia in May 1945, including 100,000 from Slovakia and 1.6 million refugees from Poland. Between 700,000 and 800,000 Germans were affected by irregular expulsions between May and August 1945. The expulsions were encouraged by Czechoslovak politicians and were generally executed by order of local authorities, mostly by groups of armed volunteers and the army. Transfers of population under the Potsdam agreements lasted from January until October 1946. Some 1.9 million ethnic Germans were expelled to the American zone, part of what would become West Germany, and more than 1 million were expelled to the Soviet zone, which later became East Germany. About 250,000 ethnic Germans were allowed to remain in Czechoslovakia. According to the West German Schieder commission, 250,000 persons who had declared German nationality in the 1939 Nazi census remained in Czechoslovakia; however, the Czechs counted 165,790 Germans remaining in December 1955. Male Germans with Czech wives were expelled, often with their spouses, while ethnic German women with Czech husbands were allowed to stay. According to the Schieder commission, Sudeten Germans considered essential to the economy were held as forced labourers. The West German government estimated the expulsion death toll at 273,000 civilians, and this figure is cited in historical literature. However, in 1995, research by a joint German and Czech commission of historians found the previous demographic estimates of 220,000 to 270,000 deaths to be overstated and based on faulty information. They concluded that the death toll was between 15,000 and 30,000 dead, assuming that not all deaths were reported. The German Red Cross Search Service (Suchdienst) confirmed the deaths of 18,889 people during the expulsions from Czechoslovakia (violent deaths 5,556; suicides 3,411; deported 705; in camps 6,615; during the wartime flight 629; after the wartime flight 1,481; cause undetermined 379; other miscellaneous 73). Hungary In contrast to expulsions from other nations or states, the expulsion of the Germans from Hungary was dictated from outside the country. It began on 22 December 1944, when the Soviet Red Army commander-in-chief ordered the expulsions.
In February 1945, the Soviet-dominated Allied Control Commission ordered the Hungarian Ministry of the Interior to compile lists of all ethnic Germans living in the country. Initially the Census Bureau refused to divulge information on Hungarians who had registered as Volksdeutsche, but acceded under pressure from the Hungarian State Protection Authority. Before that, three percent of the pre-war German population (about 20,000 people) had been evacuated to Austria by the Volksbund, though many later returned. Overall, 60,000 ethnic Germans had fled. According to the West German Schieder commission report of 1956, in early 1945 between 30,000 and 35,000 ethnic German civilians and 30,000 military POWs were arrested and transported from Hungary to the Soviet Union as forced labourers. In some villages, the entire adult population was taken to labour camps in the Donbas, where some 6,000 died as a result of hardship and ill-treatment. Data from the Russian archives, based on an actual enumeration, put the number of ethnic Germans registered by the Soviets in Hungary at 50,292 civilians, of whom 31,923 were deported to the USSR for reparation labour under Order 7161; 9% (2,819) were documented as having died. In 1945, official Hungarian figures showed 477,000 German speakers in Hungary, including German-speaking Jews, of whom 303,000 had declared German nationality. Of the German nationals, 33% were children younger than 12 or elderly people over 60, and 51% were women. On 29 December 1945, the postwar Hungarian government, obeying the directions of the Potsdam Conference agreements, ordered the expulsion of anyone identified as German in the 1941 census or who had been a member of the Volksbund, the SS, or any other armed German organisation. Accordingly, mass expulsions began. The rural population was affected more than the urban population or those ethnic Germans determined to have needed skills, such as miners. Germans married to Hungarians were not expelled, regardless of sex. The first 5,788 expellees departed from Wudersch on 19 January 1946. About 180,000 German-speaking Hungarian citizens were stripped of their citizenship and possessions and expelled to the Western zones of Germany. By July 1948, 35,000 others had been expelled to the Soviet occupation zone of Germany. Most of the expellees found new homes in the south-west German state of Baden-Württemberg, but many others settled in Bavaria and Hesse. Other research indicates that, between 1945 and 1950, 150,000 were expelled to western Germany, 103,000 to Austria, and none to eastern Germany. During the expulsions, numerous organized protest demonstrations by the Hungarian population took place. The acquisition of land for distribution to Hungarian refugees and nationals was one of the main reasons stated by the government for the expulsion of the ethnic Germans from Hungary; the botched organization of the redistribution led to social tensions. In the 1949 census, 22,445 people were identified as German. An order of 15 June 1948 halted the expulsions, and a governmental decree of 25 March 1950 declared all expulsion orders void, allowing the expellees to return if they so wished. After the fall of Communism in the early 1990s, German victims of expulsion and Soviet forced labour were rehabilitated. Post-Communist laws allowed expellees to be compensated, to return, and to buy property. There were reportedly no tensions between Germany and Hungary regarding expellees.
In 1958, the West German government estimated, based on a demographic analysis, that by 1950, 270,000 Germans remained in Hungary, 60,000 had been assimilated into the Hungarian population, and there were 57,000 "unresolved cases" that remained to be clarified. The editor of the Hungary section of the 1958 report was Wilfried Krallert, a scholar who had dealt with Balkan affairs since the 1930s, when he was a Nazi Party member. During the war, he was an officer in the SS and was directly implicated in the plundering of cultural artifacts in eastern Europe. After the war, he was chosen to author the sections of the demographic report on the expulsions from Hungary, Romania, and Yugoslavia. The figure of 57,000 "unresolved cases" in Hungary is included in the figure of 2 million dead expellees, which is often cited in official German and historical literature. Netherlands After World War II, the Dutch government decided to expel the 25,000 German expatriates living in the Netherlands. Germans, including those with Dutch spouses and children, were labelled as "hostile subjects" ("vijandelijke onderdanen"). The operation began on 10 September 1946 in Amsterdam, when German expatriates and their families were arrested at their homes in the middle of the night and given one hour to pack their luggage. They were only allowed to take 100 guilders with them; the remainder of their possessions was seized by the state. They were taken to internment camps near the German border, the largest of which was the Mariënbosch concentration camp, near Nijmegen. In all, 3,691 Germans (less than 15% of the German expatriates in the Netherlands) were expelled. The Allied forces occupying the Western zones of Germany opposed the operation, fearing that other nations might follow suit. Poland, including former German territories From 1944 until May 1945, as the Red Army advanced through Eastern Europe and the provinces of eastern Germany, some German civilians were killed in the fighting. While many had already fled ahead of the advancing Soviet Army, frightened by rumours of Soviet atrocities, which in some cases were exaggerated and exploited by Nazi Germany's propaganda, millions still remained. A 2005 study by the Polish Academy of Sciences estimated that during the final months of the war, 4 to 5 million German civilians fled with the retreating German forces, and that in mid-1945, 4.5 to 4.6 million Germans remained in the territories under Polish control. By 1950, 3,155,000 had been transported to Germany, 1,043,550 had been naturalized as Polish citizens, and 170,000 Germans still remained in Poland. According to the West German Schieder commission of 1953, 5,650,000 Germans had remained in what would become Poland's new borders in mid-1945, 3,500,000 had been expelled, and 910,000 remained in Poland by 1950. According to the Schieder commission, the civilian death toll was 2 million; in 1974, the German Federal Archives estimated the death toll at about 400,000. (The controversy regarding the casualty figures is covered below in the section on casualties.) During the 1945 military campaign, most male Germans remaining east of the Oder–Neisse line were considered potential combatants and were held by the Soviet military in detention camps, subject to verification by the NKVD. Members of Nazi Party organizations and government officials were segregated and sent to the USSR for forced labour as reparations.
In mid-1945, the eastern territories of pre-war Germany were turned over to the Soviet-controlled Polish military forces. Early expulsions were undertaken by the Polish Communist military authorities even before the Potsdam Conference placed these territories under temporary Polish administration pending a final peace treaty, in an effort to ensure their later territorial integration into an ethnically homogeneous Poland. The Polish Communists wrote: "We must expel all the Germans because countries are built on national lines and not on multinational ones." The Polish government defined Germans as Reichsdeutsche, as people enrolled in the first or second Volksliste groups, or as holders of German citizenship. Around 1,165,000 German citizens of Slavic descent were "verified" as "autochthonous" Poles. Most of these were not expelled, but many chose to migrate to Germany between 1951 and 1982, including most of the Masurians of East Prussia. At the Potsdam Conference (17 July – 2 August 1945), the territory to the east of the Oder–Neisse line was assigned to Polish and Soviet Union administration pending the final peace treaty. All Germans had their property confiscated and were placed under restrictive jurisdiction. The Silesian voivode Aleksander Zawadzki had already in part expropriated the property of the German Silesians on 26 January 1945; another decree of 2 March expropriated that of all Germans east of the Oder and Neisse, and a subsequent decree of 6 May declared all "abandoned" property as belonging to the Polish state. Germans were also not permitted to hold Polish currency, the only legal currency since July, other than earnings from work assigned to them. The remaining population faced theft and looting, and in some instances rape and murder by criminal elements, crimes that were rarely prevented or prosecuted by the Polish militia forces and the newly installed communist judiciary. In mid-1945, 4.5 to 4.6 million Germans resided in the territory east of the Oder–Neisse line. By early 1946, 550,000 Germans had already been expelled from there, and 932,000 had been verified as having Polish nationality. In the February 1946 census, 2,288,000 people were classified as Germans and subject to expulsion, and 417,400 were subject to verification to determine their nationality. People verified negatively, who did not succeed in demonstrating their "Polish nationality", were directed for resettlement. Polish citizens who had collaborated or were believed to have collaborated with the Nazis were considered "traitors of the nation" and sentenced to forced labour prior to being expelled. By 1950, 3,155,000 German civilians had been expelled and 1,043,550 had been naturalized as Polish citizens. Some 170,000 Germans considered "indispensable" for the Polish economy were retained until 1956, although almost all had left by 1960. Around 200,000 Germans in Poland were employed as forced labour in communist-administered camps prior to being expelled; these included the Central Labour Camp Jaworzno, the Central Labour Camp Potulice, Łambinowice, and the Zgoda labour camp. Besides these large camps, numerous other forced labour, punitive and internment camps, urban ghettos and detention centres, sometimes consisting only of a small cellar, were set up. The German Federal Archives estimated in 1974 that more than 200,000 German civilians were interned in Polish camps; they put the death rate at 20–50% and estimated that over 60,000 probably died.
Polish historians Witold Sienkiewicz and Grzegorz Hryciuk maintain that the internment "resulted in numerous deaths, which cannot be accurately determined because of lack of statistics or falsification. At certain periods, they could be in the tens of percent of the inmate numbers. Those interned are estimated at 200–250,000 German nationals and the indigenous population and deaths might range from 15,000 to 60,000 persons." (The indigenous population here means former German citizens who had declared Polish ethnicity.) Historian R. M. Douglas describes a chaotic and lawless regime in the former German territories in the immediate postwar era. The local population was victimized by criminal elements who arbitrarily seized German property for personal gain, and bilingual people who had been on the Volksliste during the war were declared Germans by Polish officials, who then seized their property as well. The Federal Statistical Office of Germany estimated that in mid-1945, 250,000 Germans remained in the northern part of the former East Prussia, which became the Kaliningrad Oblast; it also estimated that more than 100,000 people who survived the Soviet occupation were evacuated to Germany beginning in 1947. German civilians were also held as "reparation labour" by the USSR. Data from the Russian archives, newly published in 2001 and based on an actual enumeration, put the number of German civilians deported from Poland to the USSR in early 1945 for reparation labour at 155,262, of whom 37% (57,586) died in the USSR. The West German Red Cross had estimated in 1964 that 233,000 German civilians were deported to the USSR from Poland as forced labourers and that 45% (105,000) were dead or missing. The West German Red Cross estimated at that time that 110,000 German civilians were held as forced labour in the Kaliningrad Oblast, where 50,000 were dead or missing. The Soviets also deported 7,448 Poles of the Armia Krajowa from Poland; Soviet records indicated that 506 of these Poles died in captivity. Tomasz Kamusella maintains that in early 1945, 165,000 Germans were transported to the Soviet Union. According to Gerhard Reichling, an official in the German finance office, 520,000 German civilians from the Oder–Neisse region were conscripted for forced labour by both the USSR and Poland; he maintains that 206,000 perished. The attitudes of surviving Poles varied. Many had suffered brutalities and atrocities at the hands of the Germans during the Nazi occupation, surpassed only by the German policies against the Jews. The Germans had recently expelled more than a million Poles from the territories they annexed during the war. Some Poles engaged in looting and various crimes, including murders, beatings, and rapes, against Germans. On the other hand, in many instances Poles, including some who had been made slave labourers by the Germans during the war, protected Germans, for instance by disguising them as Poles. In the Opole (Oppeln) region of Upper Silesia, citizens who claimed Polish ethnicity were allowed to remain, even though some had uncertain nationality or identified as ethnic Germans. Their status as a national minority, along with state subsidies for economic assistance and education, was accepted in 1955. The attitude of Soviet soldiers was ambiguous: many committed atrocities, most notably rape and murder, and did not always distinguish between Poles and Germans, mistreating them equally; other Soviet soldiers were taken aback by the brutal treatment of the German civilians and tried to protect them.
Richard Overy cites an approximate total of 7.5 million Germans who were evacuated, migrated, or were expelled from Poland between 1944 and 1950. Tomasz Kamusella cites estimates of 7 million expelled in total during both the "wild" and "legal" expulsions from the recovered territories from 1945 to 1948, plus an additional 700,000 from areas of pre-war Poland. Romania The ethnic German population of Romania in 1939 was estimated at 786,000. In 1940, Bessarabia and Bukovina were occupied by the USSR, and their ethnic German population of 130,000 was transferred to German-held territory during the Nazi–Soviet population transfers, as were 80,000 Germans from the rest of Romania. Some 140,000 of these Germans were resettled in German-occupied Poland; in 1945, they were caught up in the flight and expulsion from Poland. Most of the ethnic Germans in Romania resided in Transylvania, the northern part of which was annexed by Hungary during World War II. The pro-German Hungarian government, as well as the pro-German Romanian government of Ion Antonescu, allowed Germany to enlist the German population in Nazi-sponsored organizations. During the war, Nazi Germany conscripted 54,000 men of this population, many into the Waffen-SS. In mid-1944, roughly 100,000 Germans fled from Romania with the retreating German forces. According to the West German Schieder commission report of 1957, 75,000 German civilians were deported to the USSR as forced labourers, and 15% (approximately 10,000) did not return. Data from the Russian archives, based on an actual enumeration, put the number of ethnic Germans registered by the Soviets in Romania at 421,846 civilians, of whom 67,332 were deported to the USSR for reparation labour, where 9% (6,260) died. The roughly 400,000 ethnic Germans who remained in Romania were treated as guilty of collaboration with Nazi Germany and were deprived of their civil liberties and property. Many were impressed into forced labour and deported from their homes to other regions of Romania. In 1948, Romania began a gradual rehabilitation of the ethnic Germans: they were not expelled, and the communist regime gave them the status of a national minority, the only Eastern Bloc country to do so. In 1958, the West German government estimated, based on a demographic analysis, that by 1950, 253,000 were counted as expellees in Germany or the West, 400,000 Germans still remained in Romania, 32,000 had been assimilated into the Romanian population, and there were 101,000 "unresolved cases" that remained to be clarified. The figure of 101,000 "unresolved cases" in Romania is included in the total of 2 million German expulsion dead that is often cited in historical literature. In 1977, 355,000 Germans remained in Romania. During the 1980s, many began to leave, with over 160,000 leaving in 1989 alone. By 2002, the number of ethnic Germans in Romania was 60,000. Soviet Union and annexed territories The Baltic, Bessarabian, and other ethnic Germans in areas that became Soviet-controlled following the Molotov–Ribbentrop Pact of 1939 were resettled to Nazi Germany, including annexed areas like the Warthegau, during the Nazi–Soviet population transfers. Only a few returned to their former homes when Germany invaded the Soviet Union and temporarily gained control of those areas. These returnees were employed by the Nazi occupation forces to establish a link between the German administration and the local population. Those resettled elsewhere shared the fate of the other Germans in their resettlement area.
The ethnic German minority in the USSR was considered a security risk by the Soviet government, and its members were deported during the war in order to prevent their possible collaboration with the Nazi invaders. In August 1941, the Soviet government ordered ethnic Germans to be deported from the European USSR; by early 1942, 1,031,300 Germans had been interned in "special settlements" in Central Asia and Siberia. Life in the special settlements was harsh and severe: food was limited, and the deported population was governed by strict regulations. Shortages of food plagued the whole Soviet Union, and especially the special settlements. According to data from the Soviet archives, by October 1945, 687,300 Germans remained alive in the special settlements; an additional 316,600 Soviet Germans served as labour conscripts during World War II. Soviet Germans were not accepted into the regular armed forces but were employed instead as conscript labour. The labour army members were arranged into worker battalions that followed camp-like regulations and received Gulag rations. In 1945, the USSR deported to the special settlements 203,796 Soviet ethnic Germans who had previously been resettled by Germany in Poland. These post-war deportees increased the German population in the special settlements to 1,035,701 by 1949. According to J. Otto Pohl, 65,599 Germans perished in the special settlements; he believes that an additional 176,352 unaccounted-for people "probably died in the labor army". Under Stalin, Soviet Germans continued to be confined to the special settlements under strict supervision; in 1955 they were rehabilitated, but they were not allowed to return to the European USSR. The Soviet-German population grew despite the deportations and forced labour during the war: in the 1939 Soviet census the German population was 1.427 million, and by 1959 it had increased to 1.619 million. The calculations of the West German researcher Gerhard Reichling do not agree with the figures from the Soviet archives: according to Reichling, a total of 980,000 Soviet ethnic Germans were deported during the war, and he estimated that 310,000 died in forced labour. During the early months of the invasion of the USSR in 1941, the Germans occupied the western regions of the USSR that had German settlements. A total of 370,000 ethnic Germans from the USSR were resettled in Poland by Germany during the war. In 1945, the Soviets found 280,000 of these resettlers in Soviet-held territory and returned them to the USSR; 90,000 became refugees in Germany after the war. Those ethnic Germans within the 1939 borders of the Soviet Union who lived in the areas occupied by Nazi Germany in 1941 remained where they were until 1943, when the Red Army liberated Soviet territory and the Wehrmacht withdrew westward. From January 1943, most of these ethnic Germans moved in treks to the Warthegau or to Silesia, where they were to settle; between 250,000 and 320,000 had reached Nazi Germany by the end of 1944. On their arrival, they were placed in camps and underwent "racial evaluation" by the Nazi authorities, who dispersed those deemed "racially valuable" as farm workers in the annexed provinces, while those deemed to be of "questionable racial value" were sent to work in Germany. When the Red Army captured these areas in early 1945, 200,000 Soviet Germans had not yet been evacuated by the Nazi authorities, who were still occupied with their "racial evaluation". They were regarded by the USSR as Soviet citizens and repatriated to camps and special settlements in the Soviet Union.
A further 70,000 to 80,000 who found themselves in the Soviet occupation zone after the war were also returned to the USSR, under an agreement with the Western Allies. The death toll during their capture and transportation was estimated at 15–30%, and many families were torn apart. The special "German settlements" in the post-war Soviet Union were controlled by the Internal Affairs Commissariat, and the inhabitants had to perform forced labour until the end of 1955. They were released from the special settlements by an amnesty decree of 13 September 1955, and the charge of Nazi collaboration was revoked by a decree of 23 August 1964. They were not allowed to return to their former homes, however, and remained in the eastern regions of the USSR; no individual's former property was restored. Since the 1980s, the Soviet and Russian governments have allowed ethnic Germans to emigrate to Germany. A different situation developed in northern East Prussia, comprising Königsberg (renamed Kaliningrad) and the adjacent Memel territory around Memel (Klaipėda). The Königsberg area of East Prussia was annexed by the Soviet Union, becoming an exclave of the Russian Soviet Republic, while Memel was integrated into the Lithuanian Soviet Republic. Many Germans were evacuated from East Prussia and the Memel territory by the Nazi authorities during Operation Hannibal or fled in panic as the Red Army approached. The remaining Germans were conscripted for forced labour, and ethnic Russians and the families of military staff were settled in the area. In June 1946, 114,070 Germans and 41,029 Soviet citizens were registered as living in the Kaliningrad Oblast, with an unknown number of unregistered Germans left uncounted. Between June 1945 and 1947, roughly half a million Germans were expelled. Between 24 August and 26 October 1948, 21 transports with a total of 42,094 Germans left the Kaliningrad Oblast for the Soviet occupation zone, and the last remaining Germans were expelled between November 1949 (1,401 people) and January 1950 (seven people). Thousands of German children, the so-called "wolf children", were left orphaned and unattended, or died with their parents during the harsh winters without food. Between 1945 and 1947, around 600,000 Soviet citizens settled in the oblast. Yugoslavia Before World War II, roughly 500,000 German-speaking people (mostly Danube Swabians) lived in the Kingdom of Yugoslavia. Most fled during the war or emigrated after it; under the US Displaced Persons Act of 1948, some were able to emigrate to the United States after 1950. During the final months of World War II, a majority of the ethnic Germans fled Yugoslavia with the retreating Nazi forces. After the liberation, Yugoslav Partisans exacted revenge on ethnic Germans for the wartime atrocities of Nazi Germany, in which many ethnic Germans had participated, especially in the Banat area of the Territory of the Military Commander in Serbia. The approximately 200,000 ethnic Germans remaining in Yugoslavia suffered persecution and sustained personal and economic losses. About 7,000 were killed as local populations and partisans took revenge for German wartime atrocities. From 1945 to 1948, ethnic Germans were held in labour camps, where about 50,000 perished; the survivors were allowed to emigrate to Germany after 1948. According to West German figures, in late 1944 the Soviets transported 27,000 to 30,000 ethnic Germans, a majority of them women aged 18 to 35, to Ukraine and the Donbas for forced labour; about 20% (5,683) were reported dead or missing.
Data from Russian archives published in 2001, based on an actual enumeration, put the number of German civilians deported from Yugoslavia to the USSR in early 1945 for reparation labour at 12,579, of whom 16% (1,994) died. After March 1945, a second phase began in which ethnic Germans were massed into villages such as Gakowa and Kruševlje, which were converted into labour camps. All furniture was removed and straw placed on the floor; the expellees were housed like animals under military guard, with minimal food and rampant, untreated disease. Families were divided into those unfit for work (women, the old, and children) and those fit for slave labour. A total of 166,970 ethnic Germans were interned, and 48,447 (29%) perished. The camp system was shut down in March 1948. In Slovenia, the ethnic German population at the end of World War II was concentrated in Slovenian Styria, more precisely in Maribor, Celje and a few other smaller towns (such as Ptuj and Dravograd), and in the rural area around Apače on the Austrian border. The second-largest ethnic German community in Slovenia was the predominantly rural Gottschee County around Kočevje in Lower Carniola, south of Ljubljana. Smaller numbers of ethnic Germans also lived in Ljubljana and in some western villages in the Prekmurje region. In 1931, the total number of ethnic Germans in Slovenia was around 28,000: around half of them lived in Styria and in Prekmurje, while the other half lived in the Gottschee County and in Ljubljana. In April 1941, southern Slovenia was occupied by Italian troops. By early 1942, ethnic Germans from Gottschee/Kočevje had been forcibly transferred to German-occupied Styria by the new German authorities. Most were resettled in the Posavje region (a territory along the Sava river between the towns of Brežice and Litija), from which around 50,000 Slovenes had been expelled. The Gottschee Germans were generally unhappy about their forced transfer from their historical home region, one reason being that the agricultural value of their new area of settlement was perceived as much lower than that of the Gottschee area. As German forces retreated before the Yugoslav Partisans, most ethnic Germans fled with them in fear of reprisals. By May 1945, only a few Germans remained, mostly in the Styrian towns of Maribor and Celje. The Liberation Front of the Slovenian People expelled most of the remainder after it seized complete control of the region in May 1945. The Yugoslavs set up internment camps at Sterntal and Teharje. The government nationalized the Germans' property under a "decision on the transition of enemy property into state ownership, on state administration over the property of absent people, and on sequestration of property forcibly appropriated by occupation authorities", issued on 21 November 1944 by the Presidency of the Anti-Fascist Council for the People's Liberation of Yugoslavia. After March 1945, ethnic Germans were placed in so-called "village camps". Separate camps existed for those able to work and for those who were not; in the latter camps, containing mainly children and the elderly, the mortality rate was about 50%. Most of the children under 14 were then placed in state-run homes, where conditions were better, though the German language was banned. These children were later given to Yugoslav families, and not all German parents seeking to reclaim their children in the 1950s were successful. West German government figures from 1958 put the death toll at 135,800 civilians.
A more recent study, published by the ethnic Germans of Yugoslavia and based on an actual enumeration, has revised the death toll down to about 58,000: a total of 48,447 people died in the camps, 7,199 were shot by partisans, and another 1,994 perished in Soviet labour camps. Those Germans still considered Yugoslav citizens were employed in industry or the military, but could buy themselves free of Yugoslav citizenship for the equivalent of three months' salary. By 1950, 150,000 of the Germans from Yugoslavia were classified as "expelled" in Germany, another 150,000 in Austria, 10,000 in the United States, and 3,000 in France. According to West German figures, 82,000 ethnic Germans remained in Yugoslavia in 1950; after 1950, most emigrated to Germany or were assimilated into the local population. Kehl, Germany The population of Kehl (12,000 people), on the east bank of the Rhine opposite Strasbourg, fled and was evacuated in the course of the Liberation of France on 23 November 1944. The French Army occupied the town in March 1945 and prevented the inhabitants from returning until 1953. Latin America Fearing a Nazi fifth column, between 1941 and 1945 the US government facilitated the expulsion of 4,058 German citizens from 15 Latin American countries to internment camps in Texas and Louisiana. Subsequent investigations showed many of the internees to be harmless. Three-quarters of them were returned to Germany during the war in exchange for citizens of the Americas, while the remainder returned to their homes in Latin America. Palestine At the start of World War II, colonists with German citizenship were rounded up by the British authorities and sent to internment camps in Bethlehem in Galilee. On 31 July 1941, 661 Templers were deported to Australia via Egypt, leaving 345 in Palestine; internment continued in Tatura, Victoria, Australia, until 1946–47. In 1962, the State of Israel paid 54 million Deutsche Marks in compensation to property owners whose assets had been nationalized. Human losses Estimates of the total deaths of German civilians in the flight and expulsions, including the forced labour of Germans in the Soviet Union, range from 500,000 to a maximum of 3 million people. Although the German government's official estimate of deaths has stood at 2 million since the 1960s, the publication in 1987–89 of previously classified West German studies has led some historians to the conclusion that the actual number was much lower, in the range of 500,000 to 600,000. English-language sources have put the death toll at 2–3 million based on West German government figures from the 1960s. West German government estimates of the death toll In 1950, the West German government made a preliminary estimate of 3.0 million missing people (1.5 million in prewar Germany and 1.5 million in Eastern Europe) whose fate needed to be clarified. These figures were superseded by the publication of the 1958 study by the Statistisches Bundesamt. In 1953, the West German government ordered a survey by the Suchdienst (search service) of the German churches to trace the fate of 16.2 million people in the area of the expulsions; the survey was completed in 1964 but kept secret until 1987. The search service was able to confirm 473,013 civilian deaths; there were an additional 1,905,991 cases of persons whose fate could not be determined. From 1954 to 1961, the Schieder commission issued five reports on the flight and expulsions.
The head of the commission, Theodor Schieder, was a rehabilitated former Nazi Party member who had been involved in the preparation of the Nazi Generalplan Ost to colonize eastern Europe. The commission estimated a total death toll of about 2.3 million civilians, including 2 million east of the Oder–Neisse line. The figures of the Schieder commission were superseded by the publication in 1958 of the study by the West German Statistisches Bundesamt, Die deutschen Vertreibungsverluste (The German Expulsion Casualties). The authors of the report included the former Nazi Party members Wilfried Krallert, Walter Kuhn and Alfred Bohmann. The Statistisches Bundesamt put losses at 2,225,000 (1,339,000 in prewar Germany and 886,000 in Eastern Europe). In 1961, the West German government published slightly revised figures that put losses at 2,111,000 (1,225,000 in prewar Germany and 886,000 in Eastern Europe). In 1969, the federal West German government ordered a further study to be conducted by the German Federal Archives, which was finished in 1974 and kept secret until 1989. The study was commissioned to survey crimes against humanity such as deliberate killings, which according to the report included deaths caused by military activity in the 1944–45 campaign, by forced labour in the USSR, and among civilians kept in post-war internment camps. The authors maintained that the figures included only those deaths caused by violent acts and inhumanities (Unmenschlichkeiten) and did not include post-war deaths due to malnutrition and disease. Also not included were those who were raped or suffered mistreatment and did not die immediately. They estimated 600,000 deaths: 150,000 during the flight and evacuations, 200,000 as forced labourers in the USSR, and 250,000 in post-war internment camps. By region, the figures were 400,000 east of the Oder–Neisse line, 130,000 in Czechoslovakia and 80,000 in Yugoslavia; no figures were given for Romania and Hungary. A 1986 study by Gerhard Reichling, Die deutschen Vertriebenen in Zahlen (The German Expellees in Figures), concluded that 2,020,000 ethnic Germans perished after the war, including 1,440,000 as a result of the expulsions and 580,000 due to deportation as forced labourers in the Soviet Union. Reichling was an employee of the Federal Statistical Office who had been involved in the study of German expulsion statistics since 1953. The Reichling study is cited by the German government to support its estimate of 2 million expulsion deaths. Discourse The West German figure of 2 million deaths in the flight and expulsions was widely accepted by historians in the West prior to the fall of communism in Eastern Europe and the end of the Cold War. The recent disclosure of the German Federal Archives study and the Search Service figures has caused some scholars in Germany and Poland to question the validity of the figure of 2 million deaths; they estimate the actual total at 500,000–600,000. The German government continues to maintain that the figure of 2 million deaths is correct. The issue of the "expellees" has been a contentious one in German politics, with the Federation of Expellees staunchly defending the higher figure. Analysis by Rüdiger Overmans In 2000, the German historian Rüdiger Overmans published a study of German military casualties; his research project did not investigate civilian expulsion deaths. In 1994, Overmans had provided a critical analysis of the previous studies by the German government, which he believes are unreliable.
Overmans maintains that the studies of expulsion deaths by the German government lack adequate support; he holds that there are more arguments for the lower figures than for the higher ones. In a 2006 interview, Overmans maintained that new research is needed to clarify the fate of those reported as missing. He found the 1965 figures of the Search Service to be unreliable because they include non-Germans; because they include military deaths; because the numbers of surviving people, natural deaths and births after the war in Eastern Europe are unreliable, the Communist governments in Eastern Europe not having extended full cooperation to West German efforts to trace people there; and because the reports given by surveyed eyewitnesses are not reliable in all cases. In particular, Overmans maintains that the figure of 1.9 million missing people was based on incomplete information and is unreliable. Overmans found the 1958 demographic study to be unreliable because it inflated the figures of ethnic German deaths by including missing people of doubtful German ethnic identity who had survived the war in Eastern Europe; because its figure for military deaths is understated; and because the numbers of surviving people, natural deaths and births after the war in Eastern Europe are unreliable for the reason given above. Overmans maintains that the 600,000 deaths found by the German Federal Archives in 1974 is only a rough estimate of those killed, not a definitive figure; he pointed out that some deaths were not reported because there were no surviving eyewitnesses of the events, and that there was no estimate of losses in Hungary, Romania and the USSR.

Overmans conducted a research project that studied the casualties of the German military during the war and found that the previous estimate of 4.3 million dead and missing, especially in the final stages of the war, was about one million short of the actual toll. His study researched only military deaths and did not investigate civilian expulsion deaths; he merely noted the gap between the 2.2 million deaths estimated in the 1958 demographic study and the roughly 500,000 deaths so far verified. He found that German military deaths from the areas in Eastern Europe were about 1.444 million, and thus 334,000 higher than the 1.1 million figure in the 1958 demographic study, which, lacking the documents available today, had counted these additional military deaths among the civilian losses. Overmans believes this will reduce the number of civilian deaths in the expulsions accordingly. Overmans further pointed out that the 2.225 million deaths estimated by the 1958 study would imply that the casualty rate among the expellees was equal to or higher than that of the military, which he found implausible.

Analysis by historian Ingo Haar

In 2006, Haar called into question the validity of the official government figure of 2 million expulsion deaths in an article in the German newspaper Süddeutsche Zeitung. Since then Haar has published three articles in academic journals covering the background of the research by the West German government on the expulsions. Haar maintains that all reasonable estimates of deaths from the expulsions lie between around 500,000 and 600,000, based on the information of the Red Cross Search Service and the German Federal Archives.
Haar pointed out that some members of the Schieder commission and officials of the Statistisches Bundesamt involved in the study of the expulsions had been involved in the Nazi plan to colonize Eastern Europe. Haar posits that the figures were inflated in Germany due to the Cold War and domestic German politics, and he maintains that the 2.225 million number relies on improper statistical methodology and incomplete data, particularly in regard to the expellees who arrived in East Germany. Haar questions the validity of population balances in general. He maintains that 27,000 German Jews who were Nazi victims are included in the West German figures. He rejects the statement by the German government that the figure of 500,000–600,000 deaths omitted those people who died of disease and hunger, calling this a "mistaken interpretation" of the data; he maintains that deaths due to disease, hunger and other conditions are already included in the lower numbers. According to Haar, the numbers were set too high for decades for postwar political reasons.

Studies in Poland

In 2001, the Polish researcher Bernadetta Nitschke put total losses for Poland at 400,000 (the same figure as the German Federal Archives study). She noted that historians in Poland have maintained that most of the deaths occurred during the flight and evacuation during the war, during the deportations to the USSR for forced labour, and, after the resettlement, due to the harsh conditions in the Soviet occupation zone in postwar Germany. The Polish demographer Piotr Eberhardt found that, "Generally speaking, the German estimates... are not only highly arbitrary, but also clearly tendentious in presentation of the German losses." He maintains that the German government figures from 1958 overstated both the total number of ethnic Germans living in Poland prior to the war and the total civilian deaths due to the expulsions. For example, Eberhardt points out that "the total number of Germans in Poland is given as equal to 1,371,000. According to the Polish census of 1931, there were altogether only 741,000 Germans in the entire territory of Poland."

Study by Hans Henning Hahn and Eva Hahn

The German historians Hans Henning Hahn and Eva Hahn published a detailed study of the flight and expulsions that is sharply critical of German accounts of the Cold War era. The Hahns regard the official German figure of 2 million deaths as a historical myth lacking foundation. They place the ultimate blame for the mass flight and expulsion on the wartime policy of the Nazis in Eastern Europe. The Hahns maintain that most of the reported 473,013 deaths occurred during the Nazi-organized flight and evacuation during the war and during the forced labour of Germans in the Soviet Union; they point out that there are 80,522 confirmed deaths in the postwar internment camps. They put the postwar losses in Eastern Europe at a fraction of the total: Poland – 15,000 deaths from 1945 to 1949 in internment camps; Czechoslovakia – 15,000–30,000 dead, including 4,000–5,000 in internment camps and about 15,000 in the Prague uprising; Yugoslavia – 5,777 deliberate killings and 48,027 deaths in internment camps; Denmark – 17,209 dead in internment camps; Hungary and Romania – no postwar losses reported. The Hahns point out that the official 1958 figure of 273,000 deaths for Czechoslovakia was prepared by Alfred Bohmann, a former Nazi Party member who had served in the wartime SS.
Bohmann was a journalist for an ultra-nationalist Sudeten German newspaper in postwar West Germany. The Hahns believe the population figures of ethnic Germans for Eastern Europe include German-speaking Jews killed in the Holocaust. They believe that the fate of German-speaking Jews in Eastern Europe deserves the attention of German historians. ("Deutsche Vertreibungshistoriker haben sich mit der Geschichte der jüdischen Angehörigen der deutschen Minderheiten kaum beschäftigt." – "German expulsion historians have scarcely engaged with the history of the Jewish members of the German minorities.")

German and Czech commission of historians

In 1995, research by a joint German and Czech commission of historians found the previous demographic estimates of 220,000 to 270,000 deaths in Czechoslovakia to be overstated and based on faulty information. They concluded that the death toll was at least 15,000 people and could range up to a maximum of 30,000 dead, assuming that not all deaths were reported.

Rebuttal by the German government

The German government maintains that the figure of 2–2.5 million expulsion deaths is correct. In 2005 the German Red Cross Search Service put the death toll at 2,251,500 but did not provide details for this estimate. On 29 November 2006, Christoph Bergner, State Secretary in the German Federal Ministry of the Interior, outlined the stance of the respective governmental institutions on Deutschlandfunk (a public-broadcasting radio station in Germany), saying that the numbers presented by the German government and others do not contradict the numbers cited by Haar: the below-600,000 estimate comprises the deaths directly caused by atrocities during the expulsion measures, and thus only includes people who were raped, beaten, or otherwise killed on the spot, while the above-two-million estimate also includes people who, on their way to postwar Germany, died of epidemics, hunger, cold, air raids and the like.

Schwarzbuch der Vertreibung by Heinz Nawratil

A German lawyer, Heinz Nawratil, published a study of the expulsions entitled Schwarzbuch der Vertreibung ("Black Book of Expulsion"). Nawratil claimed the death toll was 2.8 million: he included the losses of 2.2 million listed in the 1958 West German study, an estimated 250,000 deaths of Germans resettled in Poland during the war, and 350,000 ethnic Germans in the USSR. In 1987, the German historian Martin Broszat (former head of the Institute of Contemporary History in Munich) described Nawratil's writings as "polemics with a nationalist-rightist point of view" that exaggerate in an absurd manner the scale of "expulsion crimes", and found the book to contain factual errors and facts taken out of context. The German historian Thomas E. Fischer calls the book "problematic". James Bjork (Department of History, King's College London) has criticized German educational DVDs based on Nawratil's book.

Condition of the expellees after arriving in post-war Germany

Those who arrived were in bad condition, particularly during the harsh winter of 1945–46, when arriving trains carried "the dead and dying in each carriage (other dead had been thrown from the train along the way)". After experiencing Red Army atrocities, Germans in the expulsion areas were subject to harsh punitive measures by Yugoslav partisans and in post-war Poland and Czechoslovakia. Beatings, rapes and murders accompanied the expulsions.
Some had experienced massacres, such as the Ústí (Aussig) massacre, in which 80–100 ethnic Germans died, or the Postoloprty massacre, or conditions like those in the Upper Silesian camp at Łambinowice (Lamsdorf), where interned Germans were exposed to sadistic practices and at least 1,000 died. Many expellees experienced hunger and disease, separation from family members, loss of civil rights and familiar environment, and sometimes internment and forced labour. Once they arrived, they found themselves in a country devastated by war. Housing shortages lasted until the 1960s, which along with other shortages led to conflicts with the local population. The situation eased only with the West German economic boom of the 1950s that drove unemployment rates close to zero.

France did not participate in the Potsdam Conference, so it felt free to approve some of the Potsdam Agreements and dismiss others. France maintained the position that it had not approved the expulsions and therefore was not responsible for accommodating and nourishing the destitute expellees in its zone of occupation. While the French military government provided for the few refugees who arrived before July 1945 in the area that became the French zone, it succeeded in preventing entry by later-arriving ethnic Germans deported from the East. Britain and the US protested against the actions of the French military government but had no means to force France to bear the consequences of the expulsion policy agreed upon by the American, British and Soviet leaders at Potsdam. France persevered with its argument that a clear distinction should be drawn between war-related refugees and post-war expellees. In December 1946 it absorbed into its zone German refugees from Denmark, where 250,000 Germans had traveled by sea between February and May 1945 to take refuge from the Soviets. These were refugees from the eastern parts of Germany, not expellees; Danes of German ethnicity remained untouched, and Denmark did not expel them. With this humanitarian act the French saved many lives, given the high death toll German refugees faced in Denmark.

Until mid-1945, the Allies had not reached an agreement on how to deal with the expellees. France suggested immigration to South America and Australia and the settlement of 'productive elements' in France, while the Soviet SMAD suggested a resettlement of millions of expellees in Mecklenburg-Vorpommern. The Soviets, who encouraged and partly carried out the expulsions, offered little cooperation with humanitarian efforts, thereby requiring the Americans and British to absorb the expellees in their zones of occupation. In contradiction of the Potsdam Agreements, the Soviets neglected their obligation to provide supplies for the expellees. At Potsdam it had been agreed that 15% of all equipment dismantled in the Western zones—especially from the metallurgical, chemical and machine manufacturing industries—would be transferred to the Soviets in return for food, coal, potash (a basic material for fertiliser), timber, clay products, petroleum products, etc. The Western deliveries started in 1946, but this turned out to be a one-way street. The Soviet deliveries—desperately needed to provide the expellees with food, warmth and basic necessities and to increase agricultural production in the remaining cultivation area—did not materialize. Consequently, the US stopped all deliveries on 3 May 1946, while expellees from the areas under Soviet rule continued to be deported to the West until the end of 1947.
In the British and US zones the supply situation worsened considerably, especially in the British zone. Due to its location on the Baltic, the British zone already harboured a great number of refugees who had come by sea, and the already modest rations had to be cut by a further third in March 1946. In Hamburg, for instance, the average living space per capita, already reduced by air raids from its 1939 level to 8.3 square metres in 1945, was reduced further still by 1949 through the billeting of refugees and expellees. In May 1947, Hamburg trade unions organized a strike against the small rations, with protesters complaining about the rapid absorption of expellees.

The US and Britain had to import food into their zones, even as Britain was financially exhausted and dependent on food imports after fighting Nazi Germany for the entire war, including standing as its sole opponent from June 1940 to June 1941 (a period when Poland and France had already been defeated, the Soviet Union was still cooperating with Nazi Germany, and the United States had not yet entered the war). Consequently, Britain had to incur additional debt to the US, and the US had to spend more for the survival of its zone, while the Soviets gained applause among Eastern Europeans—many of whom had been impoverished by the war and German occupation—who plundered the belongings of expellees, often before they were actually expelled. Since the Soviet Union was the only power among the Allies that allowed or encouraged the looting and robbery in the area under its military influence, the perpetrators and profiteers found themselves dependent on the perpetuation of Soviet rule in their countries if they were not to be dispossessed of the booty and punished.

With ever more expellees sweeping into post-war Germany, the Allies moved towards a policy of assimilation, which was believed to be the best way to stabilise Germany and ensure peace in Europe by preventing the creation of a marginalised population. This policy led to the granting of German citizenship to the ethnic German expellees who had held citizenship of Poland, Czechoslovakia, Hungary, Yugoslavia, Romania, etc. before World War II. The effort was led by the Sonne Commission, a 14-member body within the Economic Cooperation Administration consisting of nine Americans and five Germans, which was tasked with devising strategies to solve the refugee crisis. When the Federal Republic of Germany was founded, a law was drafted on 24 August 1952 that was primarily intended to ease the financial situation of the expellees. The law, termed the Lastenausgleichsgesetz, granted partial compensation and easy credit to the expellees; the loss of their civilian property had been estimated at 299.6 billion Deutschmarks (out of a total loss of German property due to the border changes and expulsions of 355.3 billion Deutschmarks). Administrative organisations were set up to integrate the expellees into post-war German society. While the Stalinist regime in the Soviet occupation zone did not allow the expellees to organise, in the Western zones expellees over time established a variety of organizations, including the All-German Bloc/League of Expellees and Deprived of Rights. The most prominent—still active today—is the Federation of Expellees (Bund der Vertriebenen, or BdV).

"War children" of German ancestry in Western and Northern Europe

In countries occupied by Nazi Germany during the war, sexual relations between Wehrmacht soldiers and local women resulted in the birth of significant numbers of children.
Relationships between German soldiers and local women were particularly common in countries whose populations were not dubbed "inferior" (Untermensch) by the Nazis. After the Wehrmacht's withdrawal, these women and their children of German descent were often ill-treated.

Legacy of the expulsions

With at least 12 million Germans directly involved, possibly 14 million or more, it was the largest movement or transfer of any single ethnic population in European history and the largest among the post-war expulsions in Central and Eastern Europe (which displaced 20 to 31 million people in total). The exact number of Germans expelled after the war is still unknown, because most recent research provides a combined estimate which includes those who were evacuated by the German authorities, fled or were killed during the war. It is estimated that between 12 and 14 million German citizens and foreign ethnic Germans and their descendants were displaced from their homes. The exact number of casualties is still unknown and is difficult to establish due to the chaotic nature of the last months of the war. Census figures placed the total number of ethnic Germans still living in Eastern Europe in 1950, after the major expulsions were complete, at approximately 2.6 million, about 12 percent of the pre-war total.

The events have usually been classified as population transfer or as ethnic cleansing. R.J. Rummel has classified them as democide, and a few scholars go as far as calling them a genocide. The Polish sociologist and philosopher Lech Nijakowski objects to the term "genocide" as inaccurate agitprop.

The expulsions created major social disruptions in the receiving territories, which were tasked with providing housing and employment for millions of refugees. West Germany established a ministry dedicated to the problem, and several laws created a legal framework. The expellees established several organisations, some demanding compensation. Their grievances, while remaining controversial, were incorporated into public discourse. During 1945 the British press aired concerns over the refugees' situation; this was followed by only limited discussion of the issue during the Cold War outside West Germany. East Germany sought to avoid alienating the Soviet Union and its neighbours; the Polish and Czechoslovak governments characterised the expulsions as "a just punishment for Nazi crimes". Western analysts were inclined to see the Soviet Union and its satellites as a single entity, disregarding the national disputes that had preceded the Cold War. The fall of the Soviet Union and the reunification of Germany opened the door to a renewed examination of the expulsions in both scholarly and political circles. A factor in the ongoing nature of the dispute may be the relatively large proportion of German citizens who are expellees or their descendants, estimated at 20% in 2000.

A 1993 novel, Summer of Dead Dreams, written by Harry Thürk, a German author who left Upper Silesia, annexed by Poland, shortly after the war ended, contained graphic depictions of the treatment of Germans by Soviets and Poles in Thürk's hometown of Prudnik. It depicted the maltreatment of Germans while also acknowledging German guilt, as well as Polish animosity toward Germans and, in specific instances, friendships between Poles and Germans despite the circumstances.
Thürk's novel, when serialized in Polish translation by the Tygodnik Prudnicki ("Prudnik Weekly") magazine, was met with criticism from some Polish residents of Prudnik, but also with praise, because it revealed to many local citizens that there had been a post-war German ghetto in the town and addressed the tensions between Poles and Soviets in post-war Poland. The serialization was followed by an exhibition on Thürk's life in Prudnik's town museum.

Status in international law

International law on population transfer underwent considerable evolution during the 20th century. Before World War II, several major population transfers were the result of bilateral treaties and had the support of international bodies such as the League of Nations. The tide started to turn when the charter of the Nuremberg trials of German Nazi leaders declared forced deportation of civilian populations to be both a war crime and a crime against humanity, and this opinion was progressively adopted and extended through the remainder of the century. Underlying the change was the trend to assign rights to individuals, thereby limiting the right of nation-states to impose fiats which could adversely affect them. The Charter of the then newly formed United Nations stated that its Security Council could take no enforcement action regarding measures taken against World War II "enemy states", defined as enemies of a Charter signatory in that war. The Charter did not preclude action in relation to such enemies "taken or authorized as a result of that war by the Governments having responsibility for such action". Thus, the Charter did not invalidate or preclude action against World War II enemies following the war. This argument is contested by Alfred de Zayas, an American professor of international law. The ICRC's legal adviser Jean-Marie Henckaerts has posited that the contemporary expulsions conducted by the Allies of World War II themselves were the reason why expulsion issues were included neither in the UN Universal Declaration of Human Rights of 1948 nor in the European Convention on Human Rights of 1950, and says it "may be called 'a tragic anomaly' that while deportations were outlawed at Nuremberg they were used by the same powers as a 'peacetime measure'". Only in 1955 did the Settlement Convention regulate expulsions, and then only in respect of expulsions of individuals of the states that signed the convention. The first international treaty condemning mass expulsions was a document issued by the Council of Europe on 16 September 1963, Protocol No. 4 to the Convention for the Protection of Human Rights and Fundamental Freedoms, Securing Certain Rights and Freedoms Other than Those Already Included in the Convention and in the First Protocol, which states in Article 4: "collective expulsion of aliens is prohibited". This protocol entered into force on 2 May 1968 and as of 1995 had been ratified by 19 states.

There is now general consensus about the legal status of involuntary population transfers: "Where population transfers used to be accepted as a means to settle ethnic conflict, today, forced population transfers are considered violations of international law." No legal distinction is made between one-way and two-way transfers, since the rights of each individual are regarded as independent of the experience of others.
Although the signatories to the Potsdam Agreements and the expelling countries may have considered the expulsions to be legal under international law at the time, there are historians and scholars of international law and human rights who argue that the expulsions of Germans from Central and Eastern Europe should now be considered episodes of ethnic cleansing, and thus a violation of human rights. For example, Timothy V. Waters argues in "On the Legal Construction of Ethnic Cleansing" that if similar circumstances arise in the future, the precedent of the expulsions of the Germans without legal redress would also allow the future ethnic cleansing of other populations under international law.

In the 1970s and 1980s, the Harvard-trained lawyer and historian Alfred de Zayas published Nemesis at Potsdam and A Terrible Revenge, both of which became bestsellers in Germany. De Zayas argues that the expulsions were war crimes and crimes against humanity even in the context of the international law of the time, stating: "the only applicable principles were the Hague Conventions, in particular, the Hague Regulations, Articles 42–56, which limited the rights of occupying powers—and obviously occupying powers have no rights to expel the populations—so there was the clear violation of the Hague Regulations." He argued that the expulsions violated the Nuremberg Principles.

In November 2000, a major conference on ethnic cleansing in the 20th century was held at Duquesne University in Pittsburgh, along with the publication of a book containing the participants' conclusions. The former United Nations High Commissioner for Human Rights, José Ayala Lasso of Ecuador, endorsed the establishment of the Centre Against Expulsions in Berlin and recognized the "expellees" as victims of gross violations of human rights. De Zayas, a member of the advisory board of the Centre Against Expulsions, endorses the full participation of the organisation representing the expellees, the Bund der Vertriebenen (Federation of Expellees), in the Centre in Berlin.

Berlin Centre

A Centre Against Expulsions was to be set up in Berlin by the German government on the initiative, and with the active participation, of the German Federation of Expellees. The centre's creation was criticized in Poland and strongly opposed by the Polish government and president Lech Kaczyński. Former Polish prime minister Donald Tusk restricted his comments to a recommendation that Germany pursue a neutral approach at the museum. The museum apparently did not materialize. The only project along the same lines in Germany is "Visible Sign" (Sichtbares Zeichen) under the auspices of the Stiftung Flucht, Vertreibung, Versöhnung (SFVV). Several members of two consecutive international advisory councils of scholars criticised some activities of the foundation, and the new director, Winfried Halder, resigned. Gundula Bavendamm is the current director.

Historiography

The British historian Richard J. Evans wrote that although the expulsions of ethnic Germans from Eastern Europe were carried out in an extremely brutal manner that could not be defended, the basic aim of expelling the ethnic German population of Poland and Czechoslovakia was justified by the subversive role played by the German minorities before World War II.
Evans wrote that under the Weimar Republic the vast majority of ethnic Germans in Poland and Czechoslovakia had made it clear that they were not loyal to the states they happened to live under, and that under Nazi rule the German minorities in Eastern Europe were willing tools of German foreign policy. Evans also wrote that many areas of Eastern Europe featured a jumble of various ethnic groups aside from Germans, and that it was the destructive role played by ethnic Germans as instruments of Nazi Germany that led to their expulsion after the war. Evans concluded by positing that the expulsions were justified as they put an end to a major problem that had plagued Europe before the war; that gains to the cause of peace were a further benefit of the expulsions; and that if the Germans had been allowed to remain in Eastern Europe after the war, West Germany would have used their presence to make territorial claims against Poland and Czechoslovakia, which, given the Cold War, could have helped cause World War III. The historian Gerhard Weinberg wrote that the expulsion of the Sudeten Germans was justified, as the Germans themselves had scrapped the Munich Agreement.

Political issues

In January 1990, the president of Czechoslovakia, Václav Havel, requested forgiveness on his country's behalf, using the term expulsion rather than transfer. Public approval for Havel's stance was limited; in a 1996 opinion poll, 86% of Czechs stated they would not support a party that endorsed such an apology. The expulsion issue surfaced in 2002 during the Czech Republic's application for membership in the European Union, since the authorisation decrees issued by Edvard Beneš had not been formally renounced. In October 2009, Czech president Václav Klaus stated that the Czech Republic would require an exemption from the European Charter of Fundamental Rights to ensure that the descendants of expelled Germans could not press legal claims against the Czech Republic. Five years later, in 2014, the government of Prime Minister Bohuslav Sobotka decided that the exemption was "no longer relevant" and that withdrawal of the opt-out "would help improve Prague's position with regard to other EU international agreements." On 20 June 2018, World Refugee Day, German Chancellor Angela Merkel said that there had been "no moral or political justification" for the post-war expulsion of ethnic Germans.

Misuse of graphical materials

Nazi propaganda pictures produced during the Heim ins Reich campaign, and pictures of expelled Poles, are sometimes published as though they showed the flight and expulsion of Germans.

See also

Dutch annexation of German territory after World War II
Expulsion of Poles by Germany
Expulsion of Poles by Nazi Germany
German reparations for World War II
Integration of immigrants
Internment of German Americans
Istrian-Dalmatian exodus
Operation Paperclip
Persecution of Germans
Population transfer in the Soviet Union
Pursuit of Nazi collaborators
Treaty of Zgorzelec
Victor Gollancz
War crimes in occupied Poland during World War II
World War II evacuation and expulsion
Deportation of Germans from Latin America during World War II
Displaced persons camps in post–World War II Europe

References

Sources

Baziur, Grzegorz. Armia Czerwona na Pomorzu Gdańskim 1945–1947 [The Red Army in Gdańsk Pomerania 1945–1947], Warsaw: Instytut Pamięci Narodowej, 2003.
Beneš, Z., D. Jančík et al. Facing History: The Evolution of Czech and German Relations in the Czech Provinces, 1848–1948, Prague: Gallery.
Blumenwitz, Dieter. Flucht und Vertreibung, Cologne: Carl Heymanns, 1987.
Brandes, Detlef. Flucht und Vertreibung (1938–1950), European History Online, Mainz: Institute of European History, 2011; retrieved 25 February 2013.
De Zayas, Alfred M. A Terrible Revenge, New York: Palgrave Macmillan, 1994.
De Zayas, Alfred M. Nemesis at Potsdam, London, 1977.
Douglas, R.M. Orderly and Humane: The Expulsion of the Germans after the Second World War, Yale University Press, 2012.
German statistics: statistical and graphical data illustrating German population movements in the aftermath of the Second World War, published in 1966 by the West German Ministry of Refugees and Displaced Persons.
Grau, Karl F. Silesian Inferno: War Crimes of the Red Army on its March into Silesia in 1945, Valley Forge, PA: The Landpost Press, 1992.
Jankowiak, Stanisław. Wysiedlenie i emigracja ludności niemieckiej w polityce władz polskich w latach 1945–1970 [Expulsion and emigration of the German population in the policies of the Polish authorities, 1945–1970], Warsaw: Instytut Pamięci Narodowej, 2005.
Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945–1949, Cambridge: Harvard University Press, 1995.
Naimark, Norman M. Fires of Hatred: Ethnic Cleansing in Twentieth-Century Europe, Cambridge: Harvard University Press, 2001.
Overy, Richard. The Penguin Historical Atlas of the Third Reich, London: Penguin Books, 1996, p. 111.
Podlasek, Maria. Wypędzenie Niemców z terenów na wschód od Odry i Nysy Łużyckiej [The expulsion of the Germans from the territories east of the Oder and Lusatian Neisse], Warsaw: Wydawnictwo Polsko-Niemieckie, 1995.
Prauser, Steffen and Arfon Rees (2004). The Expulsion of 'German' Communities from Eastern Europe at the End of the Second World War (PDF file, direct download), EUI Working Paper HEC No. 2004/1, Florence: European University Institute. Contributors: Steffen Prauser and Arfon Rees, Piotr Pykel, Tomasz Kamusella, Balazs Apor, Stanislav Sretenovic, Markus Wien, Tillmann Tegeler, and Luigi Cajani. Accessed 26 May 2015.
Reichling, Gerhard. Die deutschen Vertriebenen in Zahlen, 1986.
Truman Presidential Library: Marshall Plan Documents, trumanlibrary.org; accessed 6 December 2014.
Zybura, Marek. Niemcy w Polsce [Germans in Poland], Wrocław: Wydawnictwo Dolnośląskie, 2004.
Suppan, Arnold. "Hitler – Benes – Tito", Vienna: Verlag der Österreichischen Akademie der Wissenschaften, 2014. Three volumes.

External links

A documentary film about the expulsion of the Germans from Hungary
Timothy V. Waters, On the Legal Construction of Ethnic Cleansing, Paper 951, 2006, University of Mississippi School of Law (PDF)
Interest of the United States in the transfer of German populations from Poland, Czechoslovakia, Hungary, Rumania, and Austria, Foreign relations of the United States: diplomatic papers, Volume II (1945), pp. 1227–1327. (Note: p. 1227 begins with a Czechoslovak document dated 23 November 1944, several months before Czechoslovakia was "liberated" by the Soviet Army.) (Main URL, wisc.edu)
Frontiers and areas of administration, Foreign relations of the United States (the Potsdam Conference), Volume I (1945), wisc.edu
History and Memory: mass expulsions and transfers 1939–1945–1949, M. Rutowska, Z. Mazur, H. Orłowski
Forced Migration in Central and Eastern Europe, 1939–1950
"Unsere Heimat ist uns ein fremdes Land geworden..." Die Deutschen östlich von Oder und Neiße 1945–1950. Dokumente aus polnischen Archiven. Band 1: Zentrale Behörden, Wojewodschaft Allenstein ["Our homeland has become a foreign land to us..." The Germans east of the Oder and Neisse 1945–1950. Documents from Polish archives. Volume 1: Central authorities, Allenstein voivodeship]
Dokumentation der Vertreibung [Documentation of the Expulsion]
Displaced Persons Act of 1948
Flucht und Vertreibung Gallerie – Flight & Expulsion Gallery
Deutsche Vertriebenen – German Expulsions (Histories & Documentation)

1940s in Germany 1950 in Germany Germans Forced migration in the Soviet Union Sudetenland Aftermath of World War II in Germany Aftermath of World War II in Poland Aftermath of World War II in the Soviet Union German diaspora in Europe German diaspora in Poland Germany–Soviet Union relations Czechoslovakia–Germany relations Estonia–Germany relations Germany–Latvia relations German occupation of Lithuania during World War II Ethnic cleansing of Germans Ethnic cleansing in Europe Anti-German sentiment in Europe Genocides in Europe Collective punishment 1944 in Germany American collusion with Soviet World War II crimes British collusion with Soviet World War II crimes Polish war crimes in World War II Soviet World War II crimes
Flight and expulsion of Germans (1944–1950)
Biology
19,726
23,921,051
https://en.wikipedia.org/wiki/Kilometres%20per%20hour
The kilometre per hour (SI symbol: km/h; non-SI abbreviations: kph, km/hr) is a unit of speed, expressing the number of kilometres travelled in one hour.

History

Although the metre was formally defined in 1799, the term "kilometres per hour" did not come into immediate use: the myriametre (10,000 metres) and the myriametre per hour were preferred to the kilometre and kilometres per hour. In 1802 the term "myriamètres par heure" appeared in French literature. The Dutch, on the other hand, adopted the kilometre in 1817 but gave it the local name of the mijl (Dutch mile).

Notation history

The SI representations, classified as symbols, are "km/h", "km h⁻¹" and "km·h⁻¹". Several other abbreviations of "kilometres per hour" have been used since the term was introduced and many are still in use today; for example, dictionaries list "kph", "kmph" and "km/hr" as English abbreviations. While these forms remain widely used, the International Bureau of Weights and Measures uses "km/h" in describing the definition and use of the International System of Units. The entries for "kph" and "kmph" in the Oxford Advanced Learner's Dictionary state that "the correct scientific unit is km/h and this is the generally preferred form".

Abbreviations

Abbreviations for "kilometres per hour" did not appear in the English language until the late nineteenth century. The kilometre, a unit of length, first appeared in English in 1810, and the compound unit of speed "kilometers per hour" was in use in the US by 1866. "Kilometres per hour" did not begin to be abbreviated in print until many years later, with several different abbreviations existing near-contemporaneously. With no central authority to dictate the rules for abbreviations, various publishing houses and standards bodies have their own rules that dictate whether to use upper-case letters, lower-case letters, periods and so on, reflecting both changes in fashion and the image of the publishing house concerned. In contrast to the "symbols" designated for use with the SI system, news organisations such as Reuters and The Economist require "kph". In informal Australian usage, km/h is more commonly pronounced "kays" or "kays an hour". In military usage, "klicks" is used, though written as km/h.

Unit symbols

In 1879, four years after the signing of the Treaty of the Metre, the International Committee for Weights and Measures (CIPM) proposed a range of symbols for the various metric units then under the auspices of the General Conference on Weights and Measures (CGPM). Among these was the use of the symbol "km" for "kilometre". In 1948, as part of its preparatory work for the SI, the CGPM adopted symbols for many units of measure that did not have universally agreed symbols, one of which was the symbol "h" for "hours". At the same time the CGPM formalised the rules for combining units: quotients could be written in one of three formats, so that "km/h", "km h⁻¹" and "km·h⁻¹" are all valid representations of "kilometres per hour". The SI standards, which were MKS-based rather than CGS-based, were published in 1960 and have since been adopted by many authorities around the globe, including academic publishers and legal authorities. The SI explicitly states that unit symbols are not abbreviations and are to be written using a very specific set of rules. M. Danloux-Dumesnils has provided a justification for this distinction. SI, and hence the use of "km/h" (or "km h⁻¹" or "km·h⁻¹"), has now been adopted around the world in many areas related to health and safety and in metrology, alongside the SI unit of speed, the metre per second ("m/s", "m s⁻¹" or "m·s⁻¹").
SI is also the preferred system of measure in academia and in education.

Non-SI abbreviations in official use

km/j or km/jam (Indonesia and Malaysia)
km/t or km/tim (Norway, Denmark and Sweden; km/h is also used)
kmph (Sri Lanka and India)
กม./ชม. (Thailand; km/hr is also used)
كم/س or كم/ساعة (Arabic-speaking countries; km/h is also used)
קמ"ש (Israel)
км/ч (Russia and Belarus, in Russian-language contexts)
км/г (Belarus, in Belarusian-language contexts)
км/год (Ukraine)
km/st (Azerbaijan)
km/godz (Poland)

Regulatory use

During the early years of the motor car, each country developed its own system of road signs. In 1968 the Vienna Convention on Road Signs and Signals was drawn up under the auspices of the United Nations Economic and Social Council to harmonise road signs across the world. Many countries have since signed the convention and adopted its proposals. Speed limit signs that are either directly authorised by the convention or have been influenced by it are now in use in many countries.

In 1972 the EU published a directive (overhauled in 1979 to take British and Irish interests into account) that required member states to abandon CGS-based units in favour of SI. The use of SI implicitly required that member states use "km/h" as the shorthand for "kilometres per hour" on official documents. Another EU directive, published in 1975, regulates the layout of speedometers within the European Union and requires the text "km/h" in all languages, even where that is not the natural abbreviation for the local version of "kilometres per hour". Examples include Dutch, where "hour" is "uur" and so does not start with "h"; Portuguese, where "kilometre" is "quilómetro" and so does not start with "k"; Irish; and Greek, which uses a different script.

In 1988 the United States National Highway Traffic Safety Administration promulgated a rule stating that "MPH and/or km/h" were to be used in speedometer displays. On May 15, 2000, this was clarified to read "MPH, or MPH and km/h". However, Federal Motor Vehicle Safety Standard number 101 ("Controls and Displays") allows "any combination of upper- and lowercase letters" to represent the units.

Conversions

1 km/h ≡ 1/3.6 m/s ≈ 0.2778 m/s (the metre per second being the SI unit of speed)
1 km/h ≈ 0.6214 mph
1 km/h ≈ 0.5400 kn
1 m/s ≡ 3.6 km/h (exactly)
1 mph ≡ 1.609344 km/h (exactly)
1 kn ≡ 1.852 km/h (exactly)

(A minimal code sketch of these conversions follows this entry.)

See also

Orders of magnitude (speed)
Speed limits in the United Kingdom
Speed limits in Canada

Notes

References

Units of velocity Non-SI metric units
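As a quick check on the conversion factors listed above, here is a minimal C sketch; the helper names are invented for illustration and are not part of any standard library. It derives the other units from km/h using the exact factors 3.6, 1.609344 and 1.852:

#include <stdio.h>

/* Exact relations: 1 m/s = 3.6 km/h, 1 mph = 1.609344 km/h, 1 kn = 1.852 km/h. */
static const double KMH_PER_MPS  = 3.6;
static const double KMH_PER_MPH  = 1.609344;
static const double KMH_PER_KNOT = 1.852;

double kmh_to_mps(double kmh)  { return kmh / KMH_PER_MPS; }
double kmh_to_mph(double kmh)  { return kmh / KMH_PER_MPH; }
double kmh_to_knot(double kmh) { return kmh / KMH_PER_KNOT; }

int main(void) {
    double v = 100.0; /* a typical motorway speed limit, in km/h */
    printf("%.1f km/h = %.4f m/s = %.4f mph = %.4f kn\n",
           v, kmh_to_mps(v), kmh_to_mph(v), kmh_to_knot(v));
    /* Prints: 100.0 km/h = 27.7778 m/s = 62.1371 mph = 53.9957 kn */
    return 0;
}

Dividing by the exact factors, rather than multiplying by rounded reciprocals such as 0.6214, keeps the results as accurate as double precision allows.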
Kilometres per hour
Mathematics
1,354
12,563,101
https://en.wikipedia.org/wiki/Speech%20production
Speech production is the process by which thoughts are translated into speech. This includes the selection of words, the organization of relevant grammatical forms, and then the articulation of the resulting sounds by the motor system using the vocal apparatus. Speech production can be spontaneous, such as when a person creates the words of a conversation; reactive, such as when they name a picture or read aloud a written word; or imitative, such as in speech repetition. Speech production is not the same as language production, since language can also be produced manually, by signs.

In ordinary fluent conversation people pronounce roughly four syllables, ten to twelve phonemes and two to three words each second, drawn from a vocabulary that can contain 10,000 to 100,000 words. Errors in speech production are relatively rare, occurring at a rate of about once in every 900 words in spontaneous speech. Words that are commonly spoken, learned early in life or easily imagined are quicker to say than ones that are rarely said, learnt later in life, or abstract.

Normally speech is created with pulmonary pressure provided by the lungs, which generates sound by phonation through the glottis in the larynx; this is then modified by the vocal tract into different vowels and consonants. However, speech production can occur without the use of the lungs and glottis, in alaryngeal speech, by using the upper parts of the vocal tract. An example of such alaryngeal speech is Donald Duck talk. The vocal production of speech may be associated with the production of hand gestures that act to enhance the comprehensibility of what is being said.

The development of speech production throughout an individual's life starts with an infant's first babble and is transformed into fully developed speech by the age of five. The first stage of speech does not occur until around age one (the holophrastic phase). Between the ages of one and a half and two and a half the infant can produce short sentences (the telegraphic phase). After two and a half years the infant develops systems of lemmas used in speech production. Around four or five the child's stock of lemmas is greatly increased; this enhances the child's production of correct speech, and they can now produce speech like an adult. Adult speech production proceeds in four stages: activation of lexical concepts, selection of the lemmas needed, morphological and phonological encoding of the speech, and phonetic encoding of the word.

Three stages

The production of spoken language involves three major levels of processing: conceptualization, formulation, and articulation. The first is conceptualization or conceptual preparation, in which the intention to create speech links a desired concept to the particular spoken words to be expressed. Here the preverbal intended messages that specify the concepts to be expressed are formulated. The second stage is formulation, in which the linguistic form required for the expression of the desired message is created. Formulation includes grammatical encoding, morpho-phonological encoding, and phonetic encoding. Grammatical encoding is the process of selecting the appropriate syntactic word, or lemma. The selected lemma then activates the appropriate syntactic frame for the conceptualized message. Morpho-phonological encoding is the process of breaking words down into the syllables to be produced in overt speech. Syllabification depends on the preceding and following words, for instance: I-com-pre-hend vs. I-com-pre-hen-dit.
The final part of the formulation stage is phonetic encoding. This involves the activation of articulatory gestures dependent on the syllables selected in the morpho-phonological process, creating an articulatory score as the utterance is pieced together and the order of movements of the vocal apparatus is completed.

The third stage of speech production is articulation, which is the execution of the articulatory score by the lungs, glottis, larynx, tongue, lips, jaw and other parts of the vocal apparatus, resulting in speech.

Neuroscience

The motor control for speech production in right-handed people depends mostly upon areas in the left cerebral hemisphere. These areas include the bilateral supplementary motor area, the left posterior inferior frontal gyrus, the left insula, the left primary motor cortex and the temporal cortex. There are also subcortical areas involved, such as the basal ganglia and cerebellum. The cerebellum aids the sequencing of speech syllables into fast, smooth and rhythmically organized words and longer utterances.

Disorders

Speech production can be affected by several disorders:

Aphasia
Anomic aphasia
Apraxia of speech
Aprosodia
Auditory processing disorder
Cluttering
Developmental verbal dyspraxia
Dysprosody
Infantile speech
Lisp
Malapropism
Mispronunciation
Speech disorder
Speech error
Speech sound disorder
Spoonerism
Stuttering

History of speech production research

Until the late 1960s research on speech was focused on comprehension. As researchers collected greater volumes of speech error data, they began to investigate the psychological processes responsible for the production of speech sounds and to contemplate possible processes for fluent speech. Findings from speech error research were soon incorporated into speech production models. Evidence from speech error data supports the following conclusions about speech production:

Speech is planned in advance.
The lexicon is organized both semantically and phonologically; that is, by meaning and by the sound of the words.
Morphologically complex words are assembled: words that contain morphemes are put together during the speech production process. Morphemes are the smallest units of language that carry meaning, for example the "-ed" on a past-tense verb.
Affixes and functors behave differently from content words in slips of the tongue. This suggests that the rules about the ways in which a word can be used are stored with it, so that when speech errors are made, the mistaken words generally maintain their functions and make grammatical sense.
Speech errors reflect rule knowledge. Even in our mistakes, speech is not nonsensical: the words and sentences produced in speech errors are typically grammatical and do not violate the rules of the language being spoken.

Aspects of speech production models

Models of speech production must contain specific elements to be viable. These include the elements from which speech is composed, listed below. The accepted models of speech production discussed in more detail below all incorporate these stages either explicitly or implicitly, and the ones that are now outdated or disputed have been criticized for overlooking one or more of the following stages. The attributes of accepted speech models are:

a) a conceptual stage, where the speaker abstractly identifies what they wish to express;
b) a syntactic stage, where a frame is chosen that the words will be placed into; this frame is usually a sentence structure;
c) a lexical stage, where a search for a word occurs based on meaning; once the word is selected and retrieved, information about it becomes available to the speaker, involving its phonology and morphology;
d) a phonological stage, where the abstract information is converted into a speech-like form;
e) a phonetic stage, where instructions are prepared to be sent to the muscles of articulation.

Also, models must allow for forward-planning mechanisms, a buffer, and a monitoring mechanism. Following are a few of the influential models of speech production that account for or incorporate the previously mentioned stages and include information discovered as a result of speech error studies and other disfluency data, such as tip-of-the-tongue research.

Models

The Utterance Generator Model (1971)

The Utterance Generator Model was proposed by Fromkin (1971). It is composed of six stages and was an attempt to account for the previous findings of speech error research. The stages of the Utterance Generator Model were based on possible changes in representations of a particular utterance. The first stage is where a person generates the meaning they wish to convey. The second stage involves the message being translated into a syntactic structure; here, the message is given an outline. The third stage proposed by Fromkin is where the message gains different stresses and intonations based on the meaning. The fourth stage is concerned with the selection of words from the lexicon. After the words have been selected in stage four, the message undergoes phonological specification. The fifth stage applies rules of pronunciation and produces the syllables that are to be output. The sixth and final stage of Fromkin's Utterance Generator Model is the coordination of the motor commands necessary for speech; here, the phonetic features of the message are sent to the relevant muscles of the vocal tract so that the intended message can be produced. Despite the ingenuity of Fromkin's model, researchers have criticized this interpretation of speech production: although the Utterance Generator Model accounts for many nuances and data found by speech error studies, researchers decided it still had room to be improved.

The Garrett model (1975)

A more recent attempt than Fromkin's to explain speech production was published by Garrett in 1975. Garrett also created this model by compiling speech error data. There are many overlaps between this model and the Fromkin model on which it was based, but Garrett added a few things that filled some of the gaps pointed out by other researchers. The Garrett and Fromkin models both distinguish three levels: a conceptual level, a sentence level, and a motor level. These three levels are common to contemporary understandings of speech production.

Dell's model (1994)

In 1994, Dell proposed a model of the lexical network that became fundamental to the understanding of the way speech is produced. This model of the lexical network attempts to represent the lexicon symbolically and, in turn, explain how people choose the words they wish to produce and how those words are organized into speech. Dell's model is composed of three levels: semantics, words, and phonemes. The nodes at the highest level of the model represent semantic categories (for example, the semantic nodes winter, footwear, feet and snow are connected to the words boot and skate).
The second level represents the words that refer to those semantic categories (in this example, boot and skate). The third level represents the phonemes (syllabic information, including onsets, vowels, and codas).

Levelt model (1999)

Levelt further refined the lexical network proposed by Dell. Through the use of speech error data, Levelt recreated the three levels in Dell's model. The conceptual stratum, the top and most abstract level, contains the information a person has about the ideas of particular concepts. The conceptual stratum also contains ideas about how concepts relate to each other. This is where word selection occurs: a person chooses which words they wish to express. The next, or middle, level, the lemma stratum, contains information about the syntactic functions of individual words, including tense and function. This level serves to maintain syntax and to place words correctly into a sentence structure that makes sense to the speaker. The lowest and final level is the form stratum which, as in Dell's model, contains syllabic information. From here, the information stored at the form-stratum level is sent to the motor cortex, where the vocal apparatus is coordinated to physically produce speech sounds. (A toy code sketch of such a three-level network appears at the end of this entry.)

Places of articulation

The physical structure of the human nose, throat, and vocal cords allows for the production of many unique sounds; these areas can be further broken down into places of articulation. Different sounds are produced in different areas, and with different muscles and breathing techniques. Our ability to utilize these skills to create the various sounds needed to communicate effectively is essential to our speech production. Speech is a psychomotor activity. Speech between two people is a conversation; conversations can be casual, formal, factual, or transactional, and the language structure or narrative genre employed differs depending upon the context. Affect is a significant factor that controls speech; manifestations that disrupt memory in language use due to affect include feelings of tension and states of apprehension, as well as physical signs like nausea. Language-level manifestations brought on by affect can be observed in the speaker's hesitations, repetitions, false starts, incompletions, syntactic blends, and the like. Difficulties in manner of articulation can contribute to speech difficulties and impediments. It has been suggested that infants are capable of making the entire spectrum of possible vowel and consonant sounds. The IPA has created a system for understanding and categorizing all possible speech sounds, which includes information about the way in which a sound is produced and where it is produced. This is extremely useful in the understanding of speech production, because speech can be transcribed based on sounds rather than spelling, which may be misleading depending on the language being spoken. Average speaking rates are in the 120 to 150 words per minute (wpm) range, and the same range is recommended for recording audiobooks. As people grow accustomed to a particular language, they are prone to lose not only the ability to produce certain speech sounds but also the ability to distinguish between these sounds.

Articulation

Articulation, often associated with speech production, is how people physically produce speech sounds. For people who speak fluently, articulation is automatic and allows 15 speech sounds to be produced per second. Effective articulation of speech includes the following elements: fluency, complexity, accuracy, and comprehensibility.
Fluency: the ability to communicate an intended message, or to affect the listener in the way that is intended by the speaker. While accurate use of language is a component of this ability, over-attention to accuracy may actually inhibit the development of fluency. Fluency involves constructing coherent utterances and stretches of speech, responding, and speaking without undue hesitation (with limited use of fillers such as uh, er, eh, like, you know). It also involves the ability to use strategies such as simplification and gestures to aid communication. Fluency involves the use of relevant information and appropriate vocabulary and syntax.

Complexity: speech in which the message is communicated precisely. It is the ability to adjust the message or negotiate the control of the conversation according to the responses of the listener, and to use subordination and clausal forms appropriate to the roles of, and relationship between, the speakers. It includes the use of sociolinguistic knowledge – the skills required to communicate effectively across cultures: the norms and the knowledge of what is appropriate to say in which situations and to whom.

Accuracy: the use of proper and advanced grammar – subject–verb agreement, word order, and word form (excited/exciting) – as well as appropriate word choice in spoken language. It is also the ability to self-correct during discourse, to clarify or modify spoken language for grammatical accuracy.

Comprehensibility: the ability to be understood by others; it is related to the sound of the language. Three components influence one's comprehensibility: pronunciation – saying the sounds of words correctly; intonation – applying proper stress on words and syllables, using rising and falling pitch to indicate questions or statements, using the voice to indicate emotion or emphasis, and speaking with an appropriate rhythm; and enunciation – speaking clearly at an appropriate pace, with effective articulation of words and phrases and appropriate volume.

Development

Before even producing a sound, infants imitate facial expressions and movements. Around 7 months of age, infants start to experiment with communicative sounds by trying to coordinate producing sound with opening and closing their mouths. During the first year of life infants cannot produce coherent words; instead they produce recurring babbling sounds. Babbling allows the infant to experiment with articulating sounds without having to attend to meaning. This repeated babbling starts the initial production of speech. Babbling works with object permanence and understanding of location to support the networks of our first lexical items or words. The infant's vocabulary growth increases substantially when they are able to understand that objects exist even when they are not present.

The first stage of meaningful speech does not occur until around the age of one. This is the holophrastic phase, in which infant speech consists of one word at a time (i.e. papa). The next stage is the telegraphic phase, in which infants can form short sentences (i.e., Daddy sit, or Mommy drink). This typically occurs between the ages of one and a half and two and a half years old. This stage is particularly noteworthy because of the explosive growth of the lexicon. During this stage, infants must select and match stored representations of words to the specific perceptual target word in order to convey meaning or concepts.
With enough vocabulary, infants begin to extract sound patterns, and they learn to break down words into phonological segments, further increasing the number of words they can learn. At this point in an infant's development of speech, their lexicon consists of 200 words or more, and they are able to understand even more than they can speak. When they reach two and a half years, their speech production becomes increasingly complex, particularly in its semantic structure. With a more detailed semantic network, the infant learns to express a wider range of meanings, helping the infant develop a complex conceptual system of lemmas. Around the age of four or five, the child's lemmas have a wide range of diversity, which helps them select the right lemma needed to produce correct speech. Reading to infants enhances their lexicon: by this age, children who have been read to and exposed to more uncommon and complex words will have encountered some 32 million more words than a child who is linguistically impoverished. At this age the child should be able to speak in full, complete sentences, similar to an adult.

See also
FOXP2
KE family
Neurocomputational speech processing
Psycholinguistics
Silent speech interface
Speech perception
Speech science

Further reading
Kroeger BJ, Stille C, Blouw P, Bekolay T, Stewart TC (November 2020). "Hierarchical sequencing and feedforward and feedback control mechanisms in speech production: A preliminary approach for modeling normal and disordered speech". Frontiers in Computational Neuroscience 14:99. doi:10.3389/fncom.2020.573554
Timer coalescing is a computer system energy-saving technique that reduces central processing unit (CPU) power consumption by reducing the precision of software timers used for synchronization of process wake-ups, minimizing the number of times the CPU is forced to perform the relatively power-costly operation of entering and exiting idle states.

Implementations of timer coalescing
The Linux kernel gained support for deferrable timers in 2.6.22, and for a controllable per-thread "timer slack" in 2.6.28, allowing timer coalescing. Timer coalescing has been a feature of Microsoft Windows from Windows 7 onward. Apple's XNU-kernel-based OS X gained support as of OS X Mavericks. FreeBSD has supported it since September 2010.

See also
Advanced Configuration and Power Interface (ACPI)
Advanced Programmable Interrupt Controller (APIC)
High Precision Event Timer (HPET)
HLT (x86 instruction)
Interrupt coalescing
Programmable interval timer
Time Stamp Counter (TSC)
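The Linux timer slack mentioned above can be set per thread with the prctl(2) system call. A minimal sketch (Linux-only), assuming glibc's prctl and the PR_SET_TIMERSLACK constant (29 in linux/prctl.h); the slack value is in nanoseconds:

```python
import ctypes
import time

PR_SET_TIMERSLACK = 29  # from linux/prctl.h

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def set_timer_slack_ns(ns):
    """Allow the calling thread's timers to fire up to `ns` nanoseconds
    late, giving the kernel room to batch wake-ups with other timers."""
    if libc.prctl(PR_SET_TIMERSLACK, ctypes.c_ulong(ns), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_TIMERSLACK) failed")

set_timer_slack_ns(50_000_000)  # tolerate up to 50 ms of timer lateness
time.sleep(1.0)                 # this wake-up may now be coalesced
```

Widening the slack trades timer precision for fewer idle-state exits, which is exactly the trade-off the technique describes.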
G protein-coupled receptor 6, also known as GPR6, is a protein that in humans is encoded by the GPR6 gene.

Function
GPR6 is a member of the G protein-coupled receptor family of transmembrane receptors. It has been reported that GPR6 is constitutively active and is further activated by sphingosine-1-phosphate. GPR6 up-regulates cyclic AMP levels and promotes neurite outgrowth.

Ligand
Inverse agonist: cannabidiol

Evolution
Paralogues of the GPR6 gene include: GPR3, GPR12, S1PR5, S1PR1, CNR1, S1PR2, LPAR2, CNR2, MC1R, S1PR3, S1PR4, GPR119, MC3R, MC4R, MC5R, LPAR1, LPAR3, and MC2R.

See also
Lysophospholipid receptor
Construction barrels (colloquially known as "drums" in the United States) are traffic control devices used to channel motor vehicle traffic through construction sites or to warn motorists of construction activity near the roadway. They are used primarily in the United States, but are occasionally used in Canada, Mexico, and Costa Rica. They are an alternative to traffic cones, which are smaller and more easily hit by vehicles. Drums tend to command more respect from drivers than cones because they are larger, more visible, and give the appearance of being formidable obstacles.

Construction barrels are typically bright orange with four alternating white and orange reflective bands, although some regions, such as the province of Ontario, Canada, use black barrels with orange stripes. Most have a rubber base that prevents the barrel from tipping over in high winds. Construction barrels have a handle at the top so they can be easily picked up and carried; the handle also allows crews to install barricade lights to increase visibility. The product makes up a $90 million industry in the United States.

Until the late 1980s, construction crews typically used 55-gallon steel drums to guide traffic through construction areas. They were painted orange and white and filled with sand or water to keep them in place. Because the drums were steel and weighed down with sand or water, vehicles striking them suffered extensive damage, and the drums were dangerous to workers if propelled into work areas by vehicles. The plastic barrels commonly seen on American roadways today began emerging in the late 1970s and 1980s; steel 55-gallon drums were largely phased out by the 1990s, with an outright prohibition on using metal drums appearing in the third revision of the 1988 edition of the MUTCD, published in September 1993. By 1981, the drums were mainly a two-piece plastic design consisting of the top piece of the drum and a base that was filled with sandbags. The same year, an updated version was released by PSS; it included a flange to allow sandbag placement on the outside of the drum, which made it easier to maneuver. In 1985, PSS released the modern-day version of the construction barrel, the LifeGard® drum, which utilized the sidewall of a recycled truck tire at its base to keep the drum securely in place on the roadway. This design is the most common one in use today.

See also
Bollard
Road traffic control
Traffic cone

External links
Stabilized barrel-like traffic control element by Jack H. Kulp et al., a 1993 improved version of the traffic barrel, on Google Patents
In mass spectrometry, de novo peptide sequencing is the method in which a peptide amino acid sequence is determined from tandem mass spectrometry. Knowing the amino acid sequence of peptides from a protein digest is essential for studying the biological function of the protein. Historically this was accomplished by the Edman degradation procedure; today, analysis by a tandem mass spectrometer is the more common method for sequencing peptides. Generally, there are two approaches: database search and de novo sequencing. Database search is the simpler approach: the mass spectrum of the unknown peptide is matched against known peptide sequences, and the peptide with the highest matching score is selected. This approach fails to recognize novel peptides, since it can only match sequences already in the database. De novo sequencing is an assignment of fragment ions from a mass spectrum. Different algorithms are used for interpretation, and most instruments come with de novo sequencing programs.

Peptide fragmentation
Peptides are protonated in positive-ion mode. The proton initially locates at the N-terminus or at a basic residue side chain but, because of internal solvation, it can move along the backbone, breaking it at different sites and producing different fragments. The fragmentation rules are well explained in several publications. Three different types of backbone bonds can be broken to form peptide fragments: the alkyl carbonyl bond (CHR-CO), the peptide amide bond (CO-NH), and the amino alkyl bond (NH-CHR).

Different types of fragment ions
When the backbone bonds cleave, six different types of sequence ions are formed, as shown in Fig. 1. The N-terminal charged fragment ions are classed as a, b or c, while the C-terminal charged ones are classed as x, y or z. The subscript n is the number of amino acid residues. The nomenclature was first proposed by Roepstorff and Fohlman; Biemann then modified it, and this became the most widely accepted version. Among these sequence ions, a-, b- and y-ions are the most common ion types, especially in low-energy collision-induced dissociation (CID) mass spectrometers, since the peptide amide bond (CO-NH) is the most vulnerable and b-ions readily lose CO to form a-ions.
Mass of b-ions = Σ(residue masses) + 1 (H+)
Mass of y-ions = Σ(residue masses) + 19 (H2O + H+)
Mass of a-ions = mass of b-ions - 28 (CO)
Double backbone cleavage produces internal ions, acylium-type like H2N-CHR2-CO-NH-CHR3-CO+ or immonium-type like H2N-CHR2-CO-NH+=CHR3. These ions are usually a disturbance in the spectra. Further cleavage happens under high-energy CID at the side chain of C-terminal residues, forming dn-, vn- and wn-ions.

Fragmentation rules summary
Most fragment ions are b- or y-ions. a-ions are also frequently seen, formed by the loss of CO from b-ions.
Satellite ions (wn-, vn-, dn-ions) are formed by high-energy CID.
Ser-, Thr-, Asp- and Glu-containing ions generate a neutral molecular loss of water (-18).
Asn-, Gln-, Lys- and Arg-containing ions generate a neutral molecular loss of ammonia (-17).
Neutral loss of ammonia from Arg leads to (y-17) or (b-17) fragment ions with higher abundance than the corresponding unmodified ions.
When the C-terminus has a basic residue, the peptide generates a (bn-1+18) ion.
A complementary b-y ion pair can be observed in spectra of multiply charged ions. For such a b-y ion pair, the sum of their subscripts equals the total number of amino acid residues in the unknown peptide.
If the C-terminus is Arg or Lys, the y1-ion can be found in the spectrum to prove it.
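A minimal sketch of the ion-mass bookkeeping above, using the integer-rounded rules (b = residues + 1, y = residues + 19) and nominal residue masses for a subset of amino acids; real tools use monoisotopic masses:

```python
RESIDUE_MASS = {  # nominal residue masses in Da (subset, for illustration)
    "G": 57, "A": 71, "S": 87, "P": 97, "V": 99, "T": 101, "L": 113,
    "N": 114, "D": 115, "K": 128, "E": 129, "F": 147, "R": 156,
}

def ion_ladders(peptide):
    """Return the b- and y-ion mass ladders for a peptide string."""
    b_ions, y_ions = [], []
    for i in range(1, len(peptide)):  # one cleavage per backbone site
        b_ions.append(sum(RESIDUE_MASS[aa] for aa in peptide[:i]) + 1)   # +H
        y_ions.append(sum(RESIDUE_MASS[aa] for aa in peptide[i:]) + 19)  # +H2O+H
    return b_ions, y_ions

peptide = "GASP"
M = sum(RESIDUE_MASS[aa] for aa in peptide) + 18  # peptide mass = residues + H2O
b, y = ion_ladders(peptide)
# Complementary b/y pairs sum to the peptide mass + 2 Da (a fact the
# interpretation guidelines below rely on).
assert all(bi + yi == M + 2 for bi, yi in zip(b, y))
print("b-ions:", b, "y-ions:", y)  # b-ions: [58, 129, 216] y-ions: [274, 203, 116]
```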
Methods for peptide fragmentation
In low-energy collision-induced dissociation (CID), b- and y-ions are the main product ions. In addition, loss of ammonia (-17 Da) is observed in fragments containing the RKNQ amino acids, and loss of water (-18 Da) in fragments containing the STED amino acids. No satellite ions are seen in the spectra. In high-energy CID, all the different types of fragment ions can be observed, but no losses of ammonia or water. In electron transfer dissociation (ETD) and electron capture dissociation (ECD), the predominant ions are c, y, z+1, z+2 and sometimes w ions. For post-source decay (PSD) in MALDI, a-, b- and y-ions are the most common product ions. Factors affecting fragmentation are the charge state (the higher the charge state, the less energy is needed for fragmentation), the mass of the peptide (the larger the mass, the more energy is required), the induced energy (higher energy leads to more fragmentation), the primary amino acid sequence, the mode of dissociation, and the collision gas.

Guidelines for interpretation
For interpretation, first look for single amino acid immonium ions (H2N+=CHR2). Corresponding immonium ions for amino acids are listed in Table 1.
Ignore a few peaks at the high-mass end of the spectrum; they are ions that have undergone losses of neutral molecules (H2O, NH3, CO2, HCOOH) from the [M+H]+ ion.
Find mass differences of 28 Da, since b-ions can form a-ions by loss of CO.
Look for b2-ions at the low-mass end of the spectrum, which helps to identify yn-2-ions too. Masses of b2-ions are listed in Table 2, along with the single amino acids of mass equal to a b2-ion. The mass of a b2-ion = mass of two amino acid residues + 1.
Identify a sequence ion series by a repeated mass difference that matches one of the amino acid residue masses (see Table 1). For example, the mass differences between an and an-1, bn and bn-1, and cn and cn-1 are the same.
Identify the yn-1-ion at the high-mass end of the spectrum, then continue to identify yn-2, yn-3, ... ions by matching mass differences with the amino acid residue masses (see Table 1).
Look for the corresponding b-ions of the identified y-ions; the mass of a complementary b+y pair is the mass of the peptide + 2 Da.
After identifying the y-ion series and the b-ion series, assign the amino acid sequence and check the mass. The other method is to identify the b-ions first and then find the corresponding y-ions.

Algorithms and software
Manual de novo sequencing is labor-intensive and time-consuming. Usually the algorithms or programs that come with the mass spectrometer instrument are applied for the interpretation of spectra.

Development of de novo sequencing algorithms
An old method is to list all possible peptides for the precursor ion in the mass spectrum and to match the predicted spectrum of each candidate against the experimental spectrum: the possible peptide with the most similar spectrum has the highest chance of being the right sequence. However, the number of possible peptides may be large; for example, a precursor peptide with a molecular weight of 774 has 21,909,046 possible peptides. Even done by computer, this takes a long time. Another method is called "subsequencing": instead of listing whole sequences of possible peptides, it matches short sequences that represent only a part of the complete peptide. When sequences that match the fragment ions in the experimental spectrum well are found, they are extended residue by residue to find the best match.
In the third method, a graphical display of the data is used, in which fragment ions with a mass difference of one amino acid residue are connected by lines. This makes it easier to get a clear picture of ion series of the same type. It can be helpful for manual de novo peptide sequencing, but does not work in high-throughput settings. The fourth method, which is considered successful, is based on graph theory. Applying graph theory to de novo peptide sequencing was first mentioned by Bartels. Peaks in the spectrum are transformed into vertices of a graph called a "spectrum graph"; if two vertices differ in mass by one or several amino acid residues, a directed edge is added (a minimal sketch of this construction appears below, after the survey of software packages). The SeqMS, Lutefisk and Sherenga algorithms are examples of this type.

Deep learning
More recently, deep learning techniques have been applied to the de novo peptide sequencing problem. The first breakthrough was DeepNovo, which adopted a convolutional neural network structure, achieved major improvements in sequence accuracy, and enabled complete protein sequence assembly without assisting databases. Subsequently, additional network structures, such as PointNet (PointNovo), have been adopted to extract features from a raw spectrum. The de novo peptide sequencing problem is then framed as a sequence prediction problem: given the previously predicted partial peptide sequence, a neural-network-based de novo peptide sequencing model repeatedly generates the most probable next amino acid until the predicted peptide's mass matches the precursor mass. At inference time, search strategies such as beam search can be adopted to explore a larger search space while keeping the computational cost low. Compared with previous methods, neural-network-based models have demonstrated significantly better accuracy and sensitivity. Moreover, with careful model design, deep-learning-based de novo peptide sequencing algorithms can also be fast enough to achieve real-time peptide de novo sequencing. The PEAKS software incorporates this neural network learning in its de novo sequencing algorithms.

Software packages
As described by Andreotti et al. in 2012, Antilope is a combination of Lagrangian relaxation and an adaptation of Yen's k-shortest-paths algorithm. It is based on the spectrum graph method, contains different scoring functions, and is comparable in running time and accuracy to "the popular state-of-the-art programs" PepNovo and NovoHMM. Grossmann et al. presented AUDENS in 2005 as an automated de novo peptide sequencing tool containing a preprocessing module that can distinguish signal peaks from noise peaks. Lutefisk solves de novo sequencing from CID mass spectra. In this algorithm, significant ions are found first, and then the N- and C-terminal evidence lists are determined. Based on these lists, it generates complete sequences and scores them against the experimental spectrum. However, the result may include several sequence candidates that differ only slightly, making it hard to pick out the right peptide sequence. A second program, CIDentify, a version of Bill Pearson's FASTA algorithm modified by Alex Taylor, can be applied to distinguish those uncertain similar candidates. Mo et al. presented the MSNovo algorithm in 2007 and showed that it performed "better than existing de novo tools on multiple data sets". This algorithm can interpret data from LCQ and LTQ mass spectrometers and from singly, doubly and triply charged ions.
Unlike other algorithms, it applies a novel scoring function and uses a mass array instead of a spectrum graph. Fisher et al. proposed the NovoHMM method of de novo sequencing, in which a hidden Markov model (HMM) is applied as a new way to solve de novo sequencing in a Bayesian framework. Instead of scoring single symbols of the sequence, this method considers posterior probabilities for amino acids. In the paper, this method is shown to perform better on many example spectra than other popular de novo peptide sequencing methods such as PepNovo. PEAKS is a complete software package for the interpretation of peptide mass spectra. It includes de novo sequencing, database search, PTM identification, homology search and quantification in data analysis. Ma et al. described a new model and algorithm for de novo sequencing in PEAKS and compared its performance with Lutefisk on several tryptic peptides of standard proteins measured with a quadrupole time-of-flight (Q-TOF) mass spectrometer. PepNovo is a high-throughput de novo peptide sequencing tool that uses a probabilistic network as its scoring method. It usually takes less than 0.2 seconds to interpret one spectrum. As described by Frank et al., PepNovo works better than several popular algorithms such as Sherenga, PEAKS and Lutefisk; a new version, PepNovo+, is now available. Chi et al. presented pNovo+ in 2013 as a new de novo peptide sequencing tool using complementary HCD and ETD tandem mass spectra. In this method, a component algorithm, pDAG, greatly speeds up peptide sequencing, to 0.018 s per spectrum on average, about three times as fast as other popular de novo sequencing software. As described by Jeong et al., compared with other de novo peptide sequencing tools, which work well only on certain types of spectra, UniNovo is a more universal tool that performs well on various types of spectra or spectral pairs such as CID, ETD, HCD and CID/ETD. It has better accuracy than PepNovo+ or PEAKS, and it reports the error rate of the reported peptide sequences. Ma published Novor in 2015 as a real-time de novo peptide sequencing engine. The tool seeks to improve de novo sequencing speed by an order of magnitude while retaining accuracy similar to other de novo tools on the market. On a MacBook Pro laptop, Novor has achieved more than 300 MS/MS spectra per second.

Pevtsov et al. compared the performance of five of the above de novo sequencing algorithms: AUDENS, Lutefisk, NovoHMM, PepNovo, and PEAKS. QSTAR and LCQ mass spectrometer data were employed in the analysis and evaluated by the relative sequence distance (RSD) value, the similarity between the de novo peptide sequence and the true peptide sequence, calculated by a dynamic programming method. The results showed that all algorithms performed better on QSTAR data than on LCQ data: the best on QSTAR data, PEAKS, had a success rate of 49.7%, while the best on LCQ data, NovoHMM, had a success rate of 18.3%. The performance order on QSTAR data was PEAKS > Lutefisk, PepNovo > AUDENS, NovoHMM, and on LCQ data it was NovoHMM > PepNovo, PEAKS > Lutefisk > AUDENS. Compared across a range of spectrum quality, PEAKS and NovoHMM also showed the best performance of all five algorithms on both data sets, and they had the best sensitivity on both QSTAR and LCQ data as well. However, no evaluated algorithm exceeded 50% exact identification on either data set.
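A minimal sketch of the spectrum-graph construction described under "Development of de novo sequencing algorithms" above; for brevity it adds edges for single-residue differences only, with an illustrative tolerance, and reads candidate sequences off as maximal paths:

```python
TOL = 0.02  # Da; illustrative tolerance

def spectrum_graph(peak_masses, residue_masses):
    """Each peak is a vertex; a directed edge links two peaks whose mass
    difference matches an amino acid residue (within TOL)."""
    peaks = sorted(peak_masses)
    edges = {m: [] for m in peaks}
    for i, lo in enumerate(peaks):
        for hi in peaks[i + 1:]:
            for aa, rm in residue_masses.items():
                if abs((hi - lo) - rm) <= TOL:
                    edges[lo].append((hi, aa))
    return edges

def paths(edges, start):
    """Enumerate residue strings along maximal paths from `start` (DFS)."""
    out = []
    def dfs(node, acc):
        if not edges[node]:
            out.append(acc)
        for nxt, aa in edges[node]:
            dfs(nxt, acc + aa)
    dfs(start, "")
    return out

# Toy b-ion ladder for "GAS", with monoisotopic residue masses:
residues = {"G": 57.02146, "A": 71.03711, "S": 87.03203}
peaks = [1.00794, 58.0294, 129.0665, 216.0985]
g = spectrum_graph(peaks, residues)
print(paths(g, peaks[0]))  # ['GAS'] (up to mass-equivalent ambiguities)
```

Real implementations also merge b- and y-ion interpretations of the same peak and score paths, which is where the algorithms above differ most.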
Recent progress in mass spectrometers has made it possible to generate mass spectra of ultra-high resolution. The improved accuracy, together with the increasing amount of mass spectrometry data being generated, has drawn interest in applying deep learning techniques to de novo peptide sequencing. In 2017, Tran et al. proposed DeepNovo, the first deep-learning-based de novo sequencing software. The benchmark analysis in the original publication demonstrated that DeepNovo outperformed previous methods, including PEAKS, Novor and PepNovo, by a significant margin. DeepNovo is implemented in Python with the TensorFlow framework. To represent a mass spectrum as a fixed-dimensional input to the neural network, DeepNovo discretizes each spectrum into a vector of length 150,000. This unnecessarily large spectrum representation, together with the single-threaded CPU usage of the original implementation, prevents DeepNovo from performing peptide sequencing in real time. To further improve the efficiency of de novo peptide sequencing models, Qiao et al. proposed PointNovo in 2020. PointNovo is Python software implemented with the PyTorch framework, and it dispenses with the space-consuming spectrum-vector representation adopted by DeepNovo. Compared with DeepNovo, PointNovo achieves better accuracy and efficiency at the same time by directly representing a spectrum as a set of m/z and intensity pairs.
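A minimal sketch of the decoding loop these neural models use, as described above: grow the sequence one residue at a time until the running mass reaches the precursor mass. Here toy_model is a hypothetical stand-in for a trained network, and the greedy choice is where beam search would instead keep the top-k prefixes:

```python
RESIDUES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841}
H2O = 18.01056

def toy_model(spectrum, prefix):
    # Hypothetical scorer; a real model conditions on the spectrum and prefix.
    return {aa: 1.0 / (len(prefix) + i + 1) for i, aa in enumerate(RESIDUES)}

def greedy_decode(spectrum, precursor_mass, model_step, tol=0.05):
    prefix, mass = [], H2O                    # an empty peptide weighs one water
    while mass < precursor_mass - tol:
        probs = model_step(spectrum, prefix)  # {residue: probability}
        aa = max(probs, key=probs.get)        # greedy next-residue choice
        prefix.append(aa)
        mass += RESIDUES[aa]
    return "".join(prefix) if abs(mass - precursor_mass) <= tol else None

print(greedy_decode(None, H2O + 3 * RESIDUES["G"], toy_model))  # 'GGG'
```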
Universal measuring machines (UMMs) are measurement devices used for objects in which geometric relationships are the most critical element, with dimensions specified from geometric locations (see GD&T) rather than absolute coordinates. The very first uses for these machines were the inspection of gauges and parts produced by jig grinding. While bearing some resemblance to a coordinate-measuring machine (CMM), a UMM differs significantly in usage and accuracy envelope. While CMMs typically move in three dimensions and measure with a touch probe, a UMM aligns a spindle (a fourth axis) with a part geometry using a continuous scanning probe.

Originally, universal measuring machines were created to fill a need to measure geometric features continuously, in both an absolute and a comparative capacity, rather than by a point-based coordinate measuring system. A CMM provides a rapid method for inspecting absolute points, but geometric relationships, such as runout, parallelism, perpendicularity, etc., must be calculated rather than measured directly. A universal measuring machine fills this need by aligning an accurate spindle carrying an electronic test indicator with the geometric feature of interest, rather than using a non-scanning Cartesian probe to estimate an alignment. The indicator can be accurately controlled and moved across a part, either along a linear axis or radially around the spindle, to continuously record the profile and determine geometry. This gives the universal machine a very strong advantage over non-scanning measuring methods when profiling flats, radii, contours, and holes, as the detail of the feature can be resolved at the resolution of the probe. More modern CMMs do have scanning probes and can thus determine geometry similarly.

In practice, the 1970s-era universal measuring machine is a very slow machine that requires a highly skilled and patient operator, and the accuracy built into these machines far outstripped the needs of most industries. As a result, the universal measuring machine today is uncommon, found only as a special-purpose machine in metrology laboratories. Because the machine can make comparative length measurements without moving linear axes, it is a valuable tool for comparing master gauges and length standards. While universal measuring machines were never a mass-produced item, they are no longer available on a production basis; they are produced to order, tailored to the needs of the metrology lab purchasing them. Manufacturers that perform work that must be measured on such a machine will frequently opt to subcontract the measurement to a laboratory that specializes in it.

Universal measuring machines placed under corrected interferometric control and using non-contact gauge heads can measure features to millionths of an inch across the machine's entire envelope, where other types of machine are limited either in the number of axes or in the accuracy of the measurement. The accuracy of the machine itself is not the limiting factor; the environment around the machine is what limits the effective accuracy. The earlier mechanical machines were built to hold 10 to 20 millionths of an inch accuracy across the entire machine envelope.

References
American Society for Precision Engineering, Achieving Accuracy in the Modern Machine Shop
Wayne R. Moore, Foundations of Mechanical Accuracy
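As an illustration of "calculated rather than measured directly": a sketch that estimates radial runout from points probed by a CMM, using a least-squares (Kåsa) circle fit. This is purely illustrative of the computation involved, not how any particular CMM package works; a UMM would instead read runout directly off a rotating indicator.

```python
import numpy as np

def radial_runout(points):
    """points: (n, 2) array of x, y probe contacts on a nominally round part.
    Fit a circle by least squares, then take max - min radial deviation."""
    x, y = points[:, 0], points[:, 1]
    # Kasa fit: solve x^2 + y^2 ~= 2*cx*x + 2*cy*y + c for the center (cx, cy).
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    cx, cy, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    radii = np.hypot(x - cx, y - cy)
    return radii.max() - radii.min()  # total indicated runout estimate

# Simulated probing of a 25-unit-radius bore with ~0.002-unit probe noise:
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
pts = np.column_stack([10 + 25.0 * np.cos(theta), -4 + 25.0 * np.sin(theta)])
pts += np.random.default_rng(0).normal(scale=0.002, size=pts.shape)
print(f"runout estimate: {radial_runout(pts):.4f} (same units as the points)")
```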
In geometry, the Newton–Gauss line (or Gauss–Newton line) is the line joining the midpoints of the three diagonals of a complete quadrilateral.

The midpoints of the two diagonals of a convex quadrilateral with at most two parallel sides are distinct and thus determine a line, the Newton line. If the sides of such a quadrilateral are extended to form a complete quadrangle, the diagonals of the quadrilateral remain diagonals of the complete quadrangle, and the Newton line of the quadrilateral is the Newton–Gauss line of the complete quadrangle.

Complete quadrilaterals
Any four lines in general position (no two lines are parallel, and no three are concurrent) form a complete quadrilateral. This configuration consists of a total of six points, the intersection points of the four lines, with three points on each line and precisely two lines through each point. These six points can be split into pairs so that the line segments determined by any pair do not intersect any of the given four lines except at the endpoints. These three line segments are called diagonals of the complete quadrilateral.

Existence of the Newton–Gauss line
It is a well-known theorem that the three midpoints of the diagonals of a complete quadrilateral are collinear. There are several proofs of the result based on areas or wedge products or, as in the following proof, on Menelaus's theorem, due to Hillyer and published in 1920.

Let the complete quadrilateral be labeled as in the diagram with diagonals and their respective midpoints. Let the midpoints of be respectively. Using similar triangles it is seen that intersects at, intersects at, and intersects at. Again, similar triangles provide the corresponding proportions. However, the line intersects the sides of the triangle, so by Menelaus's theorem the product of the terms on the right-hand sides is −1. Thus, the product of the terms on the left-hand sides is also −1 and, again by Menelaus's theorem, the points are collinear on the sides of the triangle.

Applications to cyclic quadrilaterals
The following are some results that use the Newton–Gauss line of complete quadrilaterals associated with cyclic quadrilaterals, based on the work of Barbu and Patrascu.

Equal angles
Given any cyclic quadrilateral, let the point be the point of intersection of the two diagonals, and extend the diagonals until they meet at the point of intersection. Let the midpoint of the first segment be, and let the midpoint of the second segment be (Figure 1).

Theorem
If the midpoint of the line segment is, the Newton–Gauss line of the complete quadrilateral and the line determine an angle equal to.

Proof
First show that the triangles are similar. Since and, we know. Also, in the cyclic quadrilateral, these equalities hold. Therefore,. Let be the radii of the circumcircles of the respective triangles. Applying the law of sines to the triangles gives. Since and, this shows the equality. The similarity of the triangles follows, and.

Remark
If is the midpoint of the line segment, it follows by the same reasoning that.

Isogonal lines
Theorem
The line through parallel to the Newton–Gauss line of the complete quadrilateral and the line are isogonal lines of, that is, each line is the reflection of the other about the angle bisector (Figure 2).

Proof
The triangles are similar by the above argument, so. Let be the point of intersection of and the line parallel to the Newton–Gauss line through. Since and, and.
Therefore,

Two cyclic quadrilaterals sharing a Newton–Gauss line
Lemma
Let and be the orthogonal projections of the point on the lines and respectively. The quadrilaterals and are cyclic quadrilaterals.

Proof
, as previously shown. The points and are the respective circumcenters of the right triangles. Thus, and. Therefore, is a cyclic quadrilateral, and by the same reasoning, also lies on a circle.

Theorem
Extend the lines to intersect at respectively (Figure 4). The complete quadrilaterals and have the same Newton–Gauss line.

Proof
The two complete quadrilaterals have a shared diagonal,. lies on the Newton–Gauss line of both quadrilaterals. is equidistant from and, since it is the circumcenter of the cyclic quadrilateral. If the triangles are congruent, it will follow that lies on the perpendicular bisector of the line. Therefore, the line contains the midpoint of, and is the Newton–Gauss line of. To show that the triangles are congruent, first observe that is a parallelogram, since the points are midpoints of respectively. Therefore, and the triangles are congruent by SAS.

Remark
Since the triangles are congruent, their circumcircles are also congruent.

Relation with the Miquel point
The point at infinity along the Newton–Gauss line is the isogonal conjugate of the Miquel point.

Generalization
Dao Thanh Oai showed a generalization of the Newton–Gauss line. For a triangle, let be an arbitrary line and the cevian triangle of an arbitrary point. The line intersects the sides at three points, and these three points are collinear. If is the centroid of the triangle, the resulting line is the Newton–Gauss line of the quadrilateral composed of the given line and triangle.

History
The Newton–Gauss line proof was developed by the two mathematicians it is named after: Sir Isaac Newton and Carl Friedrich Gauss. The initial framework for this theorem comes from the work of Newton, in his previous theorem on the Newton line, in which Newton showed that the center of a conic inscribed in a quadrilateral lies on the Newton–Gauss line. The theorem of Gauss and Bodenmiller states that the three circles whose diameters are the diagonals of a complete quadrilateral are coaxal.
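A quick numeric check of the collinearity theorem stated above, using four random lines in general position; the (a, b, c) line representation (ax + by = c) and the helper names are illustrative choices:

```python
import itertools, random

def meet(l1, l2):
    """Intersection of two lines given as (a, b, c) with ax + by = c."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

random.seed(1)
lines = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(4)]

# Six vertices, one per pair of lines; two vertices span a diagonal exactly
# when their line pairs are disjoint (they share no line).
pts = {frozenset(p): meet(lines[p[0]], lines[p[1]])
       for p in itertools.combinations(range(4), 2)}
mids = []
for pair in [({0, 1}, {2, 3}), ({0, 2}, {1, 3}), ({0, 3}, {1, 2})]:
    (x1, y1), (x2, y2) = pts[frozenset(pair[0])], pts[frozenset(pair[1])]
    mids.append(((x1 + x2) / 2, (y1 + y2) / 2))

# Cross product of the two spanning vectors vanishes iff the midpoints are collinear.
(ax, ay), (bx, by), (cx, cy) = mids
cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
print(f"collinearity defect: {cross:.2e}")  # ~0, up to floating-point error
```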
Glass poling is the physical process through which the distribution of electrical charges in a glass is changed. Initially, the charges are randomly distributed and no permanent electric field exists inside the glass. When the charges are moved and fixed in place, a permanent field is recorded in the glass. This electric field permits various optical functions in the glass that would otherwise be impossible. The resulting effect is like having positive and negative poles, as in a battery, but inside an optical fibre, and it changes the optical fibre's properties. For instance, glass poling permits second-harmonic light generation, which converts input light into another wavelength at twice the original frequency and half the wavelength: near-infrared radiation around 1030 nm can be converted by this process to 515 nm, corresponding to green light. Glass poling also allows the creation of a linear electro-optic effect that can be used for other functions such as light modulation. Glass poling thus relies on recording an electric field which breaks the original symmetry of the material.

Poling of glass is done by applying a high voltage to the medium while exciting it with heat, ultraviolet light or some other source of energy. Heat permits the charges to move by diffusion, and the high voltage gives a direction to the charge displacement. Optical poling of silica fibers allows second-harmonic generation through the creation of a self-organized periodic distribution of charges at the core-cladding interface. UV poling received much attention because of the high non-linearity reported, but interest dwindled when various groups failed to reproduce the results.

Thermal poling
Strong electric fields are created by thermal poling of silica, subjecting the glass simultaneously to temperatures around 280 °C and a bias of a few kilovolts for several minutes. Cations (e.g., Na+) are mobile at elevated temperature and are displaced by the poling field away from the anode side of the sample. This creates a region a few micrometers thick of high electrical resistivity, depleted of positive ions, near the anodic surface. The depleted region is negatively charged, and if the sample is cooled to room temperature while the poling voltage is on, the charge distribution becomes frozen. After poling, positive charge attracted to the anodic surface and negative charge inside the glass create a recorded field that can reach 10^9 V/m. More detailed studies show that there is little or no accumulation of cations near the cathode electrode, and that the layer nearest the anode suffers partial neutralization if poling persists for an excessively long time. The process of glass poling is very similar to the one used for anodic bonding, where the recorded electric field bonds the glass sample to the anode.

In thermal poling, one exploits effects of nonlinear optics created by the strong recorded field. An effective second-order optical non-linearity arises from χ(2)eff ≈ 3·χ(3)·Erec. In silica glass, the induced non-linear coefficient is ~1 pm/V, while in fibers it is a fraction of this value. The use of fibers with internal electrodes makes it possible to pole the fibers so that they exhibit the linear electro-optic effect, and then to control the refractive index with an applied voltage, for switching and modulation.
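An order-of-magnitude check of the relation above; the third-order susceptibility used for fused silica is an assumed, typical literature figure, not a value given in this article:

```python
# chi2_eff ~ 3 * chi3 * E_rec
chi3_silica = 2e-22   # m^2/V^2 (assumed typical value for fused silica)
E_rec = 1e9           # V/m, the recorded field quoted above

chi2_eff = 3 * chi3_silica * E_rec   # in m/V
print(f"chi2_eff ~ {chi2_eff * 1e12:.1f} pm/V")
# ~0.6 pm/V, consistent with the ~1 pm/V induced coefficient quoted for
# silica glass above.
```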
The recorded field in a poled fiber can be erased by exposing the poled fiber from the side to UV radiation. This makes it possible to artificially create an electric-field grating with an arbitrary period, satisfying the condition necessary for quasi-phase-matching. Periodic poling is used for efficient frequency-doubling in optical fibers.
Homeobox protein Hox-B4 is a protein that in humans is encoded by the HOXB4 gene.

Function
This gene is a member of the Antp homeobox family and encodes a nuclear protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded protein functions as a sequence-specific transcription factor involved in development. Intracellular or ectopic expression of this protein expands hematopoietic stem and progenitor cells in vivo and in vitro, making it a potential candidate for therapeutic stem cell expansion.

See also
Homeobox
In computing, 3D interaction is a form of human-machine interaction in which users are able to move and perform interaction in 3D space. Both human and machine process information in which the physical position of elements in the 3D space is relevant. The 3D space used for interaction can be the real physical space, a virtual space representation simulated on the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine by performing actions using an input device that detects, among other things, the 3D position of the human interaction. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through an output device. The principles of 3D interaction are applied in a variety of domains such as tourism, art, gaming, simulation, education, information visualization, and scientific visualization.

History
Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like Ivan Sutherland, Fred Brooks, Bob Sproull, Andrew Ortony and Richard Feldman. In 1962, Morton Heilig invented the Sensorama simulator, which provided 3D video feedback, as well as motion, audio, and other feedback, to produce a virtual environment. The next stage of development was Ivan Sutherland's completion in 1968 of his pioneering work, the Sword of Damocles: a head-mounted display that produced a 3D virtual environment by presenting left and right still images of that environment. Limited availability of technology, as well as impractical costs, held back the development and application of virtual environments until the 1980s, and applications were limited to military ventures in the United States. Since then, further research and technological advancements have opened new doors to application in various other areas such as education, entertainment, and manufacturing.

Background
In 3D interaction, users carry out their tasks and perform functions by exchanging information with computer systems in 3D space. It is an intuitive type of interaction because humans interact in three dimensions in the real world. The tasks that users perform have been classified as selection and manipulation of objects in virtual space, navigation, and system control. Tasks can be performed in virtual space through interaction techniques and by utilizing interaction devices. 3D interaction techniques are classified according to the task group they support: techniques that support navigation tasks are classified as navigation techniques; techniques that support object selection and manipulation are labeled selection and manipulation techniques; and system control techniques support tasks that have to do with controlling the application itself. A consistent and efficient mapping between techniques and interaction devices must be made for the system to be usable and effective. Interfaces associated with 3D interaction are called 3D interfaces. Like other types of user interfaces, they involve two-way communication between users and system, but allow users to perform actions in 3D space. Input devices permit the users to give directions and commands to the system, while output devices allow the machine to present information back to them. 3D interfaces have been used in applications that feature virtual environments and augmented and mixed realities. In virtual environments, users may interact directly with the environment or use tools with specific functionalities to do so.
3D interaction occurs when physical tools are controlled in a 3D spatial context to control a corresponding virtual tool. Users experience a sense of presence when engaged in an immersive virtual world. Enabling users to interact with this world in 3D allows them to make use of natural and intrinsic knowledge of how information exchange takes place with physical objects in the real world. Texture, sound, and speech can all be used to augment 3D interaction. Currently, users still have difficulty in interpreting 3D space visuals and understanding how interaction occurs. Although it is a natural way for humans to move around in a three-dimensional world, the difficulty exists because many of the cues present in real environments are missing from virtual environments. Perception and occlusion are the primary perceptual cues used by humans. Also, even though scenes in virtual space appear three-dimensional, they are still displayed on a 2D surface, so some inconsistencies in depth perception will still exist.

3D user interfaces
User interfaces are the means for communication between users and systems. 3D interfaces include media for 3D representation of system state and media for 3D user input or manipulation. Using 3D representations is not enough to create 3D interaction; the users must also have a way of performing actions in 3D. To that effect, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction.

3D user interfaces are user interfaces in which 3D interaction takes place; this means that the user's tasks occur directly within a three-dimensional space. The user must communicate commands, requests, questions, intent, and goals to the system, which in turn has to provide feedback, requests for input, information about its status, and so on. The user and the system do not share the same type of language, so to make the communication process possible, the interfaces must serve as intermediaries or translators between them. The way the user transforms perceptions into actions is called the human transfer function, and the way the system transforms signals into display information is called the system transfer function. 3D user interfaces are actual physical devices that allow the user and the system to communicate with minimal delay. There are two types: 3D user interface output hardware and 3D user interface input hardware.

3D user interface output hardware
Output devices, also called display devices, allow the machine to provide information or feedback to one or more users through the human perceptual system. Most of them are focused on stimulating the visual, auditory, or haptic senses. However, in some unusual cases they can also stimulate the user's olfactory system.

3D visual displays
These devices are the most popular, and their goal is to present the information produced by the system through the human visual system in a three-dimensional way. The main features that distinguish these devices are: field of regard and field of view, spatial resolution, screen geometry, light transfer mechanism, refresh rate, and ergonomics. Another way to characterize these devices is according to the different categories of depth perception cues used to enable the user to understand the three-dimensional information.
The main types of displays used in 3D user interfaces are monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays, and autostereoscopic displays. Virtual reality headsets and CAVEs (Cave Automatic Virtual Environments) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays allow users to see both; monitors and workbenches are examples of semi-immersive displays.

3D audio displays
3D audio displays are devices that present information (in this case, sound) through the human auditory system; they are especially useful when supplying location and spatial information to the users. The objective is to generate and display spatialized 3D sound so that the user can use their psychoacoustic skills to determine the location and direction of the sound. There are different localization cues: binaural cues, spectral and dynamic cues, head-related transfer functions, reverberation, sound intensity, and vision and environment familiarity. Adding a background audio component to a display also adds to the sense of realism.

3D haptic displays
These devices use the sense of touch to simulate the physical interaction between the user and a virtual object. There are three different types of 3D haptic displays: those that provide the user with a sense of force, those that simulate the sense of touch, and those that use both. The main features that distinguish these devices are haptic presentation capability, resolution, and ergonomics. The human haptic system relies on two fundamental kinds of cues, tactile and kinesthetic. Tactile cues come from the wide variety of skin receptors located below the surface of the skin that provide information about texture, temperature, pressure, and damage. Kinesthetic cues come from the many receptors in the muscles, joints, and tendons that provide information about the angle of joints and the stress and length of muscles.

3D user interface input hardware
These hardware devices are called input devices, and their aim is to capture and interpret the actions performed by the user. The degrees of freedom (DOF) are one of the main features of these systems. Classical interface components (such as the mouse and keyboard, and arguably the touchscreen) are often inappropriate for non-2D interaction needs. These systems are also differentiated according to how much physical interaction is needed to use the device: purely active devices need to be manipulated to produce information, while purely passive ones do not. The main categories of these devices are standard (desktop) input devices, tracking devices, control devices, navigation equipment, gesture interfaces, 3D mice, and brain–computer interfaces.

Desktop input devices
This type of device is designed for 3D interaction on a desktop. Many of them were initially designed for traditional interaction in two dimensions, but with an appropriate mapping between the system and the device, they can work well in three dimensions. There are different types: keyboards, 2D mice and trackballs, pen-based tablets and styluses, and joysticks. Nonetheless, many studies have questioned the appropriateness of desktop interface components for 3D interaction, though this is still debated.
Tracking devices
3D user interaction systems are based primarily on motion tracking technologies, which obtain all the necessary information from the user through the analysis of their movements or gestures. Trackers detect or monitor head, hand, or body movements and send that information to the computer, which translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important for presenting the correct viewpoint and for coordinating the spatial and sound information presented to users, as well as the tasks or functions they can perform. 3D trackers can be mechanical, magnetic, ultrasonic, optical, or hybrid inertial. Examples of trackers include motion trackers, eye trackers, and data gloves. A simple 2D mouse may be considered a navigation device if it allows the user to move to a different location in a virtual 3D space. Navigation devices such as the treadmill and the bicycle make use of the natural ways that humans travel in the real world: treadmills simulate walking or running, and bicycles or similar equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user's location and movements in virtual space. Wired gloves and bodysuits allow gestural interaction to occur; they send hand or body position and movement information to the computer using sensors.

For the full development of a 3D user interaction system, access is required, at least partially, to a few basic parameters: the user's relative position, absolute position, angular velocity, rotation data, orientation, and height. These data are collected through spatial tracking systems and sensors of multiple forms. The ideal system for this type of interaction is one based on position tracking with six degrees of freedom (6-DOF); such systems are characterized by the ability to obtain the user's absolute 3D position and, in this way, information on all possible three-dimensional field angles. These systems can be implemented using various technologies, such as electromagnetic fields or optical or ultrasonic tracking, but all share a main limitation: they need a fixed external reference, whether a base, an array of cameras, or a set of visible markers, so they can only be used in prepared areas. Inertial tracking systems do not require such an external reference; they rely on data collected from accelerometers, gyroscopes, or video cameras, with no mandatory fixed reference point. In most cases, the main problem with such systems is that they do not obtain absolute position: since they are not tied to any pre-set external reference point, they always obtain the user's relative position, which causes cumulative errors in the data sampling process.
The goal for a 3D tracking system is a 6-DOF system able to obtain absolute positioning and precise movement and orientation over a very large uninterrupted space. A good example of a rough approximation is a mobile phone: it has all the motion capture sensors and GPS tracking of latitude, but these systems are currently not accurate enough to capture data with centimeter precision and are therefore inadequate. However, several systems adapt closely to these objectives; the determining factor is that the systems are self-contained, i.e. all-in-one, and do not require a fixed prior reference. These systems include the following:

Nintendo Wii Remote ("Wiimote")
The Wii Remote does not offer 6-DOF tracking, since, again, it cannot provide absolute position; instead, it is equipped with a multitude of sensors which turn a 2D device into a great tool for interaction in 3D environments. The device has gyroscopes to detect rotation of the user, ADXL3000 accelerometers for obtaining the speed and movement of the hands, optical sensors for determining orientation, and electronic compasses and infra-red devices to capture position. This type of device can be affected by external references from infra-red light bulbs or candles, causing errors in the accuracy of the position.

Google Tango devices
The Tango Platform is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP) group, a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore be used to provide 6-DOF input, which can also be combined with the device's multi-touch screen. Google Tango devices can be seen as more integrated solutions than the early prototypes that combined spatially tracked devices with touch-enabled screens for 3D environments.

Microsoft Kinect
The Microsoft Kinect offers a different motion capture technology for tracking. Instead of relying on worn sensors, it is based on a structured-light scanner, located in a bar, which allows tracking of the entire body through the detection of about 20 spatial points, for each of which three different degrees of freedom are measured to obtain position, velocity, and rotation. Its main advantages are its ease of use and that no device needs to be attached to the user; its main disadvantage is its inability to detect the user's orientation, which limits certain spatial and guidance functions.

Leap Motion
The Leap Motion is a hand-tracking system designed for small spaces, allowing new kinds of interaction with 3D environments in desktop applications, and offering great fluidity when browsing three-dimensional environments in a realistic way. It is a small device that connects via USB to a computer and uses two cameras with infra-red LED illumination to analyse a roughly hemispherical area extending about 1 meter above its surface, recording at 300 frames per second. The information is sent to the computer to be processed by the company's specific software.

3D interaction techniques
3D interaction techniques are the different ways the user can interact with the 3D virtual environment to execute different kinds of tasks.
The quality of these techniques has a profound effect on the quality of the entire 3D user interface. They can be classified into three different groups: navigation, selection and manipulation, and system control.

Navigation
The computer needs to provide the user with information regarding location and movement. Navigation is the task most used in big 3D environments, and it presents different challenges: supporting spatial awareness, providing efficient movement between distant places, and making navigation bearable so the user can focus on more important tasks. Navigation tasks can be divided into two components: travel and wayfinding. Travel involves moving from the current location to the desired point. Wayfinding refers to finding and setting routes to get to a travel goal within the virtual environment.

Travel
Travel is a conceptual technique that consists in the movement of the viewpoint (virtual eye, virtual camera) from one location to another. This orientation is usually handled in immersive virtual environments by head tracking. There are five types of travel interaction technique:
Physical movement: uses the user's body motion to move through the virtual environment. This is an appropriate technique when an augmented perception of the feeling of being present is needed, or when the user needs to make physical effort during a simulation.
Manual viewpoint manipulation: the user's hand movements determine the movement in the virtual environment. One example is when the user moves their hands in a way that suggests grabbing a virtual rope and pulling themself up. This technique can be easy to learn and efficient, but can cause fatigue.
Steering: the user has to constantly indicate in what direction to move. This is a common and efficient technique. One example is gaze-directed steering, where the head orientation determines the direction of travel.
Target-based travel: the user specifies a destination point, and the viewpoint moves to the new location. The travel can be executed by teleport, where the user is instantly moved to the destination point, or the system can execute a stream of transition movements to the destination. These techniques are very simple from the user's point of view, because they only have to indicate the destination.
Route planning: the user specifies the path that should be taken through the environment, and the system executes the movement. The user may draw a path on a map. This technique allows users to control travel while retaining the ability to do other tasks during motion.

Wayfinding
Wayfinding is the cognitive process of defining a route through the surrounding environment, using and acquiring spatial knowledge to construct a cognitive map of the environment. In virtual space it is different and more difficult than in the real world, because synthetic environments often lack the perceptual cues and movement constraints of real ones. It can be supported using user-centered techniques, such as a larger field of view and motion cues, or environment-centered techniques, such as structural organization and wayfinding principles. For good wayfinding, users should be given wayfinding supports during virtual environment travel to compensate for the constraints of the virtual world.
These supports can be user-centered, such as a large field of view or even non-visual support such as audio, or environment-centered: artificial cues and structural organization that clearly define the different parts of the environment. Some of the most used artificial cues are maps, compasses, and grids, or even architectural cues like lighting, color, and texture.

Selection and manipulation
Selection and manipulation techniques for 3D environments must accomplish at least one of three basic tasks: object selection, object positioning, and object rotation. Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object; sometimes rotation of the object is involved as well. Direct-hand manipulation is the most natural technique, because manipulating physical objects with the hand is intuitive for humans. However, this is not always possible. A virtual hand that can select and relocate virtual objects will work as well. 3D widgets can be used to put controls on objects; these are usually called 3D gizmos or manipulators (a good example being the ones in Blender). Users can employ them to relocate, rescale, or reorient an object (translate, scale, rotate). Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object.

Selection
The task of selecting objects or 3D volumes in a 3D environment requires first being able to find the desired target and then being able to select it. Most 3D datasets and environments suffer from occlusion problems, so the first step of finding the target relies on manipulation of the viewpoint or of the 3D data itself in order to properly identify the object or volume of interest. This initial step is thus tightly coupled with manipulations in 3D. Once the target is visually identified, users have access to a variety of techniques to select it. Usually, the system provides the user with a 3D cursor represented as a human hand whose movements correspond to the motion of the hand tracker. This virtual hand technique is rather intuitive because it simulates real-world interaction with objects, but with the limitation that only objects inside a reach area can be selected. To avoid this limit, many techniques have been suggested, such as the Go-Go technique. This technique allows the user to extend the reach area using a non-linear mapping of the hand: when the user extends the hand beyond a fixed threshold distance, the mapping becomes non-linear and the reach grows (a short sketch of this mapping follows below). Another technique for selecting and manipulating objects in 3D virtual spaces consists in pointing at objects using a virtual ray emanating from the virtual hand. When the ray intersects an object, the object can be manipulated. Several variations of this technique have been made, like the aperture technique, which uses a conic pointer directed from the user's eyes, estimated from the head location, to select distant objects. This technique also uses a hand sensor to adjust the size of the conic pointer. Many other techniques, relying on different input strategies, have also been developed.

Manipulation
3D manipulation occurs before a selection task (in order to visually identify a 3D selection target) and after a selection has occurred, to manipulate the selected object.
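A minimal sketch of the Go-Go reach mapping described under Selection above: within a threshold distance the virtual hand tracks the real hand one-to-one; beyond it, reach grows quadratically. The threshold D and gain K below are illustrative values, not prescribed ones.

```python
D = 0.45  # m: threshold distance from the user's torso (assumed value)
K = 6.0   # 1/m: gain of the non-linear region (assumed value)

def gogo_virtual_distance(r_real):
    """Map real hand distance (m) to virtual hand distance (m)."""
    if r_real < D:
        return r_real                        # linear, one-to-one zone
    return r_real + K * (r_real - D) ** 2    # non-linear extension zone

for r in (0.3, 0.5, 0.7):
    print(f"real {r:.1f} m -> virtual {gogo_virtual_distance(r):.3f} m")
```

The quadratic term is what lets small additional arm extension reach distant objects while keeping near-field manipulation precise.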
Manipulation 3D manipulation occurs both before a selection task (in order to visually identify a 3D selection target) and after a selection has occurred, to manipulate the selected object. 3D manipulation requires 3 DOF for rotations (1 DOF per axis, namely x, y, z) and 3 DOF for translations (1 DOF per axis), plus at least 1 additional DOF for uniform zoom (or alternatively 3 additional DOF for non-uniform zoom operations). 3D manipulation, like navigation, is one of the essential tasks with 3D data, objects or environments. It is the basis of many widely used 3D software packages, such as Blender, Autodesk products and VTK. These packages, available mostly on computers, are thus almost always combined with a mouse and keyboard. Because the mouse offers only 2 DOF, this software relies on modifier keys in order to separately control all the DOFs involved in 3D manipulation. With the recent advent of multi-touch-enabled smartphones and tablets, the interaction mappings of this software have been adapted to multi-touch, which offers more simultaneous DOF manipulations than a mouse and keyboard. A 2017 survey of 36 commercial and academic mobile applications on Android and iOS, however, suggested that most applications did not provide a way to control the minimum 6 DOFs required, but that among those which did, most made use of a 3D version of the RST (Rotation Scale Translation) mapping: one finger is used for rotation around x and y, while two-finger interaction controls rotation around z and translation along x, y, and z. System Control System control techniques allow the user to send commands to an application, activate some functionality, change the interaction (or system) mode, or modify a parameter. Sending a command always includes the selection of an element from a set. System control techniques, that is, techniques supporting system control tasks in three dimensions, can be categorized into four groups: Graphical menus: visual representations of commands. Voice commands: menus accessed via voice. Gestural interaction: commands accessed via body gesture. Tools: virtual objects with an implicit function or mode. There are also hybrid techniques that combine several of these types. Symbolic input This task allows the user to enter and/or edit, for example, text, making it possible to annotate 3D scenes or 3D objects. See also Finger tracking Interaction technique Interaction design Human–computer interaction Cave Automatic Virtual Environment (CAVE) Virtual reality References Reading List 3D Interaction With and From Handheld Computers. Visited March 28, 2008 Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2001, February). An Introduction to 3-D User Interface Design. Presence, 10(1), 96–108. Bowman, D., Kruijff, E., LaViola, J., Poupyrev, I. (2005). 3D User Interfaces: Theory and Practice. Boston: Addison–Wesley. Bowman, Doug. 3D User Interfaces. Interaction Design Foundation. Retrieved October 15, 2015 Burdea, G. C., Coiffet, P. (2003). Virtual Reality Technology (2nd ed.). New Jersey: John Wiley & Sons Inc. Carroll, J. M. (2002). Human–Computer Interaction in the New Millennium. New York: ACM Press Csisinko, M., Kaufmann, H. (2007, March). Towards a Universal Implementation of 3D User Interaction Techniques [Proceedings of Specification, Authoring, Adaptation of Mixed Reality User Interfaces Workshop, IEEE VR]. Charlotte, NC, USA. Interaction Techniques. DLR - Simulations- und Softwaretechnik. Retrieved October 18, 2015 Larijani, L. C. (1993). The Virtual Reality Primer. United States of America: R. R. Donnelley and Sons Company. Rhijn, A. van (2006). Configurable Input Devices for 3D Interaction using Optical Tracking.
Eindhoven: Technische Universiteit Eindhoven. Stuerzlinger, W., Dadgari, D., Oh, J-Y. (2006, April). Reality-Based Object Movement Techniques for 3D. CHI 2006 Workshop: "What is the Next Generation of Human–Computer Interaction?". Workshop presentation. The CAVE (CAVE Automatic Virtual Environment). Visited March 28, 2007 The Java 3-D Enabled CAVE at the Sun Centre of Excellence for Visual Genomics. Visited March 28, 2007 Vince, J. (1998). Essential Virtual Reality Fast. Great Britain: Springer-Verlag London Limited Virtual Reality. Visited March 28, 2007 Yuan, C., (2005, December). Seamless 3D Interaction in AR – A Vision-Based Approach. In Proceedings of the First International Symposium, ISVC (pp. 321–328). Lake Tahoe, NV, USA: Springer Berlin/ Heidelberg. External links Bibliography on 3D Interaction and Spatial Input The Inventor of the 3D Window Interface 1998 3DI Group 3D Interaction in Virtual Environments Human–computer interaction Virtual reality User interface techniques
3D user interaction
Engineering
5,486
47,047,358
https://en.wikipedia.org/wiki/Eigenoperator
In mathematics, an eigenoperator, A, of a matrix H is a linear operator such that [H, A] = λA, where λ is a corresponding scalar called an eigenvalue. References Linear algebra Matrix theory
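The defining commutator relation can be checked numerically. The following is a minimal illustration invented for this purpose: H is a two-level diagonal matrix and A the ladder operator between its levels, so [H, A] equals the level spacing times A.

```python
import numpy as np

H = np.diag([0.0, 1.0])        # diagonal matrix with eigenvalues 0 and 1
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])     # ladder operator mapping level 0 to level 1

commutator = H @ A - A @ H     # [H, A]
lam = 1.0                      # expected eigenvalue: the level spacing 1 - 0

assert np.allclose(commutator, lam * A)   # A is an eigenoperator of H with eigenvalue 1
```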
Eigenoperator
Mathematics
41
49,676,205
https://en.wikipedia.org/wiki/NBDY
Negative regulator of P-body association is a protein that in humans is encoded by the NBDY gene. References Proteins
NBDY
Chemistry
26
22,265,290
https://en.wikipedia.org/wiki/Push%20broom%20scanner
A push broom scanner, also known as an along-track scanner, is a device for obtaining images with spectroscopic sensors. The scanners are regularly used for passive remote sensing from space, and in spectral analysis on production lines, for example with near-infrared spectroscopy used to identify contaminated food and feed. The moving scanner line in a traditional photocopier (or a scanner or facsimile machine) is also a familiar, everyday example of a push broom scanner. Push broom scanners and the whisk broom scanner variant (also known as across-track scanners) are often contrasted with staring arrays (such as in a digital camera), which image objects without scanning and are more familiar to most people. In orbital push broom sensors, a line of sensors arranged perpendicular to the flight direction of the spacecraft is used. Different areas of the surface are imaged as the spacecraft flies forward. A push broom scanner can gather more light than a whisk broom scanner because it looks at a particular area for a longer time, like a long exposure on a camera. One drawback of push broom sensors is the varying sensitivity of the individual detectors. Another drawback is that the resolution is lower than that of a whisk broom scanner, because each image line is captured in a single exposure rather than being built up detector by detector. Examples of spacecraft cameras using push broom imagers include Mars Express's High Resolution Stereo Camera, Lunar Reconnaissance Orbiter Camera NAC, Mars Global Surveyor's Mars Orbiter Camera WAC, and the Multi-angle Imaging SpectroRadiometer on board the Terra satellite. See also Time delay and integration Whisk broom scanner References External links Earth Observing-1 (NASA), with animated whisk broom and push broom illustrations Airborne Pushbroom Line Scan (PDF) – overview article Linear Pushbroom Cameras (PDF) – detailed modelling theory Spectrometers Image sensors
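The along-track principle is easy to simulate. The toy sketch below (not tied to any particular instrument) reads one cross-track line per time step and stacks the lines, which is exactly how a push broom image is assembled.

```python
import numpy as np

def push_broom_image(scene):
    """Simulate a push broom scanner over `scene` (rows = along-track
    positions, columns = cross-track pixels). At each time step the
    linear detector array reads out one full cross-track line."""
    lines = [scene[t, :].copy() for t in range(scene.shape[0])]
    return np.vstack(lines)     # stacking the lines rebuilds the image

scene = np.random.rand(5, 8)    # 5 along-track steps, 8 cross-track detectors
assert np.array_equal(push_broom_image(scene), scene)
```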
Push broom scanner
Physics,Chemistry
381
2,137,681
https://en.wikipedia.org/wiki/Pauson%E2%80%93Khand%20reaction
The Pauson–Khand (PK) reaction is a chemical reaction described as a [2+2+1] cycloaddition. In it, an alkyne, an alkene, and carbon monoxide combine into an α,β-cyclopentenone in the presence of a metal-carbonyl catalyst. Ihsan Ullah Khand (1935–1980) discovered the reaction around 1970, while working as a postdoctoral associate with Peter Ludwig Pauson (1925–2013) at the University of Strathclyde in Glasgow. Pauson and Khand's initial findings were intermolecular in nature, but the reaction has poor selectivity; some modern applications instead apply the reaction for intramolecular ends. The traditional reaction requires a stoichiometric amount of dicobalt octacarbonyl, stabilized by a carbon monoxide atmosphere. Catalytic metal quantities, enhanced reactivity and yield, or stereoinduction are all possible with the right chiral auxiliaries, choice of transition metal (Ti, Mo, W, Fe, Co, Ni, Ru, Rh, Ir and Pd), and additives. Mechanism While the mechanism has not yet been fully elucidated, Magnus' 1985 explanation is widely accepted for both mono- and dinuclear catalysts, and was corroborated by computational studies published by Nakamura and Yamanaka in 2001. The reaction starts with a dicobalt hexacarbonyl–acetylene complex. Binding of an alkene gives a metallacyclopentene complex. CO then migratorily inserts into an M–C bond, and reductive elimination delivers the cyclopentenone. Typically, the dissociation of carbon monoxide from the organometallic complex is rate limiting. Selectivity The reaction works with both terminal and internal alkynes, although internal alkynes tend to give lower yields. The order of reactivity for the alkene is: strained cyclic > terminal > disubstituted > trisubstituted. Tetrasubstituted alkenes and alkenes with strongly electron-withdrawing groups are unsuitable. With unsymmetrical alkenes or alkynes, the reaction is rarely regioselective, although some patterns can be observed. For mono-substituted alkenes, alkyne substituents typically direct: larger groups prefer the C2 position, and electron-withdrawing groups prefer the C3 position. But the alkene itself struggles to discriminate between the C4 and C5 positions, unless the C2 position is sterically congested or the alkene has a chelating heteroatom. The reaction's poor selectivity is ameliorated in intramolecular reactions. For this reason, the intramolecular Pauson–Khand is common in total synthesis, particularly for the formation of 5,5- and 6,5-membered fused bicycles. Generally, the reaction is highly syn-selective about the bridgehead hydrogen and substituents on the cyclopentane. Appropriate chiral ligands or auxiliaries can make the reaction enantioselective; BINAP is commonly employed. Additives Typical Pauson–Khand conditions are elevated temperatures and pressures in aromatic hydrocarbon (benzene, toluene) or ethereal (tetrahydrofuran, 1,2-dichloroethane) solvents. These harsh conditions may be attenuated with various additives. Absorbent surfaces Adsorbing the metallic complex onto silica or alumina can enhance the rate of decarbonylative ligand exchange, because the donor sits on a solid surface (i.e., silica). Additionally, using a solid support restricts conformational movement (a rotamer effect). Lewis bases Traditional catalytic aids such as phosphine ligands make the cobalt complex too stable, but bulky phosphite ligands are operable.
Lewis basic additives, such as n-BuSMe, are also believed to accelerate the decarbonylative ligand exchange process. However, an alternative view holds that the additives instead make olefin insertion irreversible. Sulfur compounds are typically hard to handle and smelly, but n-dodecyl methyl sulfide and tetramethylthiourea do not suffer from those problems and can improve reaction performance. Amine N-oxides The two most common amine N-oxides are N-methylmorpholine N-oxide (NMO) and trimethylamine N-oxide (TMANO). It is believed that these additives remove carbon monoxide ligands via nucleophilic attack of the N-oxide onto the CO carbonyl, oxidizing the CO into CO2 and generating an unsaturated organometallic complex. This renders the first step of the mechanism irreversible and allows for milder conditions. Hydrates of the aforementioned amine N-oxides have a similar effect. N-oxide additives can also improve enantio- and diastereoselectivity, although the mechanism is not clear. Alternative catalysts Co4(CO)12 and Co3(CO)9(μ3-CH) also catalyze the PK reaction, although Takayama et al. detail a reaction catalyzed by dicobalt octacarbonyl. One stabilization method is to generate the catalyst in situ. Chung reports that Co(acac)2 can serve as a precatalyst, activated by sodium borohydride. Other metals Some other metal catalysts require a silver triflate co-catalyst to effect the Pauson–Khand reaction. Molybdenum hexacarbonyl is a carbon monoxide donor in PK-type reactions between allenes and alkynes with dimethyl sulfoxide in toluene. Titanium, nickel, and zirconium complexes also admit the reaction, and still other metals can be employed in these transformations. Substrate tolerance In general, allenes support the Pauson–Khand reaction; regioselectivity is determined by the choice of metal catalyst. Density functional investigations show the variation arises from different transition-state metal geometries. Heteroatoms are also acceptable: Mukai et al.'s total synthesis of physostigmine applied the Pauson–Khand reaction to a carbodiimide. Cyclobutadiene also lends itself to a [2+2+1] cycloaddition, although this reactant is too reactive to store in bulk. Instead, cyclobutadiene is generated in situ by decomplexation of the stable cyclobutadiene iron tricarbonyl with ceric ammonium nitrate (CAN). An example of a newer version is the use of the chlorodicarbonylrhodium(I) dimer, [(CO)2RhCl]2, in the synthesis of (+)-phorbol by Phil Baran. In addition to using a rhodium catalyst, this synthesis features an intramolecular cyclization that delivers the normal 5-membered α,β-cyclopentenone as well as a 7-membered ring. Carbon monoxide generation in situ The cyclopentenone motif can be prepared from aldehydes, carboxylic acids, and formates. These examples typically employ rhodium as the catalyst, as it is commonly used in decarbonylation reactions. The decarbonylation and PK reaction occur in the same reaction vessel. See also Nicholas reaction References Cycloadditions Multiple component reactions Name reactions
Pauson–Khand reaction
Chemistry
1,649
11,405,691
https://en.wikipedia.org/wiki/Electrical%20conductivity%20meter
An electrical conductivity meter (EC meter) measures the electrical conductivity in a solution. It has multiple applications in research and engineering, with common usage in hydroponics, aquaculture, aquaponics, and freshwater systems to monitor the amount of nutrients, salts or impurities in the water. Principle Common laboratory conductivity meters employ a potentiometric method and four electrodes. Often, the electrodes are cylindrical and arranged concentrically. The electrodes are usually made of platinum metal. An alternating current is applied to the outer pair of electrodes, and the potential between the inner pair is measured. Conductivity could in principle be determined from the distance between the electrodes and their surface area using Ohm's law, but for accuracy a calibration is generally employed using electrolytes of well-known conductivity. Industrial conductivity probes often employ an inductive method, which has the advantage that the fluid does not wet the electrical parts of the sensor. Here, two inductively-coupled coils are used. One is the driving coil producing a magnetic field; it is supplied with an accurately-known voltage. The other forms the secondary coil of a transformer. The liquid passing through a channel in the sensor forms one turn in the secondary winding of the transformer. The induced current is the output of the sensor. Another approach is to use four-electrode conductivity sensors made from corrosion-resistant materials. A benefit of four-electrode conductivity sensors compared to inductive sensors is scaling compensation and the ability to measure low (below 100 μS/cm) conductivities (a feature especially important when measuring near-100% hydrofluoric acid). Temperature dependence The conductivity of a solution is highly temperature dependent, so it is important either to use a temperature-compensated instrument or to calibrate the instrument at the same temperature as the solution being measured. Unlike metals, the conductivity of common electrolytes typically increases with increasing temperature. Over a limited temperature range, the way temperature affects the conductivity of a solution can be modeled linearly using the following formula: σT = σTcal [1 + α(T − Tcal)], where T is the temperature of the sample, Tcal is the calibration temperature, σT is the electrical conductivity at the temperature T, σTcal is the electrical conductivity at the calibration temperature Tcal, and α is the temperature compensation gradient of the solution. The temperature compensation gradient for most naturally occurring samples of water is about 2%/°C; however, it can range between 1 and 3%/°C. Compensation gradients have been tabulated for a number of common water solutions.
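As a minimal illustration of the formula above, the sketch below refers a raw reading back to a 25 °C calibration temperature; the sample numbers are invented for the example.

```python
def compensate(sigma_measured, temp_c, temp_cal_c=25.0, alpha_pct=2.0):
    """Refer a conductivity reading taken at temp_c back to the calibration
    temperature, using the linear model sigma_T = sigma_cal * (1 + alpha*(T - Tcal)).
    alpha_pct is the compensation gradient in %/degC (about 2 for most
    naturally occurring water samples)."""
    alpha = alpha_pct / 100.0
    return sigma_measured / (1.0 + alpha * (temp_c - temp_cal_c))

# Example: 530 uS/cm measured at 30 degC is about 481.8 uS/cm at 25 degC.
print(round(compensate(530.0, 30.0), 1))
```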
Conductivity measurement applications Conductivity measurement is a versatile tool in process control. The measurement is simple and fast, and most advanced sensors require only a little maintenance. The measured conductivity reading can be used to make various assumptions about what is happening in the process. In some cases it is possible to develop a model to calculate the concentration of the liquid. The concentration of pure liquids can be calculated when the conductivity and temperature are measured. Preset curves for various acids and bases are commercially available. For example, one can measure the concentration of high-purity hydrofluoric acid using conductivity-based concentration measurement [Zhejiang Quhua Fluorchemical, China Valmet Concentration 3300]. A benefit of conductivity- and temperature-based concentration measurement is the superior speed of inline measurement compared to an on-line analyzer. Conductivity-based concentration measurement has limitations. The concentration–conductivity dependence of most acids and bases is not linear. A conductivity-based measurement cannot determine on which side of the peak the measurement lies, and therefore measurement is only possible on a linear section of the curve. Kraft pulp mills use conductivity-based concentration measurement to control alkali additions at various stages of the cook. Conductivity measurement will not determine the specific amount of each alkali component, but it is a good indication of the amount of effective alkali (NaOH + ½Na2S, expressed as NaOH or Na2O) or active alkali (NaOH + Na2S, expressed as NaOH or Na2O) in the cooking liquor. The composition of the liquor varies between different stages of the cook. Therefore, it is necessary to develop a specific curve for each measurement point or to use commercially available products. The high pressure and temperature of the cooking process, combined with the high concentration of alkali components, put a heavy strain on conductivity sensors installed in the process. Scaling on the electrodes needs to be taken into account, otherwise the conductivity measurement drifts, requiring increased calibration and maintenance. See also Conductivity factor Salinometer Total dissolved solids TDS meter References External links ASTM D1125-23 Standard Test Methods for Electrical Conductivity and Resistivity of Water ASTM D5682 DIN 55667 Electrochemistry Measuring instruments
Electrical conductivity meter
Chemistry,Technology,Engineering
994
66,080,932
https://en.wikipedia.org/wiki/Bird%20of%20Washington
The Bird of Washington, Washington Eagle, or Great Sea Eagle (Falco washingtonii, F. washingtoniensis, F. washingtonianus, or Haliaetus washingtoni) was a putative species of sea eagle claimed in 1826 and published by John James Audubon in his famous work The Birds of America. It is no longer recognised as a valid species. Theories about its true nature include the following: It was a juvenile specimen or subspecies of the bald eagle (Haliaeetus leucocephalus). It was an invention, and the picture was plagiarised from a picture of a golden eagle in Rees's Cyclopædia. It was a genuine species, but it was rare and became extinct after Audubon's sightings. John James Audubon's painting of the bird was acquired by Sidney Dillon Ripley, and his family donated it to the Smithsonian American Art Museum in 1994. References Further reading Allen, J. A. 1870. "What is the 'Washington Eagle'?" The American Naturalist 4: pp 524–527. Audubon, J. J. 1828. "Notes on the Bird of Washington (Fálco Washingtoniàna), or Great American Sea Eagle." Magazine of Natural History 1: pp 115–120. Maruna, S. 2006. "Substantiating Audubon's Washington Eagle." Ohio Cardinal 29: pp 140–150. Cryptozoology Fictional birds of prey Scientific misconduct John James Audubon Ornithological fraud Hypothetical species
Bird of Washington
Technology,Biology
322
28,013,962
https://en.wikipedia.org/wiki/Lactarius%20fumosus
Lactarius fumosus, commonly known as the smoky milkcap, is a species of fungus in the family Russulaceae. Taxonomy The species was first described by American mycologist Charles Horton Peck in 1872. Lactarius fumosus var. fumosus is considered a synonym. Lactarius fumosus is the type species of the section Fumosi of the subgenus Plinthogalus of the genus Lactarius. It is commonly known as the "smoky milkcap". Description The cap is broadly convex to nearly plane, sometimes shallowly depressed. The margin (cap edge) is irregular, often wavy, and lobed or ribbed. The cap surface is dry, unpolished, azonate, usually becoming somewhat wrinkled with age, pale dingy yellow-brown to whitish overall, with a smoky tinge, sometimes with tawny olive, pinkish-buff, or dull brown areas. The gills are attached to subdecurrent (running slightly down the length of the stem), narrow, crowded together, whitish, becoming dingy yellow-buff, and staining reddish when bruised. The stem is long, thick, nearly equal, dry, dull, stuffed, colored like the cap, whitish towards the base, and staining reddish, though more slowly than the gills. The flesh is pale white, staining reddish-salmon when cut. Its odor is not distinctive, but the taste is variable: quickly acrid, then mild, then slowly becoming acrid again, or only very slowly and faintly acrid. The latex is white on exposure, unchanging, and stains tissues reddish. The spore print is pinkish-buff. The edibility is unknown. Microscopic characters The spores are 6–8 by 6–8 μm, spherical or nearly so, ornamented with ridges that form a partial reticulum, with prominences up to 1.5 μm high, hyaline (translucent), and amyloid. The cap cuticle is a palisade of cylindrical to club-shaped cells. Similar species Lactarius musciola has darker colors, and its gills do not stain reddish where bruised. Lactarius fuliginosus differs in having broad, subdistant gills. Habitat and distribution The fruit bodies of L. fumosus grow solitary, scattered, or in groups on the ground in woods from July to October. The fungus is widely distributed in eastern North America, and has also been reported from western Canada. Its frequency of occurrence is described as occasional. Its range extends south to northwestern Mexico, where it is found associated with Liquidambar, Magnolia, Acer, and Quercus species. Bioactive compounds Extracts of the fruit bodies are toxic to the corn earworm, Heliothis zea, and the large milkweed bug, Oncopeltus fasciatus. The insecticidal activity is thought to be caused by compounds called chromenes. See also List of Lactarius species References Cited text External links fumosus Fungi described in 1872 Fungi of North America Taxa named by Charles Horton Peck Fungus species
Lactarius fumosus
Biology
633
57,691,785
https://en.wikipedia.org/wiki/Muskingum%20River%20Navigation%20Historic%20District
The Muskingum River Navigation Historic District is a historic district in Ohio's Coshocton, Morgan, Muskingum, and Washington counties, which was listed on the National Register of Historic Places in 2007. The listing includes 12 contributing buildings, 32 contributing structures, and a contributing site. The "Muskingum River lock system was designated the first Navigation Historic District in the United States by the National Park Service." The Muskingum River Navigation System was also designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2001. It is traversed by the Muskingum River Water Trail. References Canals in Ohio National Register of Historic Places in Coshocton County, Ohio National Register of Historic Places in Morgan County, Ohio National Register of Historic Places in Muskingum County, Ohio National Register of Historic Places in Washington County, Ohio Historic districts on the National Register of Historic Places in Ohio Buildings and structures completed in 1816 Historic Civil Engineering Landmarks
Muskingum River Navigation Historic District
Engineering
195
2,991,258
https://en.wikipedia.org/wiki/Lead%20oxide
Lead oxides are a group of inorganic compounds with formulas including lead (Pb) and oxygen (O). Common lead oxides include: Lead(II) oxide, PbO, litharge (red), massicot (yellow) Lead tetroxide or red lead, Pb3O4, minium, which is a lead(II,IV) oxide and may be thought of as lead(II) orthoplumbate(IV), Pb2[PbO4], vivid orange crystals Lead dioxide (lead(IV) oxide), PbO2, dark-brown or black powder Less common lead oxides are: Lead sesquioxide, Pb2O3, which is a lead(II,IV) oxide as well (lead(II) metaplumbate(IV), Pb[PbO3]), reddish yellow Pb12O19, monoclinic, dark-brown or black crystals The so-called black lead oxide, which is a mixture of PbO and fine-powdered Pb metal and is used in the production of lead–acid batteries. Lead compounds Oxides
Lead oxide
Chemistry
206
44,307,291
https://en.wikipedia.org/wiki/Chemical%20reaction%20model
Chemical reaction models transform physical knowledge into a mathematical formulation that can be utilized in computational simulation of practical problems in chemical engineering. Computer simulation provides the flexibility to study chemical processes under a wide range of conditions. Modeling of a chemical reaction involves solving conservation equations describing convection, diffusion, and reaction source for each component species. Species transport equation The conservation equation for the local mass fraction Yi of each species takes the form ∂(ρYi)/∂t + ∇·(ρvYi) = −∇·Ji + Ri + Si, where Ri is the net rate of production of species i by chemical reaction and Si is the rate of creation by addition from the dispersed phase plus any user-defined source. Ji is the diffusion flux of species i, which arises due to concentration gradients and differs between laminar and turbulent flows; in turbulent flows, computational fluid dynamics also considers the effects of turbulent diffusivity. The net source of chemical species i due to reaction, Ri, which appears as the source term in the species transport equation, is computed as the sum of the reaction sources over the NR reactions among the species. Reaction models These reaction rates R can be calculated with the following models: Laminar finite rate model Eddy dissipation model Eddy dissipation concept Laminar finite rate model The laminar finite rate model computes the chemical source terms using Arrhenius expressions and ignores turbulence fluctuations. This model provides the exact solution for laminar flames but gives inaccurate solutions for turbulent flames, in which turbulence strongly affects the chemical reaction rates, because of the highly non-linear Arrhenius chemical kinetics. However, this model may be accurate for combustion with small turbulence fluctuations, for example supersonic flames. Eddy dissipation model The eddy dissipation model, or the Magnussen model, based on the work of Magnussen and Hjertager, is a turbulence-chemistry reaction model. Most fuels are fast burning, and the overall rate of reaction is controlled by turbulent mixing. In non-premixed flames, turbulence slowly mixes the fuel and oxidizer into the reaction zones, where they burn quickly. In premixed flames, the turbulence slowly mixes cold reactants and hot products into the reaction zones, where reaction occurs rapidly. In such cases the combustion is said to be mixing-limited, and the complex and often unknown chemical kinetics can be safely neglected. In this model, the chemical reaction is governed by the large-eddy mixing time scale. Combustion initiates whenever turbulence is present in the flow; it does not need an ignition source. This type of model is valid for non-premixed combustion, but for premixed flames the reactant is assumed to burn the moment it enters the computational domain, which is a shortcoming of this model, as in practice the reactant needs some time to reach the ignition temperature before combustion starts. Eddy dissipation concept The eddy dissipation concept (EDC) model is an extension of the eddy dissipation model to include detailed chemical mechanisms in turbulent flows. The EDC model attempts to incorporate the significance of fine structures in a turbulent reacting flow in which combustion is important. EDC has been proven efficient, without the need for changing its constants, for a great variety of premixed and diffusion-controlled combustion problems, both where the chemical kinetics is faster than the overall fine-structure mixing and in cases where the chemical kinetics has a dominating influence.
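In the laminar finite-rate model the source term reduces to Arrhenius kinetics. The sketch below evaluates a forward rate constant of the modified Arrhenius form k = A·T^β·exp(−Ea/(R·T)) and the rate of a simple one-step reaction; the constants are illustrative, not taken from any validated mechanism.

```python
import math

R_UNIVERSAL = 8.314  # universal gas constant, J/(mol K)

def arrhenius_k(A, beta, Ea, T):
    """Modified Arrhenius rate constant k = A * T**beta * exp(-Ea/(R*T)),
    the form used for chemical source terms in laminar finite-rate models."""
    return A * T**beta * math.exp(-Ea / (R_UNIVERSAL * T))

def one_step_rate(k, c_fuel, c_ox):
    """Rate of fuel + oxidizer -> products for molar concentrations in mol/m^3."""
    return k * c_fuel * c_ox

# Illustrative (made-up) parameters: A in m^3/(mol s), Ea in J/mol.
k = arrhenius_k(A=1.0e8, beta=0.0, Ea=1.2e5, T=1500.0)
print(one_step_rate(k, c_fuel=2.0, c_ox=4.0))
```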
References Ansys Fluent Help, Chapters 7, 8. Henk Kaarle Versteeg, Weeratunge Malalasekera. An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Magnussen, B. F. & B. H. Hjertager (1977). "On Mathematical Models of Turbulent Combustion with Special Emphasis on Soot Formation and Combustion". Symposium (International) on Combustion. 16 (1): 719–729. doi:10.1016/S0082-0784(77)80366-4. Bjørn F. Magnussen. Norwegian University of Science and Technology Trondheim (Norway), Computational Industry Technologies AS (ComputIT), The Eddy Dissipation Concept: A Bridge Between Science and Technology. Schlögl, Friedrich. "Chemical reaction models for non-equilibrium phase transitions." Zeitschrift für Physik 253.2 (1972): 147–161. Levenspiel, Octave. Chemical reaction engineering. Vol. 2. New York etc.: Wiley, 1972. Chemical reaction engineering Mathematical modeling
Chemical reaction model
Chemistry,Mathematics,Engineering
902
56,012,962
https://en.wikipedia.org/wiki/Graph%20polynomial
In mathematics, a graph polynomial is a graph invariant whose value is a polynomial. Invariants of this type are studied in algebraic graph theory. Important graph polynomials include: The characteristic polynomial, based on the graph's adjacency matrix. The chromatic polynomial, a polynomial whose values at integer arguments give the number of colorings of the graph with that many colors. The dichromatic polynomial, a 2-variable generalization of the chromatic polynomial The flow polynomial, a polynomial whose values at integer arguments give the number of nowhere-zero flows with integer flow amounts modulo the argument. The (inverse of the) Ihara zeta function, defined as a product of binomial terms corresponding to certain closed walks in a graph. The Martin polynomial, used by Pierre Martin to study Euler tours The matching polynomials, several different polynomials defined as the generating function of the matchings of a graph. The reliability polynomial, a polynomial that describes the probability of remaining connected after independent edge failures The Tutte polynomial, a polynomial in two variables that can be defined (after a small change of variables) as the generating function of the numbers of connected components of induced subgraphs of the given graph, parameterized by the number of vertices in the subgraph. See also Knot polynomial References Polynomials Graph invariants
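The chromatic polynomial entry above can be made concrete with a small computation. The naive sketch below (exponential time, fine for small graphs) uses the deletion–contraction recurrence P(G, k) = P(G − e, k) − P(G / e, k); on the triangle it returns k(k − 1)(k − 2) = 6 proper 3-colourings, as expected.

```python
def chromatic(vertices, edges, k):
    """Count proper k-colourings of a simple graph by deletion-contraction.
    `vertices` is a frozenset; `edges` is a frozenset of 2-element frozensets."""
    if not edges:
        return k ** len(vertices)          # edgeless graph: colour every vertex freely
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}                  # G - e: drop the edge
    # G / e: merge v into u and re-target v's remaining edges onto u.
    merged = frozenset(
        frozenset(u if x == v else x for x in edge) for edge in deleted
    )
    merged = frozenset(edge for edge in merged if len(edge) == 2)
    return (chromatic(vertices, deleted, k)
            - chromatic(vertices - {v}, merged, k))

triangle = frozenset({frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
assert chromatic(frozenset({1, 2, 3}), triangle, 3) == 6
```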
Graph polynomial
Mathematics
263
12,872,269
https://en.wikipedia.org/wiki/Sortir%20du%20nucl%C3%A9aire%20%28France%29
Sortir du nucléaire (; English "Nuclear phase-out") is a French federation of anti-nuclear groups. Founded in 1997 as a result of the success of the struggle against the Superphénix, the organisation regularly campaigns against the use of nuclear power in France and in the world. In September 2007, Sortir du nucléaire declined taking part in the talks with the French government, dubbed "Grenelle de l'environnement", in which major ecological organisations participated, because discussions about nuclear energy were forbidden by French president Nicolas Sarkozy. March 2007 protests against the EPR On March 17, 2007 simultaneous protests, organised by Sortir du nucléaire, were staged in 5 French towns to protest construction of EPR plants; Rennes, Lyon, Toulouse, Lille, and Strasbourg. Stop-EPR claimed that a total of over 60,000 people attended the rallies. The news outlet Evening Echo reported that it was a way to get the issue in the eye of candidates in the April–May two-round presidential elections of 2007. The largest crowd was in Rennes, close to Flamanville in Normandy, where preliminary construction on the EPR is underway. Organisers claimed the number of protesters in Rennes was 30,000 to 40,000. Police estimated the crowd at 10,000. See also Anti-nuclear movement Anti-nuclear movement in France Anti-nuclear movement in Germany Nuclear power in France References External links French web site (in French and English) : www.sortirdunucleaire.org Safety of EPR nuclear reactors Anti-nuclear organizations Environmental organizations based in France Nuclear energy in France Nuclear safety in France Anti-nuclear protests International Campaign to Abolish Nuclear Weapons
Sortir du nucléaire (France)
Engineering
356
35,742,786
https://en.wikipedia.org/wiki/German%20Society%20for%20Stem%20Cell%20Research
The German Society for Stem Cell Research (Deutsche Gesellschaft für Stammzellforschung or GSZ), established in 2003 by Juergen Hescheler, brings scientists from around Germany together and has an emphasis on basic research in stem cell biology. History In 2003 scientists from around Germany initiated the establishment of the German Society for Stem Cell Research with emphasis on basic research in stem cell biology. The society is a non-profit organisation, financially and politically autonomous, and has been registered with the district court of Cologne under the number VR 14639 since November 4, 2004. Juergen Hescheler is the chairman of the organisation. Objectives The main purpose of the society is to promote stem cell research. To achieve this goal, the society promotes stem cell research in basic science and in academic teaching by allocating available funds to support training programs, organizing seminars and conferences, and instigating the exchange of students and scientists at the national and international levels for collaborative projects and resulting publications. The society aims at establishing a nationwide network of scientists in stem cell research, eventually bringing them onto a single platform that can offer competent and independent counsel on all questions related to stem cell research. The Journal of Stem Cells & Regenerative Medicine is the official journal of the society. References External links 7th Fraunhofer Life Science Symposium Leipzig 2012 and 7th Annual Congress of the German Society for Stem Cell Research German Society for Regenerative Medicine Biology organisations based in Germany Scientific organizations established in 2003 Medical and health organisations based in North Rhine-Westphalia Stem cell research 2003 establishments in Germany Organisations based in Cologne Scientific societies based in Germany
German Society for Stem Cell Research
Chemistry,Biology
447
75,290,234
https://en.wikipedia.org/wiki/HD%20200073
HD 200073 (HR 8046; 43 G. Microscopii) is a solitary star located in the southern constellation Microscopium northwest of Zeta Microscopii. It is faintly visible to the naked eye as an orange-hued point of light with an apparent magnitude of 5.94. The object is located relatively close at a distance of 227 light-years based on Gaia DR3 parallax measurements, but it is receding with a heliocentric radial velocity of . At its current distance, HD 200073's brightness is diminished by an interstellar extinction of 0.13 magnitudes and it has an absolute magnitude of +1.79. It has a relatively high proper motion across the celestial sphere, moving at a rate of 213 mas/yr. HD 200073 has a stellar classification of K2 III, indicating that it is an evolved K-type giant that has exhausted hydrogen at its core and left the main sequence. Astronomer David Stanley Evans gave a class of K0 IV, indicating that it is a slightly evolved subgiant that is ceasing hydrogen fusion at its core. HD 200073 is currently on the red giant branch, fusing hydrogen in a shell around an inert helium core. It has a comparable mass to the Sun but at the age of 8.79 billion years, it has expanded to 9.15 times the radius of the Sun. It radiates 28.8 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 200073 is slightly metal-deficient with an iron abundance of [Fe/H] = −0.13 or 74.1% of the Sun's. It spins modestly with a projected rotational velocity of . References K-type giants Microscopium Microscopii, 43 CD-39 14079 200073 103836 8046 00115011683
HD 200073
Astronomy
401
197,025
https://en.wikipedia.org/wiki/Paisley%20%28design%29
Paisley or paisley pattern is an ornamental textile design using the boteh or buta, a teardrop-shaped motif with a curved upper end. Of Persian origin, paisley designs became popular in the West in the 18th and 19th centuries, following imports of post-Mughal Empire versions of the design from India, especially in the form of Kashmir shawls, and were then replicated locally. The English name for the patterns comes from the town of Paisley, in the west of Scotland, a centre for textiles where paisley designs were reproduced using jacquard looms. The pattern is still commonly seen in Britain and other English-speaking countries on men's ties, waistcoats, and scarves, and remains popular in other items of clothing and textiles in Iran and South and Central Asian countries. Origins Some design scholars believe the buta is the convergence of a stylized floral spray and a cypress tree: a Zoroastrian symbol of life and eternity. The "bent" cedar is also a sign of strength and resistance, but also of modesty. The floral motif originated in the Sassanid dynasty, was used later in the Safavid dynasty of Persia (1501–1736), and was a major textile pattern in Iran during the Qajar and Pahlavi dynasties. In these periods, the pattern was used to decorate royal regalia, crowns, and court garments, as well as textiles used by the general population. Persian and Central Asian designs usually range the motifs in orderly rows, with a plain background. Another theory is that the motif is based on the shape of a mango. Ancient Indo-Iranian origins There is significant speculation as to the origins and symbolism of boteh jegheh, or "ancient motif", known in English as paisley. Experts contest different time periods for its emergence, and to understand the growing popularity of the boteh jegheh design, and eventually of paisley, it is important to understand South Asian history. The early Indo-Iranian people flourished in South Asia, where they came to share linguistic, cultural, and even religious similarities. The ancient Indo-Iranian people shared a religion, Zoroastrianism, which some experts argue served as one of the earliest influences on boteh jegheh's design, with the shape representing the cypress tree, an ancient Zoroastrian religious symbol. Others contend that the earliest representation of the pattern's shape comes from the later Sassanid dynasty, where the design represented a tear drop. Some argue that boteh jegheh's origins stem from old religious beliefs and that it could symbolize the sun, a phoenix, or even an ancient Iranian religious sign for an eagle. Around the same time, a pattern called boteh was gaining popularity in Iran; the pattern was a floral design used to represent elite status, mostly serving to decorate royal objects. The pattern was traditionally woven onto silk clothing using silver and gold material. The earliest evidence of the design being traded with other cultures was found at the Red Sea, with both Egyptian and Greek peoples trading from the 1400s. Islamic control in South Asia and spread of the pattern Some of the earliest evidence of the pattern as it relates to Islamic culture has been found at the Noh Gumba mosque, in the city of Balkh in Afghanistan, where it is believed that the pattern was included in the design as early as the 800s, when the mosque was built.
In early Iranian culture, the design was woven onto termeh, one of the most valuable materials in early Iran, where it served to make clothing for the nobility. At this time, the Iranian nobility wore distinct uniforms called khalaat, and the design was historically common on khalaat uniforms. It is stated that at some point in the 1400s, boteh was transported from Persia to Kashmir. In the same century, some of the earliest recorded Kashmir shawls were produced in India; records from the 1500s, during Emperor Akbar's reign over the Mughals in this area, indicate that shawl making was already fashionable in India prior to the Mughal conquest, which took place in the early 1400s. It has been stated that during Emperor Akbar's reign over the Mughal empire, boteh jegheh shawls were extremely popular and fashionable. While a single shawl had traditionally been worn, Akbar took to wearing two shawls at a time as a status symbol. Along with wearing the shawls frequently, Emperor Akbar also gave shawls as gifts to other rulers and high officials. It is believed that by the 1700s, Kashmir shawls were produced in the image that someone today would associate with modern paisley. Introduction of boteh jegheh to Western culture In the 18th and 19th centuries, the British East India Company introduced Kashmir shawls from India to England and Scotland, where they were extremely fashionable and soon duplicated. The first place in the Western world to imitate the design was the town of Paisley in Scotland, Europe's top producer of textiles at this time. Before being produced in Paisley, thus gaining its name in Western culture, the paisley motif was originally referred to by Westerners simply as "pine and cone". European technological innovation in textile manufacturing made Western imitations of Kashmir shawls competitive with Indian-made shawls from Kashmir. The shawls from India could be quite expensive at the time, but, with the industrial revolution taking place in Europe, paisley shawls were manufactured on a large scale, so lowering their price that they became commonplace among the middle class and boosting the design's popularity even more. While the Western world appropriated much of Eastern culture and design, the boteh design was by far the most popular. Records indicate that William Moorcroft, an English businessman and explorer, visited the Himalayan mountains in the mid-1800s; upon his arrival, he was enthralled by boteh-adorned Kashmir shawls and tried to arrange for entire families of Indian textile workers to move to the United Kingdom. The earliest paisley shawls made in the United Kingdom, in Paisley, Scotland, were of fleece, a material with a soft, fluffy texture on one side. In Asia, paisley shawls were primarily worn by males, often in formal or ceremonial contexts, but in Europe they were primarily worn by women. While still closely resembling its original form, the paisley design changed once it began to be produced in Western culture, with different towns in the United Kingdom applying their own spin to the design. In the 1800s, European production of paisley increased, particularly in the Scottish town from which the pattern takes its modern name. Soldiers returning from the colonies brought home cashmere wool shawls from India, and the East India Company imported more.
The design was copied from the costly silk and wool Kashmir shawls and adapted first for use on handlooms and, after 1820, on Jacquard looms. The paisley pattern also appeared on European-made bandanas from the early 1800s, the patterns imitating Kashmir shawls. From roughly 1800 to 1850, the weavers of the town of Paisley in Renfrewshire, Scotland, became the foremost producers of paisley shawls. Unique additions to their hand-looms and Jacquard looms allowed them to work in five colours when most weavers were producing paisley using only two. The design became known as the Paisley pattern. By 1860, Paisley could produce shawls with 15 colours, which was still only a quarter of the number used in the multicolour paisleys then still being imported from Kashmir. In addition to the loom-woven fabric, the town of Paisley became a major site for the manufacture of printed cotton and wool in the 1800s, according to the Paisley Museum and Art Galleries. In this process, the paisley pattern was printed, rather than woven, onto textiles, including the cotton squares which were the precursors of the modern bandanna. Printed paisley was cheaper than the costly woven paisley, and this added to its popularity. The key centres of paisley printing were Britain and the Alsace region of France. The peak period of paisley as a fashionable design ended in the 1870s, perhaps because so many cheap printed versions were on the market. Modern use The 1960s proved to be a time of great revival for the paisley design in Western culture. Popular culture in the United States developed a fixation on Eastern cultures, including many traditionally Indian styles. Paisley was one of them, being worn by the likes of the Beatles; even the guitar company Fender used the design to decorate one of their most famous electric guitars, the Fender Telecaster. Today, Brad Paisley plays a Telecaster decorated in that pattern, and the design remains common, appearing on jewellery, suit ties, pocket books, cake decorations, tattoos, mouse pads for computers, scarves, and dresses. Paisley bandanas, long a fixture of cowboys, came in the latter twentieth century to be worn by many blue-collar and labor workers as protection from dust and were sported by entertainers popular with such workers, such as the country musician Willie Nelson. The motif also influences furniture design internationally, with many countries applying paisley decoration to wallpaper, pillows, curtains, bedspreads, and similar furnishings. Music In the mid- to late 1960s, paisley became identified with psychedelic style and enjoyed mainstream popularity, partly due to the Beatles. The style was particularly popular during the Summer of Love in 1967. The company Fender made a pink paisley version of their Telecaster guitar by sticking paisley wallpaper onto the guitar bodies. Prince paid tribute to the rock and roll history of paisley when he created the Paisley Park Records recording label and established Paisley Park Studios, both named after his 1985 song "Paisley Park". The Paisley Underground was a music scene active around the same time. Architecture Paisley was a favorite design element of British-Indian architect Laurie Baker. He made numerous drawings and collages of what he called "mango designs", and included the shape in the buildings he designed. Sports At the 2010 Winter Olympics, Azerbaijan's team sported colorful paisley trousers. It was the emblem of the 2012 FIFA U-17 Women's World Cup, held in Azerbaijan.
It was part of the emblem for the 2020 FIFA U-17 Women's World Cup, held in India. Other languages In Persian, boteh can be translated as shrub or bush, while in Kashmir the word carried the same meaning but was rendered as buta, or bu. The modern French words for paisley include boteh (meaning bush, cluster of leaves or a flower bud in Persian) and palme ("palm", which – along with the pine and the cypress – is one of the traditional botanical motifs thought to have influenced the shape of the paisley element as it is now known). In various languages of Bangladesh, India and Pakistan, the design's name is related to the word for mango: In Bengali: kalka In Telugu: mamidi pinde, young mango pattern In Tamil: mankolam, mango pattern In Marathi: koyari, mango seed In Sindhi: aami or ambri, small mango In Hindi/Urdu: carrey or kerii, unripe mango In Punjabi: ambi, from amb, mango. In Chinese, paisley is known as the "ham hock pattern" in mainland China, or the "amoeba pattern" in Taiwan. In Russia, this ornament is known as "cucumbers". References Sources Dusenbury, Mary M.; Bier, Carol, Flowers, Dragons & Pine Trees: Asian Textiles in the Spencer Museum of Art, 2004, Hudson Hills, p. 48 Petrie, F. "Origin of the Book of the Dead". Ancient Egypt, June 1926, part 2, pp. 41–45. Ashurbeyli, S. "New research on the history of Baku and the Maiden Tower". Almanac of Arts. 1972. In Russian. Ashurbeyli, S. "On the dating and purpose of Giz Galasy in the fortress". Elm. 1974. In Russian. External links 17th-century fashion 18th-century fashion 19th-century fashion 20th-century fashion 21st-century fashion Hippie movement Renfrewshire Persian culture Tamil culture Scottish design Scottish clothing Textile patterns Visual motifs Paisley, Renfrewshire
Paisley (design)
Mathematics
2,542
37,059,545
https://en.wikipedia.org/wiki/Diary%20studies
Diary studies is a research method that collects qualitative information by having participants record entries about their everyday lives in a log, diary or journal about the activity or experience being studied. This collection of data uses a longitudinal technique, meaning participants are studied over a period of time. Although this research tool cannot provide results as detailed as a true field study, it can still offer a vast amount of contextual information without the costs of a true field study. Diary studies are also known as experience sampling or ecological momentary assessment (EMA) methodology. Traditionally, diary studies involved participants keeping a written diary of events; the emergence of smartphones, however, now enables participants to diary with photos, videos and text using a variety of online or offline apps and tools. Since diary entries are recorded sequentially over time, diary studies can be used to investigate time-based phenomena, temporal dynamics, and fluctuating phenomena such as moods. Diary studies can also be employed together with other research techniques within a mixed-method framework and are particularly useful in obtaining rich subjective data. For instance, the experience sampling method (ESM) combines diaries with questionnaires to gather data and examine people's experiences in daily life. History An early example of a diary study was "How Workingmen Spend Their Time" (1913) by George Esdras Bevans. Background Diary studies originate from the fields of psychology and anthropology. In the field of human–computer interaction (HCI), diary studies have been adopted as one method of learning about user needs towards designing more appropriate technologies. Temporal processes in diary studies A key characteristic of diary studies is their ability to track daily events over time. Researchers have begun conducting studies that allow them to explore the connection between a previous day's events and a current day's outcome, or to what extent prior events make people responsive to current events. Researchers Robert E. Wickham and C. Raymond Knee have concluded that future research would benefit from evaluating temporal processes, or processes related to time, in diary studies, as this would serve as a way for researchers to test unique questions and hypotheses. Evaluating within-person change in diaries Researchers have been able to use diary studies to evaluate how people change over time. Traditional diary studies have evaluated change between individuals, but newer studies have been conducted to evaluate within-person changes using diary studies. Through a framework based on generalizability theory, researchers have used a condensed version of the Profile of Mood States (POMS) to study within-person emotional changes via diaries. Types of diary studies Feedback studies Feedback studies involve participants answering predefined questions about the phenomenon of interest in a natural setting, with the answers acting as a diary entry. This is usually at assigned times, frequencies, or occurrences of the phenomenon, stated by the researcher. The most common method is using 'paper and pencil', although some studies both utilise and suggest other technologies such as mobile phones or Psion Organisers. As such, feedback studies involve asynchronous communication between the participants and the researchers, as the participants' data is recorded in their diary first, and then passed on to the researchers once complete.
Feedback studies are scalable - that is, a large-scale sample can be used, since it is mainly the participants themselves who are responsible for collecting and recording data. Elicitation studies In elicitation studies, participants capture media as soon as the phenomenon occurs. The media is usually in the form of a photograph but can take other forms as well, so the recording is generally quick and less effortful than in feedback studies. These media are then used as prompts and memory cues - to elicit memories and discussion - in interviews that take place much later. As such, elicitation studies involve synchronous communication between the participants and the researchers, usually through interviews. In these later interviews, the media and other memory cues (such as what activities were done before and after the event) can improve participants' episodic memory. In particular, photos were found to elicit more specific recall than all other media types. Comparisons There are two prominent trade-offs between the types of study. Feedback studies involve answering questions more frequently and in situ, therefore enabling more accurate recall but more effortful recording. In contrast, elicitation studies involve quickly capturing media in situ but answering questions much later, therefore enabling less effortful recording but potentially inaccurate recall. When to use diary studies Diary studies are most often used when observing behavior over time in a natural environment. They can be beneficial when one is looking for new qualitative and quantitative data. Diary studies aim to measure people's behavior over an extended period. They provide the opportunity for exploratory research, collecting large quantities of precise data that are both in-depth and contextual. What makes diary studies particularly unique is that these substantial amounts of data are collected at the micro-level. When the subject of research undergoes a change, diary studies become especially interesting because they allow for the measurement of change over an extended period and the observation of its effects on the individual; this is referred to as within-subject analysis. Additionally, it is possible to conduct between-subjects analysis to observe differing effects among respondents. A diary study offers an advantage over a traditional survey study in that it allows for the collection of data on a daily basis or even multiple times a day. In contrast, a survey study typically gathers data at a single point in time or, in the case of a longitudinal study, with time lags spanning months or years. Advantages The advantages of diary studies are numerous. They allow: collecting longitudinal and temporal information; reporting events and experiences in context and in-the-moment; participants to diary their behaviours, thoughts and feelings in-the-moment, thereby minimising the potential for post-rationalisation; determining the antecedents, correlations, and consequences of daily experiences and behaviors.
Further advantages include the following: respondents are not coerced; regular reporting leads to a rich data collection; an autonomous report is created, with no influence of social desirability; it is unobtrusive, with little distortion due to setting or investigations; it is suitable for small-n, medium-n, or even large-n designs (depending on the purpose and analysis method); and it is a unique window on human phenomenology. Limitations There are some limitations of diary studies, mainly due to their reliance on memory and self-report measures. There is low control, low participation, and a risk of disturbing the action being studied. In feedback studies, it can be troubling and disturbing to write everything down; this is called respondent burden. Inaccurate recall The validity of diary studies rests on the assumption that participants will accurately recall and record their experiences. This is somewhat more easily enabled by the fact that diaries are completed and media is captured in a natural environment and closer, in real time, to any occurrences of the phenomenon of interest. However, there are multiple barriers to obtaining accurate data, such as: Social desirability bias - where participants may answer in a way that makes them appear more socially desirable. This may be more prominent in longitudinal studies where participants frequently communicate with researchers. Recall bias - where participants recall events or feelings inaccurately due to systematic errors. Human memory is unreliable, as it is an active reconstruction which is susceptible to errors and biases. For example, after recording their emotions in daily diaries, older adults tended to recall more positive emotions than younger adults, while younger adults tended to recall more negative emotions than older adults. The phenomenon of interest itself - participants may find it difficult to both record and experience the phenomenon accurately at the same time. For example, a diary study assessing the drinking urges of recently treated alcoholics found that, if and when participants began to drink alcohol, they tended to stop recording in their diary. Reactive bias - as people are required to keep a diary, they may start thinking differently; this can lead to the Hawthorne effect. Non-compliance and retrospective entries Diary studies tend to dictate specific times or a frequency at which the participant should complete their diary entries or capture media. This is usually a time close to the event or phenomenon of interest to the researcher, and it is assumed that participants will comply with these instructions. However, researchers often find it difficult to verify the extent to which this assumption is met, and have observed various forms of non-compliance. One example of this is hoarding, where previously missed entries are 'backfilled' or completed all at once, therefore being completed retrospectively rather than at the assigned time. A further example is where entries are 'forward-filled', i.e. recorded for a future date. Researchers have found significant rates of non-compliance and of entries written retrospectively in feedback studies. A study by Hyland and colleagues (1993) estimated that the percentage of errors in paper-and-pencil diaries could be anywhere from 2-24%, due to diaries written retrospectively and/or inaccuracies in recording. Another study by Stone and colleagues (2003) compared paper diaries with compliance-enhancing electronic diaries.
The paper diaries contained a hidden instrument which detected when the diary was opened - from this, the actual compliance rate was found to be only 11%, while 79% of entries were faked, i.e. not written within 30 minutes of the assigned time. In contrast, the electronic diaries created a timestamp for each entry, which made faked timestamps impossible and therefore yielded a compliance rate of 94%. Notable diary studies In 2015, a diary study was conducted at a Dutch University of Applied Sciences to evaluate how learning spaces affect students' learning activities. For one week, 52 business management students kept records of the learning activities they worked on, where they worked on them, and why they worked there. By evaluating the diary entries of this study, researchers found a significant correlation between the spaces in which students chose to work and their learning activities. Tools PACO is an open source mobile platform for behavioral science. See also Diary studies in TESOL Event sampling methodology Qualitative research References Further reading External links Open Source PACO Personal Analytics Companion Github of Open Source PACO Personal Analytics Companion Diary study guide The dos and don'ts of diary studies Diary studies in HCI psychology slides Anthropology Psychological methodology Human–computer interaction Market research Qualitative research Diaries
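As a side note on the compliance analysis described above, here is a minimal sketch (with invented timestamps; the 30-minute window follows the criterion used in the Stone et al. comparison) of how a compliance rate can be computed from timestamped entries:

```python
from datetime import datetime, timedelta

# Hypothetical diary data: each entry pairs an assigned time with the
# timestamp at which the participant actually wrote the entry.
entries = [
    (datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 1, 9, 10)),   # compliant
    (datetime(2023, 5, 1, 21, 0), datetime(2023, 5, 2, 8, 45)),   # backfilled next morning
    (datetime(2023, 5, 2, 9, 0),  datetime(2023, 5, 2, 9, 25)),   # compliant
    (datetime(2023, 5, 2, 21, 0), datetime(2023, 5, 2, 18, 30)),  # forward-filled
]

WINDOW = timedelta(minutes=30)  # the 30-minute criterion described above

def is_compliant(assigned, written):
    """An entry is compliant if written within 30 minutes after its assigned time."""
    return timedelta(0) <= (written - assigned) <= WINDOW

compliant = sum(is_compliant(a, w) for a, w in entries)
print(f"compliance rate: {compliant / len(entries):.0%}")  # -> 50%
```

Electronic diaries make this computation straightforward because every entry carries a trustworthy timestamp; paper diaries required a hidden instrument, as in the study above, to recover the true writing times.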
Diary studies
Engineering
2,124
77,406,715
https://en.wikipedia.org/wiki/HE0435-1223
HE0435-1223 is a quadruple-lensed quasar and rare Einstein Cross located in the constellation Eridanus at a distance of approximately 2.33 billion light years from Earth. HE 0435-1223 was discovered in October 2008 by astronomer Patrick Foley during a search for gravitational quadruple lenses in deep sky objects. Physical properties The main physical characteristic of HE0435-1223 is that it is split into four images by the galaxy WSB2002 0435-1223 G. The images are spaced a maximum of 2.6 arcseconds apart; the brightest image, named "A", has an apparent magnitude of 19, while the other three images ("B", "C" and "D") have an apparent magnitude of 19.6. The quasar itself is estimated to have an apparent magnitude of 17.71. All four images are pale blue in color. According to measurements in the I band, the galaxy producing the lens is a giant elliptical galaxy with a diameter of 12 kpc. In 2006, a research team studied HE0435-1223 with the Hubble Space Telescope and observed that the brightness of the four images varies in a correlated way: if image A varies, image B varies with a delay relative to image A. According to the scientists, the object producing the lensing may not be a single galaxy but an unorganized galactic structure producing several gravitational lenses that distort HE0435-1223, which would explain the delays between the brightness variations of the images. Supermassive black hole In 2017, scientists studied the emission lines as well as the inert zone of the quasar using the MMT Observatory. By recombining the emissions from the different images, the team was able to carry out fairly precise measurements. By studying the microwaves emitted by HE0435-1223, they were able to estimate the speed and temperature of the black hole's accretion disk. With these data they estimated the mass of the black hole at the center of the quasar, using the relationship between such measurements and central black hole mass. Data from the variation of emission fluxes indicate that the central black hole of HE0435-1223 has a mass of approximately 10 billion solar masses. See also Gravitational lens List of quasars External links HE 0435-1223 at SIMBAD References Quasars Eridanus (constellation)
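The article does not give the exact relationship the team used, but single-epoch black-hole masses for quasars are commonly estimated with a virial relation of the form M ≈ f·R·Δv²/G, where R is the radius of the line-emitting region, Δv is the velocity width of the emission line, and f is a geometric factor of order unity. A rough sketch with purely illustrative inputs (none of these numbers are measurements of HE0435-1223):

```python
# Virial black-hole mass estimate, M = f * R * dv^2 / G.
# All input values below are illustrative assumptions, not measured data.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LIGHT_DAY = 2.59e13      # metres in one light-day

f = 1.0                  # virial factor (order unity, geometry-dependent)
r_blr = 100 * LIGHT_DAY  # assumed radius of the line-emitting region
delta_v = 4.0e6          # assumed line velocity width, m/s (4000 km/s)

m_bh = f * r_blr * delta_v**2 / G
print(f"virial mass ~ {m_bh / M_SUN:.1e} solar masses")  # ~3e8 for these inputs
```

The result scales linearly with the assumed radius and quadratically with the line width, so the quoted mass of roughly 10 billion solar masses implies a much larger radius and/or broader lines than the placeholder values used here.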
HE0435-1223
Astronomy
534
3,596,297
https://en.wikipedia.org/wiki/CD38
CD38 (cluster of differentiation 38), also known as cyclic ADP ribose hydrolase, is a glycoprotein found on the surface of many immune cells (white blood cells), including CD4+, CD8+, B lymphocytes and natural killer cells. CD38 also functions in cell adhesion, signal transduction and calcium signaling. In humans, the CD38 protein is encoded by the CD38 gene, which is located on chromosome 4. CD38 is a paralog of CD157, which is also located on chromosome 4 (4p15) in humans. History CD38 was first identified in 1980 as a surface marker (cluster of differentiation) of thymus cell lymphocytes. In 1992 it was additionally described as a surface marker on B cells, monocytes, and natural killer cells (NK cells). About the same time, CD38 was discovered to be not simply a marker of cell types, but an activator of B cells and T cells. In 1992 the enzymatic activity of CD38 was discovered, having the capacity to synthesize the calcium-releasing second messengers cyclic ADP-ribose (cADPR) and nicotinic acid adenine dinucleotide phosphate (NAADP). Tissue distribution CD38 is most frequently found on plasma B cells, followed by natural killer cells, then B cells and T cells, and then a variety of other cell types. Function CD38 can function either as a receptor or as an enzyme. As a receptor, CD38 can attach to CD31 on the surface of T cells, thereby activating those cells to produce a variety of cytokines. CD38 activation cooperates with TRPM2 channels to initiate physiological responses such as cell volume regulation. CD38 is a multifunctional enzyme that catalyzes the synthesis of ADP ribose (ADPR) (97%) and cyclic ADP-ribose (cADPR) (3%) from NAD+. CD38 is thought to be a major regulator of NAD+ levels; its NADase activity is much higher than its ADP-ribosyl cyclase activity: for every 100 molecules of NAD+ converted to ADP-ribose, it generates one molecule of cADPR. When nicotinic acid is present under acidic conditions, CD38 can hydrolyze nicotinamide adenine dinucleotide phosphate (NADP+) to NAADP. These reaction products are essential for the regulation of intracellular Ca2+. CD38 occurs not only as an ectoenzyme on cell outer surfaces, but also on the inner surface of cell membranes, facing the cytosol, where it performs the same enzymatic functions. CD38 is believed to control or influence neurotransmitter release in the brain by producing cADPR. CD38 within the brain enables release of the affiliative neuropeptide oxytocin. Like CD38, CD157 is a member of the ADP-ribosyl cyclase family of enzymes that catalyze the formation of cADPR from NAD+, although CD157 is a much weaker catalyst than CD38. The SARM1 enzyme also catalyzes the formation of cADPR from NAD+, but SARM1 elevates cADPR much more efficiently than CD38. Clinical significance The loss of CD38 function is associated with impaired immune responses, metabolic disturbances, and behavioral modifications including social amnesia possibly related to autism. CD31 on endothelial cells binds to the CD38 receptor on natural killer cells for those cells to attach to the endothelium. CD38 on leukocytes attaching to CD31 on endothelial cells allows for leukocyte binding to blood vessel walls, and the passage of leukocytes through blood vessel walls. The cytokine interferon gamma and the Gram-negative bacterial cell wall component lipopolysaccharide induce CD38 expression on macrophages. Interferon gamma strongly induces CD38 expression on monocytes.
The cytokine tumor necrosis factor strongly induces CD38 on airway smooth muscle cells, increasing cADPR-mediated Ca2+ release and thereby contractility, contributing to asthma. The CD38 protein is a marker of cell activation. It has been connected to HIV infection, leukemias, myelomas, solid tumors, type II diabetes mellitus and bone metabolism, as well as some genetically determined conditions. CD38 increases airway contractility and hyperresponsiveness, is increased in the lungs of asthmatic patients, and amplifies the inflammatory response of airway smooth muscle of those patients. Clinical application CD38 inhibitors may be used as therapeutics for the treatment of asthma. CD38 has been used as a prognostic marker in leukemia. Daratumumab (Darzalex), which targets CD38, has been used in treating multiple myeloma. The use of daratumumab can interfere with pre-blood transfusion tests, as CD38 is weakly expressed on the surface of erythrocytes. Thus, a screening assay for irregular antibodies against red blood cell antigens or a direct immunoglobulin test can produce false-positive results. This can be circumvented either by pretreatment of the erythrocytes with dithiothreitol (DTT) or by using an anti-CD38 antibody neutralizing agent, e.g. DaraEx. Inhibitors Cassic acid (Rhein) CD38-IN-78c Chrysanthemin (Kuromanin) compound 1ai compound 1am Daratumumab Isatuximab Felzartamab (MOR202) Apigenin Luteolinidin MK-0159 TNB-738 Aging studies A gradual increase in CD38 has been implicated in the decline of NAD+ with age. Treatment of old mice with a specific CD38 inhibitor, 78c, prevents age-related NAD+ decline. CD38 knockout mice have twice the levels of NAD+ and are resistant to age-associated NAD+ decline, with dramatically increased NAD+ levels in major organs (liver, muscle, brain, and heart). On the other hand, mice overexpressing CD38 exhibit reduced NAD+ and mitochondrial dysfunction. Macrophages are believed to be primarily responsible for the age-related increase in CD38 expression and NAD+ decline. Cellular senescence of macrophages increases CD38 expression. Macrophages accumulate in visceral fat and other tissues with age, leading to chronic inflammation. The inflammatory transcription factor NF-κB and CD38 are mutually activating. Secretions from senescent cells induce high levels of expression of CD38 on macrophages, which becomes the major cause of NAD+ depletion with age. Decline of NAD+ in the brain with age may be due to increased CD38 on astrocytes and microglia, leading to neuroinflammation and neurodegeneration. References Further reading External links GeneCard CD38 CD38 Clusters of differentiation Glycoproteins
CD38
Chemistry
1,467
9,738,540
https://en.wikipedia.org/wiki/Phylogenetic%20comparative%20methods
Phylogenetic comparative methods (PCMs) use information on the historical relationships of lineages (phylogenies) to test evolutionary hypotheses. The comparative method has a long history in evolutionary biology; indeed, Charles Darwin used differences and similarities between species as a major source of evidence in The Origin of Species. However, the fact that closely related lineages share many traits and trait combinations as a result of the process of descent with modification means that lineages are not independent. This realization inspired the development of explicitly phylogenetic comparative methods. Initially, these methods were primarily developed to control for phylogenetic history when testing for adaptation; however, in recent years the use of the term has broadened to include any use of phylogenies in statistical tests. Although most studies that employ PCMs focus on extant organisms, many methods can also be applied to extinct taxa and can incorporate information from the fossil record. PCMs can generally be divided into two types of approaches: those that infer the evolutionary history of some character (phenotypic or genetic) across a phylogeny, and those that infer the process of evolutionary branching itself (diversification rates), though there are some approaches that do both simultaneously. Typically the tree that is used in conjunction with PCMs has been estimated independently (see computational phylogenetics), such that both the relationships between lineages and the lengths of the branches separating them are assumed to be known. Applications Phylogenetic comparative approaches can complement other ways of studying adaptation, such as studying natural populations, experimental studies, and mathematical models. Interspecific comparisons allow researchers to assess the generality of evolutionary phenomena by considering independent evolutionary events. Such an approach is particularly useful when there is little or no variation within species. Because they can be used to explicitly model evolutionary processes occurring over very long time periods, they can provide insight into macroevolutionary questions, once the exclusive domain of paleontology. Phylogenetic comparative methods are commonly applied to such questions as: What is the slope of an allometric scaling relationship? → Example: how does brain mass vary in relation to body mass? Do different clades of organisms differ with respect to some phenotypic trait? → Example: do canids have larger hearts than felids? Do groups of species that share a behavioral or ecological feature (e.g., social system, diet) differ in average phenotype? → Example: do carnivores have larger home ranges than herbivores? What was the ancestral state of a trait? → Example: where did endothermy evolve in the lineage that led to mammals? → Example: where, when, and why did placentas and viviparity evolve? Does a trait exhibit significant phylogenetic signal in a particular group of organisms? Do certain types of traits tend to "follow phylogeny" more than others? → Example: are behavioral traits more labile during evolution? Do species differences in life history traits trade off, as in the so-called fast-slow continuum? → Example: why do small-bodied species have shorter life spans than their larger relatives?
Phylogenetically independent contrasts In 1985 Felsenstein proposed the first general statistical method for incorporating phylogenetic information, i.e., the first that could use any arbitrary topology (branching order) and a specified set of branch lengths. The method is now recognized as an algorithm that implements a special case of what are termed phylogenetic generalized least-squares models. The logic of the method is to use phylogenetic information (and an assumed Brownian-motion-like model of trait evolution) to transform the original tip data (mean values for a set of species) into values that are statistically independent and identically distributed. The algorithm involves computing values at internal nodes as an intermediate step, but they are generally not used for inferences by themselves. An exception occurs for the basal (root) node, which can be interpreted as an estimate of the ancestral value for the entire tree (assuming that no directional evolutionary trends [e.g., Cope's rule] have occurred) or as a phylogenetically weighted estimate of the mean for the entire set of tip species (terminal taxa). The value at the root is equivalent to that obtained from the "squared-change parsimony" algorithm and is also the maximum likelihood estimate under Brownian motion. The independent contrasts algebra can also be used to compute a standard error or confidence interval. Phylogenetic generalized least squares (PGLS) Probably the most commonly used PCM is phylogenetic generalized least squares (PGLS). This approach is used to test whether there is a relationship between two (or more) variables while accounting for the fact that lineages are not independent. The method is a special case of generalized least squares (GLS), and as such the PGLS estimator is also unbiased, consistent, efficient, and asymptotically normal. In many statistical situations where GLS (or ordinary least squares [OLS]) is used, the residual errors ε are assumed to be independent and identically distributed normal random variables, whereas in PGLS the errors are assumed to be distributed as ε ~ N(0, σ²V), where V is a matrix of the expected variances and covariances of the residuals given an evolutionary model and a phylogenetic tree. Therefore, it is the structure of the residuals, and not the variables themselves, that shows phylogenetic signal. This has long been a source of confusion in the scientific literature. A number of models have been proposed for the structure of V, such as Brownian motion, the Ornstein-Uhlenbeck model, and Pagel's λ model. (When a Brownian motion model is used, PGLS is identical to the independent contrasts estimator.) In PGLS, the parameters of the evolutionary model are typically co-estimated with the regression parameters. PGLS can only be applied to questions where the dependent variable is continuously distributed; however, the phylogenetic tree can also be incorporated into the residual distribution of generalized linear models, making it possible to generalize the approach to a broader set of distributions for the response. Phylogenetically informed Monte Carlo computer simulations Martins and Garland proposed in 1991 that one way to account for phylogenetic relations when conducting statistical analyses was to use computer simulations to create many data sets that are consistent with the null hypothesis under test (e.g., no correlation between two traits, no difference between two ecologically defined groups of species) but that mimic evolution along the relevant phylogenetic tree.
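A minimal sketch of both ideas just described - the Brownian-motion covariance structure V used by PGLS and the simulation of a single null data set - assuming a hypothetical four-species tree whose shared branch lengths are written directly into V (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phylogenetic covariance matrix V for four species: under
# Brownian motion, V[i, j] is the branch length shared by species i and j.
V = np.array([
    [1.0, 0.6, 0.2, 0.2],
    [0.6, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.7],
    [0.2, 0.2, 0.7, 1.0],
])

# One simulated null data set: predictor and response evolve independently
# by Brownian motion, so any apparent slope reflects shared ancestry alone.
x = rng.multivariate_normal(np.zeros(4), V)
y = rng.multivariate_normal(np.zeros(4), V)

# PGLS/GLS estimator: beta = (X' V^-1 X)^-1 X' V^-1 y
X = np.column_stack([np.ones(4), x])  # intercept + predictor
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("intercept, slope:", beta)
```

Each fresh draw of x and y constitutes one data set generated under the null hypothesis of no true association between the traits.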
If such data sets (typically 1,000 or more) are analyzed with the same statistical procedure that is used to analyze a real data set, then results for the simulated data sets can be used to create phylogenetically correct (or "PC") null distributions of the test statistic (e.g., a correlation coefficient, t, F). Such simulation approaches can also be combined with such methods as phylogenetically independent contrasts or PGLS (see above). See also Allometry Behavioral ecology Biodiversity Bioinformatics Cladistics Comparative anatomy Comparative method in linguistics Comparative physiology Computational phylogenetics Disk-covering method Ecophysiology Evolutionary neurobiology Evolutionary physiology Generalized least squares (GLS) Generalized linear model Joe Felsenstein Mark Pagel Maximum likelihood Maximum parsimony Paul H. Harvey Phylogenetics Phylogenetic reconciliation Roderic D.M. Page Sexual selection Statistics Systematics Theodore Garland Jr. References Further reading Ackerly, D. D. 1999. Comparative plant ecology and the role of phylogenetic information. Pages 391–413 in M. C. Press, J. D. Scholes, and M. G. Braker, eds. Physiological plant ecology. The 39th symposium of the British Ecological Society held at the University of York 7–9 September 1998. Blackwell Science, Oxford, U.K. Brooks, D. R., and D. A. McLennan. 1991. Phylogeny, ecology, and behavior: a research program in comparative biology. Univ. Chicago Press, Chicago. 434 pp. Eggleton, P., and R. I. Vane-Wright, eds. 1994. Phylogenetics and ecology. Linnean Society Symposium Series Number 17. Academic Press, London. Felsenstein, J. 2004. Inferring phylogenies. Sinauer Associates, Sunderland, Mass. xx + 664 pp. Ives, A. R. 2018. Mixed and phylogenetic models: a conceptual introduction to correlated data. leanpub.com, 125 pp., https://leanpub.com/correlateddata Maddison, W. P., and D. R. Maddison. 1992. MacClade. Analysis of phylogeny and character evolution. Version 3. Sinauer Associates, Sunderland, Mass. 398 pp. Martins, E. P., ed. 1996. Phylogenies and the comparative method in animal behavior. Oxford University Press, Oxford. 415 pp. Erratum Am. Nat. 153:448. Page, R. D. M., ed. 2003. Tangled trees: phylogeny, cospeciation, and coevolution. University of Chicago Press, Chicago. Rezende, E. L., and Garland, T. Jr. 2003. Comparaciones interespecíficas y métodos estadísticos filogenéticos. Pages 79–98 in F. Bozinovic, ed. Fisiología Ecológica & Evolutiva. Teoría y casos de estudios en animales. Ediciones Universidad Católica de Chile, Santiago. PDF Ridley, M. 1983. The explanation of organic diversity: The comparative method and adaptations for mating. Clarendon, Oxford, U.K. 
External links Adaptation and the comparative method online lecture, with worked example of phylogenetically independent contrasts and mastery quiz List of phylogeny programs Phylogenetic Tools for Comparative Biology Phylogeny of Sleep website Tree of Life Journals American Naturalist Behavioral Ecology Ecology Evolution Evolutionary Ecology Research Functional Ecology Journal of Evolutionary Biology Philosophical Transactions of the Royal Society of London B Physiological and Biochemical Zoology Systematic Biology Software packages (incomplete list) Analyses of Phylogenetics and Evolution BayesTraits Comparative Analysis by Independent Contrasts COMPARE Felsenstein's List Mesquite PDAP:PDTree for Mesquite mvMorph ouch: Ornstein-Uhlenbeck for Comparative Hypotheses PDAP: Phenotypic Diversity Analysis Programs Phylogenetic Regression PHYSIG Laboratories Ackerly Bininda-Emonds Blomberg Butler Felsenstein Freckleton Garland Gittleman Grafen Hansen Harmon Harvey Housworth Irschick Ives Losos Martins Mooers Mort Nunn Oakley Page Pagel Paradis Purvis Rambaut Rohlf Sanderson Phylogenetics
Phylogenetic comparative methods
Biology
2,195
3,304,108
https://en.wikipedia.org/wiki/Structured%20content
Structured content is information or content that is organized in a predictable way and is usually classified with metadata. XML is a common storage format, but structured content can also be stored in other standard or proprietary formats. When working in structured content, writers need to build the structure of their content as well as add the text, images, etc. They build the structure by adding elements, and there are elements for different types of content. The structure must be valid according to the standard being used, and it is often enforced by the authoring tool. This helps to ensure consistency, as writers must use the appropriate elements in a consistent way. See also Structure mining References Metadata
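A minimal sketch of structured authoring using Python's standard-library ElementTree; the element names (topic, title, steps, step) are an invented illustrative schema, not those of any particular standard:

```python
import xml.etree.ElementTree as ET

# The writer supplies structure (elements) as well as content (text).
topic = ET.Element("topic", id="replace-filter")
ET.SubElement(topic, "title").text = "Replacing the filter"
steps = ET.SubElement(topic, "steps")
for text in ["Turn off the unit.", "Remove the old filter.", "Insert the new filter."]:
    ET.SubElement(steps, "step").text = text

print(ET.tostring(topic, encoding="unicode"))
```

In a real workflow an authoring tool would also validate the element structure against a schema (for example a DTD or XSD), which is what enforces the consistency described above.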
Structured content
Technology
134
3,242,790
https://en.wikipedia.org/wiki/Stieltjes%20moment%20problem
In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m_0, m_1, m_2, ...) to be of the form

\[ m_n = \int_0^\infty x^n \, d\mu(x) \]

for some measure μ. If such a function μ exists, one asks whether it is unique. The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞). Existence Let

\[ \Delta_n = \begin{bmatrix} m_0 & m_1 & \cdots & m_n \\ m_1 & m_2 & \cdots & m_{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ m_n & m_{n+1} & \cdots & m_{2n} \end{bmatrix} \]

be a Hankel matrix, and

\[ \Delta_n^{(1)} = \begin{bmatrix} m_1 & m_2 & \cdots & m_{n+1} \\ m_2 & m_3 & \cdots & m_{n+2} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n+1} & m_{n+2} & \cdots & m_{2n+1} \end{bmatrix}. \]

Then { m_n : n = 1, 2, 3, ... } is a moment sequence of some measure on [0, ∞) with infinite support if and only if for all n, both

\[ \det(\Delta_n) > 0 \quad\text{and}\quad \det\bigl(\Delta_n^{(1)}\bigr) > 0. \]

{ m_n : n = 1, 2, 3, ... } is a moment sequence of some measure on [0, ∞) with finite support of size m if and only if for all n ≤ m − 1, both

\[ \det(\Delta_n) > 0 \quad\text{and}\quad \det\bigl(\Delta_n^{(1)}\bigr) > 0, \]

and for all larger n,

\[ \det(\Delta_n) = 0 \quad\text{and}\quad \det\bigl(\Delta_n^{(1)}\bigr) = 0. \]

Uniqueness There are several sufficient conditions for uniqueness; for example, Carleman's condition, which states that the solution is unique if

\[ \sum_{n \geq 1} m_n^{-1/(2n)} = \infty. \]

References Probability problems Mathematical analysis Moment (mathematics) Mathematical problems
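As a quick numerical illustration of the determinant criterion (an aside, using a standard example): the moments of the exponential distribution on [0, ∞) are m_n = n!, so every determinant above should be strictly positive.

```python
from math import factorial
import numpy as np

# Moments of the exponential distribution on [0, inf): m_n = n!
m = [factorial(n) for n in range(8)]

def hankel(start, size):
    """Hankel matrix with entries m_{start + i + j}."""
    return np.array([[m[start + i + j] for j in range(size)] for i in range(size)])

for n in range(3):
    d = np.linalg.det(hankel(0, n + 1))   # det of Delta_n
    d1 = np.linalg.det(hankel(1, n + 1))  # det of Delta_n^(1)
    print(f"n={n}: det={d:.0f}, det^(1)={d1:.0f}")
```

For n = 0, 1, 2 this prints determinants 1, 1, 4 and 1, 2, 24, all positive, consistent with the infinite-support criterion.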
Stieltjes moment problem
Physics,Mathematics
258
36,212,889
https://en.wikipedia.org/wiki/C10H14N2O6
The molecular formula C10H14N2O6 (molar mass: 258.23 g/mol) may refer to: 3-Methyluridine, also called N3-methyluridine; 5-Methyluridine, also called ribothymidine
C10H14N2O6
Chemistry
73
10,574,314
https://en.wikipedia.org/wiki/Digital%20journalism
Digital journalism, also known as netizen journalism or online journalism, is a contemporary form of journalism where editorial content is distributed via the Internet, as opposed to publishing via print or broadcast. What constitutes digital journalism is debated by scholars; however, the primary product of journalism, which is news and features on current affairs, is presented solely or in combination as text, audio, video, or some interactive forms like interactive storytelling or newsgames, and disseminated through digital media technology. Fewer barriers to entry, lowered distribution costs, and diverse computer networking technologies have led to the widespread practice of digital journalism. It has democratized the flow of information that was previously controlled by traditional media including newspapers, magazines, radio, and television. In the context of digital journalism, online journalists are often expected to possess a wide range of skills, yet there is a significant gap between the perceived and actual performance of these skills, influenced by time pressures and resource allocation decisions. Some have asserted that a greater degree of creativity can be exercised with digital journalism when compared to traditional journalism and traditional media. The digital aspect may be central to the journalistic message and remains, to some extent, within the creative control of the writer, editor, and/or publisher. While technological innovation has been a primary focus in online journalism research, particularly in interactivity, multimedia, and hypertext, there is a growing need to explore other factors that influence its evolution. It has been acknowledged that reports of its growth have tended to be exaggerated; in fact, a 2019 Pew survey showed a 16% decline in the time spent on online news sites since 2016. Overview Digital journalism flows as journalism flows, and it is difficult to pinpoint where it is and where it is going. In partnership with digital media, digital journalism uses facets of digital media to perform journalistic tasks, for example, using the internet as a tool rather than as a singular form of digital media. There is no absolute agreement as to what constitutes digital journalism. Mu Lin argues that, “Web and mobile platforms demand us to adopt a platform-free mindset for an all-inclusive production approach – create the [digital] contents first, then distribute via appropriate platforms.” The repurposing of print content for an online audience is sufficient for some, while others require content created with the digital medium's unique features like hypertextuality. Fondevila Gascón adds multimedia and interactivity to complete the essence of digital journalism. For Deuze, online journalism can be functionally differentiated from other kinds of journalism by its technological component, which journalists have to consider when creating or displaying content. Digital journalistic work may range from purely editorial content like CNN online (produced by professional journalists) to public-connectivity websites like Slashdot (communication lacking formal barriers of entry). Digital journalism may differ from traditional journalism in its reconceptualised role of the reporter in relation to audiences and news organizations. Society's expectation of instant information was important for the evolution of digital journalism. However, it is likely that the exact nature and roles of digital journalism will not be fully known for some time.
Some researchers even argue that the free distribution of online content, online advertisement and the new ways recipients use news could undermine the traditional business model of mass media distributors, which is based on single-copy sales, subscriptions and the selling of advertisement space. History The first type of digital journalism, called teletext, was invented in the UK in 1970. Teletext is a system allowing viewers to choose which stories they wish to read and see them immediately. The information provided through teletext is brief and instant, similar to the information seen in digital journalism today. The information was broadcast between the frames of a television signal, in what was called the vertical blanking interval or VBI. American journalist Hunter S. Thompson relied on early digital communication technology, beginning with a fax machine that he used to report from the 1971 US presidential campaign trail, as documented in his book Fear and Loathing on the Campaign Trail. Teletext was followed by the invention of videotex, of which Prestel was the world's first system, launching commercially in 1979, with various British newspapers such as the Financial Times lining up to deliver newspaper stories online through it. Videotex closed down in 1986, having failed to meet end-user demand. American newspaper companies took notice of the new technology and created their own videotex systems, the largest and most ambitious being Viewtron, a service of Knight-Ridder launched in 1981. Others were Keycom in Chicago and Gateway in Los Angeles. All of them had closed by 1986. Next came computer Bulletin Board Systems. In the late 1980s and early 1990s, several smaller newspapers started online news services using BBS software and telephone modems. The first of these was the Albuquerque Tribune in 1989. Computer Gaming World in September 1992 broke the news of Electronic Arts' acquisition of Origin Systems on Prodigy, before its next issue went to press. Online news websites began to proliferate in the 1990s. An early adopter was The News & Observer in Raleigh, North Carolina, which offered online news as Nando. Steve Yelvington wrote on the Poynter Institute website about Nando, owned by The N&O, saying "Nando evolved into the first serious, professional news site on the World Wide Web". It originated in the early 1990s as "NandO Land". A major increase in digital online journalism is believed to have occurred around this time, when the first commercial web browsers, Netscape Navigator (1994) and Internet Explorer (1995), became available. By 1996, most news outlets had an online presence. Although journalistic content was repurposed from original text/video/audio sources without change in substance, it could be consumed in different ways because of its online form, through toolbars, topically grouped content, and intertextual links. A twenty-four-hour news cycle and new ways of user-journalist interaction such as web boards were among the features unique to the digital format. Later, portals such as AOL and Yahoo! and their news aggregators (sites that collect and categorize links from news sources) led news agencies such as The Associated Press to supply digitally suited content for aggregation, beyond the limit of what client news providers could use in the past. Salon was founded in 1995. In 2001 the American Journalism Review called Salon the Internet's "preeminent independent venue for journalism."
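News aggregators of the kind just mentioned pool and categorize links from many sources. A minimal sketch of the pooling step in Python, using only the standard library; the feed URLs are placeholders, and a real aggregator would add categorization, deduplication, and ranking:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed URLs; any RSS 2.0 feeds would do.
FEEDS = [
    "https://example.com/world/rss.xml",
    "https://example.org/tech/rss.xml",
]

def headlines(url):
    """Fetch an RSS 2.0 feed and yield (title, link) pairs for its items."""
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

# Aggregate: pool items from every source into one combined listing.
for url in FEEDS:
    for title, link in headlines(url):
        print(f"{title} -> {link}")
```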
In 2008, for the first time, more Americans reported getting their national and international news from the internet rather than newspapers. Young people aged 18 to 29 now primarily get their news via the Internet, according to a Pew Research Center report. Audiences for news sites continued to grow due to the launch of new news sites, continued investment in news online by conventional news organizations, and the continued growth in internet audiences overall. Sixty-five percent of youth now primarily access the news online. Mainstream news sites are the most widespread form of online newsmedia production. As of 2000, the vast majority of journalists in the Western world used the internet regularly in their daily work. In addition to mainstream news sites, digital journalism is found in index and category sites (sites without much original content but many links to existing news sites), meta- and comment sites (sites about newsmedia issues like media watchdogs), and share and discussion sites (sites that facilitate the connection of people, like Slashdot). Blogs are another digital journalism phenomenon capable of delivering fresh information, ranging from personal sites to those with audiences of hundreds of thousands. Digital journalism is involved in the cloud journalism phenomenon, a constant flow of content in the Broadband Society. Prior to 2008, the industry had hoped that publishing news online would prove lucrative enough to fund the costs of conventional newsgathering. In 2008, however, online advertising began to slow down, and little progress was made towards development of new business models. The Pew Project for Excellence in Journalism describes its 2008 report on the State of the News Media, its sixth, as its bleakest ever. Despite the uncertainty, online journalists report expanding newsrooms. They believe advertising is likely to be the best revenue model supporting the production of online news. Many news organizations based in other media also distribute news online, but how much they use the new medium varies. Some news organizations use the Web exclusively or as a secondary outlet for their content. The Online News Association, founded in 1999, is the largest organization representing online journalists, with more than 1,700 members whose principal livelihood involves gathering or producing news for digital presentation. The Internet challenges traditional news organizations in several ways. Newspapers may lose classified advertising to websites, which are often targeted by interest instead of geography. These organizations are concerned about real and perceived loss of viewers and circulation to the Internet. Hyperlocal journalism is journalism within a very small community. Hyperlocal journalism, like other types of digital journalism, is very convenient for the reader and offers more information than former types of journalism. It is free or inexpensive. Reports of Facebook interfering in journalism It has been acknowledged that Facebook has invested heavily in news sources and purchased time on local news media outlets. TechCrunch journalist Josh Constine even stated in February 2018 that the company "stole the news business" and used sponsorship to make many news publishers its "ghostwriters." In January 2019, founder Mark Zuckerberg announced that he would spend $300 million on local news buys over a three-year period. Impact on readers Digital journalism allows for connection and discussion at levels that print does not offer on its own.
People can comment on articles and start discussion boards to discuss articles. Before the Internet, spontaneous discussion between readers who had never met was impossible. The process of discussing a news item is a large part of what makes digital journalism distinctive. People add to the story and connect with other people who want to discuss the topic. The interaction between the press and the online public has led to a shift towards a participatory model in news framing, where alternative discourses emerge alongside traditional journalism. Digital journalism creates an opportunity for niche audiences, allowing people to have more options as to what to view and read. Digital journalism opens up new ways of storytelling; through the technical components of the new medium, digital journalists can provide a variety of media, such as audio, video, and digital photography. As for how this affects users and changes their consumption of news, research finds that, apart from the different layout and presentation in which the news is perceived, there is no drastic difference in how the news is remembered and processed. Digital journalism represents a revolution in how news is consumed by society. Online sources are able to provide quick, efficient, and accurate reporting of breaking news in a matter of seconds, providing society with a synopsis of events as they occur. As an event develops, journalists are able to feed online outlets new information, keeping readers up to date in mere seconds. The speed with which a story can be posted can affect the accuracy of the reporting in a way that doesn't usually happen in print journalism. Before the emergence of digital journalism, the printing process took much more time, allowing for the discovery and correction of errors. News consumers must become Web literate and use critical thinking to evaluate the credibility of sources. Because it is possible for anyone to write articles and post them on the Internet, the definition of journalism is changing. Because it is becoming increasingly simple for the average person to have an impact in the news world through tools like blogs and even comments on news stories on reputable news websites, it becomes increasingly difficult to sift through the massive amount of information coming in from the digital area of journalism. There are great advantages to digital journalism and the blogging evolution that people are becoming accustomed to, but there are disadvantages as well. For instance, people are used to what they already know and cannot always catch up quickly with new technologies. The goals of print and digital journalism are the same, although different tools are needed. The interaction between the writer and consumer is new, and this can be credited to digital journalism. There are many ways to publish personal thoughts on the Web, but a main disadvantage is the reliability of factual information. There is a pressing need for accuracy in digital journalism, and until outlets find ways to enforce accuracy, they will continue to face some criticism. One major dispute regards the credibility of these online news websites. A digital journalism credibility study performed by the Online News Association compared the online public's credibility ratings with those of media respondents. Looking at a variety of online media sources, the study found that overall the public rated online media as more credible than the media respondents did.
A separate study on Finnish online journalism sourcing practices suggests that while transparency is valued, there's a notable gap between audience expectations and actual journalistic practices, highlighting the need for a closer alignment between journalistic standards and audience perceptions in digital media. The effects of digital journalism are evident worldwide. This form of journalism has pushed journalists to reform and evolve. Older journalists who are not tech savvy have felt the brunt of this. In recent months, a number of older journalists have been pushed out and younger journalists brought in because of their lower cost and ability to work in advanced technology settings. In the dynamic landscape of journalism, as news consumption habits evolve and traditional outlets face declining audiences, there's a growing imperative to reevaluate established models of information dissemination. Exploring diverse storytelling approaches, beyond the conventional inverted pyramid, offers an opportunity to optimize communication effectiveness in the digital realm, catering to the evolving needs and preferences of contemporary audiences. Impact on publishers Many newspapers, such as The New York Times, have created online sites to remain competitive and have taken advantage of audio, video, and text linking to remain at the top of news consumers' lists; as most news now reaches its audience through handheld devices such as smartphones and tablets, audio and video support is a definite advantage. Newspapers rarely break news stories any more, with most websites reporting on breaking news before the cable news channels. Digital journalism allows reports to start out vague and general and progress to a fuller story as details emerge. Newspapers and cable TV are at a disadvantage because they generally can only put together stories when an ample amount of detail and information is available. Often, newspapers have to wait for the next day, or even two days later if it is a late-breaking story, before being able to publish it. Newspapers lose a lot of ground to their online counterparts, with advertising revenue shifting to the Internet and subscriptions to the printed paper decreasing. People are now able to find the news they want, when they want, without having to leave their homes or pay to receive the news, even though there are still people who are willing to pay for online journalistic content. Because of this, many people have viewed digital journalism as the death of journalism. According to communication scholar Nicole Cohen, "four practices stand out as putting pressure on traditional journalism production: outsourcing, unpaid labour, metrics and measurement, and automation". Free advertising on websites such as Craigslist has transformed how people publicize; the Internet has created a faster, cheaper way for people to get news out, thus creating the shift in ad sales from standard newspapers to the Internet. There has been a substantial effect of digital journalism and media on the newspaper industry, with the creation of new business models. It is now possible to contemplate a time in the near future when major towns will no longer have a newspaper and when magazines and network news operations will employ no more than a handful of reporters. Many newspapers and individual print journalists have been forced out of business because of the popularity of digital journalism.
The newspapers that have not been willing to go out of business have attempted to survive by saving money, laying off staff, shrinking the size of the publications, eliminating editions, and partnering with other businesses to share coverage and content. In 2009, one study concluded that most journalists are ready to compete in a digital world and that these journalists believe the transition from print to digital journalism in their newsroom is moving too slowly. Some highly specialized positions in the publishing industry have become obsolete. The growth in digital journalism and the near collapse of the economy have also led to downsizing for those in the industry. Students wishing to become journalists now need to be familiar with digital journalism in order to be able to contribute and develop journalism skills. Not only must a journalist analyze their audience and focus on effective communication with them, they must also be quick; news websites are able to update their stories within minutes of the news event. Other skills may include creating a website and uploading information using basic programming skills. Critics believe digital journalism has made it easier for individuals who are not qualified journalists to misinform the general public. Many believe that this form of journalism has created a number of sites that do not have credible information. Sites such as PerezHilton.com have been criticized for blurring the lines between journalism and opinionated writing. Some critics believe that newspapers should not switch to a solely Internet-based format, but instead keep a component of print as well as digital. Digital journalism allows citizens and readers the opportunity to join in on threaded discussions relating to a news article that has been read by the public. This offers an excellent source for writers and reporters to decide what is important and what should be omitted in the future. These threads can provide useful information to writers of digital journalism so that future articles can be pruned and improved. Implications on traditional journalism Digitization is currently causing many changes to traditional journalistic practices. The labor of journalists, in general, is becoming increasingly dependent on digital journalism. Scholars outline that this is a change to the execution of journalism and not to the conception part of the labor process. They also contend that this is simply the de-skilling of some tasks and the up-skilling of others. This theory stands in contrast to the notion that technological determinism is negatively affecting journalism; rather, it should be understood as simply changing the traditional skill set. Communication scholar Nicole Cohen believes there are several trends putting pressure on this traditional skill set, among them outsourcing, algorithms, and automation. Although Cohen believes that technology could be used to improve journalism, she feels the current trends in digital journalism are so far affecting the practice in a negative way. Digital journalism has also influenced the rise of citizen journalism. Because digital journalism takes place online and is contributed mostly by citizens on user-generated content sites, there is growing competition between the two.
Citizen journalism allows anyone to post anything, and because of that, journalists are being forced by their employers to publish more news content than before, which often means rushing news stories and failing to verify the source of information. Outlets such as Vice Media have also created a resurgence in Gonzo journalism in the form of digital videos and articles. Work outside traditional press The Internet has also given rise to more participation by people who are not normally journalists, such as with Indy Media. Bloggers write on web logs or blogs. Traditional journalists often do not consider bloggers to automatically be journalists. This has more to do with standards and professional practices than the medium. For instance, crowdsourcing and crowdfunding journalism attract amateur journalists, as well as ambitious professionals that are restrained by the boundaries set by the traditional press. However, the implication of these types of journalism is that they disregard the professional norms of journalistic practice that ensure the accuracy and impartiality of the content. But blogging has generally gained more attention and has led to some effects on mainstream journalism, such as exposing problems related to a television piece about President George W. Bush's National Guard service. Recent legal judgements have determined that bloggers are entitled to the same protections as other journalists, and are subject to the same responsibilities. In the United States, the Electronic Frontier Foundation has been instrumental in advocating for the rights of journalist bloggers. The Supreme Court of Canada ruled that: "[96] A second preliminary question is what the new defence should be called. In arguments before us, the defence was referred to as the responsible journalism test. This has the value of capturing the essence of the defence in succinct style. However, the traditional media are rapidly being complemented by new ways of communicating on matters of public interest, many of them online, which do not involve journalists. These new disseminators of news and information should, absent good reasons for exclusion, be subject to the same laws as established media outlets. I agree with Lord Hoffmann that the new defence is "available to anyone who publishes material of public interest in any medium": Jameel, at para. 54." Other significant tools of online journalism are Internet forums, discussion boards and chats, especially those representing the Internet version of official media. The widespread use of the Internet all over the world created a unique opportunity to create a meeting place for both sides in many conflicts, such as the Israeli–Palestinian conflict and the First and Second Chechen Wars. Often this gives a unique chance to find new, alternative solutions to the conflict, but often the Internet is turned into a battlefield by opposing parties, creating endless "online battles." Internet radio and podcasts are other growing independent media based on the Internet. In addition, many journalistic media have created Application Programming Interfaces (APIs) that provide online access to their data and content for research, to encourage links in other publications, or for the development of specialized apps. Blogs With the rise of digital media, there is a move from the traditional journalist to the blogger or amateur journalist.
Blogs can be seen as a new genre of journalism because of their "narrative style of news characterized by personalization" that moves away from traditional journalism's approach, changing journalism into a more conversational and decentralized type of news. Blogging has become a large part of the transmission of news and ideas across cities, states, and countries, and bloggers argue that blogs themselves are now breaking stories. Even online news publications have blogs that are written by their affiliated journalists or other respected writers. Blogging allows readers and journalists to be opinionated about the news and talk about it in an open environment. Blogs allow comments where some news outlets do not, due to the need to constantly monitor what is posted. By allowing comments, the reader can interact with a story instead of just absorbing the words on the screen. According to one 2007 study, 15% of those who read blogs read them for news. However, many blogs are highly opinionated and have a bias. Some are not verified to be true. In 2009, in response to questions about the integrity of product and service reviews in the online community, the Federal Trade Commission (FTC) established guidelines mandating that bloggers disclose any free goods or services they receive from third parties. The development of blogging communities has partly resulted from the lack of local news coverage, the spread of misinformation, and the manipulation of news. Blogging platforms are often used as mediums to spread ideas and connect with others of similar mentalities. The anonymity these platforms afford allows different perspectives to circulate. Some have postulated that blogs' usage of public opinions as facts has gained them status and credibility. Memes are often shared on these blogs because of their social resonance and their relation to existing subcultures, and they often attain high engagement. Traditional journalism has helped set the foundation for blogs, which are frequently used to question mainstream media reporting by journalists. Citizen journalism Digital journalism's lack of a traditional "editor" has given rise to citizen journalism. The early advances that the digital age offered journalism were faster research, easier editing, greater convenience, and a faster delivery time for articles. The Internet has broadened the effect that the digital age has on journalism. Because of the popularity of the Internet, most people have access and can add their own forms of journalism to the information network. This allows anyone who wants to share something they deem important that has happened in their community to do so. Individuals who are not professional journalists who present news through their blogs or websites are often referred to as citizen journalists. One does not need a degree to be a citizen journalist. Citizen journalists are able to publish information that may not be reported otherwise, and the public has a greater opportunity to be informed. Some companies use the information that a citizen journalist relays when they themselves cannot access certain situations, for example, in countries where freedom of the press is limited. Anyone can record events as they happen and send the footage anywhere they wish, or put it on their website. Non-profit and grassroots digital journalism sites may have far fewer resources than their corporate counterparts, yet thanks to digital media they are able to have websites that are technically comparable.
Other media outlets can then pick up their story and run with it as they please, thus allowing information to reach wider audiences. For citizen journalism to be effective and successful, there need to be citizen editors, whose role is to solicit other people to provide accurate information and to mediate interactivity among users. An example can be found in the start-up of the South Korean online daily newspaper OhMyNews, where the founder recruited several hundred volunteer "citizen reporters" to write news articles that were edited and processed by four professional journalists. News collections The Internet also offers options such as personalized news feeds and aggregators, which compile news from different websites into one site. One of the most popular news aggregators is Google News. Others include Topix.net and TheFreeLibrary.com. But some people see too much personalization as detrimental. For example, some fear that people will have narrower exposure to news, seeking out only those commentators who already agree with them. As of March 2005, Wikinews rewrites articles from other news organizations. Original reporting remains a challenge on the Internet as the burdens of verification and legal risks (especially from plaintiff-friendly jurisdictions like BC) remain high in the absence of any net-wide approach to defamation. See also Online newspaper Open source journalism Wikinews Toons Mag User-generated content References Sources Bentley, Clyde H. 2011. Citizen journalism: Back to the future? Geopolitics, History, and International Relations 3 (1): p. 103ff. Deuze, Mark. 2003. The web and its journalisms: Considering the consequences of different types of newsmedia online. New Media & Society 5 (2): 203-230. Fondevila Gascón, Joan Francesc (2009). El papel decisivo de la banda ancha en el Espacio Iberoamericano del Conocimiento. Revista Iberoamericana de Ciencia, Tecnología y Sociedad–CTS, n. 2, pp. 1–15. Fondevila Gascón, Joan Francesc (2010). El cloud journalism: un nuevo concepto de producción para el periodismo del siglo XXI. Observatorio (OBS*) Journal, v. 4, n. 1 (2010), pp. 19–35. Fondevila Gascón, Joan Francesc; Del Olmo Arriaga, Josep Lluís and Sierra Sánchez, Javier (2011). New communicative markets, new business models in the digital press. Trípodos (Extra 2011-VI International Conference on Communication and Reality-Life without Media, Universitat Ramon Llull), pp. 301–310. Kawamoto, Kevin. 2003. Digital Journalism: Emerging Media and the Changing Horizons of Journalism. Lanham, Md.: Rowman & Littlefield. Online Journalism Review. 2002. The third wave of online journalism. Online Journalism Review. Rogers, Tony. "What is hyperlocal journalism? Sites that focus on areas often ignored by larger news outlets." About.com. Accessed September 12, 2011. Scott, Ben. 2005. A contemporary history of digital journalism. Television & New Media 6 (1): 89-126. Wall, Melissa. 2005. "Blogs of war: Weblogs as news." Journalism 6 (2): 153-172. Digital media Types of journalism Citizen journalism Technology in society Citizen media New media
Digital journalism
Technology
5,775
18,560,944
https://en.wikipedia.org/wiki/Russula%20acrifolia
Russula acrifolia is a species of mushroom. Its cap is coloured grey to blackish-grey; the cap becomes red when it is injured, but then turns blackish-grey. It is edible and described as having an acrid taste. It grows on rich soils. Distribution Russula acrifolia is a holarctic species that needs a temperate climate. The species is spread across the Caucasus, Siberia, Korea and Japan, North America, North Africa and Europe. Ecological properties Russula acrifolia is a mycorrhizal mushroom that partners with different trees. Its favourite symbiotic partners are Fagus sylvatica and spruce. If those are not available, it can also form symbiotic partnerships with larches, pines, birches, oaks and limes. See also List of Russula species References acrifolia Fungi described in 1962 Fungi of Europe Fungus species
Russula acrifolia
Biology
181
2,524,429
https://en.wikipedia.org/wiki/Hiroshi%20Okamura
Hiroshi Okamura was a Japanese mathematician who made contributions to analysis and the theory of differential equations. He was a professor at Kyoto University. He discovered a necessary and sufficient condition for the uniqueness of solutions of initial value problems for ordinary differential equations. He also refined the second mean value theorem of integration. Works (posthumous) References 1905 births 1948 deaths 20th-century Japanese mathematicians Mathematical analysts Academic staff of Kyoto University Kyoto University alumni Scientists from Kyoto
Hiroshi Okamura
Mathematics
86
60,825
https://en.wikipedia.org/wiki/Endorphins
Endorphins (contracted from endogenous morphine) are peptides produced in the brain that block the perception of pain and increase feelings of wellbeing. They are produced and stored in the pituitary gland of the brain. Endorphins are endogenous painkillers often produced in the brain and adrenal medulla during physical exercise or orgasm; they inhibit pain and muscle cramps and relieve stress. History Opioid peptides in the brain were first discovered in 1973 by investigators at the University of Aberdeen, John Hughes and Hans Kosterlitz. They isolated "enkephalins" from pig brain, identified as Met-enkephalin and Leu-enkephalin. This came after the discovery of a receptor that was proposed to produce the pain-relieving analgesic effects of morphine and other opioids, which led Kosterlitz and Hughes to their discovery of the endogenous opioid ligands. Research during this time was focused on the search for a painkiller that did not have the addictive character or overdose risk of morphine. Rabi Simantov and Solomon H. Snyder isolated morphine-like peptides from calf brain. Eric J. Simon, who independently discovered opioid receptors, later termed these peptides endorphins. This term was essentially assigned to any peptide that demonstrated morphine-like activity. In 1976, Choh Hao Li and David Chung recorded the sequences of α-, β-, and γ-endorphin isolated from camel pituitary glands and noted their opioid activity. Li determined that β-endorphin produced strong analgesic effects. Wilhelm Feldberg and Derek George Smyth in 1977 confirmed this, finding β-endorphin to be more potent than morphine. They also confirmed that its effects were reversed by naloxone, an opioid antagonist. Studies have subsequently distinguished between enkephalins, endorphins, and endogenously produced morphine, which is not a peptide. Opioid peptides are classified based on their precursor propeptide: all endorphins are synthesized from the precursor proopiomelanocortin (POMC), all enkephalins from proenkephalin A, and all dynorphins from prodynorphin. Etymology The word endorphin is derived from the Greek for "within" (as in endogenous, "proceeding from within"), and morphine, from Morpheus, the god of dreams in Greek mythology. Thus, endorphin is a contraction of 'endo(genous) (mo)rphin' (morphin being the old spelling of morphine). Types The class of endorphins consists of three endogenous opioid peptides: α-endorphin, β-endorphin, and γ-endorphin. The endorphins are all synthesized from the precursor protein proopiomelanocortin, and all contain a Met-enkephalin motif at their N-terminus: Tyr-Gly-Gly-Phe-Met. α-endorphin and γ-endorphin result from proteolytic cleavage of β-endorphin between the Thr(16)-Leu(17) and Leu(17)-Phe(18) residues, respectively. α-endorphin has the shortest sequence, and β-endorphin has the longest sequence. α-endorphin and γ-endorphin are primarily found in the anterior and intermediate pituitary. While β-endorphin is studied for its opioid activity, α-endorphin and γ-endorphin both lack affinity for opiate receptors and thus do not affect the body in the same way that β-endorphin does. Some studies have characterized α-endorphin activity as similar to that of psychostimulants, and γ-endorphin activity as similar to that of neuroleptics. Synthesis Endorphin precursors are primarily produced in the pituitary gland. All three types of endorphins are fragments of the precursor protein proopiomelanocortin (POMC).
At the trans-Golgi network, POMC binds to a membrane-bound protein, carboxypeptidase E (CPE). CPE facilitates POMC transport into immature budding vesicles. In mammals, prohormone convertase 1 (PC1) cleaves POMC into adrenocorticotropin (ACTH) and beta-lipotropin (β-LPH). β-LPH, a pituitary hormone with little opiate activity, is then successively fragmented into different peptides, including α-endorphin, β-endorphin, and γ-endorphin. Prohormone convertase 2 (PC2) is responsible for cleaving β-LPH into β-endorphin and γ-lipotropin. Formation of α-endorphin and γ-endorphin results from proteolytic cleavage of β-endorphin. Regulation Noradrenaline has been shown to increase endorphin production within inflammatory tissues, resulting in an analgesic effect; the stimulation of sympathetic nerves by electro-acupuncture is believed to be the cause of its analgesic effects. Mechanism of action Endorphins are released from the pituitary gland, typically in response to pain, and can act in both the central nervous system (CNS) and the peripheral nervous system (PNS). In the PNS, β-endorphin is the primary endorphin released from the pituitary gland. Endorphins inhibit the transmission of pain signals by binding to μ-opioid receptors on peripheral nerves, blocking their release of the neurotransmitter substance P. The mechanism in the CNS is similar but works by blocking a different neurotransmitter: gamma-aminobutyric acid (GABA). In turn, inhibition of GABA increases the production and release of dopamine, a neurotransmitter associated with reward learning. Functions Endorphins play a major role in the body's inhibitory response to pain. Research has demonstrated that meditation by trained individuals can be used to trigger endorphin release. Laughter may also stimulate endorphin production and elevate one's pain threshold. Endorphin production can be triggered by vigorous aerobic exercise. The release of β-endorphin has been postulated to contribute to the phenomenon known as "runner's high". However, several studies have supported the hypothesis that the runner's high is due to the release of endocannabinoids rather than endorphins. Endorphins may contribute to the positive effect of exercise on anxiety and depression. The same phenomenon may also play a role in exercise addiction: regular intense exercise may cause the brain to downregulate endorphin production during periods of rest to maintain homeostasis, leading a person to exercise more intensely in order to obtain the same feeling. See also Neurobiological effects of physical exercise Enkephalin References External links Opioid peptides Analgesics Neuropeptides Stress (biological and psychological) Stress (biology) Psychological stress Motivation Pain Grief Anxiety Happy hormones
Endorphins
Biology
1,585
35,703,931
https://en.wikipedia.org/wiki/Genome%20diversity%20and%20karyotype%20evolution%20of%20mammals
The 2000s witnessed an explosion of genome sequencing and mapping in evolutionarily diverse species. While full genome sequencing of mammals is rapidly progressing, the ability to assemble and align orthologous whole chromosomal regions from more than a few species is not yet possible. The intense focus on building comparative maps for domestic (dogs and cats), laboratory (mice and rats) and agricultural (cattle) animals has traditionally served to elucidate the underlying basis of disease-related and healthy phenotypes. These maps also provide an unprecedented opportunity to use multispecies analysis as a tool to infer karyotype evolution. Comparative chromosome painting and related techniques are very powerful approaches in comparative genome studies. Homologies can be identified with high accuracy using molecularly defined DNA probes for fluorescence in situ hybridization (FISH) on chromosomes of different species. Chromosome painting data are now available for members of nearly all mammalian orders. In most orders, there are species whose rates of chromosome evolution can be considered 'default' rates. The number of rearrangements that have become fixed in evolutionary history seems relatively low given the roughly 180 million years of the mammalian radiation. Thus a record of the history of karyotype changes that have occurred during evolution has been attained through comparative chromosome maps. Mammalian phylogenomics Modern mammals (class Mammalia) are divided into Monotremes, Marsupials, and Placentals. The subclass Prototheria (Monotremes) comprises the five species of egg-laying mammals: the platypus and four echidna species. The infraclasses Metatheria (Marsupials) and Eutheria (Placentals) together form the subclass Theria. In the 2000s, understanding of the relationships among eutherian mammals experienced a virtual revolution. Molecular phylogenomics, new fossil finds and innovative morphological interpretations now group the more than 4600 extant species of eutherians into four major super-ordinal clades: Euarchontoglires (including Primates, Dermoptera, Scandentia, Rodentia, and Lagomorpha), Laurasiatheria (Cetartiodactyla, Perissodactyla, Carnivora, Chiroptera, Pholidota, and Eulipotyphla), Xenarthra, and Afrotheria (Proboscidea, Sirenia, Hyracoidea, Afrosoricida, Tubulidentata, and Macroscelidea). This tree is very useful in unifying the pieces of the puzzle in comparative mammalian cytogenetics. Karyotypes: a global view of the genome Each gene maps to the same chromosome in every cell. Linkage is determined by the presence of two or more loci on the same chromosome. The entire chromosomal set of a species is known as a karyotype. A seemingly logical consequence of descent from common ancestors is that more closely related species should have more chromosomes in common. However, species may also share phenetically similar karyotypes simply because both retain the ancestral condition (genomic conservation). Therefore, in comparative cytogenetics, phylogenetic relationships should be determined on the basis of the polarity of chromosomal differences (derived traits). Historical development of comparative cytogenetics Mammalian comparative cytogenetics, an indispensable part of phylogenomics, has evolved in a series of steps from pure description to the more heuristic science of the genomic era. Technical advances have marked the various developmental steps of cytogenetics.
Classical phase of cytogenetics The first step towards the Human Genome Project took place when Tjio and Levan, in 1956, reported the accurate diploid number of human chromosomes as 2n = 46. During this phase, data on the karyotypes of hundreds of mammalian species (including information on diploid numbers, relative length and morphology of chromosomes, and presence of B chromosomes) were described. Diploid numbers (2n) were found to vary from 2n = 6–7 in the Indian muntjac to over 100 in some rodents. Chromosome banding The second step derived from the invention of C-, G-, R- and other banding techniques and was marked by the Paris Conference (1971), which led to a standard nomenclature to recognize and classify each human chromosome. G- and R-banding The most widely used banding methods are G-banding (Giemsa banding) and R-banding (reverse banding). These techniques produce a characteristic pattern of contrasting dark and light transverse bands on the chromosomes. Banding makes it possible to identify homologous chromosomes and construct chromosomal nomenclatures for many species. Banding of homologous chromosomes allows chromosome segments and rearrangements to be identified. The banded karyotypes of 850 mammalian species were summarized in the Atlas of Mammalian Chromosomes. C-banding and heterochromatin Genome size variability in mammals is mainly due to the varying amount of heterochromatin: once the amount of heterochromatin is subtracted from total genome content, all mammals have very similar genome sizes. Mammalian species differ considerably in heterochromatin content and location. Heterochromatin is most often detected using C-banding. Early studies using C-banding showed that differences in the fundamental number (i.e., the number of chromosome arms) could be entirely due to the addition of heterochromatic chromosome arms. Heterochromatin consists of different types of repetitive DNA, not all of which are revealed by C-banding, and its amount can vary greatly between karyotypes of even closely related species. Differences in the amount of heterochromatin among congeneric rodent species may reach 33% of nuclear DNA in Dipodomys species, 36% in Peromyscus species, 42% in Ammospermophilus and 60% in Thomomys species, where the C-value (haploid DNA content) ranges between 2.1 and 5.6 pg. The red viscacha rat (Tympanoctomys barrerae) has a record C-value among mammals of 9.2 pg. Although tetraploidy was first proposed as the reason for its high genome size and diploid chromosome number, Svartman et al. showed that the high genome size was due to the enormous amplification of heterochromatin. Although one single-copy gene was found to be duplicated in its genome, the absence of large genome-segment duplications (single signals for most Octodon degus probes) together with repetitive-DNA hybridization evidence rules against tetraploidy. The study of heterochromatin composition, repeated-DNA amount and its distribution on the chromosomes of octodontids is necessary to define exactly which heterochromatin fraction is responsible for the large genome of the red viscacha rat. In comparative cytogenetics, chromosome homology between species was proposed on the basis of similarities in banding patterns. Closely related species often have very similar banding patterns, and after 40 years of comparing bands it seems safe to generalize that karyotype divergence in most taxonomic groups follows their phylogenetic relationships, despite notable exceptions.
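The C-values quoted above can be translated into approximate genome sizes in base pairs. The sketch below assumes the standard conversion factor of roughly 0.978 gigabase pairs per picogram, which is not stated in the article itself.

```python
# Sketch: converting C-values (haploid DNA content, pg) to genome sizes,
# using the standard conversion 1 pg ~= 0.978 Gbp (an assumed constant;
# the article quotes C-values only in picograms).
PG_TO_GBP = 0.978  # gigabase pairs per picogram

c_values_pg = {
    "typical rodent range (low)": 2.1,
    "typical rodent range (high)": 5.6,
    "Tympanoctomys barrerae (red viscacha rat)": 9.2,
}

for species, pg in c_values_pg.items():
    print(f"{species}: {pg} pg ~= {pg * PG_TO_GBP:.2f} Gbp")
# The 9.2 pg record corresponds to roughly 9.0 Gbp, about three times the
# size of a typical ~3 Gbp mammalian genome.
```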
The conservation of large chromosomal segments makes comparison between species worthwhile. Chromosome banding has been a reliable indicator of chromosome homology overall, i.e. a chromosome identified on the basis of banding generally carries the same genes. This relationship may fail for phylogenetically distant species or species that have experienced extremely rapid chromosome evolution. Banding is still morphological, and is not always a foolproof indicator of DNA content. Comparative molecular cytogenetics The third step occurred when molecular techniques were incorporated into cytogenetics. These techniques use DNA probes of diverse sizes to compare chromosomes at the DNA level. Homology can be confidently compared even between phylogenetically distant species or highly rearranged species (e.g., gibbons). Using cladistic analysis, rearrangements that have diversified the mammalian karyotype can be more precisely mapped and placed in a phylogenomic perspective. "Comparative chromosomics" defines the field of cytogenetics dealing with molecular approaches, although "chromosomics" was originally introduced to denote the study of chromatin dynamics and morphological changes in interphase chromosome structures. Chromosome painting, or Zoo-FISH, was the first technique to have a wide-ranging impact. With this method, the homology of chromosome regions between different species is identified by hybridizing DNA probes made from individual whole chromosomes of one species to metaphase chromosomes of another species. Comparative chromosome painting allows a rapid and efficient comparison of many species, and the distribution of homologous regions makes it possible to track the translocations that have occurred during chromosomal evolution. When many species covering different mammalian orders are compared, this analysis can provide information on trends and rates of chromosomal evolution in different branches. However, homology is only detected qualitatively, and resolution is limited by the size of the visualized regions. Thus, the method does not detect all the small homologous regions produced by multiple rearrangements (as between mouse and human). The method also fails to report internal inversions within large segments. Another limitation is that painting across great phylogenetic distances often results in decreased efficiency. Nevertheless, the use of painting probes derived from different species, combined with comparative sequencing projects, helps to increase the resolution of the method. In addition to flow sorting, microdissection of chromosomes and chromosome regions was also used to obtain probes for chromosome painting. The best results were obtained when a series of microdissection probes covering the total human genome were localized on anthropoid primate chromosomes via multicolor banding (MCB). However, a limitation of MCB is that it can only be used within a group of closely related species (its "phylogenetic" resolution is otherwise too low). Spectral karyotyping (SKY) and M-FISH, which involve ratio labeling and simultaneous hybridization of a complete chromosomal set, have similar drawbacks and little application outside of clinical studies. Comparative genomics data, including chromosome painting, confirmed the substantial conservation of mammalian chromosomes. Total human chromosomes or their arms can efficiently paint extended chromosome regions in many placentals, down to Afrotheria and Xenarthra. Gene localization data on human chromosomes can be extrapolated to the homologous chromosome regions of other species with high reliability.
Usefully, humans retain a conserved syntenic chromosome organization similar to the ancestral condition of all placental mammals. Post-genomic time and comparative chromosomics After the Human Genome Project, researchers focused on evolutionary comparisons of the genome structures of different species. The whole genome of any species can be sequenced completely and repeatedly to obtain a comprehensive single-nucleotide map. This makes it possible to compare the genomes of any two species regardless of their taxonomic distance. Sequencing efforts provided a variety of products useful in molecular cytogenetics. Fluorescence in situ hybridization (FISH) with DNA clones (BAC and YAC clones, cosmids) allowed the construction of chromosome maps at a resolution of several megabases that could detect relatively small chromosome rearrangements. A resolution of several kilobases can be achieved on interphase chromatin. A limitation is that hybridization efficiencies decrease with increasing phylogenetic distance. Radiation hybrid (RH) genome mapping is another efficient approach. This method involves irradiating cells to break the genome into the desired number of fragments, which are subsequently rescued by fusion with Chinese hamster cells. The resulting somatic cell hybrids contain individual fragments of the relevant genome. Then, 90–100 (sometimes more) clones covering the total genome are selected, and the sequences of interest are localized on the cloned fragments via the polymerase chain reaction (PCR) or direct DNA–DNA hybridization. To compare the genomes and chromosomes of two species, RHs should be obtained for both species. Sex chromosome evolution In contrast to many other taxa, therian mammals and birds are characterized by highly conserved systems of genetic sex determination that lead to specialized chromosomes, the sex chromosomes. Although the XX/XY sex chromosome system is the most common among eutherian species, it is not universal. In some species, X-autosome translocations result in the appearance of "additional Y" chromosomes (for example, the XX/XY1Y2Y3 system in the black muntjac). In other species, Y-autosome translocations lead to the appearance of additional X chromosomes (for example, in some New World primates such as howler monkeys). In this respect, rodents again represent a peculiar derived group, comprising a record number of species with non-classical sex chromosomes, such as the wood lemming, the collared lemming, the creeping vole, the spinous country rat, Akodon species and the bandicoot rat. References Cytogenetics Genomics Genome Mammal genetics Phylogenetics
Genome diversity and karyotype evolution of mammals
Biology
2,722
27,921,256
https://en.wikipedia.org/wiki/Engineers%20of%20Sweden
Engineers of Sweden (Sveriges Ingenjörer) is a trade union and professional association in Sweden with around 160,000 members. It was created in 2007 by the merger of Sveriges Civilingenjörsförbund, which also used the name Swedish Association of Graduate Engineers in English, with the smaller Ingenjörsförbundet; in 2022 the union changed its English name to the less formal-sounding Engineers of Sweden. References External links Trade unions in Sweden Professional associations based in Sweden Engineering organizations
Engineers of Sweden
Engineering
99
68,185,380
https://en.wikipedia.org/wiki/Telecommunication%20Instructional%20Modeling%20System
TIMS, or Telecommunication Instructional Modeling System, is an electronic device invented by Tim Hooper and developed by the Australian engineering company Emona Instruments that is used as a telecommunications trainer in educational settings and universities. History TIMS was designed at the University of New South Wales by Tim Hooper in 1971. It was developed to run student experiments for electrical engineering communications courses. Hooper's concept was developed into the current TIMS model in the late 1980s. In 1986, the project won a competition organized by Electronics Australia for development work using the Texas Instruments TMS320. Emona Instruments also received an award for TIMS at the fifth Secrets of Australian ICT Innovation Competition. Methodology TIMS uses a block diagram-based interface for experiments in the classroom. It can model mathematical equations to simulate electric signals, or it can use block diagrams to simulate telecommunications systems. A different hardware card represents the function of each block of the diagram. TIMS consists of a server, a chassis, and boards that can emulate the configurations of a telecommunications system. It uses electronic circuits as modules to simulate the components of analog and digital communications systems. The modules can perform different functions such as signal generation, signal processing, signal measurement, and digital signal processing. Variants The block diagram approach to modeling the mathematics of a telecommunication system has also been ported to other domains. Simulation The blocks are patched together onscreen to mimic the hardware implementation, but with a simulation engine (known as TutorTIMS). Remote access TIMS can be used by multiple students at once across the internet or a LAN via a browser-based client screen. This utilises a statistical time-division multiplexing architecture in the control unit. The method is applied to both telecommunications and electronics laboratories (known as netCIRCUITlabs). References External links Official website 1971 establishments Electronics Electrical engineering Telecommunications
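As a purely illustrative sketch of the block-diagram methodology described above, the snippet below chains small "modules" (signal sources, an adder and a multiplier) into an AM modulator. The module names are hypothetical and are not part of any TIMS or TutorTIMS interface.

```python
import numpy as np

# Illustrative sketch of block-diagram modelling in the TIMS style:
# each "module" is a small function, and a system is a chain of modules.
fs = 100_000                          # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of signal

def audio_oscillator(freq_hz):        # message-source block
    return np.sin(2 * np.pi * freq_hz * t)

def master_signal(freq_hz):           # carrier-source block
    return np.cos(2 * np.pi * freq_hz * t)

def adder(a, b):                      # adder block
    return a + b

def multiplier(a, b):                 # four-quadrant multiplier block
    return a * b

# AM, modelled from its defining equation (1 + m(t)) * c(t):
message = audio_oscillator(1_000)     # 1 kHz message
carrier = master_signal(10_000)       # 10 kHz carrier
am_signal = multiplier(adder(1.0, 0.5 * message), carrier)  # 50% depth

print(f"peak amplitude: {np.abs(am_signal).max():.2f}")  # ~1.5 for 50% depth
```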
Telecommunication Instructional Modeling System
Technology,Engineering
370
60,066,677
https://en.wikipedia.org/wiki/CoRoT-16b
CoRoT-16b is a transiting exoplanet orbiting the G- or K-type main-sequence star CoRoT-16, 2,433 light years away in the southern constellation Scutum. The planet was discovered in June 2011 by the French-led CoRoT mission. CoRoT-16b was detected using the transit method, which measures the dimming of a star's light as a planet passes in front of it. Because of its short orbital period, taking only about 5 days to orbit CoRoT-16, the planet is classified as a "hot Jupiter". However, its orbit is unusually eccentric, which is surprising given the planet's proximity to its parent star and the age of the system, since tidal forces would be expected to have circularized the orbit. CoRoT-16b has 52.9% the mass of Jupiter but a radius 17% larger. Owing to this low mass and large radius, CoRoT-16b has 41% the density of water; its orbit gives it an equilibrium temperature of 1,086 K, although this is only an estimate due to the eccentricity of the orbit. References Hot Jupiters Transiting exoplanets Exoplanets discovered in 2011 16b Scutum (constellation)
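As a consistency check, the quoted bulk density follows directly from the quoted mass and radius. The sketch below uses standard reference values for Jupiter's mass and equatorial radius; everything else comes from the article.

```python
import math

# Consistency check of CoRoT-16b's quoted bulk density from its quoted
# mass and radius. Jupiter reference values are standard constants.
M_JUP = 1.898e27    # kg
R_JUP = 7.1492e7    # m (equatorial)
RHO_WATER = 1000.0  # kg/m^3

mass = 0.529 * M_JUP      # 52.9% of Jupiter's mass
radius = 1.17 * R_JUP     # radius 17% larger than Jupiter's

volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume

print(f"density: {density:.0f} kg/m^3 "
      f"({density / RHO_WATER:.0%} of water)")  # ~410 kg/m^3, ~41% of water
```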
CoRoT-16b
Astronomy
253
74,293,775
https://en.wikipedia.org/wiki/Iberian%20orca%20attacks
Beginning in 2020, a subpopulation of orcas (Orcinus orca) began ramming boats and attacking their rudders in waters off the Iberian Peninsula. The behaviour has generally been directed towards slow-moving, medium-sized sailboats in the Strait of Gibraltar and off the Portuguese, Moroccan and Galician coasts. The novel behaviour is thought to have spread between different pods, with over 500 reported interactions from 2020 to 2023 attributed to fifteen different individual orcas (the exact number is still debated among scientists). Background The Iberian orca subpopulation lives in the coastal waters of the Iberian Peninsula and is genetically distinct from other orca populations in the Northeast Atlantic. The orcas follow the seasonal migration of Atlantic bluefin tuna (Thunnus thynnus), their primary food source, gathering in the early spring in the Strait of Gibraltar. Through the summer, they remain in the Strait before travelling north along the coasts of Portugal and Spain's Galicia, then head to deeper waters in the fall. While orcas typically engage in persistence hunting, two of the residential orca pods have been seen taking fish from Moroccan and Spanish fishery droplines. A complete census of the Iberian orca subpopulation was undertaken in 2011, finding 39 members divided into five pods. The subpopulation was listed as endangered by the Spanish National Catalogue of Endangered Species the same year, and as critically endangered on the International Union for Conservation of Nature's Red List in 2019. The Gladises Fifteen individual Iberian orcas involved in the interactions have been identified through photography and witness descriptions. Each orca involved in incidents and making contact with vessels is given the designation Gladis, indicating that it has been involved in interactions with ships. The name "Gladis" is a reference to the old scientific name for orcas, Orcinus gladiator, which means "whale-fighter" in Latin. In a 2022 journal article analysing photographic evidence and testimonies from the incidents, 31 distinct orcas were identified, nine of which had direct contact with vessels and were given the designation Gladis. Two pods of orcas were identified, one including the adult Gladis Blanca (White Gladis), her offspring, Gladis Filabres (b. 2021), and her sisters, Gladis Dalila and Gladis Clara. Gladis Blanca's mother, Gladis Lamari, was also observed but never approached the vessels. The second residential pod consists of three juveniles, Gladis Gris (Grey Gladis) and the siblings Gladis Peque and Gladis Negra (Black Gladis), as well as their mother Gladis Herbille, who was also occasionally observed during the interactions but did not participate. By 2023, the number of Gladises had increased to 15. Method of attack In interactions where orcas have come into physical contact with vessels, the pod typically approaches stealthily from the stern. Contact with the vessels includes ramming, nudging, and biting, usually focused on the rudder. Orcas have been observed using their heads to push the rudder or using their bodies to make lever movements, causing the rotation of the rudder and "in some cases pivoting the boat almost 360°". Inspection of vessels reporting physical contact revealed that orcas had raked their teeth against the bow, keel, and rudders. More seriously damaged rudders were split in half, completely detached, or bent at their stocks.
At least one orca has been observed tearing off a boat rudder with its teeth. Monohulled sailing vessels are the most frequent targets of the orcas, with yachts, catamarans, and vessels with spade rudders being the types most often attacked and damaged. The vessels reporting interactions have averaged 12 metres in length and were travelling, on average, at 5.93 knots, a speed easily matched by orcas. Interactions between the orcas and vessels have occurred most frequently during the day, peaking around midday, and usually last for less than half an hour, though engagements of up to two hours have been reported. Attempts by crews to control the wheel or increase the speed of their vessel have often resulted in more frequent and forceful pushes from the orcas. The orcas usually lose interest after the human crews slow or stop their vessels. History of interactions Since 2020, there have been around 500 recorded interactions between orcas and vessels. Over 250 boats have been damaged by the orcas and four vessels have sunk. The frequency of attacks has increased over time. From July until November 2020, 52 orca interactions were reported. The behaviour continued into 2021, with another 197 interactions recorded, and into 2022, with 207 interactions. Researchers from the Atlantic Orca Working Group reported that only 20 percent of vessels having physical interactions with the orcas had been severely damaged. No humans have been harmed during any of the interactions. The first reported orca-boat interaction occurred in the Strait of Gibraltar in May 2020. Other incidents were reported in July of that year, both in the Strait and off the coast of Portugal. Later, in mid-August, interactions between orcas and vessels were observed in northern Spain, off Galicia. Sunken vessels A sailboat with five passengers sank following an orca encounter in July 2022. Another sailboat, with four people aboard, sank in November 2022. During an incident in the Strait of Gibraltar on 4 May 2023, the Swiss sailing yacht Champagne was running under engine when it was set upon by three orcas. The largest orca rammed the vessel from the side, while the two smaller orcas shook the rudder. The rudder was pierced with two holes and the steering quadrant was broken off. A crew member reported that the two smaller orcas were copying the behaviour of the larger one, ramming into the rudder and the keel. The crew was rescued by the Spanish coast guard and the vessel was towed to the port of Barbate, where it capsized at the entrance. On 31 October 2023, the yacht Grazie Mamma II had an encounter with a pod of orcas. The orcas interacted with the yacht for 45 minutes, bumping against the blade of the rudder and causing damage and leaks. No humans were harmed, but the vessel sank near the entrance to the port of Tanger-Med. On 12 May 2024, the Spanish yacht Alboran Cognac was attacked by orcas and holed. Both people on board were rescued by a tanker, and the yacht subsequently sank in the Strait of Gibraltar. In a similar incident, orcas attacked and sank the British sailing yacht Bonhomie William in the Strait of Gibraltar on 26 July 2024. All three people on board were rescued by the Spanish coastguard. North Sea incident An incident involving an orca ramming a yacht in the North Sea near Shetland occurred in June 2023. The interaction led to speculation that the Iberian orca behaviour was "leapfrogging through the various pods/communities".
Possible motivations An article in Marine Mammal Science published in 2022 suggested various possible motivations for the orca behaviour. The interactions may be playful, a result of the marine mammals' natural curiosity. Researcher Deborah Giles said that orcas are "incredibly curious and playful animals and so this might be more of a play thing as opposed to an aggressive thing." Gibraltar-based marine biologist Eric Shaw argued that the orcas were displaying protective behaviours and were intentionally targeting the rudder with the understanding that it would immobilize the vessel, just as attacking the tail of a prey animal, a documented predation behaviour, would immobilize it. The behaviour could also be the result of a combination of factors, including disturbances created by vessels, depletion of the orcas' prey and interaction with fisheries. Another possibility is that the behaviour was triggered by a "punctual aversive incident", such as one of the orcas colliding with a vessel and sustaining injuries. Researchers have also suggested that the behaviour could be a fad; other such cultural phenomena among orcas have been short-lived, such as in 1987 when southern resident orcas from Puget Sound carried dead salmon around on their heads. Renaud de Stephanis, coordinator of the research group CIRCE (Conservación, Información y Estudio sobre Cetáceos), suggested that the orcas break the rudder out of frustration, preferring the sensation of the propeller when a sailboat is running its engine. Human response The rate of orca-boat interactions and their geographic spread prompted the formation in August 2020 of a working group for the issue, the Atlantic Orca Working Group (Grupo de Trabajo Orca Atlántica; GTOA). A Facebook group, Orca Attack Reports, was created to facilitate the sharing of information about the interactions. Radio warnings have been issued alerting vessels to the orcas' presence and suggesting keeping a distance. In 2020 and 2021, Spanish maritime traffic safety authorities briefly prohibited sailing vessels under 15 metres from navigating along the coast where interactions had occurred. The development and testing of acoustic deterrents to dissuade the orcas was announced by the Portuguese National Association of Cruise Ships (Associação Nacional de Cruzeiros) in 2023. Media outlets have sensationalised the incidents, often providing anthropomorphic rationales for the orca behaviour. Many have attributed the incidents to revenge for some kind of wrong inflicted on one of the orcas, usually White Gladis. Social media reactions have included the generation of memes related to an "orca uprising" or "orca wars", with some observers calling the behaviour "an act of anti-capitalist solidarity from 'orca comrades' and 'orca saboteurs'". In 2023, the Spanish government planned to satellite-tag six orcas involved in these attacks in order to track their movements and minimize further interactions.
See also Killer whales of Eden, New South Wales Orca attacks, a list of attacks by captive orcas References External links Orca Attack Reports Facebook group Atlantic Orca Working Group FriendShip Project of the Atlantic Orca Working Group CIRCE - (Conservación, Información y Estudio sobre Cetáceos) - Research of Orca Iberica ORCINUS APP - App for reporting and tracking of Orcas - Portos de Galicia orcas.pt - Website for reporting / tracking of orcas sightings / attacks Recommendations for boaters if orcas interact with the boat Yachting Ships sunk with no fatalities Orca attacks Maritime incidents in 2020 Maritime incidents in 2021 Maritime incidents in 2022 Maritime incidents in 2023 Whale collisions with ships Sailing in Spain Maritime incidents in Spain Sailing in Portugal Maritime incidents in Portugal History of the Iberian Peninsula Abnormal behaviour in animals
Iberian orca attacks
Biology
2,186
9,197,779
https://en.wikipedia.org/wiki/Dambo
A dambo is a class of complex shallow wetlands in central, southern and eastern Africa, particularly in Zambia, Malawi and Zimbabwe. They are generally found in higher-rainfall flat plateau areas and have river-like branching forms which in themselves are not very large but combined add up to a large area. Dambos have been estimated to comprise 12.5% of the area of Zambia. Similar African words include mbuga (commonly used in East Africa), matoro (Mashonaland), vlei (South Africa), fadama (Nigeria), and bolis (Sierra Leone); the French bas-fond and German Spültal have also been suggested as referring to similar grassy wetlands. Characteristics Dambos are characterised by grasses, rushes and sedges, contrasting with surrounding woodland such as miombo woodland. They may be substantially dry at the end of the dry season, revealing grey soils or black clays, but unlike a flooded grassland, they retain wet lines of drainage through the dry season. They are inundated (waterlogged) in the wet season, but not generally above the height of the vegetation, and any open water surface is usually confined to streams and small ponds or lagoons (small swamps) at the lowest point, generally near the centre. The name dambo is most frequently used for wetlands on flat plateaus which form the headwaters of streams. The definition proposed for scientific purposes is "seasonally waterlogged, predominantly grass covered, depressions bordering headwater drainage lines". Types The problem with the preceding definition is that the word may also be used for wetlands bordering rivers far from the headwaters, for example the dambo of the Mbereshi River where it enters the swamps of the Luapula River in Zambia. A 1998 report of the Food and Agriculture Organization distinguishes between 'hydromorphic/phreatic' dambos (associated with headwaters) and 'fluvial' dambos (associated with rivers), and also referred to five geomorphological types in Zambia's Luapula Province: upland, valley, hanging, sand dune and pan dambos. Hydrology Dambos are fed by rainfall which drains out slowly to feed streams, and they are therefore a vital part of the water cycle. As well as being complex ecosystems, they also play a role in the biodiversity of the region. There is a popular idea that dambos act like sponges, soaking up the wet-season rain and releasing it slowly into rivers during the dry season, thus ensuring a year-round flow; this is contradicted, however, by research suggesting that in the middle to late dry season the water is actually released from aquifers. Springs are seen in some dambos. Thus it may take a long time, perhaps several years, for water from a heavy rainy season to percolate through hills and emerge in a dambo, creating lagoons there or a flow in downstream rivers which cannot be explained by the previous year's rainfall. Dambos may be involved, for instance, in explaining puzzling variations in water level or flow in Lake Mweru Wantipa and Lake Chila in Mbala.
Use Traditionally, dambos have been exploited: as a dry-season water source; for rushes used as thatching and fencing material; for clay used for building, brick-making and earthenware; for hunting (especially birds and small antelope); for growing vegetables and other food crops, which can be vital in drought years since dambo soils usually retain enough moisture to produce a harvest when the rains fail; for soaking bitter cassava in dug ponds; and, in those dambos with streams, for fishing (generally using fish traps). More recently, they have been used for fish ponds and growing upland rice. Efforts to develop dambos agriculturally have been hampered by a lack of research on the hydrology and soils of dambos, which have proved to be variable and complex. Example A dambo can be seen about 30 km south of Mansa, Zambia, in a forest reserve. Unlike the neighbouring areas, which have been cleared for farming and charcoal-burning, the dambo contrasts well with the undisturbed miombo woodland canopy. Headwater dambos have a branching structure like rivers. Most of the dambos have roughly the same width and form the same sort of pattern. An example of a pan dambo can be seen about 102 km north-west of Mulobezi, Zambia. The water in the pan has dried out, and the grass has been burnt off, giving the dark appearance at the centre of the dambo. To the east and west of the pan dambo, a series of dambos can be seen along two river courses. References Wetlands Landforms Wetlands of Zambia Wetlands of Zimbabwe
Dambo
Environmental_science
970
4,051,670
https://en.wikipedia.org/wiki/Secular%20resonance
A secular resonance is a type of orbital resonance between two bodies with synchronized precessional frequencies. In celestial mechanics, secular refers to the long-term motion of a system, and a resonance occurs when periods or frequencies are in a simple numerical ratio of small integers. Typically, the synchronized precessions in secular resonances are between the rates of change of the arguments of periapsis or the rates of change of the longitudes of the ascending nodes of two bodies in the system. Secular resonances can be used to study the long-term orbital evolution of asteroids and their families within the asteroid belt. Description Secular resonances occur when the precession of two orbits is synchronised (a precession of the perihelion, with frequency g, or of the ascending node, with frequency s, or both). A small body (such as a small Solar System body) in secular resonance with a much larger one (such as a planet) will precess at the same rate as the large body. Over relatively short time periods (a million years or so), a secular resonance will change the eccentricity and the inclination of the small body. One can distinguish between: linear secular resonances between a body (no subscript) and a single other large perturbing body (e.g. a planet, subscripted as numbered from the Sun), such as the ν6 = g − g6 secular resonance between asteroids and Saturn; and nonlinear secular resonances, which are higher-order resonances, usually combinations of linear resonances, such as the z1 = (g − g6) + (s − s6) or the ν6 + ν5 = 2g − g6 − g5 resonances. ν6 resonance A prominent example of a linear resonance is the ν6 secular resonance between asteroids and Saturn. Asteroids whose perihelion precession rate approaches Saturn's (g ≈ g6) have their eccentricity slowly increased until they become Mars-crossers, at which point they are usually ejected from the asteroid belt by a close encounter with Mars. The resonance forms the inner and "side" boundaries of the asteroid belt, around 2 AU and at inclinations of about 20°. See also Orbital resonance Asteroid belt References Orbital perturbations Orbital resonance
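The resonance conditions above amount to a resonant argument frequency near zero. The sketch below evaluates the ν6 and z1 combinations for a hypothetical asteroid, using approximate tabulated planetary secular frequencies (g5, g6, s6 in arcseconds per year); the asteroid's frequencies and the "near resonance" threshold are illustrative assumptions.

```python
# Sketch: testing proximity to the secular resonances named above.
# Planetary secular frequencies are standard tabulated values, quoted
# approximately; the asteroid's proper frequencies are hypothetical.
G5, G6 = 4.26, 28.25   # perihelion-precession frequencies of Jupiter, Saturn (arcsec/yr)
S6 = -26.35            # nodal-precession frequency of Saturn (arcsec/yr)

def nu6(g):            # linear resonance: nu6 frequency = g - g6
    return g - G6

def z1(g, s):          # nonlinear resonance: z1 frequency = (g - g6) + (s - s6)
    return (g - G6) + (s - S6)

g, s = 28.0, -26.5     # hypothetical inner-belt asteroid, arcsec/yr

for name, freq in [("nu6", nu6(g)), ("z1", z1(g, s))]:
    # A resonant argument frequency near zero means the precessions are
    # nearly synchronized, i.e. the body lies close to that resonance.
    tag = " (near resonance)" if abs(freq) < 0.5 else ""
    print(f"{name}: {freq:+.2f} arcsec/yr{tag}")
```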
Secular resonance
Physics,Chemistry,Astronomy
449
51,902,843
https://en.wikipedia.org/wiki/Population%20health%20policies%20and%20interventions
Population health, a field which focuses on the improvement of health outcomes for a group of individuals, has been described as consisting of three components: "health outcomes, patterns of health determinants, and policies and interventions". Policies and interventions define the means by which health outcomes and patterns of health determinants are addressed. Helpful policies "improve the conditions under which people live". Interventions encourage healthy behaviors for individuals or populations through "program elements or strategies designed to produce behavior changes or improve health status". Policies and interventions are needed because of inequalities among populations and the inconsistent way care is administered. Policies can include the provision of "necessary community and personal social and health services", taxes on alcohol and soft drinks, and smoking cessation measures. Interventions can include therapeutic or preventative health care and may also include actions taken by the individual or by someone on behalf of the individual. The application of population health is determined by the policies and interventions which can be implemented within an organization, city, state or country. Common methodology Countries, states, provinces and providers across the globe are shifting towards better systems to respond to inconsistent health outcomes, mitigate decreasing margins and replace outdated methods such as fee-for-service health delivery. Payment model reforms, including the Accountable Care Organization (ACO), provide roadmaps for healthcare reform and drive many of its constituents towards more effective and innovative means of improving health outcomes. Population health management is a common approach for resolving these challenges, but it involves new methods, tools, systems and implementations to correct inefficiencies and improve health outcomes. Population health tools and computer systems include data exchange, large datasets, and advanced software, which supply data scientists and research teams with appropriate information that can then be used by policy makers and change agents. This method helps to set policies around population health, as well as intervention strategies which are then used to respond to the needs of a population. Policies and policymakers Policy for population health "sets priorities" and is a "guide to action to change what would otherwise occur". Policies are based on the "social sciences of sociology, economics, demography, public health, anthropology, and epidemiology", determine how outcomes are to be accomplished, and are implemented at various levels. Such guides shape laws, policies, and ordinances and are defined by policymakers. Examples of policies include "smoking bans, excise taxes on cigarettes and alcohol, seat belt laws, water fluoridation, and restaurant menu labeling". They may be applied to a commercial establishment such as a restaurant or business workplace, or at a city or state level. Policies should be evidence-based and require academic studies or research to support the approach. This assures that the appropriate measures needed for each demographic are promoted, encouraging the necessary intervention practices which can be applied to each population or to the nation as a whole. Policymakers can be classified as both private and public, and are defined as those in a position of authority to implement health policies.
A public policymaker could be a government official, and a private policymaker could be a business owner or administrator. Policymakers are influenced by, and can also be, change agents. Change agents include "legislators in Washington, an attorney general, regulators at the FDA, an advocacy group or other organizations that clearly have influence". Political strategy The goal of any political strategy surrounding population health is to "improve chances of success for policy adoption and implementation". Such strategies include the creation of funds to support initiatives and measures which limit conflicts of interest in the implementation of public policy. Tobacco control A political strategy implemented to limit the sale of, and exposure to, tobacco products and to restrict tobacco companies' ability to benefit politically from charitable donations is the WHO Framework Convention on Tobacco Control (FCTC) treaty. The legally binding document is supported by numerous countries and government/nongovernment agencies, and provides the necessary power to prevent negative influences on population health policies. Interventions Interventions in population health "shift the distribution of health risk by addressing the underlying social, economic and environmental conditions" and are implemented through "programs or policies designed and developed in the health sector, but they are more likely to be in sectors elsewhere, such as education, housing or employment". They are aimed at reducing such things as childhood obesity, cardiovascular disease, smoking and mental health issues throughout society. Interventions are devised through extensive research and contributions from medical scientists, researchers, and medical professionals. They are implemented by, but not limited to, educators, practicing physicians, business administrators and mental health professionals. Approaches and implementations A typical approach includes assessing the conditions and possible improvements which can be made within the social determinants that have been identified. Each approach is handled at a state or health plan level. One example was a workplace in China which implemented policies and interventions for its staff to fight depression. By recognizing the importance of mental health, the company was able to reduce depression and improve job satisfaction, and it published its research and findings to promote "enterprises taking more responsibility for the provision of mental health services to their employees". Another example was the implementation of a smoking cessation program in the province of Ontario. Studies were performed on weekly visit rates to psychiatric emergency departments before and after the implementation. The result was a "15.5% reduction in patient visits for patients with a primary diagnosis of psychotic disorder". Inequalities and variance of implementation Consistent with the common understanding of population health, health inequalities, defined as a "generic term used to designate differences, variations, and disparities in the health achievements of individuals and groups", must be considered to correctly implement the most effective policies and interventions. Depending on a population's socioeconomic, geographic, ethnic and other factors, policies and interventions may vary. Policies implemented for one population may be less effective and more costly than they would be for another, similar population.
For example, US policies tend to be more costly and have less impact than European ones. Research has shown that in some instances "Americans had worse outcomes than their international peers" and also had "the lowest life expectancy at birth of the countries studied". See also Population health Community health Economic inequality Health disparities Health impact assessment Inequality in disease List of countries by income equality Social determinants of health Sin tax Sugary drinks tax WHO Framework Convention on Tobacco Control (FCTC) References Further reading Agafonow, Alejandro (2018). Setting the bar of social enterprise research high. Learning from medical science. Social Science & Medicine, Vol. 214, October, pp. 49–56. DOI: 10.1016/j.socscimed.2018.08.020 External links http://www.improvingpopulationhealth.org/blog/policies-and-programs.html Demography Global health Health economics Social classes
Population health policies and interventions
Environmental_science
1,394
63,917,914
https://en.wikipedia.org/wiki/Carbohydride
Carbohydrides (or carbide hydrides) are solid compounds in one phase composed of a metal with carbon and hydrogen in the form of carbide and hydride ions. The term carbohydride can also refer to a hydrocarbon. Structure and bonding Many of the transition metal carbohydrides are non-stoichiometric, particularly with respect to hydrogen, whose proportion can vary up to a theoretical stoichiometrically balanced limit. The hydrogen and carbon occupy holes in the metal's crystalline lattice. The carbon takes up octahedral sites (surrounded by six metal atoms) and the hydrogen takes up tetrahedral sites in the metal lattice. The hydrogen atoms occupy sites away from the carbon atoms, and away from each other, at least 2 Å apart, so there are no covalent bonds between the carbon and hydrogen atoms. Overall the lattice retains the high symmetry of the original metal. Nomenclature A carbodeuteride (or carbo-deuteride) is a compound where the hydrogen is of the isotope deuterium. Properties Reactions Metal carbide hydrides give off hydrogen when heated, and are in equilibrium with a partial pressure of hydrogen that depends on the temperature. When Ca2LiC3H is heated with ammonium chloride, the gas C3H4 (methylacetylene-propadiene) is produced. Comparisons There are also metal cluster molecules and ions that contain both carbon and hydrogen. Methylidyne complexes contain the CH group with three bonds to a metal, e.g. NiCH+ or PtCH+. Natural occurrence Iron carbide hydrides do not appear to be stable at the conditions present in the Earth's inner core, even though carbon and hydrogen have been proposed as alloying light elements in the core. Applications Carbohydrides are studied for their potential in hydrogen storage. Carbohydrides may also form when carbides are manufactured by milling, using hydrocarbons as a carbon source. Since the carbohydride is not the desired outcome, other material such as graphite is added to try to maximise carbide production. Preparation Transition metal carbohydrides can be produced by heating a metal carbide in hydrogen, for example at 2000 °C and 3 bars. This reaction is exothermic and only needs to be ignited at a much lower temperature; the process is called self-propagating high-temperature synthesis (SHS). A carbohydride may also be formed when the metal is milled in a hydrocarbon, for example in the manufacture of titanium carbide. Rare earth carbohydrides can be prepared by heating a metal hydride with graphite in a closed metal container under a hydrogen atmosphere. List References Carbides Hydrides Mixed anion compounds
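The temperature dependence of the equilibrium hydrogen pressure mentioned under Reactions is commonly described for metal hydrides by a van 't Hoff relation. The sketch below illustrates the idea with hypothetical round-number enthalpy and entropy values; these are assumptions for illustration, not measured data for any particular carbohydride.

```python
import math

# Sketch of the temperature dependence of the equilibrium hydrogen pressure
# over a metal (carbide) hydride, using the van 't Hoff relation
#   ln(p/p0) = -dH/(R*T) + dS/R
# dH and dS below are hypothetical round numbers for illustration only.
R = 8.314        # gas constant, J/(mol K)
P0 = 1.0e5       # reference pressure, Pa (1 bar)
DH = 75_000.0    # J per mol H2 released (assumed)
DS = 130.0       # J/(mol K) (assumed; typical order of magnitude for H2 release)

def equilibrium_pressure(temperature_k: float) -> float:
    """Equilibrium H2 pressure (Pa) at the given temperature (K)."""
    return P0 * math.exp(-DH / (R * temperature_k) + DS / R)

for T in (300, 600, 900):
    print(f"T = {T} K: p(H2) ~ {equilibrium_pressure(T):.3g} Pa")
# The pressure rises steeply with temperature, which is why heating drives
# hydrogen release, as described above.
```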
Carbohydride
Physics,Chemistry
584
416,754
https://en.wikipedia.org/wiki/Photodynamic%20therapy
Photodynamic therapy (PDT) is a form of phototherapy involving light and a photosensitizing chemical substance, used in conjunction with molecular oxygen to elicit cell death (phototoxicity). PDT is used in treating acne, wet age-related macular degeneration, psoriasis, and herpes. It is used to treat malignant cancers, including head and neck, lung, bladder and skin cancers. Advantages include a reduced need for delicate surgery and lengthy recuperation, and minimal formation of scar tissue and disfigurement. A side effect is the associated photosensitisation of skin tissue. Basics PDT applications involve three components: a photosensitizer, a light source and tissue oxygen. The wavelength of the light source needs to be appropriate for exciting the photosensitizer to produce radicals and/or reactive oxygen species. These are free radicals (Type I), generated through electron abstraction or transfer from a substrate molecule, and a highly reactive state of oxygen known as singlet oxygen (Type II). PDT is a multi-stage process. First a photosensitiser, ideally with negligible toxicity other than its phototoxicity, is administered in the absence of light, either systemically or topically. When a sufficient amount of photosensitiser has accumulated in diseased tissue, the photosensitiser is activated by exposure to light for a specified period. The light dose supplies sufficient energy to stimulate the photosensitiser, but not enough to damage neighbouring healthy tissue. The reactive oxygen kills the target cells. Reactive oxygen species In air and tissue, molecular oxygen (O2) occurs in a triplet state, whereas almost all other molecules are in a singlet state. Reactions between triplet and singlet molecules are spin-forbidden by quantum mechanics, making oxygen relatively non-reactive under physiological conditions. A photosensitizer is a chemical compound that can be promoted to an excited state upon absorption of light, undergo intersystem crossing (ISC) and then transfer energy to oxygen to produce singlet oxygen. This species is highly cytotoxic, rapidly attacking any organic compounds it encounters, and is short-lived in cells, with an average lifetime of 3 μs. Photochemical processes When a photosensitiser is in its excited state (3Psen*), it can interact with molecular triplet oxygen (3O2) and produce radicals and reactive oxygen species (ROS), crucial to the Type II mechanism. These species include singlet oxygen (1O2), hydroxyl radicals (•OH) and superoxide (O2−) ions. They can interact with cellular components including unsaturated lipids, amino acid residues and nucleic acids. If sufficient oxidative damage ensues, this results in target-cell death (only within the illuminated area). Photochemical mechanisms When a chromophore molecule, such as a cyclic tetrapyrrolic molecule, absorbs a photon, one of its electrons is promoted into a higher-energy orbital, elevating the chromophore from the ground state (S0) into a short-lived, electronically excited state (Sn) composed of vibrational sub-levels (Sn′). The excited chromophore can lose energy by rapidly decaying through these sub-levels via internal conversion (IC) to populate the first excited singlet state (S1), before quickly relaxing back to the ground state. The decay from the excited singlet state (S1) to the ground state (S0) occurs via fluorescence (S1 → S0). Singlet state lifetimes of excited fluorophores are very short (τfl
= 10−9–10−6 seconds) since transitions between states of the same spin (S → S or T → T) conserve the spin multiplicity of the electron and, according to the spin selection rules, are therefore considered "allowed" transitions. Alternatively, an excited singlet state electron (S1) can undergo spin inversion and populate the lower-energy first excited triplet state (T1) via intersystem crossing (ISC); this is a spin-forbidden process, since the spin of the electron is no longer conserved. The excited electron can then undergo a second spin-forbidden inversion and depopulate the excited triplet state (T1) by decaying to the ground state (S0) via phosphorescence (T1 → S0). Owing to the spin-forbidden triplet-to-singlet transition, the lifetime of phosphorescence (τP = 10−3–1 second) is considerably longer than that of fluorescence. Photosensitisers and photochemistry Tetrapyrrolic photosensitisers in the excited singlet state (1Psen*, S>0) are relatively efficient at intersystem crossing and can consequently have a high triplet-state quantum yield. The longer lifetime of this species is sufficient to allow the excited triplet state photosensitiser to interact with surrounding bio-molecules, including cell membrane constituents. Photochemical reactions Excited triplet-state photosensitisers can react via Type-I and Type-II processes. Type-I processes can involve the excited singlet or triplet photosensitiser (1Psen*, S1; 3Psen*, T1); however, owing to the short lifetime of the excited singlet state, the photosensitiser can only react in this state if it is intimately associated with a substrate. In both cases the interaction is with readily oxidisable or reducible substrates. Type-II processes involve the direct interaction of the excited triplet photosensitiser (3Psen*, T1) with molecular oxygen (3O2, 3Σg). Type-I processes Type-I processes can be divided into Type I(i) and Type I(ii). Type I(i) involves the transfer of an electron (oxidation) from a substrate molecule to the excited state photosensitiser (Psen*), generating a photosensitiser radical anion (Psen•−) and a substrate radical cation (Subs•+). The majority of the radicals produced from Type-I(i) reactions react instantaneously with molecular oxygen (O2), generating a mixture of oxygen intermediates. For example, the photosensitiser radical anion can react instantaneously with molecular oxygen (3O2) to generate a superoxide radical anion (O2•−), which can go on to produce the highly reactive hydroxyl radical (OH•), initiating a cascade of cytotoxic free radicals; this process is common in the oxidative damage of fatty acids and other lipids. Type I(ii) involves the transfer of a hydrogen atom (reduction) to the excited state photosensitiser (Psen*). This generates free radicals capable of rapidly reacting with molecular oxygen and creating a complex mixture of reactive oxygen intermediates, including reactive peroxides. Type-II processes Type-II processes involve the direct interaction of the excited triplet state photosensitiser (3Psen*) with ground state molecular oxygen (3O2, 3Σg); this is a spin-allowed transition, as the excited state photosensitiser and ground state molecular oxygen are of the same spin state (T). When the excited photosensitiser collides with molecular oxygen, a process of triplet-triplet annihilation takes place (3Psen* → 1Psen and 3O2 → 1O2).
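To put these processes in an energetic context, the sketch below converts representative PDT wavelengths into photon energies using standard physical constants; the 630 nm and 850 nm figures are taken from elsewhere in the article (the first-generation activation wavelength and the upper edge of the therapeutic window), and the 94 kJ mol−1 comparison anticipates the singlet oxygen energies quoted below.

```python
# Sketch: photon energy at representative PDT wavelengths, using standard
# physical constants. 630 nm and 850 nm are figures quoted elsewhere in
# this article.
H = 6.626e-34          # Planck constant, J s
C = 2.998e8            # speed of light, m/s
EV = 1.602e-19         # joules per electronvolt
N_A = 6.022e23         # Avogadro constant, 1/mol

def photon_energy(wavelength_nm: float) -> tuple[float, float]:
    """Return (eV per photon, kJ per mole of photons)."""
    e_joule = H * C / (wavelength_nm * 1e-9)
    return e_joule / EV, e_joule * N_A / 1000.0

for wl in (630, 850):
    ev, kj_mol = photon_energy(wl)
    print(f"{wl} nm: {ev:.2f} eV = {kj_mol:.0f} kJ/mol")
# ~190 and ~141 kJ/mol respectively: both exceed the ~94 kJ/mol needed to
# generate the lower singlet oxygen state, so red/near-infrared photons are
# energetically sufficient for Type-II photochemistry.
```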
This inverts the spin of one of the oxygen molecule's (3O2) outermost antibonding electrons, generating two forms of singlet oxygen (1Δg and 1Σg), while simultaneously depopulating the photosensitiser's excited triplet state (T1 → S0). The higher-energy singlet oxygen state (1Σg, 157 kJ mol−1 above 3Σg) is very short-lived (≤ 0.33 milliseconds in methanol; undetectable in H2O/D2O) and rapidly relaxes to the lower-energy excited state (1Δg, 94 kJ mol−1 above 3Σg). It is, therefore, this lower-energy form of singlet oxygen (1Δg) that is implicated in cell injury and cell death. The highly reactive singlet oxygen species (1O2) produced via the Type-II process act near their site of generation, within a radius of approximately 20 nm, with a typical lifetime of approximately 40 nanoseconds in biological systems. It is possible that (over a 6 μs period) singlet oxygen can diffuse up to approximately 300 nm in vivo. Singlet oxygen can theoretically only interact with proximal molecules and structures within this radius. ROS initiate reactions with many biomolecules, including amino acid residues in proteins, such as tryptophan; unsaturated lipids like cholesterol; and nucleic acid bases, particularly guanosine and guanine derivatives, the bases most susceptible to ROS. These interactions cause damage and potential destruction to cellular membranes and enzyme deactivation, culminating in cell death. It is probable that, in the presence of molecular oxygen and as a direct result of the photoirradiation of the photosensitiser molecule, both Type-I and Type-II pathways play a pivotal role in disrupting cellular mechanisms and cellular structure. Nevertheless, considerable evidence suggests that the Type-II photo-oxygenation process predominates in the induction of cell damage, a consequence of the interaction between the irradiated photosensitiser and molecular oxygen. Cells in vivo may be partially protected against the effects of photodynamic therapy by the presence of singlet oxygen scavengers (such as histidine). Certain skin cells are somewhat resistant to PDT in the absence of molecular oxygen, further supporting the proposal that the Type-II process is at the heart of photoinitiated cell death. The efficiency of Type-II processes depends upon the triplet state lifetime (τT) and the triplet quantum yield (ΦT) of the photosensitiser. Both of these parameters have been implicated in phototherapeutic effectiveness, further supporting the distinction between Type-I and Type-II mechanisms. However, the success of a photosensitiser is not exclusively dependent upon a Type-II process. Multiple photosensitisers display excited triplet lifetimes that are too short to permit a Type-II process to occur. For example, the copper-metallated octaethylbenzochlorin photosensitiser has a triplet state lifetime of less than 20 nanoseconds and is still deemed to be an efficient photodynamic agent. Photosensitizers Many photosensitizers for PDT exist. They divide into porphyrins, chlorins and dyes. Examples include aminolevulinic acid (ALA), Silicon Phthalocyanine Pc 4, m-tetrahydroxyphenylchlorin (mTHPC) and mono-L-aspartyl chlorin e6 (NPe6). Photosensitizers commercially available for clinical use include Allumera, Photofrin, Visudyne, Levulan, Foscan, Metvix, Hexvix, Cysview and Laserphyrin, with others in development, e.g. Antrin, Photochlor, Photosens, Photrex, Lumacan, Cevira, Visonac, BF-200 ALA, Amphinex and Azadipyrromethenes.
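The two singlet oxygen energies quoted above can be cross-checked by converting them into equivalent photon wavelengths; this is a consistency check only, using standard constants.

```python
# Sketch: converting the singlet-oxygen excitation energies quoted above
# (94 and 157 kJ/mol relative to the triplet ground state) into equivalent
# photon wavelengths, as a consistency check.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def wavelength_nm(energy_kj_per_mol: float) -> float:
    e_per_molecule = energy_kj_per_mol * 1000.0 / N_A  # J per molecule
    return H * C / e_per_molecule * 1e9                # nm

print(f"1Delta_g (94 kJ/mol):  ~{wavelength_nm(94):.0f} nm")   # ~1270 nm
print(f"1Sigma_g (157 kJ/mol): ~{wavelength_nm(157):.0f} nm")  # ~760 nm
# The ~1270 nm value agrees with the near-infrared phosphorescence band
# commonly used to detect singlet oxygen experimentally.
```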
ROS initiate reactions with many biomolecules, including amino acid residues in proteins, such as tryptophan; unsaturated lipids, like cholesterol; and nucleic acid bases, particularly guanosine and guanine derivatives, the latter base being more susceptible to ROS. These interactions cause damage to, and potentially the destruction of, cellular membranes, as well as enzyme deactivation, culminating in cell death. It is probable that, in the presence of molecular oxygen and as a direct result of the photoirradiation of the photosensitiser molecule, both Type-I and Type-II pathways play a pivotal role in disrupting cellular mechanisms and cellular structure. Nevertheless, considerable evidence suggests that the Type-II photo-oxygenation process predominates in the induction of cell damage, as a consequence of the interaction between the irradiated photosensitiser and molecular oxygen. Cells in vivo may be partially protected against the effects of photodynamic therapy by the presence of singlet oxygen scavengers (such as histidine). Certain skin cells are somewhat resistant to PDT in the absence of molecular oxygen, further supporting the proposal that the Type-II process is at the heart of photoinitiated cell death. The efficiency of Type-II processes depends upon the triplet state lifetime (τT) and the triplet quantum yield (ΦT) of the photosensitiser. Both of these parameters have been implicated in phototherapeutic effectiveness, further supporting the distinction between Type-I and Type-II mechanisms. However, the success of a photosensitiser is not exclusively dependent upon a Type-II process: multiple photosensitisers display excited triplet lifetimes that are too short to permit a Type-II process to occur. For example, the copper-metallated octaethylbenzochlorin photosensitiser has a triplet state lifetime of less than 20 nanoseconds and is still deemed to be an efficient photodynamic agent. Photosensitizers Many photosensitizers for PDT exist; they divide into porphyrins, chlorins and dyes. Examples include aminolevulinic acid (ALA), silicon phthalocyanine Pc 4, m-tetrahydroxyphenylchlorin (mTHPC) and mono-L-aspartyl chlorin e6 (NPe6). Photosensitizers commercially available for clinical use include Allumera, Photofrin, Visudyne, Levulan, Foscan, Metvix, Hexvix, Cysview and Laserphyrin, with others in development, e.g. Antrin, Photochlor, Photosens, Photrex, Lumacan, Cevira, Visonac, BF-200 ALA, Amphinex and azadipyrromethenes. The major difference between photosensitizers is in the parts of the cell that they target. Unlike in radiation therapy, where damage is done by targeting cell DNA, most photosensitizers target other cell structures. For example, mTHPC localizes in the nuclear envelope, while ALA localizes in the mitochondria and methylene blue in the lysosomes. Cyclic tetrapyrrolic chromophores Cyclic tetrapyrrolic molecules are fluorophores and photosensitisers. Cyclic tetrapyrrolic derivatives have an inherent similarity to the naturally occurring porphyrins present in living matter. Porphyrins Porphyrins are a group of naturally occurring and intensely coloured compounds, whose name is drawn from the Greek word porphura, meaning purple. These molecules perform biologically important roles, including oxygen transport and photosynthesis, and have applications in fields ranging from fluorescent imaging to medicine. Porphyrins are tetrapyrrolic molecules, the heart of the skeleton being a heterocyclic macrocycle known as a porphine. The fundamental porphine frame consists of four pyrrolic sub-units linked on opposing sides (α-positions, numbered 1, 4, 6, 9, 11, 14, 16 and 19) through four methine (CH) bridges (5, 10, 15 and 20), known as the meso-carbon atoms/positions. The resulting conjugated planar macrocycle may be substituted at the meso- and/or β-positions (2, 3, 7, 8, 12, 13, 17 and 18): if the meso- and β-hydrogens are substituted with non-hydrogen atoms or groups, the resulting compounds are known as porphyrins. The inner two protons of a free-base porphyrin can be removed by strong bases such as alkoxides, forming a dianionic molecule; conversely, the inner two pyrrolenine nitrogens can be protonated with acids such as trifluoroacetic acid, affording a dicationic intermediate. The tetradentate anionic species can readily form complexes with most metals. Absorption spectroscopy The highly conjugated skeleton of a porphyrin produces a characteristic ultraviolet-visible (UV-VIS) spectrum. The spectrum typically consists of an intense, narrow absorption band (ε > 200,000 L⋅mol−1⋅cm−1) at around 400 nm, known as the Soret band or B band, followed by four longer-wavelength (450–700 nm), weaker absorptions (ε > 20,000 L⋅mol−1⋅cm−1 for free-base porphyrins) referred to as the Q bands. The Soret band arises from a strong electronic transition from the ground state to the second excited singlet state (S0 → S2), whereas the Q bands result from a weak transition to the first excited singlet state (S0 → S1). The dissipation of energy via internal conversion (IC) is so rapid that fluorescence is only observed from depopulation of the first excited singlet state to the lower-energy ground state (S1 → S0). Ideal photosensitisers The key characteristic of a photosensitiser is the ability to accumulate preferentially in diseased tissue and to induce a desired biological effect via the generation of cytotoxic species. Specific criteria:
Strong absorption, with a high extinction coefficient, in the red/near-infrared region of the electromagnetic spectrum (600–850 nm), allowing deeper tissue penetration. (Tissue is much more transparent at longer wavelengths (~700–850 nm); longer wavelengths allow the light to penetrate deeper and treat larger structures.)
Suitable photophysical characteristics: a high quantum yield of triplet formation (ΦT ≥ 0.5); a high singlet oxygen quantum yield (ΦΔ ≥ 0.5); a relatively long triplet state lifetime (τT, μs range); and a high triplet-state energy (≥ 94 kJ mol−1). Values of ΦT = 0.83 and ΦΔ = 0.65 (haematoporphyrin), ΦT = 0.83 and ΦΔ = 0.72 (etiopurpurin), and ΦT = 0.96 and ΦΔ = 0.82 (tin etiopurpurin) have been achieved.
Low dark toxicity and negligible cytotoxicity in the absence of light. (The photosensitizer should not be harmful to the target tissue until the treatment beam is applied.)
Preferential accumulation in diseased/target tissue over healthy tissue.
Rapid clearance from the body post-procedure.
High chemical stability: single, well-characterised compounds with a known and constant composition.
A short and high-yielding synthetic route (with easy translation into multi-gram scales/reactions).
A simple and stable formulation.
Solubility in biological media, allowing intravenous administration; otherwise, a hydrophilic delivery system must enable efficient and effective transportation of the photosensitiser to the target site via the bloodstream.
Low photobleaching, to prevent degradation of the photosensitizer so that it can continue producing singlet oxygen.
Natural fluorescence. (Many optical dosimetry techniques, such as fluorescence spectroscopy, depend on fluorescence.)
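Since the criteria above come with explicit numeric thresholds, they can be checked mechanically. A minimal screening sketch using the three sets of quantum yields quoted in the list; the function and dictionary are illustrative scaffolding, not an established screening tool:

```python
PHI_T_MIN = 0.5      # minimum triplet quantum yield, from the criteria above
PHI_DELTA_MIN = 0.5  # minimum singlet oxygen quantum yield, likewise

candidates = {  # (Phi_T, Phi_Delta) as quoted in the text
    "haematoporphyrin": (0.83, 0.65),
    "etiopurpurin":     (0.83, 0.72),
    "tin etiopurpurin": (0.96, 0.82),
}

for name, (phi_t, phi_delta) in candidates.items():
    ok = phi_t >= PHI_T_MIN and phi_delta >= PHI_DELTA_MIN
    print(f"{name}: Phi_T = {phi_t:.2f}, Phi_Delta = {phi_delta:.2f} "
          f"-> {'meets' if ok else 'fails'} the photophysical thresholds")
```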
First generation Porfimer sodium Porfimer sodium is a drug used to treat some types of cancer. When absorbed by cancer cells and exposed to light, porfimer sodium becomes active and kills the cancer cells. It is a type of photodynamic therapy (PDT) agent and is also called Photofrin. Although PDT was first explored more than a century ago in Germany, it was not until Thomas Dougherty's work that PDT became mainstream. Prior to Dougherty, researchers had explored ways of using light-sensitive compounds to treat disease; Dougherty successfully treated cancer with PDT in preclinical models in 1975, and three years later he conducted the first controlled clinical study in humans. In 1994, the FDA approved PDT with the photosensitizer porfimer sodium for the palliative treatment of advanced esophageal cancer, specifically the palliation of patients with completely obstructing esophageal cancer, or of patients with partially obstructing esophageal cancer. Porfimer sodium is also FDA-approved for the treatment of certain types of lung cancer, more specifically for the treatment of microinvasive endobronchial non-small-cell lung cancer (NSCLC) in patients for whom surgery and radiotherapy are not indicated, and is also FDA-approved in the US for high-grade dysplasia in Barrett's esophagus. Disadvantages associated with first-generation photosensitisers included skin sensitivity and weak absorption at 630 nm; these properties permitted some therapeutic use, but markedly limited application to the wider field of disease. Second-generation photosensitisers were key to the further development of photodynamic therapy. Second generation 5-Aminolaevulinic acid 5-Aminolaevulinic acid (ALA) is a prodrug used to treat and image multiple superficial cancers and tumours. ALA is a key precursor in the biosynthesis of the naturally occurring porphyrin, haem. Haem is synthesised in every energy-producing cell in the body and is a key structural component of haemoglobin, myoglobin and other haemproteins. The immediate precursor to haem is protoporphyrin IX (PPIX), an effective photosensitiser. Haem itself is not a photosensitiser, due to the coordination of a paramagnetic ion in the centre of the macrocycle, causing a significant reduction in excited state lifetimes. The haem molecule is synthesised from glycine and succinyl coenzyme A (succinyl CoA). The rate-limiting step in the biosynthesis pathway is controlled by a tight (negative) feedback mechanism in which the concentration of haem regulates the production of ALA. However, this controlled feedback can be bypassed by artificially adding excess exogenous ALA to cells: the cells respond by producing PPIX (the photosensitiser) at a faster rate than the ferrochelatase enzyme can convert it to haem. ALA, marketed as Levulan, has shown promise in photodynamic therapy (tumours) via both intravenous and oral administration, as well as through topical administration in the treatment of malignant and non-malignant dermatological conditions, including psoriasis, Bowen's disease and hirsutism (Phase II/III clinical trials). ALA accumulates more rapidly than other intravenously administered sensitisers: typical peak tumour accumulation levels for PPIX are usually achieved within several hours of administration, whereas other (intravenous) photosensitisers may take up to 96 hours to reach peak levels. ALA is also excreted more rapidly from the body (~24 hours) than other photosensitisers, minimising photosensitivity side effects. Esterified ALA derivatives with improved bioavailability have been examined: a methyl ALA ester (Metvix) is now available for basal cell carcinoma and other skin lesions, while benzyl (Benvix) and hexyl ester (Hexvix) derivatives are used for gastrointestinal cancers and for the diagnosis of bladder cancer.
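The feedback bypass described above can be caricatured with a toy two-step kinetic model: exogenous ALA drives PPIX production, while ferrochelatase clears PPIX at a saturable rate, so excess ALA makes the photosensitiser accumulate. All names and rate constants below are invented for illustration; this is a qualitative sketch, not a model from the literature:

```python
def simulate_ppix(ala0, k_prod=1.0, vmax=0.3, km=0.2, dt=0.01, t_end=10.0):
    """Euler integration of a toy model of the ALA -> PPIX -> haem chain:
       d[ALA]/dt  = -k_prod * [ALA]                       (consumed by synthesis)
       d[PPIX]/dt =  k_prod * [ALA] - Vmax*[PPIX]/(Km + [PPIX])
    The Michaelis-Menten term caps the rate at which ferrochelatase can
    convert PPIX to haem, mimicking the saturable step described above."""
    ala, ppix, t = ala0, 0.0, 0.0
    while t < t_end:
        flux_in = k_prod * ala
        flux_out = vmax * ppix / (km + ppix)
        ala += -flux_in * dt
        ppix += (flux_in - flux_out) * dt
        t += dt
    return ppix

print(f"baseline ALA: residual PPIX = {simulate_ppix(0.1):.2f}")
print(f"excess ALA:   residual PPIX = {simulate_ppix(5.0):.2f}")
# with excess exogenous ALA, PPIX piles up faster than it can be cleared
```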
Verteporfin Benzoporphyrin derivative monoacid ring A (BPD-MA), marketed as Visudyne (verteporfin, for injection), has been approved by health authorities in multiple jurisdictions, including the US FDA, for the treatment of wet AMD beginning in 1999. It has also undergone Phase III clinical trials (USA) for the treatment of cutaneous non-melanoma skin cancer. The chromophore of BPD-MA has a red-shifted and intensified long-wavelength absorption maximum at approximately 690 nm. Tissue penetration by light at this wavelength is 50% greater than that achieved for Photofrin (λmax. = 630 nm). Verteporfin has further advantages over the first-generation sensitiser Photofrin: it is rapidly absorbed by the tumour (optimal tumour-normal tissue ratio 30–150 minutes post-intravenous injection) and is rapidly cleared from the body, minimising patient photosensitivity (1–2 days). Purlytin The chlorin photosensitiser tin etiopurpurin is marketed as Purlytin. Purlytin has undergone Phase II clinical trials for cutaneous metastatic breast cancer and Kaposi's sarcoma in patients with AIDS (acquired immunodeficiency syndrome). Purlytin has been used successfully to treat the non-malignant conditions psoriasis and restenosis. Chlorins are distinguished from the parent porphyrins by a reduced exocyclic double bond, decreasing the symmetry of the conjugated macrocycle. This leads to increased absorption in the long-wavelength portion of the visible region of the electromagnetic spectrum (650–680 nm). Purlytin is a purpurin, a degradation product of chlorophyll. Purlytin has a tin atom chelated in its central cavity that causes a red-shift of approximately 20–30 nm (with respect to Photofrin and non-metallated etiopurpurin, λmax. SnEt2 = 650 nm). Purlytin has been reported to localise in skin and produce a photoreaction 7–14 days post-administration. Foscan Tetra(m-hydroxyphenyl)chlorin (mTHPC) is in clinical trials for head and neck cancers under the trade name Foscan. It has also been investigated in clinical trials for gastric and pancreatic cancers, hyperplasia, field sterilisation after cancer surgery and for the control of antibiotic-resistant bacteria. Foscan has a singlet oxygen quantum yield comparable to other chlorin photosensitisers, but requires lower drug and light doses (it is approximately 100 times more photoactive than Photofrin). Foscan can render patients photosensitive for up to 20 days after initial illumination. Lutex Lutetium texaphyrin, marketed under the trade names Lutex and Lutrin, is a large porphyrin-like molecule. Texaphyrins are expanded porphyrins that have a penta-aza core. They offer strong absorption in the 730–770 nm region, where tissue transparency is optimal. As a result, Lutex-based PDT can (potentially) be carried out more effectively at greater depths and on larger tumours. Lutex has entered Phase II clinical trials for evaluation against breast cancer and malignant melanomas. A Lutex derivative, Antrin, has undergone Phase I clinical trials for the prevention of restenosis of vessels after cardiac angioplasty, by photoinactivating foam cells that accumulate within arteriolar plaques. A second Lutex derivative, Optrin, is in Phase I trials for AMD. Texaphyrins also have potential as radiosensitisers (Xcytrin) and chemosensitisers. Xcytrin, a gadolinium texaphyrin (motexafin gadolinium), has been evaluated in Phase III clinical trials against brain metastases and Phase I clinical trials for primary brain tumours. ATMPn 9-Acetoxy-2,7,12,17-tetrakis-(β-methoxyethyl)-porphycene has been evaluated as an agent for dermatological applications against psoriasis vulgaris and superficial non-melanoma skin cancer. Zinc phthalocyanine A liposomal formulation of zinc phthalocyanine (CGP55847) has undergone clinical trials (Phase I/II, Switzerland) against squamous cell carcinomas of the upper aerodigestive tract. Phthalocyanines (PCs) are related to tetra-aza porphyrins. Instead of four bridging carbon atoms at the meso-positions, as for the porphyrins, PCs have four nitrogen atoms linking the pyrrolic sub-units. PCs also have an extended conjugate pathway: a benzene ring is fused to the β-positions of each of the four pyrrolic sub-units. These rings strengthen the absorption of the chromophore at longer wavelengths (with respect to porphyrins). The absorption band of PCs is almost two orders of magnitude stronger than the highest Q band of haematoporphyrin. These favourable characteristics, along with the ability to selectively functionalise their peripheral structure, make PCs favourable photosensitiser candidates. A sulphonated aluminium PC derivative (Photosens) has entered clinical trials (Russia) against skin, breast and lung malignancies and cancer of the gastrointestinal tract. Sulphonation significantly increases PC solubility in polar solvents, including water, circumventing the need for alternative delivery vehicles. Pc 4 is a silicon complex under investigation for the sterilisation of blood components and for use against human colon, breast and ovarian cancers and against glioma. A shortcoming of many of the metallo-PCs is their tendency to aggregate in aqueous buffer (pH 7.4), resulting in a decrease, or total loss, of their photochemical activity. This behaviour can be minimised in the presence of detergents. Metallated cationic porphyrazines (PZ), including PdPZ+, CuPZ+, CdPZ+, MgPZ+, AlPZ+ and GaPZ+, have been tested in vitro on V-79 (Chinese hamster lung fibroblast) cells. These photosensitisers display substantial dark toxicity.
Naphthalocyanines Naphthalocyanines (NCs) are an extended PC derivative. They have an additional benzene ring attached to each isoindole sub-unit on the periphery of the PC structure. Consequently, NCs absorb strongly at even longer wavelengths (approximately 740–780 nm) than PCs (670–680 nm). This absorption in the near-infrared region makes NCs candidates for highly pigmented tumours, including melanomas, which present significant absorption problems for visible light. However, problems associated with NC photosensitisers include lower stability, as they decompose in the presence of light and oxygen. Metallo-NCs, which lack axial ligands, have a tendency to form H-aggregates in solution. These aggregates are photoinactive, thus compromising the photodynamic efficacy of NCs. Silicon naphthalocyanine attached to the copolymer PEG-PCL (poly(ethylene glycol)-block-poly(ε-caprolactone)) accumulates selectively in cancer cells and reaches a maximum concentration after about one day. The compound provides real-time near-infrared (NIR) fluorescence imaging, with an extinction coefficient of 2.8 × 105 M−1 cm−1, and combinatorial phototherapy with dual photothermal and photodynamic therapeutic mechanisms that may be appropriate for adriamycin-resistant tumors. The particles had a hydrodynamic size of 37.66 ± 0.26 nm (polydispersity index = 0.06) and a surface charge of −2.76 ± 1.83 mV.
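An extinction coefficient of that magnitude implies strong attenuation over short path lengths, via the Beer–Lambert law A = ε·c·l. A quick sketch of that arithmetic; the concentration and path length are arbitrary illustrative choices, not values from the text:

```python
epsilon = 2.8e5  # M^-1 cm^-1, the NIR extinction coefficient quoted above
c = 1e-6         # assumed local dye concentration, mol/L
l = 1.0          # assumed optical path length, cm

absorbance = epsilon * c * l       # Beer-Lambert law: A = epsilon * c * l
transmitted = 10 ** (-absorbance)  # fraction of light passing through
print(f"A = {absorbance:.2f}, transmitted fraction = {transmitted:.1%}")
# -> A = 0.28, transmitted fraction = 52.5%: even a micromolar solution
#    absorbs nearly half the incident light over 1 cm
```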
Functional groups Altering the peripheral functionality of porphyrin-type chromophores can affect photodynamic activity. Diamino platinum porphyrins show high anti-tumour activity, demonstrating the combined effect of the cytotoxicity of the platinum complex and the photodynamic activity of the porphyrin species. Positively charged PC derivatives have also been investigated; cationic species are believed to localise selectively in the mitochondria. Both zinc and copper cationic derivatives have been studied, and the positively charged zinc-complexed PC is less photodynamically active than its neutral counterpart in vitro against V-79 cells. Water-soluble cationic porphyrins bearing nitrophenyl, aminophenyl, hydroxyphenyl and/or pyridiniumyl functional groups exhibit varying cytotoxicity to cancer cells in vitro, depending on the nature of the metal ion (Mn, Fe, Zn, Ni) and on the number and type of functional groups. The manganese pyridiniumyl derivative has shown the highest photodynamic activity, while the nickel analogue is photoinactive. Another metallo-porphyrin complex, the iron chelate, is more photoactive (towards HIV and simian immunodeficiency virus in MT-4 cells) than the manganese complexes; the zinc derivative is photoinactive. The hydrophilic sulphonated porphyrin and PC compounds (AlPorphyrin and AlPC) were tested for photodynamic activity. The disulphonated analogues (with adjacent substituted sulphonated groups) exhibited greater photodynamic activity than their di-(symmetrical), mono-, tri- and tetra-sulphonated counterparts; tumour activity increased with increasing degree of sulphonation. Third generation Many photosensitisers are poorly soluble in aqueous media, particularly at physiological pH, limiting their use. Alternate delivery strategies range from the use of oil-in-water (o/w) emulsions to carrier vehicles such as liposomes and nanoparticles. Although these systems may increase therapeutic effects, the carrier system may inadvertently decrease the "observed" singlet oxygen quantum yield (ΦΔ): the singlet oxygen generated by the photosensitiser must diffuse out of the carrier system, and since singlet oxygen is believed to have a narrow radius of action, it may not reach the target cells. The carrier may also limit light absorption, reducing singlet oxygen yield. Another alternative that does not display the scattering problem is the use of targeting moieties: strategies include directly attaching photosensitisers to biologically active molecules such as antibodies. Metallation Various metals form complexes with photosensitiser macrocycles. Multiple second-generation photosensitisers contain a chelated central metal ion. The main candidates are transition metals, although photosensitisers co-ordinated to group 13 (Al, AlPcS4) and group 14 (Si, SiNC and Sn, SnEt2) metals have also been synthesised. The metal ion does not confer definite photoactivity on the complex: copper (II), cobalt (II), iron (II) and zinc (II) complexes of Hp are all photoinactive, in contrast to metal-free porphyrins. For texaphyrin and PC photosensitisers, however, only the metallo-complexes have demonstrated efficient photosensitisation. The central metal ion, bound by a number of photosensitisers, strongly influences the photophysical properties of the photosensitiser. Chelation of paramagnetic metals to a PC chromophore appears to shorten triplet lifetimes (down to the nanosecond range), generating variations in the triplet quantum yield and triplet lifetime of the photoexcited triplet state. Certain heavy metals are known to enhance intersystem crossing (ISC). Generally, diamagnetic metals promote ISC and give a long triplet lifetime. In contrast, paramagnetic species deactivate excited states, reducing the excited-state lifetime and preventing photochemical reactions. However, exceptions to this generalisation include copper octaethylbenzochlorin. Many metallated paramagnetic texaphyrin species exhibit triplet-state lifetimes in the nanosecond range. These results are mirrored by metallated PCs. PCs metallated with diamagnetic ions, such as Zn2+, Al3+ and Ga3+, generally yield photosensitisers with desirable quantum yields and lifetimes (ΦT 0.56, 0.50 and 0.34 and τT 187, 126 and 35 μs, respectively). The photosensitiser ZnPcS4 has a singlet oxygen quantum yield of 0.70, nearly twice that of most other mPCs (ΦΔ at least 0.40).
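The figures just quoted for the diamagnetic metallo-PCs can be laid out side by side to make the trade-offs visible. A tiny tabulation sketch; the values are the ones quoted above, while the ΦT·τT product is an ad hoc illustrative figure of merit, not a standard metric:

```python
# (Phi_T, tau_T / microseconds) for diamagnetic metallo-PCs, from the text
metallo_pcs = {"Zn2+": (0.56, 187), "Al3+": (0.50, 126), "Ga3+": (0.34, 35)}

for metal, (phi_t, tau_t) in metallo_pcs.items():
    # a long-lived, efficiently populated triplet has more chances to meet O2
    print(f"{metal}: Phi_T = {phi_t:.2f}, tau_T = {tau_t:3d} us, "
          f"Phi_T * tau_T = {phi_t * tau_t:6.1f} us")
```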
Expanded metallo-porphyrins Expanded porphyrins have a larger central binding cavity, increasing the range of potential metals. Diamagnetic metallo-texaphyrins have shown favourable photophysical properties: high triplet quantum yields and efficient generation of singlet oxygen. In particular, the zinc and cadmium derivatives display triplet quantum yields close to unity. In contrast, the paramagnetic metallo-texaphyrins, Mn-Tex, Sm-Tex and Eu-Tex, have undetectable triplet quantum yields. This behaviour parallels that observed for the corresponding metallo-porphyrins. The cadmium-texaphyrin derivative has shown in vitro photodynamic activity against human leukemia cells and Gram-positive (Staphylococcus) and Gram-negative (Escherichia coli) bacteria, although follow-up studies with this photosensitiser have been limited due to the toxicity of the complexed cadmium ion. A zinc-metallated seco-porphyrazine has a high singlet oxygen quantum yield (ΦΔ 0.74). This expanded porphyrin-like photosensitiser has shown the best singlet oxygen photosensitising ability of any of the reported seco-porphyrazines. Platinum and palladium derivatives have been synthesised with singlet oxygen quantum yields of 0.59 and 0.54, respectively. Metallochlorins/bacteriochlorins The tin (IV) purpurins are more active, when compared with analogous zinc (II) purpurins, against human cancers. Sulphonated benzochlorin derivatives demonstrated a reduced phototherapeutic response against murine leukemia L1210 cells in vitro and transplanted urothelial cell carcinoma in rats, whereas the tin (IV) metallated benzochlorins exhibited an increased photodynamic effect in the same tumour model. Copper octaethylbenzochlorin demonstrated greater photoactivity towards leukemia cells in vitro and in a rat bladder tumour model. Its activity may derive from interactions between the cationic iminium group and biomolecules; such interactions may allow electron-transfer reactions to take place via the short-lived excited singlet state and lead to the formation of radicals and radical ions. The copper-free derivative exhibited a tumour response with short intervals between drug administration and photodynamic activation. Increased in vivo activity was observed with the zinc benzochlorin analogue. Metallo-phthalocyanines The properties of PCs are strongly influenced by the central metal ion. Co-ordination of transition metal ions gives metallo-complexes with short triplet lifetimes (nanosecond range), resulting in different triplet quantum yields and lifetimes (with respect to the non-metallated analogues). Diamagnetic metals such as zinc, aluminium and gallium generate metallo-phthalocyanines (MPCs) with high triplet quantum yields (ΦT ≥ 0.4), long lifetimes (ZnPcS4 τT = 490 μs and AlPcS4 τT = 400 μs) and high singlet oxygen quantum yields (ΦΔ ≥ 0.7). As a result, ZnPc and AlPc have been evaluated as second-generation photosensitisers active against certain tumours. Metallo-naphthocyaninesulfobenzo-porphyrazines (M-NSBP) Aluminium (Al3+) has been successfully coordinated to M-NSBP. The resulting complex showed photodynamic activity against EMT-6 tumour-bearing Balb/c mice (the disulphonated analogue demonstrated greater photoactivity than the mono-derivative). Metallo-naphthalocyanines Work with zinc NCs bearing various amido substituents revealed the best phototherapeutic response (against Lewis lung carcinoma in mice) with a tetrabenzamido analogue. Complexes of silicon (IV) NCs with two axial ligands have been prepared, in the anticipation that the ligands would minimise aggregation. Of the disubstituted analogues evaluated as potential photodynamic agents, a siloxane NC substituted with two methoxyethyleneglycol ligands is an efficient photosensitiser against Lewis lung carcinoma in mice, and SiNC[OSi(i-Bu)2-n-C18H37]2 is effective against Balb/c mice MS-2 fibrosarcoma cells. Siloxane NCs may thus be efficacious photosensitisers against EMT-6 tumours in Balb/c mice. The ability of metallo-NC derivatives (AlNc) to generate singlet oxygen is weaker than that of the analogous (sulphonated) metallo-PCs (AlPC); reportedly 1.6–3 orders of magnitude less. In porphyrin systems, the zinc ion (Zn2+) appears to hinder the photodynamic activity of the compound. By contrast, in the higher/expanded π-systems, zinc-chelated dyes form complexes with good to high photodynamic activity.
An extensive study of metallated texaphyrins focused on the metal ions Y, In, Lu, Cd, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb, most of them lanthanide (III) ions. It found that when diamagnetic Lu (III) was complexed to texaphyrin, an effective photosensitiser (Lutex) was generated; however, substituting the paramagnetic Gd (III) ion for Lu gave a complex that exhibited no photodynamic activity. The study found a correlation between the excited-singlet and triplet state lifetimes and the rate of ISC of the diamagnetic texaphyrin complexes Y(III), In (III) and Lu (III) and the atomic number of the cation. Paramagnetic metallo-texaphyrins displayed rapid ISC. Triplet lifetimes were strongly affected by the choice of metal ion. The diamagnetic ions Y, In and Lu displayed triplet lifetimes of 187, 126 and 35 μs, respectively. Comparable measurements were obtained for the paramagnetic species (Eu-Tex 6.98 μs, Gd-Tex 1.11 μs, Tb-Tex < 0.2 μs, Dy-Tex 0.44 × 10−3 μs, Ho-Tex 0.85 × 10−3 μs, Er-Tex 0.76 × 10−3 μs, Tm-Tex 0.12 × 10−3 μs and Yb-Tex 0.46 μs); three of the measured paramagnetic complexes had lifetimes significantly lower than those of the diamagnetic metallo-texaphyrins. In general, singlet oxygen quantum yields closely followed the triplet quantum yields. The various diamagnetic and paramagnetic texaphyrins investigated display distinct photophysical behaviour according to a complex's magnetism. The diamagnetic complexes were characterised by relatively high fluorescence quantum yields, excited-singlet and triplet lifetimes and singlet oxygen quantum yields, in distinct contrast to the paramagnetic species. The +2 charged diamagnetic species appeared to exhibit a direct relationship between their fluorescence quantum yields, excited state lifetimes, rate of ISC and the atomic number of the metal ion. The greatest diamagnetic ISC rate was observed for Lu-Tex, a result ascribed to the heavy atom effect. The heavy atom effect also held for the Y-Tex, In-Tex and Lu-Tex triplet quantum yields and lifetimes: the triplet quantum yields and lifetimes both decreased with increasing atomic number, and the singlet oxygen quantum yield correlated with this observation. Photophysical properties displayed by paramagnetic species were more complex; the observed behaviour was not correlated with the number of unpaired electrons located on the metal ion. For example, ISC rates and fluorescence lifetimes gradually decreased with increasing atomic number, and the Gd-Tex and Tb-Tex chromophores showed (despite more unpaired electrons) slower rates of ISC and longer lifetimes than Ho-Tex or Dy-Tex. To achieve selective target cell destruction while protecting normal tissues, either the photosensitizer can be applied locally to the target area, or targets can be locally illuminated. Skin conditions, including acne, psoriasis and also skin cancers, can be treated topically and locally illuminated. For internal tissues and cancers, intravenously administered photosensitizers can be illuminated using endoscopes and fiber optic catheters. Photosensitizers can also target viral and microbial species, including HIV and MRSA. Using PDT, pathogens present in samples of blood and bone marrow can be decontaminated before the samples are used further for transfusions or transplants. PDT can also eradicate a wide variety of pathogens of the skin and of the oral cavities. Given the seriousness of the threat that drug-resistant pathogens now pose, there is increasing research into PDT as a new antimicrobial therapy. Applications Acne PDT is currently in clinical trials as a treatment for severe acne.
Initial results have shown it to be effective only as a treatment for severe acne. A systematic review conducted in 2016 found that PDT is a "safe and effective method of treatment" for acne. The treatment may cause severe redness and moderate to severe pain and a burning sensation in some people. (See also: Levulan.) One phase II trial, while it showed improvement, found PDT not superior to blue/violet light alone. Cancer The FDA has approved photodynamic therapy to treat actinic keratosis, advanced cutaneous T-cell lymphoma, Barrett esophagus, basal cell skin cancer, esophageal (throat) cancer, non-small cell lung cancer, and squamous cell skin cancer (Stage 0). Photodynamic therapy is also used to relieve symptoms of some cancers, including esophageal cancer when it blocks the throat and non-small cell lung cancer when it blocks the airways. When cells that have absorbed photosensitizers are exposed to a specific wavelength of light, the photosensitizer produces a form of oxygen, called an oxygen radical, that kills them. Photodynamic therapy (PDT) may also damage blood vessels in the tumor, preventing it from receiving the blood it needs to keep growing, and may trigger the immune system to attack tumor cells, even in other areas of the body. PDT is a minimally invasive treatment used for many conditions, including acne, psoriasis, age-related macular degeneration, and several cancers, such as skin, lung, brain, mesothelioma, bladder, bile-duct, esophageal, and head and neck cancers. In February 2019, medical scientists announced that iridium attached to albumin, creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light, destroy them. Ophthalmology As cited above, verteporfin was widely approved for the treatment of wet AMD beginning in 1999. The drug targets the neovasculature that is caused by the condition. Photoimmunotherapy Photoimmunotherapy is an oncological treatment for various cancers that combines photodynamic therapy of the tumor with immunotherapy. Combining photodynamic therapy with immunotherapy enhances the immunostimulating response and has synergistic effects for metastatic cancer treatment. Vascular targeting Some photosensitisers naturally accumulate in the endothelial cells of vascular tissue, allowing "vascular targeted" PDT. Verteporfin was shown to target the neovasculature resulting from macular degeneration in the macula within the first thirty minutes after intravenous administration of the drug. Compared to normal tissues, most types of cancer are especially active in both the uptake and accumulation of photosensitizer agents, which makes cancers especially vulnerable to PDT; photosensitizers can also have a high affinity for vascular endothelial cells. Antimicrobial effects Photodynamic skin disinfection is effective at killing topical microbes, including drug-resistant bacteria, viruses, and fungi. Photodynamic disinfection remains effective after repeat treatments, with no evidence of resistance formation. The method can effectively treat polymicrobial antibiotic-resistant Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus biofilms in a maxillary sinus cavity model. History Modern era In the late nineteenth century,
Finsen successfully demonstrated phototherapy by employing heat-filtered light from a carbon-arc lamp (the "Finsen lamp") in the treatment of a tubercular condition of the skin known as lupus vulgaris, for which he won the 1903 Nobel Prize in Physiology or Medicine. In 1913 the German scientist Meyer-Betz described the major stumbling block of photodynamic therapy: after injecting himself with haematoporphyrin (Hp, a photosensitiser), he swiftly experienced a general skin sensitivity upon exposure to sunlight, a recurrent problem with many photosensitisers. The first evidence that agents (photosensitive synthetic dyes) in combination with a light source and oxygen could have a potential therapeutic effect came at the turn of the 20th century in the laboratory of Hermann von Tappeiner in Munich, Germany; Germany was leading the world in industrial dye synthesis at the time. While studying the effects of acridine on paramecia cultures, Oscar Raab, a student of von Tappeiner, observed a toxic effect. Fortuitously, Raab also observed that light was required to kill the paramecia. Subsequent work in von Tappeiner's laboratory showed that oxygen was essential for the "photodynamic action" – a term coined by von Tappeiner. Von Tappeiner and colleagues performed the first PDT trial in patients with skin carcinoma using the photosensitizer eosin. Of six patients with a facial basal cell carcinoma, treated with a 1% eosin solution and long-term exposure either to sunlight or arc-lamp light, four showed total tumour resolution and a relapse-free period of 12 months. In 1924 Policard revealed the diagnostic capabilities of hematoporphyrin fluorescence when he observed that ultraviolet radiation excited red fluorescence in the sarcomas of laboratory rats. Policard hypothesized that the fluorescence was associated with endogenous hematoporphyrin accumulation. In 1948 Figge and co-workers showed in laboratory animals that porphyrins exhibit a preferential affinity for rapidly dividing cells, including malignant, embryonic and regenerative cells; they proposed that porphyrins could be used to treat cancer. The photosensitizer haematoporphyrin derivative (HpD) was first characterised in 1960 by Lipson, who sought a diagnostic agent suitable for tumor detection. HpD allowed Lipson to pioneer the use of endoscopes and HpD fluorescence. HpD is a porphyrin species derived from haematoporphyrin. Porphyrins have long been considered suitable agents for tumour photodiagnosis and tumour PDT, because cancerous cells exhibit significantly greater uptake of and affinity for porphyrins compared to normal tissues; this had been observed by other researchers prior to Lipson. Thomas Dougherty and co-workers at Roswell Park Comprehensive Cancer Center in Buffalo, New York, clinically tested PDT in 1978. They treated 113 cutaneous or subcutaneous malignant tumors with HpD and observed total or partial resolution of 111 tumors. Dougherty helped expand clinical trials and formed the International Photodynamic Association in 1986. John Toth, product manager for Cooper Medical Devices Corp/Cooper Lasersonics, noticed the "photodynamic chemical effect" of the therapy and wrote the first white paper naming the therapy "Photodynamic Therapy" (PDT), with early clinical argon dye lasers, circa 1981. The company set up 10 clinical sites in Japan, where the term "radiation" had negative connotations.
HpD, under the brand name Photofrin, was the first PDT agent approved for clinical use, in 1993, to treat a form of bladder cancer in Canada. Over the next decade, both PDT and the use of HpD received international attention and greater clinical acceptance, leading to the first PDT treatments approved by the U.S. Food and Drug Administration, Japan and parts of Europe for use against certain cancers of the oesophagus and non-small cell lung cancer. Photofrin had the disadvantages of prolonged patient photosensitivity and a weak long-wavelength absorption (630 nm). This led to the development of second-generation photosensitisers, including verteporfin (a benzoporphyrin derivative, also known as Visudyne) and, more recently, third-generation targetable photosensitisers, such as antibody-directed photosensitisers. In the 1980s, David Dolphin, Julia Levy and colleagues developed a novel photosensitizer, verteporfin. Verteporfin, a porphyrin derivative, is activated at 690 nm, a much longer wavelength than Photofrin, and has the property of preferential uptake by neovasculature. It has been widely tested for its use in treating skin cancers and received FDA approval in 2000 for the treatment of wet age-related macular degeneration; as such, it was the first medical treatment ever approved for this condition, which is a major cause of vision loss. In 1990, Russian scientists (Mironov and coworkers) pioneered a photosensitizer called Photogem, which, like HpD, was derived from haematoporphyrin. Photogem was approved by the Ministry of Health of Russia and tested clinically from February 1992 to 1996. A pronounced therapeutic effect was observed in 91 percent of the 1,500 patients: 62 percent had total tumor resolution, and a further 29 percent had greater than 50% tumor shrinkage. Among patients diagnosed early, 92 percent experienced complete resolution. Russian scientists collaborated with NASA scientists, who were looking at the use of LEDs as more suitable light sources, compared to lasers, for PDT applications. Since 1990, the Chinese have been developing clinical expertise with PDT, using domestically produced photosensitizers derived from haematoporphyrin. China is notable for its expertise in resolving difficult-to-reach tumours. Miscellany PUVA therapy uses psoralen as the photosensitiser and UVA ultraviolet as the light source, but this form of therapy is usually classified as separate from photodynamic therapy. To allow treatment of deeper tumours, some researchers are using internal chemiluminescence to activate the photosensitiser.
See also
Antimicrobial photodynamic therapy
Blood irradiation therapy
Laser medicine
Light Harvesting Materials
Photoimmunotherapy
Photomedicine
Photopharmacology
Photostatin
Sonodynamic therapy
Photosensitizer
Nanodumbbells, being studied for possible use in photodynamic therapy
Neurotherapy
References
External links
International Photodynamic Association
Photodynamic Therapy for Cancer from the NCI
Cancer treatments Medical physics Laser medicine Light therapy
Photodynamic therapy
Physics
11,194
16,405,530
https://en.wikipedia.org/wiki/Homology%20directed%20repair
Homology-directed repair (HDR) is a mechanism in cells to repair double-strand DNA lesions. The most common form of HDR is homologous recombination. The HDR mechanism can only be used by the cell when there is a homologous piece of DNA present in the nucleus, mostly in the G2 and S phases of the cell cycle. Other examples of homology-directed repair include single-strand annealing and breakage-induced replication. When the homologous DNA is absent, another process called non-homologous end joining (NHEJ) takes place instead. Cancer suppression HDR is important for suppressing the formation of cancer. HDR maintains genomic stability by repairing broken DNA strands; it is assumed to be error-free because of the use of a template. When a double-strand DNA lesion is repaired by NHEJ, there is no validating DNA template present, so the repair may result in a novel DNA strand with loss of information. A different nucleotide sequence in the DNA strand results in a different protein expressed in the cell. Such a protein error may cause processes in the cell to fail. For example, a receptor that allows the cell to receive a signal to stop dividing may malfunction, so the cell ignores the signal, keeps dividing, and can give rise to a cancer. The importance of HDR can be seen from the fact that the mechanism is conserved throughout evolution; the HDR mechanism has also been found in simpler organisms, such as yeast. Biological pathway The pathway of HDR has not been fully elucidated (as of March 2008). However, a number of experimental results point to the validity of certain models. It is generally accepted that histone H2AX (noted as γH2AX when phosphorylated) is phosphorylated within seconds after damage occurs. H2AX is phosphorylated throughout the area surrounding the damage, not only precisely at the break. Therefore, it has been suggested that γH2AX functions as an adhesive component for attracting proteins to the damaged location. Several research groups have suggested that the phosphorylation of H2AX is done by ATM and ATR in cooperation with MDC1. It has been suggested that, before or while H2AX is involved with the repair pathway, the MRN complex (which consists of Mre11, Rad50 and NBS1) is attracted to the broken DNA ends and, together with other MRN complexes, keeps the broken ends together. This action by the MRN complex may prevent chromosomal breaks. At some later point the DNA ends are processed, so that unnecessary residual chemical groups are removed and single-strand overhangs are formed. Meanwhile, from the beginning, every piece of single-stranded DNA is covered by the protein RPA (Replication Protein A). The function of RPA is likely to keep the single-stranded DNA pieces stable until the complementary piece is resynthesized by a polymerase. After this, Rad51 replaces RPA and forms filaments on the DNA strand. Working together with BRCA2 (Breast Cancer Associated), Rad51 couples a complementary DNA piece, which invades the broken DNA strand to form a template for the polymerase. The polymerase is held onto the DNA strand by PCNA (Proliferating Cell Nuclear Antigen). PCNA forms typical patterns in the nucleus of the cell, through which the current cell cycle phase can be determined. The polymerase synthesizes the missing part of the broken strand. When the broken strand is rebuilt, the two strands need to uncouple again. Multiple ways of "uncoupling" have been suggested, but evidence is not yet sufficient to choose between the models (March 2008). After the strands are separated, the repair process is complete.
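The pathway choice described above boils down to a simple rule: HDR needs an intact homologous template in the nucleus (typically the sister chromatid, available after replication in S/G2), and NHEJ is the fallback. A toy sketch of that decision logic; the function and its names are invented for illustration, not taken from any repair-biology library:

```python
def choose_dsb_repair(homologous_template_present, phase):
    """Caricature of double-strand-break repair pathway choice.

    HDR requires a homologous template (e.g. the sister chromatid,
    present mostly in S and G2); without one, the cell falls back
    on NHEJ, which may lose sequence information at the join."""
    if homologous_template_present and phase in ("S", "G2"):
        return "HDR: template-directed, assumed error-free"
    return "NHEJ: no template used, may lose information"

print(choose_dsb_repair(True, "G2"))   # -> HDR
print(choose_dsb_repair(False, "G1"))  # -> NHEJ
```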
The co-localization of Rad51 with the damage indicates that HDR has been initiated instead of NHEJ. In contrast, the presence of a Ku complex (Ku70 and Ku80) indicates that NHEJ has been initiated instead of HDR. HDR and NHEJ repair double-strand breaks; other mechanisms, such as NER (Nucleotide Excision Repair), BER (Base Excision Repair) and MMR (Mismatch Repair), recognise lesions and replace them via single-strand perturbation. Mitosis In the budding yeast Saccharomyces cerevisiae, homology-directed repair is primarily a response to spontaneous or induced damage that occurs during vegetative growth (also reviewed in Bernstein and Bernstein, pp. 220–221). In order for yeast cells to undergo homology-directed repair, there must be present in the same nucleus a second DNA molecule containing sequence homology with the region to be repaired. In a diploid cell in the G1 phase of the cell cycle, such a molecule is present in the form of the homologous chromosome. However, in the G2 stage of the cell cycle (following DNA replication), a second homologous DNA molecule is also present: the sister chromatid. Evidence indicates that, due to the special nearby relationship they share, sister chromatids are not only preferred over distant homologous chromatids as substrates for recombinational repair, but also have the capacity to repair more DNA damage than do homologs. Meiosis During meiosis up to one-third of all homology-directed repair events occur between sister chromatids. The remaining two-thirds, or more, occur as a result of interaction between non-sister homologous chromatids. Oocytes The fertility of females and the health of potential offspring critically depend on an adequate availability of high-quality oocytes. Oocytes are largely maintained in the ovaries in a state of meiotic prophase arrest; in mammalian females the period of arrest may last for years. During this period of arrest, oocytes are subject to spontaneous DNA damage, including double-strand breaks. However, oocytes can efficiently repair DNA double-strand breaks, allowing the restoration of genetic integrity and the protection of offspring health. The process by which oocyte DNA damage can be corrected is referred to as homologous recombinational repair. See also Homologous recombination References Further reading DNA repair
Homology directed repair
Biology
1,277
75,409,524
https://en.wikipedia.org/wiki/Betatron%20oscillations
Betatron oscillations are the fast transverse oscillations of a charged particle in various focusing systems: linear accelerators, storage rings and transfer channels. The oscillations are usually considered as small deviations from the ideal reference orbit, determined by the transverse forces of focusing elements whose fields depend on the transverse deviation: quadrupole magnets, electrostatic lenses and RF fields. This transverse motion is the subject of study of electron optics. Betatron oscillations were first studied by D. W. Kerst and R. Serber in 1941 while commissioning the first betatron. The fundamental study of betatron oscillations was carried out by Ernest Courant, Milton S. Livingston and Hartland Snyder, which led to a revolution in high-energy accelerator design through the application of the strong focusing principle. Hill's equations To hold the particles of the beam inside the vacuum chamber of an accelerator or transfer channel, magnetic or electrostatic elements are used. The guiding field of dipole magnets sets the reference orbit of the beam, while focusing magnets, with a field depending linearly on the transverse coordinate, return particles with small deviations, forcing them to oscillate stably around the reference orbit. For any orbit one can set up, locally, the Frenet–Serret coordinate system co-propagating with the reference particle. Assuming small deviations of the particle in all directions, and after linearization of all the fields, one arrives at the linear equations of motion, which are a pair of Hill equations: x″ + Kx(s)·x = 0 and y″ + Ky(s)·y = 0. Here Kx(s) and Ky(s) are periodic functions of s in the case of a cyclic accelerator such as a betatron or synchrotron; they are built from the normalized gradient of the magnetic field, k(s) = (∂By/∂x)/(Bρ), and the curvature 1/ρ(s) of the reference orbit (a quadrupole that focuses in one plane defocuses in the other, so the gradient enters the two equations with opposite signs). The prime means the derivative over s, the path length along the beam trajectory. The product Bρ of the guiding field and the curvature radius is the magnetic rigidity, which via the Lorentz force is strictly related to the momentum: Bρ = p/q, where q is the particle charge. As the equations of transverse motion are independent of each other, they can be solved separately. For one-dimensional motion the solution of Hill's equation is a quasi-periodic oscillation. It can be written as x(s) = √(ε·β(s))·cos(ψ(s) + φ0), where β(s) is the Twiss beta-function, ψ(s) = ∫ ds/β(s) is the betatron phase advance and ε is an invariant amplitude known as the Courant–Snyder invariant.
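For a piecewise-constant K(s), Hill's equation is exactly solvable segment by segment with 2×2 transfer matrices, which is how betatron motion is normally computed in practice. Below is a minimal tracking sketch for a toy strong-focusing (FODO) cell; the lattice lengths and gradients are invented for illustration, not taken from any real machine:

```python
import math

def segment_matrix(K, L):
    """Exact 2x2 transfer matrix of x'' + K x = 0 over a length L of
    constant K (K > 0: focusing, K < 0: defocusing, K = 0: drift)."""
    if K > 0:
        w = math.sqrt(K)
        return [[math.cos(w * L), math.sin(w * L) / w],
                [-w * math.sin(w * L), math.cos(w * L)]]
    if K < 0:
        w = math.sqrt(-K)
        return [[math.cosh(w * L), math.sinh(w * L) / w],
                [w * math.sinh(w * L), math.cosh(w * L)]]
    return [[1.0, L], [0.0, 1.0]]  # field-free drift

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# toy FODO cell: focusing quad, drift, defocusing quad, drift
cell = [(+0.2, 0.5), (0.0, 2.0), (-0.2, 0.5), (0.0, 2.0)]  # (K / m^-2, L / m)
M = [[1.0, 0.0], [0.0, 1.0]]
for K, L in cell:
    M = matmul(segment_matrix(K, L), M)  # later elements multiply from the left

trace = M[0][0] + M[1][1]
print(f"|Tr M| = {abs(trace):.3f} (motion is stable if this is < 2)")
mu = math.acos(trace / 2.0)              # betatron phase advance per cell
print(f"phase advance per cell: {math.degrees(mu):.1f} deg")

# track a particle with a 1 mm initial offset through 8 consecutive cells:
# x oscillates stably about the reference orbit instead of drifting away
x, xp = 1e-3, 0.0
for n in range(8):
    x, xp = M[0][0] * x + M[0][1] * xp, M[1][0] * x + M[1][1] * xp
    print(f"cell {n + 1}: x = {x * 1e3:+.3f} mm")
```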
References Literature Accelerator physics
Betatron oscillations
Physics
449
6,359
https://en.wikipedia.org/wiki/Crux
Crux is a constellation of the southern sky that is centred on four bright stars in a cross-shaped asterism commonly known as the Southern Cross. It lies on the southern end of the Milky Way's visible band. The name Crux is Latin for cross. Even though it is the smallest of all 88 modern constellations, Crux is among the most easily distinguished, as its four main stars each have an apparent visual magnitude brighter than +2.8. It has attained a high level of cultural significance in many Southern Hemisphere states and nations. Blue-white α Crucis (Acrux) is the most southerly member of the constellation and, at magnitude 0.8, the brightest. The three other stars of the cross appear clockwise and in order of lessening magnitude: β Crucis (Mimosa), γ Crucis (Gacrux), and δ Crucis (Imai). ε Crucis (Ginan) also lies within the cross asterism. Many of these brighter stars are members of the Scorpius–Centaurus association, a large but loose group of hot blue-white stars that appear to share common origins and motion across the southern Milky Way. Crux contains four Cepheid variables, each visible to the naked eye under optimum conditions. Crux also contains the bright and colourful open cluster known as the Jewel Box (NGC 4755) on its eastern border. Nearby to the southeast is a large dark nebula spanning 7° by 5° known as the Coalsack Nebula, portions of which are mapped in the neighbouring constellations of Centaurus and Musca. History The bright stars in Crux were known to the Ancient Greeks; Ptolemy regarded them as part of the constellation Centaurus. They were entirely visible as far north as Britain in the fourth millennium BC. However, the precession of the equinoxes gradually lowered the stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. By 400 AD, the stars in the constellation now called Crux never rose above the horizon throughout most of Europe. Dante may have known about the constellation in the 14th century, as he describes an asterism of four bright stars in the southern sky in his Divine Comedy. His description, however, may be allegorical, and the similarity to the constellation a coincidence. The 15th century Venetian navigator Alvise Cadamosto made note of what was probably the Southern Cross on exiting the Gambia River in 1455, calling it the carro dell'ostro ("southern chariot"). However, Cadamosto's accompanying diagram was inaccurate. Historians generally credit João Faras for being the first European to depict it correctly. Faras sketched and described the constellation (calling it "las guardas") in a letter written on the beaches of Brazil on 1 May 1500 to the Portuguese monarch. Explorer Amerigo Vespucci seems to have observed not only the Southern Cross but also the neighboring Coalsack Nebula on his second voyage in 1501–1502. Another early modern description clearly describing Crux as a separate constellation is attributed to Andrea Corsali, an Italian navigator who from 1515 to 1517 sailed to China and the East Indies in an expedition sponsored by King Manuel I. In 1516, Corsali wrote a letter to the monarch describing his observations of the southern sky, which included a rather crude map of the stars around the south celestial pole, including the Southern Cross and the two Magellanic Clouds, seen in an external orientation, as on a globe.
Emery Molyneux and Petrus Plancius have also been cited as the first uranographers (sky mappers) to distinguish Crux as a separate constellation; their representations date from 1592, the former depicting it on his celestial globe and the latter in one of the small celestial maps on his large wall map. Both authors, however, depended on unreliable sources and placed Crux in the wrong position. Crux was first shown in its correct position on the celestial globes of Petrus Plancius and Jodocus Hondius in 1598 and 1600. Its stars were first catalogued separately from Centaurus by Frederick de Houtman in 1603. The constellation was later adopted by Jakob Bartsch in 1624 and Augustin Royer in 1679. Royer is sometimes wrongly cited as initially distinguishing Crux. Characteristics Crux is bordered by the constellations Centaurus (which surrounds it on three sides) on the east, north and west, and Musca to the south. Covering 68 square degrees and 0.165% of the night sky, it is the smallest of the 88 constellations. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 11h 56.13m and 12h 57.45m, while the declination coordinates are between −55.68° and −64.70°. The whole of the constellation is visible, for at least part of the year, from south of the 25th parallel north. In tropical regions Crux can be seen in the sky from April to June. Crux is exactly opposite Cassiopeia on the celestial sphere, and therefore it cannot appear in the sky together with the latter. In this era, south of Cape Town, Adelaide and Buenos Aires (the 34th parallel south), Crux is circumpolar and thus always appears in the sky. Crux is sometimes confused by stargazers with the nearby False Cross asterism. The False Cross consists of stars in Carina and Vela, is larger and dimmer, does not have a fifth star, and lacks the two prominent nearby "Pointer Stars". Between the two is the even larger and dimmer Diamond Cross. Visibility Crux is easily visible from the southern hemisphere, south of the 35th parallel, at practically any time of year, as it is circumpolar there. It is also visible near the horizon from tropical latitudes of the northern hemisphere for a few hours every night during the northern winter and spring. For instance, it is visible from Cancun, or any other place at latitude 25° N or less, at around 10 pm at the end of April. The asterism has five main stars. Due to precession, Crux will move closer to the South Pole over the next millennia, to up to 67 degrees south declination for the middle of the constellation. However, by the year 14,000 Crux will be visible for most parts of Europe and the continental United States, and its visibility will extend to northern Europe by the year 18,000, when it will be at less than 30 degrees south declination.
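The quoted latitude limits follow from the standard visibility conditions for an object of declination δ seen from latitude φ (ignoring refraction). A worked check, using the border declinations given above as the northern and southern edges of Crux:

```latex
\text{circumpolar: } \delta_{\mathrm{N}} \le -(90^\circ - |\varphi|)
\;\Rightarrow\; |\varphi| \ge 90^\circ - 55.68^\circ \approx 34.3^\circ\ \text{S},
\qquad
\text{fully visible: } \varphi \le 90^\circ - 64.70^\circ \approx 25.3^\circ\ \text{N}.
```

These reproduce the 34th parallel south (circumpolar) and 25th parallel north (whole constellation visible) limits stated above.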
Use in navigation In the Southern Hemisphere, the Southern Cross is frequently used for navigation in much the same way that Polaris is used in the Northern Hemisphere. Projecting a line from γ to α Crucis (the foot of the crucifix) approximately 4.5 times beyond gives a point close to the southern celestial pole; this is also, coincidentally, where that line intersects a perpendicular taken southwards from the east–west axis of Alpha Centauri to Beta Centauri, stars at a similar declination to Crux and separated by a similar width as the cross, but brighter. Argentine gauchos are documented as using Crux for night orientation in the Pampas and Patagonia. Alpha and Beta Centauri are at similar declinations (and thus similar distances from the pole) and are often referred to as the "Southern Pointers" or just "The Pointers", allowing people to easily identify the Southern Cross, the constellation of Crux. Very few bright stars lie between Crux and the pole itself, although the constellation Musca is fairly easily recognised immediately south of Crux.
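The pointer rule can be checked numerically with unit-vector geometry: extend the great circle from Gacrux through Acrux by about 4.5 times their angular separation. A sketch of that check; the J2000 coordinates below are approximate values filled in for illustration:

```python
import math

def radec_to_unit(ra_deg, dec_deg):
    """Unit vector for a sky position given in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

gacrux = radec_to_unit(187.79, -57.11)  # gamma Crucis, approx. J2000
acrux  = radec_to_unit(186.65, -63.10)  # alpha Crucis, approx. J2000

# great-circle geometry: separation angle, then a unit vector orthogonal
# to gacrux lying in the plane that contains both stars
dot = sum(g * a for g, a in zip(gacrux, acrux))
sep = math.acos(dot)
ortho = [(a - dot * g) / math.sin(sep) for g, a in zip(gacrux, acrux)]

# walk from gacrux through acrux, then 4.5 separations beyond
theta = sep * (1 + 4.5)
p = [g * math.cos(theta) + o * math.sin(theta) for g, o in zip(gacrux, ortho)]
dec_reached = math.degrees(math.asin(p[2]))
print(f"declination reached: {dec_reached:.1f} deg")
# ~ -87 deg: within a few degrees of the south celestial pole at -90
```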
Bright stars Of the 92 stars down to apparent magnitude +2.5 that shine the brightest as viewed from Earth, three are in Crux, making it the constellation most densely populated with such stars: these three are 3.26% of the 92, which is 19.2 times more than the 0.17% expected from a homogeneous distribution of all bright stars over a randomised drawing of all 88 constellations, given that Crux's area is 0.17% of the sky. Features Stars Within the constellation's borders, there are 49 stars brighter than or equal to apparent magnitude 6.5. The four main stars that form the asterism are Alpha, Beta, Gamma and Delta Crucis. α Crucis or Acrux is a triple star 321 light-years from Earth. A rich blue in colour, with a visual magnitude of 0.8 to the unaided eye, it has two close components of similar magnitude, 1.3 and 1.8 respectively, plus another much wider component of the 5th magnitude. The two close components are resolved in a small amateur telescope, and the wide component is readily visible in a pair of binoculars. β Crucis or Mimosa is a blue-hued giant star of magnitude 1.3 and lies 353 light-years from Earth. It is a Beta Cephei-type variable star with a variation of less than 0.1 magnitudes. γ Crucis or Gacrux is an optical double star. The primary is a red-hued giant star of magnitude 1.6, 88 light-years from Earth, and is one of the closest red giants to Earth. Its secondary component is of magnitude 6.5, 264 light-years from Earth. δ Crucis (Imai) is a magnitude 2.8 blue-white hued star about 345 light-years from Earth. Like Mimosa, it is a Beta Cephei variable. There is also a fifth star that is often included with the Southern Cross: ε Crucis (Ginan), an orange-hued giant star of magnitude 3.6, 228 light-years from Earth. There are several other naked-eye stars within the borders of Crux, notably: Iota Crucis, a visual double star 125 light-years from Earth, whose primary is an orange-hued giant of magnitude 4.6 and whose secondary is of magnitude 9.5; and Mu Crucis, or Mu1,2 Crucis, a wide double star whose components are about 370 light-years from Earth. Equally blue-white in colour, the Mu components are of magnitude 4.0 and 5.1 respectively, and are easily split in small amateur telescopes or large binoculars. Scorpius–Centaurus association Unusually, a total of 15 of the 23 brightest stars in Crux are spectrally blue-white B-type stars. Among the five main bright stars, Delta, and probably Alpha and Beta, are likely co-moving B-type members of the Scorpius–Centaurus association, the nearest OB association to the Sun. They are among the highest-mass stellar members of the Lower Centaurus–Crux subgroup of the association, with ages of roughly 10 to 20 million years. Other members include the blue-white stars Zeta, Lambda and both components of the visual double star Mu. Variable stars Crux contains many variable stars. It boasts four Cepheid variables that may all reach naked-eye visibility: BG Crucis ranges from magnitude 5.34 to 5.58 over 3.3428 days, T Crucis from 6.32 to 6.83 over 6.73331 days, S Crucis from 6.22 to 6.92 over 4.68997 days, and R Crucis from 6.4 to 7.23 over 5.82575 days. Other well-studied variable stars include Lambda Crucis and Theta2 Crucis, both Beta Cephei-type variable stars, and BH Crucis, also known as Welch's Red Variable, a Mira variable that ranges from magnitude 6.6 to 9.8 over 530 days. Discovered in October 1969, BH Crucis has become redder and brighter (mean magnitude changing from 8.047 to 7.762), and its period lengthened by 25% in the first thirty years since its discovery. Exoplanet host stars in Crux The star HD 106906 has been found to have a planet, HD 106906 b, which has one of the widest orbits of any currently known planetary-mass companion. Objects beyond the Local Arm Crux is backlit by the multitude of stars of the Scutum-Crux Arm (more commonly called the Scutum-Centaurus Arm) of the Milky Way, the main inner arm in the local radial quarter of the galaxy. Partly obscuring this is the Coalsack Nebula, which lies partially within Crux and partly in the neighboring constellations of Musca and Centaurus. It is the most prominent dark nebula in the skies and is easily visible to the naked eye as a conspicuous dark patch in the southern Milky Way. It can be found 6.5° southeast of the centre of Crux, or 3° east of α Crucis. Its large area covers about 7° by 5°, and it lies roughly 600 light-years away from Earth. A key feature of the Scutum-Crux Arm is the Jewel Box, κ Crucis Cluster or NGC 4755, a small but bright open cluster that appears as a fuzzy star to the naked eye and is very close to the easternmost boundary of Crux, about 1° southeast of Beta Crucis. Its combined or total magnitude is 4.2, and it lies at a distance of roughly 6,400 light-years from Earth. The cluster was given its name by John Herschel, based on the range of colours visible throughout the star cluster in his telescope. About seven million years old, it is one of the youngest open clusters in the Milky Way, and it appears to have the shape of a letter "A". The Jewel Box Cluster is classified as a Shapley class "g" and Trumpler class "I 3 r" cluster: a very rich, centrally concentrated cluster, detached from the surrounding star field. It has more than 100 stars that range significantly in brightness. The brightest cluster stars are mostly blue supergiants, though the cluster contains at least one red supergiant. Kappa Crucis is a true member of the cluster that bears its name, and is one of its brighter stars at magnitude 5.9. Cultural significance The most prominent feature of Crux is the distinctive asterism known as the Southern Cross. It has great significance in the cultures of the southern hemisphere, particularly of Australia, Brazil, Chile and New Zealand. Flags and symbols Several southern countries and organisations have traditionally used Crux as a national or distinctive symbol. The four or five brightest stars of Crux appear, heraldically standardised in various ways, on the flags of Australia, Brazil, New Zealand, Papua New Guinea and Samoa.
They also appear on the flags of the Australian state of Victoria, the Australian Capital Territory and the Northern Territory, as well as on the flag of the Magallanes Region of Chile, the flag of Londrina (Brazil) and several Argentine provincial flags and emblems (for example, Tierra del Fuego and Santa Cruz). The flag of the Mercosur trading zone displays the four brightest stars. Crux also appears on the Brazilian coat of arms and on the cover of Brazilian passports. Five stars appear in the logo of the Brazilian football team Cruzeiro Esporte Clube and in the insignia of the Order of the Southern Cross, and the cross has featured as the name of the Brazilian currency (the cruzeiro from 1942 to 1986 and again from 1990 to 1994). All coins of the 1998 series of the Brazilian real display the constellation. Songs and literature reference the Southern Cross, including the Argentine epic poem Martín Fierro. The Argentine singer Charly García says that he is "from the Southern Cross" in the song "No voy en tren". The Cross is mentioned in the lyrics of the Brazilian National Anthem (1909): "A imagem do Cruzeiro resplandece" ("the image of the Cross shines"). The Southern Cross is mentioned in the Australian National Anthem: "Beneath our radiant Southern Cross we'll toil with hearts and hands". The Southern Cross features in the coat of arms of William Birdwood, 1st Baron Birdwood, the British officer who commanded the Australian and New Zealand Army Corps during the Gallipoli Campaign of the First World War. The Southern Cross is also mentioned in the Samoan National Anthem: "Vaai 'i na fetu o lo'o agiagia ai: Le faailoga lea o Iesu, na maliu ai mo Samoa" ("Look at those stars that are waving on it: This is the symbol of Jesus, who died on it for Samoa"). The 1952–53 NBC television series Victory at Sea contained a musical number entitled "Beneath the Southern Cross". "Southern Cross" is a single released by Crosby, Stills and Nash in 1981; it reached #18 on the Billboard Hot 100 in late 1982. "The Sign of the Southern Cross" is a song released by Black Sabbath in 1981 on the album Mob Rules. The Order of the Southern Cross is a Brazilian order of chivalry awarded to "those who have rendered significant service to the Brazilian nation". In "O Sweet Saint Martin's Land", the lyrics mention the Southern Cross: "Thy Southern Cross the night". A stylized version of Crux appears on the Australian Eureka Flag. The constellation was also used on the dark blue, shield-like patch worn by personnel of the U.S. Army's Americal Division, which was organized in the Southern Hemisphere, on the island of New Caledonia, and also on the blue diamond of the U.S. 1st Marine Division, which fought on the Southern Hemisphere islands of Guadalcanal and New Britain. The Petersflagge flag of the German East Africa Company of 1885–1920, which included a constellation of five white five-pointed Crux "stars" on a red ground, later served as the model for symbolism associated with generic German colonial-oriented organisations: the Reichskolonialbund of 1936–1943 and a successor organisation (1956/1983 to the present). Southern Cross station is a major rail terminal in Melbourne, Australia. The Personal Ordinariate of Our Lady of the Southern Cross is a personal ordinariate of the Roman Catholic Church primarily within the territory of the Australian Catholic Bishops Conference for groups of Anglicans who desire full communion with the Catholic Church in Australia and Asia.
The Knights of the Southern Cross (KSC) is a Catholic fraternal order throughout Australia. Various cultures In India, there is a story about the creation of Trishanku Swarga (त्रिशंकु), identified with Crux, by Sage Vishwamitra. In Chinese, 十字架 (Shízìjià), meaning "Cross", refers to an asterism consisting of γ Crucis, α Crucis, β Crucis and δ Crucis. In Australian Aboriginal astronomy, Crux and the Coalsack mark the head of the 'Emu in the Sky' (which is seen in the dark spaces rather than in the patterns of stars) in several Aboriginal cultures, while Crux itself is said to be a possum sitting in a tree (Boorong people of the Wimmera region of northwestern Victoria), a representation of the sky deity Mirrabooka (Quandamooka people of Stradbroke Island), a stingray (Yolngu people of Arnhem Land), or an eagle (Kaurna people of the Adelaide Plains). Two Pacific constellations also included Gamma Centauri. Torres Strait Islanders in modern-day Australia saw Gamma Centauri as the handle and the four stars as the left hand of Tagai, and the stars of Musca as the trident of the fishing spear he is holding. In Aranda traditions of central Australia, the four Cross stars are the talon of an eagle and Gamma Centauri its leg. Various peoples in the East Indies and Brazil viewed the four main stars as the body of a ray. In Indonesia and Malaysia, it is known as Bintang Pari and Buruj Pari, respectively ("ray stars"). This aquatic theme is also shared by an archaic name of the constellation in Vietnam, where it was once known as sao Cá Liệt (the ponyfish star). Among Filipino people, the Southern Cross has various names pertaining to tops, including kasing (Visayan languages), paglong (Bikol), and pasil (Tagalog). It is also called butiti (puffer fish) in Waray. The Javanese people of Indonesia called this constellation Gubug pèncèng ("raking hut") or lumbung ("the granary"), because its shape was like that of a raking hut. The Southern Cross (α, β, γ and δ Crucis), together with μ Crucis, is one of the asterisms used by Bugis sailors for navigation, called bintoéng bola képpang, meaning "incomplete house star". The Māori name for the Southern Cross is Māhutonga, and it is thought of as the anchor (Te Punga) of Tama-rereti's waka (the Milky Way), while the Pointers are its rope. In Tonga it is known as Toloa ("duck"); it is depicted as a duck flying south, with one of his wings (δ Crucis) wounded because Ongo tangata ("two men", α and β Centauri) threw a stone at it. The Coalsack is known as Humu (the "triggerfish"), because of its shape. In Samoa the constellation is called Sumu ("triggerfish") because of its rhomboid shape, while α and β Centauri are called Luatagata (Two Men), just as they are in Tonga. The peoples of the Solomon Islands saw several figures in the Southern Cross, including a knee protector and a net used to catch Palolo worms. Neighbouring peoples in the Marshall Islands saw these stars as a fish. Peninsular Malays also see the likeness of a fish in Crux, particularly the Scomberomorus, known locally as Tohok. In Mapudungun, the language of the Patagonian Mapuche, the name of the Southern Cross is Melipal, which means "four stars". In Quechua, the language of the Inca civilization, Crux is known as "Chakana", meaning literally "stair" (chaka, bridge, link; hanan, high, above), but it carries deep symbolism within Quechua mysticism.
Alpha and Beta Crucis make up one foot of the Great Rhea, a constellation of the Bororo of Brazil encompassing Centaurus and Circinus along with the two bright stars. The Mocoví people of Argentina also saw a rhea including the stars of Crux. Their rhea is attacked by two dogs, represented by bright stars in Centaurus and Circinus. The dogs' heads are marked by Alpha and Beta Centauri. The rhea's body is marked by the four main stars of Crux, while its head is Gamma Centauri and its feet are the bright stars of Musca. The Bakairi people of Brazil had a sprawling constellation representing a bird snare. It included the bright stars of Crux, the southern part of Centaurus, Circinus, at least one star in Lupus, the bright stars of Musca, Beta Chamaeleontis and the optical double star Delta1,2 Chamaeleontis, and some of the stars of Volans and Mensa. The Kalapalo people of Mato Grosso state in Brazil saw the stars of Crux as Aganagi, angry bees that had emerged from the Coalsack, which they saw as the beehive. Among Tuaregs, the four most visible stars of Crux are considered iggaren, i.e. four Maerua crassifolia trees. The Tswana people of Botswana saw the constellation as Dithutlwa, two giraffes: Alpha and Beta Crucis forming a male, and Gamma and Delta forming the female. See also Trishanku Northern Cross Crux (Chinese astronomy) Notes References Citations Sources External links Finding the South Pole in the sky The clickable Crux Southern Cross in Te Ara – the Encyclopedia of New Zealand Andrea Corsali – Letter to Giuliano de Medici, 1516, showing the Southern Cross, at the State Library of NSW Letter of Andrea Corsali 1516–1989: with additional material ("the first description and illustration of the Southern Cross, with speculations about Australia ...") digitised by the National Library of Australia. 'The Southern Cross': A Poem by Adam Sedia National symbols of Australia National symbols of Brazil National symbols of New Zealand National symbols of Papua New Guinea National symbols of Samoa Southern constellations Heraldic charges Constellations listed by Petrus Plancius
Crux
Astronomy
5,237
57,716,248
https://en.wikipedia.org/wiki/Daridorexant
Daridorexant, sold under the brand name Quviviq, is an orexin antagonist medication which is used for the treatment of insomnia. Daridorexant is taken by mouth. Side effects of daridorexant include headache, somnolence, and fatigue. The medication is a dual orexin receptor antagonist (DORA). It acts as a selective dual antagonist of the orexin receptors OX1 and OX2. Daridorexant has a relatively short elimination half-life of 8 hours and a time to peak of about 1 to 2 hours. It is not a benzodiazepine or Z-drug and does not interact with GABA receptors, instead having a distinct mechanism of action. Daridorexant was approved for medical use in the United States in January 2022 and became available in May 2022. It was approved in the European Union in April 2022, and is the first orexin receptor antagonist to become available in the European Union. The medication is a schedule IV controlled substance in the United States and may have a modest potential for misuse. Besides daridorexant, other orexin receptor antagonists, such as suvorexant and lemborexant, have also been introduced. Medical uses Daridorexant is indicated for the treatment of adults with insomnia characterized by difficulties with sleep onset and/or sleep maintenance. The medication has been found to significantly improve latency to persistent sleep (LPS), wake after sleep onset (WASO), and subjective total sleep time (TST) in regulatory clinical trials. At doses of 25 to 50 mg, in terms of the treatment–placebo difference, it reduces LPS by 6 to 12 minutes, reduces WASO by 10 to 23 minutes, and increases subjective TST by 10 to 22 minutes. Daridorexant has also been found to improve daytime functioning at a dose of 50 mg but not at 25 mg. Network meta-analyses have assessed the sleep-promoting effects of orexin receptor antagonists and have compared them with one another as well as with other sleep aids, including benzodiazepines, Z-drugs, antihistamines, sedative antidepressants (e.g., trazodone, doxepin, amitriptyline, mirtazapine), and melatonin receptor agonists. A major systematic review and network meta-analysis of insomnia medications published in 2022 found that daridorexant had an effect size (standardized mean difference (SMD)) against placebo for treatment of insomnia at 4 weeks of 0.23 (95% CI –0.01 to 0.48). This was similar to but numerically lower than the effect sizes at 4 weeks for suvorexant (SMD 0.31, 95% CI 0.01 to 0.62) and lemborexant (SMD 0.36, 95% CI 0.08 to 0.63). Benzodiazepines and Z-drugs generally showed larger effect sizes than orexin receptor antagonists (e.g., SMDs of 0.45 to 0.83). The review concluded, on the basis of daridorexant's small effect size, that it did not show an overall material benefit in the treatment of insomnia. Conversely, it concluded that lemborexant—as well as the Z-drug eszopiclone—had the best profiles overall in terms of efficacy, tolerability, and acceptability among all of the assessed insomnia medications. Orexin receptor antagonists are not used as first-line treatments for insomnia due to their cost and concerns about possible misuse liability. Population pharmacokinetic modeling indicates that differences between subjects do not require dose adjustments, and that lean body weight and fat mass characterize the pharmacokinetics of daridorexant better than other body size descriptors. Treatment with daridorexant is generally safe and well tolerated for up to one year, with improvements in sleep and daytime functioning persisting throughout treatment.
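The absorption and elimination figures quoted above (a time to peak of roughly 1 to 2 hours and an elimination half-life of about 8 hours; the 62% bioavailability and 31 L volume of distribution are given in the pharmacokinetics section below) can be combined in a standard one-compartment model with first-order absorption. The sketch below is purely illustrative: the absorption rate constant ka is an assumed round value chosen so that the predicted time to peak falls in the reported 1 to 2 hour window, not a published parameter.

```python
import numpy as np

# One-compartment model with first-order oral absorption -- an illustrative
# sketch, not a validated PK model. Only t_half, t_max window, F, and V come
# from the text; ka is an assumed round number.
t_half = 8.0                         # elimination half-life, hours (from text)
ke = np.log(2) / t_half              # elimination rate constant, 1/h
ka = 2.0                             # assumed absorption rate constant, 1/h
dose_mg, F, V_L = 50.0, 0.62, 31.0   # 50 mg dose; F = 62%, Vd = 31 L (from text)

t = np.linspace(0, 24, 97)           # hours after dosing
C = (F * dose_mg * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t_max = np.log(ka / ke) / (ka - ke)  # analytic time of peak concentration
print(f"predicted t_max ~ {t_max:.1f} h; C_max ~ {C.max():.2f} mg/L")
```

With these assumptions the model predicts a peak at about 1.6 hours, consistent with the reported range; the absolute concentrations should not be read as clinical values.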
The Department of Defense (DOD) is testing the effectiveness of daridorexant in patients with post-traumatic stress disorder (PTSD), as the link between insomnia and PTSD is well established. Available forms In the United States and Canada, daridorexant is available in the form of 25 and 50 mg oral tablets. It is provided as the salt daridorexant hydrochloride, with each tablet containing 27 or 54 mg of this substance (equivalent to 25 or 50 mg of daridorexant). Contraindications Daridorexant is contraindicated in people with narcolepsy. It is not recommended in people with severe hepatic impairment, whereas a lower maximum dose is recommended in people with moderate hepatic impairment. Concomitant use of daridorexant with strong CYP3A4 inhibitors and moderate to strong CYP3A4 inducers is not recommended and should be avoided due to unfavorable modification of daridorexant exposure. Side effects Side effects of daridorexant include headache (6% at 25 mg vs. 7% at 50 mg vs. 5% for placebo), somnolence or fatigue (including somnolence, sedation, fatigue, hypersomnia, and lethargy; 6% at 25 mg vs. 5% at 50 mg vs. 4% for placebo), dizziness (2% at 25 mg vs. 3% at 50 mg vs. 2% for placebo), and nausea (0% at 25 mg vs. 3% at 50 mg vs. 2% for placebo). No residual effects have been found after administration of 25 mg daridorexant in the evening to either young or elderly individuals. However, daridorexant may cause next-morning driving impairment at the start of treatment or in some individuals. Orexin receptor antagonists like daridorexant may have less or no propensity for causing tolerance compared to other sedatives and hypnotics, based on animal studies. Daridorexant did not produce signs of withdrawal or dependence upon discontinuation in animal studies and clinical trials, and orexin receptor antagonists are not associated with rebound insomnia. Loss of sleep-promoting effectiveness occurs rapidly upon discontinuation of daridorexant. Preclinical research has suggested that orexin antagonists may reduce appetite, but daridorexant and other orexin antagonists have not been associated with weight loss in clinical trials. Daridorexant may have a small risk of suicidal ideation. Orexin receptor antagonists can affect the reward system and produce drug-liking responses in humans. Daridorexant at a dose of 50 mg (the maximum recommended dose) showed significantly greater drug liking than placebo but significantly less drug liking than zolpidem (30 mg) and suvorexant (150 mg) in recreational sedative drug users. At higher doses of 100 and 150 mg (greater than the recommended maximum dose), drug liking with daridorexant was similar to that with zolpidem (30 mg) and suvorexant (150 mg). In other studies, suvorexant showed similar drug liking compared to zolpidem but lower misuse potential on other measures (e.g., an overall rate of misuse-potential adverse events of 58% for zolpidem versus 31% for suvorexant in recreational drug users). No reports indicative of misuse liability were observed in clinical trials with daridorexant, although these studies excluded participants with a history of drug or alcohol misuse. Overdose There is limited clinical experience with overdose of daridorexant. Overdose of the medication at a dose of up to four times the maximum recommended dose may result in adverse effects including somnolence, muscle weakness, cataplexy-like symptoms, sleep paralysis, attention disturbances, fatigue, headache, and constipation. There is no specific antidote to overdose of daridorexant.
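The placebo-adjusted side-effect rates quoted above translate directly into absolute risk increases and numbers needed to harm (NNH). A small sketch using only the 50 mg percentages from the text; this is illustrative arithmetic, not a clinical analysis.

```python
# (drug %, placebo %) at the 50 mg dose, taken from the rates in the text.
rates = {
    "headache":           (0.07, 0.05),
    "somnolence/fatigue": (0.05, 0.04),
    "dizziness":          (0.03, 0.02),
    "nausea":             (0.03, 0.02),
}

for effect, (drug, placebo) in rates.items():
    ari = drug - placebo                      # absolute risk increase
    nnh = 1 / ari if ari > 0 else float("inf")
    print(f"{effect:20s} ARI = {ari:.0%}  NNH ~ {nnh:.0f}")
```

The headache figures, for example, give an NNH of about 50: roughly one additional headache per 50 patients treated at 50 mg, relative to placebo.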
Interactions CYP3A4 inhibitors and inducers can increase and decrease exposure to daridorexant, respectively. The weak CYP3A4 inhibitor ranitidine (150 mg) is predicted to increase overall exposure to daridorexant by 1.5-fold; the moderate CYP3A4 inhibitor diltiazem (240 mg) increased exposure to daridorexant by 2.4-fold; and the strong CYP3A4 inhibitor itraconazole, on the basis of physiologically based pharmacokinetic modeling, would be expected to increase daridorexant exposure by more than 4-fold. Conversely, the moderate CYP3A4 inducer efavirenz (600 mg) decreased daridorexant overall exposure by 35 to 60%, and the strong CYP3A4 inducer rifampin similarly decreased daridorexant exposure by more than 50%. Concomitant use of daridorexant with strong CYP3A4 inhibitors or with moderate or strong CYP3A4 inducers should be avoided, while it is recommended that the maximum dose of daridorexant be reduced with moderate CYP3A4 inhibitors. Examples of important CYP3A4 modulators which are expected to interact with daridorexant include the strong CYP3A4 inhibitors boceprevir, clarithromycin, conivaptan, indinavir, itraconazole, ketoconazole, lopinavir, nefazodone, nelfinavir, posaconazole, ritonavir, saquinavir, telaprevir, and telithromycin (concomitant use not recommended); the moderate CYP3A4 inhibitors amprenavir, aprepitant, atazanavir, ciprofloxacin, diltiazem, dronedarone, erythromycin, fluconazole, fluvoxamine, fosamprenavir, grapefruit juice, imatinib, and verapamil (lower doses of daridorexant recommended); and the strong CYP3A4 inducers apalutamide, carbamazepine, efavirenz, enzalutamide, phenytoin, rifampin, and St. John's wort (expected to decrease daridorexant effectiveness). Gastric pH modifiers like famotidine can decrease peak levels of daridorexant without affecting total exposure. Alcohol and selective serotonin reuptake inhibitors (SSRIs) like citalopram have not shown significant pharmacokinetic interactions with daridorexant. Coadministration of daridorexant with other sedatives like benzodiazepines, opioids, tricyclic antidepressants, and alcohol may increase the risk of central nervous system depression and daytime impairment. Daridorexant has not been found to significantly influence the pharmacokinetics of other drugs, including midazolam (a CYP3A4 substrate), rosuvastatin (a BCRP substrate), and the SSRI citalopram (primarily a CYP2C19 substrate). Pharmacology Pharmacodynamics Daridorexant acts as a selective dual antagonist of the orexin (hypocretin) receptors OX1 and OX2. The affinities (Ki) of daridorexant for the orexin receptors are 0.47 nM for the OX1 receptor and 0.93 nM for the OX2 receptor. Its Kb values for the human orexin receptors have been reported to be 0.5 nM for the OX1 receptor and 0.8 nM for the OX2 receptor. Hence, daridorexant is approximately equipotent in its antagonism of the two orexin receptors. Daridorexant is selective for the orexin receptors over many other targets. In contrast to certain other sedatives and hypnotics, daridorexant is not a benzodiazepine or Z-drug and does not interact with GABA receptors. Mechanism of action The endogenous orexin neuropeptides, orexin A and orexin B, are involved in the regulation of sleep–wake cycles and act to promote wakefulness. Deficiency of orexin signaling is thought to be the primary cause of the sleep disorder narcolepsy. Disturbances in orexin signaling may also be involved in insomnia.
Research suggests that orexin signaling may change with age, and this has been implicated in age-related sleep disturbances. By blocking the actions of orexins and modulating sleep–wake cycles, orexin receptor antagonists like daridorexant reduce wakefulness and improve sleep. The sleep-promoting effects of dual orexin receptor antagonists are thought to be mediated specifically by blockade of the OX2 receptor in the lateral hypothalamus. Although narcoleptic symptoms were a theoretical concern during the development of orexin receptor antagonists, they have not been observed in clinical trials of these agents. Pharmacokinetics Absorption The absolute bioavailability of daridorexant is 62%. The poor aqueous solubility of daridorexant limits its bioavailability. It reaches peak concentrations within 1 to 2 hours following a dose. Food prolonged the time to peak by 1.3 to 2 hours and decreased the peak concentrations by 16 to 24%, but did not affect area-under-the-curve concentrations. Distribution The volume of distribution of daridorexant is 31 L. Its plasma protein binding is 99.7%. The plasma-to-blood ratio of daridorexant is 0.64. Daridorexant is a lipophilic molecule and effectively crosses the blood–brain barrier in animals. Metabolism Daridorexant is extensively metabolized, primarily by CYP3A4 (89%). Other cytochrome P450 enzymes contribute individually to less than 3% of the clearance of daridorexant. Daridorexant has 77 identified metabolites. Its major metabolites are less active than daridorexant as orexin receptor antagonists. A recent study using human liver microsomes reported that daridorexant underwent three reactions: hydroxylation at the methyl group of the benzimidazole moiety, oxidative O-demethylation of the anisole to the corresponding phenol, and hydroxylation to a 4-hydroxy piperidinol derivative. The researchers showed that the benzylic alcohol and the phenol are products of standard CYP450 reactions, while the third product was incompatible with the initially postulated hydroxylation of the pyrrolidine ring; they therefore suggested that the human monooxygenase CYP3A4 catalyzes an intramolecular rearrangement of daridorexant. In detail, they proposed the following mechanism: hydroxylation at the 5-position of the pyrrolidine ring initially yields a cyclic hemiaminal, which subsequently hydrolyzes to a ring-opened amino aldehyde; cyclization of the latter onto one of the nitrogen atoms of the benzimidazole moiety then yields the final 4-hydroxy piperidinol metabolite. Elimination Daridorexant is eliminated primarily in feces (57%) and secondarily in urine (28%). It is excreted mainly in the form of metabolites, with only trace amounts of the parent compound identified. The medication has an elimination half-life of about 8 hours (reported range 6 to 10 hours). The half-life of daridorexant may be longer in elderly individuals than in young adults (9–10 hours in the elderly versus 6 hours in young adults). Its half-life is shorter than that of other orexin receptor antagonists such as suvorexant (12 hours) and lemborexant (~18–55 hours). The relatively short half-life of daridorexant may allow for reduced daytime sedation. The duration of action of daridorexant in terms of sedative effects is approximately 8 hours with a 50 mg dose. Chemistry Daridorexant is a small-molecule compound.
The chemical name of daridorexant is (S)-(2-(5-chloro-4-methyl-1H-benzo[d]imidazol-2-yl)-2-methylpyrrolidin-1-yl)(5-methoxy-2-(2H-1,2,3-triazol-2-yl)phenyl)methanone. Its molecular formula is C23H23ClN6O2 and its molecular weight is 450.93 g/mol (or 487.38 g/mol for the hydrochloride). Daridorexant hydrochloride is a white to light yellowish powder. Daridorexant is a lipophilic compound, and daridorexant hydrochloride is very slightly soluble in water. History Daridorexant was originated by Actelion Pharmaceuticals and was further developed by Idorsia. It was patented in 2013 and was first described in the scientific literature in 2017. It was developed over 25 years by the husband-and-wife team of Jean-Paul and Martine Clozel. Daridorexant was approved for medical use in the United States in January 2022 and became available in May 2022. On 24 February 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Quviviq, intended for the treatment of insomnia. On 29 April 2022, daridorexant was authorized for use in the European Union. It was the first orexin receptor antagonist to become available for use in the European Union. (The earlier orexin receptor antagonists suvorexant and lemborexant are not available in the European Union.) Regulatory review is also ongoing in Switzerland and is planned for the United Kingdom. Society and culture Legal status Daridorexant is a schedule IV controlled substance under the Controlled Substances Act in the United States. Daridorexant (Quviviq) was approved for medical use in the European Union in April 2022. Daridorexant (Quviviq) was approved by Health Canada in April 2023. References Further reading Benzimidazoles Chlorobenzene derivatives Ethers Hypnotics Orexin antagonists Pyrrolidines Triazoles
Daridorexant
Chemistry,Biology
4,028
72,270,279
https://en.wikipedia.org/wiki/Cubical%20bipyramid
In 4-dimensional geometry, the cubical bipyramid is the direct sum of a cube and a segment, {4,3} + { }. Each face of the central cube, together with each of the two apexes, bounds a square-pyramid cell, creating 12 square pyramidal cells, 30 faces (24 triangular and 6 square), 28 edges, and 10 vertices. A cubical bipyramid can be seen as two cubic pyramids augmented together at their bases. It is the dual of an octahedral prism. Being convex and regular-faced, it is a CRF polytope. Coordinates It is a Hanner polytope with coordinates: [2] (0, 0, 0; ±1) [8] (±1, ±1, ±1; 0) See also Tetrahedral bipyramid Dodecahedral bipyramid Icosahedral bipyramid References External links Cubic tegum 4-polytopes
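The counts above can be verified computationally from the Hanner coordinates. The following sketch (plain Python) enumerates the cells, faces, and edges following the bipyramid construction described above, and checks the 4-dimensional Euler relation V - E + F - C = 0.

```python
from itertools import product

# Vertices of the cubical bipyramid: two apexes plus a unit cube in the w = 0 slice.
apexes = [(0, 0, 0, 1), (0, 0, 0, -1)]
cube = [(x, y, z, 0) for x, y, z in product((-1, 1), repeat=3)]

# Each of the 6 cube faces (one coordinate fixed) joined to each apex gives a
# square-pyramid cell: 6 * 2 = 12 cells.
faces_of_cube = [[v for v in cube if v[axis] == s]
                 for axis in range(3) for s in (-1, 1)]

cells, triangles = [], set()
for apex in apexes:
    for square in faces_of_cube:
        cells.append((apex, tuple(square)))
        # Lateral triangular faces: the apex plus each edge of the square
        # (square edges are vertex pairs differing in exactly one coordinate).
        for u, v in [(u, v) for i, u in enumerate(square) for v in square[i + 1:]
                     if sum(a != b for a, b in zip(u, v)) == 1]:
            triangles.add(frozenset((apex, u, v)))

edges = {frozenset((apex, v)) for apex in apexes for v in cube}   # 16 lateral edges
edges |= {frozenset((u, v)) for i, u in enumerate(cube) for v in cube[i + 1:]
          if sum(a != b for a, b in zip(u, v)) == 1}              # 12 cube edges

V, E, F, C = len(apexes) + len(cube), len(edges), len(triangles) + 6, len(cells)
print(V, E, F, C, "Euler:", V - E + F - C)   # 10 28 30 12 Euler: 0
```

The 30 faces split into 24 triangles (deduplicated in the set above) and the 6 square faces of the central cube, matching the counts in the text.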
Cubical bipyramid
Mathematics
190
14,170,826
https://en.wikipedia.org/wiki/ETV4
ETS translocation variant 4 (ETV4), also known as polyoma enhancer activator 3 (PEA3), is a member of the PEA3 subfamily of Ets transcription factors. Disease marker Two diseases associated with ETV4 are Ewing sarcoma and extraosseous Ewing sarcoma. While both are cancerous tumors, the former grows in the bones, most commonly affecting the arms, legs, hips, and spine, while the latter affects the soft tissue of the chest, foot, pelvis, and spine. References Further reading External links Transcription factors
ETV4
Chemistry,Biology
124
66,153,538
https://en.wikipedia.org/wiki/Kinkeeping
Kinkeeping is the act of maintaining and strengthening familial ties. It is a form of emotional labor done both out of a sense of obligation and because of emotional attachment. Sociologist Carolyn Rosenthal defined the term in her 1985 article, "Kinkeeping in the Familial Division of Labor". According to her, kinkeepers play an important role in maintaining family cohesion and continuity. Their efforts contribute significantly to the family's social capital, providing emotional support and a sense of belonging to family members. Kinkeeping activities help extended family members of differing households stay in touch with one another and strengthen intergenerational bonds. It facilitates the transfer of family traditions, values, and histories from one generation to the next. Families with active kinkeepers tend to feel more connected as a family. Kinkeeping methods may include telephone calls, writing letters, visiting, sending gifts, acting as a caregiver for disabled or infirm family members, or providing economic aid. Maintaining family traditions, such as preparing particular foods for holidays, is a form of kinkeeping. Women are more likely to act as kinkeepers than men and often organize family events and reunions. A 2006 survey of three cohorts of Americans (those born before 1930, in 1946–1964, and in 1965–1976) found that women reported more contact with relatives than men in every cohort. Kinkeeping tends to be time-consuming. Kinkeepers may enjoy their role, or they may find it burdensome. References Interpersonal relationships Family
Kinkeeping
Biology
306
1,132,290
https://en.wikipedia.org/wiki/Norbaeocystin
Norbaeocystin, also known as 4-phosphoryloxytryptamine (4-PO-T), is a psilocybin mushroom alkaloid and analog of psilocybin. It is found as a minor compound in most psilocybin mushrooms together with psilocin, psilocybin, aeruginascin, and baeocystin, from which it is derived. Norbaeocystin is an N-demethylated derivative of baeocystin (itself an N-demethylated derivative of psilocybin) and a phosphorylated derivative of 4-hydroxytryptamine (4-HT). The latter is notable as a positional isomer of serotonin, which is 5-hydroxytryptamine. Norbaeocystin is thought to be a prodrug of 4-HT, analogously to how psilocybin is a prodrug of psilocin and baeocystin is thought to be a prodrug of norpsilocin. 4-HT is a potent and centrally penetrant serotonin 5-HT2A receptor agonist and also interacts with other serotonin receptors. In spite of this, however, 4-HT and norbaeocystin do not produce the head-twitch response, a behavioral proxy of psychedelic effects, in animals, and hence are putatively non-hallucinogenic. The reasons for this are unknown, but may involve β-arrestin2-preferring biased agonism of the serotonin 5-HT2A receptor. See also Aeruginascin Baeocystin Norpsilocin Psilocybin References Experimental non-hallucinogens Non-hallucinogenic 5-HT2A receptor agonists Organophosphates Prodrugs Psychedelic tryptamines Tryptamine alkaloids
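Because the text describes norbaeocystin as the phosphate ester of 4-hydroxytryptamine, its structure can be encoded and sanity-checked with a cheminformatics toolkit. Below is a minimal sketch using RDKit; the SMILES string is our own encoding of the structure described above, not taken from a database, so verify it against an authoritative source before relying on it.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Assumed SMILES for norbaeocystin (4-phosphoryloxytryptamine): a tryptamine
# bearing a phosphate ester at ring position 4 -- check against a curated source.
smiles = "NCCc1c[nH]c2cccc(OP(O)(O)=O)c12"
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))   # expected: C10H13N2O4P
print(f"{Descriptors.MolWt(mol):.2f} g/mol")  # roughly 256 g/mol
```

Deleting the phosphate group from this SMILES yields 4-hydroxytryptamine itself, which makes the prodrug relationship described in the text easy to see structurally.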
Norbaeocystin
Chemistry
414
217,104
https://en.wikipedia.org/wiki/Social%20welfare%20function
In welfare economics and social choice theory, a social welfare function—also called a social ordering, ranking, utility, or choice function—is a function that ranks a set of social states by their desirability. Each person's preferences are combined in some way to determine which outcome is considered better by society as a whole. It can be seen as mathematically formalizing Rousseau's idea of a general will. Social choice functions are studied by economists as a way to identify socially optimal decisions, giving a procedure to rigorously define which of two outcomes should be considered better for society as a whole (e.g. to compare two different possible income distributions). They are also used by democratic governments to choose between several options in elections, based on the preferences of voters; in this context, a social choice function is typically referred to as an electoral system. The notion of social utility is analogous to the notion of a utility function in consumer choice. However, a social welfare function is different in that it is a mapping of individual utility functions onto a single output, in a way that accounts for the judgments of everyone in a society. There are two different notions of social welfare used by economists: Ordinal (or ranked voting) functions only use ordinal information, i.e. whether one choice is better than another. Cardinal (or rated voting) functions also use cardinal information, i.e. how much better one choice is compared to another. Arrow's impossibility theorem is a key result on social welfare functions, showing an important difference between social and consumer choice: whereas it is possible to construct a rational (non-self-contradictory) decision procedure for consumers based only on ordinal preferences, it is impossible to do the same in the social choice setting, making any such ordinal decision procedure a second-best. Terminology and equivalence Some authors maintain a distinction between three closely related concepts: A social choice function selects a single best outcome (a single candidate who wins, or multiple if there happens to be a tie). A social ordering function lists the candidates, from best to worst. A social scoring function maps each candidate to a number representing their quality. For example, the standard social scoring function for first-preference plurality is the total number of voters who rank a candidate first. Every social ordering can be made into a choice function by considering only the highest-ranked outcome. Less obviously, though, every social choice function is also an ordering function. Deleting the best outcome, then finding the new winner, results in a runner-up who is assigned second place. Repeating this process gives a full ranking of all candidates. Because of this close relationship, the three kinds of functions are often conflated by abuse of terminology. Example Consider an instant-runoff election between Top, Center, and Bottom. Top has the most first-preference votes; Bottom has the second-most; and Center (positioned between the two) has the fewest first preferences. Under instant-runoff voting, Top is the winner: Center is eliminated in the first round, and Center's second preferences are evenly split between Top and Bottom, allowing Top to win. To find the second-place finisher, we find the winner if Top had not run. In this case, the election is between Center and Bottom, and Center, preferred to Bottom by Top's supporters, wins, making Center the runner-up, as the short sketch below illustrates.
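The runoff logic in this example is mechanical enough to script. Below is a minimal sketch of instant-runoff counting in Python; the ballot counts are invented for illustration, chosen only to reproduce the situation described (Top ahead on first preferences, Center last, and Center's second preferences split evenly).

```python
from collections import Counter

# Each ballot ranks the candidates best-first. Counts are assumed for
# illustration: 8 Top-first, 7 Bottom-first, 4 Center-first (split 2/2).
ballots = (
    [["Top", "Center", "Bottom"]] * 8 +
    [["Bottom", "Center", "Top"]] * 7 +
    [["Center", "Top", "Bottom"]] * 2 +
    [["Center", "Bottom", "Top"]] * 2
)

def instant_runoff(ballots, eliminated=frozenset()):
    """Return the IRV winner, ignoring candidates in `eliminated`."""
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in b if c not in eliminated) for b in ballots
        )
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:          # strict majority: done
            return leader
        eliminated = eliminated | {min(tallies, key=tallies.get)}

winner = instant_runoff(ballots)
runner_up = instant_runoff(ballots, eliminated=frozenset({winner}))
print(winner, runner_up)   # Top Center
```

Rerunning the count with the winner removed is exactly the "deleting the best outcome" procedure from the previous section, which is how the choice function yields a full ordering.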
(Note that the finishing order is not the same as the elimination order for sequential elimination methods: despite being eliminated first, Center is the runner-up in this election.) Ordinal welfare In a 1938 article, Abram Bergson introduced the term social welfare function, with the intention "to state in precise form the value judgments required for the derivation of the conditions of maximum economic welfare." The function was real-valued and differentiable. It was specified to describe the society as a whole. Arguments of the function included the quantities of different commodities produced and consumed and of resources used in producing different commodities, including labor. Necessary general conditions are that at the maximum value of the function: the marginal "dollar's worth" of welfare is equal for each individual and for each commodity; the marginal "dis-welfare" of each "dollar's worth" of labor is equal for each commodity produced by each labor supplier; and the marginal "dollar" cost of each unit of resources is equal to the marginal value productivity for each commodity. Bergson argued that welfare economics had described a standard of economic efficiency despite dispensing with interpersonally comparable cardinal utility, the hypothesization of which may merely conceal value judgments, and purely subjective ones at that. Earlier neoclassical welfare theory, heir to the classical utilitarianism of Bentham, often treated the law of diminishing marginal utility as implying interpersonally comparable utility. Irrespective of such comparability, income or wealth is measurable, and it was commonly inferred that redistributing income from a rich person to a poor person tends to increase total utility (however measured) in the society. But Lionel Robbins (1935, ch. VI) argued that how, or how much, utilities as mental events change relative to each other is not measurable by any empirical test, making such claims unfalsifiable. Robbins therefore rejected interpersonal comparisons of utility as incompatible with his own philosophical behaviorism. Auxiliary specifications enable comparison of different social states by each member of society in preference satisfaction. These help define Pareto efficiency, which holds if all alternatives have been exhausted to put at least one person in a more preferred position with no one put in a less preferred position. Bergson described an "economic welfare increase" (later called a Pareto improvement) as at least one individual moving to a more preferred position with everyone else indifferent. The social welfare function could then be specified in a substantively individualistic sense to derive Pareto efficiency (optimality). Paul Samuelson (2004, p. 26) notes that Bergson's function "could derive Pareto optimality conditions as necessary but not sufficient for defining interpersonal normative equity." Still, Pareto efficiency could also characterize one dimension of a particular social welfare function, with the distribution of commodities among individuals characterizing another dimension. As Bergson noted, a welfare improvement from the social welfare function could come from the "position of some individuals" improving at the expense of others. That social welfare function could then be described as characterizing an equity dimension. Samuelson (1947, p. 221)
himself stressed the flexibility of the social welfare function to characterize any one ethical belief, Pareto-bound or not, consistent with: a complete and transitive ranking (an ethically "better", "worse", or "indifferent" ranking) of all social alternatives; and one set, out of an infinity of welfare indices and cardinal indicators, to characterize the belief. As Samuelson (1983, p. xxii) notes, Bergson clarified how production and consumption efficiency conditions are distinct from the interpersonal ethical values of the social welfare function. Samuelson further sharpened that distinction by specifying the welfare function and the possibility function (1947, pp. 243–49). Each has as arguments the set of utility functions for everyone in the society. Each can (and commonly does) incorporate Pareto efficiency. The possibility function also depends on technology and resource restraints. It is written in implicit form, reflecting the feasible locus of utility combinations imposed by the restraints and allowed by Pareto efficiency. At a given point on the possibility function, if the utility of all but one person is determined, the remaining person's utility is determined. The welfare function ranks different hypothetical sets of utility for everyone in the society from ethically lowest on up (with ties permitted); that is, it makes interpersonal comparisons of utility. Welfare maximization then consists of maximizing the welfare function subject to the possibility function as a constraint. The same welfare maximization conditions emerge as in Bergson's analysis. Kenneth Arrow's 1963 book demonstrated the problems with such an approach, though he would not immediately realize this. Along earlier lines, Arrow's version of a social welfare function, also called a 'constitution', maps a set of individual orderings (ordinal utility functions) for everyone in society to a social ordering, which ranks alternative social states (such as which of several candidates should be elected). Arrow found that, contrary to the assertions of Lionel Robbins and other behaviorists, dropping the requirement of real-valued (and thus cardinal) social orderings makes rational or coherent behavior at the social level impossible. This result is now known as Arrow's impossibility theorem. Arrow's theorem shows that it is impossible for an ordinal social welfare function to satisfy a standard axiom of rational behavior, called independence of irrelevant alternatives. This axiom says that changing the value of one outcome should not affect choices that do not involve this outcome. For example, if a customer buys apples because they prefer them to blueberries, telling them that cherries are on sale should not make them buy blueberries instead of apples. John Harsanyi later strengthened this result by showing that if societies must make decisions under uncertainty, the unique social welfare function satisfying coherence and Pareto efficiency is the utilitarian rule. Cardinal welfare A cardinal social welfare function is a function that takes as input numeric representations of individual utilities (also known as cardinal utility) and returns as output a numeric representation of the collective welfare. The underlying assumption is that individuals' utilities can be put on a common scale and compared. Examples of such measures include life expectancy or per capita income. For the purposes of this section, income is adopted as the measurement of utility.
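The specific functional forms introduced in the next few paragraphs (the utilitarian sum, the average, the Rawlsian minimum, and Sen's Gini-adjusted mean) are easy to compare numerically. A minimal sketch in Python; the five-person income vector is an invented toy distribution used only for illustration.

```python
import numpy as np

incomes = np.array([12.0, 20.0, 35.0, 58.0, 95.0])   # assumed toy distribution

def gini(y):
    """Gini index via the mean absolute difference (fine for small samples)."""
    y = np.sort(y)
    n = y.size
    return np.abs(y[:, None] - y[None, :]).sum() / (2 * n * n * y.mean())

utilitarian = incomes.sum()                     # Benthamite: total income
average     = incomes.mean()                    # per capita variant
rawlsian    = incomes.min()                     # max-min: the worst-off person
sen         = incomes.mean() * (1 - gini(incomes))  # Gini-adjusted mean

print(f"utilitarian={utilitarian:.1f} average={average:.1f} "
      f"rawlsian={rawlsian:.1f} sen={sen:.1f}")
```

Note how a transfer from the poorest to the richest individual leaves the utilitarian and average values unchanged while lowering both the Rawlsian and the Sen measures, which is precisely the contrast the following paragraphs draw.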
The form of the social welfare function is intended to express a statement of objectives of a society. The utilitarian or Benthamite social welfare function measures social welfare as the total or sum of individual utilities: $W = \sum_{i=1}^{n} Y_i$, where $W$ is social welfare and $Y_i$ is the income of individual $i$ among the $n$ individuals in society. In this case, maximizing the social welfare means maximizing the total income of the people in the society, without regard to how incomes are distributed in society. It does not distinguish between an income transfer from rich to poor and vice versa. If an income transfer from the poor to the rich results in a bigger increase in the utility of the rich than the decrease in the utility of the poor, the society is expected to accept such a transfer, because the total utility of the society has increased as a whole. Alternatively, society's welfare can also be measured under this function by taking the average of individual incomes: $\bar{W} = \frac{1}{n}\sum_{i=1}^{n} Y_i$. In contrast, the max-min or Rawlsian social welfare function (based on the philosophical work of John Rawls) measures the social welfare of society on the basis of the welfare of the least well-off individual member of society: $W = \min(Y_1, Y_2, \ldots, Y_n)$. Here maximizing societal welfare would mean maximizing the income of the poorest person in society without regard for the income of other individuals. These two social welfare functions express very different views about how a society would need to be organised in order to maximize welfare, with the first emphasizing total incomes and the second emphasizing the needs of the worst-off. The max-min welfare function can be seen as reflecting an extreme form of uncertainty aversion on the part of society as a whole, since it is concerned only with the worst conditions that a member of society could face. Amartya Sen proposed a welfare function in 1973: $W_{\mathrm{Sen}} = \bar{Y}\,(1 - G)$, where the average per capita income $\bar{Y}$ of a measured group (e.g. a nation) is multiplied by $(1 - G)$, with $G$ the Gini index, a relative inequality measure. James E. Foster (1996) proposed to use one of Atkinson's indexes, which is an entropy measure. Due to the relation between Atkinson's entropy measure and the Theil index, Foster's welfare function can also be computed directly using the Theil-L index $T_L$: $W = \bar{Y}\, e^{-T_L}$. The value yielded by this function has a concrete meaning. There are several possible incomes which could be earned by a person who is randomly selected from a population with an unequal distribution of incomes. This welfare function marks the income which a randomly selected person is most likely to have. Similar to the median, this income will be smaller than the average per capita income. A companion function applies the Theil-T index instead. Its value likewise has a concrete meaning: there are several possible incomes to which a euro may belong when the euro is randomly picked from the sum of all unequally distributed incomes, and this function marks the income to which a randomly selected euro most likely belongs. That value will be larger than the average per capita income. Axioms of cardinal welfarism Suppose we are given a preference relation R on utility profiles. R is a weak total order on utility profiles—it can tell us, given any two utility profiles, if they are indifferent or one of them is better than the other. A reasonable preference ordering should satisfy several axioms: 1. Monotonicity: if the utility of one individual increases, while all other utilities remain equal, R should strictly prefer the second profile.
For example, it should prefer the profile (1, 4, 4, 5) to (1, 2, 4, 5). Such a change is called a Pareto improvement. 2. Symmetry: reordering or relabeling the values in the utility profile should not change the output of R. This axiom formalizes the idea that every person should be treated equally in society. For example, R should be indifferent between (1, 4, 4, 5) and (5, 4, 4, 1), because the only difference between them is which individual receives which utility. 3. Continuity: for every profile v, the set of profiles weakly better than v and the set of profiles weakly worse than v are closed sets. 4. Independence of unconcerned agents: R should be independent of individuals whose utilities have not changed. For example, if R prefers (2, 2, 4) to (1, 3, 4), it also prefers (2, 2, 9) to (1, 3, 9); the utility of agent 3 should not affect the comparison between two utility profiles of agents 1 and 2. This property can also be called locality or separability. It allows us to treat allocation problems in a local way, and separate them from the allocation in the rest of society. Every preference relation with properties 1–4 can be represented by a function W which is a sum of the form $W(u_1, \ldots, u_n) = \sum_{i=1}^{n} w(u_i)$, where $w$ is a continuous increasing function. Harsanyi's theorem Introducing one additional axiom—the nonexistence of Dutch books, or equivalently that social choice behaves according to the axioms of rational choice—implies that the social choice function must be the utilitarian rule, i.e. social welfare must be a weighted sum of the individual utilities. This result is known as Harsanyi's utilitarian theorem. Non-utilitarian By Harsanyi's theorem, any non-utilitarian social choice function will be incoherent; in other words, it will agree to some bets that are unanimously opposed by every member of society. However, it is still possible to establish properties of such functions. Instead of imposing rational behavior on the social utility function, we can impose a weaker criterion called independence of common scale: the relation between two utility profiles does not change if both of them are multiplied by the same constant. For example, the utility function should not depend on whether we measure incomes in cents or dollars. If the preference relation has properties 1–5, then the function w must be an isoelastic function: $w(u) = \frac{u^{q}}{q}$, with $w(u) = \ln u$ taken at $q = 0$. This family has some familiar members: The limit as $q \to -\infty$ is the leximin ordering. For $q = 0$ we get the Nash bargaining solution—maximizing the product of utilities. For $q = 1$ we get the utilitarian welfare function—maximizing the sum of utilities. The limit as $q \to +\infty$ is the leximax ordering. If we require the Pigou–Dalton principle (that inequality is not a positive good), then $q$ in the above family must be at most 1. See also Aggregation problem Arrow's impossibility theorem Community indifference curve Distribution (economics) Economic welfare Extended sympathy Gorman polar form Justice (economics) Liberal paradox Production-possibility frontier Social choice theory Welfare economics Notes References Kenneth J. Arrow, 1951, 2nd ed., 1963, Social Choice and Individual Values. Abram Bergson (Burk), "A Reformulation of Certain Aspects of Welfare Economics," Quarterly Journal of Economics, 52(2), February 1938, 310–34. Bergson–Samuelson social welfare functions in Paretian welfare economics, from the New School. James E. Foster and Amartya Sen, 1996, On Economic Inequality, expanded edition with annexe. John C. Harsanyi, 1987, "Interpersonal utility comparisons," The New Palgrave: A Dictionary of Economics, v. 2, 955–58.
J. de V. Graaff, 1957, Theoretical Welfare Economics, Cambridge, UK: Cambridge University Press. Lionel Robbins, 1935, 2nd ed., An Essay on the Nature and Significance of Economic Science, ch. VI. _, 1938, "Interpersonal Comparisons of Utility: A Comment," Economic Journal, 43(4), 635–41. Paul A. Samuelson, 1947, enlarged ed. 1983, Foundations of Economic Analysis, pp. xxi–xxiv & ch. VIII, "Welfare Economics." _, 1977, "Reaffirming the Existence of 'Reasonable' Bergson–Samuelson Social Welfare Functions," Economica, N.S., 44(173), pp. 81–88; reprinted in (1986) The Collected Scientific Papers of Paul A. Samuelson, pp. 47–54. _, 1981, "Bergsonian Welfare Economics," in S. Rosefielde (ed.), Economic Welfare and the Economics of Soviet Socialism: Essays in Honor of Abram Bergson, Cambridge University Press, Cambridge, pp. 223–66; reprinted in (1986) The Collected Scientific Papers of Paul A. Samuelson, pp. 3–46. Sen, Amartya K. (1963). "Distribution, Transitivity and Little's Welfare Criteria," Economic Journal, 73(292), pp. 771–78. _, 1970 [1984], Collective Choice and Social Welfare, ch. 3, "Collective Rationality." _ (1982). Choice, Welfare and Measurement, MIT Press. Kotaro Suzumura (1980). "On Distributional Value Judgments and Piecemeal Welfare Criteria," Economica, 47(186), pp. 125–39. _, 1987, "Social welfare function," The New Palgrave: A Dictionary of Economics, v. 4, 418–20. Welfare economics Social choice theory Mathematical economics
Social welfare function
Mathematics
3,864
65,553,988
https://en.wikipedia.org/wiki/Ping%20Zhang%20%28biologist%29
Ping Zhang is an American structural biologist researching the structural and mechanistic basis of multi-component kinase signaling complexes that are linked to human cancers and other diseases, with the long-term goal of developing new therapeutic strategies. She is an NIH Stadtman Investigator in the Structural Biophysics Laboratory at the National Cancer Institute. Education Zhang completed a Ph.D. in Michael Rossmann's lab at Purdue University in the field of biochemistry and structural virology. Her Ph.D. project was resolving the structures of poliovirus–receptor complexes using X-ray crystallography and cryogenic electron microscopy (cryo-EM). She completed her postdoctoral training at the Howard Hughes Medical Institute and in Susan S. Taylor's laboratory at the University of California, San Diego, working on a signal transduction system related to human diseases and learning other techniques in structural biology and cell signaling that are suited for studying dynamic signaling complexes. Career and research Zhang was an assistant project scientist in the department of pharmacology at the University of California, San Diego. She joined the Structural Biophysics Laboratory at the National Cancer Institute (NCI) as an NIH Stadtman Tenure-Track Investigator in August 2016, where she continues to study kinase signaling complexes linked to human cancers and other diseases. Current research topics include the Raf family kinases, the leucine-rich repeat kinases, and an oncogenic PKA kinase fusion protein. Zhang's lab applies integrated structural biology (single-particle cryo-electron microscopy and X-ray crystallography) and biochemical approaches to study these kinase complexes in their functional states. This strategy is used to reveal the mechanistic details and factors critical for driving the functional activities of these kinases and how these activities may be altered in pathological states. References Living people Year of birth missing (living people) Place of birth missing (living people) Purdue University alumni National Institutes of Health people University of California, San Diego faculty Structural biologists American women biologists 21st-century American women scientists American people of Chinese descent American women medical researchers American cancer researchers
Ping Zhang (biologist)
Chemistry
450
10,454,382
https://en.wikipedia.org/wiki/Harrogate%20Spring%20Water
Harrogate Spring Water is a private limited company, incorporated on 16 August 2000, which bottles spring water in plastic and glass in Harrogate, North Yorkshire, England, and distributes its bottles all over the world. Spa waters were first discovered in Harrogate in the 16th century, with water bottled in glass in the town from the 1740s. Sources The main spring is sourced from an aquifer in the millstone grit series, below the Harrogate Pinewoods. The Thirsty Planet brand takes water from an aquifer located in sand and gravel. History The company was founded in August 2000, initially under the name HSW Limited, and the product was launched in January 2002. Harrogate Spring Water manufactures bottled water and sells it locally, nationally and internationally, with exports reaching as far away as Australia. A change in majority ownership was made during 2020, resulting in Danone becoming the majority holder and displacing the Cain family from their ownership. The company was previously owned by Harrogate Water Brands, which also owned the charity Thirsty Planet, producing its own brand of bottled water. In 2021, a plan to expand the bottling plant over an area of woodland was criticised by Harrogate residents because Harrogate Spring Water sought to destroy an area of established woodland and natural habitat planted previously by volunteers and local primary school children. Market Harrogate Spring Water is a popular brand within the United Kingdom, holding a market share of 1.4% in 2013. In 2019, the company achieved sales of £21.6 million. Furthermore, all major British airlines (i.e. British Airways, Virgin Atlantic, Jet2.com, TUI Airways, and Easyjet) provide Harrogate Water on their flights, and, in the case of British Airways, in their premium lounges at London Heathrow. They also supply Cunard for their ships. Harrogate Spring Water also supplies water to the Masons Gin distillery in Aiskew, North Yorkshire. References External links https://find-and-update.company-information.service.gov.uk/company/04056786 Bottled water brands Companies based in Harrogate Mineral water Groupe Danone brands
Harrogate Spring Water
Chemistry
453
2,282,444
https://en.wikipedia.org/wiki/Rare-earth%20magnet
A rare-earth magnet is a strong permanent magnet made from alloys of rare-earth elements. Developed in the 1970s and 1980s, rare-earth magnets are the strongest type of permanent magnets made, producing significantly stronger magnetic fields than other types such as ferrite or alnico magnets. The magnetic field typically produced by rare-earth magnets can exceed 1.2 teslas, whereas ferrite or ceramic magnets typically exhibit fields of 0.5 to 1 tesla. There are two types: neodymium magnets and samarium–cobalt magnets. Rare-earth magnets are extremely brittle and also vulnerable to corrosion, so they are usually plated or coated to protect them from breaking, chipping, or crumbling into powder. The development of rare-earth magnets began around 1966, when K. J. Strnat and G. Hoffer of the US Air Force Materials Laboratory discovered that an alloy of yttrium and cobalt, YCo5, had by far the largest magnetic anisotropy constant of any material then known. The term "rare earth" can be misleading, as some of these metals can be as abundant in the Earth's crust as tin or lead; but rare-earth ores do not exist in seams (like coal or copper), so in any given cubic kilometre of crust they are "rare". China has the highest production, though it imports significant amounts of REE ore from Myanmar. Some countries classify rare-earth metals as strategically important, and Chinese export restrictions on these materials have led other countries, including the United States, to initiate research programs to develop strong magnets that do not require rare-earth metals. Properties The rare-earth (lanthanide) elements are metals that are ferromagnetic, meaning that like iron they can be magnetized to become permanent magnets, but their Curie temperatures (the temperature above which their ferromagnetism disappears) are below room temperature, so in pure form their magnetism only appears at low temperatures. However, they form compounds with the transition metals such as iron, nickel, and cobalt, and some of these compounds have Curie temperatures well above room temperature. Rare-earth magnets are made from these compounds. The greater strength of rare-earth magnets is mostly due to two factors. Firstly, their crystalline structures have very high magnetic anisotropy. This means that a crystal of the material preferentially magnetizes along a specific crystal axis but is very difficult to magnetize in other directions. Like other magnets, rare-earth magnets are composed of microcrystalline grains, which are aligned in a powerful magnetic field during manufacture, so their magnetic axes all point in the same direction. The resistance of the crystal lattice to turning its direction of magnetization gives these compounds a very high magnetic coercivity (resistance to being demagnetized), so that the strong demagnetizing field within the finished magnet does not reduce the material's magnetization. Secondly, atoms of rare-earth elements can have high magnetic moments. Their orbital electron structures contain many unpaired electrons; in other elements, almost all of the electrons exist in pairs with opposite spins, so their magnetic fields cancel out, but in rare earths there is much less magnetic cancellation. This is a consequence of incomplete filling of the f-shell, which can contain up to 7 unpaired electrons. In a magnet, it is the unpaired electrons, aligned so they spin in the same direction, which generate the magnetic field. This gives the materials high remanence (saturation magnetization J).
The maximal energy density B·Hmax is proportional to the square of J, so these materials have the potential for storing large amounts of magnetic energy. The magnetic energy product B·H of neodymium magnets is about 18 times greater than that of "ordinary" magnets by volume. This allows rare-earth magnets to be smaller than other magnets with the same field strength. Some important properties used to compare permanent magnets are: remanence (Br), which measures the strength of the magnetic field; coercivity (Hci), the material's resistance to becoming demagnetized; energy product (B·Hmax), the density of magnetic energy; and Curie temperature (TC), the temperature at which the material loses its magnetism. Rare-earth magnets have higher remanence, and much higher coercivity and energy product, but (for neodymium) a lower Curie temperature than other types. The table below compares the magnetic performance of the two types of rare-earth magnets, neodymium (Nd2Fe14B) and samarium–cobalt (SmCo5), with other types of permanent magnets. Types Samarium–cobalt Samarium–cobalt magnets (chemical formula: SmCo5), the first family of rare-earth magnets invented, are less used than neodymium magnets because of their higher cost and lower magnetic field strength. However, samarium–cobalt has a higher Curie temperature, creating a niche for these magnets in applications where high field strength is needed at high operating temperatures. They are highly resistant to oxidation, but sintered samarium–cobalt magnets are brittle and prone to chipping and cracking, and may fracture when subjected to thermal shock. Neodymium Neodymium magnets, invented in the 1980s, are the strongest and most affordable type of rare-earth magnet. They are made of an alloy of neodymium, iron, and boron (Nd2Fe14B), sometimes abbreviated as NIB. Neodymium magnets are used in numerous applications requiring strong, compact permanent magnets, such as electric motors for cordless tools, hard disk drives, magnetic hold-downs, and jewellery clasps. They have the highest magnetic field strength and a higher coercivity (which makes them magnetically stable), but they have a lower Curie temperature and are more vulnerable to oxidation than samarium–cobalt magnets. Corrosion can cause unprotected magnets to spall off a surface layer or to crumble into a powder. Use of protective surface treatments such as gold, nickel, zinc, and tin plating and epoxy-resin coating can provide corrosion protection; the majority of neodymium magnets use nickel plating to provide robust protection. Originally, the high cost of these magnets limited their use to applications requiring compactness together with high field strength. Both the raw materials and the patent licenses were expensive. However, since the 1990s, NIB magnets have become steadily less expensive, and their lower cost has inspired new uses such as magnetic construction toys. Applications Since their prices became competitive in the 1990s, neodymium magnets have been replacing alnico and ferrite magnets in the many applications in modern technology requiring powerful magnets. Their greater strength allows smaller and lighter magnets to be used for a given application.
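For a feel of the field strengths involved, the axial field of a cylindrical magnet can be estimated from its remanence with the standard on-axis formula for a uniformly magnetized cylinder. In the sketch below, the dimensions and the 1.3 T remanence are assumed purely for illustration (the text says typical rare-earth fields can exceed 1.2 T).

```python
import numpy as np

def axial_field(Br, L, R, z):
    """On-axis flux density (tesla) at distance z from the face of a
    uniformly magnetized cylinder: length L, radius R, remanence Br."""
    return (Br / 2) * ((L + z) / np.sqrt((L + z)**2 + R**2)
                       - z / np.sqrt(z**2 + R**2))

Br = 1.3             # tesla, assumed typical NdFeB remanence
L, R = 0.01, 0.005   # assumed magnet: 10 mm long, 10 mm diameter

for z_mm in (0, 1, 5, 10, 20):
    B = axial_field(Br, L, R, z_mm / 1000)
    print(f"z = {z_mm:2d} mm: B = {B * 1000:7.1f} mT")
```

The rapid fall-off with distance that this prints (hundreds of millitesla at the surface, tens of millitesla a centimetre away) is why the pinching and attraction hazards discussed below are most severe at close range.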
Applications Since their prices became competitive in the 1990s, neodymium magnets have been replacing alnico and ferrite magnets in the many applications in modern technology requiring powerful magnets. Their greater strength allows smaller and lighter magnets to be used for a given application. Common applications of rare-earth magnets include: Computer hard disk drives Wind turbine generators Speakers / headphones Bicycle dynamos MRI scanners Fishing reel brakes Permanent magnet motors in cordless tools High-performance AC servo motors Traction motors and integrated starter-generators in hybrid and electric vehicles Mechanically powered flashlights, employing rare earth magnets for generating electricity in a shaking motion or rotating (hand-crank-powered) motion Industrial uses such as maintaining product purity, equipment protection, and quality control Capture of fine metallic particles in lubricating oils (crankcases of internal combustion engines, also gearboxes and differentials), so as to keep said particles out of circulation, thereby rendering them unable to cause abrasive wear of moving machine parts Other applications of rare-earth magnets include: Linear motors (used in maglev trains, etc.) Stop motion animation: as tie-downs when the use of traditional screw and nut tie-downs is impractical. Diamagnetic levitation experimentation, the study of magnetic field dynamics and superconductor levitation. Electrodynamic bearings Launched roller coaster technology found on roller coasters and other thrill rides. LED Throwies, small LEDs attached to a button cell battery and a small rare earth magnet, used as a form of non-destructive graffiti and temporary public art. Desk toys Electric guitar pickups Miniature figures, for which rare-earth magnets have gained popularity in the miniatures gaming community; their small size and relative strength assist in basing models and swapping weapons between them. Research on cancer treatment is exploring the use of magnetic nanoparticles (MNPs) made from rare earth metals. In magnetic hyperthermia, MNPs generate localized heat within tumor cells, leading to their selective destruction. In targeted delivery systems, MNPs are attached to therapeutics and guided by an external magnetic field to concentrate and retain them at the desired site. Hazards and legislation The greater force exerted by rare-earth magnets creates hazards that are not seen with other types of magnet. Magnets larger than a few centimeters are strong enough to cause injuries to body parts pinched between two magnets or a magnet and a metal surface, even causing broken bones. Magnets allowed to get too near each other can strike each other with enough force to chip and shatter the brittle material, and the flying chips can cause injuries. Starting in 2005, powerful magnets breaking off toys or from magnetic construction sets started causing injuries and deaths. Young children who have swallowed several magnets have had a fold of the digestive tract pinched between the magnets, causing injury and in one case intestinal perforations, sepsis, and death. The swallowing of small magnets such as neodymium magnetic spheres can result in intestinal injury requiring surgery. The magnets attract each other through the walls of the stomach and intestine, perforating the bowel. The U.S. Centers for Disease Control reported 33 cases as of 2010 requiring surgery and one death. The magnets have been swallowed by both toddlers and teens (who were using the magnets to pretend to have tongue piercings). North America A voluntary standard for toys, permanently fusing strong magnets to prevent swallowing, and capping unconnected magnet strength, was adopted in 2007.
In 2009, a sudden growth in sales of magnetic desk toys for adults caused a surge in injuries, with emergency room visits estimated at 3,617 in 2012. In response, the U.S. Consumer Product Safety Commission passed a rule in 2012 restricting rare-earth magnet size in consumer products, but it was vacated by a US federal court decision in November 2016, in a case brought by the one remaining manufacturer. After the rule was nullified, the number of ingestion incidents in the country rose sharply, estimated to have exceeded 1,500 in 2019, leading the CPSC to advise that children under the age of 14 not use the magnets. In 2009, the US company Maxfield & Oberton, maker of Buckyballs, decided to repackage sphere magnets and sell them as toys. Buckyballs launched at the New York International Gift Fair in 2009 and sold in the hundreds of thousands before the U.S. Consumer Product Safety Commission issued a recall on packaging labeled 13+. According to the CPSC, 175,000 units had been sold to the public. Fewer than 50 were returned. Buckyballs labeled "Keep Away From All Children" were not recalled. Subsequently, Maxfield & Oberton changed all mentions of "toy" to "desk toy", positioning the product as a stress-reliever for adults, and restricted sales from stores that sold primarily children's products. In the United States, as a result of an estimated 2,900 emergency room visits between 2009 and 2013 due to either "ball-shaped" or "high-powered" magnets, or both, the U.S. Consumer Product Safety Commission (CPSC) has undergone rulemaking to attempt to restrict their sale. Further investigation by the CPSC published in 2012 found an increasing trend of magnet ingestion incidents in young children and teens since 2009. Incidents involving older children and teens were unintentional and the result of using the magnets to mimic body piercings such as tongue studs. The commission cited hidden complications if more than one magnet becomes attached across tissue inside the body. Another recall was issued for Buckyballs in 2012, along with similar products marketed as toys in the US. Recalls and administrative complaints were filed against other similar US companies. Maxfield & Oberton refused the recall and continued selling their desktop toys. The company launched a political campaign against the CPSC, and Craig Zucker, the company's co-founder, debated the safety commission on Fox News. In June 2012, following a letter by U.S. Senator Kirsten Gillibrand to U.S. Consumer Product Safety Commission Chairwoman Inez Tenenbaum, the United States Consumer Product Safety Commission filed administrative complaints, attempting to ban the sale of Buckyballs and Zen Magnets. Zen Magnets LLC was the first company ever to receive this sort of complaint without a record of injury. In November 2012, Buckyballs announced that it had stopped production due to a CPSC lawsuit. In March 2016, Zen Magnets (a manufacturer of neodymium magnet spheres) prevailed in a major court case, begun in 2014, concerning the danger posed by allegedly "defective" warning labels on their spherical magnets. It was decided by a DC court (CPSC Docket No: 12-2) that "Proper use of Zen Magnets and Neoballs creates no exposure to danger whatsoever."
As of January 2017, many brands of magnet spheres, including Zen Magnets, have resumed the sale of small neodymium magnet spheres following a successful appeal by Zen Magnets in the Tenth Circuit US Court of Appeals, which vacated the 2012 CPSC regulation banning these products and thereby rendered the sale of small neodymium magnets once again legal in the United States. It was the CPSC's first such loss in more than 30 years. A study published in the Journal of Pediatric Gastroenterology and Nutrition found a significant increase in magnet ingestions by children after 2017, including "a 5-fold increase in the escalation of care for multiple magnet ingestions". On June 3, 2020, CPSC staff submitted a "Petition Response Staff Briefing Package" to the commission, even though the petition had been rescinded. It outlines a plan to conduct research in 2021, with a suggested rule proposal to be put to a vote in 2022. As of 2019, manufacturers are working on a similar voluntary standard at the ASTM. On October 26, 2017, the CPSC filed an administrative complaint against Zen Magnets, alleging that the magnet sets contained product defects that created a substantial risk of injury to children, declaring that "It is illegal under federal law for any person to sell, offer for sale, manufacture, distribute in commerce, or import into the United States any Zen Magnets and Neoballs." Sales of "certain products with small, powerful magnets" have been prohibited in Canada since 2015. Oceania In November 2012, following an interim ban in New South Wales, a permanent ban on the sale of neodymium magnets went into effect throughout Australia. In January 2013, Consumer Affairs Minister Simon Bridges announced a ban on the import and sale of neodymium magnet sets in New Zealand, effective from January 24, 2013. Environmental impact The European Union's ETN-Demeter project (European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles) is examining sustainable design of electric motors used in vehicles. They are, for example, designing electric motors in which the magnets can be easily removed for recycling the rare earth metals. The European Union's European Research Council also awarded an Advanced Research Grant to Principal Investigator Prof. Thomas Zemb and co-Principal Investigator Dr. Jean-Christophe P. Gabriel for the project "Rare Earth Element reCYCling with Low harmful Emissions: REE-CYCLE", which aimed at finding new processes for the recycling of rare earths. Alternatives The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent-magnet technology and has begun funding such research. The Advanced Research Projects Agency-Energy (ARPA-E) has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program to develop alternative materials. In 2011, ARPA-E awarded $31.6 million to fund Rare-Earth Substitute projects. See also References Further reading Furlani, Edward P. (2001). Permanent Magnet and Electromechanical Devices: Materials, Analysis and Applications. Academic Press Series in Electromagnetism. Campbell, Peter (1996). Permanent Magnet Materials and their Application (Cambridge Studies in Magnetism). External links Standard Specifications for Permanent Magnet Materials (Magnetic Materials Producers Association) Ferromagnetic materials Loudspeaker technology Magnetic levitation Types of magnets
Rare-earth magnet
Physics
3,469
18,013,633
https://en.wikipedia.org/wiki/HD%2016004
HD 16004 is a blue-white hued star in the northern constellation of Andromeda. It is a challenge to see with the naked eye even under good viewing conditions, having an apparent visual magnitude of 6.26. Located approximately away from the Sun based on parallax, it is drifting closer with a heliocentric radial velocity of −7 km/s. This is a chemically peculiar mercury-manganese star with a stellar classification of . It is an estimated 162 million years old and is spinning with a projected rotational velocity of . The star is radiating 158 times the luminosity of the Sun from its photosphere at an effective temperature of . References B-type giants Mercury-manganese stars Andromeda (constellation) Durchmusterung objects 016004 012057 0746
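Distances quoted "based on parallax" in entries like this one follow from a one-line relation, d[pc] = 1000/p[mas]. A minimal sketch is below; the parallax value used is hypothetical, since the entry's own figure is elided, and 3.26156 ly/pc is the standard conversion.

```python
def parallax_to_distance(parallax_mas):
    """Distance from annual parallax: d[pc] = 1000 / p[mas]."""
    d_pc = 1000.0 / parallax_mas
    return d_pc, d_pc * 3.26156  # (parsecs, light-years)

# Hypothetical parallax for illustration; the entry's value is elided.
p_mas = 6.0  # milliarcseconds (assumed)
d_pc, d_ly = parallax_to_distance(p_mas)
print(f"p = {p_mas} mas -> d = {d_pc:.0f} pc = {d_ly:.0f} ly")
```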
HD 16004
Astronomy
167
64,552,953
https://en.wikipedia.org/wiki/Railways%20in%20Sardinia
The railway network of Sardinia includes lines totalling about 1,038 km in length, of which 430 km are ordinary gauge and about 608 km narrow gauge (950 mm), with an average density of 43 m of rail per km2, a figure that drops to 25 m/km2 considering only public transport lines. Railway operations on the island are managed by two companies. The first, the Ferrovie dello Stato Italiane group, manages the 4 ordinary gauge railway lines that make up the main network of the island through the subsidiaries RFI and Trenitalia. The remaining 4 sections active in public transport, all narrow gauge, constitute the secondary network, totalling 169 km and entirely managed by ARST S.p.A., a transport company wholly owned by the Autonomous Region of Sardinia. This company also controls 438 km of tourist lines, also narrow gauge, active especially in summer and at the request of groups of tourists. The Sardinian railway network is present in all provinces, even if there are areas without railways. There are also several railways (all narrow gauge) which over the decades have been closed and dismantled. History The construction of the Ferrovie Reali Immediately after the Unification of Italy, Sardinia found itself to be the only territory in the Kingdom without a railway network for public transport: the only lines present were in fact private railways for industrial use. In this regard, the first railway ever to enter into operation on the island was the one between the mine of San Leone and the pier of La Maddalena near Capoterra, a line opened to traffic in 1862. The lack of a public railway network led island politicians several times to request government intervention to grant this service also to Sardinia. After various doubts and objections from national politicians, in 1862 an Italian-English consortium, headed by cavalier Gaetano Semenza, obtained the concession for the construction of the network that would link Cagliari to Iglesias, Porto Torres and Terranova Pausania (in Olbia). The consortium formed the Royal Company of the Sardinian Railways in London, which, between studies of the routes, problems with conventions with the State and difficulties of various kinds, opened the first stretch of railway (Cagliari–Villasor) in April 1871. The construction of the planned lines, based on a project by the Welsh engineer Benjamin Piercy, ended in 1881, but in the meantime, for the traffic of passengers to the continent, it was decided to use the new maritime terminal of Golfo Aranci instead of that of Terranova. This made it necessary to build an extension of the railway, which joined the two Gallura ports in 1883. Sardinia finally had its own railways and, as of December 31, 1899, 30 steam locomotives, 106 carriages, 23 baggage cars and 436 wagons for freight service were operating on the Royal Railways. The connection of peripheral areas However, the layout of the Royal Railways network excluded various areas of the island from the possibility of using trains. In fact, many centers complained that they had been cut off from this very important progress in island transport because of their distance from the railway tracks. It was thus decided in 1885, with Law 3011 of March 22 of that year, to grant the possibility of building a secondary network that connected the more isolated centers with the main cities and with the network of the Royal Railways.
Given the specific request for economical construction, it was decided to use a 950 mm track gauge, which would also have helped the engineers in planning the routes in the inaccessible internal areas of Sardinia. The following year the works were entrusted to the "Italian Society for the Secondary Railways of Sardinia" (SFSS), which, building at a fast pace, inaugurated its first lines after only 17 months. In fact, already on February 15, 1888 the SFSS opened the Cagliari–Isili line and the line from Tempio Pausania to the SFSS station in Monti, which adjoined the Royal Railways station of the same name. By the end of the decade the Bosa–Macomer–Nuoro trunk line and the Sassari–Alghero line were also inaugurated, while from Isili the railway was extended towards Sorgono. Before the end of the century, the Mandas–Arbatax line and its Gairo–Jerzu branch were also inaugurated; in addition, another link connecting the main and secondary networks was opened to traffic, connecting the Tirso station on the Macomer–Nuoro line to the strategic junction of Chilivani. In all 590 km of railway track were built, and in many cases the works were completed well in advance, thanks also to the workers, who on average laid 300 meters of line per day. This figure is even more significant if we consider the morphology of the territories where the lines were built and the physical effort that the excavations and drilling in the rock required of the teams of workers. However, the project was not without criticism from many users, who contested the excessive distance of most of the stations from their respective villages, a choice linked to the exploitation of the internal forests of the island, whose valuable timber was transported by the new railway. Furthermore, the average speeds maintained by the SFSS trains, certainly not very high, created some discontent among the passengers. In 1898, meanwhile, the ordinary gauge network grew by 6 km, with the opening of the new stretch of track between Iglesias and its hamlet of Monteponi, strategically important for the transport of the minerals extracted in this location and in the surrounding area. From the beginning of the 20th century to WWII Despite the complaints, both networks fully achieved the purpose for which they were born, that is to encourage the transport of people and goods between the various areas of Sardinia, until then linked only by animal-drawn vehicles. The importance of the railway for the island can be seen from the duration of the trips, which were no longer measured in days, but in hours. In any case, the areas isolated from the railway were still several, and in the years immediately preceding the First World War the regional authorities asked the various mayors for proposals and advice for new railway lines. Among the recommended railways, many were rejected due to lack of funds or lack of economic viability. Other lines were planned, such as those of the Sulcis (whose connections at that time were ensured by road transport), but the war forced a postponement of the works, while some proposals for connections in the Sassari area were taken into consideration in the subsequent years. In those years the only railways to see the light were the Isili–Villacidro and its Villamar–Ales branch. The project for the construction of these lines was approved in 1912, and construction was entrusted to the "Society for the Complementary Railways of Sardinia" (FCS).
The project also included the use of 5 km of the Isili–Sorgono line, in the stretch between Isili and the Sarcidano station (where the new railway actually started), which led to the joint management of this portion of the line with the SFSS. The inauguration of the two lines dates back to June 21, 1915, and the first passengers on the trains were the soldiers leaving for the battlefields of the First World War. Network extension Over the years, the Sardinian railway network has remained largely unchanged in its routes: leaving aside the closure of some sections, the only changes to the routes have mainly concerned small and medium-sized variants and rectification works to speed up journeys. In any case, the total lack of electrification of the network and the tortuosity of the lines in certain areas mean that average train speeds are rather low compared to the rest of Italy, which in some cases has compromised the competitiveness of the railway with respect to bus lines. Commonly, the Sardinian network is divided into the main ordinary gauge network (that of the FS, managed through the subsidiary Rete Ferroviaria Italiana) and the secondary narrow gauge network (of the ARST). Closed lines Various lines closed after the Second World War and were subsequently dismantled, in almost all cases due to the choice of converting services to road transport, which was considered cheaper. The trackbeds and infrastructure works (bridges, tunnels) are however still present, and in more than one case the proposal has been made to recover these routes as cycle paths. References Transport systems Railway lines in Sardinia
Railways in Sardinia
Physics,Technology
1,706
555,004
https://en.wikipedia.org/wiki/Wingdings
Wingdings is a series of dingbat fonts that render letters as a variety of symbols. They were originally developed in 1990 by Microsoft by combining glyphs from Lucida Icons, Arrows, and Stars licensed from Charles Bigelow and Kris Holmes. Certain versions of the font's copyright string include attribution to Type Solutions, Inc., the maker of a tool used to hint the font. None of the characters were mapped to Unicode at the time; however, Unicode approved the addition of many symbols in the Wingdings and Webdings fonts in Unicode 7.0. Wingdings Wingdings is a TrueType dingbat font included in all versions of Microsoft Windows from version 3.1 until Windows Vista/Server 2008, and also in a number of application packages of that era. The Wingdings trademark is owned by Microsoft, and the design and glyph order was awarded U.S. Design Patent D341848 in 1993. The patent expired in 2005. In many other countries, a Design Patent would be called a registered design. It is registration of a design to deter imitation, rather than a claim of a novel invention. This font contains many largely recognized shapes and gestures as well as some recognized world symbols, such as the Star of David, the symbols of the zodiac, index or manicule signs, hand gestures, and obscure ampersands. Wingdings 2 Wingdings 2 is a TrueType font distributed with a variety of Microsoft applications, including Microsoft Office up to version 2010. The font was developed in 1990 by Type Solutions, Inc. The current copyright holder is Microsoft Corporation. Among the features of Wingdings 2 are 16 forms of the index, Enclosed Alphanumerics from 0 to 10, multiple forms of ampersand and interrobang, several geometric shapes and an asterism. Wingdings 3 Wingdings 3 is a TrueType dingbat font distributed with Microsoft Office (up to version 2010) and some other Microsoft products. The font was originally developed in 1990 by Type Solutions, Inc. Currently, the copyright holder is Microsoft Corporation. Wingdings 3 consists almost entirely of arrow variations and includes many symbols for keytops as defined in ISO/IEC 9995-7. Controversy In 1992, only days after the release of Windows 3.1, it was discovered that "NYC" (New York City) in Wingdings was rendered as a skull and crossbones symbol, a Star of David, and a thumbs-up gesture. This was often said to be an antisemitic message referencing New York's large Jewish community. Microsoft strongly denied this was intentional, and insisted that the final arrangement of the glyphs in the font was largely random. "NYC" in the later-released Webdings font was intentionally rendered as an eye, a heart, and a city skyline, referring to the I Love New York logo. After September 11, 2001, an email was circulated claiming that "Q33 NY", purported to be the flight number of the first plane to hit the Twin Towers, would in Wingdings bring up a character sequence of a plane flying into two rectangular paper sheet icons, which may be interpreted as skyscrapers, followed by the skull and crossbones symbol and the Star of David. This is a hoax; the flight numbers of the airplanes that hit the towers were AA11 and UA175; the tail numbers were N334AA and N612UA. In popular culture In the indie video game Undertale made by Toby Fox, a hidden character known as W. D. Gaster uses the Wingdings typeface to speak. Specifically, Gaster uses it in his Lab Entry #17 and on older versions of the Deltarune website.
In a Saturday Night Live sketch in April 2024, Ryan Gosling plays Stephen Wingdings, son of Wingdings' fictitious creator — Jonathan Wingdings — a dad who was always "hard to read." See also Dingbat Zapf Dingbats Webdings Unicode symbols Zodiac symbols Emoji References Symbol typefaces Typefaces and fonts introduced in 1990 Windows XP typefaces Typefaces designed by Charles Bigelow (type designer) Typefaces designed by Kris Holmes Antisemitism in the United States Computing-related controversies
Wingdings
Technology
883
2,832,880
https://en.wikipedia.org/wiki/Experimental%20testing%20of%20time%20dilation
Time dilation as predicted by special relativity is often verified by means of particle lifetime experiments. According to special relativity, the rate of a clock C traveling between two synchronized laboratory clocks A and B, as seen by a laboratory observer, is slowed relative to the laboratory clock rates. Since any periodic process can be considered a clock, the lifetimes of unstable particles such as muons must also be affected, so that moving muons should have a longer lifetime than resting ones. A variety of experiments confirming this effect have been performed both in the atmosphere and in particle accelerators. Another type of time dilation experiments is the group of Ives–Stilwell experiments measuring the relativistic Doppler effect. Atmospheric tests Theory The emergence of the muons is caused by the collision of cosmic rays with the upper atmosphere, after which the muons reach Earth. The probability that muons can reach the Earth depends on their half-life, which itself is modified by the relativistic corrections of two quantities: a) the mean lifetime of muons and b) the length between the upper and lower atmosphere (at Earth's surface). This allows for a direct application of length contraction upon the atmosphere at rest in inertial frame S, and time dilation upon the muons at rest in S′. Time dilation and length contraction Length of the atmosphere: The contraction formula is given by L = L0/γ, where L0 is the proper length of the atmosphere, L its contracted length, and γ = 1/√(1 − v²/c²) the Lorentz factor. As the atmosphere is at rest in S, we have γ=1 and its proper length L0 is measured. As it is in motion in S′, we have γ>1 and its contracted length L′ is measured. Decay time of muons: The time dilation formula is T = γT0, where T0 is the proper time of a clock comoving with the muon, corresponding with the mean decay time of the muon in its proper frame. As the muon is at rest in S′, we have γ=1 and its proper time T′0 is measured. As it is moving in S, we have γ>1, therefore its proper time is shorter with respect to time T. (For comparison's sake, another muon at rest on Earth can be considered, called muon-S. Therefore, its decay time in S is shorter than that of muon-S′, while it is longer in S′.) In S, muon-S′ has a longer decay time than muon-S. Therefore, muon-S′ has sufficient time to pass the proper length of the atmosphere in order to reach Earth. In S′, muon-S has a longer decay time than muon-S′. But this is no problem, since the atmosphere is contracted with respect to its proper length. Therefore, even the faster decay time of muon-S′ suffices for the moving atmosphere to be passed and Earth to be reached. Minkowski diagram The muon emerges at the origin (A) by collision of radiation with the upper atmosphere. The muon is at rest in S′, so its worldline is the ct′-axis. The upper atmosphere is at rest in S, so its worldline is the ct-axis. Upon the axes of x and x′, all events are present that are simultaneous with A in S and S′, respectively. The muon and Earth are meeting at D. As the Earth is at rest in S, its worldline (identical with the lower atmosphere) is drawn parallel to the ct-axis, until it intersects the axes of x′ and x. Time: The interval between two events present on the worldline of a single clock is called proper time, an important invariant of special relativity. As the origin of the muon at A and the encounter with Earth at D is on the muon's worldline, only a clock comoving with the muon and thus resting in S′ can indicate the proper time T′0=AD.
Due to its invariance, also in S it is agreed that this clock is indicating exactly that time between the events, and because it is in motion here, T′0=AD is shorter than time T indicated by clocks resting in S. This can be seen at the longer intervals T=BD=AE parallel to the ct-axis. Length: Event B, where the worldline of Earth intersects the x-axis, corresponds in S to the position of Earth simultaneous with the emergence of the muon. C, where the Earth's worldline intersects the x′-axis, corresponds in S′ to the position of Earth simultaneous with the emergence of the muon. Length L0=AB in S is longer than length L′=AC in S′. Experiments If no time dilation existed, those muons would decay in the upper regions of the atmosphere; however, as a consequence of time dilation they are present in considerable amounts also at much lower heights. The comparison of those amounts allows for the determination of the mean lifetime as well as the half-life of muons. Here N0 is the number of muons measured in the upper atmosphere, N the number measured at sea level, t is the travel time in the rest frame of the Earth by which the muons traverse the distance between those regions, and T0 is the mean proper lifetime of the muons: N = N0 · e^(−t/(γT0)). Rossi–Hall experiment In 1940 at Echo Lake (3240 m) and Denver in Colorado (1616 m), Bruno Rossi and D. B. Hall measured the relativistic decay of muons (which they thought were mesons). They measured muons in the atmosphere traveling above 0.99 c (c being the speed of light). Rossi and Hall confirmed the formulas for relativistic momentum and time dilation in a qualitative manner. Knowing the momentum and lifetime of moving muons enabled them to compute their mean proper lifetime too – they obtained ≈ 2.4 μs (modern experiments improved this result to ≈ 2.2 μs). Frisch–Smith experiment A much more precise experiment of this kind was conducted by David H. Frisch and James H. Smith (1962) and documented by a film. They measured approximately 563 muons per hour in six runs on Mount Washington at 1917 m above sea level. By measuring their kinetic energy, mean muon velocities between 0.995 c and 0.9954 c were determined. Another measurement was taken in Cambridge, Massachusetts at sea level. The time the muons need to travel from 1917 m to 0 m should be about 6.4 μs. Assuming a mean lifetime of 2.2 μs, only 27 muons would reach this location if there were no time dilation. However, approximately 412 muons per hour arrived in Cambridge, resulting in a time dilation factor of 8.8 ± 0.8. Frisch and Smith showed that this is in agreement with the predictions of special relativity: The time dilation factor for muons on Mount Washington traveling at 0.995 c to 0.9954 c is approximately 10.2. Their kinetic energy, and thus their velocity, was diminished by interaction with the atmosphere, falling to between 0.9881 c and 0.9897 c at Cambridge, reducing the dilation factor to 6.8. So between the start (≈ 10.2) and the target (≈ 6.8) an average time dilation factor of 8.4 ± 2 was determined by them, in agreement with the measured result within the margin of error (see the above formulas for computing the decay curves). Other experiments Since then, many measurements of the mean lifetime of muons in the atmosphere and time dilation have been conducted in undergraduate experiments.
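The Frisch–Smith numbers above can be checked with a few lines of code. The sketch below applies N = N0 · e^(−t/(γT0)) with and without the Lorentz factor; it deliberately ignores the muons' gradual slowing in the atmosphere, so the dilated prediction comes out slightly above the measured 412 per hour.

```python
import math

C = 299_792_458.0   # speed of light (m/s)
TAU0 = 2.2e-6       # muon mean proper lifetime (s)

def survivors(n0, distance_m, beta, dilated=True):
    """Expected muon count: N = N0 * exp(-t / (gamma * TAU0)),
    with gamma forced to 1 when time dilation is switched off."""
    t = distance_m / (beta * C)  # travel time in the Earth frame
    gamma = 1.0 / math.sqrt(1.0 - beta**2) if dilated else 1.0
    return n0 * math.exp(-t / (gamma * TAU0))

n0, height, beta = 563, 1917.0, 0.995  # Mount Washington data
print(f"without dilation: {survivors(n0, height, beta, False):5.1f} per hour")
print(f"with dilation:    {survivors(n0, height, beta, True):5.1f} per hour")
# Measured sea-level rate: approximately 412 per hour.
```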
Accelerator and atomic clock tests Time dilation and CPT symmetry Much more precise measurements of particle decays have been made in particle accelerators using muons and different types of particles. Besides the confirmation of time dilation, CPT symmetry was also confirmed by comparing the lifetimes of positive and negative particles. This symmetry requires that the decay rates of particles and their antiparticles have to be the same. A violation of CPT invariance would also lead to violations of Lorentz invariance and thus special relativity. Today, time dilation of particles is routinely confirmed in particle accelerators along with tests of relativistic energy and momentum, and its consideration is obligatory in the analysis of particle experiments at relativistic velocities. Twin paradox and moving clocks Bailey et al. (1977) measured the lifetime of positive and negative muons sent around a loop in the CERN Muon storage ring. This experiment confirmed both time dilation and the twin paradox, i.e. the hypothesis that clocks sent away and coming back to their initial position are slowed with respect to a resting clock. Other measurements of the twin paradox involve gravitational time dilation as well. In the Hafele–Keating experiment, actual cesium-beam atomic clocks were flown around the world and the expected differences were found compared to a stationary clock. Clock hypothesis - lack of effect of acceleration The clock hypothesis states that the extent of acceleration does not influence the value of time dilation. In most of the experiments mentioned above, the decaying particles were in an inertial frame, i.e. unaccelerated. However, in Bailey et al. (1977) the particles were subject to a transverse acceleration of up to ~10^18 g. Since the result was the same, it was shown that acceleration has no impact on time dilation. In addition, Roos et al. (1980) measured the decay of Sigma baryons, which were subject to a longitudinal acceleration between 0.5 and 5.0 × 10^15 g. Again, no deviation from ordinary time dilation was measured. See also Tests of special relativity References External links Time Dilation - An Experiment With Mu-Mesons Muon Paradox Bonizzoni, Ilaria; Giuliani, Giuseppe, The interpretations by experimenters of experiments on 'time dilation': 1940-1970 circa. Physics experiments Special relativity 1940 in science
Experimental testing of time dilation
Physics
2,064
302,493
https://en.wikipedia.org/wiki/1%2C1%2C1-Trichloroethane
The organic compound 1,1,1-trichloroethane, also known as methyl chloroform and chlorothene, is a chloroalkane with the chemical formula CH3CCl3. It is an isomer of 1,1,2-trichloroethane. A colourless and sweet-smelling liquid, it was once produced industrially in large quantities for use as a solvent. It is regulated by the Montreal Protocol as an ozone-depleting substance and as such use has declined since 1996. Trichloroethane should not be confused with the similar-sounding trichloroethene which is also commonly used as a solvent. Production 1,1,1-Trichloroethane was first reported by Henri Victor Regnault in 1840. Industrially, it is usually produced in a two-step process from vinyl chloride. In the first step, vinyl chloride reacts with hydrogen chloride at 20-50 °C to produce 1,1-dichloroethane: CH2=CHCl + HCl → CH3CHCl2 This reaction is catalyzed by a variety of Lewis acids, mainly aluminium chloride, iron(III) chloride, or zinc chloride. The 1,1-dichloroethane is then converted to 1,1,1-trichloroethane by reaction with chlorine under ultraviolet irradiation: CH3CHCl2 + Cl2 → CH3CCl3 + HCl This reaction proceeds at 80-90% yield, and the hydrogen chloride byproduct can be recycled to the first step in the process. The major side-product is the related compound 1,1,2-trichloroethane, from which the 1,1,1-trichloroethane can be separated by distillation. A somewhat smaller amount of 1,1,1-trichloroethane is produced from the reaction of 1,1-dichloroethene and hydrogen chloride in the presence of an iron(III) chloride catalyst: CH2=CCl2 + HCl → CH3CCl3 1,1,1-Trichloroethane is sold with stabilizers because it is unstable with respect to dehydrochlorination and attacks some metals. Stabilizers comprise up to 8% of the formulation, including acid scavengers (epoxides, amines) and complexants. Uses 1,1,1-Trichloroethane is an excellent solvent for many organic compounds and also one of the least toxic of the chlorinated hydrocarbons. It is generally considered non-polar, but owing to the good polarizability of the chlorine atoms, it is a superior solvent for organic compounds that do not dissolve well in hydrocarbons such as hexane. Prior to the Montreal Protocol, it was widely used for cleaning metal parts and circuit boards, as a photoresist solvent in the electronics industry, as an aerosol propellant, as a cutting fluid additive, and as a solvent for inks, paints, adhesives, and other coatings. 1,1,1-Trichloroethane was used to dry-clean leather and suede. 1,1,1-Trichloroethane is also used as an insecticidal fumigant. It was also the standard cleaner for photographic film (movie/slide/negatives, etc.). Other commonly available solvents damage emulsion and base (acetone will severely damage triacetate base on most films), and thus are not suitable for this application. The standard replacement, Forane 141, is much less effective, and tends to leave a residue. 1,1,1-Trichloroethane was used as a thinner in correction fluid products such as Liquid Paper. Many of its applications previously used carbon tetrachloride (which was banned in US consumer products in 1970). In turn, 1,1,1-trichloroethane itself is now being replaced by other solvents in the laboratory. Anaesthetic research 1,1,1-Trichloroethane was one of the volatile organochlorides that have been tried as alternatives to chloroform in anaesthesia. In the 1880s, it was found to be a safe and strong substitute for chloroform but its production was too expensive and difficult for the era.
In 1880, 1,1,1-trichloroethane was suggested as an anaesthetic. It was first referred to as "methyl-chloroform" in the same year. At the time, the narcotic effects of chloral hydrate were attributed to a hypothetical metabolic pathway to chloroform in "alkaline blood". Trichloroethane was studied for its structural similarity to chloral and its potential anaesthetic effects. However, trichloroethane did not exhibit any conversion to chloroform in laboratory experiments. The 1,1,2-trichloroethane isomer, which lacks a trichloromethyl group, exhibited anaesthetic effects even stronger than those of the 1,1,1 isomer. Safety Although not as toxic as many similar compounds, inhaled or ingested 1,1,1-trichloroethane does act as a central nervous system depressant and can cause effects similar to those of ethanol intoxication, including dizziness, confusion, and, in sufficiently high concentrations, unconsciousness and death. Fatal poisonings and illnesses linked to intentional inhalation of trichloroethane have been reported. Prolonged skin contact with the liquid can result in the removal of fats from the skin, resulting in skin irritation. The International Agency for Research on Cancer places 1,1,1-trichloroethane in Group 2A as a probable carcinogen. Atmospheric concentration 1,1,1-Trichloroethane is a fairly potent greenhouse gas with a 100-year global warming potential of 169 relative to carbon dioxide. This is nonetheless less than a tenth that of carbon tetrachloride, which it replaced as a solvent, due to its relatively short atmospheric lifetime of about 5 years (a simple consequence of this short lifetime is sketched after this entry). The Montreal Protocol targeted 1,1,1-trichloroethane as a compound responsible for ozone depletion and banned its use beginning in 1996. Since then, its manufacture and use have been phased out throughout most of the world, and its atmospheric concentration has declined substantially. References Further reading 5-HT3 agonists Chloroalkanes Dry cleaning Excipients GABAA receptor positive allosteric modulators Glycine receptor agonists Greenhouse gases Halogenated solvents Hazardous air pollutants Hypnotics Ozone-depleting chemical substances Sedatives
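To illustrate the atmospheric-lifetime figure quoted above: with first-order removal, an emitted pulse decays exponentially. This is a minimal sketch assuming a constant 5-year e-folding lifetime; the real atmospheric budget (OH oxidation, ocean and stratospheric losses) is modeled in more detail in the literature.

```python
import math

LIFETIME_YEARS = 5.0  # approximate atmospheric lifetime of CH3CCl3

def fraction_remaining(years):
    """Airborne fraction of an emitted pulse after `years`,
    assuming simple first-order (exponential) removal."""
    return math.exp(-years / LIFETIME_YEARS)

for t in (5, 10, 25):
    print(f"after {t:2d} years: {fraction_remaining(t):6.2%} remains")
```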
1,1,1-Trichloroethane
Chemistry,Biology,Environmental_science
1,425
48,578,727
https://en.wikipedia.org/wiki/Symmetrization%20methods
In mathematics the symmetrization methods are algorithms of transforming a set A ⊂ R^n to a ball B ⊂ R^n with equal volume and centered at the origin. B is called the symmetrized version of A, usually denoted A*. These algorithms show up in solving the classical isoperimetric inequality problem, which asks: Given all two-dimensional shapes of a given area, which of them has the minimal perimeter (for details see Isoperimetric inequality). The conjectured answer was the disk and Steiner in 1838 showed this to be true using the Steiner symmetrization method (described below). From this many other isoperimetric problems sprung and other symmetrization algorithms. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball (see Rayleigh–Faber–Krahn inequality for details). Another problem is that the Newtonian capacity of a set A is minimized by A*, and this was proved by Polya and G. Szego (1951) using circular symmetrization (described below). Symmetrization If A ⊂ R^n is measurable, then we denote by A* the symmetrized version of A, i.e. a ball A* ⊂ R^n such that vol(A*) = vol(A). We denote by f* the symmetric decreasing rearrangement of a nonnegative measurable function f and define it as f*(x) = ∫_0^∞ 1_{{f > t}*}(x) dt, where {f > t}* is the symmetrized version of the preimage set {f > t}. The methods described below have been proved to transform A to A*, i.e. given a sequence of symmetrization transformations {T_k} there is d_H(T_k ∘ ... ∘ T_1(A), A*) → 0, where d_H is the Hausdorff distance (for discussion and proofs see ) Steiner symmetrization Steiner symmetrization was introduced by Steiner (1838) to solve the isoperimetric theorem stated above. Let H ⊂ R^n be a hyperplane through the origin. Rotate space so that H is the x_n = 0 hyperplane (x_n is the nth coordinate in R^n). For each x ∈ H let the perpendicular line through x be L_x = {x + y·e_n : y ∈ R}. Then by replacing each section A ∩ L_x by a line centered at H and with length |A ∩ L_x| we obtain the Steiner symmetrized version St(A). We denote by St(f) the Steiner symmetrization with respect to the hyperplane H of a nonnegative measurable function f; for fixed x ∈ H it is defined as the symmetric decreasing rearrangement in y of the section y ↦ f(x + y·e_n). (A discrete sketch of the set version is given at the end of this entry.) Properties It preserves convexity: if A is convex, then St(A) is also convex. It is linear: St(λA) = λ·St(A). Super-additive: St(A) + St(B) ⊂ St(A + B). Circular symmetrization A popular method for symmetrization in the plane is Polya's circular symmetrization. After, its generalization will be described to higher dimensions. Let Ω ⊂ C be a domain; then its circular symmetrization Circ(Ω) with regard to the positive real axis is defined as follows: Let Ω_t = {θ ∈ [0, 2π] : t·e^{iθ} ∈ Ω}, i.e. the set of angles for which the points of radius t lie in Ω. So the symmetrization is defined by: If Ω_t is the full circle, then Circ(Ω) contains the full circle of radius t. If the length of Ω_t is m(Ω_t) = α, then Circ(Ω) contains the arc {t·e^{iθ} : |θ| < α/2}. 0 ∈ Circ(Ω) iff 0 ∈ Ω, and ∞ ∈ Circ(Ω) iff ∞ ∈ Ω. In higher dimensions R^n, its spherical symmetrization Sp(Ω) with respect to the positive axis of x_1 is defined as follows: Let Ω_r = {x ∈ S^{n−1} : r·x ∈ Ω}, i.e. the caps of radius r contained in Ω. Also, for the first coordinate let angle(x) = θ if x_1 = r·cos θ. So as above: If Ω_r is the full cap, then Sp(Ω) contains the full cap of radius r. If the surface area of Ω_r is ω(Ω_r) = α, then Sp(Ω) contains the cap {x : angle(x) < θ_α}, where θ_α is picked so that its surface area is α. In words, Sp(Ω) at radius r is a cap symmetric around the positive axis x_1 with the same area as the intersection Ω ∩ r·S^{n−1}. 0 ∈ Sp(Ω) iff 0 ∈ Ω, and ∞ ∈ Sp(Ω) iff ∞ ∈ Ω. Polarization Let Ω ⊂ R^n be a domain and H ⊂ R^n be a hyperplane through the origin, with positive halfspace H^+. Denote the reflection across that plane as σ_H, or just σ when it is clear from the context. Also, the reflection of Ω across the hyperplane H is defined as σΩ = {σx : x ∈ Ω}. Then, the polarized Ω is denoted as Ω^σ and defined as follows: If x ∈ Ω ∩ σΩ, then x ∈ Ω^σ. If x ∈ Ω \ σΩ and x ∈ H^+, then x ∈ Ω^σ. If x ∈ Ω \ σΩ and x ∈ H^−, then σx ∈ Ω^σ. In words, the part of Ω in the negative halfspace that cannot be matched by reflection is simply reflected to the halfspace H^+. It turns out that this transformation can approximate the above ones (in the Hausdorff distance) (see ). References Geometric inequalities Geometric algorithms
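As a concrete illustration of the Steiner symmetrization defined above, here is a minimal discrete sketch on a binary grid: each column's filled cells are counted and redrawn as one contiguous run centered on a chosen middle row, so the cell count (the discrete "volume") is preserved while the set becomes symmetric about that row. This is an illustration of the idea only, not an implementation from the cited literature.

```python
import numpy as np

def steiner_symmetrize(mask):
    """Discrete Steiner symmetrization about a horizontal axis:
    every column keeps its number of filled cells, but the cells are
    redrawn as one contiguous segment centered on the middle row."""
    rows, cols = mask.shape
    out = np.zeros_like(mask)
    mid = rows // 2
    for j in range(cols):
        n = int(mask[:, j].sum())   # measure of this column's section
        top = mid - n // 2          # center the segment on row `mid`
        out[top:top + n, j] = 1
    return out

# A lopsided blob: the area is preserved, the set becomes symmetric.
A = np.zeros((9, 9), dtype=int)
A[0:4, 2:7] = 1
A[4:7, 4:6] = 1
B = steiner_symmetrize(A)
assert A.sum() == B.sum()           # volume is invariant
```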
Symmetrization methods
Mathematics
778
54,613,107
https://en.wikipedia.org/wiki/Rigetti%20Computing
Rigetti Computing, Inc. is a Berkeley, California-based developer of quantum integrated circuits used for quantum computers. Rigetti also develops a cloud platform called Forest that enables programmers to write quantum algorithms. History Rigetti Computing was founded in 2013 by Chad Rigetti, a physicist who had worked on quantum computers at IBM and studied under Michel Devoret. The company emerged from startup incubator Y Combinator in 2014 as a so-called "spaceshot" company. Later that year, Rigetti also participated in The Alchemist Accelerator, a venture capital program. By February 2016, Rigetti created its first quantum processor, a three-qubit chip made using aluminum circuits on a silicon wafer. That same year, Rigetti raised Series A funding of US$24 million in a round led by Andreessen Horowitz. In November, the company secured Series B funding of $40 million in a round led by investment firm Vy Capital, along with additional funding from Andreessen Horowitz and other investors. Y Combinator also participated in both rounds. By spring of 2017, Rigetti had advanced to testing eight-qubit quantum computers. In June, the company announced the release of Forest 1.0, a quantum computing platform designed to enable developers to create quantum algorithms. In October 2021, Rigetti announced plans to go public via a SPAC merger, with an estimated valuation of around US$1.5 billion. The deal was expected to raise an additional US$458 million, bringing the total funding to US$658 million. The funds were intended to accelerate the company's growth, including scaling its quantum processors from 80 qubits to 1,000 qubits by 2024, and to 4,000 by 2026. The SPAC deal closed on 2 March 2022, and Rigetti began trading on the NASDAQ under the ticker symbol RGTI. In December 2022, Subodh Kulkarni became president and CEO of the company. In July 2023 Rigetti launched a single-chip 84-qubit quantum processor that can scale to even larger systems. Products and technology Rigetti Computing is a full-stack quantum computing company, a term that indicates that the company designs and fabricates quantum chips, integrates them with a controlling architecture, and develops software for programmers to use to build algorithms for the chips. Forest cloud computing platform The company hosts a cloud computing platform called Forest, which gives developers access to quantum processors so they can write quantum algorithms for testing purposes. The computing platform is based on a custom instruction language the company developed called Quil, which stands for Quantum Instruction Language. Quil facilitates hybrid quantum/classical computing, and programs can be built and executed using open source Python tools. As of June 2017, the platform allowed coders to write quantum algorithms for a simulation of a quantum chip with 36 qubits. Fab-1 The company operates a rapid prototyping fabrication ("fab") lab called Fab-1, designed to quickly create integrated circuits. Lab engineers design and generate experimental designs for 3D-integrated quantum circuits for qubit-based quantum hardware. Recognition The company was recognized in 2016 by X-Prize founder Peter Diamandis as being one of the three leaders in the quantum computing space, along with IBM and Google. MIT Technology Review named the company one of the 50 smartest companies of 2017.
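For flavor, here is what a small program looks like in pyQuil, the open-source Python library for building Quil programs mentioned in the Forest section above. This sketch follows the pyQuil 2.x style of API (Program, gate constructors, get_qc against a local QVM simulator); the exact calls have changed across releases, so treat it as illustrative rather than a current reference.

```python
# Requires `pip install pyquil` plus a running local QVM/quilc (pyQuil 2.x era).
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

prog = Program()
ro = prog.declare("ro", "BIT", 2)   # classical registers for readout
prog += H(0)                        # put qubit 0 in superposition
prog += CNOT(0, 1)                  # entangle qubits 0 and 1
prog += MEASURE(0, ro[0])
prog += MEASURE(1, ro[1])
prog.wrap_in_numshots_loop(100)     # repeat the circuit 100 times

qc = get_qc("2q-qvm")               # 2-qubit simulated backend
bitstrings = qc.run(qc.compile(prog))
# Expect roughly 50/50 correlated outcomes: 00 and 11.
```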
Locations Rigetti Computing is headquartered in Berkeley, California, where it hosts developmental systems and cooling equipment. The company also operates its Fab-1 manufacturing facility in nearby Fremont. See also D-Wave Systems References External links Computer companies of the United States Computer hardware companies Quantum information science Companies involved in quantum computing Companies based in Berkeley, California American companies established in 2013 Computer companies established in 2013 2013 establishments in California Companies listed on the Nasdaq Special-purpose acquisition companies
Rigetti Computing
Technology
795
14,881,646
https://en.wikipedia.org/wiki/CACNA2D1
Voltage-dependent calcium channel subunit alpha-2/delta-1 is a protein that in humans is encoded by the CACNA2D1 gene. This gene encodes a member of the alpha-2/delta subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane depolarization and consist of a complex of alpha-1, alpha-2/delta, beta, and gamma subunits in a 1:1:1:1 ratio. Research on a highly similar protein in rabbit suggests that this protein is cleaved into alpha-2 and delta subunits. Alternate transcriptional splice variants of this gene have been observed, but have not been thoroughly characterized. In mammals, alpha-2/delta proteins exist in four subtypes coded by four separate but closely related genes, CACNA2D1, CACNA2D2, CACNA2D3 and CACNA2D4. More recently, alpha-2/delta-1 proteins have been found to interact directly not only with calcium channels but also with N-methyl-D-aspartate type glutamate receptors (NMDAR), AMPA type glutamate receptors (AMPAR) and the extracellular adhesion protein thrombospondin. Gabapentinoids Alpha-2/delta proteins are believed to be the molecular target of the gabapentinoids gabapentin and pregabalin, which are used to treat epilepsy and neuropathic pain. Only alpha-2/delta subtypes 1 and 2 (but not 3 and 4) are substrates for gabapentinoid drug binding. See also Voltage-dependent calcium channel Gabapentinoid drugs References Further reading External links Ion channels
CACNA2D1
Chemistry
379
13,726,331
https://en.wikipedia.org/wiki/Inertia%20wheel%20pendulum
An inertia wheel pendulum is a pendulum with an inertia wheel attached. It can be used as a pedagogical problem in control theory. This type of pendulum is often confused with the gyroscopic effect, which has a completely different physical nature. See also Inverted pendulum Robotic unicycle Spinning top References Mark W. Spong, Peter Corke, Rogelio Lozano. Nonlinear Control of the Gyroscopic Pendulum. Pendulums Control engineering
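Because the entry is terse, a sketch of the system's dynamics may help. One common formulation from the reaction/inertia wheel pendulum literature (e.g. the Spong, Corke and Lozano line of work cited above) couples the pendulum angle q1 and wheel angle q2 through the wheel torque u; the parameter values below are illustrative assumptions, not taken from any reference.

```python
import math

# Illustrative parameters (assumed): inertias in kg*m^2, gravity moment in N*m.
J1 = 0.05    # total inertia about the pivot (pendulum body + wheel)
J2 = 0.005   # wheel inertia about its own spin axis
MGL = 0.5    # combined gravity moment m*g*l

def step(q1, w1, w2, u, dt=1e-3):
    """One Euler step of:  J1*q1'' + J2*q2'' = MGL*sin(q1)  (pendulum)
                            J2*(q1'' + q2'') = u            (wheel),
    with q1 measured from the upright position."""
    a1 = (MGL * math.sin(q1) - u) / (J1 - J2)   # after eliminating q2''
    a2 = u / J2 - a1
    return q1 + dt * w1, w1 + dt * a1, w2 + dt * a2

# With zero wheel torque, a small tilt from upright grows (unstable).
q1, w1, w2 = 0.05, 0.0, 0.0
for _ in range(500):                 # simulate 0.5 s
    q1, w1, w2 = step(q1, w1, w2, u=0.0)
print(f"tilt after 0.5 s: {q1:.3f} rad")
```

A stabilizing controller would choose u from the measured state (q1, w1, w2), which is exactly what makes the system a standard control-theory exercise.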
Inertia wheel pendulum
Physics,Engineering
98
27,554,979
https://en.wikipedia.org/wiki/Cryosuction
Cryosuction is the development of negative pressure in freezing liquids, which draws more liquid into the freezing zone. In soil, the transformation of liquid water to ice in the soil pores causes water to migrate through soil pores to the freezing zone through capillary action. History of discovery In 1930, Stephen Taber demonstrated that liquid water migrates towards the freeze line within soil. He showed that other liquids, such as benzene, which contracts when it freezes, also produce frost heave. Fine-grained soils such as clays and silts enable greater negative pressures than more coarse-grained soils due to the smaller pore size. In periglacial environments, this mechanism is highly significant and it is the predominant process in ice lens formation in permafrost areas. As of 2001, several models for ice-lens formation by cryosuction existed, among others the hydrodynamic model and the premelting model, many of them based on the Clausius–Clapeyron relation with various assumptions, yielding cryosuction potentials of 11 to 12 atm per degree Celsius below zero depending on pore size. In 2023, experiments from ETH Zurich were published in which the process could be observed between glass slides in a confocal microscope. In single-crystal experiments the rate of ice growth was slow, but with polycrystalline ice there were many more channels to suck in water and grow ice. How solutes in the water influence cryosuction is still unexplored. See also Pore water pressure Suction References External links Cryosuction Cryosphere glossary, National Snow and Ice Data Center, Canada, accessed 22 November 2023 Geography of the Arctic Geomorphology Hydrology
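The quoted figure of 11 to 12 atm per degree of undercooling can be reproduced from the Clausius–Clapeyron relation in its simplest form, dP ≈ ρw·Lf·ΔT/Tm, neglecting pore-size and curvature effects. A minimal sketch:

```python
RHO_W = 1000.0     # density of water (kg/m^3)
L_F = 3.34e5       # latent heat of fusion of ice (J/kg)
T_M = 273.15       # melting temperature (K)
ATM = 101_325.0    # pascals per standard atmosphere

def cryosuction_pa(undercooling_k):
    """Idealized suction from Clausius-Clapeyron: dP = rho*L*dT/Tm
    (no pore-size or curvature corrections)."""
    return RHO_W * L_F * undercooling_k / T_M

print(f"{cryosuction_pa(1.0) / ATM:.1f} atm per K")  # about 12 atm per K
```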
Cryosuction
Chemistry,Engineering,Environmental_science
368
70,691,095
https://en.wikipedia.org/wiki/HD%2011025
HD 11025 (HR 525) is a suspected astrometric binary in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.67, making it visible to the naked eye if viewed under ideal conditions. Located 378 light years away, it is receding with a heliocentric radial velocity of . The visible component is a yellow giant of spectral class G8 III. It has 2.61 times the mass of the Sun and, at an age of 500 million years, has expanded to 9.52 times the radius of the Sun. It shines at from its enlarged photosphere at an effective temperature of , giving it a yellow glow. HD 11025 has a solar metallicity and spins with a moderate projected rotational velocity of . References Octans G-type giants 011025 007568 0525 PD-85 17 Octantis, 4
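Luminosity figures like the elided one here follow from the Stefan–Boltzmann law, L = 4πR²σT⁴. The sketch below uses the entry's radius of 9.52 solar radii together with an assumed effective temperature of 5,000 K (a typical value for a G8 III giant; the entry's own figure is elided).

```python
import math

SIGMA = 5.670374e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
R_SUN = 6.957e8       # solar radius (m)
L_SUN = 3.828e26      # solar luminosity (W)

def luminosity_lsun(radius_rsun, t_eff_k):
    """Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4, in solar units."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r**2 * SIGMA * t_eff_k**4 / L_SUN

# 9.52 R_sun from the entry; 5000 K is an assumed temperature.
print(f"L = {luminosity_lsun(9.52, 5000):.0f} L_sun")
```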
HD 11025
Astronomy
183
475,276
https://en.wikipedia.org/wiki/Downforce
Downforce is a downwards lift force created by the aerodynamic features of a vehicle. If the vehicle is a car, the purpose of downforce is to allow the car to travel faster by increasing the vertical force on the tires, thus creating more grip. If the vehicle is a fixed-wing aircraft, the purpose of the downforce on the horizontal stabilizer is to maintain longitudinal stability and allow the pilot to control the aircraft in pitch. Fundamental principles The same principle that allows an airplane to rise off the ground by creating lift from its wings is used in reverse to apply force that presses the race car against the surface of the track. This effect is referred to as "aerodynamic grip" and is distinguished from "mechanical grip", which is a function of the car's mass, tires, and suspension. The creation of downforce by passive devices can be achieved only at the cost of increased aerodynamic drag (or friction), and the optimum setup is almost always a compromise between the two. The aerodynamic setup for a car can vary considerably between race tracks, depending on the length of the straights and the types of corners. Because it is a function of the flow of air over and under the car, downforce increases with the square of the car's speed and requires a certain minimum speed in order to produce a significant effect. Some cars have had rather unstable aerodynamics, such that a minor change in angle of attack or height of the vehicle can cause large changes in downforce. In the very worst cases this can cause the car to experience lift, not downforce; for example, by passing over a bump on a track or slipstreaming over a crest: this could have some disastrous consequences, such as Mark Webber's and Peter Dumbreck's Mercedes-Benz CLR in the 1999 24 Hours of Le Mans, which flipped spectacularly after closely following a competitor car over a hump. Two primary components of a racing car can be used to create downforce when the car is travelling at racing speed: the shape of the body, and the use of airfoils. Most racing formulae have a ban on aerodynamic devices that can be adjusted during a race, except during pit stops. The downforce exerted by a wing is usually expressed as a function of its lift coefficient: F = (1/2) CL ρ v² A, where: F is downforce (SI unit: newtons) CL is the lift coefficient ρ is air density (SI unit: kg/m3) v is velocity (SI unit: m/s) A is the area of the wing (SI unit: meters squared), which depends on its wingspan and chord if using top wing area basis for CL, or the wingspan and thickness of the wing if using frontal area basis In certain ranges of operating conditions and when the wing is not stalled, the lift coefficient has a constant value: the downforce is then proportional to the square of airspeed. In aerodynamics, it is usual to use the top-view projected area of the wing as a reference surface to define the lift coefficient.
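A quick numerical reading of this equation, with an assumed wing (CL = 2.5 and A = 1 m², values chosen purely for illustration rather than taken from any particular car), shows the quadratic growth with speed:

```python
def downforce(cl, rho, v, area):
    """Aerodynamic downforce F = 0.5 * C_L * rho * v^2 * A (newtons)."""
    return 0.5 * cl * rho * v**2 * area

rho = 1.225          # air density at sea level (kg/m^3)
for kph in (100, 200, 300):
    v = kph / 3.6    # convert km/h to m/s
    f = downforce(cl=2.5, rho=rho, v=v, area=1.0)
    print(f"{kph} km/h -> {f:,.0f} N")
```

Doubling the speed from 100 to 200 km/h quadruples the force, which is why aerodynamic grip only matters above a certain minimum speed.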
Body The rounded and tapered shape of the top of a car is designed to slice through the air and minimize wind resistance. Detailed pieces of bodywork on top of the car can be added to allow a smooth flow of air to reach the downforce-creating elements (e.g., wings or spoilers, and underbody tunnels). The overall shape of a car resembles an airplane wing. Almost all road cars produce aerodynamic lift as a result of this shape. There are many techniques that are used to counterbalance this lift. Looking at the profile of most road cars, the front bumper has the lowest ground clearance, followed by the section between the front and rear tires, and followed by the rear bumper, usually with the highest clearance. Using this layout, the air flowing under the front bumper is constricted to a lower cross-sectional area and thus achieves a lower pressure. Additional downforce comes from the rake (or angle) of the vehicle's body, which directs the underside air up and creates a downward force, increasing the pressure on top of the car because the airflow direction comes closer to perpendicular to the surface. Volume does not affect the air pressure because it is not an enclosed volume, despite the common misconception. Race cars amplify this effect by adding a rear diffuser to accelerate air under the car in front of the diffuser and raise the air pressure behind it, lessening the car's wake. Other aerodynamic components that can be found on the underside to improve downforce and/or reduce drag include splitters and vortex generators. Some cars, such as the DeltaWing, do not have wings, and generate all of their downforce through their body. Airfoils The magnitude of the downforce created by the wings or spoilers on a car is dependent primarily on three things: The shape, including surface area, aspect ratio and cross-section of the device, The device's orientation (or angle of attack), and The speed of the vehicle. A larger surface area creates greater downforce and greater drag. The aspect ratio is the width of the airfoil divided by its chord. If the wing is not rectangular, the aspect ratio is written AR = b²/s, where AR = aspect ratio, b = span, and s = wing area. Also, a greater angle of attack (or tilt) of the wing or spoiler creates more downforce, which puts more pressure on the rear wheels and creates more drag. Front The function of the airfoils at the front of the car is twofold. They create downforce that enhances the grip of the front tires, while also optimizing (or minimizing disturbance to) the flow of air to the rest of the car. The front wings on an open-wheeled car undergo constant modification as data is gathered from race to race, and are customized for every characteristic of a particular circuit. In most series, the wings are even designed for adjustment during the race itself when the car is serviced. Rear The flow of air at the rear of the car is affected by the front wings, front wheels, mirrors, driver's helmet, side pods and exhaust. This causes the rear wing to be less aerodynamically efficient than the front wing. Yet, because it must generate more than twice as much downforce as the front wings in order to maintain the handling balance of the car, the rear wing typically has a much larger aspect ratio, and often uses two or more elements to compound the amount of downforce created. Like the front wings, each of these elements can often be adjusted when the car is serviced, before or even during a race, and they are the object of constant attention and modification. Wings in unusual places Partly as a consequence of rules aimed at reducing downforce from the front and rear wings of F1 cars, several teams have sought to find other places to position wings. Small wings mounted on the rear of the cars' sidepods began to appear in mid-1994, and were virtually standard on all F1 cars in one form or another, until all such devices were outlawed in 2009.
Front

The function of the airfoils at the front of the car is twofold. They create downforce that enhances the grip of the front tires, while also optimizing (or minimizing disturbance to) the flow of air to the rest of the car. The front wings on an open-wheeled car undergo constant modification as data are gathered from race to race, and are customized for the characteristics of a particular circuit. In most series, the wings are even designed to be adjusted during the race itself, when the car is serviced.

Rear

The flow of air at the rear of the car is affected by the front wings, front wheels, mirrors, driver's helmet, side pods and exhaust, which makes the rear wing less aerodynamically efficient than the front wing. Yet because it must generate more than twice as much downforce as the front wings in order to keep the car's handling balanced, the rear wing typically has a much larger aspect ratio and often uses two or more elements to compound the amount of downforce created. Like the front wings, each of these elements can often be adjusted when the car is serviced, before or even during a race, and they are the object of constant attention and modification.

Wings in unusual places

Partly as a consequence of rules aimed at reducing downforce from the front and rear wings of F1 cars, several teams have sought other places to position wings. Small wings mounted on the rear of the cars' sidepods began to appear in mid-1994 and were virtually standard on all F1 cars in one form or another until all such devices were outlawed in 2009.

Other wings have sprung up in various other places on the car, but these modifications are usually used only at circuits where downforce is most sought, particularly the twisting Hungaroring and Monaco circuits. The 1995 McLaren Mercedes MP4/10 was one of the first cars to feature a "midwing", using a loophole in the regulations to mount a wing on top of the engine cover. This arrangement has since been used by every team on the grid at one time or another, and in the 2007 Monaco Grand Prix all but two teams used it. These midwings should not be confused with the roll-hoop-mounted cameras that each car carries as standard in all races, nor with the bull-horn-shaped flow controllers first used by McLaren and later by BMW Sauber, whose primary function is to smooth and redirect the airflow in order to make the rear wing more effective rather than to generate downforce themselves.

A variation on this theme was the "X-wings": high wings mounted on the front of the sidepods, exploiting a loophole similar to that used for midwings. They were first used by Tyrrell in 1997 and last used at the 1998 San Marino Grand Prix, by which time Ferrari, Sauber, Jordan and others had adopted the arrangement. They were banned in view of the obstruction they caused during refueling and the risk they posed to the driver should a car roll over. Various other extra wings have been tried from time to time, but nowadays it is more common for teams to seek to improve the performance of the front and rear wings by using flow controllers such as the aforementioned "bull-horns" pioneered by McLaren.

See also
Bernoulli's principle
Body kit
Formula One car
Grip (auto racing)
Ground effect in cars
Lift (force)
Wind tunnel

Further reading
Simon McBeath, Competition Car Downforce: A Practical Handbook, SAE International, 2000
Simon McBeath, Competition Car Aerodynamics, Sparkford, Haynes, 2006
Enrico Benzing, Ali / Wings. Progettazione e applicazione su auto da corsa. Their design and application to racing car, Milano, Nada, 2012. Bilingual (Italian-English)

References

External links
Aerodynamics In Car Racing

Aerodynamics Motorsport terminology Vehicle dynamics
Downforce
Chemistry,Engineering
1,936
28,277,542
https://en.wikipedia.org/wiki/GAF%20domain
The GAF domain is a type of protein domain that is found in a wide range of proteins from all species. The GAF domain is named after some of the proteins it is found in: cGMP-specific phosphodiesterases, adenylyl cyclases and FhlA. The first structure of a GAF domain, solved by Ho and colleagues, showed that this domain shared a similar fold with the PAS domain. In mammals, GAF domains are found in five members of the cyclic nucleotide phosphodiesterase superfamily: PDE2, PDE5 and PDE6, which bind cGMP at the GAF domain; PDE10, which binds cAMP; and PDE11, which binds both cGMP and cAMP.

Examples

Human proteins containing this domain include: PDE2A, PDE5A, PDE6A, PDE6B, PDE6C, PDE10A, PDE11A

References

Protein domains
GAF domain
Biology
204
17,111,115
https://en.wikipedia.org/wiki/Ceramic%20molding
Ceramic molding is a versatile and precise manufacturing process that transforms clay or porcelain into intricate shapes. Employing techniques such as slip casting or press molding, artisans create precise replicas of original models. After molding, the ceramics are fired at high temperatures, ensuring durability and aesthetic appeal. The method is favored for producing intricate pottery, decorative tiles, and even complex industrial components. With its ability to capture fine detail and yield consistent results, ceramic molding remains a cornerstone of both artistic and functional ceramic production.

History & Archaeology

Ceramic molding is an ancient practice, dating back thousands of years, that emerged after humanity's discovery of fire. Experimentation with clay and fire marked the inception of the technique now known as ceramic molding, or pottery. Archaeologists have unearthed many types of pottery, each closely connected to the historical context of the site where it was found. Through meticulous analysis of such artifacts, historians can determine their age, enabling accurate estimates of when associated historical events took place.

Process

1. Pattern creation: craft the pattern from a material such as plastic, wood, or metal that can withstand extreme temperatures.
2. Binder preparation: combine the base mix with a binder to form the basis of the molding slurry.
3. Refractory ceramic powder addition: add a portion of refractory ceramic powder to enhance the molding mixture.
4. Gelling agent incorporation: introduce a specialized gelling agent into the binder, ensuring thorough mixing.
5. Slurry placement: place the slurry mixture into the pattern, forming the desired shape of the ceramic mold.
6. High-temperature heating: subject the slurry-filled pattern to high temperatures, allowing the ceramic mold to cure and take shape.
7. Cooling: cool the molded slurry to finalize the ceramic casting process.

See also
Ceramic forming techniques
Ceramic mold casting

Sources
https://web.archive.org/web/20110717162640/http://www.unicastdev.com/process.htm
http://www.freepatentsonline.com/5266252.html

Ceramic materials Pottery
Ceramic molding
Engineering
461
3,490,877
https://en.wikipedia.org/wiki/Meat%20thermometer
A meat thermometer or cooking thermometer is a thermometer used to measure the internal temperature of meat, especially roasts and steaks, and of other cooked foods. The degree of "doneness" of meat or bread correlates closely with its internal temperature, so a thermometer reading indicates when it is cooked as desired. Food should always be cooked until its interior reaches a sufficient temperature: in the case of meat, one high enough to kill pathogens that may cause foodborne illness, and in the case of bread, one showing that baking is complete. The thermometer helps to ensure this.

Characteristics

A meat thermometer is a device that measures the core temperature of meat while it cooks. It has a metal probe with a sharp point that is pushed into the meat, and a dial or digital display. Some show the temperature only; others also carry markings indicating when different kinds of meat are done to a specified degree (e.g., "beef medium rare"). Meat thermometers are usually designed to keep the probe in the meat during cooking. Some use a bimetallic strip that rotates a needle over a dial; the whole thermometer can be left inside the oven during cooking. Another variety, commonly used on turkey, is the pop-up timer, which uses a spring held in place by a soft material that releases ("pops up") when the meat reaches a set temperature. Bimetal coil thermometers and pop-up devices are the least reliable types of meat thermometer and should not be trusted on their own.

Other types use an electronic sensor in the probe, connected by a flexible heat-resistant cable to a display. The probe is inserted into the meat, and the cable passes out of the oven (oven seals are flexible enough to allow this without damage) to the display. These types can be set to sound an alarm when a specified temperature is reached. Wireless models, in which the display does not have to be close to the oven, are also available.

Models

Meat thermometers come in different models, such as single-probe and multi-probe. Single-probe models are usually the cheapest, but they can monitor only one section of the meat, so the probe must be inserted into different places to check the whole piece. Multi-probe models are more expensive and can usually connect 2 to 8 probes; more probes allow the temperature of the whole piece of meat to be monitored more accurately.

Use

The probe can be inserted into the meat before cooking starts and cooking continued until the desired internal temperature is reached. Alternatively, the meat can be cooked for a certain time, taken out of the oven, and its temperature checked before serving. The tip of the probe should be in the thickest part of the meat, but not touching bone, which conducts heat and gives an overestimate of the meat temperature.

Poultry

For poultry, insert the meat thermometer into the thigh without touching the bone. The suggested temperature for poultry to reach before it is safe to consume is 74 °C (165 °F); if the poultry is stuffed, the temperature in the center of the stuffing should also be about 74 °C (165 °F).

Beef, lamb, or veal

For beef, lamb, or veal, insert the meat thermometer away from bone, fat, or cartilage. The meat should reach a temperature of between 63 °C (145 °F) for medium-rare and 77 °C (170 °F) for well done.

Pork

Pork needs to reach 71 °C (160 °F), and the same rules for placing the thermometer apply as for beef, lamb, or veal.
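The target temperatures quoted above lend themselves to a simple lookup table. The following minimal Python sketch is a hypothetical helper, not any standard API; it simply encodes the values given in this article for a probe reading in degrees Celsius.

# Minimal doneness check using the internal temperatures quoted in this
# article (degrees Celsius). The function and table are illustrative only.

SAFE_INTERNAL_C = {
    "poultry": 74,           # 165 °F; also the target for the stuffing center
    "beef_medium_rare": 63,  # 145 °F
    "beef_well_done": 77,    # 170 °F
    "pork": 71,              # 160 °F
}

def is_done(food: str, probe_reading_c: float) -> bool:
    """True if the probe reading meets the article's target temperature."""
    return probe_reading_c >= SAFE_INTERNAL_C[food]

print(is_done("poultry", 72.5))  # False: keep cooking
print(is_done("pork", 71.0))     # True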
Ground meat

For ground meat, insert the digital food thermometer into the thickest part of the piece. For hamburgers, insert the thermometer probe through the side of the patty, all the way to the middle. Make sure to check each piece of meat or patty, because heating can be uneven. The temperature should be 71 °C (160 °F) for beef, lamb, veal, or pork and 74 °C (165 °F) for poultry.

Casseroles and eggs

For casseroles and eggs, insert the thermometer into the thickest area. The temperature should be 71 °C (160 °F) for casseroles and 74 °C (165 °F) for eggs.

Seafood

For fish, the temperature should be 70 °C (158 °F). For shellfish (for example, shrimp, lobster, crab, scallops, clams, mussels and oysters), the temperature should be 74 °C (165 °F).

Other cooking thermometers
Candy thermometer

References

External links

Food thermometers Food preparation utensils Thermometers Cooking thermometers
Meat thermometer
Technology,Engineering
996
11,548,174
https://en.wikipedia.org/wiki/Ustilaginoidea%20virens
Ustilaginoidea virens, whose perfect (sexual) stage is Villosiclava virens, is a plant pathogen that causes the disease "false smut" of rice, which reduces both grain yield and grain quality. The disease occurs in more than 40 countries, especially in the rice-producing countries of Asia, but also in the U.S. Despite the common name, it is not a true smut fungus but an ascomycete. False smut does not replace all or part of the kernel with a mass of black spores; rather, sori erupt through the palea and lemma, forming a ball of mycelium whose outermost layers are spore-producing. Infected rice kernels are always destroyed by the disease. Of particular concern is the production of alkaloids in the grain, as with the Claviceps species that cause ergot; however, U. virens is not a Claviceps (ergot) fungus, is not known to cause ergotism, and lacks the enzymes necessary for ergot synthesis. Although U. virens does infect maize, it rarely does so, and it does not produce significant disease or economic consequences in that crop.

Disease cycle

U. virens has a peculiar life cycle. White hyphae are produced by the fungus after initial infection of the floral organs of the rice crop. As the infection matures, darker brownish-green chlamydospores are produced on the rice spikelets. Additionally, sclerotia can be present towards the end of the fall season. During its life cycle, U. virens undergoes a sexual (ascospore) stage as well as an asexual (chlamydospore) stage. The chlamydospores are the main survival structure, and they can persist in the soil for up to four months. The additional formation of sclerotia allows U. virens to survive even longer, up to almost a year. These sclerotia, which can be present either on or below the surface of the soil, mature to form an ascocarp (fruiting body). The ascospores from these fruiting bodies act as the primary source of infection that spreads the disease throughout the paddy field.

Infection

The rice false smut pathogen, Ustilaginoidea virens, invades through a small gap at the apex of a rice spikelet before heading. The primary source of infection is the presence of chlamydospores in the soil. During the vegetative stage of the rice crop's growth, the fungus colonizes the tissue at the growing points of the tillers. Conidia deposited on the spikelets of the rice crop germinate into hyphae, and the resulting mycelia invade the floral organs within the spikelets.

Disease control

The rice false smut pathogen causes mostly qualitative damage to the rice crop. Removal of the brown "smut balls" is important to maintain the visual integrity of the harvested crop. Additionally, certain steps can be taken to manage or prevent the disease. Most rice varieties are susceptible to it; some cultivars provide a small amount of resistance against U. virens, but there are still no known resistant cultivars, the best available being only of "moderate" susceptibility. Planting rice earlier in the season can also reduce the amount of disease caused by false smut: in some studies, rice planted in April showed much less false smut than rice planted after 15 May. As is the case for most rice diseases, large amounts of fertilizer in the soil lead to an increase in disease; maintaining the nitrogen rate in the soil below 160 pounds per acre has proven most effective at limiting it.
Although there are no specific fungicide recommendations for the eradication of the false smut pathogen of rice, Cartwright reported propiconazole to be the most effective active ingredient after studying it for over three years.

References

Further reading
"False Smut" – Oña et al. – http://www.knowledgebank.irri.org/training/fact-sheets/pest-management/diseases/item/false-smut
Specific adaptation of Ustilaginoidea virens in occupying host florets revealed by comparative and functional genomics – Zhang et al., 2014
Rice false smut pathogen, Ustilaginoidea virens, invades through small gap at the apex of a rice spikelet before heading – Ashizawa et al., 2012
Quantification of the rice false smut pathogen Ustilaginoidea virens from soil in Japan using real-time PCR – Ashizawa et al., 2010
Elucidation of the infection process of Ustilaginoidea virens in rice spikelets – Tang et al., 2012

Fungal plant pathogens and diseases Maize diseases Rice diseases Enigmatic Ascomycota taxa Fungi described in 1878 Fungus species
Ustilaginoidea virens
Biology
1,057