1,257,741
https://en.wikipedia.org/wiki/Kruger%2060
Krüger 60 (DO Cephei) is a binary star system and one of the nearest star systems to Earth. It is made up of a pair of red dwarf stars orbiting each other every 45 years. Description The larger, primary star is designated component A, while the secondary, smaller star is labeled component B. Component A has about 27% of the Sun's mass and 30% of the Sun's radius. Component B has about 18% of the Sun's mass and 22% of the Sun's radius. In 1951, Peter van de Kamp and Sarah Lee Lippincott announced that component B is a flare star. It was given the variable star designation "DO Cephei". Flares lasting as long as one hour have been recorded. This system is orbiting through the Milky Way at a distance from the core that varies from 7–9 kpc with an orbital eccentricity of 0.126–0.130. The closest approach to the Sun will occur in about 88,600 years. Considering the orbit of the members of Krüger 60, detecting an exoplanet through radial velocity could prove difficult, as such an orbit would be inclined only about 13 degrees to our line of sight and would produce a radial velocity signal only about one-fifth as strong as that of an exoplanet orbiting edge-on as seen from the Solar System. Origin of 2I/Borisov Krüger 60 was proposed as the origin of the interstellar comet 2I/Borisov (formerly named C/2019 Q4 (Borisov)) in a preprint submitted to arXiv by Dybczyński, Królikowska, and Wysoczańska. From earlier work, these authors had a list of stars and stellar systems that could potentially act as perturbers of Oort cloud comets, and they searched it for a past close approach of 2I/Borisov at a very small relative velocity. While hampered by uncertainty about the orbit of 2I/Borisov, and particularly its non-gravitational acceleration (due to cometary outgassing), they initially concluded that about 1 Myr ago 2I/Borisov passed Krüger 60 at a small distance of 1.74 pc while having an extremely small relative velocity of 3.43 km/s. Perturbations of 2I/Borisov's incoming orbit altered the intersection distance with relatively small changes in the relative velocity. However, further study by the same authors, presented in the revised version of the preprint, instead ruled out the possibility of Krüger 60 as a home system for 2I/Borisov. References Further reading External links Hires LRGB CCD Image Cepheus (constellation) Local Bubble M-type main-sequence stars Flare stars Binary stars 239960 110893 0860 BD+56 2783 Cephei, DO
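As a quick check of the one-fifth figure (a standard projection argument, not taken from the source): the radial velocity semi-amplitude of an orbit scales with the sine of its inclination, so an orbit seen only 13 degrees from face-on gives

```latex
K_{\text{observed}} \approx K_{\text{edge-on}} \sin i
                    \approx K_{\text{edge-on}} \sin 13^{\circ}
                    \approx 0.22\, K_{\text{edge-on}}
                    \approx \tfrac{1}{5}\, K_{\text{edge-on}}
```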
Kruger 60
Astronomy
585
44,558,555
https://en.wikipedia.org/wiki/Cieplak%20effect
In organic chemistry, the Cieplak effect is a predictive model to rationalize why nucleophiles preferentially add to one face of a carbonyl over another. Proposed by Andrzej Stanislaw Cieplak in 1980, it correctly predicts results that could not be justified by the other standard models at the time, such as the Cram and Felkin–Anh models. In the Cieplak model, electrons from a neighboring bond delocalize into the forming carbon–nucleophile (C–Nuc) bond, lowering the energy of the transition state and accelerating the rate of reaction. Whichever bond can best donate its electrons into the C–Nuc bond determines which face of the carbonyl the nucleophile will add to. The nucleophile may be any of a number of reagents, most commonly organometallic or reducing agents. The Cieplak effect is subtle, and often competes with sterics, solvent effects, counterion complexation of the carbonyl oxygen, and other effects to determine product distribution. Subsequent work has questioned its legitimacy (see Criticisms). Background The Cieplak effect relies on the stabilizing interaction of mixing full and empty orbitals to delocalize electrons, known as hyperconjugation. When the highest occupied molecular orbital (HOMO) of one system and the lowest unoccupied molecular orbital (LUMO) of another system have comparable energies and spatial overlap, the electrons can delocalize and sink into a lower energy level. Often, the HOMO of a system is a full σ (bonding) orbital and the LUMO is an empty σ* (antibonding) orbital. This mixing is a stabilizing interaction and has been widely used to explain such phenomena as the anomeric effect. A common requirement of hyperconjugation is that the bonds donating and accepting electron density are antiperiplanar to each other, to allow for maximum orbital overlap. The Cieplak effect uses hyperconjugation to explain the face-selective addition of nucleophiles to carbonyl carbons. Specifically, donation into the low-lying σ*C-Nuc bond by antiperiplanar electron-donating substituents is the stabilizing interaction which lowers the transition state energy of one stereospecific reaction pathway and thus increases the rate of attack from one side. In the simplest model, a conformationally constrained cyclohexanone is reduced to the corresponding alcohol. Reducing agents add a hydride to the carbonyl carbon via attack along the Burgi–Dunitz angle, which can come from the top along a pseudo-axial trajectory or from below, along a pseudo-equatorial trajectory. It has long been known that large reducing agents add hydride to the equatorial position to avoid steric interactions with axial hydrogens on the ring. Small hydride sources, however, add hydride to an axial position for reasons which are still disputed. The Cieplak effect explains this phenomenon by postulating that hyperconjugation of the forming σ*C–H orbital with geometrically aligned σ orbitals is the stabilizing interaction that controls stereoselectivity. In an equatorial approach, the bonds that are geometrically aligned antiperiplanar to the forming C–H bond are the C–C bonds of the ring, so they donate electron density to σ*C–H. In an axial approach, the neighboring axial C–H bonds are aligned antiperiplanar to the forming C–H bond, so they donate electron density to σ*C–H. Because C-H bonds are better electron donors than C–C bonds, they are better able to participate in this stabilizing interaction and so this pathway is favored. 
Evidence Cieplak's proposal is supported by investigating the effects of various electronic substituents on product distribution. By installing an electron-withdrawing substituent such as a methoxy group at the C2 position, the reduction of substituted cyclohexanones begins to favor equatorial attack. This is because the axial C-O bond is a worse electron donor than a C–C bond, so axial attack is less favored. Cieplak also demonstrated this effect by introducing electron withdrawing substituents on C3, which decrease the electron-donating capability of the ring C–C bond and therefore disfavor equatorial attack, which is antiperiplanar to this bond. Electron-donating substituents at C3 subsequently favor equatorial approach, since increasing C–C electron density favors σC-C donation into σ*C-Nuc and thereby encourages equatorial approach. This effect can also be investigated by changing the electronic nature of the nucleophile. In the case of an electron-deficient nucleophile, the σ* of the forming C-Nuc bond is lower in energy and better stabilized by attack antiperiplanar to electron-rich axial C-H bonds. Attack therefore occurs axially. If the nucleophile is electron-rich, however, the donation of more electron density is less favored and equatorial attack may prevail. These trends have been observed even when normalizing for steric bulk of the nucleophile. In substituted norbornones, nucleophilic attack will come antiperiplanar to the bonds which can best donate into σ*C-Nuc. The bonds positioned for this interaction are the bridgehead C–C bonds on the six-membered ring. Substituents which donate electron density to these bonds, such as ethyl groups, increase the rate of addition anti to the alkyl groups, which is the antiperiplanar trajectory. If electron-withdrawing substituents such as esters are appended to the C–C bonds, however, the selectivity favors syn addition, so that the bonds donating into σ*C-Nuc are the more electron-rich C–C bonds which are hydrogen-substituted. A similar example is seen in substituted 2-adamantones, where varying the electronic properties at the remote 5 position has profound effects on product distribution. A hydroxyl group is able to donate electron density inductively to the forming σ*C–H bond antiperiplanar, so attack from that side is favored. The electron-withdrawing ester substituent, however, lacks this stabilization ability. Instead, the C–H bonds are better electron donors than the C–CO2Me bonds, so attack comes anti to the hydrogen substituents and subsequently syn to the ester group. This explains the effect of remote electron-donor groups on stereochemical outcomes, which has been difficult to explain with other stereochemical models. The rigidity of the adamantone skeleton allows for tight control of conformation and minimization of competing effects. Criticisms The Cieplak model has been met with mixed reviews, and criticisms of both its basic logic and predictive ability have emerged. The stabilizing interaction of donating electron density into the transition state σ* orbital of a forming bond was immediately questioned, as this interaction has been widely invoked to explain exactly the opposite—the destabilization of bonds. Traditionally, forming bonds are stabilized when they donate electron density from their bonding HOMO into a neighboring antibonding LUMO, not by accepting electron density into their LUMO. To this end, David A. 
Evans said of Cieplak's proposal: "Structures are stabilized by stabilizing their highest energy filled states. This is one of the fundamental assumptions in frontier molecular orbital theory. The Cieplak hypothesis is nonsense." However, Hahn and le Noble refute this point by invoking the principle of microscopic reversibility, where the process of bond formation and cleavage are fundamentally equivalent in an equilibrium, and little value should be placed on the terms ‘bonding’ and ‘antibonding’, σ or σ*. In another criticism of the model, Houk questions Cieplak's fundamental assumption that C-H bonds are better electron-donors than C-C bonds. This point is still contested and represents another major dispute in the field. To further refute the Cieplak effect, Houk puts forth a system in which Cieplak's model predicts the wrong product distribution trend. In the case of substituted trans-decalones, electron-withdrawing substituents equatorial at C4 should discourage equatorial attack and yield more axial product, since the ring C–C bonds are deactivated for donation into the forming C-Nuc bond. Experimental evidence, however, shows that axial electron-withdrawing C4 substituents are more directing towards axial attack than equatorial substituents. Since axial orbitals are not aligned for hyperconjugation in this system, Houk rationalized this trend by invoking electrostatic arguments, described below. Alternative explanations In an effort to explain the surprising stereoselectivities in the systems above, alternative explanations to the Cieplak effect have been proposed. In substituted cyclohexanones, the tendency of small reducing agents to add hydride axially is proposed to be caused by torsional strain instead of hyperconjugation. In an equatorial attack, the nucleophile approaches by eclipsing a neighboring hydrogen atom and subsequently pushes the carbonyl substituents into eclipsing positions as it pyramidalizes the carbonyl carbon. In an axial approach, the nucleophile approaches gauche to neighboring hydrogen atoms and so does not cause eclipsing interactions while pyramidalizing the carbonyl carbon. It is this torsional strain—the energy cost of rotating the bonds out of an eclipsed position—that favors axial over equatorial approach. In the case of substituted norbornones, stereoselectivity may be explained by electrostatic interactions between substituents and nucleophiles. Electron-withdrawing groups create a partial positive charge on the alpha carbon, which interacts favorably with the partial negative charge on the incoming nucleophile. This interaction may guide attack syn to the electron-withdrawing substituent, and anti to electron-donating substituents. This conclusion is supported by computations, where modeling the partial charges predicts product distribution without including orbital interactions. The same explanation has been made to justify similar results in the case of substituted adamantones. Similarly, in Houk's trans-decalone system, the nucleophile with its partial negative charge prefers to attack away from the partial negative charge of the acyl ester. When this substituent is axial, the equatorial pathway brings the nucleophile into closer proximity and is therefore disfavored. This is less pronounced for equatorially substituted ester because the group is now positioned further away from the carbonyl. References Organic chemistry
Cieplak effect
Chemistry
2,258
58,008,487
https://en.wikipedia.org/wiki/Reagent%20Chemicals
Reagent Chemicals is a publication of the American Chemical Society (ACS) Committee on Analytical Reagents, detailing standards of purity for over four hundred of the most widely used chemicals in laboratory analyses and chemical research. Chemicals that meet this standard may be sold as "ACS Reagent Grade" materials. Reagent standards relieve chemists of concern over chemical purity. "ACS Reagent Grade" is regarded as a gold-standard measure and is in some cases required for use in chemical manufacturing, usually where stringent quality specifications and a purity of equal to or greater than 95% are required. The American Chemical Society does not validate the purity of chemicals sold with this designation, but relies on suppliers, acting in their self-interest, to meet these standards. In practice, the reliability of supplier-stated purity is at times questionable. In addition to specifications for each chemical, Reagent Chemicals provides detailed methods for measuring the properties and impurities listed in the specifications. Included are detailed explanations of numerous common analytical methods such as gas, liquid, ion, and headspace chromatography, atomic absorption spectroscopy, and optical emission spectroscopy. Reagent Chemicals is primarily of interest to manufacturers and suppliers of chemicals to laboratories worldwide, and less so to research laboratories. Many standards organizations and federal agencies that set guidelines require the use of ACS-grade reagent chemicals for many test procedures. This includes the United States Pharmacopeia (USP) and the U.S. Environmental Protection Agency (EPA). An exception would be those working on trace analyses (measuring contaminants in the environment, for example), where small impurities in reagents would be significant. Reagent Chemicals Online After eleven paper editions over 68 years, Reagent Chemicals became an electronic resource in 2017. The publication is updated several times a year to include new reagents and methods of analysis. Changes are published online six months prior to becoming an official standard, allowing manufacturers to adjust their labels or processes. While the full details of most reagents are behind a paywall, the entry for acetone is publicly available to showcase a typical entry. History of Reagent Chemicals 1903: The American Chemical Society created the Committee on the Purity of Reagents, the forerunner of the Committee on Analytical Reagents, acknowledging the increasing need for purity and standards in chemical research and manufacturing. 1917: The American Chemical Society established the ACS Committee on Analytical Reagents. William F. Hillebrand (1853-1925), one of Washington's most distinguished chemists, was elected as the first chair. He played a key editorial role in judging which analytical methods would be published as ACS standards, with colleagues referring to him as the "Supreme Court of Chemistry". He additionally achieved stature with the Geological Survey and the Bureau of Standards. 1920s: The Committee began publishing specifications for chemical reagents and test methods in scientific journals. At this point, analytical methods were primarily what we now consider to be "Classical Wet Methods". 1950: The 1st edition of Reagent Chemicals was published and introduced the application of analytical instrumentation. Reagent Chemicals had a significant impact on chemical laboratories by enabling greatly improved accuracy and sensitivity.
2000: The 9th edition was published and continued a trend toward eliminating or simplifying tedious classical procedures for trace analyses and adding instrumental methods, where possible. 2006: The release of the 10th edition introduced Monographs for Standard-Grade Reference Materials. 2016: The 11th edition introduced heavy metal test methodologies utilizing ICP-OES. 2017: The new online edition of Reagent Chemicals, based on the 11th edition in print, improved the speed and simplicity with which the Committee communicates updates and changes by bringing the entire reference resource to the ACS journals platform. Committee The ACS Committee on Analytical Reagents is responsible for the Reagent Chemicals publication and the standards included within it. The committee includes members from chemical and pharmaceutical manufacturers, academia, and government organizations (NIST, EPA, USGS). Notes References Reagent Chemicals American Chemical Society Chemistry reference works 1950 in science
Reagent Chemicals
Chemistry
838
72,466,368
https://en.wikipedia.org/wiki/Zhang%20Rujing
Zhang Rujing (), alternatively known as Richard Chang Ru-gin, is a Taiwanese businessman and entrepreneur known for founding the largest contract chip manufacturer in mainland China, the Semiconductor Manufacturing International Corporation (SMIC). In mainland China, Zhang is known as "the father of China's foundry industry" and China's "godfather of semiconductors". Early life and education Zhang Rujing was born in 1948 in the city of Nanjing, Jiangsu Province (then in the Republic of China) to a steelworker, Zhang Xilun, and his wife Liu Peijin. When Zhang was less than a year old, he and his family fled with the retreating Kuomintang aboard a boat to Kaohsiung on the southern coast of Taiwan in 1949. Growing up in Kaohsiung, Zhang excelled in his studies and was admitted to study at National Taiwan University (NTU) in the capital, Taipei. After graduating from NTU's Mechanical Engineering Department in 1970, Zhang moved to the United States, where he earned his master's degree in engineering from the University at Buffalo's School of Engineering and then moved to Southern Methodist University in Texas to earn his doctorate in electrical engineering. Career In 1977, at 29 years old, Zhang began working at the semiconductor giant Texas Instruments alongside experts in integrated circuits; his first boss was Nobel Prize in Physics laureate Jack Kilby. Starting as a design engineer, Zhang developed under the mentorship of Shao Zifan and helped establish large-scale microchip factories, including four in Texas and others in Italy, Japan, Singapore, and Taiwan. Zhang brought his then-retired parents to the United States from Taiwan. In 1996, leaders from a visiting delegation from the now-defunct Chinese Ministry of Electronics Industry approached Zhang in the United States and, noting China's twenty-year gap in semiconductor manufacturing, encouraged Zhang to return to mainland China and help his birth nation establish its own chip fabrication industry. In 1997, after twenty years of work at Texas Instruments, Zhang returned to China. Establishment of SMIC Returning first to mainland China at age 50, Zhang began searching for locations for a Chinese semiconductor factory. He then travelled back to Taiwan and founded Shida Semiconductor with the help of his contacts at Texas Instruments. As TSMC expanded in Taiwan, its head, Zhang Zhongmou, convinced TSMC shareholders in 2000 to acquire Shida Semiconductor for US$5 billion. Zhang Zhongmou, reportedly appreciative of Zhang Rujing's talent and expertise, requested that Zhang Rujing continue to lead Shida Semiconductor, a deal Zhang Rujing accepted on the purported condition that a factory would one day be built in mainland China. Learning that Zhang Zhongmou did not intend to establish a factory in mainland China, Zhang Rujing resigned in 2000, gave up his shares of TSMC, and travelled to the PRC capital of Beijing. Finding that neither the city's mayor nor its vice mayor for science and technology was in the city at that time, Zhang met with Jiang Shangzhou, Deputy Director of the Shanghai Economic Commission, who brought Zhang south to Shanghai and introduced him to Zhangjiang Hi-Tech Park. That year, on 3 April 2000, Zhang founded the Semiconductor Manufacturing International Corporation (SMIC). By May, Zhang had recruited hundreds of engineers to Shanghai, and construction of the plant began in August 2000. Zhang also moved both his mother (then over 90 years of age) and his American wife to mainland China.
Zhang also reportedly built a 1,500 unit housing area for his employees and a bilingual K-12 school for children of employees. Resignation Already experienced in the establishment of semiconductor factories, Zhang continued to expand SMIC by building three 8 inch wafer factories in Shanghai and two 12 inch factories in Beijing, and by purchasing an 8 inch factory in Tianjin from Motorola. In 2002, the Taiwanese government, allegedly feeling pressure from SMIC's primary competitor, TSMC, ordered Zhang to withdraw his investment. After Zhang's refusal, the government fined him 15 million Taiwanese dollars, threatening further fines should Zhang not desist. In August 2003, as SMIC planned to launch an IPO in Hong Kong, TSMC sued SMIC in United States courts for intellectual property theft and patent infringement. In 2005, SMIC was ordered to pay US$175 million to TSMC in damages, surrender TSMC documents, and halt the use of TSMC technology and processes in SMIC's fabrication. Later, in a separate lawsuit, a California jury found that SMIC had breached the terms of the 2005 settlement by not returning documents and by disclosing TSMC trade secrets in patent applications. Along with compensation of US$200 million and 10% equity given by SMIC to TSMC in 2009, Zhang, then 61, was prohibited from operating in the chip industry for a period of three years. Later life In 2014, having passed his three-year prohibition from the semiconductor industry, a 66-year-old Zhang founded Shanghai Xinsheng, the first 300 mm large silicon wafer company in mainland China. In 2018, Zhang established SiEn (Qingdao) Integrated Circuits which, in 2021, began producing 8 inch silicon wafers and was testing 12 inch production. Zhang has continued to play an active role in advocating for the People's Republic of China's chip industry. Zhang has expressed his confidence that China will catch up to the global leaders in third-generation gallium nitride (GaN) and silicon carbide (SiC) semiconductors. SiEn, meanwhile, has discussed potential partnerships with Huawei Technologies to allow access to semiconductor development services in what the Japanese financial newspaper Nikkei asserts is an attempt "to plug holes in its semiconductor supply chain caused by the U.S. crackdown on the tech giant". References 1948 births Businesspeople from Nanjing Texas Instruments people Living people Taiwanese company founders 20th-century Taiwanese businesspeople 21st-century Taiwanese businesspeople Businesspeople from Jiangsu University at Buffalo alumni Southern Methodist University alumni National Taiwan University alumni Electrical engineers
Zhang Rujing
Engineering
1,234
73,091,262
https://en.wikipedia.org/wiki/Environmental%20Science%20Center
The Environmental Science Center is a research center at Qatar University. It was established in 1980 to promote environmental studies across the state of Qatar, with a main focus on marine, atmospheric, and biological sciences. For the past 18 years, the ESC has monitored and studied hawksbill turtle nesting sites in Qatar. History In 1980 it was named the Scientific and Applied Research Center (SARC). In 2005 it was restructured and renamed the Environmental Studies Center (ESC). In 2015, the name was changed to Environmental Science Center (ESC) to better reflect its research-driven objectives. Research clusters The ESC has 3 major research clusters that cover areas of strategic importance to Qatar. The clusters are: Atmospheric sciences cluster Earth sciences cluster Marine sciences cluster with 2 majors: Terrestrial Ecology Physical and Chemical Oceanography UNESCO Chair in marine sciences The first of its kind in the Arabian Gulf region, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has announced the establishment of the UNESCO Chair in marine sciences at QU's Environmental Science Center. The chair aims to promote a sustainable marine environment in the Arabian Gulf and the protection of marine ecosystems. Inventions Marine clutch technology. Mushroom artificial reef technology (mushroom forest). Accreditation The ESC labs have been granted ISO/IEC 17025 accreditation by the American Association for Laboratory Accreditation (A2LA), affirming their status as world-class facilities operating to best practice. Facilities The ESC is home to a wide range of facilities. The most notable are the mobile labs on board the JANAN research vessel. JANAN is a 42.80 m multipurpose research vessel named after the island located off the western coast of the Qatari peninsula. It was donated to Qatar University by H.H. Sheikh Tamim bin Hamad Al Thani, the Amir of Qatar. JANAN is used extensively in studying the state of the marine environment in the Exclusive Economic Zone (EEZ) of the State of Qatar and to advance critical marine environmental studies and research in Qatar and the wider Gulf. The center also has 12 labs equipped with state-of-the-art instruments. See also Qatar University Qatar University Library Mariam Al Maadeed Center for Advanced Materials (CAM) External links Research and Graduate Studies Office at Qatar University Qatar University Newsroom References 1980 establishments in Qatar Organisations based in Doha Research institutes in Qatar Educational institutions established in 1980 Qatar University Education by subject Human impact on the environment Oceanographic organizations Fishing and the environment Earth science research institutes Biological research institutes Environmental research institutes
Environmental Science Center
Environmental_science
496
28,975,107
https://en.wikipedia.org/wiki/Eurocopter%20X%C2%B3
The Eurocopter X³ (X-Cubed) is a retired experimental high-speed compound helicopter developed by Airbus Helicopters (formerly Eurocopter). A technology demonstration platform for the "high-speed, long-range hybrid helicopter" or H³ concept, the X³ set an unofficial speed record for a helicopter in level flight on 7 June 2013. In June 2014, it was placed in a French air museum in the village of Saint-Victoret. Design and development Technology The X³ demonstrator is based on the Eurocopter AS365 Dauphin helicopter, with the addition of short-span wings each fitted with a tractor propeller; the two propellers run at different pitches to counter the torque effect of the main rotor. Conventional helicopters use tail rotors to counter the torque effect. The tractor propellers are gear-driven from the two main turboshaft engines, which also drive the five-bladed main rotor system, taken from a Eurocopter EC155. Test pilots describe the X³ flight as smooth, even though the X³ does not have passive or active anti-vibration systems, and it can fly without stability augmentation systems, unlike the Sikorsky X2. The helicopter is designed to prove the concept of a high-speed helicopter which depends on slowing the rotor speed (by 15%) to avoid drag from the advancing blade tip, and on avoiding retreating blade stall by unloading the rotor while a small wing provides 40–80% of the lift instead. The X³ can hover with a pitch attitude between minus 10 and plus 15 degrees. Its bank range is 40 degrees in hover, and it is capable of flying at bank angles of 120 to 140 degrees. During testing the aircraft demonstrated a rate of climb of 5,500 feet per minute and high-G turns of 2 g at 210 knots. Performance The X³ first flew on 6 September 2010 from the French Direction générale de l'armement facility at Istres-Le Tubé Air Base. On 12 May 2011 the X³ demonstrated its high-speed cruise capability while using less than 80 percent of available power. In May 2012, it was announced that the Eurocopter X³ development team had received the American Helicopter Society's Howard Hughes Award for 2012. Eurocopter demonstrated the X³ in the United States during the summer of 2012, the aircraft logging 55 flight hours, with a number of commercial and military operators being given the opportunity to fly the aircraft. With an aerodynamic fairing installed on the rotor head, the X³ set new speed marks in level flight and in a shallow dive on 7 June 2013, beating the Sikorsky X2's unofficial record set in September 2010 and thus becoming the world's fastest non-jet-augmented compound helicopter. Variants Eurocopter suggested that a production H³ application could appear as soon as 2020. The company had also previously expressed an interest in offering an H³ technology based solution for the United States' Future Vertical Lift program, with EADS North America submitting a bid to build a technology demonstrator under the US Army's Joint Multi Role (JMR) program; it later withdrew due to cost, because Eurocopter might have had to transfer X³ intellectual property to the US, and because Eurocopter chose to focus on the Armed Aerial Scout instead. Ultimately the company was not downselected for the JMR effort, and the AAS program was cancelled. Eurocopter saw the offshore oil market and the search and rescue community as potential customers for X³ technology. An X³-based unpressurised compound helicopter called LifeRCraft is also among the projects planned under the European Union's €4 billion ($5.44 billion) Clean Sky 2 research program as one of two high-speed rotorcraft flight demonstrators.
Airbus began development of the hybrid composite helicopter with a 4.6-litre V-8 piston engine in 2014, froze the design in 2016 to start building in 2017, and had plans to fly it in 2019. The X³ was moved to the Musée de l’air et de l’espace in 2014 for public display. RACER The Airbus RACER (Rapid And Cost-Effective Rotorcraft) is a development revealed at the June 2017 Paris Air Show; final assembly was planned to start in 2019 for a 2020 first flight. Cruising faster than a conventional helicopter, it aims for a 25% cost reduction per distance flown. Specifications See also References External links Video X³ Video X³, Cockpit Making of Airbus Helicopters aircraft Compound helicopters Experimental helicopters 2010s French helicopters High-wing aircraft Slowed rotor Twin-turbine helicopters Aircraft first flown in 2010
Eurocopter X³
Engineering
938
30,871,845
https://en.wikipedia.org/wiki/Anticausal%20system
In systems theory, an anticausal system is a hypothetical system with outputs and internal states that depend solely on future input values. Some textbooks and published research literature might define an anticausal system to be one that does not depend on past input values, allowing also for dependence on present input values. An acausal system is a system that is not a causal system, that is, one that depends on some future input values and possibly on some input values from the past or present. This is in contrast to a causal system, which depends only on current and/or past input values. This is often a topic of control theory and digital signal processing (DSP). Anticausal systems are also acausal, but the converse is not always true: an acausal system that has any dependence on past input values is not anticausal. An example of acausal signal processing is producing an output signal from an input signal that was recorded earlier, by looking at input values both forward and backward in time (from a predefined time arbitrarily denoted as the "present" time). In reality, that "present" time input, as well as the "future" time input values, were recorded at some time in the past, but conceptually they can be called the "present" or "future" input values in this acausal process. This type of processing cannot be done in real time, as future input values are not yet known; it is done after the input signal has been recorded and is post-processed. Digital room correction in some sound reproduction systems relies on acausal filters. References See also Anti-causal filter Control theory Digital signal processing Systems theory
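A minimal sketch of the distinction in Python/NumPy (the function names and window sizes are illustrative assumptions, not standard terms): the causal average uses only current and past samples, the anticausal average uses only current and future samples, and the centered average is acausal but not anticausal because it uses both.

```python
import numpy as np

def causal_average(x, k=2):
    # Causal: y[n] uses only x[n], x[n-1], ..., x[n-k].
    return np.array([x[max(0, n - k):n + 1].mean() for n in range(len(x))])

def anticausal_average(x, k=2):
    # Anticausal: y[n] uses only x[n], x[n+1], ..., x[n+k] (present and future).
    return np.array([x[n:n + k + 1].mean() for n in range(len(x))])

def acausal_centered_average(x, k=1):
    # Acausal but not anticausal: y[n] uses past, present, and future samples.
    return np.array([x[max(0, n - k):n + k + 1].mean() for n in range(len(x))])

# These can only be applied offline to a signal that has already been recorded,
# since the "future" samples must be available when each output is computed.
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
print(causal_average(x))
print(anticausal_average(x))
print(acausal_centered_average(x))
```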
Anticausal system
Mathematics
354
26,174,020
https://en.wikipedia.org/wiki/Hawaiian%20ethnobiology
Hawaiian ethnobiology is the study of how people in Hawaii, particularly those living before Western contact, have interacted with the plants around them. This includes the practices of agroforestry, horticulture, religious plants, medicinal plants, agriculture, and aquaculture. Conservation Often in conservation, "Hawaiian ethnobiology" describes the state of ecology in the Hawaiian Islands prior to human contact. However, since "ethno" refers to people, "Hawaiian ethnobiology" is the study of how people, past and present, interact with the living world around them. The concept of conservation was, like many things in pre-contact ancient Hawaii, decentralized. At the ahupuaa level, a konohiki managed the natural resource wealth. He would gather information on people's observations and make decisions as to what was kapu (strictly forbidden) at what times. Also, the concept of kuleana (responsibility) fueled conservation. Families were delegated a fishing area. It was their responsibility not to take more than they needed during fishing months, and to feed the fish kalo (Colocasia esculenta) and breadfruit (Artocarpus altilis) during a certain season. The same idea of not collecting more than what was needed, and tending to the care of "wild" harvested products, extended up into the forest. In modern times, this role is institutionalized within a central state government. This causes animosity between natural resource collectors (subsistence fishermen) and the state (for example, the local Department of Fish and Wildlife). Agroforestry Managing the forest resources around you is agroforestry. This includes timber and non-timber forest crops. Hawaiian agroforestry practices Religious Plants If a religious belief system influences how a culture perceives and manages its environment, then the plants involved are part of a "sacred ecology". Hawaiian sacred plants include awa (Piper methysticum), which was used both religiously as a sacrament and by the common people as a relaxant/sedative. Other religious plants that have shaped ecology are Ki (Cordyline fruticosa) and Kalo. Ki is a sterile plant, so the wide distribution of the plant across the main Hawaiian islands indicated human activity; if not directly planted, then through gravitational fragmentation. Kalo was the staple starch crop of the Hawaiian diet. In Hawaiian genealogy, Haloa was the first born of Papa (Earth Mother) and Wakea (Sky Father). He was stillborn, so Papa went out and buried him. Haloa then sprouted into the first kalo plant. They also named their second son Haloa; he was charged with the kuleana to always care for his older brother. The historical Hawaiian people draw their direct lineage from Haloa, and did, and some still do, assume his responsibility to care for kalo. This responsibility, and the need for food, drove the building of huge kalo-growing complexes called loi. References Ethnobiology Environment of Hawaii
Hawaiian ethnobiology
Biology,Environmental_science
635
587,163
https://en.wikipedia.org/wiki/Light-time%20correction
Light-time correction is a displacement in the apparent position of a celestial object from its true position (or geometric position) caused by the object's motion during the time it takes its light to reach an observer. Light-time correction occurs in principle during the observation of any moving object, because the speed of light is finite. The magnitude and direction of the displacement in position depends upon the distance of the object from the observer and the motion of the object, and is measured at the instant at which the object's light reaches the observer. It is independent of the motion of the observer. It should be contrasted with the aberration of light, which depends upon the instantaneous velocity of the observer at the time of observation, and is independent of the motion or distance of the object. Light-time correction can be applied to any object whose distance and motion are known. In particular, it is usually necessary to apply it to the motion of a planet or other Solar System object. For this reason, the combined displacement of the apparent position due to the effects of light-time correction and aberration is known as planetary aberration. By convention, light-time correction is not applied to the positions of stars, because their motion and distance may not be known accurately. Calculation A calculation of light-time correction usually involves an iterative process. An approximate light-time is calculated by dividing the object's geometric distance from Earth by the speed of light. Then the object's velocity is multiplied by this approximate light-time to determine its approximate displacement through space during that time. Its previous position is used to calculate a more precise light-time. This process is repeated as necessary. For planetary motions, a few (3–5) iterations are sufficient to match the accuracy of the underlying ephemerides. Discovery The effect of the finite speed of light on observations of celestial objects was first recognised by Ole Rømer in 1675, during a series of observations of eclipses of the moons of Jupiter. He found that the interval between eclipses was less when Earth and Jupiter are approaching each other, and more when they are moving away from each other. He correctly deduced that this difference was caused by the appreciable time it took for light to travel from Jupiter to the observer on Earth. References P. Kenneth Seidelmann (ed.), Explanatory Supplement to the Astronomical Almanac (Mill Valley, Calif., University Science Books, 1992), 23, 393. Arthur Berry, A Short History of Astronomy (John Murray, 1898 – republished by Dover, 1961), 258–265. Astrometry Time
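A minimal sketch of that iteration in Python/NumPy, assuming the object's velocity is effectively constant over the light-travel time; the function name and the example numbers are illustrative only, not taken from the source:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def light_time_corrected_position(obj_pos, obj_vel, observer_pos, iterations=4):
    # Iteratively estimate where the object was when the light now arriving left it.
    # obj_pos, obj_vel: the object's geometric position (m) and velocity (m/s) at the
    # instant of observation; observer_pos: the observer's position (m).
    # A few iterations (3-5) are typically enough, as the text above notes.
    pos = np.asarray(obj_pos, dtype=float)
    vel = np.asarray(obj_vel, dtype=float)
    obs = np.asarray(observer_pos, dtype=float)
    retarded = pos.copy()
    for _ in range(iterations):
        light_time = np.linalg.norm(retarded - obs) / C   # approximate light-time in seconds
        retarded = pos - vel * light_time                 # where the object was that long ago
    return retarded

# Example: an object about 4.2 light-minutes away, moving at 30 km/s across the line of sight.
print(light_time_corrected_position([7.56e10, 0.0, 0.0], [0.0, 3.0e4, 0.0], [0.0, 0.0, 0.0]))
```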
Light-time correction
Physics,Astronomy,Mathematics
540
62,902,993
https://en.wikipedia.org/wiki/AquaSalina
AquaSalina is a salt de-icer made from produced water (or brine) at Duck Creek Energy's vertical oil and gas wells. It is then filtered in Cleveland, Ohio, and Mogadore, Ohio. The Ohio Department of Transportation approved AquaSalina in 2004, and it has been sold at Lowe's and elsewhere. In the winter of 2017–2018, the Ohio Department of Transportation sprayed over 500,000 gallons of AquaSalina deicer on highways. In the 2018–2019 winter they applied over 620,000 gallons of it. In the winter of 2018–2019, they applied nearly 800,000 gallons. In 2017, the Ohio Department of Natural Resources (ODNR) tested samples and found high radium levels, as did a Duquesne University scientist, who called it "a nightmare". While ODNR's tests indicated the results were 300 times higher than allowed in drinking water and above the levels allowed for the discharge of radioactive waste, the product met the department's standards to be used as a deicer. Specifically, 0.005 picocuries per liter of radium is allowed for disposal, but there is no limit for spreading on roadways. The ODNR samples contained between 66 and 9602 picocuries per liter, including one sample that was higher than raw brine. Several bills were introduced in the Ohio legislature from 2017 to 2019 to classify brine deicers as a commodity rather than toxic waste, which would exempt them from ODNR testing. Fracking water lawsuit Duck Creek Energy won a defamation lawsuit in 2013 against two individuals who said AquaSalina was "frac waste" or "fracking water". AquaSalina's source is vertical oil and gas wells, not fracking wells. They were allowed to continue describing it as "toxic". The ruling drew a distinction between saying that AquaSalina "is" fracking water and saying that it "contains" fracking water. References Further reading Water pollution Radioactive contamination Radiation health effects Ice in transportation Economy of Ohio
AquaSalina
Physics,Chemistry,Materials_science,Technology,Environmental_science
413
35,724,928
https://en.wikipedia.org/wiki/France%20AEROTECH
France AEROTECH is the name of the French national network of aeronautical and space grandes écoles (engineering graduate schools). It was created in 2011 by Arts et Métiers ParisTech, École centrale de Lyon, École centrale de Nantes, École nationale de l'aviation civile and École nationale supérieure d’électronique, informatique, télécommunications, mathématique et mécanique de Bordeaux. The goals of France AEROTECH are to provide French courses abroad, to develop international research projects and courses in aeronautical and space engineering, and to help emerging markets. To achieve these goals, the member schools will create a summer program in embedded systems and a master's program in airworthiness. References Aviation schools Aerospace engineering organizations Aviation schools in France Grandes écoles Organizations established in 2011 École nationale de l'aviation civile
France AEROTECH
Engineering
168
63,931,134
https://en.wikipedia.org/wiki/C16H12ClN3O3
{{DISPLAYTITLE:C16H12ClN3O3}} The molecular formula C16H12ClN3O3 (molar mass: 329.738 g/mol, exact mass: 329.0567 u) may refer to: Meclonazepam Ro05-4082 Molecular formulas
C16H12ClN3O3
Physics,Chemistry
74
6,503,281
https://en.wikipedia.org/wiki/Nv%20network
An Nv network is a term used in BEAM robotics for the small electrical neural networks that make up the bulk of BEAM-based robot control mechanisms. Building blocks The most basic component of Nv networks is the Nv neuron. The purpose of an Nv neuron is simply to take an input, do something with it, and give an output. The most common action of Nv neurons is to introduce a delay. BEAM Nv Neurons The standard BEAM-based neuron is a capacitor that has one lead as an input and the other going into the input line of an inverter. That inverter's output is the output of the neuron. The capacitor lead feeding the inverter is pulled to ground with a resistor. The neuron functions because when an input is received (positive power on the input line), it charges the capacitor. Once the input is lost (negative power on the input line), the capacitor discharges into the inverter, causing the inverter to produce an output that is passed to the next neuron. The rate at which the capacitor discharges is set by the resistor pulling the inverter's input low: the larger the resistor, the longer it takes for the capacitor to fully discharge, and the longer it takes for that neuron to completely fire. Types There are many common network topologies used in BEAM robots, the most common of which are listed here. Bicore Probably the most utilized Nv net topology in BEAM, the bicore consists of two neurons placed in a loop that alternates current to the output. Input into the loop is given in the form of changing the resistance in each separate neuron, which changes the rate at which the neuron discharges, affecting the pace at which the loop oscillates. Master/Slave bicores Another common topology uses two bicores in a master/slave layout where the master bicore leads the slave and sets the pace, while the slave bicore follows at an offset pace. This layout is most commonly used for dual-motor walkers. Larger networks Other, larger network topologies include the tricore and quadcore, which are laid out in a similar way to the bicore, but with more neurons in the loop. More complex networks exist, but are not as common due to the simplistic nature of BEAM. Structure A basic Nv network is built upon several Nv neurons in a loop. The loop's timing is often varied by input sensors, and this difference in timing affects the output pattern of the Nv loop. An example of this can be seen in a simple BEAM walker robot utilizing a bicore network (2 neurons). The neural network is set up to alternate the current going to the main motor so that, under equal input from the main sensors, the neurons oscillate at an equal pace, producing a steady walking gait. When input (e.g. from light sensors) is present, the timing of each neuron in the loop is varied based on the input from the sensors, affecting the pace at which the loop oscillates. This altered pace is often used to change the walking gait of a robot in order to steer it based on the input from its sensors. External articles and other references BEAM NV Articles on the BEAM Robotics Wiki On Bicores on the BEAM Robotics Wiki Electrical circuits BEAM robotics
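A rough first-order estimate of the delay described above, in Python. This assumes an ideal inverter switching at half the supply voltage and a simple exponential RC discharge; the component values are illustrative only, not from the source:

```python
import math

def nv_neuron_delay(r_ohms, c_farads, v_supply=5.0, v_threshold=2.5):
    # After the input edge, the capacitor voltage seen at the inverter input decays
    # through the pull-down resistor roughly as v(t) = v_supply * exp(-t / (R * C)).
    # The neuron keeps "firing" while v(t) stays above the inverter threshold, so the
    # pulse length grows with both R and C.
    return r_ohms * c_farads * math.log(v_supply / v_threshold)

# Illustrative values: 1 MOhm and 0.22 uF give a delay of roughly 0.15 s.
print(f"{nv_neuron_delay(1e6, 0.22e-6):.3f} s")
```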
Nv network
Engineering,Biology
728
5,386,096
https://en.wikipedia.org/wiki/Peter%20J.%20Salzman
Peter J. Salzman is a former computer hacker who was a senior member of the hacking group Legion of Doom in the 1980s. He was the first hacker apprehended during Operation Sundevil and was caught while serving in the United States Air Force as a computer cryptography specialist. Salzman was the founder and many-time president of the Linux Users Group of Davis. He finished a Ph.D. in physics at the University of California, Davis, with a dissertation on the semi-classical theory of gravitation, a subtopic of quantum gravity. He is also the author and former maintainer of the popular guides Using GNU's GDB Debugger and the Linux Kernel Module Programming Guide. He co-authored (along with Norman Matloff) a popular book on computer program debugging called "The Art of Debugging with GDB", which was published on April 15, 2008. Salzman finished a Master of Quantitative Finance at Baruch College. He worked as a quantitative developer for Fitch Ratings and Fitch Solutions before becoming a quantitative analyst for Algorithmics. He is currently a quantitative analyst for IBM. External links Peter Jay Salzman's dissertation: Investigation of the Time Dependent Schrodinger-Newton Equation Living people Financial economists University of California, Davis alumni Year of birth missing (living people)
Peter J. Salzman
Technology
270
43,036,900
https://en.wikipedia.org/wiki/Westnile%20Distilling%20Company%20Limited
Westnile Distilling Company Limited is a private beverage company that started in Arua, Uganda. It manufactures gin, mineral water, and glucose solution. Genesis Although conceived as an idea in 1972, factory construction started at a fast pace in Arua around 1978. Civil wars and instability then halted work and forced the company's founders into exile. When Uganda regained peace, construction resumed in 1989 and was finished. Arson attack In 2005, arsonists struck and burnt down the 4 billion UGX factory on Ujuo Road (behind Arua Public Secondary School). Manufacturing lines and products worth millions were destroyed, but the concrete tower still remains. The rest of the factory was saved by Civil Aviation Authority trucks from Arua Airfield after the company's owner, Dr Eric Adriko, phoned the CAA Deputy Managing Director, Dr Makuza. Meanwhile, the LC 1 Chairman and residents protected the other units. Recovery In January 2007, a new factory was reborn out of the ashes and relocated to Plot 6 - 12 Makamba Road, Lungujja, in a neighbourhood named Kosovo (within Kampala). The company launched Hunters Gin and its own 'Sunshine Mineral Water', extracted from underground rock on the new premises. It had now joined the mineral water craze gripping Uganda. With a process and machinery unique in East and Central Africa, it continues to produce its products: Adrikos 7 Hills Vodka, White Rhino Gin and Rum Raggi. The company has a strict policy against the consumption of alcohol by those under 18. See also Alcohol References Arua District Food and drink companies of Uganda Distilleries Arua
Westnile Distilling Company Limited
Chemistry
330
5,115,749
https://en.wikipedia.org/wiki/Typed%20assembly%20language
In computer science, a typed assembly language (TAL) is an assembly language that is extended to include a method of annotating the datatype of each value that is manipulated by the code. These annotations can then be used by a program (type checker) that processes the assembly language code in order to analyse how it will behave when it is executed. Specifically, such a type checker can be used to prove the type safety of code that meets the criteria of some appropriate type system. Typed assembly languages usually include a high-level memory management system based on garbage collection. A typed assembly language with a suitably expressive type system can be used to enable the safe execution of untrusted code without using an intermediate representation like bytecode, allowing features similar to those currently provided by virtual machine environments like Java and .NET. See also Proof-carrying code Further reading Greg Morrisett. "Typed assembly language" in Advanced Topics in Types and Programming Languages. Editor: Benjamin C. Pierce. External links TALx86, a research project from Cornell University which has implemented a typed assembler for the Intel IA-32 architecture. Assembly languages Programming language theory Cybersecurity engineering
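As a toy illustration of the idea only (this is not TAL or TALx86 syntax; the instruction set, type names, and checking rules below are invented for the sketch), a type checker can walk annotated assembly-like code, track the type held in each register, and reject unsafe operations before the code ever runs:

```python
from typing import Dict, List, Tuple

Instr = Tuple[str, ...]  # e.g. ("mov", "r1", "int", 42)

def type_check(program: List[Instr]) -> Dict[str, str]:
    regs: Dict[str, str] = {}              # register name -> type it currently holds
    for op, *args in program:
        if op == "mov":                    # mov rdst, <type> <immediate>
            rdst, ty, _imm = args
            regs[rdst] = ty
        elif op == "add":                  # add rdst, rsrc : both operands must hold ints
            rdst, rsrc = args
            if regs.get(rdst) != "int" or regs.get(rsrc) != "int":
                raise TypeError(f"add {rdst}, {rsrc}: operands must both be int")
        elif op == "load":                 # load rdst, [rsrc] : rsrc must hold a pointer
            rdst, rsrc = args
            if regs.get(rsrc) != "ptr":
                raise TypeError(f"load {rdst}, [{rsrc}]: {rsrc} does not hold a pointer")
            regs[rdst] = "int"
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

ok = [("mov", "r1", "int", 1), ("mov", "r2", "int", 2), ("add", "r1", "r2")]
print(type_check(ok))                      # passes: {'r1': 'int', 'r2': 'int'}

bad = [("mov", "r1", "int", 1), ("load", "r2", "r1")]
# type_check(bad) would raise TypeError, since r1 does not hold a pointer.
```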
Typed assembly language
Technology,Engineering
241
2,032,752
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20code
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science. Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs. Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m, the Reed–Muller code with parameters r and m is denoted as RM(r, m). When asked to encode a message consisting of k bits, where k = C(m,0) + C(m,1) + ... + C(m,r) (a sum of binomial coefficients) holds, the RM(r, m) code produces a codeword consisting of 2^m bits. Reed–Muller codes are named after David E. Muller, who discovered the codes in 1954, and Irving S. Reed, who proposed the first efficient decoding algorithm. Description using low-degree polynomials Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes. Encoder A block code can have one or more encoding functions C that map messages to codewords. The Reed–Muller code RM(r, m) has message length k and block length n = 2^m. One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree at most r. Every such multilinear polynomial over the finite field with two elements can be written as a sum, over the subsets S of {1, ..., m} with |S| ≤ r, of terms c_S · Π_{i in S} x_i. The x_i are the variables of the polynomial, and the values c_S are the coefficients of the polynomial. Note that there are exactly k coefficients. With this in mind, an input message consists of k values which are used as these coefficients. In this way, each message x gives rise to a unique polynomial p_x in m variables. To construct the codeword C(x), the encoder evaluates the polynomial p_x at all 2^m points a in {0,1}^m, where multiplication and addition are taken mod 2. That is, the encoding function is defined via C(x) = (p_x(a)) for a ranging over {0,1}^m. The fact that the codeword C(x) suffices to uniquely reconstruct x follows from Lagrange interpolation, which states that the coefficients of a polynomial are uniquely determined when sufficiently many evaluation points are given. Since C(0) = 0 and C(x + y) = C(x) + C(y) holds for all messages x and y, the function C is a linear map. Thus the Reed–Muller code is a linear code. Example For the code RM(2, 4), the parameters are as follows: r = 2, m = 4, block length n = 2^4 = 16, and message length k = 1 + 4 + 6 = 11. Let C be the encoding function just defined. To encode the string x = 1 1010 010101 of length 11, the encoder first constructs the corresponding polynomial in 4 variables. Then it evaluates this polynomial at all 16 evaluation points (each point is written as a 4-bit string giving the values of the four variables): As a result, C(1 1010 010101) = 1101 1110 0001 0010 holds. Decoder As was already mentioned, Lagrange interpolation can be used to efficiently retrieve the message from a codeword. However, a decoder needs to work even if the codeword has been corrupted in a few positions, that is, when the received word is different from any codeword. In this case, a local decoding procedure can help.
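Returning to the encoder just described, here is a minimal Python sketch of the polynomial-evaluation encoding. The orderings chosen here for the monomials and for the 2^m evaluation points are illustrative assumptions, so the printed codeword need not match the bit order used in the article's worked example:

```python
from itertools import combinations, product

def rm_encode(message_bits, r, m):
    # One coefficient per monomial of degree <= r; a monomial is represented
    # by the set of variable indices it contains (the empty set is the constant term).
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    assert len(message_bits) == len(monomials)   # k = C(m,0) + ... + C(m,r)
    codeword = []
    for point in product([0, 1], repeat=m):      # all 2^m evaluation points
        value = 0
        for coeff, subset in zip(message_bits, monomials):
            term = coeff
            for i in subset:
                term &= point[i]                 # multiply the variables, mod 2
            value ^= term                        # add the terms, mod 2
        codeword.append(value)
    return codeword

# RM(2, 4): message length 1 + 4 + 6 = 11, block length 16.
print(rm_encode([1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1], r=2, m=4))
```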
The algorithm from Reed is based on the following property: you start from the code word, that is, a sequence of evaluations of an unknown polynomial of degree at most r that you want to find. The sequence may contain any number of errors up to 2^(m−r−1) − 1 included. If you consider a monomial of the highest degree in the polynomial, and sum all the evaluation points where the variables of that monomial take the values 0 or 1 and all the other variables have value 0, you get the value of the coefficient (0 or 1) of that monomial (there are 2^r such points). This is because every lower-degree monomial divisor of it appears an even number of times in the sum, while the monomial itself appears only once. To take into account the possibility of errors, you can also remark that the other variables may be fixed to any value. So instead of doing the sum only once, with the variables outside the monomial set to 0, you do it once for each fixed valuation of the other variables, giving 2^(m−r) sums in total. If there is no error, all those sums should be equal to the value of the coefficient searched for. The algorithm here consists of taking the majority of the answers as the value searched for. If the minority is larger than the maximum number of errors possible, the decoding step fails, since there are too many errors in the input code. Once a coefficient is computed, if it is 1, update the code by removing that monomial's contribution from the input code, and continue to the next monomial, in decreasing order of degree. Example Let's consider the previous example and start from the code. With 2^(m−r−1) − 1 = 1, we can fix at most 1 error in the code. Consider the input code as 1101 1110 0001 0110 (this is the previous code with one error). We know the degree of the polynomial is at most 2, so we start by searching for the monomials of degree 2. We start with the first group of evaluation points. In the code this is: 1101 1110 0001 0110. The first sum is 1 (odd number of 1s). We look at the second group of evaluation points. In the code this is: 1101 1110 0001 0110. The second sum is 1. We look at the third group of evaluation points. In the code this is: 1101 1110 0001 0110. The third sum is 1. We look at the fourth group of evaluation points. In the code this is: 1101 1110 0001 0110. The fourth sum is 0 (even number of 1s). The four sums don't agree (so we know there is an error), but the minority is not larger than the maximum number of errors allowed (1), so we take the majority and the coefficient of this monomial is 1. We remove its contribution from the code before continuing: code: 1101 1110 0001 0110, valuation of the monomial is 0001000100010001, the new code is 1100 1111 0000 0111. For the next degree-2 monomials: 1100 1111 0000 0111. Sum is 0 1100 1111 0000 0111. Sum is 0 1100 1111 0000 0111. Sum is 1 1100 1111 0000 0111. Sum is 0 One error detected, coefficient is 0, no change to current code. 1100 1111 0000 0111. Sum is 0 1100 1111 0000 0111. Sum is 0 1100 1111 0000 0111. Sum is 1 1100 1111 0000 0111. Sum is 0 One error detected, coefficient is 0, no change to current code. 1100 1111 0000 0111. Sum is 1 1100 1111 0000 0111. Sum is 1 1100 1111 0000 0111. Sum is 1 1100 1111 0000 0111. Sum is 0 One error detected, coefficient is 1, valuation of the monomial is 0000 0011 0000 0011, current code is now 1100 1100 0000 0100. 1100 1100 0000 0100. Sum is 1 1100 1100 0000 0100. Sum is 1 1100 1100 0000 0100. Sum is 1 1100 1100 0000 0100. Sum is 0 One error detected, coefficient is 1, valuation of the monomial is 0000 0000 0011 0011, current code is now 1100 1100 0011 0111. 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 One error detected, coefficient is 0, no change to current code.
We know now all coefficient of degree 2 for the polynomial, we can start mononials of degree 1. Notice that for each next degree, there are twice as much sums, and each sums is half smaller. 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 0 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 0 One error detected, coefficient is 0, no change to current code. 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 1 1100 1100 0011 0111. Sum is 0 One error detected, coefficient is 1, valuation of is 0011 0011 0011 0011, current code is now 1111 1111 0000 0100. Then we'll find 0 for , 1 for and the current code become 1111 1111 1111 1011. For the degree 0, we have 16 sums of only 1 bit. The minority is still of size 1, and we found and the corresponding initial word 1 1010 010101 Generalization to larger alphabets via low-degree polynomials Using low-degree polynomials over a finite field of size , it is possible to extend the definition of Reed–Muller codes to alphabets of size . Let and be positive integers, where should be thought of as larger than . To encode a message of width , the message is again interpreted as an -variate polynomial of total degree at most and with coefficient from . Such a polynomial indeed has coefficients. The Reed–Muller encoding of is the list of all evaluations of over all . Thus the block length is . Description using a generator matrix A generator matrix for a Reed–Muller code of length can be constructed as follows. Let us write the set of all m-dimensional binary vectors as: We define in N-dimensional space the indicator vectors on subsets by: together with, also in , the binary operation referred to as the wedge product (not to be confused with the wedge product defined in exterior algebra). Here, and are points in (N-dimensional binary vectors), and the operation is the usual multiplication in the field . is an m-dimensional vector space over the field , so it is possible to write We define in N-dimensional space the following vectors with length and where 1 ≤ i ≤ m and the Hi are hyperplanes in (with dimension ): The generator matrix The Reed–Muller code of order r and length N = 2m is the code generated by v0 and the wedge products of up to r of the vi, (where by convention a wedge product of fewer than one vector is the identity for the operation). In other words, we can build a generator matrix for the code, using vectors and their wedge product permutations up to r at a time , as the rows of the generator matrix, where . Example 1 Let m = 3. Then N = 8, and and The RM(1,3) code is generated by the set or more explicitly by the rows of the matrix: Example 2 The RM(2,3) code is generated by the set: or more explicitly by the rows of the matrix: Properties The following properties hold: The set of all possible wedge products of up to m of the vi form a basis for . The RM (r, m) code has rank where '|' denotes the bar product of two codes. has minimum Hamming weight 2m − r. Proof Decoding RM codes RM(r, m) codes can be decoded using majority logic decoding. The basic idea of majority logic decoding is to build several checksums for each received code word element. Since each of the different checksums must all have the same value (i.e. 
the value of the message word element weight), we can use a majority logic decoding to decipher the value of the message word element. Once each order of the polynomial is decoded, the received word is modified accordingly by removing the corresponding codewords weighted by the decoded message contributions, up to the present stage. So for a rth order RM code, we have to decode iteratively r+1, times before we arrive at the final received code-word. Also, the values of the message bits are calculated through this scheme; finally we can calculate the codeword by multiplying the message word (just decoded) with the generator matrix. One clue if the decoding succeeded, is to have an all-zero modified received word, at the end of (r + 1)-stage decoding through the majority logic decoding. This technique was proposed by Irving S. Reed, and is more general when applied to other finite geometry codes. Description using a recursive construction A Reed–Muller code RM(r,m) exists for any integers and . RM(m, m) is defined as the universe () code. RM(−1,m) is defined as the trivial code (). The remaining RM codes may be constructed from these elementary codes using the length-doubling construction From this construction, RM(r,m) is a binary linear block code (n, k, d) with length , dimension and minimum distance for . The dual code to RM(r,m) is RM(m-r-1,m). This shows that repetition and SPC codes are duals, biorthogonal and extended Hamming codes are duals and that codes with are self-dual. Special cases of Reed–Muller codes Table of all RM(r,m) codes for m≤5 All codes with and alphabet size 2 are displayed here, annotated with the standard [n,k,d] coding theory notation for block codes. The code is a -code, that is, it is a linear code over a binary alphabet, has block length , message length (or dimension) , and minimum distance . Properties of RM(r,m) codes for r≤1 or r≥m-1 codes are repetition codes of length , rate and minimum distance . codes are parity check codes of length , rate and minimum distance . codes are single parity check codes of length , rate and minimum distance . codes are the family of extended Hamming codes of length with minimum distance . References Further reading Chapter 4. Chapter 4.5. External links MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 6.4 GPL Matlab-implementation of RM-codes Source GPL Matlab-implementation of RM-codes Error detection and correction Coding theory Theoretical computer science
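Returning to the recursive description above, the length-doubling (u, u + v) construction can also be made concrete. The short Python sketch below enumerates all codewords of RM(r, m) by that rule; it is exponential in the block length and is meant only as an illustration for tiny parameters.

from itertools import product

def rm_codewords(r, m):
    # RM(-1, m) is the trivial all-zero code; RM(m, m) is the universe code.
    if r < 0:
        return {(0,) * (2 ** m)}
    if r >= m:
        return set(product((0, 1), repeat=2 ** m))
    # RM(r, m) = {(u, u + v) : u in RM(r, m-1), v in RM(r-1, m-1)}
    left = rm_codewords(r, m - 1)
    right = rm_codewords(r - 1, m - 1)
    return {u + tuple(a ^ b for a, b in zip(u, v)) for u in left for v in right}

# RM(1, 3) should be an [8, 4, 4] code: 2^4 = 16 codewords, minimum nonzero weight 4.
words = rm_codewords(1, 3)
print(len(words), min(sum(w) for w in words if any(w)))   # 16 4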
Reed–Muller code
Mathematics,Engineering
3,062
586,091
https://en.wikipedia.org/wiki/Cannabinoid%20receptor
Cannabinoid receptors, located throughout the body, are part of the endocannabinoid system of vertebrates a class of cell membrane receptors in the G protein-coupled receptor superfamily. As is typical of G protein-coupled receptors, the cannabinoid receptors contain seven transmembrane spanning domains. Cannabinoid receptors are activated by three major groups of ligands: Endocannabinoids; Phytocannabinoids (plant-derived such as tetrahydrocannabinol (THC) produced by cannabis); Synthetic cannabinoids (such as HU-210). All endocannabinoids and phytocannabinoids are lipophilic. There are two known subtypes of cannabinoid receptors, termed CB1 and CB2. The CB1 receptor is expressed mainly in the brain (central nervous system or "CNS"), but also in the lungs, liver and kidneys. The CB2 receptor is expressed mainly in the immune system, in hematopoietic cells, and in parts of the brain. The protein sequences of CB1 and CB2 receptors are about 44% similar. When only the transmembrane regions of the receptors are considered, amino acid similarity between the two receptor subtypes is approximately 68%. In addition, minor variations in each receptor have been identified. Cannabinoids bind reversibly and stereo-selectively to the cannabinoid receptors. Subtype selective cannabinoids have been developed which theoretically may have advantages for treatment of certain diseases such as obesity. Enzymes involved in biosynthesis/inactivation of endocannabinoids and endocannabinoid signaling in general (involving targets other than CB1/2-type receptors) occur throughout the animal kingdom. Discovery The existence of cannabinoid receptors in the brain was discovered from in vitro studies in the 1980s, with the receptor designated as the cannabinoid receptor type 1 or CB1. The DNA sequence that encodes a G-protein-coupled cannabinoid receptor in the human brain was identified and cloned in 1990. These discoveries led to determination in 1993 of a second brain cannabinoid receptor named cannabinoid receptor type 2 or CB2. A neurotransmitter for a possible endocannabinoid system in the brain and peripheral nervous system, anandamide (from 'ananda', Sanskrit for 'bliss'), was first characterized in 1992, followed by discovery of other fatty acid neurotransmitters that behave as endogenous cannabinoids having a low-to-high range of efficacy for stimulating CB1 receptors in the brain and CB2 receptors in the periphery. Types CB1 Cannabinoid receptor type 1 (CB1) receptors are thought to be one of the most widely expressed Gαi protein-coupled receptors in the brain. One mechanism through which they function is endocannabinoid-mediated depolarization-induced suppression of inhibition, a very common form of retrograde signaling, in which the depolarization of a single neuron induces a reduction in GABA-mediated neurotransmission. Endocannabinoids released from the depolarized post-synaptic neuron bind to CB1 receptors in the pre-synaptic neuron and cause a reduction in GABA release due to limited presynaptic calcium ions entry. They are also found in other parts of the body. For instance, in the liver, activation of the CB1 receptor is known to increase de novo lipogenesis. CB2 CB2 receptors are expressed on T cells of the immune system, on macrophages and B cells, in hematopoietic cells, and in the brain and CNS (2019). They also have a function in keratinocytes. They are also expressed on peripheral nerve terminals. These receptors play a role in antinociception, or the relief of pain. 
In the brain, they are mainly expressed by microglial cells, where their role remains unclear. While the most likely cellular targets and executors of the CB2 receptor-mediated effects of endocannabinoids or synthetic agonists are the immune and immune-derived cells (e.g. leukocytes, various populations of T and B lymphocytes, monocytes/macrophages, dendritic cells, mast cells, microglia in the brain, Kupffer cells in the liver, astrocytes, etc.), the number of other potential cellular targets is expanding, now including endothelial and smooth muscle cells, fibroblasts of various origins, cardiomyocytes, and certain neuronal elements of the peripheral or central nervous systems (2011). Other The existence of additional cannabinoid receptors has long been suspected, due to the actions of compounds such as abnormal cannabidiol that produce cannabinoid-like effects on blood pressure and inflammation, yet do not activate either CB1 or CB2. Recent research strongly supports the hypothesis that the N-arachidonoyl glycine (NAGly) receptor GPR18 is the molecular identity of the abnormal cannabidiol receptor and additionally suggests that NAGly, the endogenous lipid metabolite of anandamide (also known as arachidonoylethanolamide or AEA), initiates directed microglial migration in the CNS through activation of GPR18. Other molecular biology studies have suggested that the orphan receptor GPR55 should in fact be characterised as a cannabinoid receptor, on the basis of sequence homology at the binding site. Subsequent studies showed that GPR55 does indeed respond to cannabinoid ligands. This profile as a distinct non-CB1/CB2 receptor that responds to a variety of both endogenous and exogenous cannabinoid ligands, has led some groups to suggest GPR55 should be categorized as the CB3 receptor, and this re-classification may follow in time. However this is complicated by the fact that another possible cannabinoid receptor has been discovered in the hippocampus, although its gene has not yet been cloned, suggesting that there may be at least two more cannabinoid receptors to be discovered, in addition to the two that are already known. GPR119 has been suggested as a fifth possible cannabinoid receptor, while the PPAR family of nuclear hormone receptors can also respond to certain types of cannabinoid. Signaling Cannabinoid receptors are activated by cannabinoids, generated naturally inside the body (endocannabinoids) or introduced into the body as cannabis or a related synthetic compound. Similar responses are produced when introduced in alternative methods, only in a more concentrated form than what is naturally occurring. After the receptor is engaged, multiple intracellular signal transduction pathways are activated. At first, it was thought that cannabinoid receptors mainly inhibited the enzyme adenylate cyclase (and thereby the production of the second messenger molecule cyclic AMP), and positively influenced inwardly rectifying potassium channels (=Kir or IRK). However, a much more complex picture has appeared in different cell types, implicating other potassium ion channels, calcium channels, protein kinase A and C, Raf-1, ERK, JNK, p38, c-fos, c-jun and many more. For example, in human primary leukocytes CB2 displays a complex signalling profile, activating adenylate cyclase via stimulatory Gαs alongside the classical Gαi signalling, and induces ERK, p38 and pCREB pathways. 
Separation between the therapeutically undesirable psychotropic effects, and the clinically desirable ones, however, has not been reported with agonists that bind to cannabinoid receptors. THC, as well as the two major endogenous compounds identified so far that bind to the cannabinoid receptors —anandamide and 2-arachidonylglycerol (2-AG)— produce most of their effects by binding to both the CB1 and CB2 cannabinoid receptors. While the effects mediated by CB1, mostly in the central nervous system, have been thoroughly investigated, those mediated by CB2 are not equally well defined. Prenatal cannabis exposure (PCE) has been shown to perturb the fetal endogenous cannabinoid signaling system. This perturbation has not been shown to directly affect neurodevelopment nor cause lifelong cognitive, behavioral, or functional abnormalities, but it may predispose offspring to abnormalities in cognition and altered emotionality from post-natal factors. Additionally, PCE may alter the wiring of brain circuitry in foetal development and cause significant molecular modifications to neurodevelopmental programs that may lead to neurophysiological disorders and behavioural abnormalities. Cannabinoid treatments Synthetic tetrahydrocannabinol (THC) is prescribed under the INN dronabinol or the brand name Marinol, to treat vomiting and for enhancement of appetite, mainly in people with AIDS as well as for refractory nausea and vomiting in people undergoing chemotherapy. Use of synthetic THC is becoming more common as the known benefits become more prominent within the medical industry. THC is also an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms. Ligands Binding affinity and selectivity of cannabinoid ligands: See also Cannabinoid receptor antagonist Endocannabinoid enhancer Endocannabinoid reuptake inhibitor Cannabidiol Effects of cannabis References External links G protein-coupled receptors
Cannabinoid receptor
Chemistry
2,060
1,091,018
https://en.wikipedia.org/wiki/Gas%20electron%20diffraction
Gas electron diffraction (GED) is one of the applications of electron diffraction techniques. The target of this method is the determination of the structure of gaseous molecules, i.e., the geometrical arrangement of the atoms from which a molecule is built up. GED is one of two experimental methods (besides microwave spectroscopy) to determine the structure of free molecules, undistorted by intermolecular forces, which are omnipresent in the solid and liquid state. The determination of accurate molecular structures by GED studies is fundamental for an understanding of structural chemistry. Introduction Diffraction occurs because the wavelength of electrons accelerated by a potential of a few thousand volts is of the same order of magnitude as internuclear distances in molecules. The principle is the same as that of other electron diffraction methods such as LEED and RHEED, but the obtainable diffraction pattern is considerably weaker than those of LEED and RHEED because the density of the target is about one thousand times smaller. Since the orientation of the target molecules relative to the electron beams is random, the internuclear distance information obtained is one-dimensional. Thus only relatively simple molecules can be completely structurally characterized by electron diffraction in the gas phase. It is possible to combine information obtained from other sources, such as rotational spectra, NMR spectroscopy or high-quality quantum-mechanical calculations with electron diffraction data, if the latter are not sufficient to determine the molecule's structure completely. The total scattering intensity in GED is given as a function of the momentum transfer, which is defined as the difference between the wave vector of the incident electron beam and that of the scattered electron beam and has the reciprocal dimension of length. The total scattering intensity is composed of two parts: the atomic scattering intensity and the molecular scattering intensity. The former decreases monotonically and contains no information about the molecular structure. The latter has sinusoidal modulations as a result of the interference of the scattering spherical waves generated by the scattering from the atoms included in the target molecule. The interferences reflect the distributions of the atoms composing the molecules, so the molecular structure is determined from this part. Experiment Figure 1 shows a drawing and a photograph of an electron diffraction apparatus. Scheme 1 shows the schematic procedure of an electron diffraction experiment. A fast electron beam is generated in an electron gun, enters a diffraction chamber typically at a vacuum of 10−7 mbar. The electron beam hits a perpendicular stream of a gaseous sample effusing from a nozzle of a small diameter (typically 0.2 mm). At this point, the electrons are scattered. Most of the sample is immediately condensed and frozen onto the surface of a cold trap held at -196 °C (liquid nitrogen). The scattered electrons are detected on the surface of a suitable detector in a well-defined distance to the point of scattering. The scattering pattern consists of diffuse concentric rings (see Figure 2). The steep decent of intensity can be compensated for by passing the electrons through a fast rotation sector (Figure 3). This is cut in a way, that electrons with small scattering angles are more shadowed than those at wider scattering angles. 
The detector can be a photographic plate, an electron imaging plate (usual technique today) or other position sensitive devices such as hybrid pixel detectors (future technique). The intensities generated from reading out the plates or processing intensity data from other detectors are then corrected for the sector effect. They are initially a function of distance between primary beam position and intensity, and then converted into a function of scattering angle. The so-called atomic intensity and the experimental background are subtracted to give the final experimental molecular scattering intensities as a function of s (the change of momentum). These data are then processed by suitable fitting software like UNEX for refining a suitable model for the compound and to yield precise structural information in terms of bond lengths, angles and torsional angles. Theory GED can be described by scattering theory. The outcome if applied to gases with randomly oriented molecules is provided here in short: Scattering occurs at each individual atom (), but also at pairs (also called molecular scattering) (), or triples (), of atoms. is the scattering variable or change of electron momentum, and its absolute value is defined as with being the electron wavelength defined above, and being the scattering angle. The above-mentioned contributions of scattering add up to the total scattering where is the experimental background intensity, which is needed to describe the experiment completely. The contribution of individual atom scattering is called atomic scattering and easy to calculate: with , being the distance between the point of scattering and the detector, being the intensity of the primary electron beam, and being the scattering amplitude of the i-th atom. In essence, this is a summation over the scattering contributions of all atoms independent of the molecular structure. is the main contribution and easily obtained if the atomic composition of the gas (sum formula) is known. The most interesting contribution is the molecular scattering, because it contains information about the distance between all pairs of atoms in a molecule (bonded or non-bonded): with being the parameter of main interest: the atomic distance between two atoms, being the mean square amplitude of vibration between the two atoms, the anharmonicity constant (correcting the vibration description for deviations from a purely harmonic model), and is a phase factor, which becomes important if a pair of atoms with very different nuclear charge is involved. The first part is similar to the atomic scattering, but contains two scattering factors of the involved atoms. Summation is performed over all atom pairs. is negligible in most cases and not described here in more detail. is mostly determined by fitting and subtracting smooth functions to account for the background contribution. So it is the molecular scattering intensity that is of interest, and this is obtained by calculation all other contributions and subtracting them from the experimentally measured total scattering function. Results Figure 5 shows two typical examples of results. The molecular scattering intensity curves are used to refine a structural model by means of a least squares fitting program. This yield precise structural information. The Fourier transformation of the molecular scattering intensity curves gives the radial distribution curves (RDC). These represent the probability to find a certain distance between two nuclei of a molecule. 
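As a rough numerical illustration of the damped-sine behaviour of the molecular scattering term and of the radial distribution curve obtained from it, the following Python sketch uses the simplified textbook form sin(sr)/(sr) * exp(-u^2 s^2 / 2) for a single atom pair, with the P-P distance and vibrational amplitude quoted for P4 just below. The chosen s-range, the omission of atomic scattering factors and phase terms, and the crude sine transform are simplifying assumptions, not the article's exact expressions.

import numpy as np

r, u = 2.1994, 0.056                    # P-P distance and vibrational amplitude, in angstrom
s = np.linspace(2.0, 30.0, 3000)        # momentum transfer grid, in 1/angstrom
sM = np.sin(s * r) / (s * r) * np.exp(-0.5 * (u * s) ** 2)   # damped sine modulation

# Crude radial distribution curve: sine transform of the modulation over the measured s-range.
rr = np.linspace(0.5, 4.0, 700)
rdc = (sM[None, :] * np.sin(np.outer(rr, s))).sum(axis=1) * (s[1] - s[0])
print("RDC maximum near r =", round(float(rr[np.argmax(rdc)]), 2), "angstrom")  # close to 2.2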
The curves below the RDC represent the difference between the experiment and the model, i.e. the quality of the fit. The very simple example in Figure 5 shows the results for evaporated white phosphorus, P4. It is a perfectly tetrahedral molecule and thus has only one P-P distance. This makes the molecular scattering intensity curve very simple: a sine curve damped by molecular vibration. The radial distribution curve (RDC) shows a maximum at 2.1994 Å with a least-squares error of 0.0003 Å, represented as 2.1994(3) Å. The width of the peak represents the molecular vibration and is the result of Fourier transformation of the damping part. This peak width means that the P-P distance varies through this vibration within a certain range, given as a vibrational amplitude u, in this example uT(P‒P) = 0.0560(5) Å. The slightly more complicated molecule P3As has two different distances, P-P and P-As. Because their contributions overlap in the RDC, the peak is broader (also seen as a more rapid damping of the molecular scattering). The determination of these two independent parameters is more difficult and results in less precise parameter values than for P4. Some other selected examples of important contributions to the structural chemistry of molecules are provided here: Structure of diborane B2H6 Structure of the planar trisilylamine Determinations of the structures of gaseous elemental phosphorus P4 and of the binary P3As Determination of the structure of C60 and C70 Structure of tetranitromethane Absence of local C3 symmetry in the simplest phosphonium ylide H2C=PMe3 and in amino-phosphanes like P(NMe2)3 and ylides H2C=P(NMe2)3 Determination of intramolecular London dispersion interaction effects on gas-phase and solid-state structures of diamondoid dimers Links http://molwiki.org/wiki/Main_Page – A free encyclopaedia, mainly focused on molecular structure and dynamics. The story of gas-phase electron diffraction (GED) in Norway References Diffraction
Gas electron diffraction
Physics,Chemistry,Materials_science
1,770
1,576,209
https://en.wikipedia.org/wiki/Variadic%20function
In mathematics and in computer programming, a variadic function is a function of indefinite arity, i.e., one which accepts a variable number of arguments. Support for variadic functions differs widely among programming languages. The term variadic is a neologism, dating back to 1936–1937. The term was not widely used until the 1970s. Overview There are many mathematical and logical operations that come across naturally as variadic functions. For instance, the summing of numbers or the concatenation of strings or other sequences are operations that can be thought of as applicable to any number of operands (even though formally in these cases the associative property is applied). Another operation that has been implemented as a variadic function in many languages is output formatting. The C function and the Common Lisp function are two such examples. Both take one argument that specifies the formatting of the output, and any number of arguments that provide the values to be formatted. Variadic functions can expose type-safety problems in some languages. For instance, C's , if used incautiously, can give rise to a class of security holes known as format string attacks. The attack is possible because the language support for variadic functions is not type-safe: it permits the function to attempt to pop more arguments off the stack than were placed there, corrupting the stack and leading to unexpected behavior. As a consequence of this, the CERT Coordination Center considers variadic functions in C to be a high-severity security risk. In functional programming languages, variadics can be considered complementary to the apply function, which takes a function and a list/sequence/array as arguments, and calls the function with the arguments supplied in that list, thus passing a variable number of arguments to the function. In the functional language Haskell, variadic functions can be implemented by returning a value of a type class ; if instances of are a final return value and a function , this allows for any number of additional arguments . A related subject in term rewriting research is called hedges, or hedge variables. Unlike variadics, which are functions with arguments, hedges are sequences of arguments themselves. They also can have constraints ('take no more than 4 arguments', for example) to the point where they are not variable-length (such as 'take exactly 4 arguments') - thus calling them variadics can be misleading. However they are referring to the same phenomenon, and sometimes the phrasing is mixed, resulting in names such as variadic variable (synonymous to hedge). Note the double meaning of the word variable and the difference between arguments and variables in functional programming and term rewriting. For example, a term (function) can have three variables, one of them a hedge, thus allowing the term to take three or more arguments (or two or more if the hedge is allowed to be empty). Examples In C To portably implement variadic functions in the C language, the standard header file is used. The older header has been deprecated in favor of . In C++, the header file is used. #include <stdarg.h> #include <stdio.h> double average(int count, ...) { va_list ap; int j; double sum = 0; va_start(ap, count); /* Before C23: Requires the last fixed parameter (to get the address) */ for (j = 0; j < count; j++) { sum += va_arg(ap, int); /* Increments ap to the next argument. 
*/ } va_end(ap); return sum / count; } int main(int argc, char const *argv[]) { printf("%f\n", average(3, 1, 2, 3)); return 0; } This will compute the average of an arbitrary number of arguments. Note that the function does not know the number of arguments or their types. The above function expects that the types will be , and that the number of arguments is passed in the first argument (this is a frequent usage but by no means enforced by the language or compiler). In some other cases, for example printf, the number and types of arguments are figured out from a format string. In both cases, this depends on the programmer to supply the correct information. (Alternatively, a sentinel value like or may be used to indicate the end of the parameter list.) If fewer arguments are passed in than the function believes, or the types of arguments are incorrect, this could cause it to read into invalid areas of memory and can lead to vulnerabilities like the format string attack. Depending on the system, even using as a sentinel may encounter such problems; or a dedicated null pointer of the correct target type may be used to avoid them. declares a type, , and defines four macros: , , , and . Each invocation of and must be matched by a corresponding invocation of . When working with variable arguments, a function normally declares a variable of type ( in the example) that will be manipulated by the macros. takes two arguments, a object and a reference to the function's last parameter (the one before the ellipsis; the macro uses this to get its bearings). In C23, the second argument will no longer be required and variadic functions will no longer need a named parameter before the ellipsis. It initialises the object for use by or . The compiler will normally issue a warning if the reference is incorrect (e.g. a reference to a different parameter than the last one, or a reference to a wholly different object), but will not prevent compilation from completing normally. takes two arguments, a object (previously initialised) and a type descriptor. It expands to the next variable argument, and has the specified type. Successive invocations of allow processing each of the variable arguments in turn. Unspecified behavior occurs if the type is incorrect or there is no next variable argument. takes one argument, a object. It serves to clean up. If one wanted to, for instance, scan the variable arguments more than once, the programmer would re-initialise your object by invoking and then again on it. takes two arguments, both of them objects. It clones the second (which must have been initialised) into the first. Going back to the "scan the variable arguments more than once" example, this could be achieved by invoking on a first , then using to clone it into a second . After scanning the variable arguments a first time with and the first (disposing of it with ), the programmer could scan the variable arguments a second time with and the second . needs to also be called on the cloned before the containing function returns. In C# C# describes variadic functions using the keyword. A type must be provided for the arguments, although can be used as a catch-all. At the calling site, you can either list the arguments one by one, or hand over a pre-existing array having the required element type. Using the variadic form is Syntactic sugar for the latter. using System; class Program { static int Foo(int a, int b, params int[] args) { // Return the sum of the integers in args, ignoring a and b. 
int sum = 0; foreach (int i in args) sum += i; return sum; } static void Main(string[] args) { Console.WriteLine(Foo(1, 2)); // 0 Console.WriteLine(Foo(1, 2, 3, 10, 20)); // 33 int[] manyValues = new int[] { 13, 14, 15 }; Console.WriteLine(Foo(1, 2, manyValues)); // 42 } } In C++ The basic variadic facility in C++ is largely identical to that in C. The only difference is in the syntax, where the comma before the ellipsis can be omitted. C++ allows variadic functions without named parameters but provides no way to access those arguments since va_start requires the name of the last fixed argument of the function. #include <iostream> #include <cstdarg> void simple_printf(const char* fmt...) // C-style "const char* fmt, ..." is also valid { va_list args; va_start(args, fmt); while (*fmt != '\0') { if (*fmt == 'd') { int i = va_arg(args, int); std::cout << i << '\n'; } else if (*fmt == 'c') { // note automatic conversion to integral type int c = va_arg(args, int); std::cout << static_cast<char>(c) << '\n'; } else if (*fmt == 'f') { double d = va_arg(args, double); std::cout << d << '\n'; } ++fmt; } va_end(args); } int main() { simple_printf("dcff", 3, 'a', 1.999, 42.5); } Variadic templates (parameter pack) can also be used in C++ with language built-in fold expressions. #include <iostream> template <typename... Ts> void foo_print(Ts... args) { ((std::cout << args << ' '), ...); } int main() { std::cout << std::boolalpha; foo_print(1, 3.14f); // 1 3.14 foo_print("Foo", 'b', true, nullptr); // Foo b true nullptr } The CERT Coding Standards for C++ strongly prefers the use of variadic templates (parameter pack) in C++ over the C-style variadic function due to a lower risk of misuse. In Go Variadic functions in Go can be called with any number of trailing arguments. is a common variadic function; it uses an empty interface as a catch-all type. package main import "fmt" // This variadic function takes an arbitrary number of ints as arguments. func sum(nums ...int) { fmt.Print("The sum of ", nums) // Also a variadic function. total := 0 for _, num := range nums { total += num } fmt.Println(" is", total) // Also a variadic function. } func main() { // Variadic functions can be called in the usual way with individual // arguments. sum(1, 2) // "The sum of [1 2] is 3" sum(1, 2, 3) // "The sum of [1 2 3] is 6" // If you already have multiple args in a slice, apply them to a variadic // function using func(slice...) like this. nums := []int{1, 2, 3, 4} sum(nums...) // "The sum of [1 2 3 4] is 10" } Output: The sum of [1 2] is 3 The sum of [1 2 3] is 6 The sum of [1 2 3 4] is 10 In Java As with C#, the type in Java is available as a catch-all. public class Program { // Variadic methods store any additional arguments they receive in an array. // Consequentially, `printArgs` is actually a method with one parameter: a // variable-length array of `String`s. private static void printArgs(String... strings) { for (String string : strings) { System.out.println(string); } } public static void main(String[] args) { printArgs("hello"); // short for printArgs(["hello"]) printArgs("hello", "world"); // short for printArgs(["hello", "world"]) } } In JavaScript JavaScript does not care about types of variadic arguments. function sum(...numbers) { return numbers.reduce((a, b) => a + b, 0); } console.log(sum(1, 2, 3)); // 6 console.log(sum(3, 2)); // 5 console.log(sum()); // 0 It's also possible to create a variadic function using the arguments object, although it is only usable with functions created with the keyword. 
function sum() { return Array.prototype.reduce.call(arguments, (a, b) => a + b, 0); } console.log(sum(1, 2, 3)); // 6 console.log(sum(3, 2)); // 5 console.log(sum()); // 0 In Lua Lua functions may pass varargs to other functions the same way as other values using the keyword. tables can be passed into variadic functions by using, in Lua version 5.2 or higher , or Lua 5.1 or lower . Varargs can be used as a table by constructing a table with the vararg as a value.function sum(...) --... designates varargs local sum=0 for _,v in pairs({...}) do --creating a table with a varargs is the same as creating one with standard values sum=sum+v end return sum end values={1,2,3,4} sum(5,table.unpack(values)) --returns 15. table.unpack should go after any other arguments, otherwise not all values will be passed into the function. function add5(...) return ...+5 --this is incorrect usage of varargs, and will only return the first value provided end entries={} function process_entries() local processed={} for i,v in pairs(entries) do processed[i]=v --placeholder processing code end return table.unpack(processed) --returns all entries in a way that can be used as a vararg end print(process_entries()) --the print function takes all varargs and writes them to stdout separated by newlines In Pascal Pascal is standardized by ISO standards 7185 (“Standard Pascal”) and 10206 (“Extended Pascal”). Neither standardized form of Pascal supports variadic routines, except for certain built-in routines (/ and /, and additionally in /). Nonetheless, dialects of Pascal implement mechanisms resembling variadic routines. Delphi defines an data type that may be associated with the last formal parameter. Within the routine definition the is an , an array of variant records. The member of the aforementioned data type allows inspection of the argument’s data type and subsequent appropriate handling. The Free Pascal Compiler supports Delphi’s variadic routines, too. This implementation, however, technically requires a single argument, that is an . Pascal imposes the restriction that arrays need to be homogenous. This requirement is circumvented by utilizing a variant record. The GNU Pascal defines a real variadic formal parameter specification using an ellipsis (), but as of 2022 no portable mechanism to use such has been defined. Both GNU Pascal and FreePascal allow externally declared functions to use a variadic formal parameter specification using an ellipsis (). In PHP PHP does not care about types of variadic arguments unless the argument is typed. function sum(...$nums): int { return array_sum($nums); } echo sum(1, 2, 3); // 6 And typed variadic arguments: function sum(int ...$nums): int { return array_sum($nums); } echo sum(1, 'a', 3); // TypeError: Argument 2 passed to sum() must be of the type int (since PHP 7.3) In Python Python does not care about types of variadic arguments. def foo(a, b, *args): print(args) # args is a tuple (immutable sequence). foo(1, 2) # () foo(1, 2, 3) # (3,) foo(1, 2, 3, "hello") # (3, "hello") Keyword arguments can be stored in a dictionary, e.g. . In Raku In Raku, the type of parameters that create variadic functions are known as slurpy array parameters and they're classified into three groups: Flattened slurpy These parameters are declared with a single asterisk (*) and they flatten arguments by dissolving one or more layers of elements that can be iterated over (i.e, Iterables). 
sub foo($a, $b, *@args) { say @args.perl; } foo(1, 2) # [] foo(1, 2, 3) # [3] foo(1, 2, 3, "hello") # [3 "hello"] foo(1, 2, 3, [4, 5], [6]); # [3, 4, 5, 6] Unflattened slurpy These parameters are declared with two asterisks (**) and they do not flatten any iterable arguments within the list, but keep the arguments more or less as-is: sub bar($a, $b, **@args) { say @args.perl; } bar(1, 2); # [] bar(1, 2, 3); # [3] bar(1, 2, 3, "hello"); # [3 "hello"] bar(1, 2, 3, [4, 5], [6]); # [3, [4, 5], [6]] Contextual slurpy These parameters are declared with a plus (+) sign and they apply the "single argument rule", which decides how to handle the slurpy argument based upon context. Simply put, if only a single argument is passed and that argument is iterable, that argument is used to fill the slurpy parameter array. In any other case, +@ works like **@ (i.e., unflattened slurpy). sub zaz($a, $b, +@args) { say @args.perl; } zaz(1, 2); # [] zaz(1, 2, 3); # [3] zaz(1, 2, 3, "hello"); # [3 "hello"] zaz(1, 2, [4, 5]); # [4, 5], single argument fills up array zaz(1, 2, 3, [4, 5]); # [3, [4, 5]], behaving as **@ zaz(1, 2, 3, [4, 5], [6]); # [3, [4, 5], [6]], behaving as **@ In Ruby Ruby does not care about types of variadic arguments. def foo(*args) print args end foo(1) # prints `[1]=> nil` foo(1, 2) # prints `[1, 2]=> nil` In Rust Rust does not support variadic arguments in functions. Instead, it uses macros. macro_rules! calculate { // The pattern for a single `eval` (eval $e:expr) => {{ { let val: usize = $e; // Force types to be integers println!("{} = {}", stringify!{$e}, val); } }}; // Decompose multiple `eval`s recursively (eval $e:expr, $(eval $es:expr),+) => {{ calculate! { eval $e } calculate! { $(eval $es),+ } }}; } fn main() { calculate! { // Look ma! Variadic `calculate!`! eval 1 + 2, eval 3 + 4, eval (2 * 3) + 1 } } Rust is able to interact with C's variadic system via a feature switch. As with other C interfaces, the system is considered to Rust. In Scala object Program { // Variadic methods store any additional arguments they receive in an array. // Consequentially, `printArgs` is actually a method with one parameter: a // variable-length array of `String`s. private def printArgs(strings: String*): Unit = { strings.foreach(println) } def main(args: Array[String]): Unit = { printArgs("hello"); // short for printArgs(["hello"]) printArgs("hello", "world"); // short for printArgs(["hello", "world"]) } } In Swift Swift cares about the type of variadic arguments, but the catch-all type is available. func greet(timeOfTheDay: String, names: String...) { // here, names is [String] print("Looks like we have \(names.count) people") for name in names { print("Hello \(name), good \(timeOfTheDay)") } } greet(timeOfTheDay: "morning", names: "Joseph", "Clara", "William", "Maria") // Output: // Looks like we have 4 people // Hello Joseph, good morning // Hello Clara, good morning // Hello William, good morning // Hello Maria, good morning In Tcl A Tcl procedure or lambda is variadic when its last argument is : this will contain a list (possibly empty) of all the remaining arguments. This pattern is common in many other procedure-like methods. 
proc greet {timeOfTheDay args} { puts "Looks like we have [llength $args] people" foreach name $args { puts "Hello $name, good $timeOfTheDay" } } greet "morning" "Joseph" "Clara" "William" "Maria" # Output: # Looks like we have 4 people # Hello Joseph, good morning # Hello Clara, good morning # Hello William, good morning # Hello Maria, good morning See also Varargs in Java programming language Variadic macro (C programming language) Variadic template Notes References External links Variadic function. Rosetta Code task showing the implementation of variadic functions in over 120 programming languages. Variable Argument Functions — A tutorial on Variable Argument Functions for C++ GNU libc manual Subroutines Programming language comparisons Articles with example C code Articles with example C++ code Articles with example C Sharp code Articles with example Haskell code Articles with example Java code Articles with example JavaScript code Articles with example Pascal code Articles with example Perl code Articles with example Python (programming language) code Articles with example Ruby code Articles with example Rust code Articles with example Swift code Articles with example Tcl code
Variadic function
Technology
5,161
724,958
https://en.wikipedia.org/wiki/Thorium%20dioxide
Thorium dioxide (ThO2), also called thorium(IV) oxide, is a crystalline solid, often white or yellow in colour. Also known as thoria, it is mainly a by-product of lanthanide and uranium production. Thorianite is the name of the mineralogical form of thorium dioxide. It is moderately rare and crystallizes in an isometric system. The melting point of thorium oxide is 3300 °C – the highest of all known oxides. Only a few elements (including tungsten and carbon) and a few compounds (including tantalum carbide) have higher melting points. All thorium compounds, including the dioxide, are radioactive because there are no stable isotopes of thorium. Structure and reactions Thoria exists as two polymorphs. One has a fluorite crystal structure. This is uncommon among binary dioxides. (Other binary oxides with fluorite structure include cerium dioxide, uranium dioxide and plutonium dioxide.) The band gap of thoria is about 6 eV. A tetragonal form of thoria is also known. Thorium dioxide is more stable than thorium monoxide (ThO). Only with careful control of reaction conditions can oxidation of thorium metal give the monoxide rather than the dioxide. At extremely high temperatures, the dioxide can convert to the monoxide either by a disproportionation reaction (equilibrium with liquid thorium metal) above or by simple dissociation (evolution of oxygen) above . Applications Nuclear fuels Thorium dioxide (thoria) can be used in nuclear reactors as ceramic fuel pellets, typically contained in nuclear fuel rods clad with zirconium alloys. Thorium is not fissile (but is "fertile", breeding fissile uranium-233 under neutron bombardment); hence, it must be used as a nuclear reactor fuel in conjunction with fissile isotopes of either uranium or plutonium. This can be achieved by blending thorium with uranium or plutonium, or using it in its pure form in conjunction with separate fuel rods containing uranium or plutonium. Thorium dioxide offers advantages over conventional uranium dioxide fuel pellets, because of its higher thermal conductivity (lower operating temperature), considerably higher melting point, and chemical stability (does not oxidize in the presence of water/oxygen, unlike uranium dioxide). Thorium dioxide can be turned into a nuclear fuel by breeding it into uranium-233 (see below and refer to the article on thorium for more information on this). The high thermal stability of thorium dioxide allows applications in flame spraying and high-temperature ceramics. Alloys Thorium dioxide is used as a stabilizer in tungsten electrodes in TIG welding, electron tubes, and aircraft gas turbine engines. As an alloy, thoriated tungsten metal is not easily deformed because the high-fusion material thoria augments the high-temperature mechanical properties, and thorium helps stimulate the emission of electrons (thermions). It is the most popular oxide additive because of its low cost, but is being phased out in favor of non-radioactive elements such as cerium, lanthanum and zirconium. Thoria-dispersed nickel finds its applications in various high-temperature operations like combustion engines because it is a good creep-resistant material. It can also be used for hydrogen trapping. Catalysis Thorium dioxide has almost no value as a commercial catalyst, but such applications have been well investigated. It is a catalyst in the Ruzicka large ring synthesis. Other applications that have been explored include petroleum cracking, conversion of ammonia to nitric acid and preparation of sulfuric acid. 
Radiocontrast agents Thorium dioxide was the primary ingredient in Thorotrast, a once-common radiocontrast agent used for cerebral angiography; however, it causes a rare form of cancer (hepatic angiosarcoma) many years after administration. This use was replaced with injectable iodine or ingestible barium sulfate suspension as standard X-ray contrast agents. Lamp mantles Another major use in the past was in the gas mantles of lanterns developed by Carl Auer von Welsbach in 1890, which are composed of 99% ThO2 and 1% cerium(IV) oxide. Even as late as the 1980s it was estimated that about half of all ThO2 produced (several hundred tonnes per year) was used for this purpose. Some mantles still use thorium, but yttrium oxide (or sometimes zirconium oxide) is increasingly used as a replacement. Glass manufacture When added to glass, thorium dioxide helps increase its refractive index and decrease dispersion. Such glass finds application in high-quality lenses for cameras and scientific instruments. The radiation from these lenses can darken them and turn them yellow over a period of years and can degrade film, but the health risks are minimal. Yellowed lenses may be restored to their original colourless state by lengthy exposure to intense ultraviolet radiation. Thorium dioxide has since been replaced by rare-earth oxides such as lanthanum oxide in almost all modern high-index glasses, as they provide similar effects and are not radioactive. References Cited sources Hepatotoxins Oxides Thorium(IV) compounds Refractory materials Fluorite crystal structure
Thorium dioxide
Physics,Chemistry
1,090
67,417,515
https://en.wikipedia.org/wiki/Land%20Art%20Generator%20Initiative
Land Art Generator Initiative (LAGI), founded by Elizabeth Monoian and Robert Ferry, is an organization dedicated to devising alternative energy solutions through sustainable design and public art, providing platforms for scientists and engineers to collaborate with artists, architects and other creatives on public art projects that generate sustainable energy infrastructure. Since 2010, LAGI has hosted biennial international competitions inviting artists to design public art that produces renewable energy. Sites for these contests have included Abu Dhabi, United Arab Emirates, Copenhagen, Denmark, New York City and Santa Monica, California. Land Art Generator Initiative also led efforts that resulted in the world's first Solar Mural artworks. LAGI International Design Competitions Every two years since 2010, Land Art Generator Initiative has conducted international competitions, drawing design teams from over forty countries to create art-based solutions to renewable energy challenges. 2010 - Abu Dhabi, United Arab Emirates 2012 - Freshkills Park, New York 2014 - Copenhagen, Denmark 2016 - Santa Monica, California 2018 - Melbourne, Australia 2019 - Abu Dhabi, United Arab Emirates 2020 - Fly Ranch, Nevada 2022 - Mannheim, Germany Solar Mural Artworks The world's first Solar Mural artworks, developed through leadership from the Land Art Generator Initiative, are located in San Antonio, Texas. These artworks are the result of an advanced photovoltaic film technology that allows light to filter through an image-printed film adhered to solar panels. The first is a stand-alone work called La Monarca. The world's first wall-mounted Solar Mural artwork is on the facade of Brackenridge Elementary School. References Energy organizations Sustainable design Public art
Land Art Generator Initiative
Engineering
328
43,402,512
https://en.wikipedia.org/wiki/Bondage%20number
In the mathematical field of graph theory, the bondage number of a nonempty graph is the cardinality of the smallest set of edges such that the domination number of the graph with the edges removed is strictly greater than the domination number of the original graph. The concept was introduced by Fink et al. References Graph invariants
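As a concrete illustration of the definition, here is a brute-force Python sketch; the function names and the 4-cycle example graph are illustrative choices rather than anything from the article, and the search is exponential, so it is only suitable for very small graphs.

from itertools import combinations

def domination_number(vertices, edges):
    # Smallest dominating set size, found by exhaustive search over vertex subsets.
    neighbours = {v: {v} for v in vertices}      # closed neighbourhoods
    for u, w in edges:
        neighbours[u].add(w)
        neighbours[w].add(u)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            covered = set()
            for v in subset:
                covered |= neighbours[v]
            if covered == set(vertices):
                return k

def bondage_number(vertices, edges):
    # Smallest number of edges whose removal strictly increases the domination number.
    base = domination_number(vertices, edges)
    for k in range(1, len(edges) + 1):
        for removed in combinations(edges, k):
            remaining = [e for e in edges if e not in removed]
            if domination_number(vertices, remaining) > base:
                return k

# Example: the 4-cycle C4 has domination number 2 and bondage number 3.
vs = [0, 1, 2, 3]
es = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(domination_number(vs, es), bondage_number(vs, es))   # 2 3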
Bondage number
Mathematics
68
13,730,623
https://en.wikipedia.org/wiki/Armoured%20flight%20deck
An armoured flight deck is an aircraft carrier flight deck that incorporates substantial armour in its design. Comparison is often made between the carrier designs of the Royal Navy (RN) and the United States Navy (USN). The two navies followed differing philosophies in the use of armour on carrier flight decks, starting with the design of the RN's and ending with the design of the , when the USN also adopted armoured flight decks. The two classes most easily compared are the RN's Illustrious class and and their nearest USN contemporaries, the and es. The Illustrious class followed the Yorktown but preceded the Essex, while the Implacable-class design predated the Essex but these ships were completed after the lead ships of the Essex class. The development of armoured flight deck carriers proceeded during World War II, and before the end of World War II both the USN, with , and the Imperial Japanese Navy (IJN), with and would also commission armoured flight deck carriers, while all USN fleet aircraft carriers built since 1945 feature armoured flight decks. The remainder of the IJN carrier force during World War II had unarmoured flight decks just like the Yorktown and Essex classes of the USN. Design In choosing the best design for their carriers, the British had to consider the advantages and disadvantages of hangar design. There was a choice between open or closed hangar and the position of the armour. The placing of the strongest deck affected the strength of the hull. The further apart the deck and the keel, the stronger the design. If the flight deck was placed above the main deck then it had to be built to allow for movement with expansion sections. A closed hangar design was the strongest structurally and made for a lighter hull. The RN carried this concept one step further and designed the armoured flight deck to also act as the strength deck without any underlying plating, thus achieving an armoured flight deck on the lowest possible displacement. The carriers that were built with armoured decks fall into two distinct types – those with armour at the flight deck level protecting the hangar and those that only had armour for the lower levels of the ship, typically the hangar deck. The different thickness of armour, and how they were distributed, are described in the table below. Theory Armour at the flight deck level would protect the hangar deck and the aircraft stored there from most bombs. The armour of the Illustrious class was intended to protect against 1,000 pound bombs. In the Illustrious class, the armoured flight deck extended for about two-thirds of the length of the ship, bounded by the two aircraft lifts (which were without the armour). The deck was closed by armoured sides and bulkheads, forming an armoured box. The bulkheads had sliding armoured portals to allow access between the hangar and the aircraft lift. There were lateral strakes of main deck armour that extended from the base of the hangar side-wall to the top of the main side belt. The latter protected the machinery, magazines and aircraft fuel and weaponry stores. The RN's closed and armoured hangars were capable of being environmentally sealed for protection against chemical weapon attack. The armoured design meant that it would have to be attacked with Armour Piercing (AP) bombs, which have much less blast effect than higher-capacity General Purpose (GP) bombs carrying about twice the explosive amount. 
GP bombs also caused severe hull damage if they exploded in the water close to the hull; AP bombs, much less so. The USN open hangar design allowed large numbers of aircraft to be warmed up while inside, theoretically reducing the time required to range and launch a strike, but storage of fuelled and armed aircraft in an unarmoured hangar was extremely dangerous: During the war, the British fitted immersion heaters to the oil tanks of their aircraft so minimal warm-up was required when they reached the flight deck. American carriers after the Lexington-class, and the earlier Japanese carriers, had their armour placed at the hangar deck, essentially treating the hangar spaces and flight deck as superstructure – making these areas very vulnerable to the blast from GP bombs and other explosions, which in turn caused massive casualties in comparison to RN designs. A bomb that struck the flight deck would likely penetrate and explode in the hangar deck, but the armour there could still protect the ship's vitals – including the engine spaces and fuel storage. The flight deck could also possibly fuze light bombs prematurely, which would reduce the chance of them going through the hangar deck. Such a design allowed for larger, open-sided hangar bays (improving ventilation but making the ship very vulnerable to chemical weapon attack) and the installation of deck-edge elevators. USN carriers with hangar deck armour only usually had wooden decking over thin mild steel flight decks which were easy to repair. The USN moved the structural strength deck to the flight deck, starting with the which had "...an enclosed..." hangar. Aviation fuel delivery and stowage systems were extremely vulnerable. The Royal Navy stowed aviation fuel in cylindrical tanks, that in turn were surrounded by seawater. RN aviation fuel lines were purged with carbon dioxide when not in use. The USN used a similar system, which was further improved after the two navies began exchanging information in 1940. Pre-war USN and IJN carrier designs used a fuel stowage system which was not as secure as that used by the RN, but allowed for much greater stowage capacity. Several USN and IJN carriers were lost due to aviation gas fume explosions. Doctrine and design The Royal Navy had to be ready to fight a war in the confines of the North Sea and Mediterranean Sea, under the umbrella of land-based enemy air forces. The Royal Navy, with its extensive network of bases and colonies in the Pacific Ocean, had also to be ready to fight in the vast expanses of the Pacific, as did the USN and the IJN, but the USN and IJN did not have to worry about operating in the Mediterranean. The differences in construction were determined by doctrine that was largely driven by the different approaches to the same tactical problem: How to destroy the enemy's aircraft carriers while surviving the inevitable counter strike. Prior to WWII the RN and USN both recognised that the dive bomber could disable the flight decks of enemy aircraft carriers: The RN was thus faced with designing a carrier that would be survivable under the conditions to be expected in the Atlantic, Mediterranean, and Pacific Oceans, and before the development of effective naval radar; these conflicting demands resulted in the development of aircraft carriers whose decks were armoured against 500 lb armour piercing bombs and 1000 lb general-purpose bombs. 
The RN considered that an unarmoured carrier would be unlikely to be able to fly off more than one deck load of strike aircraft prior to being attacked, so the armoured flight deck carriers accepted a reduction in hangar capacity to the equivalent of one deck load of aircraft. USN, IJN, and some RN fleet carriers such as Ark Royal had sufficient aircraft capacity to allow for two ranges, each equal to a full deck load of strike aircraft. The RN and IJN limited their aircraft carriers' capacity to the capacity of their hangars, and struck down all aircraft between operations. The USN typically used a permanent deck park to augment the capacity of their aircraft carriers' hangars. The use of a permanent deck park appeared to give USN carriers a much larger aircraft capacity than contemporary RN armoured flight deck carriers. The flight deck armour also reduced the length of the flight deck, reducing the maximum aircraft capacity of the armoured flight deck carrier, but the largest part of the disparity between RN and USN carriers in aircraft capacity was due to the use of a permanent deck park on USN carriers. The Royal Navy also had the disadvantage of entering World War II pitted against large, land-based air forces whose aircraft had superior performance to all existing naval aircraft, while the RAF's increased demand for high-performance land-based aircraft after the Fall of France actually reduced the production and development of Fleet Air Arm aircraft. On the other hand, the RN rapidly introduced new technologies, such as radar, which enhanced the defensive capability of aircraft carriers. The RN thus had to develop new operational doctrines during the war. The USN, in contrast, was able to benefit from technology transfers from the UK and the wartime experiences of the RN, which were freely shared with the USN prior to its entry into the war, allowing it to anticipate the changes needed to prepare its carriers for the coming conflict with Japan. The USN designed the armoured flight decks of the Midway-class carriers based upon an analysis of the effectiveness of RN armoured flight decks. The IJN also benefited from being able to observe the effectiveness of RN aircraft carriers in action, while both the USN and IJN were able to introduce new aircraft types prior to their entry into World War II. Aircraft restrictions All RN fleet carriers had hangar heights, except the two Implacable-class ships, which had heights, and Indomitable, which had a lower hangar and an upper hangar. The Illustrious class had a single high hangar that was long. Within the confines of ship design, and the Second London Naval Treaty with which they complied, the Indomitable and Implacable-class carriers had to accept a reduction in hangar heights (to keep the metacentric height within acceptable limits without exceeding the treaty restrictions on overall displacement) and size, and as a result had some restriction on aircraft types supplied via Lend-Lease. IJN carriers typically had high hangars, including Taihō and Shinano. The USN had hangar heights, while the Yorktown, Wasp, Essex, and Midway classes had hangar heights. Defences The British approach of armoured flight decks was meant as an effective form of passive defence against bombs and kamikaze attacks that actually struck their carriers, while the American carriers primarily relied on fighters to prevent the carriers from being hit in the first place. 
In addition, RN carriers such as Ark Royal or Illustrious had far heavier anti-aircraft (AA) outfits than their USN counterparts, up to the introduction of the USN Essex-class carriers. Ark Royal, in 1940, carried 16 x 4.5-inch guns, 32 x 40 mm "pom-pom" and 32 x 0.5-inch Vickers machine guns, against 8 x 5-inch, 16 x 28 mm and 24 x 0.5-inch guns for Enterprise in 1940. "In wartime, however, the US Navy found the armoured carriers fascinating. After having examined HMS Formidable in 1940, the US naval attaché commented that, were he crossing the Pacific, he would prefer her to a Yorktown, the closest US equivalent, on the basis that she might carry fewer aircraft, but she would be much more likely to get there". Late in the war, when the USN operated many carriers together and had improved radar, their fighter and AA defence was reasonably effective, yet both conventional and kamikaze attacks were still able to penetrate USN defences; Bunker Hill and Franklin nearly succumbed to them in 1945. The larger air groups (80–110 planes, vs. 52–81 for late-war British ships of the Implacable class) allowed for a more effective combat air patrol (CAP) without reducing strike capability, improving the protection of the whole battle group and lessening the workload of the carrier escorts. Carrier fighters were able to shoot down far more kamikaze aircraft than any amount of deck armour would have protected against, showing the value of absolute numbers, but in the early war period IJN aircraft had little difficulty in penetrating USN CAPs; near the end of the war, veteran American fighter pilots in superior Grumman F6F Hellcat and F4U Corsair fighters were able to defeat the young, inexperienced and ill-trained kamikaze pilots with ease and run up huge kill scores, but attackers were still able to get through. (In addition to larger aircraft complements, the US Navy had larger fleets and more resources, so it could establish destroyer pickets as part of its "Big blue blanket" defence system and develop dedicated AAW ships such as the antiaircraft cruisers, which would have also drawn attention away from the carriers.) On the surface, the record seems balanced. British naval historian D.K. Brown put the practical difference between American and British design philosophies in no uncertain terms: "More fighters would have been better protection than armour", while noting that British designs were good for the circumstances in which they were meant to be used. Yet even Ark Royal, Britain's newest carrier prior to World War II, never operated close to her theoretical aircraft capacity. Prior to the development of effective radar and high-speed monoplane fighters, a successful fighter defence was extremely unlikely for any navy, which calls D.K. Brown's conclusions into doubt. The benefits of flight deck armour were intended to counter these issues. Fewer aircraft meant a lower priority as a target than the more heavily armed American carriers; the RN's operational doctrine dictated smaller air groups, and the armoured hangar carriers had smaller avgas and ammunition supplies to match. However, RN carriers carried far more aircraft later in the war, making use of the deck park when they adopted USN-style operational doctrine. The second-generation RN armoured carriers, Indomitable and the Implacable class, which had an additional half-length lower hangar, were considerably less outmatched by their USN counterparts in the numbers of aircraft operated. 
The RN, operating in harsher weather, protected its aircraft from the elements and did not use a permanent deck park in the earlier part of the war. Damage analysis US carriers and their fighters shot down more than 1,900 suicide aircraft during Operation Kikusui (the last and largest kamikaze attack in the Okinawa campaign), versus a mere 75 for the British, yet both forces suffered the same number of serious hits (four) on their carriers. However, the kamikazes made 173 strikes against other USN targets, and the four USN carriers suffered a massive death toll, in contrast to the relatively light casualties on the RN carriers. The kamikaze threat overall was serious, but Allied defences neutralised it, and many kamikaze strikes missed the deck armour entirely, or bounced off the decks of both British and American carriers. In some cases, kamikazes either struck glancing blows that did only superficial damage that was fixed within minutes or hours, or missed entirely, due to the poor training and poorer flight experience of their pilots. The majority of kamikazes that did inflict harm caused no more damage than they would have against smaller ships. After a successful kamikaze hit, the British were able to clear the flight deck and resume flight operations in just hours, while their American counterparts often could do the same, but not always; in some cases repairs took a few days or even months. A USN liaison officer aboard one of the British carriers commented: "When a kamikaze hits a U.S. carrier it means 6 months of repair at Pearl [Harbor]. When a kamikaze hits a Limey carrier it's just a case of 'Sweepers, man your brooms.'" American carriers of the Essex class suffered very high casualties from serious kamikaze hits, though all did survive. The ships were most vulnerable during the period just prior to and during the launching of strikes. Early versions of the design also had a unified ventilation system that would transfer smoke and heat from the hangar deck to the lower decks in the event of a fire. While not a kamikaze attack, Franklin was attacked by a dive bomber and struck by two bombs, one semi-armour-piercing (SAP) and one general-purpose (GP), while she had 47 aircraft preparing for a strike on Honshu. Both bombs penetrated into her hangar and set off fuel from ruptured aircraft tanks and the ordnance (GP bombs and Tiny Tim missiles) loaded for a planned ground attack, killing 724 personnel. Bunker Hill was severely damaged by a pair of kamikaze hits during preparations for an attack on Okinawa, which killed 346 men. Each of these USN carriers suffered more casualties than all the British RN armoured carriers combined, illustrating the life-saving features of RN carrier design. Illustrious, which had the highest toll, suffered 126 fatal casualties and 84 wounded when hit by six 1,100 lb bombs on 10 January 1941. The USN studied the superior defensive qualities of Royal Navy armoured carriers, and this analysis is partly revealed in the damage report following the attack of 13 March 1945. Paul Silverstone in US Warships of World War II notes regarding US carriers that 'vast damage was often caused by suicide planes (Kamikaze) crashing through the wooden flight decks into the hangar below', whereas in British carriers 'the steel flight decks showed their worth against kamikaze attacks.' The only Allied carriers lost to deck hits were an American light carrier and an escort carrier. Indeed, many light and escort carriers were unarmoured, with no protection on the hangar or flight deck, and thus they fared poorly against deck hits. 
Postwar analysis What was not discovered until late in the war was that the kamikaze impacts proved to have a long-term effect on the structural integrity of some British carriers. Their postwar life was shortened, as the RN had a surplus of carriers, with many more still under construction in the shipyards. The USN rebuilt carriers such as Franklin, which had been completely gutted and her crew decimated by IJN attacks. Formidable was an excellent example of this: while she weathered a severe kamikaze hit in 1945 which cratered her deck armour, the hit caused severe internal structural damage and permanently warped the hull (damage worsened in a postwar aircraft-handling accident wherein a Vought Corsair rolled off a lift and raked the hangar deck with 20 mm cannon fire, causing a severe fire). Plans to rebuild her as per Victorious were abandoned due to budget cuts, not structural damage, and she lingered in reserve until 1956 before being towed off to the breakers. However, no citation is ever given for this accident, which appears to be a distorted fabrication of Formidable's 18 May 1945 hangar fire; she carried no air group post-war, and never carried the 20 mm cannon-armed Corsair. The Royal Navy planned to rebuild most of the armoured carriers in the early postwar period. Illustrious suffered a similar battering, especially off Malta in 1941 when hit by German dive bombers, and late in the war was limited to 22 knots (41 km/h) because her centreline shaft was disabled by accumulated wartime damage; she spent five years as a training and trials carrier (1948–53) and was disposed of in 1954. Indomitable was completely refitted to like-new condition, only to suffer a severe motor spirit explosion on board, which caused "considerable structural and electrical damage to the ship". Indomitable was refitted between 1948 and 1950, served as flagship of the Home Fleet, and then served a tour of duty in the Mediterranean, where she was damaged by the petrol explosion. She was partially repaired before proceeding under her own power to Queen Elizabeth II's 1953 Coronation Review, before being placed in reserve in 1954. Indomitable was scrapped in 1956. The explosion which occurred on Indomitable's hangar deck, while severe, would also have caused severe casualties and extensive damage to an Essex-class carrier, several of which returned to service after hangar explosions, primarily due to the USN's considerable financial and material resources. The postwar Royal Navy could only afford to rebuild Victorious, and had to abandon plans to rebuild four other armoured carriers due to cost and the need to provide crews to man the newer postwar-built carriers, given reductions in manpower. Another factor is the advantage in resources that the US Navy had over the Royal Navy. The numerous and capacious American yards on the East and West Coasts allowed the US Navy to build and repair carriers at a more leisurely pace while producing ships collectively at a furious rate. The British, with their strained facilities, were forced to rush repairs (indeed, the overloaded British shipyards had forced some vessels to be sent to the US for repairs), and some ships, such as Illustrious, were forced into service even though not fully repaired. The RN was in a state of continual contraction after WWII, and simply did not have the resources or inclination to repair ships that it could no longer man. Midway and Forrestal classes While flight-deck-level armour was eventually adopted by the Americans for the Midway design, the strength deck remained on the hangar level. 
Midway had originally been planned to have a very heavy gun armament (8-inch weapons). The removal of these weapons freed up enough tonnage to add armour at the flight deck level. While this made a great deal of sense from an air group perspective, the Midway ships sat very low in the water for carriers (due to their much greater displacement), certainly much lower than the smaller Essex-class carriers, and had a great deal of difficulty operating in heavy seas. Flight-deck-armoured ships almost universally (except for the Midway class as completed) possessed a hurricane bow, where the bows were sealed up to the flight deck; wartime experience demonstrated that ships with the hurricane bow configuration (also including the American Lexington class) shipped less water than ships with an open bow. Late-life refits to Midway to bulge her hull and improve freeboard instead gave her a dangerously sharp roll, and made flight operations difficult even in moderate seas. This was therefore not repeated on Coral Sea (Franklin D. Roosevelt had been decommissioned years earlier). After the war, most of the Essex-class ships were modified with a hurricane bow, and in the case of Oriskany the wooden flight deck surface was replaced with aluminium for improved resistance against the blast of jet engines, making them appear to have armoured flight decks; in fact, their armour remained at hangar level. The supercarriers of the postwar era, starting with the Forrestal class, considerably longer and wider in the beam than their World War II counterparts, would eventually be forced to move the strength deck up to the flight deck level as a result of their great size; a shallow hull of those dimensions became too impractical to continue. The issue of protection had no influence on the change; the Forrestal class had an armoured flight deck at least 1.5 inches thick. Some of the follow-on classes to the Forrestals also had armoured flight decks; although deck armour is of little to no use against modern anti-ship missiles, it may help limit the damage from flight deck explosions. The experience of World War II caused the USN to change its design policy in favour of armoured flight decks. References External links Armoured aircraft carrier action and damage reports, 1940–1945 Were Armored Flight Decks on British Carriers Worthwhile? DEBUNKING SLADE AND WORTH'S ARMOURED CARRIER ESSAYS CV13 War Damage Report Naval architecture Naval armour Military comparisons Aircraft carriers
Armoured flight deck
Engineering
4,650
489,394
https://en.wikipedia.org/wiki/Schl%C3%A4fli%20symbol
In geometry, the Schläfli symbol is a notation of the form {p,q,r,...} that defines regular polytopes and tessellations. The Schläfli symbol is named after the 19th-century Swiss mathematician Ludwig Schläfli, who generalized Euclidean geometry to more than three dimensions and discovered all their convex regular polytopes, including the six that occur in four dimensions. Definition The Schläfli symbol is a recursive description, starting with {p} for a p-sided regular polygon that is convex. For example, {3} is an equilateral triangle, {4} is a square, {5} a convex regular pentagon, etc. Regular star polygons are not convex, and their Schläfli symbols {p/q} contain irreducible fractions p/q, where p is the number of vertices, and q is their turning number. Equivalently, {p/q} is created from the vertices of {p} by connecting every qth vertex. For example, {5/2} is a pentagram; {5/1} is a pentagon. A regular polyhedron that has q regular p-sided polygon faces around each vertex is represented by {p,q}. For example, the cube has 3 squares around each vertex and is represented by {4,3}. A regular 4-dimensional polytope, with r {p,q} regular polyhedral cells around each edge, is represented by {p,q,r}. For example, a tesseract, {4,3,3}, has 3 cubes, {4,3}, around an edge. In general, a regular polytope {p,q,r,...,y,z} has z {p,q,r,...,y} facets around every peak, where a peak is a vertex in a polyhedron, an edge in a 4-polytope, a face in a 5-polytope, and an (n-3)-face in an n-polytope. Properties A regular polytope has a regular vertex figure. The vertex figure of a regular polytope {p,q,r,...,y,z} is {q,r,...,y,z}. Regular polytopes can have star polygon elements, like the pentagram, with symbol {5/2}, represented by the vertices of a pentagon but connected alternately. The Schläfli symbol can represent a finite convex polyhedron, an infinite tessellation of Euclidean space, or an infinite tessellation of hyperbolic space, depending on the angle defect of the construction. A positive angle defect allows the vertex figure to fold into a higher dimension and loop back into itself as a polytope. A zero angle defect tessellates space of the same dimension as the facets. A negative angle defect cannot exist in ordinary space, but can be constructed in hyperbolic space. Usually, a facet or a vertex figure is assumed to be a finite polytope, but can sometimes itself be considered a tessellation. A regular polytope also has a dual polytope, represented by the Schläfli symbol elements in reverse order. A self-dual regular polytope will have a symmetric Schläfli symbol. In addition to describing Euclidean polytopes, Schläfli symbols can be used to describe spherical polytopes or spherical honeycombs. History and variations Schläfli's work was almost unknown in his lifetime, and his notation for describing polytopes was rediscovered independently by several others. In particular, Thorold Gosset rediscovered the Schläfli symbol which he wrote as |p|q|r|...|z| rather than with brackets and commas as Schläfli did. Gosset's form has greater symmetry, so the number of dimensions is the number of vertical bars, and the symbol exactly includes the sub-symbols for facet and vertex figure. Gosset regarded |p as an operator, which can be applied to |q|...|z| to produce a polytope with p-gonal faces whose vertex figure is |q|...|z|. 
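The angle-defect rule in the Properties paragraph above can be checked mechanically for a polyhedral symbol {p,q}. The short Python sketch below is illustrative only (the function name and output labels are this example's own, not part of any standard notation); it computes the defect left at a vertex where q regular p-gons meet and classifies the symbol as a convex (spherical) polyhedron, a Euclidean tiling, or a hyperbolic tiling.

```python
from fractions import Fraction

def classify_pq(p: int, q: int) -> str:
    """Classify the Schläfli symbol {p,q} by its vertex angle defect.

    The interior angle of a regular p-gon is (p - 2) * 180 / p degrees,
    and q such faces meet at every vertex, so the defect is 360 degrees
    minus the sum of those angles.
    """
    angle_sum = Fraction(q * (p - 2), p) * 180
    defect = 360 - angle_sum
    if defect > 0:
        return "positive defect: spherical (convex polyhedron)"
    if defect == 0:
        return "zero defect: Euclidean tiling"
    return "negative defect: hyperbolic tiling"

for p, q in [(4, 3), (3, 5), (4, 4), (6, 3), (7, 3)]:
    print(f"{{{p},{q}}}: {classify_pq(p, q)}")
# {4,3} and {3,5} are the cube and icosahedron, {4,4} and {6,3} the square
# and hexagonal tilings of the plane, and {7,3} a hyperbolic tiling.
```

Exact rational arithmetic is used so that the zero-defect cases such as {4,4} and {6,3} are detected without floating-point error.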
Cases Symmetry groups Schläfli symbols are closely related to (finite) reflection symmetry groups, which correspond precisely to the finite Coxeter groups and are specified with the same indices, but square brackets instead [p,q,r,...]. Such groups are often named by the regular polytopes they generate. For example, [3,3] is the Coxeter group for reflective tetrahedral symmetry, [3,4] is reflective octahedral symmetry, and [3,5] is reflective icosahedral symmetry. Regular polygons (plane) The Schläfli symbol of a convex regular polygon with p edges is {p}. For example, a regular pentagon is represented by {5}. For nonconvex star polygons, the constructive notation is used, where p is the number of vertices and is the number of vertices skipped when drawing each edge of the star. For example, represents the pentagram. Regular polyhedra (3 dimensions) The Schläfli symbol of a regular polyhedron is {p,q} if its faces are p-gons, and each vertex is surrounded by q faces (the vertex figure is a q-gon). For example, {5,3} is the regular dodecahedron. It has pentagonal (5 edges) faces, and 3 pentagons around each vertex. See the 5 convex Platonic solids, the 4 nonconvex Kepler-Poinsot polyhedra. Topologically, a regular 2-dimensional tessellation may be regarded as similar to a (3-dimensional) polyhedron, but such that the angular defect is zero. Thus, Schläfli symbols may also be defined for regular tessellations of Euclidean or hyperbolic space in a similar way as for polyhedra. The analogy holds for higher dimensions. For example, the hexagonal tiling is represented by {6,3}. Regular 4-polytopes (4 dimensions) The Schläfli symbol of a regular 4-polytope is of the form {p,q,r}. Its (two-dimensional) faces are regular p-gons ({p}), the cells are regular polyhedra of type {p,q}, the vertex figures are regular polyhedra of type {q,r}, and the edge figures are regular r-gons (type {r}). See the six convex regular and 10 regular star 4-polytopes. For example, the 120-cell is represented by {5,3,3}. It is made of dodecahedron cells {5,3}, and has 3 cells around each edge. There is one regular tessellation of Euclidean 3-space: the cubic honeycomb, with a Schläfli symbol of {4,3,4}, made of cubic cells and 4 cubes around each edge. There are also 4 regular compact hyperbolic tessellations including {5,3,4}, the hyperbolic small dodecahedral honeycomb, which fills space with dodecahedron cells. If a 4-polytope's symbol is palindromic (e.g. {3,3,3} or {3,4,3}), its bitruncation will only have truncated forms of the vertex figure as cells. Regular n-polytopes (higher dimensions) For higher-dimensional regular polytopes, the Schläfli symbol is defined recursively as if the facets have Schläfli symbol and the vertex figures have Schläfli symbol . A vertex figure of a facet of a polytope and a facet of a vertex figure of the same polytope are the same: . There are only 3 regular polytopes in 5 dimensions and above: the simplex, {3, 3, 3, ..., 3}; the cross-polytope, {3, 3, ..., 3, 4}; and the hypercube, {4, 3, 3, ..., 3}. There are no non-convex regular polytopes above 4 dimensions. Dual polytopes If a polytope of dimension n≥2 has Schläfli symbol {p1, p2, ..., pn−1} then its dual has Schläfli symbol {pn−1, ..., p2, p1}. If the sequence is palindromic, i.e. the same forwards and backwards, the polytope is self-dual. Every regular polytope in 2 dimensions (polygon) is self-dual. 
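Because the facet, vertex figure and dual of a regular polytope are all read directly off the symbol, the recursive rules above reduce to simple list operations. A minimal Python sketch follows; the helper names are illustrative, not an established API.

```python
def facet(symbol):
    """Facets of {p,q,...,y,z} have symbol {p,q,...,y}: drop the last entry."""
    return symbol[:-1]

def vertex_figure(symbol):
    """The vertex figure of {p,q,...,y,z} is {q,r,...,y,z}: drop the first entry."""
    return symbol[1:]

def dual(symbol):
    """The dual polytope has the Schläfli symbol written in reverse order."""
    return symbol[::-1]

def is_self_dual(symbol):
    """A regular polytope is self-dual when its symbol is palindromic."""
    return symbol == symbol[::-1]

print(dual([4, 3]))              # [3, 4]: the dual of the cube is the octahedron
print(dual([5, 3, 3]))           # [3, 3, 5]: the dual of the 120-cell is the 600-cell
print(vertex_figure([4, 3, 3]))  # [3, 3]: the tetrahedral vertex figure of the tesseract
print(is_self_dual([3, 4, 3]))   # True: the 24-cell is self-dual
```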
Prismatic polytopes Uniform prismatic polytopes can be defined and named as a Cartesian product (with operator "×") of lower-dimensional regular polytopes. In 0D, a point is represented by ( ). Its Coxeter diagram is empty. Its Coxeter notation symmetry is ][. In 1D, a line segment is represented by { }. Its Coxeter diagram is . Its symmetry is [ ]. In 2D, a rectangle is represented as { } × { }. Its Coxeter diagram is . Its symmetry is [2]. In 3D, a p-gonal prism is represented as { } × {p}. Its Coxeter diagram is . Its symmetry is [2,p]. In 4D, a uniform {p,q}-hedral prism is represented as { } × {p,q}. Its Coxeter diagram is . Its symmetry is [2,p,q]. In 4D, a uniform p-q duoprism is represented as {p} × {q}. Its Coxeter diagram is . Its symmetry is [p,2,q]. The prismatic duals, or bipyramids can be represented as composite symbols, but with the addition operator, "+". In 2D, a rhombus is represented as { } + { }. Its Coxeter diagram is . Its symmetry is [2]. In 3D, a p-gonal bipyramid, is represented as { } + {p}. Its Coxeter diagram is . Its symmetry is [2,p]. In 4D, a {p,q}-hedral bipyramid is represented as { } + {p,q}. Its Coxeter diagram is . Its symmetry is [p,q]. In 4D, a p-q duopyramid is represented as {p} + {q}. Its Coxeter diagram is . Its symmetry is [p,2,q]. Pyramidal polytopes containing vertices orthogonally offset can be represented using a join operator, "∨". Every pair of vertices between joined figures are connected by edges. In 2D, an isosceles triangle can be represented as ( ) ∨ { } = ( ) ∨ [( ) ∨ ( )]. In 3D: A digonal disphenoid can be represented as { } ∨ { } = [( ) ∨ ( )] ∨ [( ) ∨ ( )]. A p-gonal pyramid is represented as ( ) ∨ {p}. In 4D: A p-q-hedral pyramid is represented as ( ) ∨ {p,q}. A 5-cell is represented as ( ) ∨ [( ) ∨ {3}] or [( ) ∨ ( )] ∨ {3} = { } ∨ {3}. A square pyramidal pyramid is represented as ( ) ∨ [( ) ∨ {4}] or [( ) ∨ ( )] ∨ {4} = { } ∨ {4}. When mixing operators, the order of operations from highest to lowest is ×, +, ∨. Axial polytopes containing vertices on parallel offset hyperplanes can be represented by the ‖ operator. A uniform prism is {n}‖{n} and antiprism {n}‖r{n}. Extension of Schläfli symbols Polygons and circle tilings A truncated regular polygon doubles in sides. A regular polygon with even sides can be halved. An altered even-sided regular 2n-gon generates a star figure compound, 2{n}. Polyhedra and tilings Coxeter expanded his usage of the Schläfli symbol to quasiregular polyhedra by adding a vertical dimension to the symbol. It was a starting point toward the more general Coxeter diagram. Norman Johnson simplified the notation for vertical symbols with an r prefix. The t-notation is the most general, and directly corresponds to the rings of the Coxeter diagram. Symbols have a corresponding alternation, replacing rings with holes in a Coxeter diagram and h prefix standing for half, construction limited by the requirement that neighboring branches must be even-ordered and cuts the symmetry order in half. A related operator, a for altered, is shown with two nested holes, represents a compound polyhedra with both alternated halves, retaining the original full symmetry. A snub is a half form of a truncation, and a holosnub is both halves of an alternated truncation. Alternations, quarters and snubs Alternations have half the symmetry of the Coxeter groups and are represented by unfilled rings. There are two choices possible on which half of vertices are taken, but the symbol does not imply which one. 
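The prismatic, bipyramidal and pyramidal constructions described earlier in this section compose lower-dimensional symbols with the ×, + and ∨ operators, applied in that order of precedence. The following Python sketch simply builds the composite symbol strings; the constants and helper names are this example's own conventions, not a standard library.

```python
POINT = "( )"    # 0-dimensional point
SEGMENT = "{ }"  # 1-dimensional line segment

def product(a, b):
    """Cartesian product, e.g. a p-gonal prism is { } × {p}."""
    return a + " × " + b

def bipyramid(a, b):
    """Dual (sum) construction, e.g. a p-gonal bipyramid is { } + {p}."""
    return a + " + " + b

def join(apex, base):
    """Join construction, e.g. a p-gonal pyramid is ( ) ∨ {p}."""
    return apex + " ∨ " + base

print(product(SEGMENT, "{5}"))    # { } × {5}: a pentagonal prism
print(product("{4}", "{3}"))      # {4} × {3}: a 4-3 duoprism
print(bipyramid(SEGMENT, "{5}"))  # { } + {5}: a pentagonal bipyramid
print(join(POINT, "{4}"))         # ( ) ∨ {4}: a square pyramid
five_cell = join(POINT, "[" + join(POINT, "{3}") + "]")
print(five_cell)                  # ( ) ∨ [( ) ∨ {3}]: the 5-cell as an iterated join
```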
Quarter forms are shown here with a + inside a hollow ring to imply they are two independent alternations. Altered and holosnubbed Altered and holosnubbed forms have the full symmetry of the Coxeter group, and are represented by double unfilled rings, but may be represented as compounds. ß, looking similar to the Greek letter beta (β), is the German alphabet letter eszett. Polychora and honeycombs Alternations, quarters and snubs Bifurcating families Tessellations Tessellations denoted by extended Schläfli symbols include: Spherical: {2,n}, s{2,2n}, { } ⊕ {n}, t{2,n}, { } + {n}. Regular: {2,∞}, {3,6}, {4,4}, {6,3}. Semi-regular: s{4,4}, e{3,6}, sr{2,∞}, sr{6,3}, rr{6,3}, r{6,3}, t{6,3}, t{2,∞}, tr{6,3}, tr{4,4}. Hyperbolic: regular tilings such as {7,3}, {8,3}, {∞,3}, {4,5}, {5,4}, {5,5}, {6,4}, {6,6}, {8,4}, {8,6}, {8,8} and {∞,∞}, together with their truncated (t), rectified (r), cantellated (rr), omnitruncated (tr), snub (s, sr), and half (h, q) forms. References Sources (Paper 22) pp. 251–278 MR 2,10 (Paper 23) pp. 279–312 (Paper 24) pp. 313–358 External links Polytope notation systems
Schläfli symbol
Mathematics
3,710
68,048,565
https://en.wikipedia.org/wiki/James%20Taiclet
James Donald Taiclet Jr. (born May 13, 1960) is an American business executive who has been the president and chief executive officer (CEO) of Lockheed Martin since June 2020, and chairman since March 2021. Early life and education James Taiclet was born in Pittsburgh on May 13, 1960. His father, James Sr., served in the U.S. Army at the Wiesbaden Air Base in Germany, and later became a boilermaker in Pittsburgh. His mother, Mary Ann (née Foley), was a homemaker and school administrator. Taiclet graduated from the U.S. Air Force Academy in 1982 with a degree in engineering and international relations. While at the academy, Taiclet played on the rugby team, serving as captain during his senior year. Taiclet earned a master's degree in public affairs from Princeton University and holds a fellowship at the Princeton School of Public and International Affairs. Military service From 1985 to 1991, Taiclet was a pilot in the United States Air Force, serving as aircraft commander, instructor pilot and unit chief of standardization and evaluation. During Operation Desert Shield, he flew multiple missions in a Lockheed C-141 Starlifter transport jet. His rotational assignments included the Joint Staff and Air Staff at the Pentagon. Business career Taiclet first worked in the private sector as a management consultant at McKinsey & Co. from July 1991 to February 1996. He then joined Pratt & Whitney as vice president of engine services until 1999, and was then president of Honeywell Aerospace Services until 2001. In 2001, American Tower recruited Taiclet for the role of chief operating officer. He was named chief executive officer of American Tower in October 2003 after the departure of Steven B. Dodge, and was selected as chairman in February 2004. He remained as CEO and on the board of American Tower until 2020. In 2018, Taiclet joined the board of directors of Lockheed Martin. In June 2020, Taiclet was named as CEO of Lockheed Martin, succeeding Marillyn Hewson. He was named chairman of the company in March 2021. On February 16, 2023, China placed two companies, Lockheed Martin Corporation and Raytheon Missiles & Defense, on its unreliable entities list for selling arms to Taiwan, banning them from engaging in China-related import or export activities and from making new investments in China. Senior executives of the two companies, including Taiclet, have since been prohibited from entering, working, staying or residing in China. Other memberships While he was CEO of American Tower, Taiclet and his wife supported the Newton-Wellesley Hospital Charitable Foundation as well as the Charles River Center. He also serves on the board of the Brigham and Women's Hospital as a trustee. Taiclet holds memberships on the boards of various non-profits and NGOs such as the Council on Foreign Relations, Catalyst.org, the U.S.-India Business Council, and the U.S.-India Strategic Partnership Forum, and has attended the World Economic Forum. Recognition From 2013 to 2018, Taiclet was named to Harvard Business Review's list of Best-Performing CEOs in the World. 
References External links Taiclet Bio on Lockheed Martin website Taiclet profile on Linkedin 1960 births Living people Lockheed Martin people Princeton University alumni McKinsey & Company people United States Air Force Academy alumni 21st-century American businesspeople American technology chief executives 20th-century American businesspeople American individuals subject to Chinese sanctions American technology businesspeople American technology executives Businesspeople from Pittsburgh United States Air Force personnel of the Gulf War Aviators from Pennsylvania Military personnel from Pennsylvania
James Taiclet
Technology
746
58,702,303
https://en.wikipedia.org/wiki/Aspergillus%20pachycaulis
Aspergillus pachycaulis is a species of fungus in the genus Aspergillus. It is from the Robusti section. The species was first described in 2017. It has been isolated from air in the United States. It has been reported to produce asperphenamate, indole alkaloid A, indole alkaloid B, phthalide, and mycophenolic acid. References pachycaulis Fungi described in 2017 Fungus species
Aspergillus pachycaulis
Biology
101
36,968,172
https://en.wikipedia.org/wiki/Inter-universal%20Teichm%C3%BCller%20theory
Inter-universal Teichmüller theory (IUT or IUTT) is the name given by mathematician Shinichi Mochizuki to a theory he developed in the 2000s, following his earlier work in arithmetic geometry. According to Mochizuki, it is "an arithmetic version of Teichmüller theory for number fields equipped with an elliptic curve". The theory was made public in a series of four preprints posted in 2012 to his website. The most striking claimed application of the theory is to provide a proof for various outstanding conjectures in number theory, in particular the abc conjecture. Mochizuki and a few other mathematicians claim that the theory indeed yields such a proof but this has so far not been accepted by the mathematical community. History The theory was developed entirely by Mochizuki up to 2012, and the last parts were written up in a series of four preprints. Mochizuki made his work public in August 2012 with none of the fanfare that typically accompanies major advances, posting the papers only to his institution's preprint server and his website, and making no announcement to colleagues. Soon after, the papers were picked up by Akio Tamagawa and Ivan Fesenko and the mathematical community at large was made aware of the claims to have proven the abc conjecture. The reception of the claim was at first enthusiastic, though number theorists were baffled by the original language introduced and used by Mochizuki. Workshops on IUT were held at RIMS in March 2015, in Beijing in July 2015, in Oxford in December 2015 and at RIMS in July 2016. The last two events attracted more than 100 participants. Presentations from these workshops are available online. However, these did not lead to broader understanding of Mochizuki's ideas and the status of his claimed proof was not changed by these events. In 2017, a number of mathematicians who had examined Mochizuki's argument in detail pointed to a specific point which they could not understand, near the end of the proof of Corollary 3.12, in paper three of four. In March 2018, Peter Scholze and Jakob Stix visited Kyoto University for five days of discussions with Mochizuki and Yuichiro Hoshi; while this did not resolve the differences, it brought into focus where the difficulties lay. It also resulted in the publication of reports of the discussion by both sides: In May 2018, Scholze and Stix wrote a 10-page report, updated in September 2018, detailing the (previously identified) gap in Corollary 3.12 in the proof, describing it as "so severe that in [their] opinion small modifications will not rescue the proof strategy", and that Mochizuki's preprint cannot claim a proof of abc. In September 2018, Mochizuki wrote a 41-page summary of his view of the discussions and his conclusions about which aspects of his theory he considers misunderstood. In particular he names: "re-initialization" of (mathematical) objects, making their previous "history" inaccessible; "labels" for different "versions" of objects; the emphasis on the types ("species") of objects. In July and October 2018, Mochizuki wrote 8- and 5-page reactions to the May and September versions of the Scholze and Jakob Stix report, maintaining that the gap is the result of their simplifications, and that there is no gap in his theory. Mochizuki published his work in a series of four journal papers in 2021, in the journal Publications of the Research Institute for Mathematical Sciences, Kyoto University, for which he is editor-in-chief. 
In a review of these papers in zbMATH, Peter Scholze wrote that his concerns from 2017 and 2018 "have not been addressed in the published version". Other authors have pointed to the unresolved dispute between Mochizuki and Scholze over the correctness of this work as an instance in which the peer review process of mathematical journal publication has failed in its usual function of convincing the mathematical community as a whole of the validity of a result. Mathematical significance Scope of the theory Inter-universal Teichmüller theory is a continuation of Mochizuki's previous work in arithmetic geometry. This work, which has been peer-reviewed and well received by the mathematical community, includes major contributions to anabelian geometry, and the development of p-adic Teichmüller theory, Hodge–Arakelov theory and Frobenioid categories. It was developed with explicit references to the aim of getting a deeper understanding of abc and related conjectures. In the geometric setting, analogues to certain ideas of IUT appear in the proof by Bogomolov of the geometric Szpiro inequality. The key prerequisite for IUT is Mochizuki's mono-anabelian geometry and its reconstruction results, which allows to retrieve various scheme-theoretic objects associated to a hyperbolic curve over a number field from the knowledge of its fundamental group, or of certain Galois groups. IUT applies algorithmic results of mono-anabelian geometry to reconstruct relevant schemes after applying arithmetic deformations to them; a key role is played by three rigidities established in Mochizuki's etale theta theory. Roughly speaking, arithmetic deformations change the multiplication of a given ring, and the task is to measure how much the addition is changed. Infrastructure for deformation procedures is decoded by certain links between so called Hodge theaters, such as a theta-link and a log-link. These Hodge theaters use two main symmetries of IUT: multiplicative arithmetic and additive geometric. On one hand, Hodge theaters generalize such classical objects in number theory as the adeles and ideles in relation to their global elements. On the other hand, they generalize certain structures appearing in the previous Hodge–Arakelov theory of Mochizuki. The links between theaters are not compatible with ring or scheme structures and are performed outside conventional arithmetic geometry. However, they are compatible with certain group structures, and absolute Galois groups as well as certain types of topological groups play a fundamental role in IUT. Considerations of multiradiality, a generalization of functoriality, imply that three mild indeterminacies have to be introduced. Consequences in number theory The main claimed application of IUT is to various conjectures in number theory, among them the abc conjecture, but also more geometric conjectures such as Szpiro's conjecture on elliptic curves and Vojta's conjecture for curves. The first step is to translate arithmetic information on these objects to the setting of Frobenioid categories. It is claimed that extra structure on this side allows one to deduce statements which translate back into the claimed results. One issue with Mochizuki's arguments, which he acknowledges, is that it does not seem possible to get intermediate results in his claimed proof of the abc conjecture using IUT. In other words, there is no smaller subset of his arguments more easily amenable to an analysis by outside experts, which would yield a new result in Diophantine geometries. 
Vesselin Dimitrov extracted from Mochizuki's arguments a proof of a quantitative result on abc, which could in principle give a refutation of the proof. References External links Shinichi Mochizuki (1995–2018), Papers of Shinichi Mochizuki Shinichi Mochizuki (2014), A panoramic overview of inter-universal Teichmüller theory Yuichiro Hoshi; Go Yamashita (2015), RIMS Joint Research Workshop: On the verification and further development of inter-universal Teichmuller theory Ivan Fesenko (2015), Arithmetic deformation theory via arithmetic fundamental groups and nonarchimedean theta functions, notes on the work of Shinichi Mochizuki. Yuichiro Hoshi (2015) Introduction to inter-universal Teichmüller theory, a survey in Japanese Algebraic geometry Number theory
Inter-universal Teichmüller theory
Mathematics
1,639
24,356,587
https://en.wikipedia.org/wiki/Regucalcin
Regucalcin is a protein that in humans is encoded by the RGN gene. The protein encoded by this gene is a highly conserved, calcium-binding protein that is preferentially expressed in the liver, kidney and other tissues. It may have an important role in calcium homeostasis. Studies in rats indicate that this protein may also play a role in aging, as it shows age-associated down-regulation. This gene is part of a gene cluster on chromosome Xp11.3-Xp11.23. Alternative splicing results in two transcript variants having different 5' UTRs, but encoding the same protein. Regucalcin is a proposed name for a calcium-binding protein that was discovered in 1978. This protein is also known as Senescence Marker Protein-30 (SMP30). Regucalcin differs from calmodulin and other Ca2+-related proteins in that it does not contain an EF-hand Ca2+-binding motif. It may regulate the effect of Ca2+ on liver cell functions. Many investigations have shown regucalcin to play a multifunctional role in many cell types as a regulatory protein in the intracellular signaling system. Gene Regucalcin and its gene (rgn) have been identified in 16 species, which constitute the regucalcin family. Regucalcin is highly expressed in the liver of rats, although the protein is found in small amounts in other tissues and cells. The rat regucalcin gene consists of seven exons and six introns, and several consensus regulatory elements exist upstream in the 5'-flanking region. The gene is localized on the proximal end of rat chromosome Xq11.1-12 and human Xp11.3-Xp11.23. AP-1, NFI-A1, RGPR-p117, and Wnt/β-catenin/TCF4 can bind to the promoter region of the rat regucalcin gene to mediate the Ca2+ and other signaling responses to various hormones and cytokines for transcriptional activation. Function Regucalcin plays a pivotal role in the maintenance of intracellular Ca2+ homeostasis by activating Ca2+ pump enzymes in the plasma membrane (basolateral membrane), microsomes (endoplasmic reticulum) and mitochondria of many cells. Regucalcin is localized in the cytoplasm, mitochondria, microsomes and nucleus, and is translocated from the cytoplasm to the nucleus upon hormonal stimulation. Regucalcin has a suppressive effect on calcium signaling from the cytoplasm to the nucleus in proliferative cells. Once transported into the nucleus of cells, it can inhibit nuclear protein kinase and protein phosphatase activity as well as deoxyribonucleic acid and ribonucleic acid synthesis. Regucalcin can thus control the enhancement of cell proliferation induced by hormonal stimulation. Moreover, regucalcin has been shown to have an inhibitory effect on aminoacyl-tRNA synthetase, a rate-limiting enzyme in the translational process of protein synthesis, and an activatory effect on cysteine proteases and superoxide dismutase in liver and kidney cells. Regucalcin is expressed in the neurons of brain tissue, and a decrease of brain regucalcin causes accumulation of calcium in brain microsomes. Regucalcin has an inhibitory effect on protein kinase and protein phosphatase activity dependent on Ca2+ signaling. Regucalcin has been shown to have an activatory effect on the Ca2+ pump enzyme (Ca2+-ATPase) in the heart sarcoplasmic reticulum, and plays a role in the promotion of urinary calcium transport in the epithelial cells of the kidney cortex. Overexpression of regucalcin suppresses cell death and apoptosis induced by various signaling factors in cloned rat hepatoma cells and normal rat kidney epithelial cells (NRK52E). 
Thus, regucalcin plays a multifunctional role in the regulation of cell functions in the liver, kidney cortex, heart and brain, and a pivotal role in the maintenance of cell homeostasis and function. It acts as a suppressor protein for cell signaling systems in many cell types. Pathophysiologic role Overexpression of regucalcin in transgenic rats has been shown to induce bone loss and hyperlipidemia with increasing age, indicating a pathophysiologic role. The regucalcin transgenic rat may therefore be a useful animal model of osteoporosis and hyperlipidemia. Also, regucalcin/SMP30-knockout mice are known to show suppressed ascorbic acid biosynthesis. Disordered regucalcin expression has been proposed to be involved in cancer, impaired brain function, heart injury, kidney failure, osteoporosis, and hyperlipidemia. Regucalcin plays a novel role as a suppressor of carcinogenesis in human patients with various types of cancer, including pancreatic cancer, breast cancer, hepatoma, and lung cancer. Of note, a systematic search has been conducted to identify biomarker candidates for a frailty biomarker panel. Gene expression databases were used to identify genes regulated in aging, longevity, and age-related diseases, with a focus on secreted factors or molecules detectable in body fluids as potential frailty biomarkers. A total of 44 markers were evaluated across seven categories; 19 were awarded a high priority score, 22 were identified as medium priority, and three were low priority. High- and medium-priority markers were identified in each category. Regucalcin (RGN) was proposed as a core gene (protein) among the high-priority frailty biomarkers, to be assessed for diagnostic, prognostic and therapeutic potential. Notably, it has been shown that epigenetic modifications of survivin and regucalcin in non-small cell lung cancer tissues contribute to malignancy. References Further reading Aging-related proteins Calcium signaling Proteins
Regucalcin
Chemistry,Biology
1,310
9,019,987
https://en.wikipedia.org/wiki/Nuclear%20flask
A nuclear flask is a shipping container that is used to transport active nuclear materials between nuclear power station and spent fuel reprocessing facilities. Each shipping container is designed to maintain its integrity under normal transportation conditions and during hypothetical accident conditions. They must protect their contents against damage from the outside world, such as impact or fire. They must also contain their contents from leakage, both for physical leakage and for radiological shielding. Spent nuclear fuel shipping casks are used to transport spent nuclear fuel used in nuclear power plants and research reactors to disposal sites such as the nuclear reprocessing center at COGEMA La Hague site. International United Kingdom Railway-carried flasks are used to transport spent fuel from nuclear power stations in the UK and the Sellafield spent nuclear fuel reprocessing facility. Each flask weighs more than , and transports usually not more than of spent nuclear fuel. Over the past 35 years, British Nuclear Fuels plc (BNFL) and its subsidiary PNTL have conducted over 14,000 cask shipments of SNF worldwide, transporting more than 9,000 tonnes of SNF over 16 million miles via road, rail, and sea without a radiological release. BNFL designed, licensed, and currently own and operate a fleet of approximately 170 casks of the Excellox design. BNFL has maintained a fleet of transport casks to ship SNF for the United Kingdom, continental Europe, and Japan for reprocessing. In the UK a series of public demonstrations were conducted in which spent fuel flasks (loaded with steel bars) were subjected to simulated accident conditions. A randomly selected flask (never used for holding used fuel) from the production line was first dropped from a tower. The flask was dropped in such a way that the weakest part of it would hit the ground first. The lid of the flask was slightly damaged but very little material escaped from the flask. A little water escaped from the flask but it was thought that in a real accident that the escape of radioactivity associated with this water would not be a threat to humans or their environment. For a second test the same flask was fitted with a new lid, filled again with steel bars and water before a train was driven into it at high speed. The flask survived with only cosmetic damage while the train was destroyed. Although referred to as a test, the actual stresses the flask underwent were well below what they are designed to withstand, as much of the energy from the collision was absorbed by the train and in moving the flask some distance. This flask is on display at the training centre at Heysham 1 Power Station. Description Introduced in the early 1960s, Magnox flasks consists of four layers; an internal skip containing the waste; guides and protectors surrounding the skip; all contained within the steel main body of flask itself, with characteristic cooling fins; and (since the early 1990s) a transport cabin of panels which provide an external housing. Flasks for waste from the later advanced gas cooled reactor power stations are similar, but have thinner steel main walls at thickness, to allow room for extensive internal lead shielding. The flask is protected by a bolt hasp which prevents the content from being accessed during transit. Transport All the flasks are owned by the Nuclear Decommissioning Authority, the owners of Direct Rail Services. 
A train conveying flasks would be hauled by two locomotives, either Class 20 or Class 37, but Class 66 and Class 68 locomotives are increasingly being used; locomotives are used in pairs as a precaution in case one fails en route. Greenpeace protest that flasks in rail transit pose a hazard to passengers standing on platforms, although many tests performed by the Health and Safety Executive have proved that it is safe for passengers to stand on the platform while a flask passes by. Safety The crashworthiness of the flask was demonstrated publicly when a British Rail Class 46 locomotive was forcibly driven into a derailed flask (containing water and steel rods in place of radioactive material) at ; the flask sustaining minimal superficial damage without compromising its integrity, while both the flatbed wagon carrying it and the locomotive were more-or-less destroyed. Additionally, flasks were heated to temperatures of over to prove safety in a fire. However, critics consider the testing flawed for various reasons. The heat test is claimed to be considerably below that of theoretical worst-case fires in a tunnel, and the worst case impact today would have a closing speed of around . Nevertheless, there have been several accidents involving flasks, including derailments, collisions, and even a flask being dropped during transfer from train to road, with no leakage having occurred. Problems have been found where flasks "sweat", when small amounts of radioactive material absorbed into paint migrate to the surface, causing contamination risks. Studies identified that 10–15% of flasks in the United Kingdom were suffering from this problem, but none exceeded the international recommended safety limits. Similar flasks in mainland Europe were found to marginally exceed the contamination limits during testing, and additional monitoring procedures were put into place. In order to reduce the risk, current UK flask wagons are fitted with a lockable cover to ensure any surface contamination remains within the container, and all containers are tested before shipment, with those exceeding the safety level being cleaned until they are within the limit. A report in 2001 identified potential risks, and actions to be taken to ensure safety. United States In the United States, the acceptability of the design of each cask is judged against Title 10, Part 71, of the Code of Federal Regulations (other nations' shipping casks, possibly excluding Russia's, are designed and tested to similar standards (International Atomic Energy Agency "Regulations for the Safe Transport of Radioactive Material" No. TS-R-1)). The designs must demonstrate (possibly by computer modelling) protection against radiological release to the environment under all four of the following hypothetical accident conditions, designed to encompass 99% of all accidents: A 9-meter (30 ft) free fall onto an unyielding surface A puncture test allowing the container to free-fall 1 meter (about 39 inches) onto a steel rod 15 centimeters (about 6 inches) in diameter A 30-minute, all-engulfing fire at 800 degrees Celsius (1475 degrees Fahrenheit) An 8-hour immersion under 0.9 meter (3 ft) of water. Further, an undamaged package must be subjected to a one-hour immersion under 200 meters (655 ft) of water. In addition, between 1975 and 1977 Sandia National Laboratories conducted full-scale crash tests on spent nuclear fuel shipping casks. Although the casks were damaged, none would have leaked. Although the U.S. 
Department of Transportation (DOT) has the primary responsibility for regulating the safe transport of radioactive materials in the United States, the Nuclear Regulatory Commission (NRC) requires that licensees and carriers involved in spent fuel shipments: Follow only approved routes; Provide armed escorts for heavily populated areas; Use immobilization devices; Provide monitoring and redundant communications; Coordinate with law enforcement agencies before shipments; and Notify in advance the NRC and States through which the shipments will pass. Since 1965, approximately 3,000 shipments of spent nuclear fuel have been transported safely over the U.S.'s highways, waterways, and railroads. Baltimore train tunnel fire On July 18, 2001, a freight train carrying hazardous (non-nuclear) materials derailed and caught fire while passing through the Howard Street railroad tunnel in downtown Baltimore, Maryland, United States. The fire burned for 3 days, with temperatures as high as 1000 °C (1800 °F). Since the casks are designed for a 30-minute fire at 800 °C (1475 °F), several reports have been made regarding the inability of the casks to survive a fire similar to the Baltimore one. However, nuclear waste would never be transported together with hazardous (flammable or explosive) materials on the same train or track. State of Nevada The State of Nevada, USA, released a report entitled, "Implications of the Baltimore Rail Tunnel Fire for Full-Scale Testing of Shipping Casks" on February 25, 2003. In the report, they said a hypothetical spent nuclear fuel accident based on the Baltimore fire: "Concluded steel-lead-steel cask would have failed after 6.3 hours; monolithic steel cask would have failed after 11-12.5 hours." "Contaminated Area: 32 square miles (82 km2)" "Latent cancer fatalities: 4,000-28,000 over 50 years (200-1,400 during first year)" "Cleanup cost: $13.7 Billion (2001 Dollars)" National Academy of Sciences The National Academy of Sciences, at the request of the State of Nevada, produced a report on July 25, 2003. The report concluded that the following should be done: "Need to 3-D model (bolts, seals, etc) more than HI-STAR cask for extreme fire environments." "For safety and risk analysis, casks should be physically tested to destruction." "NRC should release all thermal calculations; Holtec is withholding allegedly proprietary information." NRC The U.S. Nuclear Regulatory Commission released a report in November 2006. It concluded: The results of this evaluation also strongly indicate that neither spent nuclear fuel (SNF) particles nor fission products would be released from a spent fuel transportation package carrying intact spent fuel involved in a severe tunnel fire such as the Baltimore tunnel fire. None of the three package designs analyzed for the Baltimore tunnel fire scenario (TN-68, HI-STAR 100, and NAC LWT) experienced internal temperatures that would result in rupture of the fuel cladding. Therefore, radioactive material (i.e., SNF particles or fission products) would be retained within the fuel rods. There would be no release from the HI-STAR 100, because the inner welded canister remains leak tight. While a release is unlikely, the potential releases calculated for the TN-68 rail package and the NAC LWT truck package indicate that any release of CRUD from either package would be very small - less than an A2 quantity. Canada By comparison there has been limited spent nuclear fuel transport in Canada. 
Transportation casks have been designed for truck and rail transport and Canada's regulatory body, the Canadian Nuclear Safety Commission, granted approval for casks, which may be used for barge shipments as well. The commission's regulations prohibit the disclosure of location, routing and timing of shipments of nuclear materials, such as spent fuel. International maritime transport Nuclear flasks containing spent nuclear fuel are sometimes transported by sea for the purposes of reprocessing or relocation to a storage facility. Vessels receiving these cargoes are variously classified INF-1, INF-2 or INF-3 by the International Maritime Organisation. The code was introduced as a voluntary system in 1993 and became mandatory in 2001. The "INF" acronym stands for "Irradiated Nuclear Fuel" though the classification also covers "plutonium and high-level waste" cargoes. In order to receive these classifications, vessels must meet a range of structural and safety standards. Vessels used for the transportation of spent nuclear fuel are typically purpose built and are commonly referred to as Nuclear Fuel Carriers. The global fleet includes vessels under flags of the United Kingdom, Japan, Russian Federation, China and Sweden. See also Dry cask storage Nuclear reprocessing Radioactive waste Safeguards Transporter Spent fuel pool References External links Nuclear Transports in Britain (1999), Cumbrians Opposed to a Radioactive Environment Crash! Pictures of the 1984 train-crash test Train test crash 1984 BBC News footage of the 1984 train-crash test. Operation Smash Hit Publicity film made for the CEGB (subsequently Magnox Electric Ltd) about the train-crash test Risks of transporting of irradiated fuel and nuclear materials in the UK, Large & Associates, for Greenpeace (2007) Background Report on the Basic Issues to be Addressed, QuantSci Limited, for the Greater London Authority (2001) U.S. Transport Council Nuclear Regulatory Commission "Safety of Spent Fuel Transportation" (NUREG/BR-0292) Nuclear Regulatory Commission Backgrounder on Transportation of Spent Fuel and Radioactive Materials Pro-nuclear group's summary Fuel Solutions Inc.(BFS), the world’s largest shippers of nuclear materials World Nuclear Transport Institute Radioactive waste Shipping containers Hazardous materials
Nuclear flask
Physics,Chemistry,Technology
2,574
56,809,815
https://en.wikipedia.org/wiki/Disease%20X
Disease X is a placeholder name that was adopted by the World Health Organization (WHO) in February 2018 on their shortlist of blueprint priority diseases to represent a hypothetical, unknown pathogen. The WHO adopted the placeholder name to ensure that their planning was sufficiently flexible to adapt to an unknown pathogen (e.g., broader vaccines and manufacturing facilities). Former Director of the US National Institute of Allergy and Infectious Diseases Anthony Fauci stated that the concept of Disease X would encourage WHO projects to focus their research efforts on entire classes of viruses (e.g., flaviviruses), instead of just individual strains (e.g., zika virus), thus improving WHO capability to respond to unforeseen strains. In 2020, experts, including some of the WHO's own expert advisors, speculated that COVID-19, caused by the SARS-CoV-2 virus strain, met the requirements to be the first Disease X. In December 2024, an unidentified disease in the Democratic Republic of the Congo was sometimes referred to as Disease X, after infecting over 400 people and killing at least 79, later revealed to be an aggressive strain of malaria. Rationale In May 2015, in pandemic preparations prior to the COVID-19 pandemic, the WHO was asked by member organizations to create an "R&D Blueprint for Action to Prevent Epidemics" to generate ideas that would reduce the time lag between the identification of viral outbreaks and the approval of vaccines/treatments, to stop the outbreaks from turning into a "public health emergency". The focus was to be on the most serious emerging infectious diseases (EIDs) for which few preventive options were available. A group of global experts, the "R&D Blueprint Scientific Advisory Group", was assembled by the WHO to draft a shortlist of less than ten "blueprint priority diseases". Since 2015, the shortlist of EIDs has been reviewed annually and originally included widely known diseases such as Ebola and Zika which have historically caused epidemics, as well as lesser known diseases which have potential for serious outbreaks, such as SARS, Lassa fever, Marburg virus, Rift Valley fever, and Nipah virus. Since then, COVID-19 has been added to the list. In February 2018, after the "2018 R&D Blueprint" meeting in Geneva, the WHO added Disease X to the shortlist as a placeholder for a "knowable unknown" pathogen. The Disease X placeholder acknowledged the potential for a future epidemic that could be caused by an unknown pathogen, and by its inclusion, challenged the WHO to ensure their planning and capabilities were flexible enough to adapt to such an event. At the 2018 announcement of the updated shortlist of blueprint priority diseases, the WHO said: "Disease X represents the knowledge that a serious international epidemic could be caused by a pathogen currently unknown to cause human disease". John-Arne Røttingen, of the R&D Blueprint Special Advisory Group, said: "History tells us that it is likely the next big outbreak will be something we have not seen before", and "It may seem strange to be adding an 'X' but the point is to make sure we prepare and plan flexibly in terms of vaccines and diagnostic tests. We want to see 'plug and play' platforms developed which will work for any or a wide number of diseases; systems that will allow us to create countermeasures at speed". 
US expert Anthony Fauci said: "WHO recognizes it must 'nimbly move' and this involves creating platform technologies", and that to develop such platforms, WHO would have to research entire classes of viruses, highlighting flaviviruses. He added: "If you develop an understanding of the commonalities of those, you can respond more rapidly". Adoption Jonathan D. Quick, the author of End of Epidemics, described the WHO's act of naming Disease X as "wise in terms of communicating risk", saying "panic and complacency are the hallmarks of the world's response to infectious diseases, with complacency currently in the ascendance". Women's Health wrote that the establishment of the term "might seem like an uncool move designed to incite panic" but that the whole purpose of including it on the list was to "get it on people's radars". Richard Hatchett of the Coalition for Epidemic Preparedness Innovations (CEPI), wrote "It might sound like science fiction, but Disease X is something we must prepare for", noting that despite the success in controlling the 2014 Western African Ebola virus epidemic, strains of the disease had returned in 2018. In February 2019, CEPI announced funding of US$34 million to the German-based CureVac biopharmaceutical company to develop an "RNA Printer prototype", that CEPI said could "prepare for rapid response to unknown pathogens (i.e., Disease X)". Parallels were drawn with the efforts by the United States Agency for International Development (USAID) and their PREDICT program, which was designed to act as an early warning pandemic system, by sourcing and researching animal viruses in particular "hot spots" of animal-human interaction. In September 2019, The Daily Telegraph reported on how Public Health England (PHE) had launched its own investigation for a potential Disease X in the United Kingdom from the diverse range of diseases reported in their health system; they noted that 12 novel diseases and/or viruses had been recorded by PHE in the last decade. In October 2019 in New York, the WHO's Health Emergencies Program ran a "Disease X dummy run" to simulate a global pandemic by Disease X, for its 150 participants from various world health agencies and public health systems to better prepare and share ideas and observations for combatting such an eventuality. In March 2020, The Lancet Infectious Diseases published a paper titled "Disease X: accelerating the development of medical countermeasures for the next pandemic", which expanded the term to include Pathogen X (the pathogen that leads to Disease X), and identified areas of product development and international coordination that would help in combatting any future Disease X. In April 2020, The Daily Telegraph described remdesivir, a drug being trialed to combat COVID-19, as an anti-viral that Gilead Sciences started working on a decade previously to treat a future Disease X. In August 2023, the UK Government announced the creation of a new research center, located on the Porton Down campus, which is tasked at researching pathogens with the potential to emerge as Disease X. Live viruses will be kept in specialist containment facilities in order to develop tests and potential vaccines within 100 days in case a new threat is identified. In January 2024, during the World Economic Forum's annual meeting, Disease X was once again discussed as being a potential threat following the COVID-19 pandemic. 
Strategy A paper published in 2022 listed the following strategies in preparation for Disease X: steps to reduce the risk of spillover and the consequent introduction and spread of a new disease in humans; improving disease surveillance in humans and animals, to rapidly detect and sequence the infectious agent; strengthening research programs to shorten the time lag between the development and production of medical countermeasures; rapid implementation of pharmaceutical (e.g. vaccination) and non-pharmaceutical (e.g. social distancing) measures, to contain a large-scale epidemic; develop international protocols to ensure fair distribution and global coverage of drugs and vaccines. Candidates Zoonotic viruses On the addition of Disease X in 2018, the WHO said it could come from many sources citing hemorrhagic fevers and the more recent non-polio enterovirus. However, Røttingen speculated that Disease X would be more likely to come from zoonotic transmission (an animal virus that jumps to humans), saying: "It's a natural process and it is vital that we are aware and prepare. It is probably the greatest risk". WHO special advisor Professor Marion Koopmans, also noted that the rate at which zoonotic diseases were appearing was accelerating, saying: "The intensity of animal and human contact is becoming much greater as the world develops. This makes it more likely new diseases will emerge but also modern travel and trade make it much more likely they will spread". COVID-19 (2019–present) From the outset of the COVID-19 pandemic, experts have speculated whether COVID-19 met the criteria to be Disease X. In early February 2020, Chinese virologist Shi Zhengli from the Wuhan Institute of Virology wrote that the first Disease X is from a coronavirus. Later that month, Marion Koopmans, Head of Viroscience at Erasmus University Medical Center in Rotterdam, and a member of the WHO's R&D Blueprint Special Advisory Group, wrote in the scientific journal Cell, "Whether it will be contained or not, this outbreak is rapidly becoming the first true pandemic challenge that fits the disease X category". At the same time, Peter Daszak, also a member of the WHO's R&D Blueprint, wrote in an opinion piece in The New York Times saying: "In a nutshell, Covid-19 is Disease X". Synthetic viruses/bioweapons At the 2018 announcement of the updated shortlist of blueprint priority diseases, the media speculated that a future Disease X could be created intentionally as a biological weapon. In 2018, WHO R&D Blueprint Special Advisor Group member Røttingen was questioned about the potential of Disease X to come from the ability of gene-editing technology to produce synthetic viruses (e.g., the 2017 synthesis of Orthopoxvirus in Canada was cited), which could be released through an accident or even an act of terror. Røttingen said it was unlikely that a future Disease X would originate from a synthetic virus or a bio-weapon. However, he noted the seriousness of such an event, saying, "Synthetic biology allows for the creation of deadly new viruses. It is also the case that where you have a new disease there is no resistance in the population and that means it can spread fast". Bacterial infection In September 2019, Public Health England (PHE) reported that the increasing antibiotic resistance of bacteria, even to "last-resort" antibiotics such as carbapenems and colistin, could also turn into a potential Disease X, citing the antibiotic resistance in gonorrhea as an example. 
In popular culture In 2018, the Museum of London ran an exhibition titled "Disease X: London's next epidemic?", hosted for the centenary of the Spanish flu epidemic from 1918. The term features in the title of several works of fiction that involve global pandemic diseases, such as Disease (2020), and Disease X: The Outbreak (2019). Conspiracy theories Disease X has become the subject of several conspiracy theories, claiming that it may be a real disease, or conceived as a biological weapon, or engineered to create a planned epidemic. See also Bioterrorism Coalition for Epidemic Preparedness Innovations (CEPI) Global Research Collaboration for Infectious Disease Preparedness (GloPIR-R) Synthetic virology Nuremberg Code References External links Blueprint priority diseases ()—World Health Organization (6–7 February 2018) Prioritizing diseases for research and development in emergency contexts—World Health Organization (March 2018) (Video) What is Disease X—World Health Organization (16 March 2018) The mystery viruses far worse than flu—BBC News (November 2018) Disaster management Pandemics Placeholder names Viruses World Health Organization
Disease X
Biology
2,431
870,889
https://en.wikipedia.org/wiki/Plasma%20globe
A plasma ball, plasma globe, or plasma lamp is a clear glass container filled with noble gases, usually a mixture of neon, krypton, and xenon, that has a high-voltage electrode in the center of the container. When voltage is applied, a plasma is formed within the container. Plasma filaments extend from the inner electrode to the outer glass insulator, giving the appearance of multiple constant beams of colored light. Plasma balls were popular as novelty items in the 1980s. The plasma lamp was invented by Nikola Tesla during his experimentation with high-frequency currents in an evacuated glass tube for the purpose of studying high-voltage phenomena. Tesla called his invention an "inert gas discharge tube". The modern plasma lamp design was developed by James Falk and MIT student Bill Parker. A crackle tube is a related device filled with phosphor-coated beads. Construction Although many variations exist, a plasma ball is usually a clear glass sphere filled with a mixture of various gases (most commonly neon, sometimes with other noble gases such as argon, xenon and krypton) at nearly atmospheric pressure. Plasma balls are driven by high-frequency, high-voltage alternating current. The drive circuit is essentially a specialized power inverter, in which current from a lower-voltage DC supply powers a high-frequency electronic oscillator circuit whose output is stepped up by a high-frequency, high-voltage transformer, for example a miniature Tesla coil or a flyback transformer. The radio-frequency energy from the transformer is transmitted into the gas within the ball through an electrode at its center. Additionally, some designs utilize the ball as a resonant cavity, which provides positive feedback to the drive transistor through the transformer. A much smaller hollow glass orb can also serve as an electrode when it is filled with metal wool or a conducting fluid that is in communication with the transformer output. In this case, the radio-frequency energy is admitted into the larger space by capacitive coupling right through the glass. Plasma filaments extend from the inner electrode to the outer glass insulator, giving the appearance of moving tendrils of colored light within the volume of the ball. If a hand is placed close to the ball, it produces a faint smell of ozone, as the gas is produced by high-voltage interaction with atmospheric oxygen. Some balls have a control knob that varies the amount of power going to the center electrode. At the very lowest setting that will light or "strike" the ball, a single tendril is made. This single tendril's plasma channel engages enough space to transmit this lowest striking energy to the outside world through the glass of the ball. As the power is increased, this single channel's capacity is overwhelmed and a second channel forms, then a third, and so on. The tendrils each compete for a footprint on the inner orb as well. The energies flowing through these are all of the same polarity, so they repel each other as like charges: a thin dark boundary surrounds each footprint on the inner electrode. The ball is prepared by pumping out as much air as is practical. The ball is then backfilled with neon to a pressure similar to one atmosphere. If the radio-frequency power is now turned on and the ball is "struck" or "lit", the whole ball will glow a diffuse red. If a little argon is added, the filaments will form. If a very small amount of xenon is added, the "flowers" will bloom at the ends of the filaments. 
The neon sold to neon-sign shops often comes in glass flasks at partial-vacuum pressure; these cannot be used to fill a ball with a useful mixture. Tanks of gas are required, one for each of the gases involved, each with its own appropriate pressure regulator and fitting. Of the other noble gases, radon is radioactive, helium escapes through the glass relatively quickly, and krypton is expensive. Other gases such as mercury vapor can be used. Molecular gases may be dissociated by the plasma. Interaction Placing a fingertip on the glass creates an attractive spot for the energy to flow, because the conductive human body (having an internal resistance less than 1000 ohms) is more easily polarized than the dielectric material around the electrode (i.e., the gas within the ball), providing an alternative discharge path with less resistance. Therefore, the capacity of the large conducting body to accept radio-frequency energy is greater than that of the surrounding air. The energy available to the filaments of plasma within the ball will preferentially flow toward the better acceptor. This flow also causes a single filament, from the inner ball to the point of contact, to become brighter and thinner. The filament is brighter because there is more current flowing through it and into the human body, which has a capacitance of about 100 pF. The filament is thinner because the magnetic fields around it, augmented by the now-higher current flowing through it, cause a magnetohydrodynamic effect called pinch: the plasma channel's own magnetic fields create a force acting to compress the size of the plasma channel itself. Much of the movement of the filaments is due to heating of the gas around the filament. When gas along the filament is heated, it becomes more buoyant and rises, carrying the filament with it. If the filament is discharging into a fixed object (like a hand) on the side of the ball, it will begin to deform into a curved path between the central electrode and the object. When the distance between the electrode and the object becomes too great to maintain, the filament will break and a new filament will reform between the electrode and the hand. An electric current is produced within any conductive object near the orb. The glass acts as a dielectric in a capacitor formed between the ionized gas and the hand. Gallery By adjusting the voltage, frequency, chemical composition and pressure of gas in the globe, many colorful effects can be achieved. History In his patent "Incandescent Electric Light" (February 6, 1894), Nikola Tesla describes a plasma lamp. This patent is for one of the first high-intensity discharge lamps. Tesla used an incandescent-type lamp ball with a single internal conductive element and excited the element with high-voltage currents from a Tesla coil, thus creating the brush discharge emanation. He gained patent protection on a particular form of the lamp in which a light-giving small body or button of refractory material is supported by a conductor entering a very highly exhausted ball or receiver. Tesla called this invention the single terminal lamp, or, later, the "Inert Gas Discharge Tube". The Groundstar style of plasma ball was created by James Falk and marketed to collectors and science museums in the 1970s and 1980s. Jerry Pournelle in 1984 praised Orb Corporation's Omnisphere as "the most fabulous object in the entire world" and "magnificent ... a new kind of art object", stating "you can't buy mine for any price". 
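As a rough, hedged illustration of the capacitive coupling described in the Interaction section above, the following Python sketch estimates the current that couples through the glass when a fingertip touches a plasma globe. Every parameter value (glass permittivity and thickness, contact area, drive frequency and voltage) is an assumption chosen only for illustration, not a specification of any particular globe, so the result is an order-of-magnitude figure at best.

import math

# Order-of-magnitude estimate of the displacement current drawn when a
# fingertip touches a plasma globe. The glass under the fingertip acts as
# the dielectric of a small capacitor between the plasma inside and the
# hand outside. All numbers below are assumed for illustration.
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 7.0          # assumed relative permittivity of the glass
area = 4e-4          # assumed fingertip contact area, m^2 (about 4 cm^2)
thickness = 2e-3     # assumed glass wall thickness, m (about 2 mm)
freq = 30e3          # assumed drive frequency, Hz (tens of kHz)
v_rms = 3e3          # assumed RMS voltage between plasma and hand, V

c_glass = eps_r * eps0 * area / thickness      # parallel-plate approximation
i_rms = 2 * math.pi * freq * c_glass * v_rms   # I = V / Xc = 2*pi*f*C*V

print(f"Contact capacitance: {c_glass * 1e12:.1f} pF")   # roughly 12 pF
print(f"Coupled current:     {i_rms * 1e3:.1f} mA RMS")  # a few milliamperes

In practice the body's capacitance to ground (around 100 pF, as noted above) and the impedance of the plasma itself act in series with this contact capacitance, so the actual current is somewhat lower than this upper-bound estimate.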
The technology needed to formulate gas mixtures used in today's plasma spheres was not available to Tesla. Modern lamps typically use combinations of xenon, krypton and neon, although other gases can be used. These gas mixtures, along with different glass shapes and integrated-circuit-driven electronics, create the vivid colors, range of motions, and complex patterns seen in today's plasma spheres. Applications Plasma balls are mainly used as curiosities or toys for their unique lighting effects and the "tricks" that can be performed on them by users moving their hands around them. They might also form part of a school's laboratory equipment for demonstration purposes. They are not usually employed for general lighting. However, in recent years, some novelty stores have begun selling miniature plasma ball nightlights that can be mounted on a standard light socket. Plasma balls can be used for experimenting with high voltages. If a conductive plate or wire coil is placed on the ball, capacitive coupling can transfer enough voltage to the plate or coil to produce a small arc or energize a high-voltage load. This is possible because the plasma inside the ball and the conductor outside it act as plates of a capacitor, with the glass in between as a dielectric. A step-down transformer connected between the plate and the ball's electrode can produce lower-voltage, higher-current radio-frequency output. Careful earth grounding is essential to prevent injury or damage to equipment. Hazards Bringing conductive materials or electronic devices close to a plasma ball may cause the glass to become hot. The high-voltage radio-frequency energy coupled to them from within the ball may cause a mild electric shock to a person touching them, even through a protective glass casing. The radio-frequency field produced by plasma balls can interfere with the operation of touchpads used on laptop computers, digital audio players, cell phones, and other similar devices. Some types of plasma ball can radiate sufficient radio frequency interference (RFI) to interfere with cordless telephones and Wi-Fi devices a few meters (several feet) away. If an electrical conductor touches the outside of the ball, capacitive coupling can induce enough potential on it to produce a small arc. This is possible because the ball's glass acts as a capacitor dielectric: the inside of the lamp acts as one plate, and the conductive object on the outside acts as the opposite capacitor plate. This is a dangerous action that can damage the ball or other electronic devices, and presents a fire ignition hazard. Perceptible amounts of ozone can be formed on the surface of a plasma ball. Many people can detect ozone at concentrations just below the lowest level at which it is considered harmful to health; higher exposures produce headaches, burning eyes, and irritation to the respiratory passages. In July 2022, a spark from a plasma globe at the Questacon museum in Australia ignited the alcohol-based hand sanitiser that had been applied to a child's hands, leaving the child with serious burns. See also Fusor List of light sources Sulfur lamp Vacuum arc References 1980s fads and trends Articles containing video clips Gas discharge lamps Inventions by Nikola Tesla Novelty items Plasma technology and applications
Plasma globe
Physics
2,149
1,517,620
https://en.wikipedia.org/wiki/Steady-state%20economy
A steady-state economy is an economy made up of a constant stock of physical wealth (capital) and a constant population size. In effect, such an economy does not grow in the course of time. The term usually refers to the national economy of a particular country, but it is also applicable to the economic system of a city, a region, or the entire world. Early in the history of economic thought, classical economist Adam Smith of the 18th century developed the concept of a stationary state of an economy: Smith believed that any national economy in the world would sooner or later settle in a final state of stationarity. Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly. As Daly's concept of a steady-state includes the ecological analysis of natural resource flows through the economy, his concept differs from the original classical concept of a stationary state. One other difference is that Daly recommends immediate political action to establish the steady-state economy by imposing permanent government restrictions on all resource use, whereas economists of the classical period believed that the final stationary state of any economy would evolve by itself without any government intervention. Critics of the steady-state economy usually object to it by arguing that resource decoupling, technological development, and the operation of market mechanisms are capable of overcoming resource scarcity, pollution, or population overshoot. Proponents of the steady-state economy, on the other hand, maintain that these objections remain insubstantial and mistaken — and that the need for a steady-state economy is becoming more compelling every day. A steady-state economy is not to be confused with economic stagnation: Whereas a steady-state economy is established as the result of deliberate political action, economic stagnation is the unexpected and unwelcome failure of a growth economy. An ideological contrast to the steady-state economy is formed by the concept of a post-scarcity economy. Definition and vision Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly — to such an extent that even his boldest critics recognize the prominence of his work. Herman Daly defines his concept of a steady-state economy as an economic system made up of a constant stock of physical wealth (capital) and a constant stock of people (population), both stocks to be maintained by a flow of natural resources through the system. The first component, the constant stocks, is similar to the concept of the stationary state, originally used in classical economics; the second component, the flow of natural resources, is a new ecological feature, presently also used in the academic discipline of ecological economics. The durability of both of the constant stocks is to be maximized: The more durable the stock of capital is, the smaller the flow of natural resources is needed to maintain the stock; likewise, a 'durable' population means a population enjoying a high life expectancy — something desirable by itself — maintained by a low birth rate and an equally low death rate. Taken together, higher durability translates into better ecology in the system as a whole. Daly's concept of a steady-state economy is based on the vision that man's economy is an open subsystem embedded in a finite natural environment of scarce resources and fragile ecosystems. 
The economy is maintained by importing valuable natural resources from the input end and exporting valueless waste and pollution at the output end in a constant and irreversible flow. Any subsystem of a finite nongrowing system must itself at some point also become nongrowing and start maintaining itself in a steady-state as far as possible. This vision is opposed to mainstream neoclassical economics, where the economy is represented by an isolated and circular model with goods and services exchanging endlessly between companies and households, without any physical contact with the natural environment. In the early 2010s, reviewers sympathetic towards Daly's concept of a steady-state economy passed the concurrent judgement that although his concept remains beyond what is politically feasible at present, there is room for mainstream thinking and collective action to approach the concept in the future. In 2022, a study (chapters 4–5) described degrowth toward a steady-state economy as possible and probably beneficial. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation." Historical background For centuries, economists and other scholars have considered matters of natural resource scarcity and limits to growth, from the early classical economists in the 18th and 19th centuries down to the ecological concerns that emerged in the second half of the 20th century and developed into the formation of ecological economics as an independent academic subdiscipline in economics. Concept of the stationary state in classical economics From Adam Smith and onwards, economists in the classical period of economic theorising described the general development of society in terms of a contrast between the scarcity of arable agricultural land on the one hand, and the growth of population and capital on the other hand. The incomes from gross production were distributed as rents, profits and wages among landowners, capitalists and labourers respectively, and these three classes were incessantly engaged in the struggle for increasing their own share. The accumulation of capital (net investments) would sooner or later come to an end as the rate of profit fell to a minimum or to nil. At that point, the economy would settle in a final stationary state with a constant population size and a constant stock of capital. Adam Smith's concept Adam Smith's magnum opus on The Wealth of Nations, published in 1776, laid the foundation of classical economics in Britain. Smith thereby disseminated and established a concept that has since been a cornerstone in economics throughout most of the world: In a liberal capitalist society, provided with a stable institutional and legal framework, an 'invisible hand' will ensure that the enlightened self-interest of all members of society will contribute to the growth and prosperity of society as a whole, thereby leading to an 'obvious and simple system of natural liberty'. Smith was convinced of the beneficial effect of the enlightened self-interest on the wealth of nations; but he was less certain this wealth would grow forever. 
Smith observed that any country in the world found itself in either a 'progressive', a 'stationary', or a 'declining' state: Although England was wealthier than its North American colonies, wages were higher in the latter place as wealth in North America was growing faster than in England; hence, North America was in the 'cheerful and hearty' progressive state. In China, on the other hand, wages were low, the condition of poor people was scantier than in any nation in Europe, and more marriages were contracted here because the 'horrid' killing of newborn babies was permitted and even widely practised; hence, China was in the 'dull' stationary state, although it did not yet seem to be declining. In nations situated in the 'melancholic' declining state, the higher ranks of society would fall down and settle for occupation amid the lower ranks, while the lowest ranks would either subsist on a miserable and insufficient wage, resort to begging or crime, or slide into starvation and early death. Bengal and some other English settlements in the East Indies possibly found themselves in this state, Smith reckoned. Smith pointed out that as wealth was growing in any nation, the rate of profit would tend to fall and investment opportunities would diminish. In a nation that had thereby reached this 'full complement of riches', society would finally settle in a stationary state with a constant stock of people and capital. In an 18th-century anticipation of The Limits to Growth (see below), Smith described the state as follows: According to Smith, Holland seemed to be approaching this stationary state, although at a much higher level than in China. Smith believed the laws and institutions of China prevented this country from achieving the potential wealth its soil, climate and situation might have admitted of. Smith was unable to provide any contemporary examples of a nation in the world that had in fact reached the full complement of riches and thus had settled in stationarity, because, as he conjectured, "... perhaps no country has ever yet arrived at this degree of opulence." David Ricardo's concept In the early 19th century, David Ricardo was the leading economist of the day and the champion of British laissez-faire liberalism. He is known today for his free trade principle of comparative advantage, and for his formulation of the controversial labor theory of value. Ricardo replaced Adam Smith's empirical reasoning with abstract principles and deductive argument. This new methodology would later become the norm in economics as a science. In Ricardo's times, Britain's trade with the European continent was somewhat disrupted during the Napoleonic Wars that had raged since 1803. The Continental System brought into effect a large-scale embargo against British trade, whereby the nation's food supply came to rely heavily on domestic agriculture to the benefit of the landowning classes. When the wars ended with Napoleon's final defeat in 1815, the landowning classes dominating the British parliament had managed to tighten the existing Corn Laws in order to retain their monopoly status on the home market during peacetime. The controversial Corn Laws were a protectionist two-sided measure of subsidies on corn exports and tariffs on corn imports. The tightening was opposed by both the capitalist and the labouring classes, as the high price of bread effectively reduced real profits and real wages in the economy. 
So was the political setting when Ricardo published his treatise On the Principles of Political Economy and Taxation in 1817. According to Ricardo, the limits to growth were ever present due to scarcity of arable agricultural land in the country. In the wake of the wartime period, the British economy seemed to be approaching the stationary state as population was growing, plots of land with lower fertility were put into agricultural use, and the rising rents of the rural landowning class were crowding out the profits of the urban capitalists. This was the broad outline of Ricardo's controversial land rent theory. Ricardo believed that the only way for Britain to avoid the stationary state was to increase her volume of international trade: The country should export more industrial products and start importing cheap agricultural products from abroad in turn. However, this course of development was impeded by the Corn Laws that seemed to be hampering both the industrialisation and the internationalization of the British economy. In the 1820s, Ricardo and his followers – Ricardo himself died in 1823 – directed much of their fire at the Corn Laws in order to have them repealed, and various other free trade campaigners borrowed indiscriminately from Ricardo's doctrines to suit their agenda. The Corn Laws were not repealed before 1846. In the meantime, the British economy kept growing, a fact that effectively undermined the credibility and thrust of Ricardian economics in Britain; but Ricardo had by now established himself as the first stationary state theorist in the history of economic thought. Ricardo's preoccupation with class conflict anticipated the work of Karl Marx (see below). John Stuart Mill's concept John Stuart Mill was the leading economist, philosopher and social reformer in mid-19th century Britain. His economics treatise on the Principles of Political Economy, published in 1848, attained status as the standard textbook in economics throughout the English-speaking world until the turn of the century. A champion of classical liberalism, Mill believed that an ideal society should allow all individuals to pursue their own good without any interference from others or from government. Also a utilitarian philosopher, Mill regarded the 'Greatest Happiness Principle' as the ultimate ideal for a harmonious society: Mill's concept of the stationary state was strongly coloured by these ideals. Mill conjectured that the stationary state of society was not too far away in the future: Contrary to both Smith and Ricardo before him, Mill took an optimistic view on the future stationary state. Mill could not "... regard the stationary state of capital and wealth with the unaffected aversion so generally manifested toward it by political economists of the old school." Instead, Mill attributed many important qualities to this future state, he even believed the state would bring about "... a very considerable improvement on our present condition." 
According to Mill, the stationary state was at one and the same time inevitable, necessary and desirable: It was inevitable, because the accumulation of capital would bring about a falling rate of profit that would diminish investment opportunities and hamper further accumulation; it was also necessary, because mankind had to learn how to reduce its size and its level of consumption within the boundaries set by nature and by employment opportunities; finally, the stationary state was desirable, as it would ease the introduction of public income redistibution schemes, create more equality and put an end to man's ruthless struggle to get by — instead, the human spirit would be liberated to the benefit of more elevated social and cultural activities, 'the graces of life'. Hence, Mill was able to express all of his liberal ideals for mankind through his concept of the stationary state. It has been argued that Mill essentially made a quality-of-life argument for the stationary state. Main developments in economics since Mill When the influence of John Stuart Mill and his Principles declined, the classical-liberalist period of economic theorising came to an end. By the turn of the 19th century, Marxism and neoclassical economics had emerged to dominate economics: Although a classical economist in his own right, Karl Marx abandoned the earlier concept of a stationary state and replaced it with his own unique vision of historical materialism, according to which human societies pass through several 'modes of production', eventually leading to communism. In each mode of production, man's increasing mastery over nature and the 'productive forces' of society develop to a point where the class conflict bursts into revolutions, followed by the establishment of a new mode of production. In opposition to his liberalist predecessors in the field, Marx did not regard natural resource scarcity as a factor constraining future economic growth; instead, the capitalist mode of production was to be overturned before the productive forces of society could fully develop, bringing about an abundance of goods in a new society based on the principle of "from each according to ability, to each according to need" — that is, communism. The assumption, based on technological optimism, was that communism would overcome any resource scarcity ever to be encountered. For ideological reasons, then, orthodox Marxism has mostly been opposed to any concern with natural resource scarcity ever since Marx's own day. However, the march of history has been hard on this ideology: By 1991, German sociologist Reiner Grundmann was able to make the rather sweeping observation that "Orthodox Marxism has vanished from the scene, leftism has turned green, and Marxists have become ecologists." In neoclassical economics, on the other hand, the preoccupation with society's long term growth and development inherent in classical economics was abandoned altogether; instead, economic analysis came to focus on the study of the relationship between given ends and given scarce means, forming the concept of general equilibrium theory within an essentially static framework. Hence, neoclassical economics achieved greater generality, but only by asking easier questions; and any concern with natural resource scarcity was neglected. 
For this reason, modern ecological economists have deplored the simplified and ecologically harmful features of neoclassical economics: It has been argued that neoclassical economics has become a pseudoscience of choice between anything in general and nothing in particular, while neglecting the preferences of future generations; that the very terminology of neoclassical economics is so ecologically illiterate as to rarely even refer to natural resources or ecological limits; and that neoclassical economics has developed to become a dominant free market ideology legitimizing an ideal of society resembling a perpetual motion machine of economic growth at intolerable environmental and human costs. Taken together, it has been argued that "... if Judeo-Christian monotheism took nature out of religion, Anglo-American economists (after about 1880) took nature out of economics." Almost one century later, Herman Daly has reintegrated nature into economics in his concept of a steady-state economy (see below). John Maynard Keynes's concept of reaching saturation John Maynard Keynes was the paradigm founder of modern macroeconomics, and is widely considered today to be the most influential economist of the 20th century. Keynes rejected the basic tenet of classical economics that free markets would lead to full employment by themselves. Consequently, he recommended government intervention to stimulate aggregate demand in the economy, a macroeconomic policy now known as Keynesian economics. Keynes also believed that capital accumulation would reach saturation at some point in the future. In his essay from 1930 on The Economic Possibilities of Our Grandchildren, Keynes ventured to look one hundred years ahead into the future and predict the standard of living in the 21st century. Writing at the beginning of the Great Depression, Keynes rejected the prevailing "bad attack of economic pessimism" of his own time and foresaw that by 2030, the grandchildren of his generation would live in a state of abundance, where saturation would have been reached. People would find themselves liberated from such economic activities as saving and capital accumulation, and be able to get rid of 'pseudo-moral principles' — avarice, exaction of interest, love of money — that had characterized capitalistic societies so far. Instead, people would devote themselves to the true art of life, to live "wisely and agreeably and well." Mankind would finally have solved "the economic problem," that is, the struggle for existence. The similarity between John Stuart Mill's concept of the stationary state (see above) and Keynes's predictions in this essay has been noted. It has been argued that although Keynes was right about future growth rates, he underestimated the inequalities prevailing today, both within and across countries. He was also wrong in predicting that greater wealth would induce more leisure spent; in fact, the reverse trend seems to be true. In his magnum opus on The General Theory of Employment, Interest and Money, Keynes looked only one generation ahead into the future and predicted that state intervention balancing aggregate demand would by then have caused capital accumulation to reach the point of saturation. The marginal efficiency of capital as well as the rate of interest would both be brought down to zero, and — if population was not increasing rapidly — society would finally "... 
attain the conditions of a quasi-stationary community where change and progress would result only from changes in technique, taste, population and institutions ..." Keynes believed this development would bring about the disappearance of the rentier class, something he welcomed: Keynes argued that rentiers incurred no sacrifice for their earnings, and their savings did not lead to productive investments unless aggregate demand in the economy was sufficiently high. "I see, therefore, the rentier aspect of capitalism as a transitional phase which will disappear when it has done its work." Post-war economic expansion and emerging ecological concerns The economic expansion following World War II took place while mainstream economics largely neglected the importance of natural resources and environmental constraints in the development. Addressing this discrepancy, ecological concerns emerged in academia around 1970. Later on, these concerns developed into the formation of ecological economics as an academic subdiscipline in economics. Post-war economic expansion and the neglect of mainstream economics After the ravages of World War II, the industrialised part of the world experienced almost three decades of unprecedented and prolonged economic expansion. This expansion — known today as the Post–World War II economic expansion — was brought about by international financial stability, low oil prices and ever increasing labour productivity in manufacturing. During the era, all the advanced countries who founded — or later joined — the OECD enjoyed robust and sustained growth rates as well as full employment. In the 1970s, the expansion ended with the 1973 oil crisis, resulting in the 1973–75 recession and the collapse of the Bretton Woods monetary system. Throughout this era, mainstream economics — dominated by both neoclassical economics and Keynesian economics — developed theories and models where natural resources and environmental constraints were neglected. Conservation issues related specifically to agriculture and forestry were left to specialists in the subdiscipline of environmental economics at the margins of the mainstream. As the theoretical framework of neoclassical economics — namely general equilibrium theory — was uncritically adopted and maintained by even environmental economics, this subdiscipline was rendered largely unable to consider important issues of concern to environmental policy. In the years around 1970, the widening discrepancy between an ever-growing world economy on the one hand, and a mainstream economics discipline not taking into account the importance of natural resources and environmental constraints on the other hand, was finally addressed — indeed, challenged — in academia by a few unorthodox economists and researchers. Emerging ecological concerns During the short period of time from 1966 to 1972, four works were published addressing the importance of natural resources and the environment to human society: In his 1966 philosophical-minded essay on The Economics of the Coming Spaceship Earth, economist and systems scientist Kenneth E. Boulding argued that mankind would soon have to adapt to economic principles much different than the past 'open earth' of illimitable plains and exploitative behaviour. 
On the basis of the thermodynamic principle of the conservation of matter and energy, Boulding developed the view that the flow of natural resources through the economy is a rough measure of the Gross national product (GNP); and, consequently, that society should start regarding the GNP as a cost to be minimized rather than a benefit to be maximized. Therefore, mankind would have to find its place in a cyclical ecological system without unlimited reservoirs of anything, either for extraction or for pollution — like a spaceman on board a spaceship. Boulding was not the first to make use of the 'Spaceship Earth' metaphor, but he was the one who combined this metaphor with the analysis of natural resource flows through the economy. In his 1971 magnum opus on The Entropy Law and the Economic Process, Romanian American economist Nicholas Georgescu-Roegen integrated the thermodynamic concept of entropy with economic analysis, and argued that all natural resources are irreversibly degraded when put to use in economic activity. What happens in the economy is that all matter and energy is transformed from states available for human purposes (valuable natural resources) to states unavailable for human purposes (valueless waste and pollution). In the history of economic thought, Georgescu-Roegen was also the first economist of some standing to theorise on the premise that all of earth's mineral resources will eventually be exhausted at some point (see below). Also in 1971, pioneering ecologist and general systems analyst Howard T. Odum published his book on Environment, Power and Society, where he described human society in terms of ecology. He formulated the maximum power principle, according to which all organisms, ecosystems and human societies organise themselves in order to maximize their use of available energy for survival. Odum pointed out that those human societies with access to the higher quality of energy sources enjoyed an advantage over other societies in the Darwinian evolutionary struggle. Odum later co-developed the concept of emergy (i.e., embodied energy) and made other valuable contributions to ecology and systems analysis. His work provided the biological term 'ecology' with its broader societal meaning used today. In 1972, environmental scientist and systems analyst Donella Meadows and her team of researchers had their study on The Limits to Growth published by the Club of Rome. The Meadows team modelled aggregate trends in the world economy and made the projection — not prediction — that by the mid to latter part of the 21st century, industrial production per capita, food supply per capita and world population would all reach a peak, and then rapidly decline in a vicious overshoot-and-collapse trajectory. Due to its dire pessimism, the study was scorned and dismissed by most mainstream economists at the time of its publication. However, well into the 21st century, several independent researchers have confirmed that world economic trends so far do indeed match up to the original 'standard run' projections made by the Meadows team, indicating that a global collapse may still loom large in the not too distant future. Taken together, these four works were seminal in bringing about the formation of ecological economics later on. 
Formation of ecological economics as an academic subdiscipline Although most of the theoretical and foundational work behind ecological economics was in place by the early 1970s, a long gestation period elapsed before this new academic subdiscipline in economics was properly named and institutionalized. Ecological economics was formally founded in 1988 as the culmination of a series of conferences and meetings through the 1980s, where key scholars interested in the ecology-economy interdependency were interacting with each other. The most important people involved in the establishment were Herman Daly and Robert Costanza from the US; AnnMari Jansson from Sweden; and Juan Martínez-Alier from Spain (Catalonia). Since 1989, the discipline has been organised in the International Society for Ecological Economics that publishes the journal of Ecological Economics. When the ecological economics subdiscipline was established, Herman Daly's 'preanalytic vision' of the economy was widely shared among the members who joined in: The human economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment), and any subsystem of a fixed nongrowing system must itself at some point also become nongrowing. Indeed, it has been argued that the subdiscipline itself was born out of frustration with the unwillingness of the established disciplines to accept this vision. However, ecological economics has since been overwhelmed by the influence and domination of neoclassical economics and its everlasting free market orthodoxy. This development has been deplored by activistic ecological economists as an 'incoherent', 'shallow' and overly 'pragmatic' slide. Herman Daly's concept of a steady-state economy In the 1970s, Herman Daly became the world's leading proponent of a steady-state economy. Throughout his career, Daly published several books and articles on the subject. He also helped to found the Center for the Advancement of the Steady-State Economy (CASSE). He received several prizes and awards in recognition of his work. According to two independent comparative studies of American Daly's steady-state economics versus the later, competing school of degrowth from continental Europe, no differences of analytical substance exist between the two schools; only, Daly's bureaucratic — or even technocratic — top-down management of the economy fares badly with the more radical grassroots appeal of degrowth, as championed by French political scientist Serge Latouche (see below). The premise underlying Daly's concept of a steady-state economy is that the economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment). The economy is maintained by importing low-entropy matter-energy (resources) from nature; these resources are put through the economy, being transformed and manufactured into goods along the way; eventually, the throughput of matter-energy is exported to the environment as high-entropy waste and pollution. Recycling of material resources is possible, but only by using up some energy resources as well as an additional amount of other material resources; and energy resources, in turn, cannot be recycled at all, but are dissipated as waste heat. Out of necessity, then, any subsystem of a fixed nongrowing system must itself at some point also become nongrowing. Daly argues that nature has provided basically two sources of wealth at man's disposal, namely a stock of terrestrial mineral resources and a flow of solar energy. 
An 'asymmetry' between these two sources of wealth exist in that we may — within some practical limits — extract the mineral stock at a rate of our own choosing (that is, rapidly), whereas the flow of solar energy is reaching earth at a rate beyond human control. Since the Sun will continue to shine on earth at a fixed rate for billions of years to come, it is the terrestrial mineral stock — and not the Sun — that constitutes the crucial scarcity factor regarding man's economic future. Daly points out that today's global ecological problems are rooted in man's historical record: Until the Industrial Revolution that took place in Britain in the second half of the 18th century, man lived within the limits imposed by what Daly terms a 'solar-income budget': The Palaeolithic tribes of hunter-gatherers and the later agricultural societies of the Neolithic and onwards subsisted primarily — though not exclusively — on earth's biosphere, powered by an ample supply of renewable energy, received from the Sun. The Industrial Revolution changed this situation completely, as man began extracting the terrestrial mineral stock at a rapidly increasing rate. The original solar-income budget was thereby broken and supplemented by the new, but much scarcer source of wealth. Mankind still lives in the after-effect of this revolution. Daly cautions that more than two hundred years of worldwide industrialisation is now confronting mankind with a range of problems pertaining to the future existence and survival of our species: Following the work of Nicholas Georgescu-Roegen, Daly argues that the laws of thermodynamics restrict all human technologies and apply to all economic systems: This view on the role of technology in the economy was later termed 'entropy pessimism' (see below). In Daly's view, mainstream economists tend to regard natural resource scarcity as only a relative phenomenon, while human needs and wants are granted absolute status: It is believed that the price mechanism and technological development (however defined) is capable of overcoming any scarcity ever to be faced on earth; it is also believed that all human wants could and should be treated alike as absolutes, from the most basic necessities of life to the extravagant and insatiable craving for luxuries. Daly terms this belief 'growthmania', which he finds pervasive in modern society. In opposition to the dogma of growthmania, Daly submits that "... there is such a thing as absolute scarcity, and there is such a thing as purely relative and trivial wants". Once it is recognised that scarcity is imposed by nature in an absolute form by the laws of thermodynamics and the finitude of earth; and that some human wants are only relative and not worthy of satisfying; then we are all well on the way to the paradigm of a steady-state economy, Daly concludes. Consequently, Daly recommends that a system of permanent government restrictions on the economy is established as soon as possible, a steady-state economy. Whereas the classical economists believed that the final stationary state would settle by itself as the rate of profit fell and capital accumulation came to an end (see above), Daly wants to create the steady-state politically by establishing three institutions of the state as a superstructure on top of the present market economy: The first institution is to correct inequality to some extent by putting minimum and maximum limits on incomes, maximum limits on wealth, and then redistribute accordingly. 
The second institution is to stabilise the population by issuing transferable reproduction licenses to all fertile women at a level corresponding to the general replacement fertility in society. The third institution is to stabilise the level of capital by issuing and selling depletion quotas that impose quantitative restrictions on the flow of resources through the economy. Quotas effectively minimise the throughput of resources necessary to maintain any given level of capital (as opposed to taxes, which merely alter the prevailing price structure). The purpose of these three institutions is to stop and prevent further growth by combining what Daly calls "a nice reconciliation of efficiency and equity" and providing "the ecologically necessary macrocontrol of growth with the least sacrifice in terms of microlevel freedom and variability." Among the generation of his teachers, Daly ranks Nicholas Georgescu-Roegen and Kenneth E. Boulding as the two economists he has learned the most from. However, both Georgescu-Roegen and Boulding have assessed that a steady-state economy may serve only as a temporary societal arrangement for mankind when facing the long-term issue of global mineral resource exhaustion: Even with a constant stock of people and capital, and a minimised (yet constant) flow of resources put through the world economy, earth's mineral stock will still be exhausted, although at a slower rate than at present (see below). Responding specifically to the criticism levelled at him by Georgescu-Roegen, Daly concedes that a steady-state economy will serve only to postpone, and not to prevent, the inevitable mineral resource exhaustion: "A steady-state economy cannot last forever, but neither can a growing economy, nor a declining economy". A frank and committed Protestant, Daly further argues that... Later, several other economists in the field have agreed that not even a steady-state economy can last forever on earth. Ecological reasons for a steady-state economy In 2021, a study examined whether the current situation confirms the projections of The Limits to Growth. It concluded that global GDP will begin to decline within about ten years, a decline that will come about either through a deliberate transition or through ecological disaster. Planetary boundaries The world's mounting ecological problems have stimulated interest in the concept of a steady-state economy. Since the 1990s, most metrics have provided evidence that the volume of the world economy already far exceeds critical global limits to economic growth. According to the ecological footprint measure, Earth's carrying capacity — that is, Earth's long-term capacity to sustain human populations and consumption levels — was exceeded by some 30 percent in 1995. By 2018, this figure had increased to some 70 percent. In 2020, a multinational team of scientists published a study arguing that overconsumption is the biggest threat to sustainability. According to the study, a drastic change in lifestyle is necessary to resolve the ecological crisis. According to one of the authors, Julia Steinberger: "To protect ourselves from the worsening climate crisis, we must reduce inequality and challenge the notion that riches, and those who possess them, are inherently good." The research was published on the website of the World Economic Forum. The leader of the forum, Professor Klaus Schwab, calls for a "great reset of capitalism". 
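A hedged arithmetic aside on the footprint figures just cited: if the overshoot of Earth's carrying capacity grew smoothly from about 30 percent in 1995 to about 70 percent in 2018, the implied average annual growth rate can be computed as below. The assumption of smooth exponential growth is introduced here purely for illustration.

import math

# Illustrative calculation only: implied average annual growth of global
# ecological overshoot, assuming the footprint-to-biocapacity ratio rose
# smoothly from 1.30 (1995) to 1.70 (2018), the figures cited above.
ratio_1995, ratio_2018 = 1.30, 1.70
years = 2018 - 1995

annual_growth = (ratio_2018 / ratio_1995) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_growth)

print(f"Implied annual growth of overshoot: {annual_growth:.2%}")        # about 1.2%
print(f"Doubling time at that rate:         {doubling_time:.0f} years")  # about 60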
In effect, mankind is confronted by an ecological crisis in which humans are living outside of planetary boundaries, which will have significant effects on human health and wellbeing. The significant impact of human activities on Earth's ecosystems has motivated some geologists to propose the present epoch be named the anthropocene. The following issues have raised much concern worldwide: Pollution and global warming Air pollution emanating from motor vehicles and industrial plants is damaging public health and increasing mortality rates. The concentration of carbon dioxide and other greenhouse gases in the atmosphere is the apparent source of global warming and climate changes. Extreme regional weather patterns and rising sea levels caused by warming degrade living conditions in many — if not all — parts of the world. The warming already poses a security threat to many nations and works as a so-called 'threat multiplier' to geo-political stability. Even worse, the loss of Arctic permafrost may be triggering a massive release of methane and other greenhouse gases from thawing soils in the region, thereby overwhelming political action to counter climate change. If critical temperature thresholds are crossed, Earth's climate may shift from an 'icehouse' to a 'greenhouse' state for the first time in 34 million years. One of the most commonly proposed solutions to the climate crisis is the transition to renewable energy, but this also has environmental impacts of its own. Proponents of theories such as degrowth, the steady-state economy and the circular economy present these impacts as evidence that technological methods alone are not enough to achieve sustainability and that there is also a need to limit consumption. In 2019, a new report, "Plastic and Climate", was published. According to the report, plastic would contribute greenhouse gases equivalent to 850 million tons of carbon dioxide (CO2) to the atmosphere in 2019. On current trends, annual emissions will grow to 1.34 billion tons by 2030. By 2050, plastic could be responsible for 56 billion tons of greenhouse gas emissions, as much as 14 percent of the Earth's remaining carbon budget, not counting its harm to phytoplankton. The report says that only solutions which involve a reduction in consumption can solve the problem, while other measures, such as biodegradable plastic, ocean cleanup and the use of renewable energy in the plastic industry, can do little and in some cases may even worsen it. Another report, covering the full range of environmental and health effects of plastic, reaches the same conclusion. Depletion of non-renewable minerals Non-renewable mineral reserves are currently extracted at high and unsustainable rates from Earth's crust. Remaining reserves are likely to become ever more costly to extract in the near future, and will reach depletion at some point. The era of relatively peaceful economic expansion that has prevailed globally since World War II may be interrupted by unexpected supply shocks or simply be succeeded by the peaking depletion paths of oil and other valuable minerals. In 2020, for the first time, the rate of use of natural resources surpassed 110 billion tons per year. Economist Jason Hickel has written critically about the ideology of green growth, the idea that as capitalism and its systems expand, natural resources will also expand naturally, making growth compatible with our planet's ecology. This contrasts with the idea of no-growth economics, or degrowth economics, in which the sustainability and stability of the economy are prioritized over the uncontrolled profit of those in power.
Models around creating development in communities have found that failing to account for sustainability in early stages leads to failure in the long term. These models contradict green growth theory and do not support ideas about expansion of natural resources. Additionally, those living in poorer areas tend to be exposed to higher levels of toxins and pollutants as a result of systematic environmental racism. Increasing natural resources and increasing local involvement in their distribution are potential solutions to alleviate pollution and address poverty in these areas. Net depletion of renewable resources Use of renewable resources in excess of their replenishment rates is undermining ecological stability worldwide. Between 2000 and 2012, deforestation resulted in some 14 percent of the equivalent of Earth's original forest cover to be cut down. Tropical rainforests have been subject to deforestation at a rapid pace for decades — especially in west and central Africa and in Brazil — mostly due to subsistence farming, population pressure, and urbanization. Population pressures also strain the world's soil systems, leading to land degradation, mostly in developing countries. Global erosion rates on conventional cropland are estimated to exceed soil creation rates by more than ten times. Widespread overuse of groundwater results in water deficits in many countries. By 2025, water scarcity could impact the living conditions of two-thirds of the world's population. Loss of biodiversity The destructive impact of human activity on wildlife habitats worldwide is accelerating the extinction of rare species, thereby substantially reducing Earth's biodiversity. The natural nitrogen cycle is heavily overloaded by industrial nitrogen fixation and use, thereby disrupting most known types of ecosystems. The accumulating plastic debris in the oceans decimates aquatic life. Ocean acidification due to the excess concentration of carbon dioxide in the atmosphere is resulting in coral bleaching and impedes shell-bearing organisms. Arctic sea ice decline caused by global warming is endangering the polar bear. In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. The report was finalised in Paris. The main conclusions: Over the last 50 years, the state of nature has deteriorated at an unprecedented and accelerating rate. The main drivers of this deterioration have been changes in land and sea use, exploitation of living beings, climate change, pollution and invasive species. These five drivers, in turn, are caused by societal behaviors, from consumption to governance. Damage to ecosystems undermines 35 of 44 selected UN targets, including the UN General Assembly's Sustainable Development Goals for poverty, hunger, health, water, cities' climate, oceans and land. It can cause problems with food, water and humanity's air supply. To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management. 
On page 8 of the summary, the authors state that one of the main measures is "enabling visions of a good quality of life that do not entail ever-increasing material consumption". These mounting concerns have prompted an increasing number of academics and other writers — besides Herman Daly — to point to limits to economic growth, and to question — and even oppose — the prevailing ideology of infinite economic growth. In September 2019, one day before the Global Climate Strike of 20 September 2019, an article was published in The Guardian that summarizes a large body of research and says that limiting consumption is necessary for saving the biosphere. Steady-state economy and well-being Besides the reasons linked to resource depletion and the carrying capacity of the ecological system, there are other reasons to limit consumption: overconsumption hurts the well-being of those who consume too much. During the same period in which the ecological footprint of humanity exceeded the sustainable level, and while GDP more than tripled from 1950, one of the measures of well-being, the genuine progress indicator, has fallen since 1978. This is one of the reasons for pursuing a steady-state economy. In some cases, reducing consumption can increase the standard of living. In Costa Rica, GDP is about a quarter of the level in many countries in Western Europe and North America, but people live longer and better. An American study shows that above an income of $75,000, a further increase in income does not increase well-being. To better measure well-being, the New Economics Foundation has launched the Happy Planet Index. The food industry is a large sector of consumption, responsible for 37% of global greenhouse-gas emissions, and studies show that people waste a fifth of food products just through disposal or overconsumption. By the time food reaches the consumer, 9% (160 million tons) goes uneaten and 10% is lost to overconsumption, meaning that consumers eat more than their calorie requirement. When consumers take in too much, this not only drives losses (and overproduction) at the production stage but also amounts to overconsumption of energy and protein, with harmful effects on the body such as obesity. A report from the Lancet Commission reaches the same conclusion. The experts write: "Until now, undernutrition and obesity have been seen as polar opposites of either too few or too many calories [...] In reality, they are both driven by the same unhealthy, inequitable food systems, underpinned by the same political economy that is single-focused on economic growth, and ignores the negative health and equity outcomes. Climate change has the same story of profits and power". Already in ancient Rome, obesity was a medical problem for people who overconsumed food and worked too little, and its impact slowly grew through history. As of 2012, mortality from obesity was three times higher than mortality from hunger, reaching 2.8 million people per year by 2017. Cycling reduces greenhouse gas emissions while reducing the effects of a sedentary lifestyle at the same time. As of 2002, a sedentary lifestyle claimed 2 million lives per year. The World Health Organization stated that "60 to 85% of people in the world—from both developed and developing countries—lead sedentary lifestyles, making it one of the more serious yet insufficiently addressed public health problems of our time." By 2012, according to a study published in The Lancet, the number reached 5.3 million.
Reducing the use of screens can help fight many diseases, among them depression, the leading cause of disability globally. It can also lower greenhouse gas emissions. As of 2018, 3.7% of global emissions came from digital technologies, more than from aviation; the figure is expected to reach 8% by 2025, equal to the emissions from cars. Reducing light pollution can reduce greenhouse-gas emissions and improve health. In September 2019, one day before the Global Climate Strike of 20 September 2019, an article was published in The Guardian that summarizes much research and says that limiting consumption is necessary for the health of overconsumers: it can increase empathy, improve contact with other people, and more. Connection with other ideologies and movements The concept of a steady-state economy is connected to other concepts that can broadly be classified under ecological economics and anti-consumerism, because it serves as the final target of those concepts: these ideologies are not calling for poverty but want to reach a level of consumption that is best for people and the environment. Degrowth The Center for the Advancement of the Steady State Economy (CASSE) defines a steady-state economy not only as an economy with some constant level of consumption, but as an economy with the best possible level of consumption maintained constantly. To define what that level is, it considers not only ecology, but also living standards. The Center writes: "In cases where the benefits of growth outweigh the costs (for example, where people are not consuming enough to meet their needs), growth or redistribution of resources may be required. In cases where the size of the economy has surpassed the carrying capacity of the ecosystems that contain it (a condition known as overshoot), degrowth may be required before establishing a steady state economy that can be maintained over the long term". In February 2020, the same organization proposed the slogan "Degrowth Toward a Steady State Economy" because it can unite degrowthers and steady staters. The statement mentions that "[i]n 2018 the nascent DegrowUS adopted the mission statement, "Our mission is a democratic and just transition to a smaller, steady state economy in harmony with nature, family, and community." In his article on Economic de-growth vs. steady-state economy, Christian Kerschner has integrated the strategy of declining-state, or degrowth, with Herman Daly's concept of the steady-state economy to the effect that degrowth should be considered a path taken by the rich industrialized countries leading towards a globally equitable steady-state economy. This ultra-egalitarian path will then make ecological room for poorer countries to catch up and combine into a final world steady-state, maintained at some internationally agreed upon intermediate and 'optimum' level of activity for some period of time — although not forever. Kerschner admits that this goal of a world steady-state may remain unattainable in the foreseeable future, but such seemingly unattainable goals could stimulate visions about how to better approach them. The concept of Overdevelopment by Leopold Kohr In 1977 Leopold Kohr published a book titled The Overdeveloped Nations: The Diseconomies Of Scale, dealing primarily with overconsumption.
This book is the basis for the theory of overdevelopment, which holds that the global north, the rich countries, is too developed, which increases the ecological footprint of humanity and creates many problems in both overdeveloped and underdeveloped countries. Conceptual and ideological disagreements Several conceptual and ideological disagreements presently exist concerning the steady-state economy in particular and the dilemma of growth in general. The following issues are considered below: The role of technology; resource decoupling and the rebound effect; a declining-state economy; the possibility of having capitalism without growth; and the possibility of pushing some of the terrestrial limits into outer space. In 2019, a study was published presenting an overview of the attempts to achieve continued economic growth without environmental destruction and their results. It shows that, as of 2019, these attempts had not been successful. It does not give a clear answer about future attempts. Herman Daly's approach to these issues is presented throughout the text. Role of technology Technology is usually defined as the application of the scientific method in the production of goods or in other social achievements. Historically, technology has mostly been developed and implemented in order to improve labour productivity and increase living standards. In economics, disagreement presently exists regarding the role of technology when considering its dependency on natural resources: In neoclassical economics, on the one hand, the role of 'technology' is usually represented as yet another factor of production contributing to economic growth, just as land, labour and capital do. However, in neoclassical production functions, where the output of produced goods is related to the inputs provided by the factors of production, no mention is made of the contribution of natural resources to the production process. Hence, 'technology' is reified as a separate, self-contained device, capable of contributing to production without receiving any natural resource inputs beforehand. This representation of 'technology' also prevails in standard mainstream economics textbooks on the subject. In ecological economics, on the other hand, 'technology' is represented as the way natural resources are transformed in the production process. Hence, Herman Daly argues that the role of technology in the economy cannot be properly conceptualized without taking into account the flow of natural resources necessary to support the technology itself: An internal combustion engine runs on fuels; machinery and electric devices run on electricity; all capital equipment is made out of material resources to begin with. In physical terms, any technology — useful though it is — works largely as a medium for transforming valuable natural resources into material goods that eventually end up as valueless waste and pollution, thereby increasing the entropy — or disorder — of the world as a whole. This view of the role of technology in the economy has been termed 'entropy pessimism'. From the ecological point of view, it has been suggested that the disagreement boils down to a matter of teaching some elementary physics to the uninitiated neoclassical economists and other technological optimists.
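To make the contrast concrete, the two views can be written down schematically. The forms below are standard textbook expressions, not formulas taken from Daly's or any other cited author's work: the first is a typical neoclassical (Cobb-Douglas type) production function in which output depends only on technology, capital and labour; the second adds an explicit flow of natural resources, as ecological economists insist is necessary.

    Y = A\,K^{\alpha} L^{\beta} \qquad \text{versus} \qquad Y = A\,K^{\alpha} L^{\beta} R^{\gamma}, \quad \gamma > 0

Here Y is output, A is 'technology' (total factor productivity), K is capital, L is labour and R is the flow of natural resources. In the first form, output can apparently be produced, and grow, without any resource throughput at all, which is precisely the reification criticised above.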
From the neoclassical point of view, leading growth theorist and Nobel Prize laureate Robert Solow has defended his much criticised position by replying in 1997 that 'elementary physics' has not by itself prevented growth in the industrialized countries so far. Resource decoupling and the rebound effect Resource decoupling occurs when economic activity becomes less intensive ecologically: A declining input of natural resources is needed to produce one unit of output on average, measured by the ratio of total natural resource consumption to gross domestic product (GDP). Relative resource decoupling occurs when natural resource consumption declines on a ceteris paribus assumption — that is, all other things being equal. Absolute resource decoupling occurs when natural resource consumption declines, even while GDP is growing. In the history of economic thought, William Stanley Jevons was the first economist of some standing to analyse the occurrence of resource decoupling, although he did not use this term. In his 1865 book on The Coal Question, Jevons argued that an increase in energy efficiency would by itself lead to more, not less, consumption of energy: Due to the income effect of the lowered energy expenditures, people would be rendered better off and demand even more energy, thereby outweighing the initial gain in efficiency. This mechanism is known today as the Jevons paradox or the rebound effect. Jevons's analysis of this seeming paradox formed part of his general concern that Britain's industrial supremacy in the 19th century would soon be set back by the inevitable exhaustion of the country's coal mines, whereupon the geopolitical balance of power would tip in favour of countries abroad possessing more abundant mines. In 2009, two separate studies were published that — among other things — addressed the issues of resource decoupling and the rebound effect: German scientist and politician Ernst Ulrich von Weizsäcker published Factor Five: Transforming the Global Economy through 80% Improvements in Resource Productivity, co-authored with a team of researchers from The Natural Edge Project. British ecological economist Tim Jackson published Prosperity Without Growth, drawing extensively from an earlier report authored by him for the UK Sustainable Development Commission. Consider each in turn: Ernst Ulrich von Weizsäcker argues that a new economic wave of innovation and investment — based on increasing resource productivity, renewable energy, industrial ecology and other green technology — will soon kick off a 'Green Kondratiev' cycle, named after the Russian economist Nikolai Kondratiev. This new long-term cycle is expected to bring about as much as an 80 percent increase in resource productivity, or what amounts to a 'Factor Five' improvement of the gross input per output ratio in the economy, and reduce environmental impact accordingly, von Weizsäcker promises. Regarding the adverse rebound effect, von Weizsäcker notes that "... efforts to improve efficiency have been fraught with increasing overall levels of consumption." As remedies, von Weizsäcker recommends three separate approaches: Recycling of and imposing restrictions on the use of materials; establishing capital funds from natural resource proceeds for reinvestments in order to compensate for the future bust caused by depletion; and finally, taxing resource consumption so as to balance it with the available supplies. 
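The distinction between relative and absolute decoupling defined at the start of this section can be illustrated with a small computation. The sketch below uses purely invented numbers, not data from Jackson, von Weizsäcker or any other source: resource intensity is the ratio of resource use to GDP, relative decoupling means that this intensity falls, and absolute decoupling additionally requires that resource use itself falls while GDP grows.

    # Illustrative sketch of relative vs. absolute resource decoupling.
    # All numbers are hypothetical and chosen only for demonstration.
    gdp      = [100, 110, 121, 133]   # GDP index over four years (growing)
    resource = [ 50,  52,  54,  56]   # resource use in the same years

    for year in range(1, len(gdp)):
        intensity_prev = resource[year - 1] / gdp[year - 1]
        intensity_now  = resource[year] / gdp[year]
        relative = intensity_now < intensity_prev                     # intensity falls
        absolute = relative and resource[year] < resource[year - 1]   # use itself falls too
        print(f"year {year}: intensity {intensity_now:.3f}, "
              f"relative decoupling: {relative}, absolute decoupling: {absolute}")

    # With these numbers the economy decouples only in the relative sense:
    # intensity falls every year, yet total resource use keeps rising.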
Tim Jackson points out that according to empirical evidence, the world economy has indeed experienced some relative resource decoupling: In the period from 1970 to 2009, the 'energy intensity' — that is, the energy content embodied in world GDP—decreased by 33 percent; but as the world economy also kept growing, carbon dioxide emissions from fossil fuels have increased by 80 percent during the same period of time. Hence, no absolute energy resource decoupling materialized. Regarding key metal resources, the development was even worse in that not even relative resource decoupling have materialized in the period from 1990 to 2007: The extraction of iron ore, bauxite, copper and nickel was rising faster than world GDP to the effect that "resource efficiency is going in the wrong direction," mostly due to emerging economies — notably China — building up their infrastructure. Jackson concludes his survey by noting that the 'dilemma of growth' is evident when any resource efficiency squeezed out of the economy will sooner or later be pushed back up again by a growing GDP. Jackson further cautions that "simplistic assumptions that capitalism's propensity for efficiency will stabilize the climate and solve the problem of resource scarcity are almost literally bankrupt." Herman Daly has argued that the best way to increase natural resource efficiency (decouple) and to prevent the occurrence of any rebound effects is to impose quantitative restrictions on resource use by establishing a cap and trade system of quotas, managed by a government agency. Daly believes this system features a unique triple advantage: Absolute and permanent limits are set on the extraction rate of, use of and pollution with the resources flowing through the economy; as opposed to taxes that merely alter the prevailing price structure without stopping growth; and as opposed to pollution standards and control which are both costly and difficult to enact and enforce. More efficiency and recycling efforts are induced by the higher resource prices resulting from the restrictions (quota prices plus regular extraction costs). No rebound effects are able to appear, as any temporary excess demand will result only in inflation or shortages, or both — and not in increased supply, which is to remain constant and limited on a permanent basis. For all its merits, Daly himself points to the existence of physical, technological and practical limitations to how much efficiency and recycling can be achieved by this proposed system. The idea of absolute decoupling ridding the economy as a whole of any dependence on natural resources is ridiculed polemically by Daly as 'angelizing GDP': It would work only if we ascended to become angels ourselves. Declining-state economy A declining-state economy is an economy made up of a declining stock of physical wealth (capital) or a declining population size, or both. A declining-state economy is not to be confused with a recession: Whereas a declining-state economy is established as the result of deliberate political action, a recession is the unexpected and unwelcome failure of a growing or a steady economy. Proponents of a declining-state economy generally believe that a steady-state economy is not far-reaching enough for the future of mankind. 
Some proponents may even reject modern civilization as such, either partly or completely, whereby the concept of a declining-state economy begins to border on the ideology of anarcho-primitivism, on radical ecological doomsaying or on some variants of survivalism. Romanian American economist Nicholas Georgescu-Roegen was the teacher and mentor of Herman Daly and is presently considered the main intellectual figure influencing the degrowth movement that formed in France and Italy in the early 2000s. In his paradigmatic magnum opus on The Entropy Law and the Economic Process, Georgescu-Roegen argues that the carrying capacity of earth — that is, earth's capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse. In effect, Georgescu-Roegen points out that the arguments advanced by Herman Daly in support of his steady-state economy apply with even greater force in support of a declining-state economy: When the overall purpose is to ration and stretch mineral resource use for as long into the future as possible, zero economic growth is more desirable than growth, true; but negative growth is better still! Instead of Daly's steady-state economics, Georgescu-Roegen proposed his own so-called 'minimal bioeconomic program', featuring restrictions even more severe than those propounded by his former student Daly (see above). American political advisor Jeremy Rifkin, French champion of the degrowth movement Serge Latouche and Austrian degrowth theorist Christian Kerschner — who all take their cue from Georgescu-Roegen's work — have argued in favour of declining-state strategies. Consider each in turn: In his book on Entropy: A New World View, Jeremy Rifkin argues that the impending exhaustion of earth's mineral resources will mark the decline of the industrial age, followed by the advent of a new solar age, based on renewable solar power. Due to the diffuse, low-intensity property of solar radiation, this source of energy is incapable of sustaining industrialism, whether capitalist or socialist. Consequently, Rifkin advocates an anarcho-primitivist future solar economy — or what he terms an 'entropic society' — based on anti-consumerism, deindustrialization, counterurbanization, organic farming and prudential restraints on childbirth. Rifkin cautions that the transition to the solar age is likely to become a troublesome phase in the history of mankind, as the present world economy is so dependent on the non-renewable mineral resources. In his manifesto on Farewell to Growth, Serge Latouche develops a strategy of so-called 'ecomunicipalism' to initiate a 'virtuous cycle of quiet contraction' or degrowth of economic activity at the local level of society: Consumption patterns and addiction to work should be reduced; systems of fair taxation and consumption permits should redistribute the gains from economic activity within and among countries; obsolescence and waste should be reduced, and products designed so as to make recycling easier. This bottom-up strategy opposes overconsumption in rich countries as well as the aspiration of poor, emerging countries to that same overconsumption. Instead, the purpose of degrowth is to establish a convivial and sustainable society where people can live better lives whilst working and consuming less.
Latouche further cautions that "the very survival of humanity ... means that ecological concerns must be a central part of our social, political, cultural and spiritual preoccupation with human life." Herman Daly on his part is not opposed to the concept of a declining-state economy; but he does point out that the steady-state economy should serve as a preliminary first step on a declining path, once the optimal levels of population and capital have been properly defined. However, this first step is an important one: Daly concedes that it is 'difficult, probably impossible' to define such optimum levels; even more, in his final analysis Daly agrees with his teacher and mentor Georgescu-Roegen that no defined optimum will be able to last forever (see above). Capitalism without growth Several radical critics of capitalism have questioned the possibility of ever imposing a steady-state or a declining-state (degrowth) system as a superstructure on top of capitalism. Taken together, these critics point to the following growth dynamics inherent in capitalism: Economic activity is generally guided by the profit motive, a competitive work ethos and the drive to accumulate capital and wealth for its own sake to gratify personal ambition, provide social prestige — or simply to get rich in a hurry. Psychologically, these drives in the work sphere repress and distort biological and social homeostasis in most people. Employments and incomes depend directly on sales revenues, that is, on people spending money on the consumption of goods and services for sale on the market. This dependency creates a pecuniary incentive to increase sales as much as possible. To this end, much cunning advertising is devised to manipulate human wants and prop up consumption patterns, often resulting in lavish and wasteful consumerism. The financial system is based on fractional-reserve banking, enabling commercial banks to hold reserves in amounts that are less than their deposit liabilities. This credit creation is multiplying the monetary base supplied by the central bank in order to assist private corporations expanding their activities. Technological development exhibits a strong labour-saving bias, creating the need to provide new employment elsewhere in the economy for workers displaced by the introduction of new technology. Private corporations generally resist government regulations and restrictions that impede profits and deter investment opportunities. Attempts to downscale the economy would rapidly degenerate into economic crisis and political instability on this count alone. Governments need tax revenues to service their debt obligations, run their institutions and finance their welfare programmes for the benefit of the public. Tax revenues are collected from general economic activity. In the capitalist world economy, globalisation intensifies competition everywhere, both within and between countries. National governments are compelled to compete and struggle with each other to provide employment, investments, tax revenues and wealth for their own populations. — In short: There is no end to the systemic and ecologically harmful growth dynamics in modern capitalism, radical critics assert. Fully aware of the massive growth dynamics of capitalism, Herman Daly on his part poses the rhetorical question whether his concept of a steady-state economy is essentially capitalistic or socialistic. 
He provides an answer (written in 1980) and concludes by inviting all (most) people — both liberal supporters of and radical critics of capitalism — to join him in his effort to develop a steady-state economy. Pushing some of the terrestrial limits into outer space Ever since the beginning of the modern Space Age in the 1950s, some space advocates have pushed for space habitation, frequently in the form of colonization, arguing among other reasons that it would alleviate human overpopulation and overconsumption and mitigate the human impact on the environment on Earth. In the 1970s, physicist and space activist Gerard K. O'Neill developed an ambitious plan to build human settlements in outer space to solve the problems of overpopulation and limits to growth on earth without recourse to political repression. According to O'Neill's vision, mankind could — and indeed should — expand on this man-made frontier to many times the current world population and generate large amounts of new wealth in space. Herman Daly countered O'Neill's vision by arguing that a space colony would become subject to much harsher limits to growth — and hence, would have to be secured and managed with much more care and discipline — than a steady-state economy on the large and resilient earth. Although the number of individual colonies supposedly could be increased without end, living conditions in any one particular colony would become very restricted nonetheless. Therefore, Daly concluded: "The alleged impossibility of a steady-state on earth provides a poor intellectual launching pad for space colonies." By the 2010s, O'Neill's old vision of space colonisation had long since been turned upside down in many places: Instead of dispatching colonists from earth to live in remote space settlements, some ecology-minded space advocates conjecture that resources could be mined from asteroids in space and transported back to earth for use here. This new vision has the same double advantage of (partly) mitigating ecological pressures on earth's limited mineral reserves while also boosting exploration and colonisation of space. The building up of industrial infrastructure in space would be required for the purpose, as well as the establishment of a complete supply chain up to the level of self-sufficiency and then beyond, eventually developing into a permanent extraterrestrial source of wealth to provide an adequate return on investment for stakeholders. In the future, such an 'exo-economy' (off-planet economy) could possibly even serve as the first step towards mankind's cosmic ascension to a 'Type II' civilisation on the hypothetical Kardashev scale, if such an ascension is ever accomplished. However, it is as yet uncertain whether an off-planet economy of the type specified will develop in due time to match both the volume and the output mix needed to fully replace earth's dwindling mineral reserves. Sceptics like Herman Daly and others point to the exorbitant earth-to-orbit launch costs of any space mission, the inaccurate identification of target asteroids suitable for mining, and remote in situ ore extraction difficulties as obvious barriers to success: Investing a lot of terrestrial resources in order to recover only a few resources from space in return is not worthwhile in any case, regardless of the scarcities, technologies and other mission parameters involved in the venture.
In addition, even if an off-planet economy could somehow be established at some future point, one long-term predicament would then loom large regarding the continuous mining and transportation of massive volumes of materials from space back to earth: how to keep that volume flowing on a steady and permanent basis in the face of the astronomically long distances and time scales ever present in space. In the worst of cases, all of these obstacles could forever prevent any substantial pushing of limits into outer space — and then limits to growth on earth will remain the only limits of concern throughout mankind's entire span of existence. Implementation Today, a steady-state economy is not implemented officially by any state, but there are some measures that limit growth and imply a steady level of consumption of some products per capita: The phase-out of lightweight plastic bags, which reduces the consumption of bags and limits the number of bags per capita. Reducing the consumption of energy is a very popular measure implemented by many, generally called energy efficiency and energy saving. A coalition named the "3% Club for Energy Efficiency" was formed with a target of increasing energy efficiency by 3% per year. According to the International Energy Agency, energy efficiency can deliver more than 40% of the reduction in greenhouse-gas emissions needed to reach the target of the Paris Agreement. At the 2019 UN Climate Action Summit, a coalition named "Action Towards Climate Friendly Transport" was created; its main targets include city planning that reduces the need for transport and a shift to non-motorized transport systems. Such measures reduce the consumption of fuel. A method with growing popularity is "reduce, reuse and recycle": for example, reusing clothes through the second-hand market and clothes rental. The second-hand market was worth $24 billion as of 2018 and is expected to overtake the fast fashion market in the coming years. The H&M company is trying to implement this approach. Some countries have adopted alternative measures to gross domestic product for gauging success: Bhutan measures success by Gross National Happiness, a measure that has been partly implemented in other countries as well. Other popular measures include Gross National Well-being, the Better Life Index and the Social Progress Index. As of 2014, the Happy Planet Index is used in 153 countries and the OECD Better Life Index in the 36 OECD member countries. Ecuador and Bolivia have included in their constitutions the ideology of Sumac Kawsay (Buen Vivir), which "incorporates ideas of de-growth", i.e. contains some principles of the steady-state economy. See also History of economic thought Circular economy Classical economics Creative destruction Criticism of capitalism Degrowth Ecological economics Economic equilibrium Post-growth The Limits to Growth Prosperity Without Growth Market failure: Ecological market failure Environmentalism Ecological footprint Planetary boundaries Planned economy Sustainability: Carrying capacity Human overpopulation Jevons paradox Peak minerals Kenneth E. Boulding Herman Daly Nicholas Georgescu-Roegen: Criticising Daly's steady-state economics Sea level rise References External links Websites CASSE, Center for the Advancement of the Steady State Economy. ISEE, The International Society for Ecological Economics. Global Footprint Network. Advancing the Science of Sustainability. Steady State Revolution. Fighting for a Sustainable World with a Steady State Economy. Post Growth Institute.
Creating global prosperity without economic growth. Articles Interviews and other material related to Herman Daly (Lengthy interview spanning fifteen web pages) (Excerpt from his Steady-state economics) (Essay summarizing his views) Economics of sustainability Economic growth Demographic economic problems Human overpopulation Human impact on the environment Global environmental issues Ecological economics Environmental social science Green politics Waste minimisation Energy conservation Natural resource management Schools of economic thought Economic systems Degrowth Future problems Ecological economics concepts
Steady-state economy
Environmental_science
14,336
2,732,992
https://en.wikipedia.org/wiki/S%C3%B8rensen%20formol%20titration
The Sørensen formol titration (SFT), invented by S. P. L. Sørensen in 1907, is a titration of an amino acid with potassium hydroxide in the presence of formaldehyde. It is used in the determination of protein content in samples. If an ammonium salt is used instead of an amino acid, the reaction product with formaldehyde is hexamethylenetetramine. The liberated hydrochloric acid is then titrated with the base, and the amount of ammonium salt used can be determined. With an amino acid, the formaldehyde reacts with the amino group to form a methylene amino (R-N=CH2) group. The remaining acidic carboxylic acid group can then again be titrated with base. In winemaking Formol titration is one of the methods used in winemaking to measure the yeast assimilable nitrogen needed by wine yeast in order to successfully complete fermentation. Accuracy in formol titration There have been some inaccuracies in the SFT caused by differences in the basicity of the nitrogen in different amino acids, which were explained by S. L. Jodidi. For instance, proline (an amino acid), histidine, and lysine yield values that are too low compared with theory. Unlike alpha, monobasic (containing one amino group per molecule) amino acids, the nitrogens of these amino (or imino) acids have inconstant basicity, which results in partial reaction with formaldehyde. In the case of tyrosine, the results are too high due to the phenolic hydroxyl group (-OH), which is itself weakly acidic and consumes additional base. This explanation is supported by the fact that phenylalanine, which lacks this hydroxyl group, can be accurately titrated. References Biochemistry methods Titration
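For reference, the two reactions described above can be written schematically; these are standard balanced forms consistent with the description, not equations reproduced from the article's sources. With an ammonium salt such as ammonium chloride, formaldehyde gives hexamethylenetetramine and liberates the acid:

    4\,\mathrm{NH_4Cl} + 6\,\mathrm{CH_2O} \rightarrow (\mathrm{CH_2})_6\mathrm{N_4} + 4\,\mathrm{HCl} + 6\,\mathrm{H_2O}

With an amino acid, the amino group is blocked as a methylene amino group, leaving the carboxylic acid free to be titrated with the base:

    \mathrm{R{-}CH(NH_2){-}COOH} + \mathrm{CH_2O} \rightarrow \mathrm{R{-}CH(N{=}CH_2){-}COOH} + \mathrm{H_2O}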
Sørensen formol titration
Chemistry,Biology
383
53,887,539
https://en.wikipedia.org/wiki/Dimitri%20Sverjensky
Dimitri Alexander Sverjensky is a professor in Earth and Planetary Sciences at Johns Hopkins University where his research is focused on geochemistry. Career Dimitri Sverjensky received his B.Sc. from the University of Sydney, Australia in 1974. He went on to Yale University where he received his Masters and Ph.D in Geology in 1977 and 1980. After leaving Yale, Sverjensky worked as a staff scientist at the Lawrence Berkeley Laboratory, before becoming an assistant professor at SUNY Stony Brook. In 1984, he was appointed an assistant professor at Johns Hopkins University, and later promoted to associate professor. Since 1991 he has been a professor in the Department of Earth and Planetary Sciences at Johns Hopkins University. Throughout his academic career, he has served as associate editor for Economic Geology and Geochimica et Cosmochimica Acta. From 2005 to 2015, he was the senior visiting investigator at the Geophysical Laboratory at the Carnegie Institution of Washington. Sverjensky is a member of the Deep Carbon Observatory’s Extreme Physics and Chemistry Community, where he also serves on its Scientific Steering Committee. Research initiatives Sverjensky’s research areas include aqueous geochemistry, mineral surface geochemistry, thermodynamics, and water-rock interaction. In 2005 he started a collaboration at the Carnegie Institution of Washington in astrobiology, addressing the role of mineral-water interfacial reactions in the origin of life and the role of hydrothermal fluids. They also developed a historical approach to the appearance of minerals on Earth called mineral evolution. Sverjensky is investigating the surface environments on early Earth using theoretical models of weathering and element mobility. In 2012 he launched a new field of research through the Deep Carbon Observatory investigating the role of water in deep Earth. Current areas of investigation include the origins of fluids in diamonds, the species and transport of carbon, sulfur and nitrogen in subduction zones, and the role of fluids in oxidation of mantle wedges. Publications Awards and honors 2021: Fellow of the American Geophysical Union 2011: Fellow of the Geochemical Society and the European Association of Geochemistry 1988: Waldemar Lindgren Award (Society of Economic Geologists) References Further reading External links Google Scholar Page Home page Year of birth missing (living people) Living people Australian geochemists Yale University alumni Fellows of the American Geophysical Union
Dimitri Sverjensky
Chemistry
482
3,925,795
https://en.wikipedia.org/wiki/Unconventional%20computing
Unconventional computing (also known as alternative computing or nonstandard computation) is computing by any of a wide range of new or unusual methods. The term unconventional computation was coined by Cristian S. Calude and John Casti and used at the First International Conference on Unconventional Models of Computation in 1998. Background The general theory of computation allows for a variety of methods of computation. Computing technology was first developed using mechanical systems and then evolved into the use of electronic devices. Other fields of modern physics provide additional avenues for development. Models of Computation A model of computation describes how the output of a mathematical function is computed given its input. The model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology. A wide variety of models are commonly used; some closely resemble the workings of (idealized) conventional computers, while others do not. Some commonly used models are register machines, random-access machines, Turing machines, lambda calculus, rewriting systems, digital circuits, cellular automata, and Petri nets. Mechanical computing Historically, mechanical computers were used in industry before the advent of the transistor. Mechanical computers retain some interest today, both in research and as analogue computers. Some mechanical computers have a theoretical or didactic relevance, such as billiard-ball computers, while hydraulic ones like the MONIAC or the Water integrator were used effectively. While some are actually simulated, others are not. No attempt is made to build a functioning computer through the mechanical collisions of billiard balls. The domino computer is another theoretically interesting mechanical computing scheme. Analog computing An analog computer is a type of computer that uses analog signals, which are continuous physical quantities, to model and solve problems. These signals can be electrical, mechanical, or hydraulic in nature. Analog computers were widely used in scientific and industrial applications, and were often faster than digital computers at the time. However, they started to become obsolete in the 1950s and 1960s and are now mostly used in specific applications such as aircraft flight simulators and teaching control systems in universities. Examples of analog computing devices include slide rules, nomograms, and complex mechanisms for process control and protective relays. The Antikythera mechanism, a mechanical device that calculates the positions of planets and the Moon, and the planimeter, a mechanical integrator for calculating the area of an arbitrary 2D shape, are also examples of analog computing. Electronic digital computers Most modern computers are electronic computers with the Von Neumann architecture based on digital electronics, with extensive integration made possible following the invention of the transistor and the scaling of Moore's law. Unconventional computing is, according to a conference description, "an interdisciplinary research area with the main goal to enrich or go beyond the standard models, such as the Von Neumann computer architecture and the Turing machine, which have dominated computer science for more than half a century". 
These methods model their computational operations based on non-standard paradigms, and are currently mostly in the research and development stage. This computing behavior can be "simulated" using classical silicon-based micro-transistors or solid state computing technologies, but it aims to achieve a new kind of computing. Generic approaches These are unintuitive and pedagogical examples that a computer can be made out of almost anything. Physical objects A billiard-ball computer is a type of mechanical computer that uses the motion of spherical billiard balls to perform computations. In this model, the wires of a Boolean circuit are represented by paths for the balls to travel on, the presence or absence of a ball on a path encodes the signal on that wire, and gates are simulated by collisions of balls at points where their paths intersect. A domino computer is a mechanical computer that uses standing dominoes to represent the amplification or logic gating of digital signals. These constructs can be used to demonstrate digital concepts and can even be used to build simple information processing modules. Both billiard-ball computers and domino computers are examples of unconventional computing methods that use physical objects to perform computation. Reservoir computing Reservoir computing is a computational framework derived from recurrent neural network theory that involves mapping input signals into higher-dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. The reservoir, which can be virtual or physical, is made up of individual non-linear units that are connected in recurrent loops, allowing it to store information. Training is performed only at the readout stage, as the reservoir dynamics are fixed, and this framework allows for the use of naturally available systems, both classical and quantum mechanical, to reduce the effective computational cost. One key benefit of reservoir computing is that it allows for a simple and fast learning algorithm, as well as hardware implementation through physical reservoirs. Tangible computing Tangible computing refers to the use of physical objects as user interfaces for interacting with digital information. This approach aims to take advantage of the human ability to grasp and manipulate physical objects in order to facilitate collaboration, learning, and design. Characteristics of tangible user interfaces include the coupling of physical representations to underlying digital information and the embodiment of mechanisms for interactive control. There are five defining properties of tangible user interfaces, including the ability to multiplex both input and output in space, concurrent access and manipulation of interface components, strong specific devices, spatially aware computational devices, and spatial reconfigurability of devices. Human computing The term "human computer" refers to individuals who perform mathematical calculations manually, often working in teams and following fixed rules. In the past, teams of people were employed to perform long and tedious calculations, and the work was divided to be completed in parallel. The term has also been used more recently to describe individuals with exceptional mental arithmetic skills, also known as mental calculators. Human-robot interaction Human-robot interaction, or HRI, is the study of interactions between humans and robots. It involves contributions from fields such as artificial intelligence, robotics, and psychology. 
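The reservoir computing framework described earlier in this section can be sketched in a few lines of code. The example below is a minimal echo state network: a fixed random recurrent reservoir expands a one-dimensional input signal into a higher-dimensional state, and only a linear readout is trained, here by least squares. All sizes and constants are illustrative choices, not values from any particular published system.

    import numpy as np

    # Minimal echo state network sketch (illustrative parameters only).
    rng = np.random.default_rng(0)
    n_reservoir, t_len = 100, 1000

    # Fixed random input and recurrent weights; only the readout is trained.
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # keep spectral radius below 1

    u = np.sin(np.linspace(0, 20 * np.pi, t_len))     # input signal
    target = np.roll(u, -5)                           # task: predict the input 5 steps ahead

    # Drive the reservoir and collect its states.
    x = np.zeros(n_reservoir)
    states = np.zeros((t_len, n_reservoir))
    for t in range(t_len):
        x = np.tanh(W_in[:, 0] * u[t] + W @ x)
        states[t] = x

    # Training happens only at the readout stage, as described above.
    W_out, *_ = np.linalg.lstsq(states[:800], target[:800], rcond=None)
    pred = states[800:] @ W_out
    print("test mean squared error:", np.mean((pred - target[800:]) ** 2))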
Cobots, or collaborative robots, are designed for direct interaction with humans within shared spaces and can be used for a variety of tasks, including information provision, logistics, and unergonomic tasks in industrial environments. Swarm computing Swarm robotics is a field of study that focuses on the coordination and control of multiple robots as a system. Inspired by the emergent behavior observed in social insects, swarm robotics involves the use of relatively simple individual rules to produce complex group behaviors through local communication and interaction with the environment. This approach is characterized by the use of large numbers of simple robots and promotes scalability through the use of local communication methods such as radio frequency or infrared. Physics approaches Optical computing Optical computing is a type of computing that uses light waves, often produced by lasers or incoherent sources, for data processing, storage, and communication. While this technology has the potential to offer higher bandwidth than traditional computers, which use electrons, optoelectronic devices can consume a significant amount of energy in the process of converting electronic energy to photons and back. All-optical computers aim to eliminate the need for these conversions, leading to reduced electrical power consumption. Applications of optical computing include synthetic-aperture radar and optical correlators, which can be used for object detection, tracking, and classification. Spintronics Spintronics is a field of study that involves the use of the intrinsic spin and magnetic moment of electrons in solid-state devices. It differs from traditional electronics in that it exploits the spin of electrons as an additional degree of freedom, which has potential applications in data storage and transfer, as well as quantum and neuromorphic computing. Spintronic systems are often created using dilute magnetic semiconductors and Heusler alloys. Atomtronics Atomtronics is a form of computing that involves the use of ultra-cold atoms in coherent matter-wave circuits, which can have components similar to those found in electronic or optical systems. These circuits have potential applications in several fields, including fundamental physics research and the development of practical devices such as sensors and quantum computers. Fluidics Fluidics, or fluidic logic, is the use of fluid dynamics to perform analog or digital operations in environments where electronics may be unreliable, such as those exposed to high levels of electromagnetic interference or ionizing radiation. Fluidic devices operate without moving parts and can use nonlinear amplification, similar to transistors in electronic digital logic. Fluidics are also used in nanotechnology and military applications. Quantum computing Quantum computing, perhaps the most well-known and developed unconventional computing method, is a type of computation that utilizes the principles of quantum mechanics, such as superposition and entanglement, to perform calculations. Quantum computers use qubits, which are analogous to classical bits but can exist in multiple states simultaneously, to perform operations. While current quantum computers may not yet outperform classical computers in practical applications, they have the potential to solve certain computational problems, such as integer factorization, significantly faster than classical computers. 
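The superposition and entanglement mentioned in the quantum computing paragraph above can be illustrated with a tiny state-vector simulation on a classical machine; this is only a numerical model of the underlying linear algebra, not a quantum computer, and the gates and sizes are chosen purely for illustration.

    import numpy as np

    # One qubit as a 2-component complex state vector; |0> = (1, 0).
    state = np.array([1, 0], dtype=complex)

    # A Hadamard gate puts the qubit into an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = H @ state

    # Born rule: measurement probabilities are the squared amplitudes.
    probs = np.abs(state) ** 2
    print("P(0) =", probs[0], " P(1) =", probs[1])    # 0.5 and 0.5

    # Two qubits: a CNOT after the Hadamard creates an entangled Bell state,
    # whose measurement outcomes 00 and 11 each occur with probability 1/2.
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    two = np.kron(H @ np.array([1, 0], dtype=complex), np.array([1, 0], dtype=complex))
    bell = cnot @ two
    print("Bell state amplitudes:", np.round(bell, 3))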
However, there are several challenges to building practical quantum computers, including the difficulty of maintaining qubits' quantum states and the need for error correction. Quantum complexity theory is the study of the computational complexity of problems with respect to quantum computers. Neuromorphic quantum computing Neuromorphic Quantum Computing (abbreviated as 'n.quantum computing') is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It was suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches to computations and don't follow the von Neumann architecture. They both construct a system (a circuit) that represents the physical problem at hand, and then leverage their respective physics properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation. Superconducting computing Superconducting computing is a form of cryogenic computing that utilizes the unique properties of superconductors, including zero resistance wires and ultrafast switching, to encode, process, and transport data using single flux quanta. It is often used in quantum computing and requires cooling to cryogenic temperatures for operation. Microelectromechanical systems Microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS) are technologies that involve the use of microscopic devices with moving parts, ranging in size from micrometers to nanometers. These devices typically consist of a central processing unit (such as an integrated circuit) and several components that interact with their surroundings, such as sensors. MEMS and NEMS technology differ from molecular nanotechnology or molecular electronics in that they also consider factors such as surface chemistry and the effects of ambient electromagnetism and fluid dynamics. Applications of these technologies include accelerometers and sensors for detecting chemical substances. Chemistry approaches Molecular computing Molecular computing is an unconventional form of computing that utilizes chemical reactions to perform computations. Data is represented by variations in chemical concentrations, and the goal of this type of computing is to use the smallest stable structures, such as single molecules, as electronic components. This field, also known as chemical computing or reaction-diffusion computing, is distinct from the related fields of conductive polymers and organic electronics, which use molecules to affect the bulk properties of materials. Biochemistry approaches Peptide computing Peptide computing is a computational model that uses peptides and antibodies to solve NP-complete problems and has been shown to be computationally universal. It offers advantages over DNA computing, such as a larger number of building blocks and more flexible interactions, but has not yet been practically realized due to the limited availability of specific monoclonal antibodies. DNA computing DNA computing is a branch of unconventional computing that uses DNA and molecular biology hardware to perform calculations. It is a form of parallel computing that can solve certain specialized problems faster and more efficiently than traditional electronic computers. 
While DNA computing does not provide any new capabilities in terms of computability theory, it can perform a high number of parallel computations simultaneously. However, DNA computing has slower processing speeds, and it is more difficult to analyze the results compared to digital computers. Membrane computing Membrane computing, also known as P systems, is a subfield of computer science that studies distributed and parallel computing models based on the structure and function of biological membranes. In these systems, objects such as symbols or strings are processed within compartments defined by membranes, and the communication between compartments and with the external environment plays a critical role in the computation. P systems are hierarchical and can be represented graphically, with rules governing the production, consumption, and movement of objects within and between regions. While these systems have largely remained theoretical, some have been shown to have the potential to solve NP-complete problems and have been proposed as hardware implementations for unconventional computing. Biological approaches Biological computing, also known as bio-inspired computing or natural computation, is the study of using models inspired by biology to solve computer science problems, particularly in the fields of artificial intelligence and machine learning. It encompasses a range of computational paradigms including artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, and more, which can be implemented using traditional electronic hardware or alternative physical media such as biomolecules or trapped-ion quantum computing devices. It also includes the study of understanding biological systems through engineering semi-synthetic organisms and viewing natural processes as information processing. The concept of the universe itself as a computational mechanism has also been proposed. Neuroscience Neuromorphic computing involves using electronic circuits to mimic the neurobiological architectures found in the human nervous system, with the goal of creating artificial neural systems that are inspired by biological ones. These systems can be implemented using a variety of hardware, such as memristors, spintronic memories, and transistors, and can be trained using a range of software-based approaches, including error backpropagation and canonical learning rules. The field of neuromorphic engineering seeks to understand how the design and structure of artificial neural systems affects their computation, representation of information, adaptability, and overall function, with the ultimate aim of creating systems that exhibit similar properties to those found in nature. Wetware computers, which are composed of living neurons, are a conceptual form of neuromorphic computing that has been explored in limited prototypes. Cellular automata and amorphous computing Cellular automata are discrete models of computation consisting of a grid of cells in a finite number of states, such as on and off. The state of each cell is determined by a fixed rule based on the states of the cell and its neighbors. There are four primary classifications of cellular automata, ranging from patterns that stabilize into homogeneity to those that become extremely complex and potentially Turing-complete. 
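The cellular automaton model described above is simple enough to simulate in a few lines. The sketch below runs an elementary (one-dimensional, two-state, nearest-neighbour) cellular automaton; rule 110, used here, is one of the rules proven to be Turing-complete. The grid width and number of steps are arbitrary illustrative choices.

    # Elementary cellular automaton: each cell's next state depends only on
    # itself and its two neighbours, via a fixed 8-entry rule table.
    RULE = 110
    rule_table = [(RULE >> i) & 1 for i in range(8)]   # neighbourhood pattern -> next state

    width, steps = 64, 32
    row = [0] * width
    row[width // 2] = 1                                # single live cell in the middle

    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = [rule_table[(row[(i - 1) % width] << 2) |
                          (row[i] << 1) |
                          row[(i + 1) % width]]
               for i in range(width)]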
Amorphous computing refers to the study of computational systems using large numbers of parallel processors with limited computational ability and local interactions, regardless of the physical substrate. Examples of naturally occurring amorphous computation can be found in developmental biology, molecular biology, neural networks, and chemical engineering. The goal of amorphous computation is to understand and engineer novel systems through the characterization of amorphous algorithms as abstractions. Evolutionary computation Evolutionary computation is a type of artificial intelligence and soft computing that uses algorithms inspired by biological evolution to find optimized solutions to a wide range of problems. It involves generating an initial set of candidate solutions, stochastically removing less desired solutions, and introducing small random changes to create a new generation. The population of solutions is subjected to natural or artificial selection and mutation, resulting in evolution towards increased fitness according to the chosen fitness function. Evolutionary computation has proven effective in various problem settings and has applications in both computer science and evolutionary biology. Mathematical approaches Ternary computing Ternary computing is a type of computing that uses ternary logic, or base 3, in its calculations rather than the more common binary system. Ternary computers use trits, or ternary digits, which can be defined in several ways, including unbalanced ternary, fractional unbalanced ternary, balanced ternary, and unknown-state logic. Ternary quantum computers use qutrits instead of trits. Ternary computing has largely been replaced by binary computers, but it has been proposed for use in high-speed, low-power consumption devices using the Josephson junction as a balanced ternary memory cell. Reversible computing Reversible computing is a type of unconventional computing where the computational process can be reversed to some extent. In order for a computation to be reversible, the relation between states and their successors must be one-to-one, and the process must not result in an increase in physical entropy. Quantum circuits are reversible as long as they do not collapse quantum states, and reversible functions are bijective, meaning they have the same number of inputs as outputs. Chaos computing Chaos computing is a type of unconventional computing that utilizes chaotic systems to perform computation. Chaotic systems can be used to create logic gates and can be rapidly switched between different patterns, making them useful for fault-tolerant applications and parallel computing. Chaos computing has been applied to various fields such as meteorology, physiology, and finance. Stochastic computing Stochastic computing is a method of computation that represents continuous values as streams of random bits and performs complex operations using simple bit-wise operations on the streams. It can be viewed as a hybrid analog/digital computer and is characterized by its progressive precision property, where the precision of the computation increases as the bit stream is extended. Stochastic computing can be used in iterative systems to achieve faster convergence, but it can also be costly due to the need for random bit stream generation and is vulnerable to failure if the assumption of independent bit streams is not met. It is also limited in its ability to perform certain digital functions. 
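The bit-stream representation described above can be illustrated with a short, hedged sketch. In the toy example below (an illustration of the principle, not a hardware design), a value in [0, 1] is encoded as the fraction of 1s in a random bit stream, multiplication is performed with a bit-wise AND of two independent streams, and the stream length, chosen arbitrarily here, controls the precision.

```python
# Toy illustration of stochastic computing: a value in [0, 1] is encoded as
# the fraction of 1s in a random bit stream, and multiplying two values
# reduces to a bit-wise AND of independent streams. Stream length is arbitrary.
import random

def encode(value, length=10_000):
    """Return a bit stream whose fraction of 1s approximates `value`."""
    return [1 if random.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover a value as the observed fraction of 1s."""
    return sum(stream) / len(stream)

def multiply(stream_a, stream_b):
    """AND of two independent streams approximates the product of their values."""
    return [a & b for a, b in zip(stream_a, stream_b)]

if __name__ == "__main__":
    x, y = 0.6, 0.3
    estimate = decode(multiply(encode(x), encode(y)))
    print(f"exact product {x * y:.3f}, stochastic estimate {estimate:.3f}")
```

Lengthening the streams improves the estimate, which is the progressive-precision property noted above, while reusing the same stream for both inputs would violate the independence assumption and bias the result.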
See also Network computing (disambiguation) WDR paper computer MONIAC hydraulic computer Hypercomputation References Classes of computers
Unconventional computing
Technology
3,687
22,603,166
https://en.wikipedia.org/wiki/Leptospermum%20squarrosum
Leptospermum squarrosum, commonly known as the peach blossom tea-tree, is an upright shrub of the family Myrtaceae and is endemic to central eastern New South Wales. It has thin, firm bark, broadly lance-shaped to elliptical leaves, relatively large white or pink flowers and fruit that remains on the plant when mature. Description Leptospermum squarrosum is an erect shrub of variable habit, growing to a height of less than to or more and has thin, firm bark. Young stems are silky-hairy at first, soon glabrous. The leaves are variable but mostly broadly lance-shaped to elliptical, long and wide with a sharply-pointed tip and tapering to a short petiole. The flowers are white or pink, mostly wide and arranged singly on short side shoots. The floral cup is sessile, long and glabrous. The sepals are also glabrous, long, the five petals long and the stamens long. Flowering mostly occurs from March to April and the fruit is a capsule mostly wide that remains on the plant at maturity. Taxonomy Leptospermum squarrosum was first formally described in 1788 by Joseph Gaertner in his book De Fructibus et Seminibus Plantarum from specimens collected by Joseph Banks. Distribution and habitat Peach blossom tea-tree grows in shrubland on sandstone soils in coastal areas and nearby tablelands of New South Wales, but especially in the Sydney region. Use in horticulture This tea-tree is a hardy shrub that grows best in a sunny situation in well-drained soil, but is salt-resistant and tolerates exposed positions. References Flora of New South Wales Halophytes squarrosum Myrtales of Australia Plants described in 1788 Taxa named by Joseph Gaertner
Leptospermum squarrosum
Chemistry
374
62,024,401
https://en.wikipedia.org/wiki/3-Methyluridine
The chemical compound 3-methyluridine, also called N3-methyluridine, is a pyrimidine nucleoside (abbreviated m3U). In living organisms it is present as an RNA modification, which has been detected in the 23S rRNA of archaea, the 16S and 23S rRNA of eubacteria, and the 18S, 25S, and 28S ribosomal RNAs of eukaryotes. See also 5-Methyluridine References Nucleosides Pyrimidinediones
3-Methyluridine
Chemistry
109
29,444,637
https://en.wikipedia.org/wiki/Comerford%20Crown
The Comerford Crown or Ikerrin Crown is the name of a lost archaeological relic, probably dating from the Bronze Age, that was in the possession of the noble Comerford Family from its discovery in Ireland in 1692 and its later removal from Ireland, and that was possibly lost by that family while living in exile in France during the Reign of Terror (c. 1793). The crown was an encased gold cap or crown that was discovered 10 ft underground by turf-cutters at the Devil's Bit, County Tipperary, in 1692. Joseph Comerford bought it and saved it from being melted down. Other similar antiquities (see: Golden hat) have been discovered elsewhere in Europe and date from the Bronze Age. The crown weighed about five troy ounces (156 grams), and may have been melted down for its intrinsic value during the Reign of Terror. The eventual fate of the crown rescued by Comerford remains a mystery. The crown appears to have survived in safe hands for a long time after his death. In his Histoire d’Irlande (1758), the Abbé MacGeoghegan described this gold crown as being in the shape of a bonnet, and added: “This curious part of antiquity was sold to Joseph Comerford and must be preserved in the Castle of Anglure, where he had bought the estate.” Supporting the theory that the crown survived the Reign of Terror, a contributor to the Dublin Penny Journal claimed in August 1832 that the crown was then still preserved in the Château d’Anglure. However, Dr Czernicki, whose father bought the Chateau d’Anglure in 1832 from Monsieur Tissandier, said: “I never heard anyone speak about the piece of antiquity that you refer to.” It is therefore generally considered to have been lost and probably melted down during the French Revolution. References Archaeological artifacts Bronze Age Ireland Gold objects Lost objects
Comerford Crown
Physics
385
49,392,887
https://en.wikipedia.org/wiki/Island%20Conservation
Island Conservation is a non-profit organization with the mission to "restore islands for nature and people worldwide" and has therefore focused its efforts on islands with species categorized as Critically Endangered and Endangered on the IUCN's Red List. Working in partnership with local communities, government management agencies, and conservation organizations, Island Conservation develops plans and implements the removal of invasive alien species, and conducts field research to document the benefits of the work and to inform future projects. Island Conservation's approach is now being shown to have a wider beneficial effect on the marine systems surrounding its project areas. In addition, invasive vertebrate eradication has now been shown to have many benefits besides conservation of species. Specifically, the approach has been found to align with 13 UN Sustainable Development Goals and 42 associated targets encompassing marine and terrestrial biodiversity conservation, promotion of local and global partnerships, economic development, climate change mitigation, human health and sanitation, and sustainable production and consumption. To date, Island Conservation has deployed teams to protect 1,195 populations of 487 species and subspecies on 64 islands. The work of Island Conservation is not without controversy; this is documented in the book Battle at the End of Eden. Restoring islands requires removing whole populations of an invasive species. There is an ethical question of whether humankind has the right to remove one species to save others. However, a 2019 study suggests that if eradications of invasive animals were conducted on just 169 islands, the survival prospects of 9.4% of the Earth's most highly threatened terrestrial insular vertebrates would be improved. History Island Conservation was founded by Bernie Tershy and Don Croll, both Professors at UCSC's Long Marine Lab. These scientists learned about the story of Clipperton Island, which had been visited by ornithologist Ken Stager of the Los Angeles County Museum in 1958. Appalled at the depredations visited by feral pigs upon the island's brown booby and masked booby colonies (reduced to 500 and 150 birds, respectively), Stager procured a shotgun and removed all 58 pigs. By 2003, the colonies numbered 25,000 brown boobies and 112,000 masked boobies, the world's second-largest brown booby colony and largest masked booby colony. Much of the organization's early focus was working in Mexico in conjunction with its sister organization, Grupo de Ecología y Conservación de Islas, in the Gulf of California and off the Pacific Coast. Subsequently, Island Conservation expanded its geographic scope to the Channel Islands of California, the Pacific Coast of Canada, the Aleutian Islands, and the Hawaiian Islands, and finally to the Pacific, Caribbean, and South America. Island Conservation has a strong scientific grounding. Over 160 peer-reviewed publications in major journals such as Biological Conservation, Conservation Biology and Proceedings of the National Academy of Sciences have been authored or co-authored by Island Conservation staff and contractors. Partnerships As Island Conservation does not have management responsibility over any islands itself, all projects are in partnership with the island owner/manager, island users, local communities and regulatory authorities. Since its founding in 1994, the organization has developed partnerships with over 100 organizations.
Partners include conservation organizations, government agencies, regulatory agencies, scientific institutions, and international conservation consortiums. Island Conservation is a member of the International Union for Conservation of Nature (IUCN) and the Alliance for Zero Extinction, and has a Memorandum of Understanding with the US Fish & Wildlife Service and BirdLife International, amongst others. Advisory council The organization's founding advisory board is composed of prominent scientists, practitioners, and authors in the fields of conservation biology and invasive species, including Paul Ehrlich, José Sarukhán Kermez, Russell Mittermeier, Harold Mooney, David Quammen, Peter Raven, Michael Soulé, and Edward O. Wilson. Programs North America In this region, Island Conservation currently works in the United States and Canada. In the United States, the Anacapa Island Restoration Project was completed in 2002 and benefited the Scripps's murrelet, Cassin's auklet, and Anacapa deer mouse. The Lehua Island Restoration Project was completed in 2006, which benefited the Newell's shearwater and black-footed albatross. Subsequently, completed projects include the Hawadax Island Restoration Project in 2008, the San Nicolas Island Project in 2010, and the Palmyra Island Restoration Project in 2011. Key federal government partnerships in North America include the US Department of the Interior, USFWS, NPS, the US Department of Agriculture-APHIS, National Wildlife Research Center, NOAA, Parks Canada Agency, and Environment and Climate Change Canada. Island Conservation is working with the following non-governmental organizations: Coastal Conservation Association (CA), Bird Studies Canada, American Bird Conservancy, The Nature Conservancy, and Grupo de Ecología y Conservación de Islas. Pacific Since 2010, Island Conservation has contributed to the development and implementation of island restoration projects in Australia (Lord Howe Island and Norfolk Island), French Polynesia (Tetiꞌaroa Restoration Project in 2022, Acteon-Gambier Archipelago Restoration Project in 2015), Tonga (Late Island and numerous small islets), the Republic of Palau (including within the Rock Islands Southern Lagoon World Heritage Area), the Federated States of Micronesia (Ulithi Lagoon), and New Caledonia (Walpole Island). Island Conservation is an active member of the Pacific Invasives Partnership. Other key partnerships include the Invasive Species Council, BirdLife International, the New Zealand Department of Conservation, SPREP and the Ornithological Society of French Polynesia. Caribbean In this region, Island Conservation works primarily in Puerto Rico, the Commonwealth of The Bahamas, and the Dominican Republic. In May 2012, Island Conservation and the Bahamas National Trust worked together to remove invasive house mice from Allen Cay to protect native species including the Allen Cays rock iguana and Audubon's shearwater. Since 2008, Island Conservation and the US Fish and Wildlife Service (USFWS) have worked together to remove invasive vertebrates from Desecheo National Wildlife Refuge in Puerto Rico, primarily to benefit the Higo Chumbo cactus, three endemic reptiles, and two endemic invertebrates, and to recover globally significant seabird colonies of brown boobies, red-footed boobies, and bridled terns. Future work will focus on important seabird populations, key reptile groups including West Indian rock iguanas, and the restoration of Mona Island, Alto Velo, and offshore cays in the Puerto Rican Bank and The Bahamas.
Key partnerships include the USFWS, Puerto Rico DNER, the Bahamas National Trust, and the Dominican Republic Ministry of Environment and Natural Resources. South America In this region, Island Conservation works primarily in Ecuador and Chile. In Ecuador, the Rábida Island Restoration Project was completed in 2010. A gecko (Phyllodactylus sp.) found during monitoring in late 2012 had previously been recorded only from subfossils estimated at more than 5,700 years old. Live Rábida Island endemic land snails (Bulimulus (Naesiotus) rabidensis), not seen since being collected over 100 years ago, were also collected in late 2012. This was followed in 2012 by the Pinzon and Plaza Sur Island Restoration Project, primarily benefiting the Pinzón giant tortoise, Opuntia galapageia, and the Galápagos land iguana. As a result of the project, Pinzón giant tortoises hatched from eggs and survived in the wild for the first time in more than 150 years. In 2019, the Directorate of Galápagos National Park, with Island Conservation, used drones to eradicate invasive rats from North Seymour Island; this was the first time such an approach had been used on vertebrates in the wild. The expectation is that this innovation will pave the way for cheaper invasive species eradications in the future on small and mid-sized islands. The current focus in Ecuador is Floreana Island, with 55 IUCN threatened species present and 13 extirpated species that could be reintroduced after invasive mammals are eradicated. Partners include: The Leona M. and Harry B. Helmsley Charitable Trust, the Ministry of Environment (Galápagos National Park Directorate, Galápagos Biosecurity Agency), the Ministry of Agriculture, the Floreana Parish Council and the Galapagos Government Council. In 2009 in Chile, Island Conservation initiated formal collaborations with CONAF, the country's protected areas agency, to further the restoration of islands under its administration. In January 2014, the Choros Island Restoration Project was completed, benefiting the Humboldt penguin, the Peruvian diving petrel, and the local eco-tourism industry. The focus of future work includes the Humboldt Penguin National Reserve and the Juan Fernández Archipelago, where technology developed by Wildlife Drones is being used to support conservation efforts. This includes tracking endangered species and collecting ecological data across challenging terrains. Conservation innovation From its earliest days, Island Conservation has prided itself on innovating its tools and approach to eradication projects. Island Conservation implemented its first helicopter-based aerial broadcast eradication on Anacapa Island in 2001, refining technology developed in New Zealand for agriculture and pest control; this has been replicated on more than 10 international island restoration projects since. Island Conservation has developed practices for holding native species in captivity for re-release and mitigating risks to species, including the successful capture and release of endemic mice on Anacapa and hawks on Pinzon. In 2010, Island Conservation partnered with the U.S. Humane Society to remove feral cats from San Nicolas Island for relocation to a sanctuary in mainland California. New tools including a remote trap monitoring system, a digital data collection system, and statistical decision support tools improved the humaneness of removal methods, reduced project cost, and reduced time to declare success.
Following a series of failed eradication attempts in 2012, Island Conservation led a group of international experts to identify challenges on tropical islands, resulting in recommended practices for tropical rodent eradications. Applying these lessons following a failed attempt on Desecheo Island in 2017 resulted in success. Island Conservation led a horizon scan in 2015 that identified drones, genetic biocontrol, and conflict transformation as critical innovations to increase the scale, scope, and pace of rodent eradications. Since this exercise, Island Conservation has formed the Genetic Biocontrol for Invasive Rodents (GBIRd) partnership to cautiously explore the development of safe and ethical genetic technologies to prevent extinctions, supported sustainable community-driven approaches to conservation projects, and implemented the world’s first drone-powered rat eradication. The current focus of the Conservation Innovation program is to advance methods that increase safety, reduce cost, and improve the feasibility of eradicating invasive vertebrates from islands. References External links Grupo de Ecología y Conservación de Islas (GECI) Database of Island Invasive Species Eradications (DIISE) Threatened Island Biodiversity database (TIB) Genetic Biocontrol for Invasive Rodents (GBIRd) Nature conservation organizations based in the United States Environmental organizations based in California Environmental organizations based in the San Francisco Bay Area Conservation and restoration organizations Organizations established in 1994 Island restoration Synthetic biology Genetics Conservation biology Marine conservation Insular ecology Climate change mitigation Extinction Rewilding advocates Bird conservation organizations
Island Conservation
Biology
2,297
608,751
https://en.wikipedia.org/wiki/Pinealocyte
Pinealocytes are the main cells contained in the pineal gland, located behind the third ventricle and between the two hemispheres of the brain. The primary function of the pinealocytes is the secretion of the hormone melatonin, important in the regulation of circadian rhythms. In humans, the suprachiasmatic nucleus of the hypothalamus communicates the message of darkness to the pinealocytes, and as a result, controls the day and night cycle. It has been suggested that pinealocytes are derived from photoreceptor cells. Research has also shown the decline in the number of pinealocytes by way of apoptosis as the age of the organism increases. There are two different types of pinealocytes, type I and type II, which have been classified based on certain properties including shape, presence or absence of infolding of the nuclear envelope, and composition of the cytoplasm. Types of pinealocytes Type 1 pinealocytes Type 1 pinealocytes are also known as light pinealocytes because they stain at a low density when viewed under a light microscope and appear lighter to the human eye. These Type 1 cells have been identified through research to have a round or oval shape and a diameter ranging from 7–11 micrometers. Type 1 pinealocytes are typically more numerous in both children and adults than Type 2 pinealocytes. They are also considered to be the more active cell because of the presence of certain cellular contents, including a high concentration of mitochondria. Another finding consistent with Type 1 pinealocytes is the increase in the amount of lysosomes and dense granules present in the cells as the age of the organism increases, possibly indicating the importance of autophagocytosis in these cells. Research has also shown that Type 1 pinealocytes contain the neurotransmitter serotonin, which later is converted to melatonin, the main hormone secreted by the pineal gland. Type 2 pinealocytes Type 2 pinealocytes are also known as dark pinealocytes because they stain at a high density when viewed under a light microscope and appear darker to the human eye. As indicated by research and microscopy, they are round, oval, or elongated cells with a diameter of about 7–11.2 micrometers. The nucleus of a Type 2 pinealocyte contains many infoldings which contain large amounts of rough endoplasmic reticulum and ribosomes. An abundance of cilia and centrioles has also been found in these Type 2 cells of the pineal gland. Unique to the Type 2 is the presence of vacuoles containing 2 layers of membrane. As Type 1 cells contain serotonin, Type 2 cells contain melatonin and are thought to have similar characteristics as endocrine and neuronal cells. Synaptic ribbons Synaptic ribbons are organelles seen in pinealocytes using electron microscopy. Synaptic ribbons are found in pinealocytes in both children and adults, but are not found in human fetuses. Research on rats has revealed more information about these organelles. The characteristic protein of synaptic ribbons is RIBEYE, as revealed by light and electron microscopy. In lower vertebrates, synaptic ribbons serve as a photoreceptive organ, but in upper vertebrates, they serve secretory functions within the cell. The presence of proteins such as Munc13-1 indicates that they are important in neurotransmitter release. At night, synaptic ribbons of rats appear larger and slightly curved, but during the day, they appear smaller and rod-like. Evolution of pinealocytes A common theory on the evolution of pinealocytes is that they evolved from photoreceptor cells. 
It is speculated that in ancestral vertebrates, the pinealocytes served the same function as photoreceptor cells, such as retinal cells; in many non-mammalian vertebrates, pineal cells in the retina are still actively photoreceptive, although these cell do not contribute to a visual image. Structural, functional, and genetic similarities exist between the two cell types. Structurally, both develop from the area of the brain designated the diencephalon, also the area containing the thalamus and hypothalamus, during embryological development. Both types of cells have similar features, including cilia, folded membranes, and polarity. Functional evidence for this theory of evolution can be seen in non-mammalian vertebrates. The retention of photosensitivity of the pinealocytes of lampreys, fish, amphibians, reptiles, and birds and the secretion of melatonin by some of these lower vertebrates suggests that mammalian pinealocytes may have once served as photoreceptor cells. Researchers have also indicated the presence of several photoreceptor proteins found in the retina in the pinealocytes in chicken and fish. Genetic evidence demonstrates that phototransduction genes expressed in the photoreceptors of the retina are also present in pinealocytes. More evidence for the evolution of pinealocytes from photoreceptor cells is the similarities between the ribbon complexes in the two types of cells. The presence of the protein RIBEYE and other proteins in both pinealocytes and sensory cells (both photoreceptors and hair cells) suggests that the two cells are related to one another evolutionarily. Differences between the two synaptic ribbons exist in the presence of certain proteins, such as ERC2/CAST1, and the distribution of proteins within the complexes of each cell. Melatonin Regulation Regulation of melatonin synthesis is important to melatonin’s main function in circadian rhythms. The main molecular control mechanism that exists for melatonin secretion in vertebrates is the enzyme AANAT (arylalkylamine N-acetyltransferase). The expression of the AANAT gene is controlled by the transcription factor pCREB, and this is evident when cells treated with epithalone, a peptide which affects pCREB transcription, have a resulting increase in melatonin synthesis. AANAT is activated through a protein kinase A system in which cyclic AMP (cAMP) is involved. The activation of AANAT leads to an increase in melatonin production. Though there are some differences specific to certain species of vertebrates, the effect of cAMP on AANAT and AANAT on melatonin synthesis remains fairly consistent. Melatonin synthesis is also regulated by the nervous system. Nerve fibers in the retinohypothalamic tract connect the retina to the suprachiasmatic nucleus (SCN). The SCN stimulates the release of norepinephrine from sympathetic nerve fibers from the superior cervical ganglia that synapse with the pinealocytes. Norepinephrine causes the production of melatonin in the pinealocytes by stimulating the production of cAMP. Because the release of norepinephrine from the nerve fibers occurs at night, this system of regulation maintains the body’s circadian rhythms. Synthesis Pinealocytes synthesize the hormone melatonin by first converting the amino acid tryptophan to serotonin. The serotonin is then acetylated by the AANAT enzyme and converted into N-acetylserotonin. N-acetylserotonin is converted into melatonin by the enzyme hydroxyindole O-methyltransferase (HIOMT), also known as acetylserotonin O-methyltransferase (ASMT). 
The activity of these enzymes is high during the night and is regulated by the previously discussed mechanisms involving norepinephrine. See also List of human cell types derived from the germ layers References External links Endocrine system
Pinealocyte
Biology
1,619
6,048,023
https://en.wikipedia.org/wiki/Perflubron
Perflubron (INN/USAN, or perfluorooctyl bromide; brand name Imagent) is a contrast medium for magnetic resonance imaging, computer tomography and sonography. It was approved for this use in the United States by the Food and Drug Administration in 1993. Experimental research Perflubron has also been tested experimentally for use in liquid breathing in premature infants with respiratory distress. References MRI contrast agents Organofluorides Orphan drugs Organobromides Haloalkanes
Perflubron
Chemistry
106
588,441
https://en.wikipedia.org/wiki/Trolling%20motor
A trolling motor is a self-contained marine propulsion unit that includes an electric motor, propeller and control system, and is affixed to an angler's boat, either at the bow or stern. A gasoline-powered outboard used in trolling, if it is not the vessel's primary source of propulsion, may also be referred to as a trolling motor. The main function of trolling motors was once to keep the boat running at a consistent, low speed suitable for trolling, but that function has been augmented by GPS-tracking trolling motors that function as "virtual anchors" to automatically maintain a boat's position relative to a desired location, such as a favorite fishing spot. Trolling motors are often lifted from the water to reduce drag when the boat's primary engine is in operation. Uses Trolling for game fish; a motor used for this purpose is usually a secondary means of propulsion, and mounted on the transom alongside the primary outboard motor or on a bracket made for the purpose. Auxiliary power for precision maneuvering of the boat, to enable the angler to cast his bait to where the fish are located. Trolling motors designed for this application are typically mounted in the bow. History An 1895 article in Scientific American entitled "A Portable Electric Propeller for Boats" stated: "Briefly described, it consists of a movable tube which is hinged at the stern of the boat, much as an oar is used in sculling. The tube contains a flexible shaft formed of three coils of phosphor bronze. This tube extends down and out into the water, where it carries a propeller, and at the inboard end an electric Motor is attached, which is itself driven by batteries." It was invented and sold by the Electric Boat company. The electric trolling motor was invented by O.G. Schmidt in 1934 in Fargo, North Dakota, when he took a starter motor from a Ford Model A and added a flexible shaft and a propeller. Because his manufacturing company was near the Minnesota/North Dakota border, he decided to call the new company Minn Kota. The company is still a major manufacturer of trolling motors. Design Electric trolling motors Modern electric trolling motors are designed around a 12-volt, 24-volt or 36-volt brushed DC electric motor, to take advantage of the availability of 12-volt deep cycle batteries designed specifically for marine use. The motor itself is sealed inside a watertight compartment at the end of the shaft. It is submerged during operation, which prevents overheating. The propeller is fitted directly onto the propshaft. Hand-control: tiller for steering, with speed control either built into the tiller or a control knob on top of the unit. Hand-controlled trolling motors are attached to the boat with a clamp. Foot-control: on/off and speed controls are foot-operated, and built into a pedal that also controls the steering mechanism. Steering may be via electronically controlled servo motors, or in early-model (and late-model low-end units), a push-pull cable. Foot-controlled trolling motors require a specialized mounting bracket that bolts horizontally to the deck. The main advantage of foot controls is that the fisherman has both hands free for fishing and landing a hooked fish. On the other hand, it can be hard to coordinate footwork with the hands, especially in wavy and windy conditions. Wireless remote: available on high-end late-model trolling motors.
Servo-controlled steering and speed control both respond to a wireless device, either in a foot pedal or a key-fob transmitter (similar to an automotive remote keyless system). Gasoline-powered trolling motors Small outboard motors are frequently used as trolling motors on boats with much larger engines that do not operate as efficiently or quietly at trolling speeds. These typically are designed with a manual pull start system, throttle and gearshift controls mounted on the body of the motor, and a tiller for steering, but in a trolling application, will be connected to the steering mechanism at the helm. See also Electric boat Electric outboard motor Outboard motor Trolling References External links Marine engines Marine propulsion
Trolling motor
Technology,Engineering
860
7,547,036
https://en.wikipedia.org/wiki/Agnes%20Giberne
Agnes Giberne (19 November 1845 – 20 August 1939) was a British novelist and scientific writer. Her fiction was typical of Victorian evangelical fiction with moral or religious themes for children. She also wrote books on science for young people, a handful of historical novels, and one well-regarded biography. Biography Giberne was born in Belgaum, Karnataka, India, the daughter of Captain Charles Giberne of the Bengal Native Infantry and Lydia Mary Wilson. Her ancestors were Huguenots from Languedoc in France, where the "de Gibernes" lived in Chateau de Gibertain. Charles Giberne was from a large family. He had eight sisters and four brothers. Three of his brothers also served in India. Giberne's parents married at St. Mary the Virgin, Walthamstow, on 11 December 1838. It is not absolutely clear how many siblings Giberne had. The British Library's India Family History and Families in British India Society records show: Mary Lydia Giberne, born on 1 December 1840 at Karrack, Persian Gulf. She died at Ahymednuggar on 7 May 1842, aged 17 months. Twins born on 21 January 1844 at Ahmednuggur, with the boy still-born and the girl, Helen Mary Giberne, surviving. However, she died in the first quarter of 1861, aged 17. Agnes, born on 19 November 1845 at Belgaum. Florence, born on 1 June 1847 at Poona. However, she died in Brighton on 5 September 1858, aged 11 years. Eliza, born on 5 December 1848, at her maternal grandfather's at Worton House, Over-Warton, Oxfordshire. She died aged 79 on 22 February 1928. By the time of the 1851 census, Lydia Mary was staying with her four surviving daughters at Beach in Weston-super-Mare with the Rector of Eyam in Derbyshire and his family. Charles Giberne had already been pensioned off and was staying at No. 17, Beaufort, in Bath with two servants. By the time of the 1861 census, only two girls survived, Giberne and her sister Eliza. Giberne was educated privately, by governesses and special masters. She began to scribble stories at age seven and shared these with her sisters. She ascribed her literary tastes to her mother and her scientific curiosity to her father. Writing Giberne states that she began to publish children's stories at seventeen. These were probably short stories in magazines. The first children's book by Giberne in the British Library is A Visit to Aunt Agnes (Religious Tract Society, London, 1864). It was advertised on 24 November 1864 at the price of two shillings. Giberne would have been 19 by then. Copson states that her children's stories were "typical works of Victorian evangelical fiction emphasizing childish faults and the need for salvation." The lithographs for A Visit to Aunt Agnes were produced by Kronheim & Co. Initially, Giberne's work was either signed "A. G." or attributed indirectly by identifying other works she had written. The first book published in England to bear her name was The Curate's House, which she wrote to draw attention to clerical poverty. Giberne had a wider range than just evangelical and didactic stories for young children. She also wrote books targeted at young adolescent girls, which were mainly published by the Religious Tract Society. Giberne also wrote historical novels including: Detained in France: a tale of the first French empire (Seeley, 1871). A story about the English people detained by Napoleon on the outbreak of war. Aimée: a tale of the days of James the Second (Seeley, 1872). A story about the Huguenot persecution in France and their flight to England.
Coulyng Castle, or, A knight of the olden days (Seeley, 1875). A picture of castle life under Henry IV and Henry V. Roy. A tale in the days of Sir John Moore. (Pearson, 1901). Returns to the theme of those detained by Napoleon, but adds in Sir John Moore's famous retreat and the Battle of Corunna. Under Puritan rule: a tale of troublous days (National Society's Depository, 1909). Focuses on the sufferings of those Anglican clergy who were deprived of their livings by the Puritans. In 1895 Giberne published A lady of England: the life and letters of Charlotte Maria Tucker (Hodder & Stoughton), a biography of the writer who produced children's fiction under the pseudonym "A Lady of England" (A.L.O.E.) and, late in life, became a missionary in India. Giberne's aunt Caroline Cuffley Giberne (1803-1885) had also worked as a missionary in India, and also concentrated on work with women and girls. However, Giberne is best remembered for her books popularising science. Giberne was an amateur astronomer who worked on the committee setting up the British Astronomical Association and became a founder-member in 1890. Giberne's first foray into science was a book on astronomy, Sun, Moon and Stars: Astronomy for Beginners (Seeley, 1879). She had sent the proofs to Charles Pritchard (29 February 1808 – 28 May 1893), the Savilian Professor of Astronomy at Oxford University, and he was so impressed by it that he wrote, without being asked, a very positive introduction. The Graphic stated that "As an introduction to a science, it could scarcely be more attractive, and it is the best book of the kind we have seen." The book remained in print for many years and had sold 10,000 copies by 1884, 24,000 copies by 1898, and 26,000 by 1903, when she issued another revised edition. However, this total probably does not include the sales in the United States, where the book was published as The Story of The Sun, Moon, and Stars, as the totals cited come from the edition count on the title page of the Seeley editions, and Seeley would only have counted their own editions, and not those of another publisher. Giberne wrote several other books on astronomy, including: Among the Stars, or wonderful things in the sky (Seeley, 1884), intended for younger children, where a boy called Ikon has the solar system and stars explained to him by a professor. St James's Gazette said that Giberne "tells about the wonderful things in the sky in clear pleasant language that every child can understand, and in a manner that is probably new to them. Some of the lessons are illustrated by little experiments which will be both amusing and instructive to repeat in the nursery; and there are visits to the sun and moon that read like strange and beautiful fairy-tales. In every way this is a most excellent book for children." The starry skies, first lessons on the sun, moon and stars (Seeley & Co). In this book Giberne "offered lucid and simple explanations of gravity, the seasons, the rotation of the earth, the moon, the sun, the planets of the solar system, comets, meteors, stars, and nebulae". As with some of the other books for children, Giberne used the power of the imagination to help teach scientific fact. Radiant suns (Seeley & Co, 1895), a sequel to Sun, Moon and Stars but more advanced. It covered the history of astronomy, the relatively new science of spectral analysis, and a discussion of the stellar universe. This Wonderful Universe (SPCK, 1895). Completely rewritten and revised for an illustrated edition in 1920. An introduction to the heavens for younger readers.
Giberne did not ignore the other sciences; she also wrote books on: Geology, with The world's foundations, or Geology for beginners (Seeley, 1882). In her preface, Giberne noted that some counted Geology to be a dangerous subject, and that there can be no conflict between the Bible, as the word of God, and His handiwork, in the shape of the Geology of the Earth. Physics, with Twilight Talks, or easy lessons on things around us (Religious Tract Society, 1882). A little volume for children on scientific subjects. In her preface, Giberne says that if the book were called "An introduction to Physics" it would frighten off its intended users. Hydrology, with Father Aldur. A water story, etc. Here again imagination (a sleeping boy dreams of the river as a living being) and scientific fact are interwoven. Meteorology, with The Ocean of Air, Meteorology for Beginners (Seeley, 1890). This volume also had an enthusiastic preface written by Charles Pritchard, again volunteered by him when he read the proofs. The book described the "gases, water, forms of life, movement, disturbances, and forces within air." The photographs illustrating the book were said to be "genuine works of art". Natural History, with A modern Puck, a fairy story for children (Jarrold, 1898). This was ostensibly a fairy story but contained lots of nuggets of information about animal behaviour, insects etc. One review said that the book was one which "teaches much", but unobtrusively and not "as if it were teaching at all," and every healthy-minded child must be delighted "with such a book, with its pleasant and quite natural make-believe." The magic cloak which the fairy used enabled the heroine to see "into the homes of many an animal and insect." Oceanography, with The Mighty Deep and what we know of it (Pearson, 1902). One reviewer said "Call it oceanography and it is apt to repel; put it in Miss Giberne's graceful words and it attracts while it teaches." Another said "It is a singularly informing book, and is written in such a way that any boy or girl of average intelligence will not only understand it readily but will thoroughly enjoy it. There are too few books of this class." Science in general, with This Wonder World (Religious Tract Society, 1913). In this volume, Giberne addresses a range of topics: "how the wood and the iron and the coal come to be here, and how the air and the water and the fire serve us. Concerning these and other subjects such as flying machines, Miss Giberne writes very simply and effectively." Botany, with The garden of earth, a little book on plant-life, plant-growth, and the ways and uses of plants (SPCK, 1920). "It is not a Manual of Botany with hard and dry names, which often make the subject distasteful, but a book introducing us gradually and simply to an intimate and loving acquaintance with the inhabitants of the vegetable world." Giberne was prolific. At her peak in the 1880s and 1890s, she produced 36 and 33 volumes respectively. Her output tapered off after 1900. However, her output over eight decades indicates her dedication to her work.
She had failing eyesight, with cataracts in both eyes, and a weak heart. Her income was listed as an annuity, the royalties from her books, and £100 a year from the Indian Civil Service as a pensioner's child. She was awarded £200 from the Royal Literary Fund and £273 from the Royal Bounty Fund, both to be put towards the purchase of a Post Office annuity. However, her royalty income was falling, and her nominal income of £170 was not sufficient due to the rising cost of living; she had been forced to sell some furniture and all of her silver, as well as moving into smaller accommodation. This time she was awarded a grant of £50. The 1911 census found her lodging in rooms at 2, The Avenue, Eastbourne. In 1939 she was living at 21 Enys Road, Eastbourne. She died in a nursing home at 16 Motcombe Road, Eastbourne, on 20 August 1939, aged 93. Her estate was worth £539 18s 11d. List of works The following list of works has been developed largely from a search of the Jisc Library Hub Discover database. Where necessary, missing details such as page counts and publishers' names have been filled in by searches on WorldCat and on newspaper archives. See also Timeline of women in science Notes References External links Books by Giberne listed in the catalogue of the British Library. Books by Giberne in the Internet Archive Books by Giberne in the University of Florida Digital Collections. 1845 births 1939 deaths 19th-century British women writers 19th-century British writers Amateur astronomers 19th-century British astronomers English-language writers from India British women children's writers People from Belagavi district British women astronomers Women writers from Karnataka Writers from Karnataka Indian people of French descent
Agnes Giberne
Astronomy
2,747
12,089,795
https://en.wikipedia.org/wiki/Mackmyra%20Whisky
Mackmyra Whisky was a Swedish single malt whisky distillery. On 19 August 2024, Mackmyra Svensk Whisky AB filed for bankruptcy. After the bankruptcy, more than 50 organizations expressed interest in buying Mackmyra, a level of interest unprecedented in Swedish history. It is named after the village and manor of Mackmyra, where the first distillery was established, in the residential district of Valbo, south-west of Gävle. The toponym is commonly suggested as deriving from a regional word for gnats (Swedish: mack) and mire (Swedish: myr). However, owlet moths have all but disappeared from present-day Mackmyra, due to the gradual rebound of the land, a result of the melting of ice sheets 10,000 years ago. Mackmyra Svensk Whisky AB is a publicly traded company, listed in December 2011 on Nasdaq OMX's alternative-investment market First North. The company has about 45 employees with annual revenues of around SEK 100 million, and its biggest shareholder is the Swedish farmers' co-op Lantmännen. History Mackmyra's history started in 1998 at a Swedish winter resort, where eight friends from the Royal Institute of Technology met up for a ski trip. Noticing that all of them had brought along a bottle of malt whisky for the host, the friends began discussing the manufacture of a Swedish whisky. The following year a company was founded, and after years of experimenting with 170 different recipes, they finally settled on two recipes in 2002. That same year a new distillery was built in the old mill and power station at Mackmyra, which went on stream in October. The first limited edition single malt whisky, Preludium 01, launched in February 2006 and sold out in less than 20 minutes. Production All ingredients used in the production are sourced within a 75-mile radius of Mackmyra, except for the yeast, which is from Rotebro. The water undergoes a natural filtration process in an esker nearby and is only sterilized with a high-intensity UV light. The peat is from a local bog near Österfärnebo, and the distillery uses barley from Dalarna and Strömsta Manor in Enköping. Mackmyra bottles all of its wares in their natural color, without additives, and ages its spirits in four different cask types: bourbon, sherry, Swedish oak and a special signature cask made from classic American bourbon casks and Swedish oak. The whisky is generally matured 50 meters below ground in the disused Bodås Mine in Hofors, and most releases have been at cask strength, except for The First Edition and Mackmyra Brukswhisky. Mackmyra filed for insolvency in 2024. Distilleries Mackmyra has two active distilleries. The first went on stream at Mackmyra in 2002, featuring a full-sized pot still from Forsyth's in Rothes, Scotland, Swedish stainless steel washbacks and a German mash tun, with a production capacity of 600,000 bottles a year. A second distillery, about 6 miles east of Mackmyra village, was built and went on stream in 2011. The project cost has been estimated at SEK 50 million; the distillery features two full-sized pot stills with a production capacity of 1.8 M bottles a year. It is seven storeys high, using gravity to power many internal processes within the distillery, resulting in about 45% less energy use compared to the first distillery. Products Standard Range The First Edition (ABV 46.1%) - Introduced in 2008, and the first Swedish whisky produced in large volumes since Skeppets Whisky. Mackmyra Brukswhisky (ABV 41.1%) – Introduced in 2010, and sold internationally as The Swedish Whisky.
Mackmyra Svensk Rök (ABV 46.1%) – Introduced in 2013, and the first Swedish single malt whisky with a smoky flavor. Special edition bottlings Mackmyra Midvinter – A limited edition series, launched in November 2013. Mackmyra Midnattssol – A limited edition series, launched in May 2014. Mackmyra Moment – A series of whiskies from handpicked casks selected by the master blender. Mackmyra Reserve – A single barrel whisky made to order and stored until ready to drink in 30-litre casks. The customer picks recipe and cask type. Mackmyra 10 år – 10-year-old limited edition whisky. Past special edition bottlings Mackmyra Preludium – 2006-2007 Mackmyra Privus – 2006 Mackmyra Special – 2008-2013 Other spirits Vit Hund (ABV 46.1%) – An unmatured raw whisky Bee (ABV 22%) – A whisky and honey liqueur Awards Mackmyra has won several awards at international spirit competitions. For example: Mackmyra Brukswhisky has been named "European Whisky of the Year" by Jim Murray in the Whisky Bible, and was awarded gold by The International Wine and Spirit Competition (IWSC) in 2010. In 2012, Mackmyra received a trophy as the "European Spirits Producer of the Year" from the IWSC, and was awarded gold for Moment Skog and Mackmyra Special 08. Gold was also previously awarded in 2011 for The First Edition, Moment Drivved, Moment Medvind and Mackmyra Reserve. In 2013 the distillery was awarded Gold Outstanding by the IWSC and Three Golden Stars by the International Taste and Quality Institute for Moment Glöd single malt whisky. See also List of whisky brands Single malt whisky References External links Official Mackmyra Website (in English) Swedish Whisky (in English) Distilleries Swedish distilled drinks Food and drink companies of Sweden Companies based in Gävleborg County
Mackmyra Whisky
Chemistry
1,231
44,535,299
https://en.wikipedia.org/wiki/Middle%20child%20syndrome
Middle child syndrome is the idea that the middle children of a family, those born in between siblings, are treated or seen differently by their parents from the rest of their siblings. The theory holds that the particular birth order of siblings affects children's character and development because parents focus more on the first and last-born children. The term is not used to describe a mental disorder. Instead, it is a hypothetical idea describing how middle children see the world based on their subconscious upbringing. As a result, middle children are believed to develop different characteristics and personality traits from the rest of their siblings, as well as experiencing household life differently from the rest of their siblings. Birth order Alfred Adler (1870-1937) was an Austrian physician and psychiatrist during the Victorian era. Throughout his life, he created and studied three main theories: inferiority versus superiority, social interest, and birth order. His theory surrounding birth order stated that the order in which siblings are born significantly affects children's adolescence and personality types. With the help of Sigmund Freud and Carl Jung, Adler developed this theory specifically to make children's behavior easier to understand, and to explain why children raised under the same roof by the same parents can act so differently. His idea revolved around married parents who raised their children while living together. Many researchers and psychologists today study the topic of birth order and how it affects children, and the term "middle child syndrome" developed over time. It describes the shared characteristics middle children feel and the events they go through that are specifically related to being the middle child. According to Adler's theory, the lives of first, middle, and last-born siblings differ because of their birth order, and their personality traits can be affected by this. The oldest child may be dominant and conservative The middle child may be cooperative and independent The youngest child may be ambitious and privileged With middle children being "stuck in the middle," it can become common for the middle-born to feel unloved or to receive less attention from their parents. There are certain family situations where birth order and middle child syndrome don't apply. Alfred Adler's concept surrounding birth order relies on the stereotypical dysfunctional family. Middle child syndrome is an idea, not a diagnosis. This term helps researchers understand more about child development and why children behave as they do regarding parenting and sibling relationships. Research A study on the differences between the perceived IQ of middle-born children and their siblings was conducted in 1988. Through the data they collected, researchers found that parents tended to have a more favorable impression of their first-born's intellect than of their younger siblings'. It was found that when testing the IQ of siblings of comparable ages, their IQ scores tended to be within a few points of each other. The study concluded that although siblings tended to have a similar IQ due to having a shared environment, the way they were treated due to their perceived intelligence was mismatched. In 1998, researchers conducted a survey to test the theory that birth order had an influence on the personality of an individual and the strength of their bond with their parents.
They found that middle children were the least likely to say they would turn to their parents when faced with a dire and stressful situation. It was also noted that middle children were less likely to nominate their mother as the person they felt most close to compared to the first-born and last-born. In 2016, research was performed to examine birth order and its effect on the idealistic self-representation among undergraduate engineering students. Among the 320 participants, researchers found that middle-born children were less likely to be family-oriented compared to their siblings. According to the study, first-born children scored higher in being protective compared to their younger siblings. Additionally, the middle children had scored the highest for affection and getting along. However, their score was lower for companionship and identification. Such findings suggest that there could be differences in an individual's character that might be attributed to the order in which they were born. Middle children were also the most likely to develop maladaptive perfectionism, which is an inclination towards following instructions up to the finest details. An analysis on birth order and parental sibling involvement in sex education was conducted in 2018. The survey had over 15,000 participants. Based on the results, researchers found that 30.9 percent of middle-born women were slightly less likely to talk to their parents about procreation in comparison to the 29.4 percent of women that were youngest in their family. Likewise, the research determined that 17.9 percent of men born in the middle of their family found it relatively simple to discuss sexual reproduction with their parents contrary to the 21.4 percent of last-born men. Jeannie S. Kidwell conducted a study exploring the self-esteem of middle children compared to the youngest and oldest children in the family. Other factors were also accounted for in the study, such as the number of children, age difference, and gender. Kidwell proposed that self esteem is an important scope of one's identity and related to the competence, achievement, and relationships of a child's development. The results of Kidwell's study suggested that an individual's self-esteem decreases as their number of siblings increase. However, it was mostly seen within families with children born in intervals of two years apart. The study suggests that this is because “there is less time to develop and solidify the uniqueness inherent in being firstborn and lastborn when there is only one year between siblings. With this compact spacing, all three birth positions become less distinct, clouding the behavioral and perceptual differences between them.” The “lack of uniqueness” phenomenon is defined as achieving status, affection, and recognition among family members because the individual feels special in their eyes. Kidwell analyzed whether it was more difficult for middleborn children and if it would affect their self-assessment. Kidwell's findings proposed that young men with siblings that were all female showed higher levels of self-esteem, despite the order in which they were born. Examples and traits The theory of birth order argues that the sequence in which a person is born can influence their distinct personality. It is believed that personality may be attributed to the parenting style in which one was raised. For example, parents with multiple children might raise the oldest child differently from the middle or youngest child. 
Middle child syndrome is often used to describe how middle children may be treated differently, and therefore experience childhood differently, from their siblings. While every middle child's upbringing involves distinct circumstances, there is evidence of similar behavioral patterns among them.

Traits

Middle children's personality traits are said to result from the relationships between the middle child and the rest of the family, both siblings and parents. Commonly cited traits include:
Secretive
Mediator
Diplomatic
Independent
Loyal
Social
Accountable
Compromising
Adaptable
Flexible

According to birth order theory, there are also several situations during adolescence that middle children may experience more often than their first- or last-born siblings.

Examples
Not feeling special growing up
Receiving less parental attention than their siblings
Relying more on friends than on family
Being the first in the family to leave home
Being the last in the family to ask for help
Being protective of relationships outside the family
Feeling overpowered or dominated in certain situations
Being more likely to be ignored or neglected by parents

Explanation

It is not known when or where the term middle child syndrome originated; the study of birth order is what has given the phrase its meaning. Being a middle child does not automatically mean being overlooked, and there may even be times when it carries advantages. Like many aspects of life, being the middle child has both positive and negative sides. While birth order and middle child syndrome may help in understanding child development, they do not define middle-born children as a whole. Ultimately, there can be psychological effects on middle-born children who do not receive the attention given to the oldest and youngest children of the family. Although there is a large body of research on birth order, Alfred Adler remains the psychologist most closely associated with the theory, and his research is widely criticized as outdated and as omitting essential factors such as race, age, and gender.

See also
Sibling rivalry
Sibling abuse
Nepotism
List of siblings groups
Nuclear family
Sibling Day

References

Further reading

Human development Psychological theories Sibling
Middle child syndrome
Biology
1,680
29,103,391
https://en.wikipedia.org/wiki/Mycena%20multiplicata
Mycena multiplicata is a species of mushroom in the family Mycenaceae. First described as a new species in 2007, the mushroom is known only from the prefecture of Kanagawa, Japan, where it grows on dead fallen twigs in lowland forests dominated by oak. The mushroom has a whitish cap that reaches up to in diameter atop a slender stem long and thick. On the underside of the cap are whitish, distantly spaced gills that are narrowly attached to the stem. Microscopic characteristics of the mushroom include the amyloid spores (which turn bluish-black to black in the presence of Melzer's reagent), the pear-shaped to broadly club-shaped cheilocystidia (cystidia found on the gill edge) which are covered with a few to numerous, unevenly spaced, cylindrical protuberances, the lack of pleurocystidia (cystidia on the gill face), and the diverticulate hyphae in the outer layer of the cap and stem. The edibility of the mushroom is unknown. Taxonomy, naming, and classification The mushroom was first collected by Japanese mycologist Haruki Takahashi in 1999, and reported as a new species in a 2007, along with seven other Japanese Mycenas. The specific epithet is derived from the Latin word multiplicata, meaning "multiplicative". Its Japanese name is Keashi-ochiedatake (ケアシオチエダタケ). Takahashi suggests that the mushroom is best classified in the section Mycena of the genus Mycena, as defined by Dutch Mycena specialist Maas Geesteranus. Description The cap of M. multiplicata is conical to convex to bell-shaped, reaching in diameter. It is often shallowly grooved toward the margin, dry, and somewhat hygrophanous (changing color when it loses or absorbs water). The cap surface is initially pruinose (appearing as if covered with a fine white powder), but soon becomes smooth. The cap color is whitish, sometimes pale brownish at the center. The white flesh is up to 0.3 mm thick, and does not have any distinctive taste or odor. The slender stem is long by thick, cylindrical, centrally attached to the cap, and hollow. Its surface is dry, pruinose near the top, and covered with fine, soft hairs toward the base. It is whitish to grayish-violet near the top, gradually becoming dark violet below. The stem base is covered with long, fairly coarse, whitish fibrils. The gills are narrowly attached to the stem, distantly spaced (between 13 and 16 gills reach the stem), up to 1.7 mm broad, thin, and whitish, with the gill edges the same color as the gill faces. The edibility of the mushroom has not been determined. Microscopic characteristics The spores are ellipsoid, thin-walled, smooth, colorless, amyloid, and measure 8–9.5 by 4 μm. The basidia (spore-bearing cells) are 24–31 by 6.5–7.5 μm, club-shaped, four-spored, and have clamps at the basal septa. The cheilocystidia (cystidia on the gill edge) are abundant, pear-shaped to broadly club-shaped, and measure 17–28 by 11–20 μm. They are covered with a few to numerous excrescences (outgrowths) that are 2–18 by 1–3 μm, colorless, and thin-walled. The excrescences are unevenly spaced, simple to somewhat branched, cylindrical, and straight or curved. There are no pleurocystidia (cystidia on the gill face) in this species. The hymenophoral (gill-producing) tissue is made of thin-walled hyphae that are 7–20 μm wide, cylindrical (but often inflated), smooth, hyaline (translucent), and dextrinoid (staining reddish to reddish-brown in Melzer's reagent). 
The cap cuticle is made of parallel, bent-over hyphae that are 3–5 μm wide, cylindrical, and covered with simple to highly branched colorless diverticulae that have thin walls. The layer of hyphae underneath the cap cuticle have a parallel arrangement, and are hyaline and dextrinoid, and made of short and inflated cells that are up to 52 μm wide. The stem cuticle is made of parallel, bent-over hyphae that are 2–10 μm wide, cylindrical, diverticulate, colorless or pale violet, dextrinoid, and thin-walled. The caulocystidia (cystidia on the stem) are 2–6 μm wide, and otherwise similar in appearance to the cheilocystidia. The stem tissue is made of longitudinally arranged, cylindrical hyphae measuring 5–13 μm wide that are smooth, hyaline, and dextrinoid. Clamp connections are present in the cap cuticle and flesh, and at the septa at the base of the basidia. Similar species Within the section Mycena, M. multiplicata is similar to the Malaysian species M. obcalyx in having a grayish-white cap, lobed cheilocystidia with finger-like outgrowths, and a lignicolous habitat. M. obcalyx may be distinguished by forming much smaller fruit bodies (with caps 2–4 mm wide) with subdecurrent gills, a pruinose (dusty-looking), hyaline (glassy) white stem, and broadly ellipsoid spores. Habitat and distribution Mycena multiplicata is known only from Kanagawa, Japan. It is found growing solitary or scattered, on dead fallen twigs in lowland forests dominated by the oak species Quercus myrsinaefolia and Q. serrata. References External links The Agaricales in Southwestern Islands of Japan Images of the holotype specimen multiplicata Fungi of Japan Fungi described in 2007 Fungus species
Mycena multiplicata
Biology
1,288
75,634,894
https://en.wikipedia.org/wiki/Amanita%20diemii
Amanita diemii is a species of Amanita found growing under Nothofagus in Argentina and Chile. References External links diemii Fungi described in 1954 Fungus species
Amanita diemii
Biology
39
3,756,990
https://en.wikipedia.org/wiki/High-Definition%20Coding
HDC (Hybrid Digital Coding or High-Definition Coding) with SBR (spectral band replication) is a proprietary lossy audio compression codec developed by iBiquity for use with HD Radio. It replaced the earlier PAC codec in 2003. In June 2017, the format was reverse engineered and determined to be a variant of HE-AACv1. It uses a modified discrete cosine transform (MDCT) audio coding data compression algorithm. References External links News story about switch to HDC Audio codecs HD Radio
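HDC's exact filterbank parameters are proprietary, but the MDCT it builds on is a standard lapped transform: 2N windowed time samples map to N spectral coefficients, and perfect reconstruction is obtained by overlap-adding the inverse transforms of adjacent half-overlapping blocks. Below is a minimal NumPy sketch of a generic forward MDCT (textbook definition with a sine window; the block length and test signal are illustrative assumptions, not parameters of the codec).

import numpy as np

def mdct(block):
    # Forward MDCT of one block: 2N time samples in, N coefficients out.
    two_n = len(block)
    n = two_n // 2
    ns = np.arange(two_n)
    ks = np.arange(n)
    window = np.sin(np.pi / two_n * (ns + 0.5))  # sine window (Princen-Bradley condition)
    basis = np.cos(np.pi / n * (ns[None, :] + 0.5 + n / 2) * (ks[:, None] + 0.5))
    return basis @ (window * block)

# Example: one 256-sample block of a 440 Hz tone sampled at 44.1 kHz.
t = np.arange(256)
coeffs = mdct(np.sin(2 * np.pi * 440 * t / 44100))
print(coeffs.shape)  # -> (128,)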
High-Definition Coding
Technology
109
20,361,954
https://en.wikipedia.org/wiki/International%20Institute%20of%20Earthquake%20Engineering%20and%20Seismology
International Institute of Earthquake Engineering and Seismology (IIEES) (Persian: پژوهشگاه بین‌المللی زلزله‌شناسی و مهندسی زلزله) is an international earthquake engineering and seismology institute based in Iran. It was established as a result of the 24th UNESCO General Conference Resolution DR/250 under Iranian government approval in 1989. It was founded as an independent institute within the Iran's Ministry of Science, Research and Technology. On its establishment, the IIEES drew up a seismic code in an attempt to improve the infrastructural response to earthquakes and seismic activity in the country. Its primary objective is to reduce the risk of seismic activity on buildings and roads and provide mitigation measures both in Iran and the region. The institute is responsible for research and education in this field along with several universities and institutes in Iran by conducting research and providing education and knowledge in seismotectonic studies, seismology and earthquake engineering. In addition conducts research and educates in risk management and generating possibilities for an effective earthquake response strategy. The IIEES is composed of the following research Centers: Seismology, Geotechnical Earthquake Engineering, Structural Earthquake Engineering, Risk Management; National center for Earthquake Prediction, and Graduate School, Public Education and Information Division. See also 2003 Bam earthquake Bahram Akasheh Earthquake Engineering Research Institute National Center for Research on Earthquake Engineering References External links Official site (Persian) Official site (English) Earthquake and seismic risk mitigation Earthquake engineering Research institutes in Iran Science and technology in Iran Scientific organisations based in Iran
International Institute of Earthquake Engineering and Seismology
Engineering
334
9,893,170
https://en.wikipedia.org/wiki/Ghidra
Ghidra (pronounced GEE-druh; ) is a free and open source reverse engineering tool developed by the National Security Agency (NSA) of the United States. The binaries were released at RSA Conference in March 2019; the sources were published one month later on GitHub. Ghidra is seen by many security researchers as a competitor to IDA Pro. The software is written in Java using the Swing framework for the GUI. The decompiler component is written in C++, and is therefore usable in a stand-alone form. Scripts to perform automated analysis with Ghidra can be written in Java or Python (via Jython), though this feature is extensible and support for other programming languages is available via community plugins. Plugins adding new features to Ghidra itself can be developed using a Java-based extension framework. History Ghidra's existence was originally revealed to the public via Vault 7 in March 2017, but the software itself remained unavailable until its declassification and official release two years later. Some comments in its source code indicates that it existed as early as 1999. In June 2019, coreboot began to use Ghidra for its reverse engineering efforts on firmware-specific problems following the open source release of the Ghidra software suite. Ghidra can be used, officially, as a debugger since Ghidra 10.0. Ghidra's debugger supports debugging user-mode Windows programs via WinDbg, and Linux programs via GDB. Supported architectures The following architectures or binary formats are supported: x86 16, 32 and 64 bit ARM and AARCH64 PowerPC 32/64 and VLE MIPS 16/32/64 MicroMIPS 68xxx Java and DEX bytecode PA-RISC RISC-V eBPF BPF Tricore PIC 12/16/17/18/24 SPARC 32/64 CR16C Z80 6502 MC6805/6809, HC05/HC08/HC12 8048, 8051, 8085 CP1600 MSP430 AVR8, AVR32 SuperH V850 LoongArch Xtensa See also IDA Pro JEB decompiler radare2 Binary Ninja References External links Disassemblers National Security Agency Free software programmed in C++ Free software programmed in Java (programming language) Software using the Apache license
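As a rough illustration of the Python (Jython) scripting mentioned above, the fragment below sketches a script as it would run from Ghidra's Script Manager, where the GhidraScript state (for example currentProgram and println) is exposed to the script; the exact API surface should be checked against the documentation for the Ghidra version in use.

# Minimal Ghidra Python (Jython) script sketch: list each function with its entry point and size.
# Assumes it is run inside Ghidra, which provides currentProgram and println to the script.
fm = currentProgram.getFunctionManager()
for func in fm.getFunctions(True):  # True = iterate in ascending address order
    size = func.getBody().getNumAddresses()
    println("%s @ %s (%d bytes)" % (func.getName(), func.getEntryPoint(), size))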
Ghidra
Engineering
505
214,513
https://en.wikipedia.org/wiki/Transmission%20electron%20microscopy
Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through a specimen to form an image. The specimen is most often an ultrathin section less than 100 nm thick or a suspension on a grid. An image is formed from the interaction of the electrons with the sample as the beam is transmitted through the specimen. The image is then magnified and focused onto an imaging device, such as a fluorescent screen, a layer of photographic film, or a detector such as a scintillator attached to a charge-coupled device or a direct electron detector. Transmission electron microscopes are capable of imaging at a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons. This enables the instrument to capture fine detail—even as small as a single column of atoms, which is thousands of times smaller than a resolvable object seen in a light microscope. Transmission electron microscopy is a major analytical method in the physical, chemical and biological sciences. TEMs find application in cancer research, virology, and materials science as well as pollution, nanotechnology and semiconductor research, but also in other fields such as paleontology and palynology. TEM instruments have multiple operating modes including conventional imaging, scanning TEM imaging (STEM), diffraction, spectroscopy, and combinations of these. Even within conventional imaging, there are many fundamentally different ways that contrast is produced, called "image contrast mechanisms". Contrast can arise from position-to-position differences in the thickness or density ("mass-thickness contrast"), atomic number ("Z contrast", referring to the common abbreviation Z for atomic number), crystal structure or orientation ("crystallographic contrast" or "diffraction contrast"), the slight quantum-mechanical phase shifts that individual atoms produce in electrons that pass through them ("phase contrast"), the energy lost by electrons on passing through the sample ("spectrum imaging") and more. Each mechanism tells the user a different kind of information, depending not only on the contrast mechanism but on how the microscope is used—the settings of lenses, apertures, and detectors. What this means is that a TEM is capable of returning an extraordinary variety of nanometre- and atomic-resolution information, in ideal cases revealing not only where all the atoms are but what kinds of atoms they are and how they are bonded to each other. For this reason TEM is regarded as an essential tool for nanoscience in both biological and materials fields. The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolution greater than that of light in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in physics for the development of transmission electron microscopy. History Initial development In 1873, Ernst Abbe proposed that the ability to resolve detail in an object was limited approximately by the wavelength of the light used in imaging or a few hundred nanometres for visible light microscopes. Developments in ultraviolet (UV) microscopes, led by Köhler and Rohr, increased resolving power by a factor of two. However this required expensive quartz optics, due to the absorption of UV by glass. It was believed that obtaining an image with sub-micrometre information was not possible due to this wavelength constraint. 
In 1858, Plücker observed the deflection of "cathode rays" (electrons) by magnetic fields. This effect was used by Ferdinand Braun in 1897 to build simple cathode-ray oscilloscope (CRO) measuring devices. In 1891, Eduard Riecke noticed that the cathode rays could be focused by magnetic fields, allowing for simple electromagnetic lens designs. In 1926, Hans Busch published work extending this theory and showed that the lens maker's equation could, with appropriate assumptions, be applied to electrons. In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias, Professor of High Voltage Technology and Electrical Installations, appointed Max Knoll to lead a team of researchers to advance the CRO design. The team consisted of several PhD students including Ernst Ruska and Bodo von Borries. The research team worked on lens design and CRO column placement, to optimize parameters to construct better CROs, and make electron optical components to generate low magnification (nearly 1:1) images. In 1931, the group successfully generated magnified images of mesh grids placed over the anode aperture. The device used two magnetic lenses to achieve higher magnifications, arguably creating the first electron microscope. In that same year, Reinhold Rudenberg, the scientific director of the Siemens company, patented an electrostatic lens electron microscope. Improving resolution At the time, electrons were understood to be charged particles of matter; the wave nature of electrons was not fully realized until the PhD thesis of Louis de Broglie in 1924. Knoll's research group was unaware of this publication until 1932, when they realized that the de Broglie wavelength of electrons was many orders of magnitude smaller than that for light, theoretically allowing for imaging at atomic scales. (Even for electrons with a kinetic energy of just 1 electronvolt the wavelength is already as short as 1.18 nm.) In April 1932, Ruska suggested the construction of a new electron microscope for direct imaging of specimens inserted into the microscope, rather than simple mesh grids or images of apertures. With this device successful diffraction and normal imaging of an aluminium sheet was achieved. However the magnification achievable was lower than with light microscopy. Magnifications higher than those available with a light microscope were achieved in September 1933 with images of cotton fibers quickly acquired before being damaged by the electron beam. At this time, interest in the electron microscope had increased, with other groups, such as that of Paul Anderson and Kenneth Fitzsimmons of Washington State University and that of Albert Prebus and James Hillier at the University of Toronto, who constructed the first TEMs in North America in 1935 and 1938, respectively, continually advancing TEM design. Research continued on the electron microscope at Siemens in 1936, where the aim of the research was the development and improvement of TEM imaging properties, particularly with regard to biological specimens. At this time electron microscopes were being fabricated for specific groups, such as the "EM1" device used at the UK National Physical Laboratory. In 1939, the first commercial electron microscope, pictured, was installed in the Physics department of IG Farben-Werke. 
Further work on the electron microscope was hampered by the destruction of a new laboratory constructed at Siemens by an air raid, as well as the death of two of the researchers, Heinz Müller and Friedrich Krause, during World War II. Further research After World War II, Ruska resumed work at Siemens, where he continued to develop the electron microscope, producing the first microscope with 100,000× magnification. The fundamental structure of this microscope design, with multi-stage beam preparation optics, is still used in modern microscopes. The worldwide electron microscopy community advanced, with electron microscopes being manufactured in Manchester UK, the USA (RCA), Germany (Siemens) and Japan (JEOL). The first international conference on electron microscopy was held in Delft in 1949, with more than one hundred attendees. Later conferences included the "First" international conference in Paris, 1950, and then in London in 1954. With the development of TEM, the associated technique of scanning transmission electron microscopy (STEM) was re-investigated and remained undeveloped until the 1970s, when Albert Crewe at the University of Chicago developed the field emission gun and added a high quality objective lens to create the modern STEM. Using this design, Crewe demonstrated the ability to image atoms using annular dark-field imaging. Crewe and coworkers at the University of Chicago developed the cold field electron emission source and built a STEM able to visualize single heavy atoms on thin carbon substrates. Background Electrons Theoretically, the maximum resolution, d, that one can obtain with a light microscope is limited by the wavelength of the photons (λ) and the numerical aperture NA of the system:

d = \frac{\lambda}{2 n \sin\alpha} = \frac{\lambda}{2\,\mathrm{NA}}

where n is the index of refraction of the medium in which the lens is working and α is the maximum half-angle of the cone of light that can enter the lens (see numerical aperture). Early twentieth century scientists theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths of 400–700 nanometres) by using electrons. Like all matter, electrons have both wave and particle properties (matter wave), and their wave-like properties mean that a beam of electrons can be focused and diffracted much like light can. The wavelength of electrons is related to their kinetic energy via the de Broglie equation, which says that the wavelength is inversely proportional to the momentum. Taking into account relativistic effects (as in a TEM an electron's velocity is a substantial fraction of the speed of light, c) the wavelength is

\lambda_e = \frac{h}{\sqrt{2 m_0 E \left(1 + \frac{E}{2 m_0 c^2}\right)}}

where h is the Planck constant, m0 is the rest mass of an electron and E is the kinetic energy of the accelerated electron. Electron source From the top down, the TEM consists of an emission source or cathode, which may be a tungsten filament, a lanthanum hexaboride (LaB6) single crystal or a field emission gun. The gun is connected to a high voltage source (typically ~100–300 kV) and emits electrons either by thermionic or field electron emission into the vacuum. In the case of a thermionic source, the electron source is mounted in a Wehnelt cylinder to provide preliminary focus of the emitted electrons into a beam while also stabilizing the current using a passive feedback circuit. A field emission source instead uses electrostatic electrodes, called an extractor, a suppressor, and a gun lens, with different voltages on each, to control the electric field shape and intensity near the sharp tip.
The combination of the cathode and these first electrostatic lens elements is collectively called the "electron gun". After it leaves the gun, the beam is typically accelerated until it reaches its final voltage and enters the next part of the microscope: the condenser lens system. These upper lenses of the TEM then further focus the electron beam to the desired size and location on the sample. Manipulation of the electron beam is performed using two physical effects. The interaction of electrons with a magnetic field will cause electrons to move according to the left hand rule, thus allowing electromagnets to manipulate the electron beam. Additionally, electrostatic fields can cause the electrons to be deflected through a constant angle. Coupling of two deflections in opposing directions with a small intermediate gap allows for the formation of a shift in the beam path, allowing for beam shifting. Optics The lenses of a TEM are what gives it its flexibility of operating modes and ability to focus beams down to the atomic scale and magnify them to get an image. A lens is usually made of a solenoid coil nearly surrounded by ferromagnetic materials designed to concentrate the coil's magnetic field into a precise, confined shape. When an electron enters and leaves this magnetic field, it spirals around the curved magnetic field lines in a way that acts very much as an ordinary glass lens does for light—it is a converging lens. But, unlike a glass lens, a magnetic lens can very easily change its focusing power by adjusting the current passing through the coils. Equally important to the lenses are the apertures. These are circular holes in thin strips of heavy metal. Some are fixed in size and position and play important roles in limiting x-ray generation and improving the vacuum performance. Others can be freely switched among several different sizes and have their positions adjusted. Variable apertures after the sample allow the user to select the range of spatial positions or electron scattering angles to be used in the formation of an image or a diffraction pattern. The electron-optical system also includes deflectors and stigmators, usually made of small electromagnets. The deflectors allow the position and angle of the beam at the sample position to be independently controlled and also ensure that the beams remain near the low-aberration centers of every lens in the lens stacks. The stigmators compensate for slight imperfections and aberrations that cause astigmatism—a lens having a different focal strength in different directions. Typically a TEM consists of three stages of lensing. The stages are the condenser lenses, the objective lenses, and the projector lenses. The condenser lenses are responsible for primary beam formation, while the objective lenses focus the beam that comes through the sample itself (in STEM scanning mode, there are also objective lenses above the sample to make the incident electron beam convergent). The projector lenses are used to expand the beam onto the phosphor screen or other imaging device, such as film. The magnification of the TEM is due to the ratio of the distances between the specimen and the objective lens' image plane. TEM optical configurations differ significantly with implementation, with manufacturers using custom lens configurations, such as in spherical aberration corrected instruments, or TEMs using energy filtering to correct electron chromatic aberration. 
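As a point of reference for the magnification statement above, the magnetic lenses behave to a good approximation like the familiar thin lens of light optics, for which (u the object distance, v the image distance, f the focal length):

\frac{1}{f} = \frac{1}{u} + \frac{1}{v}, \qquad M = \frac{v}{u}

The overall magnification of the column is then approximately the product of the magnifications of the objective, intermediate, and projector stages.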
Reciprocity The optical reciprocity theorem, or principle of Helmholtz reciprocity, generally holds true for elastically scattered electrons, as is often the case under standard TEM operating conditions. The theorem states that the wave amplitude at some point B as a result of electron point source A would be the same as the amplitude at A due to an equivalent point source placed at B. Simply stated, the wave function for electrons focused through any series of optical components that includes only scalar (i.e. not magnetic) fields will be exactly equivalent if the electron source and observation point are reversed. R Reciprocity is used to understand scanning transmission electron microscopy (STEM) in the familiar context of TEM, and to obtain and interpret images using STEM. Display and detectors The key factors when considering electron detection include detective quantum efficiency (DQE), point spread function (PSF), modulation transfer function (MTF), pixel size and array size, noise, data readout speed, and radiation hardness. Imaging systems in a TEM consist of a phosphor screen, which may be made of fine (10–100 μm) particulate zinc sulfide, for direct observation by the operator, and an image recording system such as photographic film, doped YAG screen coupled CCDs, or other digital detector. Typically these devices can be removed or inserted into the beam path as required. (Photograph film is no longer used.) The first report of using a Charge-Coupled Device (CCD) detector for TEM was in 1982, but the technology didn't find widespread use until the late 1990s/early 2000s. Monolithic active-pixel sensors (MAPSs) were also used in TEM. CMOS detectors, which are faster and more resistant to radiation damage than CCDs, have been used for TEM since 2005. In the early 2010s, further development of CMOS technology allowed for the detection of single electron counts ("counting mode"). These Direct Electron Detectors are available from Gatan, FEI, Quantum Detectors and Direct Electron. Components A TEM is composed of several components, which include a vacuum system in which the electrons travel, an electron emission source for generation of the electron stream, a series of electromagnetic lenses, as well as electrostatic plates. The latter two allow the operator to guide and manipulate the beam as required. Also required is a device to allow the insertion into, motion within, and removal of specimens from the beam path. Imaging devices are subsequently used to create an image from the electrons that exit the system. Vacuum system To increase the mean free path of the electron gas interaction, a standard TEM is evacuated to low pressures, typically on the order of . The need for this is twofold: first the allowance for the voltage difference between the cathode and the ground without generating an arc, and secondly to reduce the collision frequency of electrons with gas atoms to negligible levels—this effect is characterized by the mean free path. TEM components such as specimen holders and film cartridges must be routinely inserted or replaced requiring a system with the ability to re-evacuate on a regular basis. As such, TEMs are equipped with multiple pumping systems and airlocks and are not permanently vacuum sealed. The vacuum system for evacuating a TEM to an operating pressure level consists of several stages. 
Initially, a low or roughing vacuum is achieved with either a rotary vane pump or diaphragm pumps setting a sufficiently low pressure to allow the operation of a turbo-molecular or diffusion pump establishing high vacuum level necessary for operations. To allow for the low vacuum pump to not require continuous operation, while continually operating the turbo-molecular pumps, the vacuum side of a low-pressure pump may be connected to chambers which accommodate the exhaust gases from the turbo-molecular pump. Sections of the TEM may be isolated by the use of pressure-limiting apertures to allow for different vacuum levels in specific areas such as a higher vacuum of 10−4 to 10−7 Pa or higher in the electron gun in high-resolution or field-emission TEMs. High-voltage TEMs require ultra-high vacuums on the range of 10−7 to 10−9 Pa to prevent the generation of an electrical arc, particularly at the TEM cathode. As such for higher voltage TEMs a third vacuum system may operate, with the gun isolated from the main chamber either by gate valves or a differential pumping aperture – a small hole that prevents the diffusion of gas molecules into the higher vacuum gun area faster than they can be pumped out. For these very low pressures, either an ion pump or a getter material is used. Poor vacuum in a TEM can cause several problems ranging from the deposition of gas inside the TEM onto the specimen while viewed in a process known as electron beam induced deposition to more severe cathode damages caused by electrical discharge. The use of a cold trap to adsorb sublimated gases in the vicinity of the specimen largely eliminates vacuum problems that are caused by specimen sublimation. Specimen stage TEM specimen stage designs include airlocks to allow for insertion of the specimen holder into the vacuum with minimal loss of vacuum in other areas of the microscope. The specimen holders hold a standard size of sample grid or self-supporting specimen. Standard TEM grid sizes are 3.05 mm diameter, with a thickness and mesh size ranging from a few to 100 μm. The sample is placed onto the meshed area having a diameter of approximately 2.5 mm. Usual grid materials are copper, molybdenum, gold or platinum. This grid is placed into the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are sometimes, if rarely, used. These grids were particularly used in the mineral sciences where a large degree of tilt can be required and where specimen material may be extremely rare. Electron transparent specimens have a thickness usually less than 100 nm, but this value depends on the accelerating voltage. Once inserted into a TEM, the sample has to be manipulated to locate the region of interest to the beam, such as in single grain diffraction, in a specific orientation. To accommodate this, the TEM stage allows movement of the sample in the XY plane, Z height adjustment, and commonly a single tilt direction parallel to the axis of side entry holders. Sample rotation may be available on specialized diffraction holders and stages. Some modern TEMs provide the ability for two orthogonal tilt angles of movement with specialized holder designs called double-tilt sample holders. Some stage designs, such as top-entry or vertical insertion stages once common for high resolution TEM studies, may simply only have X-Y translation available. 
The design criteria of TEM stages are complex, owing to the simultaneous requirements of mechanical and electron-optical constraints and specialized models are available for different methods. A TEM stage is required to have the ability to hold a specimen and be manipulated to bring the region of interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the stage must simultaneously be highly resistant to mechanical drift, with drift requirements as low as a few nm/minute while being able to move several μm/minute, with repositioning accuracy on the order of nanometres. Earlier designs of TEM accomplished this with a complex set of mechanical downgearing devices, allowing the operator to finely control the motion of the stage by several rotating rods. Modern devices may use electrical stage designs, using screw gearing in concert with stepper motors, providing the operator with a computer-based stage input, such as a joystick or trackball. Two main designs for stages in a TEM exist, the side-entry and top entry version. Each design must accommodate the matching holder to allow for specimen insertion without either damaging delicate TEM optics or allowing gas into TEM systems under vacuum. The most common is the side entry holder, where the specimen is placed near the tip of a long metal (brass or stainless steel) rod, with the specimen placed flat in a small bore. Along the rod are several polymer vacuum rings to allow for the formation of a vacuum seal of sufficient quality, when inserted into the stage. The stage is thus designed to accommodate the rod, placing the sample either in between or near the objective lens, dependent upon the objective design. When inserted into the stage, the side entry holder has its tip contained within the TEM vacuum, and the base is presented to atmosphere, the airlock formed by the vacuum rings. Insertion procedures for side-entry TEM holders typically involve the rotation of the sample to trigger micro switches that initiate evacuation of the airlock before the sample is inserted into the TEM column. The second design is the top-entry holder consists of a cartridge that is several cm long with a bore drilled down the cartridge axis. The specimen is loaded into the bore, possibly using a small screw ring to hold the sample in place. This cartridge is inserted into an airlock with the bore perpendicular to the TEM optic axis. When sealed, the airlock is manipulated to push the cartridge such that the cartridge falls into place, where the bore hole becomes aligned with the beam axis, such that the beam travels down the cartridge bore and into the specimen. Such designs are typically unable to be tilted without blocking the beam path or interfering with the objective lens. Electron gun The electron gun is formed from several components: the filament, a biasing circuit, a Wehnelt cap, and an extraction anode. By connecting the filament to the negative component power supply, electrons can be "pumped" from the electron gun to the anode plate and the TEM column, thus completing the circuit. The gun is designed to create a beam of electrons exiting from the assembly at some given angle, known as the gun divergence semi-angle, α. By constructing the Wehnelt cylinder such that it has a higher negative charge than the filament itself, electrons that exit the filament in a diverging manner are, under proper operation, forced into a converging pattern the minimum size of which is the gun crossover diameter. 
The thermionic emission current density, J, can be related to the work function of the emitting material via Richardson's law:

J = A T^2 \exp\left(-\frac{\Phi}{k T}\right)

where A is the Richardson constant, Φ is the work function, T is the temperature of the material, and k is the Boltzmann constant. This equation shows that in order to achieve sufficient current density it is necessary to heat the emitter, taking care not to cause damage by application of excessive heat. For this reason materials with either a high melting point, such as tungsten, or a low work function, such as lanthanum hexaboride (LaB6), are required for the gun filament. Furthermore, both lanthanum hexaboride and tungsten thermionic sources must be heated in order to achieve thermionic emission; this can be achieved by the use of a small resistive strip. To prevent thermal shock, a delay is often enforced in the application of current to the tip, to prevent thermal gradients from damaging the filament; the delay is usually a few seconds for LaB6, and significantly shorter for tungsten. Electron lens Electron lenses are designed to act in a manner emulating that of an optical lens, by focusing parallel electrons to some constant focal distance. Electron lenses may operate electrostatically or magnetically. The majority of electron lenses for TEM use electromagnetic coils to generate a convex lens. The field produced for the lens must be radially symmetrical, as deviation from the radial symmetry of the magnetic lens causes aberrations such as astigmatism, and worsens spherical and chromatic aberration. Electron lenses are manufactured from iron, iron-cobalt or nickel-cobalt alloys, such as permalloy. These are selected for their magnetic properties, such as magnetic saturation, hysteresis and permeability. The components include the yoke, the magnetic coil, the poles, the polepiece, and the external control circuitry. The pole piece must be manufactured in a very symmetrical manner, as this provides the boundary conditions for the magnetic field that forms the lens. Imperfections in the manufacture of the pole piece can induce severe distortions in the magnetic field symmetry, which in turn induce distortions that ultimately limit the lenses' ability to reproduce the object plane. The exact dimensions of the gap, the pole piece internal diameter and taper, and the overall design of the lens are often determined by finite element analysis of the magnetic field, whilst considering the thermal and electrical constraints of the design. The coils which produce the magnetic field are located within the lens yoke. The coils can carry a variable current, but typically use high voltages, and therefore require significant insulation in order to prevent short-circuiting the lens components. Thermal distributors are placed to ensure the extraction of the heat generated by the energy lost to resistance of the coil windings. The windings may be water-cooled, using a chilled water supply, in order to facilitate the removal of the high thermal duty. Apertures Apertures are annular metallic plates through which electrons that are further than a fixed distance from the optic axis are excluded. These consist of a small metallic disc that is sufficiently thick to prevent electrons from passing through the disc, whilst permitting axial electrons. This permission of central electrons in a TEM causes two effects simultaneously: firstly, apertures decrease the beam intensity as electrons are filtered from the beam, which may be desired in the case of beam sensitive samples.
Secondly, this filtering removes electrons that are scattered to high angles, which may be due to unwanted processes such as spherical or chromatic aberration, or due to diffraction from interaction within the sample. Apertures are either a fixed aperture within the column, such as at the condenser lens, or are a movable aperture, which can be inserted or withdrawn from the beam path, or moved in the plane perpendicular to the beam path. Aperture assemblies are mechanical devices which allow for the selection of different aperture sizes, which may be used by the operator to trade off intensity and the filtering effect of the aperture. Aperture assemblies are often equipped with micrometers to move the aperture, required during optical calibration. Imaging methods Imaging methods in TEM use the information contained in the electron waves exiting from the sample to form an image. The projector lenses allow for the correct positioning of this electron wave distribution onto the viewing system. The observed intensity, I, of the image, assuming sufficiently high quality of imaging device, can be approximated as proportional to the time-averaged squared absolute value of the amplitude of the electron wavefunctions, where the wave that forms the exit beam is denoted by Ψ. Different imaging methods therefore attempt to modify the electron waves exiting the sample in a way that provides information about the sample, or the beam itself. From the previous equation, it can be deduced that the observed image depends not only on the amplitude of beam, but also on the phase of the electrons, although phase effects may often be ignored at lower magnifications. Higher resolution imaging requires thinner samples and higher energies of incident electrons, which means that the sample can no longer be considered to be absorbing electrons (i.e., via a Beer's law effect). Instead, the sample can be modeled as an object that does not change the amplitude of the incoming electron wave function, but instead modifies the phase of the incoming wave; in this model, the sample is known as a pure phase object. For sufficiently thin specimens, phase effects dominate the image, complicating analysis of the observed intensities. To improve the contrast in the image, the TEM may be operated at a slight defocus to enhance contrast, owing to convolution by the contrast transfer function of the TEM, which would normally decrease contrast if the sample was not a weak phase object. The figure on the right shows the two basic operation modes of TEM – imaging and diffraction modes. In both cases the specimen is illuminated with the parallel beam, formed by electron beam shaping with the system of Condenser lenses and Condenser aperture. After interaction with the sample, on the exit surface of the specimen two types of electrons exist – unscattered (which will correspond to the bright central beam on the diffraction pattern) and scattered electrons (which change their trajectories due to interaction with the material). In Imaging mode, the objective aperture is inserted in a back focal plane (BFP) of the objective lens (where diffraction spots are formed). If using the objective aperture to select only the central beam, the transmitted electrons are passed through the aperture while all others are blocked, and a bright field image (BF image) is obtained. If we allow the signal from a diffracted beam, a dark field image (DF image) is received. 
The selected signal is magnified and projected on a screen (or on a camera) with the help of Intermediate and Projector lenses. An image of the sample is thus obtained. In Diffraction mode, a selected area aperture may be used to determine more precisely the specimen area from which the signal will be displayed. By changing the strength of current to the intermediate lens, the diffraction pattern is projected on a screen. Diffraction is a very powerful tool for doing a cell reconstruction and crystal orientation determination. Contrast formation The contrast between two adjacent areas in a TEM image can be defined as the difference in the electron densities in image plane. Due to the scattering of the incident beam by the sample, the amplitude and phase of the electron wave change, which results in amplitude contrast and phase contrast, correspondingly. Most images have both contrast components. Amplitude–contrast is obtained due to removal of some electrons before the image plane. During their interaction with the specimen some of electrons will be lost due to absorption, or due to scattering at very high angles beyond the physical limitation of microscope or are blocked by the objective aperture. While the first two losses are due to the specimen and microscope construction, the objective aperture can be used by operator to enhance the contrast. Figure on the right shows a TEM image (a) and the corresponding diffraction pattern (b) of Pt polycrystalline film taken without an objective aperture. In order to enhance the contrast in the TEM image the number of scattered beams as visible in the diffraction pattern should be reduced. This can be done by selecting a certain area in the back focal plane such as only the central beam or a specific diffracted beam (angle), or combinations of such beams. By intentionally selecting an objective aperture which only permits the non-diffracted beam to pass beyond the back focal plane (and onto the image plane): one creates a Bright-Field (BF) image (c), whereas if the central, non-diffracted beam is blocked: one may obtain dark-field (DF) images such as those shown in (d–e). The DF images (d–e) were obtained by selecting the diffracted beams indicated in diffraction pattern with circles (b) using an aperture at the back focal plane. Grains from which electrons are scattered into these diffraction spots appear brighter. More details about diffraction contrast formation are given further. There are two types of amplitude contrast – mass–thickness and diffraction contrast. First, let's consider mass–thickness contrast. When the beam illuminates two neighbouring areas with low mass (or thickness) and high mass (or thickness), the heavier region scatters electrons at bigger angles. These strongly scattered electrons are blocked in BF TEM mode by objective aperture. As a result, heavier regions appear darker in BF images (have low intensity). Mass–thickness contrast is most important for non–crystalline, amorphous materials. Diffraction contrast occurs due to a specific crystallographic orientation of a grain. In such a case the crystal is oriented in a way that there is a high probability of diffraction. Diffraction contrast provides information on the orientation of the crystals in a polycrystalline sample, as well as other information such as defects. Note that in case diffraction contrast exists, the contrast cannot be interpreted as due to mass or thickness variations. 
Diffraction contrast Samples can exhibit diffraction contrast, whereby the electron beam undergoes diffraction which in the case of a crystalline sample, disperses electrons into discrete locations in the back focal plane. By the placement of apertures in the back focal plane, i.e. the objective aperture, the desired reciprocal lattice vectors can be selected (or excluded), thus only parts of the sample that are causing the electrons to scatter to the selected reflections will end up projected onto the imaging apparatus. If the reflections that are selected do not include the unscattered beam (which will appear up at the focal point of the lens), then the image will appear dark wherever no sample scattering to the selected peak is present, as such a region without a specimen will appear dark. This is known as a dark-field image. Modern TEMs are often equipped with specimen holders that allow the user to tilt the specimen to a range of angles in order to obtain specific diffraction conditions, and apertures placed above the specimen allow the user to select electrons that would otherwise be diffracted in a particular direction from entering the specimen. Applications for this method include the identification of lattice defects in crystals. By carefully selecting the orientation of the sample, it is possible not just to determine the position of defects but also to determine the type of defect present. If the sample is oriented so that one particular plane is only slightly tilted away from the strongest diffracting angle (known as the Bragg Angle), any distortion of the crystal plane that locally tilts the plane to the Bragg angle will produce particularly strong contrast variations. However, defects that produce only displacement of atoms that do not tilt the crystal towards the Bragg angle (i. e. displacements parallel to the crystal plane) will produce weaker contrast. Phase contrast Crystal structure can also be investigated by high-resolution transmission electron microscopy (HRTEM), also known as phase contrast. When using a field emission source and a specimen of uniform thickness, the images are formed due to differences in phase of electron waves, which is caused by specimen interaction. Image formation is given by the complex modulus of the incoming electron beams. As such, the image is not only dependent on the number of electrons hitting the screen, making direct interpretation of phase contrast images slightly more complex. However this effect can be used to an advantage, as it can be manipulated to provide more information about the sample, such as in complex phase retrieval techniques. Diffraction As previously stated, by adjusting the magnetic lenses such that the back focal plane of the lens rather than the imaging plane is placed on the imaging apparatus a diffraction pattern can be generated. For thin crystalline samples, this produces an image that consists of a pattern of dots in the case of a single crystal, or a series of rings in the case of a polycrystalline or amorphous solid material. For the single crystal case the diffraction pattern is dependent upon the orientation of the specimen and the structure of the sample illuminated by the electron beam. This image provides the investigator with information about the space group symmetries in the crystal and the crystal's orientation to the beam path. This is typically done without using any information but the position at which the diffraction spots appear and the observed image symmetries. 
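For reference, the geometry behind the diffraction contrast and diffraction patterns discussed above is governed by the Bragg condition, with d the lattice-plane spacing, θ the Bragg angle, λ the electron wavelength and n an integer:

n\lambda = 2 d \sin\theta

Because λ is only a few picometres, Bragg angles in a TEM are small (typically of the order of milliradians), and in the small-angle approximation a spot at distance R from the central beam in a pattern recorded at camera length L satisfies R d \approx L \lambda, the relation commonly used to index spot patterns.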
Diffraction patterns can have a large dynamic range, and for crystalline samples, may have intensities greater than those recordable by CCD. As such, TEMs may still be equipped with film cartridges for the purpose of obtaining these images, as the film is a single use detector. Analysis of diffraction patterns beyond point-position can be complex, as the image is sensitive to a number of factors such as specimen thickness and orientation, objective lens defocus, and spherical and chromatic aberration. Although quantitative interpretation of the contrast shown in lattice images is possible, it is inherently complicated and can require extensive computer simulation and analysis, such as electron multislice analysis. More complex behavior in the diffraction plane is also possible, with phenomena such as Kikuchi lines arising from multiple diffraction within the crystalline lattice. In convergent beam electron diffraction (CBED) where a non-parallel, i.e. converging, electron wavefront is produced by concentrating the electron beam into a fine probe at the sample surface, the interaction of the convergent beam can provide information beyond structural data such as sample thickness. Electron energy loss spectroscopy (EELS) Using the advanced technique of electron energy loss spectroscopy (EELS), for TEMs appropriately equipped, electrons can be separated into a spectrum based upon their velocity (which is closely related to their kinetic energy, and thus energy loss from the beam energy), using magnetic sector based devices known as EEL spectrometers. These devices allow for the selection of particular energy values, which can be associated with the way the electron has interacted with the sample. For example, different elements in a sample result in different electron energies in the beam after the sample. This normally results in chromatic aberration – however this effect can, for example, be used to generate an image which provides information on elemental composition, based upon the atomic transition during electron-electron interaction. EELS spectrometers can often be operated in both spectroscopic and imaging modes, allowing for isolation or rejection of elastically scattered beams. As for many images inelastic scattering will include information that may not be of interest to the investigator thus reducing observable signals of interest, EELS imaging can be used to enhance contrast in observed images, including both bright field and diffraction, by rejecting unwanted components. Three-dimensional imaging As TEM specimen holders typically allow for the rotation of a sample by a desired angle, multiple views of the same specimen can be obtained by rotating the angle of the sample along an axis perpendicular to the beam. By taking multiple images of a single TEM sample at differing angles, typically in 1° increments, a set of images known as a "tilt series" can be collected. This methodology was proposed in the 1970s by Walter Hoppe. Under purely absorption contrast conditions, this set of images can be used to construct a three-dimensional representation of the sample. The reconstruction is accomplished by a two-step process, first images are aligned to account for errors in the positioning of a sample; such errors can occur due to vibration or mechanical drift. Alignment methods use image registration algorithms, such as autocorrelation methods to correct these errors. 
Secondly, using a reconstruction algorithm, such as filtered back projection, the aligned image slices can be transformed from a set of two-dimensional images, Ij(x, y), to a single three-dimensional image, I′j(x, y, z). This three-dimensional image is of particular interest when morphological information is required, further study can be undertaken using computer algorithms, such as isosurfaces and data slicing to analyse the data. As TEM samples cannot typically be viewed at a full 180° rotation, the observed images typically suffer from a "missing wedge" of data, which when using Fourier-based back projection methods decreases the range of resolvable frequencies in the three-dimensional reconstruction. Mechanical refinements, such as multi-axis tilting (two tilt series of the same specimen made at orthogonal directions) and conical tomography (where the specimen is first tilted to a given fixed angle and then imaged at equal angular rotational increments through one complete rotation in the plane of the specimen grid) can be used to limit the impact of the missing data on the observed specimen morphology. Using focused ion beam milling, a new technique has been proposed which uses pillar-shaped specimen and a dedicated on-axis tomography holder to perform 180° rotation of the sample inside the pole piece of the objective lens in TEM. Using such arrangements, quantitative electron tomography without the missing wedge is possible. In addition, numerical techniques exist which can improve the collected data. All the above-mentioned methods involve recording tilt series of a given specimen field. This inevitably results in the summation of a high dose of reactive electrons through the sample and the accompanying destruction of fine detail during recording. The technique of low-dose (minimal-dose) imaging is therefore regularly applied to mitigate this effect. Low-dose imaging is performed by deflecting illumination and imaging regions simultaneously away from the optical axis to image an adjacent region to the area to be recorded (the high-dose region). This area is maintained centered during tilting and refocused before recording. During recording the deflections are removed so that the area of interest is exposed to the electron beam only for the duration required for imaging. An improvement of this technique (for objects resting on a sloping substrate film) is to have two symmetrical off-axis regions for focusing followed by setting focus to the average of the two high-dose focus values before recording the low-dose area of interest. Non-tomographic variants on this method, referred to as single particle analysis, use images of multiple (hopefully) identical objects at different orientations to produce the image data required for three-dimensional reconstruction. If the objects do not have significant preferred orientations, this method does not suffer from the missing data wedge (or cone) which accompany tomographic methods nor does it incur excessive radiation dosage, however it assumes that the different objects imaged can be treated as if the 3D data generated from them arose from a single stable object. Sample preparation Sample preparation in TEM can be a complex procedure. TEM specimens should be less than 100 nanometres thick for a conventional TEM. Unlike neutron or X-ray radiation the electrons in the beam interact readily with the sample, an effect that increases roughly with atomic number squared (Z2). 
High quality samples will have a thickness that is comparable to the mean free path of the electrons that travel through the samples, which may be only a few tens of nanometres. Preparation of TEM specimens is specific to the material under analysis and the type of information to be obtained from the specimen. Materials that have dimensions small enough to be electron transparent, such as powdered substances, small organisms, viruses, or nanotubes, can be quickly prepared by the deposition of a dilute sample containing the specimen onto films on support grids. Biological specimens may be embedded in resin to withstand the high vacuum in the sample chamber and to enable cutting tissue into electron transparent thin sections. The biological sample can be stained either with a negative staining material such as uranyl acetate (for bacteria and viruses) or, in the case of embedded sections, with heavy metals, including osmium tetroxide. Alternatively, samples may be held at liquid nitrogen temperatures after embedding in vitreous ice. In materials science and metallurgy the specimens can usually withstand the high vacuum, but still must be prepared as a thin foil, or etched so some portion of the specimen is thin enough for the beam to penetrate. Constraints on the thickness of the material are set by the scattering cross-section of the atoms of which the material is composed. Tissue sectioning Before sectioning, biological tissue is often embedded in an epoxy resin block and first trimmed using a razor blade into a trapezoidal block face. Thick sections are then cut from the block face. The thick sections are crudely stained with toluidine blue and examined for specimen and block orientation before thin sectioning. Biological tissue is then thinned to less than 100 nm on an ultramicrotome. The resin block is fractured as it passes over a glass or diamond knife edge. This method is used to obtain thin, minimally deformed samples that allow for the observation of tissue ultrastructure. Inorganic samples, such as aluminium, may also be embedded in resins and ultrathin sectioned in this way, using either coated glass, sapphire or larger angle diamond knives. To prevent charge build-up at the sample surface when viewing in the TEM, tissue samples need to be coated with a thin layer of conducting material, such as carbon. Sample staining TEM samples of biological tissues need high atomic number stains to enhance contrast. The stain absorbs the beam electrons or scatters part of the electron beam which otherwise is projected onto the imaging system. Compounds of heavy metals such as osmium, lead, uranium or gold (in immunogold labelling) may be used prior to TEM observation to selectively deposit electron dense atoms in or on the sample in desired cellular or protein regions. This process requires an understanding of how heavy metals bind to specific biological tissues and cellular structures. Another form of sample staining is negative staining, in which a larger amount of heavy metal stain is applied to the sample. The result is a sample with a dark background and the topological surface of the sample appearing lighter. Negative stain electron microscopy can be ideal for visualizing or forming 3D topological reconstructions of large proteins or macromolecular complexes (> 150 kDa). For smaller proteins, negative stain can be used as a screening step to find the ideal sample concentration for cryogenic electron microscopy. 
Mechanical milling Mechanical polishing is also used to prepare samples for imaging on the TEM. Polishing needs to be done to a high quality to ensure constant sample thickness across the region of interest. A diamond or cubic boron nitride polishing compound may be used in the final stages of polishing to remove any scratches that may cause contrast fluctuations due to varying sample thickness. Even after careful mechanical milling, additional fine methods such as ion etching may be required to perform final stage thinning. Chemical etching Certain samples may be prepared by chemical etching, particularly metallic specimens. These samples are thinned using a chemical etchant, such as an acid, to prepare the sample for TEM observation. Devices to control the thinning process may allow the operator to control either the voltage or current passing through the specimen, and may include systems to detect when the sample has been thinned to a sufficient level of optical transparency. Ion etching Ion etching is a sputtering process that can remove very fine quantities of material. It is used to perform a finishing polish on specimens polished by other means. Ion etching uses an inert gas passed through an electric field to generate a plasma stream that is directed to the sample surface. Acceleration voltages for gases such as argon are typically a few kilovolts. The sample may be rotated to promote even polishing of the sample surface. The sputtering rate of such methods is on the order of tens of micrometres per hour, limiting the method to only extremely fine polishing. Ion etching by argon gas has recently been shown to be able to file down MTJ stack structures to a specific layer, which has then been atomically resolved. The TEM images, taken in plan view rather than cross-section, reveal that the MgO layer within MTJs contains a large number of grain boundaries that may be diminishing the properties of devices. Ion milling (FIB) More recently, focused ion beam (FIB) methods have been used to prepare samples. FIB is a relatively new technique to prepare thin samples for TEM examination from larger specimens. Because FIB can be used to micro-machine samples very precisely, it is possible to mill very thin membranes from a specific area of interest in a sample, such as a semiconductor or metal. Unlike inert gas ion sputtering, FIB makes use of significantly more energetic gallium ions and may alter the composition or structure of the material through gallium implantation. Nanowire assisted transfer For a minimal introduction of stress and bending to transmission electron microscopy (TEM) samples (lamellae, thin films, and other mechanically and beam sensitive samples) when transferring inside a focused ion beam (FIB) instrument, flexible metallic nanowires can be attached to a typically rigid micromanipulator. The main advantages of this method include a significant reduction of sample preparation time (quick welding and cutting of the nanowire at low beam current), and minimization of stress-induced bending, Pt contamination, and ion beam damage. This technique is particularly suitable for in situ electron microscopy sample preparation. Replication Samples may also be replicated using cellulose acetate film, the film subsequently coated with a heavy metal such as platinum, the original film dissolved away, and the replica imaged on the TEM. Variations of the replica technique are used for both materials and biological samples. 
In materials science a common use is for examining the fresh fracture surface of metal alloys. Modifications The capabilities of the TEM can be further extended by additional stages and detectors, sometimes incorporated on the same microscope. Scanning TEM A TEM can be modified into a scanning transmission electron microscope (STEM) by the addition of a system that rasters a convergent beam across the sample; combined with suitable detectors, this forms the image. Scanning coils are used to deflect the beam, for example by an electrostatic shift of the beam, and the beam is then collected using a current detector such as a Faraday cup, which acts as a direct electron counter. By correlating the electron count to the position of the scanning beam (known as the "probe"), the transmitted component of the beam may be measured. The non-transmitted components may be obtained either by beam tilting or by the use of annular dark field detectors. Fundamentally, TEM and STEM are linked via Helmholtz reciprocity. A STEM is a TEM in which the electron source and observation point have been switched relative to the direction of travel of the electron beam. The STEM instrument effectively relies on the same optical set-up as a TEM, but operates by flipping the direction of travel of the electrons (or reversing time) during operation of a TEM. Rather than using an aperture to control detected electrons, as in TEM, a STEM uses various detectors with collection angles that may be adjusted depending on which electrons the user wants to capture. Low-voltage electron microscope A low-voltage electron microscope (LVEM) is operated at a relatively low electron accelerating voltage, between 5 and 25 kV. Some of these can be a combination of SEM, TEM and STEM in a single compact instrument. Low voltage increases image contrast, which is especially important for biological specimens. This increase in contrast significantly reduces, or even eliminates, the need to stain. Resolutions of a few nm are possible in TEM, SEM and STEM modes. The low energy of the electron beam means that permanent magnets can be used as lenses, and thus a miniature column that does not require cooling can be used. Cryo-TEM Cryogenic transmission electron microscopy (Cryo-TEM) uses a TEM with a specimen holder capable of maintaining the specimen at liquid nitrogen or liquid helium temperatures. This allows imaging of specimens prepared in vitreous ice, the preferred preparation technique for imaging individual molecules or macromolecular assemblies, imaging of vitrified solid-electrolyte interfaces, and imaging of materials that are volatile in high vacuum at room temperature, such as sulfur. Environmental/in-situ TEM In-situ experiments may also be conducted in TEM using differentially pumped sample chambers, or specialized holders. Types of in-situ experiments include studying nanomaterials, biological specimens, chemical reactions of molecules, liquid-phase electron microscopy, and material deformation testing. High temperature in-situ TEM Many phase transformations occur during heating. Additionally, coarsening and grain growth, along with other diffusion-related processes, occur more rapidly at elevated temperatures, where kinetics are improved, allowing for the observation of related phenomena under transmission electron microscopy within reasonable time scales. 
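To give a sense of how strongly heating accelerates such diffusion-controlled processes, the following minimal sketch applies the Arrhenius relation; the 150 kJ/mol activation energy and the temperatures are hypothetical values chosen only for illustration and are not taken from the text above.

    import math

    R = 8.314  # molar gas constant, J/(mol*K)

    def rate_ratio(activation_energy_j_mol, t_low_k, t_high_k):
        """Arrhenius speed-up factor for a thermally activated (diffusion-limited)
        process when the temperature is raised from t_low_k to t_high_k."""
        return math.exp(-activation_energy_j_mol / R * (1.0 / t_high_k - 1.0 / t_low_k))

    # Hypothetical activation energy of 150 kJ/mol: heating from 300 K to 900 K
    # accelerates such a process by roughly 17 orders of magnitude.
    print(rate_ratio(150e3, 300.0, 900.0))  # ~2.6e17

A process that would take geological time at room temperature can therefore run to completion within minutes or hours inside a heated holder, which is what makes in-situ observation practical.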
This also allows for the observation of phenomena that occur at elevated temperatures and disappear or are not uniformly preserved in ex-situ samples. High temperature TEM introduces various additional challenges which must be addressed in the mechanics of high temperature holders, including but not limited to drift correction, temperature measurement, and the decreased spatial resolution that accompanies more complex holders. Sample drift in the TEM is linearly proportional to the temperature differential between the room and the holder. With temperatures as high as 1500 °C in modern holders, samples may experience significant drift and vertical displacement (bulging), requiring continuous focus or stage adjustments and inducing resolution loss and mechanical drift. Individual labs and manufacturers have developed software, coupled with advanced cooling systems, to correct for thermal drift based on the predicted temperature in the sample chamber. These systems often take from 30 minutes to many hours for sample shifts to stabilize. While significant progress has been made, no universal TEM attachment has been developed to account for drift at elevated temperatures. An additional challenge of many of these specialized holders is knowing the local sample temperature. Many high temperature holders utilize a tungsten filament to locally heat the sample. Ambiguity in the temperature of thermocouple-equipped furnace heaters (tungsten wire) arises from the thermal contact between the furnace and the TEM grid, and is complicated by temperature gradients along the sample caused by the varying thermal conductivity of different samples and grid materials. Different methods for temperature calibration are available for different holders, both commercial and lab-made. Manufacturers such as Gatan use IR pyrometry to measure temperature gradients over the entire sample. A more accurate calibration method is Raman spectroscopy, which measures the local temperature of Si powder on electron transparent windows and quantitatively calibrates the IR pyrometry; these measurements have a stated accuracy of within 5%. Research laboratories have also performed their own calibrations on commercial holders. Researchers at NIST utilized Raman spectroscopy to map the temperature profile of a sample on a TEM grid and achieve very precise measurements. Similarly, a research group in Germany utilized X-ray diffraction to measure slight shifts in lattice spacing caused by changes in temperature, and thereby back-calculate the exact temperature in the holder. This process required careful calibration and exact TEM optics. Other examples include the use of EELS to measure local temperature from changes in gas density, and from resistivity changes. Optimal resolution in a TEM is achieved when spherical aberrations are corrected with the objective lens. However, due to the geometry of most TEMs, inserting large in-situ holders requires the user to compromise the objective lens and endure spherical aberrations. There is therefore a trade-off between the width of the pole-piece gap and spatial resolution below 0.1 nm. Research groups at various institutions have tried to overcome spherical aberrations through the use of monochromators to achieve 0.05 nm resolution with a 5 mm pole piece gap. In-situ mechanical TEM High resolution of TEM allows for monitoring the sample in question on a length scale ranging from hundreds of nanometres to several angstroms. 
This allows for the visualization of both elastic and plastic deformation via strain fields, as well as crystallographic defects such as lattice distortions and dislocation motion. By simultaneously observing deformation phenomena and measuring mechanical response in situ, it is possible to connect nano-mechanical testing information to models that describe both the subtlety and complexity of how materials respond to stress and strain. The material properties and data accuracy obtained from such nano-mechanical tests are largely determined by the mechanical straining holder being used. Current straining holders have the ability to perform tensile tests, nano-indentation, compression tests, shear tests and bending tests on materials. Classical mechanical holders One of the pioneers of classical holders was Heinz G.F. Wilsdorf, who conducted a tensile test inside a TEM in 1958. In a typical experiment, electron transparent TEM samples are cut to shape and glued to a deformable grid. Advances in micromanipulators have also enabled the tensile testing of nanowires and thin films. The deformable grid attaches to the classical tensile holder, which stretches the sample using a long rigid shaft attached to a worm gear box actuated by an electric motor located in a housing outside the TEM. Typical straining rates range from 10 nm/s to 10 μm/s. Custom-made holders expanding on simple straining actuation have enabled bending tests using a bending holder and shear tests using a shear sample holder. The typical measured sample properties in these experiments are yield strength, elastic modulus, shear modulus, tensile strength, bending strength, and shear strength. In order to study the temperature-dependent mechanical properties of TEM samples, the holder can be cooled through a cold finger connected to a liquid nitrogen reservoir. For high temperature experiments, the TEM sample can also be heated through a miniaturized furnace or a laser that can typically reach 1000 °C. Nano-indentation holders Nano-indentation holders perform a hardness test on the material in question by pressing a hard tip into a polished flat surface and measuring the applied force and the resulting displacement on the TEM sample through a change in capacitance between a reference and a movable electrostatic plate attached to the tip. The typical measured sample properties are hardness and elastic modulus. Although nano-indentation has been possible since the early 1980s, its investigation using a TEM was first reported in 2001, when an aluminium sample deposited on a silicon wedge was investigated. For nanoindentation experiments, TEM samples are typically shaped as wedges using a tripod polisher, as H-bar windows, or as micro- or nanopillars using a focused ion beam, to create enough space for a tip to be pressed at the desired electron transparent location. The indenter tips are typically flat punch-type, pyramidal, or wedge shaped and elongated in the z-direction. Pyramidal tips offer high precision on the order of 10 nm but suffer from sample slip, while wedge indenters have greater contact area to prevent slipping but require finite element analysis to model the transmitted stress, since the high contact area with the TEM sample makes this almost a compression test. Micro electro-mechanical systems (MEMs) Micro electro-mechanical systems (MEMs) based holders provide a cheap and customizable platform to conduct mechanical tests on samples that were previously difficult to work with, such as micropillars, nanowires, and thin films. 
Passive MEMs are used as simple push-to-pull devices for in-situ mechanical tests. Typically, a nano-indentation holder is used to apply a pushing force at the indentation site. Using a geometry of arms, this pushing force translates to a pulling force on a pair of tensile pads to which the sample is attached. Thus, a compression applied on the outside of the MEMs translates to a tension in the central gap where the TEM sample is located. The resulting force-displacement curve needs to be corrected by performing the same test on an empty MEMs device without the TEM sample, to account for the stiffness of the empty MEMs. The dimensions and stiffness of the MEMs can be modified to perform tensile tests on samples of different sizes with different loads. To smooth the actuation process, active MEMs have been developed with built-in actuators and sensors. These devices work by applying a stress using electrical power and measuring strain using capacitance variations. Electrostatically actuated MEMs have also been developed to accommodate very low applied forces in the 1–100 nN range. Much of current research focuses on developing sample holders that can perform mechanical tests while creating an environmental stimulus such as temperature change, variable strain rates, and different gas environments. In addition, the emergence of high resolution detectors is making it possible to monitor dislocation motion and interactions with other defects, and is pushing the limits of sub-nanometre strain measurements. In-situ mechanical TEM measurements are routinely coupled with other standard TEM measurements such as EELS and XEDS to reach a comprehensive understanding of the sample structure and properties. Aberration corrected TEM Modern research TEMs may include aberration correctors to reduce the amount of distortion in the image. Incident beam monochromators may also be used, which reduce the energy spread of the incident electron beam to less than 0.15 eV. Major aberration corrected TEM manufacturers include JEOL, Hitachi High-Technologies, FEI Company, and NION. Ultrafast and dynamic TEM It is possible to reach temporal resolution far beyond that of the readout rate of electron detectors with the use of pulsed electrons. Pulses can be produced by either modifying the electron source to enable laser-triggered photoemission or by installation of an ultrafast beam blanker. This approach is termed ultrafast transmission electron microscopy when stroboscopic pump-probe illumination is used: an image is formed by the accumulation of many ultrashort electron pulses (typically of hundreds of femtoseconds) with a fixed time delay between the arrival of the electron pulse and the sample excitation. On the other hand, the use of a single electron pulse, or a short sequence of pulses, with a sufficient number of electrons to form an image from each pulse, is called dynamic transmission electron microscopy. Temporal resolution down to hundreds of femtoseconds and spatial resolution comparable to that available with a Schottky field emission source is possible in ultrafast TEM. Using the photon-gating approach, the temporal resolution in ultrafast electron microscopes reaches 30 fs, allowing the imaging of ultrafast atomic and electron dynamics of matter. However, the technique can only image reversible processes that can be reproducibly triggered millions of times. Dynamic TEM can resolve irreversible processes down to tens of nanoseconds and tens of nanometres. 
The technique was pioneered in the early 2000s in laboratories in Germany (Technische Universität Berlin) and in the USA (Caltech and Lawrence Livermore National Laboratory). Ultrafast TEM and dynamic TEM have made possible the real-time investigation of numerous physical and chemical phenomena at the nanoscale. An interesting variant of the ultrafast transmission electron microscopy technique is photon-induced near-field electron microscopy (PINEM). The latter is based on the inelastic coupling between electrons and photons in the presence of a surface or a nanostructure. This method allows one to investigate time-varying nanoscale electromagnetic fields in an electron microscope, as well as to dynamically shape the wave properties of the electron beam. Limitations There are a number of drawbacks to the TEM technique. Many materials require extensive sample preparation to produce a sample thin enough to be electron transparent, which makes TEM analysis a relatively time-consuming process with a low throughput of samples. The structure of the sample may also be changed during the preparation process. Also, the field of view is relatively small, raising the possibility that the region analyzed may not be characteristic of the whole sample. There is potential that the sample may be damaged by the electron beam, particularly in the case of biological materials. Resolution limits The limit of resolution obtainable in a TEM may be described in several ways, and is typically referred to as the information limit of the microscope. One commonly used value is a cut-off value of the contrast transfer function, a function that is usually quoted in the frequency domain to define the reproduction of spatial frequencies of objects in the object plane by the microscope optics. A cut-off frequency, qmax, for the transfer function may be approximated with the following equation, where Cs is the spherical aberration coefficient and λ is the electron wavelength: 1/qmax ≈ 0.66 Cs^(1/4) λ^(3/4). For a 200 kV microscope, with partly corrected spherical aberrations ("to the third order") and a Cs value of 1 μm, a theoretical cut-off value might be 1/qmax = 42 pm. The same microscope without a corrector would have Cs = 0.5 mm and thus a 200 pm cut-off. The spherical aberrations are suppressed to the third or fifth order in the "aberration-corrected" microscopes. Their resolution is however limited by electron source geometry and brightness, and by chromatic aberrations in the objective lens system. The frequency domain representation of the contrast transfer function may often have an oscillatory nature, which can be tuned by adjusting the focal value of the objective lens. This oscillatory nature implies that some spatial frequencies are faithfully imaged by the microscope, whilst others are suppressed. By combining multiple images with different spatial frequencies, techniques such as focal series reconstruction can be used to improve the resolution of the TEM in a limited manner. The contrast transfer function can, to some extent, be experimentally approximated through techniques such as Fourier transforming images of amorphous material, such as amorphous carbon. More recently, advances in aberration corrector design have been able to reduce spherical aberrations and to achieve resolution below 0.5 ångströms (50 pm) at magnifications above 50 million times. 
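The figures quoted above can be reproduced with a short calculation. The sketch below is illustrative only: it assumes the Scherzer-type expression for 1/qmax given in the Resolution limits discussion and standard values of the physical constants, and computes the relativistically corrected electron wavelength at 200 kV together with the cut-off spacing for the two Cs values mentioned.

    import math

    h = 6.626e-34    # Planck constant, J*s
    m0 = 9.109e-31   # electron rest mass, kg
    e = 1.602e-19    # elementary charge, C
    c = 2.998e8      # speed of light, m/s

    def electron_wavelength(volts):
        """Relativistically corrected de Broglie wavelength of a beam electron."""
        return h / math.sqrt(2.0 * m0 * e * volts * (1.0 + e * volts / (2.0 * m0 * c**2)))

    def cutoff_spacing(cs_metres, volts):
        """Smallest resolvable spacing 1/qmax for a spherical-aberration-limited
        objective lens, using the expression quoted in the text above."""
        lam = electron_wavelength(volts)
        return 0.66 * cs_metres**0.25 * lam**0.75

    print(cutoff_spacing(1e-6, 200e3))    # ~4.2e-11 m, i.e. about 42 pm
    print(cutoff_spacing(0.5e-3, 200e3))  # ~2.0e-10 m, i.e. about 200 pm

At 200 kV the electron wavelength is about 2.5 pm, so the attainable resolution is limited almost entirely by the lens aberrations rather than by the wavelength itself.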
The ability to determine the position of atoms within materials has made the HRTEM an indispensable tool for nanotechnology research and development in many fields, including heterogeneous catalysis and the development of semiconductor devices for electronics and photonics. See also Electron microscope Cryo-electron microscopy Electron diffraction Electron energy loss spectroscopy (EELS) Energy filtered transmission electron microscopy (EFTEM) High-resolution transmission electron microscopy (HRTEM) Low-voltage electron microscope (LVEM) Precession electron diffraction Scanning confocal electron microscopy References External links The National Center for Electron Microscopy, Berkeley California USA The National Center for Macromolecular Imaging, Houston Texas USA The National Resource for Automated Molecular Microscopy, New York USA Tutorial courses in Transmission Electron Microscopy Cambridge University Teaching and Learning Package on TEM Online course on Transmission Electron Microscopy and Crystalline Imperfections Eric Stach (2008). Transmission electron microscope simulator (Teaching tool). animations and explanations on various types of microscopes including electron microscopes (Université Paris Sud) Electron beam Electron microscopy techniques Scientific techniques Articles containing video clips
Transmission electron microscopy
Chemistry
13,876
10,428,113
https://en.wikipedia.org/wiki/Chlorobutanol
Chlorobutanol (trichloro-2-methyl-2-propanol) is an organic compound with the formula . The compound is an example of a chlorohydrin. The compound is a preservative, sedative, hypnotic and weak local anesthetic similar in nature to chloral hydrate. It has antibacterial and antifungal properties. Chlorobutanol is typically used at a concentration of 0.5% where it lends long term stability to multi-ingredient formulations. However, it retains antimicrobial activity at 0.05% in water. Chlorobutanol has been used in anesthesia and euthanasia of invertebrates and fishes. It is a white, volatile solid with a camphor-like odor. Synthesis Chlorobutanol was first synthesized in 1881 by the German chemist Conrad Willgerodt (1841–1930). Chlorobutanol is formed by the reaction of chloroform and acetone in the presence of potassium or sodium hydroxide. It may be purified by sublimation or recrystallisation. Parthenogenesis Chlorobutanol has proven effective at stimulating parthenogenesis in sea urchin eggs up to the pluteus stage, possibly by increasing irritability to cause stimulation. For the eggs of the fish Oryzias latipes, however, chlorobutanol only acted as an anesthetic. Pharmacology and toxicity It is an anesthetic with effects related to isoflurane and halothane. Chlorobutanol is toxic to the liver, a skin irritant and a severe eye irritant. References External links Chlorobutanol MSDS Hypnotics Sedatives Trichloromethyl compounds Tertiary alcohols GABAA receptor positive allosteric modulators Glycine receptor agonists Halohydrins Substances discovered in the 19th century
Chlorobutanol
Biology
422
5,267,316
https://en.wikipedia.org/wiki/Fusiform
Fusiform (from Latin fusus ‘spindle’) means having a spindle-like shape that is wide in the middle and tapers at both ends. It is similar to the lemon-shape, but often implies a focal broadening of a structure that continues from one or both ends, such as an aneurysm on a blood vessel. Examples Fusiform, a body shape common to many aquatic animals, characterized by being tapered at both the head and the tail Fusiform, a classification of aneurysm Fusiform bacteria (spindled rods, that is, fusiform bacilli), such as the Fusobacteriota Fusiform cell (biology) Fusiform face area, a part of the human visual system which seems to specialize in facial recognition Fusiform gyrus, part of the temporal lobe of the brain Fusiform muscle, where the fibres run parallel along the length of the muscle Fusiform neuron, a spindle-shaped neuron References Geometric shapes See also Streamliner, a fusiform hydro-/aero-dynamic vehicle. Historically, the adjective "streamlined" was more commonly used among designers for the word "fusiform".
Fusiform
Mathematics
256
56,607,362
https://en.wikipedia.org/wiki/Windy%20%28weather%20service%29
Windy is a Czech company providing interactive weather forecasting services worldwide. The portal was founded by Ivo Lukačovič in November 2014, initially focusing on wind animation. It has since expanded to include various essential meteorological parameters, such as temperature, pressure, relative humidity, cloud base, and additional panels featuring more advanced data. The wind animation component is based on the open-source project earth by Cameron Beccario. As of May 2018, it had a team of six employees, and 300,000 users visited the site per day. List of weather models Global models GFS (Resolution 22 km) ECMWF (Resolution 9 km) ICON by German DWD (Resolution 6 km for Europe, 13 km for global) Meteoblue AI Global Model Local models NEMS by Swiss company Meteoblue (Resolution 4 km for Europe) NAM Conus by NOAA (Resolution 5 km for continental US) NAM Alaska (Resolution 6 km for Alaska) NAM Hawaii (Resolution 4 km for Hawaii) HRDPS by ECCC (Resolution 2.5 km for Canada) AROME by Météo-France (Resolution 1.25 km for France, Germany and the Alps) References External links Meteorological companies Weather prediction
Windy (weather service)
Physics
247
5,012,212
https://en.wikipedia.org/wiki/19%20Canum%20Venaticorum
19 Canum Venaticorum is a binary star system in the northern constellation of Canes Venatici, located approximately 238 light years from Sun based on its parallax. It is dimly visible to the naked eye as a white-hued star with an apparent visual magnitude of 5.77. The pair orbit each other with a period of 219.2 years and an eccentricity of 0.686. The system is moving closer to the Earth with a heliocentric radial velocity of −21 km/s. The magnitude +5.87 primary, component A, is an A-type main-sequence star with a stellar classification of A7 V. It is 366 million years old with twice the mass of the Sun and 2.5 times the Sun's radius. The star is radiating 25.5 times the Sun's luminosity from its photosphere at an effective temperature of 8,048 K. It has a high rate of spin, showing a projected rotational velocity of 110 km/s. As of 2012, its companion, designated component B, is a magnitude 9.48 star located from the primary along a position angle of 58°. References A-type main-sequence stars Binary stars Canes Venatici Durchmusterung objects Canum Venaticorum, 19 115271 064692 5004
19 Canum Venaticorum
Astronomy
281
55,034,900
https://en.wikipedia.org/wiki/Atlas%20personality
The Atlas personality, named after the story of the Titan Atlas from Greek mythology who is forced to hold up the sky, is someone obliged to take on adult responsibilities prematurely. They are as a result liable to develop a pattern of compulsive caregiving in later life. Origins and nature The Atlas personality is typically found in a person who felt obliged during childhood to take on responsibilities such as providing psychological support to parents, often in a chaotic family situation. This experience often involves parentification. The result in adult life can be a personality devoid of fun, and feeling the weight of the world on their shoulders. Depression and anxiety, as well as oversensitivity to others and an inability to assert their own needs, are further identifiable characteristics. In addition, there may also be an underlying rage against the parents for not having provided love, and for exploiting the child for their own needs.<ref>Alice Miller, 'The Drama of Being a Child (London 1990) p. 38</ref> While Atlas personalities may appear to function adequately as adults, they may be pervaded with a sense of emptiness and be lacking in vitality. Treatment Persons suffering from Atlas personality may benefit from psychotherapy. In such cases, a therapist talks with the patient about the patient's childhood and helps identify behavioral patterns that may have arisen from being given too many responsibilities too early in life. See also References Further reading L. J. Cozolino, The Making of a Therapist'' (New York 2004) Behavioural syndromes associated with physiological disturbances and physical factors Interpersonal relationships Narcissism Borderline personality disorder Atlas (mythology)
Atlas personality
Biology
334
60,452,819
https://en.wikipedia.org/wiki/Tropical%20night
A tropical night is a term used in many European countries to describe days when the temperature does not fall below 20 °C (68 °F) during the nighttime. This definition is in use in countries including Austria, Croatia, Denmark, Finland, Germany, Greece, Hungary, Italy, the Republic of Ireland, Latvia, Lithuania, the Netherlands, Norway, Poland, Portugal, Romania, Serbia, Spain, Sweden and the United Kingdom. In the United States, by contrast, the term sultry nights is used when the temperature does not fall below in the Gulf and Atlantic states. Tropical nights are common during heat waves and occur mostly over seas, coasts, and lakes. Heat gets stored in the water during periods of sunny and warm weather during the day, which is then emitted during the night and keeps the night temperatures up. Greece Southern Greece records very high spring, summer, autumn and occasionally winter minimum temperatures due to its geographical proximity to the Middle East, Asia Minor and the Sahara, but also due to foehn winds, especially in Crete and Rhodes. The World Meteorological Organization station in Kastellorizo registers on average 156 tropical nights per year, while Crete routinely records tropical nights even in January. Downtown Athens records 107 tropical nights per year for the period 1991–2020. In 2018, Lindos registered a record high of 178 tropical nights. On average, Lindos records 4 days each year with minimum temperatures over 30.0 °C. On the night of 25–26 June 2007, the temperature did not drop below 38.0 °C at the Palaiochora World Meteorological Organization station. On the night of 11 January 2021, the World Meteorological Organization station in Falasarna recorded a temperature of 28.3 °C due to strong foehn winds, while the minimum temperature for that day was 22.6 °C, marking both the highest January night-time temperature and the highest January minimum temperature ever recorded in Greece. On 27 June 2007, Monemvasia registered a minimum temperature of 35.9 °C, which is the highest minimum temperature ever recorded in mainland Greece. Monemvasia records 133 tropical nights per year, which is unique for a location in mainland Europe. During July 2024, minimum temperatures remained over 30 °C (86 °F) for 12 consecutive days in metropolitan Athens, breaking all known records for any area in the country. On 4 July 1998, Kythira recorded a minimum temperature of 37.0 °C. United Kingdom The Met Office began tracking 'tropical nights' in 2018. This criterion is infrequently met, with the 30 years between 1961 and 1990 seeing 44 tropical nights, most of them associated with the hot summers of 1976 and 1983. From 1991 to 11 August 2020, 84 such nights were recorded, with 21 of them occurring since 2008. Five nights that stayed above 20 °C were recorded in 2018, and four in 2019. By 11 August 2020, four tropical nights had been recorded for that year, one in June and three in August. During the July 2022 heatwave, a tropical night recorded overnight from 18–19 July was reported to have been the warmest on record, with temperatures in many parts of the country not falling below 25 °C. The hottest night on record was set in the early hours of 19 July 2022 at Shirburn Model Farm, Oxfordshire, not falling below 26.8 °C, surpassing the previous national record of 23.9 °C. This was confirmed on 23 August 2022. Croatia In Croatia, this occurrence is usually termed a 'warm night', but also a 'tropical night'. 
A 'very warm night' occurs when the temperature stays above 25 °C overnight. Tropical nights happen regularly at the seaside in summer, and less frequently inland. In the 1961–1990 period, there was an average of 10–20 tropical nights a month during the summer at the seaside, but less than one per year in most of continental Croatia. However, they have become more frequent in Zagreb since 2000. During 1990–2014, Zagreb recorded an increasing trend of 19.5 additional tropical nights per decade. In August 2018, the Zagreb–Grič Observatory registered 24 tropical nights, beating the previous record from 2003. Ireland In Ireland, two tropical nights were observed at the Valentia Observatory in County Kerry during a heatwave in July 2021. This was the first time ever that two tropical nights were recorded in a row in Ireland. Spain In Spain, it is termed noche tropical (tropical night). It occurs mostly on the Canary Islands, the Mediterranean coast, Ibiza and Menorca. It is also common in inland parts of Andalusia. In central parts of the country they are less prevalent, but are still expected to occur on some summer nights, especially during heat waves. In the interior of the north it is very rare. When the temperature does not fall below 25 °C, it is a noche tórrida or noche ecuatorial (torrid night or equatorial night). This term has gained popularity, as nighttime temperatures have been increasingly higher in recent years during the summer. The Canary Islands are more affected by torrid nights than any other part of the country. It is common, but not consistently so, on the Mediterranean coast and the Balearic Islands of Ibiza and Menorca. In inland Spain they can occur during intense heat waves. Recently, the name noche infernal (hellish night) was introduced for nights when the temperature does not fall below 30 °C, something that for now is not very common, but has occurred, especially in the Canary Islands. In mainland Spain, some cities, such as Málaga and Almería, have recorded hellish nights. The Canary Islands record the highest number of tropical nights per year, with the island of El Hierro having 154 tropical nights per year and Santa Cruz de Tenerife having 130. In mainland Spain, the cities of Cartagena, Cádiz, Almería, Valencia, San Javier, Málaga and Alicante have the highest numbers, with 101, 92, 89, 79, 75, 72 and 71 respectively. In 2023, El Hierro registered a record 208 tropical nights, the highest ever recorded in the country. Although some cities on the Mediterranean coast in the east and southeast of Spain have fewer tropical nights, on average, than some cities in the south of the country, the Mediterranean coast in the east and southeast has high levels of air humidity during the summer, producing high dew points. These high levels of air humidity can make tropical nights much more uncomfortable compared to southern cities that have low levels of air humidity, as high humidity makes it difficult or even impossible for sweat to evaporate, causing the heat index to increase. In addition to the increase in the heat index, there is also a feeling of stuffy and sticky weather, which can contribute to a general feeling of discomfort, making the weather more oppressive. The highest minimum temperature ever recorded in Spain was registered in Guía de Isora on 12 August 2023. In peninsular Spain it was registered in Almería on 31 July 2001, which is also the highest ever recorded on the Iberian Peninsula. See also Heat wave Urban heat island References Anomalous weather Meteorological quantities
Tropical night
Physics,Mathematics
1,449
28,377,258
https://en.wikipedia.org/wiki/Flood%20opening
A flood opening or flood vent (also styled floodvent) is an orifice in an enclosed structure intended to allow the free passage of water between the interior and exterior. United States In the United States, flood openings are used to provide for the automatic equalization of hydrostatic pressure on either side of a wall. Building codes usually require the installation of flood openings in the walls of structures located in A-type flood zones recognized by the Federal Emergency Management Agency. Various agencies in the United States define necessary characteristics for flood openings. The NFIP Regulations and Building Codes require that any residential building constructed in Flood Zone Type A have the lowest floor, including basements, elevated to or above the Base Flood Elevation (BFE). Enclosed areas are permitted under elevated buildings provided that they meet certain use restrictions and construction requirements such as the installation of flood vents to allow for the automatic entry and exit of flood waters. The wet floodproofing technique is required for residential buildings. Engineered vs. non-engineered openings Most regulatory authorities in the United States that offer requirements for flood openings define two major classes of opening: engineered, and non-engineered. The requirements for non-engineered openings are typically stricter, defining necessary characteristics for aspects ranging from overall size of each opening, to allowable screening or other coverage options, to number and placement of openings. Engineered openings ignore many of the requirements, depending on the particular regulatory authority. To qualify as an engineered opening, testing and/or certification by a qualified agency (varying from regulator to regulator, and indicated below where appropriate) is required. American Society of Civil Engineers definition The American Society of Civil Engineers (ASCE) requirements apply to any structure that is not dry flood-proofed and which is in the mapped flood zone. It calls for openings in load-bearing foundation walls located below the mapped flood elevation. Where non-engineered openings are used, each opening must be at least three inches in diameter, and have no screen or other cover that interferes with the transition of water between interior and exterior. The total net open area of all flood openings in the structure must be equal to or greater than one square inch, per square foot of footprint of the enclosed area—though no fewer than two openings, total, which must be located on different walls. Openings must be placed such that the bottom of each opening is no more than one foot above the adjacent ground level. In lieu of these requirements, engineered openings must conform to a performance standard: during a flood with a rate of rise/fall of five feet per hour, the difference between interior and exterior flood water levels in an enclosure using the engineered openings must not be greater than one foot. International Building Code (IBC) The International Building Code refers to the American Society of Civil Engineer requirements for both non-engineered and engineered flood openings. International Residential Code (IRC) The International Residential Code requirements vary mildly from revision to revision, but require that entry and exit of floodwater be provided for in accordance with the requirements of the ASCE. These requirements apply for both non-engineered and engineered flood openings. 
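As an illustration of the ASCE/IRC sizing rule described above, the following minimal sketch estimates how many non-engineered openings a given enclosure would need. The footprint and per-vent open area are hypothetical values, and the sketch is not a substitute for the full code requirements or for an engineered-opening certification.

    import math

    def required_openings(footprint_sqft, net_open_area_per_vent_sqin):
        """Number of non-engineered openings needed under the rule of thumb of at
        least 1 square inch of net open area per square foot of enclosed footprint,
        with a minimum of two openings (which must sit on different walls)."""
        required_area_sqin = footprint_sqft * 1.0
        count = math.ceil(required_area_sqin / net_open_area_per_vent_sqin)
        return max(count, 2)

    # Hypothetical 1,200 sq ft enclosure with vents each providing 51 sq in of
    # net open area: 24 openings are needed.  Size (at least a 3 in dimension) and
    # placement (bottom within 1 ft of grade) requirements apply separately.
    print(required_openings(1200, 51))  # -> 24

Engineered openings bypass this per-area rule entirely and are instead judged against the performance standard of keeping the interior and exterior water levels within one foot of each other.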
FEMA While the Federal Emergency Management Agency does not have de jure authority over the building code, it maintains crucial influence over flood opening standards through its administration of the National Flood Insurance Program (NFIP). By controlling the standards for nearly all flood insurance in the United States, the NFIP exerts exceptional de facto authority over many aspects of floodplain construction. The FEMA (and, thus, NFIP) requirements for non-engineered openings are similar to requirements from the American Society of Civil Engineers. Unlike the ASCE, FEMA requires the placement of openings such that the bottom of each opening is no more than one foot above the higher of the adjacent ground level, or the interior foundation slab height. For engineered openings, FEMA offers two subclassifications: individual certification openings, and openings with International Code Council Evaluation Service (ICC-ES) Evaluation Reports. Individual certification openings are offered for use when, "[f]or architectural or other reasons, building designers or owners may prefer to use unique or individually designed openings or devices". In such cases, an architect or engineer may provide certification including the professional's signature and applied seal. The certification must include a "statement certifying that the openings are designed to automatically equalize hydrostatic flood loads on exterior walls by allowing the automatic entry and exit of floodwaters in accordance with the...design requirements"; "[d]escription of the range of flood characteristics tested or computed for which the certification is valid, such as rates of rise and fall of floodwaters"; and "[d]escription of the installation requirements and limitations that, if not followed, will void the certification". The nature of the "live seal" requirement means that each structure containing an individual certification opening must have a separate certification, even if the opening is identical to that used in another structure. The alternative subclassification is an opening that carries certification through the ICC-ES. According to FEMA, "Evaluation Reports are issued only after the ICC-ES performs technical evaluations of documentation submitted by a manufacturer, including technical design reports, certifications, and testing that demonstrate code compliance and performance." The report must include a statement concerning the purpose of the opening tested; a description of the characteristics tested; and a description of installation requirements. FEMA allows a copy of the report to be used as a blanket certification of any project including an ICC-ES certified opening, in contrast to the requirements of an individual certification opening. AC364-1006-R1 AC364-1006-R1 documents the ICC-ES's testing standards for flood openings, including specifications for a dual-chambered testing tank. While the requirements for the opening itself are based on ASCE 24, the substance of the test adds new layers of performance expectation. Under these requirements, the opening must activate before water level is one foot above the bottom of the opening, under conditions of 50 and 300 gallons per minute flooding, at a minimum of five foot per hour rate of rise. Additionally, water levels on the testing tank's "interior" and "exterior" portions must at no point differ more than one foot. To gauge performance against waterborne debris, leaves and grass clippings are added to both chambers of the tank. 
See also Culvert References Building engineering Construction Architectural elements Safety codes Legal codes Hydrology Flood control
Flood opening
Chemistry,Technology,Engineering,Environmental_science
1,318
3,031,660
https://en.wikipedia.org/wiki/INT%2013H
INT 13h is shorthand for BIOS interrupt call 13hex, the 20th interrupt vector in an x86-based (IBM PC-descended) computer system. The BIOS typically sets up a real mode interrupt handler at this vector that provides sector-based hard disk and floppy disk read and write services using cylinder-head-sector (CHS) addressing. Modern PC BIOSes also include INT 13h extension functions, originated by IBM and Microsoft in 1992, that provide those same disk access services using 64-bit LBA addressing; with minor additions, these were quasi-standardized by Phoenix Technologies and others as the EDD (Enhanced Disk Drive) BIOS extensions. INT is an x86 instruction that triggers a software interrupt, and 13hex is the interrupt number (as a hexadecimal value) being called. Modern computers come with both BIOS INT 13h and UEFI functionality that provides the same services and more, with the exception of UEFI Class 3 that completely removes CSM thus lacks INT 13h and other interrupts. Typically, UEFI drivers use LBA-addressing instead of CHS-addressing. Overview Under real mode operating systems, such as DOS, calling INT 13h would jump into the computer's ROM-BIOS code for low-level disk services, which would carry out physical sector-based disk read or write operations for the program. In DOS, it serves as the low-level interface for the built-in block device drivers for hard disks and floppy disks. This allows INT 25h and INT 26h to provide absolute disk read/write functions for logical sectors to the FAT file system driver in the DOS kernel, which handles file-related requests through DOS API (INT 21h) functions. Under protected mode operating systems, such as Microsoft Windows NT derivatives (e.g. NT4, 2000, XP, and Server 2003) and Linux with dosemu, the OS intercepts the call and passes it to the operating system's native disk I/O mechanism. Windows 9x and Windows for Workgroups 3.11 also bypass BIOS routines when using 32-bit Disk Access. Besides performing low-level disk access, INT 13h calls and related BIOS data structures also provide information about the types and capacities of disks (or other DASD devices) attached to the system; when a protected-mode OS boots, it may use that information from the BIOS to enumerate disk hardware so that it (the OS) can load and configure appropriate disk I/O drivers. The original BIOS real-mode INT 13h interface supports drives of sizes up to about 8 GB using what is commonly referred to as physical CHS addressing. This limit originates from the hardware interface of the IBM PC/XT disk hardware. The BIOS used the cylinder-head-sector (CHS) address given in the INT 13h call, and transferred it directly to the hardware interface. A lesser limit, about 504 MB, was imposed by the combination of CHS addressing limits used by the BIOS and those used by ATA hard disks, which are dissimilar. When the CHS addressing limits of both the BIOS and ATA are combined (i.e. when they are applied simultaneously), the number of 512-byte sectors that can be addressed represent a total of about 504 MB. The 504 MB limit was overcome using CHS translation, a technique by which the BIOS would simulate a fictitious CHS geometry at the INT 13h interface, while communicating with the ATA drive using its native logical CHS geometry. (By the time the 504 MB barrier was being approached, ATA disks had long before ceased to present their real physical geometry parameters at the external ATA interface.) 
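The interaction of the two CHS limits described above can be made concrete with a little arithmetic. The sketch below is illustrative only; the ATA-side figures (65,536 cylinders, 16 heads, 255 sectors) are the commonly cited ATA CHS field limits and are an assumption here, since the text above does not spell them out.

    def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
        """Bytes addressable by a CHS geometry (sectors are numbered from 1)."""
        return cylinders * heads * sectors * bytes_per_sector

    bios_limit = chs_capacity(1024, 256, 63)   # INT 13h register format alone
    ata_limit = chs_capacity(65536, 16, 255)   # commonly cited ATA CHS field limits
    combined = chs_capacity(1024, 16, 63)      # both sets of limits applied at once

    for name, size in (("BIOS", bios_limit), ("ATA", ata_limit), ("combined", combined)):
        print(name, round(size / 2**20, 1), "MiB")
    # combined -> 504.0 MiB, the familiar ~504 MB barrier;
    # BIOS alone -> 8064.0 MiB, the ceiling reached with CHS translation.

Because each interface's smallest field caps the corresponding dimension, applying both limits at once leaves only about 504 MiB addressable, while translation lets the BIOS use its full 8064 MiB capacity.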
Translation allows the BIOS, still using CHS addressing, to effectively address ATA disks with sizes up to 8064 MB, the native capacity of the BIOS CHS interface alone. (The ATA interface has a much larger native CHS addressing capacity, so once the "interference" of the CHS limits of BIOS and ATA was resolved by translation, only the smaller limitation of the BIOS was significant.) CHS translation is sometimes referred to as logical CHS addressing, but that is actually a misnomer since by the time of this BIOS development, ATA CHS addresses were already logical, not physical. The 8064 MB limit originates from a combination of the register value based calling convention used in the INT 13h interface and the goal of maintaining backward compatibility—dictating that the format or size of CHS addresses passed to INT 13h could not be changed to add more bits to one of the fields, e.g. the Cylinder-number field. This limit uses 1024 cylinders, 256 heads, 63 sectors, and 512 byte blocks, allowing exactly 7.875 GiB of addressing (1024 × 256 × 63 sectors of 512 bytes each). There were briefly a number of BIOSes that offered incompatible versions of this interface—for example, AWARD AT BIOS and AMI 386sx BIOS have been extended to handle up to 4096 cylinders by placing bits 10 and 11 of the cylinder number into bits 6 and 7 of register DH. All versions of MS-DOS (including MS-DOS 7 and Windows 95) have a bug which prevents booting disk drives with 256 heads (register value 0xFF), so many modern BIOSes provide CHS translation mappings with at most 255 (0xFE) heads, thus reducing the total addressable space to exactly 8032.5 MiB (approx 7.844 GiB). To support addressing of even larger disks, an interface known as INT 13h Extensions was introduced by IBM and Microsoft, then later re-published and slightly extended by Phoenix Technologies as part of BIOS Enhanced Disk Drive Services (EDD). It defines new functions within the INT 13h service, all having function numbers greater than 40h, that use 64-bit logical block addressing (LBA), which allows addressing up to 8 ZiB. (An ATA drive can also support 28-bit or 48-bit LBA, which allows up to 128 GiB or 128 PiB respectively, assuming a 512-byte sector/block size.) This is a "packet" interface, because it uses a pointer to a packet of information rather than the register based calling convention of the original INT 13h interface. This packet is a very simple data structure that contains an interface version, data size, and LBAs. For software backward-compatibility, the extended functions are implemented alongside the original CHS functions, and calls to functions from both sets can be intermixed, even for the same drive, with the caveat that the CHS functions cannot reach past the first 8064 MB of the disk. Some cache drivers flush their buffers when detecting that DOS is bypassed by applications directly issuing INT 13h. A dummy read via INT 13h can be used as one of several methods to force cache flushing for unknown caches (e.g. before rebooting). AMI BIOSes from around 1990–1991 trash word-unaligned buffers. Some DOS and terminate-and-stay-resident programs clobber interrupt enabling and registers, so PC DOS and MS-DOS install their own filters to prevent this. List of services If the second column is empty then the function may be used both for floppy and hard disk. FD: for floppy disk only. HD: for hard disk only. PS/2: for hard disk on PS/2 system only. EXT: part of the Extensions which were written in the 1990s to support hard drives with more than 8 GB. 
AH=00h: Reset Disk System AH=01h: Get Status of Last Drive Operation (drive number in DL; bit 7=0 for floppy drive, bit 7=1 for fixed drive) AH=02h: Read Sectors From Drive Remarks Register CX contains both the cylinder number (10 bits, possible values are 0 to 1023) and the sector number (6 bits, possible values are 1 to 63). Cylinder and sector bits are numbered below:

    CX =       ---CH--- ---CL---
    cylinder : 76543210 98
    sector   :            543210

Examples of translation:

    CX := ( ( cylinder and 255 ) shl 8 ) or ( ( cylinder and 768 ) shr 2 ) or sector;
    cylinder := ( (CX and $FF00) shr 8 ) or ( (CX and $C0) shl 2 );
    sector := CX and 63;

Addressing of Buffer should guarantee that the complete buffer is inside the given segment, i.e. ( BX + size_of_buffer ) <= 10000h. Otherwise the interrupt may fail with some BIOS or hardware versions. Example Assume you want to read 16 sectors (= 2000h bytes) and your buffer starts at memory address 4FF00h. Utilizing memory segmentation, there are different ways to calculate the register values, e.g.:

    ES = segment = 4F00h
    BX = offset  = 0F00h
    sum = memory address = 4FF00h

would be a good choice because 0F00h + 2000h = 2F00h <= 10000h

    ES = segment = 4000h
    BX = offset  = FF00h
    sum = memory address = 4FF00h

would not be a good choice because FF00h + 2000h = 11F00h > 10000h. Function 02h of interrupt 13h can only read the first 16,450,560 sectors of your hard drive; to read sectors beyond the 8 GB limit you should use function 42h of the Extensions. Another alternative is DOS interrupt 25h, which reads sectors within a partition. Code Example

    [ORG 7c00h]        ; code starts at 7c00h
    xor ax, ax         ; make sure ds is set to 0
    mov ds, ax
    cld
    ; start putting in values:
    mov ah, 2h         ; int 13h function 2
    mov al, 63         ; we want to read 63 sectors
    mov ch, 0          ; from cylinder number 0
    mov cl, 2          ; the sector number 2 - second sector (starts from 1, not 0)
    mov dh, 0          ; head number 0
    xor bx, bx
    mov es, bx         ; es should be 0
    mov bx, 7e00h      ; 512 bytes from origin address 7c00h
    int 13h
    jmp 7e00h          ; jump to the next sector
    ; to fill this sector and make it bootable:
    times 510-($-$$) db 0
    dw 0AA55h

After this code section (which the asm file should start with), you may write code and it will be loaded to memory and executed. Notice how we didn't change dl (the drive). That is because when the computer first loads up, dl is set to the number of the drive that was booted, so assuming we want to read from the drive we booted from, there is no need to change dl. AH=03h: Write Sectors To Drive AH=04h: Verify Sectors From Drive AH=05h: Format Track AH=06h: Format Track Set Bad Sector Flags AH=07h: Format Drive Starting at Track AH=08h: Read Drive Parameters Remarks Logical values of function 08h may/should differ from physical CHS values of function 48h. Result register CX contains both cylinders and sector/track values; see the remark for function 02h. AH=09h: Init Drive Pair Characteristics AH=0Ah: Read Long Sectors From Drive The only difference between this function and function 02h (see above) is that function 0Ah reads 516 bytes per sector instead of only 512. The last 4 bytes contain the Error Correction Code (ECC), a checksum of sector data. AH=41h: Check Extensions Present AH=42h: Extended Read Sectors From Drive As already stated with int 13h AH=02h, care must be taken to ensure that the complete buffer is inside the given segment, i.e. ( BX + size_of_buffer ) <= 10000h. AH=43h: Extended Write Sectors to Drive AH=48h: Extended Read Drive Parameters Remark Physical CHS values of function 48h may/should differ from logical values of function 08h. 
INT 13h AH=4Bh: Get Drive Emulation Type See also INT 10H BIOS interrupt call Cylinder-head-sector INT (x86 instruction) DPMI (DOS Protected Mode Interface) Ralf Brown's Interrupt List BIOS Enhanced Disk Drive Specification References External links BIOS Interrupt 13h Extensions Ralf Brown's comprehensive Interrupt List Norton Guide about int 13h, ah = 00h .. 1ah IBM PC compatibles BIOS Interrupts
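The CHS register packing used by function 02h and the capacity limits discussed above can be restated compactly. The following Python sketch is purely illustrative (it is not BIOS code, and the helper names pack_cx and unpack_cx are invented for this example); it mirrors the CX encoding and rechecks the 7.875 GiB and 8032.5 MiB figures.

# Illustrative sketch of the INT 13h AH=02h CX encoding described above:
# CH holds cylinder bits 0-7; CL holds cylinder bits 8-9 in its bits 6-7
# and the sector number (1-63) in its bits 0-5.
def pack_cx(cylinder, sector):
    assert 0 <= cylinder <= 1023 and 1 <= sector <= 63
    return ((cylinder & 0xFF) << 8) | ((cylinder & 0x300) >> 2) | sector

def unpack_cx(cx):
    cylinder = ((cx & 0xFF00) >> 8) | ((cx & 0xC0) << 2)
    sector = cx & 0x3F
    return cylinder, sector

# Round-trip check over the whole classic CHS range.
assert all(unpack_cx(pack_cx(c, s)) == (c, s)
           for c in range(1024) for s in range(1, 64))

# Capacity arithmetic from the article:
# 1024 cylinders x 256 heads x 63 sectors x 512 bytes.
print(1024 * 256 * 63 * 512 / 2**30)   # 7.875 GiB (the "8064 MB" limit)
print(1024 * 255 * 63 * 512 / 2**20)   # 8032.5 MiB with the 255-head workaround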
INT 13H
Technology
2,590
73,360
https://en.wikipedia.org/wiki/Lyapunov%20fractal
In mathematics, Lyapunov fractals (also known as Markus–Lyapunov fractals) are bifurcational fractals derived from an extension of the logistic map in which the degree of the growth of the population, r, periodically switches between two values A and B. A Lyapunov fractal is constructed by mapping the regions of stability and chaotic behaviour (measured using the Lyapunov exponent λ) in the a–b plane for given periodic sequences of a and b. In the images, yellow corresponds to λ < 0 (stability), and blue corresponds to λ > 0 (chaos). Lyapunov fractals were discovered in the late 1980s by the Germano-Chilean physicist Mario Markus from the Max Planck Institute of Molecular Physiology. They were introduced to a large public by a science popularization article on recreational mathematics published in Scientific American in 1991. Properties Lyapunov fractals are generally drawn for values of A and B in the interval [0, 4]. For larger values, the interval [0, 1] is no longer stable, and the sequence is likely to be attracted by infinity, although convergent cycles of finite values continue to exist for some parameters. For all iteration sequences, the diagonal a = b is always the same as for the standard one-parameter logistic function. The sequence is usually started at the value 0.5, which is a critical point of the iterative function. The other (even complex-valued) critical points of the iterative function during one entire round are those that pass through the value 0.5 in the first round. A convergent cycle must attract at least one critical point. Therefore, all convergent cycles can be obtained by just shifting the iteration sequence, and keeping the starting value 0.5. In practice, shifting this sequence leads to changes in the fractal, as some branches get covered by others. For instance, the Lyapunov fractal for the iteration sequence AB (see top figure on the right) is not perfectly symmetric with respect to a and b. Algorithm The algorithm for computing Lyapunov fractals works as follows: Choose a string of As and Bs of any nontrivial length (e.g., AABAB). Construct the sequence S_n formed by successive terms in the string, repeated as many times as necessary. Choose a point (a, b) in the parameter plane. Define the function r_n = a if S_n = A, and r_n = b if S_n = B. Let z_0 = 0.5, and compute the iterates z_{n+1} = r_n z_n (1 − z_n). Compute the Lyapunov exponent: λ = lim_{N→∞} (1/N) Σ_{n=1}^{N} log |dz_{n+1}/dz_n| = lim_{N→∞} (1/N) Σ_{n=1}^{N} log |r_n (1 − 2 z_n)|. In practice, λ is approximated by choosing a suitably large N and dropping the first summand, since r_0 (1 − 2 z_0) = 0 for z_0 = 0.5. Color the point (a, b) according to the value of λ obtained. Repeat steps (3–7) for each point in the image plane. More dimensions Lyapunov fractals can be calculated in more than two dimensions. The sequence string for an n-dimensional fractal has to be built from an alphabet with n characters, e.g. "ABBBCA" for a 3D fractal, which can be visualized either as a 3D object or as an animation showing a "slice" in the C direction for each animation frame, like the example given here. Notes References Markus, Mario, "Die Kunst der Mathematik", Verlag Zweitausendeins, Frankfurt External links EFG's Fractals and Chaos – Lyapunov Exponents Fractals
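As a concrete illustration of the algorithm above, the following Python sketch (illustrative only, not taken from the sources cited) approximates the Lyapunov exponent for a single point (a, b) and a repeating A/B pattern, starting from z_0 = 0.5 and dropping the zero-derivative first term.

import math

def lyapunov_exponent(a, b, pattern="AB", n_iter=2000):
    # Forced logistic map z_{n+1} = r_n * z_n * (1 - z_n), with r_n = a or b
    # according to the repeating A/B pattern (steps 2-6 of the algorithm above).
    z = 0.5                                  # start at the critical point
    total, count = 0.0, 0
    for n in range(n_iter):
        r = a if pattern[n % len(pattern)] == "A" else b
        deriv = abs(r * (1.0 - 2.0 * z))     # |dz_{n+1}/dz_n| evaluated at z_n
        if deriv > 0.0:                      # the n = 0 term is log 0 and is dropped
            total += math.log(deriv)
            count += 1
        z = r * z * (1.0 - z)
    return total / count if count else float("-inf")

# One sample point of the a-b plane for the sequence "AB":
# a negative result indicates stability (yellow), a positive one chaos (blue).
print(lyapunov_exponent(3.4, 3.9, "AB"))

Mapping this function over a grid of (a, b) values and colouring negative results yellow and positive results blue reproduces images of the kind described above.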
Lyapunov fractal
Mathematics
695
63,892,796
https://en.wikipedia.org/wiki/Lutetium%20%28177Lu%29%20chloride
{{DISPLAYTITLE:Lutetium (177Lu) chloride}} Lutetium (177Lu) chloride is a radioactive compound used for the radiolabeling of pharmaceutical molecules, aimed either as an anti-cancer therapy or for scintigraphy (medical imaging). It is an isotopomer of lutetium(III) chloride containing the radioactive isotope 177Lu, which undergoes beta decay with a half-life of 6.64 days. Medical uses Lutetium (177Lu) chloride is a radiopharmaceutical precursor and is not intended for direct use in patients. It is used for the radiolabeling of carrier molecules specifically developed for reaching certain target tissues or organs in the body. The molecules labeled in this way are used as cancer therapeutics or for scintigraphy, a form of medical imaging. 177Lu has been used with both small molecule therapeutic agents (such as 177Lu-DOTATATE) and antibodies for targeted cancer therapy Contraindications Medicines radiolabeled with lutetium (177Lu) chloride must not be used in women unless pregnancy has been ruled out. Adverse effects The most common side effects are anaemia (low red blood cell counts), thrombocytopenia (low blood platelet counts), leucopenia (low white blood cell counts), lymphopenia (low levels of lymphocytes, a particular type of white blood cell), nausea (feeling sick), vomiting and mild and temporary hair loss. Society and culture Legal status Lutetium (177Lu) chloride (Lumark) was approved for use in the European Union in June 2015. Lutetium (177Lu) chloride (EndolucinBeta) was approved for use in the European Union in July 2016. In July 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Illuzyce, a radiopharmaceutical precursor. Illuzyce is not intended for direct use in patients and must be used only for the radiolabelling of carrier medicines that have been specifically developed and authorized for radiolabelling with lutetium (177Lu) chloride. The applicant for this medicinal product is Billev Pharma ApS. Illuzyce was approved for medical use in the European Union in September 2022. In September 2024, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Theralugand, a radiopharmaceutical precursor. Theralugand is not intended for direct use in patients and must be used only for the radiolabelling of carrier medicines that have been specifically developed and authorized for radiolabelling with lutetium (177Lu) chloride. The applicant for this medicinal product is Eckert & Ziegler Radiopharma GmbH. References Radiopharmaceuticals Lutetium compounds Chlorides Orphan drugs
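For a sense of the timescale implied by the 6.64-day half-life quoted above, the decay arithmetic is straightforward; the following Python lines are an illustrative sketch only, not a dosimetry or shelf-life calculation for any actual product.

# Fraction of 177Lu activity remaining after t days (half-life about 6.64 days).
# Illustrative arithmetic only.
HALF_LIFE_DAYS = 6.64

def remaining_fraction(t_days):
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

print(remaining_fraction(6.64))   # 0.5 after one half-life
print(remaining_fraction(14.0))   # roughly 0.23 after two weeks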
Lutetium (177Lu) chloride
Chemistry
619
21,543,926
https://en.wikipedia.org/wiki/Martin%20Fleischmann
Martin Fleischmann FRS (29 March 1927 – 3 August 2012) was a British chemist who worked in electrochemistry. The premature announcement of his cold fusion research with Stanley Pons, regarding excess heat in heavy water, caused a media sensation and elicited skepticism and criticism from many in the scientific community. Personal life Fleischmann was born in Karlovy Vary, Czechoslovakia, in 1927. His father was a wealthy lawyer and his mother the daughter of a high-ranking Austrian civil officer. Since his father was of Jewish heritage, Fleischmann's family moved to the Netherlands, and then to England in 1938, to avoid Nazi persecution. His father died of the complications of injuries received in a Nazi prison, after which Fleischmann lived for a period with his mother in a leased cottage in Rustington, Sussex. His early education was obtained at Worthing High School for Boys. After serving in the Czech Airforce Training Unit during the war, he moved to London to study for undergraduate and postgraduate degrees in chemistry at Imperial College London. His PhD was awarded in 1951, under the supervision of Professor Herrington, for his thesis on the diffusion of electrogenerated hydrogen through palladium foils. He met Sheila, his future wife, as a student and remained married to her for 62 years. Career Electrochemistry (1950s to 1983) Fleischmann's professional career was focused almost entirely on fundamental electrochemistry. Fleischmann went on to teach at King's College, Durham University, which in 1963 became the newly established University of Newcastle upon Tyne. In 1967, Fleischmann became Professor of Electrochemistry at the University of Southampton, occupying the Faraday Chair of Chemistry. From 1970 to 1972, he was president of the International Society of Electrochemists. In 1973, together with Patrick J. Hendra and A. James McQuillan, he played an important role in the discovery of Surface Enhanced Raman Scattering effect (SERS), for which the University of Southampton was awarded a National Chemical Landmark plaque by the Royal Society of Chemistry in 2013, and he developed the ultramicroelectrode in the 1980s. In 1979, he was awarded the medal for electrochemistry and thermodynamics by the Royal Society of London. In 1982 he retired from the University of Southampton. In 1985 he received the Olin Palladium Award from the Electrochemical Society, and in 1986 was elected to the Fellowship of the Royal Society. He retired from teaching in 1983 and was given an honorary professorship at Southampton University. Fellowships, prizes and awards Secretary/Treasurer of the International Society of Electrochemistry (1964–1967) President of the International Society of Electrochemistry (1973–1974) Electrochemistry and Thermodynamics Medal of the Royal Society of Chemistry (1979) Fellowship of the Royal Society (1985) Olin Palladium Medal of the Electrochemical Society (1986) Cold fusion (1983 to 1992) Fleischmann confided to Stanley Pons that he might have found what he believed to be a way to create nuclear fusion at room temperatures. From 1983 to 1989, he and Pons spent $100,000 in self-funded experiments at the University of Utah. Fleischmann wanted to publish it first in an obscure journal, and had already spoken with a team that was doing similar work in a different university for a joint publication. 
The details have not surfaced, but it seems that the University of Utah wanted to establish priority over the discovery and its patents by making a public announcement before the publication. In an interview with 60 Minutes on 19 April 2009, Fleischmann said that the public announcement was the university's idea, and that he regretted doing it. This decision, perceived as short-circuiting the way science is usually communicated to other scientists, later caused heavy criticism against Fleischmann and Pons. On 23 March 1989 the work was announced at a press conference as "a sustained nuclear fusion reaction," which was quickly labelled by the press as cold fusion – a result previously thought to be unattainable. On 26 March Fleischmann warned on the Wall Street Journal Report not to try replications until a published paper was available two weeks later in Journal of Electroanalytical Chemistry, but that did not stop hundreds of scientists who had already started work at their laboratories the moment they heard the news on 23 March, and more often than not they failed to reproduce the effects. Those who failed to reproduce the claim attacked the pair for fraudulent, sloppy, and unethical work; incomplete, unreproducible, and inaccurate results; and erroneous interpretations. When the paper was finally published, both electrochemists and physicists called it "sloppy" and "uninformative", and it was said that, had Fleischmann and Pons waited for the publication of their paper, most of the trouble would have been avoided because scientists would not have gone so far in trying to test their work. Fleischmann and Pons sued an Italian journalist who had published very harsh criticisms of them, but the judge rejected the case saying that criticisms were appropriate given the scientists' behaviour, the lack of evidence since the first announcement, and the lack of interest shown by the scientific community, and that they were an expression of the journalist's "right of reporting". Retirement (from 1992) In 1992, Fleischmann moved to France with Pons to continue their work at the IMRA laboratory (part of Technova Corporation, a subsidiary of Toyota), but in 1995 he retired and returned to England. He co-authored further papers with researchers from the US Navy and Italian national laboratories (INFN and ENEA), on the subject of cold fusion. In March 2006, "Solar Energy Limited" division "D2Fusion Inc" announced in a press release that Fleischmann, then 79, would be acting as their senior scientific advisor. Death Fleischmann died at home in Tisbury, Wiltshire on 3 August 2012, of natural causes. He had suffered from Parkinson's disease, diabetes and heart disease. He was survived by his son and two daughters. Legacy While holding the Faraday Chair of Electrochemistry he and Graham Hills established in the late 1960s the Electrochemistry Group of the University of Southampton. Fleischmann produced over 272 scientific papers and book chapters on the field of electrochemistry. He contributed to the fundamental theory of: Potentiostat design Microelectrodes Electrochemical nucleation Surface-enhanced Raman spectroscopy In-situ X-ray techniques Organic electrochemistry Electrochemical engineering Biological electrodes Corrosion The Martin Fleischmann Memorial Project was started in 2012 in his honour to gather together research from around the world connected to LENR (low-energy nuclear reactions). 
Peer-reviewed papers on "Cold Fusion" References Further reading by David Voss External links Book on Fleischmann's scientific contributions as electrochemist . Interview: Fusion in a cold climate, 2009, New Scientist The Believers movie official website 1927 births 2012 deaths Cold fusion Academic scandals Academics of the University of Southampton British chemists Electrochemists British people of Czech-Jewish descent Alumni of Imperial College London People educated at Worthing High School University of Utah staff Fellows of the Royal Society Czechoslovak refugees Deaths from Parkinson's disease in England
Martin Fleischmann
Physics,Chemistry
1,510
298,170
https://en.wikipedia.org/wiki/Vastu%20shastra
Originating in ancient India, Vastu Shastra (, – literally "science of architecture") is a traditional Hindu system of architecture based on ancient texts that describe principles of design, layout, measurements, ground preparation, space arrangement, and spatial geometry. The designs aim to integrate architecture with nature, the relative functions of various parts of the structure, and ancient beliefs utilising geometric patterns (yantra), symmetry, and directional alignments. Vastu Shastra are the textual part of Vastu Vidya – the broader knowledge about architecture and design theories from ancient India. Vastu Vidya is a collection of ideas and concepts, with or without the support of layout diagrams, that are not rigid. Rather, these ideas and concepts are models for the organisation of space and form within a building or collection of buildings, based on their functions in relation to each other, their usage and the overall fabric of the Vastu. Ancient Vastu Shastra principles include those for the design of Mandir (Hindu temples) and the principles for the design and layout of houses, towns, cities, gardens, roads, water works, shops, and other public areas. The Pandit or Architects of Vastu Shastra are Sthapati, Sūtragrāhin(Sutradhar), Vardhaki, and Takṣhaka. In contemporary India, states Chakrabarti, consultants that include "quacks, priests and astrologers" fueled by greed are marketing pseudoscience and superstition in the name of Vastu-sastras. They have little knowledge of what the historic Vastu-sastra texts actually teach, and they frame it in terms of a "religious tradition", rather than ground it in any "architectural theory" therein. Terminology The Sanskrit word vāstu means a dwelling or house with a corresponding plot of land. The vrddhi, vāstu, takes the meaning of "the site or foundation of a house, site, ground, building or dwelling-place, habitation, homestead, house". The underlying root is vas "to dwell, live, stay, reside". The term shastra may loosely be translated as "doctrine, teaching". Vāstu-Śastras (literally, science of dwelling) are ancient Sanskrit manuals of architecture. These contain Vastu-Vidya (literally, knowledge of dwelling). History Vastu, crafts and architecture are traditionally attributed to the divine Vishwakarma in the Hindu pantheon. Theories tracing links of the principles of composition in Vastu Shastra and the Indus Valley civilization have been made, but scholar Kapila Vatsyayan considers this speculation since the Indus Valley script remains undeciphered. According to Chakrabarti, Vastu Vidya is as old as the Vedic period and linked to the ritual architecture. According to Michael W. Meister, the Atharvaveda contains verses with mystic cosmogony which provide a paradigm for cosmic planning, but they did not represent architecture nor a developed practice. The Arthashastra dated to 2nd century BCE and 3rd century CE, dedicates chapters to domestic architecture, forts and town planning. Vastu sastras are stated by some to have roots in pre-1st-century CE literature, but these views suffer from being a matter of interpretation. For example, the mathematical rules and steps for constructing Vedic yajna square for the sacrificial fire are in the Sulba-sutras dated to 4th-century BCE. However, these are ritual artifacts and they are not buildings or temples or broader objects of a lasting architecture. 
Varahamihira's Brihat Samhita dated to about the sixth century CE is among the earliest known Indian texts with dedicated chapters with principles of architecture. For example, Chapter 53 of the Brihat Samhita is titled "On architecture", and there and elsewhere it discusses elements of vastu sastra such as "planning cities and buildings" and "house structures, orientation, storeys, building balconies" along with other topics. According to Michael Meister, a scholar of Indian architecture, we must acknowledge that Varahamihira does mention his own sources on vastu as older texts and sages. However, these may be mythology and reflect the Indian tradition to credit mythical sages and deities. Description There exist many Vāstu-Śastras on the art of building houses, temples, towns and cities. Among early known example is the Arthashastra dated to 2nd century BCE and 3rd century CE, with chapters dedicated to domestic architecture, forts and town planning. By 6th century AD, Sanskrit texts for constructing palatial temples were in circulation in India. Vāstu-Śastras include chapters on home construction, town planning, and how efficient villages, towns and kingdoms integrated temples, water bodies and gardens within them to achieve harmony with nature. While it is unclear, states Barnett, as to whether these temple and town planning texts were theoretical studies and if or when they were properly implemented in practice, these texts suggest that town planning and Hindu temples were conceived as ideals of art and integral part of Hindu social and spiritual life. Six of the most studied, complete and referred to Indian texts on Vastu Vidya that have survived into the modern age, states Tillotson, are – the Mayamata, the Manasara, the Samarangana Sutradhara, the Rajavallabha, the Vishvakarmaprakasha and the Aparajitaprccha. Numerous other important texts contain sections or chapters on aspects of architecture and design. The Silpa Prakasa of Odisha, authored by Ramachandra Bhattaraka Kaulachara sometime in ninth or tenth century CE, is another Vāstu Śastra. Silpa Prakasa describes the geometric principles in every aspect of the temple and symbolism such as 16 emotions of human beings carved as 16 types of female figures. These styles were perfected in Hindu temples prevalent in the eastern states of India. Other ancient texts found expand these architectural principles, suggesting that different parts of India developed, invented and added their own interpretations. For example, in Saurastra tradition of temple building found in western states of India, the feminine form, expressions and emotions are depicted in 32 types of Nataka-stri compared to 16 types described in Silpa Prakasa. Silpa Prakasa provides brief introduction to 12 types of Hindu temples. Other texts, such as Pancaratra Prasada Prasadhana compiled by Daniel Smith and Silpa Ratnakara compiled by Narmada Sankara provide a more extensive list of Hindu temple types. Sanskrit texts for temple construction discovered in Rajasthan, in northwestern region of India, include Sutradhara Mandana's Prasadamandana (literally, planning and building a temple) with chapters on town building. Manasara shilpa and Mayamata, texts of South Indian origin, estimated to be in circulation by 5th to 7th century AD, is a guidebook on South Indian Vastu design and construction. Isanasivagurudeva paddhati is another Sanskrit text from the 9th century describing the art of building in India in south and central India. 
In north India, Brihat-samhita by Varāhamihira is the widely cited ancient Sanskrit text from 6th century describing the design and construction of Nagara style of Hindu temples. These Vāstu Śastras, often discuss and describe the principles of Hindu temple design, but do not limit themselves to the design of a Hindu temple. They describe the temple as a holistic part of its community, and lay out various principles and a diversity of alternate designs for home, village and city layout along with the temple, gardens, water bodies and nature. Mandala types and properties The central area in all mandala is the Brahmasthana. Mandala "circle-circumference" or "completion", is a concentric diagram having spiritual and ritual significance in both Hinduism and Buddhism. The space occupied by it varies in different mandala – in Pitha (9) and Upapitha (25). it occupies one square module, in Mahaapitha (16), Ugrapitha (36) and Manduka (64), four square modules and in Sthandila (49) and Paramasaayika (81), nine square modules. The Pitha is an amplified Prithvimandala in which, according to some texts, the central space is occupied by earth. The Sthandila mandala is used in a concentric manner. A site of any shape can be divided using the Pada Vinyasa. Sites are known by the number of squares. They range from 1x1 to 32x32 (1024) square sites. Examples of mandalas with the corresponding names of sites include: Sakala (1 square) corresponds to Eka-pada (single divided site) Pechaka (4 squares) corresponds to Dwi-pada (two divided site) Pitha (9 squares) corresponds to Tri-pada (three divided site) Mahaapitha (16 squares) corresponds to Chatush-pada (four divided site) Upapitha (25 squares) corresponds to Pancha-pada (five divided site) Ugrapitha (36 squares) corresponds to Shashtha-pada (six divided site) Sthandila (49 squares) corresponds to Sapta-pada (seven divided site) Manduka/ Chandita (64 square) corresponds to Ashta-pada (eight divided site) Paramasaayika (81 squares) corresponds to Nava-pada (nine divided site) Aasana (100 squares) corresponds to Dasa-pada (ten divided site) Bhadrmahasan (196 squares) corresponds to Chodah-pada (14 divided sites) Modern adaptations and usage Vāstu Śastra represents a body of ancient concepts and knowledge to many modern architects, a guideline but not a rigid code. The square-grid mandala is viewed as a model of organisation, not as a ground plan. The ancient Vāstu Śastra texts describe functional relations and adaptable alternate layouts for various rooms or buildings and utilities, but do not mandate a set compulsory architecture. Sachdev and Tillotson state that the mandala is a guideline, and employing the mandala concept of Vāstu Śastra does not mean every room or building has to be square. The basic theme is around core elements of central space, peripheral zones, direction with respect to sunlight, and relative functions of the spaces. The pink city Jaipur in Rajasthan was master planned by architect Vidyadhar Bhattacharya (1693–1751) who was approached by Rajput king Jai Singh and was built by 1727 CE, in part around Vastu Shilpa Sastra principles. Similarly, modern-era projects such as the architect Charles Correa's designed Gandhi Smarak Sangrahalaya in Ahmedabad, Vidhan Bhavan in Bhopal, and Jawahar Kala Kendra in Jaipur adapt and apply concepts from the Vastu Shastra Vidya. In the design of Chandigarh city, Le Corbusier incorporated modern architecture theories with those of Vastu Shastra. 
During the colonial rule period of India, town planning officials of the British Raj did not consider Vastu Vidya, but largely grafted Islamic Mughal era motifs and designs such as domes and arches onto Victorian-era style buildings without overall relationship layout. This movement, known as Indo-Saracenic architecture, is found in chaotically laid out, but externally grand structures in the form of currently used major railway stations, harbours, tax collection buildings, and other colonial offices in South Asia. Vāstu Śastra Vidya was ignored, during colonial era construction, for several reasons. These texts were viewed by 19th and early 20th century architects as archaic, the literature was inaccessible being in an ancient language not spoken or read by the architects, and the ancient texts assumed space to be readily available. In contrast, public projects in the colonial era were forced into crowded spaces and local layout constraints, and the ancient Vastu sastra were viewed with prejudice as superstitious and rigid about a square grid or traditional materials of construction. Sachdev and Tillotson state that these prejudices were flawed, as a scholarly and complete reading of the Vāstu Śastra literature amply suggests the architect is free to adapt the ideas to new materials of construction, local layout constraints and into a non-square space. The design and completion of a new city of Jaipur in early 1700s based on Vāstu Śastra texts, well before any colonial era public projects, was one of many proofs. Other examples include modern public projects designed by Charles Correa such as Jawahar Kala Kendra in Jaipur, and Gandhi Ashram in Ahmedabad. Vastu Shastra remedies have also been applied by Khushdeep Bansal in 1997 to the Parliament complex of India, when he contented that the library being built next to the building is responsible for political instability in the country. German architect Klaus-Peter Gast states that the principles of Vāstu Śastras is witnessing a major revival and wide usage in the planning and design of individual homes, residential complexes, commercial and industrial campuses, and major public projects in India, along with the use of ancient iconography and mythological art work incorporated into the Vastu vidya architectures. Vastu and superstition The use of Vastu shastra and Vastu consultants in modern home and public projects is controversial. Some architects, particularly during India's colonial era, considered it arcane and superstitious. Other architects state that critics have not read the texts and that most of the text is about flexible design guidelines for space, sunlight, flow and function. Vastu Shastra is a pseudoscience, states Narendra Nayak – the head of Federation of Indian Rationalist Associations. In contemporary India, Vastu consultants "promote superstition in the name of science". Astronomer Jayant Narlikar states that Vastu Shastra has rules about integrating architecture with its ambience but that the dictates of Vastu and alleged harm or benefits being marketed have "no logical connection to environment". He gives examples of Vastu consultants claiming the need to align the house to magnetic axis for "overall growth, peace and happiness, or that "parallelogram-shaped sites can lead to quarrels in the family", states Narlikar. He says this is pseudoscience. 
Vibhuti Chakrabarti, a scholar of Architecture and Sanskrit literature has critically translated historic Vastu literature, and states that in contemporary India, some are offering their services as Vastu consultants where they project it as a "religious tradition", rather than an "architectural methodology" as taught in historic texts. He says that these consultants include "quacks, priests and astrologers" fuelled by greed and with little knowledge of what the historic Vastu-sastra texts teach. They are said to market false advice and superstition in the name of Vastu Vidya tradition, sometimes under the rubric of "Vedic sciences". Sanskrit treatises on architecture Of the numerous Sanskrit treatises mentioned in ancient Indian literature, some have been translated in English. Many Agamas, Puranas and Hindu scriptures include chapters on architecture of temples, homes, villages, towns, fortifications, streets, shop layout, public wells, public bathing, public halls, gardens, river fronts among other things. In some cases, the manuscripts are partially lost, some are available only in Tibetan, Nepalese or South Indian languages, while in others original Sanskrit manuscripts are available in different parts of India. Some treatises, or books with chapters on Vaastu Shastra include: See also Aranmula Kottaram Dowsing Feng shui Geomancy Kanippayyur Shankaran Namboodiripad Ley line Shilpa Shastras Tajul muluk References Further reading Acharya P.K. (1933), Manasara (English translation), Online proofread edition including footnotes and glossary Acharya P.K. (1946), An Encyclopedia of Hindu Architecture, Oxford University Press – Terminology of Ancient Architecture Acharya P.K. (1946), Bibliography of Ancient Sanskrit Treatises on Architecture and Arts, in An Encyclopedia of Hindu Architecture, Oxford University Press, pp. 615–659. B.B. Dutt (1925), IVVRF (2000), Journal Of International Conference Vastu Panorama 2000, Main Theme – The Study of Energetic Dimension of Man and Behavior of Environment IVVRF (2004), Journal Of International Conference Vastu Panorama 2004 IVVRF (2008), Journal Of International Conference Vastu Panorama 2008, Main Theme – Save Mother earth and life – A Vastu Mission IVVRF (2012), Journal Of International Conference Vastu Panorama 2012, Main Theme – Vastu Dynamics for Global Well Being V. Chakraborty, Arya, Rohit Vaastu: the Indian art of placement : design and decorate homes to reflect eternal spiritual principles Inner Traditions / Bear & Company, 2000, . Vastu: Transcendental Home Design in Harmony with Nature, Sherri Silverman Prabhu, Balagopal, T.S and Achyuthan, A, "A text Book of Vastuvidya", Vastuvidyapratisthanam, Kozhikode, New Edition, 2011. Prabhu, Balagopal, T.S and Achyuthan, A, "Design in Vastuvidya", Vastuvidyapratisthanam, Kozhiko Prabhu, Balagopal, T.S, "Vastuvidyadarsanam" (Malayalam), Vastuvidyapratisthanam, Kozhikode. Prabhu, Balagopal, T.S and Achyuthan, A, "Manusyalaya candrika- An Engineering Commentary", Vastuvidyapratisthanam, Kozhikode, New Edition, 2011. Vastu-Silpa Kosha, Encyclopedia of Hindu Temple architecture and Vastu/S.K.Ramachandara Rao, Delhi, Devine Books (Lala Murari Lal Chharia Oriental series) (Set) D. N. Shukla, Vastu-Sastra: Hindu Science of Architecture, Munshiram Manoharial Publishers, 1993, . B. B. Puri, Applied vastu shastra vaibhavam in modern architecture, Vastu Gyan Publication, 1997, . Vibhuti Chakrabarti, Indian Architectural Theory: Contemporary Uses of Vastu Vidya Routledge, 1998, . Siddharth, Dr. 
Jayshree Om: The Ancient Science of Vastu, 2020, Hindu temple architecture Vedic period Hindu philosophical concepts Environmental design History of literature in India Indian architectural history Pseudoscience Superstitions Superstitions of India Architectural theory
Vastu shastra
Engineering
3,915
6,729,866
https://en.wikipedia.org/wiki/Phase%20offset%20modulation
Phase offset modulation works by overlaying two instances of a periodic waveform on top of each other. (In software synthesis, the waveform is usually generated by using a lookup table.) The two instances of the waveform are kept slightly out of sync with each other, with one running further ahead or behind in its cycle. The values of the two waveforms are then either multiplied together, or the value of one is subtracted from the other. This generates an entirely new waveform with a drastically different shape. For example, one sawtooth (ramp) wave subtracted from another creates a pulse wave, with the amount of offset (i.e. the difference between the two waveforms' starting points) dictating the duty cycle. Slowly changing the offset amount produces pulse-width modulation. Using this technique, not only can a ramp wave create pulse-width modulation, but any other waveform can achieve a comparable effect. Wave mechanics
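A minimal numeric sketch of the sawtooth example above (illustrative Python, not code from any particular synthesizer): two copies of a unipolar ramp, one offset in phase, are subtracted sample by sample, and the result is a pulse wave whose duty cycle equals the offset.

def ramp(phase):
    # Unipolar sawtooth/ramp that rises from 0 to 1 over each cycle.
    return phase % 1.0

def phase_offset_pulse(phase, offset):
    # Subtract a phase-offset copy of the ramp from the original.
    # The difference is -offset for a fraction (1 - offset) of the cycle and
    # (1 - offset) for a fraction equal to the offset: a pulse wave whose
    # duty cycle is set directly by the offset.
    return ramp(phase) - ramp(phase + offset)

# One cycle sampled at 16 points with a 25% offset -> a 25% duty-cycle pulse.
print([round(phase_offset_pulse(n / 16, 0.25), 2) for n in range(16)])
# Sweeping the offset slowly (for example with an envelope or LFO) yields
# pulse-width modulation.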
Phase offset modulation
Physics
202
15,213,007
https://en.wikipedia.org/wiki/MLwiN
MLwiN is a statistical software package for fitting multilevel models. It uses both maximum likelihood estimation and Markov chain Monte Carlo (MCMC) methods. MLwiN is based on an earlier package, MLn, but with a graphical user interface (as well as other additional features). MLwiN represents multilevel models using mathematical notation including Greek letters and multiple subscripts, so the user needs to be (or become) familiar with such notation. For a tutorial introduction to multilevel models and their applications in medical statistics illustrated using MLwiN, see Goldstein et al. References External links Website Multilevel Modelling Software Reviews Statistical software
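For readers unfamiliar with that notation, the kind of model MLwiN is built around can be illustrated by a two-level random-intercept model, written here in generic textbook form (this is not a transcription of MLwiN's own equation window):

% Two-level random-intercept model: units i (e.g. students) nested in groups j (e.g. schools).
\begin{align*}
  y_{ij} &= \beta_0 + \beta_1 x_{ij} + u_j + e_{ij},\\
  u_j &\sim \mathrm{N}(0,\ \sigma_u^2), \qquad e_{ij} \sim \mathrm{N}(0,\ \sigma_e^2).
\end{align*}

The group-level residuals u_j are what distinguish such a multilevel model from an ordinary single-level regression.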
MLwiN
Mathematics
133
26,041,731
https://en.wikipedia.org/wiki/Drawbar%20force%20gauge
A drawbar force gauge is a gauge designed to measure forces on a machine tool's drawbar. These types of machines are found in metalworking, woodworking, stone cutting, and carbon fiber fabricating shops. Many modern machines generate well in excess of 50,000 N (12,000 lbf). Measuring and maintaining this force is an important and necessary part of a machine shop preventative maintenance plan. How drawbar force gauges work Modern drawbar force gauges typically are based on a force sensor that uses bonded strain gauges and electronics to convert the resulting output into a digital display for the user to view. Earlier versions of these gauges sometimes also used a sealed hydraulic cavity with a pressure gauge to measure and display force. These hydraulic gauges are generally considered less accurate because of the physical limitations of the indicator. Why drawbar force is measured Drawbar force gauges allow early detection of problems with the spindle's Belleville spring stack, verification of performance of the clamping system as a whole, help prevent damage to spindle taper and other machine features critical to machining accuracy, and ultimately help to keep the machine operator safe. Drawbar force measurement has been made much more important in recent years with the introduction of radically higher RPM machines. These machines are necessary to work the modern materials required in a multitude of applications—new types of composite wood material, carbon fiber, and high strength materials such as titanium. High speed machining of these materials is considered to begin at 10,000 rpm and may reach as high as 50,000 rpm. The need for regular verification of the spindle clamping system becomes obvious. As the required machining speeds become higher, the need for machines to be built with smaller diameter spindle components increases. When the spring pack, bearings, and hydraulic units become smaller, the stresses placed upon them become greater. As a result, the clamping system will remain in good shape for fewer and fewer "cycles", or "clamp/unclamp" procedures. Again, this requires gauges and routine procedures to monitor this process. Many operators do not realize that this is something that has changed over time. Any metal or wood working machine that takes advantage of the HSK taper system should be routinely checked. The slightest stroke mis-adjustment, dirt, or slight wear of the drawbar system can result in significantly reduced holding force. A preventative maintenance schedule, with a strict timetable for testing is a necessity when operating any type of high speed machine utilizing the HSK system. Retention knob Drawbar force gauges are able to detect broken or weakening components of the drawbar clamping system, can give indications that the unit needs lubrication, detect gripper mis-adjustment, or demonstrate that the incorrect retention knob is being used for the machine. A retention knob is a device screwed into the narrow end of a tool holder, enabling the drawbar to pull the tool holder into the spindle. With a highly accurate electronic gauge, deficiencies can be noted and corrected. Many hours of expensive machine operating time can be put to use while avoiding fretting, chatter, "stuck" tool holders in a spindle, etc., by employing proper preventative maintenance techniques using an accurate electronic gauge and other spindle health management tools. 
Drawbar force gauges in tool holder standards The following tool holder standards specifically address tool retention force as measured by a drawbar force gauge: HSK standard ISO 12164-1: Hollow taper interface with flange contact surface—Part 1: Shanks—Dimensions Steep Taper standard ASME B5.50: 7/24 Taper Tool to Spindle Connection for Automatic Tool Change Capto Taper standard ISO 26623-1: Polygonal taper interface with flange contact surface—Part 1: Dimensions and designation of shanks External links Don’t Forget The Drawbar, Modern Machine Shop magazine, March 2006, By Peter Zelinski Draw Bar Force Testing Equipment, MachineToolHelp.com Spin Doctors, Cutting Tool Engineering magazine, August 2009, By George Weimer Machine tools Metalworking measuring instruments
Drawbar force gauge
Engineering
840
2,357,289
https://en.wikipedia.org/wiki/HEPCO
Heavy Equipment Production Company (HEPCO) is an Iranian corporation based in Arak that manufactures construction equipment, railroad cars, trucks, forklifts and industrial machinery for the oil, gas, energy, metal and mining industries. HEPCO is the largest heavy equipment manufacturer in the Middle East. The company has 1,500 employees with an annual production capacity of 4,800 units. History HEPCO was established and registered in 1972, with the intention of assembly and production of heavy equipment. In 1975 HEPCO resumed operation in its premises in Arak, consisting of 1,000,000 square meters of land and 40,000 square meters of production halls, in collaboration with manufacturers such as Navistar International, Dynapac, Poclain, Sakai Heavy Industries and Lokomo. In 1984, the HEPCO development project was designed in collaboration with Liebherr and Volvo, aiming at the fabrication of steel structures for construction equipment. In 2020, HEPCO signed 5 MOUs with National Iranian Copper Industry Company (NICIC), Mobarakeh Steel Company, Gol Gohar Mining and Industrial Company, Chadormalu Mining and Industrial Company, Mining Investment Insurance Company (MIIC), and Iran Mine House to manufacture a total of 900 road-building machines during the first and second years, according to Khodadad Gharibpour, the head of the Iranian Mines and Mining Industries Development and Renovation Organization (IMIDRO). Privatization HEPCO was originally established before the 1979 Islamic Revolution in Iran. The company was subsequently seized by organizations and persons tied to regime officials. It was then sold to the private sector as part of the ruling regime's privatization plans of 2006, which put regime officials and their affiliates in control of the country's vital businesses. A series of protests took place at intervals after the privatization, ending in 2021 when the factory was eventually shut down. Collapse After a series of clashes between HEPCO factory workers and police, multiple protesters were injured or detained by the Iranian anti-riot forces, and, as a result, the headcount of factory workers dropped from 8,000 to nearly 1,000. See also Automotive industry in Iran Mining in Iran References External links HEPCO Website Manufacturing companies established in 1972 Crane manufacturers Iranian brands 1972 establishments in Iran Mining equipment companies Forklift truck manufacturers Companies listed on the Tehran Stock Exchange Industrial machine manufacturers Truck manufacturers of Iran Rolling stock manufacturers of Iran Car manufacturers of Iran Companies based in Arak Companies of Iran Manufacturing companies of Iran Companies of Iran by year of establishment Construction equipment manufacturers of Iran
HEPCO
Engineering
530
30,775,584
https://en.wikipedia.org/wiki/Louis%20Leithold
Louis Leithold (San Francisco, United States, 16 November 1924 – Los Angeles, 29 April 2005) was an American mathematician and teacher. He is best known for authoring The Calculus, a classic textbook about calculus that changed the teaching methods for calculus in high schools and universities worldwide. Known as "a legend in AP calculus circles," Leithold was the mentor of Jaime Escalante, the Los Angeles high-school teacher whose story is the subject of the 1988 movie Stand and Deliver. Biography Leithold attended the University of California, Berkeley, where he attained his B.A., M.A. and PhD. He went on to teach at Phoenix College (Arizona) (which has a math scholarship in his name), California State University, Los Angeles, the University of Southern California, Pepperdine University, and The Open University (UK). In 1968, Leithold published The Calculus, a "blockbuster best-seller" which simplified the teaching of calculus. At age 72, after his retirement from Pepperdine, he began teaching calculus at Malibu High School, in Malibu, California, drilling his students for the Advanced Placement Calculus exam and achieving considerable success. He regularly assigned two hours of homework per night, and held two training sessions at his own house that ran on Saturdays or Sundays from 9 AM to 4 PM before the AP test. His teaching methods were praised for their liveliness, and his love for the topic was well known. He also taught workshops for calculus teachers. One of the people he influenced was Jaime Escalante, who taught math to minority students at Garfield High School in East Los Angeles. Escalante's subsequent success as a teacher is portrayed in the 1988 film Stand and Deliver. Leithold died of natural causes the week before his class (which he had been "relentlessly drilling" for eight months) was to take the AP exam; his students went on to receive top scores. A memorial service was held in Glendale, and a scholarship was established in his name. Leithold experienced a notable legal event in his personal life in 1959, when he and his then-wife, musician Dr. Thyra N. Pliske, adopted a minor child, Gordon Marc Leithold. The couple eventually divorced in 1962, with an Arizona court granting Thyra custody of the child and Louis receiving certain visitation rights. Thyra later married Gilbert Norman Plass, and the family moved to Dallas, Texas in 1963. In 1965, Louis filed a suit against his former wife and her new husband in the Juvenile Court of Dallas County, Texas. The suit, titled "Application for Modification of Visitation and Custody," sought significant changes to the Arizona decree based on allegations of changed conditions and circumstances. Following a hearing, the Dallas court modified the Arizona decree with respect to Louis' visitation rights. His son died in 1994, at the age of 35, in Houston, Texas. Leithold was an art collector and owned works by Vasa Mihich. He also used art by Patrick Caulfield in his Calculus book. References University of California, Berkeley alumni Writers from San Francisco Schoolteachers from Arizona 20th-century American mathematicians 21st-century American mathematicians 1924 births 2005 deaths History of calculus American science writers Educators from California California State University, Los Angeles faculty University of Southern California faculty Pepperdine University faculty 20th-century American educators
Louis Leithold
Mathematics
672
39,487,834
https://en.wikipedia.org/wiki/Boletus%20bicoloroides
Boletus bicoloroides is a fungus of the genus Boletus native to the United States. It was first described officially in 1971 by mycologists Alexander H. Smith and Harry Delbert Thiers. See also List of Boletus species References External links bicoloroides Fungi described in 1971 Fungi of the United States Taxa named by Harry Delbert Thiers Taxa named by Alexander H. Smith Fungi without expected TNC conservation status Fungus species
Boletus bicoloroides
Biology
93
56,868,264
https://en.wikipedia.org/wiki/NGC%204596
NGC 4596 is a barred lenticular galaxy located about 55 million light-years away in the constellation Virgo. NGC 4596 was discovered by astronomer William Herschel on March 15, 1784. NGC 4596 is a member of the Virgo Cluster and has an inclination of about 38°. Physical characteristics NGC 4596 has a strong bar with bright ansae at the ends. Two diffuse spiral arms branch off from the ends of the bar and form a well-defined inner pseudoring. The spiral arms continue out and fade rapidly in the bright outer disk. Supermassive black hole NGC 4596 has a supermassive black hole with an estimated mass of 78 million suns (7.8 × 10^7 M☉). See also List of NGC objects (4001–5000) NGC 4608 NGC 1533 References External links Virgo (constellation) Barred lenticular galaxies 4596 42401 7828 Astronomical objects discovered in 1784 Discoveries by William Herschel Virgo Cluster
NGC 4596
Astronomy
201
54,358,158
https://en.wikipedia.org/wiki/Mouse%20ear%20swelling%20test
The mouse ear swelling test is a toxicological test that aims to mimic human skin reactions to chemicals. It avoids post-mortem examination of tested animals. References See also Local lymph node assay Draize test Freund's Complete Adjuvant Toxicology Allergology
Mouse ear swelling test
Environmental_science
62
23,785,343
https://en.wikipedia.org/wiki/Excretory%20system%20of%20gastropods
The excretory system of gastropods removes nitrogenous waste and maintains the internal water balance of these creatures, commonly referred to as snails and slugs. The primary organ of excretion is a nephridium. Structure The most primitive gastropods retain two nephridia, but in the great majority of species, the right nephridium has been lost, leaving a single excretory organ, located in the anterior part of the visceral mass. The nephridium projects into the main venous sinus in the animal's foot. The circulatory fluid of gastropods, known as haemolymph directly bathes the tissues, where it supplies them with oxygen and absorbs carbon dioxide and nitrogenous waste, a necessary waste product of metabolism. From the arterial sinuses bathing the tissues, it drains into the venous sinus, and thus flows past the nephridium. The main body cavity of most aquatic gastropods also includes pericardial glands, often located above the heart. These secrete waste into the haemolymph, prior to further filtration in the nephridium. Pulmonates lack these glands, so that the nephridium is the only major organ of excretion. In some gastropods, the nephridium opens directly into the sinus, but more usually, there is a small duct, referred to as the renopericardial canal. In aquatic gastropods, the nephridium is drained by a ureter that opens near the rear of the mantle cavity. This allows the flow of water through the cavity to flush out the excreta. Terrestrial pulmonates instead have a much longer ureter, that opens near the anus. In addition to the pericardial glands and nephridium, excretory cells are also present in the digestive glands opening into the stomach. These glands have a metabolic function, somewhat similar to that of the vertebrate liver, and excrete waste products directly into the digestive system, where it is voided with the faeces. Water balance In aquatic gastropods, the waste product is invariably ammonia, which rapidly dissolves in the surrounding water. In the case of freshwater species, the nephridium also resorbs a significant amount of salt in order to prevent its loss through osmosis into the surrounding water. Terrestrial species instead excrete insoluble uric acid, which allows them to maintain their internal water balance. Even so, most species require a somewhat humid environment, and secrete a considerable amount of water in their slime trail. Those few species that dwell in arid environments typically hibernate or aestivate during dry periods to preserve moisture. References External links Gastropod anatomy Organ systems
Excretory system of gastropods
Biology
574
3,563,002
https://en.wikipedia.org/wiki/Crown%20glass%20%28optics%29
Crown glass is a type of optical glass used in lenses and other optical components. It has relatively low refractive index (≈1.52) and low dispersion (with Abbe numbers between 50 and 85). Crown glass is produced from alkali-lime silicates containing approximately 10% potassium oxide and is one of the earliest low dispersion glasses. History The term originated from crown-glass windows, a method of window production that began in France during the Middle Ages. A molten blob of glass was attached to a pole and spun rapidly, flattening it out into a large disk from which windows were cut. The center, called the "crown" or "bullseye", was too thick for windows, but was often used to make lenses or deck prisms. Types The borosilicate glass Schott BK7 (glass code 517642) is an extremely common crown glass, used in precision lenses. Borosilicates contain about 10% boric oxide, have good optical and mechanical characteristics, and are resistant to chemical and environmental damage. Other additives used in crown glasses include zinc oxide, phosphorus pentoxide, barium oxide, fluorite and lanthanum oxide. The crown/flint distinction is so important to optical glass technology that many glass names, notably Schott glasses, incorporate it. A K in a Schott name indicates a crown glass (Krone in German). The B in BK7 indicates that it is a borosilicate glass composition. BAK-4 barium crown glass (glass code 569560) has a higher index of refraction than BK7, and is used for prisms in high-end binoculars. In that application, it gives better image quality and a round exit pupil. A concave lens of flint glass is commonly combined with a convex lens of crown glass to produce an achromatic doublet. The dispersions of the glasses partially compensate for each other, producing reduced chromatic aberration compared to a singlet lens with the same focal length. See also History of the achromatic telescope John Dollond, who patented and commercialised the crown/flint doublet Notes External links Crown glass article Applied photographic optics Book Book- The properties of optical glass Handbook of Ceramics, Glasses, and Diamonds Optical glass construction Video of blowing crown glass by Corning Museum of Glass Glass compositions
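The achromatic doublet mentioned above can be made concrete with a small worked example. In the standard thin-lens treatment, the two element powers must sum to the total power while their chromatic contributions cancel, i.e. φ1/V1 + φ2/V2 = 0, where V1 and V2 are the Abbe numbers. The Python sketch below is illustrative only: BK7's Abbe number of roughly 64 follows from the 517642 glass code quoted above, and 36 is assumed as a typical value for a dense flint; real doublet design also handles element thicknesses, shapes and spacing.

# Thin-lens achromat: choose element powers so that
#   phi1 + phi2 = phi_total   and   phi1/V1 + phi2/V2 = 0.
def achromat_powers(f_total_mm, v_crown=64.2, v_flint=36.0):
    phi_total = 1.0 / f_total_mm
    phi_crown = phi_total * v_crown / (v_crown - v_flint)
    phi_flint = -phi_total * v_flint / (v_crown - v_flint)
    return 1.0 / phi_crown, 1.0 / phi_flint   # element focal lengths in mm

f_crown, f_flint = achromat_powers(100.0)
print(round(f_crown, 1), round(f_flint, 1))
# Roughly a 43.9 mm convex crown element paired with a -78.3 mm concave flint
# element gives a 100 mm doublet whose chromatic contributions cancel.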
Crown glass (optics)
Chemistry
495
12,041,060
https://en.wikipedia.org/wiki/Di-%CF%80-methane%20rearrangement
In organic chemistry, the di-π-methane rearrangement is the photochemical rearrangement of a molecule that contains two π-systems separated by a saturated carbon atom. In the aliphatic case, this molecule is a 1,4-diene; in the aromatic case, an allyl-substituted arene. The reaction forms (respectively) an ene- or aryl-substituted cyclopropane. Formally, it amounts to a 1,2-shift of one ene group (in the diene) or the aryl group (in the allyl-aromatic analog), followed by bond formation between the lateral carbons of the non-migrating moiety. Discovery This rearrangement was originally encountered in the photolysis of barrelene to give semibullvalene. Once the mechanism was recognized as general by Howard Zimmerman in 1967, it was clear that the structural requirement was two π groups attached to an sp3-hybridized carbon, and then a variety of further examples was obtained. Notable examples One example was the photolysis of Mariano's compound, 3,3-dimethyl-1,1,5,5-tetraphenyl-1,4-pentadiene. In this symmetric diene, the active π bonds are conjugated to arenes, which does not inhibit the reaction. Another was the asymmetric Pratt diene. Pratt's diene demonstrates that the reaction preferentially cyclopropanates aryl substituents, because the reaction pathway preserves the resonant stabilization of a benzhydrylic radical intermediate. The barrelene rearrangement is more complex than the Mariano and Pratt examples since there are two sp3-hybridized carbons. Each bridgehead carbon has three (ethylenic) π bonds, and any two can undergo the di-π-methane rearrangement. Moreover, unlike the acyclic Mariano and Pratt dienes, the barrelene reaction requires a triplet excited state. Thus acetone is used in the barrelene reaction; acetone captures the light and then delivers triplet excitation to the barrelene reactant. In the final step of the rearrangement there is a spin flip, to provide paired electrons and a new σ bond. As excited-state probe The dependence of the di-π-methane rearrangement on the multiplicity of the excited state arises from the free-rotor effect. Triplet 1,4-dienes freely undergo cis-trans interconversion of diene double bonds (i.e. free rotation). In acyclic dienes, this free rotation leads to diradical reconnection, short-circuiting the di-π-methane process. Singlet excited states do not rotate and may thus undergo the di-π-methane mechanism. For cyclic dienes, as in the barrelene example, the ring structure can prevent free-rotatory dissipation, and may in fact require bond rotation to complete the rearrangement. References Rearrangement reactions
Di-π-methane rearrangement
Chemistry
626
59,427,673
https://en.wikipedia.org/wiki/Dermophis%20donaldtrumpi
Dermophis donaldtrumpi is a name proposed for a putative new species of caecilian a nearly-blind, serpentine amphibian to be named after Donald Trump. It was originally discovered in Panama and though the name was proposed in 2018, it has yet to be confirmed as a new species; as of 2024, the binomial name and description of the species has not been formally published. It was given its name after the Rainforest Trust held an auction for the naming rights. The company EnviroBuild won the auction and proposed naming the species in protest against Trump's environmental policies and views. Description Dermophis donaldtrumpi is about long, and like all caecilians it has a worm-like appearance with a smooth and shiny skin rich in mucous glands. This order of amphibians are either aquatic or fossorial, with D. donaldtrumpi belonging to the latter type, living almost entirely underground. It is nearly blind, with its reduced eyes only able to detect light and dark, so it uses a pair of tentacles, unique to caecilians, near its mouth in order to find prey. The offspring feed on an extra layer of skin produced by the mother (dermatotrophy), which provide them with both nutrients and microbes necessary for a healthy microbiome. According to the Rainforest Trust, amphibians such as D. donaldtrumpi are vulnerable to extinction due to being exceptionally sensitive to the results of global warming. Naming In December 2018, the Rainforest Trust completed an auction of naming rights for twelve newly discovered species of South American plants and animals, the money going towards the conservation of the species' habitats. The sustainable building materials company EnviroBuild paid $25,000 for the right to name the new amphibian. Aidan Bell, owner of EnviroBuild, stated that he named the species after Trump to raise awareness of Trump's policies on climate change and the danger that he sees those policies pose to the survival of many species. Referring to the creature's "rudimentary eyes which can only detect light or dark", Bell said that "Capable of seeing the world only in black and white, Donald Trump has claimed that climate change is a hoax by the Chinese." Daily Mirror also linked the creature's limited sight as "a feature that inspired it[s] name." According to The Washington Post, "The naming choice highlights the president's dismal approval ratings worldwide and is clearly designed to belittle him." Bell also related the caecilian's instincts when nurturing young to what he claims as Trump's nepotism: "The Dermophis genus grows an extra layer of skin which their young use their teeth to peel off and eat, a behaviour known as dermatrophy. As a method of ensuring [his] children survive in life Donald Trump prefers granting them high roles in the Oval Office." Dermatotrophy, a form of parental care in which the young feed on the skin of its mother, has been scientifically observed in other species of caecilians, such as Boulengerula taitana. EnviroBuild further connected the amphibian's nature to burrow underground with more of the president's policies: "Burrowing its head underground helps Donald Trump when avoiding scientific consensus on anthropogenic climate change". The company also noted that he had "appointed several energy lobbyists to the Environment Agency, where their job is to regulate the energy industry." 
The process of naming this species has garnered significant controversy among researchers and conservationists; some have pointed out the conservation benefits of the money donated to name the species, while others have criticized the idea of donating to name a new species, as many of the donors had no involvement with the studies undertaken to describe the species. In particular, North Carolina Museum of Natural Sciences researcher Christian Kammerer sees the facetious name of the amphibian as "just mean to the creature". See also List of things named after Donald Trump § Species List of organisms named after famous people (born 1900–1949) References Dermophis Endemic fauna of Panama Donald Trump Undescribed vertebrate species Nomina nuda
Dermophis donaldtrumpi
Biology
882
55,786,149
https://en.wikipedia.org/wiki/Interlaced%20arch
Interlaced arches are a scheme of decoration employed in Romanesque and Gothic architecture, in which arches spring from alternate piers, interlacing or intersecting one another. In the former case, the archivolt of the first arch is carried alternately over and under that of the second; in the latter, the archivolts actually intersect and stop one another. An example of the former exists in St Peter-in-the-East in Oxford, and of the latter in St. Joseph's chapel in Glastonbury and in Bristol Cathedral. The arches in an interlacing arcade can be either semicircular or pointed, and usually form purely decorative blind arcades. Interlaced arches are most likely an invention of Islamic architecture (cf. the Bab al-Mardum Mosque, 999–1000 AD, and the Mosque–Cathedral of Córdoba, 833–988). This decoration was especially popular in England, with the most famous example at Lincoln Cathedral (St Hugh's choir). References Sources Arches and vaults
Interlaced arch
Engineering
206
34,227,089
https://en.wikipedia.org/wiki/Electron%20magnetic%20circular%20dichroism
Electron magnetic circular dichroism (EMCD) (also known as electron energy-loss magnetic chiral dichroism) is the EELS equivalent of XMCD. The effect was first proposed in 2003 and experimentally confirmed in 2006 by the group of Prof. Peter Schattschneider at the Vienna University of Technology. Similarly to XMCD, EMCD is a difference spectrum of two EELS spectra taken in a magnetic field with opposite helicities. Under appropriate scattering conditions, virtual photons with specific circular polarizations can be absorbed, giving rise to spectral differences. The largest difference is expected between the case in which a virtual photon with left circular polarization is absorbed and the case in which one with right circular polarization is absorbed. By closely analyzing the difference in the EMCD spectrum, information can be obtained on the magnetic properties of the atom, such as its spin and orbital magnetic moment. In the case of transition metals such as iron, cobalt, and nickel, the absorption spectra for EMCD are usually measured at the L-edge. This corresponds to the excitation of a 2p electron to a 3d state by the absorption of a virtual photon providing the ionisation energy. The absorption is visible as a spectral feature in the electron energy loss spectrum (EELS). Because the 3d electron states are the origin of the magnetic properties of the elements, the spectra contain information on the magnetic properties. Moreover, since the energy of each transition depends on the atomic number, the information obtained is element-specific; that is, it is possible to distinguish the magnetic properties of a given element by examining the EMCD spectrum at its characteristic energy (708 eV for iron). Since the same electronic transitions are probed in both EMCD and XMCD, the information obtained is the same. However, EMCD has a higher spatial resolution and depth sensitivity than its X-ray counterpart. Moreover, EMCD can be measured on any TEM equipped with an EELS detector, whereas XMCD is normally measured only on dedicated synchrotron beamlines. A disadvantage of EMCD in its original incarnation is its requirement for crystalline material whose thickness and orientation give precisely the 90-degree phase shift needed for EMCD. However, it has more recently been demonstrated that electron vortex beams can also be used to measure EMCD without the geometrical constraints of the original procedure. See also Magnetic circular dichroism Faraday effect XMCD References Spectroscopy
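At its simplest, the dichroic signal is just the difference between two EELS spectra recorded at the two mirrored scattering positions, as described above. The following Python fragment is purely schematic: the arrays, energy range, and post-edge normalization are invented placeholders for illustration, not part of any instrument or analysis software.

```python
import numpy as np

# Placeholder EELS spectra (counts vs. energy loss) recorded at the two
# symmetric detector positions; real data would come from the spectrometer.
energy_loss = np.linspace(690, 730, 400)                  # eV, around the Fe L-edge
spectrum_plus = np.random.poisson(1000, 400).astype(float)
spectrum_minus = np.random.poisson(1000, 400).astype(float)

# Normalize each spectrum to its post-edge region, then subtract:
# the EMCD signal is the (small) asymmetry between the two spectra.
norm_plus = spectrum_plus / spectrum_plus[-50:].mean()
norm_minus = spectrum_minus / spectrum_minus[-50:].mean()
emcd_signal = norm_plus - norm_minus
```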
Electron magnetic circular dichroism
Physics,Chemistry,Astronomy
492
4,865,616
https://en.wikipedia.org/wiki/Palatovaginal%20canal
The palatovaginal canal (also pharyngeal canal) is a small canal formed between the sphenoidal process of the palatine bone and the vaginal process of the sphenoid bone. It connects the pterygopalatine fossa and the nasal cavity. It transmits the pharyngeal nerve (pharyngeal branch of the maxillary nerve) and the pharyngeal branch of the maxillary artery. Anatomy Its proximal opening is situated inferoposteriorly in the pterygopalatine fossa. Its distal opening is situated in the nasal cavity at the root of the pterygoid process, near the lateral margin of the ala of the vomer. Variation An inconstant vomerovaginal canal may lie between the ala of the vomer and the vaginal process of the sphenoid bone, medial to the palatovaginal canal, and lead into the anterior end of the palatovaginal canal. Contents The pharyngeal branch of the maxillary artery supplies the nasopharynx, the posterior part of the roof of the nasal cavity, the sphenoid sinus, and the pharyngotympanic tube. References External links Musculoskeletal system
Palatovaginal canal
Biology
256
895,721
https://en.wikipedia.org/wiki/Clonal%20colony
A clonal colony or genet is a group of genetically identical individuals, such as plants, fungi, or bacteria, that have grown in a given location, all originating vegetatively, not sexually, from a single ancestor. In plants, an individual in such a population is referred to as a ramet. In fungi, "individuals" typically refers to the visible fruiting bodies or mushrooms that develop from a common mycelium which, although spread over a large area, is otherwise hidden in the soil. Clonal colonies are common in many plant species. Although many plants reproduce sexually through the production of seed, reproduction occurs by underground stolons or rhizomes in some plants. Above ground, these plants most often appear to be distinct individuals, but underground they remain interconnected and are all clones of the same plant. However, it is not always easy to recognize a clonal colony especially if it spreads underground and is also sexually reproducing. Methods of establishment With most woody plants, clonal colonies arise by wide-ranging roots that at intervals send up new shoots, termed suckers. Trees and shrubs with branches that may tend to bend and rest on the ground, or which possess the ability to form aerial roots can form colonies via layering, or aerial rooting, e. g. willow, blackberry, fig, and banyan. Some vines naturally form adventitious roots on their stems that take root in the soil when the stems contact the ground, e.g. ivy and trumpet vine. With other vines, rooting of the stem where nodes come into contact with soil may establish a clonal colony, e.g. Wisteria. Ferns and many herbaceous flowering plants often form clonal colonies via horizontal underground stems termed rhizomes, e.g. ostrich fern Matteuccia struthiopteris and goldenrod. A number of herbaceous flowering plants form clonal colonies via horizontal surface stems termed stolons, or runners; e.g. strawberry and many grasses. Non-woody plants with underground storage organs such as bulbs and corms can also form colonies, e.g. Narcissus and Crocus. A few plant species can form colonies via adventitious plantlets that form on leaves, e.g. Kalanchoe daigremontiana and Tolmiea menziesii. A few plant species can form colonies via asexual seeds, termed apomixis, e.g. dandelion. Record colonies The only known natural example of King's Lomatia (Lomatia tasmanica) found growing in the wild is a clonal colony in Tasmania estimated to be 43,600 years old. A group of 47,000 Quaking Aspen (Populus tremuloides) trees (nicknamed "Pando") in the Wasatch Mountains, Utah, United States, has been shown to be a single clone connected by the root system. It is sometimes considered the world's largest organism by mass, covering , and also as among the world's oldest living organisms, at an estimated 14,000 years old. Another possible candidate for oldest organism on earth is an underwater meadow of the marine plant Posidonia oceanica in the Mediterranean Sea, which could be up to 100,000 years of age. Examples When woody plants form clonal colonies, they often remain connected through the root system, sharing roots, water and mineral nutrients. A few non-vining, woody plants that form clonal colonies are Bigelow oak (Quercus sinuata var. 
breviloba), quaking aspen (Populus tremuloides), bayberry (Myrica pensylvanica), black locust (Robinia pseudoacacia), creosote bush (Larrea tridentata), bladdernut, blueberry (Vaccinium), devil's club (Oplopanax horridus), forsythia, hazelnut (Corylus), honey locust (Gleditsia triacanthos), Kentucky coffeetree (Gymnocladus dioicus), kerria (Kerria japonica), pawpaw (Asimina triloba), poplars (Populus), sassafras (Sassafras albidum), sumac (Rhus), sweetgum (Liquidambar styraciflua), and sweetshrub (Calycanthus floridus). See also King Clone Tumour heterogeneity References Further reading Kricher, J. C., & Morrison, G. (1988). A Field Guide to Eastern Forests, pp. 19–20. Peterson Field Guide Series. . Plant reproduction Plant morphology Mycology
Clonal colony
Biology
989
45,337,989
https://en.wikipedia.org/wiki/Ly6
Ly6, also known as lymphocyte antigen 6 or urokinase-type plasminogen activator receptor (uPAR), is a family of proteins that share a common structure but differ in their tissue expression patterns and functions. Ly6 proteins are cysteine-rich, form disulfide bridges, and contain a LU domain. These proteins are GPI-anchored to the cell membrane or are secreted. A total of 35 human and 61 mouse Ly6 family members have been identified. Depending on which tissues they are expressed in, Ly6 family members have different roles. They are expressed in various types of tissues, and their expression depends on the stage of cell differentiation. For example, they are involved in cell proliferation, cell migration, cell–cell interactions, immune cell maturation, macrophage activation, and cytokine production. Their overexpression or dysregulation, for example due to point mutations, is associated with tumorigenesis and autoimmune diseases. This family was discovered in the 1970s, and these proteins are still used as markers of distinct stages of leukocyte differentiation. Gene organization Genes encoding human Ly6 family members are located in clusters on chromosomes 6, 8, 11 and 19. In the murine genome, family members are located on chromosomes 17, 15, 9 and 7, respectively. Genes encoding Ly6 proteins with one LU domain consist of 3 exons and 2 introns. The first exon encodes the signal peptide, exons 2 and 3 encode the LU domain, and exon 3 also encodes the GPI anchor. Protein structure Ly6 proteins are characterized by the LU domain. Typically, they contain one LU domain, but some members of the family have multiple LU domains. The LU domain consists of 60–80 amino acids and contains 10 cysteines arranged in a specific pattern that allows the creation of 5 disulfide bridges, which in turn allow the formation of a three-fingered (3F) structural motif. Based on their subcellular localization, these proteins are classified as GPI-anchored to the cell membrane or secreted. Expression Although the Ly6 family members share a common structure, their expression varies in different tissues and is regulated depending on the stage of cell differentiation. Many Ly6 family members are expressed in hematopoietic precursors and differentiated hematopoietic cells in a lineage-specific manner, making them useful cell-surface markers for leukocytes and facilitating identification of distinct leukocyte sub-populations. Further, Ly6 family proteins are also expressed, for example, by sperm, neurons, keratinocytes and epithelial cells. Function Ly6 family proteins have different functions depending on the tissues in which they are expressed. They play an important role in the immune response to infection and in maintaining homeostasis in response to varying environmental conditions. They are involved in cell proliferation, cell migration, cell–cell interactions, immune cell maturation, macrophage activation, and cytokine production. They are also involved in complement activity, neuronal activity, angiogenesis, tumorigenesis and wound healing. Clinical relevance Many Ly6 family proteins (with the notable exception of SLURP1) are over-expressed in inflamed tissues and in tumors. They are therefore used as tumor markers and are also potential therapeutic targets. Some point mutations in Ly6 family proteins are associated with autoimmune diseases, such as psoriasis vulgaris. Ly6 proteins Examples of Ly6 proteins include: LY6E LYNX1 LYPD1 LYPD3 LYPD5 LYPD8 LYPD6B References Protein families
Ly6
Biology
763
18,207,458
https://en.wikipedia.org/wiki/Vector%20Field%20Histogram
In robotics, the Vector Field Histogram (VFH) is a real-time motion planning algorithm proposed by Johann Borenstein and Yoram Koren in 1991. The VFH utilizes a statistical representation of the robot's environment through the so-called histogram grid, and therefore places great emphasis on dealing with uncertainty from sensor and modeling errors. Unlike other obstacle avoidance algorithms, VFH takes into account the dynamics and shape of the robot, and returns steering commands specific to the platform. While considered a local path planner, i.e., not designed for global path optimality, the VFH has been shown to produce near-optimal paths. The original VFH algorithm was based on previous work on the Virtual Force Field, a local path-planning algorithm. VFH was updated in 1998 by Iwan Ulrich and Johann Borenstein, and renamed VFH+ (unofficially "Enhanced VFH"). The approach was updated again in 2000 by Ulrich and Borenstein, and was renamed VFH*. VFH is currently one of the most popular local planners used in mobile robotics, competing with the later-developed dynamic window approach. Many robotic development tools and simulation environments contain built-in support for the VFH, such as the Player Project. VFH The Vector Field Histogram was developed with the aims of being computationally efficient, robust, and insensitive to misreadings. In practice, the VFH algorithm has proven to be fast and reliable, especially when traversing densely populated obstacle courses. At the center of the VFH algorithm is the use of a statistical representation of obstacles, through histogram grids (see also occupancy grid). Such a representation is well suited to inaccurate sensor data, and accommodates fusion of multiple sensor readings. The VFH algorithm contains three major components: Cartesian histogram grid: a two-dimensional Cartesian histogram grid is constructed from the robot's range sensors, such as a sonar or a laser rangefinder. The grid is continuously updated in real time. Polar histogram: a one-dimensional polar histogram is constructed by reducing the Cartesian histogram around the momentary location of the robot. Candidate valley: consecutive sectors with a polar obstacle density below threshold, known as candidate valleys, are selected based on their proximity to the target direction. Once the center of the selected candidate direction is determined, the orientation of the robot is steered to match it. The speed of the robot is reduced when approaching obstacles head-on. VFH+ The VFH+ algorithm improvements include: Threshold hysteresis: a hysteresis increases the smoothness of the planned trajectory. Robot body size: robots of different sizes are taken into account, eliminating the need to manually adjust parameters via low-pass filters. Obstacle look-ahead: sectors that are blocked by obstacles are masked in VFH+, so that the steering angle is not directed into an obstacle. Cost function: a cost function was added to better characterize the performance of the algorithm, and also gives the possibility of switching between behaviours by changing the cost function or its parameters. VFH* In August 2000, Iwan Ulrich and Johann Borenstein published a paper describing VFH*, claiming an improvement upon the original VFH algorithm by explicitly dealing with a shortcoming of any local planning algorithm, namely that global optimality is not ensured. In VFH*, the algorithm verifies the steering command it produces by using the A* search algorithm to minimize the cost and heuristic functions.
Although conceptually simple, this look-ahead verification has been shown in experimental results to deal successfully with problematic situations that the original VFH and VFH+ cannot handle; the resulting trajectory is fast and smooth, with no significant slowdown in the presence of obstacles. See also Motion planning Dynamic window approach References Robot control
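The three components of the basic VFH loop described above (histogram grid, polar histogram, candidate valley) can be illustrated with a short sketch. The following Python code is a simplified illustration under assumed parameter values (sector count, weighting, threshold); it is not the authors' implementation, and the real algorithm additionally handles grid updating, smoothing, robot dynamics, and the VFH+/VFH* masking and cost functions.

```python
import numpy as np

def vfh_steering(grid, robot_rc, target_angle, n_sectors=72, threshold=2.0):
    """Pick a steering direction from an occupancy histogram grid.

    grid         : 2-D array of obstacle certainty values (histogram grid)
    robot_rc     : (row, col) cell of the robot in the grid
    target_angle : desired heading toward the goal, in radians
    """
    # 1. Reduce the 2-D histogram grid to a 1-D polar obstacle density.
    density = np.zeros(n_sectors)
    for r, c in zip(*np.nonzero(grid)):
        dy, dx = r - robot_rc[0], c - robot_rc[1]
        dist = np.hypot(dx, dy)
        if dist == 0:
            continue
        sector = int((np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi / n_sectors))
        # Closer cells contribute more, weighted by their certainty value.
        density[sector] += grid[r, c] ** 2 / dist

    # 2. Find candidate valleys: runs of consecutive sectors below threshold.
    free = density < threshold
    valleys, start = [], None
    for i, f in enumerate(free):
        if f and start is None:
            start = i
        elif not f and start is not None:
            valleys.append((start, i - 1))
            start = None
    if start is not None:
        valleys.append((start, n_sectors - 1))
    if not valleys:
        return None  # no safe direction found

    # 3. Steer toward the centre of the valley closest to the target direction
    #    (wrap-around at 0/2*pi is ignored in this sketch).
    sector_width = 2 * np.pi / n_sectors
    target_sector = int((target_angle % (2 * np.pi)) / sector_width)
    best = min(valleys, key=lambda v: min(abs(target_sector - v[0]),
                                          abs(target_sector - v[1])))
    centre = (best[0] + best[1]) / 2.0
    return centre * sector_width
```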
Vector Field Histogram
Engineering
805
24,471,476
https://en.wikipedia.org/wiki/Denominator%20data
In epidemiology, data or facts about a population are called denominator data. Denominator data are independent of any specific disease or condition. The name arises because, in mathematical models of disease, disease-specific quantities (such as the incidence of a disease in a population, the susceptibility of the population to a specific condition, or resistance to the disease) are expressed as a proportion of some attribute of the general population, and hence appear in the numerator of the fraction or percentage being calculated, while general data about the population typically appear in the denominator; hence the term "denominator data". In an epidemiological compartment model, for example, variables are often scaled to the total population. The susceptible fraction of a population is obtained by taking the ratio of the number of people susceptible to the total population. Susceptibility to a disease may depend on other factors such as age or sex. Data about a population, including age distribution, male/female ratios, and other demographic factors, may be relevant as denominator data. Denominator data are not limited to data describing human populations but also include information about wild and domestic animal populations. See also Incidence Cumulative incidence Prevalence Attributable risk References Epidemiology
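As a toy illustration of how numerator and denominator data combine, the following Python snippet computes rates per 100,000 for two invented age groups; all figures are made up for demonstration.

```python
# Hypothetical counts: new cases (numerator data) and population sizes
# (denominator data) for two age groups. All numbers are invented.
cases = {"0-39": 120, "40+": 300}
population = {"0-39": 60_000, "40+": 50_000}

for group in cases:
    rate = cases[group] / population[group] * 100_000
    print(f"{group}: {rate:.1f} cases per 100,000")
```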
Denominator data
Environmental_science
261
346,781
https://en.wikipedia.org/wiki/Modeling%20language
A modeling language is any artificial language that can be used to express data, information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure of a programming language. Overview A modeling language can be graphical or textual. Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints. Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions. An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS. Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems. A large number of modeling languages appear in the literature. Type of modeling languages Graphical types Example of graphical modeling languages in the field of computer science, project management and systems engineering: Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. Commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system. Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language. C-K theory consists of a modeling language for design processes. DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages. EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language. Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers. Flowchart is a schematic representation of an algorithm or a stepwise process. Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems. IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies. Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure. LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns. Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages. Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis. Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. 
The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification. Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation. Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems. SysML is a domain-specific modeling language for systems engineering that is defined as a UML profile (customization). Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques and has widespread tool support. FLINT is a language that allows a high-level description of normative systems. Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise- and application-level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more. Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system. Architecture Analysis & Design Language (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics. Examples of graphical modeling languages in other fields of science include the following. EAST-ADL is a domain-specific modeling language dedicated to automotive system design. Energy Systems Language (ESL) is a language that aims to model ecological energetics and global economics. IEC 61499 defines a domain-specific modeling language dedicated to distributed industrial process measurement and control systems. Textual types Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is suitable not only for expressing knowledge, requirements, dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and can therefore all be integrated, independent of whether it is stored in central, distributed or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions that use natural language terms and formalized phrases.
For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as: - the Eiffel tower <is located in> Paris - Paris <is classified as a> city whereas information requirements and knowledge can be expressed, for example, as follows: - tower <shall be located in a> geographical area - city <is a kind of> geographical area Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as <is located in> and <is classified as a>) that should be selected from the Gellish English Dictionary-Taxonomy (or from one's own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and definitions of more than 40,000 concepts. An information model in Gellish can express facts or make statements, queries and answers. More specific types In the field of computer science, more specific types of modeling languages have recently emerged. Algebraic Algebraic Modeling Languages (AML) are high-level programming languages for describing and solving high-complexity problems for large-scale mathematical computation (i.e. large-scale optimization problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling, and variables and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it. Behavioral Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as concurrency, nondeterminism, synchronization, and communication. The semantic foundations of behavioral languages are process calculus or process algebra. Discipline-specific A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such a language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, construction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationship between software entities. In addition, discipline-specific modeling language best practices do not preclude practitioners from combining the various notations in a single diagram. Domain-specific Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
Framework-specific A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices. A FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept. Information and knowledge modeling Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning. Object-oriented Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object oriented software design or system design. Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code. Virtual reality Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind. Others Architecture Description Language Face Modeling Language Generative Modelling Language Java Modeling Language Promela Rebeca Modeling Language Service Modeling Language Web Services Modeling Language X3D Applications Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify: system requirements, structures and behaviors. Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled. The more mature modeling languages are precise, consistent and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically. Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation and code generation from the same representations. 
Quality A review of modelling languages is essential to be able to assign which languages are appropriate for different modelling settings. In the term settings we include stakeholders, domain and the knowledge connected. Assessing the language quality is a means that aims to achieve better models. Framework for evaluation Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects the language quality to a framework for general model quality. Five areas are used in this framework to describe language quality and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework. Domain appropriateness The framework states the ability to represent the domain as domain appropriateness. The statement appropriateness can be a bit vague, but in this particular context it means able to express. You should ideally only be able to express things that are in the domain but be powerful enough to include everything that is in the domain. This requirement might seem a bit strict, but the aim is to get a visually expressed model which includes everything relevant to the domain and excludes everything not appropriate for the domain. To achieve this, the language has to have a good distinction of which notations and syntaxes that are advantageous to present. Participant appropriateness To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain. Modeller appropriateness Last paragraph stated that knowledge of the stakeholders should be presented in a good way. In addition it is imperative that the language should be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to lacks in the language. Comprehensibility appropriateness Comprehensibility appropriateness makes sure that the social actors understand the model due to a consistent use of the language. To achieve this the framework includes a set of criteria. The general importance that these express is that the language should be flexible, easy to organize and easy to distinguish different parts of the language internally as well as from other languages. In addition to this, the goal should be as simple as possible and that each symbol in the language has a unique representation. This is in connection to also to the structure of the development requirements. . Tool appropriateness To ensure that the domain actually modelled is usable for analyzing and further processing, the language has to ensure that it is possible to reason in an automatic way. To achieve this it has to include formal syntax and semantics. Another advantage by formalizing is the ability to discover errors in an early stage. It is not always that the language best fitted for the technical actors is the same as for the social actors. 
Organizational appropriateness The language used should be appropriate for the organizational context, e.g. standardized within the organization, or supported by tools that are chosen as standard in the organization. See also Model-based testing (MBT) Model-driven engineering (MDE) References Further reading John Krogstie (2003). "Evaluating UML using a generic quality framework". SINTEF Telecom and Informatics and IDI, NTNU, Norway. Krogstie and Sølvberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of Computer and Information Sciences. Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business process modeling languages using a generic quality framework". Institute of Computer and Information Sciences. External links Fundamental Modeling Concepts Software Modeling Languages Portal BIP – Incremental Component-based Construction of Real-time Systems Gellish Formal English Specification languages
Modeling language
Engineering
3,297
7,887,320
https://en.wikipedia.org/wiki/Natural%20Health%20Products%20Directorate
The Natural Health Products Directorate (NHPD) is the division of the Health Products and Food Branch of Health Canada that is responsible for implementation of the Natural Health Product Regulations, including Good Manufacturing Practices, for Natural Health Products for sale in Canada. Aspects As the regulatory authority for nutritionals, the NHPD controls two aspects of consumer and product safety and efficacy. The issuance of site licences, required for manufacturing facilities in Canada to produce nutritionals for sale in Canada. The issuance of Natural Health Product Numbers (NPNs), required for each nutritional marketed in Canada. Each product is evaluated for formulation, dosage requirements, label claims, safety, and proof of efficacy prior to granting an NPN. External links NHPD Website Health Canada Regulators of biotechnology products
Natural Health Products Directorate
Biology
155
55,011,042
https://en.wikipedia.org/wiki/Amauroderma%20subsessile
Amauroderma subsessile is a polypore fungus in the family Ganodermataceae. It was described as a new species in 2015 by mycologists Allyne Christina Gomes-Silva, Leif Ryvarden, and Tatiana Gibertoni. The specific epithet subsessile (from the Latin sub, "somewhat", and sessilis, "without a stipe") refers to "the basidiomata not completely sessile, with a short to long stipe". A. subsessile is found in the states of Rondônia and Pará in the Brazilian Amazon, as well as in Costa Rica and Panama. References subsessile Fungi described in 2015 Fungi of Central America Fungi of Brazil Taxa named by Leif Ryvarden Fungus species
Amauroderma subsessile
Biology
164
567,527
https://en.wikipedia.org/wiki/Provisional%20designation%20in%20astronomy
Provisional designation in astronomy is the naming convention applied to astronomical objects immediately following their discovery. The provisional designation is usually superseded by a permanent designation once a reliable orbit has been calculated. Approximately 47% of the more than 1,100,000 known minor planets remain provisionally designated, as hundreds of thousands have been discovered in the last two decades. Minor planets The current system of provisional designation of minor planets (asteroids, centaurs and trans-Neptunian objects) has been in place since 1925. It superseded several previous conventions, each of which was in turn rendered obsolete by the increasing numbers of minor planet discoveries. A modern or new-style provisional designation consists of the year of discovery, followed by two letters and, possibly, a suffixed number. New-style provisional designation For example, the provisional designation 1992 QB1 (15760 Albion) stands for the 27th body identified during 16–31 August 1992: 1992 – the first element indicates the year of discovery. Q – the first letter indicates the half-month of the object's discovery within that year and ranges from A (first half of January) to Y (second half of December), while the letters I and Z are not used. The first half is always the 1st through the 15th of the month, regardless of the number of days in the second "half". Thus, Q indicates the period from 16 to 31 August. B1 – the second letter and a numerical suffix indicate the order of discovery within that half-month. The first 25 discoveries of the half-month receive only a letter (A to Z) without a suffix, while the letter I is not used (to avoid potential confusion with the digit 1). Because modern techniques typically yield hundreds if not thousands of discoveries per half-month, the subscript number is appended to indicate the number of times that the letters from A to Z have cycled through. The suffix 1 indicates one completed cycle (1 cycle × 25 letters = 25), while B is the 2nd position in the current cycle. Thus, B1 stands for the 27th minor planet discovered in a half-month. The packed form of 1992 QB1 is written as J92Q01B. This scheme is now also used retrospectively for pre-1925 discoveries. For these, the first digit of the year is replaced by an A. For example, A801 AA indicates the first object discovered in the first half of January 1801 (1 Ceres). Further explanations During the first half-month of January 2014, the first minor planet identification was assigned the provisional designation 2014 AA. The assignment then continued to the end of the cycle at 2014 AZ, which was in turn followed by the first identification of the second cycle, 2014 AA1. The assignment in this second cycle continued with 2014 AB1, 2014 AC1, ... until 2014 AZ1, and then was continued with the first item in the third cycle. With the beginning of a new half-month on 16 January 2014, the first letter changed to "B", and the series started with 2014 BA. An idiosyncrasy of this system is that the second letter is listed before the number, even though the second letter is considered "least significant". This is in contrast to most of the world's numbering systems. This idiosyncrasy is not seen, however, in the so-called packed form (packed designation). A packed designation has no spaces. It may also use letters to encode the designation's year and subscript number. It is frequently used in online and electronic documents.
For example, the provisional designation is written as K07Tf8A in the packed form, where "K07" stands for the year 2007, and "f8" for the subscript number 418. 90377 Sedna, a large trans-Neptunian object, had the provisional designation , meaning it was identified in the first half of November 2003 (as indicated by the letter "V"), and that it was the 302nd object identified during that time, as 12 cycles of 25 letters give 300, and the letter "B" is the second position in the current cycle. Survey designations do not follow the rules for new-style provisional designations. For technical reasons, such as ASCII limitations, the numerical suffix is not always subscripted, but sometimes "flattened out", so that can also be written as . A very busy half month was the second half of January 2015 (letter "B"), which saw a total of 14,208 new minor planet identifications . One of the last assignments in this period was and corresponds to the 14,208th position in the sequence. Survey designations Minor planets discovered during the Palomar–Leiden survey including three subsequent Trojan-campaigns, which altogether discovered more than 4,000 asteroids and Jupiter trojans between 1960 and 1977, have custom designations that consist of a number (order in the survey) followed by a space and one of the following identifiers: P-L  Palomar–Leiden survey (1960–1970) T-1  Palomar–Leiden Trojan survey (1971) T-2  Palomar–Leiden Trojan survey (1973) T-3  Palomar–Leiden Trojan survey (1977) For example, the asteroid 6344 P-L is the 6344th minor planet in the original Palomar–Leiden survey, while the asteroid 4835 T-1 was discovered during the first Trojan-campaign. The majority of these bodies have since been assigned a number and many are already named. Historical designations The first four minor planets were discovered in the early 19th century, after which there was a lengthy gap before the discovery of the fifth. Astronomers initially had no reason to believe that there would be countless thousands of minor planets, and strove to assign a symbol to each new discovery, in the tradition of the symbols used for the major planets. For example, 1 Ceres was assigned a stylized sickle (⚳), 2 Pallas a stylized lance or spear (⚴), 3 Juno a scepter (⚵), and 4 Vesta an altar with a sacred fire (). All had various graphic forms, some of considerable complexity. It soon became apparent, though, that continuing to assign symbols was impractical and provided no assistance when the number of known minor planets was in the dozens. Johann Franz Encke introduced a new system in the Berliner Astronomisches Jahrbuch (BAJ) for 1854, published in 1851, in which he used encircled numbers instead of symbols. Encke's system began the numbering with Astrea which was given the number (1) and went through (11) Eunomia, while Ceres, Pallas, Juno and Vesta continued to be denoted by symbols, but in the following year's BAJ, the numbering was changed so that Astraea was number (5). The new system found popularity among astronomers, and since then, the final designation of a minor planet is a number indicating its order of discovery followed by a name. Even after the adoption of this system, though, several more minor planets received symbols, including 28 Bellona the morning star and lance of Mars's martial sister, 35 Leukothea an ancient lighthouse and 37 Fides a Latin cross (). 
According to Webster's A Dictionary of the English Language, four more minor planets were also given symbols: 16 Psyche, 17 Thetis, 26 Proserpina, and 29 Amphitrite. However, there is no evidence that these symbols were ever used outside of their initial publication in the Astronomische Nachrichten. 134340 Pluto is an exception: it is a high-numbered minor planet that received a graphical symbol with significant astronomical use (♇), because it was considered a major planet on its discovery, and did not receive a minor planet number until 2006. Graphical symbols continue to be used for some minor planets, and assigned for some recently discovered larger ones, mostly by astrologers (see astronomical symbol and astrological symbol). Three centaurs – 2060 Chiron, 5145 Pholus, and 7066 Nessus – and the largest trans-Neptunian objects – 50000 Quaoar, 90377 Sedna, 90482 Orcus, 136108 Haumea, 136199 Eris, 136472 Makemake, and 225088 Gonggong – have relatively standard symbols among astrologers: the symbols for Haumea, Makemake, and Eris have even been occasionally used in astronomy. However, such symbols are generally not in use among astronomers. Genesis of the current system Several different notation and symbolic schemes were used during the latter half of the nineteenth century, but the present form first appeared in the journal Astronomische Nachrichten (AN) in 1892. New numbers were assigned by the AN on receipt of a discovery announcement, and a permanent designation was then assigned once an orbit had been calculated for the new object. At first, the provisional designation consisted of the year of discovery followed by a letter indicating the sequence of the discovery, but omitting the letter I (historically, sometimes J was omitted instead). Under this scheme, 333 Badenia was initially designated , 163 Erigone was , etc. In 1893, though, increasing numbers of discoveries forced the revision of the system to use double letters instead, in the sequence AA, AB... AZ, BA and so on. The sequence of double letters was not restarted each year, so that followed and so on. In 1916, the letters reached ZZ and, rather than starting a series of triple-letter designations, the double-letter series was restarted with . Because a considerable amount of time could sometimes elapse between exposing the photographic plates of an astronomical survey and actually spotting a small Solar System object on them (witness the story of Phoebe's discovery), or even between the actual discovery and the delivery of the message (from some far-flung observatory) to the central authority, it became necessary to retrofit discoveries into the sequence — to this day, discoveries are still dated based on when the images were taken, and not on when a human realised they were looking at something new. In the double-letter scheme, this was not generally possible once designations had been assigned in a subsequent year. The scheme used to get round this problem was rather clumsy and used a designation consisting of the year and a lower-case letter in a manner similar to the old provisional-designation scheme for comets. For example, (note that there is a space between the year and the letter to distinguish this designation from the old-style comet designation 1915a, Mellish's first comet of 1915), 1917 b. In 1914 designations of the form year plus Greek letter were used in addition. 
Temporary minor planet designations Temporary designations are custom designations given by an observer or discovering observatory prior to the assignment of a provisional designation by the MPC. These intricate designations were used prior to the Digital Age, when communication was slow or even impossible (e.g. during WWI). The listed temporary designations by observatory/observer use uppercase and lowercase letters (LETTER, letter), digits, numbers and years, as well Roman numerals (ROM) and Greek letters (greek). Comets The system used for comets was complex previous to 1995. Originally, the year was followed by a space and then a Roman numeral (indicating the sequence of discovery) in most cases, but difficulties always arose when an object needed to be placed between previous discoveries. For example, after Comet 1881 III and Comet 1881 IV might be reported, an object discovered in between the discovery dates but reported much later couldn't be designated "Comet 1881 III½". More commonly comets were known by the discoverer's name and the year. An alternate scheme also listed comets in order of time of perihelion passage, using lower-case letters; thus "Comet Faye" (modern designation 4P/Faye) was both Comet 1881 I (first comet to pass perihelion in 1881) and Comet 1880c (third comet to be discovered in 1880). The system since 1995 is similar to the provisional designation of minor planets. For comets, the provisional designation consists of the year of discovery, a space, one letter (unlike the minor planets with two) indicating the half-month of discovery within that year (A=first half of January, B=second half of January, etc. skipping I (to avoid confusion with the number 1 or the numeral I) and not reaching Z), and finally a number (not subscripted as with minor planets), indicating the sequence of discovery within the half-month. Thus, the eighth comet discovered in the second half of March 2006 would be given the provisional designation 2006 F8, whilst the tenth comet of late March would be 2006 F10. If a comet splits, its segments are given the same provisional designation with a suffixed letter A, B, C, ..., Z, AA, AB, AC... If an object is originally found asteroidal, and later develops a cometary tail, it retains its asteroidal designation. For example, minor planet 1954 PC turned out to be Comet Faye, and we thus have "4P/1954 PC" as one of the designations of said comet. Similarly, minor planet was reclassified as a comet, and because it was discovered by LINEAR, it is now known as 176P/LINEAR (LINEAR 52) and (118401) LINEAR. Provisional designations for comets are given condensed or "packed form" in the same manner as minor planets. 2006 F8, if a periodic comet, would be listed in the IAU Minor Planet Database as PK06F080. The last character is purposely a zero, as that allows comet and minor planet designations not to overlap. Periodic comets Comets are assigned one of four possible prefixes as a rough classification. The prefix "P" (as in, for example, P/1997 C1, a.k.a. Comet Gehrels 4) designates a "periodic comet", one which has an orbital period of less than 200 years or which has been observed during more than a single perihelion passage (e.g. 153P/Ikeya-Zhang, whose period is 367 years). They receive a permanent number prefix after their second observed perihelion passage (see List of periodic comets). Non-periodic comets Comets which do not fulfill the "periodic" requirements receive the "C" prefix (e.g. C/2006 P1, the Great Comet of 2007). 
Comets initially labeled as "non-periodic" may, however, switch to "P" if they later fulfill the requirements. Comets which have been lost or have disintegrated are prefixed "D" (e.g. D/1993 F2, Comet Shoemaker-Levy 9). Finally, comets for which no reliable orbit could be calculated, but are known from historical records, are prefixed "X" as in, for example, X/1106 C1. (Also see List of non-periodic comets and List of hyperbolic comets.) Satellites and rings of planets When satellites or rings are first discovered, they are given provisional designations such as "" (the 11th new satellite of Jupiter discovered in 2000), "" (the first new satellite of Pluto discovered in 2005), or "" (the second new ring of Saturn discovered in 2004). The initial "S/" or "R/" stands for "satellite" or "ring", respectively, distinguishing the designation from the prefixes "C/", "D/", "P/", and "X/" used for comets. These designations are sometimes written as "", dropping the second space. The prefix "S/" indicates a natural satellite, and is followed by a year (using the year when the discovery image was acquired, not necessarily the date of discovery). A one-letter code written in upper case identifies the planet such as J and S for Jupiter and Saturn, respectively (see list of one-letter abbreviations), and then a number identifies sequentially the observation. For example, Naiad, the innermost moon of Neptune, was at first designated "". Later, once its existence and orbit were confirmed, it received its full designation, "". The Roman numbering system arose with the very first discovery of natural satellites other than Earth's Moon: Galileo referred to the Galilean moons as I through IV (counting from Jupiter outward), in part to spite his rival Simon Marius, who had proposed the names now adopted. Similar numbering schemes naturally arose with the discovery of moons around Saturn and Uranus. Although the numbers initially designated the moons in orbital sequence, new discoveries soon failed to conform with this scheme (e.g. "" is Amalthea, which orbits closer to Jupiter than does Io). The unstated convention then became, at the close of the 19th century, that the numbers more or less reflected the order of discovery, except for prior historical exceptions (see the Timeline of discovery of Solar System planets and their natural satellites). The convention has been extended to natural satellites of minor planets, such as "". Moons of minor planets The provisional designation system for minor planet satellites, such as asteroid moons, follows that established for the satellites of the major planets. With minor planets, the planet letter code is replaced by the minor planet number in parentheses. Thus, the first observed moon of 87 Sylvia, discovered in 2001, was at first designated S/2001 (87) 1, later receiving its permanent designation of (87) Sylvia I Romulus. Where more than one moon has been discovered, Roman numerals specify the discovery sequence, so that Sylvia's second moon is designated (87) Sylvia II Remus. Since Pluto was reclassified in 2006, discoveries of Plutonian moons since then follow the minor-planet system: thus Nix and Hydra, discovered in 2005, were S/2005 P 2 and S/2005 P 1, but Kerberos and Styx, discovered in 2011 and 2012 respectively, were S/2011 (134340) 1 and S/2012 (134340) 1. That said, there has been some unofficial use of the formats "S/2011 P 1" and "S/2012 P 1". Packed designation Packed designations are used in online and electronic documents as well as databases. 
Packed minor planet designation The Orbit Database (MPCORB) of the Minor Planet Center (MPC) uses the "packed form" to refer to all provisionally designated minor planets. The idiosyncrasy found in the new-style provisional designations no longer exists in this packed-notation system, as the second letter is now listed after the subscript number, or its equivalent 2-digit code. For an introduction to provisional minor planet designations in the "un-packed" form, see above. Provisional packed designations The system of packed provisional minor planet designations: uses exactly 7 characters with no spaces for all designations; compacts 4-digit years to a 3-character code, e.g. 2014 is written as K14; converts all subscript numbers to a 2-character code (00 is used when there is no following subscript, 99 is used for subscript 99, A0 is used for subscript 100, and A1 is used for 101); and places the packed 2-character subscript code between the half-month letter and the second (discovery order) letter (e.g. a designation with discovery order K and subscript 102 has A2K as the last three characters of its packed form). Contrary to the new-style system, the letter "i" is used in the packed form both for the year and the numeric suffix. The compacting system provides upper and lowercase letters to encode up to 619 "cycles". This means that 15,500 designations (620 cycles × 25 letters) within a half-month can be packed, which is a few times more than the designations assigned monthly in recent years. Examples 1995 XA is written as J95X00A, 1995 XL1 is written as J95X01L, 2016 EK156 is written as K16EF6K, and 2007 TA418 is written as K07Tf8A. Description The year 1995 is compacted to J95. As 1995 XA has no subscript number, 00 is used as a placeholder instead, placed directly after the half-month letter "X". The year 1995 is compacted to J95. The subscript number "1" is padded to 01 to maintain the length of 7 characters, and placed after the half-month letter. The year 2016 is compacted to K16. The subscript number "156" exceeds 2 digits and is converted to F6. The year 2007 is compacted to K07. The subscript number "418" exceeds 2 digits and is converted to f8. Conversion tables Comets follow the minor-planet scheme for their first four characters. The fifth and sixth characters encode the number. The seventh character is usually 0, unless it is a component of a split comet, in which case it encodes in lowercase the letter of the fragment. Examples 1995 A1 is written as J95A010 1995 P1-B is written as J95P01b (i.e. fragment B of comet 1995 P1) 2088 A103 is written as K88AA30 (as the subscript number exceeds two digits and is converted using the same letter scheme). There is also an extended form that adds five characters to the front. The fifth character is one of "C", "D", "P", or "X", according to the status of the comet. If the comet is periodic, then the first four characters are the periodic-comet number (padded to the left with zeroes); otherwise, they are blank. Natural satellites use the format for comets, except that the last column is always 0. Packed survey designations Survey designations used during the Palomar–Leiden Survey (PLS) have a simpler packed form, as for example: 6344 P-L is written as PLS6344, 4835 T-1 is written as T1S4835, 1010 T-2 is written as T2S1010, and 4101 T-3 is written as T3S4101. Note that the survey designations are distinguished from provisional designations by having the letter S in the third character, which contains a decimal digit in provisional designations and permanent numbers.
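The packing rules described above are mechanical enough to express in a few lines of code. The following Python function is a sketch based on those rules; it ignores survey designations and pre-1925 "A"-prefixed designations, and it is an illustration rather than an official MPC tool.

```python
def pack_provisional(designation: str) -> str:
    """Convert a new-style provisional designation (e.g. '2007 TA418')
    to the 7-character packed form used in MPCORB (e.g. 'K07Tf8A')."""
    base62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

    year_str, rest = designation.split()
    century_code = {18: "I", 19: "J", 20: "K"}[int(year_str[:2])]

    half_month = rest[0]                              # half-month letter (A..Y, no I)
    order_letter = rest[1]                            # order within the current cycle
    cycles = int(rest[2:]) if len(rest) > 2 else 0    # subscript number, 0 if absent

    # The two-character cycle code goes between the two letters.
    cycle_code = base62[cycles // 10] + str(cycles % 10)
    return f"{century_code}{year_str[2:]}{half_month}{cycle_code}{order_letter}"


# The worked examples above round-trip correctly:
assert pack_provisional("1995 XA") == "J95X00A"
assert pack_provisional("2016 EK156") == "K16EF6K"
assert pack_provisional("2007 TA418") == "K07Tf8A"
```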
Permanent packed designations A packed form for permanent designations also exists (these are numbered minor planets, with or without a name). In this case, only the designation's number is used and converted to a 5-character string. The rest of the permanent designation is ignored. Minor planet numbers below 100,000 are simply zero-padded to 5 digits from the left side. For minor planets between 100,000 and 619,999 inclusive, a single letter (A–Z and a–z) is used, similarly to the provisional subscript number (also see table above): A covers the number range 100,000–109,999 B covers the number range 110,000–119,999 a covers the number range 360,000–369,999 z covers the number range 610,000–619,999 Examples 00001 encodes 1 Ceres 99999 encodes minor planet (99999) A0000 encodes 100000 Astronautica A9999 encodes (109999) B0000 encodes (110000) G3693 encodes 163693 Atira Y2843 encodes 342843 Davidbowie g0356 encodes 420356 Praamzius z9999 encodes (619999) For minor planets numbered 620,000 or higher, a tilde "~" is used as the first character. The subsequent 4 characters, encoded in base 62 (using 0–9, then A–Z, and a–z, in this specific order), store the object's number minus 620,000. This extended system allows for the encoding of more than 15 million minor planet numbers. For example: (620000) is represented as ~0000, (620061) as ~000z, (3140113) as ~AZaz, and (15396335) as ~zzzz. For comets, permanent designations only apply to periodic comets that are seen to return. The first four characters are the number of the comet (left-padded with zeroes). The fifth character is "P", unless the periodic comet is lost or defunct, in which case it is "D". For natural satellites, permanent packed designations take the form of the planet letter, then three digits containing the converted Roman numeral (left-padded with zeroes), and finally an "S". For example, Jupiter XIII Leda is J013S, and Neptune II Nereid is N002S. See also Minor-planet designation Naming of moons References External links New- And Old-Style Minor Planet Designations (Minor Planet Center) Astronomical nomenclature Comets Minor planets Moons
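The permanent packed form follows directly from the ranges listed above: plain zero-padding below 100,000, a letter in place of the leading digits up to 619,999, and a tilde plus a base-62 offset from 620,000 upward. A minimal Python sketch (illustrative names, not an official MPC tool) is:

BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def pack_permanent(number):
    """Convert a minor-planet number to its 5-character permanent packed form."""
    if number < 100_000:                      # plain zero-padded numbers
        return f"{number:05d}"
    if number < 620_000:                      # leading digits replaced by a single letter
        return BASE62[number // 10_000] + f"{number % 10_000:04d}"
    offset = number - 620_000                 # tilde plus 4 base-62 digits
    digits = ""
    for _ in range(4):
        digits = BASE62[offset % 62] + digits
        offset //= 62
    return "~" + digits

assert pack_permanent(1) == "00001"           # 1 Ceres
assert pack_permanent(100_000) == "A0000"     # 100000 Astronautica
assert pack_permanent(163_693) == "G3693"     # 163693 Atira
assert pack_permanent(620_000) == "~0000"
assert pack_permanent(15_396_335) == "~zzzz"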
Provisional designation in astronomy
Astronomy
5,148
1,348,510
https://en.wikipedia.org/wiki/ISO/IEC%209126
ISO/IEC 9126 Software engineering — Product quality was an international standard for the evaluation of software quality. It has been replaced by ISO/IEC 25010:2011. The fundamental objective of the ISO/IEC 9126 standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definitions of "success". By clarifying, then agreeing on the project priorities and subsequently converting abstract priorities (compliance) to measurable values (output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals. The standard is divided into four parts: quality model external metrics internal metrics quality in use metrics. Quality The quality model presented in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows: Functionality - "A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs." Suitability Accuracy Interoperability Security Functionality compliance Reliability - "A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time." Maturity Fault tolerance Recoverability Reliability compliance Usability - "A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users." Understandability Learnability Operability Attractiveness Usability compliance Efficiency - "A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions." Time behaviour Resource utilization Efficiency compliance Maintainability - "A set of attributes that bear on the effort needed to make specified modifications." Analyzability Changeability Stability Testability Maintainability compliance Portability - "A set of attributes that bear on the ability of software to be transferred from one environment to another." Adaptability Installability Co-existence Replaceability Portability compliance Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products. Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, which are users of components such as software libraries. The standard provides a framework for organizations to define a quality model for a software product. On doing so, however, it leaves up to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics which evaluates the degree of presence of quality attributes. Internal Metrics Internal metrics are those which do not rely on software execution (static measure). External metrics External metrics are applicable to running software. 
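The standard leaves the concrete quality model to each organization, as noted above. Purely as an illustration of what "specifying target values for quality metrics" might look like in practice, the following Python sketch records a few characteristics and sub-characteristics from the model together with invented metrics, targets, and measurements; none of the numbers come from the standard.

# The characteristic and sub-characteristic names below come from the text; the metrics,
# target values and measurements are invented for illustration only.
quality_model = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability", "Security"],
    "Reliability":   ["Maturity", "Fault tolerance", "Recoverability"],
    "Efficiency":    ["Time behaviour", "Resource utilization"],
}

targets = {
    ("Reliability", "Recoverability"): ("mean time to restore (min)", 15),
    ("Efficiency", "Time behaviour"):  ("p95 response time (ms)", 300),
}

measured = {
    ("Reliability", "Recoverability"): 12,
    ("Efficiency", "Time behaviour"):  450,
}

for (characteristic, sub), (metric, target) in targets.items():
    value = measured[(characteristic, sub)]
    verdict = "meets target" if value <= target else "misses target"
    print(f"{characteristic} / {sub}: {metric} = {value} (target {target}) -> {verdict}")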
Quality-in-use metrics Quality-in-use metrics are only available when the final product is used in real conditions. Ideally, the internal quality determines the external quality and external quality determines quality in use. This standard stems from the GE model for describing software quality, presented in 1977 by McCall et al., which is organized around three types of quality characteristic: Factors (to specify): They describe the external view of the software, as viewed by the users. Criteria (to build): They describe the internal view of the software, as seen by the developer. Metrics (to control): They are defined and used to provide a scale and method for measurement. ISO/IEC 9126 distinguishes between a defect and a nonconformity, a defect being "The nonfulfilment of intended usage requirements", whereas a nonconformity is "The nonfulfilment of specified requirements". A similar distinction is made between validation and verification, known as V&V in the testing trade. History ISO/IEC 9126 was issued on December 19, 1991. On June 15, 2001, ISO/IEC 9126:1991 was replaced by ISO/IEC 9126:2001 (four parts 9126–1 to 9126–4). On March 1, 2011, ISO/IEC 9126 was replaced by ISO/IEC 25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Compared to 9126, "security" and "compatibility" were added as main characteristics. Developments ISO/IEC then started work on SQuaRE (Software product Quality Requirements and Evaluation), a more extensive series of standards to replace ISO/IEC 9126, with numbers of the form ISO/IEC 250mn. For instance, ISO/IEC 25000 was issued in 2005, and ISO/IEC 25010, which supersedes ISO/IEC 9126-1, was issued in March 2011. ISO 25010 has eight product quality characteristics (in contrast to ISO 9126's six), and 31 subcharacteristics. "Functionality" is renamed "functional suitability". "Functional completeness" is added as a subcharacteristic, and "interoperability" and "security" are moved elsewhere. "Accuracy" is renamed "functional correctness", and "suitability" is renamed "functional appropriateness". "Efficiency" is renamed "performance efficiency". "Capacity" is added as a subcharactersitic. "Compatibility" is a new characteristic, with "co-existence" moved from "portability" and "interoperability" moved from "functionality". "Usability" has new subcharacteristics of "user error protection" and "accessibility" (use by people with a wide range of characteristics). "Understandability" is renamed "appropriateness recognizability", and "attractiveness" is renamed "user interface aesthetics". "Reliability" has a new subcharacteristic of "availability" (when required for use). "Security" is a new characteristic with subcharacteristics of "confidentiality" (data accessible only by those authorized), "integrity" (protection from unauthorized modification), "non-repudiation" (actions can be proven to have taken place), "accountability" (actions can be traced to who did them), and "authenticity" (identity can be proved to be the one claimed). "Maintainability" has new subcharacteristics of "modularity" (changes in one component have a minimal impact on others) and "reusability"; "changeability" and "stability" are rolled up into "modifiability". "Portability" has "co-existence" moved elsewhere. 
See also CISQ ISO/IEC 25002:2024 ISO/IEC 25010 ISO 9000 Verification and Validation Non-functional requirements Squale ISO/IEC JTC 1/SC 7 Software quality References Scalet et al., 2000: ISO/IEC 9126 and 14598 integration aspects: A Brazilian viewpoint. The Second World Congress on Software Quality, Yokohama, Japan, 2000. Software quality 09126 09126
ISO/IEC 9126
Engineering
1,529
16,761,943
https://en.wikipedia.org/wiki/Card%20sorting
Card sorting is a technique in user experience design in which a person tests a group of subject experts or users to generate a dendrogram (category tree) or folksonomy. It is a useful approach for designing information architecture, workflows, menu structure, or web site navigation paths. Card sorting uses a relatively low-tech approach. The person conducting the test (usability analyst, user experience designer, etc.) first identifies key concepts and writes them on index cards or Post-it notes. Test subjects, individually or sometimes as a group, then arrange the cards to represent how they see the structure and relationships of the information. Groups can be organized as collaborative groups (focus groups) or as repeated individual sorts. The literature discusses appropriate numbers of users needed to produce trustworthy results. A card sort is commonly undertaken when designing a navigation structure for an environment that offers a variety of content and functions, such as a web site. In that context, the items to organize are those significant in the environment. The way the items are organized should make sense to the target audience and cannot be determined from first principles. The field of information architecture is founded on the study of the structure of information. If an accepted and standardized taxonomy exists for a subject, it would be natural to apply that taxonomy to organize both the information in the environment, and any navigation to particular subjects or functions. Card sorting is useful when: The variety of items to organize is so great that no existing taxonomy is accepted as organizing the items. Similarities among the items make them difficult to divide clearly into categories. Members of the audience that uses the environment differ significantly in how they view the similarities among items and the appropriate groupings of items. Basic method To perform a card sort: A person representative of the audience receives a set of index cards with terms written on them. This person groups the terms in whatever way they think is logical, and gives each group a category name, either from an existing card or by writing a name on a blank card. Testers repeat this process across a group of test subjects. The testers later analyze the results to discover patterns. Variants Open card sorting In an open card sort, participants create their own names for the categories. This helps reveal not only how they mentally classify the cards, but also what terms they use for the categories. Open sorting is generative; it is typically used to discover patterns in how participants classify, which in turn helps generate ideas for organizing information. Closed card sorting In a closed card sort, participants are provided with a predetermined set of category names. They then assign the index cards to these fixed categories. This helps reveal the degree to which the participants agree on which cards belong under each category. Closed sorting is evaluative; it is typically used to judge whether a given set of category names provides an effective way to organize a given collection of content. Reverse card sorting In a reverse card sort (more popularly called tree testing), an existing structure of categories and sub-categories is tested. Users are given tasks and are asked to complete them navigating a collection of cards. 
Each card contains the names of subcategories related to a category, and the user should find the card most relevant to the given task starting from the main card with the top-level categories. This ensures that the structure is evaluated in isolation, nullifying the effects of navigational aids, visual design, and other factors. Reverse card sorting is evaluative—it judges whether a predetermined hierarchy provides a good way to find information. Modified-Delphi card sorting Created by Celeste Paul, the Modified-Delphi card sort is based on the Delphi method. Rather than each participant creating their own card sort, only the first participant performs a full card sort, organizing and arranging the items. The next participant iterates on the first participant's model, then the third participant iterates on the second's model, and so on. The idea is that each iteration refines the card sort, so fewer participants are needed and consensus is built sooner. Analysis Various methods can be used to analyze the data. The purpose of the analysis is to extract patterns from the population of test subjects, so that a common set of categories and relationships emerges. This common set is then incorporated into the design of the environment, either for navigation or for other purposes. Card sorting is also evaluated through dendrograms, as in the clustering sketch shown below. There is some indication that different evaluation methods for card sorting provide different results. Card sorting is an established technique with an emerging literature. Online (remote) card sorting A number of web-based tools are available to perform card sorting. The perceived advantage of web-based card sorting is that it reaches a larger group of participants at a lower cost. The software can also help analyze the sort results. A perceived disadvantage of a remote card sort is the lack of personal interaction between card sort participants and the card sort administrator, an interaction that may produce valuable insights. See also Cluster analysis Group concept mapping Q methodology References Further reading Folksonomy Human–computer interaction Qualitative research Survey methodology Usability
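One common way to derive the dendrogram mentioned in the Analysis section, though by no means the only one, is to count how often each pair of cards is grouped together across participants, turn the counts into distances, and apply hierarchical clustering. The Python sketch below uses invented participant data and standard SciPy routines.

import numpy as np
import matplotlib.pyplot as plt
from itertools import combinations
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

cards = ["Pricing", "Refunds", "Shipping", "Careers", "Press kit"]
# Each participant's open sort: a list of groups, each group a set of card names (invented data).
sorts = [
    [{"Pricing", "Refunds"}, {"Shipping"}, {"Careers", "Press kit"}],
    [{"Pricing", "Refunds", "Shipping"}, {"Careers", "Press kit"}],
    [{"Pricing"}, {"Refunds", "Shipping"}, {"Careers"}, {"Press kit"}],
]

n = len(cards)
together = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            i, j = cards.index(a), cards.index(b)
            together[i, j] += 1
            together[j, i] += 1

# Cards grouped together often are treated as "close"; convert co-occurrence to distance.
distance = 1 - together / len(sorts)
np.fill_diagonal(distance, 0)

tree = linkage(squareform(distance), method="average")
dendrogram(tree, labels=cards)
plt.show()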
Card sorting
Engineering
1,031
74,131,788
https://en.wikipedia.org/wiki/Flotufolastat%20%2818F%29
Flotufolastat (18F), sold under the brand name Posluma, is a radioactive diagnostic agent for use with positron emission tomography (PET) imaging for prostate cancer. The active ingredient is flotufolastat (18F). Flotufolastat (18F) was approved for medical use in the United States in May 2023. Medical uses Flotufolastat (18F) is indicated for positron emission tomography of prostate-specific membrane antigen positive lesions in men with prostate cancer. References External links Medicinal radiochemistry PET radiotracers Prostate cancer Radiopharmaceuticals Tert-butyl compounds Peptide therapeutics Fluorine compounds Gallium compounds
Flotufolastat (18F)
Chemistry
173
26,463
https://en.wikipedia.org/wiki/Rigel
Rigel is a blue supergiant star in the constellation of Orion. It has the Bayer designation β Orionis, which is Latinized to Beta Orionis and abbreviated Beta Ori or β Ori. Rigel is the brightest and most massive componentand the eponymof a star system of at least four stars that appear as a single blue-white point of light to the naked eye. This system is located at a distance of approximately from the Sun. A star of spectral type B8Ia, Rigel is 120,000 times as luminous as the Sun, and is 18 to 24 times as massive, depending on the method and assumptions used. Its radius is more than seventy times that of the Sun, and its surface temperature is . Due to its stellar wind, Rigel's mass-loss is estimated to be ten million times that of the Sun. With an estimated age of seven to nine million years, Rigel has exhausted its core hydrogen fuel, expanded, and cooled to become a supergiant. It is expected to end its life as a typeII supernova, leaving a neutron star or a black hole as a final remnant, depending on the initial mass of the star. Rigel varies slightly in brightness, its apparent magnitude ranging from 0.05 to 0.18. It is classified as an Alpha Cygni variable due to the amplitude and periodicity of its brightness variation, as well as its spectral type. Its intrinsic variability is caused by pulsations in its unstable atmosphere. Rigel is generally the seventh-brightest star in the night sky and the brightest star in Orion, though it is occasionally outshone by Betelgeuse, which varies over a larger range. A triple-star system is separated from Rigel by an angle of . It has an apparent magnitude of 6.7, making it 1/400th as bright as Rigel. Two stars in the system can be seen by large telescopes, and the brighter of the two is a spectroscopic binary. These three stars are all blue-white main-sequence stars, each three to four times as massive as the Sun. Rigel and the triple system orbit a common center of gravity with a period estimated to be 24,000 years. The inner stars of the triple system orbit each other every 10 days, and the outer star orbits the inner pair every 63 years. A much fainter star, separated from Rigel and the others by nearly an arc minute, may be part of the same star system. Nomenclature In 2016, the International Astronomical Union (IAU) included the name "Rigel" in the IAU Catalog of Star Names. According to the IAU, this proper name applies only to the primary component A of the Rigel system. The system is listed variously in historical astronomical catalogs as or For simplicity, Rigel's companions are referred to as Rigel B, C, and D; the IAU describes such names as "useful nicknames" that are "unofficial". In modern comprehensive catalogs, the whole multiple star system is known as or The designation of Rigel as (Latinized to beta Orionis) was made by Johann Bayer in 1603. The "beta" designation is usually given to the second-brightest star in each constellation, but Rigel is almost always brighter than (Betelgeuse). Astronomer J.B. Kaler speculated that Bayer assigned letters during a rare period when variable star Betelgeuse temporarily outshone Rigel, resulting in Betelgeuse being designated "alpha" and Rigel designated "beta". However, closer examination of Bayer's method shows that he did not strictly order the stars by brightness, but instead grouped them first by magnitude, then by declination. Rigel and Betelgeuse were both classed as first magnitude, and in Orion the stars of each class appear to have been ordered north to south. 
Rigel has many other stellar designations taken from various catalogs, including the (19 Ori), the Bright Star Catalogue entry HR 1713, and the Henry Draper Catalogue number HD 34085. These designations frequently appear in the scientific literature, but rarely in popular writing. Rigel is listed in the General Catalogue of Variable Stars, but since its familiar Bayer designation is used instead of creating a separate variable star designation. Observation Rigel is an intrinsic variable star with an apparent magnitude ranging from 0.05 to 0.18. It is typically the seventh-brightest star in the celestial sphere, excluding the Sun, although occasionally fainter than Betelgeuse. Rigel appears slightly blue-white and has a B-V color index of −0.06. It contrasts strongly with reddish Betelgeuse. Culminating every year at midnight on 12 December, and at 9:00pm on 24 January, Rigel is visible on winter evenings in the Northern Hemisphere and on summer evenings in the Southern Hemisphere. In the Southern Hemisphere, Rigel is the first bright star of Orion visible as the constellation rises. Correspondingly, it is also the first star of Orion to set in most of the Northern Hemisphere. The star is a vertex of the "Winter Hexagon", an asterism that includes Aldebaran, Capella, Pollux, Procyon, and Sirius. Rigel is a prominent equatorial navigation star, being easily located and readily visible in all the world's oceans (the exception is the area north of the 82nd parallel north). Spectroscopy Rigel's spectral type is a defining point of the classification sequence for supergiants. The overall spectrum is typical for a late B class star, with strong absorption lines of the hydrogen Balmer series as well as neutral helium lines and some of heavier elements such as oxygen, calcium, and magnesium. The luminosity class for B8 stars is estimated from the strength and narrowness of the hydrogen spectral lines, and Rigel is assigned to the bright supergiant class Ia. Variations in the spectrum have resulted in the assignment of different classes to Rigel, such as B8 Ia, B8 Iab, and B8 Iae. As early as 1888, the heliocentric radial velocity of Rigel, as estimated from the Doppler shifts of its spectral lines, was seen to vary. This was confirmed and interpreted at the time as being due to a spectroscopic companion with a period of about 22 days. The radial velocity has since been measured to vary by about around a mean of . In 1933, the Hα line in Rigel's spectrum was seen to be unusually weak and shifted towards shorter wavelengths, while there was a narrow emission spike about to the long wavelength side of the main absorption line. This is now known as a P Cygni profile after a star that shows this feature strongly in its spectrum. It is associated with mass loss where there is simultaneously emission from a dense wind close to the star and absorption from circumstellar material expanding away from the star. The unusual Hα line profile is observed to vary unpredictably. It is a normal absorption line around a third of the time. About a quarter of the time, it is a double-peaked line, that is, an absorption line with an emission core or an emission line with an absorption core. About a quarter of the time it has a P Cygni profile; most of the rest of the time, the line has an inverse P Cygni profile, where the emission component is on the short wavelength side of the line. Rarely, there is a pure emission Hα line. 
The line profile changes are interpreted as variations in the quantity and velocity of material being expelled from the star. Occasional very high-velocity outflows have been inferred, and, more rarely, infalling material. The overall picture is one of large looping structures arising from the photosphere and driven by magnetic fields. Variability Rigel has been known to vary in brightness since at least 1930. The small amplitude of Rigel's brightness variation requires photoelectric or CCD photometry to be reliably detected. This brightness variation has no obvious period. Observations over 18 nights in 1984 showed variations at red, blue, and yellow wavelengths of up to 0.13 magnitudes on timescales of a few hours to several days, but again no clear period. Rigel's color index varies slightly, but this is not significantly correlated with its brightness variations. From analysis of Hipparcos satellite photometry, Rigel is identified as belonging to the Alpha Cygni class of variable stars, defined as "non-radially pulsating supergiants of the Bep–AepIa spectral types". In those spectral types, the 'e' indicates that it displays emission lines in its spectrum, while the 'p' means it has an unspecified spectral peculiarity. Alpha Cygni type variables are generally considered to be irregular or have quasi-periods. Rigel was added to the General Catalogue of Variable Stars in the 74th name-list of variable stars on the basis of the Hipparcos photometry, which showed variations with a photographic amplitude of 0.039 magnitudes and a possible period of 2.075 days. Rigel was observed with the Canadian MOST satellite for nearly 28 days in 2009. Milli-magnitude variations were observed, and gradual changes in flux suggest the presence of long-period pulsation modes. Mass loss From observations of the variable Hα spectral line, Rigel's mass-loss rate due to stellar wind is estimated be solar masses per year (/yr)—about ten million times more than the mass-loss rate from the Sun. More detailed optical and Kband infrared spectroscopic observations, together with VLTI interferometry, were taken from 2006 to 2010. Analysis of the Hα and Hγ line profiles, and measurement of the regions producing the lines, show that Rigel's stellar wind varies greatly in structure and strength. Loop and arm structures were also detected within the wind. Calculations of mass loss from the Hγ line give in 2006-7 and in 2009–10. Calculations using the Hα line give lower results, around . The terminal wind velocity is . It is estimated that Rigel has lost about three solar masses () since beginning life as a star of seven to nine million years ago. Distance Rigel's distance from the Sun is somewhat uncertain, different estimates being obtained by different methods. Old estimates placed it 166 parsecs (or 541 light years) away from the Sun. The 2007 Hipparcos new reduction of Rigel's parallax is , giving a distance of with a margin of error of about 9%. Rigel B, usually considered to be physically associated with Rigel and at the same distance, has a Gaia Data Release 3 parallax of , suggesting a distance around . However, the measurements for this object may be unreliable. Indirect distance estimation methods have also been employed. For example, Rigel is believed to be in a region of nebulosity, its radiation illuminating several nearby clouds. Most notable of these is the 5°-long IC 2118 (Witch Head Nebula), located at an angular separation of 2.5° from the star, or a projected distance of away. 
From measures of other nebula-embedded stars, IC2118's distance is estimated to be . Rigel is an outlying member of the Orion OB1 association, which is located at a distance of up to from Earth. It is a member of the loosely defined Taurus-Orion R1 Association, somewhat closer at . Rigel is thought to be considerably closer than most of the members of Orion OB1 and the Orion Nebula. Betelgeuse and Saiph lie at a similar distance to Rigel, although Betelgeuse is a runaway star with a complex history and might have originally formed in the main body of the association. Stellar system Hierarchical scheme for Rigel's components The star system of which Rigel is a part has at least four components. Rigel (sometimes called Rigel A to distinguish from the other components) has a visual companion, which is likely a close triple-star system. A fainter star at a wider separation might be a fifth component of the Rigel system. William Herschel discovered Rigel to be a visual double star on 1 October 1781, cataloguing it as star 33 in the "second class of double stars" in his Catalogue of Double Stars, usually abbreviated to HII33, or as H233 in the Washington Double Star Catalogue. Friedrich Georg Wilhelm von Struve first measured the relative position of the companion in 1822, cataloguing the visual pair as Σ 668. The secondary star is often referred to as Rigel B or β Orionis B. The angular separation of Rigel B from Rigel A is 9.5 arc seconds to its south along position angle 204°. Although not particularly faint at visual magnitude 6.7, the overall difference in brightness from Rigel A (about 6.6 magnitudes or 440 times fainter) makes it a challenging target for telescope apertures smaller than . At Rigel's estimated distance, Rigel B's projected separation from Rigel A is over 2,200astronomical units (AU). Since its discovery, there has been no sign of orbital motion, although both stars share a similar common proper motion. The pair would have an estimated orbital period of 24,000years. Gaia Data Release 2(DR2) contains a somewhat unreliable parallax for Rigel B, placing it at about , further away than the Hipparcos distance for Rigel, but similar to the Taurus-Orion R1 association. There is no parallax for Rigel in Gaia DR2. The Gaia DR2 proper motions for Rigel B and the Hipparcos proper motions for Rigel are both small, although not quite the same. In 1871, Sherburne Wesley Burnham suspected Rigel B to be a binary system, and in 1878, he resolved it into two components. This visual companion is designated as component C (Rigel C), with a measured separation from component B that varies from less than to around . In 2009, speckle interferometry showed the two almost identical components separated by , with visual magnitudes of 7.5 and 7.6, respectively. Their estimated orbital period is 63years. Burnham listed the Rigel multiple system as β555 in his double star catalog or BU555 in modern use. Component B is a double-lined spectroscopic binary system, which shows two sets of spectral lines combined within its single stellar spectrum. Periodic changes observed in relative positions of these lines indicate an orbital period of 9.86days. The two spectroscopic components Rigel Ba and Rigel Bb cannot be resolved in optical telescopes but are known to both be hot stars of spectral type around B9. 
This spectroscopic binary, together with the close visual component Rigel C, is likely a physical triple-star system, although Rigel C cannot be detected in the spectrum, which is inconsistent with its observed brightness. In 1878, Burnham found another possibly associated star of approximately 13th magnitude. He listed it as component D of β555, although it is unclear whether it is physically related or a coincidental alignment. Its 2017 separation from Rigel was , almost due north at a position angle of 1°. Gaia DR2 finds it to be a 12th magnitude sunlike star at approximately the same distance as Rigel. Likely a K-type main-sequence star, this star would have an orbital period of around 250,000 years, if it is part of the Rigel system. A spectroscopic companion to Rigel was reported on the basis of radial velocity variations, and its orbit was even calculated, but subsequent work suggests the star does not exist and that observed pulsations are intrinsic to Rigel itself. Physical characteristics Rigel is a blue supergiant that has exhausted the hydrogen fuel in its core, expanded and cooled as it moved away from the main sequence across the upper part of the Hertzsprung–Russell diagram. When it was on the main sequence, its effective temperature would have been around . Rigel's complex variability at visual wavelengths is caused by stellar pulsations similar to those of Deneb. Further observations of radial velocity variations indicate that it simultaneously oscillates in at least 19 non-radial modes with periods ranging from about 1.2 to 74 days. Estimation of many physical characteristics of blue supergiant stars, including Rigel, is challenging due to their rarity and uncertainty about how far they are from the Sun. As such, their characteristics are mainly estimated from theoretical stellar evolution models. Its effective temperature can be estimated from the spectral type and color to be around . A mass of at an age of million years has been estimated by comparing evolutionary tracks, while atmospheric modeling from the spectrum gives a mass of . Although Rigel is often considered the most luminous star within 1,000 light-years of the Sun, its energy output is poorly known. Using the Hipparcos distance of , the estimated relative luminosity for Rigel is about 120,000 times that of the Sun (), but another recently published distance of suggests an even higher luminosity of . Other calculations based on theoretical stellar evolutionary models of Rigel's atmosphere give luminosities anywhere between and , while summing the spectral energy distribution from historical photometry with the Hipparcos distance suggests a luminosity as low as . A 2018 study using the Navy Precision Optical Interferometer measured the angular diameter as . After correcting for limb darkening, the angular diameter is found to be , yielding a radius of . An older measurement of the angular diameter gives , equivalent to a radius of at . These radii are calculated assuming the Hipparcos distance of ; adopting a distance of leads to a significantly larger size. Older distance estimates were mostly far lower than modern estimates, leading to lower radius estimates; a 1922 estimate by John Stanley Plaskett gave Rigel a diameter of 25 million miles, or approximately , smaller than its neighbor Aldebaran. Due to their closeness to each other and ambiguity of the spectrum, little is known about the intrinsic properties of the members of the Rigel BC triple system. 
All three stars seem to be near equally hot B-type main-sequence stars that are three to four times as massive as the Sun. Evolution Stellar evolution models suggest the pulsations of Rigel are powered by nuclear reactions in a hydrogen-burning shell that is at least partially non-convective. These pulsations are stronger and more numerous in stars that have evolved through a red supergiant phase and then increased in temperature to again become a blue supergiant. This is due to the decreased mass and increased levels of fusion products at the surface of the star. Rigel is likely to be fusing helium in its core. Due to strong convection of helium produced in the core while Rigel was on the main sequence and in the hydrogen-burning shell since it became a supergiant, the fraction of helium at the surface has increased from 26.6% when the star formed to 32% now. The surface abundances of carbon, nitrogen, and oxygen seen in the spectrum are compatible with a post-red supergiant star only if its internal convection zones are modeled using non-homogeneous chemical conditions known as the Ledoux Criteria. Rigel is expected to eventually end its stellar life as a type II supernova. It is one of the closest known potential supernova progenitors to Earth, and would be expected to have a maximum apparent magnitude of around (about the same brightness as a quarter Moon or around 300 times brighter than Venus ever gets). The supernova would leave behind either a black hole or a neutron star. Etymology and cultural significance The earliest known recording of the name Rigel is in the Alfonsine tables of 1521. It is derived from the Arabic name , "the left leg (foot) of Jauzah" (i.e. rijl meaning "leg, foot"), which can be traced to the 10th century. "Jauzah" was a proper name for Orion; an alternative Arabic name was , "the foot of the great one", from which stems the rarely used variant names Algebar or Elgebar. The Alphonsine tables saw its name split into "Rigel" and "Algebar", with the note, et dicitur Algebar. Nominatur etiam Rigel. Alternate spellings from the 17th century include Regel by Italian astronomer Giovanni Battista Riccioli, Riglon by German astronomer Wilhelm Schickard, and Rigel Algeuze or Algibbar by English scholar Edmund Chilmead. With the constellation representing the mythological Greek huntsman Orion, Rigel is his knee or (as its name suggests) foot; with the nearby star Beta Eridani marking Orion's footstool. Rigel is presumably the star known as "Aurvandil's toe" in Norse mythology. In the Caribbean, Rigel represented the severed leg of the folkloric figure Trois Rois, himself represented by the three stars of Orion's Belt. The leg had been severed with a cutlass by the maiden Bįhi (Sirius). The Lacandon people of southern Mexico knew it as tunsel ("little woodpecker"). Rigel was known as Yerrerdet-kurrk to the Wotjobaluk koori of southeastern Australia, and held to be the mother-in-law of Totyerguil (Altair). The distance between them signified the taboo preventing a man from approaching his mother-in-law. The indigenous Boorong people of northwestern Victoria named Rigel as Collowgullouric Warepil. The Wardaman people of northern Australia know Rigel as the Red Kangaroo Leader Unumburrgu and chief conductor of ceremonies in a songline when Orion is high in the sky. Eridanus, the river, marks a line of stars in the sky leading to it, and the other stars of Orion are his ceremonial tools and entourage. Betelgeuse is Ya-jungin "Owl Eyes Flicking", watching the ceremonies. 
The Māori people of New Zealand named Rigel as Puanga, said to be a daughter of Rehua (Antares), the chief of all-stars. Its heliacal rising presages the appearance of Matariki (the Pleiades) in the dawn sky, marking the Māori New Year in late May or early June. The Moriori people of the Chatham Islands, as well as some Māori groups in New Zealand, mark the start of their New Year with Rigel rather than the Pleiades. Puaka is a southern name variant used in the South Island. In Japan, the Minamoto or Genji clan chose Rigel and its white color as its symbol, calling the star Genji-boshi (), while the Taira or Heike clan adopted Betelgeuse and its red color. The two powerful families fought the Genpei War; the stars were seen as facing off against each other and kept apart only by the three stars of Orion's Belt. In modern culture The MS Rigel was originally a Norwegian ship, built in Copenhagen in 1924. It was requisitioned by the Germans during World War II and sunk in 1944 while being used to transport prisoners of war. Two US Navy ships have borne the name USS Rigel. The SSM-N-6 Rigel was a cruise missile program for the US Navy that was cancelled in 1953 before reaching deployment. The Rigel Skerries are a chain of small islands in Antarctica, renamed after originally being called Utskjera. They were given their current name as Rigel was used as an astrofix. Mount Rigel, elevation , is also in Antarctica. See also Orion in Chinese astronomy Notes References External links December double star of the monthbeta Orionis Astronomical Society of Southern Africa My Favorite Double Star AAVSO B-type supergiants B-type main-sequence stars Alpha Cygni variables Multiple star systems Population I stars Orion (constellation) Arabic words and phrases Orionis, Beta BD-08 1063 Orionis, 19 034085 024436
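The radius figures quoted in the Physical characteristics section above follow from a simple geometric conversion: the linear radius is half the angular diameter (in radians) multiplied by the distance. The Python sketch below illustrates the conversion with round, illustrative inputs of the order reported for Rigel (a few milliarcseconds and a few hundred parsecs); they are not the published measurements.

import math

MAS_TO_RAD = math.radians(1 / 3_600_000)   # milliarcseconds to radians
PC_TO_M = 3.0857e16                        # one parsec in metres
R_SUN_M = 6.957e8                          # solar radius in metres

def radius_in_solar_units(angular_diameter_mas, distance_pc):
    """Linear radius in solar radii from an angular diameter and a distance."""
    theta = angular_diameter_mas * MAS_TO_RAD
    return (theta / 2) * distance_pc * PC_TO_M / R_SUN_M

# Roughly 2.75 mas at roughly 264 pc gives a radius near 78 solar radii,
# consistent with "more than seventy times that of the Sun" quoted earlier.
print(round(radius_in_solar_units(2.75, 264)))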
Rigel
Astronomy
4,994
10,932,122
https://en.wikipedia.org/wiki/Order%20%28mouldings%29
An order refers to each of a series of mouldings most often found in Romanesque and Gothic arches. Arches and vaults Architectural elements
Order (mouldings)
Technology,Engineering
28
710,166
https://en.wikipedia.org/wiki/Cognitive%20distortion
A cognitive distortion is a thought that causes a person to perceive reality inaccurately due to being exaggerated or irrational. Cognitive distortions are involved in the onset or perpetuation of psychopathological states, such as depression and anxiety. According to Aaron Beck's cognitive model, a negative outlook on reality, sometimes called negative schemas (or schemata), is a factor in symptoms of emotional dysfunction and poorer subjective well-being. Specifically, negative thinking patterns reinforce negative emotions and thoughts. During difficult circumstances, these distorted thoughts can contribute to an overall negative outlook on the world and a depressive or anxious mental state. According to hopelessness theory and Beck's theory, the meaning or interpretation that people give to their experience importantly influences whether they will become depressed and whether they will experience severe, repeated, or long-duration episodes of depression. Challenging and changing cognitive distortions is a key element of cognitive behavioral therapy (CBT). Definition Cognitive comes from the Medieval Latin , equivalent to Latin , 'known'. Distortion means the act of twisting or altering something out of its true, natural, or original state. History In 1957, American psychologist Albert Ellis, though he did not know it yet, would aid cognitive therapy in correcting cognitive distortions and indirectly helping David D. Burns in writing The Feeling Good Handbook. Ellis created what he called the ABC Technique of rational beliefs. The ABC stands for the activating event, beliefs that are irrational, and the consequences that come from the beliefs. Ellis wanted to prove that the activating event is not what caused the emotional behavior or the consequences, but the beliefs and how the person irrationally perceives the events which aid the consequences. With this model, Ellis attempted to use rational emotive behavior therapy (REBT) with his patients, in order to help them "reframe" or reinterpret the experience in a more rational manner. In this model, Ellis explains it all to his clients, while Beck helps his clients figure this out on their own. Beck first started to notice these automatic distorted thought processes when practicing psychoanalysis, while his patients followed the rule of saying anything that comes to mind. He realized that his patients had irrational fears, thoughts, and perceptions that were automatic. Beck began noticing his automatic thought processes that he knew his patients had but did not report. Most of the time the thoughts were biased against themselves and very erroneous. Beck believed that the negative schemas developed and manifested themselves in the perspective and behavior. The distorted thought processes led to focusing on degrading the self, amplifying minor external setbacks, experiencing other's harmless comments as ill-intended, while simultaneously seeing self as inferior. Inevitably cognitions are reflected in their behavior with a reduced desire to care for oneself, reduced desire to seek pleasure, and finally give up. These exaggerated perceptions, due to cognition, feel real and accurate because the schemas, after being reinforced through the behavior, tend to become 'knee-jerk' automatic and do not allow time for reflection. This cycle is also known as Beck's cognitive triad, focused on the theory that the person's negative schema applied to the self, the future, and the environment. 
In 1972, psychiatrist, psychoanalyst, and cognitive therapy scholar Aaron T. Beck published Depression: Causes and Treatment. He was dissatisfied with the conventional Freudian treatment of depression because there was no empirical evidence for the success of Freudian psychoanalysis. Beck's book provided a comprehensive and empirically supported theoretical model for depression—its potential causes, symptoms, and treatments. In Chapter 2, titled "Symptomatology of Depression", he described "cognitive manifestations" of depression, including low self-evaluation, negative expectations, self-blame and self-criticism, indecisiveness, and distortion of the body image. Beck's student David D. Burns continued research on the topic. In his book Feeling Good: The New Mood Therapy, Burns described personal and professional anecdotes related to cognitive distortions and their elimination. When Burns published Feeling Good: The New Mood Therapy, it made Beck's approach to distorted thinking widely known and popularized. Burns sold over four million copies of the book in the United States alone. It was a book commonly "prescribed" for patients with cognitive distortions that have led to depression. Beck approved of the book, saying that it would help others alter their depressed moods by simplifying the extensive study and research that had taken place since shortly after Beck had started as a student and practitioner of psychoanalytic psychiatry. Nine years later, The Feeling Good Handbook was published, which was also built on Beck's work and includes a list of ten specific cognitive distortions that will be discussed throughout this article. Main types John C. Gibbs and Granville Bud Potter propose four categories for cognitive distortions: self-centered, blaming others, minimizing-mislabeling, and assuming the worst. The cognitive distortions listed below are categories of negative self-talk. All-or-nothing thinking The "all-or-nothing thinking distortion" is also referred to as "splitting", "black-and-white thinking", and "polarized thinking." Someone with the all-or-nothing thinking distortion looks at life in black and white categories. Either they are a success or a failure; either they are good or bad; there is no in-between. According to one article, "Because there is always someone who is willing to criticize, this tends to collapse into a tendency for polarized people to view themselves as a total failure. Polarized thinkers have difficulty with the notion of being 'good enough' or a partial success." Example (from The Feeling Good Handbook): A woman eats a spoonful of ice cream. She thinks she is a complete failure for breaking her diet. She becomes so depressed that she ends up eating the whole quart of ice cream. This example captures the polarized nature of this distortion—the person believes they are totally inadequate if they fall short of perfection. In order to combat this distortion, Burns suggests thinking of the world in terms of shades of gray. Rather than viewing herself as a complete failure for eating a spoonful of ice cream, the woman in the example could still recognize her overall effort to diet as at least a partial success. This distortion is commonly found in perfectionists. Jumping to conclusions Reaching preliminary conclusions (usually negative) with little (if any) evidence. 
Three specific subtypes are identified: Mind reading Inferring a person's possible or probable (usually negative) thoughts from their behaviour and nonverbal communication; taking precautions against the worst suspected case without asking the person. Example 1: A student assumes that the readers of their paper have already made up their minds concerning its topic, and, therefore, writing the paper is a pointless exercise. Example 2: Kevin assumes that because he sits alone at lunch, everyone else must think he is a loser. (This can encourage self-fulfilling prophecy; Kevin may not initiate social contact because of his fear that those around him already perceive him negatively). Fortune-telling Predicting outcomes (usually negative) of events. Example: A depressed person tells themselves they will never improve; they will continue to be depressed for their whole life. One way to combat this distortion is to ask, "If this is true, does it say more about me or them?" Labelling Labelling occurs when someone overgeneralizes the characteristics of other people. Someone might use an unfavourable term to describe a complex person or event, such as assuming that a friend is upset with them due to a late reply to a text message, even though there could be various other reasons for the delay. It is a more extreme form of jumping-to-conclusions cognitive distortion where one presumes to know the thoughts, feelings, or intentions of others without any factual basis. Emotional reasoning In the emotional reasoning distortion, it is assumed that feelings expose the true nature of things and experience reality as a reflection of emotionally linked thoughts; something is believed true solely based on a feeling. Examples: "I feel stupid, therefore I must be stupid". Feeling fear of flying in planes, and then concluding that planes must be a dangerous way to travel. Feeling overwhelmed by the prospect of cleaning one's house, therefore concluding that it's hopeless to even start cleaning. Should/shouldn't and must/mustn't statements Making "must" or "should" statements was included by Albert Ellis in his rational emotive behavior therapy (REBT), an early form of CBT; he termed it "musturbation". Michael C. Graham called it "expecting the world to be different than it is". It can be seen as demanding particular achievements or behaviors regardless of the realistic circumstances of the situation. Example: After a performance, a concert pianist believes they should not have made so many mistakes. In Feeling Good: The New Mood Therapy, David Burns clearly distinguished between pathological "should statements", moral imperatives, and social norms. A related cognitive distortion, also present in Ellis' REBT, is a tendency to "awfulize"; to say a future scenario will be awful, rather than to realistically appraise the various negative and positive characteristics of that scenario. According to Burns, "must" and "should" statements are negative because they cause the person to feel guilty and upset at themselves. Some people also direct this distortion at other people, which can cause feelings of anger and frustration when that other person does not do what they should have done. He also mentions how this type of thinking can lead to rebellious thoughts. In other words, trying to whip oneself into doing something with "shoulds" may cause one to desire just the opposite. 
Gratitude traps A gratitude trap is a type of cognitive distortion that typically arises from misunderstandings regarding the nature or practice of gratitude. The term can refer to one of two related but distinct thought patterns: A self-oriented thought process involving feelings of guilt, shame, or frustration related to one's expectations of how things "should" be. An "elusive ugliness in many relationships, a deceptive 'kindness,' the main purpose of which is to make others feel indebted", as defined by psychologist Ellen Kenner. Personalization and blaming Personalization is assigning personal blame disproportionate to the level of control a person realistically has in a given situation. Example 1: A foster child assumes that they have not been adopted because they are not "loveable enough". Example 2: A child has bad grades. Their mother believes it is because they are not a good enough parent. Blaming is the opposite of personalization. In the blaming distortion, the disproportionate level of blame is placed upon other people, rather than oneself. In this way, the person avoids taking personal responsibility, making way for a "victim mentality". Example: Placing blame for marital problems entirely on one's spouse. Always being right In this cognitive distortion, being wrong is unthinkable. This distortion is characterized by actively trying to prove one's actions or thoughts to be correct, and sometimes prioritizing self-interest over the feelings of another person. In this cognitive distortion, the facts that oneself has about their surroundings are always right while other people's opinions and perspectives are wrongly seen. Fallacy of change Relying on social control to obtain cooperative actions from another person. The underlying assumption of this thinking style is that one's happiness depends on the actions of others. The fallacy of change also assumes that other people should change to suit one's own interests automatically, and/or that it is fair to pressure them to change. It may be present in most abusive relationships in which partners' "visions" of each other are tied into the belief that happiness, love, trust, and perfection would just occur once they or the other person change aspects of their beings. Minimizing-mislabeling Magnification and minimization Giving proportionally greater weight to a perceived failure, weakness or threat, or lesser weight to a perceived success, strength or opportunity, so that the weight differs from that assigned by others, such as "making a mountain out of a molehill". In depressed clients, often the positive characteristics of other people are exaggerated, and their negative characteristics are understated. Catastrophizing is a form of magnification where one gives greater weight to the worst possible outcome, however unlikely, or experiences a situation as unbearable or impossible when it is just uncomfortable. Labeling and mislabeling A form of overgeneralization; attributing a person's actions to their character instead of to an attribute. Rather than assuming the behaviour to be accidental or otherwise extrinsic, one assigns a label to someone or something that is based on the inferred character of that person or thing. Assuming the worst Overgeneralizing Someone who overgeneralizes makes faulty generalizations from insufficient evidence. Such as seeing a "single negative event" as a "never-ending pattern of defeat", and as such drawing a very broad conclusion from a single incident or a single piece of evidence. 
Even if something bad happens only once, it is expected to happen over and over again. Example 1: A person is asked out on a first date, but not a second one. They are distraught as tells a friend, "This always happens to me! I'll never find love!" Example 2: A person is lonely and often spends most of their time at home. Friends sometimes ask them to dinner and to meet new people. They feel it is useless to even try. No one could really like them. And anyway, all people are the same: petty and selfish. One suggestion to combat this distortion is to "examine the evidence" by performing an accurate analysis of one's situation. This aids in avoiding exaggerating one's circumstances. Disqualifying the positive Disqualifying the positive refers to rejecting positive experiences by insisting they "don't count" for some reason or other. Negative belief is maintained despite contradiction by everyday experiences. Disqualifying the positive may be the most common fallacy in the cognitive distortion range; it is often analyzed with "always being right", a type of distortion where a person is in an all-or-nothing self-judgment. People in this situation show signs of depression. Examples include: "I will never be as good as Jane" "Anyone could have done as well" "They are just congratulating me to be nice" Mental filtering Filtering distortions occur when an individual dwells only on the negative details of a situation and filters out the positive aspects. Example: Andy gets mostly compliments and positive feedback about a presentation he has done at work, but he also has received a small piece of criticism. For several days following his presentation, Andy dwells on this one negative reaction, forgetting all of the positive reactions that he had also been given. The Feeling Good Handbook notes that filtering is like a "drop of ink that discolors a beaker of water". One suggestion to combat filtering is a cost–benefit analysis. A person with this distortion may find it helpful to sit down and assess whether filtering out the positive and focusing on the negative is helping or hurting them in the long run. Conceptualization In a series of publications, philosopher Paul Franceschi has proposed a unified conceptual framework for cognitive distortions designed to clarify their relationships and define new ones. This conceptual framework is based on three notions: (i) the reference class (a set of phenomena or objects, e.g. events in the patient's life); (ii) dualities (positive/negative, qualitative/quantitative, ...); (iii) the taxon system (degrees allowing to attribute properties according to a given duality to the elements of a reference class). In this model, "dichotomous reasoning", "minimization", "maximization" and "arbitrary focus" constitute general cognitive distortions (applying to any duality), whereas "disqualification of the positive" and "catastrophism" are specific cognitive distortions, applying to the positive/negative duality. This conceptual framework posits two additional cognitive distortion classifications: the "omission of the neutral" and the "requalification in the other pole". Cognitive restructuring Cognitive restructuring (CR) is a popular form of therapy used to identify and reject maladaptive cognitive distortions, and is typically used with individuals diagnosed with depression. In CR, the therapist and client first examine a stressful event or situation reported by the client. 
For example, a depressed male college student who experiences difficulty in dating might believe that his "worthlessness" causes women to reject him. Together, therapist and client might then create a more realistic cognition, e.g., "It is within my control to ask girls on dates. However, even though there are some things I can do to influence their decisions, whether or not they say yes is largely out of my control. Thus, I am not responsible if they decline my invitation." CR therapies are designed to eliminate "automatic thoughts" that include clients' dysfunctional or negative views. According to Beck, doing so reduces feelings of worthlessness, anxiety, and anhedonia that are symptomatic of several forms of mental illness. CR is the main component of Beck's and Burns's CBT. Narcissistic defense Those diagnosed with narcissistic personality disorder tend, unrealistically, to view themselves as superior, overemphasizing their strengths and understating their weaknesses. Narcissists use exaggeration and minimization this way to shield themselves against psychological pain. Decatastrophizing In cognitive therapy, decatastrophizing or decatastrophization is a cognitive restructuring technique that may be used to treat cognitive distortions, such as magnification and catastrophizing, commonly seen in psychological disorders like anxiety and psychosis. Major features of these disorders are the subjective report of being overwhelmed by life circumstances and the incapability of affecting them. Criticism Common criticisms of the diagnosis of cognitive distortion relate to epistemology and the theoretical basis. If the perceptions of the patient differ from those of the therapist, it may not be because of intellectual malfunctions, but because the patient has different experiences. In some cases, depressed subjects appear to be "sadder but wiser". See also References Cognitive therapy Defence mechanisms Cognitive biases Anxiety Barriers to critical thinking Depression (mood) Error Narcissism Deception
Cognitive distortion
Biology
3,831
47,261,948
https://en.wikipedia.org/wiki/Peirce%27s%20law
In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form that involves only one sort of connective, namely implication. In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there is a proposition Q such that the truth of P follows from the truth of "if P then Q". Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction theorem alone. Under the Curry–Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme. History Here is Peirce's own statement of the law: A fifth icon is required for the principle of excluded middle and other propositions connected with it. One of the simplest formulae of this kind is: This is hardly axiomatical. That it is true appears as follows. It can only be false by the final consequent x being false while its antecedent (x ⤙ y) ⤙ x is true. If this is true, either its consequent, x, is true, when the whole formula would be true, or its antecedent x ⤙ y is false. But in the last case the antecedent of x ⤙ y, that is x, must be true. (Peirce, the Collected Papers 3.384). Peirce goes on to point out an immediate application of the law: From the formula just given, we at once get: where the a is used in such a sense that (x ⤙ y) ⤙ a means that from (x ⤙ y) every proposition follows. With that understanding, the formula states the principle of excluded middle, that from the falsity of the denial of x follows the truth of x. (Peirce, the Collected Papers 3.384). Warning: As explained in the text, "a" here does not denote a propositional atom, but something like the quantified propositional formula . The formula would not be a tautology if a were interpreted as an atom. Relations between principles In intuitionistic logic, if is proven or rejected, or if is provenly valid, then Peirce's law for the two propositions holds. But the law's special case when is rejected, called consequentia mirabilis, is equivalent to excluded middle already over minimal logic. This also means that Peirce's law entails classical logic over intuitionistic logic, as also shown below. Intuitionistically, not even the constraint implies the law for two propositions. Postulating the latter to be valid results in Smetanich's intermediate logic. It is helpful to consider Peirce's law in the equivalent form . Indeed, from follows , and so is equivalent to . The case now directly shows how double-negation elimination implies consequentia mirabilis over minimal logic. In intuitionistic logic, explosion can be used for , and so here the law's special case consequentia mirabilis also implies double-negation elimination. As the double-negated excluded middle is always already valid even in minimal logic, this also intuitionistically proves excluded middle. In the other direction, one can intuitionistically also show that excluded middle implies Peirce's law directly. To this end, note that using the principle of explosion, excluded middle may be expressed as . In words, this may be expressed as: "Every proposition either holds or implies any other proposition." Now to prove the law, note that is derivable from just implication introduction on the one hand and modus ponens on the other. Finally, in place of consider .
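Because the law is a classical tautology, its validity can be checked mechanically by enumerating truth values. The following is a minimal sketch in Python; the helper names implies and peirce are illustrative and not part of any standard library:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

def peirce(p: bool, q: bool) -> bool:
    # ((P -> Q) -> P) -> P
    return implies(implies(implies(p, q), p), p)

# exhaustive truth-table check over the four classical valuations
assert all(peirce(p, q) for p, q in product([False, True], repeat=2))
print("((P -> Q) -> P) -> P holds under every classical valuation")
```

Such a brute-force check only establishes classical validity; it says nothing about intuitionistic derivability, which is the subject of the surrounding discussion.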
Another proof of the law in classical logic proceeds by passing through the classically valid reverse disjunctive syllogism twice: First note that is implied by , which is intuitionistically equivalent to . Now explosion entails that implies , and using excluded middle for entails that these two are in fact equivalent. Taken together, this means that in classical logic is equivalent to . Using Peirce's law with the deduction theorem Peirce's law allows one to enhance the technique of using the deduction theorem to prove theorems. Suppose one is given a set of premises Γ and one wants to deduce a proposition Z from them. With Peirce's law, one can add (at no cost) additional premises of the form Z→P to Γ. For example, suppose we are given P→Z and (P→Q)→Z and we wish to deduce Z so that we can use the deduction theorem to conclude that (P→Z)→(((P→Q)→Z)→Z) is a theorem. Then we can add another premise Z→Q. From that and P→Z, we get P→Q. Then we apply modus ponens with (P→Q)→Z as the major premise to get Z. Applying the deduction theorem, we get that (Z→Q)→Z follows from the original premises. Then we use Peirce's law in the form ((Z→Q)→Z)→Z and modus ponens to derive Z from the original premises. Then we can finish off proving the theorem as we originally intended. Completeness of the implicational propositional calculus One reason that Peirce's law is important is that it can substitute for the law of excluded middle in the logic which only uses implication. The sentences which can be deduced from the axiom schemas: P→(Q→P) (P→(Q→R))→((P→Q)→(P→R)) ((P→Q)→P)→P from P and P→Q infer Q (where P,Q,R contain only "→" as a connective) are all the tautologies which use only "→" as a connective. Failure in non-classical models of intuitionistic logic Since Peirce's law implies the law of the excluded middle, it must always fail in non-classical intuitionistic logics. A simple explicit counterexample is that of Gödel many valued logics, which are a fuzzy logic where truth values are real numbers between 0 and 1, with material implication defined by: and where Peirce's law as a formula can be simplified to: where it always being true would be equivalent to the statement that u > v implies u = 1, which is true only if 0 and 1 are the only allowed values. At the same time however, the expression cannot ever be equal to the bottom truth value of the logic and its double negation is always true. See also Charles Sanders Peirce bibliography Notes Further reading Peirce, C.S., "On the Algebra of Logic: A Contribution to the Philosophy of Notation", American Journal of Mathematics 7, 180–202 (1885). Reprinted, the Collected Papers of Charles Sanders Peirce 3.359–403 and the Writings of Charles S. Peirce: A Chronological Edition 5, 162–190. Peirce, C.S., Collected Papers of Charles Sanders Peirce, Vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), Vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. Mathematical logic Charles Sanders Peirce Theorems in propositional logic Intuitionism
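The failure of the law in the Gödel many-valued semantics described above can also be checked numerically. A small sketch under the stated assumptions: implication evaluates to 1 when u ≤ v and to v otherwise, negation is taken as implication into 0, and the truth values 0.5 and 0.3 are just one arbitrary counterexample:

```python
def g_implies(u: float, v: float) -> float:
    # Gödel many-valued implication on [0, 1]: 1 if u <= v, else v
    return 1.0 if u <= v else v

def g_not(u: float) -> float:
    # negation as implication into the bottom value 0
    return g_implies(u, 0.0)

def peirce_value(p: float, q: float) -> float:
    # ((P -> Q) -> P) -> P evaluated in the many-valued semantics
    return g_implies(g_implies(g_implies(p, q), p), p)

p, q = 0.5, 0.3
print(peirce_value(p, q))                 # 0.5, not 1
print(g_not(g_not(peirce_value(p, q))))   # 1.0
```

The first printed value is 0.5 rather than 1, so Peirce's law is not a tautology in this semantics, while its double negation still evaluates to 1, matching the remark above.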
Peirce's law
Mathematics
1,598
800,010
https://en.wikipedia.org/wiki/Belief%20propagation
Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability. The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees, later extended to polytrees. While the algorithm is not exact on general graphs, it has been shown to be a useful approximate algorithm. Motivation Given a finite set of discrete random variables with joint probability mass function , a common task is to compute the marginal distributions of the . The marginal of a single is defined to be where is a vector of possible values for the , and the notation means that the sum is taken over those whose th coordinate is equal to . Computing marginal distributions using this formula quickly becomes computationally prohibitive as the number of variables grows. For example, given 100 binary variables , computing a single marginal using and the above formula would involve summing over possible values for . If it is known that the probability mass function factors in a convenient way, belief propagation allows the marginals to be computed much more efficiently. Description of the sum-product algorithm Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields in particular). We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables and factors , with edges between variables and the factors in which they appear. We can write the joint mass function: where is the vector of neighboring variable nodes to the factor node . Any Bayesian network or Markov random field can be represented as a factor graph by using a factor for each node with its parents or a factor for each node with its neighborhood respectively. The algorithm works by passing real valued functions called messages along the edges between the nodes. More precisely, if is a variable node and is a factor node connected to in the factor graph, then the messages from to and the messages from to are real-valued functions , whose domain is the set of values that can be taken by the random variable associated with , denoted . These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation: A message from a variable node to a factor node is defined by for , where is the set of neighboring factor nodes of . If is empty then is set to the uniform distribution over . A message from a factor node to a variable node is defined to be the product of the factor with messages from all other nodes, marginalized over all variables except the one associated with ,for , where is the set of neighboring (variable) nodes to . If is empty, then , since in this case . 
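To make the message definitions above concrete, here is a minimal sum-product sketch in Python on an assumed toy factor graph, a chain of three binary variables; the factor tables, variable names and flooding schedule are illustrative choices rather than anything prescribed by the algorithm:

```python
import numpy as np
from itertools import product

# Assumed toy factor graph: a chain x0 -- f01 -- x1 -- f12 -- x2 (a tree).
# Each factor is a table indexed by the states of the variables in its scope.
factors = {
    "f01": (("x0", "x1"), np.array([[1.0, 0.5], [0.5, 2.0]])),
    "f12": (("x1", "x2"), np.array([[1.0, 0.2], [0.2, 1.0]])),
}
variables = {"x0": 2, "x1": 2, "x2": 2}  # number of states of each variable
var_neighbors = {v: [f for f, (vs, _) in factors.items() if v in vs] for v in variables}

def sum_product(n_iters=10):
    # all messages start out uniform; repeated flooding updates converge on a tree
    msg_vf = {(v, f): np.ones(variables[v]) for v in variables for f in var_neighbors[v]}
    msg_fv = {(f, v): np.ones(variables[v]) for f, (vs, _) in factors.items() for v in vs}
    for _ in range(n_iters):
        # variable -> factor: product of messages from the other neighbouring factors
        for v in variables:
            for f in var_neighbors[v]:
                m = np.ones(variables[v])
                for g in var_neighbors[v]:
                    if g != f:
                        m = m * msg_fv[(g, v)]
                msg_vf[(v, f)] = m
        # factor -> variable: multiply the factor by the other incoming messages,
        # then sum out every variable except the recipient
        for f, (vs, table) in factors.items():
            for i, v in enumerate(vs):
                m = np.zeros(variables[v])
                for idx in product(*(range(variables[u]) for u in vs)):
                    w = table[idx]
                    for j, u in enumerate(vs):
                        if j != i:
                            w = w * msg_vf[(u, f)][idx[j]]
                    m[idx[i]] += w
                msg_fv[(f, v)] = m
    # belief at each variable: product of all incoming factor messages, normalised
    beliefs = {}
    for v in variables:
        b = np.ones(variables[v])
        for f in var_neighbors[v]:
            b = b * msg_fv[(f, v)]
        beliefs[v] = b / b.sum()
    return beliefs

print(sum_product())
```

Because the toy graph is a tree, the beliefs returned by this sketch are exact marginals; on graphs with cycles the same updates give only an approximation, as discussed below.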
As shown by the previous formula: the complete marginalization is reduced to a sum of products of simpler terms than the ones appearing in the full joint distribution. This is the reason that belief propagation is sometimes called sum-product message passing, or the sum-product algorithm. In a typical run, each message will be updated iteratively from the previous value of the neighboring messages. Different scheduling can be used for updating the messages. In the case where the graphical model is a tree, an optimal scheduling converges after computing each message exactly once (see next sub-section). When the factor graph has cycles, such an optimal scheduling does not exist, and a typical choice is to update all messages simultaneously at each iteration. Upon convergence (if convergence happened), the estimated marginal distribution of each node is proportional to the product of all messages from adjoining factors (missing the normalization constant): Likewise, the estimated joint marginal distribution of the set of variables belonging to one factor is proportional to the product of the factor and the messages from the variables: In the case where the factor graph is acyclic (i.e. is a tree or a forest), these estimated marginal actually converge to the true marginals in a finite number of iterations. This can be shown by mathematical induction. Exact algorithm for trees In the case when the factor graph is a tree, the belief propagation algorithm will compute the exact marginals. Furthermore, with proper scheduling of the message updates, it will terminate after two full passes through the tree. This optimal scheduling can be described as follows: Before starting, the graph is oriented by designating one node as the root; any non-root node which is connected to only one other node is called a leaf. In the first step, messages are passed inwards: starting at the leaves, each node passes a message along the (unique) edge towards the root node. The tree structure guarantees that it is possible to obtain messages from all other adjoining nodes before passing the message on. This continues until the root has obtained messages from all of its adjoining nodes. The second step involves passing the messages back out: starting at the root, messages are passed in the reverse direction. The algorithm is completed when all leaves have received their messages. Approximate algorithm for general graphs Although it was originally designed for acyclic graphical models, the Belief Propagation algorithm can be used in general graphs. The algorithm is then sometimes called loopy belief propagation, because graphs typically contain cycles, or loops. The initialization and scheduling of message updates must be adjusted slightly (compared with the previously described schedule for acyclic graphs) because graphs might not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same message definitions above, updating all messages at every iteration (although messages coming from known leaves or tree-structured subgraphs may no longer need updating after sufficient iterations). It is easy to show that in a tree, the message definitions of this modified procedure will converge to the set of message definitions given above within a number of iterations equal to the diameter of the tree. 
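As a sanity check of the exactness-on-trees claim, the beliefs from the sketch above can be compared with marginals obtained by direct enumeration of the joint distribution. This hedged sketch redefines the same illustrative toy model so it runs on its own:

```python
import numpy as np
from itertools import product

# same illustrative chain as in the previous sketch
factors = {
    "f01": (("x0", "x1"), np.array([[1.0, 0.5], [0.5, 2.0]])),
    "f12": (("x1", "x2"), np.array([[1.0, 0.2], [0.2, 1.0]])),
}
variables = {"x0": 2, "x1": 2, "x2": 2}

def brute_force_marginals(factors, variables):
    # enumerate every joint assignment, accumulate its unnormalised weight,
    # and read off the exact single-variable marginals
    names = list(variables)
    marginals = {v: np.zeros(variables[v]) for v in names}
    for assignment in product(*(range(variables[v]) for v in names)):
        state = dict(zip(names, assignment))
        weight = 1.0
        for scope, table in factors.values():
            weight *= table[tuple(state[v] for v in scope)]
        for v in names:
            marginals[v][state[v]] += weight
    return {v: m / m.sum() for v, m in marginals.items()}

# on the tree-shaped toy graph these agree with the sum-product beliefs above
print(brute_force_marginals(factors, variables))
```

Enumeration costs time exponential in the number of variables, which is exactly the blow-up that message passing avoids on tree-structured models.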
The precise conditions under which loopy belief propagation will converge are still not well understood; it is known that on graphs containing a single loop it converges in most cases, but the probabilities obtained might be incorrect. Several sufficient (but not necessary) conditions for convergence of loopy belief propagation to a unique fixed point exist. There exist graphs which will fail to converge, or which will oscillate between multiple states over repeated iterations. Techniques like EXIT charts can provide an approximate visualization of the progress of belief propagation and an approximate test for convergence. There are other approximate methods for marginalization including variational methods and Monte Carlo methods. One method of exact marginalization in general graphs is called the junction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree. The basic premise is to eliminate cycles by clustering them into single nodes. Related algorithm and complexity issues A similar algorithm is commonly referred to as the Viterbi algorithm, but also known as a special case of the max-product or min-sum algorithm, which solves the related problem of maximization, or most probable explanation. Instead of attempting to solve the marginal, the goal here is to find the values that maximizes the global function (i.e. most probable values in a probabilistic setting), and it can be defined using the arg max: An algorithm that solves this problem is nearly identical to belief propagation, with the sums replaced by maxima in the definitions. It is worth noting that inference problems like marginalization and maximization are NP-hard to solve exactly and approximately (at least for relative error) in a graphical model. More precisely, the marginalization problem defined above is #P-complete and maximization is NP-complete. The memory usage of belief propagation can be reduced through the use of the Island algorithm (at a small cost in time complexity). Relation to free energy The sum-product algorithm is related to the calculation of free energy in thermodynamics. Let Z be the partition function. A probability distribution (as per the factor graph representation) can be viewed as a measure of the internal energy present in a system, computed as The free energy of the system is then It can then be shown that the points of convergence of the sum-product algorithm represent the points where the free energy in such a system is minimized. Similarly, it can be shown that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a stationary point of a free energy approximation. Generalized belief propagation (GBP) Belief propagation algorithms are normally presented as message update equations on a factor graph, involving messages between variable nodes and their neighboring factor nodes and vice versa. Considering messages between regions in a graph is one way of generalizing the belief propagation algorithm. There are several ways of defining the set of regions in a graph that can exchange messages. One method uses ideas introduced by Kikuchi in the physics literature, and is known as Kikuchi's cluster variation method. Improvements in the performance of belief propagation algorithms are also achievable by breaking the replicas symmetry in the distributions of the fields (messages). 
This generalization leads to a new kind of algorithm called survey propagation (SP), which have proved to be very efficient in NP-complete problems like satisfiability and graph coloring. The cluster variational method and the survey propagation algorithms are two different improvements to belief propagation. The name generalized survey propagation (GSP) is waiting to be assigned to the algorithm that merges both generalizations. Gaussian belief propagation (GaBP) Gaussian belief propagation is a variant of the belief propagation algorithm when the underlying distributions are Gaussian. The first work analyzing this special model was the seminal work of Weiss and Freeman. The GaBP algorithm solves the following marginalization problem: where Z is a normalization constant, A is a symmetric positive definite matrix (inverse covariance matrix a.k.a. precision matrix) and b is the shift vector. Equivalently, it can be shown that using the Gaussian model, the solution of the marginalization problem is equivalent to the MAP assignment problem: This problem is also equivalent to the following minimization problem of the quadratic form: Which is also equivalent to the linear system of equations Convergence of the GaBP algorithm is easier to analyze (relatively to the general BP case) and there are two known sufficient convergence conditions. The first one was formulated by Weiss et al. in the year 2000, when the information matrix A is diagonally dominant. The second convergence condition was formulated by Johnson et al. in 2006, when the spectral radius of the matrix where D = diag(A). Later, Su and Wu established the necessary and sufficient convergence conditions for synchronous GaBP and damped GaBP, as well as another sufficient convergence condition for asynchronous GaBP. For each case, the convergence condition involves verifying 1) a set (determined by A) being non-empty, 2) the spectral radius of a certain matrix being smaller than one, and 3) the singularity issue (when converting BP message into belief) does not occur. The GaBP algorithm was linked to the linear algebra domain, and it was shown that the GaBP algorithm can be viewed as an iterative algorithm for solving the linear system of equations Ax = b where A is the information matrix and b is the shift vector. Empirically, the GaBP algorithm is shown to converge faster than classical iterative methods like the Jacobi method, the Gauss–Seidel method, successive over-relaxation, and others. Additionally, the GaBP algorithm is shown to be immune to numerical problems of the preconditioned conjugate gradient method Syndrome-based BP decoding The previous description of BP algorithm is called the codeword-based decoding, which calculates the approximate marginal probability , given received codeword . There is an equivalent form, which calculate , where is the syndrome of the received codeword and is the decoded error. The decoded input vector is . This variation only changes the interpretation of the mass function . Explicitly, the messages are where is the prior error probability on variable This syndrome-based decoder doesn't require information on the received bits, thus can be adapted to quantum codes, where the only information is the measurement syndrome. In the binary case, , those messages can be simplified to cause an exponential reduction of in the complexity Define log-likelihood ratio , , then where The posterior log-likelihood ratio can be estimated as References Further reading Bickson, Danny. (2009). 
Gaussian Belief Propagation Resource Page —Webpage containing recent publications as well as Matlab source code. Coughlan, James. (2009). A Tutorial Introduction to Belief Propagation. Mackenzie, Dana (2005). "Communication Speed Nears Terminal Velocity", New Scientist. 9 July 2005. Issue 2507 (Registration required) Graph algorithms Graphical models Coding theory
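Referring back to the Gaussian belief propagation section above: when the messages are kept in natural parameters (a precision and a linear term per directed edge), the scalar GaBP updates for solving Ax = b fit in a few lines. The following is a rough sketch rather than a reference implementation; the flooding schedule, fixed iteration count and the diagonally dominant toy matrix are assumptions made for illustration:

```python
import numpy as np

def gabp_solve(A, b, n_iters=50):
    """Scalar Gaussian belief propagation sketch for solving A x = b.

    Assumes A is symmetric; the diagonally dominant toy matrix below satisfies
    the sufficient convergence condition mentioned in the text. Each directed
    edge i -> j carries a precision message P[i, j] and a linear-term message h[i, j].
    """
    n = len(b)
    P = np.zeros((n, n))
    h = np.zeros((n, n))
    neighbors = [[j for j in range(n) if j != i and A[i, j] != 0.0] for i in range(n)]

    for _ in range(n_iters):
        P_new, h_new = np.zeros_like(P), np.zeros_like(h)
        for i in range(n):
            for j in neighbors[i]:
                # combine node i's own potential with all incoming messages except j's
                prec = A[i, i] + sum(P[k, i] for k in neighbors[i] if k != j)
                lin = b[i] + sum(h[k, i] for k in neighbors[i] if k != j)
                # marginalise x_i out of the pairwise potential exp(-x_i * A[i, j] * x_j)
                P_new[i, j] = -A[i, j] ** 2 / prec
                h_new[i, j] = -A[i, j] * lin / prec
        P, h = P_new, h_new

    # node beliefs: posterior precisions and means; when the iteration converges,
    # the posterior means coincide with the solution of A x = b
    prec = np.array([A[i, i] + sum(P[k, i] for k in neighbors[i]) for i in range(n)])
    lin = np.array([b[i] + sum(h[k, i] for k in neighbors[i]) for i in range(n)])
    return lin / prec

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp_solve(A, b))       # GaBP estimate of the means
print(np.linalg.solve(A, b))  # direct solution for comparison
```

For this diagonally dominant example the GaBP means agree with the direct solve; convergence is not guaranteed for arbitrary matrices, as the sufficient conditions discussed in that section indicate.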
Belief propagation
Mathematics
2,708
59,709,418
https://en.wikipedia.org/wiki/Q-PACE
CubeSat Particle Aggregation and Collision Experiment (Q-PACE) or Cu-PACE, was an orbital spacecraft mission that would have studied the early stages of proto-planetary accretion by observing particle dynamical aggregation for several years. Current hypotheses have trouble explaining how particles can grow larger than a few centimeters. This is called the meter size barrier. This mission was selected in 2015 as part of NASA's ELaNa program, and it was launched on 17 January 2021. As of March 2021, however, contact has yet to be established with the satellite, and the mission was feared to be lost. The mission was eventually terminated. Overview Q-PACE was led by Joshua Colwell at the University of Central Florida and was selected NASA's CubeSat Launch Initiative (CSLI) which placed it on Educational Launch of Nanosatellites ELaNa XX. The development of the mission was funded through NASA's Small Innovative Missions for Planetary Exploration (SIMPLEx) program. Observations of the collisional evolution and accretion of particles in a microgravity environment are necessary to elucidate the processes that lead to the formation of planetesimals (the building blocks of planets), km-size, and larger bodies, within the protoplanetary disk. The current hypotheses of planetesimal formation have difficulties in explaining how particles grow beyond one centimeter in size, so repeated experimentation in relevant conditions is necessary. Q-PACE was to explore the fundamental properties of low‐velocity (< ) particle collisions in a microgravity environment in an effort to better understand accretion in the protoplanetary disk. Several precursor tests and flight missions were performed in suborbital flights as well as in the International Space Station. The small spacecraft does not need accurate pointing or propulsion, which simplified the design. On 17 January 2021, Q-PACE launched on a Virgin Orbit Launcher One, an air launch to orbit rocket that was dropped from the Cosmic Girl airplane over the Pacific Ocean. As of March 2021, however, contact was not established with the satellite after it reached orbit, and the spacecraft was declared lost and the mission ended. Objectives The main objective of Q-PACE was to understand protoplanetary growth from pebbles to boulders by performing long-duration microgravity collision experiments. The specific goals are: Quantify the energy damping in multi-particle systems at low collision speeds (< to ) Identify the influence of a size distribution on the collision outcome. Observe the influence of dust on a multi-particle system. Quantify statistically rare events: observe a large number of similar collisions to arrive at a probabilistic description of collisional outcomes. Method Q-PACE was a 3U CubeSat with a collision test chamber and several particle reservoirs that contain meteoritic chondrules, dust particles, dust aggregates, and larger spherical particles. Particles will be introduced into the test chamber for a series of separate experimental runs. The scientists designed a series of experiments involving a broad range of particle size, density, surface properties, and collision velocities to observe collisional outcomes from bouncing to sticking as well as aggregate disruption in tens of thousands of collisions. The test chamber will be mechanically agitated to induce collisions that will be recorded by on‐board video for downlink and analysis. 
Long duration microgravity allows a very large number of collisions to be studied and produce statistically significant data. References Astrophysics Celestial mechanics Solar System dynamic theories Spacecraft launched in 2021 CubeSats Nanosatellites
Q-PACE
Physics,Astronomy
714
25,918,402
https://en.wikipedia.org/wiki/Psilocybe%20subacutipilea
Psilocybe subacutipilea is a species of mushroom in the family Hymenogastraceae. Described as new to science in 1994, the species is found in Colombia. Based on its blue staining reaction to touch, the mushroom is presumed to contain the psychoactive compound psilocybin. P. subacutipilea is classified in the section Mexicana of the genus Psilocybe. It is similar to the Brazilian species P. acutipilea. See also List of psilocybin mushrooms Psilocybin mushrooms References subacutipilea Entheogens Psychoactive fungi Psychedelic tryptamine carriers Fungi of Colombia Fungi described in 1994 Taxa named by Gastón Guzmán Fungus species
Psilocybe subacutipilea
Biology
148
43,026,302
https://en.wikipedia.org/wiki/Wilderness%20Reserve
Wilderness Reserve is a private estate of in Suffolk's Yox Valley assembled by Jon Hunt since 1995 incorporating estates of Sibton Park, , Heveningham Hall, , Cockfield Hall, and other land acquisitions within the catchments of the River Yox and Blyth Valley. The estate, which offers high-end holiday accommodation within an un-fenced landscape developed for wildlife and leisure activities, includes a recently completed design for parkland and lakes by celebrated landscape architect Capability Brown (1716–1783). History The land and buildings for Wilderness Reserve have been assembled over time with various land purchases by Hunt. The first purchase was Heveningham Hall in 1994 of , but it was the later purchase of the nearby Sibton Park estate that marked the start of the main Wilderness Reserve project. Various other acquisitions of land and buildings have also been made, including Cockfield Hall. Capability Brown design realised Designs for the grounds of Heveningham Hall commissioned from Capability Brown in 1782 by Gerard Vanneck, the then owner of the hall, were only partly implemented following the death of the client the following year. Hunt set about restoring Brown's original design soon after purchasing the hall in 1994 and sought the help of landscape architect Kim Wilkie. The restoration of Brown's vision required the removal of modern features inconsistent with an 18th-century design: concrete roads, car parks, telegraph poles and farm outbuildings were either demolished or buried. Wilkie says that "98 per cent" of Brown's original 600-acre design at Heveningham is now in place. Wider areas have been developed according to principles that Brown would recognise as consistent with an Arcadian pleasure ground including lakes, parkland and woods and various historic manor houses. Rewilding The estate includes part of the valleys of the River Blyth and River Yox that lead to the nearby Minsmere nature reserve on the Heritage Coast. Hunt has said that he wanted to allow nature to recover naturally by rewilding much of the land. Country Life magazine reported, "…sharp-leaved fluellen, field madder, heartsease, corn mint – these and other plants that an arable farmer would regard as weeds flourish unsprayed". Returning this farmland to the wild over two decades has allowed significant numbers of animal species, flora and fauna to settle in the area. Hunt has created new areas of pines, lakes and meadows, planting 800,000 trees and managing land for wildflowers. 2,000 nest boxes have been installed and there are now breeding populations of raptors, barn owls, buzzards and other at-risk species, together with many other mammal, amphibian, moth and butterfly populations. Country Life reports that the estate supports 13 pairs of breeding owls and is visited by bitterns. In 2013 it was reported that 72,000 ash trees under the threat of European chalara disease would need to be removed from the estate, to be replaced by hornbeam and oak. Accommodation Many buildings have been converted to offer high-end holiday accommodation: Sibton Park, an 1827 Georgian Grade II listed manor house. Sibton Park Walled Garden, accommodation for 16 people (as featured in New York Times in 2016). Garden Cottage, sleeps 6, abutting the Walled Garden Farmhouse, sleeps 12. Thatched house close to walled garden.
Cartshed Clockhouse Barn Gate Lodges Hex Cottage Moat Cottage The Grange The Cider House The Hovel The Tabernacle Tennis Pavilion Together with various farmhouses, lodges and barns Hunt's private residence, Heveningham Hall is adjacent to the estate, but some buildings within the estate have been made available for holiday accommodation, including a clockhouse, barn and a gate lodge, as well as a farmhouse said to have been visited by Alexa Chung. In 2013, Hunt told the Financial Times: "Farming alone won't pay for a modern estate to survive. Real estate will." References Geography of Suffolk Wildlife conservation Rewilding in the United Kingdom Environmentalism in England
Wilderness Reserve
Biology
817
30,874,055
https://en.wikipedia.org/wiki/Delhi%20Metro
The Delhi Metro is a rapid transit system that serves Delhi and the adjoining satellite cities of Ghaziabad, Faridabad, Gurugram, Noida, Bahadurgarh, and Ballabhgarh in the National Capital Region of India. The system consists of 10 colour-coded lines serving 257 stations, with a total length of . It is India's largest and busiest metro rail system and the second-oldest, after the Kolkata Metro. The metro has a mix of underground, at-grade, and elevated stations using broad-gauge and standard-gauge tracks. The metro makes over 4,300 trips daily. Construction began in 1998, and the first elevated section (Shahdara to Tis Hazari) on the Red Line opened on 25 December 2002. The first underground section (Vishwa Vidyalaya – Kashmere Gate) on the Yellow Line opened on 20 December 2004. The network was developed in phases. Phase I was completed by 2006, followed by Phase II in 2011. Phase III was mostly complete in 2021, except for a small extension of the Airport Line which opened in 2023. Construction of Phase IV began on 30 December 2019. The Delhi Metro Rail Corporation (DMRC), a joint venture between the Government of India and Delhi, built and operates the Delhi Metro. The DMRC was certified by the United Nations in 2011 as the first metro rail and rail-based system in the world to receive carbon credits for reducing greenhouse-gas emissions, reducing annual carbon emission levels in the city by 630,000 tonnes. The Delhi Metro has interchanges with the Rapid Metro Gurgaon (with a shared ticketing system) and Noida Metro. On 22 October 2019, DMRC took over operations of the financially-troubled Rapid Metro Gurgaon. The Delhi Metro's annual ridership was 203.23 crore (2.03 billion) in 2023. The system will have interchanges with the Delhi-Meerut RRTS, India's fastest urban regional transit system. History Background The concept of mass rapid transit for New Delhi first emerged from a 1969 traffic and travel characteristics study in the city. Over the next several years, committees in a number of government departments were commissioned to examine issues related to technology, route alignment, and governmental jurisdiction. In 1984, the Urban Arts Commission proposed the development of a multi-modal transport system which would build three underground mass rapid transit corridors and augment the city's suburban railway and road transport networks. The city expanded significantly while technical studies and financing of the project were underway, doubling its population and increasing the number of vehicles five-fold between 1981 and 1998. Traffic congestion and pollution soared as an increasing number of commuters used private vehicles, and the existing bus system was unable to bear the load. A 1992 attempt to privatise the bus transport system compounded the problem, with inexperienced operators plying poorly-maintained, noisy and polluting buses on lengthy routes; this resulted in long waiting times, unreliable service, overcrowding, unqualified drivers, speeding and reckless driving which led to road accidents. The Government of India under Prime Minister H.D. Deve Gowda and the Government of Delhi set up the Delhi Metro Rail Corporation (DMRC) on 3 May 1995, with Elattuvalapil Sreedharan as its managing director. Mangu Singh replaced Sreedharan as DMRC managing director on 31 December 2011. Initial construction When the project was originally approved by the Union Cabinet in September 1996, it had three corridors.
In 1997, official development assistance loans from Japan were granted to finance and conduct the first phase of the system. Construction of the Delhi Metro began on 1 October 1998. To avoid problems experienced by the Kolkata Metro, which witnessed substantial delays and ran 12 times over budget due to "political meddling, technical problems and bureaucratic delays", the DMRC was created as a special-purpose vehicle vested with autonomy and power to execute the large project which involved many technical complexities in a difficult urban environment within a limited time frame. Putting the central and state governments on an equal footing gave an unprecedented level of autonomy and freedom to the company, which had full powers to hire people, decide on tenders, and control funds. The DMRC hired the Hong Kong MTRC as a technical consultant on rapid-transit operation and construction techniques. Construction proceeded smoothly except for a major disagreement in 2000, when the Ministry of Railways forced the system to use broad gauge despite the DMRC's preference for standard gauge. This decision led to an additional capital expenditure of . The Delhi Metro's first line, the Red Line, was inaugurated by Prime Minister Atal Bihari Vajpayee on 24 December 2002. The metro became India's second underground rapid transit system, after the Kolkata Metro, when the Vishwa Vidyalaya–Kashmere Gate section of the Yellow Line opened on 20 December 2004. The underground line was inaugurated by Prime Minister Manmohan Singh. The project's first phase was completed in 2006, on budget and almost three years ahead of schedule, an achievement described by Business Week as "nothing short of a miracle". Phase I A 64.75 kilometer (40.23 miles) network of 59 stations was constructed in Delhi, encompassing the initial sections of the Red, Yellow, and Blue Lines. The stations were opened to the public between 25 December 2002 and 11 November 2006. Phase II A total of network of 86 stations and 10 routes and extensions was built. Seven routes were extensions of the Phase I network, three were new colour-coded lines, and three routes connect to other cities (the Yellow Line to Gurgaon and the Blue Line to Noida and Ghaziabad) of the national capital region in the states of Haryana and Uttar Pradesh. At the end of Phases I and II, the network's total length was and 145 stations became operational between 4 June 2008 and 27 August 2011. Phase III Phase I (Red, Yellow and Blue Lines) and Phase II (Green, Violet, and Airport Express Lines) focused on adding radial lines to expand the network. To further reduce congestion and improve connectivity, Phase III included eight extensions to existing lines, two ring lines (the Pink and Magenta Lines) and the Grey Line. It has 28 underground stations, three new lines and seven route extensions, totaling , at a cost of . The three new Phase III lines are the Pink Line on Inner Ring Road (Line 7), the Magenta Line on Outer Ring Road (Line 8) and the Grey Line connecting Dwarka and Najafgarh (Line 9). Work on Phase III began in 2011, with 2016 the planned deadline. Over 20 tunnel-boring machines were used simultaneously to expedite construction, which was completed in March 2019 (except for a small stretch due to non-availability of land). Short extensions were later added to Phase III, which was expected to be completed by the end of 2020, but construction was delayed due to the COVID-19 pandemic. 
It was completed on 18 September 2021 with the opening of the Grey Line extension from Najafgarh to Dhansa Bus Stand. An extension of the Airport Line to Yashobhoomi Dwarka Sector - 25 metro station was later added, and it was completed on 17 September 2023. Driverless operations on the Magenta line began on 28 December 2021, making it the Delhi Metro's (and India's) first driverless metro line. On 25 November 2021, the Pink Line also began driverless operations. The total driverless DMRC network is nearly , putting Delhi Metro in fourth position globally among such networks behind Kuala Lumpur. The expected daily ridership of the network after the completion of Phase III was estimated at 53.47 lakh passengers. Actual DMRC ridership was 27.79 lakh in 2019–20, 51.97 percent of the projected ridership. Actual ridership of the Phase III corridors was 4.38 lakh, compared with a projected ridership of 20.89 lakh in 2019–20 (a deficit of 79.02 percent). The communication-based train control (CBTC) on Phase III trains enables them to run at a 90-second headway, although the actual headway between trains is higher because of the relatively low demand on the new corridors. Keeping the short headway and other constraints in mind, DMRC changed its decision to build nine-car-long stations for new lines and opted for shorter stations which can accommodate six-car trains. Phase IV Phase IV, with a length of and six lines, was finalized by the Government of Delhi in December 2018. Approval from the government of India was received for three priority corridors in March 2019. Construction of the corridors began on 30 December 2019, with an expected completion date of 2026. The metro's total length will exceed at the end of Phase IV, not including other independently operated systems in the National Capital Region such as the Aqua Line of the Noida-Greater Noida Metro and the Rapid Metro Gurgaon which connect to the Delhi Metro. Construction Incidents On 19 October 2008, a launching gantry and part of the overhead Blue Line extension under construction in Laxmi Nagar collapsed and fell on a passing bus. Workers were using a crane to lift a 400-tonne concrete span of the bridge when the gantry and a span of the bridge collapsed on the bus. The driver and a construction worker were killed. On 12 July 2009, a section of a bridge collapsed while it was being erected at Zamrudpur, east of Kailash, on the Central Secretariat – Badarpur corridor. Six people died and 15 were injured. A crane removing the debris collapsed the following day and collapsed two other nearby cranes, injuring six. On 22 July 2009, a worker at the Ashok Park Metro station was killed when a steel beam fell on him. Over a hundred people, including 93 workers, have died since work on the metro began in 1998. On 23 April 2018, five people were injured when an iron girder fell off the elevated section of a Metro structure under construction at the Mohan Nagar intersection in Ghaziabad. A car, an auto rickshaw, and a motorbike were also damaged in the incident. Expansion Gurugram Upcoming/Under construction Metro Line The Haryana Mass Rapid Transit Corporation (HMRTC) has plans to establish a metro network spanning 188 kilometers in Gurugram. Gurugram Metro Rail Limited (GMRL) will be responsible for constructing, maintaining, and operating this metro line, similar to the Delhi Metro Rail Corporation. Currently, all these lines will be developed in the first phase, with further expansion planned in the second/upcoming phase. 
Proposed Phase V Former DMRC managing director E. Sreedharan said that by the time Phase IV is completed, the city will need Phase V to cope with increased population and transport needs. Planning for this phase has not begun, but the following corridor has been suggested for the near future: Yamuna Bank – Loni border: , dropped from Phase IV expansion Central Vista Loop Line is a part of Central Vista Redevelopment Project. Delhi Air Train or Automated People's Mover is a part of Indira Gandhi International Airport expansion which will connect T1, T2, T3 and Aerocity. A detailed project report (DPR) has been prepared for the extension of the Yellow Line (Delhi Metro) to Khera Kalan in North Delhi from Samaypur Badli metro station, with a proposed station at Siraspur along the route. Haryana and UP connectivity Haryana projects Gurugram Metro loop (from HUDA City Centre to Cyber City) - approved: The total length of the corridor will be about , consisting of 27 elevated stations with six interchange stations. This link would start at HUDA City Centre and move towards Sector 45, Cyber Park, district shopping centre, Sector 47, Subhash Chowk, Sector 48, Sector 72 A, Hero Honda Chowk, Udyog Vihar Phase 6, Sector 10, Sector 37, Basai village, Sector 9, Sector 7, Sector 4, Sector 5, Ashok Vihar, Sector 3, Bajghera Road, Palam Vihar Extension, Palam Vihar, Sector 23 A, Sector 22, Udyog Vihar Phase 4, Udyog Vihar Phase 5 and finally merge in existing Metro network of Rapid Metro Gurgaon at Moulsari Avenue station near Cyber City. Gurugram (from Rezang La Chowk in Palam Vihar to IGI Airport (IICC - Dwarka Sector 25 metro station)) - proposed: connect Gurugram Loop to IGI Airport by connecting Palam Vihar to Delhi Airport Metro Express (Orange Line) at existing IICC - Dwarka Sector 25 metro station (India International Convention and Expo Centre) which also connects to Blue Line at Dwarka Sector 21 metro station). It will likely be a nearly 6 km extension of Orange Line from IICC Dwarka to Bamnoli Chowk (southeast end of IICC), Nykaa village, Bijwasan railway station (BWSN), Gurugram Sector-51 and connect to Gurugram metro network near Palam Vihar Halt railway station (PLVR). HUDA City Centre to Manesar City - approved: An extension of Yellow Line, included in the Gurgaon Masterplan 2031, approved by the Haryana govt will go up to Panchgaon Chowk in Manesar, where it will interchange with Delhi–Alwar Regional Rapid Transit System, Haryana Orbital Rail Corridor (Panchgaon), Western Peripheral Expressway's Multimodal Transit Centre and Jhajjar-Palwal rail line. Gurgaon – Faridabad metro - DPR ready: In May 2020, the Detailed Project Report (DPR) and survey for the long Gurgaon-Faridabad metro link from Vatika Chowk in Gururam to Bata Chowk in Faridabad was completed which will have 8 stations, of which the elevated stretch along the Gurgaon-Faridabad Road through eco-sensitive wildlife corridor will be elevated. Bahadurgarh (Brigadier Hoshiyar Singh metro station) – Rohtak City: A Green Line extension partially approved to Asaudha Bahadurgarh to Asaudah railway station section, to connect with Haryana Orbital Rail Corridor at Asaudha station, is covered in FY 2023-24 budget of Haryana govt. Dhansa Bus Stand – Jhajjar City: A Grey Line extension, proposed but not approved Uttar Pradesh (UP) projects Shiv Vihar – Loni: Proposed but not approved Noida – Noida International Airport: surface line along the Yamuna Expressway serving the proposed Noida International Airport. 
The line, envisioned to be completed by 2025, will connect with the Noida Metro. Integration with RapidX The RapidX is a semi-high-speed regional rapid transit system (RRTS) which aims to connect Delhi with its neighbouring cities via eight lines of semi-high-speed trains operating at a maximum speed of . Phase I of the project consists of three corridors: Delhi–Meerut, Delhi–Alwar, and Delhi–Panipat corridor. The Delhi–Meerut corridor, also known as the Delhi–Meerut RRTS, is currently under development by the National Capital Region Transport Corporation (NCRTC). The Delhi–Meerut RRTS is long and costs . It will comprise 14 stations (with nine additional stations for the Meerut Metro) and two depots. Three of the 14 stations (Sarai Kale Khan, New Ashok Nagar, and Anand Vihar) will be in Delhi, and are planned for seamless integration with the Delhi Metro. Lines Red Line (Line 1) The Red Line, the first metro line opened, connects Rithala in the west to Shaheed Sthal (New Bus Adda) in the east for a distance of . Partly elevated and partly at grade, it crosses the Yamuna River between the Kashmere Gate and Shastri Park stations. The opening of the first stretch on 24 December 2002, between Shahdara and Tis Hazari, crashed the ticketing system due to demand. Subsequent sections were opened from Tis Hazari – Trinagar (later renamed Inderlok) on 4 October 2003, Inderlok – Rithala on 31 March 2004, and Shahdara – Dilshad Garden on 4 June 2008. The Red Line has interchanges at Kashmere Gate with the Yellow and Violet Lines, at Inderlok with the Green Line, and at Netaji Subhash Place and Welcome with the Pink Line. An interchange with the Blue Line at Mohan Nagar is planned. Six-coach trains were commissioned on the line on 24 November 2013. An extension from Dilshad Garden to Shaheed Sthal (New Bus Adda) opened on 8 March 2019. The metro introduced a set of two eight-coach trains on the Red Line, converted from the existing fleet of 39 six-coach trains, in November 2022. Yellow Line (Line 2) The Yellow Line, the metro's second line, was its first underground line. Running north to south, it connects Samaypur Badli with Millennium City Centre Gurugram in Gurugram. The northern and southern parts of the line are elevated, and the central section (which passes through some of the most congested parts of Delhi) is underground. The underground section between Vishwa Vidyalaya and Kashmere Gate opened on 20 December 2004; the Kashmere Gate – Central Secretariat section opened on 3 July 2005, and Vishwa Vidyalaya – Jahangirpuri on 4 February 2009. The line has India's second-deepest metro station at Chawri Bazar, below ground level. An additional stretch from Qutab Minar to Millennium City Centre Gurugram, initially operating separately from the mainline, opened on 21 June 2010; the Chhatarpur station on this stretch opened on 26 August of that year. Due to delays in acquiring land to construct the station, it was built with prefabricated structures in nine months and is the only Delhi Metro station made completely of steel. The connecting link between Central Secretariat and Qutub Minar opened on 3 September 2010. On 10 November 2015, the line was further extended between Jahangirpuri and Samaypur Badli in Outer Delhi. 
Interchanges are available with the Red Line and Kashmere Gate ISBT at Kashmere Gate, with the Blue Line at Rajiv Chowk, with the Violet Line at Kashmere Gate and Central Secretariat, with the Airport Express at New Delhi, with the Pink Line at Azadpur and Dilli Haat - INA, with the Magenta Line at Hauz Khas, with Rapid Metro Gurgaon at Sikanderpur, and with Indian Railways at Chandni Chowk and New Delhi. The Yellow Line is the metro's first line to replace four-coach trains with six- and eight-coach configurations. The Metro Museum at Patel Chowk metro station, South Asia's only rapid-transit museum, has a collection of display panels, historical photographs and exhibits tracing the genesis of the Delhi Metro. The museum was opened on 1 January 2009. Blue Line (Lines 3 and 4) The Blue Line, the third line of the metro open, was the first to connect areas outside Delhi. Mainly elevated and partly underground, it connects Dwarka Sub City in the west with the satellite city of Noida in the east for a distance of . The line's first section, between Dwarka and Barakhamba Road, opened on 31 December 2005, and subsequent sections opened between Dwarka – Dwarka Sector 9 on 1 April 2006, Barakhamba Road – Indraprastha on 11 November 2006, Indraprastha – Yamuna Bank on 10 May 2009, Yamuna Bank – Noida City Centre on 12 November 2009, and Dwarka Sector 9 – Dwarka Sector 21 on 30 October 2010. The line crosses the Yamuna River between the Indraprastha and Yamuna Bank stations, and has India's second extradosed bridge across the Northern Railways mainlines near Pragati Maidan. A branch of the Blue Line, inaugurated on 8 January 2010, runs for from the Yamuna Bank station to Anand Vihar in East Delhi. It was extended to Vaishali on 14 July 2011. A stretch from Dwarka Sector 9 to Dwarka Sector 21 opened on 30 October 2010. On 9 March 2019, a extension from Noida City Centre to Noida Electronic City was opened by Prime Minister Narendra Modi. Interchanges are available with the Aqua Line (Noida Metro) Noida Sector 51 station at Noida Sector 52, with the Yellow Line at Rajiv Chowk, with the Green Line at Kirti Nagar, with the Violet Line at Mandi House, with the Airport Express at Dwarka Sector 21, with the Pink Line at Rajouri Garden, Mayur Vihar Phase-I, Karkarduma and Anand Vihar, with the Magenta Line at Janakpuri West and Botanical Garden, and with Indian Railways and the Interstate Bus Station (ISBT) at Anand Vihar station (which connects with Anand Vihar Railway Terminal and Anand Vihar ISBT). An interchange with the Red Line at Mohan Nagar is planned. Green Line (Line 5) Opened in 2010, the Green Line (Line 5) is the metro's fifth and its first standard-gauge line; the others were broad gauge. It runs between Inderlok (a Red Line station) and Brigadier Hoshiyar Singh, with a branch line connecting its Ashok Park Main station with Kirti Nagar on the Blue Line. The elevated line, built as part of Phase II, runs primarily along the busy NH 10 route in West Delhi. It has 24 stations, including an interchange, and covers . The line has India's first standard-gauge maintenance depot, at Mundka. It opened in two stages, with the Inderlok–Mundka section opening on 3 April 2010 and the Kirti Nagar–Ashok Park Main branch line opening on 27 August 2011. On 6 August 2012, to improve commuting in the National Capital Region, the government of India approved an extension from Mundka to Bahadurgarh in Haryana. 
The stretch has seven stations (Mundka Industrial Area, Ghevra, Tikri Kalan, Tikri Border, Pandit Shree Ram Sharma, Bahadurgarh City and Brigadier Hoshiyar Singh) between Mundka and Bahadurgarh, and opened on 24 June 2018. Interchanges are available with the Red Line at Inderlok, the Blue Line at Kirti Nagar and the Pink Line at Punjabi Bagh West. Violet Line (Line 6) The Violet Line is the sixth metro line opened and the second standard-gauge corridor, after the Green Line. The line connects Raja Nahar Singh in Ballabgarh via Faridabad to Kashmere Gate in New Delhi, with overhead and the rest underground. The first section between Central Secretariat and Sarita Vihar opened on 3 October 2010, hours before the inaugural ceremony of the 2010 Commonwealth Games, and connects Jawaharlal Nehru Stadium (the venue for the games' opening and closing ceremonies). Completed in 41 months, it includes a bridge over the Indian Railways mainlines and a cable-stayed bridge across a road flyover; it connects several hospitals, tourist attractions, and an industrial estate. Service is provided at five-minute intervals. An interchange with the Yellow Line is available at Central Secretariat through an integrated concourse. On 14 January 2011, the remaining portion from Sarita Vihar to Badarpur was opened; this added three new stations to the network. The section between Mandi House and Central Secretariat was opened on 26 June 2014, and a section between ITO and Mandi House was opened on 8 June 2015. A extension south to Escorts Mujesar in Faridabad was inaugurated by Prime Minister Narendra Modi on 6 September 2015. All nine stations on the Badarpur–Escorts Mujesar (Faridabad) section of the metro's Phase III received the highest rating (platinum) for adherence to green-building norms from the Indian Green Building Council (IGBC). The awards were given to DMRC Managing Director Mangu Singh by IGBC chair P. C. Jain on 10 September 2015. The line's Faridabad corridor is the longest corridor outside Delhi: 11 stations and . On 28 May 2017, the ITO–Kashmere Gate corridor was opened by Union Minister of Urban Development Venkaiah Naidu and Chief Minister of Delhi Arvind Kejriwal. The underground section is popularly known as the Heritage Line. Interchanges are available with the Red Line at Kashmere Gate, with the Yellow Line at Kashmere Gate and Central Secretariat, with the Blue Line at Mandi House, with the Pink Line at Lajpat Nagar and with the Magenta Line at Kalkaji Mandir. Airport Express Line / Orange (Line 7) The Airport Express line runs from New Delhi to Yashobhoomi Dwarka Sector - 25, linking the New Delhi railway station and Indira Gandhi International Airport. The line was operated by Delhi Airport Metro Express Pvt. Limited (DAMEL), a subsidiary of Reliance Infrastructure (the line's concessionaire until 30 June 2013). It is now operated by DMRC. The line was built at a cost of , of which Reliance Infrastructure invested and will pay fees in a revenue-share model. It has six stations (Dhaula Kuan and Delhi Aerocity became operational on 15 August 2011), and some have check-in facilities, parking, and eateries. Rolling stock consists of six-coach trains, operating at ten-minute intervals, with a maximum speed of . Originally scheduled to open before the 2010 Commonwealth Games, the line failed to obtain the mandatory safety clearance and was opened on 23 February 2011 after a delay of about five months. 
Sixteen months after beginning operations, it was shut down for viaduct repairs on 7 July 2012. The line reopened on 22 January 2013. On 27 June 2013, Reliance Infrastructure told DMRC that they were unable to operate the line beyond 30 June of that year. DMRC took over the line on 1 July 2013 with a 100-person operations and maintenance team. In January 2015, DMRC reported that the line's ridership had increased about 30 percent after a fare reduction of up to 40 percent the previous July. DMRC announced a further fare reduction on 14 September 2015, with a maximum fare of ₹60 and minimum of ₹10 instead of ₹100 and ₹20. DMRC said that this was done to reduce crowding on the Blue Line, diverting some Dwarka-bound passengers to the Airport Express Line (which is underutilised and faster than the Blue Line. The line's speed was increased from to on 24 June 2023, enabling a 16-minute ride from New Delhi to IGI Airport. Interchanges are available with the Yellow Line at New Delhi, with the Blue Line at Dwarka Sector 21, with the Durgabai Deshmukh South Campus metro station of the Pink Line at Dhaula Kuan, and with Indian Railways at New Delhi. An expansion of Dwarka Sector 25 was inaugurated on 17 September 2023 with the opening of the adjacent India International Convention Centre. Pink Line (Line 7) The Pink Line is the second new line of the Delhi Metro's third phase. It was opened on 14 March 2018, with an extension opening on 6 August. The Trilokpuri Sanjay Lake-to-Shiv Vihar section was opened on 31 October, and the Lajpat Nagar-to-Mayur Vihar Pocket I section opened on 31 December of that year. The final section, between Mayur Vihar Pocket I and Trilokpuri Sanjay Lake, was opened on 6 August 2021 after delays due to land-acquisition and rehabilitation issues. The Pink Line has 38 stations from Majlis Park to Shiv Vihar, both in North Delhi. With a length of , it is the Delhi Metro's longest line. The mostly-elevated line covers Delhi in a U-shaped pattern. It is also known as the Ring Road Line, since it runs along the busy Ring Road. The line has interchanges with most of the metro's other lines, including with the Red Line at Netaji Subhash Place and Welcome, with the Yellow Line at Azadpur and Dilli Haat – INA, with the Blue Line at Rajouri Garden, Mayur Vihar Phase-I, Anand Vihar and Karkarduma, with the Green Line at Punjabi Bagh West, with Dhaula Kuan of the Airport Express at Durgabai Deshmukh South Campus, with the Violet Line at Lajpat Nagar, with Indian Railways at Hazrat Nizamuddin and Anand Vihar Terminal, and the ISBTs at Anand Vihar and Sarai Kale Khan. The Pink Line reaches the Delhi Metro's highest point at Dhaula Kuan, passing over the Dhaula Kuan grade-separator flyovers and the Airport Express Line. Magenta Line (Line 8) The Magenta Line is the Delhi Metro's first new line of its third phase. The Botanical Garden-to-Kalkaji Mandir section opened on 25 December 2017, and the remainder of the line opened on 28 May 2018. It has 26 stations, from Krishna Park Extension to Botanical Garden. The line directly connects to Terminal 1D of Indira Gandhi International Airport. The Hauz Khas station on this line and the Yellow Line is the deepest metro station, at a depth of . The Magenta Line has interchanges with the Yellow Line at Hauz Khas, with the Blue Line at Janakpuri West and Botanical Garden, and with the Violet Line at Kalkaji Mandir. India's first driverless train service began on the Magenta Line in December 2020. 
Grey Line (Line 9) The Grey Line (also known as Line 9), the metro's shortest, runs from Dwarka to Dhansa Bus Stand in western Delhi. The line has four stations (Dhansa Bus Stand, Najafgarh, Nangli and Dwarka), and has an interchange with the Blue Line at Dwarka. The Najafgarh-to-Dwarka section was opened on 4 October 2019. The extension to Dhansa Bus Stand was scheduled to open in December 2020, but construction was delayed by the COVID-19 pandemic; it opened on 18 September 2021. Network The Delhi Metro has been undergoing construction in phases. Phase I consisted of 59 stations and of route length, of which is underground and at grade or elevated. The inauguration of the Dwarka–Barakhamba Road corridor of the Blue Line completed Phase I in October 2006. Phase II consists of of route length and 86 stations, and has been completed; the first section opened in June 2008, and the last section opened in August 2011. Phase III consists of 109 stations, three new lines and seven route extensions, totaling , at a cost of . Most of it was completed on 5 April 2019, except for a small section of the Pink Line between the Mayur Vihar Pocket 1 and Trilokpuri Sanjay Lake stations (opened on 6 August 2021), the Grey Line extension from Najafgarh to Dhansa Bus Stand (opened on 18 September 2021), and the Airport Express extension from Dwarka Sector 21 to Yashobhoomi Dwarka Sector 25 (completed on 17 September 2023). Phase IV, with six lines totaling , was finalized in July 2015. Of this, across three lines (priority corridors) with 45 stations was approved by the government of India for construction on 7 March 2019. The Golden Line was lengthened in October 2020, making the project long. The first station of the Magenta Line's Phase IV extension toward RK Ashram Marg, Krishna Park Extension, opened on 5 January 2025; the rest of the Phase IV network, along with other planned routes, is planned to be completed by 2029 at the earliest. Operations Trains operate at intervals ranging from one to two minutes during peak hours to five to ten minutes off-peak, between 05:00 and 00:00. They typically travel up to , and stop for about 20 seconds at each station. Automated station announcements are in Hindi and English. Many stations have ATMs, food outlets, cafés, convenience stores and mobile-recharge facilities. Eating, drinking, smoking, and chewing gum are prohibited. The metro has a sophisticated fire alarm system for advance warning in emergencies, and fire retardant material is used in trains and stations. Navigation information is available on Google Maps. Since October 2010, the first coach of every train is reserved for women; the last coach is also reserved when the train changes tracks at the terminal stations on the Red, Green and Violet Lines. The mobile Delhi Metro Rail app has been introduced for iPhone and Android users with information such as the location of the nearest metro station, fares, parking availability, nearby tourist attractions, and security and emergency helpline numbers. Security Security has been provided by the Central Industrial Security Force (CISF) unit for the DMRC since 2007. Closed-circuit cameras monitor trains and stations, and their feeds are monitored by the CISF and Delhi Metro authorities. Over 7,000 CISF personnel have been deployed for security in addition to metal detectors, X-ray baggage-inspection systems, and detection dogs. Eighteen Delhi Metro Rail Police stations have been established, and about 5,200 CCTV cameras have been installed. Each underground station has 45 to 50 cameras, and each elevated station has 16 to 20 cameras. 
The cameras are monitored by the CISF and the Delhi Metro Rail Corporation. Intercoms are provided in each train car for emergency communication between passengers and the train operator. Periodic security drills are carried out at stations and on trains. The DMRC is considering raising station walls and railings for passenger safety. Ticketing The metro's fares were last revised on 10 October 2017, based on the recommendation of the 4th Fare Fixation Committee in May 2016. Metro commuters have five choices for ticket purchases: RFID token: RFID tokens are valid only for a single journey on the day of purchase. Their value depends on the distance travelled, with fares for a single journey ranging from to . Fares are calculated based on the distance between the origin and destination stations. As of 2024, they are no longer in use. Smart card: Smart cards are available for longer terms, and are the most convenient for frequent commuters. Valid for ten years from the date of purchase or the date of the last recharge, they are available in denominations of to . A 10-percent discount is given, with an additional 10-percent discount for off-peak travel (see the illustrative fare sketch below). A new card has a deposit, refundable on its return before expiry if physically undamaged. For women commuters, the Delhi government unsuccessfully proposed a fare-exemption scheme. A common ticketing facility, allowing commuters to use smart cards on Delhi Transport Corporation (DTC) buses and the metro, was introduced on 28 August 2018. Tourist card: Tourist cards can be used for unlimited travel on the Delhi Metro for short periods of time. There are two kinds of tourist cards, valid for one and three days. The cost of a one-day card is and a three-day card is , including a refundable deposit of paid at purchase. National Common Mobility Card: Part of the Indian government's One Nation, One Card policy, the National Common Mobility Card is an inter-operable transport card enabling a user to pay for travel, tolls and shopping, and to withdraw cash. Enabled through RuPay, the NCMC was commissioned on the Airport Express Line on 28 December 2020. In June 2023, DMRC completed the upgrade of its automatic fare collection (AFC) systems to be compliant with NCMC services. QR-code-based ticketing: A Delhi Metro QR ticket is a mobile-based ticket allowing travel in the same way as a token or recharge card. A ticket can be bought online with the Ridlr app. For entry and exit, the QR ticket is scanned at the AFC gates. Similar to mobile-based tickets, paper QR tickets can be bought at a station. Problems As the metro has expanded, high ridership on new trains has led to increasing overcrowding and delays. To alleviate the problem, eight-coach trains have been introduced on the Yellow and Blue Lines and more-frequent trains have been proposed. Infrequent, overcrowded and erratic feeder bus services connecting stations to nearby localities have also been a concern. Although the quality and cleanliness of the Delhi Metro have been praised, rising fares have been criticized; fares are higher than those of the bus services the metro replaced. According to a recent study, Delhi Metro fares are the second-most unaffordable among metros charging less than US$0.50 per ride. Another study finds that the Delhi Metro's ridership may be low for a network of its size, generating less traffic than would be expected of a comparable system. Feeder buses DMRC began its feeder bus service in 2007 with a fleet of 117 minibuses on 16 routes. 
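The stacked smart-card and off-peak discounts described in the ticketing section above can be illustrated with a short sketch. The following Python snippet is purely illustrative: the distance slabs and fare values are hypothetical placeholders (the actual fare table is not given above), and the assumption that the two 10-percent discounts compound one after the other is an assumption rather than DMRC's published rule.

```python
# Illustrative only: hypothetical fare slabs and an assumed discount rule,
# not official DMRC fares or DMRC software.

HYPOTHETICAL_SLABS = [
    # (maximum distance in km, hypothetical token fare in rupees)
    (2, 10), (5, 20), (12, 30), (21, 40), (32, 50), (float("inf"), 60),
]

def token_fare(distance_km: float) -> int:
    """Return the single-journey (token) fare for a trip of the given length."""
    for max_km, fare in HYPOTHETICAL_SLABS:
        if distance_km <= max_km:
            return fare
    return HYPOTHETICAL_SLABS[-1][1]  # fallback; not reached because the last slab is unbounded

def smart_card_fare(distance_km: float, off_peak: bool = False) -> float:
    """Apply a 10% smart-card discount, plus a further 10% off-peak.

    Compounding the two discounts (0.9 * 0.9 off-peak) is an assumed reading of
    the "additional 10-percent discount" wording, not a confirmed DMRC rule.
    """
    fare = token_fare(distance_km) * 0.90
    if off_peak:
        fare *= 0.90
    return round(fare, 2)

print(smart_card_fare(15))                 # 36.0 on a hypothetical 40-rupee slab
print(smart_card_fare(15, off_peak=True))  # 32.4 with both discounts applied
```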
In January 2024, it had a fleet of 47 electric feeder buses on five routes to nine metro stations: Kashmere Gate, Gokulpuri, Shastri Park, Laxmi Nagar, East Vinod Nagar - Mayur Vihar-II, Anand Vihar, Dilshad Garden, Vishwavidyalaya, and GTB Nagar. The routes are: MC-127: Kashmere Gate to Harsh Vihar MC-137: Shastri Park to Mayur Vihar Phase-III MC-137 (Mini): Udyog Bhawan to Vanijya Bhawan MC-341: Mayur Vihar Phase-III to Harsh Vihar ML-06: Vishwavidyalaya to Shankarpura Ridership Note that DMRC reports different metrics versus the daily ridership below. DMRC reports "daily passenger journeys"; for example, in 2022–23, DMRC reported that average daily passenger journeys were approximately 4.63 million per day, compared to 5.16 million per day in 2019–20 (pre-COVID). Metro service was suspended on 25 March 2020 due to the COVID-19 pandemic. Operations resumed on 12 September 2020, and the average daily ridership fell to 8.78 lakh (0.88 million) in FY 2020–21. The maximum daily ridership (passenger journeys) of 7.109 million was reported on 13 February 2024. * Includes Rapid Metro Gurgaon ^ From 2019 onwards the DMRC changed the ridership calculation to count every trip taken by a passenger on a line. This means that a passenger who takes two connections will count three times towards ridership. This is different from the more standard practice of counting entire journeys applied in other metro systems. Finances Summary financials The Delhi Metro has been operating with a loss in EBT (earnings before taxes) since 2010, although the loss has shrunk since 2015–16. Its EBITDA (earnings before interest, taxes, depreciation, and amortization) margin declined from 73 percent in FY 2007 to 27 percent in FY 2016–17 before improving to 30 percent in 2017–18. The metro began a naming policy for stations in 2014, awarded by an open e-tendering process, to generate non-fare revenue. Funding and Capitalisation DMRC is owned by the government of the National Capital Territory of Delhi and the government of India. Total debt was in March 2016, and equity capital was . The cost of the debt is zero percent for Union Government and Delhi Government loans, and from 0.01 to 2.3 percent for Japan International Cooperation Agency (JICA) loans. On 31 March 2016, was paid-up capital; the rest is reserves and surplus. Depots Delhi Metro has 15 depots. Some depots, such as Shastri Park and Yamuna Bank, are near their respective at-grade station complexes; others, such as Sarita Vihar and Mundka, are joined indirectly to the main line. The Najafgarh depot is unique in housing trains from the Blue and Grey Lines; the Sarita Vihar depot will house Violet and Golden Line trains in the future. The Phase III Kalindi Kunj and Vinod Nagar depots were built differently due to land-acquisition issues; the former has an extra elevated stabling yard adjacent to the Jasola Vihar - Shaheen Bagh station, and the latter has two sub-depots (one with two floors). An elevated stabling yard was also built adjacent to the Noida Electronic City station, but it is not considered a depot. As part of Phase IV, the Mukundpur depot will be expanded to accommodate the Pink and Magenta Lines without land-acquisition issues. The metro has two rail gauges. Phase I lines have broad-gauge rolling stock, and three Phase II lines have standard-gauge rolling stock. 
Trains are maintained at seven depots: at Khyber Pass and Sultanpur for the Yellow Line, Mundka for the Green Line, Najafgarh and Yamuna Bank for the Blue Line, Shastri Park for the Red Line, and Sarita Vihar for the Violet Line. Maglev trains were considered for some Phase III lines, but DMRC decided to continue with conventional rail in August 2012. By 31 March 2015, the company had a total of 1,306 coaches (220 trains). In addition to line extensions, two new lines (7 and 8) were proposed in Phase III. Unattended train operation (UTO) will be in 486 coaches (81 six-car trains). An additional 258 broad-gauge (BG) coaches for Lines 1 to 4 and 138 standard-gauge (SG) coaches for Lines 5 and 6 were proposed. At the end of Phase III, there would be 2,188 coaches (333 trains). Except for a few four-car trains on Line 5, 93 percent of the trains would have a six- or eight-car configuration at the end of Phase III. Broad gauge Rolling stock is provided by two major suppliers. Phase I rolling stock was supplied by a consortium of companies (Hyundai Rotem, Mitsubishi Corporation, and MELCO). The coaches look similar to the MTR Rotem EMU, but have only four doors; sliding doors, instead of plug doors, are used. The coaches were initially built in South Korea by Rotem, then in Bangalore by BEML through a technology transfer arrangement. The trains consist of four lightweight stainless-steel coaches with vestibules (permitting movement throughout them) and can carry up to 1,500 passengers, with 50 seated and 330 standing passengers per coach. The coaches are air-conditioned, equipped with automatic doors, microprocessor-controlled brakes and secondary air suspension, and can maintain an average speed of over a distance of . The system is extendable to eight coaches, and platforms have been designed accordingly. Phase II rolling stock is supplied by Bombardier Transportation, which received an order for 614 cars at a cost of about . Although the initial trains were made in Görlitz, Germany and Sweden, the remainder will be built at Bombardier's factory in Savli (near Vadodara). The four- and six-car trains have a capacity of 1,178 and 1,792 commuters, respectively. Coaches have closed-circuit television (CCTV) cameras with eight-hour backup, chargers for cell phones and laptops, and improved climate control. Standard gauge Standard-gauge rolling stock is manufactured by BEML at its factory in Bangalore, and most of these trains are supplied to BEML by Hyundai Rotem. The four-car trains have a capacity of 1,506 passengers, accommodating 50 seated and 292 standing passengers in each coach. The trains, with CCTV cameras in and outside the coaches, chargers for mobile phones and laptops, improved climate control and microprocessor-controlled disc brakes, will be capable of maintaining an average speed of over a distance of . Airport Express Eight six-car trains supplied by CAF Beasain were imported from Spain. CAF held five-percent equity in the DAME project, and Reliance Infrastructure held the remaining 95 percent before DMRC took over operations. Trains on this line have noise reduction and padded fabric seats. Coaches are equipped with LCD screens for entertainment and flight information. Trains have an event recorder which can withstand high levels of temperature and impact, and wheels have a flange-lubrication system for reduced noise and improved comfort. 
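The per-coach figures quoted above translate into train capacity by straightforward multiplication. The short Python sketch below illustrates that arithmetic only (it is not DMRC code); the quoted totals do not always match the simple product, presumably because the published capacities rest on different loading assumptions.

```python
# Illustrative arithmetic: nominal train capacity from the per-coach figures quoted above.

def train_capacity(coaches: int, seated_per_coach: int, standing_per_coach: int) -> int:
    """Nominal capacity, assuming every coach carries the same seated and standing load."""
    return coaches * (seated_per_coach + standing_per_coach)

# Phase I broad-gauge stock: 50 seated + 330 standing per coach.
print(train_capacity(4, 50, 330))  # 1520, close to the quoted ~1,500 for a four-coach train
print(train_capacity(8, 50, 330))  # 3040, if lengthened to eight coaches as the platforms allow

# Standard-gauge stock: 50 seated + 292 standing per coach.
print(train_capacity(4, 50, 292))  # 1368, versus the quoted 1,506 (different loading assumptions)
```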
Signaling and telecommunication The metro uses cab signaling with a centralised automatic train control system consisting of automatic operation, protection and signaling modules. A 380 MHz digital trunked TETRA radio communication system from Motorola Solutions is used on all lines to carry voice and data information. For the Blue Line, Siemens supplied the Sicas electronic interlocking, the Vicos OC 500 operation-control system and the LZB 700 M automation-control system. An integrated system with optical fibre cable, on-train radio, CCTV, and a centralised clock and public address system is used for telecommunication during normal operations and emergencies. Alstom supplied the signaling system for the Red and Yellow Lines, and Bombardier Transportation supplied its CITYFLO 350 signaling system for the Green and Violet Lines. The Airport Express line introduced WiFi service at all its stations on 13 January 2012. Connectivity in trains is expected in the future. WiFi service is provided by YOU Broadband and Cable India. In August 2017, WiFi service began at all 50 stations of the Blue Line. A fully automated, operator-less train system was offered to the metro by the French technology firm Thales. Environment and aesthetics The metro has received awards for environmentally friendly practices from organisations including the United Nations, RINA, and the International Organization for Standardization; it is the second metro in the world, after the New York City Subway, to be ISO 14001 certified for environmentally friendly construction. By March 2023, 64 metro stations, four sections on the central verge between piers, and 12 other Phase I and II locations on the network had rainwater harvesting for environmental protection; all 27 Phase-IV elevated stations will also harvest rainwater, and 52 recharge pits are being constructed for this purpose. It is the world's first railway project to earn carbon credits after being registered with the United Nations under the UN's Clean Development Mechanism, and has earned 400,000 carbon credits with the regenerative braking systems on its trains. DMRC installed the metro's first rooftop solar power plant at the Dwarka Sector-21 station in 2014. The network received 35 percent of its energy from renewable sources by April 2023, which it intends to increase to 50 percent by 2031. Of this, 30 percent comes from the Rewa Ultra Mega Solar Park in Madhya Pradesh, four percent (50 MWp) comes from rooftop solar panels, and one percent comes from a waste-to-energy plant in Ghazipur. DMRC has installed solar panels at 142 locations: 15 depots, 93 stations, and 34 other buildings. The metro has been promoted as an integral part of community infrastructure, and artwork depicting the local way of life has been displayed at stations. Students at local art colleges have designed murals at metro stations, and the viaduct pillars of some elevated sections have been decorated with mosaic murals created by local schoolchildren. The metro station at INA Colony has a gallery of artwork and handicrafts from across India, and all stations on the Central Secretariat – Qutub Minar section of the Yellow Line have panels depicting Delhi's architectural heritage. The Nobel Memorial Wall at Rajiv Chowk has portraits of the seven Indian Nobel laureates: Rabindranath Tagore, CV Raman, Hargobind Khorana, Mother Teresa, Subrahmanyan Chandrasekhar, Amartya Sen and Venkatraman Ramakrishnan. 
In popular culture A number of films have been shot in the Delhi Metro; the first was Bewafaa in November 2005. Delhi-6, Love Aaj Kal, PK, and Paa also have scenes filmed inside Delhi Metro trains and stations. Bang Bang! was filmed near the Mayur Vihar Extension metro station in March 2014, and the 2019 film War was filmed in the metro. See also Urban rail transit in India Delhi Suburban Railway Transport in Delhi National Capital Region Transport Corporation Delhi Transport Corporation List of suburban and commuter rail systems Lists of urban rail transit systems List of metro systems Metro Tunneling Group Notes References Further reading External links Delhi Metro route map Collection of Delhi Metro Images New Delhi Siemens Mobility projects Underground rapid transit in India Railway lines opened in 2002 Transport in Delhi Standard gauge railways in India 2002 establishments in Delhi Automated guideway transit
Delhi Metro
Technology,Engineering
10,109
52,325,211
https://en.wikipedia.org/wiki/Polyporus%20tuberaster
Polyporus tuberaster, commonly known as the tuberous polypore or stone fungus, is a species of fungus in the genus Polyporus. It is easily identified by the fact that it grows from a large sclerotium that can resemble buried wood or a potato. The yellow-brown cap is 4–15 cm wide, and ranges from convex to flat and even funnel-shaped. The whitish stalks can grow upwards of 10 cm high and 2–4 cm wide. The spores are white. The species is edible but also tough. References External links tuberaster Edible fungi Fungi described in 1821 Fungus species
Polyporus tuberaster
Biology
127