Dataset columns: id (int64, 39 to 79M), url (string, lengths 32–168), text (string, lengths 7–145k), source (string, lengths 2–105), categories (list, lengths 1–6), token_count (int64, 3 to 32.2k), subcategories (list, lengths 0–27)
68,697,715
https://en.wikipedia.org/wiki/Martin%20curve
The Martin curve is a power law used by oceanographers to describe the export to the ocean floor of particulate organic carbon (POC). The curve is controlled by two parameters: the reference depth in the water column, and a remineralisation parameter which is a measure of the rate at which the vertical flux of POC attenuates. It is named after the American oceanographer John Martin. The Martin curve has been used in the study of ocean carbon cycling and has contributed to understanding the role of the ocean in regulating atmospheric CO2 levels. Background The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle. POC is the link between surface primary production, the deep ocean, and marine sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO2 concentration. The biological carbon pump (BCP) is a crucial mechanism by which atmospheric CO2 is taken up by the ocean and transported to the ocean interior. Without the BCP, the pre-industrial atmospheric CO2 concentration (~280 ppm) would have risen to ~460 ppm. At present, the particulate organic carbon (POC) flux from the surface layer of the ocean to the ocean interior has been estimated to be 4–13 Pg C year−1. To evaluate the efficiency of the BCP, it is necessary to quantify the vertical attenuation of the POC flux with depth, because the deeper that POC is transported, the longer the CO2 will be isolated from the atmosphere. Thus, an increase in the efficiency of the BCP has the potential to cause an increase in ocean sequestration of atmospheric CO2 that would result in a negative feedback on global warming. Different researchers have investigated the vertical attenuation of the POC flux since the 1980s. In 1987, Martin et al. proposed the following power law function to describe the POC flux attenuation: Fz = F100 (z/100)^(−b) (1), where z is water depth (m), and Fz and F100 are the POC fluxes at depths of z metres and 100 metres respectively. Although other functions, such as an exponential curve, have also been proposed and validated, this power law function, commonly known as the "Martin curve", has been used very frequently in discussions of the BCP. The exponent b in this equation has been used as an index of BCP efficiency: the larger the exponent b, the higher the vertical attenuation rate of the POC flux and the lower the BCP efficiency. Moreover, numerical simulations have shown that a change in the value of b would significantly change the atmospheric CO2 concentration. Subsequently, other researchers have derived alternative remineralization profiles from assumptions about particle degradability and sinking speed. However, the Martin curve has become ubiquitous as the model that assumes slower-sinking and/or labile organic matter is preferentially depleted near the surface, so that sinking speed and/or remineralization timescale increase with depth. The Martin curve can be expressed in a slightly more general way as fp(z) = Cp z^(−b), where fp(z) is the fraction of the flux of particulate organic matter from a productive layer near the surface sinking through the depth horizon z [m], Cp [m^b] is a scaling coefficient, and b is a nondimensional exponent controlling how fp decreases with depth. The equation is often normalised to a reference depth zo, but this parameter can be readily absorbed into Cp. Vertical attenuation rate The vertical attenuation rate of the POC flux depends strongly on the sinking velocity and decomposition rate of POC in the water column.
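The attenuation described by equation (1) is straightforward to sketch numerically. The short Python snippet below is an illustrative sketch only: the reference flux is a made-up placeholder, and the default exponent b = 0.858 is simply the value commonly quoted for the open-ocean composite of Martin et al., not a fit to any dataset discussed here.

```python
# Minimal sketch of the Martin curve, equation (1): Fz = F100 * (z/100)**(-b).
# All numbers below are placeholder values for illustration.

def martin_curve_flux(z, f100, b=0.858, z_ref=100.0):
    """POC flux at depth z (metres), in the same units as f100, for z >= z_ref."""
    return f100 * (z / z_ref) ** (-b)

if __name__ == "__main__":
    f100 = 10.0  # hypothetical POC flux at 100 m, e.g. in mmol C m^-2 d^-1
    for z in (100, 250, 500, 1000, 2000, 4000):
        print(f"{z:>5} m: {martin_curve_flux(z, f100):6.2f}")
    # A larger exponent b means faster attenuation with depth,
    # i.e. a less efficient biological carbon pump:
    print(martin_curve_flux(1000, f100, b=1.2) < martin_curve_flux(1000, f100, b=0.6))  # True
```

The transfer efficiency between any two depths then follows directly as the ratio of the corresponding fluxes.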
Because POC is labile and has little negative buoyancy, it must be aggregated with relatively heavy materials called ballast to settle gravitationally in the ocean. Materials that may serve as ballast include biogenic opal (hereinafter "opal"), CaCO3, and aluminosilicates. In 1993, Ittekkot hypothesized that the drastic decrease from ~280 to ~200 ppm of atmospheric CO2 that occurred during the last glacial maximum was caused by an increase of the input of aeolian dust (aluminosilicate ballast) to the ocean, which strengthened the BCP. In 2002, Klaas and Archer, as well as Francois et al., who compiled and analyzed global sediment trap data, suggested that CaCO3, which has the largest density among possible ballast minerals, is globally the most important and effective facilitator of vertical POC transport, because the transfer efficiency (the ratio of the POC flux in the deep sea to that at the bottom of the surface mixed layer) is higher in subtropical and tropical areas where CaCO3 is a major component of marine snow. Reported sinking velocities of CaCO3-rich particles are high. Numerical simulations that take into account these findings have indicated that future ocean acidification will reduce the efficiency of the BCP by decreasing ocean calcification. In addition, the POC export ratio (the ratio of the POC flux from an upper layer (a fixed depth such as 100 metres, or the euphotic zone or mixed layer) to net primary productivity) in subtropical and tropical areas is low because high temperatures in the upper layer increase POC decomposition rates. The result might be a higher transfer efficiency and a strong positive correlation between POC and CaCO3 in these low-latitude areas: labile POC, which is fresher and easier for microbes to break down, decomposes in the upper layer, and relatively refractory POC is transported to the ocean interior in low-latitude areas. On the basis of observations that revealed a large increase of POC fluxes in high-latitude areas during diatom blooms and on the fact that diatoms are much bigger than coccolithophores, Honda and Watanabe proposed in 2010 that opal, rather than CaCO3, is crucial as ballast for effective POC vertical transport in subarctic regions. Weber et al. reported in 2016 a strong negative correlation between transfer efficiency and the picoplankton fraction of plankton as well as higher transfer efficiencies in high-latitude areas, where large phytoplankton such as diatoms predominate. They also calculated that the fraction of vertically transported CO2 that has been sequestered in the ocean interior for at least 100 years is higher in high-latitude (polar and subpolar) regions than in low-latitude regions. In contrast, Bach et al. conducted in 2019 a mesocosm experiment to study how the plankton community structure affected sinking velocities and reported that during more productive periods the sinking velocity of aggregated particles was not necessarily higher, because the aggregated particles produced then were very fluffy; rather, the settling velocity was higher when the phytoplankton were dominated by small cells. In 2012, Henson et al. revisited the global sediment trap data and reported that the POC flux is negatively correlated with the opal export flux and uncorrelated with the CaCO3 export flux.
Key factors affecting the rate of biological decomposition of sinking POC in the water column are water temperature and the dissolved oxygen (DO) concentration: the lower the water temperature and the DO concentration, the slower the biological respiration rate and, consequently, the POC flux decomposition rate. For example, in 2015 Marsay et al. analysed POC flux data from neutrally buoyant sediment traps in the upper 500 m of the water column and found a significant positive correlation between the exponent b in equation (1) above and water temperature (i.e., the POC flux was attenuated more rapidly when the water was warmer). In addition, Bach et al. found that POC decomposition rates are high (low) when diatoms and Synechococcus (harmful algae) are the dominant phytoplankton because of increased (decreased) zooplankton abundance and the consequent increase (decrease) in grazing pressure. Using radiochemical observations (234Th-based POC flux observations), Pavia et al. found in 2019 that the exponent b of the Martin curve was significantly smaller in the low-oxygen (hypoxic) eastern Pacific equatorial zone than in other areas; that is, vertical attenuation of the POC flux was smaller in the hypoxic area. They pointed out that a more hypoxic ocean in the future would lead to a lower attenuation of the POC flux and therefore increased BCP efficiency, and could thereby act as a negative feedback on global warming. McDonnell et al. reported in 2015 that vertical transport of POC is more effective in the Antarctic, where the sinking velocity is higher and the biological respiration rate is lower than in the subtropical Atlantic. Henson et al. also reported in 2019 a high export ratio during the early bloom period, when primary productivity is low, and a low export ratio during the late bloom period, when primary productivity is high. They attributed the low export ratio during the late bloom to grazing pressure by microzooplankton and bacteria. Despite these many investigations of the BCP, the factors governing the vertical attenuation of POC flux are still under debate. Observations in subarctic regions have shown that the transfer efficiency between depths of 1000 and 2000 m is relatively low and that between the bottom of the euphotic zone and a depth of 1000 m it is relatively high. Marsay et al. therefore proposed in 2015 that the Martin curve does not appropriately express the vertical attenuation of POC flux in all regions and that a different equation should instead be developed for each region. Gloege et al. discussed in 2017 the parameterization of the vertical attenuation of POC flux, and reported that vertical attenuation of the POC flux in the twilight zone (from the base of the euphotic zone to 1000 m) can be parameterised well not only by a power law model (Martin curve) but also by an exponential model and a ballast model. However, the exponential model tends to underestimate the POC flux in the midnight zone (depths greater than 1000 metres). Cael and Bisson reported in 2018 that the exponential model (power law model) tends to underestimate the POC flux in the upper layer, and overestimate it in the deep layer. However, the abilities of both models to describe POC fluxes were comparable statistically when they were applied to the POC flux dataset from the eastern Pacific that was used to propose the "Martin curve". In a long-term study in the northeastern Pacific, Smith et al.
observed in 2018 a sudden increase of the POC flux accompanied by an unusually high transfer efficiency; they have suggested that because the Martin curve cannot express such a sudden increase, it may sometimes underestimate BCP strength. In addition, contrary to previous findings, some studies have reported a significantly higher transfer efficiency, especially to the deep sea, in subtropical regions than in subarctic regions. This pattern may be attributable to small temperature and DO concentration differences in the deep sea between high-latitude and low-latitude regions, as well as to a higher sinking velocity in subtropical regions, where CaCO3 is a major component of deep-sea marine snow. Moreover, it is also possible that POC is more refractory in low-latitude areas than in high-latitude areas. Uncertainty in the biological pump The ocean's biological pump regulates atmospheric carbon dioxide levels and climate by transferring organic carbon produced at the surface by phytoplankton to the ocean interior via marine snow, where the organic carbon is consumed and respired by marine microorganisms. This surface to deep transport is usually described by a power law relationship of sinking particle concentration with depth. Uncertainty in biological pump strength can be related to different variable values (parametric uncertainty) or the underlying equations (structural uncertainty) that describe organic matter export. In 2021, Lauderdale evaluated structural uncertainty using an ocean biogeochemistry model by systematically substituting six alternative remineralisation profiles fit to a reference power-law curve. Structural uncertainty makes a substantial contribution, about one-third in atmospheric pCO2 terms, to the total uncertainty of the biological pump, highlighting the importance of improving biological pump characterisation from observations and its mechanistic inclusion in climate models. Carbon and nutrients are consumed by phytoplankton in the surface ocean during primary production, leading to a downward flux of organic matter. This "marine snow" is transformed, respired, and degraded by heterotrophic organisms in deeper waters, ultimately releasing those constituents back into dissolved inorganic form. Oceanic overturning and turbulent mixing return resource-rich deep waters back to the sunlit surface layer, sustaining global ocean productivity. The biological pump maintains this vertical gradient in nutrients through uptake, vertical transport, and remineralisation of organic matter, storing carbon in the deep ocean that is isolated from the atmosphere on centennial and millennial timescales, lowering atmospheric CO2 levels by several hundred microatmospheres. The biological pump resists simple mechanistic characterisation due to the complex suite of biological, chemical, and physical processes involved, so the fate of exported organic carbon is typically described using a depth-dependent profile to evaluate the degradation of sinking particulate matter. See also Particulate inorganic carbon References Oceanography Carbon
Martin curve
[ "Physics", "Environmental_science" ]
2,769
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
68,700,713
https://en.wikipedia.org/wiki/LigD
LigD is a multifunctional ligase/polymerase/nuclease (3'-phosphoesterase) found in bacterial non-homologous end joining (NHEJ) DNA repair systems. It is much more error-prone than the more complex eukaryotic system of NHEJ, which uses multiple enzymes to fill its role. The polymerase preferentially uses rNTPs (RNA nucleotides), which is possibly advantageous in dormant cells. The actual architecture of LigD is variable. The LigD homolog in Bacillus subtilis does not have the nuclease domain. LigD with its ligase domain artificially removed can perform its function (with loss of fidelity) with a separate LigC acting as the ligase. The LigD homolog in the archaeon Methanocella paludicola is broken into three single-domain proteins sharing an operon. References DNA repair
LigD
[ "Biology" ]
201
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
68,702,013
https://en.wikipedia.org/wiki/Sandra%20Hirche
Sandra Hirche (born 1974) is a German control theorist and engineer. She is Liesel Beckmann Distinguished Professor of electrical and computer engineering at the Technical University of Munich, where she holds the chair of information-oriented control. Her research focuses on human–robot interaction, haptic technology, telepresence, and the control engineering and systems theory needed to make those technologies work. Education and career Hirche was born in 1974 in Freiberg. She became a student of aerospace engineering at Technische Universität Berlin, earning a diploma in 2002. She completed her doctorate (Dr.Ing.) at the Technical University of Munich in 2005. After postdoctoral research at the Tokyo Institute of Technology and University of Tokyo, she joined the Technical University of Munich as an associate professor in 2008. She was named Liesel Beckmann Distinguished Professor and given the chair of information-oriented control in 2013. Recognition Hirche was named an IEEE Fellow in 2020 "for contributions to human-machine interaction and networked control". References External links Home page Living people German electrical engineers German women engineers Control theorists Technische Universität Berlin alumni Academic staff of the Technical University of Munich Fellows of the IEEE 1974 births 21st-century German engineers 21st-century German women engineers
Sandra Hirche
[ "Engineering" ]
261
[ "Control engineering", "Control theorists" ]
68,703,788
https://en.wikipedia.org/wiki/Human%20Landing%20System
A Human Landing System (HLS) is a spacecraft in the U.S. National Aeronautics and Space Administration's (NASA) Artemis program that is expected to land humans on the Moon. These are being designed to convey astronauts from the Lunar Gateway space station in lunar orbit to the lunar surface, sustain them there, and then return them to the Gateway station. NASA intends to use Starship HLS for Artemis III, an enhanced Starship HLS for Artemis IV, and a Blue Origin HLS for Artemis V. Rather than leading the HLS development effort internally, NASA provided a reference design and asked commercial vendors to compete to design, develop and deliver systems based on a NASA-produced set of requirements. Each selected vendor is required to deliver two landers: one for an uncrewed test lunar landing, and one to be used as the first Artemis crewed lander. NASA started the competition process in 2019 with the Starship HLS selected as the winner in 2021. The original timeline called for an uncrewed test flight before a crewed flight in 2024 as part of the Artemis III mission, but the crewed flight has been delayed to at least 2025. In addition to the initial contract, NASA awarded two rounds of separate contracts in May 2019 and September 2021 on aspects of the HLS to encourage alternative designs, separately from the initial HLS development effort. It announced in March 2022 that it was developing new sustainability rules and pursuing both a Starship HLS upgrade and a new competing alternative design that would comply with the rules. In May 2023, Blue Origin was selected as the second provider for lunar lander services. Reference design The Advanced Exploration Lander was a 2018 NASA concept for a three-stage lander, intended to serve as a design reference for the commercial HLS design proposals. After departing from the Lunar Gateway in its lunar near-rectilinear halo orbit (NRHO), a transfer module would take the lander and embarked crew to a low lunar orbit and then separate. The descent module would then land itself and the ascent module carrying the crew on the lunar surface. A crew of up to four could spend up to two weeks on the surface before using the ascent module to take them back to Gateway. Each of the three modules would have a mass of approximately 12 to 15 metric tons and would be delivered separately by commercial launchers for integration at Gateway. Both the ascent and transfer modules could be designed to be reusable, with the descent module intended to be left on the lunar surface. Preliminary HLS studies In December 2018 NASA announced that it was issuing a formal request for proposals as Appendix E of NextSTEP-2 inviting American companies to submit bids for the design and development of new reusable systems allowing astronauts to land on the lunar surface. On February 14, 2019, NASA hosted an Industry Forum at NASA HQ to provide an overview of the Human Landing System (HLS) Broad Agency Announcement. In April 2019 NASA announced a formal request for proposals closing on November 15, 2019, for Appendix H of NextSTEP-2 inviting American companies to submit bids for the design and development of the Ascent Element of the Human Landing System (HLS) including the cabin used during landings. This was extended to cover an option for an integrated lander—a single vehicle that performs transfer, descent, and ascent.
Design competition Five companies responded to NASA's request for proposal by the November 2019 deadline, and after evaluating the proposals, NASA selected three for further design work. In April 2020, NASA awarded separate contracts totaling US$967 million in design development funding to Blue Origin, Dynetics, and SpaceX to begin a 10-month-long design process. The companies/teams selected in the 2020 design awards were the "National Team" led by Blue Origin, with US$579 million in NASA design funding; Dynetics, including SNC and other unspecified companies, with US$253 million in NASA funding; and SpaceX with a modified Starship spacecraft design called Starship HLS, with US$135 million in NASA design funding. Although the HLS initial design phase was planned to be a ten-month program ending in February 2021 with the selection of up to two contractors, NASA delayed the selection process and announcement by two months. The companies were bidding on a contract to provide design, development, build, test, and evaluation of an HLS, plus two lunar landings, one uncrewed and one crewed, for a fixed price. NASA evaluated the bids based on three evaluation factors: technical merit, managerial ability, and price, in that order, and found SpaceX's proposal to be the best. On 16 April 2021, NASA selected only a single lander—Starship HLS—to move on to a full development contract. NASA awarded a US$2.89 billion contract to SpaceX to develop the Starship HLS lander and to provide two operational lunar missions—one uncrewed demonstration mission, and one crewed lunar landing—as early as 2025. NASA had stated that they would have preferred to award two contracts, but that insufficient funds were appropriated by Congress to allow the awarding of a second contract. This had been stated as a possible outcome in the contract solicitation. Post-competition protests and litigation On April 30, 2021, both Blue Origin and Dynetics filed formal protests with the US Government Accountability Office claiming that NASA had improperly evaluated aspects of the proposals. On April 30, 2021, NASA suspended the Starship HLS contract and funding until such time as the GAO could issue a ruling on the protests. In May 2021, Sen. Cantwell, from Blue Origin's state of Washington, introduced an amendment to the "Endless Frontier Act" that directed NASA to reopen the HLS competition and select a second lander proposal and authorized spending of an additional US$10 billion. This funding would require a separate appropriations act. Sen. Sanders criticized the amendment as a "multibillion dollar Bezos bailout", as the money would likely go to Blue Origin, which was founded by Jeff Bezos. The act, including this amendment, was passed by the U.S. Senate on June 8, 2021. On July 30, 2021, the GAO rejected the protests and found that "NASA did not violate procurement law" in awarding the contract to SpaceX, who bid a much lower cost and more capable system. Nevertheless, CNBC reported on August 4 that "Jeff Bezos' space company remains on the offensive in criticizing NASA's decision to award Elon Musk's SpaceX with the sole contract to build a vehicle to land astronauts on the moon" and that the company had produced an infographic highlighting several Starship deficiencies compared to the Blue Origin proposal, but noted that the infographic avoided showing that the Blue Origin bid price was roughly double the SpaceX bid price. Soon after the appeal was rejected, NASA made the contracted initial payment of US$300M to SpaceX.
On August 13, 2021, Blue Origin filed a lawsuit in the US Court of Federal Claims challenging "NASA's unlawful and improper evaluation of proposals." Blue Origin asked the court for an injunction to halt further spending by NASA on the existing contract with SpaceX. Reaction to the lawsuit was mostly negative in the space community, at NASA, and among Blue Origin employees according to space journalist Eric Berger. The judge dismissed the suit on November 4, 2021, and NASA was allowed to resume working with SpaceX. Starship HLS The Starship Human Landing System (Starship HLS) was selected by NASA for long-duration crewed lunar landings as part of NASA's Artemis program. The Starship HLS is a modified configuration of SpaceX's Starship spacecraft, optimized to operate on and around the Moon. As a result, the heat shield and flight control surfaces — parts of the main Starship design needed for atmospheric re-entry — are not included in Starship HLS. The entire spacecraft will land on the Moon and will then launch from the Moon. If needed, the variant will use high-thrust CH4/O2 RCS thrusters located mid-body on Starship HLS during the final "tens of meters" of the terminal lunar descent and landing, and will be powered by a solar array located on its nose below the docking port. Elon Musk stated that Starship HLS would be able to deliver "potentially up to 200 tons" to the lunar surface. Starship HLS would be launched to Earth orbit using the SpaceX Super Heavy booster, and would use a series of tanker spacecraft to refuel the Starship HLS vehicle in Earth orbit for lunar transit and lunar landing operations. Starship HLS would then act as its own transit vehicle to reach lunar orbit for rendezvous with Orion. In the mission concept, a NASA Orion spacecraft would carry a NASA crew to the lander, where they would depart and descend to the surface of the Moon. After lunar surface operations, Starship HLS would lift off from the lunar surface acting as a single-stage-to-orbit vehicle and return the crew to Orion. NASA highlighted two weaknesses with SpaceX's proposal. Starship's propulsion systems were described as "notably complex", and the report referred to prior delays under the Commercial Crew program and Falcon Heavy launch vehicle development as evidence of potential threats to their development schedule. Blue Origin selected as second provider In May 2023, Blue Origin was selected as a second provider for lunar lander services with a $3.4 billion contract. NASA stated that it decided to add another human landing system partner to "increase competition, reduce costs to taxpayers, support a regular cadence of lunar landings, further invest in the lunar economy." Unselected proposals Integrated Lander Vehicle The Integrated Lander Vehicle (ILV) or National Human Landing System (NHLS) was a lunar lander design concept proposed by the "National Team" led by Blue Origin, along with Lockheed Martin, Northrop Grumman, and Draper Laboratory as major partners. The main selling point of the lander was that all the components had been in development in one form or another for some time. The transfer stage was based on the Cygnus spacecraft, the Blue Moon was to be used as the descent stage, and the ascent stage was based on the Orion spacecraft. It was to be launched in three parts on either the New Glenn or the Vulcan Centaur, but could also be launched on a single SLS Block 1B.
In the April 2020 HLS source selection statement, NASA stated that the vehicle passed all requirements but that its power, propulsion, and communications systems posed a significant risk to the development timeline. Dynetics ALPACA HLS The Dynetics ALPACA (Autonomous Logistics Platform for All-Moon Cargo Access) Human Landing System design concept was proposed by Dynetics and Sierra Nevada Corporation with support from a number of subcontractors. The vehicle design consisted of a single-stage lander powered by methalox engines, although an earlier design used drop tanks. ALPACA was proposed to launch on a Vulcan Centaur or SLS Block 1B rocket, and be refueled by up to three Vulcan Centaur tanker flights. Ultimately, NASA did not select the proposal, citing negative mass margins and an experimental thrust structure, which could pose a threat to the development timeline. Boeing HLS The Boeing Human Landing System proposal was submitted to NASA in early November 2019. The primary solution was a two-stage lander designed to launch on a single SLS Block 1B, with Intuitive Machines working with Boeing to provide engines, and reusing technologies from their Starliner spacecraft. To cover the possibility that the SLS Block 1B was not ready by 2024, Boeing proposed a solution where the descent stage was launched on an SLS Block 1 while the ascent stage would be launched by a commercial launcher and assembled in lunar orbit. The Boeing proposal was not selected for design funding by NASA in the April 2020 design funding announcements. Vivace HLS The Vivace Human Landing System was a lunar landing concept by aerospace firm Vivace. Little is known about the vehicle other than its resemblance to NASA's Altair lunar lander from the Constellation program. Vivace's concept was not selected for full design funding. Alternative design studies In addition to the design and development RFP for Appendix H of NextSTEP-2, NASA announced 11 contracts worth US$45.5 million in total for Appendix E of NextSTEP-2 in May 2019. These were short-term studies on transfer vehicles, descent elements, descent element prototypes, refueling element studies and prototypes. One of the requirements was that selected companies would contribute at least 20% of the total cost of the project "to reduce costs to taxpayers and encourage early private investments in the lunar economy". A second set of contracts totaling $146 million was awarded on September 14, 2021. These contracts were for studies of a second-generation HLS that is to be used for missions after Artemis III. As with the first set of contracts, NASA intends to award more than one HLS if there is sufficient funding. On March 23, 2022, NASA announced it intended to initiate a formal request for proposals for second-generation HLS designs, drafting new sustainability rules to support it with a 2026–2027 delivery date for the design. NASA stated it would solicit designs from the broader aerospace industry out of a need for redundancy and competition. Under the current HLS contract, NASA also exercised an option calling for a second Starship HLS demonstration mission to the Moon, with the Starship design updated to meet the new sustainability rules. In addition, NASA announced a target date of April 2025 for Artemis III, likely using the first-generation Starship HLS design. Space.com journalist Mike Wall speculated that, based on statements from NASA Administrator Bill Nelson, NASA had gained enough congressional and presidential support to make the requests.
Follow-on programs In 2021, NASA began studies on the future Lunar Exploration Transportation Services (LETS) for regular trips between the Gateway station, lunar orbits, and the lunar surface; for sustainable HLS operations. Notes References Artemis program 2010s in the United States 2020s in the United States 2020s in spaceflight Human spaceflight programs Lunar modules NASA programs Public–private partnership projects in the United States
Human Landing System
[ "Engineering" ]
2,918
[ "Space programs", "Human spaceflight programs" ]
68,704,501
https://en.wikipedia.org/wiki/GH%20Turbine%20GT-25000
The GT-25000 is an industrial and marine gas turbine produced by CSIC Longjiang GH Gas Turbine Corporation, Ltd, a subsidiary of China Shipbuilding Industry Company (CSIC). Development In 1993, China and Ukraine signed the UGT-25000 Gas Turbine Production License and Single Unit Sales Contract. Under the contract, Ukraine was to sell 10 units of the DA80 gas turbine (the export designation of the UGT-25000) to China, as well as transfer related technologies and technical documentation. The funding from China in exchange for the transfer of technology stopped the project from being cancelled by Ukraine. They were planned to be used on the PLA Navy's future warships such as the 052B and 052C destroyers. However, they had blade problems, and the last two 052Cs, hulls 512 and 513 built at Jiangnan Shipyard, sat pierside for more than two years without being accepted by the PLAN. In 1998, the localisation process of the gas turbine was started and three entities were involved: No. 703 research institute under the China Shipbuilding Industry Corporation (CSIC) as well as Xi'an Aero-Engine Corporation and Harbin Turbine Co. The project was overseen by No. 703 research institute and technical drawings procured by them were shared with Xi'an Aero-Engine Corporation. In 2004, the first locally produced model was completed and named GT-25000. It achieved 60% localisation and had equivalent performance to the DA80. By 2011, the localisation rate had reached 98.1%. After the completion of localisation, efforts were made to improve the reliability of the GT-25000 over the original DA80. QC-280/QD-280 Later on, Xi'an Aero-Engine Corporation, which had also participated in the localisation process of the GT-25000, wanted to take over the military market from CSIC's GT-25000 and unveiled the QC-280/QD-280 series, which has essentially identical performance to the GT-25000 but is named differently for intellectual property reasons. Eventually, the PLA Navy chose to stick with CSIC's GT-25000 for its warships. Design Rating power: 26.7 MW - 30 MW (estimated) Efficiency: 36.5% Fuel type: Gas Exhaust temp: 480 °C Exhaust Flow: 89 kg/s Output speed: 3270~5000 rpm Variants UGT-25000 / DA80: Ukrainian variant produced by Zorya-Mashproekt GT-25000: Localised model produced by CSIC CGT25-D: Variant of GT-25000 with 30MW power for industrial uses. Exported to Russia in 2021. GT-25000 S-S cycle: Upgraded to 33MW power, designed with assistance from Ukrainian engineers GT-25000IC: Under development, aim of 40MW power with intercooler process QC-280/QD-280: Variant produced by Xi'an Aero-Engine Corporation after original localisation process. Users Type 052C destroyer Type 052D destroyer Type 055 destroyer See also General Electric LM2500 Rolls-Royce WR-21 Rolls-Royce MT30 Rolls-Royce Marine Spey References Gas turbines
GH Turbine GT-25000
[ "Technology" ]
665
[ "Engines", "Gas turbines" ]
68,705,329
https://en.wikipedia.org/wiki/3-Chlorophenmetrazine
3-Chlorophenmetrazine (3-CPM; code name PAL-594) is a recreational designer drug with stimulant effects. It is a substituted phenylmorpholine derivative, closely related to better known drugs such as phenmetrazine and 3-fluorophenmetrazine (3-FPM; PAL-593). The drug has been shown to act as a norepinephrine–dopamine releasing agent (NDRA) with additional weak serotonin release. Its values for induction of monoamine release are 27nM for dopamine, 75nM for norepinephrine, and 301nM for serotonin in rat brain synaptosomes. Hence, it releases dopamine about 3-fold more potently than norepinephrine and about 11-fold more potently than serotonin. Similarly to cis-4-methylaminorex, the drug is notable in being one of the most selective dopamine releasing agents (DRAs) known, although it still has substantial capacity to release norepinephrine. See also 3-Bromomethylphenidate 3-Chloromethamphetamine 3-Chloromethcathinone 4-Methylphenmetrazine G-130 Methylenedioxyphenmetrazine Phendimetrazine PDM-35 Radafaxine References Beta-Hydroxyamphetamines Designer drugs Phenylmorpholines Serotonin-norepinephrine-dopamine releasing agents
3-Chlorophenmetrazine
[ "Chemistry" ]
336
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,705,358
https://en.wikipedia.org/wiki/Methylenedioxyphenmetrazine
3,4-Methylenedioxyphenmetrazine, also known as 3-MDPM, is a recreational designer drug with stimulant effects. It is a substituted phenylmorpholine derivative, closely related to better known drugs such as phenmetrazine and 3-fluorophenmetrazine. It has been identified as a synthetic impurity formed in certain routes of MDMA manufacture. See also 3-Chlorophenmetrazine MDMAR MDPV Methylone References Benzodioxoles Beta-Hydroxyamphetamines Designer drugs Phenylmorpholines
Methylenedioxyphenmetrazine
[ "Chemistry" ]
132
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
64,350,197
https://en.wikipedia.org/wiki/Estradiol%2017%CE%B2-benzoate
Estradiol 17β-benzoate (E2-17B) is an estrogen and an estrogen ester—specifically, the C17β benzoate ester of estradiol—which was never marketed. It is the C17β positional isomer of the better-known and clinically used estradiol ester estradiol benzoate (estradiol 3-benzoate; Progynon-B). Estradiol 17β-benzoate was first described in the 1930s. See also List of estrogen esters § Estradiol esters References Abandoned drugs Benzoate esters Estradiol esters Secondary alcohols Synthetic estrogens
Estradiol 17β-benzoate
[ "Chemistry" ]
147
[ "Drug safety", "Abandoned drugs" ]
64,352,031
https://en.wikipedia.org/wiki/Estradiol%2017%CE%B2-acetate
Estradiol 17β-acetate is an estrogen and an estrogen ester—specifically, the C17β acetate ester of estradiol—which was never marketed. It is the C17β positional isomer of the better-known and clinically used estradiol ester estradiol acetate (estradiol 3-acetate; Femtrace). See also List of estrogen esters § Estradiol esters References Abandoned drugs Acetate esters Estradiol esters Secondary alcohols Synthetic estrogens
Estradiol 17β-acetate
[ "Chemistry" ]
117
[ "Drug safety", "Abandoned drugs" ]
64,353,431
https://en.wikipedia.org/wiki/Thulium%28II%29%20chloride
Thulium(II) chloride is an inorganic compound with the chemical formula TmCl2. Production Thulium(II) chloride can be produced by reducing thulium(III) chloride by thulium metal: 2 TmCl3 + Tm → 3 TmCl2 Chemical properties Thulium(II) chloride reacts with water violently, producing hydrogen gas and thulium(III) hydroxide. When thulium(II) chloride first touches water, a light red solution is formed, which fades quickly. References Lanthanide halides Thulium compounds Chlorides
Thulium(II) chloride
[ "Chemistry" ]
124
[ "Chlorides", "Inorganic compounds", "Salts" ]
64,354,104
https://en.wikipedia.org/wiki/David%20Sherrill
Charles David Sherrill is a professor of chemistry and computational science and engineering at Georgia Tech working in the areas of theoretical chemistry, computational quantum chemistry, and scientific computing. His research focuses on the development and application of theoretical methods for non-covalent interactions between molecules. He is the lead principal investigator of the Psi open-source quantum chemistry program. Life and education Born in Chattanooga, Tennessee (April 5, 1970), Sherrill received his S.B. in chemistry from MIT. He received his Ph.D. in 1996 from the University of Georgia, working with Professor Henry F. Schaefer III on highly correlated configuration interaction methods. He was an NSF Postdoctoral Fellow in the laboratory of Martin Head-Gordon at the University of California, Berkeley. Career In 1999, Sherrill joined the faculty of the school of chemistry and biochemistry at Georgia Tech. He joined the school of computational science and engineering as a joint faculty member in 2006. He became associate director of Georgia Tech's Institute for Data Engineering and Science (IDEaS) in 2017. He has been an associate editor of The Journal of Chemical Physics since 2009. Research Sherrill develops methods, algorithms, and software for quantum chemistry. He has introduced efficient density-fitting techniques into several quantum chemistry methods, speeding up computations. His research group obtains highly accurate results for important prototype chemical systems, and uses these results to develop computational protocols that are faster yet still accurate. Sherrill focuses on intermolecular interactions, and has published definitive studies of the strength, geometric dependence, and substituent effects in prototype interactions including π-π, CH/π, S/π, and cation-π interactions. He has developed extensions of symmetry-adapted perturbation theory (SAPT) to analyze these interactions in terms of their fundamental physical forces (electrostatics, exchange/steric repulsion, induction/polarization, and London dispersion forces). A fragment-based partitioning of SAPT allows analyses of which non-bonded contacts are most important for binding, and has been used to understand substituent effects in protein-drug binding. Sherrill has published over 200 peer-reviewed articles on these topics, and presented over 130 invited lectures, including the 2011 Robert S. Mulliken Lecture at the University of Georgia, the keynote talk for the 2015 Workshop on Control of London Dispersion Interactions in Molecular Chemistry in Göttingen, and keynote talks at the 2015 and 2016 meetings of the Southeast Theoretical Chemistry Association. Sherrill's methods and algorithms are made publicly available to the quantum chemistry community through the open-source quantum chemistry program Psi, developed by his group and collaborators worldwide. Awards Sherrill is a Fellow of the American Physical Society, the American Chemical Society, and the American Association for the Advancement of Science. Education Sherrill is active in promoting education in chemistry, quantum chemistry, and data science. He has published an extensive set of notes and lectures on fundamentals of quantum chemistry. His educational efforts have been recognized by his being named the Outreach Volunteer of the Year by the Georgia Section of the American Chemical Society in 2017, and the Class of 1940 W. Howard Ector Outstanding Teacher at Georgia Tech in 2006.
References External links David Sherrill: Google Scholar Theoretical chemists Computational chemists American chemists Fellows of the American Chemical Society 1970 births Living people Fellows of the American Physical Society Massachusetts Institute of Technology alumni Georgia Tech faculty
David Sherrill
[ "Chemistry" ]
702
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
64,355,017
https://en.wikipedia.org/wiki/Nature%20Environment%20and%20Pollution%20Technology
Nature Environment and Pollution Technology is an open access, peer-reviewed scientific journal of environmental science. It is published quarterly by Technoscience Publications and was established in 2002. The journal is indexed in Scopus, ProQuest, Chemical Abstracts (CAS), and EBSCO. References External links Official Website English-language journals Open access journals Academic journals established in 2002 Environmental science journals Quarterly journals
Nature Environment and Pollution Technology
[ "Environmental_science" ]
79
[ "Environmental science journals" ]
64,358,331
https://en.wikipedia.org/wiki/Norna%20Robertson
Norna Robertson (FRSE, FInstP, FRAS, FAPS) is a lead scientist at LIGO at the California Institute of Technology, and professor of experimental physics at the University of Glasgow. Her career has focused on experimental research into suspension systems and instrumentation to achieve the detection of gravitational waves. Education Robertson obtained a Ph.D. in experimental physics in 1981 from the University of Glasgow, researching gravitational wave detection and how seismic noise could be suppressed in sensitive measurements. Research and career Robertson began her postdoctoral career as a researcher at Imperial College London studying infrared astronomy. In 1983, she joined the University of Glasgow as a lecturer and returned to gravitational wave research, becoming a Professor in 1999. In 2003, Robertson moved to the Ginzton Laboratory at Stanford University as a visiting professor, where her work focused on suspension systems for Advanced LIGO. She became a lead scientist at LIGO at the California Institute of Technology in 2007, leading an international team of 20 scientists and engineers. Her research contributed to the design of detection instrumentation that ultimately led to the first observation of gravitational waves in 2015. Her work is now focused on the development of ultra-low-noise suspension systems for Advanced LIGO. Awards and honours Robertson was awarded the President's Medal from the Royal Society of Edinburgh in 2016 for her work on suspension systems for gravitational wave detection. She received the California Institute of Technology Staff Service and Impact Award in 2017. She is a Fellow of the Royal Society of Edinburgh, the American Physical Society, the Royal Astronomical Society, the Institute of Physics, and the International Society on General Relativity and Gravitation. References Living people Fellows of the Royal Society of Edinburgh Fellows of the American Physical Society Fellows of the Institute of Physics Fellows of the Royal Astronomical Society Scottish physicists Experimental physicists California Institute of Technology faculty Gravitational-wave astronomy British women scientists Alumni of the University of Glasgow Year of birth missing (living people)
Norna Robertson
[ "Physics", "Astronomy" ]
384
[ "Astrophysics", "Experimental physics", "Gravitational-wave astronomy", "Astronomical sub-disciplines", "Experimental physicists" ]
64,359,053
https://en.wikipedia.org/wiki/Juvenile%20polyp
Juvenile polyps are a type of polyp found in the colon. While juvenile polyps are typically found in children, they may be found in people of any age. Juvenile polyps are a type of hamartomatous polyp, which consists of a disorganized mass of tissue. They occur in about two percent of children. Juvenile polyps often do not cause symptoms (asymptomatic); when present, symptoms usually include gastrointestinal bleeding and prolapse through the rectum. Removal of the polyp (polypectomy) is warranted when symptoms are present, for treatment and definitive histopathological diagnosis. In the absence of symptoms, removal is not necessary. Recurrence of polyps following removal is relatively common. Juvenile polyps are usually sporadic, occurring in isolation, although they may occur as a part of juvenile polyposis syndrome. Sporadic juvenile polyps may occur in any part of the colon, but are usually found in the distal colon (rectum and sigmoid). In contrast to other types of colon polyps, juvenile polyps are not premalignant and are not usually associated with a higher risk of cancer; however, individuals with juvenile polyposis syndrome are at increased risk of gastric and colorectal cancer. Unlike juvenile polyposis syndrome, solitary juvenile polyps do not require follow up with surveillance colonoscopy. Signs and symptoms Juvenile polyps often do not cause symptoms (asymptomatic); when present, symptoms usually include gastrointestinal bleeding and prolapse through the rectum. Juvenile polyps are usually sporadic, occurring in isolation, although they may occur as a part of juvenile polyposis syndrome. Sporadic juvenile polyps may occur in any part of the colon, but are usually found in the distal colon (rectum and sigmoid). Histopathology Under microscopy, juvenile polyps are characterized by cystic architecture, mucus-filled glands, and prominent lamina propria. Inflammatory cells may be present. Compared with sporadic polyps, polyps that occur in juvenile polyposis syndrome tend to have more of a frond-like (resembling a leaf) growth pattern with less stroma, fewer dilated glands and smaller glands with more proliferation. Syndrome-related juvenile polyps also demonstrate more neoplasia and increased COX-2 expression compared with sporadic juvenile polyps. Diagnosis Juvenile polyps are diagnosed by examination of their distinctive histopathology, generally after polypectomy via endoscopy. Juvenile polyps cause fecal calprotectin levels to be elevated. Treatment If symptoms are present, then removal of the polyp (polypectomy) is warranted. Recurrence of polyps following removal is relatively common. Unlike juvenile polyposis syndrome, solitary juvenile polyps do not require follow up with surveillance colonoscopy. Epidemiology Juvenile polyps occur in about 2 percent of children. In contrast to other types of colon polyps, juvenile polyps are not premalignant and are not usually associated with a higher risk of cancer; however, individuals with juvenile polyposis syndrome are at increased risk of gastric and colorectal cancer. References Digestive system neoplasia Histopathology
Juvenile polyp
[ "Chemistry" ]
673
[ "Histopathology", "Microscopy" ]
44,402,565
https://en.wikipedia.org/wiki/Global%20cascades%20model
Global cascades models are a class of models that aim to describe large and rare cascades triggered by exogenous perturbations which are relatively small compared with the size of the system. The phenomenon occurs ubiquitously in various systems, like information cascades in social systems, stock market crashes in economic systems, and cascading failures in physical infrastructure networks. The models capture some essential properties of such phenomena. Model description To describe and understand global cascades, a network-based threshold model was proposed by Duncan J. Watts in 2002. The model is motivated by considering a population of individuals who must make a decision between two alternatives, and their choices depend explicitly on other people's states or choices. The model assumes that an individual will adopt a new opinion (product or state) if at least a threshold fraction of his/her neighbors have adopted the new one; otherwise, the individual keeps the original state. To initiate the model, a new opinion is randomly distributed among a small fraction of individuals in the network. If the network satisfies a particular condition, a large cascade can be triggered (see the global cascades condition below). A phase transition phenomenon has been observed: when the network of interpersonal influences is sparse, the size of the cascades exhibits a power law distribution and the most highly connected nodes are critical in triggering cascades; if the network is relatively dense, the distribution shows a bimodal form, in which nodes of average degree are more important as triggers. Several generalizations of the Watts threshold model have been proposed and analyzed in the following years. For example, the original model has been combined with independent interaction models to provide a generalized model of social contagion, which classifies the behavior of the system into three universal classes. It has also been generalized to modular networks, degree-correlated networks, and networks with tunable clustering. The role of the initiators has also been studied recently, showing that different initiators influence the size of the cascades. The Watts threshold model is one of the few models that shows qualitative differences between multiplex networks and single-layer networks. It can furthermore exhibit broad and multi-modal cascade size distributions on finite networks. Global cascades condition To derive the precise cascade condition in the original model, a generating function method can be applied. The generating function for vulnerable nodes in the network is G(x) = Σ_k ρ_k p_k x^k, where p_k is the probability a node has degree k, ρ_k = ∫_0^{1/k} f(φ) dφ is the probability that a node of degree k is vulnerable (i.e., its threshold does not exceed 1/k), and f is the distribution of the threshold fraction of individuals. The average vulnerable cluster size can be derived as ⟨n⟩ = G(1) + G′(1)² / (z − G″(1)), where z is the average degree of the network. Global cascades occur when the average vulnerable cluster size diverges, i.e., when G″(1) = Σ_k k(k − 1) ρ_k p_k reaches z. The equation can be interpreted as follows: when G″(1) < z, the vulnerable clusters in the network are small and global cascades will not happen, since the early adopters are isolated in the system and not enough momentum can be generated; when G″(1) ≥ z, the typical size of the vulnerable cluster is infinite, which implies the presence of global cascades. Relations with other contagion models The model considers a change of state of individuals in different systems, which belongs to a larger class of contagion problems.
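The threshold dynamics described above can be illustrated with a short simulation. The sketch below is not the generating-function calculation itself; it assumes an Erdős–Rényi network built with networkx, a uniform threshold phi, and arbitrary size and seed choices, so the numbers it prints are illustrative only.

```python
# Minimal sketch of the Watts (2002) threshold model on a random graph.
# Network size, threshold and seed count are arbitrary illustrative choices.
import random
import networkx as nx

def cascade_size(n=10000, z=4.0, phi=0.18, seed_count=10, seed=0):
    """Final fraction of adopters in one run of the threshold model."""
    rng = random.Random(seed)
    g = nx.fast_gnp_random_graph(n, z / (n - 1), seed=seed)
    active = set(rng.sample(list(g.nodes), seed_count))
    changed = True
    while changed:                      # sweep nodes until no further adoptions
        changed = False
        for node in g:
            if node in active:
                continue
            nbrs = list(g[node])
            if nbrs and sum(nb in active for nb in nbrs) / len(nbrs) >= phi:
                active.add(node)
                changed = True
    return len(active) / n

if __name__ == "__main__":
    # Sweeping the average degree z crosses the "cascade window": networks that
    # are too sparse (fragmented) or too dense (few vulnerable nodes) resist cascades.
    for z in (1.0, 4.0, 10.0):
        print(f"z = {z:4.1f}  final adopter fraction = {cascade_size(z=z):.3f}")
```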
However, it differs from other models in several respects. Compared with 1) epidemic models, where contagion events between individual pairs are independent, in the proposed model the effect that a single infected node has on an individual depends on the individual's other neighbors. Unlike 2) percolation or self-organized criticality models, the threshold is not expressed as the absolute number of "infected" neighbors around an individual; instead, a corresponding fraction of neighbors is used. It also differs from 3) the random-field Ising model and the majority voter model, which are frequently analyzed on regular lattices; here, however, the heterogeneity of the network plays a significant role. See also Threshold model Information cascade Stock market crash Cascading failure Epidemic model Percolation theory Self-organized criticality Ising model Voter model Complex contagion Sociological theory of diffusion Global cascade References Mathematical modeling Network theory
Global cascades model
[ "Mathematics" ]
822
[ "Mathematical modeling", "Applied mathematics", "Graph theory", "Network theory", "Mathematical relations" ]
44,403,623
https://en.wikipedia.org/wiki/Meshedness%20coefficient
In graph theory, the meshedness coefficient is a graph invariant of planar graphs that measures the number of bounded faces of the graph, as a fraction of the possible number of faces for other planar graphs with the same number of vertices. It ranges from 0 for trees to 1 for maximal planar graphs. Definition The meshedness coefficient is used to compare the general cycle structure of a connected planar graph to two relevant extreme cases. At one extreme are trees, planar graphs with no cycles. The other extreme is represented by maximal planar graphs, planar graphs with the highest possible number of edges and faces for a given number of vertices. The normalized meshedness coefficient is the ratio of available face cycles to the maximum possible number of face cycles in the graph. This ratio is 0 for a tree and 1 for any maximal planar graph. More generally, it can be shown using the Euler characteristic that all n-vertex planar graphs have at most 2n − 5 bounded faces (not counting the one unbounded face) and that if there are m edges then the number of bounded faces is m − n + 1 (the same as the circuit rank of the graph). Therefore, a normalized meshedness coefficient can be defined as the ratio of these two numbers: α = (m − n + 1) / (2n − 5). It varies from 0 for trees to 1 for maximal planar graphs. Applications The meshedness coefficient can be used to estimate the redundancy of a network. This parameter, along with the algebraic connectivity, which measures the robustness of the network, may be used to quantify the topological aspect of network resilience in water distribution networks. It has also been used to characterize the network structure of streets in urban areas. Limitations Using the definition of the average degree ⟨k⟩ = 2m/n, one can see that in the limit of large graphs (large numbers of edges and vertices) the meshedness tends to (⟨k⟩ − 2)/4. Thus, for large graphs, the meshedness does not carry more information than the average degree. References Graph invariants Planar graphs
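The definition translates directly into code. The following Python sketch, which assumes networkx for building example graphs, simply evaluates alpha = (m − n + 1) / (2n − 5) for a connected planar graph; the example graphs are arbitrary illustrations.

```python
# Sketch: meshedness coefficient alpha = (m - n + 1) / (2n - 5) for a connected
# planar graph with n >= 3 vertices and m edges.
import networkx as nx

def meshedness(graph: nx.Graph) -> float:
    n = graph.number_of_nodes()
    m = graph.number_of_edges()
    return (m - n + 1) / (2 * n - 5)

if __name__ == "__main__":
    tree = nx.path_graph(10)           # a tree: alpha = 0
    grid = nx.grid_2d_graph(4, 4)      # a square mesh: alpha between 0 and 1
    octa = nx.octahedral_graph()       # maximal planar (m = 3n - 6): alpha = 1
    for name, g in [("tree", tree), ("grid", grid), ("octahedron", octa)]:
        print(f"{name:>10}: alpha = {meshedness(g):.3f}")
```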
Meshedness coefficient
[ "Mathematics" ]
409
[ "Planar graphs", "Graph theory", "Graph invariants", "Mathematical relations", "Planes (geometry)" ]
44,403,744
https://en.wikipedia.org/wiki/Pollination%20network
A pollination network is a bipartite mutualistic network in which plants and pollinators are the nodes, and the pollination interactions form the links between these nodes. The pollination network is bipartite as interactions only exist between two distinct, non-overlapping sets of species, but not within a set: a pollinator can never be pollinated, unlike in a predator-prey network where a predator can be depredated. A pollination network is two-modal, i.e., it includes only links connecting plant and animal communities. Nested structure of pollination networks A key feature of pollination networks is their nested design. A study of 52 mutualist networks (including plant-pollinator interactions and plant-seed disperser interactions) found that most of the networks were nested. This means that the core of the network is made up of highly connected generalists (a pollinator that visits many different species of plant), while specialized species interact with a subset of the species that the generalists interact with (a pollinator that visits few species of plant, which are also visited by generalist pollinators). As the number of interactions in a network increases, the degree of nestedness increases as well. One property that results from the nested structure of pollination networks is an asymmetry in specialization, where specialist species are often interacting with some of the most generalized species. This is in contrast to the idea of reciprocal specialization, where specialist pollinators interact with specialist plants. Similar to the relationship between network complexity and network nestedness, the amount of asymmetry in specialization increases as the number of interactions increases. Modularity of networks Another feature that is common in pollination networks is modularity. Modularity occurs when certain groups of species within a network are much more highly connected to each other than they are with the rest of the network, with weak interactions connecting different modules. Within modules it has been shown that individual species play certain roles. Highly specialized species often only interact with individuals within their own module and are known as ‘peripheral species’; more generalized species can be thought of as ‘hubs’ within their own module, with interactions between many different species; there are also very generalized species that can act as ‘connectors’ between their own module and other modules. A study of three separate networks, all of which showed modularity, revealed that hub species were always plants and not the insect pollinators. Previous work has found that networks become nested at a smaller size (number of species) than the size at which networks typically become modular. Species loss and robustness to collapse There is substantial interest in the robustness of pollination networks to species loss and collapse, especially due to anthropogenic factors such as habitat destruction. The structure of a network is thought to affect how long it is able to persist after species decline begins. In particular, the nested structure of networks has been shown to protect against complete destruction of the network, because the core group of generalists are the most robust to extinction by habitat loss. Models specifically focused on the effects of habitat loss have shown that specialist species tend to go extinct first, while the last species to go extinct are the most generalized of the network.
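To make the nested-subset pattern and its consequences for robustness concrete, here is a small hypothetical sketch in Python; the incidence matrix and the co-extinction count are invented illustrations, not data or a model from the studies cited above.

```python
# Toy example: a perfectly nested plant-pollinator incidence matrix
# (rows = pollinators, columns = plants; 1 = interaction observed), and the
# number of plants left without any pollinator as pollinators are removed.
import numpy as np

visits = np.array([
    [1, 1, 1, 1, 1],   # generalist pollinator
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],   # specialist pollinator
])

def plants_without_pollinators(mat, removal_order):
    """Count of plant species with no remaining pollinator after each removal."""
    remaining = mat.copy()
    losses = []
    for pollinator in removal_order:
        remaining[pollinator, :] = 0
        losses.append(int((remaining.sum(axis=0) == 0).sum()))
    return losses

degrees = visits.sum(axis=1)
generalists_first = list(np.argsort(-degrees))   # remove most-connected first
specialists_first = list(np.argsort(degrees))    # remove least-connected first
print("generalists first:", plants_without_pollinators(visits, generalists_first))  # [2, 3, 4, 5]
print("specialists first:", plants_without_pollinators(visits, specialists_first))  # [0, 0, 0, 5]
```

In this toy matrix, removing the most generalized pollinators first strands plants immediately, while removing specialists first causes no plant losses until the last generalist is gone, mirroring the contrast described above.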
Other studies focusing specifically on the removal of different types of species showed that species decline is the fastest when removing the most generalized species. However, there have been contrasting results on how rapidly decline occurs with removal of these species. One study showed that even at the fastest rate, the decline was still linear. Another study revealed that with the removal of the most common pollinator species, the network showed a drastic collapse. In addition to focusing on the removal of species themselves, other work has emphasized the importance of studying the loss of interactions, as this will often precede species loss and may well accelerate the rate at which extinction occurs.

See also
Aeroplankton
Biological network

References

Further reading

Application-specific graphs Mutualism (biology) Network theory Pollination Systems biology
Pollination network
[ "Mathematics", "Biology" ]
816
[ "Behavior", "Symbiosis", "Biological interactions", "Graph theory", "Network theory", "Mathematical relations", "Mutualism (biology)", "Systems biology" ]
44,408,590
https://en.wikipedia.org/wiki/Climate-adaptive%20building%20shell
In building engineering, a climate-adaptive building shell (CABS) is a façade or roof that interacts with the variability of its environment in a dynamic way. Conventional structures have static building envelopes and therefore cannot act in response to changing weather conditions and occupant requirements. Well-designed CABS have two main functions: they contribute to energy savings for heating, cooling, ventilation, and lighting, and they have a positive impact on the indoor environmental quality of buildings.

Definition
The description of CABS made by Loonen et al. says that:

A climate adaptive building shell has the ability to repeatedly and reversibly change some of its functions, features or behavior over time in response to changing performance requirements and variable boundary conditions, and does this with the aim of improving overall building performance.

This definition contains several components that characterize CABS, which are addressed in this article. The first part of the definition relates to their fundamental characteristic: being adaptive envelopes, or in other words, having skins that can adjust to new circumstances. This means that envelopes should be able to "alter slightly as to achieve the desired result", "become used to a new situation", and even return to their original state if needed. Although occupants’ desired conditions are indoors, they are affected by the outdoor surroundings. While these outcomes can be broadly defined, there is a consensus that the purpose of CABS is to provide shelter, protection, and a comfortable indoor environmental quality while consuming the minimum amount of energy needed. Therefore, the objective is to improve the well-being and productivity of people inside the building by making it sensitive to its surroundings.

CABS must satisfy different demands that compete or even conflict with each other. For example, they must find the compromise between daylight and glare, fresh air and draft, ventilation and excessive humidity, shutters and luminaires, heat gains and overheating, among others. The dynamism of the envelope required to manage these compromises can be accomplished in various ways, for example by moving components, by the introduction of airflows or by a chemical change in a material. However, it is not sufficient to simply add adaptive features to the design or the existing building; they must be integrated into it as a whole system. Therefore, by using CABS technologies, a variety of opportunities are available for a transformation from "manufactured" to "mediated" indoor spaces.

Related concepts
CABS is only one designation for an envelope concept that can be described by a range of different terms. Several variations on the term 'adaptive' can be used, including: active, advanced, dynamic, interactive, kinetic, responsive, intelligent and switchable. In addition, the concepts of responsive architecture, kinetic architecture and intelligent building are closely related. The main difference with CABS is that the adaptation takes place at the building shell level, whereas the other concepts consider a whole-building approach.

Categorization of CABS
Like any other system, CABS have several independent characteristics by which they can be categorized, so the same CABS may fit into all of these categories at once. What differs from one CABS to another is the subcategorization, which discriminates based on the attributes of each system.
The following are some of the possible categorizations that may be found in the literature.

Climate responsive systems
As the name says, these systems are categorized based on the climatic factors they tackle. Their behavior is based on producing a change in heat, light, air, water and/or other types of energy. Thus, they are subcategorized into three types: solar-responsive systems, air-flow-responsive systems, and other natural sources responsive systems.

Emerging technologies of Climate Adaptive Curtain Wall
A climate-adaptive building curtain wall possesses the ability to repeatedly and reversibly modify its heat transfer characteristics (U-value and SHGC) in response to evolving performance demands and variable environmental conditions. This adaptation aims to enhance the overall efficiency of the building. This capability entails the continuous adjustment of the envelope's parameters autonomously, without relying on external power sources. The primary objective is to improve the comfort and productivity of individuals within the building by enabling the structure to react sensitively to its surroundings. Additionally, an adaptive shell offers energy-saving benefits; the technology demonstrates a potential 30% reduction in total energy consumption. However, it is not enough to merely advance the technology; it is equally crucial for the new technology to integrate seamlessly into existing infrastructure. To achieve this, the system continually alters the building shell's heat transfer properties through air circulation within the hermetically sealed curtain wall panel, achieving the desired effects. Consequently, this technology can significantly diminish the carbon footprint of tall buildings while enhancing the well-being of their occupants.

Solar responsive systems
These systems are based on managing solar energy in its different forms. Usually, they use one of the following five types of solar control devices: external, integrated, internal, double skin, and ventilated cavity.

The first type of solar energy is solar heat. CABS related to this type of energy are intended to maximize solar heat gains in winter and minimize them in summer. Some examples of this technology are the solar barrel wall (water-filled oil barrels), water bags on the roof, dynamic insulation, and thermochromic materials (which change color with temperature) applied to walls to obtain an appropriate color and reflectance in response to the outside temperature.

Another type of solar energy is solar light. CABS linked with this energy source are based on the control of indoor illuminance levels, distributions, window views, and glare. To accomplish these tasks, there are three main approaches: traditional mechanical systems (a wide range of options from venetian blinds up to complex motorized systems), innovative mechanical systems (rotational, retractable, sliding, active daylighting and self-adjusting fenestration schemes), and smart glass or translucent materials (thermochromic, photochromic, electrochromic materials). This last approach is used in windows and can achieve its goal in four ways: change in optical properties, lighting direction, visual appearance, and thermophysical properties. Among these smart materials, electrically-activated glazing for building façades has gained commercial viability and remains the most visible indicator of smart materials in a building.

The third kind of solar energy is solar electricity, which mostly relies on installing integrated photovoltaic systems.
To be considered CABS, these systems must be kinetic as a whole, rather than merely having individually movable panels. Normally this is achieved through the use of heliotropic sun-tracking systems to maximize solar energy capture.

Air-flow responsive systems
These are systems related to natural ventilation and wind electricity. The former have the goal of exhausting the excess carbon dioxide, water vapor, odors and pollutants that tend to accumulate in an indoor space, while replacing them with fresh air, usually coming from the outside. Some examples of this type of technology are kinetic roof structures and double skin facades. Other, less common types of CABS are those generating wind electricity. They convert wind energy into electrical energy via small-scale wind turbines integrated into buildings, for example wind turbines fitted horizontally between each floor. Other examples may be found in buildings such as the Dynamic Tower, the COR Building in Miami and the Greenway Self-park Garage in Chicago.

Other natural sources systems
These may account for the use of rain, snow and additional natural supplies. Little further information on such systems is available in the literature.

Based on the time frame scale
As dynamic technologies, CABS can show different configurations over time, extending from seconds up to changes appreciable over the lifetime of the building. Thus, the four types of adaptation based on time frame scale are seconds, minutes, hours, and seasons.

Variations that take place in just seconds occur randomly in nature; examples are short-term variations in wind speed and direction that may cause shifts in wind-based skins. An example of a shift that occurs within minutes is cloud cover, which has an impact on daylight availability; CABS that use this kind of energy therefore fall into this category. Changes that adjust in the order of hours include fluctuations in air temperature and the track of the sun through the sky (although the sun's movement across the sky is a continuous process, its track plays out on this time scale). Finally, some CABS can adapt across seasons, and are therefore expected to offer extensive performance benefits.

Based on the scale of change
The adaptive behavior of CABS is related to how their mechanisms work. They are based either on a change in behavior (macro-scale) or in properties (micro-scale).

Macro-scale changes
Macro-scale adaptation is often also referred to as “kinetic envelopes”, which implies that a certain kind of observable motion is present, usually resulting in energy changes in the building shell's configuration. This is commonly achieved via moving parts that can perform at least one of the following actions: folding, sliding, expanding, creasing, hinging, rolling, inflating, fanning, rotating, curling, etc. Based on their adaptive level, macro-scale mechanisms can be divided into two types of systems: intelligent building skins and responsive façade systems. The first use a centralized building system and sensing equipment to adjust to weather conditions. They should be capable of learning from the occupants’ reactions and considering future weather fluctuations to respond accordingly. Some examples of this kind of feature are building automation and physically adaptive components such as louvers, sunshades, operable windows or smart material assemblies.
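As a rough illustration of the sensing-and-adjustment loop of an intelligent building skin described above, the sketch below shows a minimal, hypothetical controller that tilts sun-shading louvers from a few weather readings. The sensor fields, thresholds and the linear tilt rule are invented for illustration and do not describe any particular building automation product.

from dataclasses import dataclass

@dataclass
class WeatherReading:
    solar_irradiance_w_m2: float   # irradiance measured on the facade (hypothetical sensor)
    outdoor_temp_c: float
    wind_speed_m_s: float

def louver_tilt_deg(reading: WeatherReading, indoor_temp_c: float,
                    setpoint_c: float = 23.0) -> float:
    """Return a louver tilt angle: 0 deg = fully open, 90 deg = fully closed."""
    if reading.wind_speed_m_s > 20.0:
        return 0.0                      # high wind: open flat to protect the hardware
    if indoor_temp_c < setpoint_c and reading.outdoor_temp_c < setpoint_c:
        return 0.0                      # heating season: admit solar gains
    # Cooling season: close proportionally to the solar load on the facade.
    closed_fraction = min(reading.solar_irradiance_w_m2 / 600.0, 1.0)
    return round(90.0 * closed_fraction, 1)

# Example control step with invented readings
now = WeatherReading(solar_irradiance_w_m2=450.0, outdoor_temp_c=29.0, wind_speed_m_s=3.0)
print(louver_tilt_deg(now, indoor_temp_c=25.5))   # -> 67.5

Fixed rules of this kind correspond to the extrinsic, centralized control described later in the article; intrinsic controls achieve a comparable adjustment through the material or mechanism itself, without a separate controller.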
A responsive façade system has the same functions and performance characteristics as an intelligent building skin but goes even further by having an interactive aspect. This means it incorporates components such as computational algorithms which enable the building system to regulate itself and learn over time. Therefore, a responsive building skin not only includes mechanisms for satisfying occupants’ desires and learning from their feedback, but also encourages a dual educating path in which both the building and its residents take part in a constant and growing conversation.

Micro-scale changes
These kinds of changes directly affect the internal structure of a material, either via thermophysical or optical properties or through the exchange of energy from one form to another. When considering the adaptive level, they usually fall into the smart material category. They are characterized by being altered by outside stimuli such as temperature, heat, moisture, light, or electric or magnetic fields. An important consideration in the use of this type of material is whether its changes are reversible or irreversible. The most attractive property that catches designers’ attention is its immediacy or real-time response, which in turn improves functionality and performance while decreasing energy use. Some examples are: aerogel (a synthetic low-density translucent substance applied in window glazing), phase-change materials (like micro-encapsulated wax), salt hydrates, thermochromic polymer films, shape-memory alloys, temperature-responsive polymers, structure-integrated photovoltaics, and smart thermobimetal self-ventilating skins.

Based on the control type
There are two different control types: intrinsic and extrinsic regulators.

Intrinsic controls
Intrinsic controls are characterized by being self-adjusting systems, which means that their adaptive capacity is an integral feature. They are stimulated by environmental conditions such as temperature, relative humidity, precipitation, and wind speed and direction. This self-sufficient control is sometimes referred to as “direct control” since the main drivers are the environmental impacts, without the need for external decision-making devices. The need for fewer components may be seen as an advantage, as well as the fact that the skin can change immediately without the need for fuel or electricity. However, a downside is that it can only respond to the environmental conditions and variations it was designed for.

Extrinsic controls
This kind of control can take advantage of feedback by changing its behavior based on comparisons of the current state with the desired one. Its structure has three main components: sensors, processors and actuators. Wrapping them up with a logic controller gives them the ability to make changes on two levels: distributed (regulated by local processors) or centralized (via a superior control unit). As an advantage, they offer high levels of control, allowing for manual intervention for satisfaction and well-being. A disadvantage is the need for various components.

Based on the spatial scale
The spatial scale of CABS refers to the physical size of a system. The adaptation can take place at the level of an envelope, a façade, a façade component or a façade subcomponent.

Based on the inspirational scale
One of the fundamental characteristics of human beings is the ability to create new things. As a starting point, inspiration is needed, which can come from nature or from other sources such as one's own ideas.
Therefore, the use of organisms’ morphological or physiological properties or natural behaviors in non-biological sciences is known as biomimetics and is commonly applied in building science. CABS that draw on this source of inspiration are known as biomimetic adaptive building skins (Bio-ABS). The variations in properties and behaviors are transferred from biological models that provide environmentally, mechanically, structurally or material-wise efficient strategies to buildings. Within biomimetic adaptive building skins, there are two ways of categorization. The first is based on the biomimetic approach and discriminates according to the order in which the problem is solved. There are two possibilities: the process is initiated either through the identification of a technical problem to be solved by a biological solution (top-down) or with the examination of a biological solution to solve a technical problem (bottom-up). The second categorization of Bio-ABS is based on the adaptation level, which offers three types: morphological (based on form, structure and texture), physiological, or behavioral.

Based on the development stage
This categorization embraces any analysis that measures the performance of a given CABS project. The developmental stages can be labeled as preliminary model (PM), simulated model (SM), pilot-scale prototype (PSP) and full-scale application (FSA).

Based on the number of functions
This classification relates to the number of environmental factors that a given CABS adjusts to independently when activated by stimuli. Some of these functions are: ventilating, heating/cooling, improving air quality, regulating humidity levels, changing color, and regulating energy demand. In this way, CABS can be monofunctional or multifunctional.

Based on the performance task
This last differentiation accounts for the purpose of the adaptation and the evaluation of how effectively it is being achieved, and is therefore divided into two subcategories. The first is the performance target, which relates to the building aspect that is being assessed; examples are indoor air quality, thermal comfort, visual comfort and energy demand. The second is the measures and metrics of improvement; parameters usually measured include displacement, daylight intake, humidification/dehumidification, heat dissipation, airflow, permeability and cooling.

Motivations for the implementation of CABS
Buildings are exposed to a wide variety of changing conditions during their life cycle. Weather conditions vary not only throughout the year but also throughout the day, and the occupants’ load, activities, and preferences vary constantly. Responding to this dynamism from an energy and comfort point of view, CABS offer the ability to actively moderate the exchange of energy across a building's skin over time. By doing this in response to the prevailing meteorological conditions and comfort needs, they introduce good energy-saving opportunities. While any building, simply by being constructed, generates changes in its environment (such as solar patterns and wind variations), having the ability to maximize the use of exterior resources mitigates these environmental consequences. Thus, CABS use the “existing natural energies to light, heat and ventilate the spaces”, obtaining maximum thermal comfort conditions. As an example, by incorporating photovoltaic principles into the glass intended to be used in facades, the new skins will generate local and non-polluting electricity to supply the buildings’ energy needs.
CABS also promote the use of daylight, which, when it comes from a window with an exterior view, “results in increased productivity, mental function, and memory recall”. The building envelope is one of the most important design parameters determining the indoor physical environment as it relates to thermal comfort, visual comfort, and even occupants' working efficiency. To promote the creation of healthier and more productive spaces, not only daylight but also natural ventilation and other external resources must be considered. These are tasks currently performed by CABS as environment-based technologies. Thus, CABS not only perform better than static envelopes, but also “provide an exciting aesthetic, the aesthetic of change”.

The fact that CABS respond to changing conditions in a flexible way gives them the opportunity to maintain a high level of performance during real-time changes. This is achieved through anticipation and reaction, so the systems can handle environmental uncertainty, which is highly valued. This flexibility manifests in CABS in three ways: adaptability (acting as climate mediators between indoors and outdoors), multi-ability (taking on multiple and new roles over time), and evolvability (the ability to handle changes over a longer time horizon).

The use of dynamic and sustainable technologies offers the possibility of better environmental and economic performance of building envelopes. For example, with heat avoidance and passive cooling features, buildings can be less expensive because they need less cooling energy and therefore less mechanical equipment. Even though the demand for satisfying working environments and economic performance has increased, CABS have the potential to meet this goal.

Drawbacks for the implementation of CABS
As Mols et al. claim, CABS is an immature concept, needing more research due to the lack of successful applications in practice. Likewise, as a consequence of being an unexplored concept, “the true value of making building shells adaptive is yet an unknown, and we can only guess how much of this potential is accessible with existing concepts and technologies”. At its current stage, the concept is more theoretical than practical, being backed up by simulation technologies rather than constructed projects. Kuru et al. add to this point by noting that, in their research, academic projects are more frequent than real-world industrial ones.

Since the concept of CABS relies on change, it is sometimes associated with devices and technologies that require more operational and maintenance activity than static envelopes. This has several implications, such as greater attention to possible failures, the need for repairs, and on some occasions higher operational and maintenance costs; the need for a centralized control center may also add to this issue. Therefore, the choice of technology is a decision that must be made with care. However, Lechner states that the current reliability of cars demonstrates that movable systems can be made that require few if any repairs over long periods. He finishes this idea by saying that “with good design and materials, exposed building systems have become extremely reliable even with exposure to saltwater and ice in the winter”. Therefore, although there are concerns about the operation and maintenance of these types of technologies, a solution seems to lie in careful decisions about the type, materials and design of such devices.
As dynamic mechanisms, CABS may depend on energy availability. By contrast, passive technologies do not present this problem because they do not act actively, giving the system a higher robustness towards change. Their independence from any external input (electricity, thermal energy or data) enables their continuing functionality, even in case of power failure. Therefore, to permit continuous operation, the use of backup alternatives such as a secondary energy source is likely to be suggested for some CABS.

Finally, the lack of control over several CABS may be seen as a flaw. There are some CABS, like the ones relying on smart materials, that cannot be controlled by the occupant. In these cases, if they do not satisfy the occupants’ wishes, they produce an unfortunate outcome. Thus, the possibility of controlling a given technology may be seen as a strength or a weakness depending on the device, the intention and the task that needs to be achieved.

Current status and use of these technologies
Historically, the façade has been the main load-bearing structural element of buildings, restricting its functionality and materiality. In the contemporary period, the façade is often liberated from its structural task, allowing more flexibility to fit diverse contexts such as saving/generating energy, providing thermal properties for comfort, and adaptability to changing conditions. Modern construction methods, developments in material sciences, dropping prices of electronic devices, and the availability of controllable kinetic façade components now offer rich possibilities for innovative building envelope solutions that respond better to the environmental context, thereby allowing the façade to ‘‘behave’’ as a living organism. However, most current work on CABS is focused on trying to better understand the concepts behind these technologies so that they can be transferred and implemented in practical ways in buildings.

Kuru et al. identify three major limitations in biomimetic adaptive building skins (Bio-ABS): the level of development, the regulation of diverse environmental factors, and performance evaluation. They suggest that, as is normal for any immature concept, the majority of the intended projects are conceptual. One of the main reasons is the challenge of combining multiple disciplines like architecture, biomimetics and engineering to develop, analyze and measure performance. Moreover, procedures to identify and transfer biological solutions into architectural systems are limited. Current software has limitations in terms of specific tools and methods that can mimic the performance of Bio-ABS. Adding to this issue, the transition from digital models to physical application requires the teamwork of experts from different fields, which can sometimes be hard to achieve.

Another current deficiency is the focus on monofunctional CABS, which wastes an opportunity for improvement: the idea behind CABS is to have envelopes that can respond to various internal and external factors, not just one per building skin. Moreover, the support for and development rate of different CABS tasks has been uneven. For example, the research of Kuru et al. shows that light-management CABS are the most comprehensively developed while energy regulation is the least studied. Thus, while a boost in the implementation of lighting-management CABS is likely, those related to energy regulation may lag behind.
Similarly, the research currently conducted is characterized by fragmented developments, some of it going in the direction of material science (e.g. switchable glazing, adaptable thermal mass, and variable insulation), and some in creative design processes. As a consequence of the drawbacks presented above, the most common way of pursuing energy efficiency in buildings is currently a whole-building (not envelope-only) approach. There are few examples of façades that incorporate passive or smart technologies to create a comfortable indoor space, beyond shading technologies such as blinds or louvers and operable windows for ventilation. Therefore, future improvements in this field may be required to overcome these issues.

Future improvements on CABS
Several challenges must be faced to support the growth of CABS. The first is the creation of custom-made software that can analyze dynamic systems based on a climatic pattern. Moreover, if the software can anticipate and examine the future consequences of actions happening in the present, more accurate results can be obtained. This could be improved by introducing logic controls into CABS software. Finally, making interfaces more user-friendly could ease the use of these tools.

Following this idea, not only the software but also the scope of topics that CABS currently cover may be extended. Therefore, new ways to manage and control energy, water and heat must be explored. One way to do this is by working out how to mimic biological methods and translate them into a practical form for buildings; inspiration from nature seems to have great potential.

A common characteristic of developing ideas is that to grow and prosper, risks must be taken, opening up the possibility of failure. CABS are no exception, and to succeed, developers must take the risks, for example those related to long payback periods and high operating costs. Mols et al. mention that “If the developer chooses to take the risks, the outcomes are claimed to be beneficiary”. Some of these risks stem from the uncertainty behind CABS. One way to mitigate them is by monitoring operational performance and by conducting post-occupancy evaluations, growing the body of data on the actual performance of current CABS, which is currently lacking in the literature. In conclusion, the idea of CABS needs the support and commitment of all building stakeholders in order to move forward.

Notable examples
Although the concept of CABS is still relatively new, several hundred concepts can be found in buildings all over the world. The following list shows an overview of notable examples.

Built examples
Al Bahar Towers, Aedas, Abu Dhabi
Arab World Institute, Jean Nouvel, Paris, France
Heliotrope, Rolf Disch, Freiburg, Germany
Burke Brise Soleil – Quadracci Pavilion, Milwaukee Art Museum, Milwaukee, Wisconsin, United States
Surry Hills Library, Francis-Jones Morehen Thorp, Sydney, Australia
Bengt Sjostrom Theatre, Studio Gang Architects, Rockford, Illinois, United States
Kuggen movable sunscreen, Wingårdh arkitektkontor, Gothenburg, Sweden
The Barcelona Media-ICT Building, Barcelona, Spain
Terrence Donnelly Centre for Cellular and Biomolecular Research, Toronto, Canada
Devonshire Building, University of Newcastle
New San Francisco Federal Building, San Francisco, United States

References

Architectural design Building engineering Sustainable building
Climate-adaptive building shell
[ "Engineering" ]
5,313
[ "Sustainable building", "Building engineering", "Construction", "Civil engineering", "Architectural design", "Design", "Architecture" ]
54,439,547
https://en.wikipedia.org/wiki/Administration%20of%20Radioactive%20Substances%20Advisory%20Committee
The Administration of Radioactive Substances Advisory Committee (ARSAC) is an advisory non-departmental public body of the government of the United Kingdom. It is sponsored by the Department of Health. The committee advises government on the certification of doctors and dentists who want to use radioactive medicinal products on people. Doctors and dentists who use radioactive medicinal products (radiopharmaceuticals) on people must get a certificate from health ministers. This certificate allows them to use radioactive medicinal products in diagnosis, therapy and research. ARSAC was set up to advise health ministers with respect to the grant, renewal, suspension, revocation and variation of certificates and generally in connection with the system of prior authorisation required by Article 5(a) of Council Directive 76/579/Euratom. The majority of ARSAC's members are medical doctors who are appointed to the committee as independent experts in their field (for example nuclear medicine). The committee comments on applications in confidence to the ARSAC Support Unit, Public Health England. No individual committee member approves any single application. An official from the Department of Health authorises successful applications on behalf of the Secretary of State. See also Centre for Radiation, Chemical and Environmental Hazards in Oxfordshire References External links Nuclear medicine organizations Non-departmental public bodies of the United Kingdom government
Administration of Radioactive Substances Advisory Committee
[ "Engineering" ]
264
[ "Nuclear medicine organizations", "Nuclear organizations" ]
54,440,098
https://en.wikipedia.org/wiki/5%CE%B1-Dihydronorethisterone
5α-Dihydronorethisterone (5α-DHNET, dihydronorethisterone, 17α-ethynyl-5α-dihydro-19-nortestosterone, or 17α-ethynyl-5α-estran-17β-ol-3-one) is a major active metabolite of norethisterone (norethindrone). Norethisterone is a progestin with additional weak androgenic and estrogenic activity. 5α-DHNET is formed from norethisterone by 5α-reductase in the liver and other tissues. Pharmacology Unlike norethisterone which is purely progestogenic, 5α-DHNET has been found to possess both progestogenic and marked antiprogestogenic activity, showing a profile of progestogenic activity like that of a selective progesterone receptor modulator (SPRM). Moreover, the affinity of 5α-DHNET for the progesterone receptor (PR) is greatly reduced relative to that of norethisterone at only 25% of that of progesterone (versus 150% for norethisterone). 5α-DHNET shows higher affinity for the androgen receptor (AR) compared to norethisterone with approximately 27% of the affinity of the potent androgen metribolone (versus 15% for norethisterone). However, although 5α-DHNET has higher affinity for the AR than does norethisterone, it has significantly diminished and in fact almost abolished androgenic activity in comparison to norethisterone in rodent bioassays. Similar findings were observed for ethisterone (17α-ethynyltestosterone) and its 5α-reduced metabolite, whereas 5α-reduction enhanced both the AR affinity and androgenic potency of testosterone and nandrolone (19-nortestosterone) in rodent bioassays. As such, it appears that the C17α ethynyl group of norethisterone is responsible for its loss of androgenicity upon 5α-reduction. Instead of androgenic activity, 5α-DHNET has been reported to possess some antiandrogenic activity. Norethisterone and 5α-DHNET have been found to act as weak irreversible aromatase inhibitors (Ki = 1.7 μM and 9.0 μM, respectively). However, the concentrations required are probably too high to be clinically relevant at typical dosages of norethisterone. 5α-DHNET specifically has been assessed and found to be selective in its inhibition of aromatase, and does not affect other steroidogenesis enzymes such as cholesterol side-chain cleavage enzyme (P450scc), 17α-hydroxylase/17,20-lyase, 21-hydroxylase, or 11β-hydroxylase. Since it is not aromatized (and hence cannot be transformed into an estrogenic metabolite), unlike norethisterone, 5α-DHNET has been proposed as a potential therapeutic agent in the treatment of estrogen receptor (ER)-positive breast cancer. See also 5α-Dihydroethisterone 5α-Dihydronandrolone 5α-Dihydronormethandrone 5α-Dihydrolevonorgestrel References 5α-Reduced steroid metabolites Ethynyl compounds Anabolic–androgenic steroids Aromatase inhibitors Estranes Human drug metabolites Ketones Selective progesterone receptor modulators
5α-Dihydronorethisterone
[ "Chemistry" ]
769
[ "Ketones", "Chemicals in medicine", "Functional groups", "Human drug metabolites" ]
54,440,318
https://en.wikipedia.org/wiki/19-Noretiocholanolone
19-Noretiocholanolone, also known as 5β-estran-3α-ol-17-one, is a metabolite of nandrolone (19-nortestosterone) and bolandione (19-norandrostenedione) that is formed by 5α-reductase. It is on the list of substances prohibited by the World Anti-Doping Agency since it is a detectable metabolite of nandrolone, an anabolic-androgenic steroid (AAS). Consumption of boar meat, liver, kidneys and heart have been found to increase urinary 19-noretiocholanolone output. See also Etiocholanolone 19-Norandrosterone 5α-Dihydronandrolone 5α-Dihydronorethisterone References 5α-Reduced steroid metabolites Secondary alcohols Estranes Human drug metabolites Ketones World Anti-Doping Agency prohibited substances
19-Noretiocholanolone
[ "Chemistry" ]
209
[ "Pharmacology", "Ketones", "Functional groups", "Medicinal chemistry stubs", "Chemicals in medicine", "Human drug metabolites", "Pharmacology stubs" ]
54,440,992
https://en.wikipedia.org/wiki/Cortifen
Cortifen, also known as cortiphen or kortifen, as well as fencoron, is a synthetic glucocorticoid corticosteroid and cytostatic antineoplastic agent which was developed in Russia for potential treatment of tumors. It is a hydrophobic chlorphenacyl nitrogen mustard ester of 11-deoxycortisol (cortodoxone). See also List of hormonal cytostatic antineoplastic agents List of corticosteroid esters List of Russian drugs References Acetate esters Tertiary alcohols Amines Corticosteroid esters Glucocorticoids Ketones Mineralocorticoids Nitrogen mustards Organochlorides Prodrugs Russian drugs Chloroethyl compounds
Cortifen
[ "Chemistry" ]
170
[ "Ketones", "Functional groups", "Prodrugs", "Chemicals in medicine", "Amines", "Bases (chemistry)" ]
54,441,483
https://en.wikipedia.org/wiki/Ciclometasone
Ciclometasone (brand names Cycloderm, Telocort) is a synthetic glucocorticoid corticosteroid which is marketed in Italy. References Amines Carboxylic acids Organochlorides Corticosteroid esters Diketones Glucocorticoids Pregnanes Diols
Ciclometasone
[ "Chemistry" ]
72
[ "Amines", "Carboxylic acids", "Bases (chemistry)", "Functional groups" ]
54,441,564
https://en.wikipedia.org/wiki/Fluocortin%20butyl
Fluocortin butyl (brand names Lenen, Novoderm, Varlane, Vaspit), or fluocortin 21-butylate, is a synthetic glucocorticoid corticosteroid which is marketed in Germany, Belgium, Luxembourg, Spain, and Italy. Chemically, it is the butyl ester derivative of fluocortin. It was patented in 1971 and approved for medical use in 1977. References Corticosteroid esters Esters Organofluorides Glucocorticoids Pregnanes
Fluocortin butyl
[ "Chemistry" ]
120
[ "Organic compounds", "Esters", "Functional groups" ]
54,441,937
https://en.wikipedia.org/wiki/Langgan
Langgan () is the ancient Chinese name of a gemstone which remains an enigma in the history of mineralogy; it has been identified, variously, as blue-green malachite, blue coral, white coral, whitish chalcedony, red spinel, and red jade. It is also the name of a mythological langgan tree of immortality found in the western paradise of Kunlun Mountain, and the name of the classic waidan alchemical elixir of immortality langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan". Word The Chinese characters 琅 and 玕 used to write the gemstone name lánggān are classified as radical-phonetic characters that combine the semantically significant "jade radical" 玉 or 王 (commonly used to write names of jades or gemstones) and phonetic elements hinting at pronunciation. Láng 琅 combines the "jade radical" with liáng 良 "good; fine" (interpreted to denote "fine jade") and gān 玕 combines it with the phonetic gān 干 "stem; trunk". The Chinese word yù 玉 is usually translated as "jade" but in some contexts translates as "fine ornamental stone; gemstone; precious stone", and can refer to a variety of rocks that carve and polish well, including jadeite, nephrite, agalmatolite, bowenite, and serpentine. Modern written Chinese láng 琅 and gān 玕 have variant Chinese characters. Láng 琅 is occasionally transcribed as láng 瑯 (with láng 郞 "gentleman") or lán 瓓 (lán 闌 "railing"); and gān 玕 is rarely written as gān 玵 (with a gān 甘 "sweet" phonetic). Guwen "ancient script" variants were láng 𤨜 or 𤦴 and gān 𤥚. Berthold Laufer proposed that langgan was an onomatopoetic word "descriptive of the sound yielded by the sonorous stone when struck". Lang occurs in several imitative words meaning "tinkling of jade pendants/ornaments": lángláng 琅琅 "tinkling/jingling sound", língláng 玲琅 "tinkling/jangling of jade", línláng 琳琅 "beautiful jade; sound of jade", and lángdāng 琅璫 "tinkling sound". Laufer further suggests this etymology would explain the transference of the name langgan from a stone to a coral; Du Wan's 杜綰 Yunlin shipu 雲林石譜 "Stone Catalogue of the Cloudy Forest" (below) expressly states that the coral langgan "when struck develops resonant properties". Classical descriptions The name langgan has undergone remarkable semantic change. The first references to langgan are found in Chinese classics from the Warring States period (475-221 BCE) and Han dynasty (206 BCE-220 CE), which describe it as a valuable gemstone and mineral drug, as well as the mythological fruit of the langgan tree of immortality on Kunlun Mountain. Texts from the turbulent Six Dynasties period (220-589) and Sui dynasty (581-618) used langgan gemstone as a literary metaphor, and an ingredient in alchemical elixirs of immortality, many of which were poisonous. During the Tang dynasty (618-907), langgan was reinterpreted as a type of coral. Several early texts (including the Shujing, Guanzi, and Erya below) recorded langgan in context with the obscure gemstone(s) qiúlín 璆琳. In Classical Chinese syntax, 璆琳 can be parsed as two qiu and lin types of jade or as one qiulin type. A recent dictionary of Classical Chinese says qiú 璆 "fine jade, jade lithophone" is cognate with qiú 球 "precious gem, fine jade; jade chime or lithophone" (which later came to mean "ball; sphere"), and lín 琳 "blue-gem; sapphire". In what may be the earliest record, the c. 
5th-3rd centuries BCE Yu Gong "Tribute of Yu the Great" chapter of the Shujing "Classic of Documents" says the tributary products from Yong Province (located in the Wei River plain, one of the ancient Nine Provinces) included qiulin and langgan jade-like gemstones: "Its articles of tribute were the k'ew and lin gem-stones, and the lang-kan precious stones". Legge quotes Kong Anguo's commentary that langgan is "a stone, but like a pearl", and suggests it was possibly lazulite or lapis lazuli, which Laufer calls "purely conjectural". The c. 4th-3rd centuries BCE Guanzi encyclopedic text, named for and attributed to the 7th century BCE philosopher Guan Zhong, who served as Prime Minister to Duke Huan of Qi (r. 685-643 BCE), uses bi 璧 "a flat jade disc with a hole in the center", qiulin 璆琳 "lapis lazuli", and langgan 琅玕 as examples of how establishing diverse local commodities as fiat currencies will encourage foreign economic cooperation. When Duke Huan asks Guanzi about how to politically control the "Four Yi" (meaning "all foreigners" on China's borders), he replies: Since the Yuzhi [i.e., Yuezhi/Kushans in Central Asia] have not paid court, I request our use of white jade discs [白璧] as money. Since those in the Kunlun desert (modern-day Xinjiang and Tibet) have not paid court, I request our use of lapis lazuli and langgan gems as money. … Since a white jade held tight unseen against one's chest or under one's armpit will be used as a thousand pieces of gold, we can obtain the Yuezhi eight thousand li away and make them pay court. Since a lapis lazuli and langgan gem (fashioned in) a hair clasp and earring will be used as a thousand gold pieces, we can obtain [i.e., defeat] [the inhabitants] of the Kunlun deserts eight thousand li away and make them pay court. Therefore if resources are not commandeered, economies will not connect, those distant from each other will have nothing to use for their common interest and the four yi will not be obtained and come to court. Xun Kuang's 3rd century BCE Confucian classic Xunzi has a context criticizing elaborate burials that uses dan'gan 丹矸 (with dān 丹 "cinnabar" and gān 矸 "waste rock", with the "stone radical" and same gān 干 phonetic) and langgan 琅玕. In these ancient times, the body was covered with pearls and jades, the inner coffin was filled with beautifully ornamented embroideries, and he outer coffin was filled with yellow gold and decorated with cinnabar [丹矸] with added layers of laminar verdite. [In the outer tomb chamber were] rhinoceros and elephant ivory fashioned into trees, with precious rubies [琅玕], magnetite lodestones, and flowering aconite for their fruit." (18.7) John Knoblock translates langgan as "rubies", noting perhaps the genuine ruby or balas spinel, were connected with the cult of immortality, and cites the Shanhaijing saying they grow on Mount Kunlun's Fuchang trees, and the Zhen'gao saying that adepts swallow "ruby blossoms" to feign death and become transcendents. Early Chinese dictionaries define langgan. The c. 4th-3rd century BCE Erya geography section (9 Shidi 釋地) lists valuable products from the various regions of ancient China: "The beautiful things of the northwest are the qiulin [璆琳] and langgan gemstones from the wastelands [虛] of Kunlun Mountain". The 121 CE Shuowen jiezi (Jade Radical section 玉部) has two consecutive definitions for lang 琅 and gan 玕. 
Lang is [used in] langgan, which "resembles a pearl [似珠者]", Gan is [used in] langgan, paraphrasing the Yu Gong, "Yong Province [using the ancient yōng 雝 character for yōng 雍] [produces] qiulin and langgan [gems] [球琳琅玕]". Three sections about western Chinese mountains in the c. 4th-2nd centuries BCE Shanhaijing "Classic of Mountains and Seas" record early geographic legends associating langgan with Xi Wang Mu "Queen Mother of the West" who lives on Jade Mountain in the mythological axis mundi Kunlun Mountain paradise. Two mention langgan gems and one mentions langganshu 琅玕樹 trees. The Shanhaijing translator Anne Birrell exemplifies the difficulties of translating the word langgan in three ways: "pearl-like gems", "red jade", and "precious gem [tree]". First, the "Classic of the Mountains: West" section says Huaijiang 槐江 (lit. "pagoda-tree river") Mountain, located 400 li northeast of Kunlun Mountain, has abundant langgan and other valuable minerals. "On the summit of Mount Carobriver are quantities of green male-yellow 多青雄黃, precious pearl-like gems [藏琅玕], and yellow gold and jade. Granular cinnabar is abundant on its south face and there are quantities of speckled yellow gold and silver on its north face." (2) "Male-yellow" overliterally translates xiónghuáng 雄黃 "realgar; red orpiment"—Compare Richard Strassberg's translation, "On the mountain’s heights is much green realgar, the finest quality of Langgan-Stone, yellow gold, and jade. On its southern slope are many grains of cinnabar, while on its northern slope are much glittering yellow gold and silver.". Guo Pu's 4th century CE Shanhaijing commentary says langgan shi 石 "stone/gem" (cf. zi 子 "seeds" in the third section) resembles a pearl, and cáng 藏 "store; conceal, hide" means yǐn 隱 "conceal; hide". However, Hao Yixing's 郝懿行 1822 commentary says cáng 藏 was originally written zāng 臧 "good", that is, Huaijiang Mountain has the "best" quality langgan. Second, the "Classic of the Great Wilderness: West" section records that on [Xi] Wang Mu 王母 "Queen Mother [of the West]" Mountain: "Here are the sweet-bloom tree, sweet quince, white weeping willow, the look-flesh creature, the triply-grey horse, precious jade [琁瑰], dark green jade gemstone [瑤碧], the white tree, red jade [琅玕], white cinnabar, green cinnabar, and quantities of silver and iron." (16) Third, the "Classic of Regions Within the Seas: West" section refers to a mythical tricephalic creature dwelling in a fuchangshu 服常樹 (lit. "serve constant tree") who guards a langganshu tree south of Kunlun: "The wears-ever fruit tree—on its crown there is a three-headed person who is in charge of the precious gem tree [琅玕樹]." (11) Interpreters disagree whether the langgan tree grows alongside the fuchang tree or grows on it. Guo Pu's commentary admits unfamiliarity with the fuchang 服常 tree; Wu Renchen's 17th-century commentary notes the similarity with the shachang 沙棠 "sand-plum tree" that the Huainanzi lists with langgan, but doubts they are the same. Guo's commentary says langgan zi 子 "seeds". or "fruits" resemble pearls (cf. the Shuowen definition) and quotes the Erya that it is found on Kunlun Mountain. The c. 120 BCE Huainanzi "Terrestrial Forms" chapter (4 墬形) describes langgan trees and langgan jade both found on Mt. Kunlun. The first context describes how Yu the Great controlled the Great Flood and "excavated the wastelands of Kunlun [昆侖之球] to make level ground". "Atop the heights of Kunlun are treelike cereal plants [木禾] thirty-five feet tall. 
Growing to the west of these are pearl trees [珠樹], jade trees [玉樹], carnelian trees [琁樹], and no-death trees [不死樹]. To the east are found sand-plum trees [沙棠] and malachite trees [琅玕]. To the south are crimson trees [絳樹]. To the north are bi jade trees [碧樹] and yao jade trees [瑤樹]." (4.3), translating with Schafer's "malachite" instead of "coral"). The second context paraphrases the Erya definition (above) of langgan: "The beautiful things of the northwest are the qiu, lin, and langgan jades [球琳琅玕] of the Kunlun Mountains [昆侖]" (4.7), noting that qiu, lin, and langgan are "types of jade, mostly not identifiable with certainty". Medicine Several early classics of traditional Chinese medicine mention langgan. The c. 1st century BCE Huangdi Neijings Suwen 素問 "Basic Questions" section uses langgan beads to describe a healthy pulse. "When man is serene and healthy the pulse of the heart flows and connects, just as pearls are joined together or like a string of red jade [如循琅玕]—then one can speak of a healthy heart". The c. 2nd century CE Nan Jing explains this langgan bead simile: "[If the qi in] the vessels comes tied together like rings, or as if they were following [in their movement a chain of] lang gan stones [如循琅玕], that implies a normal state." Commentaries elaborate that langgan stones "resemble pearls" and their movement is like a "string of jade- or pearl-like beads". The c. 3rd century CE Shennong Bencaojing lists qīng lánggān 青琅玕 "blue-green langgan" or shízhū 石珠 (lit. "rock pearl") as a mineral drug used to treat ailments such as itchy skin, carbuncle, and ALS. This is one of the rare early references to langgan that treats it as a real substance, while many others make it a feature of the divine world. Alchemy The langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan" name of the waidan "external alchemy" elixir of immortality is the best-known usage of the word langgan. Some other translations are "Elixir of Langgan Efflorescence", "Lang-Kan (Gem) Radiant Elixir", and "Elixir Flower of Langgan". The earliest method of compounding the elixir is found in the Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經 "Supreme Scripture on the Elixir of Langgan Efflorescence, from the Purple Texts Inscribed by the Spirits of Grand Tenuity". This text was originally part of the Daoist Shangqing School scriptural corpus supposedly revealed to Yang Xi (330-c. 386 CE) between 364 and 370. The Purple Texts alchemical recipe for preparing Elixir of Langgan Efflorescence involves nine steps in four stages carried out over thirteen years. The first stage produces the Langgan Efflorescence proper, which when ingested is said to make "one's complexion similar to gold and jade and enables one to summon divine beings". The next three stages further refine and transform the Langgan Elixir, repeatedly plant it in the earth, and eventually generate a tree whose fruits confer immortality when eaten, just like those of the legendary langgan tree on Mount Kunlun. Upon completing any of the nine successive steps in producing the elixir, the alchemist (or adept in the neidan interpretation) can choose to either ingest the products and obtain immortality by ascending into the realm of Shangqing heavens or may continue on to the next step with the promise of ever-increasing rewards. The first stage has one complex waidan step of compounding the primary Langgan Efflorescence. 
After performing ritual zhāi 齋 "purification practices" for 40 days, the adept spends 60 days to acquire and prepared the elixir's fourteen ingredients, place them in a crucible, add mercury on top of them, lute the crucible with several layers of mud, and after sacrificing wine to the divinities, heating the crucible for 100 days. The elixir's fourteen reagents, given in exalted code names such as "White-Silk Flying Dragon" for quartz, are: cinnabar, realgar, milky quartz, azurite, amethyst, graphite, saltpeter, sulfur, asbestos, mica, iron pyrite, lead carbonate, Turkestan salt (desert lake precipitates containing gypsum, anhydrite, and halite), and orpiment. Based upon these ingredients, Schafer says the end product was probably bluish flint glass with a high lead content. The alchemist can either leave the crucible closed and proceed to the next stage or break it open and consume the langan elixir that is said to yield marvelous results. The efflorescence should have thirty-seven hues. It is a volatile liquid both brilliant and mottled, a purple aurora darkly flashing. This is called the Elixir of Langgan Efflorescence. If, just at dawn on the first day of the eleventh, fourth, or eighth month, you bow repeatedly and ingest one ounce of this elixir with the water from an east-flowing stream, seven-colored pneumas will rise from your head and your face will have the jadelike glow of metallic efflorescence. If you hold your breath, immediately a chariot from the eight shrouded extents of the universe will arrive. When you spit on the ground, your saliva will transform into a flying dragon. When you whistle to your left, divine Transcendents will pay court to you; when you point to the right, the vapors of Three Elementals will join with the wind. Then, in thousands of conveyances, with myriad outriders, you will fly up to Upper Clarity. The second stage comprises two iterative 100-day waidan alchemical steps transforming the elixir. Firing the unopened stage one crucible of Langgan Efflorescence for another 100 days will produce the Lunar Efflorescence of the Yellow Solution [黄水月華], which when consumed will make you "change forms ten thousand times, your eyes will become luminous moons, and you will float above in the Grand Void to fly off to the Palace of Purple Tenuity". The next step of firing the closed crucible for an additional one 100 days will produce three giant pearls called the Jade Essence of the Swirling Solution [徊水玉精]. Ingesting one alchemical pearl supposedly causes you to immediately give off liquid and fire, form gems with your breath, and your body "will become a sun, and the Thearchs of Heaven will descend to greet you. You will rise as a glowing orb to Upper Clarity." The third stage involves four 3-year steps utilizing the elixirs produced in the first two stages to create fantastic seeds that are replanted and grow into increasingly perfected "spirit trees" with fruits of immortality. This stage falls between conventional waidan alchemy and the horticultural art of growing marvelous zhi 芝 "plants of longevity; fungi" such as the lingzhi mushroom. Initially, the adept mixes the Elixir of Langgan Efflorescence with Jade Essence of the Swirling Solution, transforming the jīng 精 "essence; sperm; seed" in the latter name into an actual seed that is planted in an irrigated field. After three years it grows into the Tree of Ringed Adamant [環剛樹子] or Hidden Polypore of the Grand Bourne [太極隱芝], which has a ring-shaped fruit like a red jujube. 
Next, the adept plants one of the ringed fruits and waters it with the Yellow Solution, and after three years a plant called the Phoenix-Brain Polypore [fengnao zhi 鳳腦芝] will grow like a calabash, with pits like five-colored peaches. Then, a phoenix-brain fruit is planted and watered with Yellow Solution, which after three years will grow into a red tree, like a pine, five or six feet in height, with a jade-white fruit like a pear [赤樹白子]. Lastly, the adept plants the seed of the red tree, waters it with Swirling Solution, waits another three years for the growth of a vermilion tree like a plum, six or seven feet in height, with a halycon-blue fruit like the jujube [絳樹青實]. Upon eating this fruit, the adept will ascend to the heaven of Purple Tenuity. The fourth stage involves two comparatively quicker waidan steps. The adept repeatedly boils equal parts of the Yellow Solution and the Swirling Solution, and transforms them into the Blue Florets of Aqueous Yang [水陽青映]. If you drink this at dawn, your body will issue a blue and gemmy light, your mouth will spew forth purple vapors, and you will rise above to Upper Clarity [Shangqing]. But before departing earth, the adept's last step is to mix the remaining Elixir of Langan Efflorescence with liquified lead and mercury to produce 50-pound ingots of alchemical silver and purple gold, make incantations to the water spirits, and throw both oblatory ingots into a stream. Despite the carefully detailed Purple Texts' waidan recipe for preparing langgan elixirs, scholars have doubted that the authors actually meant for it to be produced and consumed. Some interpret the impractical 13-year elixir recipe as symbolic instructions for what later came to be known as neidan meditative visualization, and is more a "product of religious imagination", drawing on the respected metaphors of alchemical language, than a laboratory manual drawing on the metaphors of meditation. Others believe this "extravagantly impractical recipe" is an attempt to assimilate into conventional waidan alchemy the ancient legends about langgan gems that grow on trees in the paradise of KunIun. The Shangqing Daoist patriarch Tao Hongjing compiled and edited both the c. 370 Taiwei lingshu ziwen langgan huadan shenzhen shangjing and the c. 499 Zhen'gao 真誥 "Declarations of the Perfected" that also mentions langan elixirs in some of the same terminology. One context records that the early Daoist masters Yan Menzi 衍門子, Gao Qiuzi 高丘子, and Master Hongyai 洪涯先生 swallowed langgan hua 琅玕華 "langgan blossoms" to feign death and become xian transcendents and enter the "dark region" beyond the world. Needham and Lu proposed this langgan hua probably refers to a red or green poisonous mushroom, and Knoblock surmised that these "ruby blossoms" were a species of hallucinogenic mushroom connected with the elixir of immortality. Another Zhen'gao context describes how in the Shangqing latter days before the apocalypse (predicted to be in 507) people will practice alchemy to create immortality drugs, including the Langgan Elixir that "will flow and flower in thick billows" and Cloud Langgan. If the adept takes one spatula full of elixir, "their spiritual feathers will spread forth like pinions. Then will they (be able to) peruse the pattern figured on the Vault of Space, and glow forth in the Chamber of Primal Commencement". Several ingredients in the Elixir of Langgan Efflorescence are toxic heavy metals including mercury, lead, and arsenic, and alchemical elixir poisoning was common knowledge in China. 
Academics have puzzled over why Daoist adepts would knowingly consume a compound of mineral poisons, and Michel Strickmann, a scholar of Daoist and Buddhist studies, proposes that langgan elixir was believed to be an agent of self-liberation that guaranteed immortality to the faithful through a kind of ritual suicide. Since early Daoist literature thoroughly, "even rapturously", described the deadly toxic qualities of many elixirs, Strickmann concluded that scholars need to reexamine the Western stereotype of "accidental elixir poisoning" that supposedly applied to "misguided alchemists and their unwitting imperial patrons". Literature Chinese authors extended the classical descriptions of langgan, meaning "a highly valued gem from western China; a mythical tree of immortality on Kunlun Mountain", into a literary and poetic metaphor for the exotic beauties of an idealized natural world. Several early writers described langgan jewelry, both real and fictional. The 2nd-century scholar and scientist Zhang Heng described a party for the Han nobility at which guests were delighted with the presentation of bowls overflowing with zhēnxiū 珍羞 "delicacies; exotic foods", including langgan fruits of paradise. The 3rd-century poet Cao Zhi described hanging "halcyon blue" (cuì 翠) langgan from the waist of his "beautiful person", and the 5th-century poet Jiang Yan adorned a goddess with gems of langgan. Some other authors reinforced the use of its name to refer to divine fruits on heavenly trees. Ruan Ji, one of the Seven Sages of the Bamboo Grove, wrote a 3rd-century poem titled "Dining at Sunrise on Langgan Fruit". The 8th-century poet Li Bai wrote about a famished but proud fenghuang that would not deign to peck at bird food, but, like a Daoist adept, would scorn all but a diet of langgan. This represents a literary transition from glittering fruit of distant Kunlun, to aristocratic fare in golden bowls, and eventually to an elixir of immortality. A further extension of the langgan metaphor was to describe natural images of beautiful crystals and lush vegetation. For example, Ban Zhao's poem on "The Arrival of Winter" says, "The long [Yellow River] forms (crystalline) langgan [written langan 瓓玕] / Layered ice is like banked-up jade". Two of Du Fu's poems figuratively used the word langgan in reference to the vegetation around the forest home of a Daoist recluse, and to the splendid grass that provided seating for guests at a royal picnic near a mysterious grotto. Bamboo was the most typical representative of blue-green langgan in the plant world; compare láng 筤 ("bamboo radical" and the liáng phonetic in láng 琅) "young bamboo; blue". Liu Yuxi wrote that the famous spotted bamboo of South China was "langgan colored". Geographic sources Chinese texts list many diverse locations where langgan occurred. Several classical works associate mythical langgan trees with Kunlun Mountain (far west or northwest China), and two give sources of actual langgan gemstones: the Shujing says it was tribute from Yong Province (present-day Gansu and Shaanxi) and the Guanzi says the Kunlun desert (Xinjiang and Tibet). Official Chinese histories record langgan coming from different sources. The 3rd-century Weilüe, 5th-century Hou Hanshu, 6th-century Wei shu, and 7th-century Liang shu list langgan among the products of Daqin, which depending on context meant the Near East or the Eastern Roman Empire, especially Syria. 
The Liang shu also says it was found in Kucha (modern Aksu Prefecture, Xinjiang), the 7th-century Jinshu says in Shaanxi, and the 10th-century Tangshu says in India. The Jiangnan Bielu history of the Southern Tang (937–976) says langgan was mined at Pingze 平澤 in Shu (Sichuan Province). The Daoist scholar and alchemist Tao Hongjing (456-536) notes langgan gemstone was traditionally associated with Sichuan. The Tang pharmacologist Su Jing 蘇敬 (d. 674) reports that it came from the distant Man tribes of the Yunnan–Guizhou Plateau and Hotan/Khotan. Accurately identifying geographic sources may be complicated by langgan referring to more than one mineral, as discussed next. Identifications The precise referent of the Chinese name langgan 琅玕 is uncertain in the present day. Scholars have described it as an "enigmatic archaism of politely pleasant or poetic usage", and "one of the most elusive terms in Chinese mineralogy". Identifications of langgan comprise at least three categories: Blue-green langgan was first recorded circa 4th century BCE, Coral langgan from the 8th century, and Red langgan is from an uncertain date. Edward H. Schafer, an eminent scholar of Tang dynasty literature and history, discussed langgan in several books and articles. His proposed identifications gradually changed from Mediterranean red coral, to coral or a glass-like gem, to chrysoprase or demantoid, to coral or red spinel, and ultimately to malachite. Blue-green langgan Langgan was a qīng 青 "green; blue; greenish black" (see Blue–green distinction in language) gemstone of lustrous appearance mentioned in numerous classical texts. They listed it among historical imperial tribute products presented from the far western regions of China, and as the mineral-fruit of the legendary langgan trees of immortality on Mount Kunlun. Schafer's 1978 monograph on langgan sought to identify the treasured blue-green gemstone, if it ever had a unique identity, and concluded the most plausible identification is malachite, a bright green mineral that was anciently used as a copper ore and an ornamental stone. Two early Chinese mineralogical authorities identified langgan as malachite, commonly called kǒngquèshí 孔雀石 (lit. "peacock stone") or shílǜ 石綠 (lit. "stone green"). Comparing blue-green stones that were known in early East Asia, Schafer disqualified several conceivable identities; demantoid garnet and green tourmaline are rarely of gem quality, while neither apple-green chrysoprase nor light greenish-blue turquoise typically have dark hues. This leaves malachite, This handsome green carbonate of copper has important credentials. It is often found in copper mines, and is therefore regularly at the disposal of copper- and bronze-producing peoples. It has, in certain varieties, a lovely silky luster, caused by its fibrous structure. It is soft and easily cut. It takes a good polish. It was commonly made into beads both in the western and eastern worlds. Above all, even uncut malachite often has a nodular or botryoidal structure, like little clumps of bright green beads, one of the classical forms attributed to lang-kan. Sometimes, too, it is stalactitic, like little stone trees. Furthermore, archeology confirms that malachite was an important gemstone of pre-Han China. Inlays of malachite and turquoise decorated many early Chinese bronze weapons and ritual vessels. Tang sources continued to record blue-green langgan. 
Su Jing's 652 Xinxiu bencao 新修本草 said it was a glassy substance similar to liúli 琉璃 "colored glaze; glass; glossy gem" that was imported from the Man tribes in the Southwest and from Khotan. In 762, Emperor Daizong of Tang proclaimed a new era name of Baoying 寶應 "Treasure Response" in honor of the discovery of thirteen auspicious treasures in Jiangsu, one of which was glassy langgan beads. Coral langgan Tang dynasty herbalists and pharmacists changed the denotation of langgan from the traditional blue-green gemstone to a kind of coral. Chen Cangqi's c. 720 Bencao shiyi 本草拾遺 "Collected Addenda to the Pharmacopoeia" described it as a pale red coral, growing like a branched tree on the bottom of the sea, fished by means of nets, which after coming out of the water gradually darkens and turns blue. Langgan already had an established connection with coral. Chinese mythology pairs two antipodean paradises: Mount Kunlun in the far west and Mount Penglai, located on an island in the far eastern Bohai Sea. Both mountains had mythic plants and trees of immortality that attracted Daoist xian transcendents; Kunlun's red langgan trees with blue-green fruits were paralleled by Penglai's shanhu shu 珊瑚樹 "red coral trees". As to what variety of blue or green branching coral was identified as this "mineralized subaqueous shrub" langgan, Schafer suggests considering the blue coral Heliopora coerulea, since it must have been a coral attractive enough to be comparable with the extravagant myths of Kunlun. It is the only living species in the family Helioporidae, the only octocoral known to produce a massive skeleton, and it is found throughout the Pacific and Indian Oceans, although the IUCN currently considers it a vulnerable species. Du Wan's c. 1124 Yunlin shipu mineralogy book has a section (100) on langgan shi 琅玕石 that mentions shanhu "coral": "A coral-like stone found in shallow water along the coast of Ningbo, Zhejiang. Some specimens are two or three feet high. They must be pulled up by ropes let down from rafts. Though white when first taken from the water, they turn a dull purple after a while. They are patterned everywhere with circles, like ginger branches, and are rather brittle. Though the natives hold …" Li Shizhen's 1578 Bencao Gangmu classic pharmacopeia objects to applying the term langgan to these marine invertebrates, which should properly be called shanhu, while langgan should only be applied to the stone occurring in the mountains. Li's commentary suggests that the terminological confusion arose from the Shuowen jiezi definition of shanhu 珊瑚: 色赤生於海或生於山 "coral is red colored and grows in the ocean or in the mountains". This puzzling description of mountain corals was more likely a textual misunderstanding than a reference to coral fossils. Red langgan The most recent, and least historically documented, identification of langgan is a red gemstone. The Chinese geologist Chang Hung-Chao (Zhang Hongzhao) propagated this explanation when his book about geological terms in Chinese literature identified langgan as malachite, and noted an alternative construal of reddish spinel or balas ruby from the famous mines at Badakhshan. Some authors have cited Chang's balas ruby identification of langgan; others have used, or even confused, it with ruby in translations (e.g., "precious rubies"). However, Schafer demonstrates that Chang's "supposed" textual evidence for red langgan is tenuous and suggests that Guo Pu's Shanhai jing commentary created this mineralogical confusion. 
Guo glosses the langgan tree as red, but it is unclear whether this refers to the tree itself or its gem-like fruit. Compare Birrell's and Bokenkamp's Shanhai jing translations of "red jade" and "green kernels from scarlet gem trees". Chang misquotes dan'gan 丹矸 "cinnabar rock" from the Xunzi as dan'gan 丹玕 "cinnabar gan", and cites one textual occurrence of the term. The Shangqing Daoist Dadong zhenjing 大洞真經 Authentic Scripture of the Great Cavern records a heavenly palace named Dan'gan dian 丹玕殿 Basilica of the Cinnabar Gan. Admitting the possibility of interpreting gan 玕 as a monosyllabic truncation for langgan 琅玕, comparable with reading hongpo 红珀 for honghupo 红琥珀 "red amber", Schafer concludes there is insufficient dan'gan evidence for an explicit red variety of langgan. The lyrical term langgan occurs 87 times in the huge Complete Tang Poems collection of Tang poetry, with only two hong langgan 紅琅玕 "red langgan" usages, by the Buddhist monk-poets Guanxiu (831-912) and Qiji 齊己 (863-937). Both poems use langgan to describe "red coral"; the latter (贈念法華經) uses shanhu in the same line: 珊瑚捶打紅琅玕 "coral beating on red langgan" in cold waters. Dictionary translations Chinese-English dictionaries illustrate the multifaceted difficulties of identifying langgan. Compare the following list. Most of these bilingual Chinese dictionaries cross-reference lang and gan to langgan, but a few translate lang and gan independently. In terms of Chinese word morphology, láng 琅 is a free morpheme that can appear alone (for instance, as a surname) or in other compound words (such as fàláng 琺琅 "enamel" and Lángyá shān 琅琊山 "Mount Langya (Anhui)"), while gān 玕 is a bound morpheme that only occurs in the compound lánggān and does not have independent meaning. The origin of Giles' lang translation "a kind of white carnelian" is unknown, unless it derives from Williams' "a whitish stone". It was copied in Mathews' and various other Chinese dictionaries up to the online standard Unihan Database "a variety of white carnelian; pure". "White carnelian" is a marketing name for "white or whitish chalcedony of faint carnelian color". Carnelian is usually reddish-brown while common chalcedony colors are white, grey, brown, and blue. References Footnotes External links Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經, 1445 Ming Dynasty edition Zhengtong daozang 正統道藏 Alchemical substances Chinese alchemy Chinese mythology Gemstones Mythological objects
Langgan
[ "Physics", "Chemistry" ]
8,102
[ "Materials", "Alchemical substances", "Gemstones", "Matter" ]
54,442,182
https://en.wikipedia.org/wiki/Flupamesone
Flupamesone (brand name Flutenal), also known as triamcinolone acetonide metembonate, is a synthetic glucocorticoid corticosteroid which is marketed in Spain. It is a dimer of a C21 ester of triamcinolone acetonide. References Acetonides Corticosteroid cyclic ketals Corticosteroid esters Dimers (chemistry) Diols Organofluorides Glucocorticoids Naphthalenes Polyketones Pregnanes
Flupamesone
[ "Chemistry", "Materials_science" ]
116
[ "Dimers (chemistry)", "Polymer chemistry" ]
54,443,743
https://en.wikipedia.org/wiki/Active%20power%20filter
Active power filters (APF) are filters that can perform the job of harmonic elimination. Active power filters can be used to filter out harmonics in the power system which are significantly below the switching frequency of the filter. Active power filters are used to filter out both higher- and lower-order harmonics in the power system. The main difference between active power filters and passive power filters is that APFs mitigate harmonics by injecting a compensating signal with the same frequency but reverse phase to cancel that harmonic (a numerical illustration of this cancellation is given below), whereas passive power filters use combinations of resistors (R), inductors (L) and capacitors (C) and do not require an external power source or active components such as transistors. This difference makes it possible for APFs to mitigate a wide range of harmonics. See also Static synchronous series compensator Power conditioner Active filter Line filter References Filters Power engineering
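The phase-opposition cancellation described above can be illustrated with a short numerical sketch. The following Python snippet is an illustration rather than a model of any real APF: it assumes a 50 Hz fundamental carrying a 20% fifth-harmonic distortion, injects the opposite-phase harmonic, and compares a rough total harmonic distortion (THD) estimate before and after. All values are placeholders chosen for the example.

```python
# Illustrative sketch (not from the article): harmonic cancellation by injecting
# an equal-magnitude, opposite-phase component, as an active power filter does.
import numpy as np

f0 = 50.0                                             # assumed fundamental (Hz)
t = np.linspace(0.0, 0.04, 4000, endpoint=False)      # two fundamental cycles
fs = len(t) / 0.04                                    # sampling rate (Hz)

fundamental = np.sin(2 * np.pi * f0 * t)
harmonic5 = 0.2 * np.sin(2 * np.pi * 5 * f0 * t)      # 20% fifth-harmonic distortion
load_current = fundamental + harmonic5

# The APF measures the harmonic content and injects its opposite.
apf_injection = -harmonic5
compensated = load_current + apf_injection

def thd(signal, f_fund, fs):
    """Rough THD estimate from the FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    fund_bin = np.argmin(np.abs(freqs - f_fund))
    fund = spectrum[fund_bin]
    harmonics = np.delete(spectrum[1:], fund_bin - 1)  # drop DC and fundamental
    return np.sqrt(np.sum(harmonics ** 2)) / fund

print("THD before compensation: %.3f" % thd(load_current, f0, fs))   # about 0.2
print("THD after compensation:  %.3f" % thd(compensated, f0, fs))    # near 0
```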
Active power filter
[ "Chemistry", "Engineering" ]
191
[ "Chemical equipment", "Filters", "Energy engineering", "Filtration", "Power engineering", "Electrical engineering" ]
54,445,465
https://en.wikipedia.org/wiki/Neutrophil%20swarming
Neutrophil swarming is a type of coordinated neutrophil movement that acts in response to acute tissue inflammation or infection. The term comes from the swarming characteristics of insects that are similar to the behavior of neutrophils in response to an infection. These processes have mostly been studied in tissues of mice, and studies of mouse ear tissue have proved to be very effective at observing neutrophil movement. Neutrophil swarming typically aggregates at surface layers of tissue, so the thin nature of the mouse ear tissue makes for a good model to study this process. Additionally, zebrafish larvae have been used for the study of neutrophil movement, mainly because of their translucence during the first few days of their development. With transgenic lines that fluorescently label zebrafish neutrophils, the cells can be tracked by epifluorescence or confocal microscopy during the course of an inflammatory response. Through this method, specific subpopulations of neutrophils can be tracked and their origin and fate during the induction and resolution of inflammation can be observed. Another advantage of using zebrafish to study neutrophil swarming is that adaptive immunity in this organism does not develop until around 4 weeks of age. This allows for the study of neutrophil movement and other host immune responses independent of adaptive immune responses. History Neutrophils were originally seen as a single homogeneous (of the same type) population, but more recent discoveries have shown that this is not the case. Instead, mature neutrophils are a heterogeneous mixture and have been divided into subsets based on their cytokine production, expression of TLRs (toll-like receptors), activation of macrophages in immunological responses, host resistance, and, lastly, in vitro angiogenesis and tumorigenesis. Communication Neutrophils have two different forms of communication: homotypic and heterotypic. Homotypic communication, which is between one neutrophil and another, is involved in signaling while the body is fighting infection and inflammation. In order to carry out this type of communication, the neutrophils must cross the vascular endothelium and the basal membrane to pass into the interstitial space. They are assisted by chemoattractant gradients as well as signal relays between the neutrophils to carry out this signaling. In addition to communicating with each other, neutrophils must also communicate with the other leukocytes (white blood cells) that are directly involved in immunological functions. This is considered the heterotypic form of communication (neutrophil to leukocyte). Some of the functions of heterotypic communication include regulating when effector molecules are distributed, conducting immune responses, and leaving lasting effects on cells even after the neutrophils have been removed. This type of communication can also be referred to as cross-talk. Variations A study of the lymph nodes of mice that were infected by injection of parasites into their earflaps revealed two types of neutrophil swarming: transient and persistent swarms. Transient swarms are characterized by groups of 10–150 neutrophils forming multiple small cell clusters within 10–40 minutes that quickly disperse. Once dispersed, the neutrophils join other nearby swarm centers, and this often leads to competition as the neutrophil groups vie to recruit neutrophils. 
Persistent swarms showed clusters of more than 300 neutrophils, and recruitment lasted for more than 40 minutes. These persistent swarms are also characterized by constant neutrophil recruitment with large cell clustering, and they are more stable and longer-lasting (a few hours) than transient swarms. For both the transient and persistent swarms, the formed neutrophil clusters appeared to be competing with each other, with the larger clusters attracting neutrophils from the smaller clusters. The study also revealed two distinct phases in swarm formation. The first phase occurs when a small number of "pioneer" neutrophils respond to an initial signal and form small clusters; this is followed by the second phase, where there is a large-scale migration of neutrophils leading to the growth of multiple cell clusters. In terms of migration, neutrophils undergo chemotactic migration, in which they move into a swarm center (accumulation) or move out of it. Individual neutrophils can also move from one swarm to another when the swarms are in competition. Notably, the two swarm types can work together in the same disrupted tissue in order to restore an inflamed tissue to its original composition. The exact size or duration of swarms depends on the specific inflammatory conditions as well as the tissue type of the infection location. Several factors that influence the swarm phenotype are: the size of the initial tissue damage, the presence of pathogens, the induction of secondary cell death, and the number of recruited neutrophils. A study that compared large-scale damage of sterile mouse tissue by a needle prick with small injuries by a laser beam showed that the needle prick provoked a larger and longer swarm response. After the needle injury, hundreds to thousands of neutrophils were recruited, forming stable cell clusters that were sometimes prolonged for days. In comparison, the neutrophil swarms resulting from the laser-induced injury only recruited around 50–330 neutrophils, which persisted for a few hours. The presence of pathogens can also increase the size of neutrophil swarms, not necessarily because of their presence as a foreign body, but because of the additional cell death that they can cause in infection sites. When cells are lysed in an infection site, they release an assortment of signaling factors that augment the recruitment of neutrophils to the site. Additionally, neutrophil death during a swarm releases more signaling factors that recruit more neutrophils, so the initial number of neutrophils recruited plays a role in how large the propagation effect is during swarming. Stages Stages 1-3 The neutrophil swarming process is categorized into 5 phases: swarm initiation, swarm amplification, additional swarm amplification through intercellular signaling, swarm aggregation and tissue remodeling, and recruitment of myeloid cells and swarm resolution. In the first stage of neutrophil swarming, the "pioneer" neutrophils respond to an infection or inflammation site. The neutrophils close to the injury will switch from random motility to chemotactic movement within a period of 5–15 minutes and swarm towards the infection site. In the second stage, the pioneer neutrophils attract a second wave of neutrophils that come from more distant regions of the tissue. The methods of movement to the region of injury depend on the tissue environment the neutrophils are moving towards. 
Neutrophil swarming in extravascular spaces such as the connective tissue in the skin involves movement without the assistance of integrin proteins and neutrophil attraction by a gradient of chemoattractants. Neutrophils will be guided by the forces generated by the actomyosin cytoskeleton through the path of least resistance to the site of infection. However, for intravascular tissue environments, neutrophil movement is dependent on integrins and chemoattractant signals on the luminal surface of endothelial cells. In this process, distant neutrophils will be recruited by an inflammatory signal and perform integrin-mediated crawling along the vascular walls to reach the neutrophil swarming sites. In the third stage, swarming neutrophils can amplify their recruitment in a feed forward manner through intercellular communication by leukotriene B4 (LTB4). The propagation of neutrophil recruitment leads to multiple, dense neutrophil cell clusters at the site of inflammation. A 2013 study showed that neutrophils lacking the high affinity receptor for LTB4 (LTB4R1) decreased the recruitment of neutrophils at later stages of swarming. In addition, proximal cells to the inflammation site showed chemotaxis similar to the control cells while distant cells were poorly attracted. This finding suggests that the proximal neutrophils that are recruited early on are not affected by the lack of LTB4R1, but distant neutrophils that are required for the propagation of neutrophil swarming are not able to be recruited to the swarming site. These results present LTB4 as a key signaling molecule for a prolonged neutrophil swarm response and recruitment of neutrophils from distant areas of the tissue. Stages 4-5 After stages 1–3, neutrophils slow down in the cell clusters and begin to form aggregates. In this fourth stage, the neutrophil aggregates will aid in rearranging the surrounding extracellular tissue area and create a collagen-free zone at the inflammation center eventually resulting in a wound seal which isolates the site from the rest of the tissue. The exact mechanisms of this are unknown but it is believed that neutrophil proteases from the cell clusters play a role in clearing out the surrounding tissue environment. These neutrophil aggregates become stable as opposed to the constant movement in stages 1–3 by development of high chemoattractant concentrations within the clusters that promote local neutrophil interactions within the cluster. Additionally, neutrophils are switched to an adhesive mode of migration within clusters which further stabilize the aggregates and can prevent neutrophils from leaving the cluster. This switch is believed to be caused by additional secretions of LTB4 and other chemoattractants within the neutrophil aggregates. In stage 5, the swarming response terminates and the clusters dissolve with the resolution of inflammation. Little is known about the mechanisms of this stage but the process may be regulated by neutrophils or external factors from the tissue environment. In a laser-induced skin injury model, neutrophil aggregation typically stopped after 40–60 minutes which occurs at the same time as the appearance of secondary myeloid cell swarms. Knock-in mice studies have shown that the myeloid cells move slower than neutrophils and assemble around the neutrophil aggregates during this stage. 
These myeloid cells may disrupt the propagation signals of neutrophil chemoattractants or create competing attractants in the tissue space so that the neutrophil aggregation is less strong. External factors When discussing neutrophil swarming, it is important to address the other factors in the outside environment that can influence what happens during the migration of these neutrophils, whether in packs or individually. Neutrophils are strongly influenced during inflammatory events by autocrine and paracrine signaling, which is involved in the clustering and recruiting of the neutrophils themselves. Neutrophil swarming is influenced by three main external factors: the type of tissue involved, nearby tissue-specific cells, and chemoattractants (chemical substances that cause a cell to move in the direction of their increasing concentration). One of the external factors that affects how communication occurs is the tissue context, as each tissue has a specific signal that can influence the swarming (the size and persistence of the neutrophil swarms). Two of these types are extravascular swarming and intravascular swarming. Extravascular swarming relies on integrin-independent interstitial movement as well as soluble directional cues such as LTB4 that affect neutrophil attraction. Extravascular environments include fibrillar tissues (e.g. skin) and cell-rich tissues (e.g. lymph node), while intravascular environments include intrasinusoidal spaces, an example being the liver. Swarming signals Two triggers of neutrophil swarming are PAMPs, or pathogen-associated molecular patterns, and DAMPs, or damage-associated molecular patterns. One attribute of note in neutrophil swarming is that it is a conserved protective mechanism that responds when tissues undergo disruption. This can occur in many different tissues of the body, including the ears, liver, lung, and skin. Neutrophil swarming can also participate in activating pathogen containment, keeping foreign substances localized and easier to treat and rid the body of later on. At the start of neutrophil swarming there is either an injury, fungi, or bacteria. As discussed above, the PAMPs and the DAMPs trigger the initial neutrophil swarming. The chemoattractants LTB4 and CXCL2 then further the signals, causing a cascade of intracellular reactions to the disruption and foreign substances. These begin a process known as swarm aggregation, in which the bacteria or other substances are gathered together into one massive "ball". Calcium, complement, ATP, connexin 43, and integrins also contribute by amplifying the chemoattractant signal and driving swarm aggregation forward. In contrast, NADPH oxidase 2 (NOX2) is a negative regulator of the chemoattractants that may keep these events from proceeding. These events describe how neutrophil swarming begins; the steps below describe how the process ends once the bacteria, fungi, or injury has resolved and the body no longer needs it to occur for its health and well-being. 
The most crucial step in terminating neutrophil swarming is when GPCR kinase 2 (GRK2) phosphorylates, and thereby desensitizes, the GPCRs (G-protein coupled receptors). Lipoxin A4, resolvin E3, and ω-OH-LTB4 then assist GRK2 in stopping this process fully. Essential regulators of signaling One of the regulators of signaling is calcium. It is one of the positive regulators of chemoattractants such as LTB4 and CXCL2. To obtain calcium, the cell must take it either from the intracellular endoplasmic reticulum (ER) or from outside the cell. For calcium sequestered in the endoplasmic reticulum, the cell uses a process called store-operated calcium entry (SOCE), in which signaling cascades via receptors stimulate the release of calcium out of the ER. Bringing calcium in from outside the cell is a more complex process that uses calcium release-activated calcium (CRAC) channels, which contain members of the ORAI family. Before this can occur, however, stromal interaction molecule (STIM) proteins must detect the calcium level; this allows the ER to sense a change, the STIM proteins change their shape, and the CRAC channels gate between the intracellular ER and the extracellular space so that calcium can pass into the cell and drive downstream mechanisms that depend on calcium. During swarming, neutrophils notably exhibit sustained calcium activity in the center of the swarm and produce calcium waves. Another part of regulation involves the chemokines and the cytokines. Two chemoattractants work cooperatively in neutrophil swarming, CXCL2 and LTB4. Experiments showed that CXCL2 does in fact assist, making a noticeable impact on the driving of swarming, but the picture is more complicated than that one chemokine. When inhibition of CXCR1 was coupled with inhibition of BLT1 and BLT2, there was a decrease in chemoattractant-induced movement (measured as the chemotactic index). In summary, the chemokine CXCL8, a ligand of CXCR1 and CXCR2, alongside LTB4 positively promotes swarming in neutrophils. Basics of neutrophils In order to properly understand neutrophil swarming, one must also understand the basics of the structure and function of neutrophils. Neutrophils are leukocytes and the most abundant white blood cells in the body, and they are known for their role in the immune system. There are three main ways in which a neutrophil can attack and get rid of a foreign antigen or bacterium. The first is degranulation, a process in which the neutrophil degranulates and releases its contents into the surrounding environment in which the bacteria lie. These contents work to destroy or break down the bacteria. The second way is phagocytosis. This is when a bacterium is brought into the neutrophil by the plasma membrane engulfing it and pulling it inside to create a vacuole. The engulfing process begins with a bacterium, which becomes enclosed in a phagosome as the vacuole forms, and lastly becomes a phagolysosome that contains broken-down bacteria together with contents from the neutrophil inside. These contents include enzymes that degrade the bacteria, coupled with the low pH of the internal environment. Lastly, the third way is NETosis. 
Compared with the bacteria handled by the other two mechanisms, these bacteria are much larger and thus require this process to be combatted. It involves the creation of NETs, or neutrophil extracellular traps, which are composed of DNA wrapped around histones and proteins such as myeloperoxidase and elastase. These extended DNA strands, along with the helper proteins, envelop the bacteria and break them down. All three of these processes show the action that neutrophils take in targeting and destroying foreign substances, their main job in the body. References Cell biology
Neutrophil swarming
[ "Biology" ]
3,864
[ "Cell biology" ]
54,448,472
https://en.wikipedia.org/wiki/Glasser%27s%20master%20theorem
In integral calculus, Glasser's master theorem explains how a certain broad class of substitutions can simplify certain integrals over the whole interval from $-\infty$ to $+\infty$. It is applicable in cases where the integrals must be construed as Cauchy principal values, and a fortiori it is applicable when the integral converges absolutely. It is named after M. L. Glasser, who introduced it in 1983. A special case: the Cauchy–Schlömilch transformation A special case called the Cauchy–Schlömilch substitution or Cauchy–Schlömilch transformation was known to Cauchy in the early 19th century. It states that if $a > 0$ then $$\mathrm{PV}\int_{-\infty}^{\infty} f\!\left(x - \frac{a}{x}\right)\,dx = \int_{-\infty}^{\infty} f(x)\,dx,$$ where PV denotes the Cauchy principal value. The master theorem If $a_1, \ldots, a_N$ and $b_1, \ldots, b_N$ are real numbers and $$\phi(x) = x - \sum_{n=1}^{N} \frac{|a_n|}{x - b_n},$$ then $$\mathrm{PV}\int_{-\infty}^{\infty} F(\phi(x))\,dx = \mathrm{PV}\int_{-\infty}^{\infty} F(x)\,dx.$$ Examples References External links Integral calculus
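Because the article's worked examples are not reproduced above, a quick numerical check can stand in for them. The following Python sketch is an illustration, not part of the original article: it verifies the single-pole case of the theorem (the Cauchy–Schlömilch-type identity) for the arbitrarily chosen constant a = 2 and the absolutely integrable function F(x) = exp(-x^2), for which the principal value coincides with the ordinary integral and both sides equal the square root of pi.

```python
# Hypothetical numerical check (not from the article): verify
#   integral of F(x - a/x) over the real line == integral of F(x),
# for F(x) = exp(-x^2), where both sides equal sqrt(pi).
import numpy as np
from scipy.integrate import quad

a = 2.0                                   # any positive constant (assumed)
F = lambda x: np.exp(-x ** 2)

# Right-hand side: ordinary Gaussian integral.
rhs, _ = quad(F, -np.inf, np.inf)

# Left-hand side: integrate F(x - a/x) on (-inf, 0) and (0, inf) separately,
# since the substitution x - a/x has a pole at x = 0.
g = lambda x: F(x - a / x)
lhs_neg, _ = quad(g, -np.inf, 0.0)
lhs_pos, _ = quad(g, 0.0, np.inf)
lhs = lhs_neg + lhs_pos

print("LHS       =", lhs)
print("RHS       =", rhs)
print("sqrt(pi)  =", np.sqrt(np.pi))      # all three agree to numerical precision
```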
Glasser's master theorem
[ "Mathematics" ]
169
[ "Integral calculus", "Calculus" ]
60,841,716
https://en.wikipedia.org/wiki/APBS%20%28software%29
APBS (previously also Advanced Poisson-Boltzmann Solver) is free and open-source software for solving the equations of continuum electrostatics, intended primarily for large biomolecular systems. It is available under the BSD license. PDB2PQR prepares protein structure files from the Protein Data Bank for use with APBS. The preparation steps include, but are not limited to, adding missing heavy atoms to the structures and assigning charges from a number of force fields. The output file format is PQR, which gives the software its name. References External links Official documentation APBS, PDB2PQR, and related software - GitHub Molecular modelling software Free and open-source software Free software programmed in Python Free software programmed in C++ Free software programmed in C
APBS (software)
[ "Chemistry" ]
173
[ "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Molecular modelling", "Molecular physics stubs" ]
60,842,845
https://en.wikipedia.org/wiki/Enumeration%20algorithm
In computer science, an enumeration algorithm is an algorithm that enumerates the answers to a computational problem. Formally, such an algorithm applies to problems that take an input and produce a list of solutions, similarly to function problems. For each input, the enumeration algorithm must produce the list of all solutions, without duplicates, and then halt. The performance of an enumeration algorithm is measured in terms of the time required to produce the solutions, either in terms of the total time required to produce all solutions, or in terms of the maximal delay between two consecutive solutions and in terms of a preprocessing time, counted as the time before outputting the first solution. This complexity can be expressed in terms of the size of the input, the size of each individual output, or the total size of the set of all outputs, similarly to what is done with output-sensitive algorithms. Formal definitions An enumeration problem is defined as a binary relation $R$ over strings of an arbitrary alphabet $\Sigma$, that is, $R \subseteq \Sigma^* \times \Sigma^*$. An algorithm solves $R$ if for every input $x \in \Sigma^*$ the algorithm produces the (possibly infinite) sequence $y_1, y_2, \ldots$ such that the sequence has no duplicate and a string $y$ appears in it if and only if $(x, y) \in R$. The algorithm should halt if the sequence is finite. Common complexity classes Enumeration problems have been studied in the context of computational complexity theory, and several complexity classes have been introduced for such problems. A very general such class is EnumP, the class of problems for which the correctness of a possible output can be checked in polynomial time in the input and output. Formally, for such a problem, there must exist an algorithm A which takes as input the problem input x, the candidate output y, and solves the decision problem of whether y is a correct output for the input x, in polynomial time in x and y. For instance, this class contains all problems that amount to enumerating the witnesses of a problem in the class NP. Other classes that have been defined include the following. In the case of problems that are also in EnumP, these problems are ordered from least to most specific: Output polynomial, the class of problems whose complete output can be computed in polynomial time. Incremental polynomial time, the class of problems where, for all i, the i-th output can be produced in polynomial time in the input size and in the number i. Polynomial delay, the class of problems where the delay between two consecutive outputs is polynomial in the input (and independent from the output). Strongly polynomial delay, the class of problems where the delay before each output is polynomial in the size of this specific output (and independent from the input or from the other outputs). The preprocessing is generally assumed to be polynomial. Constant delay, the class of problems where the delay before each output is constant, i.e., independent from the input and output. The preprocessing phase is generally assumed to be polynomial in the input. Common techniques Backtracking: The simplest way to enumerate all solutions is by systematically exploring the space of possible results (partitioning it at each successive step). However, performing this may not give good guarantees on the delay, i.e., a backtracking algorithm may spend a long time exploring parts of the space of possible results that do not give rise to a full solution. 
Flashlight search: This technique improves on backtracking by exploring the space of all possible solutions but solving at each step the problem of whether the current partial solution can be extended to a complete solution. If the answer is no, then the algorithm can immediately backtrack and avoid wasting time, which makes it easier to show guarantees on the delay between any two complete solutions. In particular, this technique applies well to self-reducible problems. Closure under set operations: If we wish to enumerate the disjoint union of two sets, then we can solve the problem by enumerating the first set and then the second set. If the union is not disjoint but the sets can be enumerated in sorted order, then the enumeration can be performed in parallel on both sets while eliminating duplicates on the fly. If the union is not disjoint and both sets are not sorted, then duplicates can be eliminated at the expense of a higher memory usage, e.g., using a hash table. Likewise, the cartesian product of two sets can be enumerated efficiently by enumerating one set and joining each result with all results obtained when enumerating the second set. Examples of enumeration problems The vertex enumeration problem, where we are given a polytope described as a system of linear inequalities and we must enumerate the vertices of the polytope. Enumerating the minimal transversals of a hypergraph. This problem is related to monotone dualization and is connected to many applications in database theory and graph theory. Enumerating the answers to a database query, for instance a conjunctive query or a query expressed in monadic second-order logic. There have been characterizations in database theory of which conjunctive queries can be enumerated with linear preprocessing and constant delay. The problem of enumerating maximal cliques in an input graph, e.g., with the Bron–Kerbosch algorithm. Listing all elements of structures such as matroids and greedoids. Several problems on graphs, e.g., enumerating independent sets, paths, cuts, etc. Enumerating the satisfying assignments of representations of Boolean functions, e.g., a Boolean formula written in conjunctive normal form or disjunctive normal form, a binary decision diagram such as an OBDD, or a Boolean circuit in restricted classes studied in knowledge compilation, e.g., NNF. Connection to computability theory The notion of enumeration algorithms is also used in the field of computability theory to define some high complexity classes such as RE, the class of all recursively enumerable problems. This is the class of sets for which there exists an enumeration algorithm that will produce all elements of the set: the algorithm may run forever if the set is infinite, but each solution must be produced by the algorithm after a finite time. References Algorithms
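The backtracking-with-extension-check approach described above can be made concrete with a short sketch. The following Python generator is an illustration, not taken from any cited reference: it enumerates the independent sets of a small graph, an example problem mentioned above. The extension check is trivial here, since any partial choice on the first k vertices extends to a full independent set by excluding the remaining vertices, so every branch of the search yields at least one output and the delay between consecutive outputs is polynomial in the input size.

```python
# Hypothetical sketch: polynomial-delay enumeration of all independent sets
# of a graph by backtracking with a trivial extension check.
from typing import Dict, FrozenSet, Iterator, List, Set

def independent_sets(adjacency: Dict[int, Set[int]]) -> Iterator[FrozenSet[int]]:
    vertices: List[int] = sorted(adjacency)

    def extend(index: int, chosen: Set[int]) -> Iterator[FrozenSet[int]]:
        if index == len(vertices):
            yield frozenset(chosen)          # one complete solution per leaf
            return
        v = vertices[index]
        # Branch 1: exclude v (always extendable to a complete solution).
        yield from extend(index + 1, chosen)
        # Branch 2: include v, only if the partial solution stays independent.
        if adjacency[v].isdisjoint(chosen):
            chosen.add(v)
            yield from extend(index + 1, chosen)
            chosen.remove(v)

    return extend(0, set())

# Example usage on a 4-cycle 0-1-2-3-0 (assumed input chosen for illustration).
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
for s in independent_sets(cycle):
    print(sorted(s))      # prints the 7 independent sets of the 4-cycle
```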
Enumeration algorithm
[ "Mathematics" ]
1,279
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
60,843,293
https://en.wikipedia.org/wiki/PLUMED
PLUMED is an open-source library implementing enhanced-sampling algorithms, various free-energy methods, and analysis tools for molecular dynamics simulations. It is designed to be used together with ACEMD, AMBER, DL_POLY, GROMACS, LAMMPS, NAMD, OpenMM, ABIN, CP2K, i-PI, PINY-MD, and Quantum ESPRESSO, but it can also be used together with analysis and visualization tools VMD, HTMD, and OpenPathSampling. In addition, PLUMED can be used as a standalone tool for analysis of molecular dynamics trajectories. A graphical user interface named METAGUI is available. Collective variables PLUMED offers a large collection of collective variables that serve as descriptions of complex processes that occur during molecular dynamics simulations, for example angles, positions, distances, interaction energies, and total energy. References External links METAGUI Molecular dynamics software Computational biology Free software programmed in C++ Free and open-source software Software using the GNU Lesser General Public License
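As a concrete illustration of how such collective variables are declared, a minimal PLUMED input file might look like the sketch below. This is illustrative only; the atom indices and output file name are placeholders, not values from any particular simulation.

```
# Minimal PLUMED input sketch (placeholder atom numbers).
# Defines two collective variables and writes them out during the run.
d1:  DISTANCE ATOMS=1,10
phi: TORSION  ATOMS=5,7,9,15
PRINT ARG=d1,phi STRIDE=100 FILE=COLVAR
```

Such a file is handed to the host molecular dynamics engine, which then records the distance d1 and torsion phi every 100 steps in the COLVAR file for later analysis or biasing.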
PLUMED
[ "Chemistry", "Biology" ]
211
[ "Molecular dynamics software", "Molecular physics", "Computational chemistry software", "Molecular dynamics", "Computational biology", "Molecular physics stubs" ]
70,173,531
https://en.wikipedia.org/wiki/Gymnopilus%20turficola
Gymnopilus turficola is a species of agaric fungus in the family Hymenogastraceae. Habitat and distribution It can be found growing in peat in subarctic tundra in northern Finland and in Finnmark, Norway. See also List of Gymnopilus species References turficola Fungi described in 2001 Fungi of Europe Taxa named by Meinhard Michael Moser Fungus species
Gymnopilus turficola
[ "Biology" ]
83
[ "Fungi", "Fungus species" ]
70,179,999
https://en.wikipedia.org/wiki/Suita%20conjecture
In mathematics, the Suita conjecture is a conjecture related to the theory of Riemann surfaces, the boundary behavior of conformal maps, the theory of the Bergman kernel, and the theory of L2 extension. The conjecture states the following: for any open Riemann surface admitting a Green function, the logarithmic capacity $c_\beta$ and the Bergman kernel $K$ on the diagonal satisfy $c_\beta(z)^2 \le \pi K(z)$ at every point $z$. It was first proved by Błocki for the bounded plane domain and then completely, in a more generalized version, by Guan and Zhou. Also, another proof of the Suita conjecture and some examples of its generalization to several complex variables (the multi(high)-dimensional Suita conjecture) were given in subsequent work. The multi(high)-dimensional Suita conjecture fails in non-pseudoconvex domains. This conjecture was proved through the optimal estimation of the Ohsawa–Takegoshi L2 extension theorem. Notes References Several complex variables Algebraic geometry
Suita conjecture
[ "Mathematics" ]
157
[ "Functions and mappings", "Mathematical analysis", "Mathematical analysis stubs", "Several complex variables", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Algebraic geometry" ]
70,180,406
https://en.wikipedia.org/wiki/Psoromic%20acid
Psoromic acid is a β-orcinol depsidone with the molecular formula C18H14O8. Psoromic acid inhibits herpes simplex viruses type 1 and type 2. Furthermore, it inhibits the RabGGTase. Psoromic acid occurs in antarctic lichens. References Further reading Lichen products Heterocyclic compounds with 3 rings Oxygen heterocycles Carboxylic acids Lactones Ketones Hydroxyarenes Methoxy compounds
Psoromic acid
[ "Chemistry" ]
104
[ "Natural products", "Lichen products", "Ketones", "Carboxylic acids", "Functional groups" ]
70,182,917
https://en.wikipedia.org/wiki/Robert%20Otto%20Pohl
Robert Otto Pohl (December 17, 1929 – August 30, 2024) was a German-American physicist, specializing in condensed matter physics topics such as solid state physics, thermal conductivity, and thin films, who was the Goldwin Smith Emeritus Professor of Physics at Cornell University where he has been on the faculty since the 1950s. Life and career Robert O. Pohl's father was the physicist Robert Wichard Pohl (1884–1976), whose maternal grandfather was Friedrich Wichard Lange (1826–1884), a member of the Hamburg Parliament. After completing undergraduate study at the University of Freiburg, Robert O. Pohl matriculated as a graduate student at the University of Erlangen. There he graduated with a Diplom (M.S.) in 1955 and a doctorate in 1957 and worked as an assistant in physics for the academic year 1957–1958. He emigrated to the United States in 1958. At Cornell University he was a research associate (from 1958 to 1960), an assistant professor from 1960 to 1963, an associate professor from 1963 to 1968), a full professor from 1968 to 2000, and Goldwin Smith Emeritus Professor of Physics from 2000 to 2024. He held visiting appointments at RWTH Aachen University (1964), the University of Stuttgart (1966–1967), the Ludwig Maximilian University of Munich, the University of Konstanz, the University of Regensburg, New Zealand's University of Canterbury, China's Tongji University, and the Nuclear Research Center in Jülich. Robert O. Pohl has done research on experimental investigations of glass and glassy materials, as well as heat transport and lattice transport behavior in crystalline solids and in amorphous solids, structure of glass, cryogenic techniques, and energy problems. In 1985 he received the Oliver E. Buckley Condensed Matter Prize for "his pioneering work on low energy excitations in amorphous materials and continued important contributions to the understanding of thermal transport in solids." It was considered the highest recognition in condensed matter physics. Pohl was elected in 1972 a fellow of the American Physical Society, in 1984 a fellow of American Association for the Advancement of Science and in 1999 a member of the National Academy of Sciences. For the academic year 1973–1974 he was a Guggenheim Fellow. In 1980 he received the Humboldt US Senior Scientist Award. His doctoral students included Venkatesh Narayanamurti. Springer published Robert Wichard Pohl's 3-volume edition of Einführung in die Physik (vol. 1, Mechanik und Akustik, 1930; vol. 2, Elektrizitätslehre, 1927; vol. 3, 1940, Optik) with many later editions and a 2-volume edition edited by Klaus Lüders and Robert O. Pohl (vol. 1, Mechanik, Akustik und Wärmelehre, 19th edition, 2004; vol. 2, 22nd edition, 2006). Robert O. Pohl added videos of demonstration experiments for the latest editions. Pohl died in Göttingen, Germany on August 30, 2024, at the age of 94. Pohl's opinions on nuclear waste disposal In addition to his main research interests, Robert O. Pohl was concerned about radioactive waste disposal and its effects on the environment and human health. During the Carter administration he served on a Presidential advisory committee on nuclear waste disposal. In a 1982 article published in Physics Today, Pohl wrote: See also Nuclear Waste Policy Act Ocean disposal of radioactive waste Selected publications Articles (over 500 citations) (over 2200 citations) (over 1050 citations) See aluminium nitride. 
(over 400 citations) (over 450 citations) (over 3050 citations) (over 2000 citations) (over 400 citations) Books translated from Pohls Einführung in die Physik, Band. 1 : Mechanik, Akustik und Wärmelehre by William D. Brewers (Vol. 1 contains 77 videos of experiments.) translated from Pohls Einführung in die Physik, Band.2 : Elektizitaetslehre und Optik by William D. Brewer (Vol. 2 contains 41 videos of experiments.) References External links (movie from the early 1970s) (creators: Klaus Lüders, Robert Otto Pohl, et al.) (creators: Klaus Lüders, Robert Otto Pohl, et al.) 1929 births 2024 deaths 20th-century German physicists 21st-century German physicists 20th-century American physicists 21st-century American physicists Condensed matter physicists University of Freiburg alumni University of Erlangen-Nuremberg alumni Cornell University faculty Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science Members of the United States National Academy of Sciences Oliver E. Buckley Condensed Matter Prize winners
Robert Otto Pohl
[ "Physics", "Materials_science" ]
987
[ "Condensed matter physicists", "Condensed matter physics" ]
70,183,473
https://en.wikipedia.org/wiki/Past%20hypothesis
In cosmology, the past hypothesis is a fundamental law of physics that postulates that the universe started in a low-entropy state, in accordance with the second law of thermodynamics. The second law states that any closed system follows the arrow of time, meaning its entropy never decreases. Applying this idea to the entire universe, the hypothesis argues that the universe must have started from a special event with less entropy than is currently observed, in order to preserve the arrow of time globally. This idea has been discussed since the development of statistical mechanics, but the term "past hypothesis" was coined by philosopher David Albert in 2000. Philosophical and theoretical efforts focus on trying to explain the consistency and the origin of the postulate. The past hypothesis is an exception to the principle of indifference, according to which every possible microstate within a certain macrostate would have an equal probability. The past hypothesis allows only those microstates that are compatible with a much-lower-entropy past, although these states are assigned equal probabilities. If the principle of indifference is applied without taking into account the past hypothesis, a low- or medium-entropy state would have likely evolved both from and toward higher-entropy macrostates, as there are more ways statistically to be high-entropy than low-entropy. The low- or medium-entropy state would have appeared as a "statistical fluctuation" amid a higher-entropy past and a higher-entropy future. Common theoretical frameworks have been developed in order to explain the origin of the past hypothesis based on inflationary models or the anthropic principle. The Weyl curvature hypothesis, an alternative model by Roger Penrose, argues a link between entropy, the arrow of time and the curvature of spacetime (encoded in the Weyl tensor). See also Loschmidt's paradox Entropy as an arrow of time Notes References Philosophy of thermal and statistical physics Philosophy of time
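The claim that "there are more ways statistically to be high-entropy than low-entropy" can be illustrated with a toy counting argument. The Python sketch below is an illustration, not part of the article: it counts the microstates of N particles distributed over two halves of a box and shows that macrostates near the even split vastly outnumber lopsided, low-entropy ones, which is why a state selected by the principle of indifference alone would almost certainly sit near equilibrium rather than in a special low-entropy configuration.

```python
# Toy illustration (assumed model, not from the article): for N particles in a
# two-sided box, the number of microstates with k particles on the left is C(N, k).
# High-entropy macrostates (k near N/2) dominate overwhelmingly, so a low-entropy
# past has to be imposed as an extra postulate rather than derived from counting.
from math import comb, log

N = 100
total = 2 ** N
for k in (0, 10, 25, 50):
    microstates = comb(N, k)
    entropy = log(microstates) if microstates > 1 else 0.0   # Boltzmann entropy, k_B = 1
    print(f"k = {k:3d}: {microstates:.3e} microstates "
          f"({microstates / total:.2e} of all), S = {entropy:.1f}")
```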
Past hypothesis
[ "Physics", "Chemistry" ]
394
[ "Philosophy of thermal and statistical physics", "Physical quantities", "Time", "Philosophy of time", "Thermodynamics", "Spacetime", "Statistical mechanics" ]
57,756,777
https://en.wikipedia.org/wiki/R13%20%28drug%29
R13 is a small-molecule flavonoid and orally active, potent, and selective agonist of the tropomyosin receptor kinase B (TrkB) – the main signaling receptor for the neurotrophin brain-derived neurotrophic factor (BDNF) – which is under development for the potential treatment of Alzheimer's disease. It is a structural modification and prodrug of tropoflavin (7,8-DHF) with improved potency and pharmacokinetics, namely oral bioavailability and duration. The compound is a replacement for the earlier tropoflavin prodrug R7 and has similar properties to it. It was developed because while R7 displayed a good drug profile in animal studies, it showed almost no conversion into tropoflavin in human liver microsomes. In contrast to R7, R13 is readily hydrolyzed into tropoflavin in human liver microsomes. See also List of investigational antidepressants Tropomyosin receptor kinase B § Agonists References External links 7,8-Dihydoxyflavone and 7,8-substituted flavone derivatives, compositions, and methods related thereto (US9975868B2) Antidementia agents Carbamates Esters Experimental drugs Flavones Neuroprotective agents Nootropics Prodrugs TrkB agonists
R13 (drug)
[ "Chemistry" ]
308
[ "Esters", "Functional groups", "Prodrugs", "Organic compounds", "Chemicals in medicine" ]
57,761,673
https://en.wikipedia.org/wiki/ASCE-ASME%20Journal%20of%20Risk%20and%20Uncertainty%20in%20Engineering%20Systems
The ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems is a peer-reviewed scientific journal established in 2014 by the American Society of Civil Engineers (ASCE) and the American Society of Mechanical Engineers (ASME). It disseminates research findings, best practices, concerns, and discussions and debates on risk- and uncertainty-related issues in the areas of civil and mechanical engineering and related fields. Scope The journal covers risk and uncertainty issues in planning, design, construction/manufacturing, utilization, decommissioning and removal, and evaluation of engineering systems. The journal has wide coverage of all sub-disciplines of civil and mechanical engineering and other related fields, including structural engineering, geotechnical engineering, water resources engineering, construction engineering, transport engineering, coastal engineering, nuclear engineering, industrial and manufacturing engineering including gas, oil and chemical, ocean engineering, hazard analysis including climate change, earthquake engineering, associated resilience and sustainability, mechanics, mechatronics, robotics, thermodynamics, human factors, and thermal science. History Professor Bilal M. Ayyub from the Department of Civil and Environmental Engineering, University of Maryland College Park, established the ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems in 2014 in coordination and consultation with an advisory board representing leaders from ASCE and ASME, with the first issue being published in March 2015. As of 2018, it is the only journal in the respective long histories of the two societies to be published jointly with another society. ASCE and ASME registered the two parts as separate journals, Part A and Part B respectively, to facilitate the production of the journal alongside other journals offered by the respective societies. Both journals have the same editorial board and leadership and produced their first quarterly issues at the end of the first quarter of 2015. The current Editor-in-Chief of the two journals is Professor Michael Beer. Indexes Both Part A and Part B are listed in the Emerging Sources Citation Index by Clarivate Analytics, formerly Thomson Reuters, and became eligible for indexing in 2018. From 2016 onward, all articles are included in Web of Science. They are also included in Scopus. The current Impact Factor of Part A is 1.926, based on the latest Journal Citation Reports released by Clarivate Analytics. References External links ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A website ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B website ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A & B website American Society of Mechanical Engineers academic journals American Society of Civil Engineers academic journals Mechanical engineering journals
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems
[ "Engineering" ]
542
[ "Mechanical engineering journals", "Mechanical engineering" ]
57,761,689
https://en.wikipedia.org/wiki/Electroshapable%20material
Electroshapable materials are composite materials from the class of thermoplastic materials. Electroshapable materials are plastics that are rigid at room temperature and can take the form of various objects or plastic elements. They can substitute for more common thermoplastic polymers such as PVC, PE, PC, or EVA. The particularity of electroshapable materials lies in their ability to become fluid and malleable when an electric voltage is applied to two ends of the material, before becoming rigid again after the voltage is removed. This process can be reversible. This behavior makes thermoforming fast, reversible and easy to use for many applications. One of the major benefits is the improved comfort of everyday products in contact with the human body. Electroshapable materials are particularly useful in so-called thermoformable products, particularly in the field of sports equipment (e.g. ski boots, soles, body protections) and in the medical sector (e.g. splints). Electroshapable materials should not be confused with electroactive polymers, because they are not based on the same physical principle. Physical principle Electroshapable materials are based on a relatively simple physical principle: they consist of an electrically conductive thermoplastic material in which two electrodes are positioned for applying an electrical voltage. When an electrical voltage is applied, the polymer acts as a heating resistor because of its resistivity. It heats up homogeneously by the Joule effect until it reaches its melting temperature, beyond which it becomes soft enough to be malleable by hand. Ideally, the thermoplastic polymer selected has a relatively low melting temperature (60 °C, for example) so that the user can handle it without danger of burning and to reduce the amount of energy required for the transition. Any voltage can be used, both direct current and alternating current. However, the electric power required for the rigid-to-malleable transition is proportional to the mass of electroshapable material to be deformed. Depending on the mass of the element, a minimum electrical power is required to reach the melting temperature. The current is determined by the resistance of the electroshapable element to be melted, and thus by the geometry of the element and the resistivity of the material (an illustrative order-of-magnitude estimate is given below). History Electrically conductive polymers or composites have existed since the 1970s. However, the concept of electroshapable materials is quite recent; it was developed for the first time in France in 2015 by Pierre-Louis Boyer and Alexis Robert. Uses Lack of adaptation of mass-produced products Some rigid objects in contact with the human body need to adapt to the morphology of the user in order to distribute the pressure over the entire human-object interface, improve the comfort of the user, and avoid pressure points, which can cause bedsores. This need to adapt rigid forms comes from the extreme variability of anatomy between individuals; most mass production processes are not adapted to meet this variability. Indeed, these methods are generally optimized to make a single shape, often from a mold or die (for example: plastic injection, metal foil stamping). To address this problem of morphological diversity, manufacturers often have to multiply industrial tools in order to offer several dimensions or shapes for their products. The most prominent example is shoes, for which there can be up to fifteen to twenty different sizes. 
Managing this many references is difficult because: Cost: industrial tools such as injection molds are very expensive, and the required investment is proportional to the number of references. Logistics: as the number of references grows, stock management also becomes more complicated. Moreover, manufacturers are limited in the number of product variations they can offer, because each variation multiplies the number of references. For example, if a shoe manufacturer offering a model in fifteen sizes wishes to produce it in three different colors, it will have to manage 3 × 15, or 45 references. If it also wants to make two fits, one for wide feet and one for slim feet, the count doubles again to 90 references. Limited in the number of variations, designers can hardly address all the anatomical specificities of individuals. The most common strategy is to design products for the physical characteristics of the average individual, which is expected to suit the majority of customers but leaves a significant fringe of users unsatisfied. Thermoformable products So-called thermoformable products (which use thermoplastic elements, often with a low melting temperature, such as PCL) are one solution allowing products to adapt to the anatomy of the user. They make it possible to adapt an initially standard shape to the user's anatomy, fitting each user more finely than is possible through product variations. Many current applications use the principle of thermoforming, such as ski boots, shoe soles and medical splints. One limitation of products based on conventional thermoplastics is the equipment required to use them: the heat needed to raise the material to its melting point is generally provided by an oven or a water bath. Electroshapable materials likewise make it possible to form a product to the user's anatomy, but without external heating equipment. Advantages Easier use for the end user, who simply needs to connect the product to a calibrated power source. Speed of the forming process: because the material heats intrinsically and homogeneously by the Joule effect, it can melt in seconds, keeping the whole forming process under a minute. Disadvantages Need for a power supply. The use of a power supply requires compliance with safety standards, especially if the voltage used is greater than 50 V. References Sources Shape changing polymers. LOMA Innovation and its plastic forming technology. A polymer forming with heat. Patent describing the principle of electroformable polymers. External links Composite materials Plastics Polymers Thermoplastics
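The sizing relationships in the physical-principle section above (heating power proportional to the mass to be softened, current set by the element's resistance, which follows from its geometry and resistivity) can be illustrated with a rough calculation. The Python sketch below is a minimal illustration only: every numerical value is an assumption chosen for the example rather than a figure from this article, and heat losses to the surroundings are ignored.

```python
# Rough Joule-heating sizing for an electroshapable element.
# All numbers are illustrative assumptions; heat losses are ignored.

rho = 0.01        # resistivity of the conductive compound, ohm*m (assumed)
length = 0.10     # distance between the two electrodes, m (assumed)
area = 1.0e-4     # cross-sectional area of the element, m^2 (assumed)
mass = 0.050      # mass of the element, kg (assumed)
c_p = 2000.0      # specific heat capacity of the polymer, J/(kg*K) (assumed)
delta_T = 40.0    # heating from ~20 C to a ~60 C melting point, K
t_target = 30.0   # desired time to reach the malleable state, s (assumed)

resistance = rho * length / area       # R = rho * L / A (geometry + material)
energy = mass * c_p * delta_T          # heat needed to reach melting temperature, J
power = energy / t_target              # required electrical power, W
voltage = (power * resistance) ** 0.5  # from P = V^2 / R
current = voltage / resistance         # from Ohm's law

print(f"R = {resistance:.1f} ohm, E = {energy:.0f} J")
print(f"P = {power:.0f} W, V = {voltage:.1f} V, I = {current:.2f} A")
```

With these assumed values, a 50 g element needs roughly 130 W for about half a minute, delivered at under the 50 V threshold mentioned in the disadvantages section; doubling the mass doubles both the required energy and the required power.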
Electroshapable material
[ "Physics", "Chemistry", "Materials_science" ]
1,210
[ "Unsolved problems in physics", "Composite materials", "Materials", "Polymer chemistry", "Polymers", "Amorphous solids", "Matter", "Plastics" ]
62,047,186
https://en.wikipedia.org/wiki/Kotzig%27s%20theorem
In graph theory and polyhedral combinatorics, areas of mathematics, Kotzig's theorem is the statement that every polyhedral graph has an edge whose two endpoints have total degree at most 13. An extreme case is the triakis icosahedron, where no edge has smaller total degree. The result is named after Anton Kotzig, who published it in 1955 in the dual form that every convex polyhedron has two adjacent faces with a total of at most 13 sides. It was named and popularized in the West in the 1970s by Branko Grünbaum. More generally, every planar graph of minimum degree at least three either has an edge of total degree at most 12, or at least 60 edges that (like the edges in the triakis icosahedron) connect vertices of degrees 3 and 10. If all triangular faces of a polyhedron are vertex-disjoint, there exists an edge with smaller total degree, at most eight. Generalizations of the theorem are also known for graph embeddings onto surfaces with higher genus. The theorem cannot be generalized to all planar graphs, as the complete bipartite graphs K1,n and K2,n have edges with unbounded total degree. However, for planar graphs that may contain vertices of degree lower than three, variants of the theorem have been proven, showing that either there is an edge of bounded total degree or some other special kind of subgraph. References Planar graphs Theorems in graph theory
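As a quick illustration, the bound can be checked on small polyhedral graphs. The Python sketch below assumes the networkx library is available and simply takes the minimum, over all edges, of the sum of the two endpoint degrees; the triakis icosahedron itself, where the bound 13 is attained, is not among networkx's built-in generators and is therefore not included.

```python
import networkx as nx

def min_edge_degree_sum(G):
    """Smallest total degree deg(u) + deg(v) over all edges of G."""
    return min(G.degree(u) + G.degree(v) for u, v in G.edges())

# A few polyhedral (3-connected planar) graphs shipped with networkx.
examples = {
    "tetrahedron": nx.tetrahedral_graph(),
    "cube": nx.cubical_graph(),
    "octahedron": nx.octahedral_graph(),
    "dodecahedron": nx.dodecahedral_graph(),
    "icosahedron": nx.icosahedral_graph(),
}

for name, G in examples.items():
    s = min_edge_degree_sum(G)
    print(f"{name:>12}: minimum edge degree sum = {s} (<= 13: {s <= 13})")
```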
Kotzig's theorem
[ "Mathematics" ]
298
[ "Statements about planar graphs", "Planar graphs", "Theorems in discrete mathematics", "Planes (geometry)", "Theorems in graph theory" ]
62,052,484
https://en.wikipedia.org/wiki/Stahl%27s%20theorem
In matrix analysis Stahl's theorem is a theorem proved in 2011 by Herbert Stahl concerning Laplace transforms for special matrix functions. It originated in 1975 as the Bessis–Moussa–Villani (BMV) conjecture by Daniel Bessis, Pierre Moussa, and Marcel Villani. In 2004 Elliott H. Lieb and Robert Seiringer gave two important reformulations of the BMV conjecture. In 2015, Alexandre Eremenko gave a simplified proof of Stahl's theorem. In 2023, Otte Heinävaara proved a structure theorem for Hermitian matrices introducing tracial joint spectral measures that implies Stahl's theorem as a corollary. Statement of the theorem Let Tr denote the trace of a matrix. If A and B are n × n Hermitian matrices and B is positive semidefinite, define f(t) = Tr exp(A − tB) for all real t ≥ 0. Then f can be represented as the Laplace transform of a non-negative Borel measure on [0, ∞). In other words, for all real t ≥ 0, f(t) = ∫ exp(−ts) dμ(s), with the integral taken over [0, ∞), for some non-negative measure μ depending upon A and B. References Conjectures that have been proved Theorems in analysis Theorems in measure theory
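As a numerical illustration: by Bernstein's theorem, the Laplace transform of a non-negative measure is completely monotone, so the function f above must in particular be positive, non-increasing and convex. The Python sketch below, assuming numpy and scipy are available, checks these necessary consequences on a randomly generated example; it is only a sanity check, not a proof of the theorem.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 4
A = random_hermitian(n)
C = random_hermitian(n)
B = C @ C.conj().T                      # positive semidefinite by construction

def f(t):
    # f(t) = Tr exp(A - t*B); real because A - t*B is Hermitian.
    return np.trace(expm(A - t * B)).real

ts = np.linspace(0.0, 5.0, 201)
vals = np.array([f(t) for t in ts])

# Necessary consequences of being a Laplace transform of a non-negative measure:
print("f > 0:           ", bool(np.all(vals > 0)))
print("f non-increasing:", bool(np.all(np.diff(vals) <= 1e-9)))
print("f convex:        ", bool(np.all(np.diff(vals, 2) >= -1e-9)))
```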
Stahl's theorem
[ "Mathematics" ]
237
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in measure theory", "Conjectures that have been proved", "Mathematical problems", "Mathematical theorems" ]
62,055,291
https://en.wikipedia.org/wiki/Katalin%20Hangos
Katalin M. Hangos is a Hungarian chemical engineer whose research concerns control theory and chemical process modeling. She is a research professor in the Systems and Control Laboratory of the Institute for Computer Science and Control of the Hungarian Academy of Sciences, and a professor of electrical engineering and information systems at the University of Pannonia. Education Hangos earned a master's degree in chemistry at Eötvös Loránd University in 1976, and returned to Eötvös Loránd University for a bachelor's degree in computer science in 1980. She earned a Ph.D. in chemical engineering in 1984 and, through the Hungarian Academy of Sciences, a D.Sc. in process systems engineering in 1994. Books Hangos is the co-author of: Process Modelling and Model Analysis (with Ian T. Cameron, Academic Press, 2001) Analysis and Control of Nonlinear Process Systems (with József Bokor and Gábor Szederkényi, Springer, 2004) Intelligent Control Systems: An Introduction with Examples (with Rozália Lakner and Miklós Gerzson, Kluwer, 2004) Analysis and Control of Polynomial Dynamic Models with Biological Applications (with Gábor Szederkényi and Attila Magyar, Academic Press, 2018) References External links Year of birth missing (living people) Living people Control theorists Hungarian chemical engineers Women chemical engineers Eötvös Loránd University alumni
Katalin Hangos
[ "Chemistry", "Engineering" ]
283
[ "Women chemical engineers", "Chemical engineers", "Control engineering", "Control theorists" ]
71,628,096
https://en.wikipedia.org/wiki/Potassium%20phosphide
Potassium phosphide is an inorganic semiconductor compound with the formula K3P. It appears as a white crystalline solid or powder. It reacts violently with water and is toxic via ingestion, inhalation and skin absorption. It has a hexagonal structure. Synthesis Potassium phosphide can be synthesised by simply reacting the two elements together: 12 K + P4 → 4 K3P Applications Potassium phosphide is used in high power, high frequency applications and also in laser diodes. References Phosphides Potassium compounds Semiconductors
Potassium phosphide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
117
[ "Electrical resistance and conductance", "Physical quantities", "Inorganic compounds", "Semiconductors", "Inorganic compound stubs", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
71,629,400
https://en.wikipedia.org/wiki/Manganese%28III%29%20chloride
Manganese(III) chloride is the hypothetical inorganic compound with the formula MnCl3. The existence of this binary halide has not been demonstrated. Nonetheless, many derivatives of MnCl3 are known, such as MnCl3(THF)3 and the bench-stable MnCl3(OPPh3)2. Contrasting with the elusive nature of MnCl3, trichlorides of the adjacent metals on the periodic table—iron(III) chloride, chromium(III) chloride, and technetium(III) chloride—are all isolable compounds. History of MnCl3 and its adducts MnCl3 was claimed to be a dark solid produced by the reaction of "anhydrous manganese(III) acetate" and liquid hydrogen chloride at −100 °C, decomposing above −40 °C. Other claims involved the reaction of manganese(III) oxide, manganese(III) oxide-hydroxide, and basic manganese acetate with hydrochloric acid. Given recent investigations, however, such claims have been disproved or called into serious doubt. Specifically, all known compounds containing MnCl3 are known to be solvent- or ligand-stabilized adducts. Adducts MnCl3 can be stabilized by complexation to diverse Lewis bases, as has been established over the course of many years of study. Metastable acetonitrile-solvated Mn(III)Cl3 can be prepared at room temperature by treating [Mn12O12(OAc)16(H2O)4] with trimethylsilyl chloride. The treatment of permanganate salts with trimethylsilyl chloride generates solutions containing Mn(III)–Cl species for alkene dichlorination reactions; electrocatalytic methods that use Mn(III)–Cl intermediates have been developed for the same purpose. The reaction of manganese dioxide with hydrochloric acid in tetrahydrofuran gives MnCl3(H2O)(THF)2. Manganese(III) fluoride suspended in THF reacts with boron trichloride, giving MnCl3(THF)3, which has the appearance of dark purple prisms. This compound has a monoclinic crystal structure, reacts with water, and decomposes at room temperature. The most readily handled of this series of adducts is MnCl3(OPPh3)2. Pentachloromanganate(III) Another common manganese(III) chloride compound is the pentachloromanganate(III) dianion. It is usually charge-balanced with counterions like tetraethylammonium. The pentachloromanganates are typically green in color, light sensitive, maintain pentacoordination in solution, and have S = 2 ground states at room temperature. Crystal structures of pentachloromanganate indicate the anion is square pyramidal. Tetraethylammonium pentachloromanganate(III), [Et4N]2[MnCl5], can be prepared and isolated by treating a suspension of [Mn12O12(OAc)16(H2O)4] in diethyl ether with trimethylsilyl chloride, collecting the resulting purple solid in the dark, and then treating this solid with a 0.6 M solution of tetraethylammonium chloride. The green product is air stable but should be kept in the dark. Manganese(III) monochloride compounds Some manganese compounds with macrocyclic tetradentate coordination can stabilize the manganese(III) monochloride, Mn(III)–Cl, moiety. Jacobsen's catalyst is an example of a coordination compound containing the Mn(III)–Cl moiety, stabilized by N,N,O,O coordination from a salen ligand. Jacobsen's catalyst and related Mn(III)–Cl complexes react with O-atom transfer reagents to form high-valent Mn(V)–oxo species that are reactive in alkene epoxidation. Tetraphenylporphyrin Mn(III)Cl is a related commercially available compound. Other manganese(III) chloride complexes Bis(triphenylphosphine oxide) manganese(III) chloride References Manganese(III) compounds Chlorides
Manganese(III) chloride
[ "Chemistry" ]
917
[ "Chlorides", "Inorganic compounds", "Salts" ]
71,638,083
https://en.wikipedia.org/wiki/Project%20Nimbus
Project Nimbus () is a cloud computing project of the Israeli government and its military. Overview The Israeli Finance Ministry announced in April 2021 that the contract is to provide "the government, the defense establishment, and others with an all-encompassing cloud solution." Through a $1.2 billion contract, technology companies Google (Google Cloud Platform) and Amazon (Amazon Web Services) were selected to provide Israeli government agencies with cloud computing services, including artificial intelligence and machine learning. Under the contract, the companies will establish local cloud sites that will "keep information within Israel's borders under strict security guidelines." According to a Google spokesperson, the contract is for workloads related to "finance, healthcare, transportation, and education" and does not deal with highly sensitive or classified information, though the tech companies are contractually forbidden from denying service to any particular entities of the Israeli government. Although Project Nimbus' specific mission has not yet been revealed, Google Cloud Platform's AI tools could give the Israeli military and security services the capability for facial detection, automated image categorization, object tracking and sentiment analysis – tools that have previously been used by U.S. Customs and Border Protection for border surveillance. Project Nimbus has four planned phases: the first is purchasing and constructing the cloud infrastructure, the second is crafting government policy for moving operations onto the cloud, the third is moving operations to the cloud, and the fourth is implementing and optimizing cloud operations. The terms Israel set for the project contractually forbid Amazon and Google from halting services due to boycott pressure. A Google spokesperson said that all Google Cloud customers must abide by its terms of service, which prohibit customers from using its services to violate people's legal rights or engage in violence, but internal documents from both Google and the Israeli government contradict this claim. Israeli–Palestinian conflict Circa 2022, the contract drew rebuke and condemnation from the companies' shareholders as well as their employees, over concerns that the project would lead to further abuses of Palestinians' human rights in the context of the ongoing occupation and the Israeli–Palestinian conflict. Specifically, they voiced concern over how the technology would enable further surveillance of Palestinians and unlawful data collection on them, as well as facilitate the expansion of Israel's illegal settlements on Palestinian land. Ariel Koren, who had worked as a marketing manager for Google's educational products and was an outspoken opponent of the project, was given the ultimatum of moving to São Paulo within 17 days or losing her job. In a letter announcing her resignation to her colleagues, Koren wrote that Google "systematically silences Palestinian, Jewish, Arab and Muslim voices concerned about Google's complicity in violations of Palestinian human rights—to the point of formally retaliating against workers and creating an environment of fear," reflecting her view that the ultimatum came in retaliation for her opposition to and organization against the project. She filed retaliation complaints with Google's human resources department and the National Labor Relations Board (NLRB), which dismissed her case based on lack of evidence. The NLRB also found that the ultimatum predated Koren's protected activities.
In 2022, Jewish Voice for Peace and MPower Change launched a campaign called No Tech For Apartheid – also known as #NoTechForApartheid – opposing the project. More than 200 Google workers joined a protest group named after this campaign, who argue that the relative lack of oversight for the project means it will likely be used for violent purposes. In March 2024, a Google Cloud software engineer was fired after a video of them shouting "I refuse to build technology that empowers genocide," in reference to Project Nimbus, at a company event went viral. In April, dozens of employees participated in sit-ins at Google's New York and Sunnyvale headquarters to protest against Google supplying cloud computing software to the Israeli government. Employees occupied the office of Google Cloud chief executive Thomas Kurian. Nine employees were charged with trespassing and 28 were fired. In April, former Google employees fired for protesting with #NoTechForApartheid, citing an article in +972 Magazine, expressed concerns over Israel's current use of AI-assisted targeting in the Gaza Strip: a program named “The Gospel” categorizes buildings as military bases, while programs called “Lavender” and “Where’s Daddy” identify and falsely classify Palestinian civilians as 'terrorists' and track their movements for target selection. In December 2024, a New York Times article reported that Google lawyers were worried that "Google Cloud services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank" at least as early as four months before the Nimbus contract was signed. References 2021 establishments in Israel Amazon Web Services Criticism of Google Google Cloud Government of Israel Human rights in the State of Palestine Israel Defense Forces Military projects Surveillance databases Projects established in 2021
Project Nimbus
[ "Engineering" ]
1,010
[ "Military projects" ]
71,639,151
https://en.wikipedia.org/wiki/Monostrontium%20ruthenate
Monostrontium ruthenate is the inorganic compound with the formula SrRuO3. It is one of two main strontium ruthenates, the other having the formula Sr2RuO4. SrRuO3 is ferromagnetic. It has a perovskite structure, as do many complex metal oxides with the ABO3 formula. The Ru4+ ions occupy the octahedral sites and the larger Sr2+ ions occupy distorted 12-coordinate sites. References Strontium compounds Ruthenates Transition metal oxides Ferromagnetic materials Perovskites
Monostrontium ruthenate
[ "Physics" ]
126
[ "Materials", "Ferromagnetic materials", "Matter" ]
49,274,221
https://en.wikipedia.org/wiki/Graphs%20and%20Combinatorics
Graphs and Combinatorics (ISSN 0911-0119, abbreviated Graphs Combin.) is a peer-reviewed academic journal in graph theory, combinatorics, and discrete geometry published by Springer Japan. Its editor-in-chief is Katsuhiro Ota of Keio University. The journal was first published in 1985. Its founding editor-in-chief was Hoon Heng Teh of Singapore, the president of the Southeast Asian Mathematics Society, and its managing editor was Jin Akiyama. Originally, it was subtitled "An Asian Journal". In most years since 1999, it has been ranked as a second-quartile journal in discrete mathematics and theoretical computer science by SCImago Journal Rank. References Academic journals established in 1985 Combinatorics journals Graph theory journals Discrete geometry journals
Graphs and Combinatorics
[ "Mathematics" ]
163
[ "Combinatorics journals", "Graph theory", "Combinatorics", "Graph theory journals", "Mathematical relations" ]
49,277,634
https://en.wikipedia.org/wiki/Approximate%20computing
Approximate computing is an emerging paradigm for energy-efficient and/or high-performance design. It includes a plethora of computation techniques that return a possibly inaccurate result rather than a guaranteed accurate result, and that can be used for applications where an approximate result is sufficient for their purpose. One example of such a situation is a search engine, where no exact answer may exist for a certain search query and hence many answers may be acceptable. Similarly, occasional dropping of some frames in a video application can go undetected due to perceptual limitations of humans. Approximate computing is based on the observation that in many scenarios, although performing exact computation requires a large amount of resources, allowing bounded approximation can provide disproportionate gains in performance and energy, while still achieving acceptable result accuracy. For example, in the k-means clustering algorithm, allowing only a 5% loss in classification accuracy can provide a 50-fold energy saving compared to fully accurate classification. The key requirement in approximate computing is that approximation can be introduced only in non-critical data, since approximating critical data (e.g., control operations) can lead to disastrous consequences, such as a program crash or erroneous output. Strategies Several strategies can be used for performing approximate computing. Approximate circuits Approximate arithmetic circuits: approximate versions of adders, multipliers and other logic circuits can reduce hardware overhead. For example, an approximate multi-bit adder can ignore the carry chain and thus allow all its sub-adders to perform the addition operation in parallel. Approximate storage and memory Instead of storing data values exactly, they can be stored approximately, e.g., by truncating the lower bits in floating point data. Another method is to accept less reliable memory. For this, in DRAM and eDRAM, refresh rate assignments can be lowered or controlled. In SRAM, the supply voltage can be lowered or controlled. Approximate storage can be applied to reduce MRAM's high write energy consumption. In general, any error detection and correction mechanisms should be disabled. Software-level approximation There are several ways to approximate at the software level. Memoization or fuzzy memoization (the use of a vector database for approximate retrieval from a cache, i.e. fuzzy caching) can be applied. Some iterations of loops can be skipped (termed loop perforation) to achieve a result faster. Some tasks can also be skipped, for example when a run-time condition suggests that those tasks are not going to be useful (task skipping). Monte Carlo algorithms and randomized algorithms trade correctness for execution-time guarantees. The computation can be reformulated according to paradigms that allow easy acceleration on specialized hardware, e.g. a neural processing unit. Approximate system In an approximate system, different subsystems such as the processor, memory, sensor, and communication modules are synergistically approximated to obtain a much better system-level Q-E trade-off curve compared to individual approximations to each of the subsystems. Application areas Approximate computing has been used in a variety of domains where the applications are error-tolerant, such as multimedia processing, machine learning, signal processing, and scientific computing.
Therefore, approximate computing is mostly driven by applications that are related to human perception/cognition and have inherent error resilience. Many of these applications are based on statistical or probabilistic computation, so different approximations can be made to better suit the desired objectives. One notable example in machine learning is Google's use of this approach in its Tensor Processing Units (TPU, a custom ASIC). Derived paradigms The main issue in approximate computing is the identification of the section of the application that can be approximated. In the case of large-scale applications, it is very common for those with expertise in approximate computing techniques to lack sufficient expertise in the application domain (and vice versa). In order to solve this problem, programming paradigms have been proposed. They all have in common a clear separation of roles between the application programmer and the application domain expert. These approaches allow the spread of the most common optimizations and approximate computing techniques. See also Artificial neural network Metaheuristic PCMOS References Software optimization Computer architecture Approximations
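As a toy illustration of the loop-perforation idea mentioned under software-level approximation, the following plain-Python sketch skips most loop iterations when estimating a mean, trading a small accuracy loss for a roughly proportional reduction in work; the data and skip factor are arbitrary choices for the example.

```python
import random

def exact_mean(xs):
    return sum(xs) / len(xs)

def perforated_mean(xs, skip=4):
    # Loop perforation: visit only every `skip`-th element and extrapolate,
    # doing roughly 1/skip of the work of the exact loop.
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(100_000)]

print("exact     :", round(exact_mean(data), 4))
print("perforated:", round(perforated_mean(data, skip=4), 4))
```

For an error-tolerant aggregate like this, the perforated estimate typically lands within a fraction of a percent of the exact value, which is exactly the kind of trade-off approximate computing exploits.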
Approximate computing
[ "Mathematics", "Technology", "Engineering" ]
853
[ "Computer engineering", "Computer architecture", "Mathematical relations", "Computers", "Approximations" ]
49,278,706
https://en.wikipedia.org/wiki/SYSAV%20waste-to-energy%20plant
The SYSAV (Sysav South Scania Waste) waste-to-energy plant is a waste-to-energy facility in Malmö, Sweden, which treats waste from the southern province of Skåne. The plant is owned by fourteen local authorities in Skåne. In 2008, a fourth unit was built with engineering consultancy Ramboll, making it one of the largest waste-to-energy plants in Northern Europe. Overview The SYSAV waste-to-energy plant is the most energy-efficient plant in Sweden, as well as being one of the most advanced plants in the world. The plant includes four boilers, the first two of which began operation in 1973. The two advanced boilers, fitted in 2003 and 2008 respectively, are steam boilers that generate electricity and district heating. SYSAV also has various sites throughout the province of Skåne, which are used to process, sort, store and recycle waste. Specific examples include sorting bulky waste, composting, chipping wood, recovering metals and reloading. The sites were originally designed to be landfills, but only a small portion of the waste goes to landfill at two of the sites. The sites include facilities to process household and commercial waste, using waste combustion to recover energy, biological treatment, re-use, recycling and landfill. SYSAV also has a facility for dealing with hazardous waste. References External links Power stations in Sweden Incinerators
SYSAV waste-to-energy plant
[ "Chemistry" ]
290
[ "Incinerators", "Incineration" ]
53,149,467
https://en.wikipedia.org/wiki/Marchetti%20dilatometer%20test
The Marchetti dilatometer test, or flat dilatometer test, is a type of dilatometer test commonly designated DMT. It was created by Silvano Marchetti (1980) and is one of the most versatile tools for soil characterization, namely for loose to medium-compacted granular soils and soft to medium clays, or even stiffer soils if a good reaction system is provided. The main reasons for its usefulness in deriving geotechnical parameters are its simplicity and speed of execution, which generate continuous data profiles of high accuracy and reproducibility. References Measuring instruments Soil mechanics
Marchetti dilatometer test
[ "Physics", "Technology", "Engineering" ]
120
[ "Soil mechanics", "Applied and interdisciplinary physics", "Measuring instruments" ]
53,153,455
https://en.wikipedia.org/wiki/Maximal%20entropy%20random%20walk
Maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen according to the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. While the standard random walk chooses for every vertex a uniform probability distribution among its outgoing edges, locally maximizing the entropy rate, MERW maximizes it globally (average entropy production) by assuming a uniform probability distribution among all paths in a given graph. MERW is used in various fields of science. A direct application is choosing probabilities to maximize the transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties have also made it useful, for example, in the analysis of complex networks, like link prediction, community detection, robust transport over networks and centrality measures, and in image analysis, for example for detecting visual saliency regions, object localization, tampering detection or the tractography problem. Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, like Anderson localization. Basic model Consider a graph with n vertices, defined by an adjacency matrix A: A_ij = 1 if there is an edge from vertex i to j, 0 otherwise. For simplicity assume it is an undirected graph, which corresponds to a symmetric A; however, MERW can also be generalized for directed and weighted graphs (for example with a Boltzmann distribution among paths instead of a uniform one). We would like to choose a random walk as a Markov process on this graph: for every vertex i and its outgoing edge to j, choose the probability S_ij of the walker using this edge after visiting i. Formally, find a stochastic matrix S (containing the transition probabilities of a Markov chain) such that S_ij = 0 whenever A_ij = 0, and Σ_j S_ij = 1 for all i. Assuming this graph is connected and not periodic, ergodic theory says that evolution of this stochastic process leads to a stationary probability distribution ρ such that ρ S = ρ. Using the Shannon entropy for every vertex and averaging over the probability of visiting this vertex (to be able to use its entropy), we get the following formula for the average entropy production (entropy rate) of the stochastic process: H(S) = Σ_i ρ_i Σ_j S_ij log(1/S_ij). This definition turns out to be equivalent to the asymptotic average entropy (per length) of the probability distribution in the space of paths for this stochastic process. In the standard random walk, referred to here as generic random walk (GRW), we naturally choose each outgoing edge to be equally probable: S_ij = A_ij / k_i, where k_i = Σ_j A_ij is the degree of vertex i. For a symmetric A it leads to a stationary probability distribution ρ with ρ_i = k_i / Σ_j k_j. It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal averaged global entropy rate H(S). MERW chooses the stochastic matrix which maximizes H(S), or equivalently assumes a uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue λ and corresponding eigenvector ψ of the adjacency matrix, i.e. the largest λ with corresponding ψ such that A ψ = λ ψ. Then the stochastic matrix and stationary probability distribution are given by S_ij = (A_ij / λ)(ψ_j / ψ_i), for which every possible path of length l from the i-th to the j-th vertex has probability (1/λ^l)(ψ_j / ψ_i). Its entropy rate is log(λ) and the stationary probability distribution is ρ_i = ψ_i² / Σ_j ψ_j². In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph (they are nonlocal).
Hence, they should not be imagined as directly applied by the walker – if random-looking decisions are made based on the local situation, as a person would make them, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we don't have any additional knowledge about the system. For example, it would be appropriate for modelling our knowledge about an object performing some complex dynamics – not necessarily random, like a particle. Sketch of derivation Assume for simplicity that the considered graph is undirected, connected and aperiodic, allowing us to conclude from the Perron–Frobenius theorem that the dominant eigenvector ψ is unique. Hence A^l can be asymptotically (l → ∞) approximated by λ^l ψ ψ^T (or λ^l |ψ⟩⟨ψ| in bra–ket notation). MERW requires a uniform distribution along paths. The number of paths of length 2l with the i-th vertex in the center is (Σ_j (A^l)_ji)(Σ_k (A^l)_ik) ≈ λ^{2l} ψ_i² (Σ_j ψ_j)², hence for all i, ρ_i ∝ ψ_i², i.e. ρ_i = ψ_i² / Σ_j ψ_j². Analogously calculating the probability distribution for two succeeding vertices, one obtains that the probability of being at the i-th vertex and next at the j-th vertex is proportional to ψ_i A_ij ψ_j. Dividing by the probability of being at the i-th vertex, i.e. ρ_i ∝ ψ_i², gives for the conditional probability of the j-th vertex being next after the i-th vertex S_ij = (A_ij / λ)(ψ_j / ψ_i). Weighted MERW: Boltzmann path ensemble We have assumed that A_ij ∈ {0, 1} for MERW corresponding to a uniform ensemble among paths. However, the above derivation works for real nonnegative A_ij. Parametrizing A_ij = exp(−E_ij) and asking for the probability of a length-l path (γ_0, …, γ_l), we get: Pr(γ_0 … γ_l) = ρ_{γ_0} S_{γ_0 γ_1} ⋯ S_{γ_{l−1} γ_l} = ρ_{γ_0} (ψ_{γ_l} / ψ_{γ_0}) λ^{−l} exp(−(E_{γ_0 γ_1} + … + E_{γ_{l−1} γ_l})), as in a Boltzmann distribution of paths for energy defined as the sum of E_ij over the given path. For example, it allows one to calculate the probability distribution of patterns in the Ising model. Examples Let us first look at a simple nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but not using two successive 1s: after a 1 there has to be a 0. To maximize the amount of information transmitted in such a sequence, we should assume a uniform probability distribution in the space of all possible sequences fulfilling this constraint. To practically use such long sequences, after a 1 we have to use a 0, but there remains the freedom of choosing the probability of a 0 after a 0. Let us denote this probability by q; then entropy coding would allow encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given q turns out to be (1/(2 − q), (1 − q)/(2 − q)). Hence, the entropy production is h(q)/(2 − q) with h(q) = −q log(q) − (1 − q) log(1 − q), which is maximized for q = (√5 − 1)/2 ≈ 0.618, the reciprocal of the golden ratio. In contrast, the standard random walk would choose the suboptimal q = 1/2. While choosing a larger q reduces the amount of information produced after a 0, it also reduces the frequency of 1s, after which we cannot write any information. A more complex example is the defected one-dimensional cyclic lattice: say 1000 nodes connected in a ring, for which all nodes but the defects have a self-loop (edge to itself). In the standard random walk (GRW) the stationary probability distribution would have the defect probability being 2/3 of the probability of the non-defect vertices – there is nearly no localization, and analogously for standard diffusion, which is the infinitesimal limit of GRW. For MERW we have to first find the dominant eigenvector of the adjacency matrix – maximizing λ in λ ψ_x = ψ_{x−1} + (1 − V_x) ψ_x + ψ_{x+1} for all positions x, where V_x = 1 for defects, 0 otherwise. Substituting E = 3 − λ and multiplying the equation by −1 we get E ψ_x = −(ψ_{x−1} − 2ψ_x + ψ_{x+1}) + V_x ψ_x, where E is minimized now, becoming the analog of energy. The formula inside the bracket is the discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation.
As in quantum mechanics, MERW predicts that the probability distribution should be exactly that of the quantum ground state, ρ_x ∝ ψ_x², with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, we get the standard continuous stationary (time-independent) Schrödinger equation (of the form E ψ = −ψ″ + V ψ, up to constants) here. See also Principle of maximum entropy Eigenvector centrality Markov chain Anderson localization References External links Gábor Simonyi, Y. Lin, Z. Zhang, "Mean first-passage time for maximal-entropy random walks in complex networks". Scientific Reports, 2014. Electron Conductance Models Using Maximal Entropy Random Walks Wolfram Demonstration Project Network theory Diffusion Information theory Quantum mechanics
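A compact numerical illustration, assuming Python with numpy: the sketch below builds the MERW transition matrix S_ij = A_ij ψ_j / (λ ψ_i) from the dominant eigenpair of an adjacency matrix and applies it to the two-state Fibonacci-coding graph, recovering the golden-ratio transition probability discussed in the examples above. The graph and its adjacency matrix are the only inputs; everything else follows the formulas of the basic model.

```python
import numpy as np

def merw(A):
    """MERW transition matrix and stationary distribution for a symmetric adjacency matrix A."""
    w, v = np.linalg.eigh(A)                  # eigenvalues in ascending order
    lam, psi = w[-1], v[:, -1]                # dominant eigenvalue / eigenvector
    psi = np.abs(psi)                         # Perron-Frobenius: dominant eigenvector can be taken positive
    S = A * np.outer(1.0 / psi, psi) / lam    # S_ij = A_ij * psi_j / (lam * psi_i)
    rho = psi**2 / np.sum(psi**2)             # stationary distribution rho_i ~ psi_i^2
    return S, rho, lam

# Fibonacci-coding constraint: after a 1 the next symbol must be 0.
# States {0, 1}; allowed transitions 0->0, 0->1, 1->0.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

S, rho, lam = merw(A)
print("lambda =", lam)                        # golden ratio, ~1.618
print("q = S[0,0] =", S[0, 0])                # ~0.618, the reciprocal of the golden ratio
print("stationary rho =", rho)
print("entropy rate =", np.log2(lam), "bits/symbol")
```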
Maximal entropy random walk
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
1,553
[ "Transport phenomena", "Physical phenomena", "Telecommunications engineering", "Diffusion", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Graph theory", "Network theory", "Computer science", "Information theory", "Mathematical relations" ]
47,502,345
https://en.wikipedia.org/wiki/ARC%20fusion%20reactor
The ARC fusion reactor (affordable, robust, compact) is a design for a compact fusion reactor developed by the Massachusetts Institute of Technology (MIT) Plasma Science and Fusion Center (PSFC). ARC aims to achieve an engineering breakeven of three (to produce three times the electricity required to operate the machine). The key technical innovation is to use high-temperature superconducting magnets in place of ITER's low-temperature superconducting magnets. The proposed device would be about half the diameter of the ITER reactor and cheaper to build. The ARC has a conventional advanced tokamak layout. ARC uses rare-earth barium copper oxide (REBCO) high-temperature superconductor magnets in place of copper wiring or conventional low-temperature superconductors. These magnets can be run at much higher field strengths, 23 T, roughly doubling the magnetic field on the plasma axis. The confinement time for a particle in plasma varies with the square of the linear size, and power density varies with the fourth power of the magnetic field, so doubling the magnetic field offers the performance of a machine 4 times larger. The smaller size reduces construction costs, although this is offset to some degree by the expense of the REBCO magnets. The use of REBCO may allow the magnet windings to be flexible when the machine is not operational. This would allow them to be "folded open" to allow access to the interior of the machine. This would greatly lower maintenance costs, eliminating the need to perform maintenance through small access ports using remote manipulators. If realized, this could improve the reactor's capacity factor, an important metric in power generation costs. The first machine planned to come from the project is a scaled-down demonstrator named SPARC (as Soon as Possible ARC). It is to be built by Commonwealth Fusion Systems, with backing led by Eni, Breakthrough Energy Ventures, Khosla Ventures, Temasek, and Equinor. History The project was announced in 2014. The name and design were inspired by the fictional arc reactor built by Tony Stark, who attended MIT in the comic books. The concept was born as "a project undertaken by a group of MIT students in a fusion design course. The ARC design was intended to show the capabilities of the new magnet technology by developing a design point for a plant producing as much fusion power as ITER at the smallest possible size. The result was a machine about half the linear dimension of ITER, running at 9 tesla and producing more than 500 megawatt (MW) of fusion power. The students also looked at technologies that would allow such a device to operate in steady state and produce more than of electricity." Design features The ARC design incorporates major departures from traditional tokamaks, while retaining conventional D–T (deuterium - tritium) fuel. Magnetic field To achieve a near tenfold increase in fusion power density, the design makes use of REBCO superconducting tape for its toroidal field coils. This material enables higher magnetic field strength to contain heated plasma in a smaller volume. In theory, fusion power density is proportional to the fourth power of the magnetic field strength. The most probable candidate material is yttrium barium copper oxide, with a design temperature of , allowing various coolants (e.g. liquid hydrogen, liquid neon, or helium gas) instead of the much more complicated liquid helium refrigeration chosen by ITER. 
The official SPARC brochure displays a YBCO cable section that is commercially available and that should allow fields up to 30 T. ARC is planned to be a 270 MWe tokamak reactor with a major radius of , a minor radius of , and an on-axis magnetic field of . The design point has a fusion energy gain factor Qp ≈ 13.6 (the plasma produces 13 times more fusion energy than is required to heat it), yet is fully non-inductive, with a bootstrap fraction of ~63%. The design is enabled by the ~23 T peak field on coil. External current drive is provided by two inboard RF launchers using of lower hybrid and of ion cyclotron fast wave power. The resulting current drive provides a steady-state core plasma far from disruptive limits. Removable vacuum vessel The design includes a removable vacuum vessel (the solid component that separates the plasma and the surrounding vacuum from the liquid blanket). It does not require dismantling the entire device. That makes it well-suited for evaluating design changes. Liquid blanket Most of the solid blanket materials that surround the fusion chamber in conventional designs are replaced by a fluorine lithium beryllium (FLiBe) molten salt that can easily be circulated/replaced, reducing maintenance costs. The liquid blanket provides neutron moderation and shielding, heat removal, and a tritium breeding ratio ≥ 1.1. The large temperature range over which FLiBe is liquid permits blanket operation at with single-phase fluid cooling and a Brayton cycle. See also List of fusion experiments References External links Nuclear fusion Superconductivity Proposed fusion reactors Tokamaks
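The field-strength argument in the overview reduces to one line of arithmetic. In the sketch below (plain Python), the on-axis field values are assumptions based on commonly cited figures for ITER and ARC rather than numbers taken from this article; the quartic scaling of fusion power density with magnetic field is the rule quoted above.

```python
# Illustrative comparison of volumetric fusion power density, scaling ~ B^4.
# Field values are assumed, commonly cited figures, not taken from this article.
b_iter = 5.3   # on-axis toroidal field of ITER, tesla (assumed)
b_arc = 9.2    # on-axis toroidal field of ARC, tesla (assumed)

gain = (b_arc / b_iter) ** 4
print(f"Power-density gain from the stronger field: ~{gain:.1f}x")
# -> roughly 9x, i.e. the "near tenfold increase in fusion power density"
# that lets a much smaller device reach ITER-class fusion power.
```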
ARC fusion reactor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,055
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Nuclear physics", "Nuclear fusion", "Electrical resistance and conductance" ]
47,505,162
https://en.wikipedia.org/wiki/Multipolarity%20of%20gamma%20radiation
Transitions between excited states (or excited states and the ground state) of a nuclide lead to the emission of gamma quanta. These can be classified by their multipolarity. There are two kinds: electric and magnetic multipole radiation. Each of these, being electromagnetic radiation, consists of an electric and a magnetic field. Multipole radiation Electric dipole, quadrupole, octupole… radiation (generally: electric 2^L-pole radiation) is also designated as E1, E2, E3,… radiation (generally: EL radiation). Similarly, magnetic dipole, quadrupole, octupole… radiation (generally: magnetic 2^L-pole radiation) is designated as M1, M2, M3,… radiation (generally: ML radiation). There is no monopole radiation (L = 0). In quantum mechanics, angular momentum is quantized. The various multipole fields have particular values of angular momentum: EL radiation carries an angular momentum L in units of ħ; likewise, ML radiation carries an angular momentum L in units of ħ. The conservation of angular momentum leads to selection rules, i.e., rules defining which multipoles may or may not be emitted in particular transitions. To make a simple classical comparison, consider the figure of the oscillating dipole. It produces electric field lines travelling outwards, intertwined with magnetic field lines, according to Maxwell's equations. This system of field lines then corresponds to that of E1 radiation. Similar considerations hold for oscillating electric or magnetic multipoles of higher order. Conversely, it is plausible that the multipolarity of radiation can be determined from the angular distribution of the emitted radiation. Quantum numbers and selection rules A state of a nuclide is described by its energy above the ground state, by its angular momentum J (in units of ħ), and by its parity, i.e., its behaviour under reflection (positive + or negative −). Since the spin of nucleons is ½ (in units of ħ), and since orbital angular momentum has integer values, J may be an integer or a half-integer number. Electric and magnetic multipole radiations of the same order L (i.e., dipole, or quadrupole...) carry the same angular momentum L (in units of ħ), but differ in parity. The following relations hold for L ≥ 1: Electric multipole radiation: parity (−1)^L. Here, the electric field has parity (−1)^L, and the magnetic field (−1)^(L+1). Magnetic multipole radiation: parity (−1)^(L+1). Here, the electric field has parity (−1)^(L+1), and the magnetic field (−1)^L. The designation "electric multipole radiation" seems appropriate since the major part of that radiation is produced by the charge density in the source; conversely, the "magnetic multipole radiation" is mainly due to the current density of the source. In electric multipole radiation, the electric field has a radial component; in magnetic multipole radiation, the magnetic field has a radial component. An example: in the simplified decay scheme of 60Co above, the angular momenta and the parities of the various states are shown (a plus sign means even parity, a minus sign means odd parity). Consider the 1.33 MeV transition to the ground state. Clearly, this must carry away an angular momentum of 2, without change of parity. It is therefore an E2 transition. The case of the 1.17 MeV transition is a bit more complex: going from J = 4 to J = 2, all values of angular momentum from 2 to 6 could be emitted. But in practice, the smallest values are most likely, so it is also a quadrupole transition, and it is E2 since there is no parity change. See also Multipole expansion Notes References Nuclear physics Electromagnetic radiation
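The selection rules above lend themselves to a short computational illustration. The following Python sketch is a generic helper written for this example (integer angular momenta only): it lists the multipole orders allowed between two states given their angular momenta and parities, and applies them to the 60Co cascade; as noted above, the lowest allowed multipole usually dominates.

```python
def allowed_multipoles(j_i, pi_i, j_f, pi_f, max_l=6):
    """Return the allowed multipoles (e.g. 'E2', 'M3') between two nuclear states.

    Angular momenta j_i, j_f are integers in units of hbar; parities pi_i, pi_f are +1 or -1.
    A gamma quantum carries L >= 1 with |j_i - j_f| <= L <= j_i + j_f.
    Electric 2^L-pole radiation has parity (-1)**L, magnetic (-1)**(L + 1).
    """
    allowed = []
    parity_change = pi_i * pi_f              # +1: no parity change, -1: parity change
    l_min = max(1, abs(j_i - j_f))
    l_max = min(j_i + j_f, max_l)
    for l in range(l_min, l_max + 1):
        if (-1) ** l == parity_change:
            allowed.append(f"E{l}")
        else:
            allowed.append(f"M{l}")
    return allowed

# The 60Co cascade: 4+ state -> 2+ state (1.17 MeV), then 2+ -> 0+ ground state (1.33 MeV).
print("4+ -> 2+ :", allowed_multipoles(4, +1, 2, +1))   # ['E2', 'M3', 'E4', 'M5', 'E6'], E2 dominates
print("2+ -> 0+ :", allowed_multipoles(2, +1, 0, +1))   # ['E2'], a pure E2 transition
```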
Multipolarity of gamma radiation
[ "Physics" ]
766
[ "Electromagnetic radiation", "Physical phenomena", "Radiation", "Nuclear physics" ]
47,507,667
https://en.wikipedia.org/wiki/Institut%20N%C3%A9el
Institut Néel is a research laboratory in condensed matter physics located on Polygone Scientifique in Grenoble, France. It is named after scientist Louis Néel. The institute is an independent research unit (UPR2940) of the French Centre national de la recherche scientifique created in 2007 as a reorganization of four research laboratories: the center for research in very low temperatures (Centre de Recherches sur les très basses températures (CRTBT)), the laboratory for the study of electronic properties of solids (laboratoire d’étude des propriétés électroniques des solides (LEPES)), the Louis Néel laboratory (laboratoire Louis Néel (LLN)), and the Laboratory of crystallography (Laboratoire de cristallographie (LdC)). References Related articles Université Grenoble Alpes External links Condensed matter physics French National Centre for Scientific Research Science and technology in Grenoble Neutron sources
Institut Néel
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
206
[ "Materials science stubs", "Phases of matter", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
62,978,872
https://en.wikipedia.org/wiki/Climate%20Clock
The Climate Clock is a graphic to demonstrate how quickly the planet is approaching 1.5 °C of global warming, given current emissions trends. It also shows the amount of CO2 already emitted, and the global warming to date. The Climate Clock was launched in 2015 to provide a measuring stick against which viewers can track climate change mitigation progress. The date shown for when humanity reaches 1.5 °C will move closer as emissions rise, and further away as emissions decrease. An alternative view projects the time remaining to 2.0 °C of warming. The clock is updated every year to reflect the latest global CO2 emissions trend and rate of climate warming. On September 20, 2021, the clock's deadline was pushed back to July 28, 2028, likely because of the COP26 Conference and land protection by indigenous peoples. As of April 2, 2024, the clock counts down to July 21, 2029 at 12:00 PM. The clock is hosted by Human Impact Lab, itself part of Concordia University. Organisations supporting the climate clock include Concordia University, the David Suzuki Foundation, Future Earth, and the Climate Reality Project. As of April 29, 2024, the warming to date shown by the clock is 1.297 °C. Relevance 1.5 °C is an important threshold for many climate impacts, as shown by the Special Report on Global Warming of 1.5 °C. Every increment to global temperature is expected to increase weather extremes, such as heat waves and extreme precipitation events. There is also the risk of irreversible ice sheet loss. Consequent sea level rise also increases sharply around 1.75 °C, and virtually all corals could be wiped out at 2 °C warming. The New York Climate Clock In late September 2020, artists and activists Gan Golan, Katie Peyton Hofstadter, Adrian Carpenter and Andrew Boyd repurposed the Metronome in Union Square in New York City to show the Climate Clock. The goal was to "remind the world every day just how perilously close we are to the brink." This is in juxtaposition to the Doomsday Clock, which measures a variety of factors that could lead to "destroying the world" using "dangerous technologies of our making," with climate change being one of the smaller factors. This specific installation is expected to be one of many in cities around the world. At the time of installation, the clock read 7 years and 102 days. Greta Thunberg, Swedish environmental activist, was involved in the project early on, and reportedly received a hand-held version of the climate clock. Since its inception, the New York Climate Clock has added a second set of numbers for the percentage of the world's energy use that comes from renewable energy sources. See also Climate Action Tracker Doomsday Clock Paris Agreement: limits global warming to 2 °C, pursues 1.5 °C Effects of global warming which further increase CO2 emissions: forest fires, arctic methane release, ... References External links Climate Clock website Another climate clock Climate Action Tracker: continuously tracks emissions of individual countries Alert measurement systems Clocks Political symbols
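In broad terms, a countdown of this kind divides a remaining carbon budget by the current emission rate. The Python sketch below is a simplified illustration only: the budget and emission figures are round assumed numbers, not values from this article or from the Climate Clock's own methodology.

```python
import datetime

# Illustrative values only (assumed for the example, not from the article):
remaining_budget_gtco2 = 300.0   # CO2 budget left before 1.5 C is expected, GtCO2
annual_emissions_gtco2 = 40.0    # current global CO2 emissions, GtCO2 per year

years_left = remaining_budget_gtco2 / annual_emissions_gtco2
deadline = datetime.date.today() + datetime.timedelta(days=365.25 * years_left)

print(f"At a constant emission rate: about {years_left:.1f} years left")
print("Illustrative deadline:", deadline.isoformat())
```

Lower emissions push the computed date further into the future and higher emissions pull it closer, matching the behavior described above.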
Climate Clock
[ "Physics", "Technology", "Engineering" ]
624
[ "Machines", "Clocks", "Alert measurement systems", "Measuring instruments", "Physical systems", "Warning systems" ]
62,980,659
https://en.wikipedia.org/wiki/GGSE-4
The Gravity Gradient Stabilization Experiment (GGSE-4) was a technology satellite launched in 1967. This was ostensibly the fourth in a series that developed designs and deployment techniques later applied to the NOSS/Whitecloud reconnaissance satellites. History GGSE-4 was launched by the U.S. Air Force from Vandenberg Air Force Base atop a Thor Agena-D rocket. GGSE-4 remained operational from 1967 through 1972. It is alleged that the real name of GGSE-4 was POPPY 5B or POPPY 5b and that it was a U.S. National Reconnaissance Office satellite designed to collect signals intelligence; POPPY 5B was part of a 7-satellite mission. A partial subset of information about POPPY was declassified in 2005. Other sources say that GGSE-4 weighed only 10 pounds but that it was attached to the much larger Poppy 5, which would have weighed 85 kg and featured an 18-meter boom. It is further alleged that GGSE-4's mass is not at all like GGSE-1's mass and that GGSE-4 weighs 85 kg. 2020 near-miss On , GGSE-4 was expected to pass as closely as 12 meters from IRAS, another un-deorbited satellite left aloft. IRAS was launched in 1983 and abandoned after a 10-month mission. The 14.7-kilometer-per-second pass had an estimated risk of collision of 5%. Further complications arose from the fact that GGSE-4 was outfitted with an 18-meter-long stabilization boom that was in an unknown orientation and may have struck the satellite even if the spacecraft's main body did not. Initial observations from amateur astronomers seemed to indicate that both satellites had survived the pass, with the California-based debris tracking organization LeoLabs later confirming that they had detected no new tracked debris following the incident. See also Gravity Gradient Stabilization Experiment (GGSE-1) References Space
GGSE-4
[ "Physics", "Mathematics" ]
405
[ "Spacetime", "Space", "Geometry" ]
62,983,169
https://en.wikipedia.org/wiki/Chrome%20Azurol%20S
Chrome Azurol S is a histological dye used in biomedical research. Chrome Azurol S (CAS) is a common spectrophotometric reagent for the detection of certain metals like aluminum, which can be toxic in excess and may contribute to neurodegenerative disorders. CAS is used to provide quantitative and qualitative information on molecules of interest like aluminum and siderophores: qualitatively, a color change can be observed, while quantitatively the concentration of certain ions can be determined. References Staining dyes
Chrome Azurol S
[ "Chemistry", "Biology" ]
112
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
62,983,576
https://en.wikipedia.org/wiki/Fluorite%20structure
The fluorite structure refers to a common motif for compounds with the formula MX2. The X ions occupy the eight tetrahedral interstitial sites whereas M ions occupy the regular sites of a face-centered cubic (FCC) structure. Many compounds, notably the common mineral fluorite (CaF2), adopt this structure. Many compounds with formula M2X have an antifluorite structure. In these the locations of the anions and cations are reversed relative to fluorite (an anti-structure); the anions occupy the FCC regular sites whereas the cations occupy the tetrahedral interstitial sites. For example, magnesium silicide, Mg2Si, has a lattice parameter of 6.338 Å with magnesium cations occupying the tetrahedral interstitial sites, in which each silicide anion is surrounded by eight magnesium cations and each magnesium cation is surrounded by four silicide anions in a tetrahedral fashion. Calcium fluoride example Crystallography is a powerful tool to investigate the structures of crystalline materials. It is important to understand the crystal structure of materials to form structure-property relationships. These relationships can help predict the behavior of crystalline materials, as well as introduce the ability to tune their properties. Calcium fluoride is a classic example of a crystal with a fluorite structure. Crystallographic information can be collected via x-ray diffraction, providing information on the locations of electron density within a crystal structure. Using modern software such as Olex2, one can solve a crystal structure from crystallographic output files. Views of calcium fluoride crystal structure In calcium fluoride, each calcium cation is surrounded by eight fluorine anions occupying the tetrahedral sites, while each fluorine anion is surrounded by four calcium cations (8:4 coordination). This ratio is consistent with the stoichiometry of the compound, in which the ratio of fluorine to calcium is 2:1. This relationship can be visualized as a cubic array of anions surrounding the calcium cations. Extended fluorite structure Beyond the unit cell, the extended crystal structure of fluorite continues packing in a face-centered cubic (fcc) arrangement (also known as cubic close-packed or ccp). This packing follows an ABC layering pattern, in which each successive layer of spheres settles over the holes of the layer below. In contrast, hexagonal close-packed (hcp) structures are successively layered in an ABAB pattern. These two types of packing are the most closely packed forms of spherical packing. See also Rock-salt structure References Cubic minerals Minerals in space group 225 Fluorine minerals Crystal structure types
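The arrangement described above can be written out explicitly as fractional coordinates of the conventional cubic cell, with the cations on the FCC sites and the anions filling all eight tetrahedral holes. The short Python sketch below enumerates these generic fluorite positions and confirms the 4:8 cation-to-anion count per cell; it is a plain illustration and not output from any crystallographic software.

```python
from itertools import product

# Conventional cubic cell of the fluorite structure (e.g. CaF2),
# expressed as fractional coordinates.

# Cations (e.g. Ca) on the face-centred-cubic positions:
fcc_sites = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]

# Anions (e.g. F) on all eight tetrahedral holes: every combination of 1/4 and 3/4.
tetrahedral_sites = list(product((0.25, 0.75), repeat=3))

print("cation sites per cell:", len(fcc_sites))          # 4
print("anion sites per cell: ", len(tetrahedral_sites))  # 8
print("anion : cation ratio  =", len(tetrahedral_sites) / len(fcc_sites))  # 2.0 -> MX2
for site in tetrahedral_sites:
    print(site)
```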
Fluorite structure
[ "Chemistry", "Materials_science" ]
555
[ "Crystallography", "Crystal structure types" ]
62,985,630
https://en.wikipedia.org/wiki/Chinese%20Materials%20Research%20Society
The Chinese Materials Research Society (; abbreviated C-MRS) is a professional body and learned society in the field of materials science and engineering in China, founded on May 16, 1991. As of 2019, the society has 9 subordinate working committees, 22 branches, 184 unit members and more than 8,000 individual members. It is a constituent of the China Association for Science and Technology (CAST) and a member of the International Union of Materials Research Society (IUMRS). The society provides forums for the exchange of information. It aims to promote the research and development of all kinds of advanced materials and to encourage the practical application of new materials, new processes and new technologies in industry. Scientific publishing Progress in Natural Science: Materials International (PROG NAT SCI-MATER) References External links Materials science organizations Scientific organizations established in 1991 Organizations based in Beijing 1991 establishments in China 1991 in Beijing
Chinese Materials Research Society
[ "Materials_science", "Engineering" ]
184
[ "Materials science organizations", "Materials science" ]
62,985,895
https://en.wikipedia.org/wiki/Power-off%20accuracy%20approach
A power-off accuracy approach, also known as a glide approach, is an aviation exercise used to simulate a landing with an engine failure. The purpose of this training technique is to better develop one's ability to estimate distance and glide ratios. The variations are named for the angle, in degrees, through which the aircraft must turn to become aligned with the runway. Consideration of the wind and use of flaps are important factors in executing power-off accuracy approaches. Closing the throttle is intended to simulate engine failure. Iterations of power-off approaches 90° power-off A 90° approach calls for the throttle to be closed when the aircraft is angled 45° from the centerline. On the base leg, the airspeed needs to be lowered to the manufacturer's recommended glide speed. In order to stretch the gliding distance, pilots will often pitch up momentarily to attain best glide speed, also known as Vg. Once this speed is reached, the nose of the plane is slightly lowered to maintain the current airspeed. When turning final, if the pilot judges that the plane is above the glide path and altitude must be lost, flaps can be used as needed. Depending on the strength of the wind, the pilot will adjust the base leg to be closer to or further away from the runway's approach end. Stronger winds call for the closest base leg, while weaker winds allow a closer-to-normal traffic pattern. 180° power-off This variation is an extension of the 90° approach. For the power-off 180, on the downwind leg, the pilot pulls the power to idle when abeam the intended landing point. Immediately after the throttle is brought to idle, the plane is pitched to Vg, the best glide speed determined by the manufacturer. At this point, the pilot judges the gliding distance and determines an appropriate time to turn base. When approaching final, the pilot should have a general idea of whether they are above or below the glide path, which will in turn affect their use of flaps or a forward slip. This maneuver is part of the United States Department of Transportation, Federal Aviation Administration (FAA) Commercial Pilot – Airplane Airmen Certification Standards; according to the FAA, completing this task demonstrates a pilot's proficiency in the maneuver. 360° power-off The 360° power-off approach requires the plane to glide in a circular pattern, starting 2,000 ft or more above the intended landing point. When the aircraft is positioned over the landing point, the throttle is closed and, again, the proper glide speed must be attained. After establishing the appropriate speed, the pilot can safely steer the plane using medium turns to approach the downwind leg. The plane should be around 1,000 to 1,200 ft above the ground at the point on the downwind leg abeam the intended landing point. When arriving at the base leg position, the plane should be around 800 ft above the terrain. Factors Wind Wind plays a crucial role in power-off accuracy maneuvers. If pilots fail to take the wind into account, the performance and accuracy of the maneuver suffer. Flaps In each of the power-off accuracy approaches, flaps can be used to assist in performing the maneuver, since extending flaps reduces stalling speed and increases drag. This allows for a steeper and slower approach. However, depending on weather conditions or other circumstances, such as prolonged extension of the downwind or base leg, use of flaps may not be required.
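Judging gliding distance, as required on the downwind and base legs described above, amounts to multiplying the height above the field by the aircraft's glide ratio and then allowing for wind. The Python sketch below is purely illustrative: the glide ratio and altitude are assumed ballpark values for a light training aircraft, not figures from this article or any flight manual.

```python
# Rough still-air glide range from pattern altitude (illustrative values only).
glide_ratio = 9.0      # horizontal distance per unit height at best glide (assumed)
altitude_ft = 1000.0   # height above the field abeam the touchdown point, ft (assumed)

glide_range_ft = glide_ratio * altitude_ft
glide_range_nm = glide_range_ft / 6076.0   # feet per nautical mile

print(f"Still-air glide range: about {glide_range_ft:,.0f} ft "
      f"({glide_range_nm:.2f} NM)")
# A headwind on final shortens this reach, which is why the base leg is flown
# closer to the runway in stronger winds, as noted above.
```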
Common errors The Airplane Flying Handbook of the Federal Aviation Administration lists common mistakes pilots make when performing power-off accuracy approaches. A few of these errors are listed below. Forcing the landing to avoid overshooting the designated landing spot Extending flaps and/or landing gear prematurely Downwind leg too far from the runway Poor compensation for wind drift Overextension of the downwind leg See also Turbine engine failure References Flight training Powered flight Emergency aircraft operations
Power-off accuracy approach
[ "Physics" ]
788
[ "Power (physics)", "Powered flight", "Physical quantities" ]
73,027,829
https://en.wikipedia.org/wiki/Inverter-based%20resource
An inverter-based resource (IBR) is a source of electricity that is asynchronously connected to the electrical grid via an electronic power converter ("inverter"). The devices in this category, also known as converter interfaced generation (CIG), include the variable renewable energy generators (wind, solar) and battery storage power stations. These devices lack the intrinsic behaviors of a synchronous generator (such as its inertial response), and their features are almost entirely defined by control algorithms, presenting specific challenges to system stability as their penetration increases; for example, a single software fault can affect all devices of a certain type in a contingency (cf. the section on the Blue Cut fire below). IBRs are sometimes called non-synchronous generators. The design of inverters for IBRs generally follows the IEEE 1547 and NERC PRC-024-2 standards. Grid-following vs. grid-forming A grid-following (GFL) device is synchronized to the local grid voltage and injects an electric current vector aligned with that voltage (in other words, it behaves like a current source). GFL inverters are built into the overwhelming majority of installed IBR devices. Due to their following nature, a GFL device will shut down if a large voltage/frequency disturbance is observed. GFL devices cannot contribute to grid strength, dampen active power oscillations, or provide inertia. A grid-forming (GFM) device partially mimics the behavior of a synchronous generator: its voltage is controlled by a free-running oscillator that slows down when more energy is withdrawn from the device. Unlike a conventional generator, the GFM device has no overcurrent capacity and thus will react very differently in a short-circuit situation. Adding the GFM capability to a GFL device is not expensive in terms of components, but it affects the revenues: in order to support grid stability by providing extra power when needed, the power semiconductors need to be oversized and energy storage added. Modeling demonstrates, however, that it is possible to run a power system based almost entirely on GFL devices. A combination of GFM battery storage power stations and synchronous condensers (SuperFACTS) is being researched. Features Compliance with the IEEE 1547 standard requires the IBR to support several safety features: if the sensed line voltage deviates significantly from the nominal value (usually outside the limits of 0.9 to 1.1 pu), the IBR shall disconnect from the grid after a delay (the so-called ride-through time); the delay is shorter if the voltage deviation is larger. Once the inverter is off, it will stay disconnected for a significant time (minutes). If the voltage magnitude is unexpected, the inverter shall enter the momentary cessation state: while still connected, it will not inject any power into the grid. This state has a short duration (less than a second). Once an IBR ceases to provide power, it can come back only gradually, ramping its output from zero to full power. The electronic nature of IBRs limits their overload capability: thermal stress limits their components, even temporarily, to no more than 1–2 times the nameplate capacity, while synchronous machines can briefly tolerate an overload as high as 5–6 times their rated power.
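The disconnect-versus-ride-through behaviour described above can be sketched in a few lines of code. This is a minimal illustration only: the voltage bands and ride-through durations below are invented placeholder values, not the actual thresholds prescribed by IEEE 1547 or NERC PRC-024-2.

```python
# Minimal sketch of the ride-through logic described above.  The voltage bands
# and the ride-through durations in this table are invented placeholders for
# illustration only; they are NOT the thresholds prescribed by IEEE 1547 or
# NERC PRC-024-2.

RIDE_THROUGH_TABLE = [
    # (low_pu, high_pu, max_ride_through_seconds)
    (0.90, 1.10, float("inf")),  # assumed continuous-operation band
    (0.70, 1.20, 2.0),           # assumed: ride through up to 2 s
    (0.50, 1.30, 0.16),          # assumed: ride through up to 0.16 s
]

def allowed_ride_through(voltage_pu: float) -> float:
    """How long the inverter may stay connected at the sensed voltage."""
    for low, high, seconds in RIDE_THROUGH_TABLE:
        if low <= voltage_pu <= high:
            return seconds
    return 0.0  # outside every band: trip immediately

def should_trip(voltage_pu: float, elapsed_s: float) -> bool:
    """Larger deviations are tolerated for shorter times, as in the text."""
    return elapsed_s > allowed_ride_through(voltage_pu)

print(should_trip(0.95, 60.0))  # False: inside the continuous band
print(should_trip(0.60, 0.5))   # True: deep sag held longer than allowed
```

A real inverter implements this kind of logic, together with momentary cessation and ramped recovery, in firmware, which is why a single software defect can affect an entire fleet of devices at once.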
Vulnerabilities New challenges to system stability came with the increased penetration of IBRs. Incidents of disconnection during contingency events where fault ride-through was expected, and poor damping of subsynchronous oscillations in weak grids, have been reported. One of the most studied major power contingencies involving IBRs is the Blue Cut fire of 2016 in Southern California, with a temporary loss of more than a gigawatt of photovoltaic power in a very short time. Blue Cut fire The Blue Cut fire in the Cajon Pass on August 16, 2016, affected multiple high-voltage (500 kV and 287 kV) power transmission lines passing through the canyon. Throughout the day, thirteen 500 kV line faults and two 287 kV faults were recorded. The faults themselves were transitory and self-cleared in a short time (2-3.5 cycles, less than 60 milliseconds), but unexpected features of the algorithms in the photovoltaic inverter software triggered multiple massive losses of power, the largest of almost 1,200 megawatts at 11:45:16 AM, persisting for multiple minutes. The analysis performed by the North American Electric Reliability Corporation (NERC) showed that: 700 MW of the loss was caused by a poorly designed frequency estimation algorithm. The line faults had distorted the AC waveform and fooled the software into wrongly estimating that the grid frequency had dropped below 57 Hz, a threshold at which an emergency disconnect shall be initiated. However, the actual frequency during the event never dropped below 59.867 Hz, well above the low limit of the normal frequency range (59.5 Hz for the Western Interconnection). An additional 450 MW was lost when low line voltage caused the inverters to immediately cease injecting current, with a gradual return to the operative state within 2 minutes. At least one manufacturer indicated that injecting current when the voltage level is below 0.9 pu would involve a major redesign. As a result of the incident, NERC issued multiple recommendations involving changes in inverter design and amendments to the standards. References Sources Electrical engineering
Inverter-based resource
[ "Engineering" ]
1,169
[ "Electrical engineering" ]
73,028,554
https://en.wikipedia.org/wiki/Loyal%20wingman
A loyal wingman is a proposed type of unmanned combat air vehicle (UCAV) which incorporates artificial intelligence (AI) and is capable of collaborating with the next generation of crewed combat aircraft, including sixth-generation fighters and bombers such as the Northrop Grumman B-21 Raider. Unlike a conventional UCAV, the loyal wingman is expected to be capable of surviving on the battlefield while being significantly lower-cost than a crewed aircraft with similar capabilities. In the US, the concept is known as the collaborative combat aircraft (CCA). Characteristics The loyal wingman is a military drone with an onboard AI control system and the capability to carry and deliver a significant military weapons load. The AI system is envisaged as being significantly lighter and lower-cost than a human pilot with their associated life support systems, but offering comparable capability in flying the aircraft and in mission execution. Some concepts are based on a standardised aircraft deployed in two variants; one as a sixth-generation fighter with a human pilot and/or battle commander in the cockpit, and the other as a loyal wingman with an AI system substituted in the same location. BAE Systems envisages the Tempest as being capable of operating in either configuration. Another concept is to develop a shorter-range, and hence smaller and cheaper, wingman to be carried by the manned parent aircraft and air-launched when needed. The drone in turn carries its own munitions. This reduces the overall cost while maintaining protection for the crewed aircraft on the battlefield. Role The principal application is to elevate human pilots to the role of mission commanders and high-skill operators, with AIs serving as "loyal wingmen" flying relatively low-cost robotic craft under their tactical control. Loyal wingmen can perform other missions as well, as "a sensor, as a shooter, as a weapons carrier, as a cost reducer". Capabilities A loyal wingman is expected to cost significantly less than a crewed fighter, and will typically be considered vulnerable to attrition. It would have sufficient intelligence and onboard defence systems to survive on the battlefield. The United States Secretary of the Air Force Frank Kendall has described them as remotely controlled versions of targeting pods, electronic warfare pods or weapons carriers that provide additional sensors and munitions, balancing affordability and capability. Development history The concept of the loyal wingman arose in the early 2000s and, since then, countries such as Australia, China, Japan, Russia, the UK and the US have been researching and developing the necessary design criteria and technologies. Australia Boeing Australia is leading development of the MQ-28 Ghost Bat loyal wingman for the RAAF, with BAE Systems Australia providing much of the avionics. The MQ-28 was first flown in 2021 and, since then, at least 8 aircraft have been built. China China has been studying the loyal wingman concept since at least 2019 and has shown off some concept airframes. However, although China already manufactures drones and has well-developed swarming technology, the planned level of autonomy or even AI for these systems is not known. Germany European aerospace manufacturer Airbus has proposed the Airbus Wingman, a loyal wingman aircraft. The aircraft would be an unmanned combat aerial vehicle (UCAV) which would accompany a Eurofighter Typhoon or other combat aircraft as a force multiplier.
India The HAL CATS Warrior is an AI-enabled wingman drone under development by Hindustan Aeronautics Ltd. (HAL) for the proposed Combat Air Teaming System (CATS). Japan Japan announced a development programme for a loyal wingman drone in 2021, issuing the first round of funding in 2022. The drone is intended to be carried for deployment by a proposed F-X fighter, also under development. Russia Russian projects for wingman-class drones are thought to include the Sukhoi S-70 Okhotnik and the Kronshtadt Grom. However, although Russia already manufactures drones, the planned level of autonomy or even AI for these systems is not known. South Korea In addition to the production of the new generation fighter, KF-21, South Korea plans to develop several types of UCAVs as wingmen to team up with the manned fighter. United Kingdom The RAF in the UK has been developing the loyal wingman concept since 2015, with the Spirit Mosquito technology demonstrator flying in 2020. Programme funding was cancelled in June 2022 because the Ministry of Defence felt that it was better spent on less ambitious advances. United States Collaborative combat aircraft (CCA) is the official US Air Force designation for an autonomous combat drone, and is broadly equivalent to the loyal wingman. The USAF Next Generation Air Dominance (NGAD) program was initiated in 2014. It includes the development of CCA. Up to five autonomous CCAs could operate with a manned fighter. The Skyborg programme, going back at least to 2019, is developing the systems to operate wingman drones alongside advanced manned fighters. Of four contenders, the most public is the Kratos XQ-58A Valkyrie. The Air Force Research Laboratory (AFRL) will test its Skyborg manned-unmanned programs such as Autonomous Air Combat Operations (AACO), and DARPA will test its Air Combat Evolution (ACE) artificial intelligence program. The System for Autonomous Control of Simulation (SACS) software for the human interface is being developed by Calspan. In 2020, DARPA's AlphaDogfight trials suggested that AI programs that fly fighter aircraft will overmatch human pilots. Two alternative autonomous AI systems have been installed in a General Dynamics X-62 VISTA at the Air Force Test Pilot School. The two systems flew the aircraft in turn, on 9 December 2022. By 16 December 2022 the X-62 VISTA had flown eight sorties using ACE, and six sorties using AACO, at a rate of two sorties per day. The General Atomics Longshot is intended to be carried for deployment by the manned aircraft, and is air-launched when needed. This allows a shorter range for the drone, while maintaining advanced protection for the manned aircraft. DARPA adopted the General Atomics design for its Longshot programme in 2022. In 2022 Heather Penney identified five key elements for the proactive development of autonomous CCA teamed with remote pilots of UAVs and pilots flying separately in manned aircraft (also called crewed-uncrewed teaming, or manned-unmanned teaming). Create concepts that will maximize the strengths of both CCA and piloted aircraft working as a team. Include operators in CCA development to ensure they understand how they will perform in the battlespace. Warfighters must be able to depend on CCA autonomy. Warfighters must have assured control over CCA in highly dynamic operations. Human workloads must be manageable. A typical CCA is estimated to cost between one-quarter and one-half as much as an $80 million F-35. US Air Force Secretary Frank Kendall is aiming for an initial fleet of 1,000 CCAs.
List of loyal wingman aircraft Several loyal wingman aircraft are or have been under development. Examples include: Boeing MQ-28 Ghost Bat - 8 in testing (Block 1), planned entry into active service in 2025 General Dynamics X-62 VISTA - system development aircraft HAL CATS Warrior - under development Kratos XQ-58 Valkyrie - development prototype flying Kronshtadt Grom ("Thunder") - under development Spirit Mosquito - program cancelled Sukhoi S-70 Okhotnik - proposed development with upgraded avionics Airbus Wingman - proposed development Northrop Grumman Model 437 - development prototype flying See also Index of aviation articles References Unmanned military aircraft Robotics Command and control
Loyal wingman
[ "Engineering" ]
1,562
[ "Robotics", "Automation" ]
73,034,100
https://en.wikipedia.org/wiki/Stanhope%20Demonstrator
The Stanhope Demonstrator was the first machine to solve problems in logic. It was designed by Charles Stanhope, 3rd Earl Stanhope, to demonstrate consequences in logic symbolically. The first model was constructed in 1775. It consisted of two slides, coloured red and gray, mounted in a square brass frame. This could be used to demonstrate the solution to a syllogistic type of problem in which objects might have two different properties and the question was how many would have both properties. Scales marked zero to ten were used to set the numbers or proportions of objects with the two properties. This form of inference anticipated the numerically definite syllogism which Augustus De Morgan laid out in his book, Formal Logic, in 1847. Construction The device was a brass plate about four inches square which was mounted on a piece of mahogany three-quarters of an inch thick. There was an opening with a depression in the wood about one and a half inches square and half an inch deep. This opening was called the holon, meaning whole, and represented the full set of objects under consideration. A slide of red translucent glass could be inserted from the right across the holon. A slide of gray wood could be slid under the red slide. When the device was used for the "Rule for the Logic of Certainty", the gray slider was inserted from the left. When it was used for the "Rule for the Logic of Probability", the gray slider was inserted from above. The red and the gray sliders represented the two affirmative propositions which were being combined. Stanhope called these ho and los. At least four of the devices with this square style were built. In 1879, Robert Harley wrote that he had one, which had been given to him by Stanhope's great-grandson Arthur, who had kept one himself. The other two were owned by Henry Prevost Babbage – the son of Charles Babbage, who continued his father's work on the Analytical Engine. One of the devices was donated to the Science Museum, London, by the last Earl in 1953. Other styles, such as circular models, were constructed, but these were less convenient. See also Logical piano Venn diagram References Automated reasoning Computer-related introductions in the 18th century English inventions History of logic Mechanical calculators Mechanical computers One-of-a-kind computers
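The overlap rule that the square-frame device makes visible can also be stated numerically. The sketch below is an illustration of that inference (the numerically definite syllogism later formalized by De Morgan) rather than a model of the brass mechanism itself; the 0–10 scale mirrors the scales on the device.

```python
# Illustration of the inference the square-frame device makes visible
# (the "Rule for the Logic of Certainty"): with a whole of `holon` objects,
# `ho` having one property and `los` the other, at least ho + los - holon
# objects must have both.  This models the rule, not the brass mechanism.

def minimum_with_both(holon: int, ho: int, los: int) -> int:
    """Lower bound on the number of objects carrying both properties."""
    return max(ho + los - holon, 0)

# Out of 10 objects (the 0-10 scale of the device), if 8 have one property
# and 7 have the other, at least 5 must have both.
print(minimum_with_both(10, 8, 7))  # 5
```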
Stanhope Demonstrator
[ "Physics", "Technology" ]
474
[ "Physical systems", "Machines", "Mechanical computers" ]
51,585,932
https://en.wikipedia.org/wiki/Hari%20Krishan%20Jain
Hari Krishan Jain (28 May 1930 - 8 April 2019) was an Indian cytogeneticist and plant breeder, known for his contributions to the field of genetic recombination and its control at the interchromosome level. He served as chancellor of the Central Agricultural University, Imphal, and as director of the Indian Agriculture Research Institute, and was a recipient of honours such as the Rafi Ahmed Kidwai Award, the Borlaug Award and the Om Prakash Bhasin Award. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 1966, for his contributions to biological sciences. He received the fourth highest Indian civilian honor, the Padma Shri, in 1981. Biography Hari Krishan Jain, born on 28 May 1930 in a Jain family in Gurgaon, in the Indian state of Haryana, to Nemi Chand Jain and Chameli Devi, graduated in botany (BSc hons) from the University of Delhi in 1949, after which he secured an associateship from the Indian Agriculture Research Institute (IARI) in 1951. Subsequently, he pursued his doctoral studies at the Aberystwyth campus of the University College of Wales, on a Science Research Scholarship of the Royal Commission, London, to secure his PhD, and returned to India to start his career as a cytologist at IARI in 1956. He stayed at IARI until his superannuation from service as its director in 1983, during which time he served as the head of the Genetics Division from 1966 to 1978. In 1984, he became associated with the International Service for National Agricultural Research of the Consultative Group for International Agricultural Research (CGIAR), where he served as the deputy director general. Later, he continued his academic life at the Rajasthan College of Agriculture of Maharana Pratap University of Agriculture and Technology, Udaipur, until he was appointed as the chancellor of the Central Agricultural University, Imphal. Jain was married to Kusum Lata and the couple had two children, Neera and Reena. Legacy Jain's early research on Lilium, a genus of herbaceous plants, and its meiotic cell division revealed the correlation between chromosome condensation and nucleolar synthesis. After joining IARI, he and his colleagues worked on the cytological mechanisms of genetic recombination in Delphinium, a perennial flowering plant genus. His work contributed to the development of a protocol for controlling recombination at the interchromosome level, which was supported by subsequent research by others. Later, he worked on tomato and Drosophila (popularly known as fruit flies) and his studies assisted in the discovery of chemical mutagen specificity. He headed the wheat development programs of IARI and initiated three such programs to develop high-yielding varieties of wheat. Ribosomal synthesis in plant cells was another area of his research. He is credited with developing the concept of the national multilineal complex of varieties and the proposal of multiple and inter-cropping patterns, later popularized by the Indian Agricultural Research Institute. He authored five books, including Plant Breeding: Mendelian to Molecular Approaches, Genetics: Principles, Concepts and Implications, and Green Revolution: History, Impact and Future, as well as several articles which document the body of his work.
Jain served as a member of the Scientific Advisory Committee (SAC-C) to the Government of India (1982–83) and the Uttar Pradesh State Planning Commission (1978–80). He chaired the Food and Agriculture Committee of the Bhabha Atomic Research Centre (1980–83) and the Indian chapter of the Man and the Biosphere Programme of UNESCO (1978–83), and was a member of the advisory committee on biotechnology of the Department of Science and Technology (1982–83). He chaired the consultative group on agriculture of the International Council for Science (ICSU) (1973) and was an emeritus scientist of the Council of Scientific and Industrial Research, having been elected to the position in 1993. He also sat on the council of the Indian National Science Academy from 1979 to 1981 and served as the vice president of the National Academy of Agricultural Sciences from 2009 to 2011. Books Awards and honors Jain received the Shanti Swarup Bhatnagar Prize for Science and Technology, the highest award of the Council of Scientific and Industrial Research, in 1966 for his contributions to biological sciences. The Indian Council of Agricultural Research awarded him the Rafi Ahmed Kidwai Award the next year, and he was elected to the Jawaharlal Nehru Fellowship in 1973 for his project, A Study of the Evolving Concepts of the Genetics and their Agricultural & Social Implications. The Government of India included him in the Republic Day honors list for the civilian award of the Padma Shri in 1981, and he received the Borlaug Award in 1982. He received the Om Prakash Bhasin Award in 1986, and the National Academy of Agricultural Sciences honored him with the Dr. B. P. Pal Award in 1999. He also received the B. P. Pal Memorial Award of the Indian Science Congress Association in 2004. Jain was elected a fellow of the Indian National Science Academy in 1974 and became an elected fellow of the Indian Academy of Sciences in 1975. Two more Indian academies, the National Academy of Sciences, India and the National Academy of Agricultural Sciences, elected him as their fellow in 1988 and 1991 respectively. The Indian Agricultural Research Institute honored him with the degree of Doctor of Science (honoris causa) in 2005, and the Central Agricultural University, where he served as the vice chancellor, instituted an annual award, the Dr. H. K. Jain CAU Award, in his honor in 2015, to recognize excellence in agricultural research. See also Indian Agricultural Research Institute Central Agricultural University Genetic recombination Green Revolution Green Revolution in India Notes References External links Recipients of the Shanti Swarup Bhatnagar Award in Biological Science 1930 births Jawaharlal Nehru Fellows People from Gurgaon district Delhi University alumni Indian agriculturalists Alumni of the University of Wales Indian geneticists Plant breeding Scientists from Haryana Indian botanical writers Council of Scientific and Industrial Research Fellows of the Indian Academy of Sciences Fellows of the Indian National Science Academy Fellows of the National Academy of Sciences, India Fellows of the National Academy of Agricultural Sciences Recipients of the Padma Shri in science & engineering Heads of universities and colleges in India 2019 deaths Indian male writers Writers from Haryana 20th-century Indian botanists
Hari Krishan Jain
[ "Chemistry" ]
1,320
[ "Plant breeding", "Molecular biology" ]
51,591,140
https://en.wikipedia.org/wiki/Glossary%20of%20prestressed%20concrete%20terms
This page is a glossary of Prestressed concrete terms. A B C D E F G H I J K L M N O P Q R S T U V W Y See also Cable-stayed bridge Cantilever bridge Concrete Concrete beam Concrete slab Construction Glossary of engineering Glossary of civil engineering Glossary of structural engineering Incremental launch method Precast concrete Prestressed concrete Prestressed structure Reinforced concrete Segmental bridge Prestressing Wedges References Building engineering Civil engineering Prestressed concrete construction Structural engineering Prestressed Concrete Concrete Wikipedia glossaries using description lists
Glossary of prestressed concrete terms
[ "Technology", "Engineering" ]
119
[ "Structural engineering", "Building engineering", "Structural system", "Construction", "Prestressed concrete construction", "Civil engineering", "Concrete", "Architecture" ]
51,596,854
https://en.wikipedia.org/wiki/LiSiCA
LiSiCA (Ligand Similarity using Clique Algorithm) is ligand-based virtual screening software that searches for 2D and 3D similarities between a reference compound and a database of target compounds, which should be represented in Mol2 format. The similarities are expressed using Tanimoto coefficients and the target compounds are ranked accordingly. LiSiCA is also available as a PyMOL plugin on both Linux and Windows operating systems. Description As input, LiSiCA requires at least one reference compound and a database of target compounds. For 3D screening this database has to be a pregenerated database of conformations of the target compounds, and for 2D screening a topology, that is, a list of atoms and bonds, for each target compound. At each step the algorithm compares the reference compound to one of the target compounds based on their 2D or 3D representation. Both compounds (molecules) are converted to molecular graphs. In 2D and 3D screening the molecular graph vertices represent atoms. In 2D screening the edges of the molecular graph represent covalent bonds, while in 3D screening edges are drawn between every pair of vertices and have no chemical meaning. A product graph generated from the molecular graphs is then searched using a fast maximum clique algorithm to find the largest substructure common to both compounds. The similarity between compounds is calculated using Tanimoto coefficients and the target compounds are ranked according to their Tanimoto coefficients. Feature overview LiSiCA can search for 2D and 3D similarities between a reference compound and a database of target compounds. It takes as input at least one reference compound and a database of target compounds. By default it returns only the compound most similar to the reference compound out of all compounds in the database of target compounds. Other optional parameters LiSiCA uses are:
Number of CPU threads to use. Default value: try to detect the number of CPUs and use all of them or, failing that, use 1.
Product graph dimension. Possible input: 2, 3. Default value: 2.
Maximum allowed atom spatial distance difference for the 3D product graph, measured in angstroms. Default value: 1.0.
Maximum allowed shortest path difference for the 2D product graph, measured in the number of covalent bonds between atoms. Default value: 1.
Consider hydrogens. Default value: False.
Number of highest ranked molecules to write to output. Default value: 0.
Maximum allowed number of highest scoring conformations to be output. Default value: 1.
In addition, the LiSiCA PyMOL plugin also offers loading saved results. History LiSiCA Software (March 2015) LiSiCA PyMOL plugin (March 2016) Interesting fact The Slovene word lisica means 'fox', which is why the logo of LiSiCA software is a fox holding two molecules. References External links LiSiCA software LiSiCA plugin Molecular modelling Chemistry software
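As an illustration of how the reported similarity score can relate to the clique found in the product graph, the sketch below computes a Tanimoto coefficient from the size of a common substructure. The atom-counting convention is an assumption made for the sake of the example; the exact definition used by the program may differ.

```python
# Hypothetical illustration of how a Tanimoto coefficient can be derived from
# the size of the largest common substructure found by a maximum-clique
# search.  The atom-counting convention here is an assumption for the sake of
# the example; the exact definition used by the program may differ.

def tanimoto(common_atoms: int, ref_atoms: int, target_atoms: int) -> float:
    """T = c / (a + b - c) for a common substructure of c atoms."""
    return common_atoms / (ref_atoms + target_atoms - common_atoms)

# Reference molecule with 20 heavy atoms, target molecule with 25, and a
# largest common substructure of 15 atoms:
print(round(tanimoto(15, 20, 25), 3))  # 0.5
```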
LiSiCA
[ "Chemistry" ]
573
[ "Molecular physics", "Chemistry software", "Theoretical chemistry", "Molecular modelling", "nan" ]
49,287,688
https://en.wikipedia.org/wiki/Boson%20sampling
Boson sampling is a restricted model of non-universal quantum computation introduced by Scott Aaronson and Alex Arkhipov after the original work of Lidror Troyansky and Naftali Tishby, which explored the possible usage of boson scattering to evaluate expectation values of permanents of matrices. The model consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement computing tasks which are hard to implement with classical computers by using far fewer physical resources than a full linear-optical quantum computing setup. This advantage makes it an ideal candidate for demonstrating the power of quantum computation in the near term. Description Consider a multimode linear-optical circuit of N modes that is injected with M indistinguishable single photons (N > M). Then, the photonic implementation of the boson sampling task consists of generating a sample from the probability distribution of single-photon measurements at the output of the circuit. Specifically, this requires reliable sources of single photons (currently the most widely used ones are parametric down-conversion crystals), as well as a linear interferometer. The latter can be fabricated, e.g., with fused-fiber beam splitters, through silica-on-silicon or laser-written integrated interferometers, or electrically and optically interfaced optical chips. Finally, the scheme also necessitates high-efficiency single-photon-counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit. Therefore, based on these three ingredients, the boson sampling setup does not require any ancillas, adaptive measurements or entangling operations, as does, e.g., the universal optical scheme by Knill, Laflamme and Milburn (the KLM scheme). This makes it a non-universal model of quantum computation, and reduces the amount of physical resources needed for its practical realization. Specifically, suppose the linear interferometer is described by an N×N unitary matrix $U$ which performs a linear transformation of the creation (annihilation) operators of the circuit's input modes: $\hat{b}_j^\dagger = \sum_{i=1}^{N} U_{ji}\, \hat{a}_i^\dagger$. Here i (j) labels the input (output) modes, $\hat{a}_i^\dagger$ ($\hat{a}_i$) denotes the creation (annihilation) operators of the input modes, and $\hat{b}_j^\dagger$ ($\hat{b}_j$) denotes the creation (annihilation) operators of the output modes (i, j = 1, ..., N). An interferometer characterized by some unitary $U$ naturally induces a unitary evolution $\varphi_M(U)$ on M-photon states. Moreover, the map $U \mapsto \varphi_M(U)$ is a homomorphism between N×N unitary matrices and unitaries acting on the exponentially large Hilbert space of the system: simple counting arguments show that the size of the Hilbert space corresponding to a system of M indistinguishable photons distributed among N modes is given by the binomial coefficient $\binom{M+N-1}{M}$ (notice that since this homomorphism exists, not all unitaries acting on this larger Hilbert space can be realized as some $\varphi_M(U)$). Suppose the interferometer is injected with an input state of single photons $|\psi_{\mathrm{in}}\rangle = |n_1, n_2, \ldots, n_N\rangle$ (where $n_k$ is the number of photons injected into the kth mode).
Then, the state at the output of the circuit can be written down as $|\psi_{\mathrm{out}}\rangle = \varphi_M(U)\,|\psi_{\mathrm{in}}\rangle$. A simple way to understand the homomorphism between $U$ and $\varphi_M(U)$ is the following: one defines the action of $\varphi_M(U)$ on the multiphoton basis states $|n_1, \ldots, n_N\rangle$ through the transformation of the creation operators above, and extends it by linearity. Consequently, the probability $p(m_1, \ldots, m_N)$ of detecting $m_k$ photons at the kth output mode is given as $p(m_1, \ldots, m_N) = \dfrac{|\operatorname{Per}(U_{S,T})|^2}{m_1! \cdots m_N!\; n_1! \cdots n_N!}$. In the above expression, $\operatorname{Per}(U_{S,T})$ stands for the permanent of the matrix $U_{S,T}$, which is obtained from the unitary $U$ by repeating $n_i$ times its ith column and $m_j$ times its jth row. Usually, in the context of the boson sampling problem, the input state is taken of a standard form, denoted as $|1_M\rangle = |1, \ldots, 1, 0, \ldots, 0\rangle$, for which each of the first M modes of the interferometer is injected with a single photon. In this case the above expression reads $p(m_1, \ldots, m_N) = \dfrac{|\operatorname{Per}(U_{S,T})|^2}{m_1! \cdots m_N!}$, where the matrix $U_{S,T}$ is obtained from $U$ by keeping its first M columns and repeating $m_j$ times its jth row. Subsequently, the task of boson sampling is to sample either exactly or approximately from the above output distribution, given the unitary $U$ describing the linear-optical circuit as input. As detailed below, the appearance of the permanent in the corresponding statistics of single-photon measurements contributes to the hardness of the boson sampling problem.
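Since the entire output distribution is built from permanents of submatrices of $U$, a tiny numerical sketch may help make these objects concrete. The following Python snippet is an illustration only (not code from any boson sampling paper or library): it computes a permanent with Ryser's formula and evaluates the collision-free output probability for the standard input state discussed above; it assumes NumPy and is only practical for small photon numbers, which is exactly the point of the hardness results that follow.

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    """Permanent via Ryser's formula, exponentially many terms; fine only for small n."""
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            # product over rows of the sum of the selected columns
            total += (-1) ** r * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

def collision_free_probability(u: np.ndarray, out_modes: tuple) -> float:
    """p(m) = |Per(U_ST)|^2 for the standard input |1,...,1,0,...,0> and a
    collision-free output pattern (one photon in each mode of out_modes),
    so all factorials in the general formula equal 1."""
    m = len(out_modes)
    u_st = u[np.ix_(out_modes, range(m))]  # keep first M columns, pick M rows
    return float(abs(permanent(u_st)) ** 2)
```

Ryser's formula already requires exponentially many terms, which is the practical face of the #P-hardness discussed next.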
Complexity of the problem The main reason for the growing interest in the model of boson sampling is that, despite being non-universal, it is strongly believed to perform a computational task that is intractable for a classical computer. One of the main reasons behind this is that the probability distribution which the boson sampling device has to sample from is related to the permanent of complex matrices. The computation of the permanent is in the general case an extremely hard task: it falls in the #P-hard complexity class. Moreover, its approximation to within multiplicative error is a #P-hard problem as well. All current proofs of the hardness of simulating boson sampling on a classical computer rely on the strong computational consequences that its efficient simulation by a classical algorithm would have. Namely, these proofs show that an efficient classical simulation would imply the collapse of the polynomial hierarchy to its third level, a possibility that is considered very unlikely by the computer science community due to its strong computational implications (in line with the strong implications of the P=NP problem). Exact sampling The hardness proof of the exact boson sampling problem can be achieved following two distinct paths. Specifically, the first one uses the tools of computational complexity theory and combines the following two facts: approximating the probability of a specific measurement outcome at the output of a linear interferometer to within a multiplicative constant is a #P-hard problem (due to the complexity of the permanent); and, if a polynomial-time classical algorithm for exact boson sampling existed, then the above probability could be approximated to within a multiplicative constant in the BPP^NP complexity class, i.e. within the third level of the polynomial hierarchy. When combined, these two facts, along with Toda's theorem, result in the collapse of the polynomial hierarchy, which, as mentioned above, is highly unlikely to occur. This leads to the conclusion that there is no classical polynomial-time algorithm for the exact boson sampling problem. On the other hand, the alternative proof is inspired by a similar result for another restricted model of quantum computation – the model of instantaneous quantum computing. Namely, the proof uses the KLM scheme, which says that linear optics with adaptive measurements is universal for the class BQP. It also relies on the following facts: linear optics with postselected measurements is universal for PostBQP, i.e. the quantum polynomial-time class with postselection (a straightforward corollary of the KLM construction); the class PostBQP is equivalent to PP (the probabilistic polynomial-time class), PostBQP = PP; and the existence of a classical boson sampling algorithm implies the simulability of postselected linear optics in the PostBPP class (that is, classical polynomial time with postselection, known also as the class BPP_path). Again, the combination of these three results, as in the previous case, results in the collapse of the polynomial hierarchy. This makes the existence of a classical polynomial-time algorithm for the exact boson sampling problem highly unlikely. The best proposed classical algorithm for exact boson sampling runs in time $O(n 2^n + \mathrm{poly}(m, n))$ for a system with n photons and m output modes. This algorithm leads to an estimate of 50 photons required to demonstrate quantum supremacy with boson sampling. There is also an open-source implementation in R. Approximate sampling The above hardness proofs are not applicable to the realistic implementation of a boson sampling device, due to the imperfection of any experimental setup (including the presence of noise, decoherence, photon losses, etc.). Therefore, for practical needs one requires a hardness proof for the corresponding approximate task. The latter consists of sampling from a probability distribution that is close to the one given by $p(m_1, \ldots, m_N)$, in terms of the total variation distance. The understanding of the complexity of this problem then relies on several additional assumptions, as well as on two as yet unproven conjectures. Specifically, the proofs for the exact boson sampling problem cannot be directly applied here, since they are based on the #P-hardness of estimating the exponentially small probability of a specific measurement outcome. Thus, if a sampler "knew" which probability we wanted to estimate, then it could adversarially choose to corrupt it (as long as the task is approximate). That is why the idea is to "hide" the above probability inside an N×N random unitary matrix. This can be done knowing that any M×M submatrix of a unitary $U$, randomly chosen according to the Haar measure, is close in variation distance to a matrix of i.i.d. complex random Gaussian variables, provided that M ≤ N^{1/6} (Haar random matrices can be directly implemented in optical circuits by mapping independent probability density functions for their parameters to optical circuit components, i.e., beam splitters and phase shifters).
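The "hiding" step relies on being able to draw unitaries from the Haar measure. A standard numerical recipe for this, shown below as a sketch for illustration (not taken from any boson sampling implementation), is to QR-decompose a complex Gaussian matrix and fix the phases of the diagonal of R.

```python
import numpy as np

def haar_random_unitary(n, seed=None):
    """Draw an n x n unitary from the Haar measure via QR decomposition of a
    complex Gaussian matrix, with the usual phase fix on the diagonal of R."""
    rng = np.random.default_rng(seed)
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # rescales column j by the unit phase of r[j, j]

# A small M x M corner of a large Haar unitary looks approximately like a
# matrix of i.i.d. complex Gaussians (the regime M <= N**(1/6) quoted above).
u = haar_random_unitary(64, seed=0)
print(np.allclose(u @ u.conj().T, np.eye(64)))  # True: u is unitary
print(u[:3, :3])
```

In a photonic experiment the same distribution is targeted by randomizing the beam splitter and phase shifter settings, as noted above.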
Therefore, if the linear optical circuit implements a Haar random unitary matrix, the adversarial sampler will not be able to detect which of the exponentially many probabilities we care about, and thus will not be able to avoid its estimation. In this case the probability of interest is proportional to the squared absolute value of the permanent of the M×M matrix of i.i.d. Gaussians smuggled inside $U$. These arguments bring us to the first conjecture of the hardness proof of the approximate boson sampling problem – the permanent-of-Gaussians conjecture: approximating the permanent of a matrix $X$ of i.i.d. Gaussians to within multiplicative error is a #P-hard task. Moreover, the above conjecture can be linked to the estimation of $|\operatorname{Per}(X)|^2$, to which the probability of the given specific measurement outcome is proportional. However, to establish this link one has to rely on another conjecture – the permanent anticoncentration conjecture: there exists a polynomial Q such that for any M and δ > 0 the probability, over M×M matrices $X$ of i.i.d. Gaussians, that the inequality $|\operatorname{Per}(X)| < \dfrac{\sqrt{M!}}{Q(M, 1/\delta)}$ holds is smaller than δ. By making use of the above two conjectures (for which there is some supporting evidence), the final proof eventually states that the existence of a classical polynomial-time algorithm for the approximate boson sampling task implies the collapse of the polynomial hierarchy. It is also worth mentioning another fact important to the proof of this statement, namely the so-called bosonic birthday paradox (in analogy with the well-known birthday paradox). The latter states that if M identical bosons are scattered among N ≫ M² modes of a linear interferometer with no two bosons in the same input mode, then with high probability two bosons will not be found in the same output mode either. This property has been experimentally observed with two and three photons in integrated interferometers of up to 16 modes. On the one hand, this feature facilitates the implementation of a restricted boson sampling device. Namely, if the probability of having more than one photon at the output of a linear optical circuit is negligible, one does not require photon-number-resolving detectors anymore: on-off detectors are sufficient for the realization of the setup. Although the probability of a specific measurement outcome at the output of the interferometer is related to the permanent of submatrices of a unitary matrix, a boson sampling machine does not allow its estimation. The main reason behind this is that the corresponding detection probability is usually exponentially small. Thus, in order to collect enough statistics to approximate its value, one has to run the quantum experiment for an exponentially long time. Therefore, the estimate obtained from a boson sampler is not more efficient than running the classical polynomial-time algorithm by Gurvits for approximating the permanent of any matrix to within additive error. Variants Scattershot boson sampling As already mentioned above, the implementation of a boson sampling machine requires a reliable source of many indistinguishable photons, and this requirement currently remains one of the main difficulties in scaling up the complexity of the device. Namely, despite recent advances in photon generation techniques using atoms, molecules, quantum dots and color centers in diamonds, the most widely used method remains the parametric down-conversion (PDC) mechanism. The main advantages of PDC sources are the high photon indistinguishability, collection efficiency and relatively simple experimental setups. However, one of the drawbacks of this approach is its non-deterministic (heralded) nature. Specifically, suppose the probability of generating a single photon by means of a PDC crystal is ε. Then, the probability of generating M single photons simultaneously is ε^M, which decreases exponentially with M. In other words, in order to generate the input state for the boson sampling machine, one would have to wait for an exponentially long time, which would kill the advantage of the quantum setup over a classical machine. Subsequently, this characteristic restricted the use of PDC sources to proof-of-principle demonstrations of a boson sampling device. Recently, however, a new scheme has been proposed to make the best use of PDC sources for the needs of boson sampling, greatly enhancing the rate of M-photon events.
This approach has been named scattershot boson sampling, and it consists of connecting N (N > M) heralded single-photon sources to different input ports of the linear interferometer. Then, by pumping all N PDC crystals with simultaneous laser pulses, the probability of generating exactly M photons (one from each of some M sources) is given as $\binom{N}{M}\,\varepsilon^M (1-\varepsilon)^{N-M}$. Therefore, for N ≫ M, this results in an exponential improvement in the single-photon generation rate with respect to the usual, fixed-input boson sampling with M sources. This setting can also be seen as a problem of sampling N two-mode squeezed vacuum states generated from N PDC sources.
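As a rough numerical illustration of this rate enhancement (a toy calculation with made-up parameter values, not data from any experiment), compare the probability that M dedicated sources all fire in the same pulse with the probability that exactly M out of N sources fire:

```python
from math import comb

# Toy comparison of M-photon event rates for the fixed-input and scattershot
# configurations described above.  The pair-generation probability per pulse
# (eps) and the source counts are made-up illustrative values.

def fixed_input_rate(eps: float, m: int) -> float:
    """All M dedicated sources must fire in the same pulse."""
    return eps ** m

def scattershot_rate(eps: float, n: int, m: int) -> float:
    """Exactly M of the N sources fire (any M of them will do)."""
    return comb(n, m) * eps ** m * (1 - eps) ** (n - m)

eps, m, n = 0.01, 5, 50
print(fixed_input_rate(eps, m))     # 1e-10
print(scattershot_rate(eps, n, m))  # ~1.3e-4: several orders of magnitude larger
```

With these illustrative values the scattershot configuration produces usable M-photon events several orders of magnitude more often than the fixed-input one.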
Scattershot boson sampling is still intractable for a classical computer: in the conventional setup we fixed the columns that defined our M×M submatrix and only varied the rows, whereas now we vary the columns too, depending on which M out of the N PDC crystals generated single photons. Therefore, the proof can be constructed here similarly to the original one. Furthermore, scattershot boson sampling has also recently been implemented with six photon-pair sources coupled to integrated photonic circuits of nine and thirteen modes, an important leap towards a convincing experimental demonstration of quantum computational supremacy. The scattershot boson sampling model can be further generalized to the case where both legs of the PDC sources are subject to linear optical transformations (in the original scattershot case, one of the arms is used for heralding, i.e., it goes through the identity channel). Such a twofold scattershot boson sampling model is also computationally hard, as proven by making use of the symmetry of quantum mechanics under time reversal. Gaussian boson sampling Another photonic implementation of boson sampling concerns Gaussian input states, i.e. states whose quasiprobability Wigner distribution function is a Gaussian one. The hardness of the corresponding sampling task can be linked to that of scattershot boson sampling. Namely, the latter can be embedded into the conventional boson sampling setup with Gaussian inputs. For this, one has to generate two-mode entangled Gaussian states and apply a Haar-random unitary $U$ to their "right halves", while doing nothing to the others. Then we can measure the "left halves" to find out which of the input states contained a photon before we applied $U$. This is precisely equivalent to scattershot boson sampling, except for the fact that our measurement of the herald photons has been deferred till the end of the experiment, instead of happening at the beginning. Therefore, approximate Gaussian boson sampling can be argued to be hard under precisely the same complexity assumption as approximate ordinary or scattershot boson sampling. Gaussian resources can be employed at the measurement stage as well. Namely, one can define a boson sampling model where a linear optical evolution of input single-photon states is concluded by Gaussian measurements (more specifically, by eight-port homodyne detection that projects each output mode onto a squeezed coherent state). Such a model deals with continuous-variable measurement outcomes, which, under certain conditions, is a computationally hard task. Finally, a linear optics platform for implementing a boson sampling experiment where input single photons undergo an active (non-linear) Gaussian transformation is also available. This setting makes use of a set of two-mode squeezed vacuum states as a prior resource, with no need of single-photon sources or an in-line nonlinear amplification medium. This variant uses the Hafnian, a generalization of the permanent. Classically simulable boson sampling tasks The above results state that the existence of a polynomial-time classical algorithm for the original boson sampling scheme with indistinguishable single photons (in the exact and approximate cases), for scattershot, as well as for the general Gaussian boson sampling problems, is highly unlikely. Nevertheless, there are some non-trivial realizations of the boson sampling problem that allow for its efficient classical simulation. One such example is when the optical circuit is injected with distinguishable single photons. In this case, instead of summing the probability amplitudes corresponding to photonic many-particle paths, one has to sum the corresponding probabilities (i.e. the squared absolute values of the amplitudes). Consequently, the detection probability will be proportional to the permanent of submatrices of the (component-wise) squared absolute value of the unitary $U$. The latter is now a non-negative matrix. Therefore, although the exact computation of the corresponding permanent is a #P-complete problem, its approximation can be performed efficiently on a classical computer, due to the seminal algorithm by Jerrum, Sinclair and Vigoda. In other words, approximate boson sampling with distinguishable photons is efficiently classically simulable. Another instance of classically simulable boson sampling setups consists of sampling from the probability distribution of coherent states injected into the linear interferometer. The reason is that at the output of a linear optical circuit coherent states remain coherent states, and do not create any quantum entanglement among the modes. More precisely, only their amplitudes are transformed, and the transformation can be efficiently calculated on a classical computer (the computation comprises matrix multiplication). This fact can be used to perform corresponding sampling tasks for another set of states: so-called classical states, whose Glauber-Sudarshan P function is a well-defined probability distribution. These states can be represented as a mixture of coherent states due to the optical equivalence theorem. Therefore, picking random coherent states distributed according to the corresponding P function, one can perform efficient classical simulation of boson sampling from this set of classical states. Experimental implementations The above requirements for the photonic boson sampling machine allow for its small-scale construction by means of existing technologies. Consequently, shortly after the theoretical model was introduced, four different groups simultaneously reported its realization. Specifically, this included the implementation of boson sampling with: two and three photons scattered by a six-mode linear unitary transformation (represented by two orthogonal polarizations in 3×3 spatial modes of a fused-fiber beam splitter), by a collaboration between the University of Queensland and MIT; three photons in different modes of a six-mode silica-on-silicon waveguide circuit, by a collaboration between the Universities of Oxford, Shanghai, London and Southampton; three photons in a femtosecond laser-written five-mode interferometer, by a collaboration between the universities of Vienna and Jena; and three photons in a femtosecond laser-written five-mode interferometer implementing a Haar-random unitary transformation, by a collaboration between Milan's Institute of Photonics and Nanotechnology, Universidade Federal Fluminense and Sapienza University of Rome.
Later on, more complex boson sampling experiments have been performed, increasing the number of spatial modes of random interferometers up to 9 and 13 modes, and realizing a 6-mode fully reconfigurable integrated circuit. These experiments altogether constitute the proof-of-principle demonstrations of an operational boson sampling device, and a route towards its larger-scale implementations. Implementation of scattershot boson sampling A first scattershot boson sampling experiment has recently been implemented using six photon-pair sources coupled to integrated photonic circuits with 13 modes. The 6 photon-pair sources were obtained via type-II PDC processes in 3 different nonlinear crystals (exploiting the polarization degree of freedom). This allowed sampling simultaneously from 8 different input states. The 13-mode interferometer was realized by a femtosecond laser-writing technique on alumino-borosilicate glass. This experimental implementation represents a leap towards an experimental demonstration of quantum computational supremacy. Proposals with alternative photonic platforms There are several other proposals for the implementation of photonic boson sampling. This includes, e.g., the scheme for arbitrarily scalable boson sampling using two nested fiber loops. In this case, the architecture employs time-bin encoding, whereby the incident photons form a pulse train entering the loops. Meanwhile, dynamically controlled loop coupling ratios allow the construction of arbitrary linear interferometers. Moreover, the architecture employs only a single point of interference and may thus be easier to stabilize than other implementations. Another approach relies on the realization of unitary transformations on temporal modes based on dispersion and pulse shaping. Namely, passing consecutively heralded photons through time-independent dispersion and measuring the output time of the photons is equivalent to a boson sampling experiment. With time-dependent dispersion, it is also possible to implement arbitrary single-particle unitaries. This scheme requires a much smaller number of sources and detectors and does not necessitate a large system of beam splitters. Certification The output of a universal quantum computer running, for example, Shor's factoring algorithm, can be efficiently verified classically, as is the case for all problems in the non-deterministic polynomial-time (NP) complexity class. It is, however, not clear that a similar structure exists for the boson sampling scheme. Namely, as the latter is related to the problem of estimating matrix permanents (falling into the #P-hard complexity class), it is not understood how to verify correct operation for large versions of the setup. Specifically, the naive verification of the output of a boson sampler by computing the corresponding measurement probabilities represents a problem intractable for a classical computer. A first relevant question is whether it is possible to distinguish between uniform and boson-sampling distributions by performing a polynomial number of measurements. The initial argument introduced in Ref. stated that as long as one makes use of symmetric measurement settings the above is impossible (roughly speaking, a symmetric measurement scheme does not allow for labeling the output modes of the optical circuit). However, within current technologies the assumption of a symmetric setting is not justified (the tracking of the measurement statistics is fully accessible), and therefore the above argument does not apply.
It is then possible to define a rigorous and efficient test to discriminate the boson sampling statistics from an unbiased probability distribution. The corresponding discriminator is correlated to the permanent of the submatrix associated with a given measurement pattern, but can be efficiently calculated. This test has been applied experimentally to distinguish between a boson sampling and a uniform distribution in the 3-photon regime with integrated circuits of 5, 7, 9 and 13 modes. The test above does not distinguish between more complex distributions, such as quantum and classical, or between fermionic and bosonic statistics. A physically motivated scenario to be addressed is the unwanted introduction of distinguishability between photons, which destroys quantum interference (this regime is readily accessible experimentally, for example by introducing a temporal delay between photons). The opportunity then exists to tune between ideally indistinguishable (quantum) and perfectly distinguishable (classical) data and measure the change in a suitably constructed metric. This scenario can be addressed by a statistical test which performs a one-on-one likelihood comparison of the output probabilities. This test requires the calculation of a small number of permanents, but does not need the calculation of the full expected probability distribution. Experimental implementation of the test has been successfully reported in integrated laser-written circuits for both standard boson sampling (3 photons in 7-, 9- and 13-mode interferometers) and the scattershot version (3 photons in 9- and 13-mode interferometers with different input states). Another possibility is based on the bunching property of indistinguishable photons. One can analyze the probability of finding k-fold coincidence measurement outcomes (without any multiply populated input mode), which is significantly higher for distinguishable particles than for bosons, due to the bunching tendency of the latter. Finally, leaving the space of random matrices, one may focus on specific multimode setups with certain features. In particular, the analysis of the effect of bosonic clouding (the tendency for bosons to favor events with all particles in the same half of the output array of a continuous-time many-particle quantum walk) has been proven to discriminate the behavior of distinguishable and indistinguishable particles in this specific platform. A different approach to confirming that the boson sampling machine behaves as the theory predicts is to make use of fully reconfigurable optical circuits. With large-scale single-photon and multiphoton interference verified with predictable multimode correlations in a fully characterized circuit, a reasonable assumption is that the system maintains correct operation as the circuit is continuously reconfigured to implement a random unitary operation. To this end, one can exploit quantum suppression laws (the probability of specific input-output combinations is suppressed when the linear interferometer is described by a Fourier matrix or other matrices with relevant symmetries). These suppression laws can be classically predicted in efficient ways. This approach also allows one to exclude other physical models, such as mean-field states, which mimic some collective multiparticle properties (including bosonic clouding).
The implementation of a Fourier matrix circuit in a fully reconfigurable 6-mode device has been reported, and experimental observations of the suppression law have been shown for 2 photons in 4- and 8-mode Fourier matrices. Alternative implementations and applications Apart from the photonic realization of the boson sampling task, several other setups have been proposed. This includes, e.g., the encoding of bosons into the local transverse phonon modes of trapped ions. The scheme allows deterministic preparation and high-efficiency readout of the corresponding phonon Fock states and universal manipulation of the phonon modes through a combination of inherent Coulomb interaction and individual phase shifts. This scheme is scalable and relies on recent advances in ion trapping techniques (several dozens of ions can be successfully trapped, for example, in linear Paul traps by making use of anharmonic axial potentials). Another platform for implementing the boson sampling setup is a system of interacting spins: recent observations show that boson sampling with M particles in N modes is equivalent to the short-time evolution with M excitations in the XY model of 2N spins. Several additional assumptions are needed here, including a small boson bunching probability and efficient error postselection. This scalable scheme, however, is rather promising in the light of the considerable development in the construction and manipulation of coupled superconducting qubits, and specifically the D-Wave machine. The task of boson sampling shares peculiar similarities with the problem of determining molecular vibronic spectra: a feasible modification of the boson sampling scheme results in a setup that can be used for the reconstruction of a molecule's Franck–Condon profiles (for which no efficient classical algorithm is currently known). Specifically, the task now is to input specific squeezed coherent states into a linear interferometer that is determined by the properties of the molecule of interest. This prominent observation has therefore spread interest in the implementation of the boson sampling task well beyond its fundamental basis. It has also been suggested to use a superconducting resonator network boson sampling device as an interferometer. This application is assumed to be practical, as small changes in the couplings between the resonators will change the sampling results. Sensing of variations in the parameters capable of altering the couplings is thus achieved when comparing the sampling results to an unaltered reference. Variants of the boson sampling model have been used to construct classical computational algorithms aimed, e.g., at the estimation of certain matrix permanents (for instance, permanents of positive-semidefinite matrices related to the corresponding open problem in computer science) by combining tools proper to quantum optics and computational complexity. Coarse-grained boson sampling has been proposed as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications. The first related proof-of-principle experiment was performed with a photonic boson-sampling machine (fabricated by a direct femtosecond laser-writing technique), and confirmed many of the theoretical predictions. Gaussian boson sampling has been analyzed as a search component for computing the binding propensity between molecules of pharmacological interest as well.
See also Quantum random circuits Cross-entropy benchmarking Linear optical quantum computing KLM protocol References External links QUCHIP project Quantum Information Lab – Sapienza: video on boson sampling Quantum Information Lab – Sapienza: video on scattershot boson sampling The Qubit Lab – Boson Sampling Quantum information science Quantum optics Quantum algorithms
Boson sampling
[ "Physics" ]
6,232
[ "Quantum optics", "Quantum mechanics" ]
65,838,044
https://en.wikipedia.org/wiki/ML-SA1
ML-SA1 is a chemical compound which acts as an "agonist" (i.e. channel opener) of the TRPML family of calcium channels. It has mainly been studied for its role in activating TRPML1 channels, although it also shows activity at the less studied TRPML2 and TRPML3 subtypes. TRPML1 is important for the function of lysosomes, and ML-SA1 has been used to study several disorders resulting from impaired lysosome function, including mucolipidosis type IV and Niemann-Pick's disease type C, as well as other conditions such as stroke and Alzheimer's disease. References Phthalimides Amides
ML-SA1
[ "Chemistry" ]
151
[ "Amides", "Functional groups" ]
42,978,555
https://en.wikipedia.org/wiki/Al-Ca%20composite
Al-Ca composite is a high-conductivity, high-strength, lightweight composite consisting of sub-micron-diameter pure calcium metal filaments embedded inside a pure aluminium metal matrix. The material is still in the development phase, but it has potential use as an overhead high-voltage power transmission conductor. It could also be used wherever an exceptionally light, high-strength conductor is needed. Its physical properties make it especially well-suited for DC transmission. Compared with conventional conductors such as aluminium-conductor steel-reinforced cable (ACSR), all aluminium alloy conductors (AAAC), aluminium conductor alloy reinforced (ACAR), aluminium conductor composite reinforced (ACCR) and ACCC conductor, which conduct direct current well and alternating current somewhat less well (due to the skin effect), Al-Ca conductor is essentially a single uniform material with high DC conductivity, allowing the core strands and the outer strands of a conductor cable to all be the same wire type. This conductor is inherently strong, so there is no need for a strong (usually poorly conductive) core to support its own weight as is done in conventional conductors. This eliminates the "bird caging", spooling, and thermal fatigue problems caused by thermal expansion coefficient mismatch between the core and outer strands. The Al-Ca phase interfaces strengthen the composite substantially, but do not have a noticeable effect on restricting the mean free path of electrons, which gives the composite both high strength and high conductivity, a combination that is normally difficult to achieve with both pure metals and alloys. The high strength and light weight could reduce the number of towers needed per kilometer for long distance transmission lines. Since towers and their foundations often account for 50% of a powerline's construction cost, building fewer towers would save a substantial fraction of total construction costs. The high strength also could increase transmission reliability in wind/ice loading situations. The high conductivity has the potential to reduce ohmic losses. Al-Ca composite conductor was invented by Russell and Anderson at Ames Laboratory of the U.S. Department of Energy with the goal of developing the next generation of power transmission cables. Al/Ca composite is produced by powder metallurgy and severe deformation processing (extrusion, swaging, wire drawing). This process would be roughly two to three times more expensive than conventional melt processing for ACSR. But the cost saving on tower construction is projected to be substantially larger than the extra cost of the conducting cables. During deformation processing, the Ca powder particles deform into filaments surrounded by the Al matrix, which avoids exposing calcium, a reactive element, to air and moisture. The corrosion resistance of this composite has been found to be similar to that of pure aluminium. Al-Ca composite has good microstructural stability even above 300 °C. The formation of intermetallic compounds at the interface would stabilize the microstructure to avoid the degradation of its various properties during exposure to elevated temperatures, such as those encountered during emergency overload situations. Newest development An Al/Ca (20 vol%) nanofilamentary metal-metal composite was produced by powder metallurgy and severe plastic deformation. 
Fine Ca metal powders (~200 μm) were produced by centrifugal atomization, mixed with pure Al powder, and deformed by warm extrusion, swaging, and wire drawing to a true strain of 12.9. The Ca powder particles became fine Ca nanofilaments that reinforce the composite substantially by interface strengthening. The conductivity of the composite is slightly lower than the rule-of-mixtures prediction due to minor quantities of impurity inclusions. The elevated-temperature performance of this composite was also evaluated by differential scanning calorimetry and resistivity measurements. The ultimate tensile strength is as high as 480 MPa, twice that of ACSR. The electrical conductivity of 33.02 (μΩ·m)−1 is higher than that of most commonly used conductors. References Aluminium alloys Calcium
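As a back-of-the-envelope illustration of the rule-of-mixtures comparison mentioned above, the sketch below estimates the axial conductivity of a 20 vol% Ca / 80 vol% Al composite. The resistivity values are approximate room-temperature handbook figures assumed for illustration; they are not values reported in the study.

rho_al = 2.65e-8          # ohm*m, pure aluminium (approximate handbook value)
rho_ca = 3.36e-8          # ohm*m, pure calcium (approximate handbook value)
f_ca = 0.20               # volume fraction of Ca filaments

sigma_al = 1.0 / rho_al   # roughly 37.7 (uOhm*m)^-1
sigma_ca = 1.0 / rho_ca   # roughly 29.8 (uOhm*m)^-1
# parallel (axial) rule of mixtures for continuous filaments along the wire axis
sigma_rom = (1.0 - f_ca) * sigma_al + f_ca * sigma_ca
print(f"rule-of-mixtures estimate: {sigma_rom / 1e6:.1f} (uOhm*m)^-1")
# The measured 33.02 (uOhm*m)^-1 quoted above falls somewhat below this estimate,
# consistent with additional electron scattering from impurity inclusions.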
Al-Ca composite
[ "Chemistry" ]
812
[ "Alloys", "Aluminium alloys" ]
56,019,069
https://en.wikipedia.org/wiki/Stannylene
Stannylenes (R2Sn:) are a class of organotin(II) compounds that are analogues of carbene. Unlike carbene, which usually has a triplet ground state, stannylenes have a singlet ground state, since the valence orbitals of tin (Sn) have less tendency to form hybrid orbitals and thus the electrons in the 5s orbital remain paired. Free stannylenes are stabilized by steric protection. Adducts with Lewis bases are also known. History The first persistent stannylene, [(Me3Si)2CH]2Sn, was reported by Michael F. Lappert in 1973. Lappert used the same synthetic approach to synthesize the first diamidostannylene [(Me3Si)2N]2Sn in 1974. The short-lived, transient stannylene Me2Sn has been generated by thermolysis of cyclo-(Me2Sn)6. Synthesis and characterization Persistent stannylene Most alkyl stannylenes have been synthesized by alkylation of tin(II) dihalides with organolithium reagents. For example, the first stannylene, [(Me3Si)2CH]2Sn, was synthesized using (Me3Si)2CHLi and SnCl2. In some cases, stannylenes have been prepared by reduction of a tin(IV) compound by KC8. Amidostannylenes can also be synthesized using a tin(II) dihalide and a lithium amide. Short-lived stannylene The isolation of a transient alkyl stannylene is more difficult. The first isolation of dimethylstannylene is believed to have been achieved by thermolysis of the cyclostannane (Me2Sn)6, which was the product of the condensation of Me2Sn(NEt2)2 and Me2SnH2. The evidence came from the vibrational frequencies of dimethylstannylene identified by infrared spectroscopy, which are consistent with the calculated values. The existence of this elusive SnMe2 was further confirmed by the discovery of visible-light absorption matching the calculated electronic transition of SnMe2 in the gas phase. Another method to prepare short-lived stannylenes is laser flash photolysis using a tetraalkyltin(IV) compound (e.g. SnMe4) as a precursor. The generation of the stannylene can be monitored by transient UV-VIS spectroscopy. Structure and bonding Stannylenes can be viewed as sp2-hybridized with a vacant 5p orbital and a lone pair. This gives rise to their red color, which originates from the n to p transition. With specific types of ligands, the electron deficiency of a monomeric stannylene is reduced by an agostic interaction with a B–H bond. This concept was demonstrated by Mark Kenyon and coworkers in 2006 when they synthesized the cyclic dialkylstannylene [{n-Pr2P(BH3)}(Me3Si)CCH2]2Sn. The crystal structure of the synthesized compound showed the arrangement of one B–H bond toward the Sn atom, with a B–H⋯Sn distance of 2.03 Å. The mitigation of the Sn electron deficiency was supported by the spectroscopic data, especially the 119Sn NMR spectra, which showed drastically lower chemical shifts (587 and 787 ppm compared to 2323 ppm in an analogous dialkylstannylene), indicating more electron density around Sn in this case. Reactivity Oligomerization Small, unstable stannylenes (e.g. dimethylstannylene) undergo self-oligomerization yielding cyclic oligostannanes, which can be used as stannylene sources. More bulky stannylenes (e.g. Lappert's stannylene), on the other hand, tend to form a dimer. The nature of the Sn–Sn bond in a stannylene dimer is rather different from the C–C bond in a carbene dimer (i.e. an alkene). Whereas an alkene develops typical double-bond character and the molecule has a planar geometry, a stannylene dimer has a trans-bent geometry. The double bond in a stannylene dimer can be considered as two donor-acceptor interactions. 
The electron localization function (ELF) analysis of the stannylene dimer shows a disynaptic basin (electrons in bonding orbitals) on both Sn atoms, indicating that the interaction between the two Sn atoms consists of two unusual bent dative bonds. Apart from that, the stability of a stannylene dimer is also affected by the steric repulsion and dispersion attraction between bulky substituents. Insertion reaction Alkylstannylenes can react with various reagents (e.g. alkyl halides, enones, dienes) in an oxidative addition (or insertion) fashion. The reaction between stannylene and 9,10-phenanthrolinedione produces an EPR signal that was identified to be the 9,10-phenanthrenedione radical anion, indicating that this reaction proceeds via a radical mechanism. Cycloaddition Although stannylenes are more stable than their lighter carbene analogs, they readily undergo [2+4] cycloaddition reactions with Z-alkenes. The addition of (CH(SiMe3)2)2Sn to 2,7-diphenylocta-2,3,5,6-tetraene proceeds in a cheletropic fashion, as allowed by the Woodward–Hoffmann rules. Metal center for oxidative addition and reductive elimination In terms of the SnII/SnIV couple, certain stannylenes resemble transition metals. The singlet-triplet energy separation is considered to have a strong effect on oxidative addition reactivity. By utilizing a very strong σ-donor boryl (-BX2) ligand, it was demonstrated that not only molecular hydrogen but also E–H bonds (E = B, Si, O, N) can be oxidatively added to Sn. In the ammonia and water cases, the oxidative addition product could also undergo reductive elimination, yielding O–B or N–B bond formation. See also Carbene analogs Silylene References Organotin compounds Tin(II) compounds Substances discovered in the 1970s
Stannylene
[ "Chemistry" ]
1,381
[ "Functional groups", "Octet-deficient functional groups" ]
56,024,352
https://en.wikipedia.org/wiki/Roof%20seamer
A roof seamer is a portable roll forming machine that is used to install mechanically seamed structural standing-seam metal roof panels, as part of an overall metal construction building envelope system. The machine is small and portable enough to be handled by an operator on top of a roof. The machine is applied to the overlapping area where two parallel roof panels meet. The action of the machine bends the two panels together to form a joint that has weather-tight qualities superior to other types of roof systems and cladding. History Commonly, a roof seamer is developed as an afterthought. Since roof seamers are dependent on the metal roof system being used, their development was secondary to the roof panel. A roof seamer is a development that replaced a manual process and the hand tools of the past. A hammer and a small anvil were the tools used for hemming and seaming roof panels together at the edge where each panel meets the next in sequence. In 1976, a German immigrant and inventor, Ewald Stellrecht, helped develop an early version of a metal roof panel portable roll forming machine in Exton, PA. From this, a version of the roof seamer was also created. Since that time, great strides and innovations have been made in the development of roof seaming machines. Also, in the 1970s, Butler Manufacturing developed and released a proprietary roof system that featured the use of an electric roof seamer, dubbed the Roof Runner®, along with hand tools and an operating platform. Many developments have been made since that time to make roof seamers lighter, faster, and more user-friendly. In 1989, Developmental Industries refocused the niche market by developing a line of roof seamers that were universal to many different panel manufacturers' products and were available to rent by the end-user. Traditionally, purchasing a roof seamer meant that it would work with one specific roof panel, manufactured by a specific roof panel manufacturer. Opening up the option of renting allowed builders and installers to buy from different sources and greatly reduce their costs, making metal roofing a more accessible option for many who would not have considered it before. Design and function Today roof seamers are used around the world. As the popularity of sustainable building products has risen in recent years, the need for a roof seaming tool has also increased. Most roof seaming machines can have a life expectancy of 20 or more years if proper maintenance and care are exercised. Variables Many variables exist when using a roof seamer that may affect the final product outcome. All of the following variables should be considered and decided on during the design process of the building: Material: Metal roof panels are made from a variety of materials including coated carbon steel, aluminum, zinc, and copper. The type and strength of the material must be considered, not only for tensile strength but also flexural strength. The quality of materials should be considered based on the mill and country it was sourced from. Most often these materials come to the panel manufacturer in the form of a coil, which is then fed into a roll forming machine to produce the finished roof panel shape and dimensions. Material Coating: A coating can be layers of other metals (material treated through the process of galvanization), paint, or extruded coatings. 
Thickness: Different gauges of metal will present a range of thicknesses that must be accounted for with the forming dies of the roof seaming machine. This, in addition to the thickness of the coating, should be factored in to produce an acceptable seamed profile without compromising the coating. Geography: The particular location of the structure will play into its performance over time. Weather patterns, temperature ranges, and prolonged exposure to the elements can affect the thermal movement of the metal roof panels. Structural Load: Many things can produce "load" on a roof. The most common loads that the finished seam must resist are environmental loads, such as wind, snow, rain, and seismic forces. Sealants: Sealants are almost always used inside of the panel lapped seams. These sealants can be applied in a factory setting or at the construction site. In either case, liquid caulk sealant and butyl tapes are the most common, and the amount, location, and application method are specified to provide maximum protection for the building system as a whole. Desired results: The "finished seam profile" can be specified by a roof panel manufacturer as an option for the architectural designer to consider. Factors that can affect the desired results include aesthetic appearance and environmental loads. Roof Pitch: A roof's pitch is simply the angle of the roof. This will create resistance for the roof seamer to overcome. The steeper the pitch, the harder the roof seamer may have to work to ascend and descend the roof panels. Fastening method: Mechanically seamed standing seam roof systems use a hidden fastener system. This system consists of a "clip" that is fabricated out of metal and attached to the structure's substrate with screws. When the panels are installed over these clips, the clips are hidden from view and formed into the seam of the panels with the roof seamer. This avoids the fastener and screw penetrations through the metal roof panel that other types of metal roofs rely on to secure the panels to the structure. Ancillary Attachments: Roof-mounted HVAC, solar panels, snow guards, and many other products are often attached to standing seam metal roofs. The additional load, the attachment methods, and the use of dissimilar materials must all be considered, specifically to prevent galvanic corrosion and premature degradation of the materials. 
In all cases, training should be completed before operating alone with a roof seamer to teach proper preventive maintenance steps, simple adjustments and troubleshooting in the event of a machine problem. In 2015, the Metal Construction Association published a "best practices" guide for proper use and operation of roof seaming tools. Maintenance As with any tool, proper maintenance will increase the usefulness and life expectancy. Proper maintenance extends beyond the roof seamer, to the working surface on the roof. Before operating the roof seamer, ensure that the roof panel and seam are clean and clear of debris that could mark or gouge the forming dies. During operation check lubrication points and other recommended maintenance steps. In addition, most manufacturers will recommend scheduled service on an annual basis to ensure internal components are not worn or damaged. Other tools In conjunction with the roof seaming machine, there are an array of hand tools that are used. The most common tool that is usually required when operating a roof seamer is a "hand crimper". The hand crimper is used to "flat form" the panel seams into the appropriate configuration to prepare the seam for the roof seamer to be applied. Other common tools are snips, nibblers, and shears. Support organizations There are numerous professional and trade organizations that support metal roofing, metal construction and the core market where roof seamers are used. The Metal Roofing Alliance (MRA), Metal Construction Association (MCA), Metal Buildings Manufacturers Association (MBMA), the Metal Buildings Contractors and Erectors Association (MBCEA), and the National Roofing Contractors Association (NRCA) are just a few. In addition, many distributors and suppliers offer resources and support documentation for their particular product offerings. References Machines Roofing materials
Roof seamer
[ "Physics", "Technology", "Engineering" ]
1,717
[ "Physical systems", "Machines", "Mechanical engineering" ]
56,025,694
https://en.wikipedia.org/wiki/National%20Prize%20for%20Applied%20Sciences%20and%20Technologies%20%28Chile%29
The National Prize for Applied and Technological Sciences () was created in 1992 as one of the replacements for the National Prize for Sciences under Law 19169. The other two prizes in this same area are for Exact Sciences and Natural Sciences. It is part of the National Prize of Chile. Jury The jury is made up of the Minister of Education, who convenes it, two academics assigned by the Council of Rectors, the President of the National Commission for Scientific and Technological Research (CONICYT), and the last recipient of the prize. Prize The prize consists of: A diploma A cash prize amounting to 6,562,457 pesos () which is adjusted every year, according to the previous year's consumer price index A pension of 20 (approximately US$1,600) in January of the corresponding year, which remains constant for the rest of the year Winners 1992: Raúl Sáez 1994: 1996: Julio Meneghello 1998: Fernando Mönckeberg Barros 2000: Andrés Weintraub Pohorille 2002: Pablo Valenzuela 2004: Juan Asenjo 2006: Edgar Kausel 2008: José Miguel Aguilera 2010: Juan Carlos Castilla 2012: Ricardo Uauy 2014: José Rodríguez Pérez 2016: 2018: See also CONICYT List of agriculture awards List of engineering awards References 1992 establishments in Chile Chilean science and technology awards Invention awards Engineering awards Materials science awards Agriculture awards 1992 in Chilean law
National Prize for Applied Sciences and Technologies (Chile)
[ "Materials_science", "Technology", "Engineering" ]
293
[ "Materials science", "Engineering awards", "Agriculture awards", "Science and technology awards", "Materials science awards", "Invention awards" ]
56,028,417
https://en.wikipedia.org/wiki/Optical%20cluster%20state
Optical cluster states are a proposed tool to achieve quantum computational universality in linear optical quantum computing (LOQC). As direct entangling operations with photons often require nonlinear effects, probabilistic generation of entangled resource states has been proposed as an alternative path to the direct approach. Creation of the cluster state On a silicon photonic chip, one of the most common platforms for implementing LOQC, there are two typical choices for encoding quantum information, though many more options exist. Photons have useful degrees of freedom in the spatial modes of the possible photon paths or in the polarization of the photons themselves. The way in which a cluster state is generated varies with which encoding has been chosen for implementation. Storing information in the spatial modes of the photon paths is often referred to as dual rail encoding. In a simple case, one might consider the situation where a photon has two possible paths, a horizontal path with creation operator and a vertical path with creation operator , where the logical zero and one states are then represented by and . Single qubit operations are then performed by beam splitters, which allow manipulation of the relative superposition weights of the modes, and phase shifters, which allow manipulation of the relative phases of the two modes. This type of encoding lends itself to the Nielsen protocol for generating cluster states. In encoding with photon polarization, logical zero and one can be encoded via the horizontal and vertical states of a photon, e.g. and . Given this encoding, single qubit operations can be performed using waveplates. This encoding can be used with the Browne-Rudolph protocol. Nielsen protocol In 2004, Nielsen proposed a protocol to create cluster states, borrowing techniques from the Knill-Laflamme-Milburn protocol (KLM protocol) to probabilistically create controlled-Z connections between qubits which, when performed on a pair of states (normalization being ignored), forms the basis for cluster states. While the KLM protocol requires error correction and a fairly large number of modes in order to get very high probability two-qubit gate, Nielsen's protocol only requires a success probability per gate of greater than one half. Given that the success probability for a connection using ancilla photons is , relaxation of the success probability from nearly one to anything over one half presents a major advantage in resources, as well as simply reducing the number of required elements in the photonic circuit. To see how Nielsen brought about this improvement, consider the photons being generated for qubits as vertices on a two dimensional grid, and the controlled-Z operations being probabilistically added edges between nearest neighbors. Using results from percolation theory, it can be shown that as long as the probability of adding edges is above a certain threshold, there will exist a complete grid as a sub-graph with near unit probability. Because of this, Nielsen's protocol doesn't rely on every individual connection being successful, just enough of them that the connections between photons allow a grid. Yoran-Reznik protocol Among the first proposals of utilizing resource states for optical quantum computing was the Yoran-Reznik protocol in 2003. 
While the proposed resource in this protocol was not exactly a cluster state, it brought many of the same key concepts to the attention of those considering the possibilities of optical quantum computing and still required connecting multiple separate one-dimensional chains of entangled photons via controlled-Z operations. This protocol is somewhat unique in that it utilizes both the spatial mode degree of freedom along with the polarization degree of freedom to help generate entanglement between qubits. Given a horizontal path, denoted by , and a vertical path, denoted by , a 50:50 beam splitter connecting the paths followed by a -phase shifter on path , we can perform the transformations where denotes a photon with polarization on path . In this way, we have the path of the photon entangled with its polarization. This is sometimes referred to as hyperentanglement, a situation in which the degrees of freedom of a single particle are entangled with each other. This, paired with the Hong-Ou-Mandel effect and projective measurements on the polarization state, can be used to create path entanglement between photons in a linear chain. These one-dimensional chains of entangled photons still need to be connected via controlled-Z operations, similar to the KLM protocol. These controlled-Z connections between chains are still probabilistic, relying on measurement-dependent teleportation with special resource states. However, due to the fact that this method does not include Fock measurements on the photons being used for computation as the KLM protocol does, the probabilistic nature of implementing controlled-Z operations presents much less of a problem. In fact, as long as connections occur with probability greater than one half, the entanglement present between chains will be enough to perform useful quantum computation, on average. Browne-Rudolph protocol An alternative approach to building cluster states that focuses entirely on photon polarization is the Browne-Rudolph protocol. This method rests on performing parity checks on a pair of photons to stitch together already entangled sets of photons, meaning that this protocol requires entangled photon sources. Browne and Rudolph proposed two ways of doing this, called type-I and type-II fusion. Type-I fusion In type-I fusion, photons with either vertical or horizontal polarization are injected into modes and , connected by a polarizing beam splitter. Each of the photons sent into this system is part of a Bell pair that this method will try to entangle. Upon passing through the polarizing beam splitter, the two photons will go opposite ways if they have the same polarization or the same way if they have the opposite polarization, e.g. or . Then, on one of these modes, a projective measurement onto the basis is performed. If the measurement is successful, i.e. if it detects anything, then the detected photon is destroyed, but the remaining photons from the Bell pairs become entangled. Failure to detect anything results in an effective loss of the involved photons in a way that breaks any chain of entangled photons they were on. This can make attempts to connect already developed chains potentially risky. Type-II fusion Type-II fusion works similarly to type-I fusion, with the differences being that a diagonal polarizing beam splitter is used and the pair of photons is measured in the two-qubit Bell basis. 
A successful measurement here involves measuring the pair to be in a Bell state with no relative phase between the superposition of states (e.g. as opposed to ). This again entangles any two clusters already formed. A failure here performs local complementation on the local subgraph, making an existing chain shorter rather than cutting it in half. In this way, while it requires the use of more qubits when combining entangled resources, failed attempts to connect two chains together are not as costly for type-II fusion as they are for type-I fusion. Computing with cluster states Once a cluster state has been successfully generated, computation can be done with the resource state directly by applying measurements to the qubits on the lattice. This is the model of measurement-based quantum computation (MQC), and it is equivalent to the circuit model. Logical operations in MQC come about from the byproduct operators that occur during quantum teleportation. For example, given a single qubit state , one can connect this qubit to a plus state via a two-qubit controlled-Z operation. Then, upon measuring the first qubit (the original ) in the Pauli-X basis, the original state of the first qubit is teleported to the second qubit with a measurement-outcome-dependent extra rotation, which one can see from the partial inner product of the measurement acting on the two-qubit state: . for denoting the measurement outcome as either the eigenstate of Pauli-X for or the eigenstate for . A two-qubit state connected by a pair of controlled-Z operations to the state yields a two-qubit operation on the teleported state after measuring the original qubits: . for measurement outcomes and . This basic concept extends to arbitrarily many qubits, and thus computation is performed by the byproduct operators of teleportation down a chain. Adjusting the desired single-qubit gates is simply a matter of adjusting the measurement basis on each qubit, and non-Pauli measurements are necessary for universal quantum computation. Experimental Implementations Spatial encoding Path-entangled two-qubit states have been generated in laboratory settings on silicon photonic chips in recent years, making important steps in the direction of generating optical cluster states. Among methods of doing this, it has been shown experimentally that spontaneous four-wave mixing can be used with the appropriate use of microring resonators and other waveguides for filtering to perform on-chip generation of two-photon Bell states, which are equivalent to two-qubit cluster states up to local unitary operations. To do this, a short laser pulse is injected into an on-chip waveguide that splits into two paths. This forces the pulse into a superposition of the possible directions it could go. The two paths are coupled to microring resonators that allow circulation of the laser pulse until spontaneous four-wave mixing occurs, taking two photons from the laser pulse and converting them into a pair of photons, called the signal and idler, with different frequencies, in a way that conserves energy. In order to prevent the generation of multiple photon pairs at once, the procedure takes advantage of the conservation of energy and ensures that there is only enough energy in the laser pulse to create a single pair of photons. 
Because of this restriction, spontaneous four-wave mixing can only occur in one of the microring resonators at a time, meaning that the superposition of paths that the laser pulse could take is converted into a superposition of paths the two photons could be on. Mathematically, if denotes the laser pulse and the paths are labeled as and , the process can be written as where is the representation of having photons on path . With the state of the two photons being in this kind of superposition, they are entangled, which can be verified by tests of Bell inequalities. Polarization encoding Polarization-entangled photon pairs have also been produced on-chip. The setup involves a silicon wire waveguide that is split in half by a polarization rotator. This process, like the entanglement generation described for the dual rail encoding, makes use of the nonlinear process of spontaneous four-wave mixing, which can occur in the silicon wire on either side of the polarization rotator. However, the geometry of these wires is designed such that horizontal polarization is preferred in the conversion of laser pump photons to signal and idler photons. Thus when the photon pair is generated, both photons should have the same polarization, i.e. . The polarization rotator is then designed with the specific dimensions such that horizontal polarization is switched to vertical polarization. Thus any pairs of photons generated before the rotator exit the waveguide with vertical polarization and any pairs generated on the other end of the wire exit the waveguide still having horizontal polarization. Mathematically, the process is, up to overall normalization, . Assuming that equal space on each side of the rotator makes spontaneous four-wave mixing equally likely on each side, the output state of the photons is maximally entangled: . States generated this way could potentially be used to build a cluster state using the Browne-Rudolph protocol. References Quantum information science Quantum optics
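To make the byproduct operators discussed in the "Computing with cluster states" section concrete, the following is a minimal numerical sketch, an illustration under assumed conventions rather than code from any referenced implementation: an arbitrary qubit is CZ-entangled with a fresh plus state, the first qubit is measured in the Pauli-X basis, and the input reappears on the second qubit as X^m H applied to the original state, i.e. a Hadamard gate accompanied by the outcome-dependent byproduct X^m.

import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # Pauli-X
CZ = np.diag([1, 1, 1, -1]).astype(complex)                   # controlled-Z
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

psi = np.array([0.6, 0.8j], dtype=complex)      # arbitrary normalised input state
state = CZ @ np.kron(psi, plus)                 # entangle the input with a fresh |+> qubit

for m, outcome in enumerate([plus, minus]):     # Pauli-X basis outcomes m = 0, 1
    # project qubit 1 onto the measured outcome; keep the (unnormalised) qubit-2 state
    reduced = np.tensordot(outcome.conj(), state.reshape(2, 2), axes=(0, 0))
    reduced = reduced / np.linalg.norm(reduced)
    # the teleported state is X^m H |psi>; undo the byproduct to recover |psi>
    recovered = H @ np.linalg.matrix_power(X, m) @ reduced
    fidelity = abs(np.vdot(psi, recovered))
    print(f"outcome m={m}: overlap with the input state = {fidelity:.6f}")   # 1.0 for both outcomes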
Optical cluster state
[ "Physics" ]
2,424
[ "Quantum optics", "Quantum mechanics" ]
56,030,164
https://en.wikipedia.org/wiki/%CE%A7-bounded
In graph theory, a -bounded family of graphs is one for which there is some function such that, for every integer the graphs in with (clique number) can be colored with at most colors. The function is called a -binding function for . These concepts and their notations were formulated by András Gyárfás. The use of the Greek letter chi in the term -bounded is based on the fact that the chromatic number of a graph is commonly denoted . An overview of the area can be found in a survey of Alex Scott and Paul Seymour. Nontriviality It is not true that the family of all graphs is -bounded. As , and showed, there exist triangle-free graphs of arbitrarily large chromatic number, so for these graphs it is not possible to define a finite value of . Thus, -boundedness is a nontrivial concept, true for some graph families and false for others. Specific classes Every class of graphs of bounded chromatic number is (trivially) -bounded, with equal to the bound on the chromatic number. This includes, for instance, the planar graphs, the bipartite graphs, and the graphs of bounded degeneracy. Complementarily, the graphs in which the independence number is bounded are also -bounded, as Ramsey's theorem implies that they have large cliques. Vizing's theorem can be interpreted as stating that the line graphs are -bounded, with . The claw-free graphs more generally are also -bounded with . This can be seen by using Ramsey's theorem to show that, in these graphs, a vertex with many neighbors must be part of a large clique. This bound is nearly tight in the worst case, but connected claw-free graphs that include three mutually-nonadjacent vertices have even smaller chromatic number, . Other -bounded graph families include: The perfect graphs, with The graphs of boxicity two, which is the intersection graphs of axis-parallel rectangles, with (big O notation) The graphs of bounded clique-width The intersection graphs of scaled and translated copies of any compact convex shape in the plane The polygon-circle graphs, with The circle graphs, with and (generalizing circle graphs) the "outerstring graphs", intersection graphs of bounded curves in the plane that all touch the unbounded face of the arrangement of the curves The outerstring graph is an intersection graph of curves that lie in a common half-plane and have one endpoint on the boundary of that half-plane The intersection graphs of curves that cross a fixed curve between 1 and times The even-hole-free graphs, with , as every such graph has a vertex whose neighborhood is the union of two cliques However, although intersection graphs of convex shapes, circle graphs, and outerstring graphs are all special cases of string graphs, the string graphs themselves are not -bounded. They include as a special case the intersection graphs of line segments, which are also not -bounded. Unsolved problems According to the Gyárfás–Sumner conjecture, for every tree , the graphs that do not contain as an induced subgraph are -bounded. For instance, this would include the case of claw-free graphs, as a claw is a special kind of tree. However, the conjecture is known to be true only for certain special trees, including paths and radius-two trees. A -bounded class of graphs is polynomially -bounded if it has a -binding function that grows at most polynomially as a function of . As every -vertex graph contains an independent set with cardinality at least , all polynomially -bounded classes have the Erdős–Hajnal property. 
Another problem on -boundedness was posed by Louis Esperet, who asked whether every hereditary class of graphs that is -bounded is also polynomially -bounded. A strong counterexample to Esperet's conjecture was announced in 2022 by Briański, Davies, and Walczak, who proved that there exist -bounded hereditary classes whose function can be chosen arbitrarily as long as it grows more quickly than a certain cubic polynomial. References External links Chi-bounded, Open Problem Garden Graph coloring
Χ-bounded
[ "Mathematics" ]
864
[ "Graph coloring", "Mathematical relations", "Graph theory" ]
56,030,165
https://en.wikipedia.org/wiki/Gy%C3%A1rf%C3%A1s%E2%80%93Sumner%20conjecture
In graph theory, the Gyárfás–Sumner conjecture asks whether, for every tree and complete graph , the graphs with neither nor as induced subgraphs can be properly colored using only a constant number of colors. Equivalently, it asks whether the -free graphs are -bounded. It is named after András Gyárfás and David Sumner, who formulated it independently in 1975 and 1981 respectively. It remains unproven. In this conjecture, it is not possible to replace by a graph with cycles. As Paul Erdős and András Hajnal have shown, there exist graphs with arbitrarily large chromatic number and, at the same time, arbitrarily large girth. Using these graphs, one can obtain graphs that avoid any fixed choice of a cyclic graph and clique (of more than two vertices) as induced subgraphs, and exceed any fixed bound on the chromatic number. The conjecture is known to be true for certain special choices of , including paths, stars, and trees of radius two. It is also known that, for any tree , the graphs that do not contain any subdivision of are -bounded. References External links Graphs with a forbidden induced tree are chi-bounded, Open Problem Garden Graph coloring Conjectures Unsolved problems in graph theory
Gyárfás–Sumner conjecture
[ "Mathematics" ]
263
[ "Unsolved problems in mathematics", "Graph coloring", "Graph theory", "Conjectures", "Unsolved problems in graph theory", "Mathematical relations", "Mathematical problems" ]
54,455,799
https://en.wikipedia.org/wiki/Stronger%20uncertainty%20relations
Heisenberg's uncertainty relation is one of the fundamental results in quantum mechanics. Later Robertson proved the uncertainty relation for two general non-commuting observables, which was strengthened by Schrödinger. However, the conventional uncertainty relation like the Robertson-Schrödinger relation cannot give a non-trivial bound for the product of variances of two incompatible observables because the lower bound in the uncertainty inequalities can be null and hence trivial even for observables that are incompatible on the state of the system. The Heisenberg–Robertson–Schrödinger uncertainty relation was proved at the dawn of quantum formalism and is ever-present in the teaching and research on quantum mechanics. After about 85 years of existence of the uncertainty relation this problem was solved recently by Lorenzo Maccone and Arun K. Pati. The standard uncertainty relations are expressed in terms of the product of variances of the measurement results of the observables and , and the product can be null even when one of the two variances is different from zero. However, the stronger uncertainty relations due to Maccone and Pati provide different uncertainty relations, based on the sum of variances that are guaranteed to be nontrivial whenever the observables are incompatible on the state of the quantum system. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., He et al., and Ref. due to Huang.) The Maccone–Pati uncertainty relations The Heisenberg–Robertson or Schrödinger uncertainty relations do not fully capture the incompatibility of observables in a given quantum state. The stronger uncertainty relations give non-trivial bounds on the sum of the variances for two incompatible observables. For two non-commuting observables and the first stronger uncertainty relation is given by where , , is a vector that is orthogonal to the state of the system, i.e., and one should choose the sign of so that this is a positive number. The other non-trivial stronger uncertainty relation is given by where is a unit vector orthogonal to . The form of implies that the right-hand side of the new uncertainty relation is nonzero unless is an eigenstate of . One can prove an improved version of the Heisenberg–Robertson uncertainty relation which reads as The Heisenberg–Robertson uncertainty relation follows from the above uncertainty relation. Remarks In quantum theory, one should distinguish between the uncertainty relation and the uncertainty principle. The former refers solely to the preparation of the system which induces a spread in the measurement outcomes, and does not refer to the disturbance induced by the measurement. The uncertainty principle captures the measurement disturbance by the apparatus and the impossibility of joint measurements of incompatible observables. The Maccone–Pati uncertainty relations refer to preparation uncertainty relations. These relations set strong limitations for the nonexistence of common eigenstates for incompatible observables. The Maccone–Pati uncertainty relations have been experimentally tested for qutrit systems. The new uncertainty relations not only capture the incompatibility of observables but also of quantities that are physically measurable (as variances can be measured in the experiment). References Other sources Research Highlight, NATURE ASIA, 19 January 2015, "Heisenberg's uncertainty relation gets stronger" Quantum mechanics Mathematical physics
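As a concrete numerical check of the first stronger uncertainty relation described above, the sketch below (an illustration under assumed conventions; the observables, the number of trials and the tolerance are arbitrary choices, not taken from the references) verifies on random qubit states that the sum of variances of two Pauli observables stays above the bound ±i⟨[A,B]⟩ + |⟨ψ|(A ± iB)|ψ⊥⟩|², where |ψ⊥⟩ is a state orthogonal to |ψ⟩ and the sign is chosen so that the commutator term is positive.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli-X
B = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli-Z

def variance(op, psi):
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean ** 2

for _ in range(1000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)
    psi_perp = np.array([-np.conj(psi[1]), np.conj(psi[0])])   # orthogonal to psi (unique for a qubit, up to phase)
    lhs = variance(A, psi) + variance(B, psi)
    comm = np.vdot(psi, (A @ B - B @ A) @ psi)                 # <[A,B]>, purely imaginary
    for s in (+1, -1):
        bound = (s * 1j * comm).real + abs(np.vdot(psi, (A + s * 1j * B) @ psi_perp)) ** 2
        # the inequality holds for both signs; the sign making the commutator
        # term positive gives the stronger, non-trivial bound
        assert lhs >= bound - 1e-9
print("sum-of-variances bound verified on 1000 random qubit states")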
Stronger uncertainty relations
[ "Physics", "Mathematics" ]
707
[ "Applied mathematics", "Theoretical physics", "Mathematical physics", "Quantum mechanics" ]
54,456,309
https://en.wikipedia.org/wiki/Flashed%20glass
Flashed glass, or flash glass, is a type of glass created by coating a colorless gather of glass with one or more thin layers of colored glass. This is done by placing a piece of melted glass of one color into another piece of melted glass of a different color and then blowing the glass. As well as its use for glass vessels, it has been very widely used in making stained glass since medieval times, often in combination with "pot metal glass", made by colouring molten glass, giving colour all through the sheet. The colored glass can be partly or completely etched away (through exposure to acid or via sandblasting), resulting in colorless spots where the colored glass has been removed. Flashed glass can be made from various colors of glass. A finished piece of flashed glass appears translucent. See also Cased glass Glass engraving Satsuma Kiriko cut glass Stained glass References Glass Glass types
Flashed glass
[ "Physics", "Chemistry" ]
186
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
54,457,921
https://en.wikipedia.org/wiki/Alternating%20current%20electrospinning
Alternating current electrospinning is a fiber formation technique to produce micro- and nanofibers from polymer solutions under the dynamic drawing force of the electrostatic field with periodically changing polarity. The main benefit of alternating current electrospinning is that multiple times higher productivities are achievable compared to widely used direct current electrospinning setups. References Nanotechnology Spinning
Alternating current electrospinning
[ "Materials_science", "Engineering" ]
81
[ "Nanotechnology", "Materials science" ]
67,303,095
https://en.wikipedia.org/wiki/Simple-As-Possible%20computer
The Simple-As-Possible (SAP) computer is a simplified computer architecture designed for educational purposes and described in the book Digital Computer Electronics by Albert Paul Malvino and Jerald A. Brown. The SAP architecture serves as an example in Digital Computer Electronics for building and analyzing complex logical systems with digital electronics. Digital Computer Electronics successively develops three versions of this computer, designated as SAP-1, SAP-2, and SAP-3. Each of the last two build upon the immediate previous version by adding additional computational, flow of control, and input/output capabilities. SAP-2 and SAP-3 are fully Turing-complete. The instruction set architecture (ISA) that the computer final version (SAP-3) is designed to implement is patterned after and upward compatible with the ISA of the Intel 8080/8085 microprocessor family. Therefore, the instructions implemented in the three SAP computer variations are, in each case, a subset of the 8080/8085 instructions. Variants Ben Eater's Design YouTuber and former Khan Academy employee Ben Eater created a tutorial building an 8-bit Turing-complete SAP computer on breadboards from logical chips (7400-series) capable of running simple programs such as computing the Fibonacci sequence. Eater's design consists of the following modules: An adjustable-speed (upper limitation of a few hundred Hertz) clock module that can be put into a "manual mode" to step through the clock cycles. Three register modules (Register A, Register B, and the Instruction Register) that "store small amounts of data that the CPU is processing." An arithmetic logic unit (ALU) capable of adding and subtracting 8-bit 2's complement integers from registers A and B. This module also has a flags register with two possible flags (Z and C). Z stands for "zero," and is activated if the ALU outputs zero. C stands for "carry," and is activated if the ALU produces a carry-out bit. A RAM module capable of storing 16 bytes. This means that the RAM is 4-bit addressable. As Eater's website puts it, "this is by far its [the computer's] biggest limitation". A 4-bit program counter that keeps track of the current processor instruction, corresponding to a 4-bit addressable RAM. An output register that displays its content on four 7-segment displays, capable of displaying both unsigned and 2's complement signed integers. The 7-segment display outputs are controlled by EEPROMs, which are programmed using an Arduino microcontroller. A bus that connects these components together. The components connect to the bus using tri-state buffers. A "control logic" module that defines "the opcodes the processor recognizes and what happens when it executes each instruction," as well as enabling the computer to be Turing-complete. The CPU microcodes are programmed into EEPROMs using an Arduino microcontroller. Ben Eater's design has inspired multiple other variants and improvements, primarily on Eater's Reddit forum. Some examples of improvements are: An expanded RAM module capable of storing 256 bytes, utilizing the entire 8-bit address space. With the help of segmentation registers, the RAM module can be further expanded to a 16-bit address space, matching the standard for 8-bit computers. A stack register that allows incrementing and decrementing the stack pointer. 
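As a software illustration of the fetch-decode-execute cycle that the hardware modules above implement, here is a minimal Python sketch. It is an illustrative toy rather than a description of Eater's control logic: only a handful of instructions are modelled, with 4-bit opcode values following the SAP-1 subset described in Digital Computer Electronics, and the memory layout of the demonstration program is a hypothetical example.

LDA, ADD, SUB, OUT, HLT = 0x0, 0x1, 0x2, 0xE, 0xF   # SAP-1-style 4-bit opcodes

def run(ram):
    a = 0                      # accumulator (register A)
    pc = 0                     # 4-bit program counter
    while True:
        byte = ram[pc]         # fetch
        opcode, operand = byte >> 4, byte & 0x0F   # decode: high nibble / low nibble
        pc = (pc + 1) & 0x0F
        if opcode == LDA:      # execute
            a = ram[operand]
        elif opcode == ADD:
            a = (a + ram[operand]) & 0xFF
        elif opcode == SUB:
            a = (a - ram[operand]) & 0xFF
        elif opcode == OUT:
            print(a)           # output register -> 7-segment display
        elif opcode == HLT:
            return

# demonstration program: load ram[14], add ram[15], display the sum, halt
ram = [0] * 16                 # 16 bytes of RAM, matching the 4-bit address space
ram[0] = (LDA << 4) | 14
ram[1] = (ADD << 4) | 15
ram[2] = (OUT << 4)
ram[3] = (HLT << 4)
ram[14], ram[15] = 28, 14
run(ram)                       # prints 42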
References External links SAP-1 online simulator (in English, Spanish and Catalan) Design and Implementation of a Simple-As-Possible 1 (SAP-1) Computer using an FPGA and VHDL An implementation of Simple As Possible computer - SAP1, written in VHDL (in English and Portuguese) SAP-1 simulation using Digital Works (in English and Portuguese) Some of Ben Eater's computer videos including the 8-bit computer. Computer architecture
Simple-As-Possible computer
[ "Technology", "Engineering" ]
804
[ "Computers", "Computer engineering", "Computer architecture" ]
67,313,101
https://en.wikipedia.org/wiki/Arthur%20John%20Ahearn
Arthur John Ahearn (20 June 1902 – 12 June 1990) was an American physicist and mass spectrometry researcher. Career and research Ahearn graduated from Ripon College in 1923 and went to graduate school at the University of Minnesota, where he completed his PhD in 1931. By that time he had already moved to Bell Labs, where he worked from 1929 until his retirement in 1966. His research at Bell Labs involved electron emission, electron optics and electron microscopy, thermionics, and mass spectrometry. During his time at Bell Labs, he worked with Bruce Hannay to develop the first spark source mass spectrometer. They showed that this approach can be used to analyze semiconductors, specifically to measure dopants in semiconductors at high sensitivity. Ahearn received the Spectroscopy award at Pittcon in 1971. He and his wife Ella had two children. References 1902 births 1990 deaths Spectroscopists 20th-century American physicists Ripon College (Wisconsin) alumni University of Minnesota alumni Mass spectrometrists
Arthur John Ahearn
[ "Physics", "Chemistry" ]
215
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
67,315,994
https://en.wikipedia.org/wiki/Long-range%20restriction%20mapping
Long-range restriction mapping is an alternative genomic mapping technique to short-range, also called fine-scale mapping. Both forms utilize restriction enzymes in order to decipher the previously unknown order of DNA segments; the main difference between the two is the amount of DNA that comprises the final map. The unknown DNA is broken into many smaller fragments by these restriction enzymes at specific sites on the molecule, and then the fragments can later be analyzed by their individual sizes. A final long-range map can span hundreds to thousands of kilobases of genetic data at many different loci. The long-range maps cover very large genomic regions in order to display the physical relationship of DNA segments targeted by restriction enzymes. These restriction sites are an integral component of the formation of long-range maps. Genetic linkage data can be combined with gel electrophoresis procedures to provide gene order as well as distance on chromosomes. To accomplish this, the genetic linkage information is used to create a theory-based hypothesis: one that can be tested with gel electrophoresis and extended DNA sequencing protocols. Construction The formation of a long-range restriction map is similar to a short-range map, but there is an increase in experimental complexity as the size of the genomic section increases. To begin this process, amplification of the DNA has to occur. Endonuclease-mediated long polymerase chain reactions allow for DNA fragments of up to 40 kb to be amplified. In some practices, two equivalents of DNA are restricted at one site, and a third equivalent is restricted in both of the sites. With enough purified plasmid DNA and digestive enzymes, the pulsed-field gel electrophoresis (PFGE) process can begin: alternating voltages are combined with standard gel electrophoresis, which results in a much longer procedure. To run this gel effectively, the DNA of interest must be combined with a specific rare-cutting restriction endonuclease. After running the gel and imaging it, usually in UV light, the size of the DNA fragments can be determined. So far this process is very similar to the short-range mapping technique. After pulsed-field gel electrophoresis, a Southern blotting technique is performed, and specific fragments are detected using molecular probes to complete the production of large-scale restriction maps. The map is created via an elaborate and deductive process of interpreting data. From the PFGE and the Southern blotting, an experimenter must analyze the molecular probes in order to find a descending number of similarities in a ranking of these fragments. In some novel experiments the type of gel electrophoresis has been adapted to try to increase the resolution of genetic information. Capillary electrophoresis has been used in conjunction with laser-induced fluorescence detection to elevate the process of restriction mapping. This type of electrophoresis focuses on the specific charges of ions and their movement in an electrophoretic field instead of whole DNA fragments. The fluorescence of these atoms allows for visualization of atomic movement; essentially the process zooms in on the field of view of a standard gel electrophoresis. Applications These types of restriction maps can provide insight into the identification of genes in many disorders, eventually increasing the possibility of successful therapies. 
Duchenne muscular dystrophy, cystic fibrosis, and retinitis pigmentosa are a few of many genetic diseases that have benefited from the information restriction mapping has provided. The biochemical origins of these diseases, along with the majority of other genetic diseases, are unknown and this can hinder the progress of preventative or even symptomatic treatment. Knowing that mutation is the source of novel genetic variation, being able to connect the physical distance of these nucleotide changes with disease-linked structural novelties is the most pertinent application of long-range restriction mapping. Even the study of illnesses that are not congenital have benefitted from long-range restriction mapping, specifically HPV-, HIV-, and certain hormone connected brain tumors. The organization that restriction mapping provides allows for novel experiments to draw connections between genetic disparities and life-afflicting diseases. Restriction mapping can often be cheaper than full genetic sequencing, allowing labs to visually represent aspects of the genome they might not otherwise have access to. Advancements in computer programming has allowed some automated software to produce potential restriction maps, forming another path to visualization when experimental costs get too high. See also Pulsed-field gel electrophoresis, detailed methodology on this specific version of gel electrophoresis References Long-Range Restriction Mapping Genomics techniques Restriction enzymes
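To make the construction step described earlier concrete, the following sketch predicts the fragment sizes that a rare-cutting enzyme such as NotI (recognition site GCGGCCGC) would produce from a linear DNA molecule; these are the sizes one would then compare against the bands in a PFGE lane. The sequence is synthetic, and cutting is simplified to the start of the recognition site; both are illustrative assumptions, not details from the article.

import random

def digest(sequence, site):
    # return fragment lengths for a linear molecule, cutting at the start of each recognition site
    cuts, start = [], 0
    while True:
        i = sequence.find(site, start)
        if i == -1:
            break
        cuts.append(i)
        start = i + 1
    bounds = [0] + cuts + [len(sequence)]
    return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

random.seed(4)
dna = "".join(random.choice("ACGT") for _ in range(50_000))          # synthetic ~50 kb molecule
dna = dna[:12_000] + "GCGGCCGC" + dna[12_000:30_000] + "GCGGCCGC" + dna[30_000:]
fragments = sorted(digest(dna, "GCGGCCGC"), reverse=True)
print(fragments)   # fragment sizes (bp) to compare against the bands on a pulsed-field gel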
Long-range restriction mapping
[ "Chemistry", "Biology" ]
953
[ "Genetics techniques", "Restriction enzymes", "Genomics techniques", "Molecular biology techniques" ]
68,710,620
https://en.wikipedia.org/wiki/Automated%20decision-making
Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society, requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences. Overview There are different definitions of ADM based on the level of automation involved. Some definitions suggest that ADM involves decisions made through purely technological means without human input, such as the EU's General Data Protection Regulation (Article 22). However, ADM technologies and applications can take many forms ranging from decision-support systems that make recommendations for human decision-makers to act on, sometimes known as augmented intelligence or 'shared decision-making', to fully automated decision-making processes that make decisions on behalf of individuals or organizations without human involvement. Models used in automated decision-making systems can be as simple as checklists and decision trees through to artificial intelligence and deep neural networks (DNN). Since the 1950s computers have gone from being able to do basic processing to having the capacity to undertake complex, ambiguous and highly skilled tasks such as image and speech recognition, gameplay, scientific and medical analysis and inferencing across multiple data sources. ADM is now being increasingly deployed across all sectors of society and many diverse domains from entertainment to transport. An ADM system (ADMS) may involve multiple decision points, data sets, and technologies (ADMT) and may sit within a larger administrative or technical system such as a criminal justice system or business process. Data Automated decision-making involves using data as input to be analyzed within a process, model, or algorithm or for learning and generating new models. ADM systems may use and connect a wide range of data types and sources depending on the goals and contexts of the system, for example, sensor data for self-driving cars and robotics, identity data for security systems, demographic and financial data for public administration, medical records in health, criminal records in law. This can sometimes involve vast amounts of data and computing power. Data quality The quality of the available data and its suitability for use in ADM systems are fundamental to the outcomes. Data quality is often highly problematic for many reasons. Datasets are often highly variable: large-scale data may be controlled by corporations or governments, restricted for privacy or security reasons, incomplete, biased, limited in terms of time or coverage, or may measure and describe terms in different ways, among many other issues. For machines to learn from data, large corpora are often required, which can be challenging to obtain or compute; however, where available, they have provided significant breakthroughs, for example, in diagnosing chest X-rays. 
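As a minimal illustration of the simplest kind of ADM model mentioned above, a hand-coded decision tree with an explicit human-review branch, the following sketch uses entirely hypothetical rules, field names and thresholds; they are not drawn from any deployed system and only show how a decision-support rule set can either decide automatically or route a case to a human.

from dataclasses import dataclass

@dataclass
class Application:
    income: float
    debt_ratio: float
    missing_documents: bool

def decide(app: Application) -> str:
    # each branch is one node of a simple decision tree
    if app.missing_documents:
        return "refer to human caseworker"      # explicit human-oversight branch
    if app.debt_ratio > 0.6:
        return "decline"
    if app.income >= 30_000 and app.debt_ratio < 0.35:
        return "approve"
    return "refer to human caseworker"          # borderline cases are not decided automatically

print(decide(Application(income=42_000, debt_ratio=0.2, missing_documents=False)))  # approve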
ADM technologies Automated decision-making technologies (ADMT) are software-coded digital tools that automate the translation of input data to output data, contributing to the function of automated decision-making systems. There is a wide range of technologies in use across ADM applications and systems. ADMTs involving basic computational operations include search (including 1-2-1, 1-2-many and data matching/merge), matching (of two different things) and mathematical calculation (formula). ADMTs for assessment and grouping include user profiling, recommender systems, clustering, classification, feature learning and predictive analytics (including forecasting). ADMTs relating to space and flows include social network analysis (including link prediction), mapping and routing. ADMTs for processing complex data formats include image processing, audio processing and natural language processing (NLP). Other ADMTs include business rules management systems, time series analysis, anomaly detection and modelling/simulation. Machine learning Machine learning (ML) involves training computer programs through exposure to large data sets and examples to learn from experience and solve problems. Machine learning can be used to generate and analyse data as well as make algorithmic calculations, and has been applied to image and speech recognition, translations, text, data and simulations. While machine learning has been around for some time, it is becoming increasingly powerful due to recent breakthroughs in training deep neural networks (DNNs), and dramatic increases in data storage capacity and computational power with GPU coprocessors and cloud computing. Machine learning systems based on foundation models run on deep neural networks and use pattern matching to train a single huge system on large amounts of general data such as text and images. Early models tended to start from scratch for each new problem; since the early 2020s, however, many can be adapted to new problems. Examples of these technologies include OpenAI's DALL-E (an image creation program) and their various GPT language models, and Google's PaLM language model program. Applications ADM is being used to replace or augment human decision-making by both public and private-sector organisations for a range of reasons including to help increase consistency, improve efficiency, reduce costs and enable new solutions to complex problems. Debate Research and development are underway into uses of technology to assess argument quality, assess argumentative essays and judge debates. Potential applications of these argument technologies span education and society. Scenarios to consider, in these regards, include those involving the assessment and evaluation of conversational, mathematical, scientific, interpretive, legal, and political argumentation and debate. Law In legal systems around the world, algorithmic tools such as risk assessment instruments (RAI) are being used to supplement or replace the human judgment of judges, civil servants and police officers in many contexts. In the United States, RAI are being used to generate scores to predict the risk of recidivism in pre-trial detention and sentencing decisions, evaluate parole for prisoners and to predict "hot spots" for future crime. These scores may result in automatic effects or may be used to inform decisions made by officials within the justice system. In Canada, ADM has been used since 2014 to automate certain activities conducted by immigration officials and to support the evaluation of some immigrant and visitor applications.
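The difference between a decision-support arrangement and a fully automated decision, discussed in the overview above, can be illustrated with a toy scoring routine. The weights, threshold and field names below are hypothetical and do not correspond to any real risk assessment instrument; the sketch only shows where a human decision-maker does or does not sit in the loop.

```python
# Toy illustration of the spectrum from decision support to full automation.
# All weights, thresholds and feature names are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    prior_incidents: int
    years_since_last: float

def risk_score(case: Case) -> float:
    """Return a score in [0, 1]; higher means higher modelled risk."""
    raw = 0.3 * case.prior_incidents - 0.1 * case.years_since_last
    return max(0.0, min(1.0, 0.5 + raw / 10))

def decide(case: Case, fully_automated: bool) -> str:
    score = risk_score(case)
    if fully_automated:
        # The system itself takes the decision (ADM in the narrow, Article 22 sense).
        return "deny" if score > 0.7 else "approve"
    # Decision support: the system only recommends and a human decides.
    return f"flag for review (score={score:.2f}); awaiting human decision"

case = Case(prior_incidents=4, years_since_last=1.5)
print(decide(case, fully_automated=False))
print(decide(case, fully_automated=True))
```

The same score can feed either workflow; what changes the legal and ethical analysis is whether the output has automatic effect or merely informs an official's judgment.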
Economics Automated decision-making systems are used in certain computer programs to create buy and sell orders related to specific financial transactions and automatically submit the orders in the international markets. Computer programs can automatically generate orders based on a predefined set of rules, using trading strategies which are based on technical analyses, advanced statistical and mathematical computations, or inputs from other electronic sources. Business Continuous auditing Continuous auditing uses advanced analytical tools to automate auditing processes. It can be utilized in the private sector by business enterprises and in the public sector by governmental organizations and municipalities. As artificial intelligence and machine learning continue to advance, accountants and auditors may make use of increasingly sophisticated algorithms which make decisions such as determining what is anomalous, whether to notify personnel, and how to prioritize the tasks assigned to them. Media and entertainment Digital media, entertainment platforms, and information services increasingly provide content to audiences via automated recommender systems based on demographic information, previous selections, collaborative filtering or content-based filtering. This includes music and video platforms, publishing, health information, product databases and search engines. Many recommender systems also provide some agency to users in accepting recommendations and incorporate data-driven algorithmic feedback loops based on the actions of the system user. Large-scale machine learning language models and image creation programs being developed by companies such as OpenAI and Google in the 2020s have restricted access; however, they are likely to have widespread application in fields such as advertising, copywriting, stock imagery and graphic design, as well as other fields such as journalism and law. Advertising Online advertising is closely integrated with many digital media platforms, websites and search engines and often involves automated delivery of display advertisements in diverse formats. 'Programmatic' online advertising involves automating the sale and delivery of digital advertising on websites and platforms via software rather than direct human decision-making. This is sometimes known as the waterfall model, which involves a sequence of steps across various systems and players: publishers and data management platforms, user data, ad servers and their delivery data, inventory management systems, ad traders and ad exchanges. There are various issues with this system including lack of transparency for advertisers, unverifiable metrics, lack of control over ad venues, audience tracking and privacy concerns. Internet users who dislike ads have adopted countermeasures such as ad blocking technologies which allow users to automatically filter unwanted advertising from websites and some internet applications. In 2017, 24% of Australian internet users had ad blockers. Health Deep learning AI image models are being used for reviewing X-rays and detecting the eye condition macular degeneration. Social services Governments have been implementing digital technologies to provide more efficient administration and social services since the early 2000s, often referred to as e-government.
Many governments around the world are now using automated, algorithmic systems for profiling and targeting policies and services, including algorithmic policing based on risks, surveillance sorting of people such as airport screening, providing services based on risk profiles in child protection, providing employment services and governing the unemployed. A significant application of ADM in social services relates to the use of predictive analytics – for example, predictions of risks to children from abuse/neglect in child protection, predictions of recidivism or crime in policing and criminal justice, predictions of welfare/tax fraud in compliance systems, and predictions of long-term unemployment in employment services. Historically, these systems were based on standard statistical analyses; however, from the early 2000s machine learning has increasingly been developed and deployed. Key issues with the use of ADM in social services include bias, fairness, accountability and explainability, which refers to transparency around the reasons for a decision and the ability to explain the basis on which a machine made a decision. For example, Australia's federal social security delivery agency, Centrelink, developed and implemented an automated process for detecting and collecting debts, which led to many cases of wrongful debt collection in what became known as the RoboDebt scheme. Transport and mobility Connected and automated mobility (CAM) involves autonomous vehicles such as self-driving cars and other forms of transport which use automated decision-making systems to replace various aspects of human control of the vehicle. This can range from level 0 (complete human driving) to level 5 (completely autonomous). At level 5 the machine is able to make decisions to control the vehicle based on data models and geospatial mapping and real-time sensors and processing of the environment. Cars with levels 1 to 3 were already available on the market in 2021. In 2016, the German government established an 'Ethics Commission on Automated and Connected Driving', which recommended that connected and automated vehicles (CAVs) be developed if the systems cause fewer accidents than human drivers (positive balance of risk). It also provided 20 ethical rules for the adaptation of automated and connected driving. In 2020, the European Commission strategy on CAM recommended that it be adopted in Europe to reduce road fatalities and lower emissions; however, self-driving cars also raise many policy, security and legal issues in terms of liability and ethical decision-making in the case of accidents, as well as privacy issues. Issues of trust in autonomous vehicles and community concerns about their safety are key factors to be addressed if AVs are to be widely adopted. Surveillance Automated digital data collections via sensors, cameras, online transactions and social media have significantly expanded the scope, scale, and goals of surveillance practices and institutions in government and commercial sectors. As a result, there has been a major shift from targeted monitoring of suspects to the ability to monitor entire populations. The level of surveillance now possible as a result of automated data collection has been described as surveillance capitalism or the surveillance economy to indicate the way digital media involves large-scale tracking and accumulation of data on every interaction. Ethical and legal issues There are many social, ethical and legal implications of automated decision-making systems.
Concerns raised include lack of transparency and contestability of decisions, incursions on privacy and surveillance, exacerbation of systemic bias and inequality due to data and algorithmic bias, intellectual property rights, the spread of misinformation via media platforms, administrative discrimination, risk and responsibility, unemployment and many others. As ADM becomes more ubiquitous, there is a greater need to address the ethical challenges to ensure good governance in information societies. ADM systems are often based on machine learning and algorithms which cannot easily be viewed or analysed, leading to concerns that they are 'black box' systems which are not transparent or accountable. A report from Citizen Lab in Canada argues for a critical human rights analysis of the application of ADM in various areas to ensure the use of automated decision-making does not result in infringements on rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy rights; and the rights to life, liberty, and security of the person. Legislative responses to ADM include: The European General Data Protection Regulation (GDPR), introduced in 2016, is a regulation in EU law on data protection and privacy in the European Union (EU). Article 22(1) enshrines the right of data subjects not to be subject to decisions that have legal or other significant effects and are based solely on automated individual decision-making. The GDPR also includes some rules on the right to explanation; however, the exact scope and nature of these is currently subject to pending review by the Court of Justice of the European Union. These provisions were not first introduced in the GDPR, but have been present in a similar form across Europe since the Data Protection Directive of 1995 and the 1978 French data protection law (the Loi informatique et libertés). Similarly scoped and worded provisions with varying attached rights and obligations are present in the data protection laws of many other jurisdictions across the world, including Uganda, Morocco and the US state of Virginia. Rights to the explanation of public-sector automated decisions constituting 'algorithmic treatment' under the French loi pour une République numérique. Bias ADM may incorporate algorithmic bias arising from: data sources, where data inputs are biased in their collection or selection; the technical design of the algorithm, for example where assumptions have been made about how a person will behave; and emergent bias, where the application of ADM in unanticipated circumstances creates a biased outcome. Explainability Questions of biased or incorrect data or algorithms, and concerns that some ADMs are black box technologies closed to human scrutiny or interrogation, have led to what is referred to as the issue of explainability, or the right to an explanation of automated decisions and AI. This is also known as Explainable AI (XAI), or Interpretable AI, in which the results of the solution can be analysed and understood by humans. XAI algorithms are considered to follow three principles: transparency, interpretability and explainability. Information asymmetry Automated decision-making may increase the information asymmetry between individuals whose data feeds into the system and the platforms and decision-making systems capable of inferring information from that data.
On the other hand, it has been observed that in financial trading the information asymmetry between two artificial intelligent agents may be much less than between two human agents or between human and machine agents. One research study validated Daniel Kahneman's theory of noisy decisions by human experts in finance, demonstrating the inherent inconsistencies in human judgments, which consequently affect the outcomes of automated decisions made by AI decision-support systems. Research fields Many academic disciplines and fields are increasingly turning their attention to the development, application and implications of ADM, including business, computer sciences, human computer interaction (HCI), law, public administration, and media and communications. The automation of media content and algorithmically driven news, video and other content via search systems and platforms is a major focus of academic research in media studies. The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include ADM and AI. Key research centres investigating ADM include: Algorithm Watch, Germany; the ARC Centre of Excellence for Automated Decision-Making and Society, Australia; Citizen Lab, Canada; and Informatics Europe. See also Automated decision support Algorithmic bias Decision-making software Decision Management Ethics of artificial intelligence Government by algorithm Machine learning Recommender systems References Science and technology studies & Digital technology Machine learning
Automated decision-making
[ "Technology", "Engineering" ]
3,363
[ "Information and communications technology", "Machine learning", "Science and technology studies", "Automation", "Control engineering", "Digital technology", "Artificial intelligence engineering" ]
59,431,451
https://en.wikipedia.org/wiki/Fraunhofer-Center%20for%20High%20Temperature%20Materials%20and%20Design%20HTL
The Fraunhofer Center for High Temperature Materials and Design is a research center of the Fraunhofer Institute for Silicate Research in Würzburg, a research institute of the Fraunhofer Society. It predominantly conducts research in high temperature technologies and energy-efficient heating processes and thus contributes to sustainable technological progress. It is headquartered in Bayreuth and has additional locations in Würzburg and Münchberg. History The centre was founded in 2012 with the aim of pooling the ceramics research of the Fraunhofer ISC. Its research building in Bayreuth was opened in 2015 and funded by the Bavarian Ministry for Economic Affairs, the German Federal Ministry of Education and Research, and the European Regional Development Fund. In 2014, the Fraunhofer Application Center for Textile Fiber Ceramics (TFK) was founded in cooperation with the Hof University of Applied Sciences. Since 2017, the premises of the Fraunhofer-Center HTL in Bayreuth have been extended by a technical center with a fiber pilot plant, which was scheduled for completion in late 2019. The costs for this plant amount to 20 million euros, predominantly covered by the Bavarian Ministry for Economic Affairs and the German Federal Ministry of Education and Research. The plant itself is the only one of its kind in Europe, and its goal is to open up the production of ceramic fibers in Europe. Research areas The Fraunhofer-Center HTL has two business areas: Thermal Process Technology and CMCs (ceramic matrix composites). One application of CMCs is, for instance, the production of ceramic brakes, which are currently expensive to produce; the Fraunhofer-Center HTL is researching ways to reduce these costs. In the CMC business field, HTL has a closed manufacturing chain from fibre development to textile fibre processing to matrix construction to finishing and coating of CMC components. CMCs are characterised by high operating temperatures, corrosion resistance and damage tolerance and are therefore used to improve high-temperature processes. In addition, processes such as 3D printing are also available at the Fraunhofer Centre HTL for the production of metal and ceramic components with complex geometries. To test high-temperature materials and optimise their manufacturing processes, the Fraunhofer Centre HTL is developing ThermoOptic Measuring (TOM) furnaces. Materials and components can also be characterised using various non-destructive and mechanical as well as thermal testing methods.
Focus of work Materials Material design: calculation of the application properties of multiphase materials; Ceramics: development of oxide, non-oxide and silicate ceramics along the entire manufacturing chain; Metal-ceramic composites: development of metal components and composites; Ceramic fibres: development of ceramic fibres from laboratory scale to pilot scale; Ceramic coatings: development and characterisation of liquid coating varnishes on behalf of customers and for sampling purposes. Components Component design: design of components made of ceramics, metals or composites using finite element (FE) modelling; CMC components: design and fabrication of CMC components using carbon, silicon carbide or oxide ceramic fibres; 3D printing: manufacturing of prototypes and small series from ceramics, metals or metal-ceramic composites. Manufacturing processes Textile technology: development of textile processing methods for inorganic fibres including sampling; Heat processes: in-situ characterisation of the behaviour of solids and melts during the heating process as well as process optimisation; Application firings: conducting test firings and application firings in defined atmospheres. Characterisation Materials testing: non-destructive, mechanical and thermal measurement of the composition, microstructure and application properties of materials; ThermoOptic Measurement (TOM): simulation of industrial heat treatment processes in the temperature range from room temperature to over 2000 °C and in all relevant furnace atmospheres; Industrial furnace analysis: recording of the energy balance as well as the temperature and atmosphere distribution in the production furnace. Infrastructure Location Bayreuth At the Fraunhofer Centre HTL in Bayreuth, 80 office workplaces are available on an area of approx. 600 m². The technical centre comprises 15 laboratories and halls on an area of approx. 2000 m². Specialised technical equipment is in use there, including: approx. 40 different industrial furnaces; twelve thermo-optical measuring (TOM) systems specially developed at the HTL; stereolithography printers for ceramic components; powder bed printers for ceramics and metals; CMC processing equipment; equipment for non-destructive testing (computed tomography with 225 kV and 450 kV radiation sources, terahertz technology, ultrasound diagnostics, thermography); a five-axis machining centre; and a laser sintering system. The fibre pilot plant opened at the Bayreuth site in 2019 increases the pilot plant area of the Fraunhofer Centre HTL by approx. 1200 m² and is used for the production of ceramic reinforcement fibres and the development of new high-temperature resistant fibre types. Location Würzburg In the premises of the parent institute Fraunhofer ISC in Würzburg, the HTL has 20 office workstations, three laboratories and a pilot plant with an area of 630 m². The facilities and spinning towers operated in Würzburg are used to develop ceramic fibres and ceramic coatings on a laboratory and pilot plant scale. Location Münchberg On the site of the Institute for Material Sciences ifm at Hof University of Applied Sciences, the Fraunhofer Centre HTL has 14 office workplaces as well as four laboratories and four pilot plants with an area of over 5,500 m². A total of ten weaving looms of different sizes and types, a variable braiding machine, a double rapier weaving machine with single thread control and numerous systems for testing fibres, rovings and textiles are used.
Cooperations Fraunhofer-Allianz AdvanCer Fraunhofer-Allianz Energie Fraunhofer-Allianz Leichtbau Fraunhofer-Allianz Textil References External links Fraunhofer-Center for High Temperature Materials and Design HTL Fraunhofer-Institute for Silicate Research Fraunhofer-Center for High Temperature Materials as part of the FUDIPO Project https://www.cem-wave.eu/ Organisations based in Germany Ceramics Ceramic materials Ceramic engineering Research and development in Germany Research in Germany
Fraunhofer-Center for High Temperature Materials and Design HTL
[ "Engineering" ]
1,291
[ "Ceramic engineering", "Ceramic materials" ]
59,440,568
https://en.wikipedia.org/wiki/R-454B
R-454B, also known by the trademarked names Opteon XL41, Solstice 454B, and Puron Advance, is a zeotropic blend of 68.9 percent difluoromethane (R-32), a hydrofluorocarbon, and 31.1 percent 2,3,3,3-tetrafluoropropene (R-1234yf), a hydrofluoroolefin. Because of its reduced global warming potential (GWP), R-454B is intended to be an alternative to refrigerant R-410A in new equipment. R-454B has a GWP of 466, which is 78 percent lower than R-410A's GWP of 2088. R-454B is non-toxic and mildly flammable, with an ASHRAE safety classification of A2L. In the United States, it is expected to be packaged in a container that is red or has a red band on the shoulder or top. History The refrigeration industry has been seeking replacements for R-410A because of its high global warming potential. R-454B, formerly known as DL-5A, has been selected by several manufacturers. R-454B was developed by, and is manufactured by, Chemours. Carrier first announced the introduction of R-454B in ducted residential and light commercial packaged refrigeration and air conditioning products in 2018, with R-454B-based product launches starting in 2023. Related refrigerants R-454B is not the only blend of R-32 and R-1234yf to be proposed as a refrigerant. Other blends include R-454A (35 percent R-32, 65 percent R-1234yf) and R-454C (21.5 percent R-32, 78.5 percent R-1234yf). There are also several blends that include a third component. See also R-410A, a refrigerant that is being phased out and for which R-454B is a popular replacement Difluoromethane, R-32, another R-410A replacement List of refrigerants References Refrigerants Greenhouse gases Daikin
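A blend's GWP can be roughly cross-checked as the mass-weighted average of its components' values. The sketch below assumes approximate 100-year GWP figures for the two components (on the order of 675 for R-32 and about 4 for R-1234yf, roughly AR4-era values); the exact result depends on which IPCC assessment report the cited blend GWP is based on, so this is a plausibility check rather than an authoritative calculation.

```python
# Rough mass-weighted GWP estimate for the R-454B blend.
# Component GWP-100 figures are assumed, approximate values; the published
# blend GWP depends on which assessment report's figures are used.
components = {
    # name: (mass fraction, assumed GWP-100)
    "R-32 (difluoromethane)":                (0.689, 675),
    "R-1234yf (2,3,3,3-tetrafluoropropene)": (0.311, 4),
}

blend_gwp = sum(fraction * gwp for fraction, gwp in components.values())
print(f"Estimated blend GWP: {blend_gwp:.0f}")  # about 466, consistent with the value cited above
```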
R-454B
[ "Chemistry", "Environmental_science" ]
489
[ "Greenhouse gases", "Environmental chemistry" ]
59,441,761
https://en.wikipedia.org/wiki/Network%20synthesis
Network synthesis is a design technique for linear electrical circuits. Synthesis starts from a prescribed impedance function of frequency or frequency response and then determines the possible networks that will produce the required response. The technique is to be compared to network analysis, in which the response (or other behaviour) of a given circuit is calculated. Prior to network synthesis, only network analysis was available, but this requires that one already knows what form of circuit is to be analysed. There is no guarantee that the chosen circuit will be the closest possible match to the desired response, nor that the circuit is the simplest possible. Network synthesis directly addresses both these issues. Network synthesis has historically been concerned with synthesising passive networks, but is not limited to such circuits. The field was founded by Wilhelm Cauer after reading Ronald M. Foster's 1924 paper A reactance theorem. Foster's theorem provided a method of synthesising LC circuits with an arbitrary number of elements by a partial fraction expansion of the impedance function. Cauer extended Foster's method to RC and RL circuits and found new synthesis methods, including methods that could synthesise a general RLC circuit. Other important advances before World War II are due to Otto Brune and Sidney Darlington. In the 1940s Raoul Bott and Richard Duffin published a synthesis technique that did not require transformers in the general case (the elimination of which had been troubling researchers for some time). In the 1950s, a great deal of effort was put into the question of minimising the number of elements required in a synthesis, but with only limited success. Little was done in the field until the 2000s, when the issue of minimisation again became an active area of research but, as of 2023, it is still an unsolved problem. A primary application of network synthesis is the design of network synthesis filters, but this is not its only application. Amongst others are impedance matching networks, time-delay networks, directional couplers, and equalisation. In the 2000s, network synthesis began to be applied to mechanical systems as well as electrical, notably in Formula One racing. Overview Network synthesis is concerned with designing an electrical network that behaves in a prescribed way without any preconception of the network form. Typically, an impedance is required to be synthesised using passive components, that is, a network consisting of resistances (R), inductances (L) and capacitances (C). Such networks always have an impedance, denoted Z(s), in the form of a rational function of the complex frequency variable s. That is, the impedance is the ratio of two polynomials in s. There are three broad areas of study in network synthesis: approximating a requirement with a rational function, synthesising that function into a network, and determining equivalents of the synthesised network. Approximation The idealised prescribed function will rarely be capable of being exactly described by polynomials. It is therefore not possible to synthesise a network to exactly reproduce it. A simple, and common, example is the brick-wall filter. This is the ideal response of a low-pass filter but its piecewise continuous response is impossible to represent with polynomials because of the discontinuities. To overcome this difficulty, a rational function is found that closely approximates the prescribed function using approximation theory.
In general, the closer the approximation is required to be, the higher the degree of the polynomial and the more elements will be required in the network. There are many polynomials and functions used in network synthesis for this purpose. The choice depends on which parameters of the prescribed function the designer wishes to optimise. One of the earliest used was the Butterworth polynomials, which result in a maximally flat response in the passband. A common choice is the Chebyshev approximation, in which the designer specifies how much the passband response can deviate from the ideal in exchange for improvements in other parameters. Other approximations are available for optimising time delay, impedance matching, roll-off, and many other requirements. Realisation Given a rational function, it is usually necessary to determine whether the function is realisable as a discrete passive network. All such networks are described by a rational function, but not all rational functions are realisable as a discrete passive network. Historically, network synthesis was concerned exclusively with such networks. Modern active components have made this limitation less relevant in many applications, but at the higher radio frequencies passive networks are still the technology of choice. There is a simple property of rational functions that predicts whether the function is realisable as a passive network. Once it is determined that a function is realisable, there are a number of algorithms available that will synthesise a network from it. Equivalence A network realisation from a rational function is not unique. The same function may realise many equivalent networks. It is known that affine transformations of the impedance matrix formed in mesh analysis of a network are all impedance matrices of equivalent networks. Other impedance transformations are known, but whether there are further equivalence classes that remain to be discovered is an open question. A major area of research in network synthesis has been to find the realisation which uses the minimum number of elements. This question has not been fully solved for the general case, but solutions are available for many networks with practical applications. History The field of network synthesis was founded by German mathematician and scientist Wilhelm Cauer (1900–1945). The first hint towards a theory came from American mathematician Ronald M. Foster (1896–1998) when he published A reactance theorem in 1924. Cauer immediately recognised the importance of this work and set about generalising and extending it. His thesis in 1926 was on "The realisation of impedances of prescribed frequency dependence" and marks the beginning of the field. Cauer's most detailed work was done during World War II, but he was killed shortly before the end of the war. His work could not be widely published during the war, and it was not until 1958 that his family collected his papers and published them for the wider world. Meanwhile, progress had been made in the United States based on Cauer's pre-war publications and material captured during the war. English self-taught mathematician and scientist Oliver Heaviside (1850–1925) was the first to show that the impedance of an RLC network was always a rational function of a frequency operator, but provided no method of realising a network from a rational function. Cauer found a necessary condition for a rational function to be realisable as a passive network.
South African Otto Brune (1901–1982) later coined the term positive-real function (PRF) for this condition. Cauer postulated that PRF was a necessary and sufficient condition but could not prove it, and suggested it as a research project to Brune, who was his graduate student in the United States at the time. Brune published the missing proof in his 1931 doctoral thesis. Foster's realisation was limited to LC networks and was in one of two forms: either a number of series LC circuits in parallel, or a number of parallel LC circuits in series. Foster's method was to expand the impedance into partial fractions. Cauer showed that Foster's method could be extended to RL and RC networks. Cauer also found another method: expanding the impedance as a continued fraction, which results in a ladder network, again in two possible forms. In general, a PRF will represent an RLC network; with all three kinds of element present, the realisation is trickier. Both Cauer and Brune used ideal transformers in their realisations of RLC networks. Having to include transformers is undesirable in a practical implementation of a circuit. A method of realisation that did not require transformers was provided in 1949 by Hungarian-American mathematician Raoul Bott (1923–2005) and American physicist Richard Duffin (1909–1996). The Bott and Duffin method provides an expansion by repeated application of Richards' theorem, a 1947 result due to American physicist and applied mathematician Paul I. Richards (1923–1978). The resulting Bott-Duffin networks have limited practical use (at least for rational functions of high degree) because the number of components required grows exponentially with the degree. A number of variations of the original Bott-Duffin method all reduce the number of elements in each section from six to five, but still with exponentially growing overall numbers. Papers achieving this include Pantell (1954), Reza (1954), Storer (1954) and Fialkow & Gerst (1955). As of 2010, there has been no further significant advance in synthesising rational functions. In 1939, American electrical engineer Sidney Darlington showed that any PRF can be realised as a two-port network consisting only of L and C elements and terminated at its output with a resistor. That is, only one resistor is required in any network, the remaining components being lossless. The theorem was independently discovered by both Cauer and Giovanni Cocci. The corollary problem, to find a synthesis of PRFs using R and C elements with only one inductor, is an unsolved problem in network theory. Another unsolved problem is finding a proof of Darlington's conjecture (1955) that any RC 2-port with a common terminal can be realised as a series-parallel network. An important consideration in practical networks is to minimise the number of components, especially the wound components (inductors and transformers). Despite great efforts being put into minimisation, no general theory of minimisation has ever been discovered as it has for the Boolean algebra of digital circuits. Cauer used elliptic rational functions to produce approximations to ideal filters. A special case of the elliptic rational functions is the Chebyshev polynomials, due to Pafnuty Chebyshev (1821–1894), which are an important part of approximation theory. Chebyshev polynomials are widely used to design filters. In 1930, British physicist Stephen Butterworth (1885–1958) designed the Butterworth filter, otherwise known as the maximally-flat filter, using Butterworth polynomials.
Butterworth's work was entirely independent of Cauer, but it was later found that the Butterworth polynomials were a limiting case of the Chebyshev polynomials. Even earlier (1929) and again independently, American engineer and scientist Edward Lawry Norton (1898–1983) designed a maximally-flat mechanical filter with a response entirely analogous to Butterworth's electrical filter. In the 2000s, interest in further developing network synthesis theory was given a boost when the theory started to be applied to large mechanical systems. The unsolved problem of minimisation is much more important in the mechanical domain than the electrical due to the size and cost of components. In 2017, researchers at the University of Cambridge, limiting themselves to considering biquadratic rational functions, determined that Bott-Duffin realisations of such functions for all series-parallel networks and most arbitrary networks had the minimum number of reactances (Hughes, 2017). They found this result surprising as it showed that the Bott-Duffin method was not quite so non-minimal as previously thought. This research partly centred on revisiting the Ladenheim Catalogue. This is an enumeration of all distinct RLC networks with no more than two reactances and three resistances. Edward Ladenheim carried out this work in 1948 while a student of Foster. The relevance of the catalogue is that all these networks are realised by biquadratic functions. Applications The single most widely used application of network synthesis is in the design of signal processing filters. The modern designs of such filters are almost always some form of network synthesis filter. Another application is the design of impedance matching networks. Impedance matching at a single frequency requires only a trivial network—usually one component. Impedance matching over a wide band, however, requires a more complex network, even in the case that the source and load resistances do not vary with frequency. Doing this with passive elements and without the use of transformers results in a filter-like design. Furthermore, if the load is not a pure resistance then it is only possible to achieve a perfect match at a number of discrete frequencies; the match over the band as a whole must be approximated. The designer first prescribes the frequency band over which the matching network is to operate, and then designs a band-pass filter for that band. The only essential difference between a standard filter and a matching network is that the source and load impedances are not equal. There are differences between filters and matching networks in which parameters are important. Unless the network has a dual function, the designer is not too concerned over the behaviour of the impedance matching network outside the passband. It does not matter if the transition band is not very narrow, or that the stopband has poor attenuation. In fact, trying to improve the bandwidth beyond what is strictly necessary will detract from the accuracy of the impedance match. With a given number of elements in the network, narrowing the design bandwidth improves the matching and vice versa. The limitations of impedance matching networks were first investigated by American engineer and scientist Hendrik Wade Bode in 1945, and the principle that they must necessarily be filter-like was established by Italian-American computer scientist Robert Fano in 1950. One parameter in the passband that is usually set for filters is the maximum insertion loss. 
For impedance matching networks, a better match can be obtained by also setting a minimum loss. That is, the gain never rises to unity at any point. Time-delay networks can be designed by network synthesis with filter-like structures. It is not possible to design a delay network that has a constant delay at all frequencies in a band. An approximation to this behaviour must be used, limited to a prescribed bandwidth. The prescribed delay will occur at most at a finite number of spot frequencies. The Bessel filter has maximally-flat time-delay. The application of network synthesis is not limited to the electrical domain. It can be applied to systems in any energy domain that can be represented as a network of linear components. In particular, network synthesis has found applications in mechanical networks in the mechanical domain. Consideration of mechanical network synthesis led Malcolm C. Smith to propose a new mechanical network element, the inerter, which is analogous to the electrical capacitor. Mechanical components with the inertance property have found an application in the suspensions of Formula One racing cars. Synthesis techniques Synthesis begins by choosing an approximation technique that delivers a rational function approximating the required function of the network. If the function is to be implemented with passive components, the function must also meet the conditions of a positive-real function (PRF). The synthesis technique used depends in part on what form of network is desired, and in part on how many kinds of elements are needed in the network. A one-element-kind network is a trivial case, reducing to an impedance of a single element. A two-element-kind network (LC, RC, or RL) can be synthesised with Foster or Cauer synthesis. A three-element-kind network (an RLC network) requires more advanced treatment such as Brune or Bott-Duffin synthesis. Which, and how many kinds of, elements are required can be determined by examining the poles and zeroes (collectively called critical frequencies) of the function. The requirement on the critical frequencies is given for each kind of network in the relevant sections below. Foster synthesis Foster's synthesis, in its original form, can be applied only to LC networks. A PRF represents a two-element-kind LC network if the critical frequencies of Z(s) all exist on the jω axis of the complex plane of s (the s-plane) and alternate between poles and zeroes. There must be a single critical frequency at the origin and at infinity; all the rest must be in conjugate pairs. Z(s) must be the ratio of an even and an odd polynomial, and their degrees must differ by exactly one. These requirements are a consequence of Foster's reactance theorem. Foster I form Foster's first form (Foster I form) synthesises Z(s) as a set of parallel LC circuits in series. An LC impedance of this kind can be expanded into a sum of partial fraction terms, one for each pole or conjugate pair of poles. A term arising from a pole at infinity represents a series inductor; a pole at the origin would instead represent a series capacitor. The remaining terms each represent a conjugate pair of poles on the jω axis, and each of them can be synthesised as a parallel LC circuit by comparison with the impedance expression for such a circuit. The resulting circuit is shown in the figure. Foster II form Foster II form synthesises Z(s) as a set of series LC circuits in parallel. The same method of expanding into partial fractions is used as for Foster I form, but applied to the admittance, Y(s), instead of the impedance, Z(s).
Using the same example PRF as before, the admittance Y(s) is expanded in partial fractions. The first term represents a shunt inductor, a consequence of Y(s) having a pole at the origin (or, equivalently, of Z(s) having a zero at the origin). If Y(s) had instead had a pole at infinity, that would represent a shunt capacitor. The remaining terms each represent a conjugate pair of poles on the jω axis, and each of them can be synthesised as a series LC circuit by comparison with the admittance expression for such a circuit. The resulting circuit is shown in the figure. Extension to RC or RL networks Foster synthesis can be extended to any two-element-kind network. For instance, the partial fraction terms of an RC network in Foster I form will each represent an R and C element in parallel. In this case, the partial fraction terms have poles on the negative real axis rather than on the jω axis. Other forms and element kinds follow by analogy. As with an LC network, the PRF can be tested to see if it is an RC or RL network by examining the critical frequencies. The critical frequencies must all be on the negative real axis and alternate between poles and zeroes, and there must be an equal number of each. If the critical frequency nearest, or at, the origin is a pole, then the PRF is an RC network if it represents an impedance Z(s), or it is an RL network if it represents an admittance Y(s). Vice versa if the critical frequency nearest, or at, the origin is a zero. These extensions of the theory also apply to the Cauer forms described below. Immittance In the Foster synthesis above, the expansion of the function is the same procedure in both the Foster I form and Foster II form. It is convenient, especially in theoretical works, to treat them together as an immittance rather than separately as either an impedance or an admittance. It is only necessary to declare whether the function represents an impedance or an admittance at the point that an actual circuit needs to be realised. Immittance can also be used in the same way with the Cauer I and Cauer II forms and other procedures. Cauer synthesis Cauer synthesis is an alternative to Foster synthesis, and the conditions that a PRF must meet are exactly the same as for Foster synthesis. Like Foster synthesis, there are two forms of Cauer synthesis, and both can be extended to RC and RL networks. Cauer I form The Cauer I form expands Z(s) into a continued fraction by repeatedly extracting the highest degree term. Using the same example as used for the Foster I form, the terms of this expansion can be directly implemented as the component values of a ladder network, as shown in the figure. The given PRF may have a denominator that has a greater degree than the numerator. In such cases, the multiplicative inverse of the function is expanded instead. That is, if the function represents Z(s), then Y(s) is expanded instead, and vice versa. Cauer II form Cauer II form expands in exactly the same way as Cauer I form except that the lowest degree term is extracted first in the continued fraction expansion rather than the highest degree term as is done in Cauer I form. The example used for the Cauer I form and the Foster forms, when expanded as a Cauer II form, results in some elements having negative values. This particular PRF, therefore, cannot be realised in passive components as a Cauer II form without the inclusion of transformers or mutual inductances. The essential reason that the example cannot be realised as a Cauer II form is that this form has a high-pass topology. The first element extracted in the continued fraction is a series capacitor.
This makes it impossible for the zero of Z(s) at the origin to be realised. The Cauer I form, on the other hand, has a low-pass topology and naturally has a zero at the origin. However, the admittance Y(s) of this function can be realised as a Cauer II form, since the first element extracted is then a shunt inductor. This gives a pole at the origin for Y(s), but that translates to the necessary zero at the origin for Z(s). The resulting continued fraction expansion realises the network shown in the figure. Brune synthesis The Brune synthesis can synthesise any arbitrary PRF, so in general will result in a 3-element-kind (i.e. RLC) network. The poles and zeroes can lie anywhere in the left-hand half of the complex plane. The Brune method starts with some preliminary steps to eliminate critical frequencies on the imaginary axis, as in the Foster method. These preliminary steps are sometimes called the Foster preamble. There is then a cycle of steps to produce a cascade of Brune sections. Removal of critical frequencies on the imaginary axis Poles and zeroes on the jω axis represent L and C elements that can be extracted from the PRF. Specifically: a pole at the origin represents a series capacitor; a pole at infinity represents a series inductance; a zero at the origin represents a shunt inductor; a zero at infinity represents a shunt capacitor; a pair of poles at s = ±jω0 represents a parallel LC circuit of resonant frequency ω0 in series; and a pair of zeroes at s = ±jω0 represents a series LC circuit of resonant frequency ω0 in shunt. After these extractions, the remainder PRF has no critical frequencies on the imaginary axis and is known as a minimum reactance, minimum susceptance function. Brune synthesis proper begins with such a function. Broad outline of method The essence of the Brune method is to create a conjugate pair of zeroes on the jω axis by extracting the real and imaginary parts of the function at that frequency, and then extract the pair of zeroes as a resonant circuit. This is the first Brune section of the synthesised network. The resulting remainder is another minimum reactance function that is two degrees lower. The cycle is then repeated, each cycle producing one more Brune section of the final network, until just a constant value (a resistance) remains. The Brune synthesis is canonical, that is, the number of elements in the final synthesised network is equal to the number of arbitrary coefficients in the impedance function. The number of elements in the synthesised circuit cannot therefore be reduced any further. Removal of minimum resistance A minimum reactance function will have a minimum real part, Rmin, at some frequency ω0. This resistance can be extracted from the function, leaving as remainder another PRF called a minimum positive-real function, or just minimum function, here denoted Z1(s). Removal of a negative inductance or capacitance Since Z1(jω0) has no real part, it can be written as a pure reactance, jX1. If X1 is negative, it is interpreted as the reactance of a negative-valued inductor, L1 = X1/ω0. This inductance is then extracted from Z1(s), leaving another PRF, Z2(s) = Z1(s) − sL1. The reason for extracting a negative value is that the extracted term −sL1 is then itself a PRF, which it would not be if L1 were positive. This guarantees that Z2(s) will also be a PRF (because the sum of two PRFs is also a PRF). For cases where X1 is a positive value, the admittance function is used instead and a negative capacitance is extracted.
How these negative values are implemented is explained in a later section. Removal of a conjugate pair of zeroes Both the real and imaginary parts of the function at ω0 have been removed in the previous steps. This leaves a pair of zeroes in Z2(s) at s = ±jω0, as can be shown by factorising the function. Since such a pair of zeroes represents a shunt resonant circuit, it is extracted as a pair of poles from the admittance function. The extracted term is the resonant circuit, its inductance and capacitance being fixed by the residue of the pole pair at ω0. The network synthesised so far is shown in the figure. Removal of a pole at infinity The remainder must have a pole at infinity, since one was created there by the extraction of a negative inductance. This pole can now be extracted as a positive inductance, as shown in the figure. Replacing negative inductance with a transformer The negative inductance cannot be implemented directly with passive components. However, the "tee" of inductors can be converted into mutually coupled inductors which absorb the negative inductance. With a coupling coefficient of unity (tightly coupled), the mutual inductance, M, in the example case is 2.0. Rinse and repeat In general, the remainder will be another minimum reactance function, and the Brune cycle is then repeated to extract another Brune section. In the example case, the original PRF was of degree 2, so after reducing it by two degrees, only a constant term is left which, trivially, synthesises as a resistance. Positive X In step two of the cycle it was mentioned that a negative element value must be extracted in order to guarantee a PRF remainder. If X1 is positive, the element extracted must be a shunt capacitor instead of a series inductor if the element is to be negative. It is extracted from the admittance Y1(s) instead of the impedance Z1(s). The circuit topology arrived at in step four of the cycle is a Π (pi) of capacitors plus an inductor instead of a tee of inductors plus a capacitor. It can be shown that this Π of capacitors plus inductor is an equivalent circuit of the tee of inductors plus capacitor. Thus, it is permissible to extract a positive inductance and then proceed as though the remainder were a PRF, even though it is not. The correct result will still be arrived at, and the final remainder function will be a PRF, so it can be fed into the next cycle. Bott-Duffin synthesis The Bott-Duffin synthesis begins, as with the Brune synthesis, by removing all poles and zeroes on the jω axis. Then Richards' theorem is invoked, which states that if Z(s) is a PRF then R(s) = [kZ(s) − sZ(k)] / [kZ(k) − sZ(s)] is a PRF for all real, positive values of k. Making Z(s) the subject of this expression results in an expansion of Z(s) into four terms. An example of one cycle of Bott-Duffin synthesis is shown in the figures. The four terms in this expression are, respectively, a PRF, an inductance in parallel with it, another PRF, and a capacitance in parallel with it. A pair of critical frequencies on the jω axis is then extracted from each of the two new PRFs (details not given here), each realised as a resonant circuit. The two residual PRFs are each two degrees lower than Z(s). The same procedure is then repeatedly applied to the new PRFs generated until just a single element remains. Since the number of PRFs generated doubles with each cycle, the number of elements synthesised will grow exponentially. Although the Bott-Duffin method avoids the use of transformers and can be applied to any expression capable of realisation as a passive network, it has limited practical use due to the high component count required.
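The Foster and Cauer expansions described above lend themselves to symbolic computation. The following sketch uses SymPy on a hypothetical LC driving-point impedance chosen purely for illustration (it is not the worked example referred to in the text): apart gives the partial-fraction terms of the Foster I form, while repeated polynomial division gives the quotients of the Cauer I continued fraction.

```python
# Illustrative Foster I and Cauer I expansions of a hypothetical LC impedance
# using SymPy. The chosen Z(s) is an assumed example, not the one in the text.
import sympy as sp

s = sp.symbols("s")
# Hypothetical LC impedance: zero at the origin, pole at infinity,
# and poles/zeroes alternating along the jw axis.
Z = s * (s**2 + 4) * (s**2 + 16) / ((s**2 + 1) * (s**2 + 9))

# Foster I form: partial-fraction expansion of Z(s).
# The leading s term is a series inductor; each s/(s**2 + w0**2) term
# corresponds to a parallel LC circuit connected in series.
print("Foster I terms:", sp.apart(sp.cancel(Z), s))

# Cauer I form: continued-fraction expansion about infinity by repeated
# polynomial division; quotients alternate between series inductor and
# shunt capacitor values.
num, den = sp.fraction(sp.cancel(Z))
quotients = []
while den != 0:
    q, r = sp.div(num, den, s)   # extract the highest-degree term
    quotients.append(q)
    num, den = den, r
print("Cauer I quotients:", quotients)
```

For this particular function the Foster I expansion yields a series inductor and two parallel LC sections, while the Cauer I quotients correspond to the alternating series inductances and shunt capacitances of a five-element ladder; other impedances would, of course, give different element values.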
Bayard synthesis Bayard synthesis is a state-space synthesis method based on the Gauss factorisation procedure. This method returns a synthesis using the minimum number of resistors and contains no gyrators. However, the method is non-canonical and will, in general, return a non-minimal number of reactance elements. Darlington synthesis Darlington synthesis starts from a different perspective to the techniques discussed so far, which all start from a prescribed rational function and realise it as a one-port impedance. Darlington synthesis starts with a prescribed rational function that is the desired transfer function of a two-port network. Darlington showed that any PRF can be realised as a two-port network using only L and C elements with a single resistor terminating the output port. The Darlington and related methods are called the insertion loss method. The method can be extended to multi-port networks with each port terminated with a single resistor. The Darlington method, in general, will require transformers or coupled inductors. However, most common filter types can be constructed by the Darlington method without these undesirable features. Active and digital realisations If the requirement to use only passive elements is lifted, then the realisation can be greatly simplified. Amplifiers can be used to buffer the parts of the network from each other so that they do not interact. Each buffered cell can directly realise a pair of poles of the rational function. There is then no need for any kind of iterative expansion of the function. The first example of this kind of synthesis is due to Stephen Butterworth in 1930. The Butterworth filter he produced became a classic of filter design, but more frequently implemented with purely passive rather than active components. More generally applicable designs of this kind include the Sallen–Key topology due to R. P. Sallen and E. L. Key in 1955 at MIT Lincoln Laboratory, and the biquadratic filter. Like the Darlington approach, Butterworth and Sallen-Key start with a prescribed transfer function rather than an impedance. A major practical advantage of active implementation is that it can avoid the use of wound components (transformers and inductors) altogether. These are undesirable for manufacturing reasons. Another feature of active designs is that they are not limited to PRFs. Digital realisations, like active circuits, are not limited to PRFs and can implement any rational function simply by programming it in. However, the function may not be stable. That is, it may lead to oscillation. PRFs are guaranteed to be stable, but other functions may not be. The stability of a rational function can be determined by examining the poles and zeroes of the function and applying the Nyquist stability criterion. References Bibliography Sources Aatre, Vasudev K., Network Theory and Filter Design, New Age International, 1986 . Anderson, Brian D.O.; Vongpanitlerd, Sumeth, Network Analysis and Synthesis: A Modern Systems Theory Approach, Courier Corporation, 2013 . Awang, Zaiki, Microwave Systems Design, Springer, 2013 . Bakshi, U.A.; Bakshi, A.V., Circuit Analysis - II, Technical Publications, 2009 . Bakshi, U.A.; Chitode, J.S., Linear Systems Analysis, Technical Publications, 2009 . Belevitch, Vitold, "Summary of the history of circuit theory", Proceedings of the IRE, vol. 50, iss. 5, pp. 848–855, May 1962. Carlin, Herbert J.; Civalleri, Pier Paolo, Wideband Circuit Design, CRC Press, 1997 . 
Cauer, Emil; Mathis, Wolfgang; Pauli, Rainer, "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000), Perpignan, June, 2000. Chao, Alan; Athans, Michael, "Stability robustness to unstructured uncertainty for linear time invariant systems", ch. 30 in, Levine, William S., The Control Handbook, CRC Press, 1996 . Chen, Michael Z.Q.; Hu, Yinlong, Inerter and Its Application in Vibration Control Systems, Springer, 2019 . Chen, Michael Z.Q.; Smith, Malcolm C., "Electrical and mechanical passive network synthesis", pp. 35–50 in, Blondel, Vincent D.; Boyd, Stephen P.; Kimuru, Hidenori (eds), Recent Advances in Learning and Control, Springer, 2008 . Comer, David J.; Comer, Donald T., Advanced Electronic Circuit Design, Wiley, 2003 . Darlington, Sidney "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", IEEE Transactions: Circuits and Systems, vol. 31, pp. 3–13, 1984. Ghosh, S.P., Chakroborty, A.K., Network Analysis and Synthesis, Tata McGraw Hill, 2010 . Glisson, Tildon H., Introduction to Circuit Analysis and Design, Springer, 2011 ISBN . Houpis, Constantine H.; Lubelfeld, Jerzy, Pulse Circuits, Simon and Schuster, 1970 . Hubbard, John H., "The Bott-Duffin synthesis of electrical circuits", pp. 33–40 in, Kotiuga, P. Robert (ed), A Celebration of the Mathematical Legacy of Raoul Bott, American Mathematical Society, 2010 . Hughes, Timothy H.; Morelli, Alessandro; Smith, Malcolm C., "Electrical network synthesis: A survey of recent work", pp. 281–293 in, Tempo, R.; Yurkovich, S.; Misra, P. (eds), Emerging Applications of Control and Systems Theory, Springer, 2018 . Kalman, Rudolf, "Old and new directions of research in systems theory", pp. 3–13 in, Willems, Jan; Hara, Shinji; Ohta, Yoshito; Fujioka, Hisaya (eds), Perspectives in Mathematical System Theory, Control, and Signal Processing, Springer, 2010 . Lee, Thomas H., Planar Microwave Engineering, Cambridge University Press, 2004 . Matthaei, George L.; Young, Leo; Jones, E.M.T., Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill 1964 . Paarmann, Larry D., Design and Analysis of Analog Filters, Springer Science & Business Media, 2001 . Robertson, Ean; Somjit, Nutapong; Chongcheawchamnan Mitchai, Microwave and Millimetre-Wave Design for Wireless Communications, John Wiley & Sons, 2016 . Shenoi, Belle A., Magnitude and Delay Approximation of 1-D and 2-D Digital Filters, Springer, 2012 . Sisodia, M.L.; Gupta, Vijay Laxmi, Microwaves : Introduction To Circuits, Devices And Antennas, New Age International, 2007 . Storer, James Edward, Passive Network Synthesis, McGraw-Hill, 1957 . Swanson, David C., Signal Processing for Intelligent Sensor Systems with MATLAB, CRC Press, 2012 . Vaisband, Inna P.; Jakushokas, Renatas, Popovich, Mikhail; Mezhiba, Andrey V.; Köse, Selçuk; Friedman Eby G., On-Chip Power Delivery and Management, Springer, 2016 . Wanhammar, Lars, Analog Filters using MATLAB, Springer, 2009 . Youla, Dante C., Theory and Synthesis of Linear Passive Time-Invariant Networks, Cambridge University Press, 2015 . Wing, Omar, Classical Circuit Theory, Springer, 2008 . Primary documents Bott, Raoul; Duffin, Richard, "Impedance synthesis without use of transformers", Journal of Applied Physics, vol. 20, iss. 8, p. 816, August 1949. Bode, Hendrik, Network Analysis and Feedback Amplifier Design, pp. 360–371, D. Van Nostrand Company, 1945 . 
Brune, Otto, "Synthesis of a finite two-terminal network whose driving-point impedance is a prescribed function of frequency", MIT Journal of Mathematics and Physics, vol. 10, pp. 191–236, April 1931. Butterworth, Stephen, "On the theory of filter amplifiers", Experimental Wireless and the Wireless Engineer, vol. 7, no. 85, pp. 536–541, October 1930. Cauer, Wilhelm, "Die Verwirklichung der Wechselstromwiderstände vorgeschriebener Frequenzabhängigkeit" (The realisation of impedances of prescribed frequency dependence), Archiv für Elektrotechnik, vol. 17, pp. 355–388, 1926 (in German). Cauer, Wilhelm, "Vierpole mit vorgeschriebenem D ̈ampfungs-verhalten", Telegraphen-, Fernsprech-, Funk- und Fern-sehtechnik, vol. 29, pp. 185–192, 228–235. 1940 (in German). Cocci, Giovanni, "Rappresentazione di bipoli qualsiasi con quadripoli di pure reattanze chiusi su resistenze", Alta Frequenza, vol. 9, pp. 685–698, 1940 (in Italian). Darlington, Sidney, "Synthesis of reactance 4-poles which produce prescribed insertion loss characteristics: Including special applications to filter design", MIT Journal of Mathematics and Physics, vol. 18, pp. 257–353, April 1939. Fano, Robert, "Theoretical limitations on the broadband matching of arbitrary impedances", Journal of the Franklin Institute, vol. 249, iss. 1, pp. 57–83, January 1950. Fialko, Aaron; Gerst, Irving, "Impedance synthesis without mutual coupling", Quarterly of Applied Mathematics, vol. 12, No. 4, pp. 420–422, 1955 Hughes, Timothy H., "Why RLC realizations of certain impedances need many more energy storage elements than expected", IEEE Transactions on Automatic Control, vol. 62, iss 9, pp. 4333-4346, September 2017. Hughes, Timothy H., "Passivity and electric circuits: a behavioral approach", IFAC-PapersOnLine, vol. 50, iss. 1, pp. 15500–15505, July 2017. Ladenheim, Edward L., A Synthesis of Biquadratic Impedances, Master's thesis, Polytechnic Institute of Brooklyn, New York, 1948. Pantell, R.H., "A new method of driving point impedance synthesis", Proceedings of the IRE, vol. 42, iss. 5, p. 861, 1954. Reza, F.M., "A bridge equivalent for a Brune cycle terminated in a resistor", Proceedings of the IRE, vol. 42, iss. 8, p. 1321, 1954. Richards, Paul I., "A special class of functions with positive real part in a half-plane", Duke Mathematical Journal, vol. 14, no. 3, 777–786, 1947. Sallen, R.P.; Key, E.L, "A practical method of designing RC active filters", IRE Transactions on Circuit Theory, vol. 2, iss. 1 pp. 74–85, March 1955. Smith, Malcolm C., "Synthesis of mechanical networks: the inerter", IEEE Transactions on Automatic Control, vol. 47, iss. 10, pp. 1648–1662, Oct 2002. Storer, J.E., "Relationship between the Bott-Duffin and Pantell impedance synthesis", Proceedings of the IRE, vol. 42, iss. 9, p. 1451, September 1954. Analog circuits Electronic design History of electronic engineering
Network synthesis
[ "Engineering" ]
8,225
[ "Electronic design", "Analog circuits", "Electronic engineering", "History of electronic engineering", "Design" ]
59,442,969
https://en.wikipedia.org/wiki/Methylhippuric%20acid
Methylhippuric acid is a carboxylic acid and organic compound that exists as three isomers: 2-, 3-, and 4-methylhippuric acid. The methylhippuric acids are metabolites of the corresponding isomers of xylene, and their presence can be used as a biomarker to determine exposure to xylene. See also Hippuric acid References Carboxylic acids Benzamides Human metabolites
Methylhippuric acid
[ "Chemistry" ]
108
[ "Carboxylic acids", "Functional groups" ]
59,448,183
https://en.wikipedia.org/wiki/Convolutional%20sparse%20coding
The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes the signal as a linear combination of a few atoms in the redundant dictionary , usually expressed as for a sparse vector , the alternative dictionary structure adopted by the convolutional sparse coding model allows the sparsity prior to be applied locally instead of globally: independent patches of are generated by "local" dictionaries operating over stripes of . The local sparsity constraint allows stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. Also, a recently proposed multi-layer extension of the model has shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network model, allowing a deeper understanding of how the latter operates. Overview Given a signal of interest and a redundant dictionary , the sparse coding problem consists of retrieving a sparse vector , called the sparse representation of , such that . Intuitively, this implies that is expressed as a linear combination of a small number of elements in . The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding. It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred. As an extension to the global sparsity constraint, recent works in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions. Interestingly, by imposing a local sparsity prior in , meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in can be understood as a "local" dictionary operating over each independent patch. This model extension is called convolutional sparse coding (CSC); it drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows for to be efficiently estimated via pursuit algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while operating in a local fashion. Besides its versatility in inverse problems, recent efforts have focused on the multi-layer version of the model and provided evidence of its reliability for recovering multiple underlying representations. Moreover, a tight connection between such a model and the well-established convolutional neural network (CNN) model was revealed, providing a new tool for a more rigorous understanding of its theoretical conditions. The convolutional sparse coding model provides a very efficient set of tools to solve a wide range of inverse problems, including image denoising, image inpainting, and image super-resolution. By imposing local sparsity constraints, it allows the global coding problem to be tackled efficiently by iteratively estimating disjoint patches and assembling them into a global signal. 
Furthermore, by adopting a multi-layer sparse model, which results from imposing the sparsity constraint on the signal's inherent representations themselves, the resulting "layered" pursuit algorithm keeps the strong uniqueness and stability conditions from the single-layer model. This extension also provides some interesting notions about the relation between its sparsity prior and the forward pass of the convolutional neural network, which helps one understand how the theoretical benefits of the CSC model can provide a sound mathematical meaning for the CNN structure. Sparse coding paradigm Basic concepts and models are presented to explain in detail the convolutional sparse representation framework. Because the sparsity constraint has been proposed under different models, a short description of them is presented to show its evolution up to the model of interest. Also included are the concepts of mutual coherence and the restricted isometry property, used to establish uniqueness and stability guarantees. Global sparse coding model Let a signal be expressed as a linear combination of a small number of atoms from a given dictionary . Alternatively, the signal can be expressed as , where corresponds to the sparse representation of , which selects the atoms to combine and their weights. Subsequently, given , the task of recovering from either the noise-free signal itself or an observation is called sparse coding. Considering the noise-free scenario, the coding problem is formulated as follows: The effect of the norm is to favor solutions with as many zero elements as possible. Furthermore, given an observation affected by bounded-energy noise: , the pursuit problem is reformulated as: Stability and uniqueness guarantees for the global sparse model Let the spark of be defined as the minimum number of linearly dependent columns: Then, from the triangle inequality, the sparsest vector satisfies: . Although the spark provides an upper bound, it is infeasible to compute in practical scenarios. Instead, let the mutual coherence be a measure of similarity between atoms in . Assuming -norm unit atoms, the mutual coherence of is defined as: , where are atoms. Based on this metric, it can be proven that the true sparse representation can be recovered if and only if . Similarly, under the presence of noise, an upper bound for the distance between the true sparse representation and its estimation can be established via the restricted isometry property (RIP). A k-RIP matrix with constant corresponds to: , where is the smallest number that satisfies the inequality for every . Then, assuming , it is guaranteed that . Solving such a general pursuit problem is a hard task if no structure is imposed on the dictionary . This implies learning large, highly overcomplete dictionaries, which is extremely expensive. Assuming such a burden has been met and a representative dictionary has been obtained for a given signal , typically based on prior information, can be estimated via several pursuit algorithms. Pursuit algorithms for the global sparse model Two basic methods for solving the global sparse coding problem are orthogonal matching pursuit (OMP) and basis pursuit (BP). OMP is a greedy algorithm that iteratively selects the atom best correlated with the residual between and the current estimation, followed by a least-squares projection onto the span of the atoms selected so far. Basis pursuit, on the other hand, is a more sophisticated approach that relaxes the original coding problem to a convex one, which can be solved as a linear program. 
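To make the greedy strategy concrete, the following sketch implements a minimal orthogonal matching pursuit in Python/NumPy. It is only an illustration under simplifying assumptions: the dictionary, the sparsity level k, and the toy signal are arbitrary placeholders rather than values taken from any of the works discussed here.

```python
import numpy as np

def omp(D, x, k):
    """Minimal orthogonal matching pursuit sketch.

    D : (n, m) dictionary whose columns are (approximately) unit-norm atoms
    x : (n,) signal to be coded
    k : number of atoms (sparsity level) to select
    Returns a coefficient vector with at most k non-zero entries.
    """
    n, m = D.shape
    residual = x.copy()
    support = []
    gamma = np.zeros(m)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        correlations = D.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        # least-squares projection onto the span of the atoms selected so far
        Ds = D[:, support]
        coeffs, *_ = np.linalg.lstsq(Ds, x, rcond=None)
        residual = x - Ds @ coeffs
    gamma[support] = coeffs
    return gamma

# toy usage: a random dictionary and a synthetic 3-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # normalise the atoms
true_gamma = np.zeros(256)
true_gamma[[3, 70, 200]] = [1.0, -0.5, 2.0]
x = D @ true_gamma
estimate = omp(D, x, k=3)
print(np.nonzero(estimate)[0])            # ideally recovers the support {3, 70, 200}
```

Basis pursuit replaces the greedy selection with a convex relaxation solved by standard optimization packages; for both approaches, the recovery guarantees above apply when the representation is sparse enough relative to the mutual coherence of the dictionary.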
Based on these algorithms, the global sparse coding model provides considerably loose bounds for the uniqueness and stability of . To overcome this, additional priors are imposed over to guarantee tighter bounds and uniqueness conditions. The reader is referred to (, section 2) for details regarding these properties. Convolutional sparse coding model A local prior is adopted such that each overlapping section of is sparse. Let be constructed from shifted versions of a local dictionary . Then, is formed by products between and local patches of . From the latter, can be re-expressed by disjoint sparse vectors : . Similarly, let be a set of consecutive vectors . Then, each disjoint segment in is expressed as: , where operator extracts overlapping patches of size starting at index . Thus, contains only nonzero columns. Hence, by introducing operator which exclusively preserves them: where is known as the stripe dictionary, which is independent of , and is called the i-th stripe. So, corresponds to a patch aggregation or convolutional interpretation: Where corresponds to the i-th atom from the local dictionary and is constructed by elements of patches : . Given the new dictionary structure, let the pseudo-norm be defined as: . Then, for the noise-free and noise-corrupted scenarios, the problem can be respectively reformulated as: Stability and uniqueness guarantees for the convolutional sparse model For the local approach, the mutual coherence satisfies: So, if a solution obeys , then it is the sparsest solution to the problem. Thus, under the local formulation, the same number of non-zeros is permitted for each stripe instead of for the full vector. Similar to the global model, the CSC problem is solved via the OMP and BP methods, the latter making use of the iterative shrinkage thresholding algorithm (ISTA) for splitting the pursuit into smaller problems. Based on the pseudo-norm, if a solution exists satisfying , then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support with a constant . The ERC is defined as: , where denotes the pseudo-inverse. Algorithm 1 shows the global pursuit method based on ISTA. Algorithm 1: 1D CSC via local iterative soft-thresholding. Input: : local dictionary, : observation, : regularization parameter, : step size for ISTA, tol: tolerance factor, maxiters: maximum number of iterations. (Initialize disjoint patches.) (Initialize residual patches.) Repeat (Coding along disjoint patches) (Patch aggregation) (Update residuals) Until tol or maxiters. Multi-layered convolutional sparse coding model By imposing the sparsity prior on the inherent structure of , strong conditions for a unique representation and feasible methods for estimating it are granted. Similarly, such a constraint can be applied to its representation itself, generating a cascade of sparse representations: Each code is defined by a few atoms of a given set of convolutional dictionaries. Based on these criteria, yet another extension called multi-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries can be efficiently designed, where sparse representations at each layer are guaranteed by imposing the sparsity prior over the dictionaries themselves. In other words, by considering dictionaries to be stride convolutional matrices, i.e. 
atoms of the local dictionaries shift elements instead of a single one, where corresponds to the number of channels in the previous layer, it is guaranteed that the norm of the representations along the layers is bounded. For example, given the dictionaries , the signal is modeled as , where is its sparse code, and is the sparse code of . Then, the estimation of each representation is formulated as an optimization problem for both the noise-free and noise-corrupted scenarios, respectively. Assuming : In what follows, theoretical guarantees for the uniqueness and stability of this extended model are described. Theorem 1: (Uniqueness of sparse representations) Consider a signal that satisfies the (ML-CSC) model for a set of convolutional dictionaries with mutual coherence . If the true sparse representations satisfy , then a solution to the problem will be its unique solution if the thresholds are chosen to satisfy: . Theorem 2: (Global stability of the noise-corrupted scenario) Consider a signal that satisfies the (ML-CSC) model for a set of convolutional dictionaries and is contaminated with noise , where , resulting in . If and , then the estimated representations satisfy the following: . Projection-based algorithms A simple approach for solving the ML-CSC problem, either via the or norms, is to compute inner products between and the dictionary atoms to identify the most representative ones. Such a projection is described as: which have closed-form solutions via the hard-thresholding and soft-thresholding algorithms, respectively. If a nonnegative constraint is also contemplated, the problem can be expressed via the norm as: whose closed-form solution corresponds to the soft nonnegative thresholding operator , where . Guarantees for the layered soft-thresholding approach are included in the Appendix (Section 6.2). Theorem 3: (Stable recovery of the multi-layered soft-thresholding algorithm) Consider a signal that satisfies the (ML-CSC) model for a set of convolutional dictionaries with mutual coherence and is contaminated with noise , where , resulting in . Denote by and the lowest and highest entries in . Let be the estimated sparse representations obtained for . If and is chosen according to: Then, has the same support as , and , for Connections to convolutional neural networks Recall the forward pass of the convolutional neural network model, used in both the training and inference steps. Let be its input and the filters at layer , which are followed by the rectified linear unit (ReLU) , for bias . Based on this elementary block, taking as an example, the CNN output can be expressed as: Finally, comparing the CNN algorithm and the layered thresholding approach for the nonnegative constraint, it is straightforward to show that both are equivalent: As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, one obtains a clearer understanding of the class of signals a CNN can recover, of the noise conditions under which an estimation can be accurately attained, and of how its structure can be modified to improve its theoretical properties. The reader is referred to (, section 5) for details regarding their connection. Pursuit algorithms for the multi-layer CSC model A crucial limitation of the forward pass is that it is unable to recover the unique solution of the DCP problem, whose existence has been demonstrated. 
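The equivalence noted above between the layered nonnegative soft-thresholding approach and the CNN forward pass can be checked numerically. The snippet below uses made-up random dictionaries, thresholds and dimensions purely for demonstration; it simply verifies that one layer of nonnegative soft-thresholding applied to the inner products with the dictionary equals a ReLU applied to a biased linear transform.

```python
import numpy as np

def soft_nonneg(z, beta):
    """Nonnegative soft-thresholding: elementwise max(z - beta, 0)."""
    return np.maximum(z - beta, 0.0)

def relu(z):
    return np.maximum(z, 0.0)

def layered_thresholding(x, dictionaries, thresholds):
    """Estimate the representations of an ML-CSC-style model layer by layer by
    thresholding the inner products of the previous estimate with each dictionary."""
    gamma, estimates = x, []
    for D, beta in zip(dictionaries, thresholds):
        gamma = soft_nonneg(D.T @ gamma, beta)
        estimates.append(gamma)
    return estimates

# toy two-layer example with random "dictionaries" (illustrative sizes only)
rng = np.random.default_rng(1)
D1 = rng.standard_normal((100, 80))
D2 = rng.standard_normal((80, 60))
x = rng.standard_normal(100)
betas = [0.1, 0.05]

reps = layered_thresholding(x, [D1, D2], betas)

# the same computation written as a CNN-style forward pass,
# with weights W_i = D_i^T and biases b_i = -beta_i
h1 = relu(D1.T @ x - betas[0])
h2 = relu(D2.T @ h1 - betas[1])
print(np.allclose(reps[1], h2))   # True: the two formulations coincide
```

Because this layered thresholding is exactly the forward pass, it inherits the limitation just described, which motivates the more robust layered pursuit discussed next.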
So, instead of using a thresholding approach at each layer, a full pursuit method is adopted, denominated layered basis pursuit (LBP). Considering the projection onto the ball, the following problem is proposed: where each layer is solved as an independent CSC problem, and is proportional to the noise level at each layer. Among the methods for solving the layered coding problem, ISTA is an efficient decoupling alternative. In what follows, a short summary of the guarantees for the LBP are established. Theorem 4: (Recovery guarantee) Consider a signal characterized by a set of sparse vectors , convolutional dictionaries and their corresponding mutual coherences . If , then the LBP algorithm is guaranteed to recover the sparse representations. Theorem 5: (Stability in the presence of noise) Consider the contaminated signal , where and is characterized by a set of sparse vectors and convolutional dictionaries . Let be solutions obtained via the LBP algorithm with parameters . If and , then: (i) The support of the solution is contained in that of , (ii) , and (iii) Any entry greater in absolute value than is guaranteed to be recovered. Applications of the convolutional sparse coding model: image inpainting As a practical example, an efficient image inpainting method for color images via the CSC model is shown. Consider the three-channel dictionary , where denotes the -th atom at channel , represents signal by a single cross-channel sparse representation , with stripes denoted as . Given an observation , where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: By means of ADMM, the cost function is decoupled into simpler sub-problems, allowing an efficient estimation. Algorithm 2 describes the procedure, where is the DFT representation of , the convolutional matrix for the term . Likewise, and correspond to the DFT representations of and , respectively, corresponds to the Soft-thresholding function with argument , and the norm is defined as the norm along the channel dimension followed by the norm along the spatial dimension . The reader is referred to (, Section II) for details on the ADMM implementation and the dictionary learning procedure. Algorithm 2: Color image inpainting via the convolutional sparse coding model. Input: : DFT of convolutional matrices , : Color observation, : Regularization parameter, : step sizes for ADMM, tol: tolerance factor, maxiters: maximum number of iterations. Repeat Until tol or maxiters. References External links SParse Optimization Research COde (SPORCO) Coding theory
Convolutional sparse coding
[ "Mathematics" ]
3,308
[ "Discrete mathematics", "Coding theory" ]
44,413,693
https://en.wikipedia.org/wiki/Design%20smell
In computer programming, a design smell is a structure in a design that indicates a violation of fundamental design principles, and which can negatively impact the project's quality. The origin of the term can be traced to the term "code smell" which was featured in the book Refactoring: Improving the Design of Existing Code by Martin Fowler. Details Different authors have defined the word "smell" in different ways: N. Moha et al.: "Code and design smells are poor solutions to recurring implementation and design problems." R. C. Martin: "Design smells are the odors of rotting software." Fowler: "Smells are certain structures in the code that suggest (sometimes they scream for) the possibility of refactoring." Design smells indicate the accumulated design debt (one of the prominent dimensions of technical debt). Bugs or unimplemented features are not accounted as design smells. Design smells arise from the poor design decisions that make the design fragile and difficult to maintain. It is a good practice to identify design smells in a software system and apply appropriate refactoring to eliminate it to avoid accumulation of technical debt. The context (characterized by various factors such as the problem at hand, design eco-system, and platform) plays an important role to decide whether a certain structure or decision should be considered as a design smell. Generally, it is appropriate to live with design smells due to constraints imposed by the context. Nevertheless, design smells should be tracked and managed as technical debt because they degrade the overall system quality over time. Common design smells Missing abstraction when clumps of data or encoded strings are used instead of creating an abstraction. Also known as "primitive obsession" and "data clumps". Multifaceted abstraction when an abstraction has multiple responsibilities assigned to it. Also known as "conceptualization abuse". Duplicate abstraction when two or more abstractions have identical names or implementation or both. Also known as "alternative classes with different interfaces" and "duplicate design artifacts". Deficient encapsulation when the declared accessibility of one or more members of an abstraction is more permissive than actually required. Unexploited encapsulation when client code uses explicit type checks (using chained if-else or switch statements that check for the type of the object) instead of exploiting the variation in types already encapsulated within a hierarchy. Broken modularization when data and/or methods that ideally should have been localized into a single abstraction are separated and spread across multiple abstractions. Insufficient modularization when an abstraction exists that has not been completely decomposed, and a further decomposition could reduce its size, implementation complexity, or both. Circular dependency. Cyclically dependent modularization when two or more abstractions depend on each other directly or indirectly (creating a tight coupling between the abstractions). Also known as "cyclic dependencies". Cyclic hierarchy when a supertype in a hierarchy depends on any of its subtypes. Also known as "inheritance/reference cycles". Unfactored hierarchy when there is unnecessary duplication among types in a hierarchy. Broken hierarchy when a supertype and its subtype conceptually do not share an “IS-A” relationship resulting in broken substitutability. Also known as "inappropriate use of inheritance" and "misapplying IS A". 
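As a hedged illustration of the first smell above, the short Python sketch below contrasts a "data clump" of loose primitives with a small abstraction that owns them; the class and function names are invented for this example and do not come from the sources cited in this article.

```python
from dataclasses import dataclass

# Smell ("missing abstraction" / "primitive obsession"):
# related primitives travel together as loose values.
def format_shipping_label_smelly(street: str, city: str, postcode: str, country: str) -> str:
    return f"{street}, {city} {postcode}, {country}"

# Refactored: the data clump is promoted to its own abstraction
# with a single, well-named responsibility.
@dataclass
class Address:
    street: str
    city: str
    postcode: str
    country: str

    def as_label(self) -> str:
        return f"{self.street}, {self.city} {self.postcode}, {self.country}"

def format_shipping_label(address: Address) -> str:
    return address.as_label()

print(format_shipping_label(Address("1 Main St", "Springfield", "12345", "USA")))
```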
See also Anti-pattern Software rot References Computer programming folklore Software engineering folklore Odor
Design smell
[ "Engineering" ]
698
[ "Software engineering", "Software engineering folklore" ]
44,416,015
https://en.wikipedia.org/wiki/Magnetic%20skyrmion
In physics, magnetic skyrmions (occasionally described as 'vortices,' or 'vortex-like' configurations) are statically stable solitons which have been predicted theoretically and observed experimentally in condensed matter systems. Magnetic skyrmions can be formed in magnetic materials in their 'bulk' such as in manganese monosilicide (MnSi), or in magnetic thin films. They can be achiral, or chiral (Fig. 1 a and b are both chiral skyrmions) in nature, and may exist both as dynamic excitations or stable or metastable states. Although the broad lines defining magnetic skyrmions have been established de facto, there exist a variety of interpretations with subtle differences. Most descriptions include the notion of topology – a categorization of shapes and the way in which an object is laid out in space – using a continuous-field approximation as defined in micromagnetics. Descriptions generally specify a non-zero, integer value of the topological index, (not to be confused with the chemistry meaning of 'topological index'). This value is sometimes also referred to as the winding number, the topological charge (although it is unrelated to 'charge' in the electrical sense), the topological quantum number (although it is unrelated to quantum mechanics or quantum mechanical phenomena, notwithstanding the quantization of the index values), or more loosely as the “skyrmion number.” The topological index of the field can be described mathematically as where is the topological index, is the unit vector in the direction of the local magnetization within the magnetic thin, ultra-thin or bulk film, and the integral is taken over a two-dimensional space. (A generalization to a three-dimensional space is possible). Passing to spherical coordinates for the space ( ) and for the magnetisation ( ), one can understand the meaning of the skyrmion number. In skyrmion configurations the spatial dependence of the magnetisation can be simplified by setting the perpendicular magnetic variable independent of the in-plane angle () and the in-plane magnetic variable independent of the radius ( ). Then the topological skyrmion number reads: where p describes the magnetisation direction in the origin (p=1 (−1) for ) and W is the winding number. Considering the same uniform magnetisation, i.e. the same p value, the winding number allows to define the skyrmion () with a positive winding number and the antiskyrmion with a negative winding number and thus a topological charge opposite to the one of the skyrmion. What this equation describes physically is a configuration in which the spins in a magnetic film are all aligned orthonormal to the plane of the film, with the exception of those in one specific region, where the spins progressively turn over to an orientation that is perpendicular to the plane of the film but anti-parallel to those in the rest of the plane. Assuming 2D isotropy, the free energy of such a configuration is minimized by relaxation towards a state exhibiting circular symmetry, resulting in the configuration illustrated schematically (for a two dimensional skyrmion) in figure 1. In one dimension, the distinction between the progression of magnetization in a 'skyrmionic' pair of domain walls, and the progression of magnetization in a topologically trivial pair of magnetic domain walls, is illustrated in figure 2. Considering this one dimensional case is equivalent to considering a horizontal cut across the diameter of a 2-dimensional hedgehog skyrmion (fig. 
1(a)) and looking at the progression of the local spin orientations. It is worth observing that there are two different configurations which satisfy the topological index criterion stated above. The distinction between these can be made clear by considering a horizontal cut across both of the skyrmions illustrated in figure 1, and looking at the progression of the local spin orientations. In the case of fig. 1(a) the progression of magnetization across the diameter is cycloidal. This type of skyrmion is known as a hedgehog skyrmion. In the case of fig. 1(b), the progression of magnetization is helical, giving rise to what is often called a vortex skyrmion. Stability The skyrmion magnetic configuration is predicted to be stable because the atomic spins which are oriented opposite those of the surrounding thin-film cannot ‘flip around’ to align themselves with the rest of the atoms in the film, without overcoming an energy barrier. This energy barrier is often ambiguously described as arising from ‘topological protection.’ (See Topological stability vs. energy stability). Depending on the magnetic interactions existing in a given system, the skyrmion topology can be a stable, meta-stable, or unstable solution when one minimizes the system's free energy. Theoretical solutions exist for both isolated skyrmions and skyrmion lattices. However, since the stability and behavioral attributes of skyrmions can vary significantly based on the type of interactions in a system, the word 'skyrmion' can refer to substantially different magnetic objects. For this reason, some physicists choose to reserve use of the term 'skyrmion' to describe magnetic objects with a specific set of stability properties, and arising from a specific set of magnetic interactions. Definitions In general, definitions of magnetic skyrmions fall into 2 categories. Which category one chooses to refer to depends largely on the emphasis one wishes to place on different qualities. A first category is based strictly on topology. This definition may seem appropriate when considering topology-dependent properties of magnetic objects, such as their dynamical behavior. A second category emphasizes the intrinsic energy stability of certain solitonic magnetic objects. In this case, the energy stability is often (but not necessarily) associated with a form of chiral interaction, which might originate from the Dzyaloshinskii-Moriya interaction (DMI), or spiral magnetism originating from double-exchange mechanism (DE) or competing Heisenberg exchange interaction. When expressed mathematically, definitions in the first category state that magnetic spin-textures with a spin-progression satisfying the condition: where is an integer ≥1, can be qualified as magnetic skyrmions. Definitions in the second category similarly stipulate that a magnetic skyrmion exhibits a spin-texture with a spin-progression satisfying the condition: where is an integer ≥1, but further suggest that there must exist an energy term that stabilizes the spin-structure into a localized magnetic soliton whose energy is invariant by translation of the soliton's position in space. (The spatial energy invariance condition constitutes a way to rule out structures stabilized by locally-acting factors external to the system, such as confinement arising from the geometry of a specific nanostructure). The first set of definitions for magnetic skyrmions is a superset of the second, in that it places less stringent requirements on the properties of a magnetic spin texture. 
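In a discretised micromagnetic simulation, the index used by the first category of definitions is typically evaluated numerically. The following sketch approximates the continuum integral n = (1/4π) ∬ m · (∂m/∂x × ∂m/∂y) dx dy by finite differences on a grid; the hedgehog profile, grid size and skyrmion radius are arbitrary illustrative choices rather than parameters from the literature, and the continuum description is only meaningful well above atomic length scales.

```python
import numpy as np

def skyrmion_number(m):
    """Approximate topological index of a unit-vector field m of shape (3, Ny, Nx),
    using a finite-difference version of (1/4*pi) * sum of m . (dm/dx x dm/dy)."""
    dmdx = np.gradient(m, axis=2)          # derivative along x (grid spacing = 1)
    dmdy = np.gradient(m, axis=1)          # derivative along y
    density = np.einsum('iyx,iyx->yx', m, np.cross(dmdx, dmdy, axis=0))
    return density.sum() / (4.0 * np.pi)

# illustrative Neel-type ("hedgehog") skyrmion: core up, background down
N, R = 200, 30.0                           # grid points per side, skyrmion radius (arbitrary units)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2].astype(float)
r = np.hypot(x, y)
phi = np.arctan2(y, x)
theta = np.pi * np.clip(r / R, 0.0, 1.0)   # polar angle of m rotates from 0 at the core to pi outside
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])

print(skyrmion_number(m))   # close to +1 here; the sign flips if the core polarity or winding is reversed
```

This numerically evaluated index is what the first, purely topological, category of definitions relies on; the second category additionally asks how such a configuration is stabilized energetically.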
This definition finds a raison d'être because topology itself determines some properties of magnetic spin textures, such as their dynamical responses to excitations. The second category of definitions may be preferred to underscore intrinsic stability qualities of some magnetic configurations. These qualities arise from stabilizing interactions which may be described in several mathematical ways, including for example by using higher-order spatial derivative terms such as 2nd or 4th order terms to describe a field, (the mechanism originally proposed in particle physics by Tony Skyrme for a continuous field model), or 1st order derivative functionals known as Lifshitz invariants—energy contributions linear in first spatial derivatives of the magnetization—as later proposed by Alexei Bogdanov. (An example of such a 1st order functional is the Dzyaloshinskii-Moriya Interaction). In all cases the energy term acts to introduce topologically non-trivial solutions to a system of partial differential equations. In other words, the energy term acts to render possible the existence of a topologically non-trivial magnetic configuration that is confined to a finite, localized region, and possesses an intrinsic stability or meta-stability relative to a trivial homogeneously magnetized ground-state — i.e. a magnetic soliton. An example hamiltonian containing one set of energy terms that allows for the existence of skyrmions of the second category is the following: where the first, second, third and fourth sums correspond to the exchange, Dzyaloshinskii-Moriya, Zeeman (responsible for the "usual" torques and forces observed on a magnetic dipole moment in a magnetic field), and magnetic Anisotropy (typically magnetocrystalline anisotropy) interaction energies respectively. Note that equation (2) does not contain a term for the dipolar, or 'demagnetizing' interaction between atoms. As in eq. (2), the dipolar interaction is sometimes omitted in simulations of ultra-thin two-dimensional magnetic films, because it tends to contribute a minor effect in comparison with the others. Braided skyrmion tubes have been observed in FeGe. If a skyrmion tube has finite length with Bloch points at either end, it has been called a toron or a dipole string. A bound state of a skyrmion and a vortex of the XY-model, is in fact a type of screw dislocation of helimagnetic order in chiral magnets. Role of the topology Topological stability vs. energetic stability A non-trivial topology does not in itself imply energetic stability. There is in fact no necessary relation between topology and energetic stability. Hence, one must be careful not to confuse ‘topological stability,’ which is a mathematical concept, with energy stability in real physical systems. Topological stability refers to the idea that in order for a system described by a continuous field to transition from one topological state to another, a rupture must occur in the continuous field, i.e. a discontinuity must be produced. For example, if one wishes to transform a flexible balloon doughnut (torus) into an ordinary spherical balloon, it is necessary to introduce a rupture on some part of the balloon doughnut's surface. Mathematically, the balloon doughnut would be described as 'topologically stable.' However, in physics, the free energy required to introduce a rupture enabling the transition of a system from one ‘topological’ state to another is always finite. 
For example, it is possible to turn a rubber balloon into a flat piece of rubber by poking it with a needle (and popping it). Thus, while a physical system can be approximately described using the mathematical concept of topology, attributes such as energetic stability are dependent on the system's parameters—the strength of the rubber in the example above—not the topology per se. In order to draw a meaningful parallel between the concept of topological stability and the energy stability of a system, the analogy must necessarily be accompanied by the introduction of a non-zero phenomenological ‘field rigidity’ to account for the finite energy needed to rupture the field's topology. Modeling and then integrating this field rigidity can be likened to calculating a breakdown energy-density of the field. These considerations suggest that what is often referred to as ‘topological protection,’ or a 'topological barrier,' should more accurately be referred to as a 'topology-related energy barrier,' though this terminology is somewhat cumbersome. A quantitative evaluation of such a topological barrier can be obtained by extracting the critical magnetic configuration when the topological number changes during the dynamical process of a skyrmion creation event. Applying the topological charge defined on a lattice, the barrier height is theoretically shown to be proportional to the exchange stiffness. Further observations It is important to recognize that magnetic =1 structures are in fact not stabilized by virtue of their ‘topology,’ but rather by the field rigidity parameters that characterize a given system. However, this does not suggest that topology plays an insignificant role with respect to energetic stability. On the contrary, topology may create the possibility for certain stable magnetic states to exist, which otherwise could not. However, topology in itself does not guarantee the stability of a state. In order for a state to have stability associated with its topology, it must be further accompanied by a non-zero field rigidity. Thus, topology can be considered a necessary but insufficient condition for the existence of certain classes of stable objects. While this distinction may at first seem pedantic, its physical motivation becomes apparent when considering two magnetic spin configurations of identical topology =1, but differing in only one magnetic interaction. For example, we may consider one spin configuration with, and one configuration without, magnetocrystalline anisotropy oriented perpendicular to the plane of an ultra-thin magnetic film. In this case, the =1 configuration that is influenced by the magnetocrystalline anisotropy will be more energetically stable than the =1 configuration without it, in spite of identical topologies. This is because the magnetocrystalline anisotropy contributes to the field rigidity, and it is the field rigidity, not the topology, that confers the notable energy barrier protecting the topological state. Finally, it is interesting to observe that in some cases, it is not the topology which helps =1 configurations to be stable, but rather the converse, as it is the stability of the field (which depends on the relevant interactions) which favors the =1 topology. This is to say that the most stable energy configuration of the field constituents (in this case, magnetic atoms) may in fact be to arrange into a topology which can be described as an =1 topology. 
Such is the case for magnetic skyrmions stabilized by the Dzyaloshinskii–Moriya interaction, which causes adjacent magnetic spins to 'prefer' having a fixed angle between each other (energetically speaking). Note that from a point of view of practical applications this does not alter the usefulness of developing systems with Dzyaloshinskii–Moriya interaction, as such applications depend strictly on the topology [of the skyrmions, or lack thereof], which encodes the information, and not the underlying mechanisms which stabilize the necessary topology. These examples illustrate why use of the terms 'topological protection' or 'topological stability' interchangeably with the concept of energy stability is misleading, and is liable to lead to fundamental confusion. Limitations of applying the concept of topology One must exercise caution when making inferences based on topology-related energy barriers, as it can be misleading to apply the notion of topology—a description which only rigorously applies to continuous fields— to infer the energetic stability of structures existing in discontinuous systems. Giving way to this temptation is sometimes problematic in physics, where fields which are approximated as continuous become discontinuous below certain size-scales. Such is the case for example when the concept of topology is associated with the micromagnetic model—which approximates the magnetic texture of a system as a continuous field—and then applied indiscriminately without consideration of the model's physical limitations (i.e. that it ceases to be valid at atomic dimensions). In practice, treating the spin textures of magnetic materials as vectors of a continuous field model becomes inaccurate at size-scales on the order of < 2 nm, due to the discretization of the atomic lattice. Thus, it is not meaningful to speak of magnetic skyrmions below these size-scales. Practical applications Magnetic skyrmions are anticipated to allow for the existence of discrete magnetic states which are significantly more energetically stable (per unit volume) than their single-domain counterparts. For this reason, it is envisioned that magnetic skyrmions may be used as bits to store information in future memory and logic devices, where the state of the bit is encoded by the existence or non-existence of the magnetic skyrmion. The dynamical magnetic skyrmion exhibits strong breathing which opens the avenue for skyrmion-based microwave applications. Simulations also indicate that the position of magnetic skyrmions within a film/nanotrack may be manipulated using spin currents or spin waves. Thus, magnetic skyrmions also provide promising candidates for future racetrack-type in-memory logic computing technologies. References Quasiparticles Magnetism
Magnetic skyrmion
[ "Physics", "Materials_science" ]
3,360
[ "Quasiparticles", "Subatomic particles", "Condensed matter physics", "Matter" ]
44,418,559
https://en.wikipedia.org/wiki/DNA%20polymerase%20epsilon
DNA polymerase epsilon is a member of the DNA polymerase family of enzymes found in eukaryotes. It is composed of the following four subunits: POLE (central catalytic unit), POLE2 (subunit 2), POLE3 (subunit 3), and POLE4 (subunit 4). Recent evidence suggests that it plays a major role in leading strand DNA synthesis and in nucleotide and base excision repair. Research has been conducted to study nucleotide excision repair DNA synthesis by DNA polymerase epsilon in the presence of PCNA (proliferating cell nuclear antigen), RFC (replication factor C) and RPA (replication protein A). Either DNA polymerase epsilon or DNA polymerase delta, along with DNA ligase, can be used to repair UV-damaged DNA. However, DNA polymerase delta is found to require the presence of both RFC and PCNA in order to carry out this repair, and it produces only a small amount of ligated DNA products. DNA polymerase epsilon proves to be better suited for nucleotide excision repair: it is independent of both PCNA and RFC, and produces mostly ligated DNA products. There is, however, one condition under which DNA polymerase epsilon requires PCNA and RFC: nucleotide excision repair in the presence of the single-strand binding protein RPA. In that case, PCNA and RFC function as an anchor and direct DNA polymerase epsilon onto the DNA template. References Polymerase chain reaction DNA replication DNA repair DNA-binding proteins
DNA polymerase epsilon
[ "Chemistry", "Biology" ]
306
[ "Biochemistry methods", "Genetics techniques", "DNA repair", "Polymerase chain reaction", "DNA replication", "Molecular genetics", "Cellular processes" ]
44,422,772
https://en.wikipedia.org/wiki/History%20of%20radio%20receivers
Radio waves were first identified in German physicist Heinrich Hertz's 1887 series of experiments to prove James Clerk Maxwell's electromagnetic theory. Hertz used spark-excited dipole antennas to generate the waves and micrometer spark gaps attached to dipole and loop antennas to detect them. These precursor radio receivers were primitive devices, more accurately described as radio wave "sensors" or "detectors", as they could only receive radio waves within about 100 feet of the transmitter, and were not used for communication but instead as laboratory instruments in scientific experiments and engineering demonstrations. Spark era The first radio transmitters, used during the initial three decades of radio from 1887 to 1917, a period called the spark era, were spark gap transmitters which generated radio waves by discharging a capacitance through an electric spark. Each spark produced a transient pulse of radio waves which decreased rapidly to zero. These damped waves could not be modulated to carry sound, as in modern AM and FM transmission. So spark transmitters could not transmit sound, and instead transmitted information by radiotelegraphy. The transmitter was switched on and off rapidly by the operator using a telegraph key, creating different length pulses of damped radio waves ("dots" and "dashes") to spell out text messages in Morse code. Therefore, the first radio receivers did not have to extract an audio signal from the radio wave like modern receivers, but just detected the presence of the radio signal, and produced a sound during the "dots" and "dashes". The device which did this was called a "detector". Since there were no amplifying devices at this time, the sensitivity of the receiver mostly depended on the detector. Many different detector devices were tried. Radio receivers during the spark era consisted of these parts: An antenna, to intercept the radio waves and convert them to tiny radio frequency electric currents. A tuned circuit, consisting of a capacitor connected to a coil of wire, which acted as a bandpass filter to select the desired signal out of all the signals picked up by the antenna. Either the capacitor or coil was adjustable to tune the receiver to the frequency of different transmitters. The earliest receivers, before 1897, did not have tuned circuits, they responded to all radio signals picked up by their antennas, so they had little frequency-discriminating ability and received any transmitter in their vicinity. Most receivers used a pair of tuned circuits with their coils magnetically coupled, called a resonant transformer (oscillation transformer) or "loose coupler". A detector, which produced a pulse of DC current for each damped wave received. An indicating device such as an earphone, which converted the pulses of current into sound waves. The first receivers used an electric bell instead. Later receivers in commercial wireless systems used a Morse siphon recorder, which consisted of an ink pen mounted on a needle swung by an electromagnet (a galvanometer) which drew a line on a moving paper tape. Each string of damped waves constituting a Morse "dot" or "dash" caused the needle to swing over, creating a displacement of the line, which could be read off the tape. With such an automated receiver a radio operator did not have to continuously monitor the receiver. 
The signal from the spark gap transmitter consisted of damped waves repeated at an audio frequency rate, from 120 to perhaps 4000 per second, so in the earphone the signal sounded like a musical tone or buzz, and the Morse code "dots" and "dashes" sounded like beeps. The first person to use radio waves for communication was Guglielmo Marconi. Marconi invented little himself, but he was first to believe that radio could be a practical communication medium, and singlehandedly developed the first wireless telegraphy systems, transmitters and receivers, beginning in 1894–5, mainly by improving technology invented by others. Oliver Lodge and Alexander Popov were also experimenting with similar radio wave receiving apparatus at the same time in 1894–5, but they are not known to have transmitted Morse code during this period, just strings of random pulses. Therefore, Marconi is usually given credit for building the first radio receivers. Coherer receiver The first radio receivers invented by Marconi, Oliver Lodge and Alexander Popov in 1894–5 used a primitive radio wave detector called a coherer, invented in 1890 by Edouard Branly and improved by Lodge and Marconi. The coherer was a glass tube with metal electrodes at each end, with loose metal powder between the electrodes. It initially had a high resistance. When a radio frequency voltage was applied to the electrodes, its resistance dropped and it conducted electricity. In the receiver the coherer was connected directly between the antenna and ground. In addition to the antenna, the coherer was connected in a DC circuit with a battery and relay. When the incoming radio wave reduced the resistance of the coherer, the current from the battery flowed through it, turning on the relay to ring a bell or make a mark on a paper tape in a siphon recorder. In order to restore the coherer to its previous nonconducting state to receive the next pulse of radio waves, it had to be tapped mechanically to disturb the metal particles. This was done by a "decoherer", a clapper which struck the tube, operated by an electromagnet powered by the relay. The coherer is an obscure antique device, and even today there is some uncertainty about the exact physical mechanism by which the various types worked. However it can be seen that it was essentially a bistable device, a radio-wave-operated switch, and so it did not have the ability to rectify the radio wave to demodulate the later amplitude modulated (AM) radio transmissions that carried sound. In a long series of experiments Marconi found that by using an elevated wire monopole antenna instead of Hertz's dipole antennas he could transmit longer distances, beyond the curve of the Earth, demonstrating that radio was not just a laboratory curiosity but a commercially viable communication method. This culminated in his historic transatlantic wireless transmission on December 12, 1901, from Poldhu, Cornwall to St. John's, Newfoundland, a distance of 3500 km (2200 miles), which was received by a coherer. However the usual range of coherer receivers even with the powerful transmitters of this era was limited to a few hundred miles. The coherer remained the dominant detector used in early radio receivers for about 10 years, until replaced by the crystal detector and electrolytic detector around 1907. In spite of much development work, it was a very crude unsatisfactory device. 
It was not very sensitive, and also responded to impulsive radio noise (RFI), such as nearby lights being switched on or off, as well as to the intended signal. Due to the cumbersome mechanical "tapping back" mechanism it was limited to a data rate of about 12-15 words per minute of Morse code, while a spark-gap transmitter could transmit Morse at up to 100 WPM with a paper tape machine. Other early detectors The coherer's poor performance motivated a great deal of research to find better radio wave detectors, and many were invented. Some strange devices were tried; researchers experimented with using frog legs and even a human brain from a cadaver as detectors. By the first years of the 20th century, experiments in using amplitude modulation (AM) to transmit sound by radio (radiotelephony) were being made. So a second goal of detector research was to find detectors that could demodulate an AM signal, extracting the audio (sound) signal from the radio carrier wave. It was found by trial and error that this could be done by a detector that exhibited "asymmetrical conduction"; a device that conducted current in one direction but not in the other. This rectified the alternating current radio signal, removing one side of the carrier cycles, leaving a pulsing DC current whose amplitude varied with the audio modulation signal. When applied to an earphone this would reproduce the transmitted sound. Below are the detectors that saw wide use before vacuum tubes took over around 1920. All except the magnetic detector could rectify and therefore receive AM signals: Magnetic detector - Developed by Guglielmo Marconi in 1902 from a method invented by Ernest Rutherford and used by the Marconi Co. until it adopted the Audion vacuum tube around 1912, this was a mechanical device consisting of an endless band of iron wires which passed between two pulleys turned by a windup mechanism. The iron wires passed through a coil of fine wire attached to the antenna, in a magnetic field created by two magnets. The hysteresis of the iron induced a pulse of current in a sensor coil each time a radio signal passed through the exciting coil. The magnetic detector was used on shipboard receivers due to its insensitivity to vibration. One was part of the wireless station of the RMS Titanic which was used to summon help during its famous 15 April 1912 sinking. Electrolytic detector ("liquid barretter") - Invented in 1903 by Reginald Fessenden, this consisted of a thin silver-plated platinum wire enclosed in a glass rod, with the tip making contact with the surface of a cup of nitric acid. The electrolytic action caused current to be conducted in only one direction. The detector was used until about 1910. Electrolytic detectors that Fessenden had installed on US Navy ships received the first AM radio broadcast on Christmas Eve, 1906, an evening of Christmas music transmitted by Fessenden using his new alternator transmitter. Thermionic diode (Fleming valve) - The first vacuum tube, invented in 1904 by John Ambrose Fleming, consisted of an evacuated glass bulb containing two electrodes: a cathode consisting of a hot wire filament similar to that in an incandescent light bulb, and a metal plate anode. Fleming, a consultant to Marconi, invented the valve as a more sensitive detector for transatlantic wireless reception. The filament was heated by a separate current through it and emitted electrons into the tube by thermionic emission, an effect which had been discovered by Thomas Edison. 
The radio signal was applied between the cathode and anode. When the anode was positive, a current of electrons flowed from the cathode to the anode, but when the anode was negative the electrons were repelled and no current flowed. The Fleming valve was used to a limited extent but was not popular because it was expensive, had limited filament life, and was not as sensitive as electrolytic or crystal detectors. Crystal detector (cat's whisker detector) - invented around 1904–1906 by Henry H. C. Dunwoody and Greenleaf Whittier Pickard, based on Karl Ferdinand Braun's 1874 discovery of "asymmetrical conduction" in crystals, these were the most successful and widely used detectors before the vacuum tube era and gave their name to the crystal radio receiver (below). One of the first semiconductor electronic devices, a crystal detector consisted of a pea-sized pebble of a crystalline semiconductor mineral such as galena (lead sulfide) whose surface was touched by a fine springy metal wire mounted on an adjustable arm. This functioned as a primitive diode which conducted electric current in only one direction. In addition to their use in crystal radios, carborundum crystal detectors were also used in some early vacuum tube radios because they were more sensitive than the vacuum tube grid-leak detector. During the vacuum tube era, the term "detector" changed from meaning a radio wave detector to mean a demodulator, a device that could extract the audio modulation signal from a radio signal. That is its meaning today. Tuning "Tuning" means adjusting the frequency of the receiver to the frequency of the desired radio transmission. The first receivers had no tuned circuit, the detector was connected directly between the antenna and ground. Due to the lack of any frequency selective components besides the antenna, the bandwidth of the receiver was equal to the broad bandwidth of the antenna. This was acceptable and even necessary because the first Hertzian spark transmitters also lacked a tuned circuit. Due to the impulsive nature of the spark, the energy of the radio waves was spread over a very wide band of frequencies. To receive enough energy from this wideband signal the receiver had to have a wide bandwidth also. When more than one spark transmitter was radiating in a given area, their frequencies overlapped, so their signals interfered with each other, resulting in garbled reception. Some method was needed to allow the receiver to select which transmitter's signal to receive. Multiple wavelengths produced by a poorly tuned transmitter caused the signal to "dampen", or die down, greatly reducing the power and range of transmission. In 1892, William Crookes gave a lecture on radio in which he suggested using resonance to reduce the bandwidth of transmitters and receivers. Different transmitters could then be "tuned" to transmit on different frequencies so they did not interfere. The receiver would also have a resonant circuit (tuned circuit), and could receive a particular transmission by "tuning" its resonant circuit to the same frequency as the transmitter, analogously to tuning a musical instrument to resonance with another. This is the system used in all modern radio. Tuning was used in Hertz's original experiments and practical application of tuning showed up in the early to mid 1890s in wireless systems not specifically designed for radio communication. 
Nikola Tesla's March 1893 lecture demonstrating the wireless transmission of power for lighting (mainly by what he thought was ground conduction) included elements of tuning. The wireless lighting system consisted of a spark-excited grounded resonant transformer with a wire antenna which transmitted power across the room to another resonant transformer tuned to the frequency of the transmitter, which lighted a Geissler tube. Use of tuning in free space "Hertzian waves" (radio) was explained and demonstrated in Oliver Lodge's 1894 lectures on Hertz's work. At the time Lodge was demonstrating the physics and optical qualities of radio waves instead of attempting to build a communication system but he would go on to develop methods (patented in 1897) of tuning radio (what he called "syntony"), including using variable inductance to tune antennas. By 1897 the advantages of tuned systems had become clear, and Marconi and the other wireless researchers had incorporated tuned circuits, consisting of capacitors and inductors connected together, into their transmitters and receivers. The tuned circuit acted like an electrical analog of a tuning fork. It had a high impedance at its resonant frequency, but a low impedance at all other frequencies. Connected between the antenna and the detector it served as a bandpass filter, passing the signal of the desired station to the detector, but routing all other signals to ground. The frequency f of the station received was determined by the capacitance C and inductance L in the tuned circuit: f = 1/(2π√(LC)). Inductive coupling In order to reject radio noise and interference from other transmitters near in frequency to the desired station, the bandpass filter (tuned circuit) in the receiver has to have a narrow bandwidth, allowing only a narrow band of frequencies through. The form of bandpass filter that was used in the first receivers, which has continued to be used in receivers until recently, was the double-tuned inductively-coupled circuit, or resonant transformer (oscillation transformer or RF transformer). The antenna and ground were connected to a coil of wire, which was magnetically coupled to a second coil with a capacitor across it, which was connected to the detector. The RF alternating current from the antenna through the primary coil created a magnetic field which induced a current in the secondary coil which fed the detector. Both primary and secondary were tuned circuits; the primary coil resonated with the capacitance of the antenna, while the secondary coil resonated with the capacitor across it. Both were adjusted to the same resonant frequency. This circuit had two advantages. One was that by using the correct turns ratio, the impedance of the antenna could be matched to the impedance of the receiver, to transfer maximum RF power to the receiver. Impedance matching was important to achieve maximum receiving range in the unamplified receivers of this era. The coils usually had taps which could be selected by a multiposition switch. The second advantage was that due to "loose coupling" it had a much narrower bandwidth than a simple tuned circuit, and the bandwidth could be adjusted. Unlike in an ordinary transformer, the two coils were "loosely coupled"; separated physically so not all the magnetic field from the primary passed through the secondary, reducing the mutual inductance. This gave the coupled tuned circuits much "sharper" tuning, a narrower bandwidth than a single tuned circuit. 
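As a rough worked example of the resonance formula above (the component values are chosen purely for illustration and do not come from the article's sources), a tuned circuit with L = 200 µH and C = 250 pF resonates at f = 1/(2π√(LC)) = 1/(2π√(200×10⁻⁶ H × 250×10⁻¹² F)) ≈ 712 kHz, near the middle of the AM broadcast band. Decreasing either the inductance or the capacitance raises the resonant frequency, which is how the operator retuned the receiver to a different station.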
In the "Navy type" loose coupler (see picture), widely used with crystal receivers, the smaller secondary coil was mounted on a rack which could be slid in or out of the primary coil, to vary the mutual inductance between the coils. When the operator encountered an interfering signal at a nearby frequency, the secondary could be slid further out of the primary, reducing the coupling, which narrowed the bandwidth, rejecting the interfering signal. A disadvantage was that all three adjustments in the loose coupler - primary tuning, secondary tuning, and coupling - were interactive; changing one changed the others. So tuning in a new station was a process of successive adjustments. Selectivity became more important as spark transmitters were replaced by continuous wave transmitters which transmitted on a narrow band of frequencies, and broadcasting led to a proliferation of closely spaced radio stations crowding the radio spectrum. Resonant transformers continued to be used as the bandpass filter in vacuum tube radios, and new forms such as the variometer were invented. Another advantage of the double-tuned transformer for AM reception was that when properly adjusted it had a "flat top" frequency response curve as opposed to the "peaked" response of a single tuned circuit. This allowed it to pass the sidebands of AM modulation on either side of the carrier with little distortion, unlike a single tuned circuit which attenuated the higher audio frequencies. Until recently the bandpass filters in the superheterodyne circuit used in all modern receivers were made with resonant transformers, called IF transformers. Patent disputes Marconi's initial radio system had relatively poor tuning, limiting its range and adding to interference. To overcome this drawback he developed a four circuit system with tuned coils in "syntony" at both the transmitters and receivers. His 1900 British #7,777 (four sevens) patent for tuning, filed in April 1900 and granted a year later, opened the door to patent disputes since it infringed on the Syntonic patents of Oliver Lodge, first filed in May 1897, as well as patents filed by Ferdinand Braun. Marconi was able to obtain patents in the UK and France, but the US version of his tuned four circuit patent, filed in November 1900, was initially rejected based on it being anticipated by Lodge's tuning system, and refiled versions were rejected because of the prior patents by Braun and Lodge. A further clarification and re-submission was rejected because it infringed on parts of two prior patents Tesla had obtained for his wireless power transmission system. Marconi's lawyers managed to get a resubmitted patent reconsidered by another examiner, who initially rejected it due to a pre-existing John Stone Stone tuning patent, but it was finally approved in June 1904 based on it having a unique system of variable inductance tuning that was different from Stone, who tuned by varying the length of the antenna. When Lodge's Syntonic patent was extended in 1911 for another 7 years, the Marconi Company agreed to settle that patent dispute, purchasing Lodge's radio company with its patent in 1912, giving them the priority patent they needed. Other patent disputes would crop up over the years, including a 1943 US Supreme Court ruling on the Marconi Company's ability to sue the US government over patent infringement during World War I. 
The Court rejected the Marconi Company's suit, saying they could not sue for patent infringement when their own patents did not seem to have priority over the patents of Lodge, Stone, and Tesla. Crystal radio receiver Although it was invented in 1904 in the wireless telegraphy era, the crystal radio receiver could also rectify AM transmissions and served as a bridge to the broadcast era. In addition to being the main type used in commercial stations during the wireless telegraphy era, it was the first receiver to be used widely by the public. During the first two decades of the 20th century, as radio stations began to transmit in AM voice (radiotelephony) instead of radiotelegraphy, radio listening became a popular hobby, and the crystal was the simplest, cheapest detector. The millions of people who purchased or built these inexpensive, reliable receivers created the mass listening audience for the first radio broadcasts, which began around 1920. By the late 1920s the crystal receiver was superseded by vacuum tube receivers and became commercially obsolete. However, it continued to be used by youth and the poor until World War II. Today these simple radio receivers are constructed by students as educational science projects. The crystal radio used a cat's whisker detector, invented by Henry H. C. Dunwoody and Greenleaf Whittier Pickard in 1904, to extract the audio from the radio frequency signal. It consisted of a mineral crystal, usually galena, which was lightly touched by a fine springy wire (the "cat whisker") on an adjustable arm. The resulting crude semiconductor junction functioned as a Schottky barrier diode, conducting in only one direction. Only particular sites on the crystal surface worked as detector junctions, and the junction could be disrupted by the slightest vibration. So a usable site was found by trial and error before each use; the operator would drag the cat's whisker across the crystal until the radio began functioning. Frederick Seitz, a later semiconductor researcher, wrote: Such variability, bordering on what seemed the mystical, plagued the early history of crystal detectors and caused many of the vacuum tube experts of a later generation to regard the art of crystal rectification as being close to disreputable. The crystal radio was unamplified and ran off the power of the radio waves received from the radio station, so it had to be listened to with earphones; it could not drive a loudspeaker. It required a long wire antenna, and its sensitivity depended on how large the antenna was. During the wireless era it was used in commercial and military longwave stations with huge antennas to receive long distance radiotelegraphy traffic, even including transatlantic traffic. However, when used to receive broadcast stations a typical home crystal set had a more limited range of about 25 miles. In sophisticated crystal radios the "loose coupler" inductively coupled tuned circuit was used to increase the Q. However, it still had poor selectivity compared to modern receivers. Heterodyne receiver and BFO Around 1905, continuous wave (CW) transmitters began to replace spark transmitters for radiotelegraphy because they had much greater range. The first continuous wave transmitters were the Poulsen arc invented in 1904 and the Alexanderson alternator developed 1906–1910, which were replaced by vacuum tube transmitters beginning around 1920. The continuous wave radiotelegraphy signals produced by these transmitters required a different method of reception. 
The radiotelegraphy signals produced by spark gap transmitters consisted of strings of damped waves repeating at an audio rate, so the "dots" and "dashes" of Morse code were audible as a tone or buzz in the receivers' earphones. However the new continuous wave radiotelegraph signals simply consisted of pulses of unmodulated carrier (sine waves). These were inaudible in the receiver headphones. To receive this new modulation type, the receiver had to produce some kind of tone during the pulses of carrier. The first crude device that did this was the tikker, invented in 1908 by Valdemar Poulsen. This was a vibrating interrupter with a capacitor at the tuner output which served as a rudimentary modulator, interrupting the carrier at an audio rate, thus producing a buzz in the earphone when the carrier was present. A similar device was the "tone wheel" invented by Rudolph Goldschmidt, a wheel spun by a motor with contacts spaced around its circumference, which made contact with a stationary brush. In 1901 Reginald Fessenden had invented a better means of accomplishing this. In his heterodyne receiver an unmodulated sine wave radio signal at a frequency fO offset from the incoming radio wave carrier fC was generated by a local oscillator and applied to a rectifying detector such as a crystal detector or electrolytic detector, along with the radio signal from the antenna. In the detector the two signals mixed, creating two new heterodyne (beat) frequencies at the sum fC + fO and the difference fC − fO between these frequencies. By choosing fO correctly the lower heterodyne fC − fO was in the audio frequency range, so it was audible as a tone in the earphone whenever the carrier was present. Thus the "dots" and "dashes" of Morse code were audible as musical "beeps". A major attraction of this method during this pre-amplification period was that the heterodyne receiver actually amplified the signal somewhat, the detector had "mixer gain". The receiver was ahead of its time, because when it was invented there was no oscillator capable of producing the radio frequency sine wave fO with the required stability. Fessenden first used his large radio frequency alternator, but this was not practical for ordinary receivers. The heterodyne receiver remained a laboratory curiosity until a cheap compact source of continuous waves appeared, the vacuum tube electronic oscillator invented by Edwin Armstrong and Alexander Meissner in 1913. After this it became the standard method of receiving CW radiotelegraphy. The heterodyne oscillator is the ancestor of the beat frequency oscillator (BFO) which is used to receive radiotelegraphy in communications receivers today. The heterodyne oscillator had to be retuned each time the receiver was tuned to a new station, but in modern superheterodyne receivers the BFO signal beats with the fixed intermediate frequency, so the beat frequency oscillator can be a fixed frequency. Armstrong later used Fessenden's heterodyne principle in his superheterodyne receiver (below). Vacuum tube era The Audion (triode) vacuum tube invented by Lee De Forest in 1906 was the first practical amplifying device and revolutionized radio. Vacuum tube transmitters replaced spark transmitters and made possible four new types of modulation: continuous wave (CW) radiotelegraphy, amplitude modulation (AM) around 1915 which could carry audio (sound), frequency modulation (FM) around 1938 which had much improved audio quality, and single sideband (SSB). 
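Returning briefly to Fessenden's heterodyne principle described above: the following minimal numerical sketch (not from the article; the carrier and local-oscillator frequencies are assumed example values) shows how multiplying the incoming carrier by a locally generated sine wave produces components at the sum and difference frequencies, with the offset chosen so the difference falls in the audible range.

```python
import numpy as np

fs = 400_000                     # sample rate in Hz (assumed for the example)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal

f_carrier = 50_000               # incoming CW carrier (example value)
f_local = 51_000                 # local oscillator, offset by 1 kHz (example value)

carrier = np.sin(2 * np.pi * f_carrier * t)
local = np.sin(2 * np.pi * f_local * t)

mixed = carrier * local          # an idealized multiplying detector

# The product contains energy at the difference (1 kHz, an audible beep)
# and at the sum (101 kHz, removed by the earphone and tuned circuits).
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
print("dominant components (Hz):", freqs[spectrum > 0.25 * spectrum.max()])
# expected output: roughly [  1000. 101000.]
```

The autodyne and superheterodyne receivers described below reuse the same mixing idea; only the choice of offset frequency and what is done with the difference signal change.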
The amplifying vacuum tube used energy from a battery or electrical outlet to increase the power of the radio signal, so vacuum tube receivers could be more sensitive and have a greater reception range than the previous unamplified receivers. The increased audio output power also allowed them to drive loudspeakers instead of earphones, permitting more than one person to listen. The first loudspeakers were produced around 1915. These changes caused radio listening to evolve explosively from a solitary hobby to a popular social and family pastime. The development of amplitude modulation (AM) and vacuum-tube transmitters during World War I, and the availability of cheap receiving tubes after the war, set the stage for the start of AM broadcasting, which sprang up spontaneously around 1920. The advent of radio broadcasting increased the market for radio receivers greatly, and transformed them into a consumer product. At the beginning of the 1920s the radio receiver was a forbidding high-tech device, with many cryptic knobs and controls requiring technical skill to operate, housed in an unattractive black metal box, with a tinny-sounding horn loudspeaker. By the 1930s, the broadcast receiver had become a piece of furniture, housed in an attractive wooden case, with standardized controls anyone could use, which occupied a respected place in the home living room. In the early radios the multiple tuned circuits required multiple knobs to be adjusted to tune in a new station. One of the most important ease-of-use innovations was "single knob tuning", achieved by linking the tuning capacitors together mechanically. The dynamic cone loudspeaker invented in 1924 greatly improved audio frequency response over the previous horn speakers, allowing music to be reproduced with good fidelity. Convenience features like large lighted dials, tone controls, pushbutton tuning, tuning indicators and automatic gain control (AGC) were added. The receiver market was divided into the above broadcast receivers and communications receivers, which were used for two-way radio communications such as shortwave radio. A vacuum-tube receiver required several power supplies at different voltages, which in early radios were supplied by separate batteries. By 1930 adequate rectifier tubes were developed, and the expensive batteries were replaced by a transformer power supply that worked off the house current. Vacuum tubes were bulky, expensive, had a limited lifetime, consumed a large amount of power and produced a lot of waste heat, so the number of tubes a receiver could economically have was a limiting factor. Therefore, a goal of tube receiver design was to get the most performance out of a limited number of tubes. The major radio receiver designs, listed below, were invented during the vacuum tube era. A defect in many early vacuum-tube receivers was that the amplifying stages could oscillate, act as an oscillator, producing unwanted radio frequency alternating currents. These parasitic oscillations mixed with the carrier of the radio signal in the detector tube, producing audible beat notes (heterodynes); annoying whistles, moans, and howls in the speaker. The oscillations were caused by feedback in the amplifiers; one major feedback path was the capacitance between the plate and grid in early triodes. This was solved by the Neutrodyne circuit, and later the development of the tetrode and pentode around 1930. 
Edwin Armstrong is one of the most important figures in radio receiver history, and during this period invented technology which continues to dominate radio communication. He was the first to give a correct explanation of how De Forest's triode tube worked. He invented the feedback oscillator, regenerative receiver, the superregenerative receiver, the superheterodyne receiver, and modern frequency modulation (FM). The first vacuum-tube receivers The first amplifying vacuum tube, the Audion, a crude triode, was invented in 1906 by Lee De Forest as a more sensitive detector for radio receivers, by adding a third electrode to the thermionic diode detector, the Fleming valve. It was not widely used until its amplifying ability was recognized around 1912. The first tube receivers, invented by De Forest and built by hobbyists until the mid-1920s, used a single Audion which functioned as a grid-leak detector that both rectified and amplified the radio signal. There was uncertainty about the operating principle of the Audion until Edwin Armstrong explained both its amplifying and demodulating functions in a 1914 paper. The grid-leak detector circuit was also used in regenerative, TRF, and early superheterodyne receivers (below) until the 1930s. To give enough output power to drive a loudspeaker, 2 or 3 additional vacuum tube stages were needed for audio amplification. Many early hobbyists could only afford a single tube receiver, and listened to the radio with earphones, so early tube amplifiers and speakers were sold as add-ons. In addition to very low gain of about 5 and a short lifetime of about 30–100 hours, the primitive Audion had erratic characteristics because it was incompletely evacuated. De Forest believed that ionization of residual air was key to Audion operation. This made it a more sensitive detector but also caused its electrical characteristics to vary during use. As the tube heated up, gas released from the metal elements would change the pressure in the tube, changing the plate current and other characteristics, so it required periodic bias adjustments to keep it at the correct operating point. Each Audion stage usually had a rheostat to adjust the filament current, and often a potentiometer or multiposition switch to control the plate voltage. The filament rheostat was also used as a volume control. The many controls made multitube Audion receivers complicated to operate. By 1914, Harold Arnold at Western Electric and Irving Langmuir at GE realized that the residual gas was not necessary; the Audion could operate on electron conduction alone. They evacuated tubes to a lower pressure of 10⁻⁹ atm, producing the first "hard vacuum" triodes. These more stable tubes did not require bias adjustments, so radios had fewer controls and were easier to operate. During World War I civilian radio use was prohibited, but by 1920 large-scale production of vacuum tube radios began. The "soft" incompletely evacuated tubes were used as detectors through the 1920s, then became obsolete. Regenerative (autodyne) receiver The regenerative receiver, invented by Edwin Armstrong in 1913 when he was a 23-year-old college student, was used very widely until the late 1920s, particularly by hobbyists who could only afford a single-tube radio. Today transistor versions of the circuit are still used in a few inexpensive applications like walkie-talkies. 
In the regenerative receiver the gain (amplification) of a vacuum tube or transistor is increased by using regeneration (positive feedback); some of the energy from the tube's output circuit is fed back into the input circuit with a feedback loop. The early vacuum tubes had very low gain (around 5). Regeneration could not only increase the gain of the tube enormously, by a factor of 15,000 or more, it also increased the Q factor of the tuned circuit, decreasing (sharpening) the bandwidth of the receiver by the same factor, improving selectivity greatly. The receiver had a control to adjust the feedback. The tube also acted as a grid-leak detector to rectify the AM signal. Another advantage of the circuit was that the tube could be made to oscillate, and thus a single tube could serve as both a beat frequency oscillator and a detector, functioning as a heterodyne receiver to make CW radiotelegraphy transmissions audible. This mode was called an autodyne receiver. To receive radiotelegraphy, the feedback was increased until the tube oscillated, then the oscillation frequency was tuned to one side of the transmitted signal. The incoming radio carrier signal and local oscillation signal mixed in the tube and produced an audible heterodyne (beat) tone at the difference between the frequencies. A widely used design was the Armstrong circuit, in which a "tickler" coil in the plate circuit was coupled to the tuning coil in the grid circuit, to provide the feedback. The feedback was controlled by a variable resistor, or alternately by moving the two windings physically closer together to increase loop gain, or apart to reduce it. This was done by an adjustable air core transformer called a variometer (variocoupler). Regenerative detectors were sometimes also used in TRF and superheterodyne receivers. One problem with the regenerative circuit was that when used with large amounts of regeneration the selectivity (Q) of the tuned circuit could be too sharp, attenuating the AM sidebands, thus distorting the audio modulation. This was usually the limiting factor on the amount of feedback that could be employed. A more serious drawback was that it could act as an inadvertent radio transmitter, producing interference (RFI) in nearby receivers. In AM reception, to get the most sensitivity the tube was operated very close to instability and could easily break into oscillation (and in CW reception did oscillate), and the resulting radio signal was radiated by its wire antenna. In nearby receivers, the regenerative's signal would beat with the signal of the station being received in the detector, creating annoying heterodynes, (beats), howls and whistles. Early regeneratives which oscillated easily were called "bloopers". One preventive measure was to use a stage of RF amplification before the regenerative detector, to isolate it from the antenna. But by the mid-1920s "regens" were no longer sold by the major radio manufacturers. Superregenerative receiver This was a receiver invented by Edwin Armstrong in 1922 which used regeneration in a more sophisticated way, to give greater gain. It was used in a few shortwave receivers in the 1930s, and is used today in a few cheap high frequency applications such as walkie-talkies and garage door openers. In the regenerative receiver the loop gain of the feedback loop was less than one, so the tube (or other amplifying device) did not oscillate but was close to oscillation, giving large gain. 
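A small numerical sketch (not from the article) of why operating just below the point of oscillation yields such large gain. It assumes the textbook idealized positive-feedback relation, overall gain = A / (1 − loop gain), which is brought in here only for illustration; the device gain of 5 matches the figure for early triodes quoted above.

```python
def regenerative_gain(device_gain, loop_gain):
    """Idealized overall gain of a positive-feedback (regenerative) stage."""
    assert 0.0 <= loop_gain < 1.0, "at a loop gain of 1 or more the stage oscillates"
    return device_gain / (1.0 - loop_gain)

A = 5  # approximate gain of an early triode, as quoted above
for loop in (0.0, 0.9, 0.99, 0.999, 0.9999):
    print(f"loop gain {loop:<7} -> overall gain about {regenerative_gain(A, loop):,.0f}")

# As the loop gain approaches 1 the gain grows without bound, which is why the
# same stage, pushed just past that point, turns into an oscillator (autodyne).
```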
In the superregenerative receiver, the loop gain was made equal to one, so the amplifying device actually began to oscillate, but the oscillations were interrupted periodically. This allowed a single tube to produce gains of over 10⁶. TRF receiver The tuned radio frequency (TRF) receiver, invented in 1916 by Ernst Alexanderson, improved both sensitivity and selectivity by using several stages of amplification before the detector, each with a tuned circuit, all tuned to the frequency of the station. A major problem of early TRF receivers was that they were complicated to tune, because each resonant circuit had to be adjusted to the frequency of the station before the radio would work. In later TRF receivers the tuning capacitors were linked together mechanically ("ganged") on a common shaft so they could be adjusted with one knob, but in early receivers the frequencies of the tuned circuits could not be made to "track" well enough to allow this, and each tuned circuit had its own tuning knob. Therefore, the knobs had to be turned simultaneously. For this reason most TRF sets had no more than three tuned RF stages. A second problem was that the multiple radio frequency stages, all tuned to the same frequency, were prone to oscillate, and the parasitic oscillations mixed with the radio station's carrier in the detector, producing audible heterodynes (beat notes), whistles, and moans in the speaker. This was solved by the invention of the Neutrodyne circuit (below), the later development of the tetrode around 1930, and better shielding between stages. Today the TRF design is used in a few integrated (IC) receiver chips. From the standpoint of modern receivers the disadvantage of the TRF is that the gain and bandwidth of the tuned RF stages are not constant but vary as the receiver is tuned to different frequencies. Since the bandwidth of a filter with a given Q is proportional to the frequency, as the receiver is tuned to higher frequencies its bandwidth increases. Neutrodyne receiver The Neutrodyne receiver, invented in 1922 by Louis Hazeltine, was a TRF receiver with a "neutralizing" circuit added to each radio amplification stage to cancel the feedback to prevent the oscillations which caused the annoying whistles in the TRF. In the neutralizing circuit a capacitor fed a feedback current from the plate circuit to the grid circuit which was 180° out of phase with the feedback which caused the oscillation, canceling it. The Neutrodyne was popular until the advent of cheap tetrode tubes around 1930. Reflex receiver The reflex receiver, invented in 1914 by Wilhelm Schloemilch and Otto von Bronk, and rediscovered and extended to multiple tubes in 1917 by Marius Latour and William H. Priess, was a design used in some inexpensive radios of the 1920s which enjoyed a resurgence in small portable tube radios of the 1930s and again in a few of the first transistor radios in the 1950s. It is another example of an ingenious circuit invented to get the most out of a limited number of active devices. In the reflex receiver the RF signal from the tuned circuit is passed through one or more amplifying tubes or transistors, demodulated in a detector, then the resulting audio signal is passed again through the same amplifier stages for audio amplification. The separate radio and audio signals present simultaneously in the amplifier do not interfere with each other since they are at different frequencies, allowing the amplifying tubes to do "double duty". 
In addition to single tube reflex receivers, some TRF and superheterodyne receivers had several stages "reflexed". Reflex radios were prone to a defect called "play-through" which meant that the volume of audio did not go to zero when the volume control was turned down. Superheterodyne receiver The superheterodyne, invented in 1918 during World War I by Edwin Armstrong when he was in the Signal Corps, is the design used in almost all modern receivers, except a few specialized applications. It is a more complicated design than the other receivers above, and when it was invented required 6 - 9 vacuum tubes, putting it beyond the budget of most consumers, so it was initially used mainly in commercial and military communication stations. However, by the 1930s the "superhet" had replaced all the other receiver types above. In the superheterodyne, the "heterodyne" technique invented by Reginald Fessenden is used to shift the frequency of the radio signal down to a lower "intermediate frequency" (IF), before it is processed. Its operation and advantages over the other radio designs in this section are described above in The superheterodyne design By the 1940s the superheterodyne AM broadcast receiver was refined into a cheap-to-manufacture design called the "All American Five", because it only used five vacuum tubes: usually a converter (mixer/local oscillator), an IF amplifier, a detector/audio amplifier, audio power amplifier, and a rectifier. This design was used for virtually all commercial radio receivers until the transistor replaced the vacuum tube in the 1970s. Semiconductor era The invention of the transistor in 1947 revolutionized radio technology, making truly portable receivers possible, beginning with transistor radios in the late 1950s. Although portable vacuum tube radios were made, tubes were bulky and inefficient, consuming large amounts of power and requiring several large batteries to produce the filament and plate voltage. Transistors did not require a heated filament, reducing power consumption, and were smaller and much less fragile than vacuum tubes. Portable radios Companies first began manufacturing radios advertised as portables shortly after the start of commercial broadcasting in the early 1920s. The vast majority of tube radios of the era used batteries and could be set up and operated anywhere, but most did not have features designed for portability such as handles and built in speakers. Some of the earliest portable tube radios were the Winn "Portable Wireless Set No. 149" that appeared in 1920 and the Grebe Model KT-1 that followed a year later. Crystal sets such as the Westinghouse Aeriola Jr. and the RCA Radiola 1 were also advertised as portable radios. Thanks to miniaturized vacuum tubes first developed in 1940, smaller portable radios appeared on the market from manufacturers such as Zenith and General Electric. First introduced in 1942, Zenith's Trans-Oceanic line of portable radios were designed to provide entertainment broadcasts as well as being able to tune into weather, marine and international shortwave stations. By the 1950s, a "golden age" of tube portables included lunchbox-sized tube radios like the Emerson 560, that featured molded plastic cases. So-called "pocket portable" radios like the RCA BP10 had existed since the 1940s, but their actual size was compatible with only the largest of coat pockets. But some, like the Privat-ear and Dyna-mite pocket radios, were small enough to fit a pocket. 
The development of the bipolar junction transistor in the early 1950s resulted in it being licensed to a number of electronics companies, such as Texas Instruments, who produced a limited run of transistorized radios as a sales tool. The Regency TR-1, made by the Regency Division of I.D.E.A. (Industrial Development Engineering Associates) of Indianapolis, Indiana, was launched in 1954. The era of true, shirt-pocket sized portable radios followed, with manufacturers such as Sony, Zenith, RCA, DeWald, and Crosley offering various models. The Sony TR-63 released in 1957 was the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. Digital technology The development of integrated circuit (IC) chips in the 1970s created another revolution, allowing an entire radio receiver to be put on an IC chip. IC chips reversed the economics of radio design used with vacuum-tube receivers. Since the marginal cost of adding additional amplifying devices (transistors) to the chip was essentially zero, the size and cost of the receiver was dependent not on how many active components were used, but on the passive components; inductors and capacitors, which could not be integrated easily on the chip. The development of RF CMOS chips, pioneered by Asad Ali Abidi at UCLA during the 1980s and 1990s, allowed low power wireless devices to be made. The current trend in receivers is to use digital circuitry on the chip to do functions that were formerly done by analog circuits which require passive components. In a digital receiver the IF signal is sampled and digitized, and the bandpass filtering and detection functions are performed by digital signal processing (DSP) on the chip. Another benefit of DSP is that the properties of the receiver; channel frequency, bandwidth, gain, etc. can be dynamically changed by software to react to changes in the environment; these systems are known as software-defined radios or cognitive radio. Many of the functions performed by analog electronics can be performed by software instead. The benefit is that software is not affected by temperature, physical variables, electronic noise and manufacturing defects. Digital signal processing permits signal processing techniques that would be cumbersome, costly, or otherwise infeasible with analog methods. A digital signal is essentially a stream or sequence of numbers that relay a message through some sort of medium such as a wire. DSP hardware can tailor the bandwidth of the receiver to current reception conditions and to the type of signal. A typical analog only receiver may have a limited number of fixed bandwidths, or only one, but a DSP receiver may have 40 or more individually selectable filters. DSP is used in cell phone systems to reduce the data rate required to transmit voice. In digital radio broadcasting systems such as Digital Audio Broadcasting (DAB), the analog audio signal is digitized and compressed, typically using a modified discrete cosine transform (MDCT) audio coding format such as AAC+. "PC radios", or radios that are designed to be controlled by a standard PC are controlled by specialized PC software using a serial port connected to the radio. A "PC radio" may not have a front-panel at all, and may be designed exclusively for computer control, which reduces cost. Some PC radios have the great advantage of being field upgradable by the owner. New versions of the DSP firmware can be downloaded from the manufacturer's web site and uploaded into the flash memory of the radio. 
The manufacturer can then in effect add new features to the radio over time, such as adding new filters, DSP noise reduction, or simply to correct bugs. A full-featured radio control program allows for scanning and a host of other functions and, in particular, integration of databases in real-time, like a "TV-Guide" type capability. This is particularly helpful in locating all transmissions on all frequencies of a particular broadcaster, at any given time. Some control software designers have even integrated Google Earth to the shortwave databases, so it is possible to "fly" to a given transmitter site location with a click of a mouse. In many cases the user is able to see the transmitting antennas where the signal is originating from. Since the Graphical User Interface to the radio has considerable flexibility, new features can be added by the software designer. Features that can be found in advanced control software programs today include a band table, GUI controls corresponding to traditional radio controls, local time clock and a UTC clock, signal strength meter, a database for shortwave listening with lookup capability, scanning capability, or text-to-speech interface. The next level in integration is "software-defined radio", where all filtering, modulation and signal manipulation is done in software. This may be a PC soundcard or by a dedicated piece of DSP hardware. There will be a RF front-end to supply an intermediate frequency to the software defined radio. These systems can provide additional capability over "hardware" receivers. For example, they can record large swaths of the radio spectrum to a hard drive for "playback" at a later date. The same SDR that one minute is demodulating a simple AM broadcast may also be able to decode an HDTV broadcast in the next. An open-source project called GNU Radio is dedicated to evolving a high-performance SDR. All-digital radio transmitters and receivers present the possibility of advancing the capabilities of radio. References Receiver (radio) Receivers
History of radio receivers
[ "Engineering" ]
10,307
[ "Radio electronics", "Receiver (radio)" ]
44,424,907
https://en.wikipedia.org/wiki/Process%20qualification
Process qualification is the qualification of manufacturing and production processes to confirm they are able to operate at a certain standard during sustained commercial manufacturing. Data covering critical process parameters must be recorded and analyzed to ensure critical quality attributes can be guaranteed throughout production. This may include testing equipment at maximum operating capacity to show quantity demands can be met. Once all processes have been qualified, the manufacturer should have a complete understanding of the process design and have a framework in place to routinely monitor operations. Only after process qualification has been completed can the manufacturing process begin production for commercial use. Just as important as qualifying processes and equipment is qualifying software and personnel. A well-trained staff and accurate, thorough records help ensure ongoing protection from process faults and quick recovery from otherwise costly process malfunctions. In many countries qualification measures are also required, especially in the pharmaceutical manufacturing field. Process qualification should cover the following aspects of manufacturing: Facility Utilities Equipment Personnel End-to-end manufacturing Control protocols and monitoring software. Process qualification is the second stage of process validation. A vital component of process qualification is the process performance qualification (PPQ) protocol. The PPQ protocol is essential in defining and maintaining production standards within an organization. See also Installation qualification Design qualification Performance qualification Process validation References External links Drugregulations.org Formal methods Enterprise modelling Business process management
Process qualification
[ "Engineering" ]
257
[ "Software engineering", "Systems engineering", "Enterprise modelling", "Formal methods" ]
44,425,089
https://en.wikipedia.org/wiki/Decision%20Model%20and%20Notation
In business analysis, the Decision Model and Notation (DMN) is a standard published by the Object Management Group. It is a standard approach for describing and modeling repeatable decisions within organizations to ensure that decision models are interchangeable across organizations. The DMN standard provides the industry with a modeling notation for decisions that will support decision management and business rules. The notation is designed to be readable by business and IT users alike. This enables various groups to effectively collaborate in defining a decision model: the business people who manage and monitor the decisions, the business analysts or functional analysts who document the initial decision requirements and specify the detailed decision models and decision logic, and the technical developers responsible for the automation of systems that make the decisions. The DMN standard can be effectively used standalone, but it is also complementary to the BPMN and CMMN standards. BPMN defines a special kind of activity, the Business Rule Task, which "provides a mechanism for the process to provide input to a business rule engine and to get the output of calculations that the business rule engine might provide" that can be used to show where in a BPMN process a decision defined using DMN should be used. DMN has been made a standard for Business Analysis according to BABOK v3. Elements of the standard The standard includes the following main elements: Decision Requirements Diagrams, which show how the elements of decision-making are linked into a dependency network; decision tables, which represent how each decision in such a network can be made; business context for decisions, such as the roles of organizations or the impact on performance metrics; and a Friendly Enough Expression Language (FEEL) that can be used to evaluate expressions in a decision table and other logic formats. Use cases The standard identifies three main use cases for DMN: defining manual decision making, specifying the requirements for automated decision-making, and representing a complete, executable model of decision-making. Benefits Using the DMN standard will improve business analysis and business process management, since: other popular requirement management techniques such as BPMN and UML do not handle decision making; there has been growth of projects using business rule management systems (BRMS), which allow faster changes; it facilitates better communications between business, IT and analytic roles in a company; it provides an effective requirements modeling approach for Predictive Analytics projects and fulfills the need for "business understanding" in methodologies for advanced analytics such as CRISP-DM; and it provides a standard notation for decision tables, the most common style of business rules in a BRMS. Relationship to BPMN DMN has been designed to work with BPMN. Business process models can be simplified by moving process logic into decision services. DMN is a separate domain within the OMG that provides an explicit way to connect to processes in BPMN. Decisions in DMN can be explicitly linked to processes and tasks that use the decisions. This integration of DMN and BPMN has been studied extensively. DMN expects that the logic of a decision will be deployed as a stateless, side-effect free Decision Service. Such a service can be invoked from a business process and the data in the process can be mapped to the inputs and outputs of the decision service. DMN BPMN example As mentioned, BPMN is a related OMG Standard for process modeling. 
DMN complements BPMN, providing a separation of concerns between the decision and the process. The example here describes a BPMN process and DMN DRD (Decision Requirements Diagram) for onboarding a bank customer. Several decisions are modeled and these decisions will direct the processes response. New bank account process In the BPMN process model shown in the figure, a customer makes a request to open a new bank account. The account application provides the account representative with all the information needed to create an account and provide the requested services. This includes the name, address and various forms of identification. In the next steps of the work flow, the 'Know Your Customer' (KYC) services are called. In the 'KYC' services, the name and address are validated; followed by a check against the international criminal database (Interpol) and the database of persons that are 'Politically exposed persons (PEP)'. The PEP is a person who is either entrusted with a prominent political position or a close relative thereof. Deposits from persons on the PEP list are potentially corrupt. This is shown as two services on the process model. Anti-money-laundering (AML) regulations require these checks before the customer account is certified. The results of these services plus the forms of identification are sent to the Certify New Account decision. This is shown as a 'rule' activity, verify account, on the process diagram. If the new customer passes certification, then the account is classified into onboarding for Business Retail, Retail, Wealth Management and High Value Business. Otherwise the customer application is declined. The Classify New Customer Decision classifies the customer. If the verify-account process returns a result of 'Manual' then the PEP or the Interpol check returned a close match. The account representative must visually inspect the name and the application to determine if the match is valid and accept or decline the application. Certify new account decision An account is certified for opening if the individual's' address is verified, and if valid identification is provided, and if the applicant is not on a list of criminals or politically exposed persons. These are shown as sub-decisions below the 'certify new account' decision. The account verification services provides a 100% match of the applicants address. For identification to be valid, the customer must provide a driver's license, passport or government issued ID. The checks against PEP and Interpol are 'Fuzzy' matches and return matching score values. Scores above 85 are considered a 'match' and scores between 65 and 85 would require a 'manual' screening process. People who match either of these lists are rejected by the account application process. If there is a partial match with a score between 65 and 85, against the Interpol or PEP list then the certification is set to manual and an account representative performs a manual verification of the applicant's data. These rules are reflected in the figure below, which presents the decision table for whether to pass the provided name for the lists checks. Client category The client's on-boarding process is driven by what category they fall in. 
The category is decided by the: Type of client, business or private The size of the funds on deposit And the estimated net worth This decision is shown below: There are 6 business rules that determine the client's category and these are shown in the decision table here: Summary example In this example, the outcome of the 'Verify Account' decision directed the responses of the new account process. The same is true for the 'Classify Customer' decision. By adding or changing the business rules in the tables, one can easily change the criteria for these decisions and control the process differently. Modeling is a critical aspect of improving an existing process or business challenge. Modeling is generally done by a team of business analysts, IT personnel, and modeling experts. The expressive modeling capabilities of BPMN allows business analyst to understand the functions of the activities of the process. Now with the addition of DMN, business analysts can construct an understandable model of complex decisions. Combining BPMN and DMN yields a very powerful combination of models that work synergistically to simplify processes. Relationship to decision mining and process mining Automated discovery techniques that infer decision models from process execution data have been proposed as well. Here, a DMN decision model is derived from a data-enriched event log, along with the process that uses the decisions. In doing so, decision mining complements process mining with traditional data mining approaches. cDMN extension Constraint Decision Model and Notation (cDMN) is a formal notation for expressing knowledge in a tabular, intuitive format. It extends DMN with constraint reasoning and related concepts while aiming to retain the user-friendliness of the original. cDMN is also meant to express other problems besides business modeling, such as complex component design. It extends DMN in four ways: Constraint modelling (see Constraint programming) Adding expressive data representation, such as typed predicates and functions (similar to First-order logic) Data tables, in which each entry represents a different problem instance Quantification Due to these additions, cDMN models can express more complex problems. Furthermore, they can also express some DMN models in more compact, less-convoluted ways. Unlike DMN, cDMN is not deterministic, in the sense that a set of input values could have multiple different solutions. Indeed, where a DMN model always defines a single solution, a cDMN model defines a solution space. Usage of cDMN models can also be integrated in Business Process Model and Notation process models, just like DMN. Example As an example, consider the well-known map coloring or Graph coloring problem. Here, we wish to color a map in such a way that no bordering countries share the same color. The constraint table shown in the figure (as denoted by its E* hit policy in the top-left corner) expresses this logic. It is read as "For each country c1, country c2 holds that if they are different countries which border, then the color of c1 is not the color of c2. Here, the first two columns introduce two quantifiers, both of type country, which serve as universal quantifier. In the third column, the 2-ary predicate borders is used to express when two countries have a shared border. Finally, the last column uses the 1-ary function color of, which maps each country on a color. 
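To make the decision-table idea concrete, here is a small, hypothetical sketch in plain Python (it is not DMN or FEEL syntax, and the helper names are invented for illustration); the threshold values are the ones quoted in the account-certification example above: a fuzzy-match score above 85 counts as a match, a score between 65 and 85 triggers manual screening, and anything lower is clear.

```python
def screening_outcome(match_score):
    """Toy decision table for the list-screening rules described above."""
    if match_score > 85:
        return "match"    # application is declined
    if match_score >= 65:
        return "manual"   # account representative reviews the applicant
    return "clear"        # no match; certification can proceed

# Combine the Interpol and PEP checks the way the example suggests:
# the worse of the two outcomes drives the Certify New Account decision.
severity = {"clear": 0, "manual": 1, "match": 2}

def certify_new_account(interpol_score, pep_score):
    worst = max(screening_outcome(interpol_score),
                screening_outcome(pep_score),
                key=severity.get)
    return {"clear": "certified", "manual": "manual review", "match": "declined"}[worst]

print(certify_new_account(interpol_score=40, pep_score=70))   # -> manual review
print(certify_new_account(interpol_score=90, pep_score=10))   # -> declined
```

In an actual DMN model this logic would live in a decision table attached to the Certify New Account decision, with the scores arriving as inputs from the KYC services, so the thresholds could be changed in the table without touching the process model.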
References External links DMN specifications published by Object Management Group DMN Technology Capability Kit: Test platform for evaluating DMN standard conformance of DMN software products cDMN on readthedocs.io Enterprise modelling Diagrams Decision-making Rule engines Analytics Business analysis Modeling languages
Decision Model and Notation
[ "Engineering" ]
2,063
[ "Systems engineering", "Enterprise modelling" ]
64,362,585
https://en.wikipedia.org/wiki/Ann%20and%20H.J.%20Smead%20Department%20of%20Aerospace%20Engineering%20Sciences
The Ann and H.J. Smead Department of Aerospace Engineering Sciences is a department within the College of Engineering & Applied Science at the University of Colorado Boulder, providing aerospace education and research. Housed primarily in the Aerospace Engineering Sciences building on the university's East Campus in Boulder, it awards baccalaureate, masters, and PhD degrees, as well as certificates, graduating approximately 225 students annually. The Ann and H.J. Smead Department of Aerospace Engineering Sciences is ranked 10th in the nation in both undergraduate and graduate aerospace engineering education among public universities by US News & World Report. History Aerospace engineering at the University of Colorado Boulder began as an option within the university's mechanical engineering program in 1930. In 1946, it was split off and became the Department of Aeronautical Engineering under the leadership of aerospace education pioneer Karl Dawson Wood, who served as its first chair. It was renamed the Department of Aerospace Engineering Sciences in 1963. Both the State of Colorado and the department grew as aerospace research centers during the space race. In 1948, the Laboratory for Atmospheric and Space Physics was founded on campus as the Upper Air Laboratory, followed a few years later by Ball Aerospace Corporation, which opened a research facility in Boulder that eventually became their headquarters, and Lockheed Martin Space Systems, which established a strategic plant in nearby southwest Denver in 1955. The later addition of numerous federal research labs to the Boulder landscape, including the National Institute of Standards and Technology (NIST), National Oceanic and Atmospheric Administration, National Center for Atmospheric Research, and in Golden, the National Renewable Energy Laboratory, further expanded the area's research center. Today, Boulder and the surrounding Denver Metro are home to operations for large aerospace corporations and small startups. In 2017, the department was renamed the Ann and H.J. Smead Department of Aerospace Engineering Sciences in honor of former Kaiser Aerospace & Electronics Corp CEO Harold "Joe" Smead and his widow Ann Smead, in recognition of their significant contributions to the department. Later the same year, ground was broken on a 175,000 square-foot, $101 million aerospace building, which opened in 2019. The department now conducts a wide range of research across aeronautical and astronautical science and engineering, as well as in Earth and space sciences. Much of the department's research cuts across these focus areas, including astrodynamics, autonomous systems, bioastronautics, and remote sensing. Facilities Aerospace Mechanics Research Center - Dedicated to the development of next-generation aerospace structures and systems. Known for expertise in multiphysics modeling and optimization of structural systems. Autonomous Vehicle Systems lab - Researching spacecraft dynamics, formation flying, and orbital debris removal utilizing electrostatic force fields. Bioastronautics Laboratories - Low and high bay facilities housing a human centrifuge, Dream Chaser cabin mock-up, thermal vacuum chamber, and other human spacecraft mock-ups. BioServe Space Technologies – Originally founded through a NASA grant in 1987, designs, builds, and operates life science research and hardware for microgravity environments. Facilities include a payload operations center for conducting live uplinks with orbiting astronauts. 
Colorado Center for Astrodynamics Research – Conducts astrodynamics, space weather, and remote sensing research. Is the largest center in the department, by number of faculty and students. Experimental Aerodynamics Laboratory (EAL) – Housed in a dedicated research building adjacent to the Department’s Aerospace Engineering Sciences Building and containing a low-speed wind tunnel, the EAL is devoted to improving production, understanding, and control of complex flow fields in aerodynamic applications. Research and Engineering Center for Unmanned Vehicles (RECUV) – Research center for development and execution of scientific and commercial experiments for mitigation of natural disasters and national defense utilizing aerial, ground-based, and submersible unmanned vehicles. Part of the multi-university TORUS project partnership. UAV Fabrication Lab - Dedicated to the design and construction of unmanned aerial vehicles and scientific instruments carried by them. Woods / Composites and Metal Machine Shops - Containing four-axis CNC milling machines and lathes, water jets, welding equipment, metal and plastic 3D printers, composite ovens, and indoor hazardous materials test cell. Notable people George Born - Pioneering aerospace researcher and professor who founded the Colorado Center for Astrodynamics Research. Adolf Busemann - Former professor, designer of the swept-wing aircraft Steve Chappell - aerospace engineer and NASA scientist. Member of the NASA Extreme Environment Mission Operations 14 (NEEMO 14) aquanaut crew Charbel Farhat - Former professor, current chair of Stanford University's Department of Aeronautics and Astronautics. Moriba Jah - astrodynamicist, professor at the University of Texas-Austin, and former spacecraft navigator for NASA's Jet Propulsion Laboratory. Steve Jolly – Director and chief engineer of commercial civil space at Lockheed Martin Space Systems. Mark Sirangelo - Current faculty member and former Executive Vice President of Sierra Nevada Space Systems Michael T. Voorhees - Entrepreneur, engineer, designer, geographer, and aeronaut Karl Dawson Wood – Aerospace education pioneer and department founder. Current Faculty Members of the National Academies Brian Argrow Penina Axelrad Daniel Baker Kristine Larson David Marshall Daniel Scheeres CU Boulder-Affiliated Astronauts Loren Acton, NASA astronaut Patrick Baudry, CNES astronaut Vance D. Brand, NASA astronaut Scott Carpenter, NASA astronaut in second orbital flight of Project Mercury Kalpana Chawla, NASA astronaut, died on Columbia Takao Doi, NASA astronaut Samuel T. Durrance, NASA astronaut Richard Hieb, NASA astronaut, current professor Marsha Ivins, NASA astronaut John M. Lounge, NASA astronaut George Nelson, NASA astronaut Ellison Onizuka, NASA astronaut, died on Challenger in January 1986 Stuart Roosa, NASA astronaut, flew on Apollo 14 Ronald M. Sega, NASA astronaut Steven Swanson, NASA astronaut Jack Swigert, NASA astronaut, flew on Apollo 13 Joe Tanner, NASA astronaut, retired professor James Voss, NASA astronaut, current professor References University of Colorado Boulder Aerospace engineering University departments in the United States
Ann and H.J. Smead Department of Aerospace Engineering Sciences
[ "Engineering" ]
1,226
[ "Aerospace engineering" ]
64,366,263
https://en.wikipedia.org/wiki/Developable%20roller
In geometry, a developable roller is a convex solid whose surface consists of a single continuous, developable face. While rolling on a plane, most developable rollers develop their entire surface so that all the points on the surface touch the rolling plane. All developable rollers have ruled surfaces. Four families of developable rollers have been described to date: the prime polysphericons, the convex hulls of the two disc rollers (TDR convex hulls), the polycons and the Platonicons. Construction Each developable roller family is based on a different construction principle. The prime polysphericons are a subfamily of the polysphericon family. They are based on bodies made by rotating regular polygons around one of their longest diagonals. These bodies are cut in two at their symmetry plane and the two halves are reunited after being rotated at an offset angle relative to each other. All prime polysphericons have two edges made of one or more circular arcs and four vertices. All of them, but the sphericon, have surfaces that consist of one kind of conic surface and one, or more, conical or cylindrical frustum surfaces. Two-disc rollers are made of two congruent symmetrical circular or elliptical sectors. The sectors are joined to each other such that the planes in which they lie are perpendicular to each other, and their axes of symmetry coincide. The convex hulls of these structures constitute the members of the TDR convex hull family. All members of this family have two edges (the two circular or elliptical arcs). They may have either 4 vertices, as in the sphericon (which is a member of this family as well) or none, as in the oloid. Like the prime polysphericons the polycons are based on regular polygons but consist of identical pieces of only one type of cone with no frustum parts. The cone is created by rotating two adjacent edges of a regular polygon (and in most cases their extensions as well) around the polygon's axis of symmetry that passes through their common vertex. A polycon based on an n-gon (a polygon with n edges) has n edges and n + 2 vertices. The sphericon, which is a member of this family as well, has circular edges. The hexacon's edges are parabolic. All other polycons' edges are hyperbolic. Like the polycons, the Platonicons are made of only one type of conic surface. Their unique feature is that each one of them circumscribes one of the five Platonic solids. Unlike the other families, this family is not infinite. 14 Platonicons have been discovered to date. Rolling motion Unlike axially symmetrical bodies that, if unrestricted, can perform a linear rolling motion (like the sphere or the cylinder) or a circular one (like the cone), developable rollers meander while rolling. Their motion is linear only on average. In the case of the polycons and Platonicons, as well as some of the prime polysphericons, the path of their center of mass consists of circular arcs. In the case of the prime polysphericons that have surfaces that contain cylindrical parts the path is a combination of circular arcs and straight lines. A general expression for the shape of the path of the TDR convex hulls center of mass has yet to be derived. In order to maintain a smooth rolling motion the center of mass of a rolling body must maintain a constant height. All prime polysphericons, polycons, and platonicons and some of the TDR convex hulls share this property. Some of the TDR convex hulls, like the oloid, do not possess this property. 
In order for a TDR convex hull to maintain constant height the following must hold: Where a and b are the half minor and major axes of the elliptic arcs, respectively, and c is the distance between their centers. For example, in the case where the skeletal structure of the convex hull TDR consists of two circular segments with radius r, for the center of mass to be kept at constant height, the distance between the sectors' centers should be equal to r. References External links Sphericon series A list of the first members of the polysphericon family and a discussion about their various kinds. Geometric shapes Euclidean solid geometry
Developable roller
[ "Physics", "Mathematics" ]
916
[ "Geometric shapes", "Euclidean solid geometry", "Mathematical objects", "Space", "Geometric objects", "Spacetime" ]
64,367,549
https://en.wikipedia.org/wiki/GW190814
GW190814 was a gravitational wave (GW) signal observed by the LIGO and Virgo detectors on 14 August 2019 at 21:10:39 UTC, with a signal-to-noise ratio of 25 in the three-detector network. The signal was associated with the astronomical superevent S190814bv, located 790 million light-years away, within a sky localization area of 18.5 deg² towards Cetus or Sculptor. No optical counterpart was discovered despite an extensive search of the probability region. Discovery In June 2020, astronomers reported details of a compact binary merger, detected as the gravitational wave GW190814, in which a black hole merged with a first-of-its-kind "mystery object" lying in the "mass gap" of cosmic collisions: either an extremely heavy neutron star (theorized not to exist) or an unusually light black hole. The mass of the lighter component is estimated to be 2.6 times the mass of the Sun, placing it in the aforementioned mass gap between neutron stars and black holes. Despite an intensive search, no optical counterpart to the gravitational wave was observed. The lack of emitted light could be consistent with either a situation in which a black hole entirely consumed a neutron star or the merger of two black holes. See also Gravitational-wave astronomy List of gravitational wave observations Multi-messenger astronomy Notes References External links (24 June 2020; Science Fellow) (24 June 2020; LIGO Scientific Collaboration) (23 June 2020; Max Planck Institute for Gravitational Physics) (23 June 2020; Gravitational-wave Open Science Center (GWOSC)) Black holes Gravitational waves Neutron stars Theory of relativity August 2019 2019 in science 2019 in outer space
GW190814
[ "Physics", "Astronomy" ]
350
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Waves", "Density", "Theory of relativity", "Stellar phenomena", "Gravitational waves", "Astronomical objects" ]
64,368,349
https://en.wikipedia.org/wiki/Alexios%20Polychronakos
Alexios Polychronakos (born 1959, in Greece) is a theoretical physicist. He studied electrical engineering at the National Technical University of Athens (diploma in 1982) and did graduate work in theoretical physics at the California Institute of Technology (Ph.D. 1987) under the supervision of John Preskill. Polychronakos is a professor of physics at the City College of New York. He is considered an authority on quantum field theory, quantum statistics, anyons, integrable systems, and quantum fluids, having authored over 110 refereed papers. He is a Fellow of the American Physical Society (2012), with the citation "For important contributions to the field of statistical mechanics and integrable systems, including the Polychronakos model and the exchange operator formalism, fractional statistics, matrix model description of quantum Hall systems as well as other areas such as noncommutative geometry". References External links Polychronakos' profile at CUNY Inspire profile Google scholar profile 20th-century Greek physicists 21st-century American physicists California Institute of Technology alumni Living people Particle physicists Fellows of the American Physical Society Theoretical physicists Mathematical physicists 1959 births
Alexios Polychronakos
[ "Physics" ]
238
[ "Theoretical physics", "Particle physicists", "Particle physics", "Theoretical physicists" ]
47,512,011
https://en.wikipedia.org/wiki/Alpha-aminoadipic%20and%20alpha-ketoadipic%20aciduria
Alpha-aminoadipic and alpha-ketoadipic aciduria is an autosomal recessive metabolic disorder characterized by an increased urinary excretion of alpha-ketoadipic acid and alpha-aminoadipic acid. It is caused by mutations in DHTKD1, which encodes the E1 subunit of the oxoglutarate dehydrogenase complex (alpha-ketoglutarate dehydrogenase complex). References Autosomal recessive disorders Metabolic disorders
Alpha-aminoadipic and alpha-ketoadipic aciduria
[ "Chemistry", "Biology" ]
106
[ "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Metabolic disorders", "Metabolism" ]
47,516,955
https://en.wikipedia.org/wiki/Filters%20in%20topology
Filters in topology, a subfield of mathematics, can be used to study topological spaces and define all basic topological notions such as convergence, continuity, compactness, and more. Filters, which are special families of subsets of some given set, also provide a common framework for defining various types of limits of functions such as limits from the left/right, to infinity, to a point or a set, and many others. Special types of filters called ultrafilters have many useful technical properties and they may often be used in place of arbitrary filters. Filters have generalizations called prefilters (also known as filter bases) and filter subbases, all of which appear naturally and repeatedly throughout topology. Examples include neighborhood filters/bases/subbases and uniformities. Every filter is a prefilter and both are filter subbases. Every prefilter and filter subbase is contained in a unique smallest filter, which they are said to generate. This establishes a relationship between filters and prefilters that may often be exploited to allow one to use whichever of these two notions is more technically convenient. There is a certain preorder on families of sets, called subordination, that helps to determine exactly when and how one notion (filter, prefilter, etc.) can or cannot be used in place of another. This preorder's importance is amplified by the fact that it also defines the notion of filter convergence, where by definition, a filter (or prefilter) converges to a point if and only if it is finer than (that is, subordinate to) that point's neighborhood filter. Consequently, subordination also plays an important role in many concepts that are related to convergence, such as cluster points and limits of functions. In addition, the subordination relation, which is expressed by saying that one family is subordinate to another, also establishes a relationship in which the subordinate family is to the other family as a subsequence is to a sequence (that is, subordination is for filters the analog of "is a subsequence of"). Filters were introduced by Henri Cartan in 1937 and subsequently used by Bourbaki in their book Topologie Générale as an alternative to the similar notion of a net developed in 1922 by E. H. Moore and H. L. Smith. Filters can also be used to characterize the notions of sequence and net convergence. But unlike sequence and net convergence, filter convergence is defined in terms of subsets of the topological space and so it provides a notion of convergence that is completely intrinsic to the topological space; indeed, the category of topological spaces can be equivalently defined entirely in terms of filters. Every net induces a canonical filter and dually, every filter induces a canonical net, where this induced net (resp. induced filter) converges to a point if and only if the same is true of the original filter (resp. net). This characterization also holds for many other definitions such as cluster points. These relationships make it possible to switch between filters and nets, and they often also allow one to choose whichever of these two notions (filter or net) is more convenient for the problem at hand. However, assuming that "subnet" is defined using either of its most popular definitions (which are those given by Willard and by Kelley), then in general, this relationship does not extend to subordinate filters and subnets because, as detailed below, there exist subordinate filters whose filter/subordinate-filter relationship cannot be described in terms of the corresponding net/subnet relationship; this issue can however be resolved by using a less commonly encountered definition of "subnet", which is that of an AA-subnet. 
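For reference, the central notions named above can be stated compactly. The notation below — script letters for families of subsets of a set X, N(x) for the neighborhood filter of a point x, and ≤ for subordination — is one common convention and is assumed here for illustration rather than taken from the text.

    % Standard definitions, stated under one common set of conventions.
    \text{Filter on } X:\quad X \in \mathcal{F},\quad \varnothing \notin \mathcal{F},\quad
        A, B \in \mathcal{F} \Rightarrow A \cap B \in \mathcal{F},\quad
        (A \in \mathcal{F},\ A \subseteq B \subseteq X) \Rightarrow B \in \mathcal{F}.
    \text{Prefilter (filter base): } \mathcal{B} \neq \varnothing,\quad \varnothing \notin \mathcal{B},\quad
        \forall\, B_1, B_2 \in \mathcal{B}\ \exists\, B \in \mathcal{B}:\ B \subseteq B_1 \cap B_2.
    \text{Filter subbase: every finite intersection of members of } \mathcal{S} \text{ is non-empty.}
    \text{Subordination: } \mathcal{C} \leq \mathcal{F} \iff \forall\, C \in \mathcal{C}\ \exists\, F \in \mathcal{F}:\ F \subseteq C
        \qquad (\mathcal{F} \text{ is finer than, i.e. subordinate to, } \mathcal{C}).
    \text{Convergence: } \mathcal{B} \to x \iff \mathcal{N}(x) \leq \mathcal{B}.

Under these conventions, the filter generated by a prefilter is its upward closure, and a prefilter converges to a point exactly when every neighborhood of that point contains some member of the prefilter.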
Thus filters/prefilters and this single preorder provide a framework that seamlessly ties together fundamental topological concepts such as topological spaces (via neighborhood filters), neighborhood bases, convergence, various limits of functions, continuity, compactness, sequences (via sequential filters), the filter equivalent of "subsequence" (subordination), uniform spaces, and more; concepts that otherwise seem relatively disparate and whose relationships are less clear. Motivation Archetypical example of a filter The archetypical example of a filter is the neighborhood filter at a point in a topological space, which is the family of sets consisting of all neighborhoods of that point. By definition, a neighborhood of some given point is any subset whose topological interior contains this point. Importantly, neighborhoods are not required to be open sets; those that are open are called open neighborhoods. Listed below are those fundamental properties of neighborhood filters that ultimately became the definition of a "filter." A filter is a set of subsets of the space that satisfies all of the following conditions: (1) it contains the whole space – just as the whole space is always a neighborhood of the point (and of anything else that it contains); (2) it does not contain the empty set – just as no neighborhood of the point is empty; (3) it is closed under finite intersections – just as the intersection of any two neighborhoods of the point is again a neighborhood of the point; (4) it is upward closed, meaning that it contains every superset of each of its members – just as any subset that includes a neighborhood of the point will necessarily be a neighborhood of the point (this follows from the definition of "a neighborhood of" the point). Generalizing sequence convergence by using sets − determining sequence convergence without the sequence A sequence in a space is by definition a map from the natural numbers into the space. The original notion of convergence in a topological space was that of a sequence converging to some given point in a space, such as a metric space. With metrizable spaces (or more generally first-countable spaces or Fréchet–Urysohn spaces), sequences usually suffice to characterize, or "describe", most topological properties, such as the closures of subsets or continuity of functions. But there are many spaces where sequences cannot be used to describe even basic topological properties like closure or continuity. This failure of sequences was the motivation for defining notions such as nets and filters, which never fail to characterize topological properties. Nets directly generalize the notion of a sequence since nets are, by definition, maps from an arbitrary directed set into the space. A sequence is just a net whose domain is the natural numbers with the natural ordering. Nets have their own notion of convergence, which is a direct generalization of sequence convergence. Filters generalize sequence convergence in a different way by considering only the values of a sequence. To see how this is done, consider a sequence, which is by definition just a function whose value at each natural number is denoted with a subscript rather than by the usual parentheses notation that is commonly used for arbitrary functions. Knowing only the image (sometimes called "the range") of the sequence is not enough to characterize its convergence; multiple sets are needed. It turns out that the needed sets are the tails of the sequence, that is, the sets of its values from some index onward. These sets completely determine this sequence's convergence (or non-convergence) because given any point, this sequence converges to it if and only if for every neighborhood (of this point), there is some integer such that the neighborhood contains all of the points of the sequence from that index onward. This can be reworded as: every neighborhood must contain some set of the above form as a subset. Or more briefly: every neighborhood must contain some tail as a subset. 
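As a concrete illustration of the tail criterion just stated, the following sketch (Python; the function names and the truncation bound are illustrative choices, not taken from any source) checks, for the sequence x_n = 1/n and a few ε-neighborhoods of 0, that each neighborhood contains an entire tail of the sequence; the tails are truncated to finitely many terms only so that they can be represented as concrete sets.

    # Sketch: the tails {x_n : n >= i} of x_n = 1/n witness its convergence to 0,
    # because every epsilon-neighborhood of 0 contains some tail as a subset.
    def tail(i, N=10000):
        """A finite truncation of the tail {x_n : n >= i} of the sequence x_n = 1/n."""
        return {1.0 / n for n in range(i, N)}

    def neighborhood_contains_a_tail(eps, N=10000):
        """True if the eps-ball around 0 contains the tail starting at the first
        index i with 1/i < eps (verified on the finite truncation)."""
        i = int(1 / eps) + 1            # for every n >= i we have 0 < 1/n < eps
        return all(abs(x) < eps for x in tail(i, N))

    for eps in (0.5, 0.1, 0.003):
        print(eps, neighborhood_contains_a_tail(eps))   # expected: True for each eps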
It is this characterization that can be used with the above family of tails to determine convergence (or non-convergence) of the sequence Specifically, with the family of in hand, the is no longer needed to determine convergence of this sequence (no matter what topology is placed on ). By generalizing this observation, the notion of "convergence" can be extended from sequences/functions to families of sets. The above set of tails of a sequence is in general not a filter but it does "" a filter via taking its (which consists of all supersets of all tails). The same is true of other important families of sets such as any neighborhood basis at a given point, which in general is also not a filter but does generate a filter via its upward closure (in particular, it generates the neighborhood filter at that point). The properties that these families share led to the notion of a , also called a , which by definition is any family having the minimal properties necessary and sufficient for it to generate a filter via taking its upward closure. Nets versus filters − advantages and disadvantages Filters and nets each have their own advantages and drawbacks and there's no reason to use one notion exclusively over the other. Depending on what is being proved, a proof may be made significantly easier by using one of these notions instead of the other. Both filters and nets can be used to completely characterize any given topology. Nets are direct generalizations of sequences and can often be used similarly to sequences, so the learning curve for nets is typically much less steep than that for filters. However, filters, and especially ultrafilters, have many more uses outside of topology, such as in set theory, mathematical logic, model theory (ultraproducts, for example), abstract algebra, combinatorics, dynamics, order theory, generalized convergence spaces, Cauchy spaces, and in the definition and use of hyperreal numbers. Like sequences, nets are and so they have the . For example, like sequences, nets can be "plugged into" other functions, where "plugging in" is just function composition. Theorems related to functions and function composition may then be applied to nets. One example is the universal property of inverse limits, which is defined in terms of composition of functions rather than sets and it is more readily applied to functions like nets than to sets like filters (a prominent example of an inverse limit is the Cartesian product). Filters may be awkward to use in certain situations, such as when switching between a filter on a space and a filter on a dense subspace In contrast to nets, filters (and prefilters) are families of and so they have the . For example, if is surjective then the under of an arbitrary filter or prefilter is both easily defined and guaranteed to be a prefilter on 's domain, whereas it is less clear how to pullback (unambiguously/without choice) an arbitrary sequence (or net) so as to obtain a sequence or net in the domain (unless is also injective and consequently a bijection, which is a stringent requirement). Similarly, the intersection of any collection of filters is once again a filter whereas it is not clear what this could mean for sequences or nets. Because filters are composed of subsets of the very topological space that is under consideration, topological set operations (such as closure or interior) may be applied to the sets that constitute the filter. Taking the closure of all the sets in a filter is sometimes useful in functional analysis for instance. 
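The passage above notes that a prefilter generates a filter by taking its upward closure. The brute-force sketch below (Python; the ground set and the prefilter are invented solely for illustration) builds that upward closure on a small finite set and then checks the four defining properties of a filter directly.

    # Sketch: generate a filter on a finite set X from a prefilter by upward closure,
    # then verify the filter properties by brute force. X and the prefilter are
    # illustrative choices only.
    from itertools import chain, combinations

    X = frozenset({1, 2, 3, 4})

    def powerset(s):
        s = list(s)
        return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    prefilter = {frozenset({1, 2}), frozenset({1, 2, 3})}   # non-empty, proper, directed downward

    # Upward closure in X: every subset of X that contains some member of the prefilter.
    generated = {S for S in powerset(X) if any(B <= S for B in prefilter)}

    def is_filter(F, X):
        return (X in F
                and frozenset() not in F
                and all(A & B in F for A in F for B in F)                    # closed under finite intersections
                and all(S in F for A in F for S in powerset(X) if A <= S))   # upward closed in X

    print(sorted(sorted(S) for S in generated))
    print(is_filter(generated, X))   # expected: True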
Theorems and results about images or preimages of sets under a function may also be applied to the sets that constitute a filter; an example of such a result might be one of continuity's characterizations in terms of preimages of open/closed sets or in terms of the interior/closure operators. Special types of filters called have many useful properties that can significantly help in proving results. One downside of nets is their dependence on the directed sets that constitute their domains, which in general may be entirely unrelated to the space In fact, the class of nets in a given set is too large to even be a set (it is a proper class); this is because nets in can have domains of cardinality. In contrast, the collection of all filters (and of all prefilters) on is a set whose cardinality is no larger than that of Similar to a topology on a filter on is "intrinsic to " in the sense that both structures consist of subsets of and neither definition requires any set that cannot be constructed from (such as or other directed sets, which sequences and nets require). Preliminaries, notation, and basic notions In this article, upper case Roman letters like and denote sets (but not families unless indicated otherwise) and will denote the power set of A subset of a power set is called (or simply, ) where it is if it is a subset of Families of sets will be denoted by upper case calligraphy letters such as , , and . Whenever these assumptions are needed, then it should be assumed that is non-empty and that etc. are families of sets over The terms "prefilter" and "filter base" are synonyms and will be used interchangeably. Warning about competing definitions and notation There are unfortunately several terms in the theory of filters that are defined differently by different authors. These include some of the most important terms such as "filter." While different definitions of the same term usually have significant overlap, due to the very technical nature of filters (and point–set topology), these differences in definitions nevertheless often have important consequences. When reading mathematical literature, it is recommended that readers check how the terminology related to filters is defined by the author. For this reason, this article will clearly state all definitions as they are used. Unfortunately, not all notation related to filters is well established and some notation varies greatly across the literature (for example, the notation for the set of all prefilters on a set) so in such cases this article uses whatever notation is most self describing or easily remembered. The theory of filters and prefilters is well developed and has a plethora of definitions and notations, many of which are now unceremoniously listed to prevent this article from becoming prolix and to allow for the easy look up of notation and definitions. Their important properties are described later. Sets operations The or in of a family of sets is and similarly the of is Throughout, is a map. Topology notation Denote the set of all topologies on a set Suppose is any subset, and is any point. If then Nets and their tails A is a set together with a preorder, which will be denoted by (unless explicitly indicated otherwise), that makes into an () ; this means that for all there exists some such that For any indices the notation is defined to mean while is defined to mean that holds but it is true that (if is antisymmetric then this is equivalent to ). 
A is a map from a non-empty directed set into The notation will be used to denote a net with domain Warning about using strict comparison If is a net and then it is possible for the set which is called , to be empty (for example, this happens if is an upper bound of the directed set ). In this case, the family would contain the empty set, which would prevent it from being a prefilter (defined later). This is the (important) reason for defining as rather than or even and it is for this reason that in general, when dealing with the prefilter of tails of a net, the strict inequality may not be used interchangeably with the inequality Filters and prefilters The following is a list of properties that a family of sets may possess and they form the defining properties of filters, prefilters, and filter subbases. Whenever it is necessary, it should be assumed that Many of the properties of defined above and below, such as "proper" and "directed downward," do not depend on so mentioning the set is optional when using such terms. Definitions involving being "upward closed in " such as that of "filter on " do depend on so the set should be mentioned if it is not clear from context. There are no prefilters on (nor are there any nets valued in ), which is why this article, like most authors, will automatically assume without comment that whenever this assumption is needed. Basic examples Named examples The singleton set is called the or It is the unique filter on because it is a subset of every filter on ; however, it need not be a subset of every prefilter on The dual ideal is also called (despite not actually being a filter). It is the only dual ideal on that is not a filter on If is a topological space and then the neighborhood filter at is a filter on By definition, a family is called a (resp. a ) at if and only if is a prefilter (resp. is a filter subbase) and the filter on that generates is equal to the neighborhood filter The subfamily of open neighborhoods is a filter base for Both prefilters also form a bases for topologies on with the topology generated being coarser than This example immediately generalizes from neighborhoods of points to neighborhoods of non-empty subsets is an if for some sequence of points is an or a on if is a filter on generated by some elementary prefilter. The filter of tails generated by a sequence that is not eventually constant is necessarily an ultrafilter. Every principal filter on a countable set is sequential as is every cofinite filter on a countably infinite set. The intersection of finitely many sequential filters is again sequential. The set of all cofinite subsets of (meaning those sets whose complement in is finite) is proper if and only if is infinite (or equivalently, is infinite), in which case is a filter on known as the or the on If is finite then is equal to the dual ideal which is not a filter. If is infinite then the family of complements of singleton sets is a filter subbase that generates the Fréchet filter on As with any family of sets over that contains the kernel of the Fréchet filter on is the empty set: The intersection of all elements in any non-empty family is itself a filter on called the or of which is why it may be denoted by Said differently, Because every filter on has as a subset, this intersection is never empty. 
By definition, the infimum is the finest/largest (relative to ) filter contained as a subset of each member of If are filters then their infimum in is the filter If are prefilters then is a prefilter that is coarser than both (that is, ); indeed, it is one of the finest such prefilters, meaning that if is a prefilter such that then necessarily More generally, if are non−empty families and if then and is a greatest element of Let and let The or of denoted by is the smallest (relative to ) dual ideal on containing every element of as a subset; that is, it is the smallest (relative to ) dual ideal on containing as a subset. This dual ideal is where is the -system generated by As with any non-empty family of sets, is contained in filter on if and only if it is a filter subbase, or equivalently, if and only if is a filter on in which case this family is the smallest (relative to ) filter on containing every element of as a subset and necessarily Let and let The or of denoted by if it exists, is by definition the smallest (relative to ) filter on containing every element of as a subset. If it exists then necessarily (as defined above) and will also be equal to the intersection of all filters on containing This supremum of exists if and only if the dual ideal is a filter on The least upper bound of a family of filters may fail to be a filter. Indeed, if contains at least two distinct elements then there exist filters for which there does exist a filter that contains both If is not a filter subbase then the supremum of does not exist and the same is true of its supremum in but their supremum in the set of all dual ideals on will exist (it being the degenerate filter ). If are prefilters (resp. filters on ) then is a prefilter (resp. a filter) if and only if it is non-degenerate (or said differently, if and only if mesh), in which case it is coarsest prefilters (resp. coarsest filter) on that is finer (with respect to ) than both this means that if is any prefilter (resp. any filter) such that then necessarily in which case it is denoted by Other examples Let and let which makes a prefilter and a filter subbase that is not closed under finite intersections. Because is a prefilter, the smallest prefilter containing is The -system generated by is In particular, the smallest prefilter containing the filter subbase is equal to the set of all finite intersections of sets in The filter on generated by is All three of the -system generates, and are examples of fixed, principal, ultra prefilters that are principal at the point is also an ultrafilter on Let be a topological space, and define where is necessarily finer than If is non-empty (resp. non-degenerate, a filter subbase, a prefilter, closed under finite unions) then the same is true of If is a filter on then is a prefilter but not necessarily a filter on although is a filter on equivalent to The set of all dense open subsets of a (non-empty) topological space is a proper -system and so also a prefilter. 
If the space is a Baire space, then the set of all countable intersections of dense open subsets is a -system and a prefilter that is finer than If (with ) then the set of all such that has finite Lebesgue measure is a proper -system and a free prefilter that is also a proper subset of The prefilters and are equivalent and so generate the same filter on Since is a Baire space, every countable intersection of sets in is dense in (and also comeagre and non-meager) so the set of all countable intersections of elements of is a prefilter and -system; it is also finer than, and not equivalent to, Ultrafilters There are many other characterizations of "ultrafilter" and "ultra prefilter," which are listed in the article on ultrafilters. Important properties of ultrafilters are also described in that article. The ultrafilter lemma The following important theorem is due to Alfred Tarski (1930). A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it. Assuming the axioms of Zermelo–Fraenkel (ZF), the ultrafilter lemma follows from the Axiom of choice (in particular from Zorn's lemma) but is strictly weaker than it. The ultrafilter lemma implies the Axiom of choice for finite sets. If dealing with Hausdorff spaces, then most basic results (as encountered in introductory courses) in Topology (such as Tychonoff's theorem for compact Hausdorff spaces and the Alexander subbase theorem) and in functional analysis (such as the Hahn–Banach theorem) can be proven using only the ultrafilter lemma; the full strength of the axiom of choice might not be needed. Kernels The kernel is useful in classifying properties of prefilters and other families of sets. If then and this set is also equal to the kernel of the -system that is generated by In particular, if is a filter subbase then the kernels of all of the following sets are equal: (1) (2) the -system generated by and (3) the filter generated by If is a map then Equivalent families have equal kernels. Two principal families are equivalent if and only if their kernels are equal. Classifying families by their kernels If is a principal filter on then and and is also the smallest prefilter that generates Family of examples: For any non-empty the family is free but it is a filter subbase if and only if no finite union of the form covers in which case the filter that it generates will also be free. In particular, is a filter subbase if is countable (for example, the primes), a meager set in a set of finite measure, or a bounded subset of If is a singleton set then is a subbase for the Fréchet filter on Characterizing fixed ultra prefilters If a family of sets is fixed (that is, ) then is ultra if and only if some element of is a singleton set, in which case will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter is ultra if and only if is a singleton set. Every filter on that is principal at a single point is an ultrafilter, and if in addition is finite, then there are no ultrafilters on other than these. The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point. Finer/coarser, subordination, and meshing The preorder that is defined below is of fundamental importance for the use of prefilters (and filters) in topology. 
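Before turning to that preorder, the kernel-based classification sketched above can be summarized in symbols; the notation is standard but is assumed here rather than fixed by the text.

    % Kernel of a family of sets and the classification it induces.
    \ker \mathcal{B} \;=\; \bigcap_{B \in \mathcal{B}} B, \qquad
    \text{fixed} \iff \ker \mathcal{B} \neq \varnothing, \qquad
    \text{free} \iff \ker \mathcal{B} = \varnothing.
    \text{A filter } \mathcal{F} \text{ on } X \text{ is principal} \iff \ker \mathcal{F} \in \mathcal{F},
        \text{ in which case } \mathcal{F} = \{\, S \subseteq X : \ker \mathcal{F} \subseteq S \,\}.

For example, the neighborhood filter of a point is always fixed, since its kernel contains that point, while the Fréchet filter on an infinite set is free, since its kernel is empty.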
For instance, this preorder is used to define the prefilter equivalent of "subsequence", where "" can be interpreted as " is a subsequence of " (so "subordinate to" is the prefilter equivalent of "subsequence of"). It is also used to define prefilter convergence in a topological space. The definition of meshes with which is closely related to the preorder is used in topology to define cluster points. Two families of sets and are , indicated by writing if If do not mesh then they are . If then are said to if mesh, or equivalently, if the of which is the family does not contain the empty set, where the trace is also called the of Example: If is a subsequence of then is subordinate to in symbols: and also Stated in plain English, the prefilter of tails of a subsequence is always subordinate to that of the original sequence. To see this, let be arbitrary (or equivalently, let be arbitrary) and it remains to show that this set contains some For the set to contain it is sufficient to have Since are strictly increasing integers, there exists such that and so holds, as desired. Consequently, The left hand side will be a subset of the right hand side if (for instance) every point of is unique (that is, when is injective) and is the even-indexed subsequence because under these conditions, every tail (for every ) of the subsequence will belong to the right hand side filter but not to the left hand side filter. For another example, if is any family then always holds and furthermore, A non-empty family that is coarser than a filter subbase must itself be a filter subbase. Every filter subbase is coarser than both the -system that it generates and the filter that it generates. If are families such that the family is ultra, and then is necessarily ultra. It follows that any family that is equivalent to an ultra family will necessarily ultra. In particular, if is a prefilter then either both and the filter it generates are ultra or neither one is ultra. The relation is reflexive and transitive, which makes it into a preorder on The relation is antisymmetric but if has more than one point then it is symmetric. Equivalent families of sets The preorder induces its canonical equivalence relation on where for all is to if any of the following equivalent conditions hold: The upward closures of are equal. Two upward closed (in ) subsets of are equivalent if and only if they are equal. If then necessarily and is equivalent to Every equivalence class other than contains a unique representative (that is, element of the equivalence class) that is upward closed in Properties preserved between equivalent families Let be arbitrary and let be any family of sets. If are equivalent (which implies that ) then for each of the statements/properties listed below, either it is true of or else it is false of : Not empty Proper (that is, is not an element) Moreover, any two degenerate families are necessarily equivalent. Filter subbase Prefilter In which case generate the same filter on (that is, their upward closures in are equal). Free Principal Ultra Is equal to the trivial filter In words, this means that the only subset of that is equivalent to the trivial filter the trivial filter. In general, this conclusion of equality does not extend to non−trivial filters (one exception is when both families are filters). Meshes with Is finer than Is coarser than Is equivalent to Missing from the above list is the word "filter" because this property is preserved by equivalence. 
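In symbols, and under the same assumed notation as in the earlier displays, the relations discussed in this section are commonly written as follows.

    % Meshing and equivalence of families of sets.
    \mathcal{B} \;\#\; \mathcal{C} \iff B \cap C \neq \varnothing \ \text{ for all } B \in \mathcal{B} \text{ and } C \in \mathcal{C}
        \qquad (\mathcal{B} \text{ and } \mathcal{C} \text{ mesh});
    \mathcal{B} \text{ and } \mathcal{C} \text{ are equivalent} \iff \mathcal{B} \leq \mathcal{C} \ \text{ and } \ \mathcal{C} \leq \mathcal{B}
        \iff \mathcal{B}^{\uparrow X} = \mathcal{C}^{\uparrow X},
    \text{where } \mathcal{B}^{\uparrow X} = \{\, S \subseteq X : B \subseteq S \text{ for some } B \in \mathcal{B} \,\}
        \text{ is the upward closure of } \mathcal{B} \text{ in } X.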
However, if are filters on then they are equivalent if and only if they are equal; this characterization does extend to prefilters. Equivalence of prefilters and filter subbases If is a prefilter on then the following families are always equivalent to each other: ; the -system generated by ; the filter on generated by ; and moreover, these three families all generate the same filter on (that is, the upward closures in of these families are equal). In particular, every prefilter is equivalent to the filter that it generates. By transitivity, two prefilters are equivalent if and only if they generate the same filter. Every prefilter is equivalent to exactly one filter on which is the filter that it generates (that is, the prefilter's upward closure). Said differently, every equivalence class of prefilters contains exactly one representative that is a filter. In this way, filters can be considered as just being distinguished elements of these equivalence classes of prefilters. A filter subbase that is also a prefilter can be equivalent to the prefilter (or filter) that it generates. In contrast, every prefilter is equivalent to the filter that it generates. This is why prefilters can, by and large, be used interchangeably with the filters that they generate while filter subbases cannot. Set theoretic properties and constructions relevant to topology Trace and meshing If is a prefilter (resp. filter) on then the trace of which is the family is a prefilter (resp. a filter) if and only if mesh (that is, ), in which case the trace of is said to be . The trace is always finer than the original family; that is, If is ultra and if mesh then the trace is ultra. If is an ultrafilter on then the trace of is a filter on if and only if For example, suppose that is a filter on is such that Then mesh and generates a filter on that is strictly finer than When prefilters mesh Given non-empty families the family satisfies and If is proper (resp. a prefilter, a filter subbase) then this is also true of both In order to make any meaningful deductions about from needs to be proper (that is, which is the motivation for the definition of "mesh". In this case, is a prefilter (resp. filter subbase) if and only if this is true of both Said differently, if are prefilters then they mesh if and only if is a prefilter. Generalizing gives a well known characterization of "mesh" entirely in terms of subordination (that is, ): Two prefilters (resp. filter subbases) mesh if and only if there exists a prefilter (resp. filter subbase) such that and If the least upper bound of two filters exists in then this least upper bound is equal to Images and preimages under functions Throughout, will be maps between non-empty sets. Images of prefilters Let Many of the properties that may have are preserved under images of maps; notable exceptions include being upward closed, being closed under finite intersections, and being a filter, which are not necessarily preserved. Explicitly, if one of the following properties is true of then it will necessarily also be true of (although possibly not on the codomain unless is surjective): ultra, ultrafilter, filter, prefilter, filter subbase, dual ideal, upward closed, proper/non-degenerate, ideal, closed under finite unions, downward closed, directed upward. 
Moreover, if is a prefilter then so are both The image under a map of an ultra set is again ultra and if is an ultra prefilter then so is If is a filter then is a filter on the range but it is a filter on the codomain if and only if is surjective. Otherwise it is just a prefilter on and its upward closure must be taken in to obtain a filter. The upward closure of is where if is upward closed in (that is, a filter) then this simplifies to: If then taking to be the inclusion map shows that any prefilter (resp. ultra prefilter, filter subbase) on is also a prefilter (resp. ultra prefilter, filter subbase) on Preimages of prefilters Let Under the assumption that is surjective: is a prefilter (resp. filter subbase, -system, closed under finite unions, proper) if and only if this is true of However, if is an ultrafilter on then even if is surjective (which would make a prefilter), it is nevertheless still possible for the prefilter to be neither ultra nor a filter on If is not surjective then denote the trace of by where in this case particular case the trace satisfies: and consequently also: This last equality and the fact that the trace is a family of sets over means that to draw conclusions about the trace can be used in place of and the can be used in place of For example: is a prefilter (resp. filter subbase, -system, proper) if and only if this is true of In this way, the case where is not (necessarily) surjective can be reduced down to the case of a surjective function (which is a case that was described at the start of this subsection). Even if is an ultrafilter on if is not surjective then it is nevertheless possible that which would make degenerate as well. The next characterization shows that degeneracy is the only obstacle. If is a prefilter then the following are equivalent: is a prefilter; is a prefilter; ; meshes with and moreover, if is a prefilter then so is If and if denotes the inclusion map then the trace of is equal to This observation allows the results in this subsection to be applied to investigating the trace on a set. Subordination is preserved by images and preimages The relation is preserved under both images and preimages of families of sets. This means that for families Moreover, the following relations always hold for family of sets : where equality will hold if is surjective. Furthermore, If then and where equality will hold if is injective. Products of prefilters Suppose is a family of one or more non-empty sets, whose product will be denoted by and for every index let denote the canonical projection. Let be non−empty families, also indexed by such that for each The of the families is defined identically to how the basic open subsets of the product topology are defined (had all of these been topologies). That is, both the notations denote the family of all cylinder subsets such that for all but finitely many and where for any one of these finitely many exceptions (that is, for any such that necessarily ). When every is a filter subbase then the family is a filter subbase for the filter on generated by If is a filter subbase then the filter on that it generates is called the . If every is a prefilter on then will be a prefilter on and moreover, this prefilter is equal to the coarsest prefilter such that for every However, may fail to be a filter on even if every is a filter on Convergence, limits, and cluster points Throughout, is a topological space. Prefilters vs. 
filters With respect to maps and subsets, the property of being a prefilter is in general more well behaved and better preserved than the property of being a filter. For instance, the image of a prefilter under some map is again a prefilter; but the image of a filter under a non-surjective map is a filter on the codomain, although it will be a prefilter. The situation is the same with preimages under non-injective maps (even if the map is surjective). If is a proper subset then any filter on will not be a filter on although it will be a prefilter. One advantage that filters have is that they are distinguished representatives of their equivalence class (relative to ), meaning that any equivalence class of prefilters contains a unique filter. This property may be useful when dealing with equivalence classes of prefilters (for instance, they are useful in the construction of completions of uniform spaces via Cauchy filters). The many properties that characterize ultrafilters are also often useful. They are used to, for example, construct the Stone–Čech compactification. The use of ultrafilters generally requires that the ultrafilter lemma be assumed. But in the many fields where the axiom of choice (or the Hahn–Banach theorem) is assumed, the ultrafilter lemma necessarily holds and does not require an addition assumption. A note on intuition Suppose that is a non-principal filter on an infinite set has one "upward" property (that of being closed upward) and one "downward" property (that of being directed downward). Starting with any there always exists some that is a subset of ; this may be continued ad infinitum to get a sequence of sets in with each being a subset of The same is true going "upward", for if then there is no set in that contains as a proper subset. Thus when it comes to limiting behavior (which is a topic central to the field of topology), going "upward" leads to a dead end, while going "downward" is typically fruitful. So to gain understanding and intuition about how filters (and prefilter) relate to concepts in topology, the "downward" property is usually the one to concentrate on. This is also why so many topological properties can be described by using only prefilters, rather than requiring filters (which only differ from prefilters in that they are also upward closed). The "upward" property of filters is less important for topological intuition but it is sometimes useful to have for technical reasons. For example, with respect to every filter subbase is contained in a unique smallest filter but there may not exist a unique smallest prefilter containing it. Limits and convergence A family is said to to a point of if Explicitly, means that every neighborhood contains some as a subset (that is, ); thus the following then holds: In words, a family converges to a point or subset if and only if it is than the neighborhood filter at A family converging to a point may be indicated by writing and saying that is a of if this limit is a point (and not a subset), then is also called a . As usual, is defined to mean that and is the limit point of that is, if also (If the notation "" did not also require that the limit point be unique then the equals sign would no longer be guaranteed to be transitive). The set of all limit points of is denoted by In the above definitions, it suffices to check that is finer than some (or equivalently, finer than every) neighborhood base in of the point (for example, such as or when ). 
Examples If is Euclidean space and denotes the Euclidean norm (which is the distance from the origin, defined as usual), then all of the following families converge to the origin: the prefilter of all open balls centered at the origin, where the prefilter of all closed balls centered at the origin, where This prefilter is equivalent to the one above. the prefilter where is a union of spheres centered at the origin having progressively smaller radii. This family consists of the sets as ranges over the positive integers. any of the families above but with the radius ranging over (or over any other positive decreasing sequence) instead of over all positive reals. Drawing or imagining any one of these sequences of sets when has dimension suggests that intuitively, these sets "should" converge to the origin (and indeed they do). This is the intuition that the above definition of a "convergent prefilter" make rigorous. Although was assumed to be the Euclidean norm, the example above remains valid for any other norm on The one and only limit point in of the free prefilter is since every open ball around the origin contains some open interval of this form. The fixed prefilter does not converges in to any and so although does converge to the since However, not every fixed prefilter converges to its kernel. For instance, the fixed prefilter also has kernel but does not converges (in ) to it. The free prefilter of intervals does not converge (in ) to any point. The same is also true of the prefilter because it is equivalent to and equivalent families have the same limits. In fact, if is any prefilter in any topological space then for every More generally, because the only neighborhood of is itself (that is, ), every non-empty family (including every filter subbase) converges to For any point its neighborhood filter always converges to More generally, any neighborhood basis at converges to A point is always a limit point of the principle ultra prefilter and of the ultrafilter that it generates. The empty family does not converge to any point. Basic properties If converges to a point then the same is true of any family finer than This has many important consequences. One consequence is that the limit points of a family are the same as the limit points of its upward closure: In particular, the limit points of a prefilter are the same as the limit points of the filter that it generates. Another consequence is that if a family converges to a point then the same is true of the family's trace/restriction to any given subset of If is a prefilter and then converges to a point of if and only if this is true of the trace If a filter subbase converges to a point then do the filter and the -system that it generates, although the converse is not guaranteed. For example, the filter subbase does not converge to in although the (principle ultra) filter that it generates does. Given the following are equivalent for a prefilter converges to converges to There exists a family equivalent to that converges to Because subordination is transitive, if and moreover, for every both and the maximal/ultrafilter converge to Thus every topological space induces a canonical convergence defined by At the other extreme, the neighborhood filter is the smallest (that is, coarsest) filter on that converges to that is, any filter converging to must contain as a subset. Said differently, the family of filters that converge to consists exactly of those filter on that contain as a subset. 
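The limit and cluster notions used throughout this section can be placed side by side; the symbols are the same assumed convention as before, with lim and cl denoting the sets of limit points and cluster points, respectively.

    % Limits and cluster points of a family of sets in a topological space.
    \mathcal{B} \to x \iff \mathcal{N}(x) \leq \mathcal{B}
        \quad (\text{every neighborhood of } x \text{ contains some member of } \mathcal{B});
    x \in \operatorname{cl} \mathcal{B} \iff \mathcal{B} \text{ meshes with } \mathcal{N}(x)
        \quad (\text{every member of } \mathcal{B} \text{ meets every neighborhood of } x);
    \lim \mathcal{B} \;\subseteq\; \operatorname{cl} \mathcal{B} \ \text{ for every family not containing the empty set,}
        \text{ with equality when } \mathcal{B} \text{ is an ultra prefilter.}

In particular, the neighborhood filter of a point is the coarsest filter converging to that point, which is why every filter converging to the point must contain it.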
Consequently, the finer the topology on then the prefilters exist that have any limit points in Cluster points A family is said to a point of if it meshes with the neighborhood filter of that is, if Explicitly, this means that and every neighborhood of In particular, a point is a or an of a family if meshes with the neighborhood filter at The set of all cluster points of is denoted by where the subscript may be dropped if not needed. In the above definitions, it suffices to check that meshes with some (or equivalently, meshes with every) neighborhood base in of When is a prefilter then the definition of " mesh" can be characterized entirely in terms of the subordination preorder Two equivalent families of sets have the exact same limit points and also the same cluster points. No matter the topology, for every both and the principal ultrafilter cluster at If clusters to a point then the same is true of any family coarser than Consequently, the cluster points of a family are the same as the cluster points of its upward closure: In particular, the cluster points of a prefilter are the same as the cluster points of the filter that it generates. Given the following are equivalent for a prefilter : clusters at The family generated by clusters at There exists a family equivalent to that clusters at for every neighborhood of If is a filter on then for every neighborhood There exists a prefilter subordinate to (that is, ) that converges to This is the filter equivalent of " is a cluster point of a sequence if and only if there exists a subsequence converging to In particular, if is a cluster point of a prefilter then is a prefilter subordinate to that converges to The set of all cluster points of a prefilter satisfies Consequently, the set of all cluster points of prefilter is a closed subset of This also justifies the notation for the set of cluster points. In particular, if is non-empty (so that is a prefilter) then since both sides are equal to Properties and relationships Just like sequences and nets, it is possible for a prefilter on a topological space of infinite cardinality to not have cluster points or limit points. If is a limit point of then is necessarily a limit point of any family than (that is, if then ). In contrast, if is a cluster point of then is necessarily a cluster point of any family than (that is, if mesh and then mesh). Equivalent families and subordination Any two equivalent families can be used in the definitions of "limit of" and "cluster at" because their equivalency guarantees that if and only if and also that if and only if In essence, the preorder is incapable of distinguishing between equivalent families. Given two prefilters, whether or not they mesh can be characterized entirely in terms of subordination. Thus the two most fundamental concepts related to (pre)filters to Topology (that is, limit and cluster points) can both be defined in terms of the subordination relation. This is why the preorder is of such great importance in applying (pre)filters to Topology. Limit and cluster point relationships and sufficient conditions Every limit point of a non-degenerate family is also a cluster point; in symbols: This is because if is a limit point of then mesh, which makes a cluster point of But in general, a cluster point need not be a limit point. 
For instance, every point in any given non-empty subset is a cluster point of the principle prefilter (no matter what topology is on ) but if is Hausdorff and has more than one point then this prefilter has no limit points; the same is true of the filter that this prefilter generates. However, every cluster point of an prefilter is a limit point. Consequently, the limit points of an prefilter are the same as its cluster points: that is to say, a given point is a cluster point of an ultra prefilter if and only if converges to that point. Although a cluster point of a filter need not be a limit point, there will always exist a finer filter that does converge to it; in particular, if clusters at then is a filter subbase whose generated filter converges to If is a filter subbase such that then In particular, any limit point of a filter subbase subordinate to is necessarily also a cluster point of If is a cluster point of a prefilter then is a prefilter subordinate to that converges to If and if is a prefilter on then every cluster point of belongs to and any point in is a limit point of a filter on Primitive sets A subset is called if it is the set of limit points of some ultrafilter (or equivalently, some ultra prefilter). That is, if there exists an ultrafilter such that is equal to which recall denotes the set of limit points of Since limit points are the same as cluster points for ultra prefilters, a subset is primitive if and only if it is equal to the set of cluster points of some ultra prefilter For example, every closed singleton subset is primitive. The image of a primitive subset of under a continuous map is contained in a primitive subset of Assume that are two primitive subset of If is an open subset of that intersects then for any ultrafilter such that In addition, if are distinct then there exists some and some ultrafilters such that and Other results If is a complete lattice then: The limit inferior of is the infimum of the set of all cluster points of The limit superior of is the supremum of the set of all cluster points of is a convergent prefilter if and only if its limit inferior and limit superior agree; in this case, the value on which they agree is the limit of the prefilter. Limits of functions defined as limits of prefilters Suppose is a map from a set into a topological space and If is a limit point (respectively, a cluster point) of then is called a or (respectively, a ) Explicitly, is a limit of with respect to if and only if which can be written as (by definition of this notation) and stated as If the limit is unique then the arrow may be replaced with an equals sign The neighborhood filter can be replaced with any family equivalent to it and the same is true of The definition of a convergent net is a special case of the above definition of a limit of a function. Specifically, if is a net then where the left hand side states that is a limit while the right hand side states that is a limit with respect to (as just defined above). The table below shows how various types of limits encountered in analysis and topology can be defined in terms of the convergence of images (under ) of particular prefilters on the domain This shows that prefilters provide a general framework into which many of the various definitions of limits fit. The limits in the left-most column are defined in their usual way with their obvious definitions. 
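A few representative instances of this scheme can be written out explicitly; the prefilters named below are the usual choices and are assumed here as one common convention (some authors instead use deleted neighborhoods for the limit at a point of the domain).

    % Limits in analysis as convergence of image prefilters under a map f (representative cases).
    \lim_{n \to \infty} x_n = y \iff x_\bullet(\mathcal{T}) \to y, \quad
        \mathcal{T} = \{\, \{\, n, n+1, n+2, \ldots \,\} : n \in \mathbb{N} \,\} \ \text{(tails of } \mathbb{N}\text{)};
    \lim_{x \to \infty} f(x) = y \iff f(\mathcal{R}) \to y, \quad
        \mathcal{R} = \{\, [r, \infty) : r \in \mathbb{R} \,\};
    \lim_{x \to x_0} f(x) = y \iff f\big(\mathcal{N}(x_0)\big) \to y.

Here x_•(𝒯) and f(ℛ) denote the families of images of the members of the respective prefilters, and the convergence on the right-hand sides is prefilter convergence in the codomain, as defined earlier.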
Throughout, let be a map between topological spaces, If is Hausdorff then all arrows in the table may be replaced with equal signs and may be replaced with By defining different prefilters, many other notions of limits can be defined; for example, Divergence to infinity Divergence of a real-valued function to infinity can be defined/characterized by using the prefilters where along if and only if and similarly, along if and only if The family can be replaced by any family equivalent to it, such as for instance (in real analysis, this would correspond to replacing the strict inequality in the definition with and the same is true of and So for example, if then if and only if holds. Similarly, if and only if or equivalently, if and only if More generally, if is valued in (or some other seminormed vector space) and if then if and only if holds, where Filters and nets This section will describe the relationships between prefilters and nets in great detail because of how important these details are applying filters to topology − particularly in switching from utilizing nets to utilizing filters and vice verse. Nets to prefilters In the definitions below, the first statement is the standard definition of a limit point of a net (respectively, a cluster point of a net) and it is gradually reworded until the corresponding filter concept is reached. If is a map and is a net in then Prefilters to nets A is a pair consisting of a non-empty set and an element For any family let Define a canonical preorder on pointed sets by declaring There is a canonical map defined by If then the tail of the assignment starting at is Although is not, in general, a partially ordered set, it is a directed set if (and only if) is a prefilter. So the most immediate choice for the definition of "the net in induced by a prefilter " is the assignment from into If is a prefilter on is a net in and the prefilter associated with is ; that is: This would not necessarily be true had been defined on a proper subset of If is a net in then it is in general true that is equal to because, for example, the domain of may be of a completely different cardinality than that of (since unlike the domain of the domain of an arbitrary net in could have cardinality). Partially ordered net The domain of the canonical net is in general not partially ordered. However, in 1955 Bruns and Schmidt discovered a construction (detailed here: Filter (set theory)#Partially ordered net) that allows for the canonical net to have a domain that is both partially ordered and directed; this was independently rediscovered by Albert Wilansky in 1970. Because the tails of this partially ordered net are identical to the tails of (since both are equal to the prefilter ), there is typically nothing lost by assuming that the domain of the net associated with a prefilter is both directed partially ordered. If can further be assumed that the partially ordered domain is also a dense order. Subordinate filters and subnets The notion of " is subordinate to " (written ) is for filters and prefilters what " is a subsequence of " is for sequences. For example, if denotes the set of tails of and if denotes the set of tails of the subsequence (where ) then (which by definition means ) is true but is in general false. If is a net in a topological space and if is the neighborhood filter at a point then If is an surjective open map, and is a prefilter on that converges to then there exist a prefilter on such that and is equivalent to (that is, ). 
Subordination analogs of results involving subsequences The following results are the prefilter analogs of statements involving subsequences. The condition "" which is also written is the analog of " is a subsequence of " So "finer than" and "subordinate to" is the prefilter analog of "subsequence of." Some people prefer saying "subordinate to" instead of "finer than" because it is more reminiscent of "subsequence of." Non-equivalence of subnets and subordinate filters Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet." The first definition of a subnet ("Kelley-subnet") was introduced by John L. Kelley in 1955. Stephen Willard introduced in 1970 his own variant ("Willard-subnet") of Kelley's definition of subnet. AA-subnets were introduced independently by Smiley (1957), Aarnes and Andenaes (1972), and Murdeshwar (1983); AA-subnets were studied in great detail by Aarnes and Andenaes but they are not often used. A subset of a preordered space is or in if for every there exists some such that If contains a tail of then is said to be in }}; explicitly, this means that there exists some such that (that is, for all satisfying ). A subset is eventual if and only if its complement is not frequent (which is termed ). A map between two preordered sets is if whenever satisfy then Kelley did not require the map to be order preserving while the definition of an AA-subnet does away entirely with any map between the two nets' domains and instead focuses entirely on − the nets' common codomain. Every Willard-subnet is a Kelley-subnet and both are AA-subnets. In particular, if is a Willard-subnet or a Kelley-subnet of then Example: If and is a constant sequence and if and then is an AA-subnet of but it is neither a Willard-subnet nor a Kelley-subnet of AA-subnets have a defining characterization that immediately shows that they are fully interchangeable with sub(ordinate)filters. Explicitly, what is meant is that the following statement is true for AA-subnets: If are prefilters then if and only if is an AA-subnet of If "AA-subnet" is replaced by "Willard-subnet" or "Kelley-subnet" then the above statement becomes . In particular, as this counter-example demonstrates, the problem is that the following statement is in general false: statement: If are prefilters such that is a Kelley-subnet of Since every Willard-subnet is a Kelley-subnet, this statement thus remains false if the word "Kelley-subnet" is replaced with "Willard-subnet". If "subnet" is defined to mean Willard-subnet or Kelley-subnet then nets and filters are not completely interchangeable because there exists a filter–sub(ordinate)filter relationships that cannot be expressed in terms of a net–subnet relationship between the two induced nets. In particular, the problem is that Kelley-subnets and Willard-subnets are fully interchangeable with subordinate filters. If the notion of "subnet" is not used or if "subnet" is defined to mean AA-subnet, then this ceases to be a problem and so it becomes correct to say that nets and filters are interchangeable. Despite the fact that AA-subnets do not have the problem that Willard and Kelley subnets have, they are not widely used or known about. Topologies and prefilters Throughout, is a topological space. 
Examples of relationships between filters and topologies

Bases and prefilters

Let be a family of sets that covers and define for every The definition of a base for some topology can be immediately reworded as: is a base for some topology on if and only if is a filter base for every If is a topology on and then the definitions of is a basis (resp. subbase) for can be reworded as: is a base (resp. subbase) for if and only if for every is a filter base (resp. filter subbase) that generates the neighborhood filter of at

Neighborhood filters

The archetypical example of a filter is the set of all neighborhoods of a point in a topological space. Any neighborhood basis of a point in (or of a subset of) a topological space is a prefilter. In fact, the definition of a neighborhood base can be equivalently restated as: "a neighborhood base is any prefilter that is equivalent to the neighborhood filter." Neighborhood bases at points are examples of prefilters that are fixed but may or may not be principal. If has its usual topology and if then any neighborhood filter base of is fixed by (in fact, it is even true that ) but is not principal since In contrast, a topological space has the discrete topology if and only if the neighborhood filter of every point is a principal filter generated by exactly one point. This shows that a non-principal filter on an infinite set is not necessarily free. The neighborhood filter of every point in a topological space is fixed since its kernel contains (and possibly other points if, for instance, is not a T1 space). This is also true of any neighborhood basis at For any point in a T1 space (for example, a Hausdorff space), the kernel of the neighborhood filter of is equal to the singleton set However, it is possible for a neighborhood filter at a point to be principal but not discrete (that is, not principal at a single point). A neighborhood basis of a point in a topological space is principal if and only if the kernel of is an open set. If in addition the space is T1 then so that this basis is principal if and only if is an open set.

Generating topologies from filters and prefilters

Suppose is not empty (and ). If is a filter on then is a topology on but the converse is in general false. This shows that in a sense, filters are almost topologies. Topologies of the form where is an ultrafilter on are an even more specialized subclass of such topologies; they have the property that every proper subset is open or closed, but (unlike the discrete topology) never both. These spaces are, in particular, examples of door spaces. If is a prefilter (resp. filter subbase, π-system, proper) on then the same is true of both and the set of all possible unions of one or more elements of If is closed under finite intersections then the set is a topology on with both being bases for it. If the π-system covers then both are also bases for If is a topology on then is a prefilter (or equivalently, a π-system) if and only if it has the finite intersection property (that is, it is a filter subbase), in which case a subset will be a basis for if and only if is equivalent to in which case will be a prefilter.
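The claim that adjoining the empty set to a filter yields a topology can be checked mechanically on a small finite example. The following Python sketch is not from the article; the ground set X = {0, 1, 2} and the principal filter generated by {0, 1} are hypothetical choices made only to exercise the topology axioms.

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, returned as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = frozenset({0, 1, 2})

# Hypothetical example: the principal filter on X generated by the kernel {0, 1},
# i.e. every subset of X that contains {0, 1}.
kernel = frozenset({0, 1})
F = {S for S in powerset(X) if kernel <= S}

# Candidate topology: the filter together with the empty set.
tau = F | {frozenset()}

# Topology axioms (X is finite, so arbitrary unions reduce to binary unions,
# with the empty union giving the empty set).
assert frozenset() in tau and X in tau
assert all(A & B in tau for A in tau for B in tau)  # closed under finite intersections
assert all(A | B in tau for A in tau for B in tau)  # closed under unions
print(sorted((set(S) for S in tau), key=len))       # [set(), {0, 1}, {0, 1, 2}]
```

In this example the resulting topology is {∅, {0, 1}, X}: here {0, 1} is open but not closed and {2} is closed but not open, while {0} is neither; only when the generating filter is an ultrafilter does every nonempty proper subset become open or closed, which is the door-space situation described above.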
Topological properties and prefilters

Neighborhoods and topologies

The neighborhood filter of a nonempty subset in a topological space is equal to the intersection of all neighborhood filters of all points in A subset is open in if and only if whenever is a filter on and then Suppose are topologies on Then is finer than (that is, ) if and only if whenever is a filter on if then Consequently, if and only if for every filter and every if and only if However, it is possible that while also for every filter converges to a point of if and only if converges to a point of

Closure

If is a prefilter on a subset then every cluster point of belongs to If is a non-empty subset, then the following are equivalent: is a limit point of a prefilter on Explicitly: there exists a prefilter such that is a limit point of a filter on There exists a prefilter such that The prefilter meshes with the neighborhood filter Said differently, is a cluster point of the prefilter The prefilter meshes with some (or equivalently, with every) filter base for (that is, with every neighborhood basis at ). The following are equivalent: is a limit point of There exists a prefilter such that

Closed sets

If is not empty then the following are equivalent: is a closed subset of If is a prefilter on such that then If is a prefilter on such that is an accumulation point of then If is such that the neighborhood filter meshes with then

Hausdorffness

The following are equivalent: is a Hausdorff space. Every prefilter on converges to at most one point in The above statement but with the word "prefilter" replaced by any one of the following: filter, ultra prefilter, ultrafilter.

Compactness

As discussed in this article, the Ultrafilter Lemma is closely related to many important theorems involving compactness. The following are equivalent: is a compact space. Every ultrafilter on converges to at least one point in That this condition implies compactness can be proven by using only the ultrafilter lemma. That compactness implies this condition can be proven without the ultrafilter lemma (or even the axiom of choice). The above statement but with the word "ultrafilter" replaced by "ultra prefilter". For every filter there exists a filter such that and converges to some point of The above statement but with each instance of the word "filter" replaced by: prefilter. Every filter on has at least one cluster point in That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. The above statement but with the word "filter" replaced by "prefilter". Alexander subbase theorem: There exists a subbase such that every cover of by sets in has a finite subcover. That this condition is equivalent to compactness can be proven by using only the ultrafilter lemma. If is the set of all complements of compact subsets of a given topological space then is a filter on if and only if is not compact.

Continuity

Let be a map between topological spaces. Given the following are equivalent: is continuous at Definition: For every neighborhood of there exists some neighborhood of such that If is a filter on such that then The above statement but with the word "filter" replaced by "prefilter". The following are equivalent: is continuous. If is a prefilter on such that then If is a limit point of a prefilter then is a limit point of Any one of the above two statements but with the word "prefilter" replaced by "filter".
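Because the formulas in the continuity characterizations above were lost, the following block sketches the way these equivalences are usually written; the symbols f : X → Y, x, the neighborhood filters 𝒩(·), and the prefilter ℬ are generic assumptions rather than the article's own notation.

```latex
% Hedged reconstruction of the usual filter characterization of continuity.
\[
  f \text{ is continuous at } x
  \iff
  \mathcal{N}\bigl(f(x)\bigr) \leq f\bigl(\mathcal{N}(x)\bigr)
  \iff
  f\bigl(\mathcal{N}(x)\bigr) \to f(x) \text{ in } Y,
\]
\[
  f \text{ is continuous}
  \iff
  \text{for every prefilter } \mathcal{B} \to x \text{ in } X,\;
  f(\mathcal{B}) \to f(x) \text{ in } Y.
\]
```

Here the subordination condition says that every neighborhood of f(x) contains the image of some neighborhood of x, which is the neighborhood-based form of continuity at x with all mention of points of the neighborhoods removed.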
If is a prefilter on is a cluster point of is continuous, then is a cluster point in of the prefilter A subset of a topological space is dense in if and only if for every the trace of the neighborhood filter along does not contain the empty set (in which case it will be a filter on ). Suppose is a continuous map into a Hausdorff regular space and that is a dense subset of a topological space Then has a continuous extension if and only if for every the prefilter converges to some point in Furthermore, this continuous extension will be unique whenever it exists.

Products

Suppose is a non-empty family of non-empty topological spaces and that is a family of prefilters where each is a prefilter on Then the product of these prefilters (defined above) is a prefilter on the product space which as usual, is endowed with the product topology. If then if and only if Suppose are topological spaces, is a prefilter on having as a cluster point, and is a prefilter on having as a cluster point. Then is a cluster point of in the product space However, if then there exist sequences such that both of these sequences have a cluster point in but the sequence does not have a cluster point in Example application: The ultrafilter lemma along with the axioms of ZF imply Tychonoff's theorem for compact Hausdorff spaces: Let be compact topological spaces. Assume that the ultrafilter lemma holds (because of the Hausdorff assumption, this proof does not need the full strength of the axiom of choice; the ultrafilter lemma suffices). Let be given the product topology (which makes a Hausdorff space) and for every let denote this product's projections. If then is compact and the proof is complete, so assume Despite the fact that because the axiom of choice is not assumed, the projection maps are not guaranteed to be surjective. Let be an ultrafilter on and for every let denote the ultrafilter on generated by the ultra prefilter Because is compact and Hausdorff, the ultrafilter converges to a unique limit point (because of 's uniqueness, this definition does not require the axiom of choice). Let where satisfies for every The characterization of convergence in the product topology that was given above implies that Thus every ultrafilter on converges to some point of which implies that is compact (recall that this implication's proof only required the ultrafilter lemma).

Examples of applications of prefilters

Uniformities and Cauchy prefilters

A uniform space is a set equipped with a filter on that has certain properties. A or is a prefilter on whose upward closure is a uniform space. A prefilter on a uniform space with uniformity is called a Cauchy prefilter if for every entourage there exists some that is , which means that A minimal Cauchy filter is a minimal element (with respect to or equivalently, to ) of the set of all Cauchy filters on Examples of minimal Cauchy filters include the neighborhood filter of any point Every convergent filter on a uniform space is Cauchy. Moreover, every cluster point of a Cauchy filter is a limit point. A uniform space is called (resp. ) if every Cauchy prefilter (resp. every elementary Cauchy prefilter) on converges to at least one point of (replacing all instances of the word "prefilter" with "filter" results in an equivalent statement). Every compact uniform space is complete because any Cauchy filter has a cluster point (by compactness), which is necessarily also a limit point (since the filter is Cauchy).
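The Cauchy condition stated above lost its formulas, so the next block records the form it usually takes; the uniform space (X, 𝒰), the prefilter ℬ, and the metric-space reading are generic assumptions, not quoted from the article.

```latex
% Hedged reconstruction of the Cauchy condition for a prefilter on a uniform
% space (X, \mathcal{U}).
\[
  \mathcal{B} \text{ is a Cauchy prefilter}
  \iff
  \text{for every entourage } U \in \mathcal{U}
  \text{ there exists } B \in \mathcal{B} \text{ with } B \times B \subseteq U.
\]
% For a metric space, whose entourages are generated by the sets
% \{(x, y) : d(x, y) < \varepsilon\}, this reduces to: for every \varepsilon > 0
% there is B \in \mathcal{B} with \operatorname{diam} B < \varepsilon.
```

In other words, a Cauchy prefilter is one that contains sets of arbitrarily small "size" as measured by the uniformity, which is why every convergent filter is Cauchy and why a cluster point of a Cauchy filter is automatically a limit point.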
Uniform spaces were the result of attempts to generalize notions such as "uniform continuity" and "uniform convergence" that are present in metric spaces. Every topological vector space, and more generally, every topological group can be made into a uniform space in a canonical way. Every uniformity also generates a canonical induced topology. Filters and prefilters play an important role in the theory of uniform spaces. For example, the completion of a Hausdorff uniform space (even if it is not metrizable) is typically constructed by using minimal Cauchy filters. Nets are less ideal for this construction because their domains are extremely varied (for example, the class of all Cauchy nets is not a set); sequences cannot be used in the general case because the topology might not be metrizable, first-countable, or even sequential. The set of all on a Hausdorff topological vector space (TVS) can be made into a vector space and topologized in such a way that it becomes a completion of (with the assignment becoming a linear topological embedding that identifies as a dense vector subspace of this completion). More generally, a Cauchy space is a pair consisting of a set together with a family of (proper) filters, whose members are declared to be "", having all of the following properties:

For each the discrete ultrafilter at is an element of
If is a subset of a proper filter then
If and if each member of intersects each member of then

The set of all Cauchy filters on a uniform space forms a Cauchy space. Every Cauchy space is also a convergence space. A map between two Cauchy spaces is called Cauchy continuous if the image of every Cauchy filter in is a Cauchy filter in Unlike the category of topological spaces, the category of Cauchy spaces and Cauchy continuous maps is Cartesian closed, and contains the category of proximity spaces.

Topologizing the set of prefilters

Starting with nothing more than a set it is possible to topologize the set of all filter bases on with the Stone topology, which is named after Marshall Harvey Stone. To reduce confusion, this article will adhere to the following notational conventions:

Lower case letters for elements
Upper case letters for subsets
Upper case calligraphy letters for subsets (or equivalently, for elements such as prefilters)
Upper case double-struck letters for subsets

For every let where These sets will be the basic open subsets of the Stone topology. If then From this inclusion, it is possible to deduce all of the subset inclusions displayed below with the exception of For all where in particular, the equality shows that the family is a π-system that forms a basis for a topology on called the Stone topology. It is henceforth assumed that carries this topology and that any subset of carries the induced subspace topology. In contrast to most other general constructions of topologies (for example, the product, quotient, subspace topologies, etc.), this topology on was defined without using anything other than the set ; there were no preexisting structures or assumptions on so this topology is completely independent of everything other than (and its subsets). The following criteria can be used for checking for points of closure and neighborhoods. If then:

Closure: belongs to the closure of if and only if
Neighborhoods: is a neighborhood of if and only if there exists some such that (that is, such that for all ).

It will be henceforth assumed that because otherwise and the topology is which is uninteresting.
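Since the definition of the basic open sets was stripped out above, the following is a hedged reconstruction of one standard way to set up the Stone topology on the set of prefilters on a set X; the symbol 𝕆(S) and the exact form of the definition are assumptions, chosen to match the inclusion and the π-system equality referred to in the text.

```latex
% Hedged reconstruction: basic open sets of the Stone topology on the
% prefilters on a set X. For any S \subseteq X define
\[
  \mathbb{O}(S) := \{\, \mathcal{B} : B \subseteq S \text{ for some } B \in \mathcal{B} \,\}.
\]
\[
  R \subseteq S \implies \mathbb{O}(R) \subseteq \mathbb{O}(S),
  \qquad
  \mathbb{O}(R) \cap \mathbb{O}(S) = \mathbb{O}(R \cap S),
\]
% so \{\mathbb{O}(S) : S \subseteq X\} is a \pi-system and hence a base (not
% merely a subbase) for a topology, the Stone topology.
```

With this reading, monotonicity in S gives the inclusion mentioned in the text, and the displayed equality (which uses the defining downward-directedness of prefilters) is exactly what makes the family a base rather than only a subbase.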
Subspace of ultrafilters

The set of ultrafilters on (with the subspace topology) is a Stone space, meaning that it is compact, Hausdorff, and totally disconnected. If has the discrete topology then the map defined by sending to the principal ultrafilter at is a topological embedding whose image is a dense subset of (see the article Stone–Čech compactification for more details).

Relationships between topologies on and the Stone topology on

Every induces a canonical map defined by which sends to the neighborhood filter of If then if and only if Thus every topology can be identified with the canonical map which allows to be canonically identified as a subset of (as a side note, it is now possible to place on and thus also on the topology of pointwise convergence on so that it now makes sense to talk about things such as sequences of topologies on converging pointwise). For every the surjection is always continuous, closed, and open, but it is injective if and only if (that is, a Kolmogorov space). In particular, for every topology the map is a topological embedding (said differently, every Kolmogorov space is a topological subspace of the space of prefilters). In addition, if is a map such that (which is true of for instance), then for every the set is a neighborhood (in the subspace topology) of
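As a small worked example of the injectivity statement (not taken from the article), consider the two-point indiscrete space; the point names a and b are hypothetical.

```latex
% On X = \{a, b\} with the indiscrete topology, the only open sets are
% \varnothing and X, so both points have the same neighborhood filter:
\[
  \mathcal{N}(a) = \mathcal{N}(b) = \{X\} = \{\{a, b\}\}.
\]
% Hence the canonical map x \mapsto \mathcal{N}(x) is not injective here, in line
% with the statement that injectivity holds exactly when the space is Kolmogorov.
```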