https://en.wikipedia.org/wiki/Superabundant%20number

In mathematics, a superabundant number is a certain kind of natural number. A natural number n is called superabundant precisely when, for all m < n,

σ(m)/m < σ(n)/n,

where σ denotes the sum-of-divisors function (i.e., the sum of all positive divisors of n, including n itself). The first few superabundant numbers are 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ... . For example, the number 5 is not a superabundant number because for n = 1, 2, 3, 4, and 5, the sigma is 1, 3, 4, 7, and 6, and σ(5)/5 = 6/5 is smaller than σ(4)/4 = 7/4.
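The definition above lends itself to a direct brute-force check; the following sketch (illustrative code, not from any particular library) enumerates the superabundant numbers up to a small bound by tracking the running maximum of σ(n)/n:

```python
def sigma(n):
    """Sum of all positive divisors of n, including n itself."""
    total = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i  # the complementary divisor n/i
        i += 1
    return total

def superabundant_up_to(limit):
    """Enumerate n <= limit whose sigma(n)/n exceeds that of every smaller m."""
    best = 0.0
    found = []
    for n in range(1, limit + 1):
        ratio = sigma(n) / n
        if ratio > best:
            best = ratio
            found.append(n)
    return found

print(superabundant_up_to(130))  # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120]
```

This recovers the initial terms quoted above; a serious search would instead build candidates from the non-increasing prime-exponent form described under Properties.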
Superabundant numbers were first defined by Leonidas Alaoglu and Paul Erdős in 1944. Unknown to Alaoglu and Erdős, about 30 pages of Ramanujan's 1915 paper "Highly Composite Numbers" were suppressed. Those pages were finally published in The Ramanujan Journal 1 (1997), 119–153. In section 59 of that paper, Ramanujan defines generalized highly composite numbers, which include the superabundant numbers.
Properties
Alaoglu and Erdős proved that if n is superabundant, then there exist a k and exponents a1, a2, ..., ak such that

n = p1^a1 · p2^a2 ⋯ pk^ak,

where pi is the i-th prime number, and

a1 ≥ a2 ≥ ⋯ ≥ ak ≥ 1.
That is, they proved that if n is superabundant, the prime decomposition of n has non-increasing exponents (the exponent of a larger prime is never more than that of a smaller prime) and that all primes up to pk are factors of n. In particular, any superabundant number is an even integer, and it is a multiple of the k-th primorial p1 · p2 ⋯ pk.
In fact, the last exponent ak is equal to 1 except when n is 4 or 36.
Superabundant numbers are closely related to highly composite numbers. Not all superabundant numbers are highly composite numbers. In fact, only 449 superabundant and highly composite numbers are the same. For instance, 7560 is highly composite but not superabundant. Conversely, 1163962800 is superabundant but not highly composite.
Alaoglu and Erdős observed that all superabundant numbers are highly abundant.
Not all superabundant numbers are Harshad numbers. The first exception is the 105th superabundant number, 149602080797769600. The digit sum is 81, but 81 does not divide evenly into this superabundant number.
Superabundant numbers are also of interest in connection with the Riemann hypothesis, and with Robin's theorem that the Riemann hypothesis is equivalent to the statement that

σ(n) < e^γ · n log log n

(where γ is the Euler–Mascheroni constant) for all n greater than the largest known exception, the superabundant number 5040. If this inequality has a larger counterexample, proving the Riemann hypothesis to be false, the smallest such counterexample must be a superabundant number.
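Robin's inequality can be evaluated numerically; a small sketch (helper names are illustrative) confirms that 5040 violates it while, for example, the superabundant number 10080 does not:

```python
import math

EULER_GAMMA = 0.57721566490153286  # Euler–Mascheroni constant

def sigma(n):
    """Sum of all positive divisors of n."""
    total = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i
        i += 1
    return total

def robin_holds(n):
    """True when sigma(n) < e^gamma * n * log(log(n)) (Robin's inequality)."""
    return sigma(n) < math.exp(EULER_GAMMA) * n * math.log(math.log(n))

print(robin_holds(5040))   # False — the largest known exception
print(robin_holds(10080))  # True
```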
Not all superabundant numbers are colossally abundant.
Extension
The generalized s-superabundant numbers are those n such that σ_s(m)/m^s < σ_s(n)/n^s for all m < n, where σ_s(n) denotes the sum of the s-th powers of the divisors of n.

1-superabundant numbers are the superabundant numbers. 0-superabundant numbers are the highly composite numbers.
For example, generalized 2-super abundant numbers are 1, 2, 4, 6, 12, 24, 48, 60, 120, 240, ...
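The generalized definition can be checked the same way; this illustrative sketch (a naive divisor sum, fine for small bounds) reproduces the 2-superabundant terms listed above:

```python
def sigma_s(n, s):
    """Sum of the s-th powers of the divisors of n."""
    return sum(d ** s for d in range(1, n + 1) if n % d == 0)

def generalized_superabundant(limit, s):
    """Enumerate n <= limit whose sigma_s(n)/n^s beats every smaller m."""
    best = 0.0
    found = []
    for n in range(1, limit + 1):
        ratio = sigma_s(n, s) / n ** s
        if ratio > best:
            best = ratio
            found.append(n)
    return found

print(generalized_superabundant(250, 2))  # [1, 2, 4, 6, 12, 24, 48, 60, 120, 240]
```

With s = 1 the same routine returns the ordinary superabundant numbers, as the Extension section states.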
References
External links
MathWorld: Superabundant number
Divisor function
Integer sequences
https://en.wikipedia.org/wiki/Antonio%20Banfi

Antonio Banfi (Vimercate, 30 September 1886 – Milan, 22 July 1957) was an Italian philosopher and senator. He is also noted for founding the Italian philosophical school called critical rationalism.
Although influenced by the neo-Kantians in Marburg and Edmund Husserl, whom he knew personally, Banfi moved away from idealism and instead focused on Marxism, in particular historical materialism. Banfi joined the Italian Communist Party in 1947. He was elected to the Italian Senate in 1948 and again in 1953.
Banfi held the chair of History of Philosophy at the University of Milan. Among his students were Dino Formaggio and Mario Dal Pra.
References
Bibliography
Garin, E., "Banfi, Antonio" in Borchert, D. M. (ed.), Encyclopedia of Philosophy, Second Edition, vol. 1 (Thomson Gale, 2006), p. 476.
Garin, E., History of Italian Philosophy, vol. 2 (Rodopi, 2008), p. 1292.
Guiat, C., The French and Italian Communist Parties: Comrades and Culture (Frank Cass Publishers, 2003), pp. 144–150.
1886 births
1957 deaths
People from Vimercate
Italian Communist Party politicians
Senators of Legislature I of Italy
Senators of Legislature II of Italy
Members of the Italian Senate from Lombardy
20th-century Italian philosophers
Materialists
Manifesto of the Anti-Fascist Intellectuals
https://en.wikipedia.org/wiki/Legacy-free%20PC

A legacy-free PC is a type of personal computer that lacks a floppy or optical disc drive, legacy ports, and an Industry Standard Architecture (ISA) bus (or sometimes, any internal expansion bus at all). According to Microsoft, "The basic goal for these requirements is that the operating system, devices, and end users cannot detect the presence of the following: ISA slots or devices; legacy floppy disk controller (FDC); and PS/2, serial, parallel, and game ports." The legacy ports are usually replaced with Universal Serial Bus (USB) ports. A USB adapter may be used if an older device must be connected to a PC lacking these ports. According to the 2001 edition of Microsoft's PC System Design Guide, a legacy-free PC must be able to boot from a USB device.
Removing older, usually bulkier ports and devices allows a legacy-free PC to be much more compact than earlier systems and many fall into the nettop or all-in-one form factor. Netbooks and ultrabooks could also be considered a portable form of a legacy-free PC. Legacy-free PCs can be more difficult to upgrade than a traditional beige box PC, and are more typically expected to be replaced completely when they become obsolete. Many legacy-free PCs include modern devices that may be used to replace ones omitted, such as a memory card reader replacing the floppy drive.
As the first decade of the 21st century progressed, the legacy-free PC went mainstream, with legacy ports removed from commonly available computer systems in all form factors. However, the PS/2 keyboard connector still retains some use, as it can offer capabilities (e.g., implementation of n-key rollover) not offered by USB.
With those parts becoming increasingly rare on newer computers as of the late 2010s and early 2020s, the term "legacy-free PC" itself has also become increasingly rare.
History
Late 1980s
In 1987, IBM released the IBM PS/2 line with a new internal architecture; a new BIOS and the PS/2 and VGA ports were introduced, but the line was heavily criticized for its relatively closed, proprietary architecture and low compatibility with PC-clone hardware.
1990s
In 1998, Apple's iMac G3 was introduced as the first widely known example of a legacy-free PC, and drew much criticism for its lack of legacy peripherals such as a floppy drive and the Apple Desktop Bus (ADB) connector; however, its success popularized USB ports.
Compaq released the iPaq desktop in 1999.
From November 1999 to July 2000, Dell's WebPC was an early, less successful Wintel legacy-free PC.
2000s
More legacy-free PCs were introduced around 2000 after the prevalence of USB and broadband internet made many of the older ports and devices obsolete. They largely took the form of low-end, consumer systems with the motivation of making computers less expensive, easier to use, and more stable and manageable. The Dell Studio Hybrid, Asus Eee Box and MSI Wind PC are examples of later, more-successful Intel-based legacy-free PCs.
Apple introduced the Apple Modem on October 12, 2005 and removed the internal 56K modem on new computers. The MacBook Air, introduced on January 29, 2008, also omits a built-in SuperDrive and wired Ethernet connectivity that was available on all other Mac computers sold at the time. The SuperDrive would later be removed from all Macs by the end of 2016, and wired Ethernet would later be removed from all MacBook models. These removals were followed by other PC manufacturers who ship lightweight laptops.
Intel introduced its LGA 775 CPU socket in 2004, replacing its previous sockets, which used CPUs in PGA packaging.
2010s
The northbridge, southbridge, and front-side bus (FSB) have been replaced by more integrated architectures starting in the early 2010s.
The relaunched MacBook in 2015 dropped features such as the MagSafe charging port and the Secure Digital (SD) memory card reader. It kept only two types of ports: a 3.5 mm audio jack and a USB 3.1 Type-C port. This configuration later found its way into the MacBook Pro in 2016, the only difference being that two or four Thunderbolt 3 ports were included instead of just one. In addition, all MacBook Pro models except for the entry-level model replaced the function keys with a Touch Bar. These changes led to criticism because many users relied on the features that Apple had removed, yet the approach has been copied to varying degrees by some other laptop vendors. However, the 2021 MacBook Pro models once again include function keys and do not feature a Touch Bar, seemingly in response to the aforementioned poor reception.
The legacy BIOS was replaced by the Unified Extensible Firmware Interface. PCI has fallen out of favor, as it has been superseded by PCIe.
See also
Nettop
Netbook
PC 2001
WebPC
iPAQ (desktop computer)
Network computer
Thin client
Legacy system
References
Cloud clients
Information appliances
Personal computers
Classes of computers
Legacy hardware
https://en.wikipedia.org/wiki/Gallium%20nitride

Gallium nitride (GaN) is a binary III/V direct bandgap semiconductor commonly used in blue light-emitting diodes since the 1990s. The compound is a very hard material that has a wurtzite crystal structure. Its wide band gap of 3.4 eV affords it special properties for applications in optoelectronics, high-power and high-frequency devices. For example, GaN is the substrate that makes violet (405 nm) laser diodes possible, without requiring nonlinear optical frequency doubling.
Its sensitivity to ionizing radiation is low (like other group III nitrides), making it a suitable material for solar cell arrays for satellites. Military and space applications could also benefit as devices have shown stability in high radiation environments.
Because GaN transistors can operate at much higher temperatures and work at much higher voltages than gallium arsenide (GaAs) transistors, they make ideal power amplifiers at microwave frequencies. In addition, GaN offers promising characteristics for THz devices. Due to high power density and voltage breakdown limits GaN is also emerging as a promising candidate for 5G cellular base station applications. Since the early 2020s, GaN power transistors have come into increasing use in power supplies in electronic equipment, converting AC mains electricity to low-voltage DC.
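The relation between the 3.4 eV band gap quoted above and photon wavelength follows from λ = hc/E; a quick sketch (the constant is the CODATA value of hc in eV·nm, the function name is illustrative):

```python
HC_EV_NM = 1239.841984  # Planck constant × speed of light, in eV·nm

def bandgap_wavelength_nm(gap_ev):
    """Photon wavelength (nm) corresponding to a band-gap energy (eV)."""
    return HC_EV_NM / gap_ev

# GaN's 3.4 eV gap sits at the near-ultraviolet edge of the visible range;
# the 405 nm violet laser emission comes from InGaN layers with a narrower gap.
print(round(bandgap_wavelength_nm(3.4), 1))  # 364.7
```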
Physical properties
GaN is a very hard (Knoop hardness 14.21 GPa), mechanically stable wide-bandgap semiconductor material with high heat capacity and thermal conductivity. In its pure form it resists cracking and can be deposited in thin film on sapphire or silicon carbide, despite the mismatch in their lattice constants. GaN can be doped with silicon (Si) or with oxygen to n-type and with magnesium (Mg) to p-type. However, the Si and Mg atoms change the way the GaN crystals grow, introducing tensile stresses and making them brittle. Gallium nitride compounds also tend to have a high dislocation density, on the order of 10^8 to 10^10 defects per square centimeter.
The U.S. Army Research Laboratory (ARL) provided the first measurement of the high field electron velocity in GaN in 1999. Scientists at ARL experimentally obtained a peak steady-state velocity of , with a transit time of 2.5 picoseconds, attained at an electric field of 225 kV/cm. With this information, the electron mobility was calculated, thus providing data for the design of GaN devices.
Developments
One of the earliest syntheses of gallium nitride was at the George Herbert Jones Laboratory in 1932.
An early synthesis of gallium nitride was by Robert Juza and Harry Hahn in 1938.
GaN with a high crystalline quality can be obtained by depositing a buffer layer at low temperatures. Such high-quality GaN led to the discovery of p-type GaN, p–n junction blue/UV-LEDs and room-temperature stimulated emission (essential for laser action). This has led to the commercialization of high-performance blue LEDs and long-lifetime violet laser diodes, and to the development of nitride-based devices such as UV detectors and high-speed field-effect transistors.
LEDs
High-brightness GaN light-emitting diodes (LEDs) completed the range of primary colors, and made possible applications such as daylight-visible full-color LED displays, white LEDs and blue laser devices. The first GaN-based high-brightness LEDs used a thin film of GaN deposited via metalorganic vapour-phase epitaxy (MOVPE) on sapphire. Other substrates used are zinc oxide, with lattice constant mismatch of only 2% and silicon carbide (SiC). Group III nitride semiconductors are, in general, recognized as one of the most promising semiconductor families for fabricating optical devices in the visible short-wavelength and UV region.
GaN transistors and power ICs
The very high breakdown voltages, high electron mobility, and high saturation velocity of GaN have made it an ideal candidate for high-power and high-temperature microwave applications, as evidenced by its high Johnson's figure of merit. Potential markets for high-power/high-frequency devices based on GaN include microwave radio-frequency power amplifiers (e.g., those used in high-speed wireless data transmission) and high-voltage switching devices for power grids. A potential mass-market application for GaN-based RF transistors is as the microwave source for microwave ovens, replacing the magnetrons currently used. The large band gap means that the performance of GaN transistors is maintained up to higher temperatures (~400 °C) than silicon transistors (~150 °C) because it lessens the effects of thermal generation of charge carriers that are inherent to any semiconductor. The first gallium nitride metal semiconductor field-effect transistors (GaN MESFET) were experimentally demonstrated in 1993 and they are being actively developed.
In 2010, the first enhancement-mode GaN transistors became generally available. Only n-channel transistors were available. These devices were designed to replace power MOSFETs in applications where switching speed or power conversion efficiency is critical. These transistors are built by growing a thin layer of GaN on top of a standard silicon wafer, often referred to as GaN-on-Si by manufacturers. This allows the FETs to maintain costs similar to silicon power MOSFETs but with the superior electrical performance of GaN. Another seemingly viable solution for realizing enhancement-mode GaN-channel HFETs is to employ a lattice-matched quaternary AlInGaN layer of acceptably low spontaneous polarization mismatch to GaN.
GaN power ICs monolithically integrate a GaN FET, GaN-based drive circuitry and circuit protection into a single surface-mount device. Integration means that the gate-drive loop has essentially zero impedance, which further improves efficiency by virtually eliminating FET turn-off losses. Academic studies into creating low-voltage GaN power ICs began at the Hong Kong University of Science and Technology (HKUST) and the first devices were demonstrated in 2015. Commercial GaN power IC production began in 2018.
CMOS logic
In 2016 the first GaN CMOS logic using PMOS and NMOS transistors was reported with gate lengths of 0.5 μm (gate widths of the PMOS and NMOS transistors were 500 μm and 50 μm, respectively).
Applications
LEDs and lasers
GaN-based violet laser diodes are used to read Blu-ray Discs. The mixture of GaN with In (InGaN) or Al (AlGaN) with a band gap dependent on the ratio of In or Al to GaN allows the manufacture of light-emitting diodes (LEDs) with colors that can go from red to ultra-violet.
Transistors and power ICs
GaN transistors are suitable for high frequency, high voltage, high temperature and high-efficiency applications. GaN is efficient at transferring current, and this ultimately means that less energy is lost to heat.
GaN high-electron-mobility transistors (HEMT) have been offered commercially since 2006, and have found immediate use in various wireless infrastructure applications due to their high efficiency and high voltage operation. A second generation of devices with shorter gate lengths will address higher-frequency telecom and aerospace applications.
GaN-based metal–oxide–semiconductor field-effect transistors (MOSFET) and metal–semiconductor field-effect transistor (MESFET) transistors also offer advantages including lower loss in high power electronics, especially in automotive and electric car applications. Since 2008 these can be formed on a silicon substrate. High-voltage (800 V) Schottky barrier diodes (SBDs) have also been made.
The higher efficiency and high power density of integrated GaN power ICs allows them to reduce the size, weight and component count of applications including mobile and laptop chargers, consumer electronics, computing equipment and electric vehicles.
GaN-based electronics (not pure GaN) have the potential to drastically cut energy consumption, not only in consumer applications but even for power transmission utilities.
Unlike silicon transistors that switch off due to power surges, GaN transistors are typically depletion mode devices (i.e. on / resistive when the gate-source voltage is zero). Several methods have been proposed to reach normally-off (or E-mode) operation, which is necessary for use in power electronics:
the implantation of fluorine ions under the gate (the negative charge of the F-ions favors the depletion of the channel)
the use of a MIS-type gate stack, with recess of the AlGaN
the integration of a cascaded pair constituted by a normally-on GaN transistor and a low voltage silicon MOSFET
the use of a p-type layer on top of the AlGaN/GaN heterojunction
Radars
GaN technology is also utilized in military electronics such as active electronically scanned array radars.
Thales Group introduced the Ground Master 400 radar in 2010 utilizing GaN technology. By 2021, Thales had put into operation more than 50,000 GaN transmitters on radar systems.
The U.S. Army funded Lockheed Martin to incorporate GaN active-device technology into the AN/TPQ-53 radar system to replace two medium-range radar systems, the AN/TPQ-36 and the AN/TPQ-37. The AN/TPQ-53 radar system was designed to detect, classify, track, and locate enemy indirect fire systems, as well as unmanned aerial systems. The AN/TPQ-53 radar system provided enhanced performance, greater mobility, increased reliability and supportability, lower life-cycle cost, and reduced crew size compared to the AN/TPQ-36 and the AN/TPQ-37 systems.
Lockheed Martin fielded other tactical operational radars with GaN technology in 2018, including the TPS-77 Multi Role Radar System deployed to Latvia and Romania. In 2019, Lockheed Martin's partner ELTA Systems Limited developed a GaN-based ELM-2084 Multi Mission Radar that was able to detect and track aircraft and ballistic targets, while providing fire control guidance for missile interception or air defense artillery.
On April 8, 2020, Saab flight tested its new GaN-designed AESA X-band radar in a JAS-39 Gripen fighter. Saab already offers products with GaN-based radars, like the Giraffe radar, Erieye, GlobalEye, and Arexis EW. Saab also delivers major subsystems, assemblies and software for the AN/TPS-80 (G/ATOR).
India's Defence Research and Development Organisation is developing Virupaakhsha radar for Sukhoi Su-30MKI based on GaN technology. The radar is a further development of Uttam AESA Radar for use on HAL Tejas which employs GaAs technology.
Nanoscale
GaN nanotubes and nanowires are proposed for applications in nanoscale electronics, optoelectronics and biochemical-sensing applications.
Spintronics potential
When doped with a suitable transition metal such as manganese, GaN is a promising spintronics material (magnetic semiconductors).
Synthesis
Bulk substrates
GaN crystals can be grown from a molten Na/Ga melt held under 100 atmospheres of pressure of N2 at 750 °C. As Ga will not react with N2 below 1000 °C, the powder must be made from something more reactive, usually in one of the following ways:
2 Ga + 2 NH3 → 2 GaN + 3 H2
Ga2O3 + 2 NH3 → 2 GaN + 3 H2O
Gallium nitride can also be synthesized by injecting ammonia gas into molten gallium at normal atmospheric pressure.
Metal-organic vapour phase epitaxy
Blue, white and ultraviolet LEDs are grown on industrial scale by metalorganic vapour-phase epitaxy (MOVPE). The precursors are ammonia with either trimethylgallium or triethylgallium, the carrier gas being nitrogen or hydrogen. Growth temperature ranges between . Introduction of trimethylaluminium and/or trimethylindium is necessary for growing quantum wells and other kinds of heterostructures.
Molecular beam epitaxy
Commercially, GaN crystals can be grown using molecular beam epitaxy or MOVPE. This process can be further modified to reduce dislocation densities. First, an ion beam is applied to the growth surface in order to create nanoscale roughness. Then, the surface is polished. This process takes place in a vacuum. Polishing methods typically employ a liquid electrolyte and UV irradiation to enable mechanical removal of a thin oxide layer from the wafer. More recent methods have been developed that utilize solid-state polymer electrolytes that are solvent-free and require no radiation before polishing.
Safety
GaN dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of gallium nitride sources (such as trimethylgallium and ammonia) and industrial hygiene monitoring studies of MOVPE sources have been reported in a 2004 review.
Bulk GaN is non-toxic and biocompatible. Therefore, it may be used in the electrodes and electronics of implants in living organisms.
See also
Schottky diode
Semiconductor devices
Molecular-beam epitaxy
Epitaxy
Lithium-ion battery
References
External links
Ioffe data archive
Nitrides
Gallium compounds
Inorganic compounds
III-V semiconductors
Wurtzite structure type
https://en.wikipedia.org/wiki/Normet

Normet is a private Finland-headquartered, global technology company. The company manufactures underground equipment and applications and provides aftermarket services, construction chemicals, and rock support products and expertise for customer processes in underground mining and tunnelling as well as for civil construction industries. Normet employs over 1,800 professionals and operates globally in over 50 locations and 30 countries.
History
Peltosalmi Machine Workshop (1962–1972)
Normet's roots trace back to 1962, when the brothers Jaakko Sarvela, a farmer, and Jussi Sarvela, an agronomist, founded a machine workshop on their farm after transitioning to livestock-free agriculture. In the farm's old barn, they began manufacturing earthmoving equipment suitable for tractors with a workforce of two employees. Within the first three years of operation, the company's workforce exceeded fifty, and the product range shifted primarily to the manufacturing of forestry machinery. Export of products also began.
The company's flagship product was a tractor loader. Other products from the early days of the workshop included forestry machines such as a skidding winch and Finland's first three-axle forestry tractor. The Sarvela brothers received a national entrepreneur award in 1969.
Normet Oy (1972–2004)
In 1972, the Sarvelas sold their company to Orion Corporation, which acquired the machine workshop employing 180 people. The background of the deal was Orion's fear that the state might socialise the company. By venturing into the machine workshop industry, it believed it would save at least one business area if the state were to take over medicine manufacturing. With the acquisition, the name was changed to Normet Oy, and the company began exporting mining machines to Sweden.
In early 1999, Orion sold its subsidiary Normet to Sitra's Fenno Management. The company employed approximately 310 people.
In 2004, Normet's net sales were 44 million euros.
Normet Group (2005–)
In 2005, the entire share capital of Normet was sold to a company named Normet Group. The owners of Normet Group were Cantell Oy (70 percent stake), Anssi Soila, and Normet's management (30 percent combined). Cantell Oy was owned by board professional and investor Aaro Cantell. The group employed about 210 people at this time.
In 2008, Normet had 17 offices in nine countries. A new office was planned in Mongolia, where the world's largest copper deposit had been found. The largest delivery of the year consisted of over 60 mining machines exported to Russia for Norilsk Nickel. The deal was worth over 20 million euros.
In 2009, Normet's net sales exceeded 90 million euros.
In 2010, Normet manufactured heavy mining equipment in Iisalmi and Santiago de Chile, and the group employed about 450 professionals in 23 countries. Also in 2010, Normet expanded its business into construction chemicals by acquiring 40 percent of TAM International. TAM International employed over 70 people, and its net sales were 9 million euros. The group's net sales were over 100 million euros, and it had over 600 employees.
In 2011, Normet acquired Swedish company Essverk Berg AB, which manufactured and serviced mining and tunnel construction equipment. The company had 25 employees in Ludvika. Normet's net sales were 170 million euros.
In early 2012, Normet had 700 employees, about half of whom were in Iisalmi. Large mining and tunnelling machines were assembled at the Iisalmi equipment manufacturing site. In 2012 Normet acquired 60 percent of its associate company TAM Group, which then became fully owned by Normet. Net sales increased that year by nearly 40 percent to 235 million euros, and its operating profit by 68 percent. New subsidiaries started operating in Mexico, Norway, and South Africa. The company's main products were solutions for charging and blasting, drilling processes, and ground support in tunnel construction and underground mining. It also developed, manufactured, and marketed heavy machinery and construction chemicals.
In 2016, Normet acquired the concrete spraying business from Swedish company Atlas Copco.
In 2019, Normet commenced the development of battery powered Normet SmartDrive® vehicles to drive electrification in the industry. The first vehicle to enter the market was an electric battery powered concrete spraying version. SmartDrive vehicles today also include charging, lifting, ground support, and material transportation variants.
In 2019, Ed Santamaria took over as President and CEO, succeeding Robin Lindahl.
In 2022, Normet acquired the Australian company Garock, a leading manufacturer of rock reinforcement and ground support systems for the mining industry. The acquisition of Aliva Equipment, a manufacturer of equipment and accessories for the application of sprayed concrete, also took place in 2022.
In 2022, Normet opened a global technology and production hub in Jaipur, India. The new facility houses an equipment production facility, a service rebuild centre, an R&D centre, a remote monitoring centre, and a technology training centre for operations and maintenance.
In 2023, Normet acquired the Finland-based boom systems manufacturer Rambooms and hydraulic attachments supplier Marakon.
In 2023, Normet also acquired Finland-based Remion Ltd, which designs and implements industrial internet service solutions.
In 2023, Normet became a minority shareholder of Lekatech, a Finnish start-up focused on revolutionising electric hammering technology, and the Swiss company Motics, a specialist in electric drive and battery storage solutions for off-highway applications.
Accolades
Ernst & Young awarded Aaro Cantell the Entrepreneur of the Year award in 2008. In 2011, Normet Group was awarded the President of the Republic's Internationalization Award.
References
Technology companies established in 1962
Mining engineering companies
Engineering companies of Finland
https://en.wikipedia.org/wiki/Christian%20Hackenberger

Christian P. R. Hackenberger (b. Osnabrück, 1976) is a German chemist. He is a professor of Chemical Biology at the Humboldt University of Berlin and heads the research unit Biomolecule Modification and Delivery at the Leibniz Research Institute for Molecular Pharmacology. He is a co-founder of the Munich-based biotech company Tubulis.
Life and education
Christian Hackenberger grew up in Damme. He attended the Gymnasium Damme, where he obtained his Abitur in 1995. After completing his civil service, he studied chemistry at the Albert Ludwig University of Freiburg (1996–1998) and the University of Wisconsin–Madison (M.S. with Samuel H. Gellman, 1998–1999), with support from the German Academic Scholarship Foundation. He pursued his doctoral studies at RWTH Aachen (2000–2003), where he worked under Prof. Carsten Bolm as a Kekulé Fellow of the Fonds der Chemischen Industrie. During this time, he also worked as an editorial assistant in scientific journalism for the WDR broadcast "Quarks & Co". From 2003 to 2005 he was a DAAD Postdoctoral Fellow at the Massachusetts Institute of Technology under Prof. Barbara Imperiali.
In 2005, Hackenberger founded his own research group at the Free University of Berlin as an Emmy Noether Fellow. In 2011, he was appointed W2 Professor of Bioorganic Chemistry at the Free University of Berlin as the first Plus 3 awardee of the Boehringer Ingelheim Foundation. In 2012, he became Leibniz-Humboldt Professor for Chemical Biology at the Leibniz Research Institute for Molecular Pharmacology and the Humboldt University of Berlin. In 2020, Hackenberger co-founded the Munich-based biotechnology company Tubulis, which specializes in developing antibody-drug conjugates. He has served as associate editor of the Royal Society of Chemistry's journals Organic and Biomolecular Chemistry (2015–2023) and Chemical Science (since 2024). Hackenberger lives in Berlin and is married to the art historian Michel Otayek.
Research
Hackenberger's research at the Leibniz Research Institute focuses on chemical strategies to functionalize proteins and antibodies using highly selective chemical reactions to generate protein-based therapeutics against cancer, Alzheimer's and viral infections. A particular focus of Hackenberger's research group is the engineering of new reactions for the modification and cellular delivery of proteins and antibodies to advance their use in biological and pharmacological research.
Awards
Hackenberger has received numerous awards for his work, including the Heinz Maier-Leibnitz Prize of the German Research Foundation (2011), the ORCHEM Prize of the German Chemical Society (2012), the Zervas Award of the European Peptide Society (2018), the Breakthrough of the Year Award in the life sciences from the Falling Walls Foundation (2020), the Astra-Zeneca Award of the Royal Society of Chemistry (2023), the Xiaoyu Hu Memorial Award of the Chinese Peptide Society (2023), and the Max Bergmann Medal (2024).
External links
The Hackenberger Group (Homepage)
The Hackenberger Group (X Profile)
BR Campus Talk: Wie kommen Medikamente im Körper sicher ans Ziel?
References
1976 births
Living people
21st-century German chemists
People from Damme
Humboldt University of Berlin
Chemical biologists
Free University of Berlin
https://en.wikipedia.org/wiki/Proxyphylline

Proxyphylline is a xanthine derivative that acts as a cardiac stimulant, vasodilator and bronchodilator.
References
Adenosine receptor antagonists
Phosphodiesterase inhibitors
Xanthines
Neotoma Paleoecology Database

The Neotoma Paleoecology Database (Neotoma) is an open international data resource that stores and shares multiple kinds of fossil, paleoecological, and paleoenvironmental data. Neotoma specializes in fossil data holdings at timescales covering the last several decades to the last several million years. Neotoma is organized and led by scientists and enhances data consistency through community curation by experts. Neotoma data are open to all and available to anyone with an internet connection.
Neotoma data are used by scientists and teachers (especially paleoecologists, biogeographers, and archaeologists) to study the responses of species and ecosystems to past environmental change and growing human activity. Paleoclimatologists use Neotoma data to help reconstruct past climates. Sample research questions addressed include: 1) How sensitive are ecosystems to past climate change? 2) Why were rates of tree range expansion so fast after the end of the last ice age, given that tree seed dispersal distances are usually so short (Reid's Paradox)? 3) Where and when did humans begin transforming ecosystems? 4) What were the causes and consequences of the widespread extinctions of large animals over the last 50,000 years? 5) Which ecosystems are characterized by abrupt change between alternate stable states and what triggers these abrupt changes? 6) How have freshwater resources and aquatic ecosystems been affected by human land use and activity over the last several decades?
Data types and data volume
The species and taxa stored in Neotoma represent a breadth of terrestrial and aquatic organisms: plants (pollen and larger fossils), mammals and other vertebrates, insects and other invertebrates, diatoms, ostracodes, and testate amoebae. Neotoma also stores the age estimates provided by radiometric dating (e.g. radiocarbon, lead-210) and the age estimates that are derived from statistical models of age as a function of depth in sediment column. The Neotoma data model is extensible to other types of paleoecological and paleoenvironmental variables.
Data volume in Neotoma is growing rapidly, as are the data holdings in other paleontological and contemporary databases. As of May 2020, Neotoma held 7 million individual observations from over 38,700 datasets, 18,600 sites, 7,000 scientific papers, 6,000 authors, and 100 countries [1]. For comparison, on November 8, 2017, Neotoma held 3.8 million observations, from 17,275 datasets and 9,269 sites.
History
The intellectual foundations of Neotoma trace back to efforts by early paleontologists and paleoecologists in the first half of the 20th century to assemble many individual records into larger mapped syntheses. As von Post wrote, paleoecologists must "think horizontally, work vertically," i.e. think across both time and space to understand the processes governing the ever-changing distribution of species, the associations among species, and the diversity of life.
These efforts accelerated in the 1970s and 1980s, when a number of scientific teams began assembling databases of fossil distributions to study the spatial distributions of species over space and time and the effects of past environmental variations on these distributions. These efforts were powered by advances in computing capabilities and the growing availability of radiocarbon and other radiometric dates to provide a common time framework for all fossil occurrences. Much of this work focused on environmental and ecological changes accompanying the glacial-interglacial cycles of the Quaternary. These databases were used both by paleoclimatologists to draw inferences about past climates that could be used to test the paleoclimatic simulations of earth system models, and by paleoecologists interested in how past community dynamics were driven by these environmental changes. For example, Margaret Davis demonstrated tree species experienced large range shifts with the climate changes at the end of the last ice age and that species responded individualistically. As a result, many past communities were 'no analog,' i.e. their mixtures of species lack any close counterpart in modern communities. Some records and Constituent Databases in Neotoma extend deeper into the Cenozoic.
In parallel, other research teams were gathering fossil records from high-resolution sediment archives spanning the last few decades to centuries to study the effects of human activities upon communities and ecosystems. Examples include the effects of acid rain on ecosystems in the 1980s, or the eutrophication of many lake ecosystems due to increasing nutrient runoff into lakes and streams.
Many of these initial data-gathering efforts were led by individual pioneers (e.g. Margaret Davis, Tom Webb, Russ Graham, Bjorn Berglund, Jacques-Louis Beaulieu) or by small research teams. As these efforts have matured and as the amount of data has grown, the volume and complexity of paleoecological data is now beyond the capacity of any single individual expert to manage or curate. At the same time, many smaller paleontological and paleoecological databases have been unable to keep up with current advances in informatics, or have gone offline as funding lapsed or lead investigators retired or moved on.
Hence, the fields of paleoecology and paleontology have developed data governance models based on community curation, in which data resources like Neotoma are managed by communities of scientists working together to curate and share their data. Neotoma follows a model of centralized informatics but distributed scientific governance, and is best viewed as a coalition of Constituent Databases that share a common set of database and software resources, while retaining separate rights to govern and curate the data in their Data Stewards' domains of expertise. For example, the European Pollen Database uses the Neotoma data model and software services, but is governed by its own board and community of expert data stewards.
Neotoma works closely with the Paleobiology Database, which has a similar intellectual history, but has focused on the entire history of life, at timescales of millions to hundreds of millions of years. Together, Neotoma and the Paleobiology Database have helped launch the EarthLife Consortium, a non-profit umbrella organization to support the easy and free sharing of paleoecological and paleobiological data.
Data curation and governance
Neotoma employs a model of distributed data curation and governance. In this model, Neotoma data are curated and governed by a community of Data Stewards, organized into Constituent Databases. These Constituent Databases can be organized by region, time, or taxonomic group. For example, FAUNMAP is a Constituent Database in Neotoma that manages Quaternary fossil vertebrate records in North America, while MioMap primarily emphasizes Miocene vertebrate records. For pollen data, Constituent Databases are organized geographically and include the European Pollen Database, the North American Pollen Database, and the Latin American Pollen Database. Other major Constituent Databases include the Testate Amoebae Database, the International Ostracode Database, and the Diatom Paleoecology Data Cooperative. All data in Neotoma are uploaded and curated by Data Stewards associated with one or more Constituent Databases. This model of distributed community curation is essential to ensuring data quality and consistency.
Neotoma is led by a Neotoma Leadership Council (NLC) comprising 14 elected councilors, of which 2 seats are reserved for early career scientists (Bylaws). Elections are held annually, with roughly one-third of the NLC elected each cycle.
Neotoma is a recommended data facility for the Earth Sciences Division of the National Science Foundation, Past Global Changes, and the American Quaternary Association. Neotoma is a member of the ICSU World Data System and is registered with COPDESS registry for scientific data sources adhering to FAIR (Findable, Accessible, Interoperable, Reproducible) principles. Neotoma has been supported by multiple sources, including the National Science Foundation and the Belmont Forum.
Data use and access
Use of data in Neotoma is governed by a Creative Commons NC-BY license, which permits unrestricted use as long as data sources are properly acknowledged and cited (Neotoma Data Use Policy). Proper full citation of data in Neotoma occurs at three levels: Neotoma itself, the governing Constituent Database(s), and the original authors.
Data can be retrieved from Neotoma in several ways. Neotoma Explorer is a map-based interface designed for quick-look searches and first-pass data explorations. Explorer is well suited for researchers interested in quick-look searches and data views and for explorations by high school and college-level teachers and students. Teaching exercises using Neotoma Explorer have been prepared and hosted by the Science and Education Research Center (SERC) at Carleton College. An R package (neotoma) supports exporting of data from Neotoma into the R programmatic environment. Application Programmatic Interfaces (APIs) support access to Neotoma data by third-party software developers. Resources using Neotoma data include the Flyover Country app for travelers and the Global Pollen Project.
References
External links
Neotoma Paleoecology Database
Biological databases
Paleontology websites
Cliftonite

Cliftonite is a natural form of graphite that occurs as small octahedral inclusions in iron-containing meteorites. It typically accompanies kamacite, and more rarely schreibersite, cohenite or plessite.
Cliftonite was first considered to be a new form of carbon, then a pseudomorph of graphite after diamond, and finally reassigned to a pseudomorph of graphite after kamacite. Cliftonite is typically observed in minerals that experienced high pressures. It can also be synthesized by annealing an Fe-Ni-C alloy at ambient pressure for several hundred hours. The annealing is carried out in two stages: first a mixture of cohenite and kamacite is formed in air at ca. 950 °C; it is then partly converted to cliftonite in vacuum at ca. 550 °C.
The Campo del Cielo region of Argentina is noted for a crater field containing a group of distinctive iron meteorites.
References
Native element minerals
Allotropes of carbon
Hexagonal minerals
Minerals in space group 194
Conservative system

In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited. Alternately, conservative systems are those to which the Poincaré recurrence theorem applies. An important special case of conservative systems are the measure-preserving dynamical systems.
Informal introduction
Informally, dynamical systems describe the time evolution of the phase space of some mechanical system. Commonly, such evolution is given by some differential equations, or quite often in terms of discrete time steps. However, in the present case, instead of focusing on the time evolution of discrete points, one shifts attention to the time evolution of collections of points. One such example would be Saturn's rings: rather than tracking the time evolution of individual grains of sand in the rings, one is instead interested in the time evolution of the density of the rings: how the density thins out, spreads, or becomes concentrated. Over short time-scales (hundreds of thousands of years), Saturn's rings are stable, and are thus a reasonable example of a conservative system and more precisely, a measure-preserving dynamical system. It is measure-preserving, as the number of particles in the rings does not change, and, per Newtonian orbital mechanics, the phase space is incompressible: it can be stretched or squeezed, but not shrunk (this is the content of Liouville's theorem).
Formal definition
Formally, a measurable dynamical system is conservative if and only if it is non-singular, and has no wandering sets of positive measure.
A measurable dynamical system (X, Σ, μ, τ) is a Borel space (X, Σ) equipped with a sigma-finite measure μ and a transformation τ. Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space. μ is a sigma-finite measure on the sigma-algebra. The space X is the phase space of the dynamical system.
A transformation (a map) τ : X → X is said to be Σ-measurable if and only if, for every σ ∈ Σ, one has τ⁻¹(σ) ∈ Σ. The transformation is a single "time-step" in the evolution of the dynamical system. One is interested in invertible transformations, so that the current state of the dynamical system came from a well-defined past state.
A measurable transformation τ is called non-singular when μ(τ⁻¹(σ)) = 0 if and only if μ(σ) = 0. In this case, the system (X, Σ, μ, τ) is called a non-singular dynamical system. The condition of being non-singular is necessary for a dynamical system to be suitable for modeling (non-equilibrium) systems. That is, if a certain configuration of the system is "impossible" (i.e. μ(σ) = 0) then it must stay "impossible" (was always impossible: μ(τ⁻¹(σ)) = 0), but otherwise, the system can evolve arbitrarily. Non-singular systems preserve the negligible sets, but are not required to preserve any other class of sets. The sense of the word singular here is the same as in the definition of a singular measure, in that no portion of μ ∘ τ⁻¹ is singular with respect to μ and vice versa.
A non-singular dynamical system for which μ(τ⁻¹(σ)) = μ(σ) for all σ ∈ Σ is called invariant, or, more commonly, a measure-preserving dynamical system.
A non-singular dynamical system is conservative if, for every set σ of positive measure and for every n ∈ ℕ, one has some integer p > n such that μ(σ ∩ τ⁻ᵖ(σ)) > 0. Informally, this can be interpreted as saying that the current state of the system revisits or comes arbitrarily close to a prior state; see Poincaré recurrence for more.
A non-singular transformation τ is incompressible if, whenever one has τ⁻¹(σ) ⊆ σ, then μ(σ ∖ τ⁻¹(σ)) = 0.
Properties
For a non-singular transformation τ, the following statements are equivalent:
τ is conservative.
τ is incompressible.
Every wandering set of τ is null.
For all sets σ of positive measure, μ(σ ∖ ⋃n≥1 τ⁻ⁿ(σ)) = 0.
The above implies that, if μ(X) < ∞ and τ is measure-preserving, then the dynamical system is conservative. This is effectively the modern statement of the Poincaré recurrence theorem. A sketch of a proof of the equivalence of these four properties is given in the article on the Hopf decomposition.
Suppose that μ(X) < ∞ and τ is measure-preserving. Let σ be a wandering set of τ. By definition of wandering sets and since τ preserves μ, X would thus contain a countably infinite union of pairwise disjoint sets that have the same μ-measure as σ. Since it was assumed μ(X) < ∞, it follows that σ is a null set, and so all wandering sets must be null sets.
This argumentation fails for even the simplest examples if μ(X) = ∞. Indeed, consider for instance (ℝ, B(ℝ), λ, τ), where λ denotes the Lebesgue measure, and consider the shift operator τ(x) = x + 1. Since the Lebesgue measure is translation-invariant, τ is measure-preserving. However, τ is not conservative. In fact, every interval of length strictly less than 1 contained in ℝ is wandering. In particular, ℝ can be written as a countable union of wandering sets.
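The contrast between the two cases can be made concrete numerically. The sketch below (an illustration of ours, not from the article; all names and parameter values are arbitrary choices) compares an irrational rotation of the circle, a measure-preserving map on a finite measure space and hence conservative, with the shift on the real line, which preserves the infinite Lebesgue measure but never returns:

```python
import math

# Illustration: Poincaré recurrence on a finite measure space vs. a
# wandering orbit on an infinite one. All names here are ours.

ALPHA = math.sqrt(2) - 1  # an irrational rotation angle

def rotation(x):
    """Rotation on the circle [0, 1); preserves the finite Lebesgue measure."""
    return (x + ALPHA) % 1.0

def shift(x):
    """Shift on the real line; preserves the (infinite) Lebesgue measure."""
    return x + 1.0

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def line_dist(a, b):
    return abs(a - b)

def first_return_time(tau, dist, x0, radius=0.05, max_steps=10_000):
    """Smallest n >= 1 with dist(tau^n(x0), x0) < radius, or None if not found."""
    x = x0
    for n in range(1, max_steps + 1):
        x = tau(x)
        if dist(x, x0) < radius:
            return n
    return None

# The conservative rotation returns near its starting point (a finite n):
print(first_return_time(rotation, circle_dist, 0.3))
# The shift never does — every short interval is a wandering set:
print(first_return_time(shift, line_dist, 0.3))  # None
```

The rotation's orbit re-enters every neighbourhood of its start infinitely often, as the Poincaré recurrence theorem guarantees; widening `max_steps` for the shift changes nothing, since its orbit moves away monotonically.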
Hopf decomposition
The Hopf decomposition states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and a wandering (dissipative) set. A commonplace informal example of Hopf decomposition is the mixing of two liquids (some textbooks mention rum and coke): The initial state, where the two liquids are not yet mixed, can never recur again after mixing; it is part of the dissipative set. Likewise any of the partially-mixed states. The result, after mixing (a cuba libre, in the canonical example), is stable, and forms the conservative set; further mixing does not alter it. In this example, the conservative set is also ergodic: if one added one more drop of liquid (say, lemon juice), it would not stay in one place, but would come to mix in everywhere. One word of caution about this example: although mixing systems are ergodic, ergodic systems are not in general mixing systems! Mixing implies an interaction which may not exist. The canonical example of an ergodic system that does not mix is the Bernoulli process: it is the set of all possible infinite sequences of coin flips (equivalently, the set of infinite strings of zeros and ones); each individual coin flip is independent of the others.
Ergodic decomposition
The ergodic decomposition theorem states, roughly, that every conservative system can be split up into components, each component of which is individually ergodic. An informal example of this would be a tub, with a divider down the middle, with liquids filling each compartment. The liquid on one side can clearly mix with itself, and so can the other, but, due to the partition, the two sides cannot interact. Clearly, this can be treated as two independent systems; leakage between the two sides, of measure zero, can be ignored. The ergodic decomposition theorem states that all conservative systems can be split into such independent parts, and that this splitting is unique (up to differences of measure zero). Thus, by convention, the study of conservative systems becomes the study of their ergodic components.
Formally, every ergodic system is conservative. Recall that an invariant set σ ∈ Σ is one for which τ(σ) = σ. For an ergodic system, the only invariant sets are those with measure zero or with full measure (are null or are conull); that they are conservative then follows trivially from this.
When τ is ergodic, the following statements are equivalent:
τ is conservative and ergodic
For all measurable sets σ of positive measure, μ(X ∖ ⋃n≥1 τ⁻ⁿ(σ)) = 0; that is, σ "sweeps out" all of X.
For all sets σ of positive measure, and for almost every x ∈ σ, there exists a positive integer n such that τⁿ(x) ∈ σ.
For all sets σ and ρ of positive measure, there exists a positive integer n such that μ(ρ ∩ τ⁻ⁿ(σ)) > 0
If τ(σ) ⊆ σ, then either σ has zero measure, μ(σ) = 0, or the complement has zero measure: μ(X ∖ σ) = 0.
See also
KMS state, a description of thermodynamic equilibrium in quantum mechanical systems; dual to modular theories for von Neumann algebras.
Notes
References
Further reading
Ergodic theory
Dynamical systems
Trigenus

In low-dimensional topology, the trigenus of a closed 3-manifold M is an invariant consisting of an ordered triple (g₁, g₂, g₃). It is obtained by minimizing the genera of three orientable handlebodies — with no intersection between their interiors — which decompose the manifold, whereas the Heegaard genus needs only two.
That is, a decomposition M = V₁ ∪ V₂ ∪ V₃ with int(Vᵢ) ∩ int(Vⱼ) = ∅ for i ≠ j, and gᵢ being the genus of Vᵢ.
For orientable spaces, trig(M) = (0, 0, h), where h is M's Heegaard genus.
For non-orientable spaces the trig has the form trig(M) = (0, g₂, g₃) or (1, g₂, g₃), depending on the image of the first Stiefel–Whitney characteristic class w₁ under a Bockstein homomorphism β, respectively for β(w₁) = 0 or β(w₁) ≠ 0.
It has been proved that the number g₃ has a relation with the concept of Stiefel–Whitney surface, that is, an orientable surface G which is embedded in M, has minimal genus and represents the first Stiefel–Whitney class under the duality map D: H¹(M; ℤ₂) → H₂(M; ℤ₂), that is, D(w₁(M)) = [G]. If β(w₁) = 0 then trig(M) = (0, g₂, g₃), and if β(w₁) ≠ 0 then trig(M) = (1, g₂, g₃).
Theorem
A manifold S is a Stiefel–Whitney surface in M, if and only if S and M−int(N(S)) are orientable.
References
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel–Whitney surfaces and decompositions of 3-manifolds into handlebodies, Topology Appl. 60 (1994), 267–280.
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel–Whitney surfaces and the trigenus of non-orientable 3-manifolds, Manuscripta Math. 100 (1999), 405–422.
"On the trigenus of surface bundles over ", 2005, Soc. Mat. Mex. | pdf
Geometric topology
3-manifolds
Victor Bangert

Victor Bangert (born 28 November 1950) is Professor of Mathematics at the Mathematisches Institut in Freiburg, Germany. His main interests are differential geometry and dynamical systems theory. He specialises in the theory of closed geodesics, wherein one of his significant results, combined with another one due to John Franks, implies that every Riemannian 2-sphere possesses infinitely many closed geodesics. He also made important contributions to Aubry–Mather theory.
He obtained his Ph.D. from Universität Dortmund in 1977 under the supervision of Rolf Wilhelm Walter, with the thesis Konvexität in riemannschen Mannigfaltigkeiten.
He served on the editorial board of "manuscripta mathematica" from 1996 to 2017.
Bangert was an invited speaker at the 1994 International Congress of Mathematicians in Zürich.
Selected publications
References
External links
Personal webpage
Oberwolfach photos of Victor Bangert
20th-century German mathematicians
21st-century German mathematicians
1950 births
Living people
German geometers
Dynamical systems theorists
German systems scientists
Scientists from Osnabrück
MAPK networks

Mitogen-activated protein kinase (MAPK) networks are the pathways and signaling of MAPKs, protein kinases that phosphorylate the amino acids serine and threonine. MAPK pathways have both positive and negative regulatory roles in plants. Positive regulation of MAPK networks helps the plant cope with stresses from the environment. Negative regulation of MAPK networks pertains to high quantities of reactive oxygen species (ROS) in the plant.
MAPK networks
Mitogen-activated protein kinase (MAPK) networks can be found in eukaryotic cells. MAPK pathways in plants are known to regulate cell growth, cell development, cell death, and cell responses to environmental stimuli. Only a few of the MAPK mechanism components are known and have been studied. Components such as the Arabidopsis MAPKKKs YODA, ANP2/ANP3, and MP3K6/MP3K7 function in cell development. MEKK1 and ANP1 function in the response to environmental stress. Unfortunately, only eight out of the twenty mitogen-activated protein kinases have been studied. The most commonly studied MAPKs are MPK3, MPK4, and MPK6, which are activated by a diversity of stimuli including abiotic stresses, pathogens, and oxidative stressors. MPK4 negatively regulates biotic stress signaling, while MPK3 and MPK6 function as positive mediators of defense responses. These positive and negative mediators are required for normal plant growth and development, as shown by the severely dwarfed phenotype of mpk4 mutants and the embryo-lethal phenotype of mpk3 and mpk6 mutants.
Positive regulation pathways in plants
Plants have many protection mechanisms to cope with stresses from the environment, which include ultraviolet light, cold or hot weather, windy days, and mechanical wounding. There are multiple pathways, but one that plants have developed is a self-defense mechanism that recognizes pathogens through pathogen-associated molecular patterns (PAMPs) via cell surface-located pathogen-recognition receptors. These receptors induce intracellular signal pathways within the plant cells, resulting in PAMP-triggered immunity. Responses to PAMPs are broad rather than pathogen-specific. This immunity requires downstream components via the MAPK cascade to activate the MAP kinases. Flg22, a peptide of flagellin, triggers a rapid and strong activation of MPK3, MPK4, and MPK6. MPK4 and MPK6 can be activated by harpin proteins. MPK3 and MPK6 are very similar proteins and function as regulators in abscission, stomatal development, signaling of various abiotic stresses, and defense responses to certain pathogens. Experimentation has proposed that the MAPK module MEKK1-MKK4/MKK5-MPK3/MPK6 may be responsible for flg22 signal transmission. All of the proposed modules appear to be correct except for MEKK1, because plants with mutated mekk1 have compromised flg22-triggered activation of MPK4, yet normal activation of MPK3 and MPK6. Data have shown that a MAPK cascade composed of MKK4/MKK5 and MPK3/MPK6 acts in the response to fungal pathogens. The observation shows that the activation of MPK3/MPK6 in conditional gain-of-function plants for MKK4/MKK5 or MEKK1/MKKKa is sufficient to induce camalexin, which is a major phytoalexin in Arabidopsis. Stomata are considered an entry point for pathogenic invaders, because microbial invaders enter the plant at the stomata. A recent study has shown that MAPK cascades play a role in abiotic and biotic stress responses. The main pathways in stomatal development and dynamics are MPK3 and MPK6.
During a drought, the stomata close; this is believed to be mediated by the phytohormone abscisic acid (ABA) and to involve MKK1, MPK3, and MPK6. Stomata also close through a process called pathogen-induced closure, which is an innate response of the plant. Xanthomonas campestris pv. campestris (Xcc) excretes a chemical that reverts stomatal closure caused by bacteria and ABA. Most stomata close in the presence of ABA, but some are unresponsive to bacteria. In Arabidopsis, Xcc does not revert bacteria-induced or ABA-induced stomatal closure. Scientists are not certain whether MAPK cascades are responsible for this signaling, so further investigation is needed.
Negative regulation
The identification of MEKK1-MKK1/2-MPK4 in pathogen signaling has been a tremendous finding. Mekk1, mkk1/mkk2 double and mpk4 mutants are dwarfed and accumulate excess reactive oxygen species. These phenotypes are attributed to enhanced SA levels and are partially reversed by a bacterial SA hydroxylase. Mekk1, mkk1/mkk2 double and mpk4 mutants show spontaneous cell death, expression of pathogenesis-related genes, and increased resistance to pathogens. MEKK1 also appears to act through deregulation pathways that are unknown and involve neither MKK1/MKK2 nor MPK4. MEKK1 interacts with WRKY53, which is responsible for part of the mekk1-dependent gene set, and alters the activity of WRKY53 in a short branch of MAPK signaling. Substrates of MPK4 include three proteins: WRKY33, WRKY25, and MKS1. Ternary MKS1-MPK4-WRKY33 complexes have been identified in nuclear extracts. Recruitment of WRKY33 depends on the phosphorylation of MPK4. Once activated, MPK4 phosphorylates MKS1, which releases WRKY33 from the ternary complex. The free WRKY33 is believed to induce transcription of target genes, allowing for negative regulation by MPK4. Pathogens have developed mechanisms that inactivate PAMP-induced signaling pathways through the MAPK networks. Andrea Pitzschke and her colleagues write: “AvrPto and AvrPtoB interact with the FLS2 receptor and its co-receptor BAK1. AvrPtoB catalyzes the polyubiquitination and subsequent proteasome-dependent degradation of FLS2” (Pitzschke 3). AvrPto interacts with BAK1 and interrupts the binding of FLS2. Pseudomonas syringae carries HopAI1, a phosphothreonine lyase that dephosphorylates the threonine residue of activated MAP kinases. HopAI1 interacts with MPK3 and MPK6, preventing their flg22-triggered activation. Certain soil-borne pathogens carry flagellin variants that cannot be detected by FLS2, but an innate immune response is still triggered. This immune response has been shown to be elicited by the EF-Tu protein.
The receptors for flg22 and elf18, FLS2 and EFR, belong to the same subfamily of LRR-RLKs, LRR XII. This means that elf18 and flg22 induce extracellular alkalization, rapid activation of MAPKs, and gene responses that are similar to each other. Although there appears to be an important relationship between MAPKs and EF-Tu-triggered defense, the evidence remains unclear. One reason for this is Agrobacterium tumefaciens, which transfers segments of its DNA into the plant genome. Efr1 mutants do not recognize EF-Tu, but there is no research on MAPK activities in efr1. Initiation of defense signaling can be a positive effect for plant pathogens, because activation of MPK3 in response to flg22 causes phosphorylation and translocation of VirE2 interacting protein 1 (VIP1). VIP1 serves as a shuttle for the pathogenic T-DNA, but the induction of defense genes can occur as well. This allows for the spreading and cessation of the pathogen in the plant, but the pathogen can overcome the problem by targeting VIP1 for proteasome degradation by VirF, a virulence factor that encodes an F-box protein.
References
Signal transduction
Spectral line shape

Spectral line shape or spectral line profile describes the form of an electromagnetic spectrum in the vicinity of a spectral line – a region of stronger or weaker intensity in the spectrum. Ideal line shapes include Lorentzian, Gaussian and Voigt functions, whose parameters are the line position, maximum height and half-width. Actual line shapes are determined principally by Doppler, collision and proximity broadening. For each system the half-width of the shape function varies with temperature, pressure (or concentration) and phase. A knowledge of shape function is needed for spectroscopic curve fitting and deconvolution.
Origins
A spectral line can result from an electron transition in an atom, molecule or ion, which is associated with a specific amount of energy, E. When this energy is measured by means of some spectroscopic technique, the line is not infinitely sharp, but has a particular shape. Numerous factors can contribute to the broadening of spectral lines. Broadening can only be mitigated by the use of specialized techniques, such as Lamb dip spectroscopy. The principal sources of broadening are:
Lifetime broadening. According to the uncertainty principle the uncertainty in energy, ΔE, and the lifetime, Δt, of the excited state are related by ΔE Δt ≳ ħ
This determines the minimum possible line width. As the excited state decays exponentially in time this effect produces a line with Lorentzian shape in terms of frequency (or wavenumber).
Doppler broadening. This is caused by the fact that the velocity of atoms or molecules relative to the observer follows a Maxwell distribution, so the effect is dependent on temperature. If this were the only effect the line shape would be Gaussian.
Pressure broadening (Collision broadening). Collisions between atoms or molecules reduce the lifetime of the upper state, Δt, increasing the uncertainty ΔE. This effect depends on both the density (that is, pressure for a gas) and the temperature, which affects the rate of collisions. The broadening effect is described by a Lorentzian profile in most cases.
Proximity broadening. The presence of other molecules close to the molecule involved affects both line width and line position. It is the dominant process for liquids and solids. An extreme example of this effect is the influence of hydrogen bonding on the spectra of protic liquids.
Observed spectral line shape and line width are also affected by instrumental factors. The observed line shape is a convolution of the intrinsic line shape with the instrument transfer function.
Each of these mechanisms, and others, can act in isolation or in combination. If each effect is independent of the other, the observed line profile is a convolution of the line profiles of each mechanism. Thus, a combination of Doppler and pressure broadening effects yields a Voigt profile.
Line shape functions
Lorentzian
A Lorentzian line shape function can be represented as L(x) = 1/(1 + x²),
where L signifies a Lorentzian function standardized, for spectroscopic purposes, to a maximum value of 1; is a subsidiary variable defined as
where is the position of the maximum (corresponding to the transition energy E), p is a position, and w is the full width at half maximum (FWHM), the width of the curve when the intensity is half the maximum intensity (this occurs at the points ). The unit of , and is typically wavenumber or frequency. The variable x is dimensionless and is zero at .
Gaussian
The Gaussian line shape has the standardized form

G(x) = exp(−(ln 2) x²)

The subsidiary variable, x, is defined in the same way as for a Lorentzian shape. Both this function and the Lorentzian have a maximum value of 1 at x = 0 and a value of 1/2 at x = ±1.
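These two standardized shapes can be checked numerically. A minimal sketch in plain Python (the function and variable names are chosen for this example):

```python
import math

def lorentzian(x):
    """Standardized Lorentzian: maximum 1 at x = 0, value 1/2 at x = +/-1."""
    return 1.0 / (1.0 + x * x)

def gaussian(x):
    """Standardized Gaussian: maximum 1 at x = 0, value 1/2 at x = +/-1."""
    return math.exp(-math.log(2.0) * x * x)

def to_x(p, p0, w):
    """Map a spectroscopic position p to the dimensionless variable x
    for a peak centred at p0 with full width at half maximum w."""
    return (p - p0) / (w / 2.0)
```

For example, a peak centred at p0 = 5 with w = 2 has half its maximum intensity at p = 4 and p = 6, i.e. at x = ±1.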
Voigt
The third line shape that has a theoretical basis is the Voigt function, a convolution of a Gaussian and a Lorentzian,

V(x; σ, γ) = ∫ G(x′; σ) L(x − x′; γ) dx′

where σ and γ are the half-widths of the Gaussian and Lorentzian components, respectively. The computation of a Voigt function and its derivatives is more complicated than that of a Gaussian or Lorentzian.
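Because the convolution has no elementary closed form, the Voigt profile is usually evaluated numerically. The sketch below computes the area-normalized convolution by trapezoidal quadrature, taking σ as the Gaussian standard deviation and γ as the Lorentzian half-width at half maximum (a common but not universal convention; the peak-normalized form used above differs by a scale factor):

```python
import math

def voigt(x, sigma, gamma, n=4001, half_span=8.0):
    """Area-normalized Voigt profile V(x) = integral of G(t) * L(x - t) dt,
    with G a Gaussian density (standard deviation sigma) and L a Lorentzian
    density (half-width at half maximum gamma).  Trapezoidal quadrature over
    t in [-half_span*sigma, half_span*sigma], outside which G is negligible."""
    lo = -half_span * sigma
    dt = 2.0 * half_span * sigma / (n - 1)
    total = 0.0
    for i in range(n):
        t = lo + i * dt
        g = math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        l = gamma / (math.pi * ((x - t) ** 2 + gamma * gamma))
        weight = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += weight * g * l
    return total * dt
```

At x = 0 the result can be checked against the closed form exp(a²) erfc(a) / (σ √(2π)) with a = γ/(σ√2), which follows from the Faddeeva-function representation of the Voigt profile.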
Spectral fitting
A spectroscopic peak may be fitted to multiples of the above functions or to sums or products of functions with variable parameters. The above functions are all symmetrical about the position of their maximum. Asymmetric functions have also been used.
Instances
Atomic spectra
For atoms in the gas phase the principal effects are Doppler and pressure broadening. Lines are relatively sharp on the scale of measurement so that applications such as atomic absorption spectroscopy (AAS) and Inductively coupled plasma atomic emission spectroscopy (ICP) are used for elemental analysis. Atoms also have distinct x-ray spectra that are attributable to the excitation of inner shell electrons to excited states. The lines are relatively sharp because the inner electron energies are not very sensitive to the atom's environment. This is applied to X-ray fluorescence spectroscopy of solid materials.
Molecular spectra
For molecules in the gas phase, the principal effects are Doppler and pressure broadening. This applies to rotational spectroscopy, rotational-vibrational spectroscopy and vibronic spectroscopy.
For molecules in the liquid state or in solution, collision and proximity broadening predominate and lines are much broader than lines from the same molecule in the gas phase. Line maxima may also be shifted. Because there are many sources of broadening, the lines have a stable distribution, tending towards a Gaussian shape.
Nuclear magnetic resonance
The shape of lines in a nuclear magnetic resonance (NMR) spectrum is determined by the process of free induction decay. This decay is approximately exponential, so the line shape is Lorentzian. This follows because the Fourier transform of an exponential function in the time domain is a Lorentzian in the frequency domain. In NMR spectroscopy the lifetime of the excited states is relatively long, so the lines are very sharp, producing high-resolution spectra.
Magnetic resonance imaging
Gadolinium-based pharmaceuticals alter the relaxation time, and hence the spectral line shape, of protons in water molecules that are transiently attached to the paramagnetic atoms, resulting in contrast enhancement of the MRI image. This allows better visualisation of some brain tumours.
Applications
Curve decomposition
Some spectroscopic curves can be approximated by the sum of a set of component curves. For example, when Beer's law

A(λ) = ℓ Σk εk(λ) ck

applies, the total absorbance, A, at wavelength λ, is a linear combination of the absorbances due to the individual components, k, at concentrations ck; εk is the molar extinction coefficient of component k and ℓ is the optical path length. In such cases the curve of experimental data may be decomposed into a sum of component curves in a process of curve fitting. This process is also widely called deconvolution, although curve deconvolution and curve fitting are completely different mathematical procedures.
Curve fitting can be used in two distinct ways.
The line shapes and parameters (p0 and w) of the individual component curves have been obtained experimentally. In this case the curve may be decomposed using a linear least-squares process simply to determine the concentrations of the components. This process is used in analytical chemistry to determine the composition of a mixture of components of known molar absorptivity spectra. For example, if the heights of two lines are found to be h1 and h2, then c1 = h1 / ε1 and c2 = h2 / ε2.
Parameters of the line shape are unknown. The intensity of each component is a function of at least three parameters: position, height and half-width. In addition, one or both of the line shape function and the baseline function may not be known with certainty. When two or more parameters of a fitting curve are not known, the method of non-linear least squares must be used. The reliability of curve fitting in this case depends on the separation between the components, their shape functions and relative heights, and the signal-to-noise ratio in the data. When Gaussian-shaped curves are used for the decomposition of a set of Nsol spectra into Npks curves, the p0 and w parameters are common to all Nsol spectra. This allows the height of each Gaussian curve in each spectrum (Nsol·Npks parameters) to be calculated by a (fast) linear least-squares fitting procedure, while the p0 and w parameters (2·Npks parameters) can be obtained with a non-linear least-squares fit on the data from all spectra simultaneously, dramatically reducing the correlation between optimized parameters.
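In the first case, with known absorptivity spectra, the concentrations follow from a single linear least-squares solve. A minimal two-component sketch in plain Python (the matrix E of molar absorptivities and the absorbances are illustrative values, not data from the text; the path length is absorbed into E):

```python
def concentrations(E, A):
    """Least-squares concentrations for a two-component mixture: solve the
    normal equations (E^T E) c = E^T A, where E[i] = [eps1(lam_i), eps2(lam_i)]
    are molar absorptivities at wavelength lam_i and A[i] is the measured
    absorbance there."""
    m = [[0.0, 0.0], [0.0, 0.0]]  # 2x2 normal matrix E^T E
    r = [0.0, 0.0]                # right-hand side E^T A
    for row, a in zip(E, A):
        for j in range(2):
            r[j] += row[j] * a
            for k in range(2):
                m[j][k] += row[j] * row[k]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    # Cramer's rule on the 2x2 system
    c1 = (r[0] * m[1][1] - r[1] * m[0][1]) / det
    c2 = (m[0][0] * r[1] - m[1][0] * r[0]) / det
    return c1, c2
```

With noise-free synthetic absorbances the true concentrations are recovered exactly; with real data the solve gives the least-squares best fit.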
Derivative spectroscopy
Spectroscopic curves can be subjected to numerical differentiation.
When the data points in a curve are equidistant from each other the Savitzky–Golay convolution method may be used. The best convolution function to use depends primarily on the signal-to-noise ratio of the data. The first derivative (slope, dy/dx) of all single line shapes is zero at the position of maximum height. This is also true of the third derivative; odd derivatives can be used to locate the position of a peak maximum.
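For equally spaced points the Savitzky–Golay first derivative is a fixed convolution. A sketch using the classic 5-point quadratic/cubic coefficients (−2, −1, 0, 1, 2) with normalization 10h:

```python
def sg_first_derivative(y, h):
    """Savitzky-Golay first derivative for equally spaced data:
    5-point quadratic/cubic convolution, coefficients (-2, -1, 0, 1, 2),
    normalization 10*h.  Returns estimates for the interior points
    (two fewer at each end)."""
    c = (-2.0, -1.0, 0.0, 1.0, 2.0)
    return [sum(cj * y[i - 2 + j] for j, cj in enumerate(c)) / (10.0 * h)
            for i in range(2, len(y) - 2)]
```

For data sampled from a quadratic the filter reproduces the derivative exactly, and for a symmetric line shape the derivative is zero at the maximum, as stated above.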
The second derivatives, d²y/dx², of both Gaussian and Lorentzian functions have a reduced half-width. This can be used to apparently improve spectral resolution. The diagram shows the second derivative of the black curve in the diagram above it. Whereas the smaller component produces a shoulder in the spectrum, it appears as a separate peak in the second derivative. Fourth derivatives, d⁴y/dx⁴, can also be used, when the signal-to-noise ratio in the spectrum is sufficiently high.
Deconvolution
Deconvolution can be used to apparently improve spectral resolution. In the case of NMR spectra, the process is relatively straightforward, because the line shapes are Lorentzian, and the convolution of a Lorentzian with another Lorentzian is also Lorentzian. The Fourier transform of a Lorentzian is an exponential. In the co-domain (time) of the spectroscopic domain (frequency), convolution becomes multiplication. Therefore, a convolution of two Lorentzians becomes a multiplication of the corresponding exponentials in the co-domain. Since, in FT-NMR, the measurements are made in the time domain, division of the data by an exponential is equivalent to deconvolution in the frequency domain. A suitable choice of exponential results in a reduction of the half-width of a line in the frequency domain. This technique has been rendered all but obsolete by advances in NMR technology. A similar process has been applied for resolution enhancement of other types of spectra, with the disadvantage that the spectrum must first be Fourier transformed and then transformed back after the deconvoluting function has been applied in the spectrum's co-domain.
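The narrowing step can be sketched with scalars alone: a signal decaying as exp(−Rt) transforms to a Lorentzian of full width R/π hertz, so multiplying the sampled time-domain data by a growing exponential exp(+at), with a < R, leaves an effective decay rate R − a and hence a narrower line (at the cost of amplifying noise at long times). A minimal illustration, with hypothetical values for R and a:

```python
import math

def lorentzian_fwhm_hz(decay_rate):
    """FWHM, in Hz, of the Lorentzian line obtained by Fourier
    transforming a signal that decays as exp(-decay_rate * t)."""
    return decay_rate / math.pi

def sharpen(fid, dt, a):
    """Resolution enhancement: multiply sampled time-domain data (an FID,
    sample spacing dt) by the growing exponential exp(+a*t).  This is
    equivalent to deconvolving a Lorentzian of width a/pi from the spectrum."""
    return [s * math.exp(a * k * dt) for k, s in enumerate(fid)]
```

Applying `sharpen` with a = 5 s⁻¹ to a decay with R = 20 s⁻¹ leaves an effective rate of 15 s⁻¹, i.e. the line width drops from 20/π to 15/π Hz.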
See also
Fano resonance
Holtsmark distribution
Zero-phonon line and phonon sideband
Notes
References
Further reading
External links
Curve Fitting in Raman and IR Spectroscopy: Basic Theory of Line Shapes and Applications
21st International Conference on Spectral Line Shapes, St. Petersburg (2012)
Spectroscopy
Osmiridium and iridosmine are natural alloys of the elements osmium and iridium, with traces of other platinum-group metals.
Osmiridium has been defined as containing a higher proportion of iridium, with iridosmine containing more osmium. However, as the composition of natural Os-Ir alloys varies considerably, the names have often been applied in the reverse sense, with osmiridium describing specimens containing a higher proportion of osmium and iridosmine describing specimens containing more iridium.
Nomenclature
In 1963, M. H. Hey proposed using iridosmine for hexagonal specimens with 32% < Os < 80%, osmiridium for cubic specimens with Os < 32% and native osmium for specimens Os > 80% (the would-be mineral native iridium of >80% purity was not known at that time).
In 1973, Harris and Cabri defined the following names for Os-Ir-Ru alloys: ruthenosmiridium was applied to cubic Os-Ir-Ru alloys where Ir < 80% of (Os+Ir+Ru) and Ru > 10% of (Os+Ir+Ru), with no single other element >10% of the total; rutheniridosmine was applied to cubic Os-Ir-Ru alloys where Os < 80% of (Os+Ir+Ru) and Ru is 10–80% of (Os+Ir+Ru), with no single other element >10% of the total; the Ru-Os alloys were to be known as ruthenian osmium (>50% Os) or osmian ruthenium (>50% Ru); and the Ru-Ir alloys were to be known as iridian ruthenium and ruthenian iridium, where the boundary between them is defined by the alloys' miscibility gap (a minimum of 57% Ir for ruthenian iridium and a minimum of 55% Ru for iridian ruthenium).
The nomenclature of Os-Ir-Ru alloys was revised again by Harris and Cabri in 1991. Afterwards, only four names were applied to minerals whose compositions lie within the ternary Os-Ir-Ru system: osmium (native osmium) for all hexagonal alloys with Os the major element; iridium (native iridium) for all cubic alloys with iridium the major element; rutheniridosmine for all hexagonal alloys with Ir the major element; and ruthenium (native ruthenium) for all hexagonal alloys with Ru the major element. The mineral names iridosmine, osmiridium, rutheniridosmium, ruthenian osmium, osmian ruthenium, ruthenium iridium and iridian ruthenium were proposed to be retired.
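Under the 1991 scheme the name reduces to a small decision table: crystal system plus the major element. A sketch of that table as code (only the rules as summarized above; compositional edge cases treated in Harris and Cabri's paper are omitted):

```python
def os_ir_ru_name(crystal_system, major_element):
    """Mineral name for an Os-Ir-Ru alloy under the Harris & Cabri (1991)
    scheme as summarized above.  crystal_system is 'hexagonal' or 'cubic';
    major_element is the most abundant of 'Os', 'Ir', 'Ru'."""
    if crystal_system == "cubic":
        if major_element == "Ir":
            return "iridium"
        raise ValueError("the 1991 scheme names only Ir-dominant cubic alloys")
    if crystal_system == "hexagonal":
        return {"Os": "osmium",
                "Ir": "rutheniridosmine",
                "Ru": "ruthenium"}[major_element]
    raise ValueError("unknown crystal system: " + crystal_system)
```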
Related compounds
Natural alloys also occur in the systems Ir-Os-Rh, Os-Ir-Pt, Ru-Ir-Pt, Ir-Ru-Rh and Pd-Ir-Pt. Names used in these systems have included platiniridium (or platinian iridium) and iridrhodruthenium.
However, there is no universally accepted method of plotting these compositions and their names, especially in the ternary systems.
Properties
The properties of all these alloys generally fall between those of the constituent elements, but their hardness is greater than that of the individual constituents.
Occurrence
Os-Ir alloys are very rare, but can be found in mines of other platinum-group metals. One very productive mine was operated at Adamsfield near Tyenna in Tasmania during the Second World War with the ore shipped out by railway from Maydena. The site of the mine is now totally reclaimed by dense natural bush. It was once one of the world's major producers of this rare metal, and the osmiridium was mostly found in shallow alluvial workings. The alloy is currently valued at about US$400 per troy ounce.
Isolation
It can be isolated by adding a piece to aqua regia, which has the ability to dissolve gold and platinum but not osmiridium. It occurs naturally as small, extremely hard, flat metallic grains with hexagonal crystal structure.
References
Osmiridium—Mindat.org
Iridium minerals
Native element minerals
Osmium compounds
Precious metal alloys
Superhard materials
NGC 5668 is a nearly face-on spiral galaxy, visual magnitude about 11.5, located about 81 million light years away in the constellation Virgo. It was discovered on 29 April 1786 by William Herschel.
NGC 5668 is a member of the NGC 5638 Group of galaxies, itself one of the Virgo III Groups strung out to the east of the Virgo Supercluster of galaxies. In addition, A.M. Garcia listed NGC 5668 in the 31 member NGC 5746 galaxy group (also known as LGG 386).
As seen from the Earth, it is inclined by an angle of 18° to the line of sight along a position angle of 145°. The morphological classification in the De Vaucouleurs system is SA(s)d, indicating a pure spiral structure with loosely wound arms. However, optical images of the galaxy indicate the presence of a weak bar structure spanning an angle of 12″ across the nucleus. There is a dwarf galaxy located to the southeast of NGC 5668, and the two may be gravitationally interacting.
Supernovae
Three supernovae have been observed in this galaxy:
SN 1952G (type unknown, mag. 17.9) was discovered by Fritz Zwicky on 18 April 1952.
SN 1954B (type Ia, mag. 12.3) was discovered by Paul Wild on 4 May 1954. [Note: Some sources incorrectly list the discovery date as 27 April 1954.]
SN 2004G (type II, mag. 17.2) was discovered by Reiki Kushida on 19 January 2004. It was initially imaged 43″ west and 12.5″ south of the galaxy core.
High velocity clouds of neutral hydrogen have been observed in NGC 5668, which may have their origin in supernova explosions and strong stellar winds.
Gallery
See also
List of NGC objects (5001–6000)
References
External links
Virgo (constellation)
Unbarred spiral galaxies
5668
09363
17860429
Discoveries by William Herschel
052018
+01-37-028
14309+0440
Deutschsprachige Anwendervereinigung TeX e.V., or DANTE e.V., is the German-language TeX users group.
With about 2000 members, it is the largest TeX users group worldwide. DANTE was founded on 14 April 1989 in the German city of Heidelberg by a handful of TeX enthusiasts who had previously met on an irregular basis. According to its statutes, DANTE supports TeX users from all German-speaking countries and funds TeX-related projects. It also represents its members in dealings with other TeX user groups.
DANTE runs the main CTAN backbone server. Conferences with talks, tutorials and general member meetings are held twice a year at changing locations all over the German-speaking countries. The conferences are free of charge for everyone interested in TeX. The member journal, Die TeXnische Komödie (German, 'The TeXnical Comedy', a pun on Dante's Divine Comedy referring back to the group's name), is published quarterly.
DANTE's international counterpart is the TeX Users Group (TUG) that was founded in 1980.
External links
DANTE, Deutschsprachige Anwendervereinigung TeX e. V.
TeX
User groups
1989 establishments in West Germany
Organizations established in 1989
Scientific organisations based in Germany
Heidelberg
Neutron tomography is a form of computed tomography involving the production of three-dimensional images by the detection of the absorbance of neutrons produced by a neutron source. It creates a three-dimensional image of an object by combining multiple planar images with a known separation. It has a resolution of down to 25 μm. Whilst its resolution is lower than that of X-ray tomography, it can be useful for specimens containing low contrast between the matrix and object of interest; for instance, fossils with a high carbon content, such as plants or vertebrate remains.
Neutron tomography can have the unfortunate side-effect of leaving imaged samples radioactive if they contain appreciable levels of certain elements such as cobalt; however, in practice this neutron activation is low and short-lived, such that the method is considered non-destructive.
The increasing availability of neutron imaging instruments at research reactors and spallation sources via peer-reviewed user access programs has seen neutron tomography achieve increasing impact across diverse applications including earth sciences, palaeontology, cultural heritage, materials research and engineering. In 2022, it was reported in the journal Gondwana Research that an ornithopod dinosaur was serendipitously discovered by neutron tomography in the gut content of Confractosuchus, a Cretaceous crocodyliform from the Winton Formation of central Queensland, Australia. This is the first time that a dinosaur has been discovered using neutron tomography, and to this day, the partially digested dinosaur remains entirely embedded within the surrounding matrix.
See also
References
Tomography
Neutron instrumentation
In computing, sys is a command used in many operating system command-line shells and also in Microsoft BASIC.
DOS, Windows, etc.
SYS is an external command of Seattle Computer Products 86-DOS, Microsoft MS-DOS, IBM PC DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, PTS-DOS, Itautec/Scopus Tecnologia SISNE plus, and Microsoft Windows 9x operating systems. It is used to make an already formatted medium bootable. It will install a boot sector capable of booting the operating system into the first logical sector of the volume. Further, it will copy the principal DOS system files, that is, the DOS-BIOS (IO.SYS or IBMBIO.COM) and the DOS kernel (MSDOS.SYS or IBMDOS.COM) into the root directory of the target. Due to restrictions in the implementation of the boot loaders in the boot sector and DOS' IO system, these two files must reside in the first two directory entries and be stored at the beginning of the data area under MS-DOS and PC DOS. Depending on version, the whole files or only a varying number of sectors of the DOS-BIOS (down to only three sectors in modern issues of DOS) will have to be stored in one consecutive part. SYS will try to physically rearrange other files on the medium in order to make room for these files in their required locations. This is why SYS needs to bypass the filesystem driver in the running operating system. Other DOS derivatives such as DR-DOS do not have any such restrictions imposed by the design of the boot loaders, therefore under these systems, SYS will install a DR-DOS boot sector, which is capable of mounting the filesystem, and can then simply copy the two system files into the root directory of the target.
SYS will also copy the command line shell (COMMAND.COM) into the root directory. The command can be applied to hard drives and floppy disks to repair or create a boot sector.
Although an article on Microsoft's website says the SYS command was introduced in MS-DOS version 2.0, this is incorrect. SYS actually existed in 86-DOS 0.3 already. According to The MS-DOS Encyclopedia, the command was licensed to IBM as part of the first version of MS-DOS, and as such it was part of MS-DOS/PC DOS from the very beginning (IBM PC DOS 1.0 and MS-DOS 1.25).
DR DOS 6.0 includes an implementation of the command.
Syntax
The command syntax is:
SYS [drive1:][path] drive2:
Arguments:
[drive1:][path] – The location of the system files
drive2: – The drive to which the files will be copied
Example
C:\>sys a:
Microsoft BASIC
SYS is also a command in Microsoft BASIC used to execute a machine language program in memory. The command takes the form SYS n, where n is the memory address at which the executable code starts. Home computer platforms typically publicised dozens of entry points to built-in routines (such as Commodore's KERNAL) that were used by programmers and users to access functionality not easily accessible through BASIC.
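For example, on the Commodore 64 (used here purely as an illustration), the KERNAL's reset routine begins at $FCE2, decimal 64738, so the following BASIC program performs a soft reset of the machine:

```basic
10 REM JUMP TO THE C64 KERNAL RESET ROUTINE AT $FCE2 (DECIMAL 64738)
20 SYS 64738
```

The same statement could equally be typed in immediate mode, without line numbers.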
See also
List of DOS commands
Disk formatting
References
Further reading
External links
MS-DOS and Windows command line sys command
Open source SYS implementation that comes with MS-DOS v2.0
External DOS commands
Microcomputer software
Microsoft free software
BASIC commands
Beach cleaning or clean-up is the process of removing solid litter, dense chemicals, and organic debris deposited on a beach or coastline by the tide, local visitors, or tourists. Humans pollute beaches with materials such as plastic bottles and bags, plastic straws, fishing gear, cigarette filters, six-pack rings, surgical masks and many other items that often lead to environmental degradation. Every year hundreds of thousands of volunteers comb beaches and coastlines around the world to clean this debris. These materials are also called "marine debris" or "marine pollution" and their quantity has been increasing due to anthropogenic activities.
There are several major sources of beach debris: beach users, oceans, sea drift, and river flow. Many beach users leave their litter behind after beach activities. Marine debris and chemicals such as crude oil also drift in from oceans or seas and accumulate on beaches, and many rivers carry trash from cities to the coast. These pollutants harm marine life and ecology, human health, and coastal tourism. Hartley et al.'s (2015) study shows that environmental education is important for reducing beach pollutants and protecting the marine environment.
Marine debris
There are two causes of marine debris and the degradation of marine ecology: direct forces (population growth, technological development, and economic growth) and proximate forces (land transformation and industrial processes). The direct forces can be thought of as underlying causes: they drive the excessive consumption of goods produced by industry. This excessive consumption causes marine debris because the goods are packaged in cheap, non-recyclable manufactured materials such as plastic. Solid plastic waste cannot decompose easily in nature; its decomposition takes hundreds to thousands of years, but plastic breaks down into ever smaller pieces (<5 mm), forming what are called micro-plastics. Such solid waste products, seen all along coastlines and on many beaches throughout the world, are called marine debris. There can be many sources of marine debris, including land-based and marine-based sources and other anthropogenic activities.
Millions of tons of land-based waste products such as plastics, paper, wood, and metals end up in seas, oceans, and on beaches via wind, ocean currents (the five major gyres), sewage, runoff, storm-water drains and rivers. This massive amount of marine debris has become a severe menace to the marine environment, aquatic life and humankind. Most land-based sources are illegal dumping, landfills, and disposal from petrochemical and other industries. Marine-based sources originate from human marine activities: fishing lines, nets, plastic ropes and other petrochemical products drift from remote islands, shipping vessels or fishing boats on the wind and oceanic currents. Marine debris also arises from the activities of local populations, such as beach-goers and tourists, and from city or town sewage.
Montesinos et al. (2020) studied 16,123 beach litter items to determine the sources of marine debris at 40 bathing areas along the coast of Cádiz, Spain. The study shows that 88.5% of plastics, 67% of cigarette butts, and the cloth litter are related to the activity of beach-goers and tourists; 5.5% of cotton swabs, wet wipes, sanitary towels, tampons, and condoms are related to wastewater discharges at places close to river and tidal creek mouths; and 2.1% of fishing lines and nets and 0.6% of Styrofoam are related to fishing activities and marine sources. Some marine debris indicates that it was dumped directly into the sea by international ships or left on the beach by tourists from other countries, such as a hard food container (from Portugal), a bottle cap (Morocco), a cleaner bottle (Turkey), and a food wrapper and other items related to navigation (Germany). Montesinos et al.'s (2020) study demonstrates that marine debris can travel hundreds of kilometers and end up very far from its source because of ocean and sea currents.
Also, tropical and subtropical islands are marine pollution hot spots, as their relatively vulnerable ecosystems are severely affected by both local and foreign marine debris. de Scisciolo et al. (2016) studied ten beaches along the leeward and windward coastlines of Aruba, one of the Lesser Antilles islands in the southern Caribbean Sea, to determine differences in the densities of macro-debris (>25 mm), meso-debris (2–25 mm) and micro-debris (<2 mm). Their results show that meso-debris, largely rounded plastic fragments, is concentrated on the windward coastlines, which experience higher pressure from distal marine-based debris; natural factors such as wind and oceanic currents drive the accumulation and distribution of plastic meso-debris there. Macro-debris, a larger proportion of which originates from eating, drinking, smoking and recreational activities (plastic plates, bottles and straws), is found at the leeward sites of the island, which experience higher pressure from local land-based debris.
Ghost gear
Marine debris includes millions of tons of abandoned plastic fishing gear, with nearly 640,000 tons of plastic gear dumped or abandoned in the oceans every year. According to Unger and Harrison, 6.4 million tons of pollutants enter the oceans every year, and most of this consists of durable synthetic fishing gear, packaging materials, raw plastic, and convenience items. Such extremely durable plastic gear cannot decompose in seawater and the marine environment, and it washes up on beaches driven by inshore currents and wind. Discarded gear such as plastic fishing lines, nets, and floats is called "ghost gear". About 46% of the roughly 79 thousand tons of plastic found in the Great Pacific Garbage Patch, as surveyed in 2018, consists of ghost gear. Discarded fishing nets and lines kill or injure myriad marine animals such as fish, sharks, whales, dolphins, sea turtles, seals, and marine birds every year; about 30% of fish populations are in decline, and 70% of other marine animals suffer from abandoned gear each year. The huge fishing industry is also an important driver of declining marine ecology through overfishing, which occurs when large fishing vessels catch tons of fish faster than stocks can replenish. Overfishing affects the 4.5 billion people for whom fish provides at least 15% of their protein and fishing is a principal livelihood.
Benefits
Public health
Clean beaches have many benefits for public health, because polluted beaches endanger beach users. Items left on beaches such as broken glass, sharp metal, or hard plastic can physically injure beach-goers. Marine debris such as fishing gear or nets may also put lives at risk: such pollutants can entangle beach users and cause serious injuries or drowning accidents.
Ecology
Research on marine debris has substantially increased our knowledge of the amount and composition of marine debris as well as its impacts on the marine environment, aquatic life and people. Marine debris is very harmful to marine organisms such as plants, invertebrates, fish, seabirds, sea turtles and other large marine mammals. Marine debris contains plastic litter that is composed of industrial chemicals or toxins. These chemicals can be destructive to aquatic organisms because toxins accumulate in the tissues of marine organisms and cause specific effects such as behavioral changes and alterations in metabolic processes. A combination of plastic and seawater-borne materials such as polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and heavy metals can be fatal for marine life. Moreover, consumption of micro-plastics by larger marine organisms causes obstructions of the intestinal tract that lead to starvation and death through reduced energy fitness. According to the U.S. Marine Mammal Commission, 111 of the world's 312 species of seabirds, 26 species of marine mammals, and six of the world's seven species of sea turtles have experienced issues with beach litter ingestion. Studies reveal that micro-plastics also negatively impact human health through human consumption of marine organisms.
In addition to all these impacts, the marine debris and beach litter pose dangers to wildlife on the beaches and marine ecology. Many beach pollutants such as fishing gears and nets or oil spills jeopardize many sea animals including sea turtles, seabirds, and dolphins, and can cause serious injuries or death. Marine animals can become trapped by contaminants such as fishing lines or nets.
All of the aforementioned harms are made possible only by human impacts, and could ultimately be prevented by limiting human and marine interaction. The United Nations Joint Group of Experts on the Scientific Aspects of Marine Pollution (GESAMP) has reported that pollution originating from land makes up 80% of the world's marine pollution.
Sustainability
Clean beaches are indicators of the environmental quality and the level of sustainable development of a country. The Beach Cleaning Health Index is a cleanliness classification method for European countries and their environments. The index rates the sustainability and cleanness of countries and their beaches with grades such as A for excellent, B for good, C for regular, and D for bad.
There are numerous sustainability indexes that have been created to assess beach health and general appearance. These indexes depend on a wide range of variables used to assess both anthropogenic and natural changes to beaches, and their variables often merge the goals of environmental preservation with those of the region to which the beach belongs. In addition to the health index used in many European countries, in 2005 Israel created its own beach analysis, the Clean Coast Index (CCI). The goal of this program has been to maintain the cleanliness of all of Israel's coastline, as well as to educate the public on the importance of mitigating marine litter. This is one of the first indexes to measure more than just the amount of waste removed from a beach, as had been done in the past.
The CCI evaluated beach cleanliness every 2 weeks for a period of 7 months. By using the index on a periodic basis, its operators were able to determine which processes worked well and which did not. Other countries, in the Caribbean, employ a different form of beach health index, the Beach Quality Index (BQI). The BQI assesses many aspects of beaches, not just litter or overall cleanliness but also anthropogenic impacts and long-term effects, acting as a checklist for environmental quality issues. The BQI classifies beaches as urban or urbanized in the hope of assessing them as accurately as possible and including all factors that may affect different beaches. The BQI helps by establishing various components and categories for this classification, something that not all beach indexes include.
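The CCI itself is a simple formula: plastic items counted on a survey transect, divided by the transect area, multiplied by a coefficient K = 20, with fixed thresholds for the cleanliness grades. A sketch, assuming the coefficient and grade boundaries of the published Israeli index (Alkalay et al.), which should be verified against the original paper:

```python
def clean_coast_index(plastic_items, area_m2, k=20.0):
    """CCI = (plastic items per square metre of beach surveyed) * K."""
    return plastic_items / area_m2 * k

def cci_grade(cci):
    """Cleanliness grade for a CCI value (assumed published thresholds)."""
    if cci < 2:
        return "very clean"
    if cci < 5:
        return "clean"
    if cci < 10:
        return "moderate"
    if cci < 20:
        return "dirty"
    return "extremely dirty"
```

For example, 10 plastic items on a 100 m² transect give a CCI of 2, on the boundary between "very clean" and "clean".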
Tourism
Beaches are recreational areas and attract many local and international visitors for sunbathing, swimming, walking or surfing. This coastal tourism is important for many countries because tourism contributes a large share of their economies; a polluted beach or coastline may therefore substantially harm a country's economy. Contaminated beaches have become a global concern since the beginning of industrialization, as they are unattractive to international and local tourists on grounds of aesthetics or health. Hutchings et al.'s (2000) study shows that a clean beach is a very important determinant for many local and international tourists in South Africa.
Public engagement and awareness, education, and behavior change
Participation in beach cleaning is associated with a better understanding of the issue of marine litter and its impacts. Beach cleaning volunteers demonstrated more accurate knowledge of the amount and type of waste in the local environment, as well as greater awareness of the causes and consequences of marine litter. For example, Hartley et al. (2015) found that students who volunteered to clean a local beach with their school could more accurately identify the primary origins of marine litter and estimate the lifespan of plastic. By highlighting the connection between human behavior and marine litter, beach cleaning increases the likelihood that participants will habitually remove and appropriately dispose of coastal trash, as well as engage in prevention and mitigation efforts. By comparing beach cleaning to other coastal activities, walking on the beach and rock pooling, Wyles et al. (2017) aimed to identify the benefits unique to beach cleaning. In doing so, Wyles et al. (2017) discovered that individuals who participated in beach cleaning reported a significantly greater increase in their intention to live an environmentally-friendly lifestyle and their awareness of marine issues compared to other test groups after the intervention.
Wellbeing
Beach cleaning has been shown to cultivate a positive mood and feeling of fulfillment. Wyles et al. (2017) compared the effect various coastal activities—beach cleaning, rock pooling, and walking on the beach—had on well being. The study found that participants experienced an improvement in mood across all three activities, although individuals who participated in beach cleaning reported a statistically significant difference in the sense of meaning they derived from beach cleaning compared to walking on the beach and rock pooling.
Additional research on the effects of beach cleaning on personal well being has not been conducted. However, the two core components of beach cleaning—spending time by the ocean and volunteering to advance environmental stewardship—have been associated with improved well being, mood, and outlook on life. For example, Koss and Kingsley (2010) found that individuals who volunteered at protected marine areas in Australia experienced greater mental and emotional well being and enhanced connection with the natural environment.
While beach cleaning can improve well being, Wyles et al. (2017) discovered that participants reported a statistically significant lower level of rejuvenation and relaxation when beach cleaning compared to rock pooling and walking on the beach.
Lastly, the well being benefits associated with beach cleaning are not only limited to the individuals actively removing trash from the coast but can be enjoyed by community members and beach goers as a whole. Wyles et al. (2016) claims that the presence of litter can diminish the psychological benefits of beaches. Beach goers in Wyles et al. (2016) even described feelings of sadness or anger when confronted with litter, explaining that these emotions emerged because the trash negatively impacts the environment and distracts from the beauty of the landscape.
Methods
The process of beach cleaning requires good management methods, adequate human resources, and funds. Methods for cleaning solid litter are very different from those for cleaning oil spills. Beach cleaning may be done using machinery such as sand cleaning machines that rake or sift the sand, and/or chemicals such as oil dispersants. The work may be carried out by professional companies, civic organizations, the military, or volunteers, as in programs such as the Great Canadian Shoreline Cleanup and those of the Marine Conservation Society.
Mechanical vs. manual cleaning
There are two types of beach cleaning—mechanical and manual. These methods are also referred to as mechanical grooming and nonmechanical grooming. Mechanical beach cleaning is defined as litter and/or organic material removal that relies on the work of automatic or push machinery that rakes or sieves the most superficial layer of sand. Manual cleaning involves individuals picking up trash exclusively by hand. The suggested beach cleaning approach incorporates manual and mechanical cleaning as this combination is most cost effective and environmentally sound.
Environmental concerns
Wrack cover and biodiversity
Mechanical cleaning removes organic materials, like seaweed, algae, and plants, alongside anthropogenic waste, such as plastic bottles, cigarette butts, and food packaging, leading to disturbances in the ecosystem and food chain. Organic materials naturally found on beaches, also known as wrack, provide critical nutrients and compose the foundation of the food chain. The elimination of this food source impacts organisms ranging from meiofauna to predatory birds, resulting in a loss of biodiversity and a decrease in species abundance. For example, Dugan et al. studied the relationship between wrack abundance and the richness, abundance, and biomass of macrofauna of fifteen sandy beaches in Southern California and found that ungroomed beaches with relatively low levels of wrack had a mean abundance of wrack-associated macrofauna that was almost nine times greater than groomed beaches. Additionally, ungroomed beaches with relatively large amounts of wrack supported more than thirteen species of macrofauna that live in and around wrack, while groomed beaches supported fewer than three. Furthermore, the presence of two shorebirds was positively correlated with the presence of wrack-associated macrofauna, indicating that beaches with more extensive wrack cover support vertebrates higher in the food chain and create a richer, more biodiverse ecosystem. Overall, the presence of wrack allows detritivores like isopods and talitrid amphipods, invertebrates like beetles, foraging birds, and scavenging vertebrates like mice, rats, foxes, and badgers to live and feed in that environment.
Wrack removal and public health
While removing wrack from beaches can harm the environment, the presence of excessive wrack can threaten beach goers' health. Collections of wrack decompose quickly, which generates a foul odor. This environment attracts unpleasant and even dangerous microbes and animals. Flies and buzzards are drawn to the smell of the decomposing wrack. While a large bird population increases biodiversity, the birds leave their droppings, which also increase the density of potentially harmful microbes in the sand. Additionally, microbes that thrive in the presence of feces, called fecal indicator organisms, can reproduce in the conditions created by decomposing wrack. Wrack can sustain potentially harmful bacteria and fecal indicator organisms like Escherichia coli and Enterococci, which can cause gastrointestinal illness. In fact, a positive relationship between time spent on wet sandy beaches and the incidence of gastrointestinal illness has been identified.
Topographic and vegetation alterations
Groomed beaches are wider, sustain substantially less vegetation, and have fewer and flatter topographic features, like dunes and hummocks, than ungroomed beaches. In their natural state, beaches have a narrow stretch of sand closest to the ocean that is flattened by the tide below the extreme high tide line. Beyond this zone, the land is composed of vegetated dunes that are infrequently touched by tides. However, mechanical beach cleaning has converted many beaches into much wider expanses of flat sand, most of which remains undisturbed by the tide and devoid of vegetation. Mechanical beach cleaning destroys vegetation, hummocks, and newly-formed dunes, leading to an immediate flattening of the landscape. Mechanical cleaning not only damages existing vegetation but deters the growth of future vegetation. Dugan and Hubbard found that the groomed portions of a beach experienced significantly lower rates of plant survival and reproduction after germination than the ungroomed sections of the same beach.
As vegetation abundance and the height and presence of dunes and hummocks decrease, sand transport patterns change in a way that furthers the extent of flattened topography. Hummocks, dunes, and vegetation act as obstacles that slow sand movement triggered by the wind. When these features disappear, the formation of future hummocks and dunes becomes more difficult and unlikely.
As beaches grow flatter and wider, the abundance and diversity of vegetation decreases further because vegetation requires stable sand dunes to take root and grow.
In this way, mechanical beach cleaning triggers a positive feedback loop that exacerbates the flattening and widening of beaches alongside the loss of vegetation abundance and diversity. Halting mechanical beach cleaning stops this cycle and can rebuild the damaged topography and lost vegetation. For example, Dugan and Hubbard observed that four years after stopping mechanical grooming, the San Buenaventura State Beach recovered 20 to 40 meters of vegetation, formed new hummocks and the beginning stages of sand dunes, improved sand stability, and increased the number of plants that survived beyond germination.
Best practices
A number of best practices for carrying out beach cleaning have been discussed in the literature.
Combination of mechanical and manual cleaning methods
This method allows urban and more intensely used beaches to manage larger quantities of litter while minimizing the environmental impact of mechanical cleaning. In fact, beaches cleaned less than three times a week sustain a level of biodiversity and species abundance that is similar or only slightly lower than beaches that are strictly cleaned by hand. For example, Morton et al. (2015) found that mechanical beach cleaning did not affect biodiversity but concede that this is likely because the beach only underwent mechanical cleaning once or twice a week and wrack had been moved from popular sections of the beach to less commonly visited sites. Additionally, Stelling-Wood et al. (2016) studied ghost crab populations as an indicator species for overall biodiversity on sandy beaches and discovered that the frequency of mechanical beach cleaning was the most influential factor on population size. Beaches that were mechanically cleaned less than three times a week housed the highest number of ghost crabs.
Reduction of quantity of beach litter through educational programs
Educational programs and volunteering effectively catalyze behavior change and awareness around marine pollution, leading to a reduction in the marine debris present on beaches and a greater willingness to clean. More information about the benefits of educational and volunteer programs can be found under the Public Engagement and Beach Cleaning header of this page. Decreasing the quantity of marine litter makes manual beach cleaning an easier, more effective option, even for urban, frequently used beaches.
Relocation of collections of wrack to ungroomed or less popular areas of a beach
In doing so, the critical nutrients provided by wrack remain in the ecosystem, limiting disruptions to the food chain. Oftentimes, the nutrients from wrack will be redistributed to groomed portions of the beach by wind and waves. For this reason, it is most important that this suggestion be implemented on beaches with consistently low tides.
Public engagement and beach cleaning
There are three primary ways the public can learn about or participate in beach cleaning: educational programs, awareness campaigns, and volunteering. All modes of public engagement can increase awareness of the issue of marine litter, educate participants about marine litter and ocean conservation, and motivate behavior change. When volunteers participate in beach cleaning, they can use mechanical or manual methods.
Educational programs and awareness campaigns
Educational and awareness campaigns can be developed by schools or promoted by governments. Both have effectively enhanced their target audiences' knowledge of marine litter and perception of the extent of the issue, and have catalyzed behavior change.
Multiple studies research the impact of service learning programs on students' level of knowledge accumulation and awareness of both marine litter and broader marine conservation issues. For example, Owens (2018) studied the self-reported change in students' perception of their knowledge about ocean conservation and environmental behavior. The study compared the responses of two groups: an undergraduate class enrolled in a seminar course supplemented by a service learning opportunity cleaning beaches and an undergraduate class enrolled in a traditional laboratory-based environmental science course. Students who participated in beach cleaning reported a significantly greater perception of knowledge and environmentally-friendly behavior than the students in the laboratory-based class, and also saw a significantly greater increase in their scores for both measures over the course of the program.
Educational campaigns can spread knowledge and incite behavior change beyond the target audience. For example, Hartley et al. (2015) explains that students who participated in beach cleaning with their school encouraged their friends and family to join them in adopting mitigation and prevention behaviors.
Volunteering
Volunteering improves participants' awareness and knowledge about marine litter and increases the likelihood that individuals will take continued action to address the issue. For example, Hartley et al. (2015) claims that after volunteering to clean a local beach with their school, children reported engaging in mitigation and prevention behavior more frequently, such as purchasing fewer single-use plastic items, appropriately disposing of their waste, and recycling. Uneputty et al. (1998) found that individuals who had volunteered to clean beaches continued to remove trash from beaches and not litter months after they had participated in a volunteer program. Furthermore, surveys and interviews have revealed that once individuals begin volunteering in marine conservation efforts, they want to continue.
Multiple studies have determined that volunteers, whether organized through schools and universities or individual interest, can significantly reduce the quantity of solid waste on beaches.
Numerous volunteer beach cleaning programs have been facilitated by schools that promote service learning opportunities. These studies, in conjunction with research conducted with participants that joined programs entirely voluntarily, have demonstrated that groups that were and were not previously concerned about marine litter can experience an increase in awareness and knowledge, as well as positive behavior change through the hands on experience and learning involved in volunteering.
Beach cleaning volunteers reap the same, if not more, benefits from their participation as individuals who participate in other coastal activities. Wyles et al. (2017) studied the impact various coastal activities (beach cleaning, rock pooling, and walking on the beach) had on well being and discovered that all three led to a similar betterment in mood. However, individuals who participated in beach cleaning described a more intense sense of fulfillment compared to the other groups.
While further research has not been completed on the mental and emotional benefits of beach cleaning, volunteers who promote environmental stewardship have reported improvements in their well being.
Public engagement and collection methods
A study conducted in Catalonia in the late 1990s found that, on the beaches of the Llobregat Delta, engaging the public through manual beach cleanup methods improved citizen participation compared with mechanical methods. Moving toward manual cleaning by citizens can benefit the environment and aid local municipalities' work of keeping beaches clean. Dominguez's 2005 study found a correlation between citizen involvement and the use of manual beach cleaning methods. This study also found that the amount of manual labor, as well as the number of employees required to manually clean stretches of beach, was much less than anticipated.
Most polluted and cleanest beaches in the world
Most polluted beaches
Many researchers report that ocean currents transfer floating litter into the five subtropical gyres. Thus, anthropogenic marine debris is present in all oceans, on beaches and at the sea surface; even Arctic sea ice contains small plastic particles, or microplastics. According to Bhatia (2019), the ten most polluted beaches in the world are:
Phú Quốc, Vietnam.
Maya Bay, Thailand.
Kamilo Beach, Hawaii, US.
Kuta Beach, Indonesia.
Juhu Beach, India.
Kota Kinabalu, Malaysia.
Guanabara Bay, Brazil.
Serendipity Beach, Cambodia.
Haina, Dominican Republic.
San Clemente Pier, California, US.
Cleanest beaches
According to Nguyen (2019), there are still some clean beaches around the world. One way to find out whether a beach is clean is to look for a Blue Flag. The Blue Flag is the world's most recognized voluntary eco-label awarded to beaches, marinas, and sustainable boating tourism operators. The flag shows that a beach meets high environmental and quality standards. The six cleanest Blue Flag awarded beaches are:
Victoria Beach, Canada.
Santa Maria Beach, Los Cabos.
Dado Beach, Israel.
Mellieha Bay, Malta.
Palmestranden Beach, Denmark.
Zona Balnear da Lagoa, Portugal.
See also
Global warming
Anthropocene
Ecology
Marine ecosystem
Oil spill
Earth Day
Exploding whale
Great Canadian Shoreline Cleanup
Marine Conservation Society
Ocean Conservancy
References
Cleaning
Environmental protection
Marine conservation
Environmental volunteering
Ocean pollution
High-integrity software is software whose failure may cause serious damage with possible "life-threatening consequences." "Integrity is important as it demonstrates the safety, security, and maintainability of... code." Examples of high-integrity software are nuclear reactor control, avionics software, automotive safety-critical software and process control software.
A number of standards are applicable to high-integrity software, including:
DO-178C, Software Considerations in Airborne Systems and Equipment Certification
CENELEC EN 50128, Railway applications - Communication, signalling and processing systems - Software for railway control and protection systems
IEC 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES)
ISO 26262, Road Vehicles - Functional Safety (especially 'part 6' of the standard, which is titled "Product development at the software level")
See also
Safety-critical system
High availability software
Formal methods
Software of unknown pedigree
References
External links
Software by type
Software quality
Safety engineering
The EXtreme PREcision Spectrograph (EXPRES) is an optical fiber-fed echelle instrument designed and built at the Yale Exoplanet Laboratory to be installed on the 4.3-meter Lowell Discovery Telescope operated by Lowell Observatory. Its goal is to achieve 10 cm/s radial velocity precision. EXPRES uses a laser frequency comb as its primary wavelength calibration source.
See also
ESPRESSO spectrograph
HARPS3
References
Astronomical instruments
Telescope instruments
Exoplanet search projects
Spectrographs
The molecular formula C19H21NO6 may refer to:
Oxodipine
XP-21279
Concise International Chemical Assessment Documents (CICADs) are published by the World Health Organization within the framework of the International Programme on Chemical Safety (IPCS). They describe the toxicological properties of chemical compounds.
CICADs are prepared in draft form by one or two experts from national bodies such as the US CDC, and then peer reviewed by an international group of experts. They do not constitute the official policy of any of the bodies which contribute to their publication.
References
External links
Official site
Chemical safety
The quantum-confined Stark effect (QCSE) describes the effect of an external electric field upon the light absorption spectrum or emission spectrum of a quantum well (QW). In the absence of an external electric field, electrons and holes within the quantum well may only occupy states within a discrete set of energy subbands. Only a discrete set of frequencies of light may be absorbed or emitted by the system. When an external electric field is applied, the electron states shift to lower energies, while the hole states shift to higher energies. This reduces the permitted light absorption or emission frequencies. Additionally, the external electric field shifts electrons and holes to opposite sides of the well, decreasing the overlap integral, which in turn reduces the recombination efficiency (i.e. fluorescence quantum yield) of the system.
The spatial separation between the electrons and holes is limited by the presence of the potential barriers around the quantum well, meaning that excitons are able to exist in the system even under the influence of an electric field. The quantum-confined Stark effect is used in QCSE optical modulators, which allow optical communications signals to be switched on and off rapidly.
Even if quantum objects (wells, dots or discs, for instance) generally emit and absorb light with higher energies than the band gap of the material, the QCSE may shift the energy to values lower than the gap. This was evidenced recently in the study of quantum discs embedded in a nanowire.
Theoretical description
The shift in absorption lines can be calculated by comparing the energy levels in unbiased and biased quantum wells. It is a simpler task to find the energy levels in the unbiased system, due to its symmetry. If the external electric field is small, it can be treated as a perturbation to the unbiased system and its approximate effect can be found using perturbation theory.
Unbiased system
The potential for a quantum well may be written as

V(z) = 0 for |z| < L/2,  V(z) = V_0 for |z| ≥ L/2,

where L is the width of the well and V_0 is the height of the potential barriers. The bound states in the well lie at a set of discrete energies E_n, and the associated wavefunctions can be written using the envelope function approximation as follows:

ψ_n(r) = (1/√A) exp(i k_⊥ · r_⊥) u(r) φ_n(z)

In this expression, A is the cross-sectional area of the system, perpendicular to the quantization direction, u(r) is a periodic Bloch function for the energy band edge in the bulk semiconductor and φ_n(z) is a slowly varying envelope function for the system.
If the quantum well is very deep, it can be approximated by the particle in a box model, in which V_0 → ∞. Under this simplified model, analytical expressions for the bound state wavefunctions exist. Taking the well to be centered at z = 0, they have the form

φ_n(z) = √(2/L) cos(nπz/L) for odd n,  φ_n(z) = √(2/L) sin(nπz/L) for even n.

The energies of the bound states are

E_n = n²π²ħ² / (2 m* L²),

where m* is the effective mass of an electron in a given semiconductor.
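As a quick numerical check of the particle-in-a-box energies, the sketch below evaluates E_n = n²π²ħ²/(2m*L²) for an electron; the GaAs-like effective mass (0.067 m_0) and the 10 nm well width are illustrative values, not taken from the text.

```python
import math

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M0 = 9.109_383_7e-31    # electron rest mass, kg
EV = 1.602_176_6e-19    # J per eV

def infinite_well_energy(n, m_eff, width):
    """E_n = n^2 pi^2 hbar^2 / (2 m* L^2) for an infinite square well."""
    return (n * math.pi * HBAR) ** 2 / (2 * m_eff * width ** 2)

# Illustrative: electron in a 10 nm GaAs-like well (m* ~ 0.067 m0)
m_star = 0.067 * M0
L = 10e-9
for n in (1, 2, 3):
    print(f"E_{n} = {infinite_well_energy(n, m_star, L) / EV * 1000:.1f} meV")
```

The n² scaling is immediately visible in the output: each level is n² times the ground-state confinement energy (roughly 56 meV for these parameters).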
Biased system
Supposing the electric field is biased along the z direction,

F = F ẑ,

the perturbing Hamiltonian term is

H' = eFz.

The first order correction to the energy levels is zero due to symmetry:

ΔE_n(1) = ⟨φ_n|eFz|φ_n⟩ = 0.

The second order correction is, for instance for n = 1,

ΔE_1(2) ≈ |⟨φ_2|eFz|φ_1⟩|² / (E_1 − E_2) = −(512/(243π⁶)) m_e* e²F²L⁴/ħ²

for the electron, where the additional approximation of neglecting the perturbation terms due to the bound states with k even and greater than 2 has been introduced. By comparison, the perturbation terms from odd-k states are zero due to symmetry.

Similar calculations can be applied to holes by replacing the electron effective mass m_e* with the hole effective mass m_h*. Introducing the total effective mass M = m_e* + m_h*, the energy shift of the first optical transition induced by QCSE can be approximated to:

ΔE ≈ −(512/(243π⁶)) M e²F²L⁴/ħ²
The downward shift in the confined energy level discussed in the above equation is referred to as the Franz-Keldysh effect.
The approximations made so far are quite crude; nonetheless, the energy shift does experimentally show a square-law dependence on the applied electric field, as predicted.
Absorption coefficient
Additionally to the redshift towards lower energies of the optical transitions, the DC electric field also induces a decrease in magnitude of the absorption coefficient, as it decreases the overlap integrals of the corresponding valence and conduction band wave functions. Given the approximations made so far and the absence of any applied electric field along z, the overlap integral for the transitions will be:

∫ φ_n^e(z)* φ_n'^h(z) dz = δ_nn'
To calculate how this integral is modified by the quantum-confined Stark effect we once again employ time independent perturbation theory.
The first order correction for the wave function is

|ψ_n(1)⟩ = Σ_{k≠n} [⟨φ_k|eFz|φ_n⟩ / (E_n − E_k)] |φ_k⟩.
Once again we look at the n = 1 energy level and consider only the perturbation from the n = 2 level (notice that the perturbation from n = 3 would be zero due to symmetry). We obtain

ψ_1^c = N_c (φ_1 + b_c φ_2),  ψ_1^v = N_v (φ_1 − b_v φ_2)

for the conduction and valence band respectively, where b_i = eF⟨φ_2|z|φ_1⟩/(E_2 − E_1), evaluated with the appropriate effective mass, and N has been introduced as a normalization constant. The opposite signs reflect that the field pushes electrons and holes to opposite sides of the well. For any applied electric field F ≠ 0 we obtain

⟨ψ_1^c|ψ_1^v⟩ = (1 − b_c b_v) / √[(1 + b_c²)(1 + b_v²)] < 1.
Thus, according to Fermi's golden rule, which says that transition probability depends on the above overlapping integral, optical transition strength is weakened.
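To see numerically how the field weakens the transition, one can evaluate the overlap of the perturbed envelope functions, keeping only the n = 2 admixture as in the perturbative treatment above. The matrix element ⟨φ_2|z|φ_1⟩ = 16L/(9π²) is the standard infinite-well result; the GaAs-like masses, field and well width are illustrative assumptions.

```python
import math

HBAR, M0, E = 1.054_571_8e-34, 9.109_383_7e-31, 1.602_176_6e-19

def mixing(m_eff, field, width):
    """|b| = eF <phi_2|z|phi_1> / (E_2 - E_1), with <phi_2|z|phi_1> = 16L/(9 pi^2)."""
    z12 = 16 * width / (9 * math.pi ** 2)
    de = 3 * (math.pi * HBAR) ** 2 / (2 * m_eff * width ** 2)  # E_2 - E_1
    return E * field * z12 / de

def overlap(b_e, b_h):
    # electron and hole are pushed to opposite sides, so the admixtures oppose
    return (1 - b_e * b_h) / math.sqrt((1 + b_e ** 2) * (1 + b_h ** 2))

b_e = mixing(0.067 * M0, 5e6, 10e-9)   # electron, 50 kV/cm, 10 nm well
b_h = mixing(0.45 * M0, 5e6, 10e-9)    # heavy hole
print(f"overlap = {overlap(b_e, b_h):.3f}")  # < 1: weakened transition
```

The overlap drops below unity as soon as the field is switched on, consistent with the reduced absorption strength predicted by Fermi's golden rule.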
Excitons
The description of quantum-confined Stark effect given by second order perturbation theory is extremely simple and intuitive. However, to correctly depict QCSE the role of excitons has to be taken into account. Excitons are quasiparticles consisting of a bound state of an electron-hole pair, whose binding energy in a bulk material can be modelled as that of a hydrogenic atom:

E(n) = −(μ / (m_0 ε_r²)) R_y / n²,

where R_y is the Rydberg constant, μ is the reduced mass of the electron-hole pair and ε_r is the relative electric permittivity.

The exciton binding energy E_B has to be included in the energy balance of photon absorption processes:

ħω = E_g − E_B

Exciton generation therefore redshifts the optical band gap towards lower energies.
If an electric field is applied to a bulk semiconductor, a further redshift in the absorption spectrum is observed due to the Franz–Keldysh effect. Due to their opposite electric charges, the electron and the hole constituting the exciton will be pulled apart under the influence of the external electric field. If the field is strong enough,

e F a_B ≳ E_B,

where a_B is the exciton Bohr radius, then excitons cease to exist in the bulk material. This somewhat limits the applicability of the Franz–Keldysh effect for modulation purposes, as the redshift induced by the applied electric field is countered by the shift towards higher energies due to the absence of exciton generation.
This problem does not exist in QCSE, as electrons and holes are confined in the quantum wells. As long as the quantum well depth is comparable to the excitonic Bohr radius, strong excitonic effects will be present no matter the magnitude of the applied electric field. Furthermore, quantum wells behave as two dimensional systems, which strongly enhance excitonic effects with respect to bulk material. In fact, solving the Schrödinger equation for a Coulomb potential in a two dimensional system yields an excitonic binding energy of

E(n) = −(μ / (m_0 ε_r²)) R_y / (n − 1/2)²,

which is four times as high as the three dimensional case for the n = 1 solution.
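The factor of four follows directly from comparing the two hydrogenic series at n = 1; a minimal sketch, with energies expressed in units of the effective Rydberg R* = (μ/m_0) R_y / ε_r²:

```python
# Hydrogenic exciton series in 3D vs. 2D, |E_n| in units of the
# effective Rydberg R*.
def binding_3d(n):
    return 1.0 / n ** 2          # |E_n| / R* = 1/n^2

def binding_2d(n):
    return 1.0 / (n - 0.5) ** 2  # |E_n| / R* = 1/(n - 1/2)^2

# Ground state: confinement to two dimensions quadruples the binding energy.
print(binding_2d(1) / binding_3d(1))  # -> 4.0
```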
Optical modulation
Quantum-confined Stark effect's most promising application lies in its ability to perform optical modulation in the near infrared spectral range, which is of great interest for silicon photonics and down-scaling of optical interconnects.
A QCSE based electro-absorption modulator consists of a PIN structure where the intrinsic region contains multiple quantum wells and acts as a waveguide for the carrier signal. An electric field can be induced perpendicularly to the quantum wells by applying an external, reverse bias to the PIN diode, causing QCSE. This mechanism can be employed to modulate wavelengths below the band gap of the unbiased system and within the reach of the QCSE induced redshift.
Although first demonstrated in GaAs/AlxGa1-xAs quantum wells, QCSE started to generate interest after its demonstration in Ge/SiGe. Unlike III/V semiconductors, Ge/SiGe quantum well stacks can be epitaxially grown on top of a silicon substrate, provided a buffer layer is present between the two. This is a decisive advantage, as it allows Ge/SiGe QCSE to be integrated with CMOS technology and silicon photonics systems.
Germanium is an indirect gap semiconductor, with a bandgap of 0.66 eV. However, it also has a relative minimum in the conduction band at the Γ point, with a direct bandgap of 0.8 eV, which corresponds to a wavelength of 1550 nm. QCSE in Ge/SiGe quantum wells can therefore be used to modulate light at 1.55 μm, which is crucial for silicon photonics applications, as 1.55 μm lies in the optical fiber's transparency window and is the most extensively employed wavelength for telecommunications.
By fine tuning material parameters such as quantum well depth, biaxial strain and silicon content in the well, it is also possible to tailor the optical band gap of the Ge/SiGe quantum well system to modulate at 1310 nm, which also corresponds to a transparency window for optical fibers.
Electro-optic modulation by QCSE using Ge/SiGe quantum wells has been demonstrated up to 23 GHz, with energies per bit as low as 108 fJ, and has been integrated in a waveguide configuration on a SiGe waveguide.
See also
Franz–Keldysh effect
Citations
General sources
Mark Fox, Optical properties of solids, Oxford, New York, 2001.
Hartmut Haug, Quantum Theory of the Optical and Electronic Properties of Semiconductors, World Scientific, 2004.
https://web.archive.org/web/20100728030241/http://www.rle.mit.edu/sclaser/6.973%20lecture%20notes/Lecture%2013c.pdf
Shun Lien Chuang, Physics of Photonics Devices, Wiley, 2009.
Quantum electronics
Quantum mechanics
The Computer Contradictionary is a non-fiction book by Stan Kelly-Bootle that compiles a satirical list of definitions of computer industry terms. It was originally published as The Devil's DP Dictionary. It is an example of "cynical lexicography" in the tradition of Ambrose Bierce's The Devil's Dictionary. Rather than offering a factual account of usage, its definitions are largely made up by the author.
The book was published in May 1995 by MIT Press and is an update of Kelly-Bootle's The Devil's DP Dictionary which appeared in 1981.
Examples
Endless loop. See: Loop, endless
Loop, endless. See: Endless loop
Recursion. See: Recursion
Reception
The Los Angeles Times panned the book, writing that it was "smartly-titled" but was an "awfully stupid book". ACM Computing Reviews recommended dipping into it because "a dictionary is a difficult read".
References
Satirical books
MIT Press books
1995 books
Computer humour
Hebeloma cylindrosporum is a species of mushroom-forming fungus in the genus Hebeloma and the family Hymenogastraceae. It belongs to the division Basidiomycota, which contains many of the mushroom-forming fungal species. It was described as new to science in 1965 by French mycologist Henri Romagnesi.
Taxonomy and phylogeny
Cylindr- is derived from Greek and means cylindrical, and -sporum is derived from Latin and refers to a spore. Cylindrosporum thus means "cylindrical spore".
The mushroom was first described by the mycologist Henri Romagnesi, who found the mushroom in France in 1961. However, the mycologist A.A. Pearson had found the mushroom in South Africa in 1948.
The genetic material of H. cylindrosporum has been sequenced. It has been found that the Hebeloma genus phylogenetic data may be "related to saprotrophic species in the genera Agrocybe or Pholiota". The species H. cylindrosporum is "phylogenetically distantly related to H. crustuliniforme, and other Hebeloma species".
Morphology
The cap of the mushroom is usually convex, and the color of the mushroom is usually a "yellowish brown, occasionally dark brick, rarely cinnamon". H. cylindrosporum has gills that are usually notched before attaching to the stem of the fungi, but the gills are sometimes not notched before attaching to the stem.
H. cylindrosporum spores are cylindrically shaped, which is also where the fungus gets its name. The spores are "often brown, occasionally yellow brown, rarely beige or yellow". The fungus also has basidia.
Ecology
H. cylindrosporum is an ectomycorrhizal species of fungus. The fungus is "associated with Pinus pinaster". H. cylindrosporum forms a Hartig net with the roots of the pine tree, and helps the pine tree to take up phosphorus and nitrogen. H. cylindrosporum can also associate with other hosts such as Larix laricina, Dryas integrifolia, and Quercus acutissima in a laboratory.
The fungus has mainly been found in Europe, though it has also been found in Africa and temperate Asia. H. cylindrosporum has been found in "sandy soils with no or very little organic matter", mostly in "coastal sand dune ecosystems along the Atlantic south-west coast of France".
Relevance for humans
There have been several studies assessing how H. cylindrosporum interacts with different metals.
In one study, it was found that H. cylindrosporum has two genes that are "involved in metal homeostasis and detoxification". The study found that both genes were able to show some sort of response to copper, but only one of the genes was "highly responsive to Cu induction and is likely to be involved in the detoxification of this metal". The gene that was less responsive to copper was found to be the only gene "involved in conferring tolerance to Cd". Another study involving cadmium and H. cylindrosporum found that an enzyme made by the fungus increased the production of "a core component in the mycorrhizal defense system under Cd stress for Cd homeostasis and detoxification".
In another study involving arsenic and H. cylindrosporum, it was found that "when in an As contaminated soil, these ECM fungi forms a symbiotic association with the plant roots, protecting the host plant from As stress". This study found that the fungi "transfer of soil As to plant roots by conjugating it with gluthathione and accumulating inside the vacuoles". The researchers conclude "that H. cylindrosporum is efficient in dealing with As stress and may offer global potential in its bioremediation".
See also
List of Hebeloma species
References
cylindrosporum
Fungi described in 1965
Fungi of Europe
Fungus species
Adulteration is a legal offense; when a food fails to meet the legal standards set by the government, it is said to be adulterated. One form of adulteration is the addition of another substance to a food item in order to increase its quantity in raw or prepared form, which results in the loss of the actual quality of the food item. These substances may be either other food items or non-food items. Among meat and meat products, some of the items used to adulterate are water or ice, carcasses, or carcasses of animals other than the animal meant to be consumed. In the case of seafood, adulteration may refer to species substitution (mislabeling), which replaces the species identified on the product label with another species, or to undisclosed processing methods, in which treatments such as additives, excessive glazing, or short-weighting are not disclosed to the consumer.
History
Historians have recognized cases of food adulteration in Ancient Rome and the Middle Ages. Contemporary accounts of adulteration date from the 1850s to the present day.
Legislative
In the United States, the Food and Drug Administration (FDA) regulates and enforces laws on food safety as well as food defense. The FDA provides technical definitions of adulterated food in various United States laws.
1906 Federal Meat Inspection Act (21 U.S.C. 601 et seq.)
1938 Federal Food, Drug, and Cosmetic Act (21 U.S.C. 321 et seq.)
1957 Poultry Products Inspection Act (21 U.S.C. 451 et seq.)
2011 Food Safety and Modernization Act
Federal Food, Drug, and Cosmetic Act
The Federal Food, Drug, and Cosmetic (FD&C) Act provides that food is "adulterated" if it meets any one of the following criteria:
(1) it bears or contains any "poisonous or deleterious substance" which may render it injurious to health;
(2) it bears or contains any added poisonous or added deleterious substance (other than a pesticide residue, food additive, color additive, or new animal drug, which are covered by separate provisions) that is unsafe;
(3) its container is composed, in whole or in part, of any poisonous or deleterious substance which may render the contents injurious to health;
or (4) it bears or contains a pesticide chemical residue that is unsafe. (Note: The United States Environmental Protection Agency (EPA) establishes tolerances for pesticide residues in foods, which are enforced by the FDA.)
Food also meets the definition of adulteration if:
(5) it is, or it bears or contains, an unsafe food additive;
(6) it is, or it bears or contains, an unsafe new animal drug;
(7) it is, or it bears or contains, an unsafe color additive;
(8) it consists, in whole or in part, of "any filthy, putrid, or decomposed substance" or is otherwise unfit for food;
or (9) it has been prepared, packed, or held under unsanitary conditions (insect, rodent, or bird infestation) whereby it may have become contaminated with filth or rendered injurious to health.
Further, food is considered adulterated if:
(10) it has been irradiated and the irradiation processing was not done in conformity with a regulation permitting irradiation of the food in question (the FDA has approved irradiation of a number of foods, including refrigerated or frozen uncooked meat, fresh or frozen uncooked poultry, and seeds for sprouting [21 C.F.R. Part 179].);
(11) it contains a dietary ingredient that presents a significant or unreasonable risk of illness or injury under the conditions of use recommended in labeling (for example, foods or dietary supplements containing aristolochic acids, which have been linked to kidney failure, have been banned.);
(12) a valuable constituent has been omitted in whole or in part or replaced with another substance; damage or inferiority has been concealed in any manner; or a substance has been added to increase the product's bulk or weight, reduce its quality or strength, or make it appear of greater value than it is (this is "economic adulteration");
or (13) it is offered for import into the United States and is a food that has previously been refused admission unless the person reoffering the food establishes that it is in compliance with U.S. law [21 U.S.C. § 342].
Federal Meat Inspection Act and the Poultry Products Inspection Act
The Federal Meat Inspection Act and the Poultry Products Inspection Act of 1957 contain similar provisions for meat and poultry products [21 U.S.C. §§ 453(g), 601(m)].
Poisonous or deleterious substances
Generally, if a food contains a poisonous or deleterious substance that may render it injurious to health, it is considered to be adulterated. For example, apple cider contaminated with E. coli O157:H7 and Brie cheese contaminated with Listeria monocytogenes are adulterated. There are two exceptions to this general rule. First, if the poisonous substance is inherent or naturally occurring and its quantity in the food does not ordinarily render it injurious to health, the food will not be considered adulterated. Thus, a food that contains a natural toxin at very low levels that would not ordinarily be harmful (for instance, small amounts of amygdalin in apricot kernels) is not adulterated.
Second, if the poisonous or deleterious substance is unavoidable and is within an established tolerance, regulatory limit, or action level, the food will not be deemed to be adulterated. Tolerances and regulatory limits are thresholds above which a food will be considered adulterated. They are binding on FDA, the food industry, and the courts. Action levels are limits at or above which FDA may regard food as adulterated. They are not binding on FDA. FDA has established numerous action levels (for example, one part per million methylmercury in fish), which are set forth in its booklet Action Levels for Poisonous or Deleterious Substances in Human Food and Animal Feed.
If a food contains a poisonous substance in excess of a tolerance, regulatory limit, or action level, mixing it with "clean" food to reduce the level of contamination is not allowed. Deliberately mixing adulterated food with good food renders the finished product adulterated (FDA, Compliance Policy Guide [CPG § 555.200]).
Filth and foreign matter
Filth and extraneous material include any objectionable substances in foods, such as foreign matter (for example, glass, metal, plastic, wood, stones, sand, cigarette butts), undesirable parts of the raw plant material (such as stems, pits in pitted olives, pieces of shell in canned oysters), and filth (namely, mold, rot, insect and rodent parts, excreta, decomposition). Under a strict reading of the FD&C Act, any amount of filth in a portion of food would render it adulterated. FDA regulations, however, authorize the agency to issue Defect Action Levels (DALs) for natural, unavoidable defects that at low levels do not pose a human health hazard [21 C.F.R. § 110.110]. These DALs are advisory only; they do not have the force of law and do not bind FDA. DALs are set forth in FDA's Compliance Policy Guides and are compiled in the FDA and Center for Food Safety and Applied Nutrition (CFSAN) Defect Action Level Handbook.
In most cases, DALs are food-specific and defect-specific. For example, the DAL for insect fragments in peanut butter is an average of thirty or more insect fragments per 100 grams (g) [CPG § 570.300]. In the case of hard or sharp foreign objects, the DAL, which is based on the size of the object and the likelihood it will pose a risk of choking or injury, applies to all foods (see CPG § 555.425).
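As a sketch of how such a defect action level might be applied to lot data (the function, subsample counts, and averaging scheme are illustrative and do not represent FDA procedure; the 30-fragment figure is the DAL quoted above):

```python
# Sketch: flag a lot when the average insect-fragment count per 100 g of
# subsamples meets or exceeds the defect action level (DAL) of 30.
def exceeds_dal(fragments_per_100g, dal=30):
    avg = sum(fragments_per_100g) / len(fragments_per_100g)
    return avg >= dal

print(exceeds_dal([12, 18, 25]))  # average ~18.3 -> False
print(exceeds_dal([28, 35, 31]))  # average ~31.3 -> True
```

Note that the level applies to the *average* across subsamples, so a single high count does not by itself put a lot over the DAL.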
Economic adulteration
A portion of food is adulterated if it omits a valuable constituent or substitutes another substance, in whole or in part, for a valuable constituent (for instance, olive oil diluted with tea tree oil); conceals damage or inferiority in any manner (such as fresh fruit with food coloring on its surface to conceal defects); or any substance has been added to it or packed with it to increase its bulk or weight, reduce its quality or strength, or make it appear bigger or of greater value than it is (for example, scallops to which water has been added to make them heavier).
Microbiological contamination and adulteration of food
The fact that a food is contaminated with pathogens (harmful microorganisms such as bacteria, viruses, or protozoa) may or may not render it adulterated. Generally, for ready-to-eat foods, the presence of pathogens will render the food adulterated. For example, the presence of Salmonella on fresh fruits or vegetables or in ready-to-eat meat or poultry products (such as luncheon meats) will render those products adulterated.
For meat and poultry products, which are regulated by USDA, the rules are more complicated. Ready-to-eat meat and poultry products contaminated with pathogens, such as Salmonella or Listeria monocytogenes, are adulterated. (Note that hotdogs are considered ready-to-eat products.) For raw meat or poultry products, the presence of pathogens will not always render them adulterated (because raw meat and poultry products are intended to be cooked, and proper cooking should kill pathogens). Raw poultry contaminated with Salmonella is not adulterated. However, USDA's Food Safety and Inspection Service (FSIS) has ruled that raw meat or poultry products contaminated with E. coli O157:H7 are adulterated. This is because normal cooking methods may not reduce E. coli O157:H7 below infectious levels. E. coli O157:H7 is the only pathogen that is considered an adulterant when present in raw meat or poultry products.
Enforcement actions
If food is adulterated, FDA and FSIS have a broad array of enforcement tools. These include seizing and condemning the product, detaining imported product, enjoining persons from manufacturing or distributing the product, or requesting a recall of the product. Enforcement action is usually preceded by a Warning Letter from the FDA to the manufacturer or distributor of the adulterated product. In the case of an adulterated meat or poultry product, FSIS has certain additional powers. FSIS may suspend or withdraw federal inspection of an official establishment. Without federal inspection, an establishment may not produce or process meat or poultry products, and therefore must cease operations. With the exception of infant formula, neither the FDA nor FSIS has the authority to require a company to recall an adulterated food product. However, the ability to generate negative publicity gives them considerable powers of persuasion.
State regulators generally have similar enforcement tools at their disposal to prevent the manufacture and distribution of adulterated food. In addition, many states have the authority to immediately embargo adulterated food and to impose civil fines. Federal agencies often will coordinate with state or local authorities to remove unsafe food from the market as quickly as possible.
See also
Adulterant
List of foodborne illness outbreaks by death toll - includes contamination incidents
Counterfeit medications
References
External links
Food safety
Adulteration
Food fraud
In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm.
Definition
An asymmetric norm on a real vector space $X$ is a function $p : X \to [0, +\infty)$ that has the following properties:
Subadditivity, or the triangle inequality: $p(x + y) \le p(x) + p(y)$ for all $x, y \in X$
Nonnegative homogeneity: $p(rx) = r\,p(x)$ for every $x \in X$ and every non-negative real number $r \ge 0$
Positive definiteness: $p(x) > 0$ unless $x = 0$
Asymmetric norms differ from norms in that they need not satisfy the equality $p(-x) = p(x).$
If the condition of positive definiteness is omitted, then $p$ is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: that for $x \ne 0,$ at least one of the two numbers $p(x)$ and $p(-x)$ is not zero.
Examples
On the real line $\mathbb{R},$ the function $p$ given by
$$p(x) = \begin{cases} |x| & \text{if } x \le 0, \\ 2|x| & \text{if } x \ge 0, \end{cases}$$
is an asymmetric norm but not a norm.
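The three axioms can be spot-checked numerically for a concrete example; the function p(x) = max(2x, −x) below (equal to |x| for x ≤ 0 and 2|x| for x ≥ 0) is one standard asymmetric norm on the real line, and the grid of sample points is an illustrative choice, not a proof:

```python
# Spot-check the asymmetric-norm axioms for p(x) = max(2x, -x) on a grid
# of sample points (numerical evidence only).
def p(x):
    return max(2.0 * x, -x)

xs = [i / 10.0 for i in range(-50, 51)]
for x in xs:
    for y in xs:
        assert p(x + y) <= p(x) + p(y) + 1e-12        # subadditivity
    for r in (0.0, 0.5, 2.0, 7.0):
        assert abs(p(r * x) - r * p(x)) < 1e-12        # nonneg. homogeneity
    assert (p(x) > 0) == (x != 0)                      # positive definiteness

print(p(1.0), p(-1.0))  # 2.0 1.0 -- p(-x) != p(x), so p is not a norm
```

The last line exhibits the asymmetry directly: p(1) = 2 while p(−1) = 1.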
In a real vector space $X,$ the Minkowski functional $p_B$ of a convex subset $B \subseteq X$ that contains the origin is defined by the formula
$$p_B(x) = \inf\{r > 0 : x \in rB\}$$
for $x \in X.$
This functional is an asymmetric seminorm if $B$ is an absorbing set, which means that $\bigcup_{r > 0} rB = X$ and ensures that $p_B(x)$ is finite for each $x \in X.$
Correspondence between asymmetric seminorms and convex subsets of the dual space
If $B^* \subseteq \mathbb{R}^n$ is a convex set that contains the origin, then an asymmetric seminorm can be defined on $\mathbb{R}^n$ by the formula
$$p(x) = \max_{\varphi \in B^*} \langle \varphi, x \rangle.$$
For instance, if $B^*$ is the square with vertices $(\pm 1, \pm 1),$ then $p$ is the taxicab norm $p(x) = |x_1| + |x_2|.$ Different convex sets yield different seminorms, and every asymmetric seminorm on $\mathbb{R}^n$ can be obtained from some convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with convex sets that contain the origin. The seminorm $p$ is
positive definite if and only if $B^*$ contains the origin in its topological interior,
degenerate if and only if $B^*$ is contained in a linear subspace of dimension less than $n,$ and
symmetric if and only if $B^* = -B^*.$
More generally, if $X$ is a finite-dimensional real vector space and $B^*$ is a compact convex subset of the dual space $X^*$ that contains the origin, then $p(x) = \max\{\varphi(x) : \varphi \in B^*\}$ is an asymmetric seminorm on $X.$
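This correspondence can be illustrated numerically: a linear functional attains its maximum over a compact convex set at an extreme point, so p can be evaluated by maximizing over the vertices of B*. The vertex sets and sample points below are illustrative:

```python
# Evaluate p(x) = max over phi in B* of <phi, x> using the extreme points
# of B*, since a linear functional attains its maximum at a vertex.
def seminorm_from_vertices(vertices, x):
    return max(sum(f * xi for f, xi in zip(phi, x)) for phi in vertices)

square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # vertices of the square B*

# The square with vertices (+-1, +-1) reproduces the taxicab norm:
for x in [(3.0, -4.0), (0.5, 0.25), (-2.0, -7.0)]:
    taxicab = abs(x[0]) + abs(x[1])
    assert abs(seminorm_from_vertices(square, x) - taxicab) < 1e-12

# A non-symmetric B* (a triangle containing the origin) gives p(x) != p(-x):
tri = [(2, 0), (-1, 1), (-1, -1)]
print(seminorm_from_vertices(tri, (1.0, 0.0)),
      seminorm_from_vertices(tri, (-1.0, 0.0)))  # 2.0 1.0
```

The symmetric square yields a genuine norm, while the asymmetric triangle yields only an asymmetric norm, matching the B* = −B* criterion above.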
See also
References
S. Cobzas, Functional Analysis in Asymmetric Normed Spaces, Frontiers in Mathematics, Basel: Birkhäuser, 2013.
Linear algebra
Norms (mathematics)
Aspergillus acidus is a species of fungus in the genus Aspergillus. Aspergillus acidus can be used in food fermentation for tea.
References
Further reading
http://www.fung-growth.org
acidus
Fungi described in 1989
Molds used in food production
Fungus species
The kicked rotator, also spelled as kicked rotor, is a paradigmatic model for both Hamiltonian chaos (the study of chaos in Hamiltonian systems) and quantum chaos. It describes a free rotating stick (with moment of inertia $I$) in an inhomogeneous "gravitation like" field that is periodically switched on in short pulses. The model is described by the Hamiltonian
$$\mathcal{H}(\theta, p, t) = \frac{p^2}{2I} + K \cos\theta \sum_{n=-\infty}^{+\infty} \delta(t - nT),$$
where $\theta$ is the angular position of the stick ($\theta = \pi$, the minimum of the potential $K\cos\theta$, corresponds to the position of the rotator at rest), $p$ is the conjugate momentum of $\theta$, $K$ is the kicking strength, $T$ is the kicking period and $\delta$ is the Dirac delta function.
Classical properties
Stroboscopic dynamics
The equations of motion of the kicked rotator are
$$\dot\theta = \frac{\partial \mathcal{H}}{\partial p} = \frac{p}{I}, \qquad \dot p = -\frac{\partial \mathcal{H}}{\partial \theta} = K \sin\theta \sum_n \delta(t - nT).$$
These equations show that between two consecutive kicks, the rotator simply moves freely: the momentum is conserved and the angular position grows linearly in time. On the other hand, during each kick the momentum abruptly jumps by a quantity $K \sin\theta$, where $\theta$ is the angular position near the kick. The kicked rotator dynamics can thus be described by the discrete map
$$p_{n+1} = p_n + K \sin\theta_n, \qquad \theta_{n+1} = \theta_n + \frac{T}{I}\, p_{n+1},$$
where $\theta_n$ and $p_n$ are the canonical coordinates at time $t_n = nT^-$, just before the $n$-th kick. It is usually more convenient to introduce the dimensionless momentum $P = pT/I$, time $t/T$ and kicking strength $\mathcal{K} = KT/I$ to reduce the dynamics to the single parameter map
$$P_{n+1} = P_n + \mathcal{K} \sin\theta_n, \qquad \theta_{n+1} = \theta_n + P_{n+1},$$
known as the Chirikov standard map, with the caveat that $P$ is not taken modulo $2\pi$ as in the standard map. However, one can directly see that two rotators with the same initial angular position but dimensionless momenta $P$ and $P + 2\pi m$ (with $m$ an arbitrary integer) will have the same exact stroboscopic dynamics, but with dimensionless momentum shifted at any time by $2\pi m$ (this is why stroboscopic phase portraits of the kicked rotator are usually displayed in a single momentum cell $0 \le P < 2\pi$).
Transition from integrability to chaos
The kicked rotator is a prototype model used to illustrate the transition from integrability to chaos in Hamiltonian systems and in particular the Kolmogorov–Arnold–Moser theorem. In the limit $\mathcal{K} \to 0$, the system describes the free motion of the rotator: the momentum is conserved (the system is integrable) and the corresponding trajectories are straight lines in the $(\theta, P)$ plane (phase space), that is, tori. For small but non-vanishing perturbation $\mathcal{K}$, instabilities and chaos start to develop. Only quasi-periodic orbits (represented by invariant tori in phase space) remain stable, while other orbits become unstable. For larger $\mathcal{K}$, invariant tori are eventually destroyed by the perturbation. For the value $\mathcal{K} = \mathcal{K}_c \approx 0.9716$, the last invariant torus connecting $\theta = 0$ and $\theta = 2\pi$ in phase space is destroyed.
Diffusion in momentum direction
For $\mathcal{K} > \mathcal{K}_c$, chaotic unstable orbits are no longer constrained by invariant tori in the momentum direction and can explore the full phase space. For $\mathcal{K} \gg \mathcal{K}_c$, the particle after each kick typically moves over a large distance, which strongly modifies the amplitude and sign of the following kick. At long enough times, the particle has thus been submitted to a series of kicks with quasi-random amplitudes. This quasi-random walk is responsible for a diffusion process in the momentum direction, $\langle (P_n - P_0)^2 \rangle \approx D\, n$ (where the average runs over different initial conditions).
More precisely, after $n$ kicks, the momentum of a particle with initial momentum $P_0$ reads $P_n = P_0 + \mathcal{K} \sum_{i=0}^{n-1} \sin\theta_i$ (obtained by iterating the standard map $n$ times). Assuming that the kicks are random and uncorrelated in time, the spreading of the momentum distribution is
$$\langle (P_n - P_0)^2 \rangle = \mathcal{K}^2 \sum_{i=0}^{n-1} \langle \sin^2\theta_i \rangle = \frac{n\,\mathcal{K}^2}{2}.$$
The classical diffusion coefficient in the momentum direction is then given in first approximation by $D = \mathcal{K}^2/2$. Corrections coming from the neglected correlation terms can actually be taken into account, leading to the improved expression
$$D = \frac{\mathcal{K}^2}{2} \left[ 1 - 2 J_2(\mathcal{K})\left(1 - J_2(\mathcal{K})\right) \right],$$
where $J_2$ is the Bessel function of the first kind.
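The diffusive growth can be checked with a direct simulation of the standard map; the ensemble size, kick strength, and loose tolerance below are illustrative choices (pure-Python sketch):

```python
import math
import random

def momentum_spread(n_kicks, K, n_particles=2000, seed=0):
    """Iterate the standard map for an ensemble started at P = 0 with
    random angles, and return the ensemble average of (P_n - P_0)^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        P = 0.0
        for _ in range(n_kicks):
            P += K * math.sin(theta)               # kick
            theta = (theta + P) % (2.0 * math.pi)  # free rotation
        total += P * P
    return total / n_particles

K, n_kicks = 10.0, 200
D_est = momentum_spread(n_kicks, K) / n_kicks  # <P_n^2> ~ D * n
D_ql = K * K / 2.0                             # quasi-linear estimate
print(D_est, D_ql)  # same order of magnitude; Bessel corrections remain
```

For this kick strength the measured coefficient sits below the quasi-linear value, consistent with the correlation corrections discussed above.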
The quantum kicked rotator
Stroboscopic dynamics
The dynamics of the quantum kicked rotator (with wave function $\psi(\theta, t)$) is governed by the time-dependent Schrödinger equation
$$i\hbar \frac{\partial \psi}{\partial t} = \left[ \frac{\hat p^2}{2I} + K \cos\theta \sum_n \delta(t - nT) \right] \psi,$$
with $\hat p = -i\hbar\, \partial_\theta$ (or equivalently $[\hat\theta, \hat p] = i\hbar$).
As for the classical dynamics, a stroboscopic point of view can be adopted by introducing the time propagator $\hat U$ over one kicking period (that is, the Floquet operator), so that $\psi(\theta, t + T) = \hat U \psi(\theta, t)$. After a careful integration of the time-dependent Schrödinger equation, one finds that $\hat U$ can be written as the product of two operators:
$$\hat U = e^{-\frac{i}{\hbar} K \cos\hat\theta}\; e^{-\frac{i}{\hbar} \frac{\hat p^2 T}{2I}}.$$
We recover the classical interpretation: the dynamics of the quantum kicked rotor between two kicks is the succession of a free propagation during a time $T$ followed by a short kick. This simple expression of the Floquet operator (a product of two operators, one diagonal in the momentum basis, the other diagonal in the angular position basis) allows one to easily numerically solve the evolution of a given wave function using the split-step method.
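A minimal sketch of such a split-step propagation, assuming numpy is available and choosing illustrative values for the grid size and the effective constants (here ħ = T = I = 1):

```python
import numpy as np

N = 512                                   # angular grid points on [0, 2*pi)
hbar, K, T, I_rot = 1.0, 5.0, 1.0, 1.0    # illustrative effective constants
theta = 2.0 * np.pi * np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)          # integer momentum indices, p = n*hbar

kick = np.exp(-1j * K * np.cos(theta) / hbar)          # diagonal in theta
free = np.exp(-1j * hbar * n**2 * T / (2.0 * I_rot))   # diagonal in momentum

psi = np.ones(N, dtype=complex) / np.sqrt(N)           # initial state: p = 0

spreads = []
for _ in range(200):
    psi = np.fft.ifft(free * np.fft.fft(kick * psi))   # one Floquet period
    prob_p = np.abs(np.fft.fft(psi))**2
    prob_p /= prob_p.sum()
    spreads.append(float(np.sum(prob_p * (hbar * n)**2)))

print(spreads[-1])  # <p^2> saturates instead of growing linearly
```

Tracking `spreads` over the kicks shows the momentum variance saturating rather than growing diffusively, which is the dynamical localization discussed below.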
Because of the periodic boundary conditions at $\theta = 0, 2\pi$, any wave function can be expanded in a discrete momentum basis, $p = n\hbar$ (with $n$ an integer; see Bloch theorem), so that
$$\psi(\theta, t) = \frac{1}{\sqrt{2\pi}} \sum_{n=-\infty}^{+\infty} c_n(t)\, e^{i n \theta}.$$
Using this relation with the above expression of $\hat U$, we find a recursion relation coupling the momentum amplitudes $c_n$ at successive kicks, whose coupling coefficients are proportional to $J_{n-m}(K/\hbar)$, where $J_k$ is a Bessel function of the first kind.
Dynamical localization
It has been discovered that the classical diffusion is suppressed in the quantum kicked rotator. It was later understood that this is a manifestation of a quantum dynamical localization effect that parallels Anderson localization. There is a general argument that leads to the estimate $t^* \approx D/\hbar_{\rm eff}^2$ for the breaktime of the diffusive behavior,
where $D$ is the classical diffusion coefficient (in dimensionless units). The associated localization scale in momentum is therefore $\sqrt{D\, t^*} \approx D/\hbar_{\rm eff}$.
Link with Anderson tight-binding model
The quantum kicked rotor can actually be formally related to the Anderson tight-binding model, a celebrated Hamiltonian that describes electrons in a disordered lattice with lattice site states $|n\rangle$, where Anderson localization takes place (in one dimension):
$$H = \sum_n \epsilon_n |n\rangle\langle n| + \sum_{n \ne m} t_{nm} |n\rangle\langle m|,$$
where the $\epsilon_n$ are random on-site energies, and the $t_{nm}$ are the hopping amplitudes between sites $n$ and $m$.
In the quantum kicked rotator it can be shown that the plane waves with quantized momentum $p = n\hbar$ play the role of the lattice site states. The full mapping to the Anderson tight-binding model proceeds, for a given eigenstate of the Floquet operator with quasi-energy $\omega$, by building the on-site energies from the free-rotation phases and the hopping amplitudes from the Fourier components of the kick operator. Dynamical localization in the quantum kicked rotator then actually takes place in the momentum basis.
The effect of noise and dissipation
If noise is added to the system, the dynamical localization is destroyed, and diffusion is induced. This is somewhat similar to hopping conductance.
The proper analysis requires figuring out how the dynamical correlations that are responsible for the localization effect are diminished.
Recall that the diffusion coefficient is $D \approx \mathcal{K}^2/2$, because the change in the momentum is the sum of quasi-random kicks $\mathcal{K}\sin\theta_n$. An exact expression for $D$ is obtained by calculating the "area" of the correlation function $C(\tau) = \langle \sin\theta_n \sin\theta_{n+\tau} \rangle$, namely the sum $\sum_\tau C(\tau)$. Note that $C(0) = 1/2$. The same calculation recipe holds also in the quantum mechanical case, and also if noise is added.
In the quantum case, without the noise, the area under $C(\tau)$ is zero (due to long negative tails), while with the noise a practical approximation is $C(\tau) \mapsto C(\tau)\, e^{-|\tau|/\tau_c}$, where the coherence time $\tau_c$ is inversely proportional to the intensity of the noise. Consequently, the noise-induced diffusion coefficient is $D \approx \mathcal{K}^2 \sum_\tau C(\tau)\, e^{-|\tau|/\tau_c}$.
The problem of the quantum kicked rotator with dissipation (due to coupling to a thermal bath) has also been considered. There is an issue here of how to introduce an interaction that respects the angle periodicity of the position coordinate and is still spatially homogeneous. In the first works,
a quantum-optic type interaction has been assumed that involves a momentum-dependent coupling. Later a way to formulate a purely position-dependent coupling, as in the Caldeira–Leggett model, has been figured out, which can be regarded as the earlier version of the DLD model.
Experimental realization with cold atoms
The first experimental realizations of the quantum kicked rotator were achieved by Mark G. Raizen's group in 1995, later followed by the Auckland group, and encouraged a renewed interest in the theoretical analysis. In this kind of experiment, a sample of cold atoms provided by a magneto-optical trap interacts with a pulsed standing wave of light. The light being detuned with respect to the atomic transitions, the atoms undergo a space-periodic conservative force. Hence, the angular dependence is replaced by a dependence on position in the experimental approach. Sub-millikelvin cooling is necessary to obtain quantum effects: because of the Heisenberg uncertainty principle, the de Broglie wavelength, i.e. the atomic wavelength, can become comparable to the light wavelength.
Thanks to this technique, several phenomena have been investigated, notably:
quantum Ratchets;
the Anderson transition in 3D.
See also
Circle map
References
External links
Scholarpedia entry
Quantum mechanics
Articles containing video clips
Chaotic maps
The Saraswati Supercluster is a massive galaxy supercluster about 1.2 gigaparsecs (4 billion light years) away, within the Stripe 82 region of SDSS, in the direction of the constellation Pisces. It is one of the largest structures found in the universe, with a major axis of about 200 Mpc (652 million light years) in diameter. It consists of at least 43 galaxy clusters and forms a galaxy filament.
Discovery
The Saraswati supercluster was discovered in 2017 by a team of astrophysicists from the Inter-University Centre for Astronomy and Astrophysics and the Indian Institute of Science Education and Research in Pune, India, led by Joydeep Bagchi.
Analyzing the data of Stripe 82 of the comprehensive Sloan Digital Sky Survey, particularly the sets of LOWZ data from the Baryon Oscillation Spectroscopic Survey, part of the DR12 catalogue of the SDSS, the team discovered an overdensity of the sampled 625 galaxies from LOWZ and 3,016 from the LEGACY-BOSS-SOUTHERN, a survey of the southern sky that is also a part of SDSS DR12.
It was named after the Hindu goddess of knowledge Saraswati, as well as the mythological Sarasvati river: the Sanskrit name also means "ever flowing stream with many pools", and the supercluster has many clusters and groups moving and merging together.
Cosmology
The Saraswati Supercluster is one of the largest and most massive superclusters known, comparable to the massive Shapley Concentration in the nearby universe. The supercluster consists of 43 massive galaxy clusters, the most massive being Abell 2631 and ZwCL 2341.1+0000. It is surrounded by a network of galaxy filaments, clusters, and voids.
The Saraswati supercluster and its environs reveal that some extreme large-scale, prominent matter density enhancements had formed in the past when dark energy had just started to dominate structure formation. This galactic concentration sheds light on the role of dark energy and cosmological initial conditions in supercluster formation.
References
Galaxy superclusters
Astronomical objects discovered in 2017
Chronopolis is a science fiction short story by British writer J. G. Ballard, first published in 1960. The story begins with a man in prison, Newman, and proceeds to examine his fascination with the concept of time in a world where clocks have been prohibited and are regulated by time police.
"Chronopolis" appears in an anthology edited by Andrew Goodwyn, Science Fiction Stories.
References
1960 short stories
Fiction about time
Science fiction short stories
Short stories by J. G. Ballard
Social science fiction
MERMOZ (also, MERMOZ project and Monitoring planEtary suRfaces with Modern pOlarimetric characteriZation) is an astrobiology project designed to remotely detect biosignatures of life. Detection is based on molecular homochirality, a characteristic property of the biochemicals of life. The aim of the project is to remotely identify and characterize life on the planet Earth from space, and to extend this technology to other solar system bodies and exoplanets. The project began in 2018, and is a collaboration of the University of Bern, University of Leiden and Delft University of Technology.
According to a member of the research team, “When light is reflected by biological matter, a part of the light’s electromagnetic waves will travel in either clockwise or counterclockwise spirals ... This phenomenon is called circular polarization and is caused by the biological matter’s homochirality.” These unique spirals of light indicate living materials; whereas, non-living materials do not reflect such unique spirals of light, according to the researchers.
The research team conducted feasibility studies, using a newly designed detection instrument based on circular spectropolarimetry and named FlyPol+ (an upgrade of the original FlyPol), by flying in a helicopter for 25 minutes. The results were successful in remotely detecting living material, and quickly (within seconds) distinguishing living material from non-living material. The researchers concluded: "Circular spectropolarimetry can be a powerful technique to detect life beyond Earth, and we emphasize the potential of utilizing circular spectropolarimetry as a remote sensing tool to characterize and monitor in detail the vegetation physiology and terrain features of Earth itself."
The researchers next expect to scan the Earth from the International Space Station (ISS) with their detection instruments. One consequence of further successful studies is a possible pathfinder space mission, scheduled to launch in 2024.
See also
Bioindicator
Biosignature
Taphonomy
References
Astrobiology
Astrochemistry
Bioindicators
Biology terminology
Search for extraterrestrial intelligence | MERMOZ | Chemistry,Astronomy,Biology,Environmental_science | 428 |
32,695,878 | https://en.wikipedia.org/wiki/Horizontal%20pitch | Horizontal pitch (HP) is a unit of length defined by the Eurocard printed circuit board standard used to measure the horizontal width of rack mounted electronic equipment, similar to the rack unit (U) used to measure vertical heights of rack mounted equipment. One HP is wide. A standard 19-inch rack is 95 HP wide of which 84 HP is typically usable. A standard 23-inch rack is 115 HP wide of which 104HP is typically usable.
References
Units of length
Mechanical standards
Computer enclosure | Horizontal pitch | Mathematics,Engineering | 100 |
5,830,353 | https://en.wikipedia.org/wiki/Collybia%20nuda | Collybia nuda, commonly known as the blewit or wood blewit and previously described as Lepista nuda and Clitocybe nuda, is an edible mushroom native to Europe and North America. Described by Pierre Bulliard in 1790, it was also known as Tricholoma nudum for many years. It is found in both coniferous and deciduous woodlands. It is a fairly distinctive mushroom that is widely eaten. It has been cultivated in Britain, the Netherlands and France. This species was reassigned to the genus Collybia in 2023.
Taxonomy and naming
The French mycologist Pierre Bulliard described the wood blewit in his work Herbier de la France in 1790 as Agaricus nudus, reporting that it was common in the woods all year. He wrote of two varieties: one whose gills and cap are initially light violet and mature to burgundy, while the other has wine-coloured gills that intensify in colour with age. He added that the first variety was often confused with Cortinarius violaceus, though it has a "nude" cap and no spidery web veil unlike the other species. English naturalist James Bolton gave it the name Agaricus bulbosa—the bulbous agaric—in his An History of Fungusses growing about Halifax in 1791. He noted that it was rare in the region, though had found some in Ovenden.
German mycologist Paul Kummer placed it in the genus Tricholoma in 1871, the same year that English botanist Mordecai Cubitt Cooke placed it in Lepista. It was known by these names for many years, with some authors accepting Lepista while others retained the wood blewit in Tricholoma. In 1969 Howard E. Bigelow and Alexander H. Smith reviewed Lepista and reclassified it as a subgenus of Clitocybe. Finnish mycologist Harri Harmaja has called for the sinking of Lepista into Clitocybe, with C. nebularis as the type species of the latter genus. Hence the wood blewit is classified as either Lepista nuda or Clitocybe nuda.
A 2015 genetic study found that the genera Collybia and Lepista were closely related to the core clade of Clitocybe, but that all three were polyphyletic, with many members in lineages removed from other members of the same genus and instead more closely related to the other two. To complicate matters, the wood blewit is not closely related to the type species of Lepista, L. densifolia. Alvarado and colleagues declined to define the genera but proposed several options and highlighted the need for a wider analysis.
The species is commonly known as the wood blewit. Cooke called it the amethyst lepista, while John Sibthorp called it the blue-gilled agaric in his 1794 work Flora Oxoniensis.
Description
This mushroom can range from lilac to purple-pink. Some North American specimens are duller and tend toward tan, but usually have purplish tones on the stem and gills. Younger specimens are lighter with more convex caps, while mature specimens have a darker color and flatter cap, ranging from in diameter. The gills are attached to the short, stout stem, which is about long and 1–2.5 cm wide, sometimes larger at the base. Wood blewits have a very distinctive odor, which has been likened by one author to that of frozen orange juice.
Wood blewits can be easily distinguished by their odor, as well as by their spore print, which is white to pale pink.
Similar species
Wood blewits can be confused with certain blue or purple species of the genus Cortinarius, including the uncommon C. camphoratus, many of which may be poisonous. Cortinarius mushrooms often have the remains of a veil under their caps and a ring-like impression on their stem. Cortinarius species produce a rusty brown spore print after several hours on white paper. Their brown spores often dust their stems and objects beneath them.
The species also resembles Collybia brunneocephala, Clitocybe tarda, Laccaria amethysteo-occidentalis, and Lepista subconnexa.
Distribution and habitat
The wood blewit is found in Europe and North America and is becoming more common in Australia and New Zealand, where it appears to have been introduced. In Australia it has developed a relationship with some eucalyptus species and gorse; it shows an entirely different growth pattern and differs slightly in appearance from its European Lepista nuda cousins.
It is a saprotrophic species, growing on decaying leaf litter. In the United Kingdom, it appears from September through to December.
Analysis of soil containing mycelium from a wood blewit fairy ring under Norway spruce (Picea abies) and Scots pine (Pinus sylvestris) in southeast Sweden yielded fourteen halogenated low-molecular-weight organic compounds, three of which were brominated and the others chlorinated. It is unclear whether these were metabolites or pollutants. Brominated compounds are unknown as metabolites from terrestrial fungi.
The form glaucocana is found in mountainous environs.
Ecology
In Australia, male satin bowerbirds collect blue objects to decorate their bowers with. A young male was reported to have collected wood blewits to this end near Braidwood in southern New South Wales.
Edibility
Wood blewits are good edible mushrooms.
Blewits can be eaten as a cream sauce or sautéed in butter. They can also be cooked like tripe or as omelette filling, and also make good stewing mushrooms. They have a strong flavour, so they combine well with leeks or onions.
Wood blewits can be dried, or can be preserved in olive oil or white vinegar after blanching.
The wood blewit has been cultivated in Britain, the Netherlands and France. Cultivated wood blewits are said not to taste as good as wild wood blewits.
Gallery
References
External links
Harmaja, H. (2003). Notes on Clitocybe s. lato (Agaricales). Ann. Bot. Fennici 40: 213-218.
Denise Gregory's key to Clitocybe in California
"Mushroom-Collecting.com - The Blewit"
"Clitocybe nuda / Lepista nuda: The Blewit" by Michael Kuo
Clitocybe nuda - Tom Volk's Fungus of the Month for November 1998
All that Rain Promises and More - Blewit
La Cave des Roches - caves in France where cultivated Clitocybe nuda are grown.
nuda
Edible fungi
Fungi described in 1790
Fungi of Europe
Fungi of North America
Fungi in cultivation
Taxa named by Jean Baptiste François Pierre Bulliard
Fungus species | Collybia nuda | Biology | 1,413 |
4,217,131 | https://en.wikipedia.org/wiki/Ditellurium%20decafluoride | Ditellurium decafluoride was widely reported in the literature but what was believed to be Te2F10 has been shown to be teflic anhydride, F5TeOTeF5. An account as to how this error occurred was made by P. M. Watkins.
If it existed, it would be valence isoelectronic with disulfur decafluoride, and have a similar structure.
References
Tellurium compounds
Fluorides
Nonmetal halides
Chalcohalides
Hypothetical chemical compounds | Ditellurium decafluoride | Chemistry | 114 |
24,505,456 | https://en.wikipedia.org/wiki/Gymnopilus%20tropicus | Gymnopilus tropicus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus tropicus at Index Fungorum
tropicus
Fungus species | Gymnopilus tropicus | Biology | 52 |
60,876,593 | https://en.wikipedia.org/wiki/Nilsson%20model | The Nilsson model is a nuclear shell model treating the atomic nucleus as a deformed sphere. In 1953, the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was nonspherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to the large number of valence particles—and this intractability was even greater in the 1950s, when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is the one now known as the Nilsson model. It is essentially a nuclear shell model using a harmonic oscillator potential, but with anisotropy added, so that the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z.
Hamiltonian
For an axially symmetric shape with the axis of symmetry being the z axis, the Hamiltonian is
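A standard form (reconstructed here from the definitions that follow; the coefficient notation C, D is an assumed convention, and signs and normalizations vary between authors) is:

```latex
H = \frac{p^{2}}{2m}
  + \frac{1}{2}\,m\left[\omega_{z}^{2} z^{2} + \omega_{\perp}^{2}\left(x^{2}+y^{2}\right)\right]
  + C\,\boldsymbol{\ell}\cdot\mathbf{s}
  + D\left(\boldsymbol{\ell}^{2} - \langle\boldsymbol{\ell}^{2}\rangle_{N}\right)
```

The constants C and D set the strengths of the spin-orbit and ℓ² terms and are conventionally expressed through the parameters κ and μ discussed below.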
Here m is the mass of the nucleon, N is the total number of harmonic oscillator quanta in the spherical basis, is the orbital angular momentum operator, is its square (with eigenvalues ), is the average value of over the N shell, and s is the intrinsic spin.
The anisotropy of the potential is such that the length of an equipotential along the z is greater than the length on the transverse axes in the ratio . This is conventionally expressed in terms of a deformation parameter δ so that the harmonic oscillator part of the potential can be written as the sum of a spherically symmetric harmonic oscillator and a term proportional to δ. Positive values of δ indicate prolate deformations, like an American football. Most nuclei in their ground states have equilibrium shapes such that δ ranges from 0 to 0.2, while superdeformed states have (a 2-to-1 axis ratio).
The mathematical details of the deformation parameters are as follows. Considering the success of the nuclear liquid drop model, in which the nucleus is taken to be an incompressible fluid, the harmonic oscillator frequencies are constrained so that remains constant with deformation, preserving the volume of equipotential surfaces. Reproducing the observed density of nuclear matter requires , where A is the mass number. The relation between δ and the anisotropy is , while the relation between δ and the axis ratio is .
The remaining two terms in the Hamiltonian do not relate to deformation and are present in the spherical shell model as well. The spin-orbit term represents the spin-orbit dependence of the strong nuclear force; it is much larger than, and has the opposite sign compared to, the special-relativistic spin-orbit splitting. The purpose of the term is to mock up the flat profile of the nuclear potential as a function of radius. For nuclear wavefunctions (unlike atomic wavefunctions) states with high angular momentum have their probability density concentrated at greater radii. The term prevents this from shifting a major shell up or down as a whole. The two adjustable constants are conventionally parametrized as and . Typical values of κ and μ for heavy nuclei are 0.06 and 0.5. With this parametrization, occurs as a simple scaling factor throughout all the calculations.
Choice of basis and quantum numbers
For ease of computation using the computational resources of the 1950s, Nilsson used a basis consisting of eigenstates of the spherical Hamiltonian. The Nilsson quantum numbers are . The difference between the spherical and deformed Hamiltonian is proportional to , and this has matrix elements that are easy to calculate in this basis. They couple the different N shells. Eigenstates of the deformed Hamiltonian have good parity (corresponding to even or odd N) and Ω, the projection of the total angular momentum along the symmetry axis. In the absence of a cranking term (see below), time-reversal symmetry causes states with opposite signs of Ω to be degenerate, so that in the calculations only positive values of Ω need to be considered.
Interpretation
In an odd, well-deformed nucleus, the single-particle levels are filled up to the Fermi level, and the odd particle's Ω and parity give the spin and parity of the ground state.
Cranking
Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier , known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level then produces states whose expected angular momentum along the cranking axis has the desired value set by the Lagrange multiplier.
Total energy
Often one wants to calculate a total energy as a function of deformation. Minima of this function are predicted equilibrium shapes. Adding the single-particle energies does not work for this purpose, partly because kinetic and potential terms are out of proportion by a factor of two, and partly because small errors in the energies accumulate in the sum. For this reason, such sums are usually renormalized using a procedure introduced by Strutinsky.
Plots of energy levels
Single-particle levels can be shown in a "spaghetti plot," as functions of the deformation. A large gap between energy levels at zero deformation indicates a particle number at which there is a shell closure: the traditional "magic numbers." Any such gap, at a zero or nonzero deformation, indicates that when the Fermi level is at that height, the nucleus will be stable relative to the liquid drop model.
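A minimal sketch of how the lines in such a plot are generated, keeping only the anisotropic-oscillator part of the energy (the ℓ·s and ℓ² terms are neglected, so only the gross shell structure appears, and the linearized frequency convention below is an assumed choice):

```python
# "Spaghetti plot" sketch: branch energies of the pure anisotropic
# oscillator, E = w_perp*(n_perp + 1) + w_z*(n_z + 1/2), in units of
# hbar*omega0.  Assumed linearized frequency convention:
#   w_perp = 1 + delta/3,  w_z = 1 - 2*delta/3.

def level_energies(n_max: int, delta: float):
    """Sorted energies of branches (n_perp, n_z) with n_perp + n_z <= n_max.

    The (n_perp + 1)-fold transverse degeneracy of each branch is not
    expanded here.
    """
    w_perp = 1 + delta / 3
    w_z = 1 - 2 * delta / 3
    levels = [
        w_perp * (n_perp + 1) + w_z * (n_z + 0.5)
        for n_perp in range(n_max + 1)
        for n_z in range(n_max + 1 - n_perp)
    ]
    return sorted(levels)

# At delta = 0 every branch of a major shell N = n_perp + n_z sits at
# E = N + 3/2; deformation splits these into the "spaghetti" lines.
print(level_energies(2, 0.0))  # [1.5, 2.5, 2.5, 3.5, 3.5, 3.5]
print(level_energies(2, 0.3))  # degeneracies lifted
```

Plotting each branch against δ reproduces the crossings and gap openings described above: a large gap at δ = 0 marks a spherical magic number, while gaps at nonzero δ mark deformed shell closures.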
External links
An open-source software implementation
References
Nilsson, S.G. "Binding states of individual nucleons in strongly deformed nuclei," doctoral thesis, 1955
Olivius, P., "Extending the nuclear cranking model to tilted axis rotation and alternative mean field potentials," doctoral thesis, Lund University, 2004, http://www.matfys.lth.se/staff/Peter.Olivius/thesis.pdf — describes a modern implementation of the model
Strutinsky, Nucl. Phys. A122 (1968) 1 -- original paper on the Strutinsky method
Salamon and Kruppa, "Curvature Correction in the Strutinsky's Method," http://arxiv.org/abs/1004.0079 — an open-access description of the Strutinsky method
Unknown author, "Appendix Nuclear Structure", with a full array of Nilsson charts for both proton and neutron shells, as well as an equivalent diagram for simple harmonic oscillator nuclei at different deformations: https://application.wiley-vch.de/books/info/0-471-35633-6/toi99/www/struct/struct.pdf
Nuclear physics | Nilsson model | Physics | 1,568 |
52,294,456 | https://en.wikipedia.org/wiki/Maximilian%20Fichtner | Maximilian Fichtner (born 1961 in Heidelberg, Germany) is professor for Solid State Chemistry at the Ulm University and executive director of the Helmholtz Institute Ulm for Electrochemical Energy Storage (HIU).
Education
Fichtner was educated in Food Chemistry and Chemistry at the University of Karlsruhe, now the Karlsruhe Institute of Technology, where he was awarded the Diploma in Chemistry. In 1992 he received a Ph.D. in Chemistry/Surface Science with distinction and the Hermann Billing Award for his thesis, in which he developed a novel method for the spatially resolved speciation of beam-sensitive salts by SIMS. With this method he analysed the surface composition of atmospheric salt aerosol particles and contributed to the current climate model.
Career
Following his PhD, Fichtner spent two years as a young researcher at the former Karlsruhe Nuclear Research Center (KfK) and developed his method further so that it could also be applied to organic materials. In 1994 he became assistant to the board of directors of the Karlsruhe Research Center (FZK) in the area of Basic Research and New Technologies, with Herbert Gleiter as director. In 1997 he left to build up a new activity on microprocess engineering, with a focus on heterogeneous catalysis in microchannels, for fuel processing (methanol steam reforming, partial oxidation of methane) and the synthesis of chemicals. The group was eventually integrated into the new Institute for Microprocess Engineering in 2001. In 2000 he was offered a position at the new Institute of Nanotechnology, INT (founding directors: Herbert Gleiter, Jean-Marie Lehn, Dieter Fenske), to build up a new activity on nanoscale materials for energy storage, and he has been a group leader there since. In 2012 he received a call from Ulm University to become a professor (W3) of Solid State Chemistry, which he accepted in 2013. The position is connected with a role as group leader at the new Helmholtz Institute Ulm. Since 2015 he has been executive director of the institute.
Fichtner has co-ordinated several EU projects and collaborative projects from the German ministries of Economy and Research and Education. He has been organizer of various symposia at MRS and GRC conferences, and he was Chair of the GORDON Research Conference on Metal-Hydrogen Systems in 2013 and of the 1st International Symposium on Magnesium Batteries (MagBatt) in 2016.
Research
In his career Fichtner worked on various topics, covering Theoretical Chemistry, Instrumental Analysis, Higher Administration, Chemical Engineering, Heterogeneous Catalysis, Hydrogen Storage, Electrochemistry and Battery Research.
Pioneering achievements include the first measurements of salts with Secondary Neutral Mass Spectrometry, the development of a depth-resolved speciation of beam-sensitive salts, and a microstructure reactor that could safely burn a stoichiometric hydrogen-oxygen mixture and transfer the heat to a thermal oil, demonstrating that dangerous reactions can be run safely in microstructure reactors.
In the development of hydrogen storage materials, new complex hydride compounds were synthesized and investigated, and the fastest charge and discharge of an aluminum hydride to date, achieved with a new Ti13 catalyst first applied for that purpose by the Bogdanović group at the Max Planck Institute in Mülheim, was independently confirmed. Further work in this area focused on elucidating nanoscale effects in energy materials; building on pioneering work since the late 1990s by various groups worldwide on hydrogen and the effects of nanostructures, studies of the change of thermodynamic properties of complex hydrides were conducted in his group.
In battery research, new synthesis methods were developed to stabilize conversion materials, new types of batteries based on anionic shuttles were presented, and new electrolytes with outstanding voltage windows and non-nucleophilic properties were developed for magnesium batteries, making reversible Mg-S cells possible. Moreover, a new class of cathode materials with the highest packing densities for Li ions to date is being studied, the so-called Li-excess disordered rocksalt materials (DRX), developed by the Gerbrand Ceder group.
References
External links
1961 births
Living people
Scientists from Heidelberg
20th-century German chemists
Academic staff of the University of Ulm
21st-century German chemists
Solid state chemists
Karlsruhe Institute of Technology alumni | Maximilian Fichtner | Chemistry | 907 |
8,674,784 | https://en.wikipedia.org/wiki/Soricomorpha | Soricomorpha (from Greek "shrew-form") is a formerly used taxon within the class of mammals. In the past it formed a significant group within the former order Insectivora. However, Insectivora was shown to be polyphyletic and various new orders were split off from it, including Afrosoricida (tenrecs, golden moles, otter shrews), Macroscelidea (elephant shrews), and Erinaceomorpha (hedgehogs and gymnures), with the four remaining extant and recent families of Soricomorpha shown here then being treated as a separate order. Insectivora was left empty and disbanded.
Subsequently, Soricomorpha itself was shown to be paraphyletic, because Soricidae shared a more recent common ancestor with Erinaceidae than with other soricomorphs. The combination of Soricomorpha and Erinaceidae, referred to as order Eulipotyphla, has been shown to be monophyletic.
Living members of the group range in size from the Etruscan shrew, at about and , to the Cuban solenodon, at about and .
Soricomorpha
Family Soricidae (shrews)
Subfamily Crocidurinae: (white-toothed shrews)
Subfamily Soricinae: (red-toothed shrews)
Subfamily Myosoricinae: (African white-toothed shrews)
Family Talpidae: (moles and close relatives)
Subfamily Scalopinae (New World moles and close relatives)
Subfamily Talpinae (Old World moles and close relatives)
Subfamily Uropsilinae (Chinese shrew-like moles)
Family Solenodontidae: solenodons (rare primitive eulipotyphlans of the Caribbean; two extant species)
Family † Nesophontidae: West Indian shrews (recently extinct eulipotyphlans of the Caribbean)
Family † Heterosoricidae
genus †Atasorex
genus †Dinosorex
genus †Domnina
genus †Gobisorex
genus †Heterosorex
genus †Ingentisorex
genus †Lusorex
genus †Paradomnina
genus †Quercysorex
Family † Nyctitheriidae
References
Taxa named by William King Gregory
Extant Eocene first appearances
Paraphyletic groups
Eulipotyphla
Obsolete mammal taxa | Soricomorpha | Biology | 508 |
23,637 | https://en.wikipedia.org/wiki/Phase%20%28matter%29 | In the physical sciences, a phase is a region of material that is chemically uniform, physically distinct, and (often) mechanically separable. In a system consisting of ice and water in a glass jar, the ice cubes are one phase, the water is a second phase, and the humid air is a third phase over the ice and water. The glass of the jar is a different material, in its own separate phase. (See .)
More precisely, a phase is a region of space (a thermodynamic system), throughout which all physical properties of a material are essentially uniform. Examples of physical properties include density, index of refraction, magnetization and chemical composition.
The term phase is sometimes used as a synonym for state of matter, but there can be several immiscible phases of the same state of matter (as where oil and water separate into distinct phases, both in the liquid state). It is also sometimes used to refer to the equilibrium states shown on a phase diagram, described in terms of state variables such as pressure and temperature and demarcated by phase boundaries. (Phase boundaries relate to changes in the organization of matter, including for example a subtle change within the solid state from one crystal structure to another, as well as state-changes such as between solid and liquid.) These two usages are not commensurate with the formal definition given above and the intended meaning must be determined in part from the context in which the term is used.
Types of phases
Distinct phases may be described as different states of matter such as gas, liquid, solid, plasma or Bose–Einstein condensate. Useful mesophases between solid and liquid form other states of matter.
Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot.
As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases.
Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate.
Phase equilibrium
Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ.
Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase.
At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils.
Number of phases
For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams.
The phase diagram shown here is for a single component system. In this simple system, phases that are possible, depend only on pressure and temperature. The markings show points where two or more phases can co-exist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium.
In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point. As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid. In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa.
An unusual feature of the water phase diagram is that the solid–liquid phase line (illustrated by the dotted green line) has a negative slope. For most substances, the slope is positive as exemplified by the dark green line. This unusual feature of water is related to ice having a lower density than liquid water. Increasing the pressure drives the water into the higher density phase, which causes melting.
Another interesting though not unusual feature of the phase diagram is the point where the solid–liquid phase line meets the liquid–gas phase line. The intersection is referred to as the triple point. At the triple point, all three phases can coexist.
Experimentally, phase lines are relatively easy to map due to the interdependence of temperature and pressure that develops when multiple phases form. Gibbs' phase rule suggests that different phases are completely determined by these variables. Consider a test apparatus consisting of a closed and well-insulated cylinder equipped with a piston. By controlling the temperature and the pressure, the system can be brought to any point on the phase diagram. From a point in the solid stability region (left side of the diagram), increasing the temperature of the system would bring it into the region where a liquid or a gas is the equilibrium phase (depending on the pressure). If the piston is slowly lowered, the system will trace a curve of increasing temperature and pressure within the gas region of the phase diagram. At the point where gas begins to condense to liquid, the direction of the temperature and pressure curve will abruptly change to trace along the phase line until all of the water has condensed.
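Gibbs' phase rule, F = C − P + 2 (with C components, P coexisting phases, and F remaining degrees of freedom among temperature and pressure), makes this interdependence quantitative. A minimal sketch:

```python
# Gibbs' phase rule: F = C - P + 2, where C is the number of components,
# P the number of coexisting phases, and F the number of intensive
# variables (temperature, pressure, composition) that can still be
# varied independently.

def degrees_of_freedom(components: int, phases: int) -> int:
    f = components - phases + 2
    if f < 0:
        raise ValueError("that many phases cannot coexist at equilibrium")
    return f

# A pure substance such as water (C = 1):
print(degrees_of_freedom(1, 1))  # 2: T and p both free within one phase
print(degrees_of_freedom(1, 2))  # 1: coexistence confined to a phase line
print(degrees_of_freedom(1, 3))  # 0: the triple point is a single point
```

This is why two-phase coexistence of a pure substance traces a one-dimensional line on the phase diagram, and why the triple point is an isolated point rather than a region.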
Interfacial phenomena
Between two phases in equilibrium there is a narrow region where the properties are not that of either phase. Although this region may be very thin, it can have significant and easily observable effects, such as causing a liquid to exhibit surface tension. In mixtures, some components may preferentially move toward the interface. In terms of modeling, describing, or understanding the behavior of a particular system, it may be efficacious to treat the interfacial region as a separate phase.
Crystal phases
A single material may have several distinct solid states capable of forming separate phases. Water is a well-known example of such a material. For example, water ice is ordinarily found in the hexagonal form ice Ih, but can also exist as the cubic ice Ic, the rhombohedral ice II, and many other forms. Polymorphism is the ability of a solid to exist in more than one crystal form. For pure chemical elements, polymorphism is known as allotropy. For example, diamond, graphite, and fullerenes are different allotropes of carbon.
Phase transitions
When a substance undergoes a phase transition (changes from one state of matter to another) it usually either takes up or releases energy. For example, when water evaporates, the increase in kinetic energy as the evaporating molecules escape the attractive forces of the liquid is reflected in a decrease in temperature. The energy required to induce the phase transition is taken from the internal thermal energy of the water, which cools the liquid to a lower temperature; hence evaporation is useful for cooling. See Enthalpy of vaporization. The reverse process, condensation, releases heat. The heat energy, or enthalpy, associated with a solid to liquid transition is the enthalpy of fusion and that associated with a solid to gas transition is the enthalpy of sublimation.
Phases out of equilibrium
While phases of matter are traditionally defined for systems in thermal equilibrium, work on quantum many-body localized (MBL) systems has provided a framework for defining phases out of equilibrium. MBL phases never reach thermal equilibrium, and can allow for new forms of order disallowed in equilibrium via a phenomenon known as localization protected quantum order. The transitions between different MBL phases and between MBL and thermalizing phases are novel dynamical phase transitions whose properties are active areas of research.
Notes
References
External links
French physicists find a solution that reversibly solidifies with a rise in temperature – α-cyclodextrin, water, and 4-methylpyridine
Engineering thermodynamics
Condensed matter physics
Concepts in physics | Phase (matter) | Physics,Chemistry,Materials_science,Engineering | 2,067 |
25,700,376 | https://en.wikipedia.org/wiki/Corynebacteriophage | A corynebacteriophage (or just corynephage) is a DNA-containing bacteriophage specific for bacteria of the genus Corynebacterium as its host. The Corynebacterium diphtheriae phage (also known as Corynephage β or simply β-phage) introduces toxigenicity into strains of Corynebacterium diphtheriae, as it encodes diphtheria toxin;
it has the subtypes beta c and beta vir. According to a proposed taxonomic classification, corynephages β and ω are unclassified members of the genus Lambdavirus, family Siphoviridae.
Corynebacteriophages play a crucial role in the ecology and evolution of Corynebacterium species. They are viruses that specifically target and infect these bacteria, injecting their genetic material into the bacterial host and using the host's cellular machinery to replicate and produce more phages. The study of corynephages is important not only for understanding the biology of these viruses but also for potential applications in biotechnology, such as using phages for bacterial control.
Corynebacteriophages, like other bacteriophages, exhibit a diverse range of structures and are classified on the basis of various characteristics, including morphology, nucleic acid type, and genetic content.
Structure and Classification
Corynephages are a diverse group of viruses that specifically infect bacteria within the Corynebacterium genus. Structurally, these phages exhibit a variety of forms, with most having a distinct head-and-tail morphology characteristic of the order Caudovirales. The capsid, typically icosahedral in shape, encloses the genetic material, which can be either double-stranded or single-stranded DNA, varying in size among different phage species. The tail structure, used for attachment to host cells, can be short or long, and its complexity varies across different phages. The specific morphology of a corynephage plays a key role in its classification and is a crucial factor in understanding its interaction with host bacteria.
Genetic Material:
DNA Phages: Most Corynebacteriophages have DNA as their genetic material. The DNA can be single-stranded or double-stranded. RNA Phages: Some phages, though less common, have RNA genomes.
Morphology:
Siphoviridae: Phages in this family typically have long, non-contractile tails and are characterized by their ability to undergo lysogeny (integration of their DNA into the host genome).
Myoviridae: These phages have long, contractile tails and are often associated with a lytic life cycle.
Podoviridae: Phages in this family have short tails.
Life Cycles and Host Interaction
Corynephages exhibit two primary life cycles: lytic and lysogenic. In the lytic cycle, the phage attaches to the bacterial cell, injects its DNA, and uses the cell's machinery to replicate its genome and produce new virions. This process eventually leads to the lysis of the host cell and the release of new phage particles. In contrast, the lysogenic cycle involves the integration of the phage's genetic material into the host's genome, forming a prophage. This integrated DNA is replicated along with the host's genome during cell division. The prophage may eventually enter the lytic cycle in response to specific triggers, such as stress or UV radiation. The choice between these life cycles is influenced by various factors, including the environmental conditions and the genetic makeup of both the phage and the host.
Host Specificity
Phages are often classified based on their host range and specificity, indicating the particular bacterial species or strain they can infect.
Applications in Research and Medicine
Corynephages are of significant interest in the fields of microbiology and biotechnology. Their specificity to Corynebacterium species makes them valuable tools in the study of these bacteria, including pathogenic strains like Corynebacterium diphtheriae. Phages are used to understand bacterial genetics, evolution, and mechanisms of pathogenicity. In the era of increasing antibiotic resistance, corynephages also offer potential in phage therapy, an alternative to traditional antibiotics. Phage therapy exploits the natural ability of phages to infect and lyse specific bacteria, providing a targeted approach to treat bacterial infections, especially those resistant to conventional treatments. Additionally, the genetic diversity and adaptability of corynephages make them a rich source for biotechnological applications, such as the development of novel antibacterial agents or the delivery of therapeutic genes.
Genome Size
The genomes of corynephages can vary widely in size. They typically range from about 15,000 to 100,000 base pairs, although this can vary depending on the specific phage.
Corynebacterium phage Corlili
Phage stock CL31 was obtained from the Felix d'Herelle Reference Center for Bacterial Viruses. CL31 was originally isolated on C. glutamicum ATCC 15990 at the Centre de Recherche de Biochimie et de Génétique Cellulaire,
CNRS, 31062 Toulouse, France, before 1985.
The original CL31 phage stock makes plaques on MB001, albeit at a lower plaque formation efficiency than on ATCC 15990. Plaques obtained on MB001, when picked and propagated on the same strain, resulted in very high titer phage stocks.
References
Bacteriophages | Corynebacteriophage | Biology | 1,145 |
30,310,414 | https://en.wikipedia.org/wiki/SN%202010lt | SN 2010lt is a supernova located in the galaxy UGC 3378 in the constellation Camelopardalis. It was discovered by the amateur astronomers Kathryn Aurora Gray and her father Paul Gray of Fredericton, New Brunswick, Canada, together with David J. Lane of Stillwater Lake, Nova Scotia, Canada. Upon discovery, Kathryn Aurora Gray became the youngest person ever to discover a supernova, being 10 years old when she did so. The previous record was held by the 14-year-old Caroline Moore.
Discovery
The images were taken at Lane's Abbey Ridge Observatory on 31 December 2010 with a Celestron C14 0.36-meter f/5.5 telescope, and the supernova was spotted by them on 2 January 2011. The discovery was confirmed on 3 January 2011 by the amateur astronomers Brian Tieman and Jack Newton and announced by the IAU Central Bureau of Astronomical Telegrams at the Harvard–Smithsonian Center for Astrophysics, Cambridge, Massachusetts. It was announced by the Royal Astronomical Society of Canada on the same day.
Discoverers
David J. Lane: SN 2010lt is Lane's fourth supernova discovery. The previous ones were SN 1995F, SN 2005B and SN 2005ea; all were discovered with Paul Gray.
Paul Gray: SN 2010lt is his seventh discovery of a supernova.
Kathryn Aurora Gray: her first discovery.
Description
SN 2010lt lies about 20" west and 10" north of the galaxy's center. It is a sub-luminous (1991bg-like) type Ia supernova and was discovered near maximum light. The supernova could not be detected (down to a limit of approximately apparent magnitude 18.5) at its position in images taken between October 2005 and March 2006.
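The quoted offsets from the galaxy center (20 arcseconds west, 10 arcseconds north) correspond to a total angular separation of about 22 arcseconds. A small sketch, treating the two offsets as the legs of a flat right triangle (a safe small-angle assumption at this scale):

```python
import math

# Offsets of SN 2010lt from the center of UGC 3378, as quoted in the text.
west_arcsec = 20.0
north_arcsec = 10.0

# At angles this small the sky is effectively flat, so the separation is
# simply the hypotenuse of the two offsets.
separation_arcsec = math.hypot(west_arcsec, north_arcsec)  # ~22.4 arcsec
```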
See also
List of supernovae
References
External links
Dave Lane website, Supernovae discoveries made by Paul Gray and Dave Lane from Abbey Ridge Observatory
Abbey Ridge Observatory
Images of SN 2010lt by Joseph Brimacombe in flickr
Kathryn Gray's website
Supernovae
Camelopardalis
20110102 | SN 2010lt | Chemistry,Astronomy | 419 |
5,162,675 | https://en.wikipedia.org/wiki/Tibolone | Tibolone, sold under the brand name Livial among others, is a medication which is used in menopausal hormone therapy and in the treatment of postmenopausal osteoporosis and endometriosis. The medication is available alone and is not formulated or used in combination with other medications. It is taken by mouth.
Side effects of tibolone include acne and increased hair growth among others. Tibolone is a synthetic steroid with weak estrogenic, progestogenic, and androgenic activity, and hence is an agonist of the estrogen, progesterone, and androgen receptors. It is a prodrug of several metabolites. The estrogenic effects of tibolone may show tissue selectivity in their distribution.
Tibolone was developed in the 1960s and was introduced for medical use in 1988. It is marketed widely throughout the world. The medication is not available in the United States.
Medical uses
Tibolone is used in the treatment of menopausal symptoms like hot flashes and vaginal atrophy, postmenopausal osteoporosis, and endometriosis. It has similar or greater effectiveness compared to older menopausal hormone therapy medications, but shares a similar side effect profile. It has also been investigated as a possible treatment for female sexual dysfunction.
Tibolone reduces hot flashes, prevents bone loss, improves vaginal atrophy and urogenital symptoms (e.g., vaginal dryness, dyspareunia), and has positive effects on mood and sexual function. The medication may have greater benefits on libido than standard menopausal hormone therapy, which may be related to its androgenic effects. It is associated with low rates of vaginal bleeding and breast pain.
A 2015 network meta-analysis of randomized controlled trials found that tibolone was associated with a significantly decreased risk of breast cancer ( = 0.317). The decrease in risk was greater than that observed with most of the aromatase inhibitors and selective estrogen receptor modulators that were included in the analysis. However, paradoxically, other research has found evidence supporting an increased risk of breast cancer with tibolone.
Available forms
Tibolone is available in the form of 2.5 mg oral tablets. It is typically used once daily at a dosage of 1.25 or 2.5 mg.
Side effects
A report in September 2009 from Health and Human Services' Agency for Healthcare Research and Quality suggests that tamoxifen, raloxifene, and tibolone used to reduce the risk of breast cancer significantly reduce the occurrence of invasive breast cancer in midlife and older women, but also increase the risk of adverse effects.
Tibolone can infrequently produce androgenic side effects such as acne and increased facial hair growth. Such side effects have been found to occur in 3 to 6% of treated women.
A 2016 Cochrane review has been published on the short-term and long-term effects of tibolone, including adverse effects. Possible adverse effects of tibolone include unscheduled vaginal bleeding ( = 2.79; incidence 13–26% more than placebo), an increased risk of breast cancer in women with a history of breast cancer ( = 1.5) although apparently not without a history of breast cancer ( = 0.52), an increased risk of cerebrovascular events (strokes) ( = 1.74) and cardiovascular events ( = 1.38), and an increased risk of endometrial cancer ( = 2.04). However, most of these figures are based on very low-quality evidence.
Tibolone has been associated with increased risk of endometrial cancer in most studies.
Pharmacology
Pharmacodynamics
Tibolone possesses a complex pharmacology and has weak estrogenic, progestogenic, and androgenic activity. Tibolone, 3α-hydroxytibolone, and 3β-hydroxytibolone act as agonists of the estrogen receptors. Tibolone and its metabolite δ4-tibolone act as agonists of the progesterone and androgen receptors, while 3α-hydroxytibolone and 3β-hydroxytibolone, conversely, act as antagonists of these receptors. Relative to other progestins, tibolone, including its metabolites, has been described as possessing moderate functional antiestrogenic activity (that is, progestogenic activity), moderate estrogenic activity, high androgenic activity, and no clinically significant glucocorticoid, antiglucocorticoid, mineralocorticoid, or antimineralocorticoid activity. The ovulation-inhibiting dosage of tibolone is 2.5 mg/day.
Estrogenic activity
Tibolone and its two major active metabolites, 3α-hydroxytibolone and 3β-hydroxytibolone, act as potent, fully activating agonists of the estrogen receptor (ER), with a high preference for the ERα. These estrogenic metabolites of tibolone have much weaker activity as estrogens than estradiol (e.g., 3–29% of the affinity of estradiol), but occur at relatively high concentrations that are sufficient for full and marked estrogenic responses to occur.
The estrogenic effects of tibolone show tissue selectivity in their distribution, with desirable effects in bone, the brain, and the vagina, and lack of undesirable action in the uterus, breast, and liver. The observations of tissue selectivity with tibolone have been theorized to be the result of metabolism, enzyme modulation (e.g., of estrogen sulfatase and estrogen sulfotransferase), and receptor modulation that vary in different target tissues. This selectivity differs mechanistically from that of selective estrogen receptor modulators (SERMs) such as tamoxifen, which produce their tissue selectivity via means of modulation of the ER. As such, to distinguish it from SERMs, tibolone has been variously described as a "selective tissue estrogenic activity regulator" (STEAR), "selective estrogen enzyme modulator" (SEEM), or "tissue-specific receptor and intracrine mediator" (TRIM). More encompassingly, tibolone has also been described as a "selective progestogen, estrogen, and androgen regulator" (SPEAR), which is meant to reflect the fact that it is tissue-selective and that it regulates effects not only of estrogens but of all three of the major sex hormone classes. Although indications of tissue selectivity with tibolone have been observed, the medication has paradoxically nonetheless been associated with increased risk of endometrial cancer and breast cancer in clinical studies.
It was reported in 2002 that tibolone or its metabolite δ4-tibolone is transformed by aromatase into the potent estrogen 7α-methylethinylestradiol in women, analogously to the transformation of norethisterone into ethinylestradiol. Controversy and disagreement followed when other researchers contested the findings however. By 2008, these researchers had asserted that tibolone is not aromatized in women and that the previous findings of 7α-methylethinylestradiol detection were merely a methodological artifact. In accordance, a 2009 study found that an aromatase inhibitor had no effect on the estrogenic potencies of tibolone or its metabolites in vitro, unlike the case of testosterone. In addition, another 2009 study found that the estrogenic effects of tibolone on adiposity in rats do not require aromatization (as indicated by the use of aromatase knockout mice), further in support that 3α-hydroxytibolone and 3β-hydroxytibolone are indeed responsible for such effects. These findings are also in accordance with the fact that tibolone decreases sex hormone-binding globulin (SHBG) levels by 50% in women and does not increase the risk of venous thromboembolism (VTE) ( = 0.92), which would not be expected if the medication formed a potent, liver metabolism-resistant estrogen similar to ethinylestradiol in important quantities. (For comparison, combined oral contraceptives containing ethinylestradiol, due mostly or completely to the estrogen component, have been found to increase SHBG levels by 200 to 400% and to increase the risk of VTE by about 4-fold ( = 4.03).)
In spite of the preceding, others have held, as recently as 2011, that tibolone is converted into 7α-methylethinylestradiol in small quantities. They have claimed that 19-nortestosterone derivatives like tibolone, due to lacking a C19 methyl group, indeed are not substrates of the classical aromatase enzyme, but instead are still transformed into the corresponding estrogens by other cytochrome P450 monooxygenases. In accordance, the closely structurally related AAS trestolone (7α-methyl-19-nortestosterone or 17α-desethynyl-δ4-tibolone) has been found to be transformed into 7α-methylestradiol by human placental microsomes in vitro. Also in accordance, considerably disproportionate formation of ethinylestradiol occurs when norethisterone is taken orally (and hence undergoes first-pass metabolism in the liver) relative to parenterally, despite the absence of aromatase in the adult human liver.
Progestogenic activity
Tibolone and δ4-tibolone act as agonists of the progesterone receptor (PR). Tibolone has low affinity of 6% of that of promegestone for the PR, while δ4-tibolone has high affinity of 90% of that of promegestone for the PR. In spite of its high affinity for the PR however, δ4-tibolone possesses only weak progestogenic activity, about 13% of that of norethisterone. The weak progestogenic activity of tibolone may not be sufficient to fully counteract estrogenic activity of tibolone in the uterus and may be responsible for the increased risk of endometrial cancer that has been observed with tibolone in women in large cohort studies.
Androgenic activity
Tibolone, mainly via δ4-tibolone, has androgenic activity. Whereas tibolone itself has only about 6% of the affinity of metribolone for the androgen receptor, δ4-tibolone has relatively high affinity of about 35% of the affinity of metribolone for this receptor. At typical clinical dosages in women, the androgenic effects of tibolone are weak. However, relative to other 19-nortestosterone progestins, the androgenic activity of tibolone is high, with a potency comparable to that of testosterone. Indeed, the androgenic effects of tibolone have been ranked as stronger than those of all other commonly used 19-nortestosterone progestins (e.g., norethisterone, levonorgestrel, others).
The androgenic effects of tibolone have been postulated to be involved in the reduced breast cell proliferation, reduced breast cancer risk, improvement in sexual function, less unfavorable changes in hemostatic parameters relative to estrogen–progestogen combinations, and changes in liver protein synthesis (e.g., 30% reductions in HDL cholesterol levels, 20% reduction in triglyceride levels, and 50% reduction in SHBG levels) observed with tibolone. They are also responsible for the androgenic side effects of tibolone such as acne and increased hair growth in some women.
Other activities
Tibolone, 3α-hydroxytibolone, and 3β-hydroxytibolone act as antagonists of the glucocorticoid and mineralocorticoid receptors, with preference for the mineralocorticoid receptor. However, their affinities for these receptors are low, and tibolone has been described as possessing no clinically significant glucocorticoid, antiglucocorticoid, mineralocorticoid, or antimineralocorticoid activity.
Pharmacokinetics
The mean oral bioavailability of tibolone is 92%. Its plasma protein binding is 96.3%. It is bound to albumin, and both tibolone and its metabolites have low affinity for SHBG. Tibolone is metabolized in the liver and intestines. It is a prodrug and is rapidly transformed into several metabolites, including δ4-tibolone, 3α-hydroxytibolone, and 3β-hydroxytibolone, as well as sulfate conjugates of these metabolites. 3α-Hydroxytibolone is formed by 3α-hydroxysteroid dehydrogenase, 3β-hydroxytibolone is formed by 3β-hydroxysteroid dehydrogenase, δ4-tibolone is formed by Δ5-4-isomerase, and the sulfate conjugates of tibolone and its metabolites are formed by sulfotransferases, mainly SULT2A1.
The sulfate conjugates can be transformed back into free steroids by steroid sulfatase. Following a single oral dose of 2.5 mg tibolone, peak serum levels of tibolone were 1.6 ng/mL, of δ4-tibolone were 0.8 ng/mL, of 3α-hydroxytibolone were 16.7 ng/mL, and of 3β-hydroxytibolone were 3.7 ng/mL after 1 to 2 hours. The elimination half-life of tibolone is 45 hours. It is excreted 40% in urine and 60% in feces.
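The 45-hour elimination half-life implies slow washout of the drug. A minimal sketch of what that figure means, under the simplifying assumption of single-compartment, first-order elimination (tibolone's actual disposition, as a prodrug with several active metabolites, is more complex):

```python
HALF_LIFE_H = 45.0  # elimination half-life of tibolone in hours, from the text

def fraction_remaining(t_hours):
    """Fraction of a dose remaining after t hours, assuming simple
    first-order (exponential) elimination -- a simplification."""
    return 0.5 ** (t_hours / HALF_LIFE_H)

# About 69% remains after one day; ~97% is cleared only after about
# five half-lives, i.e. 225 hours (roughly 9.4 days).
after_one_day = fraction_remaining(24.0)
after_five_half_lives = fraction_remaining(5 * HALF_LIFE_H)
```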
Chemistry
Tibolone, also known as 7α-methylnoretynodrel, as well as 7α-methyl-17α-ethynyl-19-nor-δ5(10)-testosterone or as 7α-methyl-17α-ethynylestr-5(10)-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone and 19-nortestosterone. It is more specifically a derivative of norethisterone (17α-ethynyl-19-nortestosterone) and is a member of the estrane subgroup of the 19-nortestosterone family of progestins. Tibolone is the 7α-methyl derivative of the progestin noretynodrel (17α-ethynyl-δ5(10)-19-nortestosterone). Other steroids related to tibolone include the progestin norgesterone (17α-vinyl-δ5(10)-19-nortestosterone) and the anabolic steroids trestolone (7α-methyl-19-nortestosterone) and mibolerone (7α,17α-dimethyl-19-nortestosterone).
History
Tibolone was developed in the 1960s. It was first introduced in the Netherlands in 1988, and was subsequently introduced in the United Kingdom in 1991.
Society and culture
Generic names
Tibolone is the generic name of the drug and its nonproprietary name in the major international naming systems. It is also known by its developmental code name ORG-OD-14.
Brand names
Tibolone is marketed under the brand names Livial, Tibofem, and Ladybon among others.
Availability
Tibolone is used widely in the European Union, Asia, Australasia, and elsewhere in the world, but is not available in the United States.
Legal status
Tibolone is a Schedule IV controlled substance in Canada under the 1996 Controlled Drugs and Substances Act. It is classified as an anabolic steroid under this act, owing to its relatively high androgenic activity, and is the only norethisterone (17α-ethynyl-19-nortestosterone) derivative that is classified as such. Tibolone is also banned by the World Anti-Doping Agency as an anabolic steroid (category S1), largely because of its conversion to the δ4-tibolone metabolite, which is a potent androgen.
References
Further reading
Alkene derivatives
Ethynyl compounds
Anabolic–androgenic steroids
Antiglucocorticoids
Antimineralocorticoids
Aphrodisiacs
Estranes
Female sexual dysfunction drugs
Ketones
Drugs developed by Merck & Co.
Progestogens
Drugs developed by Schering-Plough
Steroid sulfatase inhibitors
Synthetic estrogens
Tertiary alcohols
World Anti-Doping Agency prohibited substances | Tibolone | Chemistry | 3,620 |
54,983,009 | https://en.wikipedia.org/wiki/Thioxoethenylidene | Thioxoethenylidene is a reactive heteroallene molecule with the formula CCS.
Occurrence
CCS is found in space in large quantities, including in the Taurus Molecular Cloud sources TMC-1, TMC-1c and L1521B. These detections are most likely associated with young, starless molecular cloud cores.
Production
CCS is formed by condensing propadienedithione (SCCCS) or thioxopropadienone (OCCCS) in solid argon and irradiating the matrix with ultraviolet radiation. Another route is a glow discharge in a mixture of carbon disulfide and helium. Yet another is electron irradiation of sulfur-containing heterocycles.
CCS and the anion CCS− can also be formed in solid neon matrices.
Properties
CCS can act as a ligand. It can form an asymmetrical bridge between two molybdenum atoms in Mo2(μ,σ(C):η2(C′S)-CCS)(CO)4(hydrotris(3,5-dimethylpyrazol-1-yl)borate)2. In this complex, one carbon atom has a triple bond to one molybdenum atom, while the other carbon atom has a double bond to the second molybdenum atom, which also has a single bond to the sulfur atom.
The ultraviolet spectrum shows absorption bands between 2800 and 3370 Å and also in the near infrared between 7500 and 10000 Å.
CCS can react with CCCS to form C5S.
The infrared spectrum in solid argon shows a vibration band at 1666.6 cm−1, called ν1, and another, called ν2, at 862.7 cm−1. The 2ν1 overtone is at 3311.1 cm−1, and a combination vibration-and-bending band is at 2763.4 cm−1.
The microwave spectrum has emission lines JN = 43 − 32 at 45.4 GHz and JN = 21 − 10 at 22.3 GHz, which are important for the detection of the molecule in molecular clouds.
Theoretical calculations predict that the C–C bond is 1.304 Å long and the C–S bond is 1.550 Å.
References
Sulfur(−II) compounds
Inorganic carbon compounds | Thioxoethenylidene | Chemistry | 458 |
31,813,101 | https://en.wikipedia.org/wiki/Hopeaphenol | Hopeaphenol is a stilbenoid, a tetramer of resveratrol. It was first isolated from Dipterocarpaceae species such as Shorea ovalis. It has also been isolated from wines from North Africa.
It shows an opposite effect to vitisin A on apoptosis of myocytes isolated from adult rat heart.
See also
Phenolic compounds in wine
References
Resveratrol oligomers
Natural phenol tetramers
Wine chemistry | Hopeaphenol | Chemistry | 99 |
93,466 | https://en.wikipedia.org/wiki/Apache%20Xalan | Xalan is a popular open-source software library from the Apache Software Foundation that implements the XSLT 1.0 XML transformation language and the XPath 1.0 language. The Xalan XSLT processor is available for both the Java and C++ programming languages. It combines technology from two main sources: an XSLT processor originally created by IBM under the name LotusXSL, and an XSLT compiler created by Sun Microsystems under the name XSLTC. A wrapper for the Eiffel language is available.
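Xalan itself is normally driven from Java through the standard JAXP (javax.xml.transform) API. As a self-contained illustration of the kind of XPath 1.0 expressions such a processor evaluates while applying a stylesheet, the sketch below uses Python's standard-library ElementTree, which supports a limited XPath subset; the XML snippet and the paths are invented for this example and are unrelated to Xalan's own code.

```python
import xml.etree.ElementTree as ET

# A toy input document. An XSLT processor such as Xalan evaluates XPath
# expressions like the ones below against its input tree while applying
# a stylesheet's template rules.
doc = ET.fromstring(
    "<catalog>"
    "<book lang='en'><title>XSLT</title></book>"
    "<book lang='de'><title>XPath</title></book>"
    "</catalog>"
)

# ElementTree understands a subset of XPath 1.0 location paths and predicates.
titles = [t.text for t in doc.findall("./book/title")]     # ['XSLT', 'XPath']
german_title = doc.find("./book[@lang='de']/title").text   # 'XPath'
```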
See also
Java XML
Apache Xerces
libxml2
Saxon XSLT
References
External links
Xalan Home page
Xalan
Java (programming language) libraries
Java platform
Software using the Apache license
XSLT processors | Apache Xalan | Technology | 159 |
20,863,982 | https://en.wikipedia.org/wiki/Ground%20reconnaissance | Ground reconnaissance (also terrestrial reconnaissance or ground recon) is a type of reconnaissance employed by the elements of ground warfare. It is the collection of intelligence concerning routes, areas, and zones (terrain-oriented) and the enemy (force-oriented). Ground reconnaissance is considered the most effective type of reconnaissance, but also the slowest method of obtaining information about the terrain and the enemy.
Those units in contact with the enemy, especially patrols, are among the most reliable sources of information. Combat engineers are also good sources of information. These engineer units conduct engineer reconnaissance of an area and can provide detailed reporting on lines of communications; i.e., roads, rivers, railroad lines, bridges, and obstacles to maneuver.
See also
Armoured reconnaissance
Special reconnaissance
References
Maneuver tactics
Military engineering
Military intelligence collection
Reconnaissance | Ground reconnaissance | Engineering | 167 |
14,447,758 | https://en.wikipedia.org/wiki/Generic%20Bootstrapping%20Architecture | Generic Bootstrapping Architecture (GBA) is a technology that enables the authentication of a user. This authentication is possible if the user owns a valid identity on an HLR (Home Location Register) or on an HSS (Home Subscriber Server).
GBA is standardized at the 3GPP (http://www.3gpp.org/ftp/Specs/html-info/33220.htm). User authentication is based on a shared secret: one copy resides in the smartcard, for example a SIM card inside the mobile phone, and the other on the HLR/HSS.
GBA authenticates by having a network component challenge the smartcard and verify that the answer is the one predicted by the HLR/HSS.
Instead of asking the service provider to trust the BSF and rely on it for every authentication request, the BSF establishes a shared secret between the smartcard and the service provider. This shared secret is limited in time and to a specific domain.
Strong points
This solution combines some strong points of certificates and of shared secrets while avoiding some of their weaknesses:
- There is no need for a user enrollment phase or for secure deployment of keys, which makes this solution very low-cost compared to PKI.
- Another advantage is the ease with which the authentication method can be integrated into terminals and service providers, as it is based on HTTP's well-known "Digest access authentication". Every web server already implements HTTP digest authentication, and the effort to implement GBA on top of digest authentication is minimal. For example, it could be implemented on SimpleSAMLphp (http://rnd.feide.no/simplesamlphp) with 500 lines of PHP code, of which only a few tens of lines are service-provider specific, making it easy to port to another web site.
- On the device side, what is needed is:
A web browser (in fact, an HTTP client) implementing digest authentication and the special case designated by a "3gpp" string in the HTTP header.
A means to talk to a smartcard and sign the challenge sent by the BSF; either Bluetooth SAP or a Java or native application could be used to serve the request coming from the browser.
Technical overview
The contents of this section are drawn from external literature.
There are two ways to use GAA (Generic Authentication Architecture):
The first, GBA, is based on a shared secret between the client and the server.
The second, SSC, is based on public-private key pairs and digital certificates.
In the shared-secret case, the customer and the operator are first mutually authenticated through 3G Authentication and Key Agreement (AKA), and they agree on session keys that can then be used between the client and the services the customer wants to use.
This is called bootstrapping.
After that, the services can retrieve the session keys from the operator, and they can be used in some application specific protocol between the client and services.
The figure referred to above shows the GAA network entities and the interfaces between them; optional entities are drawn with dotted borders. The User Equipment (UE) is, for example, the user's mobile phone. The UE and the Bootstrapping Server Function (BSF) mutually authenticate over the Ub interface (number [2] above), using the Digest AKA protocol. The UE also communicates with the Network Application Functions (NAFs), which are the application servers, over the Ua interface [4], which can use any application-specific protocol necessary.
The BSF retrieves the subscriber's data from the Home Subscriber Server (HSS) over the Zh interface [3], which uses the Diameter Base Protocol. If there are several HSSs in the network, the BSF must first find out which one to use; this can be done either by configuring a pre-defined HSS in the BSF, or by querying the Subscriber Locator Function (SLF).
NAFs retrieve the session keys from the BSF over the Zn interface [5], which also uses the Diameter Base Protocol. If the NAF is not in the home network, it must use a Zn-proxy to contact the BSF.
Uses
The SPICE project developed an extended use case named "split terminal", where a user on a PC can authenticate with their mobile phone: http://www.ist-spice.org/demos/demo3.htm . The NAF was developed on SimpleSAMLphp, and a Firefox extension was developed to process the GBA digest authentication request from the BSF. Bluetooth SIM Access Profile was used between the Firefox browser and the mobile phone. Later, a partner developed a "zero installation" concept.
The research institute Fraunhofer FOKUS developed an OpenID extension for Firefox which uses GBA authentication (presented at ICIN 2008 by Peter Weik).
The Open Mobile Terminal Platform http://www.omtp.org references GBA in its Advanced Trusted Environment: OMTP TR1 recommendation, first released in May 2008.
However, despite the many advantages and potential uses of GBA, its implementation in handsets has been limited since GBA standardization in 2006. Most notably, GBA was implemented in Symbian-based handsets.
References
Cryptographic protocols
Mobile technology | Generic Bootstrapping Architecture | Technology | 1,099 |
1,410,424 | https://en.wikipedia.org/wiki/Dettol | Dettol is a brand line of products used for disinfection and as an antiseptic. This brand was created with the introduction of Dettol antiseptic liquid in 1933 by the British company Reckitt and Colman. The Dettol brand line has been expanded over the years and now includes products containing many different active ingredients. The name Dettol was invented by Polish scientist Garbold Witnossky.
Chloroxylenol products
Dettol antiseptic liquid
Quaternary ammonium (benzalkonium chloride) products
Dettol 5-in-1 Antibacterial Washing Machine Cleaner
Dettol All In One Disinfectant Spray
Dettol Antibacterial Floor Wipes
Dettol Laundry Cleanser
Dettol Antibacterial disinfectant Wipes
Dettol Cleansing Surface Wipes
Dettol Multi Purpose Cleaner Spray
Dettol Multi Purpose Cleaning Wipes
Dettol Power & Pure Bathroom Spray
Dettol Protect 24 Multi Surface Cleaner Spray
Dettol Protect 24 Multi Surface Wipes
Dettol Surface Cleanser Spray
Dettol Washing Machine Cleaner
Dettol On The Go 2in1 Antibacterial Wipes
Lactic acid products
Dettol Antibacterial Spray
Dettol Big & Strong Bathroom Wipes
Dettol Big & Strong Kitchen Wipes
Dettol Power & Pure Bathroom Wipes
Dettol Power & Pure Kitchen Wipes
Bleach products
Dettol Mould & Mildew Remover
Alcohol products
Dettol Spray & Wear
Dettol On The Go Sanitiser Spray
Dettol On The Go Hand Sanitiser Gel Aloe Vera
Lemongrass oil products
Dettol Sensitive Bar Soap
Dettol Original Bar Soap
References
British brands
Cleaning products
Disinfectants
Reckitt brands
3D cell culture in wood-based nanocellulose hydrogel
Hydrogel from wood-based nanofibrillated cellulose (NFC) is used as a matrix for 3D cell culture, providing a three-dimensional environment that more closely resembles the conditions found in living tissue.
As a plant-based material, it does not contain any human- or animal-derived components. Nanocellulose is instead derived from wood pulp that has been processed to create extremely small, nanoscale fibers. These fibers can be used to form a hydrogel, a material made up of a network of cross-linked polymer chains that can hold large amounts of water.
Overview
As the natural extracellular matrix (ECM) is important in the survival, proliferation, differentiation and migration of cells, hydrogels mimicking the natural ECM structure are considered potential approaches towards in vivo-like cell culturing. GrowDex is an NFC hydrogel for 3D cell culture commercialized by UPM, Finland.
Material properties
The NFC fiber network structure and dimensions in the hydrogel resemble the human ECM. Stiffness can be tuned to optimize the conditions for each cell type. The shear-thinning property of the material makes the gel ready to use without a cross-linking or gelation step. The nanocellulose hydrogel can be completely degraded by cellulase enzyme treatment while retaining the 3D cell structures.
Applications
NFC hydrogel in 3D cell culture offers a platform for various biomedical applications. Different cell lines and cell types have been cultured in NFC, including the differentiation of human hepatic cells into functional organotypic cultures and the proliferation of human pluripotent stem cells. Organotypic liver cell cultures can be used in drug discovery for testing liver toxicity and the metabolism of novel drug candidates. The possibility of using the hydrogel with robotic dispensers enables its use in high-throughput screening (HTS) formats.
Additionally, 3D cell culture using wood-based nanocellulose hydrogel can be used for tissue engineering.
References
Cell culture
Krishnan Raghavachari
Krishnan Raghavachari (born 3 April 1953, in Chennai, India) is a Professor of Chemistry at Indiana University Bloomington.
Raghavachari began his education in his native India, completing his undergraduate degree in 1973 at Madras University and his master's degree at the Indian Institute of Technology in 1975. Following this, he moved to the United States to attend Carnegie-Mellon University for his doctorate under the tutelage of John Pople, completing it in 1981. Upon completing his degree, Raghavachari entered the private sector as a research scientist at Bell Labs, where he served as a member of the technical staff until 1987, when he was named a distinguished member. In 2002, he joined the faculty at Indiana University.
Raghavachari has been credited as one of the top quantum chemists in the United States and with developing methods that allowed the widespread use of computational chemistry. Among the methods he has developed over his career are CCSD(T), used to evaluate bond energies and the properties of molecules, and the Gaussian-2, 3, and 4 methods. Over the course of his career, Raghavachari has given over 150 invited lectures, published over 320 scientific papers, and been cited over 50,000 times by others in the field. He has also served as chair of the Theoretical Chemistry Subdivision of the American Chemical Society and on the editorial boards of the Journal of Physical Chemistry, Journal of Computational Chemistry, Theoretical Chemistry Accounts, and Journal of Materials Research.
Honours and awards
2009 Davisson-Germer Prize in Surface Physics, American Physical Society
2008 Fellow of the Royal Society of Chemistry
2001 Fellow of the American Physical Society
References
1953 births
Living people
People from Chennai
IIT Madras alumni
Indiana University Bloomington staff
University of Madras alumni
Carnegie Mellon University alumni
21st-century American chemists
Theoretical chemists
Members of the International Academy of Quantum Molecular Science
Fellows of the American Physical Society
Fellows of the Royal Society of Chemistry
Indian emigrants to the United States
American academics of Indian descent
21st-century Indian chemists
Phyllochron
The phyllochron is the intervening period between the sequential emergence of leaves on the main stem of a plant; equivalently, its inverse is the rate of leaf appearance. This measurement is used by botanists and agronomists to describe the growth and development of plants, especially cereals. The term phyllochron was first described in 1966. The interval between leaf appearances can be recorded both in standard measurements of time and in thermal time (e.g. growing degree units). One phytomer unit is added over the course of one phyllochron. No significantly robust equation to predict phyllochrons has been developed.
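Thermal time is commonly accumulated as growing degree units; the sketch below illustrates expressing a phyllochron in thermal time. The temperature series, base temperature, and function name are assumptions for illustration, not values from the source:

```python
def growing_degree_units(daily_mean_temps, base_temp=0.0):
    """Accumulate growing degree units (GDU): the sum of daily mean
    temperatures above a crop-specific base temperature."""
    return sum(max(t - base_temp, 0.0) for t in daily_mean_temps)

# Hypothetical daily mean temperatures (deg C) recorded between two
# successive leaf appearances on the main stem; base temperature 0 deg C.
temps_between_leaves = [8.0, 10.5, 12.0, 9.5, 11.0, 13.0, 10.0, 12.5]
phyllochron_thermal = growing_degree_units(temps_between_leaves)  # 86.5 GDU
```

Expressed this way, one phyllochron corresponds to an amount of accumulated thermal time rather than a fixed number of days, which is why temperature is the primary factor affecting its length in calendar time.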
Variation
Increases in phyllochron in cereals correlate with growing degree units in a slightly curvilinear fashion. In all cultivars of cereals, fluctuations in temperature are the primary factor affecting the length of the phyllochron. Less important secondary factors emerge in a number of different and sometimes contradictory studies on phyllochron response to variation in light, CO2 level, irrigation, nitrogen availability, salinity, soil properties, planting depth, planting time, and genotype. In cereals, the phyllochron may vary in speed between the main stem and the tillers. The phyllochron may or may not be equal to the length of time taken for one leaf to grow. It is more accurate to determine the value in a laboratory study than in the field, as field studies have not always accounted for the non-linear relationship of temperature and leaf appearance.
See also
Phenology
Phytomer
Plant development
Plastochron
References
Plant anatomy
Plant morphology
Three-point estimation
The three-point estimation technique is used in management and information systems applications for the construction of an approximate probability distribution representing the outcome of future events, based on very limited information. While the distribution used for the approximation might be a normal distribution, this is not always so. For example, a triangular distribution might be used, depending on the application.
In three-point estimation, three figures are produced initially for every distribution that is required, based on prior experience or best-guesses:
a = the best-case estimate
m = the most likely estimate
b = the worst-case estimate
These are then combined to yield either a full probability distribution, for later combination with distributions obtained similarly for other variables, or summary descriptors of the distribution, such as the mean, standard deviation or percentage points of the distribution. The accuracy attributed to the results derived can be no better than the accuracy inherent in the three initial points, and there are clear dangers in using an assumed form for an underlying distribution that itself has little basis.
Estimation
Based on the assumption that a PERT distribution governs the data, several estimates are possible. These values are used to calculate an E value for the estimate and a standard deviation (SD) as L-estimators, where:
E = (a + 4m + b) / 6
SD = (b − a) / 6
E is a weighted average which takes into account both the most optimistic and most pessimistic estimates provided. SD measures the variability or uncertainty in the estimate.
In Program Evaluation and Review Techniques (PERT) the three values are used to fit a PERT distribution for Monte Carlo simulations.
The triangular distribution is also commonly used. It differs from the double-triangular by its simple triangular shape and by the property that the mode does not have to coincide with the median. The mean (expected value) is then:
E = (a + m + b) / 3.
In some applications, the triangular distribution is used directly as an estimated probability distribution, rather than for the derivation of estimated statistics.
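A minimal Python sketch of the estimators above (the function names are illustrative, not from any standard library):

```python
def pert_estimate(a, m, b):
    """PERT three-point estimate from best case a, most likely m, worst case b.

    Returns the weighted mean E = (a + 4m + b) / 6 and SD = (b - a) / 6."""
    return (a + 4 * m + b) / 6.0, (b - a) / 6.0

def triangular_mean(a, m, b):
    """Mean of the triangular distribution: E = (a + m + b) / 3."""
    return (a + m + b) / 3.0

# Hypothetical task: 2 days at best, 4 most likely, 12 at worst.
e, sd = pert_estimate(2, 4, 12)   # e = 5.0 days, sd ~ 1.67 days
```

Note how the PERT mean weights the most likely value four times as heavily as either extreme, while the triangular mean treats all three points equally.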
Project management
To produce a project estimate the project manager:
Decomposes the project into a list of estimable tasks, i.e. a work breakdown structure
Estimates the expected value E(task) and the standard deviation SD(task) of this estimate for each task time
Calculates the expected value for the total project work time as E(project) = Σ E(task)
Calculates the standard error SD(project) of the estimated total project work time as SD(project) = √(Σ SD(task)²), under the assumption that the project work time estimates are uncorrelated
The E and SD values are then used to convert the project time estimates to confidence intervals as follows:
The 68% confidence interval for the true project work time is approximately E(project) ± SD(project)
The 90% confidence interval for the true project work time is approximately E(project) ± 1.645 × SD(project)
The 95% confidence interval for the true project work time is approximately E(project) ± 2 × SD(project)
The 99.7% confidence interval for the true project work time is approximately E(project) ± 3 × SD(project)
Information systems applications typically use the 95% confidence interval for all project and task estimates.
These confidence interval estimates assume that the data from all of the tasks combine to be approximately normal (see asymptotic normality). Typically, there would need to be 20–30 tasks for this to be reasonable, and each of the estimates E for the individual tasks would have to be unbiased.
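The aggregation steps above can be sketched as follows; the work breakdown structure and helper names are hypothetical:

```python
import math

def project_estimate(tasks):
    """Combine per-task (a, m, b) three-point estimates into the project's
    expected work time and standard error, assuming uncorrelated tasks
    (variances add, so SD(project) is the root of the summed variances)."""
    e_total, var_total = 0.0, 0.0
    for a, m, b in tasks:
        e_total += (a + 4 * m + b) / 6.0
        var_total += ((b - a) / 6.0) ** 2
    return e_total, math.sqrt(var_total)

# Hypothetical work breakdown structure: (best, most likely, worst) in days.
tasks = [(1, 2, 5), (2, 4, 8), (3, 5, 10)]
e, sd = project_estimate(tasks)
interval_95 = (e - 2 * sd, e + 2 * sd)  # approximate 95% confidence interval
```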
See also
Five-number summary
Seven-number summary
Program Evaluation and Review Technique (PERT)
References
Statistical approximations
Informal estimation
Group Fortifications Francois-de-Guise
The group fortification, renamed Group Fortifications Francois-de-Guise by the French after 1919, is a military structure located in the municipality of Châtel-Saint-Germain, close to Metz. It is part of the second fortified belt of forts of Metz and had its baptism of fire in late 1944, during the Battle of Metz.
Historical context
During the annexation, Metz was a German garrison of between 15,000 and 20,000 men at the beginning of the period, exceeding 25,000 men just before the First World War, as the city gradually became the foremost stronghold of the German Reich. The fortification completed the second fortified belt of Metz, composed of the Festen Wagner (1904–1912), Crown Prince (1899–1905), Leipzig (1907–1912), Empress (1899–1905), Lorraine (1899–1905), Freiherr von der Goltz (1907–1916), Haeseler (1899–1905), Prince Regent Luitpold (1907–1914) and Infanterie-Werk Belle-Croix (1908–1914).
Overall design
The Group Fortification Francois de Guise was built by Germany during the first annexation. It was part of a wider program of fortifications called the "Moselstellung", which encompassed fortresses scattered between Thionville and Metz in the Moselle valley. Germany's aim was to protect against a French attempt to retake Alsace-Lorraine from the German Empire. The fortification system was designed to accommodate the advances in artillery made since the end of the 19th century. Based on new defensive concepts, such as dispersal and concealment, the fortified group was intended, in case of attack, to be an impassable barrier to French forces.
From 1899, under the Schlieffen Plan, the German General Staff designed the fortifications of the Moselstellung, between Metz and Thionville, as a lock to block any advance of French troops in case of conflict. This concept of a fortified line along the Moselle was a significant innovation compared to the Système Séré de Rivières developed by the French, and it later inspired the engineers of the Maginot Line.
Construction and facilities
The Group Fortification Francois de Guise, covering an area of 80 hectares, was built from 1907 to 1912. Its perimeter defense was provided by two infantry positions, the Folie works and the Leipzig works. The three fortified barracks could house 360 men. The batteries were equipped with 100 mm howitzers in rotating turrets. Scattered on the high points, 6 observation turrets and 12 observation posts allowed perfect monitoring of the sector. Each infantry position had a power plant equipped with three 20 hp diesel engines. The works were scattered over a wide area and concealed by the natural topography. All works, connected by 270 m of underground galleries, were surrounded by a network of barbed wire.
Successive assignments
From 1890 the relief of the garrison was provided by the fortress troops of XVI Corps, stationed at Metz and Thionville. From 1914 to 1918, the fort was spared any fighting and used simply as an outpost by the German army. After 1918, the Group Fortification Francois de Guise was taken over by the French army. In 1939, it served as an outpost for the French army. Taken over by the Germans in June 1940, it served as a training ground. Beginning in September 1944, during the Battle of Metz, German troops reorganized its defense and integrated it into the defensive system set up around Metz. After World War II, the fort was again taken over by the French army. The fortified group comprising the Leipzig and Folie works was used during the Cold War, from 1953 to 1958, as part of air defense, in a transmissions role; it was then Work "F" of the DAT ("Radar Station Master 40/921").
After a command post exercise in 1963, it became in 1967 the command center of the 1st Tactical Air Force (FATAC) of the 1st Aerial Region, which had been transferred to Air Base 128 Metz-Frescaty two years earlier. "Nuclear, Biological, Chemical" (NBC) protection of the works was then installed.
Second World War
After the departure of French troops in June 1940, the German army reoccupied the fort. In early September 1944, the Battle of Metz began, and the German command integrated the fort into the defensive system set up around Metz. On September 2, 1944, Metz was declared a fortress of the Reich by Hitler: the fortress was to be defended to the last by German troops, whose leaders had all sworn an oath to the Führer. The defense was organized around the forts of Metz. From September 6, 1944, the Group Fortification Francois de Guise served as a forward base on the front line for German units of the 462nd Volks-Grenadier-Division. At that time, German troops were firmly established in the strongpoints of the sector, especially in the Group Fortification Francois de Guise, ideally located between the Group Fortification Lorraine, or Feste Lothringen, and Fort Jeanne d'Arc, or Feste Kaiserin. The Amanvillers–Saint-Privat area was held further north by the 1010th Security Regiment of Colonel Richter, of the 462nd Infanterie-Division, and further south by the cadets of the Fahnenjunkerschule VI des Heeres "Metz" under the command of Wehrmacht Colonel Joachim von Siegroth.
On the morning of September 9, 1944, the American artillery rained shells on identified German positions, paving the way for the infantry and tanks of Task Force McConnell. Arriving in the Jaumont woods, US troops of the 2nd Infantry Regiment were taken under fire by Fort Kellermann. Within moments, the German batteries knocked out seven tanks and two guns, forcing the column to withdraw hastily. Trying to bypass the fortifications from the north, the Americans soon came under fire from a German counterattack before being stopped by gunfire from the Group Fortification Lorraine. The US field artillery immediately resumed its attacks on the fortifications of the sector, but with little result given the terrain and vegetation. The 3rd Battalion of the task force, in charge of the right flank of the attack, came upon the fortified farmhouse of Moscou, a veritable redoubt between the German fortifications, before being taken under heavy fire from Gravelotte. The 2nd Battalion of the task force, which was heading towards Vernéville with relative ease, was finally stopped by gunfire from a sunken road west of Fort Francois de Guise. The day ended in failure for Colonel Roffe, who lamented heavy losses (14 officers and 332 men) for "twenty odd forts". He therefore called for air support from General Silvester.
On September 10, 1944, three squadrons of fighter-bombers dropped their bombs on the sector east of Amanvillers, where the fortifications were grouped. The P-47s reached their targets, but the 500-pound bombs had little effect on the reinforced-concrete fortifications. The infantry attack was launched at 18:00 and met fierce resistance; despite the support of tanks, it stopped three hours later, out of breath. Towards Gravelotte, in the Génivaux woods, American troops also ran up against the officer cadets of Wehrmacht Colonel Siegroth, who dominated the terrain. On the same day, the commander of the 7th Armored Division agreed to take up a position near Roncourt to support a new attack by the 2nd Infantry Regiment.
On September 11, 1944, at 6:30 am, the tanks headed for Pierrevillers, taking sporadic gunfire along the way; they finally came up against a roadblock covered by anti-tank guns that were camouflaged and difficult to locate. The infantry nevertheless managed to take a position on the wooded slopes northwest of the village of Bronvaux, too far, however, from the objective, which was to support the 2nd Infantry Regiment. Despite several counterattacks by the 462nd Infanterie-Division, American troops managed to hold the ground in the late afternoon, after a rolling artillery barrage targeting the fortifications in the sector, using smoke shells for cover.
On September 13, 1944, the US military redeployed its troops on the front line to concentrate its attack on the fortifications, but fatigue and stress disoriented the men of the 2nd Infantry Regiment. The 1st Battalion of the task force, hard hit by the shelling of the 462nd Volks-Grenadier-Division and by accurate small-arms fire, had to withdraw with difficulty behind a smoke screen, more than five hundred meters from Amanvillers. Around 14:00, an air strike on Amanvillers failed to open the way for the infantry, the village being too close to the fortifications of the sector for a full-scale assault.
The men of the 2nd Infantry Regiment were finally relieved from this "Hell Hole" on September 14, 1944. Two regiments, reinforced by engineer companies of the 90th Infantry Division, took over in the sector: the 357th Infantry Regiment of Colonel Barth took position along the Jaumont woods, east of Saint-Privat-la-Montagne, while the 359th Infantry Regiment of Colonel Bacon took position east of Gravelotte.
On September 15, 1944, an attack was planned on the Canrobert and Kellermann works in the northern sector and on Fort Jeanne d'Arc in the southern sector. The approach was difficult, the German soldiers defending inch by inch. American bazookas were ineffective against the concrete bunkers, and the tanks, followed by sections armed with flamethrowers, threw themselves at the first German lines without managing to reach, neutralize, or take them. General McLain then concluded that a frontal attack was doomed to failure and ordered his troops to keep the pressure on the outposts of the 462nd Volks-Grenadier-Division without frontally attacking Forts Jeanne d'Arc and Lorraine.
On September 16, 1944, in thick fog, the attack on strongpoint Canrobert started at 10:00. It was stopped two hours later by the Fahnenjunker of Colonel Siegroth, who fought hand-to-hand without mercy. The Americans' 357th Infantry Regiment withdrew, leaving 72 soldiers on the field. At 5:00 pm, the 1st Battalion of the same regiment was stopped in its tracks by artillery and small arms. In the southern sector, the 2nd Battalion lost 15 officers and 117 men under heavy fire from the mortars and automatic weapons of the intermediate zone. At nightfall, the battalion had advanced only 200 meters. Seeing that the Americans were gradually eating away at their lines, the German artillery redoubled its fire, managing to contain the two regiments and raising fears with General McLain of a new counterattack. Faced with the tenacity of the elite troops of the 462nd Volks-Grenadier-Division, General McLain, in agreement with General Walker, decided to suspend the attacks pending further plans from the general staff of the 90th Infantry Division.
After a rainy and cold month with little fighting, combat resumed in early November 1944. On November 9, in preparation for the offensive on Metz, the Air Force sent no fewer than 1,299 heavy bombers, B-17s and B-24s, to drop 3,753 tons of bombs, from 1,000 to 2,000 pounds each, on fortifications and strategic points in the combat zone of the Third Army. Most bombers, having dropped their bombs without visibility from over 20,000 feet, missed their military objectives. At Metz, the 689 bomb loads intended to strike Fort Jeanne d'Arc and six other forts designated as priority targets merely caused collateral damage, proving once again the inadequacy of massive bombing against military targets.
At dawn on November 14, 1944, the 105 mm howitzers of the 359th Field Artillery Battalion opened fire on the area on either side of Fort Jeanne d'Arc, between Fort Francois de Guise and Fort Driant, to pave the way for the 379th Infantry Regiment, whose goal was to reach the Moselle. The attack focused on Fort Jeanne d'Arc, which ended up completely encircled by US troops. After two deadly counterattacks by the 462nd Volks-Grenadier-Division against the men of Major Voss, the German troops soon fell back into the group fortification; they would not come out again. For the commander of Fort Jeanne d'Arc, the conclusion was bitter: the losses were heavy and they had not prevented the Americans from reaching the Moselle.
Under pressure from American artillery and armor, the German units of the 462nd Volks-Grenadier-Division eventually fell back to a more restricted perimeter, before shutting themselves into the forts west of Metz during the final assault on the old Lorraine city. While the US military managed to cross the Moselle on November 18, 1944, the US command was forced to hold back forces to neutralize the elements of the 462nd Volks-Grenadier-Division still entrenched in the Group Fortification Francois de Guise and the surrounding forts. On the evening of November 23, 1944, shortly before midnight, the last detachments of the 379th Infantry Regiment withdrew from the Moscou farm, the St-Hubert farm, the bunker south of Fort Guise and the Group Fortification Francois de Guise, making room for fresh troops of the 5th Infantry Division. The Fort de Bois-la-Dame, still held by a hundred men of the 462nd Volks-Grenadier-Division, and the Fort St Hubert and Fort de Marival, each still manned by fifty men, finally surrendered on November 26, 1944.
Fort Jeanne d'Arc was the last of the forts of Metz to lay down its arms. Determined German resistance, bad weather and flooding, missed opportunities, and a general tendency to underestimate the firepower of the fortifications of Metz all helped slow the US offensive, giving the German army the opportunity to withdraw in good order to the Saar. The objective of the German staff, which was to stall US troops at Metz for as long as possible before they could reach the Siegfried Line, was largely achieved.
Notes & references
Notes
References
See also
Forts of Metz
Fortifications of Metz
Battle of Metz
World War II defensive lines
List of AMD graphics processing units
The following is a list that contains general information about GPUs and video cards made by AMD, including those made by ATI Technologies before 2006, based on official specifications in table form.
Field explanations
The headers in the table listed below describe the following:
Model – The marketing name for the GPU assigned by AMD/ATI. Note that ATI trademarks have been replaced by AMD trademarks starting with the Radeon HD 6000 series for desktop and AMD FirePro series for professional graphics.
Codename – The internal engineering codename for the GPU.
Launch – Date of release for the GPU.
Architecture – The microarchitecture used by the GPU.
Fab – Fabrication process. Average feature size of components of the GPU.
Transistors – Number of transistors on the die.
Die size – Physical surface area of the die.
Core config – The layout of the graphics pipeline, in terms of functional units.
Core clock – The reference base and boost (if available) core clock frequency.
Fillrate
Pixel - The rate at which pixels can be rendered by the raster operators to a display. Measured in pixels/s.
Texture - The rate at which textures can be mapped by the texture mapping units onto a polygon mesh. Measured in texels/s.
Performance
Shader operations - How many operations the pixel shaders (or unified shaders in Direct3D 10 and newer GPUs) can perform. Measured in operations/s.
Vertex operations - The amount of geometry operations that can be processed on the vertex shaders in one second (only applies to Direct3D 9.0c and older GPUs). Measured in vertices/s.
Memory
Bus type – Type of memory bus utilized.
Bus width – Maximum bit width of the memory bus utilized.
Size – Size of the graphics memory.
Clock – The reference memory clock frequency.
Bandwidth – Maximum theoretical memory bandwidth based on bus type and width.
TDP (Thermal design power) – Maximum amount of heat generated by the GPU chip, measured in watts.
TBP (Typical board power) – Typical power drawn by the whole board, including the GPU chip and peripheral components such as the voltage regulator module, memory and fans, measured in watts.
Bus interface – Bus by which the graphics processor is attached to the system (typically an expansion slot, such as PCI, AGP, or PCIe).
API support – Rendering and computing APIs supported by the GPU and driver.
Due to conventions changing over time, some numerical definitions such as core config, core clock, performance and memory should not be compared one-to-one across generations. The following tables are for reference use only, and do not reflect actual performance.
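As an illustration of how the fillrate and bandwidth columns follow from the other fields, a sketch in Python; the card figures in the example are hypothetical and not taken from the tables below:

```python
def pixel_fillrate_gpix(rops, core_clock_mhz):
    """Pixel fillrate in Gpixels/s: render output units x core clock."""
    return rops * core_clock_mhz / 1000.0

def texture_fillrate_gtex(tmus, core_clock_mhz):
    """Texture fillrate in Gtexels/s: texture mapping units x core clock."""
    return tmus * core_clock_mhz / 1000.0

def memory_bandwidth_gbs(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    """Theoretical memory bandwidth in GB/s. transfers_per_clock is 2 for
    DDR-style memory and 4 for GDDR5, whose effective data transfer rate
    is quadruple its nominal clock."""
    return (bus_width_bits / 8) * mem_clock_mhz * transfers_per_clock / 1000.0

# Hypothetical card: 32 ROPs and 64 TMUs at 900 MHz, 256-bit GDDR5 at 1250 MHz.
print(pixel_fillrate_gpix(32, 900))        # 28.8 Gpixels/s
print(texture_fillrate_gtex(64, 900))      # 57.6 Gtexels/s
print(memory_bandwidth_gbs(256, 1250, 4))  # 160.0 GB/s
```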
Video codec acceleration
R100 – Video Immersion
R200 – Video Immersion II
R300 – Video Immersion II + Video Shader
R410 – Video Shader HD
R420 – Video Shader HD + DXVA
R520 – Avivo Video
R600 – Avivo HD – UVD 1.0
R700 – UVD 2, UVD 2.2
Evergreen – UVD 2.2
Northern Islands – UVD 3 (HD 67xx UVD 2.2)
Southern Islands – UVD 3.1, VCE 1.0
Sea Islands – UVD 4.2, VCE 2.0
Volcanic Islands – UVD 5.0, 6.0, VCE 3.0
Arctic Islands – UVD 6.3, VCE 3.4
Vega – UVD 7.0, VCE 4.0 and VCN 1.0 only at AMD Raven Ridge
Navi 1X – VCN 2.0
Navi 2X – VCN 3.0
Navi 3X – VCN 4.0
Features overview
API overview
Desktop GPUs
Wonder series
Mach series
Rage series
1 Pixel pipelines : Vertex shaders : Texture mapping units : Render output units
2 OpenGL 1.0 (Generic 2D) is provided through software implementations.
Radeon R100 series
All models include Direct3D 7.0 and OpenGL 1.3
The R100 cards were originally launched without any numbering as Radeon SDR, DDR, LE and VE; these products were later "rebranded" to their names within the numbered naming scheme, when the Radeon 8000 series was introduced.
1 Pixel pipelines : Vertex shaders : Texture mapping units : Render output units
A First number indicates cards with 32 MB of memory. Second number indicates cards with 64 MB of memory.
B First number indicates OEM cards. Second number indicates Retail cards.
IGP (3xx series)
All models are manufactured with a 180 nm fabrication process
All models include Direct3D 7.0 and OpenGL 1.3
Based on the Radeon VE
1 Pixel pipelines : Vertex shaders : Texture mapping units : Render output units
Radeon R200 series
All models are manufactured with a 150 nm fabrication process
All models include Direct3D 8.1 and OpenGL 1.4
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
IGP (9000 series)
All models are manufactured with a 150 nm fabrication process
All models include Direct3D 8.1 and OpenGL 1.4
Based on the Radeon 9200
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
Radeon R300 series
AGP (9000 series, X1000 series)
All models include Direct3D 9.0 and OpenGL 2.0
All models use an AGP 8x interface
1 Pixel shaders : Vertex Shaders : Texture mapping units : Render output units
2 The 256-bit version of the 9800 SE when unlocked to 8-pixel pipelines with third party driver modifications should function close to a full 9800 Pro.
PCIe (X3xx, X5xx, X6xx, X1000 series)
All models include Direct3D 9.0 and OpenGL 2.0
All models use a PCIe ×16 interface
1 Pixel shaders : Vertex Shaders : Texture mapping units : Render output units
IGP (X2xx, 11xx series)
All models include Direct3D 9.0 and OpenGL 2.0
Based on the Radeon X300
1Pixel shaders : Vertex Shaders : Texture mapping units : Render output units
Radeon R400 series
AGP (X7xx, X8xx)
All models include AGP 8×
All models include Direct3D 9.0b and OpenGL 2.0
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
PCIe (X5xx, X7xx, X8xx, X1000 series)
All models include PCIe ×16
All models include Direct3D 9.0b and OpenGL 2.0
1 Pixel shaders : Vertex Shaders : Texture mapping units : Render output units
IGP (X12xx, 21xx)
All models include Direct3D 9.0b and OpenGL 2.0
Based on Radeon X700
Radeon X1000 series
Note that ATI X1000 series cards (e.g. X1900) do not have Vertex Texture Fetch, and hence do not fully comply with the VS 3.0 model. Instead, they offer a feature called "Render to Vertex Buffer" (R2VB) that provides functionality which is an alternative to Vertex Texture Fetch.
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
Radeon HD 2000 series
Radeon HD 3000 series
IGP (HD 3000)
All Radeon HD 3000 IGP models include Direct3D 10.0 and OpenGL 3.3
1 Unified shaders : Texture mapping units : Render output units
2 The clock frequencies may vary in different usage scenarios, as AMD PowerPlay technology is implemented. The clock frequencies listed here refer to the officially announced clock specifications.
3 The sideport is a dedicated memory bus. It is preferably used for a frame buffer.
All-in-Wonder series
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
2 Unified shaders : Texture mapping units : Render output units
Radeon HD 4000 series
1 Unified shaders : Texture mapping units : Render output units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 The TDP is reference design TDP values from AMD. Different non-reference board designs from vendors may lead to slight variations in actual TDP.
4 All models feature UVD2 and PowerPlay.
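The GDDR5 footnote above amounts to simple arithmetic; a minimal sketch in Python (the 900 MHz / 256-bit figures are hypothetical, not any specific card's specification):

```python
# Sketch of the footnote's arithmetic: GDDR5 transfers four data words per
# nominal clock, while other DDR-type memories transfer two.
# Bandwidth = effective rate (MT/s) * bus width in bytes.
def effective_rate_mhz(nominal_clock_mhz, memory_type):
    multiplier = 4 if memory_type == "GDDR5" else 2  # DDR/GDDR3/GDDR4 -> 2
    return nominal_clock_mhz * multiplier

def bandwidth_gb_s(nominal_clock_mhz, bus_width_bits, memory_type):
    rate = effective_rate_mhz(nominal_clock_mhz, memory_type)
    return rate * bus_width_bits / 8 / 1000

# A hypothetical GDDR5 card with a 900 MHz memory clock and a 256-bit bus:
# 900 MHz nominal -> 3600 MT/s effective -> 115.2 GB/s.
```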
IGP (HD 4000)
All Radeon HD 4000 IGP models include Direct3D 10.1 and OpenGL 2.0
1 Unified shaders : Texture mapping units : Render output units
2 The clock frequencies may vary in different usage scenarios, as ATI PowerPlay technology is implemented. The clock frequencies listed here refer to the officially announced clock specifications.
3 The sideport is a dedicated memory bus. It is preferably used for a frame buffer.
Radeon HD 5000 series
The HD5000 series is the last series of AMD GPUs which supports two analog CRT-monitors with a single graphics card (i.e. with two RAM-DACs).
AMD Eyefinity introduced.
Radeon HD 6000 series
• The Radeon HD 6000 series has a new tessellation engine that is said to double tessellation performance compared to the previous HD 5000 series.
IGP (HD 6000)
All models feature the UNB/MC Bus interface
All models lack double-precision FP
OpenGL 4.4 is available with a driver update (last Catalyst 15.12). OpenGL 4.5 is available with Crimson Beta (driver version 15.30 or higher).
All models feature Angle independent anisotropic filtering, UVD3, and AMD Eyefinity capabilities, with up to three outputs.
All models feature 3D Blu-ray Disc acceleration.
Embedded GPUs as part of AMD's Lynx platform APUs.
Radeon HD 7000 series
IGP (HD 7000)
All models feature the UNB/MC Bus interface
All models lack double-precision FP
TeraScale 2 (VLIW5) based APUs feature angle independent anisotropic filtering, UVD3, and Eyefinity capabilities, with up to three outputs.
TeraScale 3 (VLIW4) based APUs feature angle independent anisotropic filtering, UVD3.2, and Eyefinity capabilities, with up to four outputs.
Radeon HD 8000 series
Radeon 200 series
Radeon 300 series
Radeon 400 series
Radeon 500 series
Radeon RX Vega series
Radeon VII series
Radeon RX 5000 series
Radeon RX 6000 series
Radeon RX 7000 series
Mobile GPUs
These GPUs are either integrated into the mainboard or occupy a Mobile PCI Express Module (MXM).
Rage Mobility series
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
Mobility Radeon series
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
Mobility Radeon X300, X600, X700, X800 series
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
Mobility Radeon X1000 series
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
Mobility Radeon HD 2000 series
OpenGL 3.3 is possible with the latest drivers for all RV6xx.
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units.
2 Unified shaders : Texture mapping units : Render output units
Mobility Radeon HD 3000 series
1 Unified Shaders : Texture mapping units : Render output units
Mobility Radeon HD 4000 series
1 Unified shaders : Texture mapping units : Render output units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
Mobility Radeon HD 5000 series
1 Unified shaders : Texture mapping units : Render output units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
Radeon HD 6000M series
IGP (HD 6000)
All models feature the UNB/MC Bus interface
All models lack double-precision FP
All models feature Angle independent anisotropic filtering, UVD3, and Eyefinity capabilities, with up to three outputs.
IGP (HD 6000G)
All models include Direct3D 11, OpenGL 4.4 and OpenCL 1.2
All models feature the UNB/MC Bus interface
All models lack double-precision FP
All models feature angle independent anisotropic filtering, UVD3 and Eyefinity capabilities, with up to three outputs.
All models feature VLIW5
1 Unified shaders : Texture mapping units : Render output units : Compute units
2 TDP specified for AMD reference designs, includes CPU power consumption. Actual TDP of retail products may vary.
Radeon HD 7000M series
IGP (HD 7000G)
Radeon HD 8000M series
Radeon M200 series
Radeon M300 series
Radeon M400 series
Radeon M500 series
Radeon 600 series
Radeon RX 5000M series
Radeon RX 6000M series
Radeon RX 7000M series
Workstation GPUs
FireGL series
1 Vertex shaders : Pixel shaders : Texture mapping units : Render output units
2 Unified shaders : Texture mapping units : Render output units : Compute Units
FireMV (Multi-View) series
1 Vertex shaders : Pixel shaders : Texture mapping unit : Render output units
2 Unified shaders : Texture mapping unit : Render output units
FirePro (Multi-View) series
FirePro 3D series (V000)
1 Unified shaders : Texture mapping units : Render output units : Compute Units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory
3 Windows 7, 8.1, and 10 support for FirePro cards with TeraScale 2 and later via FirePro driver 15.301.2601
FirePro series (Vx900)
1 Unified shaders : Texture mapping units : Render output units : Compute Units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 Windows 7 and 8.1 support for OpenGL 4.4 and OpenCL 2.0 when the hardware is prepared with FirePro driver 14.502.1045
FirePro Workstation series (Wx000)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equivalent to Radeon Crimson 16.3 or higher.
Vulkan 1.1 possible for GCN with Radeon Pro Software 18.Q1.1 or higher. It might not fully apply to GCN 1.0 or 1.1 GPUs.
1 Unified shaders : Texture mapping units : Render output units : Compute Units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 OpenGL 4.4 support with AMD FirePro driver release 14.301.000 or later, per the footnotes of the specifications
FirePro D-Series
In 2014, AMD released the D-Series specifically for Mac Pro workstations.
1 Unified shaders : Texture mapping units : Render output units : compute units
FirePro Workstation series (Wx100)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equivalent to Radeon Crimson 16.3 or higher. OpenCL 2.1 and 2.2 are possible for all OpenCL 2.0 cards with a future driver update (Khronos). Linux support for OpenCL is limited to version 1.2 as of AMDGPU driver 16.60.
Vulkan 1.1 possible for GCN with Radeon Pro Software 18.Q1.1 or higher. It might not fully apply to GCN 1.0 or 1.1 GPUs.
1 Unified shaders : Texture mapping units : Render output units : compute units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 OpenGL 4.4 support with AMD FirePro driver release 14.301.000 or later, per the footnotes of the specifications
FirePro Workstation series (Wx300)
Vulkan 1.1 possible for GCN with Radeon Pro Software 18.Q1.1 or higher. It might not fully apply to GCN 1.0 or 1.1 GPUs.
Radeon PRO series
Radeon Pro WX x100 series
Vulkan 1.1 possible for GCN with Radeon Pro software 18.Q1.1 or higher.
Radeon Pro WX x200 series
Radeon Pro Vega series
Radeon Pro 5000 series
Radeon Pro W5000 series
Radeon Pro W6000 series
Radeon Pro W7000 series
Mobile workstation GPUs
Mobility FireGL series
FirePro Mobile series
Radeon Pro WX x100 Mobile series
Half-precision (FP16) throughput equals single-precision (FP32) throughput in the 4th GCN generation; in the 5th generation, FP16 throughput is 2× FP32.
Radeon Pro 400 series
Radeon Pro 500 series
Radeon Pro WX x200 Mobile series
Radeon Pro Vega series
Radeon Pro 5000M series
Radeon Pro W5000M series
Radeon Pro W6000M series
Server GPUs
FireStream series
FirePro Remote series
1 Unified shaders : Texture mapping units : Render output units : compute units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
FirePro Server series (S000x/Sxx 000)
Vulkan 1.0 and OpenGL 4.5 are possible for GCN with a FirePro driver update equivalent to Radeon Crimson 16.3 or higher. OpenGL 4.5 was available only on Windows; the current Linux driver supports OpenGL 4.5 and Vulkan 1.0, but only OpenCL 1.2 as of AMDGPU driver 16.60.
Vulkan 1.1 possible for GCN with Radeon Pro Software 18.Q1.1 or higher. It might not fully apply to GCN 1.0 or 1.1 GPUs.
1 Unified shaders : Texture mapping units : Render output units: Compute units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 OpenGL 4.4 support with AMD FirePro driver release 14.301.000 or later, per the footnotes of the specifications
Radeon Sky series
1 Unified shaders : Texture mapping units : Render output units : compute units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
Radeon Pro V series
Radeon Instinct series
Embedded GPUs
1 Unified shaders : Texture mapping units : Render output units
2 CU = Compute units
3 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
Console GPUs
1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units
2 Unified shaders : Texture mapping units : Render output units
3 Unified shaders : Texture mapping units : Render output units : RT Cores
4 The Latte looks similar to the RV730 used in the Radeon HD4650/4670.
5 In most cases, especially in games, half of this data can be considered.
See also
List of Nvidia graphics processing units
List of Intel graphics processing units
List of AMD processors with 3D graphics
Apple M1
Radeon RX Vega M
Video Coding Engine, AMD's equivalent SIP core until 2017
Video Core Next, AMD's video core which combines the functionality of Video Coding Engine and Unified Video Decoder
References
External links
TechPowerUp! GPU Database
ATI Technologies
ATI graphics processing units
Lists of microprocessors
ATI
Electrostatic fieldmeter

An electrostatic fieldmeter, also called a static meter, is a tool used in the static control industry. It is used for non-contact measurement of electrostatic charge on an object. It measures the force between the induced charges in a sensor and the charge present on the surface of an object. This force is converted to volts, measuring both the initial peak voltage and the rate at which it falls away.
Operation
A charge-monitoring probe is placed close (1 mm to 5 mm) to the surface to be measured, and the probe body is driven to the same potential as the measured unknown by an electronic circuit. This achieves a high-accuracy measurement that is virtually insensitive to variations in probe-to-surface distance. The technique also prevents arc-over between the probe and the measured surface when measuring high voltages.
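As an illustrative sketch only (not any instrument's actual calibration): once the probe is nulled to the surface potential V, the field across the probe gap d can be approximated by the parallel-plate relation E = V/d.

```python
# Illustrative sketch of the nulling-probe geometry: for an approximately
# uniform field between probe and surface separated by d, E = V / d.
# The numbers below are hypothetical, chosen only to show the magnitudes.
def field_strength_v_per_m(surface_potential_v, distance_m):
    return surface_potential_v / distance_m

# A surface at 2 kV measured with a 2 mm probe gap sits in a field of
# about 1e6 V/m.
```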
Alternative method
The operation of this type of electrostatic fieldmeter is based on the charge-discharge process of an electrically floating electrode: a corona source charges the floating electrode, which discharges to the earth electrode at a regular repetition frequency. The discharge repetition frequency is the measured variable, and it is a function of the background electrostatic field.
Besides static charge control in electrostatic discharge (ESD) sensitive environments, another possible application is the measurement of the atmospheric electric field, if sufficient sensitivity is available.
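As a sketch of the alternative method, assuming a purely hypothetical linear calibration between repetition frequency and field (real instruments require empirical calibration, and the relation need not be linear):

```python
# Hypothetical linear calibration for a charge-discharge fieldmeter:
# repetition frequency f = f0 + k * E, so the field is recovered as
# E = (f - f0) / k. The constants f0 and k are illustrative stand-ins
# for values a real instrument would obtain by calibration.
def field_from_frequency(f_hz, f0_hz=10.0, k_hz_per_kv_per_m=2.0):
    return (f_hz - f0_hz) / k_hz_per_kv_per_m  # field in kV/m

# With these constants, a reading of 30 Hz corresponds to 10 kV/m.
```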
See also
Coulombmeter
Electrometer
Electroscope
Electrostatic voltmeter
Faraday cup
References
Further reading
– Electrostatic field meter - Texaco, Inc., 1987 (filed 1983)
– Non-contact autoranging electrostatic fieldmeter with automatic distance indicator, Simco-Ion, Hatfield, PA, 1987 (filed 1985)
Electrical meters
Electrical test equipment
Electronic test equipment
Electronics work tools
Electrostatics
Measuring instruments
Ecological modernization

Ecological modernization is a school of thought that argues that both the state and the market can work together to protect the environment. It has gained increasing attention among scholars and policymakers internationally over the last several decades. It is an analytical approach as well as a policy strategy and environmental discourse (Hajer, 1995).
Origins and key elements
Ecological modernization emerged in the early 1980s within a group of scholars at Free University and the Social Science Research Centre in Berlin, among them Joseph Huber. Various authors pursued similar ideas at the time, e.g. Arthur H. Rosenfeld, Amory Lovins, Donald Huisingh, René Kemp, or Ernst Ulrich von Weizsäcker. Further substantial contributions were made by Arthur P.J. Mol, Gert Spaargaren and David A. Sonnenfeld (Mol and Sonnenfeld, 2000; Mol, 2001).
One basic assumption of ecological modernization relates to environmental readaptation of economic growth and industrial development. On the basis of enlightened self-interest, economy and ecology can be favourably combined: Environmental productivity, i.e. productive use of natural resources and environmental media (air, water, soil, ecosystems), can be a source of future growth and development in the same way as labour productivity and capital productivity. This includes increases in energy and resource efficiency as well as product and process innovations such as environmental management and sustainable supply chain management, clean technologies, benign substitution of hazardous substances, and product design for environment. Radical innovations in these fields can not only reduce quantities of resource turnover and emissions, but also change the quality or structure of the industrial metabolism. In the co-evolution of humans and nature, and in order to upgrade the environment's carrying capacity, ecological modernization gives humans an active role to play, which may entail conflicts with nature conservation.
There are different understandings of the scope of ecological modernization - whether it is just about techno-industrial progress and related aspects of policy and economy, and to what extent it also includes cultural aspects (ecological modernization of mind, value orientations, attitudes, behaviour and lifestyles). Similarly, there is some pluralism as to whether ecological modernization would need to rely mainly on government, or markets and entrepreneurship, or civil society, or some sort of multi-level governance combining the three. Some scholars explicitly refer to general modernization theory as well as non-Marxist world-system theory, others don't.
Ultimately, however, there is a common understanding that ecological modernization will have to result in innovative structural change. So research is now still more focused on environmental innovations, or eco-innovations, and the interplay of various societal factors (scientific, economic, institutional, legal, political, cultural) which foster or hamper such innovations (Klemmer et al., 1999; Huber, 2004; Weber and Hemmelskamp, 2005; Olsthoorn and Wieczorek, 2006).
Ecological modernization shares a number of features with neighbouring, overlapping approaches. Among the most important are
the concept of sustainable development
the approach of industrial metabolism (Ayres and Simonis, 1994)
the concept of industrial ecology (Socolow, 1994)
Additional elements
A special topic of ecological modernization research during recent years was sustainable household, i.e. environment-oriented reshaping of lifestyles, consumption patterns, and demand-pull control of supply chains (Vergragt, 2000; OECD 2002).
Some scholars of ecological modernization share an interest in industrial symbiosis, i.e. inter-site recycling that helps to reduce the consumption of resources via increasing efficiency (i.e. pollution prevention, waste reduction), typically by taking externalities from one economic production process and using them as raw material inputs for another (Christoff, 1996).
Ecological modernization also relies on product life-cycle assessment and the analysis of materials and energy flows. In this context, ecological modernization promotes 'cradle to cradle' manufacturing (Braungart and McDonough, 2002), contrasted against the usual 'cradle to grave' forms of manufacturing - where waste is not re-integrated back into the production process.
Another special interest in the ecological modernization literature has been the role of social movements and the emergence of civil society as a key agent of change (Fisher and Freudenburg, 2001).
As a strategy of change, some forms of ecological modernization may be favored by business interests because they seemingly meet the triple bottom line of economics, society, and environment, which, it is held, underpin sustainability, yet do not challenge free market principles. This contrasts with many environmental movement perspectives, which regard free trade and its notion of business self-regulation as part of the problem, or even an origin of environmental degradation. Under ecological modernization, the state is seen in a variety of roles and capacities: as the enabler for markets that help produce the technological advances via competition; as the regulatory (see regulation) medium through which corporations are forced to 'take back' their various wastes and re-integrate them in some manner into the production of new goods and services (e.g. the way that car corporations in Germany are required to accept back cars they manufactured once those vehicles have reached the end of their product lifespan); and in some cases as an institution that is incapable of addressing critical local, national, and global environmental problems. In the latter case, ecological modernization shares with Ulrich Beck (1999, 37-40) and others notions of the necessity of emergence of new forms of environmental governance, sometimes referred to as subpolitics or political modernization, where the environmental movement, community groups, businesses, and other stakeholders increasingly take on direct and leadership roles in stimulating environmental transformation. Political modernization of this sort requires certain supporting norms and institutions such as a free, independent, or at least critical press, basic human rights of expression, organization, and assembly, etc. New media such as the Internet greatly facilitate this.
Criticisms
Critics argue that ecological modernization will fail to protect the environment and does nothing to alter the impulses within the capitalist economic mode of production (see capitalism) that inevitably lead to environmental degradation (Foster, 2002). As such, it is just a form of 'green-washing'. Critics question whether technological advances alone can achieve resource conservation and better environmental protection, particularly if left to business self-regulation practices (York and Rosa, 2003). For instance, many technological improvements are currently feasible but not widely utilized. The most environmentally friendly product or manufacturing process (which is often also the most economically efficient) is not always the one automatically chosen by self-regulating corporations (e.g. hydrogen or biofuel vs. peak oil). In addition, some critics have argued that ecological modernization does not redress gross injustices that are produced within the capitalist system, such as environmental racism - where people of color and low income earners bear a disproportionate burden of environmental harm such as pollution, and lack access to environmental benefits such as parks, and social justice issues such as eliminating unemployment (Bullard, 1993; Gleeson and Low, 1999; Harvey, 1996) - environmental racism is also referred to as issues of the asymmetric distribution of environmental resources and services (Everett & Neu, 2000). Moreover, the theory seems to have limited global efficacy, applying primarily to its countries of origin - Germany and the Netherlands, and having little to say about the developing world (Fisher and Freudenburg, 2001). Perhaps the harshest criticism though, is that ecological modernization is predicated upon the notion of 'sustainable growth', and in reality this is not possible because growth entails the consumption of natural and human capital at great costs to ecosystems and societies.
Ecological modernization, its effectiveness and applicability, strengths and limitations, remains a dynamic and contentious area of environmental social science research and policy discourse in the early 21st century.
See also
Bright green environmentalism
Ecological design
Ecological civilization
Ecomodernism
Environmental sociology
Reflexive modernization
Restoration ecology
Sustainable development
References
Ayres, R. U. and Simonis, U. E., 1994, Industrial Metabolism. Restructuring for Sustainable Development, Tokyo, UN University Press.
Beck, U., 1999, World Risk Society, Cambridge, UK, Polity Press, .
Braungart, M., and McDonough, W., 2002, Cradle to Cradle. Remaking the way we make things, New York, N.Y., North Point Press.
Bullard, R., (ed.) 1993, Confronting Environmental Racism: Voices from the Grassroots, Boston, South End Press.
Dickens, P. 2004, Society & Nature: Changing Our Environment, Changing Ourselves, Cambridge, UK, Polity, .
Everett, J., and Neu, D., 2000, "Ecological Modernization and the Limits of Environmental Accounting?", Accounting Forum, 24(1), pp. 5–29.
Fisher, D.R., and Freudenburg, W.R., 2001, "Ecological modernization and its critics: Assessing the past and looking toward the future", Society and Natural Resources, 14, pp. 701–709.
Foster, J.B., 2002, Ecology Against Capitalism, New York, Monthly Review Press.
Gleeson, B. and Low, N. (eds.) 1999, Global Ethics and Environment, London, Routledge.
Hajer, M.A., 1995, The Politics of Environmental Discourse: Ecological Modernization and the Policy Process, Oxford, UK, Oxford University Press, .
Harvey, D., 1996, Justice, Nature and the Geography of Difference, Malden, Ma., Blackwell, p. 377-402.
Huber, J., 2004, New Technologies and Environmental Innovation, Cheltenham, UK, Edward Elgar.
Klemmer, P., et al., 1999, Environmental Innovations. Incentives and Barriers, Berlin, Analytica.
Mol, A.P.J., 2001, Globalization and Environmental Reform: The Ecological Modernization of the Global Economy, Cambridge, Ma., MIT Press, .
Mol, A.P.J., and Sonnenfeld, D.A., (eds.) 2000, Ecological Modernisation around the World: Perspectives and Critical Debates, London and Portland, OR, Frank Cass/ Routledge, .
Mol, A.P.J., Sonnenfeld, D.A., and Spaargaren, G., (eds.) 2009, The Ecological Modernisation Reader: Environmental Reform in Theory and Practice, London and New York, Routledge, hardback, paperback.
OECD (ed.), Towards Sustainable Household Consumption? Trends and Policies in OECD Countries, Paris, OECD Publ., 2002.
Olsthoorn, X., and Wieczorek, A., (eds.) 2006, Understanding Industrial Transformation. Views from Different Disciplines, Dordrecht: Springer.
Redclift, M. R., and Woodgate, G. (eds.) 1997, The International Handbook of Environmental Sociology, Cheltenham, UK, Edward Elgar, .
Redclift, M. R., and Woodgate, G., (eds.) 2005, New Developments in Environmental Sociology, Cheltenham, Edward Elgar, .
Socolow, R. et al., (eds.) 1994, Industrial Ecology and Global Change, Cambridge University Press.
Vergragt, Ph., Strategies Towards the Sustainable Household, SusHouse Project Final Report, Delft University of Technology, NL, 2000.
Environmental sociology
Environmental social science concepts
Environmental policy
Industrial ecology
Global ethics
Modernity
Mouse warping

Mouse warping is a facility provided by some window managers that automatically positions the pointer at the centre of the current application window when that window is made current.
Window managers that support mouse warping
afterstep
awesome
wmii
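The computation these window managers perform is simply the centre of the focused window's bounding rectangle; a window-manager-agnostic sketch (the `Rect` type is a hypothetical stand-in for a real WM's geometry structure):

```python
# Minimal sketch of the warp-target computation: when a window gains focus,
# the pointer is moved to the centre of its bounding rectangle.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int       # top-left corner
    y: int
    width: int
    height: int

def warp_target(geometry):
    # Integer division mirrors pixel coordinates.
    return (geometry.x + geometry.width // 2,
            geometry.y + geometry.height // 2)

# An 800x600 window at (100, 50) warps the pointer to (500, 350).
```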
User interface techniques
Vsevolod Klechkovsky

Vsevolod Mavrikievich Klechkovsky (also transliterated as Klechkovskii and Klechkowski; November 28, 1900 – May 2, 1972) was a Soviet and Russian agricultural chemist known for his work with radioisotopes.
Biography
He graduated in 1929 from the Moscow agricultural academy and worked there from 1930. He became a professor in 1955, and an academician of the All-Union Academy of Agricultural Sciences of the Soviet Union (known as VASKhNIL) in 1956.
His use of isotopic labeling in the advance of soil chemistry led to his being considered a founder of agricultural radiology. He was one of the first to study plant nutrition using radioisotopes, for which he received the Stalin Prize in 1952 along with his academy co-workers. He studied the behavior of heavy nuclei daughter isotopes in soils.
Following the 1957 Kyshtym disaster, Klechkovsky led the research projects studying the long-term effects of radioactive contamination at the site.
Klechkovsky also studied theoretical chemistry, and proposed a theoretical justification of the empirical Madelung rule for the ordering of atomic orbital energies. This rule is therefore sometimes called Klechkovsky's rule, especially in Russian and in French sources.
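The Madelung rule that Klechkovsky justified orders atomic subshells by increasing n + l, breaking ties in favour of smaller n; this is easily reproduced in a few lines:

```python
# Klechkovsky's (Madelung's) rule: subshells fill in order of increasing
# n + l; among subshells with equal n + l, the one with smaller n fills first.
def madelung_order(max_n=5):
    letters = "spdfg"
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in subshells]

# The first entries reproduce the familiar filling sequence,
# including 4s filling before 3d:
# 1s 2s 2p 3s 3p 4s 3d 4p 5s ...
```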
References
1900 births
1972 deaths
20th-century Russian chemists
Scientists from Moscow
Academicians of the VASKhNIL
Recipients of the Stalin Prize
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
People involved with the periodic table
Soviet chemists
Branches of physics

Physics is a scientific discipline that seeks to construct and experimentally test theories of the physical universe. These theories vary in their scope and can be organized into several distinct branches, which are outlined in this article.
Classical mechanics
Classical mechanics is a model of the physics of forces acting upon bodies; it includes sub-fields that describe the behaviours of solids, gases, and fluids. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. It also includes the classical approaches given by the Hamiltonian and Lagrangian methods. It deals with the motion of particles and with general systems of particles.
There are many branches of classical mechanics, such as: statics, dynamics, kinematics, continuum mechanics (which includes fluid mechanics), statistical mechanics, etc.
Mechanics: the branch of physics that studies the motion of objects and their properties under the action of forces.
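As a minimal illustration of classical mechanics in code, Newton's second law can be integrated numerically for a projectile under uniform gravity (illustrative Euler integration, not a production integrator):

```python
# Integrating Newton's second law F = m*a for a projectile in uniform
# gravity, using simple explicit Euler steps.
def simulate_projectile(v0x, v0y, dt=0.001, g=9.81):
    x = y = 0.0
    vx, vy = v0x, v0y
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt           # gravity acts on the vertical velocity only
        if y <= 0.0:
            return x           # horizontal range when the projectile lands

# Launching at 10 m/s horizontally and 10 m/s vertically gives a range
# close to the analytic result 2 * v0x * v0y / g (about 20.4 m).
```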
Thermodynamics and statistical mechanics
The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge was lost. By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based.
Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems on the macroscopic scale, and the transfer of energy as heat. Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines.
The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any system. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
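The simplest equation of state relating the state variables mentioned above is the ideal gas law, PV = nRT; for a monatomic ideal gas the internal energy follows from temperature alone:

```python
# Ideal gas equation of state P V = n R T, and the internal energy of a
# monatomic ideal gas, U = (3/2) n R T, which depends on temperature alone.
R = 8.314  # gas constant, J/(mol K)

def pressure(n_mol, volume_m3, temp_k):
    return n_mol * R * temp_k / volume_m3

def internal_energy(n_mol, temp_k):
    return 1.5 * n_mol * R * temp_k

# One mole at 273.15 K occupying 0.0224 m^3 (the molar volume at STP)
# exerts roughly one atmosphere (~101 kPa).
```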
Electromagnetism and photonics
The study of the behaviours of electrons, electric media, magnets, magnetic fields, and general interactions of light.
Relativistic mechanics
The special theory of relativity enjoys a relationship with electromagnetism and mechanics; that is, the principle of relativity and the principle of stationary action in mechanics can be used to derive Maxwell's equations, and vice versa.
The theory of special relativity was proposed in 1905 by Albert Einstein in his article "On the Electrodynamics of Moving Bodies". The title of the article refers to the fact that special relativity resolves an inconsistency between Maxwell's equations and classical mechanics. The theory is based on two postulates: (1) that the mathematical forms of the laws of physics are invariant in all inertial systems; and (2) that the speed of light in vacuum is constant and independent of the source or observer. Reconciling the two postulates requires a unification of space and time into the frame-dependent concept of spacetime.
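A standard quantitative consequence of the two postulates is time dilation by the Lorentz factor:

```python
# Time dilation in special relativity: a moving clock runs slow by the
# Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2).
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time_s, v):
    return proper_time_s * lorentz_factor(v)

# At 80% of c, gamma = 5/3: one second of proper time is observed
# as about 1.667 s.
```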
General relativity is the geometrical theory of gravitation published by Albert Einstein in 1915/16. It unifies special relativity, Newton's law of universal gravitation, and the insight that gravitation can be described by the curvature of space and time. In general relativity, the curvature of spacetime is produced by the energy of matter and radiation.
Quantum mechanics, atomic physics, and molecular physics
Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction based on the observation that all forms of energy are released in discrete units or bundles called "quanta". Remarkably, quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wave functions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics—i.e., it predicts the future behavior of a dynamic system—and is a wave equation that is used to solve for wavefunctions.
For example, the light, or electromagnetic radiation emitted or absorbed by an atom has only certain frequencies (or wavelengths), as can be seen from the line spectrum associated with the chemical element represented by that atom. The quantum theory shows that those frequencies correspond to definite energies of the light quanta, or photons, and result from the fact that the electrons of the atom can have only certain allowed energy values, or levels; when an electron changes from one allowed level to another, a quantum of energy is emitted or absorbed whose frequency is directly proportional to the energy difference between the two levels. The photoelectric effect further confirmed the quantization of light.
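The proportionality between photon frequency and level difference can be made concrete with the hydrogen atom, using the level formula E_n = −13.6057 eV / n²:

```python
# Photon frequency from a level difference: f = (E_upper - E_lower) / h.
# Hydrogen levels E_n = -13.6057 eV / n^2 reproduce the Balmer lines.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
EV = 1.602176634e-19        # joules per electronvolt

def hydrogen_level_ev(n):
    return -13.6057 / n**2

def transition_frequency_hz(n_upper, n_lower):
    delta_e_j = (hydrogen_level_ev(n_upper) - hydrogen_level_ev(n_lower)) * EV
    return delta_e_j / H_PLANCK

# H-alpha (n = 3 -> 2) comes out near 4.57e14 Hz, i.e. the red 656 nm line.
```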
In 1924, Louis de Broglie proposed that not only do light waves sometimes exhibit particle-like properties, but particles may also exhibit wave-like properties. Two different formulations of quantum mechanics were presented following de Broglie's suggestion. The wave mechanics of Erwin Schrödinger (1926) involves the use of a mathematical entity, the wave function, which is related to the probability of finding a particle at a given point in space. The matrix mechanics of Werner Heisenberg (1925) makes no mention of wave functions or similar concepts but was shown to be mathematically equivalent to Schrödinger's theory. A particularly important discovery of the quantum theory is the uncertainty principle, enunciated by Heisenberg in 1927, which places an absolute theoretical limit on the accuracy of certain measurements; as a result, the assumption by earlier scientists that the physical state of a system could be measured exactly and used to predict future states had to be abandoned. Quantum mechanics was combined with the theory of relativity in the formulation of Paul Dirac. Other developments include quantum statistics, quantum electrodynamics, concerned with interactions between charged particles and electromagnetic fields; and its generalization, quantum field theory.
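De Broglie's relation λ = h/p can be illustrated for an electron accelerated through a potential difference (non-relativistic approximation):

```python
# De Broglie wavelength lambda = h / p for an electron accelerated through
# a potential difference V: p = sqrt(2 m e V), non-relativistic.
import math

H = 6.62607015e-34          # Planck constant, J s
M_E = 9.1093837015e-31      # electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C

def de_broglie_wavelength_m(accel_voltage_v):
    p = math.sqrt(2 * M_E * E_CHARGE * accel_voltage_v)
    return H / p

# An electron accelerated through 100 V has a wavelength of about
# 1.23e-10 m, comparable to atomic spacings -- hence electron diffraction.
```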
String theory
A possible candidate for a theory of everything, string theory combines general relativity and quantum mechanics into a single framework, and aims to describe the properties of both the smallest and the largest objects. The theory is still under development.
Optics and acoustics
Optics is the study of the behaviour of light, including reflection, refraction, diffraction, and interference.
Acoustics is the branch of physics involving the study of mechanical waves in different mediums.
Condensed matter physics
The study of the physical properties of matter in a condensed phase.
High-energy particle physics and nuclear physics
Particle physics studies the nature of particles, while nuclear physics studies atomic nuclei.
Cosmology
Cosmology studies how the universe came to be, and its eventual fate. It is studied by physicists and astrophysicists.
Interdisciplinary fields
The interdisciplinary fields, which partially define sciences of their own, include the following:
agrophysics, a branch of science bordering on agronomy and physics
astrophysics, the physics of the universe, including the properties and interactions of celestial bodies in astronomy
atmospheric physics, the application of physics to the study of the atmosphere
space physics, the study of plasmas as they occur naturally in the Earth's upper atmosphere (aeronomy) and within the Solar System
biophysics, studying the physical interactions of biological processes
chemical physics, the science of physical relations in chemistry
computational physics, the application of computers and numerical methods to physical systems
econophysics, dealing with physical processes and their relations in the science of economy
environmental physics, the branch of physics concerned with the measurement and analysis of interactions between organisms and their environment
engineering physics, the combined discipline of physics and engineering
geophysics, the science of physical relations on our planet
mathematical physics, mathematics pertaining to physical problems
medical physics, the application of physics in medicine to prevention, diagnosis, and treatment
physical chemistry, dealing with physical processes and their relations in chemistry
physical oceanography, the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters
psychophysics, the science of physical relations in psychology
quantum computing, the study of quantum-mechanical computation systems
sociophysics or social physics, a field of science which uses mathematical tools inspired by physics to understand the behaviour of human crowds
Summary
The table below lists the core theories along with many of the concepts they employ.
References | Branches of physics | Physics | 1,773 |
18,931,962 | https://en.wikipedia.org/wiki/Software%20maintainer | In free and open source software and inner source software, a software maintainer or package maintainer is usually one or more people who build source code into a binary package for distribution, commit patches, or organize code in a source repository. If the maintainer stops doing their work on the project, then the development of the project stops. If another person not associated with the maintainer releases a new version of the project, it is said that a fork has been created. So for example happened with uClibc.
Maintainers often cryptographically sign binaries so that people can verify their authenticity.
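Alongside signatures, maintainers commonly publish checksum files with each release; a signature proves who produced the file, while a checksum only guards against corruption. A minimal, hypothetical Python sketch of checking a download against a published SHA-256 digest (the file path and digest would come from the release page):

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_digest(path: str, published: str) -> bool:
    # hmac.compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sha256_of(path), published.strip().lower())
```

A detached GnuPG signature (a `.sig` or `.asc` file next to the tarball) additionally ties the release to the maintainer's key, which a checksum alone cannot do.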
See also
Software maintenance
Software developer
Code review
List of software package management systems
References
Software maintenance | Software maintainer | Engineering | 145 |
63,506,637 | https://en.wikipedia.org/wiki/NGC%20736 | NGC 736 is an elliptical galaxy in the constellation Triangulum. It is an estimated 200 million light years from the Milky Way and has a diameter of approximately 85,000 light years. NGC 736 was discovered on September 12, 1784 by the German-British astronomer William Herschel.
See also
List of NGC objects (1–1000)
References
External links
736
Triangulum
Elliptical galaxies
007289 | NGC 736 | Astronomy | 85 |
41,975,647 | https://en.wikipedia.org/wiki/Elvis%20operator | In certain computer programming languages, the Elvis operator, often written ?:, is a binary operator that returns the evaluated first operand if that operand evaluates to a value likened to logically true (according to a language-dependent convention, in other words, a truthy value), and otherwise returns the evaluated second operand (in which case the first operand evaluated to a value likened to logically false, in other words, a falsy value). This is identical to a short-circuit or with "last value" semantics. The notation of the Elvis operator was inspired by the ternary conditional operator, ? :, since the Elvis operator expression A ?: B is approximately equivalent to the ternary conditional expression A ? A : B.
The name "Elvis operator" refers to the fact that when its common notation, ?:, is viewed sideways, it resembles an emoticon of Elvis Presley with his signature hairstyle.
A similar operator is the null coalescing operator, where the boolean truth(iness) check is replaced with a check for non-null instead. This is usually written ??, and can be seen in languages like C# or Dart.
Alternative syntaxes
In several languages, such as Common Lisp, Clojure, Lua, Object Pascal, Perl, Python, Ruby, and JavaScript, the logical disjunction operator (typically || or or) has the same behavior as the above: returning its first operand if it would evaluate to a truthy value, and otherwise evaluating and returning its second operand, which may be a truthy or falsy value. When the left-hand side is truthy, the right-hand side is not even evaluated; it is "short-circuited". This is different from the behavior in other languages such as C/C++, where the result of || will always be a (proper) boolean.
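Of the languages above, Python's or shows these "last value" semantics directly — it returns one of its operands rather than a boolean:

```python
def pick(first, fallback):
    # `or` returns its first operand if truthy, otherwise its second;
    # no dedicated ?: operator is needed in Python.
    return first or fallback

print(pick("value", "default"))  # value   (truthy left operand wins)
print(pick("", "default"))       # default (empty string is falsy)
print(pick(0, 42))               # 42      (0 is falsy, a broader test than a null check)
```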
Example
Boolean variant
In a language that supports the Elvis operator, something like this:
x = f() ?: g()
will set x equal to the result of f() if that result is truthy, and to the result of g() otherwise.
It is equivalent to this example, using the conditional ternary operator:
x = f() ? f() : g()
except that it does not evaluate f() twice if it yields truthy. Note the possibility of arbitrary behaviour if f() is not a state-independent function that always returns the same result.
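The single-evaluation difference can be made visible with a side-effecting function; this Python sketch uses or, which shares the Elvis operator's short-circuit behaviour:

```python
calls = 0

def f():
    global calls
    calls += 1              # count how many times f() actually runs
    return "truthy result"

def g():
    return "fallback"

x = f() or g()              # Elvis-style short circuit: one call to f()
after_elvis = calls         # 1

y = f() if f() else g()     # naive ternary rewrite: two calls to f()
after_ternary = calls       # 3

print(x, after_elvis, after_ternary)
```

If f() had side effects or returned a different value on each call, the ternary rewrite could misbehave — which is exactly the caveat noted above.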
Object reference variant
This code will result in a reference to an object that is guaranteed to not be null. Function f() returns an object reference instead of a boolean, and may return null, which is universally regarded as falsy:
x = f() ?: "default value"
Languages supporting the Elvis operator
In GNU C and C++ (that is: in C and C++ with GCC extensions), the second operand of the ternary operator is optional. This has been the case since at least GCC 2.95.3 (March 2001), and seems to be the original Elvis operator.
In Apache Groovy, the "Elvis operator" ?: is documented as a distinct operator; this feature was added in Groovy 1.5 (December 2007). Groovy, unlike GNU C and PHP, does not simply allow the second operand of ternary ?: to be omitted; rather, binary ?: must be written as a single operator, with no whitespace in between.
In PHP, it is possible to leave out the middle part of the ternary operator since PHP 5.3. (June 2009).
The Fantom programming language has the ?: binary operator that compares its first operand with null.
In Kotlin, the Elvis operator returns its left-hand side if it is not null, and its right-hand side otherwise. A common pattern is to use it with return, for example val parent = node.getParent() ?: return null.
In Gosu, the ?: operator likewise returns the right operand if the left is null.
In C#, the null-conditional operator, ?. is referred to as the "Elvis operator", but it does not perform the same function. Instead, the null-coalescing operator ?? does.
In ColdFusion and CFML, the Elvis operator was introduced using the ?: syntax.
The Xtend programming language has an Elvis operator.
In Google's Closure Templates, the Elvis operator is a null coalescing operator, equivalent to isNonnull($a) ? $a : $b.
In Ballerina, the Elvis operator L ?: R returns the value of L if it is not nil; otherwise, it returns the value of R.
In JavaScript, the nullish coalescing (??) operator is a logical operator that returns its right-hand side operand when its left-hand side operand is null or undefined, and otherwise returns its left-hand side operand.
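The contrast between the truthiness check and the null check can be sketched in Python (which has neither operator built in; the function names are illustrative):

```python
def elvis(value, fallback):
    # Truthiness check: "", 0, [] and None all trigger the fallback.
    return value or fallback

def nullish(value, fallback):
    # Null check only, mirroring the semantics of JavaScript's ??.
    return value if value is not None else fallback

print(elvis(0, 42))    # 42 (0 is falsy)
print(nullish(0, 42))  # 0  (0 is not None, so it is kept)
```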
See also
?: or conditional operator, when used as a ternary operator
Safe navigation operator, often ?.
Spaceship operator <=>
Option type
References
Articles with example code
Binary operations
Conditional constructs
Operators (programming)
Elvis Presley | Elvis operator | Mathematics | 1,083 |
46,179,854 | https://en.wikipedia.org/wiki/Touch%20Surgery | Touch Surgery is a London, New York City, Sydney and Auckland-based health technology app and trading name for the company Digital Surgery LTD. Digital Surgery is a health tech company shaping the future of surgery through the convergence of surgical expertise and technology. The app was first discussed in 2010. The Touch Surgery mobile app is a mobile surgical training platform designed to simulate surgical procedures. As of October 2019, The Touch Surgery mobile app included surgical instructions for about 200 surgical procedures in 17 different specialties.
Validation Studies
Touch Surgery app has been validated by over 19 independent peer-reviewed publications for its innovative approach to training surgeons virtually.
The McDougal principles of validation (construct, face and content) have been conducted by four research groups, each covering a different procedure, surgical specialty, and institution. The first paper was from an orthopaedic team in London (UK), followed by a plastic surgery team at Stanford Hospital in Stanford (USA), an oro-maxillofacial team at Mount Sinai in New York (USA), and a general surgery team in Heidelberg (Germany).
Construct validation tests the ability of the simulator to differentiate between different experience groups. Content validation is the expert opinion of content within the simulations with focus on the accuracy of the anatomy, steps and use of instruments. Face validation is the subjective assessment of the platform as a training tool (face value). Statistical significance was demonstrated in all studies proving repeatability of validation irrespective of procedure, specialty and location.
Learning curves and knowledge transfer
Participants naive to intramedullary femoral nailing and to Touch Surgery were assessed on an exam paper created to assess their understanding of the surgical procedure. The participants then used the app for learning and regularly tested themselves on it. Finally, they retook the exam paper to see how much they had learned. Mastery on the app took six consecutive repetitions. A significant rise in score was noted, and scores rivalled those of the surgeons who validated the exam paper.
Training Efficiency
The cardiothoracic team at Stanford assessed medical students on their understanding of Cardiopulmonary Bypass (CPB) on a 25 question exam. The exam had been validated by five cardiac surgeons. The medical students were then randomly allocated to use either the text based resources or the Touch Surgery app. The students were allowed to use the resource for 45 minutes and then retook the exam on CPB. Baseline test showed that there was no difference between the two groups before intervention; however there was a statistically significant difference in the two groups after the 45 minute preparation session. This study demonstrated that Touch Surgery could outperform traditional text-based learning tools.
In another randomised controlled trial of skills transfer from Touch Surgery to laparoscopic cholecystectomy, the Touch Surgery app has proved effective for providing cognitive training in laparoscopic cholecystectomies to medical students.
Microsoft Partnership
In October 2018, Touch Surgery (Digital Surgery) and Microsoft announced their partnership and their aim to make operations safer by using Microsoft Hololens. The company has been made an official Microsoft Mixed Reality Partner and is creating mixed-reality apps and programs for Microsoft HoloLens to help surgeons as they work.
Safe Surgery 2020
In January 2019, Safe Surgery 2020 and Digital Surgery launched their simulation training platform, Touch Surgery, in Tanzania to upskill surgical workers in Tanzania and East Africa. Safe Surgery 2020 is a collaboration of foundations, nonprofits, educational institutions and local governments who want to make surgery safe, affordable and accessible across the world.
Assist International, Digital Surgery and local health facilities and hospitals, including Kijabe Hospital in Kenya, have been collaborating closely to develop a series of simulation trainings for the most common procedures provided by primary facilities in LMICs, such as C-sections and hysterectomies. The simulations are developed specifically for surgical teams operating in low-resource settings to ensure the procedure is achievable with basic instruments. The simulation integrates the WHO's surgical safety checklist to reinforce the importance of following the checklist to improve patient outcomes. The Safe Surgery 2020 hospitals received in-person and tele-mentoring training, and education on the Touch Surgery app as well as practical tools such as tablets in each facility for continuous learning.
References
External links
Touch Surgery official website
Companies based in the London Borough of Islington
Companies based in New York City
Health care companies established in 2013
Educational publishing companies of the United Kingdom
Medical simulation
Multinational companies
Medical software | Touch Surgery | Biology | 915 |
57,117,782 | https://en.wikipedia.org/wiki/Spot-tag | A Spot-tag is a 12-amino acid peptide tag recognized by a single-domain antibody (sdAb, or nanobody). Due to the small size of a Spot-tag (12 amino acids) and the robust Spot-nanobody (14.7 kD) that specifically binds to Spot-tagged proteins, Spot-tag can be used for multiple capture and detection applications: Immunoprecipitation, affinity purification, immunofluorescence, and super-resolution microscopy. Recombinant proteins can be engineered to express the Spot-tag.
Spot-tag Sequence
Amino acid sequence
PDRVRAVSHWSS
Codon optimized DNA sequence
See also
Protein tag
References
Amino acids
Peptides
Proteins | Spot-tag | Chemistry | 150 |
147,184 | https://en.wikipedia.org/wiki/Broadband | In telecommunications, broadband or high speed is the wide-bandwidth data transmission that exploits signals at a wide spread of frequencies or several different simultaneous frequencies, and is used in fast Internet access. The transmission medium can be coaxial cable, optical fiber, wireless Internet (radio), twisted pair cable, or satellite.
Originally used to mean 'using a wide-spread frequency' and for services that were analog at the lowest level, nowadays in the context of Internet access, 'broadband' is often used to mean any high-speed Internet access that is seemingly always 'on' and is faster than dial-up access over traditional analog or ISDN PSTN services.
The ideal telecommunication network has the following characteristics: broadband, multi-media, multi-point, multi-rate and economical implementation for a diversity of services (multi-services). The Broadband Integrated Services Digital Network (B-ISDN) was planned to provide these characteristics. Asynchronous Transfer Mode (ATM) was promoted as a target technology for meeting these requirements.
Overview
Different criteria for "broad" have been applied in different contexts and at different times. Its origin is in physics, acoustics, and radio systems engineering, where it had been used with a meaning similar to "wideband", or in the context of audio noise reduction systems, where it indicated a single-band rather than a multiple-audio-band system design of the compander. Later, with the advent of digital telecommunications, the term was mainly used for transmission over multiple channels. Whereas a passband signal is also modulated so that it occupies higher frequencies (compared to a baseband signal which is bound to the lowest end of the spectrum, see line coding), it is still occupying a single channel. The key difference is that what is typically considered a broadband signal in this sense is a signal that occupies multiple (non-masking, orthogonal) passbands, thus allowing for much higher throughput over a single medium but with additional complexity in the transmitter/receiver circuitry.
The term became popularized through the 1990s as a marketing term for Internet access that was faster than dial-up access (dial-up being typically limited to a maximum of 56 kbit/s). This meaning is only distantly related to its original technical meaning.
Since 1999, broadband Internet access has been a factor in public policy. In that year, at the World Trade Organization Biannual Conference called “Financial Solutions to Digital Divide” in Seattle, the term “Meaningful Broadband” was introduced to the world leaders, leading to the activation of a movement to close the digital divide. Fundamental aspects of this movement are to suggest that the equitable distribution of broadband is a fundamental human right.
Personal computing facilitated easy access, manipulation, storage, and exchange of information, and required reliable data transmission. Communicating documents by images and the use of high-resolution graphics terminals provided a more natural and informative mode of human interaction than do voice and data alone. Video teleconferencing enhances group interaction at a distance. High-definition entertainment video improves the quality of pictures, but requires much higher transmission rates.
These new data transmission requirements may require new transmission means other than the present overcrowded radio spectrum. A modern telecommunications network (such as the broadband network) must provide all these different services (multi-services) to the user.
Differences from old telephony
Conventional telephony communication used:
the voice medium only,
connected only two telephones per telephone call, and
used circuits of fixed bit-rates.
Modern services can be:
multimedia,
multi-point, and
multirate.
These aspects are examined individually in the following three sub-sections.
Multimedia
A multimedia call may communicate audio, data, still images, or full-motion video, or any combination of these media. Each medium has different demands for communication quality, such as:
bandwidth requirement,
signal latency within the network, and
signal fidelity upon delivery by the network.
The information content of each medium may affect the information generated by other media. For example, voice could be transcribed into data via voice recognition, and data commands may control the way voice and video are presented. These interactions most often occur at the communication terminals, but may also occur within the network.
Multipoint
Traditional voice calls are predominantly two party calls, requiring a point-to-point connection using only the voice medium. To access pictorial information in a remote database would require a point-to-point connection that sends low bit-rate queries to the database and high bit-rate video from the database. Entertainment video applications are largely point-to-multi-point connections, requiring one way communication of full motion video and audio from the program source to the viewers. Video teleconferencing involves connections among many parties, communicating voice, video, as well as data. Offering future services thus requires flexible management of the connection and media requests of a multipoint, multimedia communication call.
Multirate
A multirate service network is one which flexibly allocates transmission capacity to connections. A multimedia network has to support a broad range of bit-rates demanded by connections, not only because there are many communication media, but also because a communication medium may be encoded by algorithms with different bit-rates. For example, audio signals can be encoded with bit-rates ranging from less than 1 kbit/s to hundreds of kbit/s, using different encoding algorithms with a wide range of complexity and quality of audio reproduction. Similarly, full motion video signals may be encoded with bit-rates ranging from less than 1 Mbit/s to hundreds of Mbit/s. Thus a network transporting both video and audio signals may have to integrate traffic with a very broad range of bit-rates.
A single network for multiple services
Traditionally, different telecommunications services were carried via separate networks: voice on the telephone network, data on computer networks such as local area networks, video teleconferencing on private corporate networks, and television on broadcast radio or cable networks.
These networks were largely engineered for a specific application and were not suited to other applications. For example, the traditional telephone network was too noisy and inefficient for bursty data communication. On the other hand, data networks which stored and forwarded messages using computers had limited connectivity, usually did not have sufficient bandwidth for digitised voice and video signals, and suffered from unacceptable delays for real-time signals. Television networks using radio or cables were largely broadcast networks with minimum switching facilities.
It was desirable to have a single network for providing all these communication services to achieve the economy of sharing. This economy motivates the general idea of an integrated services network. Integration avoids the need for many overlaying networks, which complicates network management and reduces flexibility in the introduction and evolution of services. This integration was made possible with advances in broadband technologies and high-speed information processing of the 1990s.
While multiple network structures were capable of supporting broadband services, an ever-increasing percentage of broadband and MSO providers opted for fibre-optic network structures to support both present and future bandwidth requirements.
CATV (cable television), HDTV (high definition television), VoIP (voice over internet protocol), and broadband internet are some of the most common applications now being supported by fibre optic networks, in some cases directly to the home (FTTh – Fibre To The Home). These types of fibre optic networks incorporate a wide variety of products to support and distribute the signal from the central office to an optic node, and ultimately to the subscriber (end-user).
Broadband technologies
Telecommunications
In telecommunications, a broadband signalling method is one that handles a wide band of frequencies. "Broadband" is a relative term, understood according to its context. The wider (or broader) the bandwidth of a channel, the greater the data-carrying capacity, given the same channel quality.
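This bandwidth–capacity relationship is made precise by the Shannon–Hartley theorem, C = B log₂(1 + S/N); a quick Python check with illustrative numbers:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum error-free data rate in bit/s for bandwidth B and linear SNR."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 4 kHz voiceband telephone channel at ~40 dB SNR (10**4 linear)
# tops out near the familiar 56 kbit/s modem rate:
print(round(shannon_capacity(4_000, 10**4)))  # about 53,000 bit/s
```

Doubling the bandwidth doubles the capacity at a given SNR, which is why "broader" channels carry more data.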
In radio, for example, a very narrow band will carry Morse code, a broader band will carry speech, and a still broader band will carry music without losing the high audio frequencies required for realistic sound reproduction. This broad band is often divided into channels or "frequency bins" using passband techniques to allow frequency-division multiplexing instead of sending a higher-quality signal.
In data communications, a 56k modem will transmit a data rate of 56 kilobits per second (kbit/s) over a 4-kilohertz-wide telephone line (narrowband or voiceband). In the late 1980s, the Broadband Integrated Services Digital Network (B-ISDN) used the term to refer to a broad range of bit rates, independent of physical modulation details. The various forms of digital subscriber line (DSL) services are broadband in the sense that digital information is sent over multiple channels. Each channel is at a higher frequency than the baseband voice channel, so it can support plain old telephone service on a single pair of wires at the same time. However, when that same line is converted to a non-loaded twisted-pair wire (no telephone filters), it becomes hundreds of kilohertz wide (broadband) and can carry up to 100 megabits per second using very high-bit rate digital subscriber line (VDSL or VHDSL) techniques.
Modern networks have to carry integrated traffic consisting of voice, video and data. The Broadband Integrated Services Digital Network (B-ISDN) was designed for these needs. The types of traffic supported by a broadband network can be classified according to three characteristics:
Bandwidth is the amount of network capacity required to support a connection.
Latency is the amount of delay associated with a connection. Requesting low latency in the quality of service (QoS) profile means that the cells need to travel quickly from one point in the network to another.
Cell-delay variation (CDV) is the range of delays experienced by each group of associated cells. Low cell-delay variation means a group of cells must travel through the network without getting too far apart from one another.
Cellular networks utilize various standards for data transmission, including 5G which can support one million separate devices per square kilometer.
Requirements of the types of traffic
The types of traffic found in a broadband network (with examples) and their respective requirements are summarised in Table 1.
Computer networks
Many computer networks use a simple line code to transmit one type of signal using a medium's full bandwidth using its baseband (from zero through the highest frequency needed). Most versions of the popular Ethernet family are given names, such as the original 1980s 10BASE5, to indicate this. Networks that use cable modems on standard cable television infrastructure are called broadband to indicate the wide range of frequencies that can include multiple data users as well as traditional television channels on the same cable. Broadband systems usually use a different radio frequency modulated by the data signal for each band.
The total bandwidth of the medium is larger than the bandwidth of any channel.
The 10BROAD36 broadband variant of Ethernet was standardized by 1985, but was not commercially successful.
The DOCSIS standard became available to consumers in the late 1990s, to provide Internet access to cable television residential customers. Matters were further confused by the fact that the 10PASS-TS standard for Ethernet ratified in 2008 used DSL technology, and both cable and DSL modems often have Ethernet connectors on them.
TV and video
A television antenna may be described as "broadband" because it is capable of receiving a wide range of channels, while e.g. a low-VHF antenna is "narrowband" since it receives only 1 to 5 channels. The U.S. federal standard FS-1037C defines "broadband" as a synonym for wideband. "Broadband" in analog video distribution is traditionally used to refer to systems such as cable television, where the individual channels are modulated on carriers at fixed frequencies. In this context, baseband is the term's antonym, referring to a single channel of analog video, typically in composite form with separate baseband audio. The act of demodulating converts broadband video to baseband video. Fiber optics allows the signal to be transmitted farther without being repeated. Cable companies use a hybrid system, using fiber to transmit the signal to neighborhoods and then converting the signal from light to radio frequency to be transmitted over coaxial cable to homes. Doing so reduces the need for multiple head ends. A head end gathers all the information from the local cable networks and movie channels and then feeds the information into the system.
However, "broadband video" in the context of streaming Internet video has come to mean video files that have bit-rates high enough to require broadband Internet access for viewing. "Broadband video" is also sometimes used to describe IPTV Video on demand.
Alternative technologies
Power lines have also been used for various types of data communication. Although some systems for remote control are based on narrowband signaling, modern high-speed systems use broadband signaling to achieve very high data rates. One example is the ITU-T G.hn standard, which provides a way to create a local area network up to 1 Gigabit/s (which is considered high-speed as of 2014) using existing home business and home wiring (including power lines, but also phone lines and coaxial cables).
In 2014, researchers at Korea Advanced Institute of Science and Technology made developments on the creation of ultra-shallow broadband optical instruments.
Internet broadband
In the context of Internet access, the term "broadband" is used loosely to mean "access that is always on and faster than the traditional dial-up access".
A range of more precise definitions of speed have been prescribed at times, including:
"Greater than the primary rate" (which ranged from about 1.5 to 2 Mbit/s) —CCITT in "broadband service" in 1988.
"Internet access that is always on and faster than the traditional dial-up access" —US National Broadband Plan of 2009
4 Mbit/s downstream, 1 Mbit/s upstream —Federal Communications Commission (FCC), 2010
25 Mbit/s downstream, 3 Mbit/s upstream —FCC, 2015
50 Mbit/s downstream, 10 Mbit/s upstream —Canadian Radio-television and Telecommunications Commission (CRTC)
Broadband Internet service in the United States was effectively treated or managed as a public utility by net neutrality rules until being overturned by the FCC in December 2017.
Speed qualifiers
A number of national and international regulators categorize broadband connections according to upload and download speeds, stated in Mbit/s (megabits per second).
In Australia, the Australian Competition and Consumer Commission also requires Internet service providers to quote speeds during night-time and busy hours.
Global bandwidth concentration
Bandwidth has historically been very unequally distributed worldwide, with increasing concentration in the digital age. Historically only 10 countries have hosted 70–75% of the global telecommunication capacity (see pie-chart Figure on the right). In 2014, only three countries (China, the US, and Japan) host 50% of the globally installed telecommunication bandwidth potential. The U.S. lost its global leadership in terms of installed bandwidth in 2011, being replaced by China, which hosts more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total).
See also
Mobile broadband
Ultra-wideband
Wireless broadband
Nation specific:
Broadband mapping in the United States
Internet in Malaysia
Internet in the United Kingdom
List of broadband providers in the United States
National broadband plan
References
External links
Digital technology | Broadband | Technology | 3,155 |
8,828,258 | https://en.wikipedia.org/wiki/MOS-controlled%20thyristor | An MOS-controlled thyristor (MCT) is a voltage-controlled fully controllable thyristor, controlled by MOSFETs (metal–oxide–semiconductor field-effect transistors). It was invented by V.A.K. Temple in 1984, and was principally similar to the earlier insulated-gate bipolar transistor (IGBT). MCTs are similar in operation to GTO thyristors, but have voltage controlled insulated gates. They have two MOSFETs of opposite conductivity types in their equivalent circuits. One is responsible for turn-on and the other for turn-off. A thyristor with only one MOSFET in its equivalent circuit, which can only be turned on (like normal SCRs), is called an MOS-gated thyristor.
Positive voltage on the gate terminal with respect to the cathode turns the thyristor to the on state.
Negative voltage on the gate terminal with respect to the anode, which is close to cathode voltage during the on state, turns the thyristor to the off state.
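The gate-controlled switching described above can be reduced to a toy state machine. The sketch below is conceptual only (the class, method names, and the zero-volt thresholds are assumptions for illustration, not device datasheet values), but it captures the latching behaviour: the device holds its state between gate pulses, like other thyristors.

```python
# Conceptual sketch of MCT latching behaviour; names and thresholds
# are illustrative assumptions, not part of any device datasheet.

class MCT:
    """Toy model of an MOS-controlled thyristor's latched on/off state."""

    def __init__(self):
        self.on = False  # device starts in the blocking (off) state

    def gate_pulse(self, v_gate_cathode=0.0, v_gate_anode=0.0):
        # A positive gate-to-cathode voltage latches the device on.
        if v_gate_cathode > 0:
            self.on = True
        # A negative gate-to-anode voltage latches it off.
        elif v_gate_anode < 0:
            self.on = False
        # With no gate drive, the thyristor simply holds its state.
        return self.on


mct = MCT()
states = [
    mct.gate_pulse(v_gate_cathode=+5.0),  # turn-on pulse
    mct.gate_pulse(),                     # no drive: state is latched
    mct.gate_pulse(v_gate_anode=-5.0),    # turn-off pulse
]
```

The latching between pulses is what distinguishes thyristor-family devices from transistors, which conduct only while gate drive is applied.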
MCTs were commercialized only briefly.
External links
Field-effect-controlled thyristor
"MOS GTO—A Turn Off Thyristor with MOS-Controlled Emitter Shorts," IEDM 85, M. Stoisiek and H. Strack, Siemens AG, Munich FRG pp. 158–161.
"MOS-Controlled Thyristors—A New Class of Power Devices", IEEE Transactions on Electron Devices, Vol. ED-33, No. 10, Oct. 1986, Victor A. K. Temple, pp. 1609 through 1618.
References
Solid state switches
Power electronics
MOSFETs | MOS-controlled thyristor | Engineering | 360 |
16,713,695 | https://en.wikipedia.org/wiki/Elizabeth%20Furnace | Elizabeth Furnace was a blast furnace in the Shenandoah Valley that was used to create pig iron from 1836 to 1888 using Passage Creek for water power. Iron ore was mined nearby, purified in the furnace, and then pig iron was transported over the Massanutten Mountain to the South Fork of the Shenandoah River for forging in Harpers Ferry, West Virginia. The road used to transport this iron is still used today by hikers climbing to the top of the Massanutten Mountain via the Massanutten Trail. Much of the original stone structure still exists, as well as a restored cabin, and an outdoor recreation area.
Elizabeth Furnace Recreation Area
The Elizabeth Furnace Recreation Area, located in George Washington National Forest just north of Fort Valley, Virginia, consists of three main areas: the group camping area, the picnic area and the family camping area.
The group camping area, located at , includes fire rings and open pit toilets.
The picnic area, located at , includes picnic tables, open pit toilets, open fields, access to several well blazed and maintained hiking trails (most notably the Massanutten / Tuscarora Trail), mountain biking trails and fishing in Passage Creek. There is a trout hatchery near Passage Creek where a fishing license is required.
The family camping area, located at , includes 33 first-come, first-served pay camp sites, fire rings, and a restored 1830s cabin.
References
History of Virginia
Parks in Shenandoah County, Virginia
Buildings and structures in Shenandoah County, Virginia
George Washington and Jefferson National Forests
Ironworks in Virginia
1836 establishments in Virginia
1886 disestablishments in the United States
Virginia Historic Landmarks
Companies established in 1836
Foundries | Elizabeth Furnace | Chemistry | 343 |
64,538,576 | https://en.wikipedia.org/wiki/Peter%20Henrici%20Prize | The Peter Henrici Prize (; ; ) is a prize awarded jointly by ETH Zurich and the Society for Industrial and Applied Mathematics (SIAM) for "original contributions to applied analysis and numerical analysis and/or for exposition appropriate for applied mathematics and scientific computing". The prize is named in honor of the Swiss numerical analyst Peter Henrici, who was a professor at ETH Zurich for 25 years.
Description
The prize, initiated in 1999 with funds contributed by ETH Zurich, is awarded every four years. It consists of a certificate containing the citation and (as of 2023) a cash prize of $5,000 (US). The winner is chosen by a prize committee of four members: two chosen by SIAM and two by ETH Zurich. "The prize may be awarded to any member of the scientific community who meets the general guideline of the prize description."
Award ceremony
The award is presented every four years at the International Congress on Industrial and Applied Mathematics (ICIAM). The presentation of the prize is made by the SIAM president. The recipient is requested to give a lecture at the conference.
Prize winners
1999 : Germund Dahlquist
2003 : Ernst Hairer and Gerhard Wanner
2007 : Gilbert Strang
2011 : Bjorn Engquist
2015 : Eitan Tadmor
2019 : Weinan E
See also
Alice Roth Lecture Series
References
Awards established in 1999
Awards of the Society for Industrial and Applied Mathematics
Numerical analysis
ETH Zurich
1999 establishments in Switzerland | Peter Henrici Prize | Mathematics | 298 |
9,445,193 | https://en.wikipedia.org/wiki/Attack%20of%20the%20Alligators%21 | "Attack of the Alligators!" is an episode of Thunderbirds, a British Supermarionation television series created by Gerry and Sylvia Anderson and filmed by their production company AP Films (APF) for ITC Entertainment. Written by Alan Pattillo and directed by David Lane, it was first broadcast on 10 March 1966 on ATV Midlands as the 23rd episode of Series One. It is the 24th episode in the official running order.
Set in the 2060s, the series follows the exploits of International Rescue, an organisation that uses technologically advanced rescue vehicles to save human life. The main characters are ex-astronaut Jeff Tracy, founder of International Rescue, and his five adult sons, who pilot the organisation's main vehicles: the Thunderbird machines. The plot of "Attack of the Alligators!" sees a group of alligators grow to enormous size after their swamp is contaminated by a new food additive. When the reptiles lay siege to a house, International Rescue is called in to save the trapped occupants.
Combining science-fiction and haunted house themes, with a plot deliberately written to be "nightmarish", "Attack of the Alligators!" was filmed at APF Studios in Slough in late 1965. It was the first APF production to use live animals, the re-sized alligators being played by juvenile crocodiles. Filming of the episode was controversial as the crew resorted to using electric shocks to coax movement out of the animals. Concern for the crocodiles' welfare prompted an investigation by the Royal Society for the Prevention of Cruelty to Animals (RSPCA), which ultimately took no action against APF.
"Attack of the Alligators!" remains a favourite with Thunderbirds fans and commentators and is generally regarded as one of the series' best episodes. Along with "The Cham-Cham", the next episode to enter production, it went over-budget, causing the final instalment of Thunderbirds Series One ("Security Hazard") to be re-written as a clip show to lower costs. In 1976, "Attack of the Alligators!" inspired an episode of The New Avengers titled "Gnaws", written by ex-Thunderbirds writer Dennis Spooner.
Plot
A businessman, Blackmer, visits the reclusive Dr Orchard, a scientist who lives in a dilapidated house on the Ambro River. From the local plant Sidonicus americanus, Orchard has developed a food additive called "theramine" that increases the size of animals. Enlargement of animal stock presents a simple solution to world famine as well as other economic advantages. Blackmer's boatman, Culp, has been eavesdropping on the meeting. When a storm forces Blackmer to stay at the house overnight, Culp decides to steal the theramine and sell it to the highest bidder. Waiting until the house's other occupants are asleep, he breaks into Orchard's laboratory and pours some theramine into a vial. The rest of the supply is accidentally knocked into a sink and drains into the Ambro River.
When Blackmer and Culp leave the next morning, their boat is attacked by an alligator, now enormous due to the theramine contamination. Orchard's assistant, Hector McGill, manages to rescue Blackmer but Culp is nowhere to be found. The house is quickly surrounded by three giant alligators that repeatedly hurl themselves at the building with Orchard, Blackmer, McGill and the housekeeper, Mrs Files, trapped inside. At Mrs Files' suggestion, McGill transmits a distress call to International Rescue. This is picked up by John Tracy on the Thunderbird 5 space station and relayed to Tracy Island, where Jeff immediately dispatches his other four sons to the danger zone in Thunderbirds 1 and 2.
Arriving in Thunderbird 1 and transferring to a hover-jet, Scott fires the hover-jet's missile gun to disperse the alligators and accesses the house via the laboratory window. The room eventually caves in, forcing Scott and the others to retreat to the lounge. There, they are confronted by Culp, who holds them at gunpoint. Virgil, Alan and Gordon arrive in Thunderbird 2. Alan and Gordon man tranquiliser guns and subdue two of the alligators. When the third returns to the house, Alan exits Thunderbird 2 on another hover-jet to lure it away. He hits a tree and falls off the hover-jet, but is saved by Gordon, who tranquilises the alligator before it reaches Alan.
Threatening to empty the entire theramine vial into the Ambro unless he is given safe passage upriver, Culp sets off in Blackmer's boat. At the same time, Gordon launches Thunderbird 4. A fourth, much larger alligator appears and attacks the boat, killing Culp. Virgil disposes of the creature with a missile fired from Thunderbird 2. Later, Gordon finds the theramine vial intact on the riverbed. After his sons return to Tracy Island, Jeff announces that theramine will be subject to international security restrictions. Tin-Tin has been away on a shopping trip and has bought Alan a present for his birthday – a pygmy alligator.
Regular voice cast
Ray Barrett as John Tracy
Peter Dyneley as Jeff Tracy
Christine Finn as Tin-Tin Kyrano
David Graham as Gordon Tracy and Brains
David Holliday as Virgil Tracy
Shane Rimmer as Scott Tracy
Matt Zimmerman as Alan Tracy
Production
The episode was partly inspired by H. G. Wells' 1904 novel The Food of the Gods and How It Came to Earth and its theme of animal size change. Another influence was the 1927 film The Cat and the Canary and its 1939 re-make, both of which feature stalkers and a haunted house premise. In an interview, Gerry Anderson described housekeeper Mrs Files as a "Mrs Danvers-type character". Writer Alan Pattillo, who according to special effects supervisor Derek Meddings "had tried to come up with the most nightmarish rescue situation he could", had wanted to direct the episode as well. In the end, however, it was directed by David Lane. The opening scene features an insert shot of a stormy sky that later introduced the opening titles of The Prisoner.
"Attack of the Alligators!" was filmed in October and November 1965. The production overran its one-month schedule, forcing the crew to work extra hours, and sometimes long into the night, to finish the filming. Special effects assistant Ian Wingrove remembered that the episode's complex technical aspects had the crew "[working] day and night ... through a weekend". According to Lane, at one stage the shoot ran for 48 hours straight, with two editors processing the footage in shifts. He added: "I think Derek [Meddings] went three days, non-stop, just shooting."
The alligators in the episode were portrayed not by actual alligators, as Gerry Anderson had originally intended, but juvenile crocodiles. These were acquired from a private zoo in the north of England to double as the enlarged alligators on the episode's scale model sets and water tanks. The crocodiles that appear in the episode were long; a larger specimen, measuring , was not used as it proved too aggressive to be taken out of its box. The crew kept the water tanks heated to a suitably warm temperature and used electric shocks to coax movement out of the crocodiles. The animals were unpredictable and difficult to control, either basking in the heat of the studio lights or disappearing into the tanks for hours at a time. To make them more visible to the cameras, the crew attached them to guiding rods and co-ordinated their movements. The use of live animals in both puppet and model shots required an unusually high level of collaboration between the puppet and effects crews.
Effects director Brian Johnson and several other crew members refused to take part on animal welfare grounds. Camera operator Alan Perry did not remember any of the crocodiles being mistreated; series supervising director Desmond Saunders, however, claimed that more than one specimen died of pneumonia after being left in an unheated tank overnight. Director David Elliott, though filming a different episode at the time, recalled that another dislocated one of its limbs after receiving an electric shock. Puppet operator Christine Glanville admitted that the filming could not have been pleasant for the crocodiles because the tanks contained "all sorts of dirty paint water, oil and soapy water to make it look swampy." Saunders commented: "It was scandalous. It was one of the great episodes. Nevertheless there was a price to be paid for it."
Animal cruelty concerns prompted an anonymous telephone call to the RSPCA, which dispatched an inspector to the studios. After a brief investigation, no action was taken against APF. This coincided with a decision to increase the voltage of the electric shocks to induce greater movement from the crocodiles. According to Gerry Anderson, when the inspector arrived, "Meddings explained that his team were laying the crocodiles down and they weren't doing anything. They were just lying there. The RSPCA man said, well, they would, because of the warmth of the lamps. So Derek said, 'We've been giving them a touch with an electrode just to make them move.' The guy asked what voltage they were using and Derek said it was about 20 volts, and the guy said, 'Oh, they've got terribly thick skins, you know. If you want them to move, you'll have to pump it up to 60.'" The inspector later joined the production to work alongside the crocodiles' handler.
Filming with the crocodiles was often hazardous. During a promotional photoshoot featuring Lady Penelope (who does not appear in the episode), one of the animals attacked the puppet and destroyed one of its legs. While filming a scene, Meddings was pulling one of the crocodiles towards him on a rope when the animal slid out of its harness. Meddings wrote of the incident: "My crew never saw me move as fast as I did to get out of the tank when I pulled the rope and realised the creature was free." Of the largest crocodile, which was kept at the back of the stage when not being used, Wingrove recalled: "You would forget that it was there, then one day someone shouted 'Look out!' and we turned round to see this big crocodile walking across the stage – which cleared of people very quickly!"
Both this episode and "The Cham-Cham", the next to enter production, overspent their budgets. This led the writers to re-work the final episode of Thunderbirds Series One ("Security Hazard") as a clip show to reduce costs.
Broadcast and reception
Originally transmitted on 10 March 1966, "Attack of the Alligators!" had its first UK-wide network broadcast on 20 March 1992 on BBC2. During that channel's 2000-2001 Thunderbirds re-run, the episode became the eleventh to be repeated when it replaced "Brink of Disaster", which along with "The Perils of Penelope" had been postponed until the end of the run due to similarities between the story and real-world events (both episodes feature dangerous situations involving trains and 2000 had seen several major railway accidents, most notably the Hatfield rail crash).
Critical response
"Attack of the Alligators!" is a popular episode of Thunderbirds and is widely regarded as one of the series' best. It was well received by Sylvia Anderson, who described it as her favourite episode. Lew Grade, head of distributor ITC, expressed great satisfaction with the filming during a visit to APF Studios in 1965. Stephen La Rivière considers the story one of the most unusual of the series, while Peter Webber of DVD Monthly magazine calls the episode "just insane".
In 2004, "Attack of the Alligators!" was re-issued on DVD in North America as part of A&E Video's The Best of Thunderbirds: The Favorite Episodes. Reviewing the release for the website DVD Verdict, David Gutierrez awarded "Attack of the Alligators!" a perfect score of 100, declaring it the best episode in the collection and praising its production values: "It's like a beautifully directed short film". He elaborated: "'Attack of the Alligators!' serves as a terrific example of how strong Thunderbirds can look. It's not Howdy Doody sporting a jetpack – it's an hour-long programme that feels like a motion picture."
Susanna Lazarus of Radio Times suggests that the episode is memorable specifically for its crocodile footage. The techniques used to produce the footage have caused the episode to be described as "controversial" by some sources. Mark Pickavance of the website Den of Geek criticises the footage from a visual standpoint, arguing that the use of scale sets with young crocodiles, "shot in super close-up to make them seem huge", does not produce a convincing illusion of giant alligators. Author Dave Thompson compares the giant reptiles to Swamp Thing, a superorganism featured in the DC Comics Universe.
In 1976, Thunderbirds writer Dennis Spooner adapted the premise of "Attack of the Alligators!" while writing "Gnaws", an episode of The New Avengers featuring an enlarged rat.
References
Works cited
External links
1966 British television episodes
Fiction about size change
Thunderbirds (TV series) episodes
Works about crocodilians | Attack of the Alligators! | Physics,Mathematics | 2,756 |
15,369,710 | https://en.wikipedia.org/wiki/Ceramic%20colorants | Ceramic colorants are added to a glaze or a clay to create color. Most colorants are carbonates and oxides of certain metals, including the commonly used cobalt carbonate, cobalt oxide, chrome oxide, red iron oxide, and copper carbonate. These colorants can create a multitude of colors depending on the other materials they interact with, the temperature to which they are fired, and the kiln atmosphere.
Cobalt
Cobalt is commonly used in either its carbonate (CoCO3) or its oxide (Co3O4) form. In the presence of most fluxes, it yields blue colors ranging from low-saturation pastels to high-saturation midnight blues in both oxidation and reduction atmospheres. However, in the presence of magnesium, cobalt can become purple (when fired in oxidation) or pink (in reduction). Cobalt is also commonly used in black glazes and in washes as a decorative medium. Common additions range from 0.25–0.5% for low saturation to 1–2% for high saturation.
Chrome
"Chrome is a rather versatile and fickle colorant" (Chappell). Chrome oxide (Cr2O3) is commonly used for achieving greens. However, in the presence of zinc, chrome can produce brown. Glazes containing tin oxide will often blush pink if fumed with chrome; if chrome is present in the glaze along with the tin, intense pinks often occur. If fired above cone 6, chrome will fume and become a gas in the kiln. Common additions range from 0.25–0.5% at low saturation to 1–2% at higher saturation. Chrome is a refractory.
Red Iron
Iron is commonly used as a colorant in its red iron oxide (Fe2O3) form. Red iron oxide commonly produces earthy reds and browns, and it is the metal responsible for making earthenware red. Iron is another tricky colorant because of its ability to yield different colors under different circumstances. At low percentages (0.5–1%) and in the presence of potassium, iron becomes light blue or light blue-green in reduction (as seen in traditional celadons). In the presence of barium, iron may become yellow-green. In combination with calcium, red iron oxide can become pale yellow or amber in oxidation, or green in reduction. Common additions of red iron oxide range from 4% up to 10%.
Copper
Copper's carbonate form (CuCO3) is commonly used to produce greens, turquoise, and copper reds. Copper oxide (CuO) can be substituted if need be, but it has a larger particle size, and recipes should generally be adjusted to half the amount called for. In barium-based glazes, copper often yields greenish blues. Alkaline feldspar glazes containing copper, fired in reduction atmospheres, will often yield the oxblood or copper-red glazes first developed by the Chinese. When fired above cone 8, copper can become unstable and will often fume off a glaze in vapor form.
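The colorant additions quoted in the sections above can be summarized in a small lookup table. The structure and the helper function below are illustrative assumptions, not a published recipe; actual results depend on the flux chemistry, firing temperature, and kiln atmosphere described above.

```python
# Illustrative summary of the colorant additions quoted in the text.
# Ranges are percentages by weight of a dry glaze batch; None means
# the text does not quote a typical range for that colorant.

COLORANTS = {
    "cobalt carbonate": {"formula": "CoCO3", "base_color": "blue",
                         "low_pct": (0.25, 0.5), "high_pct": (1.0, 2.0)},
    "chrome oxide":     {"formula": "Cr2O3", "base_color": "green",
                         "low_pct": (0.25, 0.5), "high_pct": (1.0, 2.0)},
    "red iron oxide":   {"formula": "Fe2O3", "base_color": "earthy red/brown",
                         "low_pct": (0.5, 1.0), "high_pct": (4.0, 10.0)},
    "copper carbonate": {"formula": "CuCO3", "base_color": "green/turquoise",
                         "low_pct": None, "high_pct": None},
}


def grams_to_add(colorant, percent, dry_batch_grams):
    """Grams of colorant for a dry glaze batch at a given addition percentage."""
    if colorant not in COLORANTS:
        raise KeyError(f"unknown colorant: {colorant}")
    return dry_batch_grams * percent / 100.0


# e.g. a 0.5% cobalt carbonate addition to a 1 kg dry batch:
cobalt_g = grams_to_add("cobalt carbonate", 0.5, 1000.0)
```

Because colorant additions are stated as percentages of the dry batch, the same table scales to any batch size.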
References
Clay and Glazes: Revised edition. Chappell, James. Watson-Guptill Publications, New York. 1991.
Glazes for the Craft Potter. Fraser, Harry. Watson Guptill Publications, New York. 1974.
Clay and Glaze for the Potter. Rhodes, Daniel. Krause Publications. 2000. Expanded and Revised by Robin Hopper.
Ceramic materials | Ceramic colorants | Engineering | 731 |
13,398,531 | https://en.wikipedia.org/wiki/Metaclazepam | Metaclazepam (marketed under the brand name Talis) is a drug which is a benzodiazepine derivative. It is a relatively selective anxiolytic with less sedative or muscle relaxant properties than other benzodiazepines such as diazepam or bromazepam. It has an active metabolite N-desmethylmetaclazepam, which is the main metabolite of metaclazepam. There is no significant difference in metabolism between younger and older individuals.
Metaclazepam is slightly more effective as an anxiolytic than bromazepam, or diazepam, with a 15 mg dose of metaclazepam equivalent to 4 mg of bromazepam. Metaclazepam can interact with alcohol producing additive sedative-hypnotic effects. Fatigue is a common side effect from metaclazepam at high doses. Small amounts of metaclazepam as well as its metabolites enter into human breast milk.
See also
Benzodiazepine
References
Benzodiazepines
2-Chlorophenyl compounds
Ethers
Bromoarenes | Metaclazepam | Chemistry | 245 |
28,047,484 | https://en.wikipedia.org/wiki/Alfredo%20di%20Braccio%20Award | The Alfredo di Braccio Award is a prestigious prize for young Italian scientists given by the Italian Accademia Nazionale dei Lincei.
Award winners
Every year a top young chemist or physicist receives this honor for their research.
2008 Chemistry prize was awarded to Lorenzo Malavasi (University of Pavia, Italy)
2009 Physics prize was awarded (ex aequo) to Alessandro Mirizzi (University of Bari, Italy) and Alessio Recati (CNR Trento, Italy)
2010 Chemistry prize was awarded to Riccardo Baron (CSV Health, USA)
2011 Physics prize was awarded (ex aequo) to Antonio Politano (University of Calabria, Italy) and Alessandro Giuliani (Roma Tre University, Italy)
2012 Chemistry prize was awarded to Tiziano Montini (University of Trieste, Italy)
2013 Physics prize was awarded (ex aequo) to Francesco Pellegrino (University of Catania, Italy) and Pasquale Serpico (CNRS, France)
2014 Chemistry prize was awarded to Stefano Protti (University of Pavia, Italy)
2015 Physics prize was awarded (ex aequo) to Filippo Caruso (University of Florence, Italy), Michele Cicoli (University of Bologna, Italy), and Alessandro Pitanti (CNR Pisa, Italy)
2016 Chemistry prize was awarded to Francesca Maria Toma (Lawrence Berkeley National Laboratory, USA)
2017 Physics prize was awarded to Marco Genoni (University of Milan, Italy)
2018 Chemistry prize was awarded to Lorenzo Mino (University of Turin, Italy)
2019 Physics prize was awarded (ex aequo) to Matteo Lucchini and Andrea Crespi (Polytechnic University of Milan, Italy), and Lorenzo Rovigatti (University of Rome "La Sapienza", Italy)
2020 Chemistry prize was awarded to Raffaele Cucciniello (University of Salerno, Italy)
2021 Physics prize was awarded (ex aequo) to Eleonora Di Valentino (Durham University, UK) and Sunny Vagnozzi (University of Cambridge, UK)
2022 Chemistry prize was awarded to Gianvito Vilé (Polytechnic University of Milan, Italy)
See also
List of chemistry awards
List of physics awards
References
Chemistry awards
Physics awards
Italian science and technology awards | Alfredo di Braccio Award | Technology | 468 |
14,775,368 | https://en.wikipedia.org/wiki/Elastration | Elastration (a portmanteau of "elastic" and "castration") is a bloodless method of male castration and docking commonly used for livestock. It consists of banding the body part (scrotum or tail) with a tight rubber ring, cutting off the blood supply until the part atrophies and drops off. This method is favored for its simplicity, low cost, and minimal training requirements.
Castration
Elastration is the most common method used to castrate sheep and goats, but is also common in cattle.
Procedure
Elastration involves restraining the animal, without the need for anesthesia or sedation (unlike most other castration methods), in a position that provides access to the genitals. Special elastrator pliers are then used to place a tight latex (rubber) elastrator ring gently around the base of the scrotum. This cuts the blood supply to the scrotum and testicles, which will totally decay and slough off within a few weeks. Care must be taken during the procedure to ensure that both testicles are fully descended and properly located inside the scrotum, and that the animal's nipples are not included within the ring. Elastration is normally limited to castrations done during the first few weeks of life, and it cannot be used for species where the scrotum does not have a narrow base, such as pigs or horses. It is commonly recommended to not use this method on goats until they are 8 weeks or older. This is due to possible complications that could occur later in life like urinary calculi. Goats that are banded during the first month of age are most at risk.
Possible complications
Lithuania has banned the practice on the grounds that it is inhumane. There is some evidence that elastration is more painful if carried out on older animals, although much of the immediate pain of application can be prevented by injection of local anaesthesia into the scrotal neck and testicles. Practitioners usually try to elastrate as soon as possible, once the testicles have descended, to reduce the amount of dead tissue, infection, and accompanying complications. However, with some animals such as goats, castrating too early increases the frequency of kidney stones and urinary problems due to reduced size of the urethra, so elastration may be postponed. If bull calves are castrated within the first one or two days, the testes may sometimes be small and soft enough to be drawn up through the ring, and they continue to develop above the scrotum – surgical castration then becomes necessary.
Improper use of banding can result in death and charges of cruelty.
Docking
The same tool and rings are also used to dock the tails of many breeds of sheep, to prevent dung building up on the tails (which can lead to fly strike). This is usually done at the same time as castration of the ram lambs.
It is also called sheep marking in Australia and signaling in Argentina and Uruguay due to being done at the same time as the "signaling" or marking of the lambs' ears.
See also
Animal husbandry
Banding (medical)
Castration
Docking (animal)
References
Animal equipment
Theriogenology
Veterinary castration | Elastration | Biology | 653 |
7,402,988 | https://en.wikipedia.org/wiki/C-MAC | C-MAC is the television technology variant approved by the European Broadcasting Union (EBU) for satellite transmissions. The digital information is modulated using 2-4PSK (phase-shift keying), a variation of quadrature PSK in which only two of the phasor angles (±90°) are used.
The data capacity for C-MAC is 3 Mbit/s.
C-MAC data has to be sent to the transmitter separately from the vision.
The transmitter switches between FM (vision) and PSK (sound/data) modulation during each television line period.
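The 2-4PSK scheme above can be sketched in a few lines. This is an illustrative model under the assumption that each data bit advances the carrier phase by +90° or -90°, so the carrier visits four absolute phase states while only two phase transitions are ever used; it is not a broadcast-accurate modulator.

```python
import numpy as np

# Illustrative 2-4PSK sketch: each bit shifts the carrier phase by
# +90 deg (1) or -90 deg (0). This differential interpretation is an
# assumption for demonstration, not a transcription of the EBU spec.

def psk_2_4_symbols(bits):
    """Return one unit-amplitude complex phasor per bit period."""
    phase = 0.0
    symbols = []
    for bit in bits:
        phase += np.pi / 2 if bit else -np.pi / 2
        symbols.append(np.exp(1j * phase))
    return np.array(symbols)


syms = psk_2_4_symbols([1, 1, 0, 1, 0, 0])
```

Restricting the modulation to two phase steps keeps the envelope constant, which suits the FM/PSK switching per television line described below.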
C-MAC variants: E-MAC
E-MAC (Extended MAC) is the 16:9 version of C-MAC. E-MAC was originally designed for 15:9 pictures; it later adopted the 16:9 aspect ratio.
In E-MAC all the 4:3 information is transmitted exactly as in C-MAC so that C-MAC receivers are still compatible.
E-MAC hides extra luminance and chrominance information in the field blanking interval and parts of the line blanking interval.
E-MAC has a lower data capacity because luminance is hidden where data would usually be located.
A 'steering' signal is transmitted to indicate to the 16:9 receiver where the 4:3 picture information is located.
E-MAC receivers stitch the 4:3 and helper wide-screen data into a seamless 16:9 picture.
Technical details
MAC transmits luminance and chrominance data separately in time rather than separately in frequency (as other analog television formats do, such as composite video).
Audio and Scrambling (selective access)
Audio, in a format similar to NICAM, was transmitted digitally rather than as an FM sub-carrier.
The MAC standard included a standard scrambling system, EuroCrypt, a precursor to the standard DVB-CSA encryption system.
See also
TV transmission systems
Analog high-definition television systems
DVB-S
DVB-T
Multiplexed Analogue Components
PAL
SECAM
References
Television technology
Video formats | C-MAC | Technology | 414 |
1,933,920 | https://en.wikipedia.org/wiki/List%20of%20Mozilla%20products | The following is a list of Mozilla Foundation / Mozilla Corp. products. All products, unless specified, are cross-platform by design.
Client applications
Firefox Browser - An open-source web browser.
Firefox Focus - A privacy-focused mobile web browser.
Firefox for Android (also Firefox Daylight) - A web browser for mobile phones and smaller non-PC devices.
Firefox Monitor - An online service for alerting the user when their email addresses and passwords have been leaked in data breaches.
Firefox Relay - A privacy-focused email masking service which allows for the creation of disposable email aliases.
Mozilla Thunderbird - An email and news client.
Mozilla VPN - A virtual private network client.
SeaMonkey (formerly Mozilla Application Suite) - An Internet suite.
ChatZilla - The IRC component, also available as a Firefox extension.
Mozilla Calendar - Originally planned to be a calendar component for the suite; became the base of Mozilla Sunbird.
Mozilla Composer - The HTML editor component.
Mozilla Mail & Newsgroups - The email and news component.
Components
DOM Inspector - An inspector for DOM.
Gecko - The layout engine.
Necko - The network library.
Rhino - The JavaScript engine written in Java programming language.
Servo - A layout engine.
SpiderMonkey - The JavaScript engine written in C programming language.
Venkman - A JavaScript debugger.
Development tools
Bugzilla - A bugtracker.
HTTP Observatory - A tool that helps developers and website administrators improve the security of their site by determining the site's compliance with best security practices.
Rust (programming language) - A systems programming language originally developed at Mozilla Research.
Skywriter - An extensible and interoperable web-based framework for code editing.
Treeherder - A detective tool that allows developers to manage software builds and to correlate build failures on various platforms and configurations with particular code changes (Predecessors: TBPL and Tinderbox).
API/Libraries
Netscape Portable Runtime (NSPR) - A platform abstraction layer that makes operating systems appear the same.
Network Security Services (NSS) - A set of libraries designed to support cross-platform development of security-enabled client and server applications.
Network Security Services for Java (JSS) - A Java interface to NSS.
Personal Security Manager (PSM) - A set of libraries that performs cryptographic operations on behalf of a client application.
Other tools
Client Customization Kit (CCK) - A set of tools that helps distributors customize and distribute the client.
Mozbot - An IRC bot written in Perl.
Mozilla Directory SDK - For writing applications which access, manage, and update the information stored in an LDAP directory.
Mozilla Raindrop - Was an upcoming technology for sending messages.
Mstone - A multi-protocol stress and performance measurement tool.
Thimble - Mozilla's web-based educational code editor, part of the company's "Webmakers" project (Thimble was shut down in December 2019 and its projects were migrated to Glitch).
Technologies
JavaScript - The de facto client-side scripting language for the web, which originated in Netscape Navigator.
NPAPI - A plugin architecture that originated in Netscape Navigator.
XBL - A markup language for binding an XML element with its behavior(s).
XPCOM - A software componentry model similar to COM.
XPConnect - A binding between XPCOM and JavaScript.
XPInstall - A technology for installing extensions.
XTF - A framework for implementing new XML elements.
XUL - A markup language for user interfaces.
Abandoned
Bonsai - A web-based interface for CVS.
Camino - A web browser intended for Mac OS X.
Classilla - A web browser made for PowerPC-based classic Macintosh operating systems.
ElectricalFire - A Java virtual machine using just-in-time compilation.
Firefox Lockwise - A mobile application and integral part of Firefox Browser, for securely storing & syncing passwords.
Firefox OS - An open source operating system for smartphones and tablet computers mainly based on HTML5.
Firefox Reality - A web browser optimized for virtual reality.
Firefox Send - A web-based file sharing platform with end-to-end encryption and a link that automatically expires.
Mariner - An improved layout engine based on Netscape Communicator code.
Minimo - A web browser for handheld devices.
Mozilla Grendel - A mail and news client written in the Java programming language.
Mozilla Persona - A decentralized authentication system for the web.
Mozilla Sunbird - A calendar client.
Xena ("Javagator") - A communicator suite rewritten in the Java programming language.
References
External links
The Mozilla.org Projects List
Mozilla
July 2028 lunar eclipse
A partial lunar eclipse will occur at the Moon's ascending node of orbit on Thursday, July 6, 2028, with an umbral magnitude of 0.3908. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A partial lunar eclipse occurs when one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Because the eclipse occurs about 4 days before apogee (on July 11, 2028, at 18:25 UTC), the Moon's apparent diameter will be smaller than average.
Visibility
The eclipse will be completely visible over east Africa, Asia, Antarctica, and Australia, seen rising over west and central Africa and Europe and setting over the central Pacific Ocean.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2028
A partial lunar eclipse on January 12.
An annular solar eclipse on January 26.
A partial lunar eclipse on July 6.
A total solar eclipse on July 22.
A total lunar eclipse on December 31.
Metonic
Preceded by: Lunar eclipse of September 18, 2024
Followed by: Lunar eclipse of April 25, 2032
Tzolkinex
Preceded by: Lunar eclipse of May 26, 2021
Followed by: Lunar eclipse of August 19, 2035
Half-Saros
Preceded by: Solar eclipse of July 2, 2019
Followed by: Solar eclipse of July 13, 2037
Tritos
Preceded by: Lunar eclipse of August 7, 2017
Followed by: Lunar eclipse of June 6, 2039
Lunar Saros 120
Preceded by: Lunar eclipse of June 26, 2010
Followed by: Lunar eclipse of July 18, 2046
Inex
Preceded by: Lunar eclipse of July 28, 1999
Followed by: Lunar eclipse of June 17, 2057
Triad
Preceded by: Lunar eclipse of September 5, 1941
Followed by: Lunar eclipse of May 8, 2115
Lunar eclipses of 2027–2031
Saros 120
Half-Saros cycle
A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 127.
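As an illustrative sketch (not part of the article; the 2019 eclipse time used below is approximate), the half-saros relation described above can be checked with Python's datetime arithmetic: adding half of the roughly 6585.32-day saros period to the July 2, 2019 solar eclipse lands within about a day of this July 6, 2028 lunar eclipse.

```python
from datetime import datetime, timedelta

# One saros is about 6585.32 days; a half saros ("9 years and 5.5 days")
# links this lunar eclipse to solar eclipses of Solar Saros 127.
SAROS_DAYS = 6585.32

# Approximate time of the preceding solar eclipse (July 2, 2019).
solar_2019 = datetime(2019, 7, 2, 19, 23)

# Adding half a saros lands within about a day of the July 6, 2028 eclipse.
lunar_2028 = solar_2019 + timedelta(days=SAROS_DAYS / 2)
print(lunar_2028.date())
```

The small residual offset reflects that the saros is only an average period; actual eclipse times drift by fractions of a day per cycle.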
See also
List of lunar eclipses and List of 21st-century lunar eclipses
Notes
External links
2028-07
2028 in science
Granule (geology)
A granule is a clast of rock with a particle size of 2 to 4 millimetres based on the Krumbein phi scale of sedimentology. Granules are generally considered to be larger than sand (0.0625 to 2 millimetres diameter) and smaller than pebbles (4 to 64 millimetres diameter). A rock made predominantly of granules is termed a granule conglomerate.
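As an illustrative sketch (not from the article; the helper names are hypothetical), the Krumbein phi scale maps a grain diameter d in millimetres to phi = -log2(d), and the size boundaries quoted above can be turned into a simple classifier:

```python
import math

def krumbein_phi(diameter_mm):
    """Krumbein phi scale: phi = -log2(d / d0), with reference d0 = 1 mm."""
    return -math.log2(diameter_mm)

def classify_clast(diameter_mm):
    """Size classes from the text: sand 0.0625-2 mm, granule 2-4 mm, pebble 4-64 mm."""
    if 0.0625 <= diameter_mm < 2:
        return "sand"
    elif 2 <= diameter_mm < 4:
        return "granule"
    elif 4 <= diameter_mm < 64:
        return "pebble"
    return "other"
```

On this scale granules span phi values of -1 (at 2 mm) to -2 (at 4 mm), with coarser grains having more negative phi.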
See also
Gravel
Particle size (grain size)
References
Stone (material)
Sedimentology
Granularity of materials
Drilling jumbo
A drilling jumbo or drill jumbo is a rock drilling machine.
Use
Drilling jumbos are usually used in underground mining when mining is done by drilling and blasting. They are also used in tunnelling when rock hardness prevents the use of tunnelling machines. The drilling jumbo is considered a powerful tool for the labor-intensive process of mineral extraction.
Description
A drilling jumbo consists of one, two or three rock drill carriages and sometimes a platform, on which the miner stands to load the holes with explosives, clear the face of the tunnel or perform other tasks. The carriages are bolted onto the chassis, which supports the miner's cabin as well as the engine. Although modern drilling jumbos are relatively large, there are smaller ones for use in cramped conditions. Whereas modern jumbos are usually fitted with rubber tires and diesel-powered, there also exist variants with steel wheels that ride on rails, and even single-carriage sled-mounted ones. Electric power is also common, and historic jumbos were powered by compressed air. Electricity and compressed air produce little to no exhaust gas, which is preferable when work is done in smaller tunnels where good ventilation is difficult. The drilling jumbo was invented in 1849 by J. J. Couch of Philadelphia.
References
Mining equipment
Plasmodium falciparum erythrocyte membrane protein 1
Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) is a family of proteins present on the membrane surface of red blood cells (RBCs or erythrocytes) that are infected by the malarial parasite Plasmodium falciparum. PfEMP1 is synthesized during the parasite's blood stage (erythrocytic schizogony) inside the RBC, during which the clinical symptoms of falciparum malaria are manifested. Acting as both an antigen and adhesion protein, it is thought to play a key role in the high level of virulence associated with P. falciparum. It was discovered in 1984 when it was reported that infected RBCs had unusually large-sized cell membrane proteins, and these proteins had antibody-binding (antigenic) properties. An elusive protein, its chemical structure and molecular properties were revealed only after a decade, in 1995. It is now established that there is not one but a large family of PfEMP1 proteins, genetically regulated (encoded) by a group of about 60 genes called var. Each P. falciparum is able to switch on and off specific var genes to produce a functionally different protein, thereby evading the host's immune system. RBCs carrying PfEMP1 on their surface stick to endothelial cells, which facilitates further binding with uninfected RBCs (through the processes of sequestration and rosetting), ultimately helping the parasite both to spread to other RBCs and to bring about the fatal symptoms of P. falciparum malaria.
Introduction
Malaria is the deadliest among infectious diseases, accounting for approximately 429,000 human deaths in 2015 as of the latest estimate by the World Health Organization. In humans, malaria can be caused by five Plasmodium parasites, namely P. falciparum, P. vivax, P. malariae, P. ovale and P. knowlesi. P. falciparum is the most dangerous species, attributed to >99% of malaria's death toll, with 70% of these deaths occurring in children under the age of five years. The parasites are transmitted through the bites of female mosquitoes (of the genus Anopheles). Before invading the RBCs and causing the symptoms of malaria, the parasites first multiply in the liver. The daughter parasites, called merozoites, then infect only RBCs. They undergo structural development inside the RBCs, becoming trophozoites and schizonts. It is during this period that malarial symptoms are produced.
Unlike RBCs infected by other Plasmodium species, P. falciparum-infected RBCs had been known to spontaneously stick together. By the early 1980s, it was established that when the parasite (both the trophozoite and schizont forms) enters the blood stream and infects RBCs, the infected cells form knobs on their surface. Then they become sticky, and get attached to the walls (endothelium) of the blood vessels through a process called cytoadhesion, or cytoadherence. Such attachment favours binding with and accumulation of other RBCs. This process is known as sequestration. It is during this condition that the parasites induce an immune response (antigen-antibody reaction) and evade destruction in the spleen. Although the process and significance of sequestration were described in detail by two Italian physicians Amico Bignami and Ettore Marchiafava in the early 1890s, it took a century to discover the actual factor for the stickiness and virulence.
Discovery
PfEMP1 was discovered by Russell J. Howard and his colleagues at the US National Institutes of Health in 1984. Using the techniques of radioiodination and immunoprecipitation, they found a unique but yet unknown antigen from P. falciparum-infected RBCs that appeared to cause binding with other cells. Since the antigenic protein could only be detected in infected cells, they asserted that the protein was produced by the malarial parasite, and not by RBCs. The antigen was large and appeared to be different in size in different strains of P. falciparum obtained from night monkey (Aotus). In one strain, called Camp (from Malaysia), the antigen was found to have a molecular size of approximately 285 kDa; while in the other, called St. Lucia (from El Salvador), it was approximately 260 kDa. Both antigens bind to cultured skin cancer (melanoma) cells. But the researchers failed to confirm whether or not the protein actually was an adhesion molecule to the wall of blood vessels. Later in the same year, they found out that the unknown antigen was associated only with RBCs having small lumps called knobs on their surface. The first human RBC antigen was reported in 1986. Howard's team found that the antigens from Gambian children, who were suffering from falciparum malaria, were similar to those from the RBCs of night monkey. They determined that the molecular sizes of the proteins ranged from 250 to 300 kDa.
In 1987, they discovered another type of surface antigen from the same Camp and St. Lucia strains of malarial parasites. This was also a large-sized protein of about 300 kDa, but quite different from the antigens reported in 1984. The new protein was unable to bind to melanoma cells and was present only inside the cell. Hence, they named the earlier protein Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), to distinguish it from the newly identified Plasmodium falciparum erythrocyte membrane protein 2 (PfEMP2). The distinction was confirmed the next year, with the additional information that PfEMP1 is relatively less abundant.
Although some of the properties of PfEMP1 were firmly established, the protein was difficult to isolate due to its low occurrence. Five years after its discovery, one of the original researchers Irwin Sherman began to doubt the existence of PfEMP1 as a unique protein. He argued that the antigen could be merely a surface protein of RBCs that changes upon infection with malarial parasites. A consensus was achieved in 1995 following the identification (by cloning) of the gene for PfEMP1. The discovery of the genes was independently reported by Howard's team and two other teams at NIH. Howard's team identified two genes for PfEMP1, and recombinant protein products of these genes were shown to have antigenic and adhesive properties. They further affirmed that PfEMP1 is the key molecule in the ability of P. falciparum to evade the host's immune system. Joseph D. Smith and others showed that PfEMP1 is actually a large family of proteins encoded by a multigene family called var. The gene products can bind to a variety of receptors including those on endothelial cells. Xin-Zhuan Su and others showed that there could be more than 50 var genes which are distributed on different chromosomes of the malarial parasite.
Structure
PfEMP1 is a large family of proteins having high molecular weights ranging from 200 to 350 kDa. The wide range of molecular size reflects extreme variation in the amino acid composition of the proteins. But all the PfEMP1 proteins can be described as having three basic structural components, namely, an extracellular domain (ECD), a transmembrane domain (TMD) and an intracellular acidic terminal segment (ATS). The extracellular domain is fully exposed on the cell surface, and is the most variable region. It consists of a number of sub-domains, including a short and conserved N terminal segment (NTS) at the outermost region, followed by a highly variable Duffy-binding-like (DBL) domain, sometimes a Ca2+-binding C2 domain, and then one or two cysteine-rich interdomain regions (CIDRs).
Duffy-binding-like domains are so named because of their similarity to the Duffy binding proteins of P. vivax and P. knowlesi. There are six variant types of DBL, named DBLα, DBLβ, DBLγ, DBLδ, DBLε and DBLζ. CIDR is also divided into three classes: CIDRα, CIDRβ and CIDRγ. Both DBL and CIDR have an additional type called PAM, so named because of their specific involvement in pregnancy-associated malaria (PAM). In spite of the diverse DBL and CIDR proteins, the extracellular amino terminal region is partly conserved, consisting of about 60 amino acids of NTS, one each of DBLα and CIDR1 proteins in tandem. This semi-conserved DBLα-CIDR1 region is called the head structure. The last CIDR region joins the TMD, which is embedded in the cell membrane. The TMD and ATS are highly conserved among different PfEMP1s, and their structures have been solved using solution NMR.
The head structure is followed by a variable combination of diverse DBL and CIDR proteins, and in many cases along with C2. This variation gives rise to different types of PfEMP1. The DBL-CIDR combination in a particular type of PfEMP1 protein is never random, but organized into specific sequences known as domain cassettes. In some domain cassettes, there are only two or few DBL domains and CIDR domains, but in others they cover the entire length of the PfEMP1. These differences are responsible for different binding capacity among different PfEMP1s. For instance, among the most well-known types, VAR3 (earlier called type 3 PfEMP1) is the smallest, consisting of only NTS with DBL1α and DBL2ε domains in the ECD. Its molecular size is approximately 150 kDa. In domain cassette (DC) 4 type, the ECD is made up of three domains DBLα1.1/1.4, CIDRα1.6 and DBLβ3. The DBLβ3 domain contains a binding site for intercellular adhesion molecule 1 (ICAM1). This is particularly implicated with the development of brain infection. VAR2CSA is atypical in having a single domain cassette that consists of three N terminal DBLPAM domains followed by three DBLε domains and one CIDRPAM. The seven domains always occur together. The usual NTS is absent. The protein specifically binds to chondroitin sulphate A (CSA); hence the name VAR2CSA.
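As an illustrative sketch (not part of the article; domain names are simplified and the data and helper are hypothetical), the domain-cassette idea above can be modeled as ordered lists of domains, with a check for the semi-conserved head structure (NTS, then a DBLα domain, then a CIDR domain):

```python
# Toy model of the PfEMP1 extracellular domain (ECD) architectures described
# in the text; names are simplified from the article's notation.
ECD_ARCHITECTURES = {
    "VAR3":    ["NTS", "DBLα1", "DBLε2"],                    # smallest known variant
    "DC4":     ["NTS", "DBLα1.1", "CIDRα1.6", "DBLβ3"],      # DBLβ3 binds ICAM-1
    "VAR2CSA": ["DBLpam"] * 3 + ["DBLε"] * 3 + ["CIDRpam"],  # no NTS; binds CSA
}

def has_head_structure(domains):
    """True if the ECD starts with the semi-conserved NTS-DBLα-CIDR head."""
    return (len(domains) >= 3
            and domains[0] == "NTS"
            and domains[1].startswith("DBLα")
            and domains[2].startswith("CIDR"))
```

Under this toy model, DC4 carries the head structure, while VAR2CSA (which lacks the NTS) and the truncated VAR3 do not, mirroring the exceptions noted in the text.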
Synthesis and transport
The PfEMP1 proteins are regulated and produced (encoded) by about 60 different var genes, but an individual P. falciparum would switch on only a single var gene at a time to produce only one type of PfEMP. The var genes are distributed in two exons. Exon 1 encodes amino acids of the highly variable ECD, while exon 2 encodes those of the conserved TMD and ATS. Based on their location in the chromosome and sequence, the var genes are generally classified into three major groups, A, B, and C, and two intermediate groups, B/A and B/C; or sometimes simply into five classes, upsA, upsB, upsC, upsD, and upsE respectively. Groups A and B are found towards the terminal end (subtelomeric) region of the chromosome, while group C is in the central (centromeric) region.
Once the PfEMP1 protein is fully synthesized (translated), it is carried through the cytoplasm towards the RBC membrane. The NTS is crucial for this directional movement. Within the cytoplasm, the newly synthesized protein is attached to a Golgi-like membranous vesicle called the Maurer's cleft. Inside the Maurer's clefts is a family of proteins called Plasmodium helical interspersed subtelomeric (PHIST) proteins. Of the PHIST proteins, PFI1780w and PFE1605w bind the intracellular ATS of PfEMP1 during transport to the RBC membrane.
The PfEMP1 molecule is deposited at the RBC membrane at the knobs. These knobs are easily identified as conspicuous bumps on the infected RBCs from the early trophozoite stage onward. The malarial parasite cannot induce its virulence on RBCs without knobs. As many as 10,000 knobs are distributed throughout the surface of a mature infected RBC, and each knob is 50-80 nm in diameter. The export of PfEMP1 from the Maurer's cleft to the RBC membrane is mediated by binding of another protein produced by the parasite called knob-associated histidine-rich protein (KAHRP). KAHRP enhances the structural rigidity of the infected RBC and the adhesion of PfEMP1 on the knobs. It is also directly responsible for forming knobs, as indicated by the fact that kahrp gene-deficient malarial parasites do not form knobs. To form a knob, KAHRP aggregates several membrane skeletal proteins of the host RBC, such as spectrin, actin, ankyrin R, and the spectrin–actin band 4.1 complex. Upon arrival at the knob, PfEMP1 is attached to the spectrin network using the PHIST proteins.
Function
The primary function of PfEMP1 is to bind and attach RBCs to the wall of the blood vessels. The most important binding properties of P. falciparum known to date are mediated by the head structure of PfEMP1, consisting of DBL domains and CIDRs. DBL domains can bind to a variety of cell receptors including thrombospondin (TSP), complement receptor 1 (CR1), chondroitin sulfate A (CSA), P-selectin, endothelial protein C receptor (EPCR), and heparan sulfate. The DBL domain adjacent to the head structure binds to ICAM-1. CIDRs mainly bind to a large variety of cluster determinant 36 (CD36). These bindings produce the pathogenic characteristics of the parasite, such as sequestration of infected cells in different tissues, invasion of RBCs, and clustering of infected cells by a process called rosetting.
CIDR1 protein in the semi-conserved head structure is the principal and best understood adhesion site of PfEMP1. It binds with CD36 on endothelial cells. Only group B and C proteins are able to bind, and only those having CIDRα2-6 sequence types. On the other hand, group A proteins have either CIDRα1 or CIDRβ/γ/δ, and they are responsible for the most severe condition of malaria. Binding with ICAM-1 is achieved through the DBLβ domain adjacent to the head structure. However, many PfEMP1s having the DBLβ domain do not bind to ICAM-1, and it appears that only DBLβ paired with the C2 domain can bind to ICAM-1. The DBLα-CIDRγ tandem pair is the main factor for rosetting, sticking the infected RBC together with uninfected cells and thereby clogging the blood vessels. This activity is performed through binding with CR1.
The most dangerous malarial infection is in the brain and is called cerebral malaria. In cerebral malaria, the PfEMP1 proteins involved are DC8 and DC13. They are named after the number of domain cassettes they contain, and are capable of binding endothelial cells not only of the brain but also of other organs, including the lung, heart, and bone marrow. Initially, it was assumed that PfEMP1 binds to ICAM-1 in the brain, but DC8 and DC13 were found incompatible with ICAM-1. Instead, DC8 and DC13 specifically bind to EPCR using CIDRα sub-types such as CIDRα1.1, CIDRα1.4, CIDRα1.5 and CIDRα1.7. However, it was later shown that DC13 can bind to both ICAM-1 and EPCR. EPCR is thus a potential vaccine and drug target in cerebral malaria.
VAR2CSA is unique in that it is mostly produced during pregnancy, in the placenta (the condition called pregnancy-associated malaria, PAM, or placental malaria). The majority of PAM is therefore due to VAR2CSA. Unlike other PfEMP1s, VAR2CSA binds to chondroitin sulfate A present on the vascular endothelium of the placenta. Although its individual domains can bind to CSA, its entire structure is used for complete binding. The major complication in PAM is low-birth-weight babies. However, women who survive the first infection generally develop an effective immune response. In P. falciparum-prevalent regions of Africa, pregnant women are found to have high levels of antibody (immunoglobulin G, or IgG) against VAR2CSA, which protects them from the placenta-attacking malarial parasite. They are noted for giving birth to heavier babies.
Clinical importance
In a normal human immune system, malarial parasite binding to RBCs stimulates the production of antibodies that attack the PfEMP1 molecules. Binding of antibody with PfEMP1 disables the binding properties of DBL domains, causing loss of cell adhesion, and the infected RBC is destroyed. In this scenario, malaria is avoided. However, to evade the host's immune response, different P. falciparum switch on and off different var genes to produce functionally different (antigenically distinct) PfEMP1s. Each variant type of PfEMP1 has different binding property, and thus, is not always recognized by antibodies.
By default, all the var genes in the malarial parasite are inactivated. Activation (gene expression) of var is initiated upon infection of the organs. Further, in each organ only specific var genes are activated. The severity of the infection is determined by the type of organ in which infection occurs, and hence the type of var gene activated. For example, in the most severe cases of malaria, such as cerebral malaria, only the var genes for the PfEMP1 proteins DC8 and DC13 are switched on. Upon the synthesis of DC8 and DC13, their CIDRα1 domains bind to EPCR, which brings about the onset of severe malaria. The abundance of the gene products (transcripts) of these PfEMP1 proteins (specifically the CIDRα1 subtype transcripts) directly relates to the severity of the disease. This further indicates that preventing the interaction between CIDRα1 and EPCR would be a good target for a potential vaccine. In pregnancy-associated malaria, another severe type of falciparum malaria, the gene for VAR2CSA (named var2csa) is activated in the placenta. Binding of VAR2CSA to CSA is the primary cause of premature delivery, death of the foetus and severe anaemia in the mother. This indicates that drugs targeting VAR2CSA would be able to prevent the effects of malaria, and for this reason VAR2CSA is the leading candidate for development of a PAM vaccine.
References
falciparum erythrocyte
Antigens
Apicomplexan proteins
Phytanoyl-CoA
Phytanoyl-CoA is a coenzyme A derivative of phytanic acid.
The enzyme phytanoyl-CoA hydroxylase catalyses hydroxylation of phytanoyl-CoA.
References
Thioesters of coenzyme A
Aspen Achievement Academy
Aspen Achievement Academy was a wilderness therapy program for adolescents, based in Loa, Utah.
It was operated as a part of Aspen Education Group.
In March 2011 the program was merged, in name only, with another Utah wilderness therapy program, Outback Therapeutic Expeditions.
According to the program's promotional materials, Aspen Achievement Academy enrolled adolescent males and females, ages 13–17, with a history of moderate to severe emotional and behavioral problems, such as low self-esteem, academic underachievement, substance abuse, and family conflict. The program had a flexible length of stay, with a minimum of 35 days. Some parents used the services of a teen escort company to transport their children to the site.
The program's website stated that the program was JCAHO certified and licensed as an Outdoor Treatment Program by the State of Utah Department of Human Services. It had memberships in the National Association of Therapeutic Schools and Programs and the Outdoor Behavioral Healthcare Industry Council.
In news media and popular culture
Aspen Achievement Academy has been a subject of several media reports and works of popular culture:
The 1999 book Shouting at the Sky: Troubled Teens and the Promise of the Wild by Gary Ferguson, recounts the author's experiences and observations during several months he spent in the wilderness with teens at Aspen Achievement Academy.
The third season of the UK TV series Brat Camp was filmed at Aspen Achievement Academy, and aired in the UK beginning in February 2006.
In January 1996, six teenagers ran away from an Aspen group. They were found by law enforcement officials and returned to the program, but the incident raised concerns that future escapees might assault tourists, hikers or recreationists on the public lands that Aspen used. Afterward, the Bureau of Land Management, which manages these lands, was reported to have conducted a review to determine whether to renew or terminate Aspen's access permit.
In April 2007 a 16-year-old male student died after hanging himself with a piece of seatbelt webbing.
History
Aspen Achievement Academy (AAA) was founded in 1988 by Doug Nelson, Dr. Keith Hooker, Doug Cloward, and Madolyn Liebing, Ph.D. It was originally named Wilderness Academy. AAA is known for being the first wilderness therapy program to have a clinician (Liebing) who provided individual therapy. AAA was also the first Utah State licensed wilderness therapy program.
References
White, W. (2012) Chapter 2: “A History of Adventure Therapy” in Adventure Therapy: Theory, Practice, and Research by Gass, M, Gillis, L. Russell, K. Routledge/Bruner-Mazel Press.
External links
Aspen Achievement Academy program homepage
Behavior modification
Troubled teen programs
Electric Power Research Institute
The Electric Power Research Institute (EPRI) is an American independent, nonprofit organization that conducts research and development related to the generation, delivery, and use of electricity to help address challenges in the energy industry, including reliability, efficiency, affordability, health, safety, and the environment.
EPRI's principal offices and laboratories are located in Palo Alto, California; Charlotte, North Carolina; Knoxville, Tennessee; Washington, DC; and Lenox, Massachusetts.
History
In November 1965, the Great Northeastern Blackout left 30 million people in the United States without electricity. Historic in scale and impact, it demonstrated the nation's growing dependence upon electricity and its vulnerability to power loss. The event marked a watershed moment for the U.S. electricity sector and triggered the creation of the Electric Power Research Institute.
Following the blackout, leaders in Congress held hearings in the early 1970s about the lack of research supporting the power industry.
Dr. Chauncey Starr, then the Dean of the UCLA School of Engineering and Applied Science, led the initiative, proposed by Congress, to create an independent research and development organization to support the electricity sector and address its technical and operational challenges. In 1972, at a formal hearing of the U.S. Senate Commerce Committee, Starr presented a vision for the Electric Power Research Institute to serve Congress's mandate for objective, scientific research. Starr served as the first President of EPRI for five years and formally retired at age 65, but continued to work at EPRI for the next 30 years.
Research
According to EPRI's 2018 Research Portfolio, EPRI's work encompasses research in technology, the workforce, operations, systems planning and other areas that guide and support the development of new regulatory frameworks, market opportunities, and value to energy consumers.
Generation
EPRI's generation research focuses on information, processes and technologies to improve the flexibility, reliability, performance, and efficiency of the existing fossil-fueled and renewable energy generating fleet.
Nuclear
EPRI conducts research on nuclear cost-effective technologies, technical guidance, and knowledge transfer tools to help maximize the value of existing nuclear assets and inform the deployment of new nuclear technology.
Energy Delivery and Customer Solutions
EPRI's distributed energy resources and customer research area focuses on distributed energy resource (DER) integration, efficient electrification, connectivity and information technology enabling an integrated grid and cyber security guidance.
The transmission, distribution, and substation research focuses on improving transmission asset management analytics, technology for mobile field guides, robotics and sensors to automate asset inspections, and improving understanding of electromagnetic pulse (EMP).
Technology Innovation
EPRI researches and develops early-stage and breakthrough technologies that could lead to promising concepts, new knowledge, and potential breakthroughs.
See also
NOREM
Central Power Research Institute
References
External links
Official website
EPRI Portfolio
EPRI Journal
Energy research institutes
Engineering research institutes
Electric power in the United States
Energy in California
Research institutes in the San Francisco Bay Area
Non-profit organizations based in Palo Alto, California
1973 establishments in California
Research institutes established in 1973
Trichloro(dichlorophenyl)silane
Trichloro(dichlorophenyl)silane is a family of chemical compounds, all with formula Si(C6H3Cl2)Cl3.
See also
Organosilicon#Silyl halides
Organochlorosilanes
Chlorobenzene derivatives
Digestion (alchemy)
In alchemy, digestion refers to the process by which raw materials are transformed into a more purified or refined state. This concept is akin to the biological process of digestion in living organisms, where food is broken down into simpler forms for nourishment. However, in alchemy, digestion is metaphorical and symbolic rather than biological. The term is often used to describe the maturation of materials, where a substance undergoes a series of stages that lead to its ultimate transformation. This process typically involves heat, moisture, and time, allowing the substance to be 'digested' or broken down into its essence. Alchemy describes several stages of digestion, which are critical to achieving the 'philosopher's stone,' the legendary substance thought to grant immortality and the ability to transmute base metals into gold.
The key stages
Calcination
This is the initial stage where the material is subjected to heat, leading to the release of volatile substances. In this phase, the material is broken down into ash, representing the stripping away of impurities.
Dissolution
Following calcination, the ashes are dissolved in a liquid, often water or an acidic solution. This step symbolizes the dissolution of the ego and the surrender of the self, paving the way for spiritual and material transformation.
Separation
The next phase involves separating the dissolved substance from any residual impurities. This process highlights the importance of discernment in both material and spiritual contexts.
Conjunction
After separation, the purified elements are recombined, creating a new, more refined substance. This symbolizes the integration of opposites, such as the masculine and feminine, leading to balance and harmony.
Fermentation
This stage represents a new birth or awakening. The material is subjected to processes that invoke change, often aided by the introduction of a 'soul' or spirit to the mixture.
Distillation
After fermentation, the mixture undergoes distillation, where it is purified through boiling and condensation. This step is crucial for removing remaining impurities and concentrating the essence of the material.
Coagulation
The final stage of digestion involves the solidification of the purified substance into its ultimate form. This symbolizes the achievement of the philosopher's stone, a representation of perfection and enlightenment.
References
Alchemical processes | Digestion (alchemy) | Chemistry | 469 |
15,185,443 | https://en.wikipedia.org/wiki/Korn%27s%20inequality | In mathematical analysis, Korn's inequality is an inequality concerning the gradient of a vector field that generalizes the following classical theorem: if the gradient of a vector field is skew-symmetric at every point, then the gradient must be equal to a constant skew-symmetric matrix. Korn's theorem is a quantitative version of this statement, which intuitively says that if the gradient of a vector field is on average not far from the space of skew-symmetric matrices, then the gradient must not be far from a particular skew-symmetric matrix. The statement that Korn's inequality generalizes thus arises as a special case of rigidity.
In (linear) elasticity theory, the symmetric part of the gradient is a measure of the strain that an elastic body experiences when it is deformed by a given vector-valued function. The inequality is therefore an important tool as an a priori estimate in linear elasticity theory.
Statement of the inequality
Let Ω be an open, connected domain in n-dimensional Euclidean space R^n, n ≥ 2. Let H^1(Ω) be the Sobolev space of all vector fields v on Ω that, along with their (first) weak derivatives, lie in the Lebesgue space L^2(Ω). Denoting the partial derivative with respect to the i-th component by ∂_i, the norm in H^1(Ω) is given by
||v||^2 = Σ_i ∫_Ω |v^i(x)|^2 dx + Σ_{i,j} ∫_Ω |∂_j v^i(x)|^2 dx
Then there is a (minimal) constant C ≥ 0, known as the Korn constant of Ω, such that, for all v in H^1(Ω),
(1)   ||v||^2 ≤ C ∫_Ω Σ_{i,j} ( |v^i(x)|^2 + |(e_ij v)(x)|^2 ) dx
where e denotes the symmetrized gradient given by
e_ij v = (∂_i v^j + ∂_j v^i) / 2
Inequality (1) is known as Korn's inequality.
See also
Hardy inequality
Poincaré inequality
References
.
.
.
.
External links
Inequalities
Sobolev spaces
Solid mechanics | Korn's inequality | Physics,Mathematics | 331 |
39,516,295 | https://en.wikipedia.org/wiki/Pomato | The pomato (a portmanteau of potato and tomato), also known as a tomtato, is a grafted plant that is produced by grafting together a tomato plant and a potato plant, both of which are members of the Solanum genus in the Solanaceae (nightshade) family. Cherry tomatoes grow on the vine, while white potatoes grow in the soil from the same plant.
Background
The concept of grafting related potatoes and tomatoes so that both are produced on the same plant dates back to at least 1833, in the early 19th century.
As with all grafts, this plant will not occur in nature and cannot be grown from seed, because the two parts of the plant remain genetically separate, and only rely on each other for nourishment and growth. Like most standard types of plant grafting, a small incision is made in the stem of both plants and they are strapped together. Once the cuts have healed and the plants are joined, the leafy top of the potato plant can be cut away and the roots of the tomato can be removed, leaving the leaves of the tomato plant to nourish the roots of the potato plant. The rootstock (potato) acts as a stable and healthy root system and the scion (tomato) is chosen for its fruit, flowers or leaves. The tomatoes should be ready to harvest after about 12 weeks during the summer months; the potatoes should be ready after the tomato leaves begin to die back, normally in early autumn. Grafting in this way can be used to produce many different related crops from the same plant, for example the increasingly popular 'fruit salad' trees: a single tree that produces multiple types of citrus fruit, or a tree bearing a variety of stone fruits (peach, plum, etc.).
Benefits
Pomato plants have been seen as a new technology to make food production more efficient, as they maximize the number of crops that can be produced on a piece of land or in a small urban environment like a balcony. This has significant impacts on developing countries like Kenya, where farmers can save on space, time and labor without affecting the quality of their produce by growing pomato plants. In addition, grafting can improve resistance to bacteria, viruses and fungi, attract a more diverse group of pollinators and provide a sturdy trunk for delicate ornamental plants.
Commercial products
Grafted pomato plants were launched in the United Kingdom in September 2013 by a horticultural mail-order company Thompson & Morgan, who sold pre-grafted plants branded as the "TomTato". The Incredible Edible nursery in New Zealand announced a "DoubleUP Potato Tom" in the same month. Thompson & Morgan claim that this is the first time the plant has been produced commercially, and director Paul Hansord describes originating the TomTato idea himself 15 years ago in the US, when visiting a garden where someone had planted a potato under a tomato as a joke.
In fiction
Pomatos play a central role in the 1969 book The Life and Extraordinary Adventures of Private Ivan Chonkin by Vladimir Voinovich, where a tomato/potato hybrid is invented by the book's antagonist, Gladyshev, but later eaten by a cow. The plant was called the 'Path to Socialism' or PATS.
In the Fallout series, a potato and tomato hybrid created by cross-pollination of these plants, called a ‘Tato’, can be found and eaten by the player character.
The pomato appears as a farmable crop in My Time at Sandrock, a post-apocalyptic farm life sim.
See also
References
Horticulture
Plant genetics
Vegetables
Tomatoes
Potatoes
1930 introductions
Graft chimeras | Pomato | Biology | 749 |
2,957,747 | https://en.wikipedia.org/wiki/Trimetaphan%20camsilate | Trimetaphan camsilate (INN) or trimethaphan camsylate (USAN), trade name Arfonad, is a sympatholytic drug used in rare circumstances to lower blood pressure.
Trimetaphan is a ganglionic blocker: it counteracts cholinergic transmission at a specific type of nicotinic acetylcholine receptor in the autonomic ganglia and therefore blocks both the sympathetic nervous system and the parasympathetic nervous system. It acts as a non-depolarizing competitive antagonist at the nicotinic receptor, is short-acting, and is given intravenously.
It was discovered by Leo Sternbach.
Effects
Trimetaphan is a sulfonium compound and therefore carries a positive charge. Being charged, it cannot cross lipid cell membranes, such as those that comprise the blood–brain barrier. Due to this, trimethaphan does not have any effect on the central nervous system.
The ciliary muscle of the eye functions to round the lens for accommodation and is controlled mainly by parasympathetic system input. With administration of a ganglion-blocking drug, the ciliary muscle cannot contract (cycloplegia) and the patient loses the ability to focus their eyes.
Trimetaphan has a strong effect on the cardiovascular system. The size of blood vessels is primarily controlled by the sympathetic nervous system. Loss of sympathetic system input to the blood vessels causes them to get larger (vasodilation) which has the effect of lowering blood pressure. Postural hypotension is a common side effect of such drugs. Trimethaphan causes a histamine release which further lowers blood pressure. Effects on the heart include a decreased force of contraction and an increase in heart rate (tachycardia). Reflexive tachycardia can be diminished or undetected because trimetaphan is also blocking the sympathetic ganglia innervating the heart.
The motility of the gastrointestinal tract is regulated by the parasympathetic system, and blockage of this input results in diminished motility and constipation.
A rare side effect of trimethaphan administration is sudden respiratory arrest. The mechanism behind it is unknown, as trimethaphan does not appear to block neuromuscular transmission, and respiratory arrest is not an expected consequence of ganglionic blockage.
Therapeutic uses
The therapeutic uses of trimetaphan are very limited due to the competition from newer drugs that are more selective in their actions and effects produced. It is occasionally used to treat a hypertensive crisis and dissecting aortic aneurysm, to treat pulmonary edema, and to reduce bleeding during neurosurgery.
References
Further reading
Imidazolidinones
Nicotinic antagonists
Peripherally selective drugs
Sulfonium compounds
Ureas
Drugs developed by Hoffmann-La Roche | Trimetaphan camsilate | Chemistry | 602 |
62,431,273 | https://en.wikipedia.org/wiki/Puccinia%20sorghi | Puccinia sorghi, or common rust of maize, is a species of rust fungus that infects corn and species from the plant genus Oxalis.
Host and symptoms
Puccinia sorghi often first appears after silking in maize. The first early symptom includes chlorotic specks on the leaf. The obvious sign of this plant pathogen is golden-brown pustules or bumps on the above-ground surface of the plant tissue. These bumps are urediniospores which can spread to other plants and cause further infection. They are circular and powdery, which result from spores breaking through the leaf surface. While they are only about 1–2 mm each, they are very numerous with equal frequencies on upper and lower leaf surfaces. Over time, these blister-like bumps can change from brown to black, changing from urediniospores to teliospores. The most common place to find these spores is on the plant leaf, but they can develop on husks, tassels, and stalks as well. P. sorghi has two hosts making it a heteroecious rust. Maize and Oxalis are the two hosts for P. sorghi. In comparison, the other common type of maize rust is southern corn rust (Puccinia polysora) and it has a higher variety of hosts including maize, silver plumegrass, eastern gamagrass, Tripsacum lanceolatum, T. laxum, and T. pilorum.
Disease cycle
There are five spore stages in P. sorghi. The spore types are teliospores, basidiospores, pycniospores, aeciospores, and urediniospores.
Every year, viable urediniospores must travel to the north from the warmer southern climate. Since P. sorghi is an obligate parasite, it requires living plant tissue in order to survive. Therefore, this disease cannot overwinter in northern US states. The severity of the disease depends largely on weather conditions and how many spores are carried north each season. Urediniospores infect leaves and produce more spores to create a secondary inoculum and polycyclic disease cycle. Once the urediniospores mature on the plant tissue and turn black they become teliospores. Urediniospores measure 22-33 × 20-28 μm. Teliospores are two-celled and measure 27-53 μm. Teliospores overwinter in the southern climate and germinate in the spring. Teliospores produce basidiospores which spread by wind to infect Oxalis. They infect Oxalis and produce sexual spores (pycniospores) and aeciospores. Aeciospores are windblown to maize and infect the plant.
Management
The use of resistant maize hybrids is the best way to manage P. sorghi. There are two types of resistance that exist. The first is partial resistance which results in fewer rust spots by reducing germination rate. This type of resistance makes P. sorghi less severe by slowing down development of number of urediniospores. The other type of resistance is qualitative. This type relies on a single gene which provides total resistance to the plant. Other management tactics include foliar application of fungicide and cultural control. For fungicide application, plants should be monitored throughout the season, spraying when there are six or more pustules per leaf. Fungicide groups that can be used include mixed modes of action, DMI Triazoles (Group 3), and QoI Strobilurins (Group 11). Cultural control can be more effective in areas where the spores can overwinter. Debris should be collected and destroyed by burning along with eradication of Oxalis in surrounding areas. In northern areas where the spores can't overwinter, early planting time can help avoid P. sorghi. Younger leaves are more susceptible to infection, by planting earlier the crop will be more mature and more resilient by the time the spores arrive.
References
sorghi
Fungi described in 1832
Fungal plant pathogens and diseases
Maize diseases
Fungus species | Puccinia sorghi | Biology | 886 |
57,901,226 | https://en.wikipedia.org/wiki/Harold%20P.%20Eubank | Harold Porter Eubank (23 October 1924 – 23 March 2006, in Kilmarnock, Virginia) was an American physicist, specializing in magnetic fusion energy research.
Eubank grew up in rural Virginia and then served in the U.S. Army during World War II, receiving a Bronze Star. He received in 1948 a B.S. in physics from the College of William and Mary, in 1950 an M.S. from Syracuse University, and in 1953 a Ph.D. from Brown University. He was an assistant professor at Brown University until 1959. From 1959 to 1985 Eubank was a research physicist at the Princeton Plasma Physics Laboratory (PPPL). He headed neutral beam research at PPPL and was one of the world's leading experts on high temperature plasmas heated by neutral beams. In 1977 he was the chair of the Division of Plasma Physics of the American Physical Society.
Eubank published more than 100 papers and spoke frequently at scientific meetings in the U.S. and internationally. Upon his death he was survived by his widow, two sons, a daughter, two granddaughters, two step-children, and his first wife.
Awards and honors
1975 — elected a Fellow of the American Physical Society
1981 — Distinguished Associate Award from the United States Department of Energy
1982 — Elliott Cresson Medal and a Life Fellow Membership from the Franklin Institute in Philadelphia
References
1924 births
2006 deaths
20th-century American physicists
College of William & Mary alumni
Syracuse University alumni
Brown University alumni
United States Department of Energy National Laboratories personnel
Fellows of the American Physical Society
Plasma physicists
United States Army personnel of World War II | Harold P. Eubank | Physics | 325 |
11,671,698 | https://en.wikipedia.org/wiki/Soil%20salinity%20control | Soil salinity control refers to controlling the process and progress of soil salinity to prevent soil degradation by salination and reclamation of already salty (saline) soils. Soil reclamation is also known as soil improvement, rehabilitation, remediation, recuperation, or amelioration.
The primary man-made cause of salinization is irrigation. River water or groundwater used in irrigation contains salts, which remain in the soil after the water has evaporated.
The primary method of controlling soil salinity is to permit 10–20% of the irrigation water to leach the soil, so that it drains and is discharged through an appropriate drainage system. The salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water, which means that salt export closely matches salt import and salt does not accumulate.
Problems with soil salinity
Salty (saline) soils have a high salt content. The predominant salt is normally sodium chloride (NaCl, "table salt"). Saline soils are therefore usually also sodic soils, but sodic soils need not be saline; they may instead be alkaline.
According to a study by UN University, about , representing 20% of the world's irrigated lands are affected, up from in the early 1990s. In the Indo-Gangetic Plain, home to over 10% of the world's population, crop yield losses for wheat, rice, sugarcane and cotton grown on salt-affected lands could be 40%, 45%, 48%, and 63%, respectively.
Salty soils are a common feature and an environmental problem in irrigated lands in arid and semi-arid regions, resulting in poor or little crop production. The causes of salty soils are often associated with high water tables, which are caused by a lack of natural subsurface drainage to the underground. Poor subsurface drainage may be caused by insufficient transport capacity of the aquifer or because water cannot exit the aquifer, for instance, if the aquifer is situated in a topographical depression.
Worldwide, the major factor in the development of saline soils is a lack of precipitation. Most naturally saline soils are found in (semi) arid regions and climates of the earth.
Primary cause
Man-made salinization is primarily caused by salt found in irrigation water. All irrigation water derived from rivers or groundwater, regardless of water purity, contains salts that remain behind in the soil after the water has evaporated.
For example, assuming irrigation water with a low salt concentration of 0.3 g/L (equal to 0.3 kg/m3, corresponding to an electric conductivity of about 0.5 dS/m) and a modest annual supply of irrigation water of 10,000 m3/ha (almost 3 mm/day) brings 3,000 kg of salt per hectare each year. In the absence of sufficient natural drainage (as in waterlogged soils) and of a proper leaching and drainage program to remove salts, this would lead to high soil salinity and reduced crop yields in the long run.
Much of the water used in irrigation has a higher salt content than 0.3 g/L, compounded by irrigation projects using a far greater annual supply of water. Sugar cane, for example, needs about 20,000 m3/ha of water per year. As a result, irrigated areas often receive more than 3,000 kg/ha of salt per year, with some receiving as much as 10,000 kg/ha/year.
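The arithmetic behind these figures is simply concentration times water volume (1 g/L equals 1 kg/m3). A minimal sketch, where the 0.5 g/L concentration in the second example is an assumed illustrative value:

```python
# Back-of-the-envelope sketch of the salt-import arithmetic above.
# Assumption: all salt in the irrigation water stays behind in the soil
# once the water has evaporated (no leaching, no drainage).

def annual_salt_import(salt_g_per_l: float, water_m3_per_ha: float) -> float:
    """Annual salt load in kg/ha: 1 g/L equals 1 kg/m3, so load = c * V."""
    return salt_g_per_l * water_m3_per_ha

# Example from the text: 0.3 g/L and 10,000 m3/ha per year.
print(annual_salt_import(0.3, 10_000))   # -> 3000.0 kg salt/ha/year

# Sugar cane at ~20,000 m3/ha/year; 0.5 g/L is an assumed concentration.
print(annual_salt_import(0.5, 20_000))   # -> 10000.0 kg salt/ha/year
```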
Secondary cause
The secondary cause of salinization is waterlogging in irrigated land. Irrigation changes the natural water balance of irrigated lands. Large quantities of water in irrigation projects are not consumed by plants and must go somewhere. In irrigation projects it is impossible to achieve 100% irrigation efficiency, where all the irrigation water would be consumed by the plants. The maximum attainable irrigation efficiency is about 70%, but usually it is less than 60%. This means that a minimum of 30%, and usually more than 40%, of the irrigation water is not evaporated and must go somewhere.
Most of the water lost this way is stored underground, which can considerably change the original hydrology of local aquifers. Many aquifers cannot absorb and transport these quantities of water, so the water table rises, leading to waterlogging.
Waterlogging causes three problems:
The shallow water table and lack of oxygenation of the root zone reduces the yield of most crops.
It leads to an accumulation of salts brought in with the irrigation water as their removal through the aquifer is blocked.
With the upward seepage of groundwater, more salts are brought into the soil and the salination is aggravated.
Aquifer conditions in irrigated land and the groundwater flow have an important role in soil salinization, as illustrated here:
Salt affected area
Normally, the salinization of agricultural land affects a considerable fraction, some 20% to 30%, of the area of irrigation projects. When agriculture in such a fraction of the land is abandoned, a new salt and water balance is attained, a new equilibrium is reached, and the situation becomes stable.
In India alone, thousands of square kilometers have been severely salinized. China and Pakistan do not lag far behind (perhaps China has even more salt affected land than India). A regional distribution of the 3,230,000 km2 of saline land worldwide is shown in the following table derived from the FAO/UNESCO Soil Map of the World.
Spatial variation
Although the principles of the processes of salinization are fairly easy to understand, it is more difficult to explain why certain parts of the land suffer from the problems and other parts do not, or to predict accurately which part of the land will fall victim. The main reason for this is the variation of natural conditions in time and space, the usually uneven distribution of the irrigation water, and the seasonal or yearly changes of agricultural practices. Only in lands with undulating topography is the prediction simple: the depressional areas will degrade the most.
The preparation of salt and water balances for distinguishable sub-areas in the irrigation project, or the use of agro-hydro-salinity models, can be helpful in explaining or predicting the extent and severity of the problems.
Diagnosis
Measurement
Soil salinity is measured as the salt concentration of the soil solution in terms of g/L or as electric conductivity (EC) in dS/m. The relation between these two units is about 5/3: y g/L corresponds to about 5y/3 dS/m. Seawater may have a salt concentration of 30 g/L (3%) and an EC of 50 dS/m.
The standard for the determination of soil salinity is from an extract of a saturated paste of the soil, and the EC is then written as ECe. The extract is obtained by centrifugation. The salinity can more easily be measured, without centrifugation, in a 2:1 or 5:1 water:soil mixture (in terms of g water per g dry soil) than from a saturated paste. The relation between ECe and EC2:1 is about 4, hence: ECe = 4·EC2:1.
Classification
Soils are considered saline when the ECe > 4. When 4 < ECe < 8, the soil is called slightly saline, when 8 < ECe < 16 it is called (moderately) saline, and when ECe > 16 severely saline.
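The unit conversion and the class limits above can be sketched as a short script; the boundary handling at exactly 4, 8 and 16 dS/m is an assumption, since the text gives strict inequalities only:

```python
# Sketch of the measurement and classification rules above: the 5/3 factor
# between g/L and dS/m, and the ECe class limits of 4, 8 and 16 dS/m.

def g_per_l_to_ds_per_m(conc_g_per_l: float) -> float:
    """Approximate electric conductivity (dS/m) from salt concentration (g/L)."""
    return conc_g_per_l * 5.0 / 3.0

def classify_soil(ece: float) -> str:
    """Classify soil salinity from ECe (saturated-paste extract, in dS/m)."""
    if ece <= 4:
        return "non-saline"
    if ece <= 8:
        return "slightly saline"
    if ece <= 16:
        return "(moderately) saline"
    return "severely saline"

print(g_per_l_to_ds_per_m(30))  # seawater, 30 g/L -> 50.0 dS/m
print(classify_soil(6))         # -> slightly saline
print(classify_soil(20))        # -> severely saline
```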
Crop tolerance
Sensitive crops lose their vigor even in slightly saline soils; most crops are negatively affected by (moderately) saline soils, and only salt-tolerant crops thrive in severely saline soils. The University of Wyoming and the Government of Alberta report data on the salt tolerance of plants.
Principles of salinity control
Drainage is the primary method of controlling soil salinity. The system should permit a small fraction of the irrigation water (about 10 to 20 percent, the drainage or leaching fraction) to be drained and discharged out of the irrigation project.
In irrigated areas where salinity is stable, the salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water. Salt export matches salt import and salt will not accumulate.
When reclaiming already salinized soils, the salt concentration of the drainage water will initially be much higher than that of the irrigation water (for example 50 times higher). Salt export will greatly exceed salt import, so that with the same drainage fraction a rapid desalinization occurs. After one or two years, the soil salinity is decreased so much, that the salinity of the drainage water has come down to a normal value and a new, favorable, equilibrium is reached.
In regions with pronounced dry and wet seasons, the drainage system may be operated in the wet season only, and closed during the dry season. This practice of checked or controlled drainage saves irrigation water.
The discharge of salty drainage water may pose environmental problems to downstream areas. The environmental hazards must be considered very carefully and, if necessary, mitigating measures must be taken. If possible, the drainage must be limited to wet seasons only, when the salty effluent inflicts the least harm.
Drainage systems
Land drainage for soil salinity control is usually by horizontal drainage system (figure left), but vertical systems (figure right) are also employed.
The drainage system designed to evacuate salty water also lowers the water table. To reduce the cost of the system, the lowering must be reduced to a minimum. The highest permissible level of the water table (or the shallowest permissible depth) depends on the irrigation and agricultural practices and kind of crops.
In many cases a seasonal average water table depth of 0.6 to 0.8 m is deep enough. This means that the water table may occasionally be less than 0.6 m deep (say 0.2 m just after an irrigation or a rain storm). This automatically implies that, on other occasions, the water table will be deeper than 0.8 m (say 1.2 m). The fluctuation of the water table aids the breathing function of the soil, promoting the expulsion of carbon dioxide (CO2) produced by the plant roots and the inhalation of fresh oxygen (O2).
The establishing of a not-too-deep water table offers the additional advantage that excessive field irrigation is discouraged, as the crop yield would be negatively affected by the resulting elevated water table, and irrigation water may be saved.
The statements made above on the optimum depth of the water table are very general, because in some instances the required water table may be still shallower than indicated (for example in rice paddies), while in other instances it must be considerably deeper (for example in some orchards). The establishment of the optimum depth of the water table is in the realm of agricultural drainage criteria.
Soil leaching
The vadose zone of the soil below the soil surface and the water table is subject to four main hydrological inflow and outflow factors:
Infiltration (Inf) of rain and irrigation water (Irr) into the soil through the soil surface:
Inf = Rain + Irr
Evaporation of soil water through plants and directly into the air through the soil surface (Evap)
Percolation of water from the unsaturated soil zone into the groundwater through the water table (Perc)
Capillary rise of groundwater moving by capillary suction forces into the unsaturated zone (Cap)
In steady state (i.e. the amount of water stored in the unsaturated zone does not change in the long run) the water balance of the unsaturated zone reads: Inflow = Outflow, thus:
Inf + Cap = Evap + Perc or:
Irr + Rain + Cap = Evap + Perc
and the salt balance is
Irr.Ci + Cap.Cc = Evap.Fc.Ce + Perc.Cp + Ss
where Ci is the salt concentration of the irrigation water, Cc is the salt concentration of the capillary rise, equal to the salt concentration of the upper part of the groundwater body, Fc is the fraction of the total evaporation transpired by plants, Ce is the salt concentration of the water taken up by the plant roots, Cp is the salt concentration of the percolation water, and Ss is the increase of salt storage in the unsaturated soil. This assumes that the rainfall contains no salts; only along the coast may this not be true. Further, it is assumed that no runoff or surface drainage occurs. The amount of salt removed by plants (Evap.Fc.Ce) is usually negligibly small: Evap.Fc.Ce = 0
The salt concentration Cp can be taken as a part of the salt concentration of the soil in the unsaturated zone (Cu) giving: Cp = Le.Cu, where Le is the leaching efficiency. The leaching efficiency is often in the order of 0.7 to 0.8, but in poorly structured, heavy clay soils it may be less. In the Leziria Grande polder in the delta of the Tagus river in Portugal it was found that the leaching efficiency was only 0.15.
Assuming that one wishes to avoid the soil salinity to increase and maintain the soil salinity Cu at a desired level Cd we have:
Ss = 0, Cu = Cd and Cp = Le.Cd. Hence the salt balance can be simplified to:
Perc.Le.Cd = Irr.Ci + Cap.Cc
Setting the amount of percolation water required to fulfill this salt balance equal to Lr (the leaching requirement), it is found that:
Lr = (Irr.Ci + Cap.Cc) / Le.Cd .
Substituting herein Irr = Evap + Perc − Rain − Cap and re-arranging gives :
Lr = [ (Evap−Rain).Ci + Cap(Cc−Ci) ] / (Le.Cd − Ci)
With this the irrigation and drainage requirements for salinity control can be computed too.
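The leaching requirement derived above can be sketched as a small function; the numeric inputs in the example are illustrative assumptions, not values from the text:

```python
# Illustrative sketch of the leaching requirement formula derived above:
#   Lr = [ (Evap - Rain)*Ci + Cap*(Cc - Ci) ] / (Le*Cd - Ci)

def leaching_requirement(evap, rain, cap, ci, cc, le, cd):
    """Percolation (same units as evap/rain/cap) needed to hold soil salinity at Cd.

    evap, rain, cap : water fluxes (e.g. mm per season)
    ci, cc, cd      : salt concentrations of irrigation water, capillary rise,
                      and the desired soil salinity (e.g. g/L)
    le              : leaching efficiency (0 < le <= 1)
    """
    denominator = le * cd - ci
    if denominator <= 0:
        raise ValueError("desired salinity unattainable: Le*Cd must exceed Ci")
    return ((evap - rain) * ci + cap * (cc - ci)) / denominator

# Assumed season: 1200 mm evaporation, 200 mm rain, 100 mm capillary rise,
# Ci = 0.5 g/L, Cc = 3.0 g/L, Le = 0.8, desired Cd = 2.0 g/L.
print(round(leaching_requirement(1200, 200, 100, 0.5, 3.0, 0.8, 2.0), 1))  # ~681.8 mm
```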
In irrigation projects in (semi)arid zones and climates it is important to check the leaching requirement, whereby the field irrigation efficiency (indicating the fraction of irrigation water percolating to the underground) is to be taken into account.
The desired soil salinity level Cd depends on the crop tolerance to salt. The University of Wyoming, US, and the Government of Alberta, Canada, report crop tolerance data.
Strip cropping: an alternative
In irrigated lands with scarce water resources suffering from drainage (high water table) and soil salinity problems, strip cropping is sometimes practiced with strips of land where every other strip is irrigated while the strips in between are left permanently fallow.
Owing to the water application in the irrigated strips they have a higher water table which induces flow of groundwater to the unirrigated strips. This flow functions as subsurface drainage for the irrigated strips, whereby the water table is maintained at a not-too-shallow depth, leaching of the soil is possible, and the soil salinity can be controlled at an acceptably low level.
In the unirrigated (sacrificial) strips the soil is dry and the groundwater comes up by capillary rise and evaporates leaving the salts behind, so that here the soil salinizes. Nevertheless, they can have some use for livestock, sowing salinity resistant grasses or weeds. Moreover, useful salt resistant trees can be planted like Casuarina, Eucalyptus, or Atriplex, keeping in mind that the trees have deep rooting systems and the salinity of the wet subsoil is less than of the topsoil. In these ways wind erosion can be controlled. The unirrigated strips can also be used for salt harvesting.
Soil salinity models
The majority of the computer models available for water and solute transport in the soil (e.g. SWAP, DrainMod-S, UnSatChem, and Hydrus) are based on the Richards equation for the movement of water in unsaturated soil, in combination with Fick's convection–diffusion equation for the advection and dispersion of salts.
The models require the input of soil characteristics like the relations between variable unsaturated soil moisture content, water tension, water retention curve, unsaturated hydraulic conductivity, dispersivity, and diffusivity. These relations vary greatly from place to place and from time to time, and are not easy to measure. Further, the models are complicated to calibrate under farmers' field conditions because the soil salinity there is spatially very variable. The models use short time steps and need at least a daily, if not hourly, database of hydrological phenomena. Altogether, this makes model application to a fairly large project the job of a team of specialists with ample facilities.
Simpler models, like SaltMod, based on monthly or seasonal water and soil balances and an empirical capillary rise function, are also available. They are useful for long-term salinity predictions in relation to irrigation and drainage practices.
LeachMod, which uses the SaltMod principles, helps in analyzing leaching experiments in which the soil salinity is monitored in various root zone layers; the model optimizes the value of the leaching efficiency of each layer so that the simulated soil salinity values fit the observed ones.
Spatial variations owing to variations in topography can be simulated and predicted using salinity cum groundwater models, like SahysMod.
See also
References
External links
Food and Agriculture Organization of the United Nations on soil salinity
US Salinity Laboratory at Riverside, California
Soil
Soil science
Environmental soil science
Agricultural soil science | Soil salinity control | Environmental_science | 3,662 |
6,160,943 | https://en.wikipedia.org/wiki/Huayco | A huaico or huayco (from the Quechua wayqu, meaning "depth, valley") is an Andean term for the mudslide and flash flood caused by torrential rains occurring high in the mountains, especially during the weather phenomenon known as El Niño.
National forests such as the San Matías–San Carlos Protection Forest were created in Peru to protect vegetation, which reduces runoff, and prevent huaicos.
The indigenous Mapuche residents of Lo Barnechea, in present-day Santiago Province, Chile, were called Huaicoches in their Mapudungun language: Huaico (flash flood) and che (people).
"Cabeça d'água" (lit. "Water head") is a term in Brazil describing similar phenomena: During orographic rain, rivers in mountain ranges are often struck by very rapid flooding, which produces a downward wave that can carry large river rocks, vegetation, and people. Several fatalities have been recorded due to water heads, usually from people not familiar with local conditions.
References
Hydrology
Geography of Peru
281 is the natural number following 280 and preceding 282. It is also a prime number.
In mathematics
281 is a twin prime with 283, Sophie Germain prime, sum of the first fourteen primes, sum of seven consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53), Chen prime, Eisenstein prime with no imaginary part, and a centered decagonal number.
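Most of these properties can be verified mechanically; a short sketch:

```python
def is_prime(n):
    # Trial division is ample for numbers this small.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 300) if is_prime(p)]

assert is_prime(281) and is_prime(283)             # twin prime pair (281, 283)
assert is_prime(2 * 281 + 1)                       # Sophie Germain: 563 is prime
assert sum(primes[:14]) == 281                     # sum of the first 14 primes
assert sum([29, 31, 37, 41, 43, 47, 53]) == 281    # seven consecutive primes
assert any(5*n*n + 5*n + 1 == 281 for n in range(20))  # centered decagonal
print("all checks pass")
```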
281 is the smallest prime p such that the decimal period length of the reciprocal of p is (p−1)/10, i.e. the period length of 1/281 is 28. However, in binary, it has period length 70.
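The period length of 1/p in base b equals the multiplicative order of b modulo p, so both period claims can be checked with a few lines (a sketch):

```python
def multiplicative_order(base, p):
    """Smallest k >= 1 with base**k congruent to 1 (mod p); this equals the
    period length of the base-b expansion of 1/p when p is coprime to b."""
    k, x = 1, base % p
    while x != 1:
        x = (x * base) % p
        k += 1
    return k

print(multiplicative_order(10, 281))  # 28: decimal period of 1/281 = (281-1)/10
print(multiplicative_order(2, 281))   # 70: binary period of 1/281
```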
The generalized repunit number (281^p − 1)/280 is composite for all primes p < 60000.
References
Integers
Thymosin beta-4 is a protein that in humans is encoded by the TMSB4X gene. Recommended INN (International Nonproprietary Name) for thymosin beta-4 is 'timbetasin', as published by the World Health Organization (WHO).
The protein consists (in humans) of 43 amino acids (sequence: SDKPDMAEI EKFDKSKLKK TETQEKNPLP SKETIEQEKQ AGES) and has a molecular weight of 4921 g/mol.
Thymosin-β4 is a major cellular constituent in many tissues. Its intracellular concentration may reach as high as 0.5 mM. Following Thymosin α1, β4 was the second of the biologically active peptides from Thymosin Fraction 5 to be completely sequenced and synthesized.
Function
This gene encodes an actin sequestering protein which plays a role in regulation of actin polymerization. The protein is also involved in cell proliferation, migration, and differentiation. This gene escapes X inactivation and has a homolog on chromosome Y (TMSB4Y).
Biological activities of thymosin β4
Any concepts of the biological role of thymosin β4 must inevitably be coloured by the demonstration that total ablation of the thymosin β4 gene in the mouse allows apparently normal embryonic development of mice which are fertile as adults.
Actin binding
Thymosin β4 was initially perceived as a thymic hormone. However, this changed when it was discovered that it forms a 1:1 complex with G (globular) actin and is present at high concentration in a wide range of mammalian cell types. When appropriate, G-actin monomers polymerize to form F (filamentous) actin, which, together with other proteins that bind to actin, comprises cellular microfilaments. Formation by G-actin of the complex with β-thymosin (= "sequestration") opposes this.
Due to its profusion in the cytosol and its ability to bind G-actin but not F-actin, thymosin β4 is regarded as the principal actin-sequestering protein in many cell types. Thymosin β4 functions like a buffer for monomeric actin as represented in the following reaction:
F-actin ↔ G-actin + Thymosin β4 ↔ G-actin/Thymosin β4
Release of G-actin monomers from thymosin β4 occurs as part of the mechanism that drives actin polymerization in the normal function of the cytoskeleton in cell morphology and cell motility.
The sequence LKKTET, which starts at residue 17 of the 43-aminoacid sequence of thymosin beta-4, and is strongly conserved between all β-thymosins, together with a similar sequence in WH2 domains, is frequently referred to as "the actin-binding motif" of these proteins, although modelling based on X-ray crystallography has shown that essentially the entire length of the β-thymosin sequence interacts with actin in the actin-thymosin complex.
"Moonlighting"
In addition to its intracellular role as the major actin-sequestering molecule in cells of many multicellular animals, thymosin β4 shows a remarkably diverse range of effects when present in the fluid surrounding animal tissue cells. Taken together, these effects suggest that thymosin has a general role in tissue regeneration. This has suggested a variety of possible therapeutic applications, and several have now been extended to animal models and human clinical trials.
It is considered unlikely that thymosin β4 exerts all these effects via intracellular sequestration of G-actin. This would require its uptake by cells, and moreover, in most cases the cells affected already have substantial intracellular concentrations.
The diverse activities related to tissue repair may depend on interactions with receptors quite distinct from actin and possessing extracellular ligand-binding domains. Such multi-tasking by, or "partner promiscuity" of, proteins has been referred to as protein moonlighting. Proteins such as thymosins which lack stable folded structure in aqueous solution, are known as intrinsically unstructured proteins (IUPs). Because IUPs acquire specific folded structures only on binding to their partner proteins, they offer special possibilities for interaction with multiple partners. A candidate extracellular receptor of high affinity for thymosin β4 is the β subunit of cell surface-located ATP synthase, which would allow extracellular thymosin to signal via a purinergic receptor.
Some of the multiple activities of thymosin β4 unrelated to actin may be mediated by a tetrapeptide enzymically cleaved from its N-terminus, N-acetyl-ser-asp-lys-pro, brand names Seraspenide or Goralatide, best known as an inhibitor of the proliferation of haematopoietic (blood-cell precursor) stem cells of bone marrow.
Tissue regeneration
Work with cell cultures and experiments with animals have shown that administration of thymosin β4 can promote migration of cells, formation of blood vessels, maturation of stem cells, survival of various cell types and lowering of the production of pro-inflammatory cytokines. These multiple properties have provided the impetus for a worldwide series of on-going clinical trials of potential effectiveness of thymosin β4 in promoting repair of wounds in skin, cornea and heart.
Such tissue-regenerating properties of thymosin β4 may ultimately contribute to repair of human heart muscle damaged by heart disease and heart attack. In mice, administration of thymosin β4 has been shown to stimulate formation of new heart muscle cells from otherwise inactive precursor cells present in the outer lining of adult hearts, to induce migration of these cells into heart muscle and recruit new blood vessels within the muscle.
Anti-inflammatory role for sulfoxide
In 1999, researchers at Glasgow University found that an oxidised derivative of thymosin β4 (the sulfoxide, in which an oxygen atom is added to the methionine near the N-terminus) exerted several potentially anti-inflammatory effects on neutrophil leucocytes. It promoted their dispersion from a focus, inhibited their response to a small peptide (F-Met-Leu-Phe) which attracts them to sites of bacterial infection, and lowered their adhesion to endothelial cells. (Adhesion to endothelial cells of blood vessel walls is a prerequisite for these cells to leave the bloodstream and invade infected tissue.) A possible anti-inflammatory role for the β4 sulfoxide was supported by the group's finding that it counteracted artificially induced inflammation in mice.
The group had first identified the thymosin sulfoxide as an active factor in culture fluid of cells responding to treatment with a steroid hormone, suggesting that its formation might form part of the mechanism by which steroids exert anti-inflammatory effects. Extracellular thymosin β4 would be readily oxidised to the sulfoxide in vivo at sites of inflammation, by the respiratory burst.
Terminal deoxynucleotidyl transferase
Thymosin β4 induces the activity of the enzyme terminal deoxynucleotidyl transferase in populations of thymocytes (thymus-derived lymphocytes). This suggests that the peptide may contribute to the maturation of these cells.
Clinical significance
Tβ4 has been studied in a number of clinical trials.
In phase 2 trials with patients having pressure ulcers, venous pressure ulcers, and epidermolysis bullosa, Tβ4 accelerated the rate of repair. It was also found to be safe and well tolerated.
In human clinical trials, Tβ4 improves the conditions of dry eye and neurotrophic keratopathy with effects lasting long after the end of treatment.
Doping in sports
Thymosin beta-4 is considered a performance enhancing substance and is banned in sports by the World Anti-Doping Agency due to its effect of aiding soft tissue recovery and enabling higher training loads. It was central to two controversies in Australia in the 2010s which saw a large proportion of the playing lists from two professional football clubs – the Cronulla-Sutherland Sharks of the National Rugby League and the Essendon Football Club of the Australian Football League – found guilty of doping and suspended from playing; in both cases, the players were administered thymosin beta-4 in a program organised by sports scientist Stephen Dank.
Interactions
TMSB4X has been shown to interact with ACTA1 and ACTG1.
See also
Beta thymosins
Thymosin beta-4, Y-chromosomal
Thymosins
References
Further reading
Peptides
Double-stranded RNA viruses (dsRNA viruses) are a polyphyletic group of viruses that have double-stranded genomes made of ribonucleic acid. The double-stranded genome is used as a template by the viral RNA-dependent RNA polymerase (RdRp) to transcribe a positive-strand RNA functioning as messenger RNA (mRNA) for the host cell's ribosomes, which translate it into viral proteins. The positive-strand RNA can also be replicated by the RdRp to create a new double-stranded viral genome.
A distinguishing feature of the dsRNA viruses is their ability to carry out transcription of the dsRNA segments within the capsid, and the required enzymes are part of the virion structure.
Double-stranded RNA viruses are classified into two phyla, Duplornaviricota and Pisuviricota (specifically class Duplopiviricetes), in the kingdom Orthornavirae and realm Riboviria. The two phyla do not share a common dsRNA virus ancestor, but evolved their double strands two separate times from positive-strand RNA viruses. In the Baltimore classification system, dsRNA viruses belong to Group III.
Virus group members vary widely in host range (animals, plants, fungi, and bacteria), genome segment number (one to twelve), and virion organization (T-number, capsid layers, or turrets). Double-stranded RNA viruses include the rotaviruses, known globally as a common cause of gastroenteritis in young children, and bluetongue virus, an economically significant pathogen of cattle and sheep. The family Reoviridae is the largest and most diverse dsRNA virus family in terms of host range.
Classification
Two clades of dsRNA viruses exist: the phylum Duplornaviricota and the class Duplopiviricetes, which is in the phylum Pisuviricota. Both are included in the kingdom Orthornavirae in the realm Riboviria. Based on phylogenetic analysis of RdRp, the two clades do not share a common dsRNA ancestor but are instead separately descended from different positive-sense, single-stranded RNA viruses. In the Baltimore classification system, which groups viruses together based on their manner of mRNA synthesis, dsRNA viruses are group III.
Duplornaviricota
Duplornaviricota contains most dsRNA viruses, including reoviruses, which infect a diverse range of eukaryotes, and cystoviruses, which are the only dsRNA viruses known to infect prokaryotes. Apart from RdRp, viruses in Duplornaviricota also share icosahedral capsids that contain 60 homo- or heterodimers of the capsid protein organized on a pseudo T=2 lattice. The phylum is divided into three classes: Chrymotiviricetes, which primarily contains fungal and protozoan viruses, Resentoviricetes, which contains reoviruses, and Vidaverviricetes, which contains cystoviruses.
Duplopiviricetes
The class Duplopiviricetes is the second clade of dsRNA viruses and is in the phylum Pisuviricota, which also contains positive-sense single-stranded RNA viruses. Duplopiviricetes mostly contains plant and fungal viruses and includes the following four families: Amalgaviridae, Hypoviridae, Partitiviridae, and Picobirnaviridae.
Notes on selected species
Reoviridae
Reoviridae are currently classified into nine genera. The genomes of these viruses consist of 10 to 12 segments of dsRNA, each generally encoding one protein. The mature virions are non-enveloped. Their capsids, formed by multiple proteins, have icosahedral symmetry and are arranged generally in concentric layers.
Orthoreoviruses
The orthoreoviruses (reoviruses) are the prototypic members of the virus Reoviridae family and representative of the turreted members, which comprise about half the genera. Like other members of the family, the reoviruses are non-enveloped and characterized by concentric capsid shells that encapsidate a segmented dsRNA genome. In particular, reovirus has eight structural proteins and ten segments of dsRNA. A series of uncoating steps and conformational changes accompany cell entry and replication. High-resolution structures are known for almost all of the proteins of mammalian reovirus (MRV), which is the best-studied genotype. Electron cryo-microscopy (cryoEM) and X-ray crystallography have provided a wealth of structural information about two specific MRV strains, type 1 Lang (T1L) and type 3 Dearing (T3D).
Cypovirus
The cytoplasmic polyhedrosis viruses (CPVs) form the genus Cypovirus of the family Reoviridae. CPVs are classified into 14 species based on the electrophoretic migration profiles of their genome segments. Cypovirus has only a single capsid shell, which is similar to the orthoreovirus inner core. CPV exhibits striking capsid stability and is fully capable of endogenous RNA transcription and processing. The overall folds of CPV proteins are similar to those of other reoviruses. However, CPV proteins have insertional domains and unique structures that contribute to their extensive intermolecular interactions. The CPV turret protein contains two methylase domains with a highly conserved helix-pair/β-sheet/helix-pair sandwich fold but lacks the β-barrel flap present in orthoreovirus λ2. The stacking of turret protein functional domains and the presence of constrictions and A spikes along the mRNA release pathway indicate a mechanism that uses pores and channels to regulate the highly coordinated steps of RNA transcription, processing, and release.
Rotavirus
Rotavirus is the most common cause of acute gastroenteritis in infants and young children worldwide. This virus contains a dsRNA genome and is a member of the Reoviridae family. The genome of rotavirus consists of eleven segments of dsRNA. Each genome segment codes for one protein with the exception of segment 11, which codes for two proteins. Among the twelve proteins, six are structural and six are non-structural proteins.
It is a double-stranded RNA non-enveloped virus. When at least two rotavirus genomes are present in a host cell, the genome segments may undergo reassortment to form progeny viruses with new gene combinations, or they may undergo intragenic homologous recombination. Some pathogenic rotavirus lineages that infect humans appear to have evolved through multiple interspecies reassortment events. Intragenic homologous recombination also appears to be a significant driver of rotavirus diversity and evolution. Intragenic recombination may occur when the VP1 RNA-dependent RNA polymerase replicates part of one template strand before switching to another.
Bluetongue virus
The members of genus Orbivirus within the Reoviridae family are arthropod borne viruses and are responsible for high morbidity and mortality in ruminants. Bluetongue virus (BTV) which causes disease in livestock (sheep, goat, cattle) has been in the forefront of molecular studies for the last three decades and now represents the best understood orbivirus at the molecular and structural levels. BTV, like other members of the family, is a complex non-enveloped virus with seven structural proteins and a RNA genome consisting of 10 variously sized dsRNA segments.
Phytoreoviruses
Phytoreoviruses are non-turreted reoviruses that are major agricultural pathogens, particularly in Asia. One member of this family, Rice Dwarf Virus (RDV), has been extensively studied by electron cryomicroscopy and x-ray crystallography. From these analyses, atomic models of the capsid proteins and a plausible model for capsid assembly have been derived. While the structural proteins of RDV share no sequence similarity to other proteins, their folds and the overall capsid structure are similar to those of other Reoviridae.
Saccharomyces cerevisiae virus L-A
The L-A dsRNA virus of the yeast Saccharomyces cerevisiae has a single 4.6 kb genomic segment that encodes its major coat protein, Gag (76 kDa) and a Gag-Pol fusion protein (180 kDa) formed by a -1 ribosomal frameshift. L-A can support the replication and encapsidation in separate viral particles of any of several satellite dsRNAs, called M dsRNAs, each of which encodes a secreted protein toxin (the killer toxin) and immunity to that toxin. L-A and M are transmitted from cell to cell by the cytoplasmic mixing that occurs in the process of mating. Neither is naturally released from the cell or enters cells by other mechanisms, but the high frequency of yeast mating in nature results in the wide distribution of these viruses in natural isolates. Moreover, the structural and functional similarities with dsRNA viruses of mammals has made it useful to consider these entities as viruses.
Infectious bursal disease virus
Infectious bursal disease virus (IBDV) is the best-characterized member of the family Birnaviridae. These viruses have bipartite dsRNA genomes enclosed in single layered icosahedral capsids with T = 13l geometry. IBDV shares functional strategies and structural features with many other icosahedral dsRNA viruses, except that it lacks the T = 1 (or pseudo T = 2) core common to the Reoviridae, Cystoviridae, and Totiviridae. The IBDV capsid protein exhibits structural domains that show homology to those of the capsid proteins of some positive-sense single-stranded RNA viruses, such as the nodaviruses and tetraviruses, as well as the T = 13 capsid shell protein of the Reoviridae. The T = 13 shell of the IBDV capsid is formed by trimers of VP2, a protein generated by removal of the C-terminal domain from its precursor, pVP2. The trimming of pVP2 is performed on immature particles as part of the maturation process. The other major structural protein, VP3, is a multifunctional component lying under the T = 13 shell that influences the inherent structural polymorphism of pVP2. The virus-encoded RNA-dependent RNA polymerase, VP1, is incorporated into the capsid through its association with VP3. VP3 also interacts extensively with the viral dsRNA genome.
Bacteriophage Φ6
Bacteriophage Φ6 is a member of the Cystoviridae family. It infects Pseudomonas bacteria (typically the plant-pathogenic P. syringae). It has a three-part, segmented, double-stranded RNA genome, totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid, a rare trait among bacteriophages. It is a lytic phage, though under certain circumstances it has been observed to display a delay in lysis, which may be described as a "carrier state".
Anti-virals
Since cells do not produce double-stranded RNA during normal nucleic acid metabolism, natural selection has favored the evolution of enzymes that destroy dsRNA on contact. The best known class of this type of enzymes is Dicer. It is hoped that broad-spectrum anti-virals could be synthesized that take advantage of this vulnerability of double-stranded RNA viruses.
See also
Animal virology
List of viruses
RNA virus
TLR3
Virology
Virus classification
References
Bibliography
Animal virology
Molecular biology
RNA viruses
If Trees Could Talk: Life Lessons from the Wisdom of the Woods is a non-fiction book by American author and podcaster Holly Worton that offers spirituality and self-help through making contact with nature and talking to trees.
Summary
Part self-help and part spiritual, Worton's If Trees Could Talk is a guide to taking time out to connect with nature, talk to trees, and to live a happier and more fulfilled life. The author, who lives in England, believes that "all trees are living, breathing organisms that humans can connect with and talk to on a deeper level through silent, telepathic communication."
A druid, coach and healer, Worton writes about the individual trees she has encountered on her many nature walks – each with its own history, character, personality, and story; and she describes the different species of trees and their place and reverence in pagan traditions, such as that of the Order of Bards, Ovates and Druids, and in the ancient alphabet, the ogham, which ascribes a tree to each letter. The book is structured around these descriptions, the stories that each individual tree has to tell, and the advice they have to offer.
Interviews
On 6 May 2019, Worton was interviewed on ITV's programme This Morning by television presenters Eamonn Holmes and Rochelle Humes, to talk about her new book. In a garden outside the television studios, she also gave the presenters a practical demonstration of how she communicates with trees, with the aid of a sound engineer.
On 26 February 2020, Worton was again interviewed on This Morning, by Alison Hammond and Phillip Schofield, discussing the Allerton Oak, the UK's nomination for the European Tree of the Year competition, and communicating with it.
Reception
The televised item on This Morning attracted a largely humorous and dismissive response on social media, and was reported in several newspapers, including the Daily Mirror, the Birmingham Mail, and Entertainment Daily.
Writing in the Daily Express on 13 May 2019, life coach and columnist Carole Ann Rice is, however, more positive about the book. She describes If Trees Could Talk as "a wise and beautiful book coming at the right time for many of us." "Stop rushing, be patient, respect nature, stray from your normal path, be mindful, ask permission [to communicate with the trees]" – these, the reviewer says, are a few of the things that we can learn from the trees, and she advises the reader to "show love and respect to what grows around you" and to "learn some true lessons from the wild side of life."
About the author
Originally from California in the United States, Worton has lived in Spain, Costa Rica, Mexico and Chile before moving to England. She is an author and podcaster, as well as a druid, coach and healer.
See also
Celtic sacred trees
References
External links
Author's web site
Interview on This Morning
2019 non-fiction books
Self-help books
Nature books
Works about trees
Books about spirituality
Plant communication
According to Rudolf Carnap, in logic, an interpretation is a descriptive interpretation (also called a factual interpretation) if at least one of the undefined symbols of its formal system becomes, in the interpretation, a descriptive sign (i.e., the name of single objects, or observable properties). In his Introduction to Semantics (Harvard Uni. Press, 1942) he makes a distinction between formal interpretations which are logical interpretations (also called mathematical interpretation or logico-mathematical interpretation) and descriptive interpretations: a formal interpretation is a descriptive interpretation if it is not a logical interpretation.
Attempts to axiomatize the empirical sciences, Carnap said, use a descriptive interpretation to model reality: the aim of these attempts is to construct a formal system for which reality is the only interpretation. The world is an interpretation (or model) of these sciences only insofar as these sciences are true.
Any non-empty set may be chosen as the domain of a descriptive interpretation, and all n-ary relations among the elements of the domain are candidates for assignment to any predicate of degree n.
Examples
A sentence is either true or false under an interpretation which assigns values to the logical variables. We might for example make the following assignments:
Individual constants
a: Socrates
b: Plato
c: Aristotle
Predicates:
Fα: α is sleeping
Gαβ: α hates β
Hαβγ: α made β hit γ
Sentential variables:
p "It is raining."
Under this interpretation the sentences discussed above would represent the following English statements:
p: "It is raining."
F(a): "Socrates is sleeping."
H(b,a,c): "Plato made Socrates hit Aristotle."
∀x(F(x)): "Everybody is sleeping."
∃z(G(a,z)): "Socrates hates somebody."
∃x∀y∃z(H(x,y,z)): "Somebody made everybody hit somebody."
∀x∃z(F(x) ∧ G(a,z)): Everybody is sleeping and Socrates hates somebody.
∃x∀y∃z(G(a,z) ∨ H(x,y,z)): Either Socrates hates somebody or somebody made everybody hit somebody.
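Such an interpretation can be mimicked directly in code: the domain is a finite set, each predicate is assigned an extension, and the quantified sentences become iterations over the domain. A minimal sketch (the extensions chosen here are arbitrary assumptions, made only to give the sentences truth values):

```python
domain = {"Socrates", "Plato", "Aristotle"}

# Arbitrary extensions assigned to the predicates for illustration:
F = {"Socrates"}                # the set of sleepers: "α is sleeping"
G = {("Socrates", "Plato")}     # pairs (α, β) with "α hates β"
a = "Socrates"                  # the individual constant a

# ∀x(F(x)) — "Everybody is sleeping."
everybody_sleeping = all(x in F for x in domain)

# ∃z(G(a,z)) — "Socrates hates somebody."
socrates_hates_somebody = any((a, z) in G for z in domain)

print(everybody_sleeping, socrates_hates_somebody)  # False True
```

Under this assignment the first sentence comes out false (Plato is not in the extension of F) and the second true, showing how the same formal sentences are true or false only relative to an interpretation.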
Sources
Semantics
Formal languages
Interpretation (philosophy)
A tuning fork is an acoustic resonator in the form of a two-pronged fork with the prongs (tines) formed from a U-shaped bar of elastic metal (usually steel). It resonates at a specific constant pitch when set vibrating by striking it against a surface or with an object, and emits a pure musical tone once the high overtones fade out. A tuning fork's pitch depends on the length and mass of the two prongs. They are traditional sources of standard pitch for tuning musical instruments.
The tuning fork was invented in 1711 by British musician John Shore, sergeant trumpeter and lutenist to the royal court.
Description
A tuning fork is a fork-shaped acoustic resonator used in many applications to produce a fixed tone. The main reason for using the fork shape is that, unlike many other types of resonators, it produces a very pure tone, with most of the vibrational energy at the fundamental frequency. The reason for this is that the frequency of the first overtone is about (5/2)² = 25/4 = 6.25 times the fundamental (roughly 2.6 octaves above it). By comparison, the first overtone of a vibrating string or metal bar is one octave above (twice) the fundamental, so when the string is plucked or the bar is struck, its vibrations tend to mix the fundamental and overtone frequencies. When the tuning fork is struck, little of the energy goes into the overtone modes; they also die out correspondingly faster, leaving a pure sine wave at the fundamental frequency. It is easier to tune other instruments with this pure tone.
Another reason for using the fork shape is that it can then be held at the base without damping the oscillation. That is because its principal mode of vibration is symmetric, with the two prongs always moving in opposite directions, so that at the base where the two prongs meet there is a node (point of no vibratory motion) which can therefore be handled without removing energy from the oscillation (damping). However, there is still a tiny motion induced in the handle in its longitudinal direction (thus at right angles to the oscillation of the prongs) which can be made audible using any sort of sound board. Thus by pressing the tuning fork's base against a sound board such as a wooden box, table top, or bridge of a musical instrument, this small motion, but which is at a high acoustic pressure (thus a very high acoustic impedance), is partly converted into audible sound in air which involves a much greater motion (particle velocity) at a relatively low pressure (thus low acoustic impedance). The pitch of a tuning fork can also be heard directly through bone conduction, by pressing the tuning fork against the bone just behind the ear, or even by holding the stem of the fork in one's teeth, conveniently leaving both hands free. Bone conduction using a tuning fork is specifically used in the Weber and Rinne tests for hearing in order to bypass the middle ear. If just held in open air, the sound of a tuning fork is very faint due to the acoustic impedance mismatch between the steel and air. Moreover, since the feeble sound waves emanating from each prong are 180° out of phase, those two opposite waves interfere, largely cancelling each other. Thus when a solid sheet is slid in between the prongs of a vibrating fork, the apparent volume actually increases, as this cancellation is reduced, just as a loudspeaker requires a baffle in order to radiate efficiently.
Commercial tuning forks are tuned to the correct pitch at the factory, and the pitch and frequency in hertz is stamped on them. They can be retuned by filing material off the prongs. Filing the ends of the prongs raises the pitch, while filing the inside of the base of the prongs lowers it.
Currently, the most common tuning fork sounds the note of A = 440 Hz, the standard concert pitch that many orchestras use. That A is the pitch of the violin's second-highest string, the highest string of the viola, and an octave above the highest string of the cello. Orchestras between 1750 and 1820 mostly used A = 423.5 Hz, though there were many forks and many slightly different pitches. Standard tuning forks are available that vibrate at all the pitches within the central octave of the piano, and also other pitches.
Tuning fork pitch varies slightly with temperature, due mainly to a slight decrease in the modulus of elasticity of steel with increasing temperature. A change in frequency of 48 parts per million per °F (86 ppm per °C) is typical for a steel tuning fork. The frequency decreases (becomes flat) with increasing temperature. Tuning forks are manufactured to have their correct pitch at a standard temperature. The standard temperature is now 20 °C (68 °F), but 15 °C (59 °F) is an older standard. The pitch of other instruments is also subject to variation with temperature change.
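As a worked example of that coefficient (the 440 Hz fork and the 10 °C rise are assumed values, chosen only for illustration):

```python
# Frequency drift of a steel tuning fork with temperature, using the
# ~86 ppm per °C coefficient quoted above.
f0 = 440.0           # nominal frequency at the reference temperature, Hz
coeff = 86e-6        # fractional drop in frequency per °C of warming
delta_T = 10.0       # assumed warming of 10 °C

f = f0 * (1 - coeff * delta_T)
print(round(f, 2))   # 439.62 — the fork goes flat by about 0.38 Hz
```

A drift of a few tenths of a hertz over a 10 °C swing is small but audible as beats against a reference tone, which is why forks are calibrated at a stated temperature.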
Calculation of frequency
The frequency of a tuning fork depends on its dimensions and what it is made from:
f = (η / (2πL²)) √(EI / (ρA))
where
f is the frequency the fork vibrates at, (SI units: 1/s)
η ≈ 3.516015 is the square of the smallest positive solution of cos(x)·cosh(x) = −1, which arises from the boundary conditions of the prong's cantilevered structure,
L is the length of the prongs, (m)
E is the Young's modulus (elastic modulus or stiffness) of the material the fork is made from, (Pa or N/m2 or kg/(ms2))
I is the second moment of area of the cross-section, (m4)
ρ is the density of the fork's material (kg/m3), and
A is the cross-sectional area of the prongs (tines) (m2).
The ratio I/A in the equation above can be rewritten as r²/4 if the prongs are cylindrical with radius r, and a²/12 if the prongs have rectangular cross-section of width a along the direction of motion.
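The formula can be applied numerically; in this sketch the steel properties and prong dimensions are assumed values, chosen only because they land near a familiar pitch, not taken from a real fork:

```python
import math

# Fundamental frequency of a tuning-fork prong modelled as a cantilever
# beam: f = (eta / (2*pi*L^2)) * sqrt(E*I / (rho*A)).
eta = 1.875104 ** 2              # ≈ 3.516015 (see the text above)

def fork_frequency(L, E, rho, I_over_A):
    return eta / (2 * math.pi * L ** 2) * math.sqrt(E / rho * I_over_A)

# Assumed values: steel (E = 200 GPa, rho = 7850 kg/m3), cylindrical
# prongs of radius 2 mm and length 8 cm; I/A = r^2/4 for a cylinder.
E, rho = 200e9, 7850.0
r, L = 2e-3, 0.08
f = fork_frequency(L, E, rho, r ** 2 / 4)
print(round(f))                  # ≈ 441 Hz with these assumed dimensions
```

Note the strong 1/L² dependence: shortening the prongs slightly, as filing their ends does, raises the pitch noticeably.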
Uses
Tuning forks have traditionally been used to tune musical instruments, though electronic tuners have largely replaced them. Forks can be driven electrically by placing electronic oscillator-driven electromagnets close to the prongs.
In musical instruments
A number of keyboard musical instruments use principles similar to tuning forks. The most popular of these is the Rhodes piano, in which hammers hit metal tines that vibrate in the magnetic field of a pickup, creating a signal that drives electric amplification. The earlier, un-amplified dulcitone, which used tuning forks directly, suffered from low volume.
In clocks and watches
The quartz crystal that serves as the timekeeping element in modern quartz clocks and watches is in the form of a tiny tuning fork. It usually vibrates at a frequency of 32,768 Hz in the ultrasonic range (above the range of human hearing). It is made to vibrate by small oscillating voltages applied by an electronic oscillator circuit to metal electrodes plated on the surface of the crystal. Quartz is piezoelectric, so the voltage causes the tines to bend rapidly back and forth.
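The choice of 32,768 Hz is no accident: it is 2¹⁵, so a chain of fifteen binary divide-by-two stages (flip-flops) reduces the crystal's oscillation to a once-per-second timekeeping pulse. A minimal sketch of that division:

```python
# Why 32,768 Hz: it is 2**15, so fifteen successive divide-by-two
# stages turn the crystal's oscillation into a 1 Hz tick.
freq = 32768
stages = 0
while freq > 1:
    freq //= 2
    stages += 1
print(stages, freq)  # 15 stages remain at 1 Hz
```

Each flip-flop halves the frequency, so the stage count is simply log₂ of the crystal frequency.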
The Accutron, an electromechanical watch developed by Max Hetzel and manufactured by Bulova beginning in 1960, used a 360-hertz steel tuning fork as its timekeeper, powered by electromagnets attached to a battery-powered transistor oscillator circuit. The fork provided greater accuracy than conventional balance wheel watches. The humming sound of the tuning fork was audible when the watch was held to the ear.
Medical and scientific uses
Alternatives to the common A=440 standard include philosophical or scientific pitch, with a standard pitch of C=512. According to Rayleigh, physicists and acoustic instrument makers used this pitch. The tuning fork John Shore gave to George Frideric Handel produces C=512.
Tuning forks, usually at C=512, are used by medical practitioners to assess a patient's hearing. This is most commonly done with two exams, the Weber test and the Rinne test. Lower-pitched forks, usually at C=128, are also used to check vibration sense as part of the examination of the peripheral nervous system.
Orthopedic surgeons have explored using a tuning fork (lowest frequency C=128) to assess injuries where bone fracture is suspected. They hold the end of the vibrating fork on the skin, moving it progressively closer to the suspected fracture. If there is a fracture, the periosteum of the bone vibrates and fires nociceptors (pain receptors), causing a local sharp pain. This can indicate a fracture, which the practitioner refers for medical X-ray. The sharp pain of a local sprain can give a false positive. Established practice, however, requires an X-ray regardless, because the cost of an unnecessary X-ray is preferable to the risk of missing a real fracture. A systematic review published in 2014 in BMJ Open suggests that this technique is not reliable or accurate enough for clinical use.
Non-medical and non-scientific uses
Tuning forks also play a role in several alternative therapy practices, such as sonopuncture and polarity therapy.
Radar gun calibration
A radar gun that measures the speed of cars or a ball in sports is usually calibrated with a tuning fork. Instead of the frequency, these forks are labeled with the calibration speed and radar band (e.g., X-band or K-band) they are calibrated for.
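The calibration works through the Doppler relation: a target moving at speed v shifts a radar carrier f₀ by approximately 2·v·f₀/c, so a fork vibrating at that shift frequency mimics a target moving at speed v. The sketch below inverts this relation; the X-band carrier frequency and the example fork frequency are illustrative assumptions.

```python
# Sketch of the Doppler relation behind radar-fork calibration:
# Doppler shift = 2 * v * f0 / c, so a fork vibrating at that shift
# frequency simulates a target at speed v. The carrier frequency is a
# typical X-band value, used here as an illustrative assumption.
C = 299_792_458.0     # speed of light, m/s
F0_XBAND = 10.525e9   # typical X-band radar carrier, Hz (assumed)

def simulated_speed_mph(fork_hz: float) -> float:
    v_mps = fork_hz * C / (2 * F0_XBAND)   # invert the Doppler formula
    return v_mps * 2.23694                 # convert m/s to mph

print(round(simulated_speed_mph(1100.0), 1))
```

This is why the forks are labeled with a speed and a radar band rather than a frequency: the same fork frequency corresponds to a different speed on a different carrier band.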
In gyroscopes
Doubled and H-type tuning forks are used for tactical-grade vibrating structure gyroscopes and various types of microelectromechanical systems.
Level sensors
A tuning fork forms the sensing element of vibrating point level sensors. The fork is kept vibrating at its resonant frequency by a piezoelectric device. When the fork comes into contact with solids, the amplitude of oscillation drops, and this drop is used as the switching parameter for detecting the point level of solids. For liquids, contact changes the fork's resonant frequency, and this frequency shift is used to detect the level.
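The two detection modes can be sketched as simple threshold checks. This is a toy illustration only; the nominal values, thresholds, and function names are arbitrary assumptions, not parameters of any real sensor.

```python
# Toy illustration of the two detection modes described above: solids
# damp the oscillation amplitude, while liquids shift the resonant
# frequency. All numeric values are arbitrary example assumptions.
NOMINAL_FREQ = 1200.0   # free-air resonant frequency, Hz (assumed)
NOMINAL_AMP = 1.0       # free-air amplitude, arbitrary units (assumed)

def solids_detected(amplitude: float, threshold: float = 0.5) -> bool:
    # Solids damp the prongs: amplitude falls below a set fraction.
    return amplitude < threshold * NOMINAL_AMP

def liquid_detected(freq: float, min_shift_hz: float = 50.0) -> bool:
    # Liquids load the prongs: resonance drops by a detectable amount.
    return NOMINAL_FREQ - freq > min_shift_hz

print(solids_detected(0.3), liquid_detected(1100.0))
```

The design choice follows from the physics: granular solids mainly absorb vibration energy (amplitude), while liquids mainly add effective mass to the prongs (frequency).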
See also
Electronic tuner
Pitch pipe
Savart wheel
Tonometer
References
External links
Onlinetuningfork.com, an online tuning fork using Macromedia Flash Player.
1711 introductions
Musical instrument parts and accessories
Idiophones
Acoustics
Sound