Columns: id (int64, 580 to 79M); url (string, 31 to 175 chars); text (string, 9 to 245k chars); source (string, 1 to 109 chars); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
76,926,283
https://en.wikipedia.org/wiki/GUN%20%28graph%20database%29
GUN (Graph Universe Node) is an open source, offline-first, real-time, decentralized graph database written in JavaScript for the web browser. The database is implemented as a peer-to-peer network distributed across "Browser Peers" and optional "Runtime Peers". It employs multi-master replication with a custom commutative replicated data type (CRDT). GUN is currently used in the decentralized version of the Internet Archive. References External links Official website Graph databases Database engines Peer-to-peer computing Mesh networking Distributed computing architecture
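The text only states that GUN uses a custom commutative replicated data type (CRDT) for multi-master replication; GUN's actual conflict-resolution algorithm is not described here. As a generic, hedged illustration of what a commutative merge means, the Python sketch below implements a simple last-write-wins register merge keyed by a logical timestamp; the field names and timestamps are made up for the example and do not reflect GUN's own data model.

```python
# Generic illustration of a commutative, convergent merge (not GUN's actual
# algorithm): each field carries a logical timestamp, and merging two replica
# states keeps the newer value, breaking ties deterministically on the value.

def merge(a: dict, b: dict) -> dict:
    """Merge two replica states of the form {field: (timestamp, value)}."""
    out = dict(a)
    for field, (ts, val) in b.items():
        if field not in out:
            out[field] = (ts, val)
        else:
            cur_ts, cur_val = out[field]
            # Last write wins; the tie-break makes merge order irrelevant.
            if (ts, repr(val)) > (cur_ts, repr(cur_val)):
                out[field] = (ts, val)
    return out

# Two peers update the same record concurrently...
peer_1 = {"name": (1, "alice"), "status": (3, "online")}
peer_2 = {"name": (2, "alicia"), "city": (1, "Lisbon")}

# ...and converge to the same state regardless of merge order.
assert merge(peer_1, peer_2) == merge(peer_2, peer_1)
print(merge(peer_1, peer_2))
```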
GUN (graph database)
Mathematics,Technology
119
14,747,033
https://en.wikipedia.org/wiki/Aqua%20Sciences
Aqua Sciences is a Miami Beach-based company providing advanced water technologies with a module capable of extracting up to 2500 gallons of water from the moisture present in the air. Module The module is a modified 40-foot trailer that permits the extraction of water from the moisture in the air. It has options such as additional storage tanks for keeping the water for extended periods of time. It can be powered by an internal diesel generator for a week without needing to refuel, or plugged into the electrical grid. The module is the size of the trailer of an 18-wheeler. It is possible to add a reverse osmosis module which increases production up to 8000 gallons a day. Advantages There are no toxic or harmful byproducts. The only requirement for it is 14% humidity in the air, so it can be used in deserts. The water provided is also very pure. Applications The United States Army has shown interest in the project, mainly because of the high cost of water transportation to its forces. Using Aqua Sciences' module, that price is pushed down to US$0.15 per gallon, which would provide huge logistical savings for the military. Also, it would be practical for providing water after a natural disaster such as the 2004 Indian Ocean earthquake or Hurricane Katrina. References External links Aqua Sciences Homepage (December 2007) Making Water From Thin Air (December 2007) Water technology
Aqua Sciences
Chemistry
280
37,188,939
https://en.wikipedia.org/wiki/ITAD%20Subscriber%20Numbers
ITAD Subscriber Numbers, or ISNs, provide a way of interconnecting VoIP PBXs by adding a number to the internal phone number of the target phone. The ITAD number is appended to the target phone number, preceded by an asterisk. Therefore, only numbers and symbols that appear on a telephone keypad are used. References Telecommunications
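The format described above is just the target extension, an asterisk, and the ITAD number, so an ISN can be built and split with plain string handling. A minimal sketch follows; the extension and ITAD values used are hypothetical examples, not real assignments.

```python
# Minimal sketch of composing and parsing an ITAD Subscriber Number (ISN)
# string: <extension>*<ITAD>. Only digits and '*' (telephone-keypad characters)
# are used. The example values below are hypothetical.

def make_isn(extension: str, itad: str) -> str:
    if not (extension.isdigit() and itad.isdigit()):
        raise ValueError("extension and ITAD must be numeric")
    return f"{extension}*{itad}"

def split_isn(isn: str) -> tuple[str, str]:
    extension, itad = isn.split("*", 1)
    return extension, itad

print(make_isn("1234", "256"))   # -> 1234*256
print(split_isn("1234*256"))     # -> ('1234', '256')
```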
ITAD Subscriber Numbers
Technology
78
319,536
https://en.wikipedia.org/wiki/7400-series%20integrated%20circuits
The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs). In 1964, Texas Instruments introduced the SN5400 series of logic chips, in a ceramic semiconductor package. A low-cost plastic package SN7400 series was introduced in 1966; it quickly gained over 50% of the logic chip market, and its parts eventually became de facto standard electronic components. Since the introduction of the original bipolar-transistor TTL parts, pin-compatible parts were introduced with such features as low power CMOS technology and lower supply voltages. Surface mount packages exist for several popular logic family functions. Overview The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400 series integrated circuits. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range. Since the 1970s, new product families have been released to replace the original 7400 series. More recent TTL-compatible logic families were manufactured using CMOS or BiCMOS technology rather than TTL. Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard-prototyping and for education and remain available from most manufacturers. The fastest types and very low voltage versions are typically surface-mount only, however. The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being power (+5 V) and ground. This part was made in various through-hole and surface-mount packages, including flat pack and plastic/ceramic dual in-line. Additional characters in a part number identify the package and other variations. Unlike the older resistor-transistor logic integrated circuits, bipolar TTL gates were unsuitable for use as analog devices, providing low gain, poor stability, and low input impedance. Special-purpose TTL devices were used to provide interface functions such as Schmitt triggers or monostable multivibrator timing circuits. Inverting gates could be cascaded as a ring oscillator, useful for purposes where high stability was not required. History Although the 7400 series was the first de facto industry standard TTL logic family (i.e. second-sourced by several semiconductor companies), there were earlier TTL logic families such as: Sylvania Universal High-level Logic in 1963 Motorola MC4000 MTTL National Semiconductor DM8000 Fairchild 9300 series Signetics 8200 and 8T00 The 7400 quad 2-input NAND gate was the first product in the series, introduced by Texas Instruments in a military grade metal flat package (5400W) in October 1964. The pin assignment of this early series differed from the de facto standard set by the later series in DIP packages (in particular, ground was connected to pin 11 and the power supply to pin 4, compared to pins 7 and 14 for DIP packages).
The extremely popular commercial grade plastic DIP (7400N) followed in the third quarter of 1966. The 5400 and 7400 series were used in many popular minicomputers in the 1970s and early 1980s. Some models of the DEC PDP-series 'minis' used the 74181 ALU as the main computing element in the CPU. Other examples were the Data General Nova series and Hewlett-Packard 21MX, 1000, and 3000 series. In 1965, typical quantity-one pricing for the SN5400 (military grade, in ceramic welded flat-pack) was around US$22. As of 2007, individual commercial-grade chips in molded epoxy (plastic) packages can be purchased for approximately US$0.25 each, depending on the particular chip. Families 7400 series parts were constructed using bipolar junction transistors (BJT), forming what is referred to as transistor–transistor logic or TTL. Newer series, more or less compatible in function and logic level with the original parts, use CMOS technology or a combination of the two (BiCMOS). Originally the bipolar circuits provided higher speed but consumed more power than the competing 4000 series of CMOS devices. Bipolar devices are also limited to a fixed power-supply voltage, typically 5 V, while CMOS parts often support a range of supply voltages. Milspec-rated devices for use in extended temperature conditions are available as the 5400 series. Texas Instruments also manufactured radiation-hardened devices with the prefix RSN, and the company offered beam-lead bare dies for integration into hybrid circuits with a BL prefix designation. Regular-speed TTL parts were also available for a time in the 6400 series; these had an extended industrial temperature range of −40 °C to +85 °C. While companies such as Mullard listed 6400-series compatible parts in 1970 data sheets, by 1973 there was no mention of the 6400 family in the Texas Instruments TTL Data Book. Texas Instruments brought back the 6400 series in 1989 for the SN64BCT540. The SN64BCTxxx series is still in production as of 2023. Some companies have also offered industrial extended temperature range variants using the regular 7400-series part numbers with a prefix or suffix to indicate the temperature grade. As integrated circuits in the 7400 series were made in different technologies, usually compatibility was retained with the original TTL logic levels and power-supply voltages. An integrated circuit made in CMOS is not a TTL chip, since it uses field-effect transistors (FETs) and not bipolar junction transistors (BJT), but similar part numbers are retained to identify similar logic functions and electrical (power and I/O voltage) compatibility in the different subfamilies. Over 40 different logic subfamilies use this standardized part number scheme. The headings in the following table are: Vcc power-supply voltage; tpd maximum gate delay; IOL maximum output current at low level; IOH maximum output current at high level; tpd, IOL, and IOH apply to most gates in a given family. Driver or buffer gates have higher output currents. Many parts in the CMOS HC, AC, AHC, and VHC families are also offered in "T" versions (HCT, ACT, AHCT and VHCT) which have input thresholds that are compatible with both TTL and 3.3 V CMOS signals. The non-T parts have conventional CMOS input thresholds, which are more restrictive than TTL thresholds. Typically, CMOS input thresholds require high-level signals to be at least 70% of Vcc and low-level signals to be at most 30% of Vcc.
(TTL has the input high level above 2.0 V and the input low level below 0.8 V, so a TTL high-level signal could be in the forbidden middle range for 5 V CMOS.) The 74H family is the same basic design as the 7400 family with resistor values reduced. This reduced the typical propagation delay from 9 ns to 6 ns but increased the power consumption. The 74H family provided a number of unique devices for CPU designs in the 1970s. Many designers of military and aerospace equipment used this family over a long period and as they need exact replacements, this family is still produced by Lansdale Semiconductor. The 74S family, using Schottky circuitry, uses more power than the 74, but is faster. The 74LS family of ICs is a lower-power version of the 74S family, with slightly higher speed but lower power dissipation than the original 74 family; it became the most popular variant once it was widely available. Many 74LS ICs can be found in microcomputers and digital consumer electronics manufactured in the 1980s and early 1990s. The 74F family was introduced by Fairchild Semiconductor and adopted by other manufacturers; it is faster than the 74, 74LS and 74S families. Through the late 1980s and 1990s newer versions of this family were introduced to support the lower operating voltages used in newer CPU devices. Part numbering Part number schemes varied by manufacturer. The part numbers for 7400-series logic devices often use the following designators: Often first, a two or three letter prefix, denoting the manufacturer and flow class of the device. These codes are no longer closely associated with a single manufacturer, for example, Fairchild Semiconductor manufactures parts with MM and DM prefixes, and no prefixes. Examples: SN: Texas Instruments using a commercial processing SNV: Texas Instruments using military processing M: ST Microelectronics DM: National Semiconductor UT: Cobham PLC SG: Sylvania Two digits for temperature range. Examples: 54: military temperature range 64: short-lived historical series with intermediate "industrial" temperature range 74: commercial temperature range device Zero to four letters denoting the logic subfamily. Examples: zero letters: basic bipolar TTL LS: low power Schottky HCT: High-speed CMOS compatible with TTL Two or more arbitrarily assigned digits that identify the function of the device. There are hundreds of different devices in each family. Additional suffix letters and numbers may be appended to denote the package type, quality grade, or other information, but this varies widely by manufacturer. For example, "SN5400N" signifies that the part is a 7400-series IC probably manufactured by Texas Instruments ("SN" originally meaning "Semiconductor Network") using commercial processing, is of the military temperature rating ("54"), and is of the TTL family (absence of a family designator), its function being the quad 2-input NAND gate ("00") implemented in a plastic through-hole DIP package ("N"). Many logic families maintain a consistent use of the device numbers as an aid to designers. Often a part from a different 74x00 subfamily could be substituted ("drop-in replacement") in a circuit, with the same function and pin-out yet more appropriate characteristics for an application (perhaps speed or power consumption), which was a large part of the appeal of the 74C00 series over the competing CD4000B series, for example. But there are a few exceptions where incompatibilities (mainly in pin-out) across the subfamilies occurred, such as: some flat-pack devices (e.g. 
7400W) and surface-mount devices; some of the faster CMOS series (for example 74AC); a few low-power TTL devices (e.g. 74L86, 74L9 and 74L95), which have a different pin-out than the regular (or even 74LS) series parts; and five versions of the 74x54 (4-wide AND-OR-INVERT gate IC), namely 7454(N), 7454W, 74H54, 74L54W and 74L54N/74LS54, which are different from each other in pin-out and/or function. Second sources from Europe and Eastern Bloc Some manufacturers, such as Mullard and Siemens, had pin-compatible TTL parts, but with a completely different numbering scheme; however, data sheets identified the 7400-compatible number as an aid to recognition. At the time the 7400 series was being made, some European manufacturers (that traditionally followed the Pro Electron naming convention), such as Philips/Mullard, produced a series of TTL integrated circuits with part names beginning with FJ. Some examples of FJ series are: FJH101 (=7430) single 8-input NAND gate, FJH131 (=7400) quadruple 2-input NAND gate, FJH181 (=7454N or J) 2+2+2+2 input AND-OR-NOT gate. The Soviet Union started manufacturing TTL ICs with 7400-series pinout in the late 1960s and early 1970s, such as the K155ЛA3, which was pin-compatible with the 7400 part available in the United States, except for using a metric spacing of 2.5 mm between pins instead of the pin-to-pin spacing used in the west. Another peculiarity of the Soviet-made 7400 series was the packaging material used in the 1970s–1980s. Instead of the ubiquitous black resin, they had a brownish-green body colour with subtle swirl marks created during the moulding process. It was jokingly referred to in the Eastern Bloc electronics industry as the "elephant-dung packaging", due to its appearance. The Soviet integrated circuit designation is different from the Western series: the technology modifications were considered different series and were identified by different numbered prefixes – К155 series is equivalent to plain 74, К555 series is 74LS, К1533 is 74ALS, etc.; the function of the unit is described with a two-letter code followed by a number: the first letter represents the functional group – logical, triggers, counters, multiplexers, etc.; the second letter shows the functional subgroup, making the distinction between logical NAND and NOR, D- and JK-triggers, decimal and binary counters, etc.; the number distinguishes variants with different number of inputs or different number of elements within a die – ЛА1/ЛА2/ЛА3 (LA1/LA2/LA3) are 2 four-input / 1 eight-input / 4 two-input NAND elements respectively (equivalent to 7420/7430/7400). Before July 1974 the two letters from the functional description were inserted after the first digit of the series. Examples: К1ЛБ551 and К155ЛА1 (7420), К1ТМ552 and К155ТМ2 (7474) are the same ICs made at different times. Clones of the 7400 series were also made in other Eastern Bloc countries: Bulgaria (Mikroelektronika Botevgrad) used a designation somewhat similar to that of the Soviet Union, e.g. 1ЛБ00ШМ (1LB00ShM) for a 74LS00. Some of the two-letter functional groups were borrowed from the Soviet designation, while others differed. Unlike the Soviet scheme, the two or three digit number after the functional group matched the western counterpart. The series followed at the end (i.e. ШМ for LS). Only the LS series is known to have been manufactured in Bulgaria. Czechoslovakia (TESLA) used the 7400 numbering scheme with manufacturer prefix MH. Example: MH7400.
Tesla also produced industrial grade (8400, −25 °C to 85 °C) and military grade (5400, −55 °C to 125 °C) ones. Poland (Unitra CEMI) used the 7400 numbering scheme with manufacturer prefixes UCA for the 5400 and 6400 series, as well as UCY for the 7400 series. Examples: UCA6400, UCY7400. Note that ICs with the prefix MCY74 correspond to the 4000 series (e.g. MCY74002 corresponds to 4002 and not to 7402). Hungary (Tungsram, later Mikroelektronikai Vállalat / MEV) also used the 7400 numbering scheme, but with manufacturer suffix – 7400 is marked as 7400APC. Romania (I.P.R.S.) used a trimmed 7400 numbering with the manufacturer prefix CDB (example: CDB4123E corresponds to 74123) for the 74 and 74H series, where the suffix H indicated the 74H series. For the later 74LS series, the standard numbering was used. East Germany (HFO) also used trimmed 7400 numbering without manufacturer prefix or suffix. The prefix D (or E) designates digital IC, and not the manufacturer. Example: D174 is 7474. 74LS clones were designated by the prefix DL; e.g. DL000 = 74LS00. In later years East German made clones were also available with standard 74* numbers, usually for export. A number of different technologies were available from the Soviet Union, Czechoslovakia, Poland, and East Germany. The 8400 series in the table below indicates an industrial temperature range from −25 °C to +85 °C (as opposed to −40 °C to +85 °C for the 6400 series). Around 1990 the production of standard logic ceased in all Eastern European countries except the Soviet Union and later Russia and Belarus. As of 2016, the series 133, К155, 1533, КР1533, 1554, 1594, and 5584 were in production at "Integral" in Belarus, as well as the series 130 and 530 at "NZPP-KBR", 134 and 5574 at "VZPP", 533 at "Svetlana", 1564, К1564, КР1564 at "NZPP", 1564, К1564 at "Voshod", 1564 at "Exiton", and 133, 530, 533, 1533 at "Mikron" in Russia. The Russian company Angstrem manufactures 54HC circuits as the 5514БЦ1 series, 54AC as the 5514БЦ2 series, and 54LVC as the 5524БЦ2 series. See also Electronic component Logic gate, Logic family List of 7400-series integrated circuits 4000-series integrated circuits List of 4000-series integrated circuits Linear integrated circuit List of linear integrated circuits List of LM-series integrated circuits Push–pull output Open-collector/drain output Three-state output Schmitt trigger input Programmable logic device Pin compatibility References Further reading Books 50 Circuits Using 7400 Series IC's; 1st Ed; R.N. Soar; Bernard Babani Publishing; 76 pages; 1979. (archive) TTL Cookbook; 1st Ed; Don Lancaster; Sams Publishing; 412 pages; 1974. (archive) Designing with TTL Integrated Circuits; 1st Ed; Robert Morris, John Miller; Texas Instruments and McGraw-Hill; 322 pages; 1971. (archive) App Notes Understanding and Interpreting Standard-Logic Data Sheets; Stephen Nolan, Jose Soltero, Shreyas Rao; Texas Instruments; 60 pages; 2016. Comparison of 74HC / 74S / 74LS / 74ALS Logic; Fairchild; 6 pages; 1983. Interfacing to 74HC Logic; Fairchild; 10 pages; 1998. 74AHC / 74AHCT Designer's Guide; TI; 53 pages; 1998. Compares 74HC / 74AHC / 74AC (CMOS I/O) and 74HCT / 74AHCT / 74ACT (TTL I/O).
Fairchild Semiconductor / ON Semiconductor Historical Data Books: TTL (1978, 752 pages), FAST (1981, 349 pages) Logic Selection Guide (2008, 12 pages) Nexperia / NXP Semiconductor Logic Selection Guide (2020, 234 pages) Logic Application Handbook Design Engineer's Guide (2021, 157 pages) Logic Translators (2021, 62 pages) Texas Instruments / National Semiconductor Historical Catalog: (1967, 375 pages) Historical Databooks: TTL Vol1 (1984, 339 pages), TTL Vol2 (1985, 1402 pages), TTL Vol3 (1984, 793 pages), TTL Vol4 (1986, 445 pages) Digital Logic Pocket Data Book (2007, 794 pages), Logic Reference Guide (2004, 8 pages), Logic Selection Guide (1998, 215 pages) Little Logic Guide (2018, 25 pages), Little Logic Selection Guide (2004, 24 pages) Toshiba General-Purpose Logic ICs (2012, 55 pages) External links Understanding 7400-series digital logic ICs - Nuts and Volts magazine Thorough list of 7400-series ICs - Electronics Club Integrated circuits Digital electronics 1964 introductions
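The part-numbering scheme described in the article (manufacturer prefix, temperature-range digits, subfamily letters, function digits, optional package suffix) lends itself to a simple decoder. The sketch below is a simplified illustration of that scheme only; real numbering varies by manufacturer, and the prefix/subfamily lookup tables cover just examples named in the text.

```python
import re

# Simplified decoder for 7400-series part numbers, following the scheme
# described in the article: [prefix][54/64/74][subfamily][function digits][suffix].
# The lookup tables are intentionally incomplete and purely illustrative.

PREFIXES = {"SN": "Texas Instruments (commercial processing)",
            "SNV": "Texas Instruments (military processing)",
            "DM": "National Semiconductor", "M": "ST Microelectronics"}
TEMP = {"54": "military", "64": "industrial (historical)", "74": "commercial"}
SUBFAMILY = {"": "standard TTL", "LS": "low-power Schottky", "S": "Schottky",
             "HCT": "high-speed CMOS, TTL-compatible inputs"}

PATTERN = re.compile(r"^([A-Z]*?)(54|64|74)([A-Z]*)(\d{2,})([A-Z]*)$")

def decode(part: str) -> dict:
    m = PATTERN.match(part.upper())
    if not m:
        raise ValueError(f"not a recognizable 7400-series part number: {part}")
    prefix, temp, family, function, suffix = m.groups()
    return {
        "manufacturer": PREFIXES.get(prefix, prefix or "unknown"),
        "temperature_range": TEMP[temp],
        "subfamily": SUBFAMILY.get(family, family),
        "function": function,         # e.g. "00" = quad 2-input NAND
        "package_suffix": suffix,     # e.g. "N" = plastic DIP (varies by vendor)
    }

print(decode("SN5400N"))    # the worked example from the text
print(decode("SN74LS00"))
```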
7400-series integrated circuits
Technology,Engineering
4,276
147,854
https://en.wikipedia.org/wiki/Straightedge
A straightedge or straight edge is a tool used for drawing straight lines, or checking their straightness. If it has equally spaced markings along its length, it is usually called a ruler. Straightedges are used in the automotive service and machining industry to check the flatness of machined mating surfaces. They are also used in the decorating industry for cutting and hanging wallpaper. True straightness can in some cases be checked by using a laser line level as an optical straightedge: it can illuminate an accurately straight line on a flat surface such as the edge of a plank or shelf. A pair of straightedges called winding sticks are used in woodworking to make warping easier to perceive in pieces of wood. Three straight edges can be used to test and calibrate themselves to a certain extent; however, this procedure does not control twist. For accurate calibration of a straight edge, a surface plate must be used. Compass-and-straightedge construction An idealized straightedge is used in compass-and-straightedge constructions in plane geometry. It may be used: Given two points, to draw the line connecting them Given a point and a circle, to draw either tangent Given two circles, to draw any of their common tangents Or any of the other numerous geometric constructions The idealized straightedge is: Infinitely long Infinitesimally thin (i.e. point width) Always assumed to be without graduations or marks, or the ability to mark Able to be aligned to two points with infinite precision to draw a line through them It may not be marked or used together with the compass so as to transfer the length of one segment to another. It is possible to do all compass and straightedge constructions without the straightedge. That is, it is possible, using only a compass, to find the intersection of two lines given two points on each, and to find the tangent points to circles. It is not, however, possible to do all constructions using only a straightedge. It is possible to do them with a straightedge alone, given a circle and its center. See also Chalk line Geometrography References External links Making Accurate Straight-Edges from Scratch Mathematical tools Metalworking measuring instruments Stonemasonry tools Technical drawing tools Woodworking measuring instruments
Straightedge
Mathematics,Technology
461
59,990
https://en.wikipedia.org/wiki/Aconitine
Aconitine is an alkaloid toxin produced by various plant species belonging to the genus Aconitum (family Ranunculaceae), commonly known by the names wolfsbane and monkshood. Aconitine is notorious for its toxic properties. Structure and reactivity Biologically active isolates from Aconitum and Delphinium plants are classified as norditerpenoid alkaloids, which are further subdivided based on the presence or absence of the C18 carbon. Aconitine is a C19-norditerpenoid, based on the presence of this C18 carbon. It is barely soluble in water, but very soluble in organic solvents such as chloroform or diethyl ether. Aconitine is also soluble in mixtures of alcohol and water if the concentration of alcohol is high enough. Like many other alkaloids, the basic nitrogen atom in one of the six-membered ring structures of aconitine can easily form salts and ions, giving it affinity for both polar and lipophilic structures (such as cell membranes and receptors) and making it possible for the molecule to pass the blood–brain barrier. The acetoxyl group at the C8 position can readily be replaced by a methoxy group, by heating aconitine in methanol, to produce an 8-deacetyl-8-O-methyl derivative. If aconitine is heated in its dry state, it undergoes pyrolysis to form pyroaconitine ((1α,3α,6α,14α,16β)-20-ethyl-3,13-dihydroxy-1,6,16-trimethoxy-4-(methoxymethyl)-15-oxoaconitan-14-yl benzoate) with the chemical formula C32H43NO9. Mechanism of action Aconitine can interact with the voltage-dependent sodium-ion channels, which are proteins in the cell membranes of excitable tissues, such as cardiac and skeletal muscles and neurons. These proteins are highly selective for sodium ions. They open very quickly to depolarize the cell membrane potential, causing the upstroke of an action potential. Normally, the sodium channels close very rapidly, but the depolarization of the membrane potential causes the opening (activation) of potassium channels and potassium efflux, which results in repolarization of the membrane potential. Aconitine binds to the channel at the neurotoxin binding site 2 on the alpha subunit (the same site bound by batrachotoxin, veratridine, and grayanotoxin). This binding results in a sodium-ion channel that stays open longer. Aconitine suppresses the conformational change in the sodium-ion channel from the active state to the inactive state. The membrane stays depolarized due to the constant sodium influx (which is 10–1000-fold greater than the potassium efflux). As a result, the membrane cannot be repolarized. The binding of aconitine to the channel also causes the channel to change conformation from the inactive state to the active state at a more negative voltage. In neurons, aconitine increases the permeability of the membrane for sodium ions, resulting in a huge sodium influx in the axon terminal. As a result, the membrane depolarizes rapidly. Due to the strong depolarization, the permeability of the membrane for potassium ions increases rapidly, resulting in a potassium efflux that releases positive charge out of the cell. Not only the permeability for potassium ions but also the permeability for calcium ions increases as a result of the depolarization of the membrane. A calcium influx takes place. The increase of the calcium concentration in the cell stimulates the release of the neurotransmitter acetylcholine into the synaptic cleft. Acetylcholine binds to acetylcholine receptors at the postsynaptic membrane to open the sodium channels there, generating a new action potential.
Research with mouse nerve-hemidiaphragm muscle preparations indicates that at low concentrations (<0.1 μM) aconitine increases the electrically evoked acetylcholine release, causing an increase in muscle tension. Action potentials are generated more often at this concentration. At higher concentration (0.3–3 μM) aconitine decreases the electrically evoked acetylcholine release, resulting in a decrease in muscle tension. At high concentration (0.3–3 μM), the sodium-ion channels are constantly activated, transmission of action potentials is suppressed, leading to non-excitable target cells or paralysis. Biosynthesis and total synthesis of related alkaloids Aconitine is biosynthesized by the monkshood plant via the terpenoid biosynthesis pathway (MEP chloroplast pathway). Approximately 700 naturally occurring C19-diterpenoid alkaloids have been isolated and identified, but the biosynthesis of only a few of these alkaloids is well understood. Likewise, only a few alkaloids of the aconitine family have been synthesized in the laboratory. In particular, despite over one hundred years having elapsed since its isolation, the prototypical member of its family of norditerpenoid alkaloids, aconitine itself, represents a rare example of a well-known natural product that has yet to succumb to efforts towards its total synthesis. The challenge that aconitine poses to synthetic organic chemists is due to both the intricate interlocking hexacyclic ring system that makes up its core and the elaborate collection of oxygenated functional groups at its periphery. A handful of simpler members of the aconitine alkaloids, however, have been prepared synthetically. In 1971, the Wiesner group completed the total synthesis of talatisamine (a C19-norditerpenoid). In the subsequent years, they also completed the total syntheses of other C19-norditerpenoids, such as chasmanine and 13-deoxydelphonine. The total synthesis of napelline (Scheme a) begins with aldehyde 100. In a 7-step process, the A-ring of napelline is formed (104). It takes another 10 steps to form the lactone ring in the pentacyclic structure of napelline (106). An additional 9 steps create the enone-aldehyde 107. Heating in methanol with potassium hydroxide causes an aldol condensation to close the sixth and final ring in napelline (14). Oxidation then gives rise to diketone 108, which was converted to (±)-napelline (14) in 10 steps. A similar process is demonstrated in Wiesner's synthesis of 13-desoxydelphinone (Scheme c). The first step of this synthesis is the generation of a conjugated dienone 112 from 111 in 4 steps. This is followed by the addition of a benzyl vinyl ether to produce 113. In 11 steps, this compound is converted to ketal 114. The addition of heat, DMSO and o-xylene rearranges this ketol (115), and after 5 more steps (±)-13-desoxydelphinone (15) is formed. Lastly, talatisamine (Scheme d) is synthesized from diene 116 and nitrile 117. The first step is to form tricycle 118 in 16 steps. After another 6 steps, this compound is converted to enone 120. Subsequently, this allene is added to produce photoadduct 121. This adduct group is cleaved and rearrangement gives rise to the compound 122. In 7 steps, this compound forms 123, which is then rearranged, in a similar manner to compound 114, to form the aconitine-like skeleton in 124. A racemic relay synthesis is completed to produce talatisamine (13). More recently, the laboratory of the late David Y. Gin completed the total syntheses of the aconitine alkaloids nominine and neofinaconitine.
Metabolism Aconitine is metabolized by cytochrome P450 isozymes (CYPs). In 2011, researchers in China investigated in depth the CYPs involved in aconitine metabolism in human liver microsomes. It has been estimated that more than 90 percent of currently available human drug metabolism can be attributed to eight main enzymes (CYP 1A2, 2C9, 2C8, 2C19, 2D6, 2E1, 3A4, 3A5). The researchers used recombinants of these eight different CYPs and incubated them with aconitine. To initiate the metabolic pathway, the presence of NADPH was needed. Six CYP-mediated metabolites (M1–M6) were found by liquid chromatography; these six metabolites were characterized by mass spectrometry. The six metabolites and the involved enzymes are summarized in the following table: Selective inhibitors were used to determine the CYPs involved in aconitine metabolism. The results indicate that aconitine was mainly metabolized by CYP3A4, 3A5 and 2D6. CYP2C8 and 2C9 played a minor role in aconitine metabolism, whereas CYP1A2, 2E1 and 2C19 did not produce any aconitine metabolites at all. The proposed metabolic pathways of aconitine in human liver microsomes and the CYPs involved are summarized in the table above. Uses Aconitine was previously used as an antipyretic and analgesic and still has some limited application in herbal medicine, although the narrow therapeutic index makes calculating appropriate dosage difficult. Aconitine is also present in Yunnan Baiyao, a proprietary traditional Chinese medicine. Toxicity Consuming as little as 2 milligrams of pure aconitine or 1 gram of the plant itself may cause death by paralyzing respiratory or heart functions. Toxicity may occur through the skin; even touching the flowers can numb fingertips. The toxic effects of aconitine have been tested in a variety of animals, including mammals (dog, cat, guinea pig, mouse, rat and rabbit), frogs and pigeons. Depending on the route of exposure, the observed toxic effects were local anesthetic effect, diarrhea, convulsions, arrhythmias or death. According to a review of different reports of aconite poisoning in humans, the following clinical features were observed: Neurological: paresthesia and numbness of face, perioral area and four limbs; muscle weakness in four limbs Cardiovascular: hypotension, palpitations, chest pain, bradycardia, sinus tachycardia, ventricular ectopics and other arrhythmias, ventricular arrhythmias, and junctional rhythm Gastrointestinal: nausea, vomiting, abdominal pain, and diarrhea Others: dizziness, hyperventilation, sweating, difficulty breathing, confusion, headache, and lacrimation Progression of symptoms: the first symptoms of aconitine poisoning appear approximately 20 minutes to 2 hours after oral intake and include paresthesia, sweating and nausea. This leads to severe vomiting, colicky diarrhea, intense pain and then paralysis of the skeletal muscles. Following the onset of life-threatening arrhythmia, including ventricular tachycardia and ventricular fibrillation, death finally occurs as a result of respiratory paralysis or cardiac arrest. LD50 values for mice are 1 mg/kg orally, 0.100 mg/kg intravenously, 0.270 mg/kg intraperitoneally and 0.270 mg/kg subcutaneously. The lowest published lethal dose (LDLo) for mice is 1 mg/kg orally and 0.100 mg/kg intraperitoneally. The lowest published toxic dose (TDLo) for mice is 0.0549 mg/kg subcutaneously. The LD50 value for rats is 0.064 mg/kg intravenously. The LDLo for rats is 0.040 mg/kg intravenously and 0.250 mg/kg intraperitoneally.
The TDLo for rats is 0.040 mg/kg parenterally. For an overview of more test animal results (LD50, LDLo and TDLo) see the following table. Note that LD50 means lethal dose, 50 percent kill; LDLo means lowest published lethal dose; TDLo means lowest published toxic dose. For humans, the lowest published oral lethal dose of 28 μg/kg was reported in 1969. Diagnosis and treatment For the analysis of the Aconitum alkaloids in biological specimens such as blood, serum and urine, several GC-MS methods have been described. These employ a variety of extraction procedures followed by derivatisation to their trimethylsilyl derivatives. New sensitive HPLC-MS methods have been developed as well, usually preceded by SPE purification of the sample. The antiarrhythmic drug lidocaine has been reported to be an effective treatment for aconitine poisoning in a patient. Considering the fact that aconitine acts as an agonist of the sodium channel receptor, antiarrhythmic agents which block the sodium channel (Vaughan-Williams' classification I) might be the first choice for the therapy of aconitine-induced arrhythmias. Animal experiments have shown that the mortality of aconitine is lowered by tetrodotoxin. The toxic effects of aconitine were attenuated by tetrodotoxin, probably due to their mutual antagonistic effect on excitable membranes. Also paeoniflorin seems to have a detoxifying effect on the acute toxicity of aconitine in test animals. This may result from alterations of the pharmacokinetic behavior of aconitine in the animals due to the pharmacokinetic interaction between aconitine and paeoniflorin. In addition, in emergencies, one can wash the stomach using either tannic acid or powdered charcoal. Heart stimulants such as strong coffee or caffeine may also help until professional help is available. Famous poisonings During the Indian Rebellion of 1857, a British detachment was the target of attempted poisoning with aconitine by the Indian regimental cooks. The plot was thwarted by John Nicholson who, having detected it, interrupted the British officers just as they were about to consume the poisoned meal. The chefs refused to taste their own preparation, whereupon it was force-fed to a monkey who "expired on the spot". The cooks were hanged. Aconitine was the poison used by George Henry Lamson in 1881 to murder his brother-in-law in order to secure an inheritance. Lamson had learned about aconitine as a medical student from professor Robert Christison, who had taught that it was undetectable, but forensic science had improved since Lamson's student days. Rufus T. Bush, American industrialist and yachtsman, died on September 15, 1890, after accidentally taking a fatal dose of aconite. In 1953 aconitine was used by a Soviet biochemist and poison developer, Grigory Mairanovsky, in experiments with prisoners in the secret NKVD laboratory in Moscow. He admitted killing around 10 people using the poison. In 2004 Canadian actor Andre Noble died from aconitine poisoning. He accidentally ate some monkshood while he was on a hike with his aunt in Newfoundland. In 2009 Lakhvir Singh of Feltham, west London, used aconitine to poison the food of her ex-lover Lakhvinder Cheema (who died as a result of the poisoning) and his current fiancée Gurjeet Choongh. Singh received a life sentence with a 23-year minimum for the murder on February 10, 2010. In 2022, twelve diners at a restaurant in York Region became acutely ill following a meal.
All twelve were seriously ill, and four of them were admitted to the intensive care unit after the suspected poisoning. In popular culture Aconitine was a favorite poison in the ancient world. The poet Ovid, referring to the proverbial dislike of stepmothers for their step-children, writes: Lurida terribiles miscent aconita novercae. Fearsome stepmothers mix lurid aconites. Aconitine was also made famous by its use in Oscar Wilde's 1891 story "Lord Arthur Savile's Crime". Aconite also plays a prominent role in James Joyce's Ulysses, in which the father of protagonist Leopold Bloom used pastilles of the chemical to commit suicide. Aconitine poisoning plays a key role in the murder mystery Breakdown by Jonathan Kellerman (2016). In Twin Peaks season 3 part 13, aconitine is suggested as a means to poison the main character. Monk's Hood is the name of the third Cadfael novel written in 1980 by Ellis Peters. The novel was made into an episode of the television series Cadfael starring Derek Jacobi. In the third season of the Netflix series You, two of the main characters poison each other with aconitine. One survives (due to a lower dose and an antidote), and the other is killed. Hannah McKay (Yvonne Strahovski), a serial killer in the Showtime series Dexter, uses aconite on at least three occasions to poison her victims. In season 2 episode 16 of the series Person Of Interest, aconitine is shown in a syringe stuck to the character Shaw (Sarah Shahi), nearly being injected and causing her death before she is rescued by Reese (Jim Caviezel). In a 2017 episode of The Doctor Blake Mysteries, fight manager Gus Jansons (Steve Adams) murdered his boxer, Mickey Ellis (Trey Coward), during a match by mixing aconitine into petroleum jelly and applying it to a cut over the boxer's eye. He feared being blackmailed over a murder he helped cover up. He had made the poison from wolfsbane he had seen in a local garden. Aconitine poisoning is used by Villanelle to kill the Ukrainian gangster Rinat Yevtukh in Killing Eve: No Tomorrow by Luke Jennings (2018). See also Pseudaconitine References External links Diterpene alkaloids Ion channel toxins Non-protein ion channel toxins Neurotoxins Acetate esters Benzoate esters Secondary alcohols Tertiary alcohols Nitrogen heterocycles Sodium channel openers Plant toxins Heterocyclic compounds with 6 rings Methoxy compounds
Aconitine
Chemistry
3,916
78,325,805
https://en.wikipedia.org/wiki/Obecabtagene%20autoleucel
Obecabtagene autoleucel, sold under the brand name Aucatzyl, is an anti-cancer medication used for the treatment of acute lymphoblastic leukemia. It is a CD19-directed genetically modified autologous T-cell immunotherapy. The most common side effects include cytokine release syndrome, infections-pathogen unspecified, musculoskeletal pain, viral infections, fever, nausea, bacterial infectious disorders, diarrhea, febrile neutropenia, immune effector cell-associated neurotoxicity syndrome, hypotension, pain, fatigue, headache, encephalopathy, and hemorrhage. Obecabtagene autoleucel was approved for medical use in the United States in November 2024. Medical uses Obecabtagene autoleucel is indicated for the treatment of adults with relapsed or refractory B-cell precursor acute lymphoblastic leukemia. Side effects The US Food and Drug Administration (FDA) approved prescribing information for obecabtagene autoleucel has a boxed warning for cytokine release syndrome, immune effector cell-associated neurotoxicity syndrome, and T-cell malignancies. The most common side effects include cytokine release syndrome, infections-pathogen unspecified, musculoskeletal pain, viral infections, fever, nausea, bacterial infectious disorders, diarrhea, febrile neutropenia, immune effector cell-associated neurotoxicity syndrome, hypotension, pain, fatigue, headache, encephalopathy, and hemorrhage. History Efficacy was evaluated in FELIX (NCT04404660), an open-label, multicenter, single-arm trial that enrolled adults with relapsed or refractory CD19-positive B-cell acute lymphoblastic leukemia. Enrolled participants were required to have relapsed following a remission lasting twelve months or less, relapsed or refractory acute lymphoblastic leukemia following two or more prior lines of systemic therapy, or disease that was relapsed or refractory three or more months after allogeneic stem cell transplantation. The major efficacy outcome measures were rate and duration of complete remission achieved within three months after infusion. Additional outcome measures were rate and duration of overall complete remission which includes complete remission and complete remission with incomplete hematologic recovery, at any time. Of the 65 participants evaluable for efficacy, 27 participants (42%; 95% confidence interval [CI]: 29%, 54%) achieved complete remission within three months. The median duration of complete remission achieved within three months was 14.1 months (95% CI: 6.1, not reached). The US Food and Drug Administration (FDA) granted the application for obecabtagene autoleucel regenerative medicine advanced therapy (RMAT) and orphan drug designations. Society and culture Legal status Obecabtagene autoleucel was approved for medical use in the United States in November 2024. Names Obecabtagene autoleucel is the international nonproprietary name. It is sold under the brand name Aucatzyl. References External links Antineoplastic drugs Approved gene therapies CAR T-cell therapy Drugs that are a gene therapy Orphan drugs
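The FELIX readout above is a simple binomial proportion: 27 of 65 evaluable participants (about 42%) achieved complete remission within three months, with a reported 95% confidence interval of 29% to 54%. The text does not say which interval method the trial statisticians used; as a rough, hypothetical check, the sketch below computes a Wilson score interval for 27/65, which lands close to the reported range.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 27 of 65 participants achieved complete remission within three months.
lo, hi = wilson_ci(27, 65)
print(f"CR rate: {27/65:.0%}, 95% CI: {lo:.0%} to {hi:.0%}")  # ~42%, roughly 30% to 54%
```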
Obecabtagene autoleucel
Chemistry,Biology
723
17,608,460
https://en.wikipedia.org/wiki/Texas%20ratio
The Texas ratio is a metric used to assess the extent of a bank's credit problems. Developed by Gerard Cassidy and others at RBC Capital Markets, it is calculated by dividing the value of the lender's non-performing assets (NPL + Real Estate Owned) by the sum of its tangible common equity capital and loan loss reserves. While analyzing Texas banks during the early 1980s recession, Cassidy observed that banks typically failed when this ratio reached 1:1, or 100%. He later identified a similar pattern among New England banks during the early 1990s recession. References External links Current Texas Ratios for all US Banks and Credit Unions Current Texas Ratios for US Banks Updated May 21st 2010 by Amateur Investors Complete list of US banks and their Texas ratios as published in December of 2008 and an updated listing published in October of 2009; the original blog entry includes notes on how the tables were created (that the ratio was multiplied by 100 for easier comprehension, etc.) Banking Credit Debt Financial ratios
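Expressed as a formula, the ratio is (non-performing loans + real estate owned) / (tangible common equity + loan loss reserves), with values at or around 1.0 (100%) historically associated with failure. A small sketch with made-up balance-sheet figures:

```python
def texas_ratio(non_performing_loans: float, real_estate_owned: float,
                tangible_common_equity: float, loan_loss_reserves: float) -> float:
    """Texas ratio = non-performing assets / (tangible common equity + loan loss reserves)."""
    non_performing_assets = non_performing_loans + real_estate_owned
    return non_performing_assets / (tangible_common_equity + loan_loss_reserves)

# Hypothetical bank, figures in millions of dollars.
ratio = texas_ratio(non_performing_loans=80, real_estate_owned=25,
                    tangible_common_equity=90, loan_loss_reserves=15)
print(f"Texas ratio: {ratio:.0%}")  # 100% -> historically a warning level
```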
Texas ratio
Mathematics
198
17,893,430
https://en.wikipedia.org/wiki/International%20Builders%27%20Show
The International Builders' Show (IBS) is organized by the National Association of Home Builders (NAHB) and is the largest light construction building industry tradeshow in the United States. It is the only event of its kind, focusing specifically on the needs, concerns, and opportunities that face home builders. In 1944, the NAHB held its first annual convention and exposition, later becoming the International Builders' Show in 1998. From its early start, the show has grown to attract more than 100,000 attendees, making it one of the largest conventions in the country. As such, the show has alternated its location since 2003 between the Orange County Convention Center in Orlando, Florida and the Las Vegas Convention Center in Las Vegas, Nevada (two of the United States' largest convention centers). Since 2014, the International Builders' Show has been co-located with the Kitchen & Bath Industry Show (KBIS), with the combined shows branded as Design & Construction Week. In 2023, the National Hardware Show was co-dated with Design & Construction Week in Las Vegas. The 2023 Design and Construction Week drew nearly 110,000 attendees and nearly 2,000 exhibitors occupying more than one million square feet of indoor and outdoor exhibits. Previous and future dates 2000 - Jan. 14–17, Dallas, TX 2001 - Feb. 9–12, Atlanta, GA 2002 - Feb. 8–11, Atlanta, GA 2003 - Jan. 21–24, Las Vegas, NV 2004 - Jan. 19–22, Las Vegas, NV 2005 - Jan. 13–16, Orlando, FL 2006 - Jan. 11–14, Orlando, FL 2007 - Feb. 7–10, Orlando, FL 2008 - Feb. 12–15, Orlando, FL 2009 - Jan. 20–23, Las Vegas, NV 2010 - Jan. 19–22, Las Vegas, NV 2011 - Jan. 12–15, Orlando, FL 2012 - Feb. 8–11, Orlando, FL 2013 - Jan. 22–25, Las Vegas, NV 2014 - Feb. 4–7, Las Vegas, NV 2015 - Jan. 20–23, Las Vegas, NV 2016 - Jan. 19–22, Las Vegas, NV 2017 - Jan. 11–14, Orlando, FL 2018 - Jan. 10–13, Orlando, FL 2019 - Feb. 19–22, Las Vegas, NV 2020 - Jan. 21–24, Las Vegas, NV 2021 - Feb. 9–11, Orlando, FL 2022 - Feb. 8–10, Orlando, FL 2023 - Jan. 31 – Feb. 2, Las Vegas, NV 2024 - Feb. 27–29, Las Vegas, NV 2025 - Feb. 25–27, Las Vegas, NV 2026 - Feb. 17–19, Orlando, FL References External links International Builders' Show web site National Association of Home Builders NAHB History Technology conventions
International Builders' Show
Engineering
604
8,207,817
https://en.wikipedia.org/wiki/Mesonic%20molecule
A mesonic molecule is a set of two or more mesons bound together by the strong force. Unlike baryonic molecules, which form the nuclei of all elements in nature save hydrogen-1, a mesonic molecule has yet to be definitively observed. The X(3872) discovered in 2003 and the Z(4430) discovered in 2007 by the Belle experiment are the best candidates for such an observation. See also Meson Pionium Tetraquark References Hypothetical composite particles
Mesonic molecule
Physics
102
13,225,486
https://en.wikipedia.org/wiki/Private%20VLAN
Private VLAN, also known as port isolation, is a technique in computer networking where a VLAN contains switch ports that are restricted such that they can only communicate with a given uplink. The restricted ports are called private ports. Each private VLAN typically contains many private ports and a single uplink. The uplink will typically be a port (or link aggregation group) connected to a router, firewall, server, provider network, or similar central resource. The concept was primarily introduced as a result of the limitation on the number of VLANs in network switches, a limit quickly exhausted in highly scaled scenarios. Hence, there was a requirement to create multiple network segregations with a minimum number of VLANs. The switch forwards all frames received from a private port to the uplink port, regardless of VLAN ID or destination MAC address. Frames received from an uplink port are forwarded in the normal way (i.e. to the port hosting the destination MAC address, or to all ports of the VLAN for broadcast frames or for unknown destination MAC addresses). As a result, direct peer-to-peer traffic between peers through the switch is blocked, and any such communication must go through the uplink. While private VLANs provide isolation between peers at the data link layer, communication at higher layers may still be possible depending on further network configuration. A typical application for a private VLAN is a hotel or Ethernet-to-the-home network where each room or apartment has a port for Internet access. Similar port isolation is used in Ethernet-based ADSL DSLAMs. Allowing direct data link layer communication between customer nodes would expose the local network to various security attacks, such as ARP spoofing, as well as increase the potential for damage due to misconfiguration. Another application of private VLANs is to simplify IP address assignment. Ports can be isolated from each other at the data link layer (for security, performance, or other reasons), while belonging to the same IP subnet. In such a case, direct communication between the IP hosts on the protected ports is only possible through the uplink connection by using MAC-Forced Forwarding or a similar Proxy ARP based solution. VLAN Trunking Protocol Version 3 Version 3 of VLAN Trunking Protocol saw support added for private VLANs. Version 1 and 2 If using version 1 or 2, the switch must be in VTP transparent mode. VTP v1 and 2 do not propagate private-VLAN configuration, so the administrator needs to configure it on each switch individually. Limitations of Private VLANs Private VLANs have no support for: Dynamic-access port VLAN membership. Dynamic Trunking Protocol (DTP) Port Aggregation Protocol (PAgP) Link Aggregation Control Protocol (LACP) Multicast VLAN Registration (MVR) Voice VLAN Web Cache Communication Protocol (WCCP) Ethernet ring protection (ERP) Flexible VLAN tagging Egress VLAN firewall filters Integrated routing and bridging (IRB) interface Multichassis link aggregation groups (MC-LAGs) Q-in-Q tunneling Routing between sub-VLANs (Secondary) and Primary VLAN Juniper does not support IGMP snooping Configuration limitations An access interface cannot participate in two different primary VLANs; it is limited to one private VLAN. Spanning Tree Protocol (STP) settings. Private VLANs cannot be configured on VLAN 1 or VLANs 1002 to 1005 as primary or secondary VLANs, as these are special VLANs. Cisco implementation Cisco Systems' Private VLANs have the advantage that they can function across multiple switches.
A Private VLAN divides a VLAN (Primary) into sub-VLANs (Secondary) while keeping existing IP subnet and layer 3 configuration. A regular VLAN is a single broadcast domain, while private VLAN partitions one broadcast domain into multiple smaller broadcast subdomains. Primary VLAN: Simply the original VLAN. This type of VLAN is used to forward frames downstream to all Secondary VLANs. Secondary VLAN: Secondary VLAN is configured with one of the following types: Isolated: Any switch ports associated with an Isolated VLAN can reach the primary VLAN, but not any other Secondary VLAN. In addition, hosts associated with the same Isolated VLAN cannot reach each other. There can be multiple Isolated VLANs in one Private VLAN domain (which may be useful if the VLANs need to use distinct paths for security reasons); the ports remain isolated from each other within each VLAN. Community: Any switch ports associated with a common community VLAN can communicate with each other and with the primary VLAN but not with any other secondary VLAN. There can be multiple distinct community VLANs within one Private VLAN domain. There are mainly two types of ports in a Private VLAN: Promiscuous port (P-Port) and Host port. Host port further divides in two types Isolated port (I-Port) and Community port (C-port). Promiscuous port (P-Port): The switch port connects to a router, firewall or other common gateway device. This port can communicate with anything else connected to the primary or any secondary VLAN. In other words, it is a type of a port that is allowed to send and receive frames from any other port on the VLAN. Host Ports: Isolated Port (I-Port): Connects to the regular host that resides on isolated VLAN. This port communicates only with P-Ports. Community Port (C-Port): Connects to the regular host that resides on community VLAN. This port communicates with P-Ports and ports on the same community VLAN. Example scenario: a switch with VLAN 100, converted into a Private VLAN with one P-Port, two I-Ports in Isolated VLAN 101 (Secondary) and two community VLANs 102 and 103 (Secondary), with 2 ports in each. The switch has one uplink port (trunk), connected to another switch. The diagram shows this configuration graphically. The following table shows the traffic which can flow between all these ports. Traffic from an Uplink port to an Isolated port will be denied if it is in the Isolated VLAN. Traffic from an Uplink port to an isolated port will be permitted if it is in the primary VLAN. Use cases Network segregation Private VLANs are used for network segregation when: Moving from a flat network to a segregated network without changing the IP addressing of the hosts. A firewall can replace a router, and then hosts can be slowly moved to their secondary VLAN assignment without changing their IP addresses. There is a need for a firewall with many tens, hundreds or even thousands interfaces. Using Private VLANs the firewall can have only one interface for all the segregated networks. There is a need to preserve IP addressing. With Private VLANs, all Secondary VLANs can share the same IP subnet. Overcome license fees for number of supported VLANs per firewall. There is a need for more than 4095 segregated networks. With Isolated VLAN, there can be endless number of segregated networks. Secure hosting Private VLANs in hosting operation allows segregation between customers with the following benefits: No need for separate IP subnet for each customer. Using Isolated VLAN, there is no limit on the number of customers. 
No need to change firewall's interface configuration to extend the number of configured VLANs. Secure VDI An Isolated VLAN can be used to segregate VDI desktops from each other, allowing filtering and inspection of desktop to desktop communication. Using non-isolated VLANs would require a different VLAN and subnet for each VDI desktop. Backup network On a backup network, there is no need for hosts to reach each other. Hosts should only reach their backup destination. Backup clients can be placed in one Isolated VLAN and the backup servers can be placed as promiscuous on the Primary VLAN, this will allow hosts to communicate only with the backup servers. Broadcast mitigation Because broadcast traffic on a network must be sent to each wireless host serially, it can consume large shares of air time, making the wireless network unresponsive. Where there is more than one wireless access point connected to a switch, private VLANs can prevent broadcast frames from propagating from one AP to another, preserving network performance for connected hosts. Vendor support Hardware switches Alcatel-Lucent Enterprise OmniSwitch series Arista Networks Data Center Switching Brocade BigIron, TurboIron and FastIron switches Cisco Systems Catalyst 2960-XR, 3560 and higher product lines switches Extreme Networks XOS based switches Fortinet FortiOS-based switches Hewlett-Packard Enterprise Aruba Access Switches 2920 series and higher product lines switches Juniper Networks EX switches Lenovo CNOS based switches Microsens G6 switch family MikroTik All models (routers/switches) with switch chips since RouterOS v6.43 TP-Link T2600G series, T3700G series TRENDnet many models Ubiquiti Networks EdgeSwitch series, Unifi series Software switches Cisco Systems Nexus 1000V Microsoft HyperV 2012 Oracle Oracle VM Server for SPARC 3.1.1.1 VMware vDS switch Other private VLAN–aware products Cisco Systems Firewall Services Module Marathon Networks PVTD Private VLAN deployment and operation appliance See also Ethernet Broadcast domain VLAN hopping References External links "Configuring Private VLAN" TP-Link Configuration Guide. Further reading CCNP BCMSN Official exam certification guide.By-David Hucaby, , Local area networks Network architecture
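The port-type rules described in the Cisco implementation section (promiscuous ports reach everything; isolated ports reach only promiscuous ports; community ports reach promiscuous ports and their own community) can be captured in a small reachability check. The sketch below is an illustrative model of those rules only, not vendor configuration syntax; the VLAN numbers mirror the example scenario in the text, while the port names are made up.

```python
from dataclasses import dataclass

# Illustrative model of private-VLAN forwarding rules (not vendor CLI syntax):
# promiscuous (P) ports talk to everything, isolated (I) ports only to P ports,
# community (C) ports to P ports and to ports in the same community VLAN.

@dataclass(frozen=True)
class Port:
    name: str
    kind: str            # "P", "I", or "C"
    secondary_vlan: int  # secondary VLAN ID (ignored for P ports)

def can_talk(a: Port, b: Port) -> bool:
    if "P" in (a.kind, b.kind):
        return True
    if a.kind == "C" and b.kind == "C":
        return a.secondary_vlan == b.secondary_vlan
    return False  # isolated-to-isolated and isolated-to-community are blocked

# Ports loosely based on the example scenario: primary VLAN 100,
# isolated VLAN 101, community VLANs 102 and 103.
uplink = Port("uplink", "P", 100)
host_a = Port("host_a", "I", 101)
host_b = Port("host_b", "I", 101)
web_1  = Port("web_1", "C", 102)
web_2  = Port("web_2", "C", 102)
db_1   = Port("db_1", "C", 103)

assert can_talk(host_a, uplink)          # isolated -> promiscuous: allowed
assert not can_talk(host_a, host_b)      # isolated -> isolated: blocked
assert can_talk(web_1, web_2)            # same community: allowed
assert not can_talk(web_1, db_1)         # different communities: blocked
```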
Private VLAN
Engineering
2,012
44,074,040
https://en.wikipedia.org/wiki/Wart%20lichen
Wart lichen may refer to lichens in different genera, which are not all in the same family: Porina Pyrenula Staurothele Verrucaria
Wart lichen
Biology
39
2,526,877
https://en.wikipedia.org/wiki/Isotopes%20of%20bismuth
Bismuth (83Bi) has 41 known isotopes, ranging from 184Bi to 224Bi. Bismuth has no stable isotopes, but does have one very long-lived isotope; thus, the standard atomic weight can be given as . Although bismuth-209 is now known to be radioactive, it has classically been considered to be a stable isotope because it has a half-life of approximately 2.01×1019 years, which is more than a billion times the age of the universe. Besides 209Bi, the most stable bismuth radioisotopes are 210mBi with a half-life of 3.04 million years, 208Bi with a half-life of 368,000 years and 207Bi, with a half-life of 32.9 years, none of which occurs in nature. All other isotopes have half-lives under 1 year, most under a day. Of naturally occurring radioisotopes, the most stable is radiogenic 210Bi with a half-life of 5.012 days. 210mBi is unusual for being a nuclear isomer with a half-life multiple orders of magnitude longer than that of the ground state. List of isotopes |-id=Bismuth-184 | 184Bi | | style="text-align:right" | 83 | style="text-align:right" | 101 | 184.00135(13)# | 6.6(15) ms | α | 180Tl | 3+# | |-id=Bismuth-184m | style="text-indent:1em" | 184mBi | | colspan="3" style="text-indent:2em" | 150(100)# keV | 13(2) ms | α | 180Tl | 10−# | |-id=Bismuth-185 | rowspan=2|185Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 102 | rowspan=2|184.99760(9)# | rowspan=2| | p (92%) | 184Pb | rowspan=2|(1/2+) | rowspan=2| |- | α (8%) | 181Tl |-id=Bismuth-185m | style="text-indent:1em" | 185mBi | | colspan="3" style="text-indent:2em" | 70(50)# keV | 58(2) μs | IT | 185Bi | (7/2−, 9/2−) | |-id=Bismuth-186 | rowspan=3|186Bi | rowspan=3| | rowspan=3 style="text-align:right" | 83 | rowspan=3 style="text-align:right" | 103 | rowspan=3|185.996623(18) | rowspan=3|14.8(7) ms | α (99.99%) | 182Tl | rowspan=3|(3+) | rowspan=3| |- | β+ (?%) | 186Pb |- | β+, SF (0.011%) | (various) |-id=Bismuth-186m | rowspan=3 style="text-indent:1em" | 186mBi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 170(100)# keV | rowspan=3|9.8(4) ms | α (99.99%) | 182Tl | rowspan=3|(10−) | rowspan=3| |- | β+ (?%) | 186Pb |- | β+, SF (0.011%) | (various) |-id=Bismuth-187 | 187Bi | | style="text-align:right" | 83 | style="text-align:right" | 104 | 186.993147(11) | 37(2) ms | α | 183Tl | (9/2−) | |-id=Bismuth-187m1 | style="text-indent:1em" | 187m1Bi | | colspan="3" style="text-indent:2em" | 108(8) keV | 370(20) μs | α | 183Tl | 1/2+ | |-id=Bismuth-187m2 | style="text-indent:1em" | 187m2Bi | | colspan="3" style="text-indent:2em" | 252(3) keV | 7(5) μs | IT | 187Bi | (13/2+) | |-id=Bismuth-188 | rowspan=2|188Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 105 | rowspan=2|187.992276(12) | rowspan=2|60(3) ms | α | 184Tl | rowspan=2|(3+) | rowspan=2| |- | β+, SF (0.0014%) | (various) |-id=Bismuth-188m1 | style="text-indent:1em" | 188m1Bi | | colspan="3" style="text-indent:2em" | 66(30) keV | >5 μs | | | 7+# | |-id=Bismuth-188m2 | rowspan=2 style="text-indent:1em" | 188m2Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 153(30) keV | rowspan=2|265(15) ms | α | 184Tl | rowspan=2|(10−) | rowspan=2| |- | β+, SF (0.0046%) | (various) |-id=Bismuth-189 | 189Bi | | style="text-align:right" | 83 | style="text-align:right" | 106 | 188.989195(22) | 688(5) ms | α | 185Tl | 9/2− | |-id=Bismuth-189m1 | rowspan=2 style="text-indent:1em" | 189m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 184(5) keV | rowspan=2|5.0(1) ms | α 
(83%) | 185Tl | rowspan=2|1/2+ | rowspan=2| |- | IT (17%) | 189Bi |-id=Bismuth-189m2 | style="text-indent:1em" | 189m2Bi | | colspan="3" style="text-indent:2em" | 357.6(5) keV | 880(50) ns | IT | 189Bi | 13/2+ | |-id=Bismuth-190 | rowspan=3|190Bi | rowspan=3| | rowspan=3 style="text-align:right" | 83 | rowspan=3 style="text-align:right" | 107 | rowspan=3|189.988625(23) | rowspan=3|6.3(1) s | α (77%) | 186Tl | rowspan=3|(3+) | rowspan=3| |- | β+ (23%) | 190Pb |- | β+, SF (6×10−6%) | (various) |-id=Bismuth-190m1 | rowspan=3 style="text-indent:1em" | 190m1Bi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 120(40) keV | rowspan=3|6.2(1) s | α (70%) | 186Tl | rowspan=3|10− | rowspan=3| |- | β+ (30%) | 190Pb |- | β+, SF (4×10−6%) | (various) |-id=Bismuth-190m2 | style="text-indent:1em" | 190m2Bi | | colspan="3" style="text-indent:2em" | 121(15) keV | 175(8) ns | IT | 190Bi | (5−) | |-id=Bismuth-190m3 | style="text-indent:1em" | 190m3Bi | | colspan="3" style="text-indent:2em" | 394(40) keV | 1.3(8) μs | IT | 190Bi | (8−) | |-id=Bismuth-191 | rowspan=2|191Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 108 | rowspan=2|190.985787(8) | rowspan=2|12.4(3) s | α (51%) | 187Tl | rowspan=2|9/2− | rowspan=2| |- | β+ (49%) | 191Pb |-id=Bismuth-191m1 | rowspan=3 style="text-indent:1em" | 191m1Bi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 242(4) keV | rowspan=3|125(8) ms | α (68%) | 187Tl | rowspan=3|1/2+ | rowspan=3| |- | IT (?%) | 191Bi |- | β+ (?%) | 191Pb |-id=Bismuth-191m2 | style="text-indent:1em" | 191m2Bi | | colspan="3" style="text-indent:2em" | 429.7(5) keV | 562(10) ns | IT | 191Bi | 13/2+ | |-id=Bismuth-191m3 | style="text-indent:1em" | 191m3Bi | | colspan="3" style="text-indent:2em" | 1875(25)# keV | 400(40) ns | IT | 191Bi | 25/2-# | |-id=Bismuth-192 | rowspan=2|192Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 109 | rowspan=2|191.98547(3) | rowspan=2|34.6(9) s | β+ (88%) | 192Pb | rowspan=2|(3+) | rowspan=2| |- | α (12%) | 188Tl |-id=Bismuth-192m | rowspan=2 style="text-indent:1em" | 192mBi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 140(30) keV | rowspan=2|39.6(4) s | β+ (90%) | 192Pb | rowspan=2|10− | rowspan=2| |- | α (10%) | 188Tl |-id=Bismuth-193 | rowspan=2|193Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 110 | rowspan=2|192.982947(8) | rowspan=2|63.6(30) s | β+ (96.5%) | 193Pb | rowspan=2|9/2− | rowspan=2| |- | α (3.5%) | 189Tl |-id=Bismuth-193m1 | rowspan=2 style="text-indent:1em" | 193m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 305(6) keV | rowspan=2|3.20(14) s | α (84%) | 189Tl | rowspan=2|1/2+ | rowspan=2| |- | β+ (16%) | 193Pb |-id=Bismuth-193m2 | style="text-indent:1em" | 193m2Bi | | colspan="3" style="text-indent:2em" | 605.53(18) keV | 153(10) ns | IT | 193Bi | 13/2+ | |-id=Bismuth-193m3 | style="text-indent:1em" | 193m3Bi | | colspan="3" style="text-indent:2em" | 2349.6(6) keV | 85(3) μs | IT | 193Bi | 29/2+ | |-id=Bismuth-193m4 | style="text-indent:1em" | 193m4Bi | | colspan="3" style="text-indent:2em" | 2405.1(7) keV | 3.02(8) μs | IT | 193Bi | (29/2−) | |-id=Bismuth-194 | rowspan=2|194Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 111 | rowspan=2|193.982799(6) | rowspan=2|95(3) s | β+ (99.54%) | 194Pb | rowspan=2|3+ | rowspan=2| |- | α (0.46%) | 190Tl |-id=Bismuth-194m1 | style="text-indent:1em" | 
194m1Bi | | colspan="3" style="text-indent:2em" | 150(50) keV | 125(2) s | β+ | 194Pb | (6+, 7+) | |-id=Bismuth-194m2 | rowspan=2 style="text-indent:1em" | 194m2Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 163(4) keV | rowspan=2|115(4) s | β+ (99.80%) | 194Pb | rowspan=2|(10−) | rowspan=2| |- | α (0.20%) | 190Tl |-id=Bismuth-195 | rowspan=2|195Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 112 | rowspan=2|194.980649(6) | rowspan=2|183(4) s | β+ (99.97%) | 195Pb | rowspan=2|9/2− | rowspan=2| |- | α (0.030%) | 191Tl |-id=Bismuth-195m1 | rowspan=2 style="text-indent:1em" | 195m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 399(6) keV | rowspan=2|87(1) s | β+ (67%) | 195Pb | rowspan=2|1/2+ | rowspan=2| |- | α (33%) | 191Tl |-id=Bismuth-195m2 | style="text-indent:1em" | 195m2Bi | | colspan="3" style="text-indent:2em" | 2381.0(5) keV | 614(5) ns | IT | 195Bi | (29/2−) | |-id=Bismuth-195m3 | style="text-indent:1em" | 195m3Bi | | colspan="3" style="text-indent:2em" | 2615.9(5) keV | 1.49(1) μs | IT | 195Bi | 29/2+ | |-id=Bismuth-196 | rowspan=2|196Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 113 | rowspan=2|195.980667(26) | rowspan=2|5.13(20) min | β+ | 196Pb | rowspan=2|(3+) | rowspan=2| |- | α (0.00115%) | 192Tl |-id=Bismuth-196m1 | style="text-indent:1em" | 196m1Bi | | colspan="3" style="text-indent:2em" | 166.4(29) keV | 0.6(5) s | IT | 196Bi | (7+) | |-id=Bismuth-196m2 | rowspan=3 style="text-indent:1em" | 196m2Bi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 272(3) keV | rowspan=3| 4.00(5) min | β+ (74.2%) | 196Pb | rowspan=3| (10−) | rowspan=3| |- | IT (25.8%) | 196Bi |- | α (3.8×10−4%) | 196Bi |-id=Bismuth-197 | 197Bi | | style="text-align:right" | 83 | style="text-align:right" | 114 | 196.978865(9) | 9.33(50) min | β+ | 197Pb | 9/2− | |-id=Bismuth-197m1 | rowspan=2 style="text-indent:1em" | 197m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 533(12) keV | rowspan=2|5.04(16) min | α (55%) | 193Tl | rowspan=2|1/2+ | rowspan=2| |- | β+ (45%) | 197Pb |-id=Bismuth-197m2 | style="text-indent:1em" | 197m2Bi | | colspan="3" style="text-indent:2em" | 2403(12) keV | 263(13) ns | IT | 197Bi | (29/2−) | |-id=Bismuth-197m3 | style="text-indent:1em" | 197m3Bi | | colspan="3" style="text-indent:2em" | 2929.5(5) keV | 209(30) ns | IT | 197Bi | (31/2−) | |-id=Bismuth-198 | 198Bi | | style="text-align:right" | 83 | style="text-align:right" | 115 | 197.979201(30) | 10.3(3) min | β+ | 198Pb | 3+ | |-id=Bismuth-198m1 | style="text-indent:1em" | 198m1Bi | | colspan="3" style="text-indent:2em" | 290(40) keV | 11.6(3) min | β+ | 198Pb | 7+ | |-id=Bismuth-198m2 | style="text-indent:1em" | 198m2Bi | | colspan="3" style="text-indent:2em" | 540(40) keV | 7.7(5) s | IT | 198Bi | 10− | |-id=Bismuth-199 | 199Bi | | style="text-align:right" | 83 | style="text-align:right" | 116 | 198.977673(11) | 27(1) min | β+ | 199Pb | 9/2− | |-id=Bismuth-199m1 | rowspan=3 style="text-indent:1em" | 199m1Bi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 667(3) keV | rowspan=3|24.70(15) min | β+ (>98%) | 199Pb | rowspan=3|(1/2+) | rowspan=3| |- | IT (<2%) | 199Bi |- | α (0.01%) | 195Tl |-id=Bismuth-199m2 | style="text-indent:1em" | 199m2Bi | | colspan="3" style="text-indent:2em" | 1962(23) keV | 0.10(3) μs | IT | 199Bi | 25/2+# | |-id=Bismuth-199m3 | style="text-indent:1em" | 199m3Bi | | colspan="3" style="text-indent:2em" | 2548(23) keV | 
168(13) ns | IT | 199Bi | 29/2−# | |-id=Bismuth-200 | 200Bi | | style="text-align:right" | 83 | style="text-align:right" | 117 | 199.978131(24) | 36.4(5) min | β+ | 200Pb | 7+ | |-id=Bismuth-200m1 | rowspan=2 style="text-indent:1em" | 200m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 100(70)# keV | rowspan=2|31(2) min | β+ (?%) | 200Pb | rowspan=2|(2+) | rowspan=2| |- | IT (?%) | 200Bi |-id=Bismuth-200m2 | style="text-indent:1em" | 200m2Bi | | colspan="3" style="text-indent:2em" | 428.20(10) keV | 400(50) ms | IT | 200Bi | (10−) | |-id=Bismuth-201 | 201Bi | | style="text-align:right" | 83 | style="text-align:right" | 118 | 200.976995(13) | 103(3) min | β+ | 201Pb | 9/2− | |-id=Bismuth-201m1 | rowspan=2 style="text-indent:1em" | 201m1Bi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 846.35(18) keV | rowspan=2|57.5(21) min | β+ | 201Pb | rowspan=2|1/2+ | rowspan=2| |- | α (?%) | 197Tl |-id=Bismuth-201m2 | style="text-indent:1em" | 201m2Bi | | colspan="3" style="text-indent:2em" | 1973(23) keV | 118(28) ns | IT | 201Bi | 25/2+# | |-id=Bismuth-201m3 | style="text-indent:1em" | 201m3Bi | | colspan="3" style="text-indent:2em" | 2012(23) keV | 105(75) ns | IT | 201Bi | 27/2+# | |-id=Bismuth-201m4 | style="text-indent:1em" | 201m4Bi | | colspan="3" style="text-indent:2em" | 2781(23) keV | 124(4) ns | IT | 201Bi | 29/2−# | |-id=Bismuth-202 | rowspan=2|202Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 119 | rowspan=2|201.977723(15) | rowspan=2|1.72(5) h | β+ | 202Pb | rowspan=2|5+ | rowspan=2| |- | α (<10−5%) | 198Tl |-id=Bismuth-202m1 | style="text-indent:1em" | 202m1Bi | | colspan="3" style="text-indent:2em" | 625(12) keV | 3.04(6) μs | IT | 202Bi | 10−# | |-id=Bismuth-202m2 | style="text-indent:1em" | 202m2Bi | | colspan="3" style="text-indent:2em" | 2617(12) keV | 310(50) ns | IT | 202Bi | (17+) | |-id=Bismuth-203 | 203Bi | | style="text-align:right" | 83 | style="text-align:right" | 120 | 202.976892(14) | 11.76(5) h | β+ | 203Pb | 9/2− | |-id=Bismuth-203m1 | style="text-indent:1em" | 203m1Bi | | colspan="3" style="text-indent:2em" | 1098.21(9) keV | 305(5) ms | IT | 203Bi | 1/2+ | |-id=Bismuth-203m2 | style="text-indent:1em" | 203m2Bi | | colspan="3" style="text-indent:2em" | 2041.5(6) keV | 194(30) ns | IT | 203Bi | 25/2+ | |-id=Bismuth-204 | 204Bi | | style="text-align:right" | 83 | style="text-align:right" | 121 | 203.977836(10) | 11.22(10) h | β+ | 204Pb | 6+ | |-id=Bismuth-204m1 | style="text-indent:1em" | 204m1Bi | | colspan="3" style="text-indent:2em" | 805.5(3) keV | 13.0(1) ms | IT | 204Bi | 10− | |-id=Bismuth-204m2 | style="text-indent:1em" | 204m2Bi | | colspan="3" style="text-indent:2em" | 2833.4(11) keV | 1.07(3) ms | IT | 204Bi | 17+ | |-id=Bismuth-205 | 205Bi | | style="text-align:right" | 83 | style="text-align:right" | 122 | 204.977385(5) | 14.91(7) d | β+ | 205Pb | 9/2− | |-id=Bismuth-205m1 | style="text-indent:1em" | 205m1Bi | | colspan="3" style="text-indent:2em" | 1497.17(9) keV | 7.9(7) μs | IT | 205Bi | 1/2+ | |-id=Bismuth-205m2 | style="text-indent:1em" | 205m2Bi | | colspan="3" style="text-indent:2em" | 2064.7(4) keV | 100(6) ns | IT | 205Bi | 21/2+ | |-id=Bismuth-205m3 | style="text-indent:1em" | 205m3Bi | | colspan="3" style="text-indent:2em" | 2139.0(7) keV | 220(25) ns | IT | 205Bi | 25/2+ | |-id=Bismuth-206 | 206Bi | | style="text-align:right" | 83 | style="text-align:right" | 123 | 205.978499(8) | 6.243(3) d | β+ | 206Pb | 6+ | |-id=Bismuth-206m1 | style="text-indent:1em" | 
206m1Bi | | colspan="3" style="text-indent:2em" | 59.897(17) keV | 7.7(2) μs | IT | 206Bi | 4+ | |-id=Bismuth-206m2 | style="text-indent:1em" | 206m2Bi | | colspan="3" style="text-indent:2em" | 1044.8(7) keV | 890(10) μs | IT | 206Bi | 10− | |-id=Bismuth-206m3 | style="text-indent:1em" | 206m3Bi | | colspan="3" style="text-indent:2em" | 9233.3(8) keV | 155(15) ns | IT | 206Bi | (28−) | |-id=Bismuth-206m4 | style="text-indent:1em" | 206m4Bi | | colspan="3" style="text-indent:2em" | 10170.5(8) keV | >2 μs | IT | 206Bi | (31+) | |-id=Bismuth-207 | 207Bi | | style="text-align:right" | 83 | style="text-align:right" | 124 | 206.9784706(26) | 31.22(17) y | β+ | 207Pb | 9/2− | |-id=Bismuth-207m | style="text-indent:1em" | 207mBi | | colspan="3" style="text-indent:2em" | 2101.61(16) keV | 182(6) μs | IT | 207Bi | 21/2+ | |-id=Bismuth-208 | 208Bi | | style="text-align:right" | 83 | style="text-align:right" | 125 | 207.9797421(25) | 3.68(4)×105 y | β+ | 208Pb | 5+ | |-id=Bismuth-208m | style="text-indent:1em" | 208mBi | | colspan="3" style="text-indent:2em" | 1571.1(4) keV | 2.58(4) ms | IT | 208Bi | 10− | |- | 209Bi | | style="text-align:right" | 83 | style="text-align:right" | 126 | 208.9803986(15) | 2.01(8)×1019 y | α | 205Tl | 9/2− | 1.0000 |-id=Bismuth-210 | rowspan=2|210Bi | rowspan=2|Radium E | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 127 | rowspan=2|209.9841202(15) | rowspan=2|5.012(5) d | β− | 210Po | rowspan=2|1− | rowspan=2|Trace |- | α (1.32×10−4%) | 206Tl |-id=Bismuth-210m | style="text-indent:1em" | 210mBi | | colspan="3" style="text-indent:2em" | 271.31(11) keV | 3.04(6)×106 y | α | 206Tl | 9− | |-id=Bismuth-211 | rowspan=2|211Bi | rowspan=2|Actinium C | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 128 | rowspan=2|210.987269(6) | rowspan=2|2.14(2) min | α (99.72%) | 207Tl | rowspan=2|9/2− | rowspan=2|Trace |- | β− (0.276%) | 211Po |-id=Bismuth-211m | style="text-indent:1em" | 211mBi | | colspan="3" style="text-indent:2em" | 1257(10) keV | 1.4(3) μs | IT | 211Bi | (25/2−) | |-id=Bismuth-212 | rowspan=3|212Bi | rowspan=3|Thorium C | rowspan=3 style="text-align:right" | 83 | rowspan=3 style="text-align:right" | 129 | rowspan=3|211.991285(2) | rowspan=3|60.55(6) min | β− (64.05%) | 212Po | rowspan=3|1− | rowspan=3|Trace |- | α (35.94%) | 208Tl |- | β−, α (0.014%) | 208Pb |-id=Bismuth-212m1 | rowspan=3 style="text-indent:1em" | 212m1Bi | rowspan=3| | rowspan=3 colspan="3" style="text-indent:2em" | 250(30) keV | rowspan=3|25.0(2) min | α (67%) | 208Tl | rowspan=3|(8−, 9−) | rowspan=3| |- | β−, α (30%) | 208Pb |- | β− (3%) | 212Po |-id=Bismuth-212m2 | style="text-indent:1em" | 212m2Bi | | colspan="3" style="text-indent:2em" | 1479(30) keV | 7.0(3) min | β− | 212Po | (18−) | |- | rowspan=2|213Bi | rowspan=2| | rowspan=2 style="text-align:right" | 83 | rowspan=2 style="text-align:right" | 130 | rowspan=2|212.994384(5) | rowspan=2|45.60(4) min | β− (97.91%) | 213Po | rowspan=2|9/2− | rowspan=2|Trace |- | α (2.09%) | 209Tl |-id=Bismuth-213m | style="text-indent:1em" | 213mBi | | colspan="3" style="text-indent:2em" | 1353(21) keV | >168 s | | | 25/2−# | |-id=Bismuth-214 | rowspan=3|214Bi | rowspan=3|Radium C | rowspan=3 style="text-align:right" | 83 | rowspan=3 style="text-align:right" | 131 | rowspan=3|213.998711(12) | rowspan=3|19.9(4) min | β− (99.98%) | 214Po | rowspan=3|1− | rowspan=3|Trace |- | α (0.021%) | 210Tl |- | β−, α (0.003%) | 210Pb |-id=Bismuth-214m | style="text-indent:1em" | 214mBi | | colspan="3" 
style="text-indent:2em" | 539(30) keV | >93 s | | | 8−# | |-id=Bismuth-215 | 215Bi | | style="text-align:right" | 83 | style="text-align:right" | 132 | 215.001749(6) | 7.62(13) min | β− | 215Po | (9/2−) | Trace |-id=Bismuth-215m | rowspan=2 style="text-indent:1em" | 215mBi | rowspan=2| | rowspan=2 colspan="3" style="text-indent:2em" | 1367(20)# keV | rowspan=2| 36.9(6) s | IT (76.9%) | 215Bi | rowspan=2| (25/2−) | rowspan=2| |- | β− (23.1%) | 215Po |-id=Bismuth-216 | 216Bi | | style="text-align:right" | 83 | style="text-align:right" | 133 | 216.006306(12) | 2.21(4) min | β− | 216Po | (6−, 7−) | |-id=Bismuth-216m | style="text-indent:1em" | 216mBi | | colspan="3" style="text-indent:2em" | 24(19) keV | 6.6(21) min | β− | 216Po | 3−# | |-id=Bismuth-217 | 217Bi | | style="text-align:right" | 83 | style="text-align:right" | 134 | 217.009372(19) | 98.5(13) s | β− | 217Po | 9/2−# | |-id=Bismuth-217m | style="text-indent:1em" | 217mBi | | colspan="3" style="text-indent:2em" | 1491(20) keV | 3.0(2) μs | IT | 217Bi | 25/2−# | |-id=Bismuth-218 | 218Bi | | style="text-align:right" | 83 | style="text-align:right" | 135 | 218.014188(29) | 33(1) s | β− | 218Po | 8−# | |-id=Bismuth-219 | 219Bi | | style="text-align:right" | 83 | style="text-align:right" | 136 | 219.01752(22)# | 8.7(29) s | β− | 219Po | 9/2−# | |-id=Bismuth-220 | 220Bi | | style="text-align:right" | 83 | style="text-align:right" | 137 | 220.02250(32)# | 9.5(57) s | β− | 220Po | 1−# | |-id=Bismuth-221 | 221Bi | | style="text-align:right" | 83 | style="text-align:right" | 138 | 221.02598(32)# | 2# s[>300 ns] | | | 9/2−# | |-id=Bismuth-222 | 222Bi | | style="text-align:right" | 83 | style="text-align:right" | 139 | 222.03108(32)# | 3# s[>300 ns] | | | 1−# | |-id=Bismuth-223 | 223Bi | | style="text-align:right" | 83 | style="text-align:right" | 140 | 223.03461(43)# | 1# s[>300 ns] | | | 9/2−# | |-id=Bismuth-224 | 224Bi | | style="text-align:right" | 83 | style="text-align:right" | 141 | 224.03980(43)# | 1# s[>300 ns] | | | 1−# | Bismuth-213 Bismuth-213 (213Bi) has a half-life of 45 minutes and decays via alpha emission. Commercially, bismuth-213 can be produced by bombarding radium with bremsstrahlung photons from a linear particle accelerator, which populates its progenitor actinium-225. In 1997, an antibody conjugate with 213Bi was used to treat patients with leukemia. This isotope has also been tried in targeted alpha therapy (TAT) program to treat a variety of cancers. Bismuth-213 is also found in the decay chain of uranium-233, which is the fuel bred by thorium reactors. References Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Bismuth Bismuth
Isotopes of bismuth
Chemistry
9,748
39,218,636
https://en.wikipedia.org/wiki/Omar%20Khaiyyam
Omar Khaiyyam is a Hindi-language film directed by Mohan Sinha and released in 1946. The film was based on the life of the famous mathematician, astronomer, philosopher and poet Omar Khayyam. The music was by Lal Mohammed. Cast K. L. Saigal as Omar Khayyam Suraiya as Mehru Wasti as Ghayasbeg Benjamin as Sultan Shakir as Vazir Madan Puri as Anwar Muzzmil as Zafar Leela Soundtrack The music was composed by Lal Mohammed, with lyrics written by Dr. Safdar Aah Sitapuri. Allaah Hu Khayyaam Hai Allaah Waalaa Matawaalaa - K. L. Saigal Aye Sufi Allah Wale - Suraiya Bedard Zaraa Sun Le Garibon Ki Kahaani - Suraiya Hare Bhare Baag Ke Phulon Pe Rijhaa Khayyaam - K. L. Saigal Insaan Kyon Rotaa Hai Insaan - K. L. Saigal Jal Ke Kuch Kehta Hai - Suraiya Khayyam Hai Allah Wala Matwala - Suraiya References External links 1946 films 1940s Hindi-language films 1940s Indian films Indian drama films 1946 drama films Indian black-and-white films Hindi-language drama films Cultural depictions of Omar Khayyam Films set in Iran
Omar Khaiyyam
Astronomy
292
14,024,869
https://en.wikipedia.org/wiki/Photothermal%20spectroscopy
Photothermal spectroscopy is a group of high sensitivity spectroscopy techniques used to measure optical absorption and thermal characteristics of a sample. The basis of photothermal spectroscopy is the change in thermal state of the sample resulting from the absorption of radiation. Light absorbed and not lost by emission results in heating. The heat raises the temperature, thereby influencing the thermodynamic properties of the sample or of a suitable material adjacent to it. Measurement of the temperature, pressure, or density changes that occur due to optical absorption is ultimately the basis for the photothermal spectroscopic measurements. As with photoacoustic spectroscopy, photothermal spectroscopy is an indirect method for measuring optical absorption, because it is not based on a direct measurement of the light involved in the absorption. In another sense, however, photothermal (and photoacoustic) methods measure the absorption directly, rather than, e.g., calculating it from the transmission, as is the case with the more usual (transmission) spectroscopic techniques. And it is this fact that gives the technique its high sensitivity, because in transmission techniques the absorbance is calculated as the difference between the total light impinging on the sample and the transmitted (plus reflected, plus scattered) light, with the usual problems of accuracy when one deals with small differences between large numbers, if the absorption is small. In photothermal spectroscopies, instead, the signal is essentially proportional to the absorption, and is zero when there is zero true absorption, even in the presence of reflection or scattering. There are several methods and techniques used in photothermal spectroscopy. Each of these has a name indicating the specific physical effect measured. Photothermal lens spectroscopy (PTS or TLS) measures the thermal blooming that occurs when a beam of light heats a transparent sample. It is typically applied for measuring minute quantities of substances in homogeneous gas and liquid solutions. Photothermal deflection spectroscopy (PDS), also called the mirage effect, measures the bending of light due to optical absorption. This technique is particularly useful for measuring surface absorption and for profiling thermal properties in layered materials. Photothermal diffraction, a type of four wave mixing, monitors the effect of transient diffraction gratings "written" into the sample with coherent lasers. It is a form of real-time holography. Photothermal emission measures an increase in sample infrared radiance occurring as a consequence of absorption. Sample emission follows Stefan's law of thermal emission. This method is used to measure the thermal properties of solids and layered materials. Photothermal single particle microscopy. This technique allows the detection of single absorbing nanoparticles via the creation of a spherically symmetric thermal lens for imaging and correlation spectroscopy. Photothermal deflection spectroscopy Photothermal deflection spectroscopy is a kind of spectroscopy that measures the change in refractive index due to heating of a medium by light. It works via a sort of "mirage effect" where a refractive index gradient exists adjacent to the test sample surface. A probe laser beam is refracted or bent in a manner proportional to the temperature gradient of the transparent medium near the surface. From this deflection, a measure of the absorbed excitation radiation can be determined.
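As a rough illustration of that proportionality, the sketch below estimates the small-angle deflection of a probe beam from an assumed refractive-index temperature coefficient, interaction length, and temperature gradient; all numerical values are assumptions chosen only to show the scaling, not measured properties of any real setup.

```python
# Rough, illustrative estimate of mirage-effect probe-beam deflection.
# All numbers are assumed values, not measurements.

n = 1.0003        # refractive index of the deflecting medium (air, assumed)
dn_dT = -1.0e-6   # change of refractive index per kelvin (assumed, 1/K)
L = 5e-3          # length over which the probe beam crosses the heated region (m, assumed)

def deflection_angle(dT_dx):
    """Small-angle deflection for a transverse temperature gradient dT_dx (K/m):
    phi ~ (L / n) * (dn/dT) * (dT/dx)."""
    return (L / n) * dn_dT * dT_dx

# The temperature gradient scales with the absorbed pump power, so the
# deflection is (to first order) proportional to the absorption.
for dT_dx in (100.0, 200.0, 400.0):     # assumed gradients in K/m
    print(dT_dx, deflection_angle(dT_dx))
```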
The technique is useful when studying optically thin samples, because sensitive measurements of whether absorption is occurring can be obtained. It is of value in situations where "pass through" or transmission spectroscopy cannot be used. There are two main forms of PDS: collinear and transverse. Collinear PDS was introduced in a 1980 paper by A.C. Boccara, D. Fournier, et al. In collinear PDS, two beams pass through and intersect in a medium. The pump beam heats the material and the probe beam is deflected. This technique only works for transparent media. In transverse PDS, the pump beam comes in normal to the surface and heats it, while the probe beam passes parallel to the surface. In a variation on this, the probe beam may reflect off the surface and measure buckling due to heating. Transverse PDS can be done in nitrogen, but better performance is gained in a liquid cell: usually an inert, non-absorbing material such as a perfluorocarbon is used. In both collinear and transverse PDS, the surface is heated using a periodically modulated light source, such as an optical beam passing through a mechanical chopper or regulated with a function generator. A lock-in amplifier is then used to measure deflections found at the modulation frequency. Another scheme uses a pulsed laser as the excitation source. In that case, a boxcar average can be used to measure the temporal deflection of the probe beam in response to the excitation radiation. The signal falls off exponentially as a function of frequency, so frequencies of around 1–10 hertz are typically used. A full theoretical analysis of the PDS system was published by Jackson, Amer, et al. in 1981. The same paper also discussed the use of PDS as a form of microscopy, called "Photothermal Deflection Microscopy", which can yield information about impurities and the surface topology of materials. PDS analysis of thin films can also be performed using a patterned substrate that supports optical resonances, such as guided-mode resonance and whispering-gallery modes. The probe beam is coupled into a resonant mode and the coupling efficiency is highly sensitive to the incidence angle. Due to the photoheating effect, the coupling efficiency changes and is characterized to indicate the thin-film absorption. See also Photothermal effect Photothermal microspectroscopy Photothermal optical microscopy Urbach energy References J. A. Sell, Photothermal Investigations of Solids and Fluids, Academic Press, New York, 1989 D. P. Almond and P. M. Patel, Photothermal Science and Techniques, Chapman and Hall, London, 1996 S. E. Bialkowski, Photothermal Spectroscopy Methods for Chemical Analysis, John Wiley, New York, 1996 External links Quantities, terminology, and symbols in photothermal and related spectroscopies (IUPAC Recommendations 2004) on-line version Chapter 1 of Stephen E. Bialkowski's Photothermal Spectroscopy Methods for Chemical Analysis, John Wiley, New York, 1996 Spectroscopy
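The modulation-and-lock-in measurement scheme described above can be illustrated numerically. The following sketch synthesises a weak deflection signal modulated at a known frequency, buries it in noise, and recovers its amplitude by demodulating against in-phase and quadrature references; every parameter value is an assumption chosen for demonstration.

```python
import numpy as np

# Illustrative lock-in recovery of a small, periodically modulated deflection.
# All parameters are assumed values for demonstration purposes.
fs = 1000.0                        # sampling rate, Hz
t = np.arange(0.0, 50.0, 1.0 / fs)
f_mod = 7.0                        # pump modulation frequency, Hz (in the 1-10 Hz range)
amplitude = 1e-3                   # true deflection amplitude (arbitrary units)

signal = amplitude * np.sin(2 * np.pi * f_mod * t)
noisy = signal + 1e-2 * np.random.randn(t.size)   # noise ten times larger than the signal

# Lock-in demodulation: multiply by reference sinusoids at the modulation
# frequency, then average (a crude low-pass filter over the whole record).
in_phase   = 2.0 * np.mean(noisy * np.sin(2 * np.pi * f_mod * t))
quadrature = 2.0 * np.mean(noisy * np.cos(2 * np.pi * f_mod * t))
recovered = np.hypot(in_phase, quadrature)
print(recovered)                   # close to the true amplitude of 1e-3
```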
Photothermal spectroscopy
Physics,Chemistry
1,248
1,820,751
https://en.wikipedia.org/wiki/Location%20model%20%28economics%29
In economics, a location model or spatial model refers to any monopolistic competition model that demonstrates consumer preference for particular brands of goods and their locations. Examples of location models include Hotelling's Location Model, Salop's Circle Model, and hybrid variations. Traditional vs. location models In traditional economic models, consumers display preference given the constraints of a product characteristic space. Consumers perceive certain brands with common characteristics to be close substitutes, and differentiate these products from their unique characteristics. For example, there are many brands of chocolate with nuts and others without them. Hence, the chocolate with nuts is a constraint of its product characteristic space. On the other hand, consumers in location models display preference for both the utility gained from a particular brand's characteristics as well as its geographic location; these two factors form an enhanced "product characteristic space." Consumers are now willing to sacrifice pleasure from products for a closer geographic location, and vice versa. For example, consumers realize high costs for products that are located far from their spatial point (e.g. transportation costs, time, etc.) and also for products that deviate from their ideal features. Firms have greater market power when they satisfy the consumer's demand for products at closer distance or preferred products. Hotelling's Location Model In 1929, Hotelling developed a location model that demonstrates the relationship between location and pricing behavior of firms. He represented this notion through a line of fixed length. Assuming all consumers are identical (except for location) and consumers are evenly dispersed along the line, both the firms and consumers respond to changes in demand and the economic environment. In Hotelling's Location Model, firms do not exercise variations in product characteristics; firms compete and price their products in only one dimension, geographic location. Therefore, the traditional version of this model is best suited to consumers who perceive products to be perfect substitutes, or as a foundation for modern location models. An example of fixed firms Assumptions Assume that the line in Hotelling's location model is actually a street with fixed length. All consumers are identical, except that they are uniformly located in two equal quadrants, a and b, divided in the center by point m. Consumers face a transportation/time cost for reaching a firm, denoted by t; they have no preferences for the firms. There are two firms in this scenario, Firm x and Firm y; each one is located at a different end of the street, is fixed in location and sells an identical product. Advanced analysis Given the assumptions of the Hotelling model, consumers will choose either firm as long as the combined price and transportation cost of the product is less than that of the competing firm. For example, if both firms sell the product at the same price p, consumers in quadrants a and b will pick the firm closest to them. The price realized by the consumer is p* = p + t·d, where p* is the price of the product including the cost of transportation and d is the consumer's distance to the firm. As long as p* for Firm x is greater than p* for Firm y, consumers will travel to Firm y to purchase their product; this minimizes p*. Only the consumers who live at point m, the halfway point between the two firms, will be indifferent between the two product locations.
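A minimal numerical sketch of this delivered-price comparison, with the street normalised to the interval [0, 1] and assumed values for the transport cost and the (equal) mill prices:

```python
# Hotelling line with two fixed firms at the ends of a street normalised to [0, 1].
# Transport cost and prices are assumed values for illustration.

t = 1.0                           # transportation/time cost per unit distance
loc_x, loc_y = 0.0, 1.0           # Firm x at the left end, Firm y at the right end
price_x = price_y = 2.0           # identical mill prices p

def delivered_price(pos, firm_loc, mill_price):
    """p* = p + t*d: price actually borne by a consumer at position pos."""
    return mill_price + t * abs(pos - firm_loc)

def chooses(pos):
    px = delivered_price(pos, loc_x, price_x)
    py = delivered_price(pos, loc_y, price_y)
    if px < py:
        return "Firm x"
    if py < px:
        return "Firm y"
    return "indifferent"

for pos in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(pos, chooses(pos))
# With equal mill prices, only the consumer at 0.5 (point m) is indifferent;
# everyone else simply buys from the nearer firm.
```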
An example of firm relocation Assumptions Assume that the line in Hotelling's location model is actually a street with fixed length. All consumers are identical, except that they are uniformly located in four quadrants, a, b, c, and d; the halfway point between the endpoints is point m. Consumers face an equal transportation/time cost for reaching a firm, denoted by t; they have no preferences for the firms. There are two firms in this scenario, Firm x and Firm y; each one is located at a different end of the street, is able to relocate at no cost, and sells an identical product. Analysis In this example, Firm x and Firm y will maximize their profit by increasing their consumer pool. Firm x will move slightly toward Firm y, in order to gain Firm y's customers. In response, Firm y will move slightly toward Firm x to recoup its loss and draw customers from its competitor. The cycle repeats until both firms are at point m, the halfway point of the street, where each firm has the same number of customers. This result is known as Hotelling's law; however, it was invalidated in 1979 by d'Aspremont, J. Jaskold Gabszewicz and J.-F. Thisse. Consider that quick (short run) price adjustment and slow (long run) location adjustment are modelled as a repeated two-stage game, where in the first stage firms make an incremental relocation and in the second stage, having observed each other's new locations, they simultaneously choose prices. d'Aspremont et al. (1979) prove that when firms are sufficiently close together (but not located in the same place) no Nash equilibrium price pair (in pure strategies) exists for the second-stage subgame (because there is an incentive to undercut the rival firm's price and gain the entire market). For example, when firms are equidistant from the centre of the street, no equilibrium price pair exists for locations 1/4 or closer than 1/4 of the length of the street from the centre. The non-existence of a Cournot equilibrium precludes the ending of the game, and so it is not repeated. Thus, although both firms locating at the halfway point is itself an equilibrium, there is no tendency for firms to agglomerate there. If only Firm x can relocate without costs and Firm y is fixed, Firm x will move to the side of Firm y where the consumer pool is maximized. Consequently, the profits for Firm x significantly increase, while the profits for Firm y significantly decrease. Salop's Circle Model One of the most famous variations of Hotelling's location model is Salop's circle model. Similar to the previous spatial representations, the circle model examines consumer preference with regards to geographic location. However, Salop introduces two significant factors: 1) firms are located around a circle with no end-points, and 2) the consumer may choose a second, heterogeneous good. An example of a second good Assumptions Assume that the consumers are equidistant from one another around the circle. The model will occur for one time period, in which only one product is purchased. The consumer will have a choice of purchasing variations of Product A (a differentiated product) or Product B (an outside good; an undifferentiated product). There are two firms also located equidistant around the circle. Each firm offers a variation of Product A, and an outside firm offers a good, Product B. Analysis In this example, the consumer wants to purchase their ideal variation of Product A.
They are willing to purchase the product, given that it is within the constraint of their utility, transportation/distance costs, and price. The utility for a particular product at a given distance from the consumer is represented by the following equation: u(x) = v − t·|l − x|, where v is the utility from a superior brand, t denotes the rate at which an inferior brand lowers the utility from the superior brand, l is the location of the superior brand, and x is the location of the consumer. The distance between the brand and the consumer is thereby given by |l − x|. The consumer's primary goal is to maximize consumer surplus, i.e. purchase the product that best satisfies any combination of price and quality. Although the consumer may receive more pleasure from their superior brand, the inferior brand may maximize the surplus, which is given by: u(x) − p, where the difference is between the utility of a product at location x and its price p. Now suppose the consumer also has the option to purchase an outside, undifferentiated Product B. The consumer surplus gained from Product B is denoted by s. Therefore, for a given amount of money, the consumer will purchase the superior variation of Product A over Product B as long as u(x) − p ≥ s, that is, where the consumer surplus from the superior variation of Product A is greater than the consumer surplus gained from Product B. Alternatively, the consumer only purchases the superior variation of Product A as long as u(x) − p − s ≥ 0, where the difference between the surplus of the superior variation of Product A and the surplus gained from Product B is positive. See also Economics Economic geography Location theory Microeconomics Industrial organization Retail geography Hyperbolic discounting Hotelling's location model References External links The Hotelling-Downs Model of Spatial/Political Competition Economics models Geographic position
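A small numerical sketch of the purchase condition just described, with assumed parameter values: the consumer buys the nearest variation of Product A only while its surplus v − t·|l − x| − p remains at least the surplus s available from the outside good B.

```python
# Salop-style choice between a differentiated Product A and an outside good B.
# All parameter values are assumed for illustration only.

v = 10.0      # gross utility from the consumer's ideal (superior) variation of A
t = 4.0       # rate at which distance from the ideal variation lowers utility
p_a = 3.0     # price of the nearest variation of Product A
s_b = 2.0     # consumer surplus available from the outside good B

def surplus_a(distance):
    """Consumer surplus from buying the nearest variation of A at the given distance."""
    return v - t * distance - p_a

def buys_a(distance):
    return surplus_a(distance) >= s_b

for d in (0.0, 0.5, 1.0, 1.5):
    print(d, surplus_a(d), buys_a(d))
# With these numbers the consumer switches to the outside good once the
# distance exceeds 1.25, where surplus_a(d) falls below s_b.
```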
Location model (economics)
Mathematics
1,700
4,039,609
https://en.wikipedia.org/wiki/Unigine
UNIGINE is a proprietary cross-platform game engine developed by UNIGINE Company used in simulators, virtual reality systems, serious games and visualization. It supports OpenGL 4, Vulkan and DirectX 12. UNIGINE Engine is a core technology for a lineup of benchmarks (CPU, GPU, power supply, cooling system), which are used by overclockers and technical media such as Tom's Hardware, Linus Tech Tips, PC Gamer, and JayzTwoCents. UNIGINE benchmarks are also included as part of the Phoronix Test Suite for benchmarking purposes on Linux and other systems. UNIGINE 1 The first public release was the 0.3 version on May 4, 2005. Platforms UNIGINE 1 supported Microsoft Windows, Linux, OS X, PlayStation 3, Android, and iOS. Experimental support for WebGL existed but was not included in the official SDK. UNIGINE 1 supported DirectX 9, DirectX 10, DirectX 11, OpenGL, OpenGL ES and PlayStation 3, while initial versions (v0.3x) only supported OpenGL. UNIGINE 1 provided C++, C#, and UnigineScript APIs for developers. It also supported the shading languages GLSL and HLSL. Game features UNIGINE 1 had support for large virtual scenarios and the specific hardware required by professional simulators and enterprise VR systems, often called serious games. Support for large virtual worlds was implemented via double precision of coordinates (64-bit per axis), zone-based background data streaming, and optional operations in a geographic coordinate system (latitude, longitude, and elevation instead of X, Y, Z). Display output was implemented via multi-channel rendering (network-synchronized generation of a single large image by several computers), which is typical for professional simulators. The same system enabled support of multiple output devices with asymmetric projections (e.g. CAVE). Curved screens with multiple projectors were also supported. UNIGINE 1 had stereoscopic output support for anaglyph rendering, separate image output, Nvidia 3D Vision, and virtual reality headsets. It also supported multi-monitor output. Other features The UNIGINE renderer supported Shader Model 5.0 with hardware tessellation, DirectCompute, and OpenCL. It also used screen space ambient occlusion and real-time global illumination. UNIGINE used a proprietary physics engine to process events such as collision detection, rigid body physics, and dynamic destruction of objects. It also used a proprietary engine for path finding and basic AI components. UNIGINE had features such as an interactive 3D GUI, video playback using the Theora codec, a 3D audio system based on the OpenAL library, and a WYSIWYG scene editor (UNIGINE Editor). UNIGINE 2 UNIGINE 2 was released on October 10, 2015. UNIGINE 2 has all the features of UNIGINE 1; it transitioned from forward rendering to a deferred rendering approach with PBR shading, and introduced new graphical technologies such as geometry water, multi-layered volumetric clouds, SSRTGI and voxel-based lighting. Platforms UNIGINE 2 supports Microsoft Windows, Linux and OS X (support for the latter was dropped starting with version 2.6). UNIGINE 2 also supports the following graphical APIs: DirectX 11 and OpenGL 4.x. Since version 2.16, UNIGINE has experimental support for DirectX 12 and Vulkan. There are three APIs for developers: C++, C#, and UnigineScript. Supported shader languages: HLSL, GLSL, UUSL (Unified UNIGINE Shader Language). SSRTGI Proprietary SSRTGI (Screen Space Ray-Traced Global Illumination) rendering technology was introduced in version 2.5. It was presented at the SIGGRAPH 2017 Real-Time Live! event.
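The motivation for the double-precision (64-bit per axis) coordinates mentioned above can be seen in a short sketch comparing 32-bit and 64-bit floating-point resolution far from the world origin; the distance used is an arbitrary assumption, not a figure from UNIGINE's documentation.

```python
import numpy as np

# Why large virtual worlds benefit from 64-bit per-axis coordinates:
# the gap between adjacent representable floats grows with magnitude,
# so single-precision positions far from the origin become very coarse.
# The example distance is an arbitrary assumption.

position_m = 10_000_000.0   # a point 10,000 km from the world origin, in metres

step32 = np.spacing(np.float32(position_m))   # smallest representable step in float32
step64 = np.spacing(np.float64(position_m))   # smallest representable step in float64

print(step32)   # 1.0 metre: far too coarse to place geometry or a camera smoothly
print(step64)   # ~1.9e-9 metres: ample sub-millimetre precision
```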
Development The roots of UNIGINE are in the frustum.org open source project, which was initiated in 2002 by Alexander "Frustum" Zapryagaev, who is a co-founder (along with Denis Shergin, CEO) and ex-CTO of UNIGINE Company. Linux game competition On November 25, 2010, UNIGINE Company announced a competition to support Linux game development. They agreed to give away a free license of the UNIGINE engine to anyone willing to develop and release a game with a Linux native client, and would also grant the team a Windows license. The competition ran until December 10, 2010, with a considerable number of entries being submitted. Due to the unexpected response, UNIGINE decided to extend the offer to the three best applicants, with each getting a full UNIGINE license. The winners were announced on December 13, 2010, with the developers selected being Kot-in-Action Creative Artel (who previously developed Steel Storm), Gamepulp (who intended to make a puzzle platform game), and MED-ART (who previously worked on Painkiller: Resurrection). UNIGINE-based projects As of 2021, the company claimed to have more than 250 B2B customers worldwide. Some companies that develop software for professional aircraft, ship and vehicle simulators use UNIGINE Engine as a base for 3D and VR visualization. Games Released Cradle - released for Windows and Linux in 2015 Oil Rush - released for Windows, Linux and Mac OS X in 2012; released for iOS in 2013 Syndicates of Arkon - released for Windows in 2010 Tryst - released for Windows in 2012 Petshop - released for Windows and Mac in 2011 Sumoman - released for Windows and Linux in 2017 Demolicious - released for iOS in 2012 Acro FS - aerobatic flight simulator (Steam page) Dual Universe - released in 2022 Upcoming Dilogus: The Winds of War Node - VR shooter (Steam page) Kingdom of Kore - action RPG for PC (in future for PS3) - cancelled by publisher El Somni Quas - MMORPG (Patreon page) Simulation and visualization Metro Simulator by Smart Simulation CarMaker 10.0 by IPG Automotive NAUTIS maritime simulators by VSTEP Train driver simulator by Oktal Sydac Be-200 flight simulator Klee 3D (3D visualization solution for digital marketing and research applications) The visualization component of the analytical software complex developed for JSC "ALMAZ-ANTEY" MSDB", an affiliate of JSC "Concern "Almaz-Antey" Real-time interactive architectural visualization projects of AI3D Bell-206 Ranger rescue helicopter simulator Magus ex Machina (3D animated movie) SIMREX CDS, SIMREX FDS, SIMREX FTS car driving simulators by INNOSIMULATION Real-time artworks by John Gerrard (artist): Farm, Solar Reserve, Exercise, Western Flag (Spindletop, Texas), X.
laevis (Spacelab) Train simulators by SPECTR DVS3D by GDI RF-X flight simulator NAVANTIS Ship Simulator VR simulator for learning of computer vision for autonomous flight control at Daedalean AI Benchmarks UNIGINE Engine is used as a platform for a series of benchmarks, which can be used to determine the stability of PC hardware (CPU, GPU, power supply, cooling system) under extremely stressful conditions, as well as for overclocking: Superposition Benchmark (featuring online leaderboards) - UNIGINE 2 (2017) Valley Benchmark - UNIGINE 1 (2013) Heaven Benchmark (the first DirectX 11 benchmark) - UNIGINE 1 (2009) Tropics Benchmark - UNIGINE 1 (2008) Sanctuary Benchmark - UNIGINE 1 (2007) See also List of game engines List of game middleware Game physics 3D computer graphics References External links Computer physics engines Game engines for Linux Middleware Unigine SDK Software that uses Qt Video game development Video game development software Video game development software for Linux Video game engines Video game IDE Virtual reality
Unigine
Technology,Engineering
1,660
149,881
https://en.wikipedia.org/wiki/Medical%20uses%20of%20silver
The medical uses of silver include its use in wound dressings, creams, and as an antibiotic coating on medical devices. Wound dressings containing silver sulfadiazine or silver nanomaterials may be used to treat external infections. The limited evidence available shows that silver coatings on endotracheal breathing tubes may reduce the incidence of ventilator-associated pneumonia. There is tentative evidence that using silver-alloy indwelling catheters for short-term catheterizing will reduce the risk of catheter-acquired urinary tract infections. Silver generally has low toxicity, and minimal risk is expected when silver is used in approved medical applications. Alternative medicine products such as colloidal silver are controversial. Mechanism of action Silver and most silver compounds have an oligodynamic effect and are toxic for bacteria, algae, and fungi in vitro. The antibacterial action of silver is dependent on the silver ion. The effectiveness of silver compounds as an antiseptic is based on the ability of the biologically active silver ion (Ag+) to irreversibly damage key enzyme systems in the cell membranes of pathogens. The antibacterial action of silver has long been known to be enhanced by the presence of an electric field. Applying an electric current across silver electrodes enhances antibiotic action at the anode, likely due to the release of silver into the bacterial culture. The antibacterial action of electrodes coated with silver nanostructures is greatly improved in the presence of an electric field. Silver, used as a topical antiseptic, is incorporated by the bacteria it kills. Thus dead bacteria may be the source of silver that may kill additional bacteria. Medical uses Antibacterial cream Silver sulfadiazine (SSD) is a topical antibiotic used in partial-thickness and full-thickness burns to prevent infection. It was discovered in the 1960s, and was the standard topical antimicrobial for burn wounds for decades. However, systematic reviews in 2014, 2017 and 2018 concluded that more modern treatments, both with and without silver, show better results for wound healing and infection prevention than silver sulfadiazine, and therefore SSD is no longer generally recommended. It is on the World Health Organization's List of Essential Medicines. The US Food and Drug Administration (FDA) approved a number of topical preparations of silver sulfadiazine for treatment of second-degree and third-degree burns. Dressings Despite its widespread use, there is only mixed evidence that silver in dressings has any benefit. A 2018 Cochrane review found that silver-containing dressings may increase the probability of healing for venous leg ulcers. A number of wound dressings containing silver as an anti-bacterial have been cleared by the U.S. Food and Drug Administration (FDA). However, silver-containing dressings may cause staining, and in some cases tingling sensations as well. Endotracheal tubes A 2015 systematic review concluded that the limited evidence available indicates that using silver-coated endotracheal breathing tubes reduces the risk of contracting ventilator-associated pneumonia (VAP), especially during the initial days of utilisation. A 2014 study concluded that using silver-coated endotracheal tubes will help to prevent VAP and that this may save on hospital costs.
A 2012 systematic review of randomized controlled trials concluded that the limited evidence available indicates that using silver-coated endotracheal tubes will reduce the incidence of ventilator-associated pneumonia, microbiologic burden, and device-related adverse events among adult patients. Another 2012 review agreed that the use of silver-coated endotracheal tubes reduces the prevalence of VAP in intubated patients, but cautioned that this on its own is not sufficient to prevent infection. They also suggested that more research is needed to establish the cost-effectiveness of the treatment. Another 2012 study agreed that there is evidence that endotracheal tubes coated with silver may reduce the incidence of ventilator-associated pneumonia (VAP) and delay its onset, but concluded that no benefit was seen in the duration of intubation, the duration of stay in intensive care or the mortality rate. They also raised concerns surrounding the unblinded nature of some of the studies then available. The U.S. Food and Drug Administration in 2007 cleared an endotracheal tube with a fine coat of silver to reduce the risk of ventilator-associated pneumonia. Catheters A 2014 systematic review concluded that using silver-alloy-coated catheters showed no significant difference in incidences of symptomatic catheter-associated urinary tract infections (CAUTI) versus using standard catheters, although silver-alloy catheters seemed to cause less discomfort to patients. These catheters are associated with greater cost than other catheters. A 2014 multicenter cohort study found that using a silver-alloy hydrogel urinary catheter did reduce symptomatic catheter-associated urinary tract infection (CAUTI) occurrences as defined by both NHSN and clinical criteria. A 2011 critical analysis of eight studies found a consistent pattern which supported using silver-alloy urinary catheters over uncoated catheters to reduce infections in adult patients, and concluded that using silver-alloy catheters would significantly improve patient care. A 2007 systematic review concluded that using silver-alloy indwelling catheters for short-term catheterizing will reduce the risk of catheter-acquired urinary tract infection, but called for further studies to evaluate the economic benefits of using the expensive silver-alloy catheters. Two systematic reviews in 2004 found that using silver-alloy catheters reduced asymptomatic and symptomatic bacteriuria more than standard catheters, for patients who were catheterised for a short time. A 2000 randomized crossover study found that using the more expensive silver-coated catheter may result in cost savings by preventing nosocomial UTI infections, and another 2000 study found that using silver alloy catheters for short-term urinary catheterization reduces the incidence of symptomatic UTI and bacteremia compared with standard catheters, and may thus yield cost savings. A 2017 study found that a combination of chlorhexidine and silver sulfadiazine (CSS) used to coat central venous catheters (CVC) reduces the rate of catheter-related bloodstream infections. However, they also found that the efficacy of the CSS-CVC coating was progressively eroded by blood flow, and that the antibacterial function was lost after 48 hours.
Conjugations with existing drugs Research in 2018 into the treatment of central nervous system infections caused by free-living amoebae such as Naegleria fowleri and Acanthamoeba castellanii, tested the effectiveness of existing drugs as well as the effectiveness of the same drugs when they were conjugated with silver nanoparticles. In vitro tests demonstrated more potent amoebicidal effects for the drugs when conjugated with silver nanoparticles as compared to the same drugs when used alone. They also found that conjugating the drugs with silver nanoparticles enhanced their anti-acanthamoebic activity. X-ray film Silver-halide imaging plates used with X-ray imaging were the standard before digital techniques arrived; these function essentially the same as other silver-halide photographic films, although for x-ray use the developing process is very simple and takes only a few minutes. Silver x-ray film remains popular for its accuracy, and cost effectiveness, particularly in developing countries, where digital X-ray technology is usually not available. Other uses Silver compounds have been used in external preparations as antiseptics, including both silver nitrate and silver proteinate. Before the development of antibiotics, Credé's prophylaxis used a 2% solution of silver nitrate to prevent neonatal conjunctivitis, which used to account for half of all cases of blindness in Europe. The original procedure called for a 2% silver nitrate solution administered immediately after birth, as Credé erroneously believed that a 1% solution was ineffective due to a previous study by Hecker; however, this was eventually corrected and reduced back down to a 1% solution to reduce chemical irritation to the newborn's eyes. Silver nitrate is also sometimes used in dermatology in solid stick form as a caustic ("lunar caustic") to treat certain skin conditions, such as corns and warts. Silver nitrate is also used in certain laboratory procedures to stain cells. As it turns them permanently a dark-purple/black color, in doing so increasing individual cells' visibility under a microscope and allowing for differentiation between cells, or identification of irregularities. Silver is also used in bone prostheses and cardiac devices. In reconstructive hip and knee surgery, silver-coated titanium prostheses are indicated in cases of recalcitrant prosthetic joint infections. Silver diammine fluoride appears to be an effective intervention to reduce dental caries (tooth decay). Silver is also a component in dental amalgam. Silver acetate has been used as a potential aid to help stop smoking; a review of the literature in 2012, however, found no effect of silver acetate on smoking cessation at a six-month endpoint and if there is an effect it would be small. Silver has also been used in cosmetics, intended to enhance antimicrobial effects and the preservation of ingredients. Adverse effects Though toxicity of silver is low, the human body has no biological use for silver and when inhaled, ingested, injected, or applied topically, silver can accumulate irreversibly in the body, particularly in the skin, and chronic use combined with exposure to sunlight can result in a disfiguring condition known as argyria in which the skin becomes blue or blue-gray. Localized argyria can occur as a result of topical use of silver-containing creams and solutions, while the ingestion, inhalation, or injection can result in generalized argyria. Preliminary reports of treatment with laser therapy have been reported. 
These laser treatments are painful and general anesthesia is required. A similar laser treatment has been used to clear silver particles from the eye, a condition related to argyria called argyrosis. The Agency for Toxic Substances and Disease Registry (ATSDR) describes argyria as a "cosmetic problem". One incident of argyria came to the public's attention in 2008, when a man named Paul Karason, whose skin turned blue from using colloidal silver for over 10 years to treat dermatitis, appeared on NBC's Today show. Karason died in 2013 at the age of 62 after a heart attack. Another example is Montana politician Stan Jones whose purposeful consumption of colloidal silver was a self-prescribed measure he undertook in response to his fears that the Y2K problem would make antibiotics unavailable, an event that did not occur. Colloidal silver may interact with some prescription medications, reducing the absorption of some antibiotics and thyroxine, among others. Some people are allergic to silver, and the use of treatments and medical devices containing silver is contraindicated for such people. Although medical devices containing silver are widely used in hospitals, no thorough testing and standardization of these products has yet been undertaken. Water purification Electrolytically dissolved silver has been used as a water disinfecting agent, for example, the drinking water supplies of the Russian Mir orbital station and the International Space Station. Many modern hospitals filter hot water through copper-silver filters to defeat MRSA and legionella infections. The World Health Organization (WHO) includes silver in a colloidal state produced by electrolysis of silver electrodes in water, and colloidal silver in water filters as two of a number of water disinfection methods specified to provide safe drinking water in developing countries. Along these lines, a ceramic filtration system coated with silver particles has been created by Ron Rivera of Potters for Peace and used in developing countries for water disinfection (in this application the silver inhibits microbial growth on the filter substrate, to prevent clogging, and does not directly disinfect the filtered water). Alternative medicine Colloidal silver (a colloid consisting of silver particles suspended in liquid) and formulations containing silver salts were used by physicians in the early 20th century, but their use was largely discontinued in the 1940s following the development of modern antibiotics. Since about 1990, there has been a resurgence of the promotion of colloidal silver as a dietary supplement, marketed with claims of it being an essential mineral supplement, or that it can prevent or treat numerous diseases, such as cancer, diabetes, arthritis, HIV/AIDS, herpes, and tuberculosis. No medical evidence supports the effectiveness of colloidal silver for any of these claimed indications. Silver is not an essential mineral in humans; there is no dietary requirement for silver, and hence, no such thing as a silver "deficiency". There is no evidence that colloidal silver treats or prevents any medical condition, and it can cause serious and potentially irreversible side effects, such as argyria. In August 1999, the U.S. FDA banned colloidal silver sellers from claiming any therapeutic or preventive value for the product, although silver-containing products continue to be promoted as dietary supplements in the U.S. under the looser regulatory standards applied to supplements. 
The FDA has issued numerous warning letters to Internet sites that have continued to promote colloidal silver as an antibiotic or for other medical purposes. Despite the efforts of the FDA, silver products remain widely available on the market today. A review of websites promoting nasal sprays containing colloidal silver suggested that information about silver-containing nasal sprays on the Internet is misleading and inaccurate. Colloidal silver is also sold in some topical cosmetics, as well as some toothpastes, which are regulated by the FDA as cosmetics (other than drug ingredients making medical claims). In 2002, the Australian Therapeutic Goods Administration (TGA) found there were no legitimate medical uses for colloidal silver and no evidence to support its marketing claims. The U.S. National Center for Complementary and Integrative Health (NCCIH) warns that marketing claims about colloidal silver are scientifically unsupported, that the silver content of marketed supplements varies widely, and that colloidal silver products can have serious side effects such as argyria. In 2009, the USFDA issued a consumer advisory warning about the potential adverse effects of colloidal silver, and said that "there are no legally marketed prescription or over-the-counter (OTC) drugs containing silver that are taken by mouth". Quackwatch states that colloidal silver dietary supplements have not been found safe or effective for the treatment of any condition. Consumer Reports lists colloidal silver as a "supplement to avoid", describing it as "likely unsafe". The Los Angeles Times stated that "colloidal silver as a cure-all is a fraud with a long history, with quacks claiming it could cure cancer, AIDS, tuberculosis, diabetes, and numerous other diseases". It may be illegal to market as preventing or treating cancer, and in some jurisdictions illegal to sell colloidal silver for consumption. In 2015 an English man was prosecuted and found guilty under the Cancer Act 1939 for selling colloidal silver with claims it could treat cancer. Fraudulent products marketed during the COVID-19 outbreak The US Food and Drug Administration has issued warning letters to firms including colloidal silver marketers for selling products with false and misleading claims to prevent, treat, mitigate, diagnose or cure coronavirus disease 2019 (COVID-19). In 2020, televangelist felon Jim Bakker was sued by the Missouri Attorney General (AG) for marketing colloidal silver products and making false claims about their effectiveness against COVID-19. The Attorney General of New York sent a cease and desist order to Bakker and others about peddling the unproven products that was compared to selling "snake oil", and the Food and Drug Administration also warned Bakker about his actions. Controversial web show host, podcaster and conspiracy theorist Alex Jones was also warned by the New York Attorney General's office to stop marketing his colloidal silver infused products (toothpaste, mouthwash, dietary supplements, etc.) because he made unproven claims of its ability to fend off COVID-19. History Hippocrates in his writings discussed the use of silver in wound care. At the beginning of the twentieth century surgeons routinely used silver sutures to reduce the risk of infection. In the early 20th century, physicians used silver-containing eyedrops to treat ophthalmic problems, for various infections, and sometimes internally for diseases such as tropical sprue, epilepsy, gonorrhea, and the common cold. 
During World War I, soldiers used silver leaf to treat infected wounds. In the 1840s, the founder of gynecology, J. Marion Sims, employed silver wire, which he had a jeweler fashion, as a suture in gynecological surgery. This produced very favorable results when compared with its predecessors, silk and catgut. Prior to the introduction of modern antibiotics, colloidal silver was used as a germicide and disinfectant. With the development of modern antibiotics in the 1940s, the use of silver as an antimicrobial agent diminished, although it retains some use in medicinal compounds today. Silver sulfadiazine (SSD) is a compound containing silver and the antibiotic sodium sulfadiazine, which was developed in 1968. Cost The National Health Service in the UK spent about £25 million on silver-containing dressings in 2006. Silver-containing dressings represent about 14% of the total dressings used and about 25% of the overall wound dressing costs. Concerns have been expressed about the potential environmental cost of manufactured silver nanomaterials in consumer applications being released into the environment, for example that they may pose a threat to benign soil organisms. See also List of ineffective cancer treatments Colloidal gold Antibiotic properties of nanoparticles References External links
Medical uses of silver
Physics,Materials_science,Biology
3,801
11,015,555
https://en.wikipedia.org/wiki/Finite%20element%20exterior%20calculus
Finite element exterior calculus (FEEC) is a mathematical framework that formulates finite element methods using chain complexes. Its main application has been a comprehensive theory for finite element methods in computational electromagnetism and computational solid and fluid mechanics. FEEC was developed in the early 2000s by Douglas N. Arnold, Richard S. Falk and Ragnar Winther, among others. Finite element exterior calculus is sometimes cited as an example of a compatible discretization technique, and bears similarities with discrete exterior calculus, although they are distinct theories. One starts with the recognition that the differential operators involved are often part of complexes: successive application results in zero. The differential operators of the relevant differential equations, together with the relevant boundary conditions, are then phrased as a Hodge Laplacian. The Hodge Laplacian terms are split using the Hodge decomposition. A related variational saddle-point formulation for mixed quantities is then generated. Discretization to a mesh-related subcomplex requires a collection of projection operators that commute with the differential operators. One can then prove uniqueness and optimal convergence as a function of mesh density. FEEC is of immediate relevance for diffusion, elasticity, electromagnetism, and Stokes flow. For the important de Rham complex, pertaining to the grad, curl and div operators, suitable families of elements have been generated not only for tetrahedra, but also for other element shapes such as bricks. Moreover, prism- and pyramid-shaped elements conforming with these have also been generated. For the latter, uniquely, the shape functions are not polynomial. The quantities are 0-forms (scalars), 1-forms (gradients), 2-forms (fluxes), and 3-forms (densities). Diffusion, electromagnetism, elasticity, Stokes flow, general relativity, and in fact all known complexes can be phrased in terms of the de Rham complex. For Navier-Stokes, there may be possibilities too. References Finite element method
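For orientation, the de Rham complex referred to above can be written out explicitly; the rendering below uses one common choice of function spaces, which is an editorial notation rather than anything stated in the article:

\[
H^1(\Omega) \xrightarrow{\ \operatorname{grad}\ } H(\operatorname{curl};\Omega) \xrightarrow{\ \operatorname{curl}\ } H(\operatorname{div};\Omega) \xrightarrow{\ \operatorname{div}\ } L^2(\Omega),
\qquad
\operatorname{curl}\circ\operatorname{grad} = 0, \quad \operatorname{div}\circ\operatorname{curl} = 0.
\]

The property that successive operators compose to zero is exactly the "complex" structure mentioned above, and FEEC constructs discrete subspaces and commuting projections so that this property survives discretization.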
Finite element exterior calculus
Mathematics
411
54,913,289
https://en.wikipedia.org/wiki/Norman%20R.%20Legge
Norman Reginald Legge (20 April 1919 – 28 March 2004) was a Canadian researcher for the Shell Oil Company and a pioneer of thermoplastic elastomers, Kraton in particular. Personal Legge was born on 20 April 1919 in Edmonton, Alberta, Canada. He died on 28 March 2004 in Livermore, California. Education 1942 BSc, Chemistry, University of Alberta 1943 MSc, Chemistry, University of Alberta 1945 Ph.D., McGill University, explosives research during World War II. Career Legge worked as a research chemist for Polymer Corporation (later renamed Polysar Corp.) in Sarnia, Ontario (1945-1951). Later, he moved to Kentucky Synthetic Rubber Corporation in Louisville as Director of Research. He then moved to Shell Chemical, where he worked until his retirement. He was a Fellow of the American Association for the Advancement of Science and a member of the American Chemical Society, Rubber Division. Awards and Recognitions 1987 - Charles Goodyear Medal from the ACS Rubber Division 1992 - IISRP Technical Award from the International Institute of Synthetic Rubber Producers External links Audio interview with Norman Legge. References 1919 births 2004 deaths Polymer scientists and engineers Scientists from Edmonton University of Alberta alumni McGill University alumni
Norman R. Legge
Chemistry,Materials_science
242
3,986,613
https://en.wikipedia.org/wiki/Perfect%20phylogeny
Perfect phylogeny is a term used in computational phylogenetics to denote a phylogenetic tree in which all internal nodes may be labeled such that all characters evolve down the tree without homoplasy. That is, characteristics do not arise through evolutionary convergence and do not have analogous structures. Statistically, this can be represented as an ancestor having state "0" in all characteristics, where 0 represents a lack of that characteristic. Each of these characteristics changes from 0 to 1 exactly once and never reverts to state 0. It is rare that actual data adheres to the concept of perfect phylogeny. Building In general there are two different data types that are used in the construction of a phylogenetic tree. In distance-based computations a phylogenetic tree is created by analyzing relationships among the distances between species and the edge lengths of a corresponding tree. A character-based approach employs character states across species as input in an attempt to find the most "perfect" phylogenetic tree. The statistical components of a perfect phylogenetic tree can best be described as follows: A perfect phylogeny for an n x m character state matrix M is a rooted tree T with n leaves satisfying: i. Each row of M labels exactly one leaf of T ii. Each column of M labels exactly one edge of T iii. Every interior edge of T is labeled by at least one column of M iv. The characters associated with the edges along the unique path from root to a leaf v exactly specify the character vector of v, i.e. the character vector has a 1 entry in all columns corresponding to characters associated to path edges and a 0 entry otherwise. It is worth noting that it is very rare to find actual phylogenetic data that adheres to the concepts and limitations detailed here. Therefore, it is often the case that researchers are forced to compromise by developing trees that simply try to minimize homoplasy, finding a maximum-cardinality set of compatible characters, or constructing phylogenies that match as closely as possible to the partitions implied by the characters. Example Both of these data sets illustrate examples of character state matrices. Using matrix M'1 one is able to observe that the resulting phylogenetic tree can be created such that each of the characters labels exactly one edge of the tree. In contrast, when observing matrix M'2, one can see that there is no way to set up the phylogenetic tree such that each character labels only one edge. If the samples come from variant allelic frequency (VAF) data of a population of cells under study, the entries in the character matrix are frequencies of mutations, and take a value between 0 and 1. Namely, if represents a position in the genome, then the entry corresponding to and sample will hold the frequencies of genomes in sample with a mutation in position . Usage Perfect phylogeny is a theoretical framework that can also be used in more practical methods. One such example is that of Incomplete Directed Perfect Phylogeny. This concept involves utilizing perfect phylogenies with real, and therefore incomplete and imperfect, datasets. Such a method utilizes SINEs to determine evolutionary similarity. These Short Interspersed Elements are present across many genomes and can be identified by their flanking sequences. SINEs provide information on the inheritance of certain traits across different species. Unfortunately, if a SINE is missing, it is difficult to know whether those SINEs were present prior to the deletion.
By utilizing algorithms derived from perfect phylogeny data we are able to attempt to reconstruct a phylogenetic tree in spite of these limitations. Perfect phylogeny is also used in the construction of haplotype maps. By utilizing the concepts and algorithms described in perfect phylogeny one can determine information regarding missing and unavailable haplotype data. By assuming that the set of haplotypes that result from genotype mapping corresponds and adheres to the concept of perfect phylogeny (as well as other assumptions such as perfect Mendelian inheritance and the fact that there is only one mutation per SNP), one is able to infer missing haplotype data. Inferring a phylogeny from noisy VAF data under the PPM is a hard problem. Most inference tools include some heuristic step to make inference computationally tractable. Examples of tools that infer phylogenies from noisy VAF data include AncesTree, Canopy, CITUP, EXACT, and PhyloWGS. In particular, EXACT performs exact inference by using GPUs to compute a posterior probability on all possible trees for small size problems. Extensions to the PPM have been made with accompanying tools. For example, tools such as MEDICC, TuMult, and FISHtrees allow the number of copies of a given genetic element, or ploidy, to both increase, or decrease, thus effectively allowing the removal of mutations. See also List of phylogenetics software References External links One of several programs available for analysis and creation of phylogenetic trees Another such program for phylogenetic tree analysis Additional program for tree analysis A paper detailing an example of how perfect phylogeny can be utilized outside of the field of genetics, as in language association Github for "Algorithm for clonal tree reconstruction from multi-sample cancer sequencing data" (AncesTree) Github for "Accessing Intra-Tumor Heterogeneity and Tracking Longitudinal and Spatial Clonal Evolutionary History by Next-Generation Sequencing" (Canopy) Github for "Clonality Inference in Tumors Using Phylogeny" (CITUP) Github for "Exact inference under the perfect phylogeny model" (EXACT) Github for "Reconstructing subclonal composition and evolution from whole-genome sequencing of tumors" (PhyloWGS) Computational phylogenetics
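To make the matrix condition from the Example section above concrete, here is a minimal Python sketch of the classical pairwise-compatibility test for binary characters with an all-zero ancestral state: a perfect phylogeny exists exactly when, for every pair of columns, the sets of rows carrying a 1 are either disjoint or nested. The function name and the toy matrices are illustrative assumptions of the editor, not data from the article.

    def has_perfect_phylogeny(matrix):
        # Binary n x m character matrix, with state 0 assumed ancestral.
        # Columns are pairwise compatible iff their 1-sets are disjoint or nested.
        n_cols = len(matrix[0])
        ones = [{r for r, row in enumerate(matrix) if row[c] == 1}
                for c in range(n_cols)]
        for i in range(n_cols):
            for j in range(i + 1, n_cols):
                a, b = ones[i], ones[j]
                if a & b and not (a <= b or b <= a):
                    return False   # columns i and j conflict
        return True

    # Toy matrices in the spirit of M'1 (admits a tree) and M'2 (does not).
    m1 = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
    m2 = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
    print(has_perfect_phylogeny(m1), has_perfect_phylogeny(m2))  # True False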
Perfect phylogeny
Biology
1,184
54,000,061
https://en.wikipedia.org/wiki/Neurogastronomy
Neurogastronomy is the study of flavor perception and the ways it affects cognition and memory. This interdisciplinary field is influenced by the psychology and neuroscience of sensation, learning, satiety, and decision making. Areas of interest include how olfaction contributes to flavor, food addiction and obesity, taste preferences, and the linguistics of communicating and identifying flavor. The term neurogastronomy was coined by neuroscientist Gordon M. Shepherd. Olfaction and flavor Out of all the sensory modalities, olfaction contributes most to the sensation and perception of flavor processing. Olfaction has two sensory modalities, orthonasal smell, the detection of odor molecules originating outside the body, and retronasal smell, the detection of odor molecules originating during mastication. It is retronasal smell, whose sensation is felt in the mouth, that contributes to flavor perception. Anthropologically, over human evolution, the shortening of the nasopharynx and other shifts in bone structure suggest a constant improvement of flavor perception capabilities. After mastication, odor molecules travel through the back of the mouth and up the nasopharynx. The odorants are detected by myriad receptors on the olfactory epithelium. These receptors respond to a variety of dimensions of chemical properties. Odor receptors that respond to a dimension within a molecular receptive range are aggregated by glomeruli in the olfactory bulb. Here, the multi-dimensional nature of odorant stimuli is reduced to two dimensions. This input undergoes edge enhancement, increasing its signal-to-noise ratio by way of lateral inhibition due to mitral cells stemming from the glomerular layer. This input then reaches the olfactory cortex. Here, Hebbian learning networks allow for recall with partial or weak stimuli, indicating the first stage of conscious perception. Here, connections with the hypothalamus and hippocampus indicate that olfaction stimuli affect emotion, decision making, and learning only after significant processing and rudimentary identification. Decision making The hedonic value of food and its decision making relies on several concurrent neural processes. The attentional drive to seek and consume food is modulated by homeostatic signaling of hunger and satiety. Habit, social interactions, and nutritional needs affect this signaling. Analysis of non-human primates' orbitofrontal cortex suggests decision making is additionally modulated by food identification, independent of hunger. Activity in the medial orbitofrontal cortex and anterior singulate suggest that an affective value is assigned to every food identification. Hedonic pleasure increases when engaging with food consumption and peaks during satiety. Impairments in these systems greatly impact the ability to resist the urge to eat. Imaging studies show that obese subjects with impairment in dopamine circuits that regulate hedonic value have issues with reward sensitivity and resist functional homeostatic signals that normally would prevent overeating. The consumption of comfort foods can facilitate feelings of relational connection and belonging, and the motivation behind pursuing certain foods can be modulated by social context and environment. Although the consumption of spicy food can cause pain, people in many cultures ascribe a high hedonic value to it. 
Psychologist Paul Rozin puts forth the idea of "benign masochism", a learned tendency that overrides the typically aversive stimuli because of the risk-taking or thrill-seeking associated with overcoming pain. Learned flavor preferences Learned taste preferences develop as early as in utero, where the fetus is exposed to flavors through amniotic fluid. Early, innate preferences exhibit tendencies towards calorie- and protein-dense foods. As children grow older, more factors such as peers, repeated exposures, environments and food availability will modulate taste preferences. Describing odors While naming a flavor or food refines its representation and strengthens its recall in memory, the patterns and tendencies in word choice used to describe flavor suggest limits to our perception and communication. In describing the flavor of wine, tasters tend to use words that function as a combination of visual and texture descriptors, and references to objects with similar odorant profiles. Color perception heavily influences the word choice describing a flavor; the color of a word's semantic reference is often congruent with the food's color when the taster can see the food. Clinical and other academic translations With neurogastronomy's roots in neuroscience and psychology, clinical translation into research in obesity, diabetes, hypertension, eating disorders, chemoreceptive deficits in cancer treatments, etc. is explored in clinical neurogastronomy. The term clinical neurogastronomy was coined by neuropsychologist Dan Han to advocate for quality of life issues and positive clinical outcomes in patient populations. In 2015, Gordon M. Shepherd, Dan Han, Frédéric Morin, Tim McClintock, Bob Perry, Charles Spence, Jehangir Mehta, Kelsey Rahenkamp, Siddharth Kapoor, Ouita Michel, and Bret Smith formed the International Society of Neurogastronomy (ISN). ISN is sponsored by the National Institutes of Health. The inaugural meeting addressed multiple aspects of neurogastronomy concepts and focused on its clinical translation, including quality of life issues in cancer treatment and related smell and taste deficits, followed by application to treatments for diabetes. Additional translational efforts included food technology, agriculture, climate change, and culinary arts. References External links International Society of Neurogastronomy website Neurogastronomy - Shepherd Lab website Cognitive neuroscience Cognitive psychology Neuropsychology Gustation Olfaction
Neurogastronomy
Biology
1,144
11,855,225
https://en.wikipedia.org/wiki/Mycosphaerella%20bolleana
Mycosphaerella bolleana is a fungal plant pathogen. See also List of Mycosphaerella species References External links New Zealand Fungi: Mycosphaerella bolleana bolleana Fungal plant pathogens and diseases Fungi described in 1920 Fungus species
Mycosphaerella bolleana
Biology
56
16,542,329
https://en.wikipedia.org/wiki/Goat%20grazing%20problem
The goat grazing problem is either of two related problems in recreational mathematics involving a tethered goat grazing a circular area: the interior grazing problem and the exterior grazing problem. The former involves grazing the interior of a circular area, and the latter, grazing an exterior of a circular area. For the exterior problem, the constraint that the rope can not enter the circular area dictates that the grazing area forms an involute. If the goat were instead tethered to a post on the edge of a circular path of pavement that did not obstruct the goat (rather than a fence or a silo), the interior and exterior problem would be complements of a simple circular area. The original problem was the exterior grazing problem and appeared in the 1748 edition of the English annual journal The Ladies' Diary: or, the Woman's Almanack, designated as Question  attributed to Upnorensis (an unknown historical figure), stated thus: Observing a horse tied to feed in a gentlemen's park, with one end of a rope to his fore foot, and the other end to one of the circular iron rails, enclosing a pond, the circumference of which rails being 160 yards, equal to the length of the rope, what quantity of ground at most, could the horse feed? The related problem involving area in the interior of a circle without reference to barnyard animals first appeared in 1894 in the first edition of the renown journal American Mathematical Monthly. Attributed to Charles E. Myers, it was stated as: A circle containing one acre is cut by another whose center is on the circumference of the given circle, and the area common to both is one-half acre. Find the radius of the cutting circle. The solutions in both cases are non-trivial but yield to straightforward application of trigonometry, analytical geometry or integral calculus. Both problems are intrinsically transcendental – they do not have closed-form analytical solutions in the Euclidean plane. The numerical answers must be obtained by an iterative approximation procedure. The goat problems do not yield any new mathematical insights; rather they are primarily exercises in how to artfully deconstruct problems in order to facilitate solution. Three-dimensional analogues and planar boundary/area problems on other shapes, including the obvious rectangular barn and/or field, have been proposed and solved. A generalized solution for any smooth convex curve like an ellipse, and even unclosed curves, has been formulated. Exterior grazing problem The question about the grazable area outside a circle is considered. This concerns a situation where the animal is tethered to a silo. The complication here is that the grazing area overlaps around the silo (i.e., in general, the tether is longer than one half the circumference of the silo): the goat can only eat the grass once, he can't eat it twice. The answer to the problem as proposed was given in the 1749 issue of the magazine by a Mr. Heath, and stated as 76,257.86 sq.yds. which was arrived at partly by "trial and a table of logarithms". The answer is not so accurate as the number of digits of precision would suggest. No analytical solution was provided. A useful approximation Let tether length R = 160 yds. and silo radius r = R/(2) yds. The involute in the fourth quadrant is a nearly circular arc. One can imagine a circular segment with the same perimeter (arc length) would enclose nearly the same area; the radius and therefore the area of that segment could be readily computed. 
The arc length of an involute is given by so the arc length |FG| of the involute in the fourth quadrant is . Let c be the length of an arc segment of the involute between the y-axis and a vertical line tangent to the silo at θ = 3/2; it is the arc subtended by Φ. (while the arc is minutely longer than r, the difference is negligible). So . The arc length of a circular arc is and θ here is /2 radians of the fourth quadrant, so , r the radius of the circular arc is and the area of the circular segment bounded by it is . The area of the involute excludes half the area of the silo (1018.61) in the fourth quadrant, so its approximate area is 18146, and the grazable area including the half circle of radius R, () totals . That is 249 sq.yds. greater than the correct area of 76256, an error of just 0.33%. This method of approximating may not be quite so good for angles < 3/2 of the involute. If it matters, there is a constructive way to obtain a quick and very accurate estimate of : draw a diagonal from point on the circumference of the pond to its intersection on the y-axis. The length of the diagonal is 120yds. because it is of the tether. So the other leg of the triangle, the hypotenuse as drawn, is yds. So radians, rounded to three places. Solution by integrating with polar coordinates Find the area between a circle and its involute over an angle of 2 to −2 excluding any overlap. In Cartesian coordinates, the equation of the involute is transcendental; doing a line integral there is hardly feasible. A more felicitous approach is to use polar coordinates (z,θ). Because the "sweep" of the area under the involute is bounded by a tangent line (see diagram and derivation below) which is not the boundary () between overlapping areas, the decomposition of the problem results in four computable areas: a half circle whose radius is the tether length (A1); the area "swept" by the tether over an angle of 2 (A2); the portion of area A2 from θ = 0 to the tangent line segment (A3); and the wedge area qFtq (A4). So, the desired area A is A1 + (A2 − A3 + A4) · 2. The area(s) required to be computed are between two quadratic curves, and will necessarily be an integral or difference of integrals. The primary parameters of the problem are , the tether length defined to be 160yds, and , the radius of the silo. There is no necessary relationship between and , but here is the radius of the circle whose circumference is . If one defines the point of tethering (see diagram, above) as the origin with the circle representing the circumference of the pond below the x-axis, and on the y-axis below the circle representing the point of intersection of the tether when wound clockwise and counterclockwise, let be a point on the circle such that the tangent at intersects , and + Is the length of the tether. Let be the point of intersection of the circumference of the pond on the y-axis (opposite to ) below the origin. Then let acute be . The area under the involute is a function of because it is an integral over a quadratic curve. The area has a fixed boundary defined by the parameter (i.e. the circumference of the silo). In this case the area is inversely proportional to , i.e. the larger , the smaller the area of the integral, and the circumference is a linear function of (). So we seek an expression for the area under the involute . First, the area A1 is a half circle of radius so Next, find the angle which will be used in the limits of the integrals below. Let . 
is complementary to the opposite angle of the triangle whose right angle is at point t; and also complementary to that angle in the third quadrant of the circle. is the unrolled arc , so its arclength is . So and , so . Finally, and the following equation is obtained: . That is a transcendental equation that can only be solved by trial-and-error, polynomial expansion, or an iterative procedure like Newton–Raphson. . Next compute the area between the circumference of the pond and involute. Compute the area in the tapering "tail" of the involute, i.e. the overlapped area (note, on account of the tangent tF, that this area includes the wedge section, area A4, which will have to be added back in during the final summation). Recall that the area of a circular sector is if the angle is in radians. Imagine an infinitely thin circular sector from to subtended by an infinitely small angle . Tangent to , there is a corresponding infinitely thin sector of the involute from to subtending the same infinitely small angle . The area of this sector is where is the radius at some angle , which is , the arc length of the circle so far "unwrapped" at angle . The area under the involute is the sum of all the infinitely many infinitely thin sectors through some angle . This sum is The bounds of the integral represent the area under the involute in the fourth quadrant between and . The angle is measured on the circle, not on the involute, so it is less than by some angle designated . Is not given, and must be determined indirectly. Unfortunately, there is no way to simplify the latter term representing the lower bound of the eval expression because is not a rational fraction of , so it may as well be substituted and evaluated at once (factoring out preemptively): which for expository reasons can be rewritten . It seems apropo to merge a factor of into the constant term to get a common denominartor for the terms, so . is dominated by a linear term from the integration, so may be written, where is a non-zero positive but negligible quantity. A4 is the area of the peculiar wedge . That area is the area of a right triangle with vertex t, minus the area of a sector bounded by . where x is |tF| and θ is the angle opposite to Φ in the right angle triangle. So, . If , then the area of the wedge is by reduction. The final summation A1 + (A2 − A3 + A4) · 2 is . All imprecision in the calculation is now uncertainty in and the residual . . That's useful for elucidating the relationships between the parameters. is transcendental, so the definition is a recurrence relation. The initial guess is a small fraction of . The numerical answer is rounded up to the nearest square yard. It is worth noting that , which is the answer given for the case where the tether length is half the circumference (or any length such that ) of the silo, or no overlap to account for. The goat can eat all but 5% of the area of the great circle defined by its tether length, and half the area it cannot eat is within the perimeter of the pond/silo. The only imprecision in the calculation is that no closed-form representation for can be derived from the geometry presented. But small inaccuracies in when don't significantly affect the final result. 
Solution by ratio of arc length Just as the area below a line is proportional to the length of the line between boundaries, and the area of a circular sector is a ratio of the arc length () of the sector (), the area between an involute and its bounding circle is also proportional to the involute's arc length: for . So the total grazing area is . . .. Interior grazing problem Let be the center of a unit circle. A goat/bull/horse is tethered at point on the circumference. How long does the rope need to be to allow the animal to graze on exactly one half of the circle's area (white area in diagram, in plane geometry, called a lens)? Solution by calculating the lens area The area reachable by the animal is in the form of an asymmetric lens, delimited by the two circular arcs. The area of a lens with two circles of radii and distance between centers is which simplifies in case of and one half of the circle area to The equation can only be solved iteratively and results in . Solution using integration By using and integrating over the right half of the lens area with the transcendental equation follows, with the same solution. In fact, using the identities and , the transcendental equation derived from lens area can be obtained. Solution by sector area plus segment area The area can be written as the sum of sector area plus segment area. Assuming the leash is tied to the bottom of the pen, define as the angle the taut leash makes with upwards when the goat is at the circumference. Define as the angle from downwards to the same location, but from the center of the pen instead of from the center of the larger circle. The sum of angles of a triangle equals for the resulting isosceles triangle, giving . Setting the pen's radius to be 1 and trigonometry such as then give . Requiring that half the grazable area be 1/4 of the pen's area gives . Using the circular sector and circular segment area formulae gives , which only assumes . Combining into a single equation gives . Note that solving for then taking the cosine of both sides generates extra solutions even if including the obvious constraint . Using trigonometric identities, we see that this is the same transcendental equation that lens area and integration provide. Closed-form solution By using complex analysis methods in 2020, Ingo Ullisch obtained a closed-form solution as the cosine of a ratio of two contour integrals: where C is the circle . 3-dimensional extension The three-dimensional analogue to the two-dimensional goat problem is a bird tethered to the inside of a sphere, with the tether long enough to constrain the bird's flight to half the volume of the sphere. In the three-dimensional case, point lies on the surface of a unit sphere, and the problem is to find radius of the second sphere so that the volume of the intersection body equals exactly half the volume of the unit sphere. The volume of the unit sphere reachable by the animal has the form of a three-dimensional lens with differently shaped sides and defined by the two spherical caps. The volume of a lens with two spheres of radii and distance between the centers is which simplifies in case of and one half of the sphere volume to leading to a solution of It can be demonstrated that, with increasing dimensionality, the reachable area approaches one half the sphere at the critical length . If , the area covered approaches almost none of the sphere; if , the area covered approaches the sphere's entire area. See also Mrs. 
Miniver's problem, another problem of equalizing areas of circular lunes and lenses References External links The Goat Problem - Numberphile video about the goat problem. Recreational mathematics Circles Area Metaphors referring to sheep or goats
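As a numerical companion to the interior grazing problem described above, the sketch below finds the tether length that makes the grazable lens equal to half the area of a unit pen. This is not the article's own derivation: it simply applies the standard circle-circle overlap (lens area) formula and bisects on the tether length; the value it converges to, roughly 1.1587, matches the figure usually quoted for this problem.

    from math import acos, sqrt, pi

    def lens_area(r, R=1.0, d=1.0):
        # Overlap area of the pen (radius R) and the grazing circle (radius r)
        # whose center lies on the pen's circumference, so d = R.
        a = r * r * acos((d*d + r*r - R*R) / (2*d*r))
        b = R * R * acos((d*d + R*R - r*r) / (2*d*R))
        c = 0.5 * sqrt((-d+r+R) * (d+r-R) * (d-r+R) * (d+r+R))
        return a + b - c

    lo, hi = 0.5, 2.0                 # lens_area grows with r on this bracket
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lens_area(mid) < pi / 2 else (lo, mid)
    print(round(0.5 * (lo + hi), 5))  # about 1.15873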
Goat grazing problem
Physics,Mathematics
3,195
26,244,741
https://en.wikipedia.org/wiki/Conrad%20Schlumberger%20Award
The Conrad Schlumberger Award is an award given to one of the members of European Association of Geoscientists and Engineers. The award is given each year to one that has made an outstanding contribution over a period of time to the scientific and technical advancement of the geosciences, particularly geophysics. The award is made annually by the EAGE Board. History The Conrad Schlumberger Award was created in 1955, as a recognition of Conrad Schlumberger's outstanding contribution to exploration geophysics, by the European Association of Geoscientists and Engineers (then named The European Association of Exploration Geophysicists.) List of recipients Source: 2020 André Revil 2019 Andrey Bakulin 2018 Johan Robertsson and Phil Christie 2017 José Carcione 2016 Stewart Greenhalgh 2015 Alain Tabbagh 2014 Valentina Socco 2013 Kees Wapenaar 2012 Martin Landrø 2011 Sergey Fomel 2010 Lasse Amundsen 2009 Gerhard Pratt 2008 Clive McCann 2007 Colin Macbeth 2006 Petar Stavrev 2005 Horst Rüter 2004 Eric de Bazelaire 2003 Tariq Alkhalifah 2002 M. Tygel 2001 R. Marschall From June 2001, award dates refer to the year in which the award was presented and not to the year in which the winning poster/paper was presented. 1999 Peter Weidelt 1998 Vlastislav Cerveny 1997 Jacob T. Fokkema 1996 Michael Schoenberg 1995 Patrick Lailly 1994 P. Newman 1993 Bjørn Ursin 1992 L. Dresen 1991 Oz Yilmaz 1990 Fabio Rocca 1989 Albert Tarantola 1988 Ken Larner 1987 Les Hatton 1986 S. Crampin 1985 S.M. Deregowski 1984 K. Helbig 1983 Johann Sattlegger 1982 Anton Ziolkowski 1981 W.E. Lerwill 1980 A.J. "Guus" Berkhout 1979 Theodore Krey 1978 No award 1977 Peter Hubral 1976 J.G. (Mendel) Hagedoorn 1975 Milo M. Backus and R.L. Chen 1974 N. de Voogd 1973 Roy E. White 1972 P. Bois 1971 P.N.S. O'Brien 1970 H. Naudy 1969 Sven Treitel and Enders A. Robinson 1968 Helmut Linsser 1967 Robert Garotta and Dominique Michon 1966 Jacques D'Hoeraene 1965 O. Koefoed 1964 Nigel Anstey 1963 G. Grau 1962 No award 1961 G. Kunetz 1960 Reinhard Bortfeld 1959 L. Alfano 1958 Umberto Colombo 1957 O. Kappelmeyer 1956 No award 1955 H. Flathe See also List of geophysicists List of geophysics awards Prizes named after people References External links The Conrad Schlumberger Award homepage at EAGE All awards by EAGE EAGE (European Association of Geoscientists and Engineers) homepage Science and technology awards Awards established in 1955 1955 establishments in Europe
Conrad Schlumberger Award
Technology
607
2,428,259
https://en.wikipedia.org/wiki/Tower%20mill
A tower mill is a type of vertical windmill consisting of a brick or stone tower, on which sits a wooden 'cap' or roof, which can rotate to bring the sails into the wind. This rotating cap on a firm masonry base gave tower mills great advantages over earlier post mills, as they could stand much higher, bear larger sails, and thus afford greater reach into the wind. Windmills in general had been known to civilization for centuries, but the tower mill represented an improvement on traditional western-style windmills. The tower mill was an important source of power for Europe for nearly 600 years from 1300 to 1900, contributing to 25 percent of the industrial power of all wind machines before the advent of the steam engine and coal power. It represented a modification or a demonstration of improving and adapting technology that had been known by humans for ages. Although these types of mills were effective, some argue that, owing to their complexity, they would have initially been built mainly by the most wealthy individuals. History The tower mill originated in written history in the late 13th century in western Europe; the earliest record of its existence is from 1295, from Stephen de Pencastor of Dover, but the earliest illustrations date from 1390. Other early examples come from Yorkshire and Buckinghamshire. Other sources pin its earliest inception back in 1180 in the form of an illustration on a Norman deed, showing this new western-style windmill. The Netherlands has six mills recorded before the year 1407. One of the earliest tower mills in Britain was Chesterton Windmill, Warwickshire, which has a hollowed conical base with arches. The large part of its development continued through the Late Middle Ages. Towards the end of the 15th century, tower mills began appearing across Europe in greater numbers. The origins of the tower mill can be found in a growing economy of Europe, which needed a more reliable and efficient form of power, especially one that could be used away from a river bank. Post mills dominated the scene in Europe until the 19th century when tower mills began to replace them in such places as Billingford Mill in Norfolk, Upper Hellesdon Mill in Norwich, and Stretham Mill in Cambridgeshire. The tower mill also was seen as a cultural object, being painted and designed with aesthetic appeal in mind. Styles of the mills reflected on local tradition and weather conditions, for example mills built on the western coast of Britain were mainly built of stone to withstand the stronger winds, and those built in the east were mainly of brick. In England around 12 eight-sailers, more than 50 six- and 50 five-sailers were built in the late 18th century and 19th century, half of them in Lincolnshire. Of the eight sailed mills only Pocklington's Mill in Heckington survived in fully functional state. A few of the other ones exist as four-sailed mills (Old Buckenham), as residences (Diss Button's Mill), as ruins (Leach's Windmill, Wisbech), or have been dismantled (Holbeach Mill; Skirbeck Mill, Boston). In Lincolnshire some of the six-sailed (Sibsey Trader Mill, Waltham Windmill) and five-sailed (Dobson's Mill in Burgh le Marsh, Maud Foster Windmill in Boston, Hoyle's Mill in Alford) slender (mostly tarred) tower mills with their white onion-shaped cap and a huge fantail are still there and working today. 
Other former five- and six-sailed Lincolnshire and Yorkshire tower mills now without sails and partly without cap are LeTall's Mill in Lincoln, Holgate Windmill in Holgate, York (currently being restored), Black, Cliff, or Whiting's Mill (a seven-storeyed chalk mill) in Hessle and (with originally six sails) Barton-upon-Humber Tower mill, Brunswick Mill in Long Sutton, Lincolnshire, Metheringham Windmill, Penny Hill Windmill in Holbeach, Wragby Mill (built by E. Ingledew in 1831, millwright of Heckington Mill in 1830), and Wellingore Tower Mill. Another fine six-sailer can be found in Derbyshire – England's only sandstone-towered windmill at Heage, of 1791. Design The advantage of the tower mill over the earlier post mill is that it is not necessary to turn the whole mill ("body", "buck") with all its machinery into the wind; this allows more space for the machinery as well as for storage. However, select tower mills found around Holland were constructed on a wooden frame so as to rotate the entire foundation of the mill along with the cap. These towers were often constructed out of wood rather than masonry as well. A movable head which could pivot to react to the changing wind patterns was the most important aspect of the tower mill. This ability gave the advantage of a larger and more stable frame that could deal with harsh weather. Also, moving only a cap was much easier than moving an entire structure. In the earliest tower mills the cap was turned into the wind with a long tail-pole which stretched down to the ground at the back of the mill. Later an endless chain was used which drove the cap through gearing. In 1745 an English engineer, Edmund Lee, invented the windmill fantail, a little windmill mounted at right angles to the sails, at the rear of the mill, which turned the cap automatically to bring it into the wind. Like other windmills, tower mills normally have four blades. To increase windmill efficiency, millwrights experimented with different methods: automated patent sails, which, unlike cloth-spread sails, did not need the sail cross to be stopped to spread or remove the cloth because the sail surface was altered from inside the mill by means of a controlling gear; and more than four blades, to increase the sail surface. For the latter, engineer John Smeaton invented the cast-iron Lincolnshire cross to make sail-crosses with five, six, and even eight blades possible. The cross was named after Lincolnshire, where it was most widely used. There are several components to the tower mill as it was in the 19th century in Europe in its most developed stage; some elements, such as the gallery, are not present in all tower mills: Stock: the arm that protrudes from the top of the windmill, holding the frame of the sail in place; this is the main support of the sail and is usually made of wood. Sail: the turning frame that catches the wind, attached to and held by the stock. The traditional style found on most tower mills is a four-sail frame; in the Mediterranean model, however, there is usually an eight-sail frame. An example of this is St. Mary's Mill on the Isle of Sicilly, constructed in 1820. Windshaft: a particularly important part of the sail frame, the windshaft is the cylindrical piece that translates the movement of the sail into the machinery within the windmill. Cap: the top of the tower that holds the sail and stock; this piece is able to rotate on top of the tower. Tower: supports the cap; the main structure of the tower mill. Floor: the base level of the tower inside, usually where grain or other products are stored. Gallery: a deck surrounding the floor outside the tower to provide access around the tower mill if it is raised; not present in all tower mills. The gallery allowed access to the sails for making repairs, because they could not be easily reached from the ground in larger mills. Frame: the sail design that forms the outline of the sail, usually a meshed wood design that is then covered in cloth. The Mediterranean design is different in that there are several sails on the sail-frame, each supporting a draped cloth, and there is no wooden frame behind it. Fantail: an orientation device attached to the cap, allowing it to rotate to keep the sails in the direction of the wind. Hemlath: a thick wooden sailbar on the side of the frame that keeps the narrower sailbars inside the sail. Sailbar: an elongated piece of wood that forms a sail. Sail cloth: cloth attached to a sail that collects wind energy; a large sail cloth is used for weak winds and a small sail cloth for strong winds. Application The tower mill was more powerful than the water mill, able to generate roughly 15 to 22 kilowatts (20 to 30 horsepower). There were many uses that the tower mill had aside from grinding corn. It is sometimes said that tower mills fuelled a society that was steadily growing in its need for power by providing a service to other industries as well: the production of pepper and other spices; powering sawmills for lumber companies; and turning wood pulp into paper for paper companies. Other sources argue against this, claiming there is no real evidence, specifically, of tower mills doing these things. Interesting facts The world's tallest tower mills can be found in Schiedam, Netherlands. , meaning 'the North', is a corn mill dating to 1803 that is to the cap. In 2006 an imitation, (named after the local Nolet distilling family who owns the mill), was built as a generator mill producing electricity that is to the cap. England's tallest tower mill is the nine-storeyed Moulton Windmill in Moulton, Lincolnshire, with a cap height of . Since 2005 the mill has a new white rotatable cap with windshaft and fantail in place. The stage was erected during 2008 and new sails were fitted on 21 November 2011 to complete the restoration of the mill. Larger mills have been lost, such as the Great Yarmouth Southtown mill that was to the top of the lantern that functioned as a lighthouse, and Bixley tower mill that was to the cap top, both in Norfolk. In the Netherlands windmills named tower mills () have a compact, cylindrical or only slightly conical tower. In the southern Netherlands four mills of that type (Dutch definition) survive, the oldest one dating from before 1441. The cap of three of those mills is turned by a luffing gear built into the cap. Older types of tower mill with a fixed cap were found in castles, fortresses or inside city walls from the 14th century, and are still to be found around the Mediterranean Sea. They were built with the sails facing the prevailing wind direction. Tower mills were very expensive to build, with cost estimates suggesting almost twice that of post mills; this is in part why they were not very prevalent until centuries after their invention. Sometimes these mills were even built on the sides of castles and towers in fortified towns to make them resistant to attacks. Some tower mills were still in operation well into the 20th century in southern parts of the United Kingdom. Citations References Lucas, Adam.
Wind, water, work: ancient and medieval milling technology. (BRILL, 2006) Righter, Robert W. Wind energy in America: A History. (University of Oklahoma Press, 1996) Hills, Richard Leslie. Power from wind: a history of windmill technology. (Cambridge University Press, 1996) Langdon, John. Mills in the medieval economy: England, 1300–1540. (Oxford University Press, 2004) Thomas Kingston Derry and Trevor Illtyd William. A short history of technology: from the earliest times to A.D. 1900. (Courier Dover Publications, 1993) Thomas F. Glick, Steven John Livesey, and Faith Wallis. Medieval science, technology, and medicine: an encyclopedia: Volume 11 of Routledge encyclopedias of the Middle Ages. (Routledge, 2005) Harvard University. Journal of the Franklin Institute, Volume 187. (Pergamon Press, 1919) Watts, Martin. Water and wind power. (Osprey Publishing, 2000) Ball, Robert Steele. Natural sources of power. (D. Van Nostrand company, 1908) Colonial Williamsburg Foundation. Miller in Eighteenth Century Virginia. (Colonial Williamsburg, 1958) Cipolla, Carlo M. Before the industrial revolution: European society and economy, 1000–1700. (W. W. Norton & Company, 1994) External links Chesterton Windmill Interesting smaller site about Tower Mills Shows components of the Tower Mill Several interesting images of Tower Mills Dutch website presenting the "Lana Mariana" windmill at Ede with a built-in restaurant Morgan Lewis Windmill, Barbados. A good example of a tower mill with tail tree Dutch tower mill Noletmolen traditional style mill built in 2005 to generate electricicty – Dutch text. Fédération Des Moulins de France Towers Windmills
Tower mill
Engineering
2,513
46,581,976
https://en.wikipedia.org/wiki/Leccinum%20vulpinum
Leccinum vulpinum, commonly known as the foxy bolete, is a bolete fungus in the genus Leccinum that is found in Europe. It was described as new to science by Roy Watling in 1961. An edible species, it grows in mycorrhizal association with species of pine and bearberry. See also List of Leccinum species References vulpinum Fungi described in 1961 Fungi of Europe Edible fungi Taxa named by Roy Watling Fungus species
Leccinum vulpinum
Biology
101
2,003,371
https://en.wikipedia.org/wiki/Mordant%20red%2019
Mordant red 19 is an organic compound with the chemical formula C16H13ClN4O5S. It is classified as an azo dye. It is a mordant used in textile dyeing, usually in combination with chromium. It is usually found as the sodium salt. See also Alizarin List of colors References External links Google images of Mordant Red 19 Shades of red Azo dyes Pyrazolones Sulfonic acids Chloroarenes Phenols
Mordant red 19
Chemistry
104
15,718,146
https://en.wikipedia.org/wiki/Descriptional%20Complexity%20of%20Formal%20Systems
DCFS, the International Workshop on Descriptional Complexity of Formal Systems, is an annual academic conference in the field of computer science. Beginning with the 2011 edition, the proceedings of the workshop appear in the series Lecture Notes in Computer Science. Since the very beginning, extended versions of selected papers have been published as special issues of the International Journal of Foundations of Computer Science, the Journal of Automata, Languages and Combinatorics, of Theoretical Computer Science, and of Information and Computation. In 2002 DCFS was the result of the merger of the workshops DCAGRS (Descriptional Complexity of Automata, Grammars and Related Structures) and FDSR (Formal Descriptions and Software Reliability). The workshop is often collocated with international conferences in related fields, such as ICALP, DLT and CIAA. Topics of the workshop Typical topics include: various measures of descriptional complexity of automata, grammars, languages and related systems; trade-offs between descriptional complexity and mode of operation; circuit complexity of Boolean functions and related measures; succinctness of description of (finite) objects; state complexity of finite automata; descriptional complexity in resource-bounded or structure-bounded environments; structural complexity; descriptional complexity of formal systems for applications (e.g. software reliability, software and hardware testing, modelling of natural languages); descriptional complexity aspects of nature-motivated (bio-inspired) architectures and unconventional models of computing; and Kolmogorov–Chaitin complexity and descriptional complexity. As such, the topics of the conference overlap with those of the International Federation for Information Processing Working Group 1.2 on descriptional complexity. Significance A survey on descriptional complexity states that "since more than a decade the Workshop on 'Descriptional Complexity of Formal Systems' (DCFS), [...] has contributed substantially to the development of [its] field of research." A talk on the occasion of the 10th anniversary of the workshop gave an overview of trends and directions in research papers presented at DCFS. History of the workshop Chairs of the Steering Committee of the DCFS workshop series: Basic information on each DCFS event, as well as on its precursors, DCAGRS and FDSR, is included in the following table. See also The list of computer science conferences contains other academic conferences in computer science. References Bianca Truthe: "Report on DCFS 2008." Bulletin of the EATCS 96:160-161, October 2008. Online edition accessed Feb 9, 2009. Talk held at the 11th DCFS in Magdeburg, Germany, July 6–9, 2009. Ian McQuillan: "Report on DCFS 2009." Bulletin of the EATCS 99:185-187, October 2009. Online edition accessed Nov 24, 2009. Electronic Proceedings in Theoretical Computer Science, official website. Andreas Malcher: "Report on DCFS 2012." Bulletin of the EATCS 108:168-169, October 2012. Online edition. External links Descriptional Complexity of Formal Systems: official website Theoretical computer science conferences Formal languages
Descriptional Complexity of Formal Systems
Mathematics
620
8,941,394
https://en.wikipedia.org/wiki/Downs%20cell
Downs' process is an electrochemical method for the commercial preparation of metallic sodium, in which molten NaCl is electrolyzed in a special apparatus called the Downs cell. The Downs cell was invented in 1923 (patented: 1924) by the American chemist James Cloyd Downs (1885–1957). Operation The Downs cell uses a carbon anode and an iron cathode. The electrolyte is sodium chloride that has been heated to the liquid state. Although solid sodium chloride is a poor conductor of electricity, when molten the sodium and chloride ions are mobilized and become charge carriers, allowing conduction of electric current. Some calcium chloride and/or chlorides of barium (BaCl2) and strontium (SrCl2), and, in some processes, sodium fluoride (NaF) are added to the electrolyte to reduce the temperature required to keep the electrolyte liquid. Sodium chloride (NaCl) melts at 801 °C (1074 Kelvin), but a salt mixture can be kept liquid at a temperature as low as 600 °C for a mixture containing, by weight, 33.2% NaCl and 66.8% CaCl2. If pure sodium chloride is used, a metallic sodium emulsion is formed in the molten NaCl which is impossible to separate. Therefore, one option is to have a NaCl (42%) and CaCl2 (58%) mixture. The anode reaction is: 2Cl− → Cl2 (g) + 2e− The cathode reaction is: 2Na+ + 2e− → 2Na (l) for an overall reaction of 2Na+ + 2Cl− → 2Na (l) + Cl2 (g) The calcium does not enter into the reaction because its reduction potential of -2.87 volts is lower than that of sodium, which is -2.71 volts. Hence the sodium ions are reduced to metallic form in preference to those of calcium. If the electrolyte contained only calcium ions and no sodium, calcium metal would be produced as the cathode product (which indeed is how metallic calcium is produced). Both the products of the electrolysis, sodium metal and chlorine gas, are less dense than the electrolyte and therefore float to the surface. Perforated iron baffles are arranged in the cell to direct the products into separate chambers without their ever coming into contact with each other. Although theory predicts that a potential of a little over 4.07 volts should be sufficient to cause the reaction to go forward, in practice potentials of up to 8 volts are used. This is done in order to achieve useful current densities in the electrolyte despite its inherent electrical resistance. The overvoltage and consequent resistive heating contributes to the heat required to keep the electrolyte in a liquid state. The Downs' process also produces chlorine as a byproduct, although chlorine produced this way accounts for only a small fraction of chlorine produced industrially by other methods. References Chemical processes Electrolytic cells Metallurgical processes
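As a rough cross-check on the 4.07-volt figure quoted above (an editorial back-of-the-envelope calculation using standard-table reduction potentials, not a calculation taken from the article), the theoretical decomposition potential follows directly from the two half-reactions:

\[
E^{\circ}_{\text{cell}} = E^{\circ}_{\text{cathode}} - E^{\circ}_{\text{anode}}
= (-2.71\ \text{V}) - (+1.36\ \text{V}) = -4.07\ \text{V},
\]

so an applied potential of at least about 4.07 V is required to drive the electrolysis; the higher voltages used in practice cover the ohmic losses in the melt mentioned above.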
Downs cell
Chemistry,Materials_science
645
29,545,222
https://en.wikipedia.org/wiki/Sterile%20alpha%20motif
In molecular biology, the protein domain Sterile alpha motif (or SAM) is a putative protein interaction module present in a wide variety of proteins involved in many biological processes. The SAM domain, which spans roughly 70 residues, is found in diverse eukaryotic organisms. SAM domains have been shown to homo- and hetero-oligomerise, forming multiple self-association architectures, and also to bind various non-SAM domain-containing proteins, nevertheless with a low affinity constant. SAM domains also appear to possess the ability to bind RNA. Smaug, a protein that helps to establish a morphogen gradient in Drosophila embryos by repressing the translation of nanos (nos) mRNA, binds to the 3' untranslated region (UTR) of nos mRNA via two similar hairpin structures. The 3D crystal structure of the Smaug RNA-binding region shows a cluster of positively charged residues on the Smaug-SAM domain, which could be the RNA-binding surface. This electropositive potential is unique among all previously determined SAM-domain structures and is conserved among Smaug-SAM homologs. These results suggest that the SAM domain might have a primary role in RNA binding. Structural analyses show that the SAM domain is arranged in a small five-helix bundle with two large interfaces. In the case of the SAM domain of EPHB2, each of these interfaces is able to form dimers. The presence of these two distinct intermonomer binding surfaces suggests that SAM could form extended polymeric structures. Fungal SAM In molecular biology, the protein domain Ste50p is found mainly in fungi and some other types of eukaryotes. It plays a role in the mitogen-activated protein kinase cascades, a type of cell signalling that helps the cell respond to external stimuli, more specifically mating, cell growth, and osmo-tolerance in fungi. Function The protein domain Ste50p has a role in detecting pheromones for mating. It is thought to be found bound to Ste11p in order to prolong the pheromone-induced signaling response. Furthermore, it is also involved in aiding the cell to respond to nitrogen starvation.
Sterile alpha motif
Biology
881
44,844,472
https://en.wikipedia.org/wiki/Amino%20acid%20N-carboxyanhydride
Amino acid N-carboxyanhydrides, also called Leuchs' anhydrides, are a family of heterocyclic organic compounds derived from amino acids. They are white, moisture-reactive solids. They have been evaluated for applications in the field of biomaterials. Preparation NCAs are typically prepared by phosgenation of amino acids. They were first synthesized by Hermann Leuchs by heating an N-ethoxycarbonyl or N-methoxycarbonyl amino acid chloride in a vacuum at 50-70 °C. A moisture-tolerant route to unprotected NCAs employs epoxides as scavengers of hydrogen chloride. This synthesis of NCAs is sometimes called the . The relatively high temperatures necessary for this cyclization result in the decomposition of several NCAs. Of several improvements, one notable procedure involves treating an unprotected amino acid with phosgene or its trimer. Reactions NCAs are prone to hydrolysis to the parent amino acid: cyclo[RCHNHC(O)OC(O)] + H2O → H2NCH(R)CO2H + CO2 Some derivatives, however, tolerate water briefly. NCAs convert to homopolypeptides ([N(H)CH(R)C(O)]n) through ring-opening polymerization: n cyclo[RCHNHC(O)OC(O)] → [N(H)CH(R)C(O)]n + n CO2 Poly-L-lysine has been prepared from N-carbobenzyloxy-α-N-carboxy-L-lysine anhydride, followed by deprotection with phosphonium iodide. Peptide synthesis from NCAs does not require protection of the amino acid functional groups. N-Substituted NCAs, such as sulfenamide derivatives, have also been examined. The ring-opening polymerization of NCAs is catalyzed by metal catalysts. The polymerization of NCAs has been considered a prebiotic route to polypeptides. See also Dakin–West reaction Glycine N-carboxyanhydride, the parent NCA References Further reading Organic chemistry Organic reactions
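As an illustrative sketch of the ring-opening polymerization stoichiometry above: every NCA monomer that adds to the growing chain releases one molecule of CO2, so the repeat-unit mass is the NCA mass minus roughly 44 g/mol. The monomer mass and monomer-to-initiator ratio below are assumed example values, not data from this article:

```python
# Mass balance for NCA ring-opening polymerization (illustrative values only).
M_CO2 = 44.01  # g/mol released per monomer incorporated

def expected_mn(nca_molar_mass: float, monomer_to_initiator: float) -> float:
    """Ideal number-average molar mass (g/mol) of the polypeptide at full conversion,
    assuming living chain growth from a single initiator site."""
    repeat_unit_mass = nca_molar_mass - M_CO2
    return monomer_to_initiator * repeat_unit_mass

# Example: an NCA of ~263 g/mol (roughly that of a benzyl glutamate NCA) at [M]/[I] = 100
print(expected_mn(263.2, 100))  # ~21,900 g/mol
```

Real polymerizations deviate from this ideal figure whenever chain transfer or termination competes with propagation.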
Amino acid N-carboxyanhydride
Chemistry
464
13,632,049
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20emission%20mechanisms
Gamma-ray burst emission mechanisms are theories that explain how the energy from a gamma-ray burst progenitor (regardless of the actual nature of the progenitor) is turned into radiation. These mechanisms are a major topic of research as of 2007. Neither the light curves nor the early-time spectra of GRBs show resemblance to the radiation emitted by any familiar physical process. Compactness problem It has been known for many years that ejection of matter at relativistic velocities (velocities very close to the speed of light) is a necessary requirement for producing the emission in a gamma-ray burst. GRBs vary on such short timescales (as short as milliseconds) that the size of the emitting region must be very small, or else the time delay due to the finite speed of light would "smear" the emission out in time, wiping out any short-timescale behavior. At the energies involved in a typical GRB, so much energy crammed into such a small space would make the system opaque to photon-photon pair production, making the burst far less luminous and also giving it a very different spectrum from what is observed. However, if the emitting system is moving towards Earth at relativistic velocities, the burst is compressed in time (as seen by an Earth observer, due to the relativistic Doppler effect) and the emitting region inferred from the finite speed of light becomes much smaller than the true size of the GRB (see relativistic beaming). GRBs and internal shocks A related constraint is imposed by the relative timescales seen in some bursts between the short-timescale variability and the total length of the GRB. Often this variability timescale is far shorter than the total burst length. For example, in bursts as long as 100 seconds, the majority of the energy can be released in short episodes less than 1 second long. If the GRB were due to matter moving towards Earth (as the relativistic motion argument enforces), it is hard to understand why it would release its energy in such brief interludes. The generally accepted explanation for this is that these bursts involve the collision of multiple shells traveling at slightly different velocities; so-called "internal shocks". The collision of two thin shells flash-heats the matter, converting enormous amounts of kinetic energy into the random motion of particles, greatly amplifying the energy release due to all emission mechanisms. Which physical mechanisms are at play in producing the observed photons is still an area of debate, but the most likely candidates appear to be synchrotron radiation and inverse Compton scattering. As of 2007 there is no theory that has successfully described the spectrum of all gamma-ray bursts (though some theories work for a subset). However, the so-called Band function (named after David Band) has been fairly successful at fitting, empirically, the spectra of most gamma-ray bursts: A few gamma-ray bursts have shown evidence for an additional, delayed emission component at very high energies (GeV and higher). One theory for this emission invokes inverse Compton scattering. If a GRB progenitor, such as a Wolf-Rayet star, were to explode within a stellar cluster, the resulting shock wave could generate gamma-rays by scattering photons from neighboring stars. About 30% of known galactic Wolf-Rayet stars, are located in dense clusters of O stars with intense ultraviolet radiation fields, and the collapsar model suggests that WR stars are likely GRB progenitors. 
Therefore, a substantial fraction of GRBs are expected to occur in such clusters. As the relativistic matter ejected from an explosion slows and interacts with ultraviolet-wavelength photons, some photons gain energy, generating gamma-rays. Afterglows and external shocks The GRB itself is very rapid, lasting from less than a second up to a few minutes at most. Once it disappears, it leaves behind a counterpart at longer wavelengths (X-ray, UV, optical, infrared, and radio) known as the afterglow that generally remains detectable for days or longer. In contrast to the GRB emission, the afterglow emission is not believed to be dominated by internal shocks. In general, all the ejected matter has by this time coalesced into a single shell traveling outward into the interstellar medium (or possibly the stellar wind) around the star. At the front of this shell of matter is a shock wave referred to as the "external shock" as the still relativistically moving matter ploughs into the tenuous interstellar gas or the gas surrounding the star. As the interstellar matter moves across the shock, it is immediately heated to extreme temperatures. (How this happens is still poorly understood as of 2007, since the particle density across the shock wave is too low to create a shock wave comparable to those familiar in dense terrestrial environments – the topic of "collisionless shocks" is still largely hypothesis but seems to accurately describe a number of astrophysical situations. Magnetic fields are probably critically involved.) These particles, now relativistically moving, encounter a strong local magnetic field and are accelerated perpendicular to the magnetic field, causing them to radiate their energy via synchrotron radiation. Synchrotron radiation is well understood, and the afterglow spectrum has been modeled fairly successfully using this template. It is generally dominated by electrons (which move and therefore radiate much faster than protons and other particles) so radiation from other particles is generally ignored. In general, the afterglow spectrum assumes the form of a power law with three break points (and therefore four different power-law segments). The lowest break point, the self-absorption frequency \nu_a, corresponds to the frequency below which the GRB is opaque to radiation, and so the spectrum there attains the form of the Rayleigh–Jeans tail of blackbody radiation. The two other break points, \nu_m and \nu_c, are related to the minimum energy acquired by an electron after it crosses the shock wave and to the time it takes an electron to radiate most of its energy, respectively. Depending on which of these two frequencies is higher, two different regimes are possible: Fast cooling (\nu_c < \nu_m) – Shortly after the GRB, the shock wave imparts immense energy to the electrons and the minimum electron Lorentz factor is very high. In this case, the spectrum looks like F_\nu \propto \nu^{1/3} for \nu < \nu_c, F_\nu \propto \nu^{-1/2} for \nu_c < \nu < \nu_m, and F_\nu \propto \nu^{-p/2} for \nu > \nu_m, where p is the power-law index of the shock-accelerated electron energy distribution. Slow cooling (\nu_m < \nu_c) – Later after the GRB, the shock wave has slowed down and the minimum electron Lorentz factor is much lower. In this case, the spectrum looks like F_\nu \propto \nu^{1/3} for \nu < \nu_m, F_\nu \propto \nu^{-(p-1)/2} for \nu_m < \nu < \nu_c, and F_\nu \propto \nu^{-p/2} for \nu > \nu_c. The afterglow changes with time. It must fade, obviously, but the spectrum changes as well. For the simplest case of adiabatic expansion into a uniform-density medium, the critical parameters evolve as \nu_c \propto t^{-1/2}, \nu_m \propto t^{-3/2}, and F_{\nu,max} \approx constant. Here F_{\nu,max} is the flux at the current peak frequency of the GRB spectrum. (During fast cooling this peak is at \nu_c; during slow cooling it is at \nu_m.) Note that because \nu_m drops faster than \nu_c, the system eventually switches from fast cooling to slow cooling.
Different scalings are derived for radiative evolution and for a non-constant-density environment (such as a stellar wind), but share the general power-law behavior observed in this case. Several other known effects can modify the evolution of the afterglow: Reverse shocks and the optical flash There can be "reverse shocks", which propagate back into the shocked matter once it begins to encounter the interstellar medium. The twice-shocked material can produce a bright optical/UV flash, which has been seen in a few GRBs, though it appears not to be a common phenomenon. Refreshed shocks and late-time flares There can be "refreshed" shocks if the central engine continues to release fast-moving matter in small amounts even out to late times, these new shocks will catch up with the external shock to produce something like a late-time internal shock. This explanation has been invoked to explain the frequent flares seen in X-rays and at other wavelengths in many bursts, though some theorists are uncomfortable with the apparent demand that the progenitor (which one would think would be destroyed by the GRB) remains active for very long. Jet effects Gamma-ray burst emission is believed to be released in jets, not spherical shells. Initially the two scenarios are equivalent: the center of the jet is not "aware" of the jet edge, and due to relativistic beaming we only see a small fraction of the jet. However, as the jet slows down, two things eventually occur (each at about the same time): First, information from the edge of the jet that there is no pressure to the side propagates to its center, and the jet matter can spread laterally. Second, relativistic beaming effects subside, and once Earth observers see the entire jet the widening of the relativistic beam is no longer compensated by the fact that we see a larger emitting region. Once these effects appear the jet fades very rapidly, an effect that is visible as a power-law "break" in the afterglow light curve. This is the so-called "jet break" that has been seen in some events and is often cited as evidence for the consensus view of GRBs as jets. Many GRB afterglows do not display jet breaks, especially in the X-ray, but they are more common in the optical light curves. Though as jet breaks generally occur at very late times (~1 day or more) when the afterglow is quite faint, and often undetectable, this is not necessarily surprising. Dust extinction and hydrogen absorption There may be dust along the line of sight from the GRB to Earth, both in the host galaxy and in the Milky Way. If so, the light will be attenuated and reddened and an afterglow spectrum may look very different from that modeled. At very high frequencies (far-ultraviolet and X-ray) interstellar hydrogen gas becomes a significant absorber. In particular, a photon with a wavelength of less than 91 nanometers is energetic enough to completely ionize neutral hydrogen and is absorbed with almost 100% probability even through relatively thin gas clouds. (At much shorter wavelengths the probability of absorption begins to drop again, which is why X-ray afterglows are still detectable.) As a result, observed spectra of very high-redshift GRBs often drop to zero at wavelengths less than that of where this hydrogen ionization threshold (known as the Lyman break) would be in the GRB host's reference frame. Other, less dramatic hydrogen absorption features are also commonly seen in high-z GRBs, such as the Lyman alpha forest. References Gamma-ray bursts
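The jet-break condition described above has a compact standard statement in the afterglow literature (given here as a sketch, not drawn from this article's references): the break occurs roughly when the bulk Lorentz factor of the decelerating outflow drops to the inverse of the jet half-opening angle,

\[
\Gamma(t_{\mathrm{jet}}) \simeq \frac{1}{\theta_j}, \qquad \Gamma \propto t^{-3/8} \ \text{(adiabatic blast wave, uniform medium)} \;\Rightarrow\; t_{\mathrm{jet}} \propto \theta_j^{8/3},
\]

so wider jets break later, which is one reason the break is often missed in afterglows that have already faded below detection limits by that time.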
Gamma-ray burst emission mechanisms
Physics,Astronomy
2,200
29,433,936
https://en.wikipedia.org/wiki/Hinged%20expansion%20joint
A hinged expansion joint is a metallic assembly that can rotate in a single plane, used to absorb changes resulting from piping thermal expansion or contraction. It includes hinges, attached to the expansion joint ends with a pair of pins, which allow angular movement in a single plane, restrain the pressure thrust, and prevent the expansion joint from deflecting axially, either in extension or compression. It is recommended that the hinges be used in sets of two or three. The expansion joint hinges provide for angular movement and resist pressure thrust forces. Individual hinged expansion joints used in piping systems are restricted to pure angular rotation by their hinges. Used as a pair, hinged expansion joints function together to absorb lateral deflection. Advantages of hinged expansion joints are that they are typically compact in size and structurally rigid. Applications Air, Steam, & Flue Gas Ducts Power Plants Chemical Industry Gas Turbine System Petrochemical Industry Primary reformer ducts & burners Steel Plants References External links U.S. Bellows, Inc. http://www.usbellows.com American Society for Testing and Materials (ASTM) http://www.astm.org/ Expansion Joint Manufacturers Association EJMA http://www.ejma.org Structural connectors
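A minimal geometric sketch of how a pair of hinged joints absorbs lateral deflection (an illustrative assumption, not a design rule taken from the references above): if the two joints are separated by a pipe spool of length L, accommodating a lateral offset d requires each joint to rotate through roughly asin(d / L).

```python
# Illustrative rotation estimate for a two-hinged-joint pair (sketch, not a design calculation).
import math

def hinge_rotation_deg(lateral_offset_m: float, joint_spacing_m: float) -> float:
    """Approximate rotation (degrees) required at each hinge to absorb a lateral offset."""
    return math.degrees(math.asin(lateral_offset_m / joint_spacing_m))

# Example: 50 mm of lateral movement across a 2 m spool between the two joints.
print(f"{hinge_rotation_deg(0.05, 2.0):.2f} degrees")  # ~1.43 degrees per joint
```

Actual designs would also account for bellows rated movement, spool stiffness, and code allowances, which this sketch ignores.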
Hinged expansion joint
Physics,Engineering
264
70,426,547
https://en.wikipedia.org/wiki/2022%E2%80%932023%20Russia%E2%80%93European%20Union%20gas%20dispute
The Russia–EU gas dispute flared up in March 2022 following the invasion of Ukraine on 24 February 2022. Russia and the major EU countries clashed over the issue of payment for natural gas pipelined to Europe by Russia's Gazprom, amidst sanctions on Russia that were expanded in response to Russia's 2022 invasion of Ukraine. In June, Gazprom claimed it was obliged to cut the flow of gas to Germany by more than half, as a result of such sanctions that prevented the Russian company from receiving its turbine component from Canada. On 26 September 2022, three of the four pipes of the Nord Stream 1 and 2 gas pipelines were sabotaged. This resulted in a record release of of methane()an equivalent of of carbon dioxide()and is believed to have made a contribution to global warming. , the price of gas had fallen to a fraction of the 2022 peak price and whilst Russian pipeline gas exports continued to flow in small quantities via Ukraine and via the TurkStream pipeline, provided the recipient was willing to indirectly pay in rubles, the EU had found alternate sources of gas for its needs and are no longer reliant on Russia as an energy source. Arbitration cases are pending, with large claims being made against Gazprom. Background Europe consumed of natural gas in 2020, of which 36% (that is, ) came from Russia. In early 2022, with 23 pipelines between Europe and Russia in 2021, Russia supplied 45% of EU's natural gas imports, earning $900 million a day. Following Russia's invasion of Ukraine in February 2022, the United States, the European Union, and other countries, introduced or significantly expanded sanctions to cut off "selected Russian banks" from SWIFT, including Gazprom bank. Assets of the Central Bank of Russia held in Western nations were frozen: the Central Bank of Russia was blocked from accessing more than $400 billion in foreign-exchange reserves held abroad. The EU became major supporters of Ukraine, with humanitarian assistance and a growing likelihood of military assistance. Russia wanted the EU to back away from Ukraine and decided to use gas as a weapon, by threatening to cut off supplies. In March 2022, the European Commission and International Energy Agency presented joint plans to reduce reliance on Russian energy, reduce Russian gas imports by two thirds within a year, and completely by 2030. In April 2022, the European Commission President Ursula von der Leyen said "the era of Russian fossil fuels in Europe will come to an end". On 18 May 2022, the European Union published plans to end its reliance on Russian oil, natural gas and coal by 2027. Demand of payment in rubles, March 2022 On 23 March 2022, Russian President Vladimir Putin announced payments for Russian pipeline gas would be switched from "currencies that had been compromised" (that is, US dollar and euro) to payments in roubles when the transaction involved a country formally designated "unfriendly" previously, which included all European Union states; on 28 March, he ordered the Central Bank of Russia, the government, and Gazprom to present proposals by 31 March for gas payments in rubles from these "unfriendly countries". President Putin's move was construed to be aimed at forcing European companies to directly prop up the Russian currency as well as bringing Russia's Central Bank back into the global financial system after the sanctions had nearly cut it off from financial markets, essentially circumventing sanctions. 
ING bank's chief economist, Carsten Brzeski, told Deutsche Welle he thought the gas-for-ruble demand was "a smart move". At the end of April 2022, Russian Foreign Minister Sergey Lavrov said that the $300 billion of Gazprom's funds that had in effect been "stolen" by the "Western 'friends'" were actually the funds they had paid for Russia's gas, which meant that all those years they had been consuming the Russian gas free of charge; he thus made a point that the new payment system was designed to preclude "the continuation of the brazen thievery those countries were involved in". On 28 March, Robert Habeck, the German Minister for Economic Affairs and Climate Action, announced that the G7 countries had rejected the Russian President's demand that payment for gas be made in rubles. On the same day, Dmitry Peskov, spokesman for the Russian president, said that Russia would "not supply gas for free". On 29 March, it was reported that the physical gas flows through the Yamal-Europe pipeline at Germany's Mallnow point had decreased to zero. The following day, Habeck triggered the "early warning" level for gas supplies, the first step of a national gas emergency plan that involved setting up a crisis team of representatives from the federal and state governments, regulators and private industry and that could, eventually, lead to gas rationing; he urged Germans to voluntarily cut their energy consumption as a way of ending the country's dependence on Russia. A similar step was undertaken by the Austrian government. Meanwhile, Gazprom said it continued to supply gas to Europe via Ukraine. Russia's gas had also begun flowing westward through the pipeline via Poland. Russia's TASS reported that President Putin had a phone call with Germany's Chancellor Olaf Scholz to "inform him on the decision to switch to payments in rubles for gas". According to Olaf Scholz's office, President Vladimir Putin told the German Chancellor that European companies could continue paying in euros or dollars. Decree 172 On 31 March, President Vladimir Putin signed a decreethat obligated, starting 1 April, purchasers of Russian pipeline gas from countries on Russia's Unfriendly Countries List to make their payments for Russian gas through a facility run by Russia's Gazprombank, a subsidiary of Gazprom. To pay for gas, purchaser companies from "unfriendly countries" would be required to open two accounts at Gazprombank and transfer foreign currency in which they previously made payments into one of them, which Gazprombank would then sell on the Moscow stock exchange for rubles that are deposited into the second (foreign-purchaser owned) ruble-denominated account (this currency conversion would be done in Russia). Gazprombank would then transfer this ruble payment to Gazprom PJSC (a company that operates gas pipeline systems, produces and explores gas, and transports high pressure gas in the Russian Federation and European countries), at which point the purchaser would be deemed to have legally fulfilled (under Russian law) its obligations to pay. Gas purchasers were thus still able to make payments by transferring foreign (non-ruble) currencies, including the currencies stipulated by their contract, which in most cases were US dollars and Euros. Despite this, the obligatory new payment mechanism introduced by decree 172 has been colloquially referred to as a "demand to pay in rubles" by many media outlets. 
The natural gas contracts stipulated the currency in which payments to Gazprom were to be made − 97% of which were in US dollars or euros − as well as the accounts into which the payments were to be deposited, which were Gazprom-owned accounts at Western financial institutions. These accounts had been frozen by international sanctions and any payments deposited into these accounts would also be immediately frozen, whereas payments deposited into these Gazprombank accounts (located in Russia) would be accessible to Gazprom, which would circumvent these international sanctions. The first post-1 April payments were due near the end of April and in May. Putin stated that any country refusing to use the new payment mechanism would be in violation of their contracts and face "corresponding repercussions". The Russian government would consider a failure to pay to be a default and the existing contract would be terminated. The decree allowed exceptions to be made for buyers that would permit them to pay as before. On 29 April 2022, a spokesperson for the German Economy Ministry said a payment through an account (such as stipulated by decree 172) is in line with sanctions when (1) “contracts have been fulfilled with payment in euros or dollars", with a government source clarifying further that (2) "it was irrelevant in which country [... that] account is opened [at a bank] as long as the bank in question was not on any sanctions list." Gas delivery disruption, April 2022 – present On 26 April 2022, Gazprom announced it would stop delivering natural gas to Poland via the Yamal–Europe pipeline and to Bulgaria from the following day as both countries had failed to make due payments to Gazprom in rubles. Poland said it did not expect to experience disruptions due to its natural gas storage facilities being about 75% full (ensuring 40–180 days of supply), the Poland–Lithuania gas pipeline becoming operational in May 2022, and the Baltic Pipe natural gas pipeline between Poland and Norway becoming operational in October 2022, which would make Poland fully independent of Russian gas. Poland could also import gas via the Świnoujście LNG terminal in the city of Świnoujście in the country's extreme north-west. Meanwhile, Bulgaria was almost completely dependent on Russian gas. The following day, Gazprom announced that it had "completely suspended gas supplies" to Poland's PGNiG and Bulgaria's Bulgargaz "due to absence of payments in roubles". Bulgaria, Poland, and the European Union condemned the suspension. The announcement of the suspension caused natural gas prices to surge and the Russian ruble to reach a two-year-high against the Euro in Moscow trade. On 11 May 2022, Ukraine's state-owned gas grid operator GTSOU halted the flow of natural gas through the Sokhranovka transit point, which had transported about one third of all of piped Russian natural gas that transited through Ukraine. It was the first time since the start of Russia's 24 February invasion of Ukraine that natural gas flow through Ukraine was interrupted. The Ukrainian government stated that it would not reopen this pipeline unless it regained control of areas from pro-Russian fighters. On the same day, Russia imposed sanctions on European subsidiaries of Gazprom which had been nationalized by European countries. 
On 20 May 2022, Gazprom announced that it had informed Finland that the next morning, natural gas deliveries to the country would be halted due to the refusal of the Finnish state-owned gas wholesaler to pay in rubles (that is, to comply with decree 172). Natural gas accounted for 5% of Finland's total annual energy consumption, with the majority of this natural gas being supplied by Russia. On 14 June 2022, Gazprom announced it would be slashing gas flow via the Nord Stream 1 pipeline, due to what it claimed was Siemens' failure to return, on time, compressor units that had been sent to Canada for repair. The explanation was challenged by Germany's energy regulator. On 16 June 2022, European benchmark natural gas prices increased by around 30% after Gazprom reduced Nord Stream 1's gas supply to Germany to 40% of the pipeline's capacity. Russia warned that usage of the pipeline could be completely suspended because of problems with the repairs. On 11 July 2022, Nord Stream 1 was turned off for scheduled annual maintenance, but remained off after the usual repair period. The Siemens pipeline turbine was repaired in Canada. Due to sanctions, Canada could not deliver the turbine back to Russia after the repair work and instead sent it to Germany, despite the call of Volodymyr Zelenskiy to maintain the sanctions. On 26 September 2022, both pipes of the Nord Stream 1 pipeline, and one of the two pipes of the Nord Stream 2 pipeline, which connect Russia and Germany, ruptured in the Baltic Sea. Nord Stream 1 had been operating at a significantly reduced capacity and then closed for weeks, and Nord Stream 2 was not operating, but both still contained gas. As of 7 October 2022, Swedish investigators said evidence pointed to sabotage. In October 2023, Bulgaria's government issued new transit fees on Russian gas which aimed to reduce Russia's revenues from selling the fossil fuel and discourage buyers of Russian gas. As of November 2023, those fees were being challenged in the constitutional court and were also opposed by Serbia and Hungary, two regional powers which largely disagree with the EU's decision to restrict Russian gas imports, even though the fees are payable by Gazprom, not the end user. On 3 January 2025, Gilberto Pichetto Fratin said the European Union should revise its price cap on gas, proposing a level of €60 per megawatt-hour, so as to guard against a sudden energy price shock; with Ukraine having declined to renew its gas transit agreement with Russia, fears of an energy shock had risen. Analysis of impact With European policy-makers deciding in March 2022 to replace Russian fossil fuel imports with other fossil fuel imports and European coal energy production, as well as due to Russia being "a key supplier" of materials used for "clean energy technologies", the reactions to the war were projected in March 2022 to have an overall negative impact on the climate emissions pathway. However, a July 2022 report from three German science academies noted that if Russian natural gas imports were to cease in the next few months, around 25% of Europe's natural gas demand could not be met at peak times for a winter similar to that in 2021; moreover, that shortfall is due to a lack of transport infrastructure such as pipeline capacity and liquefied natural gas (LNG) terminals, and that this supply gap can be closed by 2025 if natural gas consumption falls by 20% across Europe and infrastructure is expanded simultaneously.
Afully open study from Zero Lab at Princeton University published in July2022 and based on the GenX framework concluded that reliance on Russia gas could end by October 2022 under the three core scenarios they investigatedwhich ranged from high coal usage to accelerated renewables deployment. All three cases would result in falling greenhouse gas emissions, relative to business as usual. In 2022, Turkish President Recep Tayyip Erdoğan and Russian President Vladimir Putin planned for Turkey to become an energy hub for all of Europe. According to Aura Săbăduș, a senior energy journalist focusing on the Black Sea region, "Turkey would accumulate gas from various producers — Russia, Iran and Azerbaijan, [liquefied natural gas] and its own Black Sea gas — and then whitewash it and relabel it as Turkish. European buyers wouldn’t know the origin of the gas." Alternate supplies In May 2022 small natural gas exporter Peru increased its export of liquified natural gas to Europe, especially to Spain and the United Kingdom in the first five months of 2022, by 74% compared to the same period in 2021. As of September 2022, Norway, the second largest non-EU provider of gas to the EU after Russia for several decades, has been constrained by its pipeline network's structural (maximum) capacities. Prior to the opening of the Baltic Pipe, which became partially operational on 26 September 2022, Norway could only increase the volume of deliveries of natural gas to try to compensate Russia's disrupted supplies by a maximum of to Europe, a far cry from Russia's supplies of of natural gas to the EU in 2021. On 20 May 2022, Germany and Qatar signed a declaration to deepen their energy partnership. Qatar plans to start supplying LNG to Germany in 2024. The total and operational capacity of global LNG tanker fleet as of 2021 of about was already operating at full capacity before the 2022 gas disputes. The UK and EU consume about of gas per year. Although the UK has three LNG terminals for gasification, much of the EU has insufficient tankers to meet its needs. In late 2022, the high price attracted more tankers than available LNG import capacity. In 2023 96% of EU imports of Russian LNG went to the Netherlands, Belgium, France, and Spain. The Netherlands imported very little and is quickly phasing it out. Spain has decided to not renew any LNG contracts. France and Belgium have an interest in TotalEnergies Yamal LNG project, but have not said whether they will increase or decrease purchases. On 15 June 2022, Israel, Egypt and the European Union signed a trilateral natural gas agreement. On 20 June 2022, Dutch climate and energy minister Rob Jetten announced that the Netherlands would remove all restrictions on the operation of coal-fired power stations until at least 2024 in response to Russia's refusal to export natural gas to the country. Operations were previously limited to less than a third of the total production. In January 2023, Bulgaria signed an agreement with Turkey, which would allow it to import significant amounts of Turkish gas. The European Commission expressed serious concerns about this deal, since much of the gas imported from it could have originated in Russia. Bulgaria has been re-exporting significant amounts gas to the wider Southeastern European region as well. Bulgaria says that the agreement with Turkey enables Bulgaria to buy LNG on the open market and for the liquified gas to be delivered to Turkey and returned to a gaseous state for pumping via the pipeline to Bulgaria. 
The agreement is for the use of LNG terminals and pumping gas from those terminals to Bulgaria. By 2023, EU had an LNG import capacity of , in excess of what had been utilized in 2022. Some terminals were fully booked for 2023, while others had open time slots for further import. Price cap sanction The European Energy ministers agreed, on 19 December 2022, on a price cap for natural gas at €180 per megawatt-hour. The objective being to stabilise and avoid major upward fluctuations in gas prices. Position July 2023 Gas pipeline routes from Russia As of January 2025 only One of the five major pipelines connecting Russia and Europe are still operational: Nord Stream 1ceased September 2022both pipes damaged by explosions Nord Stream 2never usedone pipe damaged by explosionone theoretically possible to use Yamalthrough Belarus and Polandceased May 2022Poland took over the 48% Gazprom share of the pipe in Poland in 2023 and does not want to reopen the pipeline Ukraine transitone route operational with around , the other is closed as it is in the war zone. Contract runs until end of 2024 when Ukraine says it will then stop the flow of Russian gas. Gas transits Ukraine to Slovakia () and on to Austria () and Hungary () Turkstreamvia Turkey and Bulgaria, capacity of , operational with around transiting on to Serbia (), Hungary (), Bosnia (), North Macedonia (), and Greece () In 2021, Russian piped gas supplied to Europe was , in 2022 it was around and just in 2023. Contractual position Gas companies in all EU countries had contracts with Gazprom to supply certain amounts of gas for a set number of years, most have not been fulfilled. The position in July 2023 was contracts falling into three categories: Terminatedaround have been terminatedbeing those that had reached the end of their contract period or had been officially terminated Poland, , expired late 2022, supplies suspended in April 2022 after Poland refused to pay in rubles Bulgaria, , expired late 2022, supplies suspended in May 2022 after Bulgaria refused to pay in rubles Finland, , supplies suspended in May 2022 after refusal to pay in rubles, Gazprom went to arbitration against Gasum for the unpaid gas, which arbitration decided was payable, but not in rubles. Czech Republic and Slovenia expired late 2022 but suffered shortfalls in delivery after Nord Stream ceased operating. The Czech gas company ČEZ Group went to ICC arbitration to recover damages of $45 million. Under legal reviewaround with contracts running to 2030 to 2035 are short or no longer being supplied by Gazprom. Germany, , supplies ceased due to Nord Stream or the refusal to pay in rubles resulting in supplies being terminated. 
Many companies are seeking arbitration, Uniper is claiming €11.6 billion compensation from Gazprom Italy, , received around 15% of contracted amount, even though it was prepared to pay in rubles France, , suffered reduced supplies after Nord Stream stopped, Engie opened arbitration proceedings in February 2023 for short delivery Denmark, , Ørsted had its supply suspended in July 2022 after refusing to pay in rubles Active with 10 contracts generally receiving their contracted amounts, with minor interruptions via Ukraine, which may terminate in December 2024 when transit agreement ends Austria, , contracted until 2040, with OMV paying in rubles Slovakia, , contracted until 2028, paying in rubles Hungary, , part of national supply via Turkstream, routing though Bulgaria, which dramatically increased their transit fee in late 2023 to 20 levs (€10.22) per MWh. Hungary, , increased in August 2022 with an extra contract Serbia, , contracted until 2026 Croatia, , contracted until 2027 North Macedonia, , contracted until 2027 Bosnia, , 1 year deal Greece, , different contract dates, some as LNG Overall impact of dispute The dispute arose because of the Russian invasion of Ukraine in February 2022, with Russia using the EU's reliance on Russian gas as leverage against the EU to move the EU away from Ukraine and reduce sanctions being applied against Russia. The effect of the European Union response to the dispute is clear when you consider that European gas prices went above €200/MWh in 2022, but by August 2023 the price had dropped to around €30, and with EU gas storage near full, some companies began storing gas in war-torn Ukraine. EU autumn gas storage goals were achieved early in 2022 and 2023, and levels were at a record high of 59% at the end of winter on 1 April 2024. The mass departure of most EU countries from using Russian piped gas in 2022, whether voluntarily or being forced due to Russia ceasing supplies, will have a long term impact on both the EU and Russia, with Russia losing both political influence and massive amounts of taxation and company profits. Russia is unlikely to make inroads into the EU's future energy policy. The EU found new suppliers in 2022, initially at a higher price, which had a cost, including high inflation; however, the price of world gas had fallen to acceptable levels by 2023, and with the EU determined to reduce reliance on fossil fuels, a major boost has been given to renewable energy sources. Termination of Russian gas supplies On 1 January 2025, after the expiration of the contract between Russian Gazprom and Ukrainian Naftohaz, the supply of Russian gas to the European Union via Ukraine was terminated. Moldava and Transistria Moldova, which is not part of the European Union, has been seriously affected by the end of the transit agreement. In Transnistria, a breakaway region of Moldova, more than 51 families are without gas and 1,500 apartments without heating after the gas supply via Ukraine was interrupted, the authorities in Tiraspol announced. In a message on Telegram, the Moldovan government has offered to help, but Transnistrian officials have rejected the offer. The Moldovan prime minister accused Russia of provoking a humanitarian crisis in the region in order to destabilise the pro-European government. Parliamentary elections are scheduled for this autumn in this republic located between Ukraine and Romania. 
Prime Minister of Slovakia Robert Fico Slovakia's left-wing populist Prime Minister Robert Fico has said that cutting off Russian gas supplies to Ukraine is a unilateral measure that harms Slovak and EU interests and that he will therefore use his veto power on EU issues that require unanimity. See also German economic crisis (2022–present) Strategic natural gas reserve Lukoil oil transit dispute Notes References 2022 in economic history 2023 in economic history 2022 in international relations 2023 in international relations 2022 in Russia 2023 in Russia 2022 in the European Union 2023 in the European Union Events affected by the Russian invasion of Ukraine Energy crises Energy policy Energy policy of Russia Price disputes involving Gazprom Natural gas in Russia Natural resource conflicts Russia–European Union relations Vladimir Putin Von der Leyen Commission
2022–2023 Russia–European Union gas dispute
Environmental_science
5,045
36,053,717
https://en.wikipedia.org/wiki/Endogenous%20regeneration
Endogenous regeneration in the brain is the ability of cells to engage in the repair and regeneration process. While the brain has a limited capacity for regeneration, endogenous neural stem cells, as well as numerous pro-regenerative molecules, can participate in replacing and repairing damaged or diseased neurons and glial cells. Another benefit that can be achieved by using endogenous regeneration could be avoiding an immune response from the host. Neural stem cells in the adult brain During the early development of a human, neural stem cells lie in the germinal layer of the developing brain, ventricular and subventricular zones. In brain development, multipotent stem cells (those that can generate different types of cells) are present in these regions, and all of these cells differentiate into neural cell forms, such as neurons, oligodendrocytes and astrocytes. A long-held belief states that the multipotency of neural stem cells would be lost in the adult human brain. However, it is only in vitro, using neurosphere and adherent monolayer cultures, that stem cells from the adult mammalian brain have shown multipotent capacity, while the in vivo study is not convincing. Therefore, the term "neural progenitor" is used instead of "stem cell" to describe limited regeneration ability in the adult brain stem cell. Neural stem cells (NSC) reside in the subventricular zone (SVZ) of the adult human brain and the dentate gyrus of the adult mammalian hippocampus. Newly formed neurons from these regions participate in learning, memory, olfaction and mood modulation. It has not been definitively determined whether or not these stem cells are multipotents. NSC from the hippocampus of rodents, which can differentiate into dentate granule cells, have developed into many cell types when studied in culture. However, another in vivo study, using NSCs in the postnatal SVZ, showed that the stem cell is restricted to developing into different neuronal sub-type cells in the olfactory bulb. It is believed that the various spatial location niches regulate the differentiation of the neural stem cell. Neurogenesis in the central nervous system Santiago Ramon y Cajal, a neuroscience pioneer, concluded that the generation of neurons occurs only in the prenatal phase of human development, not after birth. This theory had long been the fundamental principle of neuroscience. However, in the mid-20th century, evidence of adult mammalian neurogenesis was found in rodent hippocampus and other region of the brain. In the intact adult mammalian brain, neuroregeneration maintains the function and structure of the central nervous system (CNS). The most adult stem cells in the brain are found in the subventricular zone at the lateral walls of the lateral ventricle. Another region where neurogenesis takes place in the adult brain is the subgranular zone (SGZ) of the dentate gyrus in the hippocampus. While the exact mechanism that maintains functional NSCs in these regions is still unknown, NSCs have shown an ability to restore neurons and glia in response to certain pathological conditions. However, so far, this regeneration by NSCs is insufficient to restore the full function and structure of an injured brain. However, endogenous neuroregeneration, unlike using embryonic stem cell implantation, is anticipated to treat damaged CNS without immunogenesis or tumorigenesis. Neurogenesis in subgranular zone Progenitor cells in the dentate gyrus of the hippocampus migrate to the nearby location and differentiate into granule cells. 
As a part of the limbic system, new neurons of the hippocampus maintain the function of controlling mood, learning and memory. In the dentate gyrus, putative stem cells, called type 1 cells, proliferate into type 2 and type 3 cells, which are transiently amplifying, lineage-determined progenitor cells. Type 1 cells in the hippocampus are multipotent in vitro. However, although there is evidence that both new neurons and glia are generated in the hippocampus in vivo, no exact relationship of neurogenesis to type 1 cells is shown. In the hippocampus, newly formed neurons contribute only a small portion to the entire neuron population. These new neurons have different electrophysiology compared to the rest of the existing neurons. This may be evidence that generating new neurons in the SGZ is part of learning and memorizing activity of mammals. Several studies have been performed to explain the relationship between neurogenesis and learning. In the case of learning, that related to the hippocampal function, a significantly increased number of neurons are generated in the SGZ and survival of the new neurons is increased if they are required for retention of memory. In addition to learning and memorizing, neurogenesis in the SGZ is also affected by mood and emotion. With constant, inescapable stress, which usually results in emotional depression, there is a significant decrease in neurogenesis, the effect of which can be reversed by treatment with fluoxetine. Neurogenesis in subventricular zone The largest NSC population in the brain is found in the SVZ. The SVZ is considered a micro-environment called a "stem cell niche" that retains the NSC's capacity of self-renewing and multipotency. Basic fibroblast growth factor (FGF2), hepatocyte growth factor (HGF), Notch-1, sonic hedgehog (SHH), noggin, ciliary neurotrophic factor (CNTF), and a soluble carbohydrate-binding protein, Galectin-1, are reported as factors that maintain such properties of NSC in stem cell niche. Like stem cells in SGZ, progenitor cells in SVZ also differentiate into neurons and form an intermediate cell called a transiently amplifying cell (TAC). A recent study revealed that beta-catenin signaling, Wnt β-catenin, regulates the differentiation of TAC. NSCs in the SVZ have a distinct capacity to migrate into the olfactory bulb in the anterior tip of the telencephalon by a pathway called the rostral migratory stream (RMS). This migration is unique to new neurons in the SVZ that embryonic neurogenesis and neurogenesis at other region of the brain are not able to perform. Another unique neurogenesis in the SVZ is neurogenesis by astrocytes. A study done by Doetsch (1999) showed that astrocytes in the SVZ can be dedifferentiate and differentiate into neurons in the olfactory bulb. Among four types of cells in the SVZ (migrating neuroblasts, immature precursors, astrocytes, and ependymal cells), migrating neuroblasts and immature precursors are silenced with the anti-mitotic agent and astrocytes are infected with a retrovirus. In the result, neurons that have the retrovirus are found in the olfactory bulb. Factors affecting neurogenesis Neurogenesis in the adult mammalian brain is affected by various factors, including exercise, stroke, brain insult and pharmacological treatments. 
For example, kainic acid-induced seizures, antidepressant (fluoxetine), neurotransmitters such as GABA and growth factors (fibroblast growth factors (FGFs), epidermal growth factor (EGF), neuregulins (NRGs), vascular endothelial growth factor (VEGF), and pigment epithelium-derived factor (PEDF) induce formation of neuroblasts. The final destination of NSCs is determined by "niche" signals. Wnt signaling drives NSCs to the formation of new neurons in the SGZ, whereas bone morphogenic proteins (BMPs) promote NSC differentiation into glia cells in the SVZ. However, in the case of brain injury, neurogenesis seems insufficient to repair damaged neurons. Thus, Cajal's theory was accepted for a long time. In actuality, in the intercranial physiological condition, many neurogenesis inhibitors are present (for example, axon growth-inhibitory ligands expressed in oligodendrocytes, myelin, NG2-glia, and reactive astrocytes in the lesion and degenerating tracts, and fibroblasts in scar tissue). The inhibitory ligands bind to growth cone receptors on a damaged neuron, which causes repulsion and collapse of the growth cone in the damaged regions. Among inhibitory factors, oligodendrocyte and myelin-derived inhibitory ligands are membrane-bound, meaning that, in the case of injury, those factors are not upregulated or overexpressed, rather it is from direct contact between intact or degraded myelin (or oligodendrocytes) and newly forming neurons. Nevertheless, with scar formation, many cell types in the brain release growth-inhibitory ligands such as basal lamina components, inhibitory axon guidance molecules and chondroitin sulfate proteoglycans. Inhibitory action of such factors may be a protection of the brain from inflammation. Okano and Sawamoto used an astrocyte-selective conditional Stat3-deficient mice model to examine the role of reactive astrocytes. The result was increased widespread CD11b-positive inflammatory cell invasion and demyelination. Application Brain damage itself can induce endogenous regeneration. Many studies have proven endogenous regeneration as a possible treatment of brain damage. However, the inhibitory reaction of the surrounding tissue of damaged region must be overcome before the treatment produces significant improvement. Traumatic brain injury In the study of the endogenous regeneration of the brain done by Scharff and co-researchers, damaged neurons in a songbird brain are regenerated with the same neuronal types where regeneration occurs (in the case of the study, the hippocampus). However, in places where normal regeneration of the neuron does not occur, there was no replacement of damaged neurons. Thus, recovering brain function after a brain injury was supposed to have limitations. However, a current study revealed that neurons are repaired to some degree after damage, from the SVZ. The migrating ability of progenitor cells in the SVZ form chain-like structures and laterally move progenitor cells towards the injured region. Along with progenitor cells, thin astrocytic processes and blood vessels also play an important role in the migration of neuroblasts, suggesting that the blood vessels may act as a scaffold. Other factors that contribute of the migration are slit proteins (produced at the choroid plexus) and their gradient (generated by the flow of cerebrospinal fluid). However, only 0.2% of new neurons survived and functioned in this study. 
Neurogenesis can be enhanced by injecting growth factors such as fibroblast growth factor-2 (FGF-2) and epidermal growth factor (EGF). However, enhanced neurogenesis also carries a risk of epilepsy, with prolonged seizures. Parkinson's Disease Although endogenous regeneration methods are showing some promising evidence in treating brain ischemia, the current body of knowledge regarding promoting and inhibiting endogenous regeneration is not sufficient to treat Parkinson's disease. Both extrinsic and intrinsic modulation of pathological and physiological stimulation prevent progenitor cells from differentiating into dopamine cells. Further research must be done to understand the factors that affect progenitor cell differentiation in order to treat Parkinson's disease. Despite the difficulties in replacing compromised dopamine neurons through endogenous sources, recent work suggests that pharmacological activation of endogenous neural stem cells or neural precursor cells results in powerful neuronal rescue and motor skill improvements through a signal transduction pathway that involves the phosphorylation of STAT3 on the serine residue and a subsequent increase in Hes3 expression (STAT3-Ser/Hes3 Signaling Axis). References Developmental biology
Endogenous regeneration
Biology
2,534
378,661
https://en.wikipedia.org/wiki/Thermoregulation
Thermoregulation is the ability of an organism to keep its body temperature within certain boundaries, even when the surrounding temperature is very different. A thermoconforming organism, by contrast, simply adopts the surrounding temperature as its own body temperature, thus avoiding the need for internal thermoregulation. The internal thermoregulation process is one aspect of homeostasis: a state of dynamic stability in an organism's internal conditions, maintained far from thermal equilibrium with its environment (the study of such processes in zoology has been called physiological ecology). If the body is unable to maintain a normal temperature and it increases significantly above normal, a condition known as hyperthermia occurs. Humans may also experience lethal hyperthermia when the wet bulb temperature is sustained above for six hours. Work in 2022 established by experiment that a wet-bulb temperature exceeding 30.55°C caused uncompensable heat stress in young, healthy adult humans. The opposite condition, when body temperature decreases below normal levels, is known as hypothermia. It results when the homeostatic control mechanisms of heat within the body malfunction, causing the body to lose heat faster than producing it. Normal body temperature is around 37°C (98.6°F), and hypothermia sets in when the core body temperature gets lower than . Usually caused by prolonged exposure to cold temperatures, hypothermia is usually treated by methods that attempt to raise the body temperature back to a normal range. It was not until the introduction of thermometers that any exact data on the temperature of animals could be obtained. It was then found that local differences were present, since heat production and heat loss vary considerably in different parts of the body, although the circulation of the blood tends to bring about a mean temperature of the internal parts. Hence it is important to identify the parts of the body that most closely reflect the temperature of the internal organs. Also, for such results to be comparable, the measurements must be conducted under comparable conditions. The rectum has traditionally been considered to reflect most accurately the temperature of internal parts, or in some cases of sex or species, the vagina, uterus or bladder. Some animals undergo one of various forms of dormancy where the thermoregulation process temporarily allows the body temperature to drop, thereby conserving energy. Examples include hibernating bears and torpor in bats. Classification of animals by thermal characteristics Endothermy vs. ectothermy Thermoregulation in organisms runs along a spectrum from endothermy to ectothermy. Endotherms create most of their heat via metabolic processes and are colloquially referred to as warm-blooded. When the surrounding temperatures are cold, endotherms increase metabolic heat production to keep their body temperature constant, thus making the internal body temperature of an endotherm more or less independent of the temperature of the environment. Endotherms possess a larger number of mitochondria per cell than ectotherms, enabling them to generate more heat by increasing the rate at which they metabolize fats and sugars. Ectotherms use external sources of temperature to regulate their body temperatures. They are colloquially referred to as cold-blooded despite the fact that body temperatures often stay within the same temperature ranges as warm-blooded animals. 
Ectotherms are the opposite of endotherms when it comes to regulating internal temperatures. In ectotherms, the internal physiological sources of heat are of negligible importance; the biggest factor that enables them to maintain adequate body temperatures is due to environmental influences. Living in areas that maintain a constant temperature throughout the year, like the tropics or the ocean, has enabled ectotherms to develop behavioral mechanisms that respond to external temperatures, such as sun-bathing to increase body temperature, or seeking the cover of shade to lower body temperature. Ectotherms Ectothermic cooling Vaporization: Evaporation of sweat and other bodily fluids. Convection: Increasing blood flow to body surfaces to maximize heat transfer across the advective gradient. Conduction: Losing heat by being in contact with a colder surface. For instance: Lying on cool ground. Staying wet in a river, lake or sea. Covering in cool mud. Radiation: Releasing heat by radiating it away from the body. Ectothermic heating (or minimizing heat loss) Convection: Climbing to higher ground up trees, ridges, rocks. Entering a warm water or air current. Building an insulated nest or burrow. Conduction: Lying on a hot surface. Radiation: Lying in the sun (heating this way is affected by the body's angle in relation to the sun). Folding skin to reduce exposure. Concealing wing surfaces. Exposing wing surfaces. Insulation: Changing shape to alter surface/volume ratio. Inflating the body. To cope with low temperatures, some fish have developed the ability to remain functional even when the water temperature is below freezing; some use natural antifreeze or antifreeze proteins to resist ice crystal formation in their tissues. Amphibians and reptiles cope with heat gain by evaporative cooling and behavioral adaptations. An example of behavioral adaptation is that of a lizard lying in the sun on a hot rock in order to heat through radiation and conduction. Endothermy An endotherm is an animal that regulates its own body temperature, typically by keeping it at a constant level. To regulate body temperature, an organism may need to prevent heat gains in arid environments. Evaporation of water, either across respiratory surfaces or across the skin in those animals possessing sweat glands, helps in cooling body temperature to within the organism's tolerance range. Animals with a body covered by fur have limited ability to sweat, relying heavily on panting to increase evaporation of water across the moist surfaces of the lungs and the tongue and mouth. Mammals like cats, dogs and pigs, rely on panting or other means for thermal regulation and have sweat glands only in foot pads and snout. The sweat produced on pads of paws and on palms and soles mostly serves to increase friction and enhance grip. Birds also counteract overheating by gular fluttering, or rapid vibrations of the gular (throat) skin. Down feathers trap warm air acting as excellent insulators just as hair in mammals acts as a good insulator. Mammalian skin is much thicker than that of birds and often has a continuous layer of insulating fat beneath the dermis. In marine mammals, such as whales, or animals that live in very cold regions, such as the polar bears, this is called blubber. Dense coats found in desert endotherms also aid in preventing heat gain such as in the case of the camels. 
A cold weather strategy is to temporarily decrease metabolic rate, decreasing the temperature difference between the animal and the air and thereby minimizing heat loss. Furthermore, having a lower metabolic rate is less energetically expensive. Many animals survive cold frosty nights through torpor, a short-term temporary drop in body temperature. Organisms, when presented with the problem of regulating body temperature, have not only behavioural, physiological, and structural adaptations but also a feedback system to trigger these adaptations to regulate temperature accordingly. The main features of this system are stimulus, receptor, modulator, effector and then the feedback of the newly adjusted temperature to the stimulus. This cyclical process aids in homeostasis. Homeothermy compared with poikilothermy Homeothermy and poikilothermy refer to how stable an organism's deep-body temperature is. Most endothermic organisms are homeothermic, like mammals. However, animals with facultative endothermy are often poikilothermic, meaning their temperature can vary considerably. Most fish are ectotherms, as most of their heat comes from the surrounding water. However, almost all fish are poikilothermic. Beetles The physiology of the Dendroctonus micans beetle encompasses a suite of adaptations crucial for its survival and reproduction. Flight capabilities enable them to disperse and locate new host trees, while sensory organs aid in detecting environmental cues and food sources. Of particular importance is their ability to thermoregulate, ensuring optimal body temperature in fluctuating forest conditions. This physiological mechanism, coupled with thermosensation, allows them to thrive across diverse environments. Overall, these adaptations underscore the beetle's remarkable resilience and highlight the significance of understanding their physiology for effective management and conservation efforts. Vertebrates By numerous observations upon humans and other animals, John Hunter showed that the essential difference between the so-called warm-blooded and cold-blooded animals lies in observed constancy of the temperature of the former, and the observed variability of the temperature of the latter. Almost all birds and mammals have a high temperature almost constant and independent of that of the surrounding air (homeothermy). Almost all other animals display a variation of body temperature, dependent on their surroundings (poikilothermy). Brain control Thermoregulation in both ectotherms and endotherms is controlled mainly by the preoptic area of the anterior hypothalamus. Such homeostatic control is separate from the sensation of temperature. In birds and mammals In cold environments, birds and mammals employ the following adaptations and strategies to minimize heat loss: Using small smooth muscles (arrector pili in mammals), which are attached to feather or hair shafts; this distorts the surface of the skin making feather/hair shaft stand erect (called goose bumps or goose pimples) which slows the movement of air across the skin and minimizes heat loss. 
Increasing body size to more easily maintain core body temperature (warm-blooded animals in cold climates tend to be larger than similar species in warmer climates (see Bergmann's rule)) Having the ability to store energy as fat for metabolism Having shortened extremities Having countercurrent blood flow in extremities – this is where the warm arterial blood travelling to the limb passes the cooler venous blood from the limb and heat is exchanged, warming the venous blood and cooling the arterial blood (e.g., Arctic wolf or penguins) In warm environments, birds and mammals employ the following adaptations and strategies to maximize heat loss: Behavioural adaptations like living in burrows during the day and being nocturnal Evaporative cooling by perspiration and panting Storing fat reserves in one place (e.g., camel's hump) to avoid its insulating effect Elongated, often vascularized extremities to conduct body heat to the air In humans As in other mammals, thermoregulation is an important aspect of human homeostasis. Most body heat is generated in the deep organs, especially the liver, brain, and heart, and in contraction of skeletal muscles. Humans have been able to adapt to a great diversity of climates, including hot humid and hot arid. High temperatures pose serious stresses for the human body, placing it in great danger of injury or even death. For example, one of the most common reactions to hot temperatures is heat exhaustion, an illness that can occur if one is exposed to high temperatures, resulting in symptoms such as dizziness, fainting, or a rapid heartbeat. For humans, adaptation to varying climatic conditions includes both physiological mechanisms resulting from evolution and behavioural mechanisms resulting from conscious cultural adaptations. The physiological control of the body's core temperature takes place primarily through the hypothalamus, which assumes the role of the body's "thermostat". This organ possesses control mechanisms as well as key temperature sensors, which are connected to nerve cells called thermoreceptors. Thermoreceptors come in two subcategories: ones that respond to cold temperatures and ones that respond to warm temperatures. Scattered throughout the body in both peripheral and central nervous systems, these nerve cells are sensitive to changes in temperature and are able to provide useful information to the hypothalamus through the process of negative feedback, thus maintaining a constant core temperature. There are four avenues of heat loss: evaporation, convection, conduction, and radiation. If skin temperature is greater than that of the surrounding air, the body can lose heat by convection and conduction. However, if the air temperature of the surroundings is greater than that of the skin, the body gains heat by convection and conduction. In such conditions, the only means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise (a schematic sketch of this heat-exchange logic is given below). During intense physical activity (e.g. sports), evaporation becomes the main avenue of heat loss. Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss. In reptiles Thermoregulation is also an integral part of a reptile's life, specifically lizards such as Microlophus occipitalis and Ctenophorus decresii which must change microhabitats to keep a constant body temperature.
By moving to cooler areas when it is too hot and to warmer areas when it is cold, they can keep their body temperature within its necessary bounds. In plants Thermogenesis occurs in the flowers of many plants in the family Araceae as well as in cycad cones. In addition, the sacred lotus (Nelumbo nucifera) is able to thermoregulate itself, remaining on average above air temperature while flowering. Heat is produced by breaking down the starch that was stored in its roots, which requires the consumption of oxygen at a rate approaching that of a flying hummingbird. One possible explanation for plant thermoregulation is to provide protection against cold temperature. For example, the skunk cabbage is not frost-resistant, yet it begins to grow and flower when there is still snow on the ground. Another theory is that thermogenicity helps attract pollinators, which is borne out by observations that heat production is accompanied by the arrival of beetles or flies. Some plants are known to protect themselves against colder temperatures using antifreeze proteins. This occurs in wheat (Triticum aestivum), potatoes (Solanum tuberosum) and several other angiosperm species. Behavioral temperature regulation Animals other than humans regulate and maintain their body temperature with physiological adjustments and behavior. Desert lizards are ectotherms, and therefore are unable to regulate their internal temperature physiologically. To regulate their internal temperature, many lizards instead relocate to a more environmentally favorable location. They may do this in the morning only by raising their heads from their burrows and then exposing their entire bodies. By basking in the sun, the lizard absorbs solar heat. It may also absorb heat by conduction from heated rocks that have stored radiant solar energy. To lower their temperature, lizards exhibit varied behaviors. Sand seas, or ergs, can reach extremely high surface temperatures, and the sand lizard will hold its feet up in the air to cool down, seek cooler objects with which to contact, find shade, or return to its burrow. They also go to their burrows to avoid cooling when the temperature falls. Aquatic animals can also regulate their temperature behaviorally by changing their position in the thermal gradient. Sprawling prone in a cool shady spot, "splooting," has been observed in squirrels on hot days. Animals also engage in kleptothermy in which they share or steal each other's body warmth. Kleptothermy is observed, particularly amongst juveniles, in endotherms such as bats and birds (such as the mousebird and emperor penguin). This allows the individuals to increase their thermal inertia (as with gigantothermy) and so reduce heat loss. Some ectotherms share burrows of ectotherms. Other animals exploit termite mounds. Some animals living in cold environments maintain their body temperature by preventing heat loss. Their fur grows more densely to increase the amount of insulation. Some animals are regionally heterothermic and are able to allow their less insulated extremities to cool to temperatures much lower than their core temperature, nearly to the freezing point. This minimizes heat loss through less insulated body parts, like the legs, feet (or hooves), and nose. Different species of Drosophila found in the Sonoran Desert will exploit different species of cacti based on the thermotolerance differences between species and hosts. For example, Drosophila mettleri is found in cacti like the saguaro and senita; these two cacti remain cool by storing water.
Over time, the genes conferring higher heat tolerance were reduced in the population due to the cooler host climate the fly is able to exploit. Some flies, such as Lucilia sericata, lay their eggs en masse. The resulting group of larvae, depending on its size, is able to thermoregulate and keep itself at the optimum temperature for development. Koalas also can behaviorally thermoregulate by seeking out cooler portions of trees on hot days. They preferentially wrap themselves around the coolest portions of trees, typically near the bottom, to increase their passive radiation of internal body heat. Hibernation, estivation and daily torpor To cope with limited food resources and low temperatures, some mammals hibernate during cold periods. To remain in "stasis" for long periods, these animals build up brown fat reserves and slow all body functions. True hibernators (e.g., groundhogs) keep their body temperatures low throughout hibernation, whereas the core temperature of false hibernators (e.g., bears) varies; occasionally the animal may emerge from its den for brief periods. Some bats are true hibernators and rely upon a rapid, non-shivering thermogenesis of their brown fat deposit to bring them out of hibernation. Estivation is similar to hibernation; however, it usually occurs in hot periods to allow animals to avoid high temperatures and desiccation. Both terrestrial and aquatic invertebrates and vertebrates enter into estivation. Examples include lady beetles (Coccinellidae), North American desert tortoises, crocodiles, salamanders, cane toads, and the water-holding frog. Daily torpor occurs in small endotherms like bats and hummingbirds, which temporarily reduce their high metabolic rates to conserve energy. Variation in animals Normal human temperature Previously, average oral temperature for healthy adults had been considered 37.0 °C (98.6 °F), while normal ranges are . In Poland and Russia, the temperature had been measured axillarily (under the arm); 36.6 °C was considered the "ideal" temperature in these countries, while normal ranges are . Recent studies suggest that the average temperature for healthy adults is 36.8 °C (same result in three different studies). Variations (one standard deviation) from three other studies are: for males, for females Measured temperature varies according to thermometer placement, with rectal temperature being higher than oral temperature, while axillary temperature is lower than oral temperature. The average difference between oral and axillary temperatures of Indian children aged 6–12 was found to be only 0.1 °C (standard deviation 0.2 °C), and the mean difference in Maltese children aged 4–14 between oral and axillary temperature was 0.56 °C, while the mean difference between rectal and axillary temperature for children under 4 years old was 0.38 °C. Variations due to circadian rhythms In humans, a diurnal variation has been observed dependent on the periods of rest and activity, lowest at 11 p.m. to 3 a.m. and peaking at 10 a.m. to 6 p.m. Monkeys also have a well-marked and regular diurnal variation of body temperature that follows periods of rest and activity, and is not dependent on the incidence of day and night; nocturnal monkeys reach their highest body temperature at night and lowest during the day. Sutherland Simpson and J.J.
Galbraith observed that all nocturnal animals and birds – whose periods of rest and activity are naturally reversed through habit and not from outside interference – experience their highest temperature during the natural period of activity (night) and lowest during the period of rest (day). These diurnal variations can be reversed by reversing the animals' daily routine. In essence, the temperature curve of diurnal birds is similar to that of humans and other homeothermic animals, except that the maximum occurs earlier in the afternoon and the minimum earlier in the morning. Also, the curves obtained from rabbits, guinea pigs, and dogs were quite similar to those from humans. These observations indicate that body temperature is partially regulated by circadian rhythms. Variations due to human menstrual cycles During the follicular phase (which lasts from the first day of menstruation until the day of ovulation), the average basal body temperature in women ranges from . Within 24 hours of ovulation, women experience an elevation of due to the increased metabolic rate caused by sharply elevated levels of progesterone. The basal body temperature ranges between throughout the luteal phase, and drops down to pre-ovulatory levels within a few days of menstruation. Women can chart this phenomenon to determine whether and when they are ovulating, so as to aid conception or contraception. Variations due to fever Fever is a regulated elevation of the set point of core temperature in the hypothalamus, caused by circulating pyrogens produced by the immune system. To the subject, a rise in core temperature due to fever may result in feeling cold in an environment where people without fever do not. Variations due to biofeedback Some monks are known to practice Tummo, a biofeedback meditation technique that allows them to raise their body temperatures substantially. Effect on lifespan The effects of genetic changes in body temperature on longevity are difficult to study in humans. Limits compatible with life There are limits both of heat and cold that an endothermic animal can bear and other far wider limits that an ectothermic animal may endure and yet live. The effect of too extreme a cold is to decrease metabolism, and hence to lessen the production of heat. Both catabolic and anabolic pathways share in this metabolic depression, and, though less energy is used up, still less energy is generated. The effects of this diminished metabolism become telling on the central nervous system first, especially the brain and those parts concerning consciousness; both heart rate and respiration rate decrease; judgment becomes impaired as drowsiness supervenes, becoming steadily deeper until the individual loses consciousness; without medical intervention, death by hypothermia quickly follows. Occasionally, however, convulsions may set in towards the end, and death is caused by asphyxia. In experiments on cats performed by Sutherland Simpson and Percy T. Herring, the animals were unable to survive when rectal temperature fell below . At this low temperature, respiration became increasingly feeble; heart-impulse usually continued after respiration had ceased, the beats becoming very irregular, appearing to cease, then beginning again. Death appeared to be mainly due to asphyxia, and the only certain sign that it had taken place was the loss of knee-jerks. However, too high a temperature speeds up the metabolism of different tissues to such a rate that their metabolic capital is soon exhausted.
Blood that is too warm produces dyspnea by exhausting the metabolic capital of the respiratory centre; heart rate is increased; the beats then become arrhythmic and eventually cease. The central nervous system is also profoundly affected by hyperthermia, and delirium and convulsions may set in. Consciousness may also be lost, propelling the person into a comatose condition. These changes can sometimes also be observed in patients experiencing an acute fever. Mammalian muscle becomes rigid with heat rigor at about 50 °C, with the sudden rigidity of the whole body rendering life impossible. H.M. Vernon performed work on the death temperature and paralysis temperature (temperature of heat rigor) of various animals. He found that species of the same class showed very similar temperature values, those from the Amphibia examined being 38.5 °C, fish 39 °C, reptiles 45 °C, and various molluscs 46 °C. Also, in the case of pelagic animals, he showed a relation between death temperature and the quantity of solid constituents of the body. In higher animals, however, his experiments tend to show that there is greater variation in both the chemical and physical characteristics of the protoplasm and, hence, greater variation in the extreme temperature compatible with life. A 2022 study on the effect of heat on young people found that the critical wet-bulb temperature at which heat stress can no longer be compensated (T_wb,crit), in young, healthy adults performing tasks at modest metabolic rates mimicking basic activities of daily life, was much lower than the 35 °C usually assumed: about 30.55 °C in humid environments of 36–40 °C, decreasing progressively in hotter, drier ambient environments. Arthropoda The maximum temperatures tolerated by certain thermophilic arthropods exceed the lethal temperatures for most vertebrates. The most heat-resistant insects are three genera of desert ants recorded from three different parts of the world. The ants have developed a lifestyle of scavenging for short durations during the hottest hours of the day, in excess of , for the carcasses of insects and other forms of life which have died from heat stress. In April 2014, the Southern Californian mite Paratarsotomus macropalpis was recorded as the world's fastest land animal relative to body length, at a speed of 322 body lengths per second. Besides the unusually great speed of the mites, the researchers were surprised to find the mites running at such speeds on concrete at temperatures up to , which is significant because this temperature is well above the lethal limit for the majority of animal species. In addition, the mites are able to stop and change direction very quickly. Spiders such as Nephila pilipes exhibit active thermoregulatory behavior. On hot, sunny days, the spider aligns its body with the direction of sunlight to reduce the area of its body exposed to direct sun. See also Human body temperature Innate heat Insect thermoregulation Thermal neutral zone References Further reading Textbook of Medical Physiology (previously Guyton's Textbook of Medical Physiology; earlier editions, back to at least the 5th edition of 1976, contain useful information on the subject of thermoregulation, the concepts of which have changed little in that time). Weldon Owen Pty Ltd. (1993). Encyclopedia of animals – Mammals, Birds, Reptiles, Amphibians. Reader's Digest Association, Inc. Pages 567–568.
External links Royal Institution Christmas Lectures 1998 Human homeostasis Animal physiology Heat transfer Articles containing video clips Mathematics in medicine
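The heat-exchange logic described under "In humans" above (heat is lost to the air by convection and conduction only while the skin is warmer than the surroundings; otherwise evaporation is the only remaining avenue) can be summarised in a short sketch. The following Python fragment is purely illustrative: the function name and the simplified rules are assumptions made for this example (radiation and many physiological details are ignored), and it is not drawn from any cited source.

def available_heat_loss_avenues(skin_temp_c, air_temp_c, evaporation_possible=True):
    # Convection and conduction shed heat only while the skin is warmer than the air.
    avenues = []
    if skin_temp_c > air_temp_c:
        avenues += ["convection", "conduction"]
    # Evaporation works regardless of the gradient, unless something
    # (for example, high humidity) prevents sweat from evaporating.
    if evaporation_possible:
        avenues.append("evaporation")
    return avenues

# Skin at 35 C in 40 C air with evaporation blocked: no avenue remains,
# so core temperature rises, as the article notes.
print(available_heat_loss_avenues(35.0, 40.0, evaporation_possible=False))  # []
print(available_heat_loss_avenues(35.0, 25.0))  # ['convection', 'conduction', 'evaporation']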
Thermoregulation
Physics,Chemistry,Mathematics,Biology
5,585
797,472
https://en.wikipedia.org/wiki/Bone%20ash
Bone ash is a white material produced by the calcination of bones. Typical bone ash consists of about 55.82% calcium oxide, 42.39% phosphorus pentoxide, and 1.79% water. The exact composition of these compounds varies depending upon the type of bones being used, but generally the formula for bone ash is Ca5(OH)(PO4)3. Bone ash usually has a density around 3.10 g/mL and a melting point of 1670 °C (3038 °F). Most bones retain their cellular structure through calcination. History Antiquity Burnt bones have been recovered from numerous Ancient Greek sanctuaries dating from the Late Bronze Age up to the Hellenistic period. The burnt bones are often calcined with a white or blueish color, allowing archaeologists to identify them as sacrificial remains. At the sanctuary to Artemis in Eretria a round altar of fieldstones filled with soil was found, dating to the 8th century BC. The upper surface was covered with clay and animal bones were burned on top, then apparently swept off the surface along with terracotta, metal objects, and pottery and trampled until the altar was eventually subsumed by the ritual debris. Some scholars have attributed these altars to chthonian rituals, but this is disputed. Xenocrates of Aphrodisias reported its use as a medicinal ingredient, although cannibalism was, according to Galen, prohibited under the laws of the Roman Empire. Uses Bone china Bone ash is a key raw material for bone china. Constituting around 50% of the body, it reacts with other raw materials in the body during firing to form, amongst other phases, anorthite. In preparation for use in bone china, bones undergo multiple processing stages, including: Removal of any meat before being degreased. Calcination to around 1000 °C (1832 °F). This will remove all organic material, and the bone is left sterilised. Being ground with water to fine particle size. Being partially dewatered. Since the 1990s, the use of synthetic alternatives to bone ash, which are based on dicalcium phosphate and tricalcium phosphate, has increased. Significant amounts of bone china are produced using these synthetic alternatives rather than bone ash. Fertilizers Bone ash can be used alone as an organic fertilizer or it can be treated with sulfuric acid to form a "single superphosphate" fertilizer which is more water soluble: Ca3(PO4)2 + 2 H2SO4 + 5 H2O → 2 CaSO4·2H2O + Ca(H2PO4)2·H2O Similarly, phosphoric acid can be used to form triple superphosphate, a more concentrated phosphorus fertilizer which excludes the gypsum content found in single superphosphate: Ca3(PO4)2 + 4 H3PO4 → 3 Ca(H2PO4)2 (A worked mass-balance sketch for the single superphosphate reaction is given below.) Metal casting Bone ash is used in foundries for various purposes. Examples include release agents and protective barriers for tools exposed to molten metal, and as a sealant for seams and cracks. Applied as a powder or water slurry, bone ash has many unique characteristics. First of all, the powder has high thermal stability, so it maintains its form at extremely high temperatures. The powder coating itself adheres to metal well and does not drip, run, cause much corrosion, or create noticeable streaks. Using the bone ash is easy as well, as it comes in a powder form, is easy to clean up, and does not separate into smaller parts (therefore requiring no extra mixing). Metallurgy Bone ash is a material often used in cupellation, a process by which precious metals (such as gold and silver) are separated from base metals.
In cupellation, base metals in an impure sample are oxidized with the help of lead and are vaporized and absorbed into a porous cupellation material, typically made of magnesium or calcium. This leaves the precious metals which do not oxidize behind. Bone ash's extremely porous and calcareous structure as well as its high melting point makes it an ideal candidate for cupellation. Analysis of bone ash The chemical analyses, determined by X-ray fluorescence and reported as %, of three samples of ceramic grade bone ash: In culture Bible From Isaiah: "And the people shall be as the burnings of lime: as thorns cut up shall they be burned in the fire" Its use is mentioned in the Book of Amos (2:1): "I will not turn away the punishment thereof, because he burned the bones of the King of Edom into lime." It was used in ancient formulas for white paint and cosmetic pigments, and in the cupellation process to separate silver from lead. See also Cremains - composed partially of human bone ash References Inorganic compounds Ash Organic fertilizers Animal products Types of ash
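As a worked illustration of the single superphosphate reaction quoted in the Fertilizers section above, the following Python sketch estimates how much sulfuric acid is consumed per kilogram of tricalcium phosphate. The molar masses are rounded approximations and the function name is invented for this example; the sketch is not drawn from any cited source.

# Approximate molar masses in g/mol.
MOLAR_MASS = {"Ca3(PO4)2": 310.2, "H2SO4": 98.1}

def sulfuric_acid_needed_kg(phosphate_kg=1.0):
    # The single superphosphate reaction uses 2 mol of H2SO4 per mol of Ca3(PO4)2.
    moles_phosphate = phosphate_kg * 1000.0 / MOLAR_MASS["Ca3(PO4)2"]
    moles_acid = 2.0 * moles_phosphate
    return moles_acid * MOLAR_MASS["H2SO4"] / 1000.0

print(round(sulfuric_acid_needed_kg(1.0), 2))  # roughly 0.63 kg of H2SO4 per kg of Ca3(PO4)2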
Bone ash
Chemistry
1,026
9,463,674
https://en.wikipedia.org/wiki/Vainu%20Bappu
Manali Kallat Vainu Bappu (10 August 1927 – 19 August 1982) was an Indian astronomer and president of the International Astronomical Union. Bappu helped to establish several astronomical institutions in India, including the Vainu Bappu Observatory which is named after him, and he also contributed to the establishment of the modern Indian Institute of Astrophysics. In 1957, he discovered the Wilson–Bappu effect jointly with American astronomer Olin Chaddock Wilson. On 2 July 1949, when Bappu was taking pictures of the night sky, he spotted a bright moving object which he correctly understood to be a comet. When he turned to his professor, Bart Bok, and colleague Gordon Newkirk, they confirmed the discovery. They calculated the orbit of the comet, which revealed that the comet would reappear only after 60,000 years. The International Astronomical Union officially named the comet the Bappu-Bok-Newkirk comet (C/1949N1). Bappu also received the Donohoe Comet Medal of the Astronomical Society of the Pacific. This is the only comet with an Indian name. Early life Vainu Bappu was born on 10 August 1927, in Chennai, as the only child of Manali Kukuzhi Bappu and Kallat Sunanna Bappu. His family originally hails from Thalassery in Kerala. His father was an astronomer at the Nizamiah Observatory in Telangana. He attended the Harvard Graduate School of Astronomy for his PhD after obtaining a postgraduate degree from Madras University. Discoveries Bappu, along with two of his colleagues, discovered the 'Bappu-Bok-Newkirk' comet. He was awarded the Donohoe Comet Medal by the Astronomical Society of the Pacific in 1949. In a paper published in 1957, American astronomer Olin Chaddock Wilson and Bappu described what would later be known as the Wilson–Bappu effect. The effect as described by L.V. Kuhi is: 'The width of the Ca II emission in normal, nonvariable, G, K, and M stars is correlated with the visual absolute magnitude in the sense that the brighter the star the wider the emission.' The paper opened up the field of stellar chromospheres for research (an illustrative sketch of fitting such a width–luminosity relation is given below). Vainu Bappu Observatory On his return to India, Bappu was appointed to head a team of astronomers to build an observatory at Nainital. His efforts toward building an indigenous large optical telescope and a research observatory led to the founding of the optical observatory at Kavalur and its large telescope. The Vainu Bappu Observatory is one of the main observatories of the Indian Institute of Astrophysics, also initiated in its modern avatar by Bappu in 1971. Later, a number of discoveries were made from the Vainu Bappu Observatory. Career overview See also Cosmic distance ladder References 1927 births Harvard University alumni Indian astrophysicists 1982 deaths Recipients of the Padma Bhushan in science & engineering People from Thalassery Scientists from Kerala 20th-century Indian astronomers Presidents of the International Astronomical Union
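The Wilson–Bappu effect described above is an empirical correlation: the brighter the star (the more negative its absolute visual magnitude), the wider its Ca II emission. The Python sketch below shows, with entirely made-up numbers, how such a width–luminosity relation could be calibrated with a simple least-squares fit; the data points and the resulting coefficients are illustrative assumptions only and are not the published calibration.

import numpy as np

# Hypothetical (log10 of K-line emission width in km/s, absolute visual magnitude) pairs.
log_width = np.array([1.4, 1.6, 1.8, 2.0, 2.2])
abs_mag = np.array([6.0, 3.5, 1.0, -1.5, -4.0])

# Fit M_V = a * log10(W) + b; a negative slope means wider emission for brighter stars.
a, b = np.polyfit(log_width, abs_mag, 1)
print(f"M_V ~= {a:.1f} * log10(W) + {b:.1f}")

# Estimate the absolute magnitude of a star whose measured width is 100 km/s.
print(round(a * np.log10(100.0) + b, 1))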
Vainu Bappu
Astronomy
639
45,285,183
https://en.wikipedia.org/wiki/Methylene%20cyclopropyl%20acetic%20acid
Methylene cyclopropyl acetic acid (MCPA) is found in lychee seeds and is also a toxic metabolite formed in mammalian digestion after eating hypoglycin, which is present in the unripe ackee fruit grown in Jamaica and in Africa. By blocking coenzyme A and carnitine, MCPA causes a decrease in β-oxidation of fatty acids, and hence in gluconeogenesis. Overview Methylene cyclopropyl acetic acid (MCPA) is a compound found in lychee (Litchi chinensis) seeds. The major carbocyclic fatty acid in the seed oils of Litchi chinensis is a cyclopropane fatty acid named dihydrosterculic acid; such acids have been found in many plants of the order Malvales (Malvaceae), in up to 60% of seed oil content depending on the species, but also in leaves, roots and shoots. They are accompanied by small amounts of their cyclopropanoid analogues, i.e. cyclopropyl acetic acid. MCPA is also a metabolite in mammalian digestion after ingestion of hypoglycin, a rare and potentially toxic amino acid, chemically related to the common amino acid lysine. Hypoglycin is found in the unripe ackee fruit. Pathophysiology MCPA forms non-metabolizable esters with coenzyme A (CoA) and carnitine, causing a decrease in their bioavailability and concentration in bodily tissue. Both of these cofactors are necessary for the β-oxidation of fatty acids, which in turn is vital for gluconeogenesis. MCPA also inhibits the activity of a number of acyl-CoA dehydrogenases. The inhibition of one in particular, butyryl CoA dehydrogenase (a short-chain acyl-CoA dehydrogenase), causes β-oxidation to cease before completion, which leads to a decrease in the production of NADH and acetyl-CoA. The cascading effect continues, as this decrease in concentration further inhibits gluconeogenesis. Formation after ingestion of hypoglycin A Hypoglycin A is a water-soluble liver toxin that, upon ingestion, leads to hypoglycemia through the inhibition of gluconeogenesis, a metabolic pathway that leads to the generation of glucose from non-carbohydrate carbon sources (i.e. glucogenic amino acids, lactate, and glycerol). In addition, it also limits acyl-CoA and carnitine cofactors, which are instrumental in the oxidation of long-chain fatty acids. Hypoglycin A undergoes deamination, forming α-ketomethylene-cyclopropylpropionic acid (KMCPP), which then forms MCPA through oxidative decarboxylation. Hypoglycin A (and hypoglycin B) is found in the ackee fruit, the national fruit of Jamaica, which, like Litchi chinensis, is a member of the family Sapindaceae. The fruit is rich in fatty acids, zinc, protein, and vitamin A. In the fully ripened arils of the fruit, hypoglycin A is present at only 0.1 ppm, but in the unripened fruit it can exceed a concentration of 1000 ppm. Toxicity Ingestion of the unripened fruit containing such a concentrated dose causes what is known as Jamaican vomiting sickness. Depending on the severity of the case, the symptoms range from headache, rapid heartbeat and sweating to dehydration and low blood pressure stemming from intense vomiting, to delirium and coma, and finally seizures and death. The symptoms stemming from lychee poisoning are nearly identical, both being caused by MCPA, with lychee seeds also containing methylenecyclopropyl glycine (MCPG), a homologue of hypoglycin A. Recent poisonings In 2014, numerous children died in Bihar (the largest producer of lychees in India) after consuming lychees. The vast majority of the fatalities were undernourished children, their preexisting low blood sugar detrimentally amplifying the effects.
It is also possible that they ate unripe lychees. An earlier poisoning incident, also in Bihar, was recorded in 2011; fourteen children reportedly perished in that incident. References Acetic acids Cyclopropanes Vinylidene compounds Plant toxins Sapindaceae
Methylene cyclopropyl acetic acid
Chemistry
980
51,086,640
https://en.wikipedia.org/wiki/Testosterone%20cyclohexylpropionate
Testosterone cyclohexylpropionate (TCHP; brand names Andromar, Femolone, Telipex Retard) is an androgen and anabolic steroid and a testosterone ester. See also Estradiol diundecylate/hydroxyprogesterone heptanoate/testosterone cyclohexylpropionate List of combined sex-hormonal preparations § Androgens References Abandoned drugs Anabolic–androgenic steroids Androstanes Testosterone esters
Testosterone cyclohexylpropionate
Chemistry
109
64,693,136
https://en.wikipedia.org/wiki/Ministry%20of%20Infrastructure%20and%20Sustainable%20Energy
The Ministry of Infrastructure and Sustainable Energy (MISE) is a government ministry of Kiribati, as ministry of infrastructure and as ministry of energy, headquartered in South Tarawa. Ministers Ruateki Tekaiara (2016–2020) Willie Tokataake (2020–) References External links MISE Kiribati Infrastructure and Sustainable Energy Kiribati
Ministry of Infrastructure and Sustainable Energy
Engineering
76
41,443,440
https://en.wikipedia.org/wiki/Automation%20Master
Automation Master is an open-source, community-maintained project. Automation Master was created to assist in the design, implementation and operation of an automated system. The installation and startup of any automated system are very time-consuming and costly. Much of the time spent starting up an automated system can be traced to the difficulties in providing an effective test of the computer-based system in the integrator's laboratory. Traditional testing techniques required staging as much of the equipment as practical in the laboratory, and wiring up a simulator panel containing switches and indicator lights to all of the I/O modules on the PLC. The operator stations would be connected up to this "rat's nest" of wires, switches, indicator lights, and equipment for the test. PLC software would be tested by sequencing the toggle switches to input the electrical signals to the input cards on the PLC, and then observing the response by software on the indicator lights and operator consoles. For small, simple systems, this type of testing was manageable, and resulted in some degree of confidence that the control software would work once it was installed. However, the amount of time spent performing the test was relatively high, and a real-time test could not be achieved. As systems become larger and more complex, this method of testing only achieves, at a significant cost, a basic hardware and configuration check. The testing of complex logic sequences is an act of futility without the ability to accurately reproduce the timing relationships between signals. What was needed was the ability to exercise the control system's software in a real-time environment. Real-time simulation fills this void. Real-time simulators such as Automation Master are PC-based software packages which utilize a model to mimic the automated system's reaction to the control software. History Max Hitchens and George Rote began working on industrial automation projects in the late 1970s. One of their first projects was an automatic guided vehicle system for Goodyear Tire and Rubber Company in Lawton, Oklahoma. This system was to automatically transport material and finished goods around a massive tire factory. Mr. Hitchens' and Mr. Rote's previous experience in software development was mainly in office environments where logic could be debugged based upon simple CRT or printed output. So, after four months of writing software for the automated system, they took the software to the field and thus got their "baptism" into the real-world debugging of large automated systems. An automatic vehicle would be dispatched to do a task and it would not show up at its destination. First, they had to find the vehicle, which could be anywhere in the massive facility, and then try to figure out what went wrong. After six months of 16-hour days, seven days a week, they finally got the system running. Mr. Hitchens and Mr. Rote had other automatic guided vehicle projects and resolved not to repeat the Goodyear debugging experience. So, they built a custom simulator which attached to the guided vehicle system controller and pretended to be the factory floor. The activity of the guided vehicles was displayed on a color graphic display. The software could be debugged at their desks and, once finished and debugged, taken to the field and installed with minimum effort. Sometime later, Mr. Hitchens and Mr.
Rote were demonstrating their AGV simulator to Conco-Tellus, a conveyor system manufacturer, when they were asked if they could build a simulator for conveyor systems. Of course, the answer was yes, and the Real Time Conveyor Simulator (RTCS) was born. The RTCS was a custom system with three single-board computers. They were awarded a patent for it in 1985. The RTCS was a specialty product which did not have a large market, but Mr. Hitchens and Mr. Rote continued refinement and development. Around this time the IBM PC was introduced, and it was used to build the database necessary for the simulator. In the mid-1980s, a director for Bell Labs saw the simulator and wanted to try it out for modeling software development projects. It was impractical on a custom hardware box. But since the code was written for Intel processors, it could possibly be converted to run on a PC. In exchange for free use of the software, Bell Labs contributed a development system and two software engineers to help with the conversion. It turned out to be not very difficult, and within a few weeks the RTCS was running on a PC. Well, almost: the PC did not have enough power to meet the real-time computing which the RTCS required. It did, however, make a great demonstration system. Now all that was required was a disk and not 100 lbs of computer gear. As the 8088 PC metamorphosed into the 80286, customers were increasingly reluctant to spend thousands of dollars on a custom piece of computer gear. By the time the 80386 personal computers came out, the RTCS ceased to have a market. Fortunately, the 80386 and subsequently the 80486 had enough power to run the simulation in real time, and Automation Master was born. Development continued until the mid-1990s when, for a myriad of reasons, mainly the death of George Rote, it ceased. By this time, Automation Master embodied many thousands of hours of development and use. Automation Master languished until 2013 when Max Hitchens decided to create an open source project and release it into the public domain. Description Automation Master is a comprehensive modeling and simulation software package designed specifically for the design, implementation and operation of factory/warehouse automation. After the testing is complete, the system will ship with confidence that a real-time test has been performed and that the system will work when it is installed. The installation will be faster and less costly, and the system provided to the customer will be of higher quality and can be quickly placed into production. Project Life Cycle Automation Master can also be used throughout the life cycle of an automated factory, from the design phase, through the implementation phase, into actual production. An automation project is a cycle of activities. The project starts as a concept; the system concept is used to develop a design; the system design is used to fabricate the system components; the fabricated components are installed; and the installed system is operated. The installed system generates concepts for improvements or new systems and the cycle repeats. A real time simulator can assist in the entire life cycle of a project. Design Animating the System Concept A concept is usually just an idea, which needs to be funded to make it a reality. Automated systems are dynamic. A static picture or description of an automated system does not demonstrate the interaction of the components or show how the system functions as a whole.
It has been said that a picture is worth a thousand words; a corollary is that a moving picture is worth ten thousand words. An animated picture, as generated by a real time simulator, can communicate the concept and assist in selling the project to management. Simulating the System Design Designing an automated system is a balancing act. You want the best possible results for the least cost. The system design is selected from several alternatives. Choosing the best alternative requires evaluation of the alternatives and how they interact with each other. A real time simulator allows the system designer to evaluate potential designs, by using a model, to select the best approach for the automated system. An important element of automated system design is developing the overall strategy to be used in operating the facility. A simulation model allows the operating strategy to be developed interactively. A strategy is implemented in the model, the results viewed, and the strategy refined to improve performance. The operating strategy becomes increasingly important as the cost of system components escalates. The system efficiency can be improved by changing the operating strategy using the model without increasing the cost of the system. Scenario testing or test cases may be set up to test and confirm proper system operation under varying conditions and collect statistical data on its operation. Implementation Automation Master is used for software quality control during the implementation phase. Testing the Control Logic The real time simulator may be connected directly to the automated system's programmable controllers and computers. The model is used as a replacement for the physical equipment. Thus, the control logic and system software can be exhaustively tested in a laboratory environment instead of on the plant floor. The control logic can be stress tested under full operational loading to verify that the system will meet production requirements. System emulation reduces safety hazards and equipment damage during installation. Mistakes in the control logic and testing blunders are discovered using a model, not the live system. An emulation model contains more detail than the design phase simulation model. The simulation scenarios which exercised the system design may be rerun in emulation mode to verify that the detailed design and control logic implementation meet system production requirements. If it does not, it is far easier and less costly to modify the design or the control logic before the system is installed. Creating an "As Built" Model A real time simulator may be used during installation to determine the variance between the system design and the actual installation. Field verification logs the differences between the "as built" system and the model. If a major mistake has been made in translating the system design into the installed system, it can be corrected prior to system start up. The differences reported in the verification log are used to change the model to reflect the "as built" system. The control logic can then be retested to verify that the software will still meet the production requirements with the "as built" system. The simulation throughput scenarios can also be rerun to verify that the "as built" system meets all of the system design criteria. Operation Maintaining the Automated System The model may act as a diagnostic monitor. In this mode, the model is run in parallel with the operation of the installed system. 
The real time simulator displays the dynamic activity in the system and continuously compares the model with the actual operation. When a discrepancy, outside of specified tolerances, occurs between the operation of the system and the model, an error is reported, assisting maintenance personnel in diagnosing and repairing the system. Closing the Loop Automated systems are never static. Changes are inevitable. Ideas for new systems are generated. Because an exact real time simulation model exists, proposed changes can be completely tested before they are implemented. The changes required in the control software can be tested under emulation. The physical equipment modifications can be verified. The result of changes to the automated system may be tested before the changes are made in the production system, so that the changes can be made without halting production. Automation Master Operating Modes Simulation Emulation Automation Master connects to the Control System/PLC and emulates the real-world I/O by reading and writing the PLC's internal I/O images. The simulator can receive the Control System/PLC's outputs, and respond with the inputs in real time without the need for any hard-wired physical I/O. A simulator emulates the real time response to the Control System/PLC actions based upon a model which duplicates the operation of the automated system. For example, if the Control System/PLC sets a digital output to start a motor to raise a door, the model, within milliseconds, provides the Control System/PLC with an auxiliary contact closure to indicate that the motor has been started. Shortly, the door closed limit switch is turned off as the door begins to rise. As long as the Control System/PLC keeps the output signal which raises the door turned on, the door in the model continues to rise. When the door is fully open, the model turns on the door open limit switch, and the PLC responds by turning off the motor which raised the door. The model sees the Control System/PLC turn off the motor and drops out the motor's auxiliary contact (a minimal sketch of such a door model is given below). Once a model of a component has been built, it can be executed over and over again, under varying conditions, to quickly and thoroughly exercise the control software. For instance, what happens if the Control System/PLC loses the motor's auxiliary contact as the door is rising? Does the Control System/PLC turn off the output which raises the door? Is an alarm sent to the Level II system? How does the Level II system respond? When an error is detected, the programmer can easily alter the software and retest it using the model. The automated system is debugged in real time without any wiring, switches, bells, whistles, or hassles. Monitor Multimode Models Real time simulation allows multiple mode models to be built. A multiple mode model can be operated in either simulation, emulation, or monitor modes by simply invoking the simulator with a different configuration file. Multiple mode models are created by separating the model of the system control strategy from the model of the physical components. There are two distinct elements in a simulation model of an automated system. One element is the physical components of the system being modeled. The second element is a control strategy used to make decisions, to manage the system resources, and route product using the system components. In simulation mode, the interaction between the control strategy and the model of the physical components takes place internally within the real time simulation model.
An emulation model only requires the first element, the model of the physical components. The control strategy is incorporated into the PLC logic, instead of being contained within the model. The control strategy is provided by a separate processor in emulation mode. The control software written to implement the control strategy will be the same software which will control the physical system components when the system is installed. A model of the physical system components is created which reacts identically to the physical components in the real system. The model of the physical system is constructed separately from the control logic being tested. The model of the physical system is passive and makes no decisions. The physical model reacts to the decisions made by the control logic in the same manner as the real system would. An emulation model will operate in both emulation and simulation modes with the addition of the control strategy to the model. The system control strategy now exists in two places, in the model and in the PLC. The source of the system control strategy can be selected using the OPERATING_MODE variable in the configuration file. The control strategy in the model is implemented using an asynchronous activity. A conditional is used as part of the activation conditions in all asynchronous activity entries used strictly for simulation mode. This enables the execution of the system control strategy in simulation mode and disables it in emulation mode. Two different configuration files are set up, one for each mode, to set the initialization file, operating mode, and other configuration differences between the modes. Running the real time simulator with the simulation mode configuration file causes the model to operate as a simulation. Running the real time simulator with the emulation mode configuration file runs the model as an emulation. The simulation runs with the internal system control strategy and disables the external connection to the PLC. Running in emulation mode disables the internal control strategy, and enables the interface to the PLC which supplies the external control strategy. The physical components of the real system are required for monitor mode; within the simulator, only the model of the physical system components is required. The control strategy is executed in the PLC and simultaneously controls the real system and the model. The real time simulator receives the signals which are sent to, and received from, the real system. The physical system model is run in parallel with the real system, so that the differences between the activity in the model and the real system can be used to diagnose component failures. A single model may be run in all three modes by including the system control strategy (enabled only in simulation mode) in the model. A separate configuration file, containing the initialization file for the monitor, is created for operation in monitor mode. Changing the operating mode from monitor to emulation or simulation mode will require that the real system be disconnected. Once the real system is disconnected, the model may be switched between simulation and emulation modes by enabling or disabling the internal control strategy. Applications R.R. Donnelley - Diskette Collating Machine See also Programmable logic controller Industrial control systems Automation Lights out (manufacturing) Verification and Validation of Computer Simulation Models References External links Direct Connect Emulation and the Project Life Cycle Fast Track Project, White Paper Open Source Project U.S.
Patent 4,512,747 Trademark (abandoned) Automation Master Community Simulation software Industrial computing
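The door example in the Emulation section above describes a passive physical model that only reacts to the controller's outputs and updates the inputs the PLC reads on each scan. The Python sketch below is a minimal, hypothetical illustration of that idea; the class, signal names and timings are invented for this example and are not taken from the Automation Master code base.

class DoorModel:
    """Passive model of a motor-driven door. It makes no control decisions:
    it only reacts to the PLC's 'raise motor' output and updates the inputs
    the PLC reads (auxiliary contact, door-closed and door-open limit switches)."""

    TRAVEL_TIME_S = 5.0  # seconds of motor run time from fully closed to fully open

    def __init__(self):
        self.position = 0.0  # 0.0 = fully closed, 1.0 = fully open
        self.inputs = {"motor_aux": False, "limit_closed": True, "limit_open": False}

    def scan(self, outputs, dt):
        # Advance the model by dt seconds given the PLC output image.
        raise_motor = outputs.get("raise_motor", False)
        self.inputs["motor_aux"] = raise_motor  # contactor pulls in with the output
        if raise_motor:
            self.position = min(1.0, self.position + dt / self.TRAVEL_TIME_S)
        self.inputs["limit_closed"] = self.position <= 0.0
        self.inputs["limit_open"] = self.position >= 1.0
        return self.inputs

# A trivial stand-in for the PLC logic: keep raising the door until the
# open limit switch is made, then drop the output.
door, plc_outputs = DoorModel(), {"raise_motor": True}
for _ in range(60):
    plc_inputs = door.scan(plc_outputs, dt=0.1)
    if plc_inputs["limit_open"]:
        plc_outputs["raise_motor"] = False
print(plc_inputs)  # {'motor_aux': False, 'limit_closed': False, 'limit_open': True}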
Automation Master
Technology,Engineering
3,338
470,140
https://en.wikipedia.org/wiki/Transferrin
Transferrins are glycoproteins found in vertebrates which bind and consequently mediate the transport of iron (Fe) through blood plasma. They are produced in the liver and contain binding sites for two Fe3+ ions. Human transferrin is encoded by the TF gene and produced as a 76 kDa glycoprotein. Transferrin glycoproteins bind iron tightly, but reversibly. Although iron bound to transferrin is less than 0.1% (4 mg) of total body iron, it forms the most vital iron pool with the highest rate of turnover (25 mg/24 h). Transferrin has a molecular weight of around 80 kDa and contains two specific high-affinity Fe(III) binding sites. The affinity of transferrin for Fe(III) is extremely high (association constant of about 10^20 M^−1 at pH 7.4) but decreases progressively with decreasing pH below neutrality. Transferrins are not limited to binding iron; they can also bind other metal ions. These glycoproteins are located in various bodily fluids of vertebrates. Some invertebrates have proteins that act like transferrin found in the hemolymph. When not bound to iron, transferrin is known as "apotransferrin" (see also apoprotein). Occurrence and function Transferrins are glycoproteins that are often found in biological fluids of vertebrates. When a transferrin protein loaded with iron encounters a transferrin receptor on the surface of a cell, e.g., erythroid precursors in the bone marrow, it binds to it and is transported into the cell in a vesicle by receptor-mediated endocytosis. The pH of the vesicle is reduced by hydrogen ion pumps (ATPases) to about 5.5, causing transferrin to release its iron ions. Iron release rate is dependent on several factors including pH levels, interactions between lobes, temperature, salt, and chelator. The receptor, with its bound transferrin ligand, is then transported through the endocytic cycle back to the cell surface, ready for another round of iron uptake. Each transferrin molecule has the ability to carry two iron ions in the ferric form (Fe3+). Humans and other mammals The liver is the main site of transferrin synthesis but other tissues and organs, including the brain, also produce transferrin. A major source of transferrin secretion in the brain is the choroid plexus in the ventricular system. The main role of transferrin is to deliver iron from absorption centers in the duodenum and white blood cell macrophages to all tissues. Transferrin plays a key role in areas where erythropoiesis and active cell division occur. The receptor helps maintain iron homeostasis in the cells by controlling iron concentrations. The gene coding for transferrin in humans is located in chromosome band 3q21. Medical professionals may check serum transferrin level in iron deficiency and in iron overload disorders such as hemochromatosis. Other species Drosophila melanogaster has three transferrin genes and is highly divergent from all other model clades, Ciona intestinalis has one, Danio rerio has three highly divergent from each other, as do Takifugu rubripes and Xenopus tropicalis and Gallus gallus, while Monodelphis domestica has two divergent orthologs, and Mus musculus has two relatively close and one more distant ortholog. Relatedness and orthology/paralogy data are also available for Dictyostelium discoideum, Arabidopsis thaliana, and Pseudomonas aeruginosa. Structure In humans, transferrin consists of a polypeptide chain containing 679 amino acids and two carbohydrate chains. The protein is composed of alpha helices and beta sheets that form two domains.
The N- and C-terminal sequences are represented by globular lobes, and between the two lobes is an iron-binding site. The amino acids which bind the iron ion to the transferrin are identical for both lobes: two tyrosines, one histidine, and one aspartic acid. For the iron ion to bind, an anion is required, preferably carbonate (CO3^2−). Transferrin also has a receptor, the transferrin receptor; it is a disulfide-linked homodimer. In humans, each monomer consists of 760 amino acids. It enables ligand bonding to the transferrin, as each monomer can bind to one or two atoms of iron. Each monomer consists of three domains: the protease, the helical, and the apical domains. The shape of a transferrin receptor resembles a butterfly based on the intersection of three clearly shaped domains. Two main transferrin receptors are found in humans, denoted transferrin receptor 1 (TfR1) and transferrin receptor 2 (TfR2). Although both are similar in structure, TfR1 can only bind specifically to human TF, whereas TfR2 also has the capability to interact with bovine TF. Immune system Transferrin is also associated with the innate immune system. It is found in the mucosa and binds iron, thus creating an environment low in free iron that impedes bacterial survival in a process called iron withholding. The level of transferrin decreases in inflammation. Role in disease An increased plasma transferrin level is often seen in patients with iron deficiency anemia, during pregnancy, and with the use of oral contraceptives, reflecting an increase in transferrin protein expression. When plasma transferrin levels rise, there is a reciprocal decrease in percent transferrin iron saturation and a corresponding increase in total iron binding capacity in iron deficient states. A decreased plasma transferrin level can occur in iron overload diseases and protein malnutrition. An absence of transferrin results from a rare genetic disorder known as atransferrinemia, a condition characterized by anemia and hemosiderosis in the heart and liver that leads to heart failure and many other complications as well as to H63D syndrome. Studies reveal that a transferrin saturation (serum iron concentration ÷ total iron binding capacity) over 60 percent in men and over 50 percent in women identified the presence of an abnormality in iron metabolism (hereditary hemochromatosis, heterozygotes and homozygotes) with approximately 95 percent accuracy (a worked example of this calculation is given below). This finding helps in the early diagnosis of hereditary hemochromatosis, especially while serum ferritin still remains low. The retained iron in hereditary hemochromatosis is primarily deposited in parenchymal cells, with reticuloendothelial cell accumulation occurring very late in the disease. This is in contrast to transfusional iron overload, in which iron deposition occurs first in the reticuloendothelial cells and then in parenchymal cells. This explains why ferritin levels remain relatively low in hereditary hemochromatosis, while transferrin saturation is high. Transferrin and its receptor have been shown to diminish tumour cells when the receptor is used to attract antibodies. Transferrin and nanomedicine Many drugs are hindered in providing treatment by their poor ability to cross the blood-brain barrier, yielding poor uptake into areas of the brain. Transferrin glycoproteins are able to bypass the blood-brain barrier via receptor-mediated transport through specific transferrin receptors found in the brain capillary endothelial cells.
Due to this functionality, it is theorized that nanoparticles acting as drug carriers bound to transferrin glycoproteins can penetrate the blood-brain barrier allowing these substances to reach the diseased cells in the brain. Advances with transferrin conjugated nanoparticles can lead to non-invasive drug distribution in the brain with potential therapeutic consequences of central nervous system (CNS) targeted diseases (e.g. Alzheimer's or Parkinson's disease). Other effects Carbohydrate deficient transferrin increases in the blood with heavy ethanol consumption and can be monitored through laboratory testing. Transferrin is an acute phase protein and is seen to decrease in inflammation, cancers, and certain diseases (in contrast to other acute phase proteins, e.g., C-reactive protein, which increase in case of acute inflammation). Pathology Atransferrinemia is associated with a deficiency in transferrin. In nephrotic syndrome, urinary loss of transferrin, along with other serum proteins such as thyroxine-binding globulin, gammaglobulin, and anti-thrombin III, can manifest as iron-resistant microcytic anemia. Reference ranges An example reference range for transferrin is 204–360 mg/dL. Laboratory test results should always be interpreted using the reference range provided by the laboratory that performed the test. A high transferrin level may indicate an iron deficiency anemia. Levels of serum iron and total iron binding capacity (TIBC) are used in conjunction with transferrin to specify any abnormality. See interpretation of TIBC. Low transferrin likely indicates malnutrition. Interactions Transferrin has been shown to interact with insulin-like growth factor 2 and IGFBP3. Transcriptional regulation of transferrin is upregulated by retinoic acid. Related proteins Members of the family include blood serotransferrin (or siderophilin, usually simply called transferrin); lactotransferrin (lactoferrin); milk transferrin; egg white ovotransferrin (conalbumin); and membrane-associated melanotransferrin. See also Beta-2 transferrin Transferrin receptor Total iron-binding capacity Transferrin saturation Ferritin Optiferrin recombinant human transferrin Atransferrinemia Hypotransferrinemia HFE H63D gene mutation References Further reading External links Chemical pathology Iron metabolism Transferrins
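As a worked example of the transferrin saturation ratio quoted under Role in disease above (serum iron concentration divided by total iron-binding capacity, with screening thresholds of 60 percent in men and 50 percent in women), the following Python sketch performs the calculation. The function names and the sample values are illustrative assumptions, not clinical guidance.

def transferrin_saturation(serum_iron_ug_dl, tibc_ug_dl):
    # Transferrin saturation as a percentage: serum iron divided by total
    # iron-binding capacity, both expressed in the same units (here ug/dL).
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

def flags_iron_overload_workup(saturation_pct, sex):
    # Apply the screening thresholds quoted above (>60% for men, >50% for women).
    return saturation_pct > (60.0 if sex == "male" else 50.0)

sat = transferrin_saturation(serum_iron_ug_dl=250, tibc_ug_dl=320)
print(round(sat, 1), flags_iron_overload_workup(sat, "male"))  # 78.1 True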
Transferrin
Chemistry,Biology
2,067
75,537,379
https://en.wikipedia.org/wiki/Holographic%20consciousness
Theories of holographic consciousness postulate that consciousness has structural and functional similarities to a hologram, in that the information needed to model the whole is contained within each constituent component. Many holographic theories of consciousness draw on holographic theories of the universe which hypothesize a holographic structure of the universe as a medium for storing information. Most holographic theories of consciousness postulate that human consciousness is a part of and/or interacts with a larger field of universal consciousness, and that information within this universal consciousness is encoded according to holographic principles. There is considerable overlap between holographic theories of consciousness and quantum theories of consciousness. Like quantum theories of consciousness, holographic theories of consciousness aim to address a perceived inability of classical mechanistic physics to explain various phenomena of consciousness. Motivation Holographic consciousness has been proposed as a holistic model incorporating quantum theory which can explain the nature and origin of consciousness. These theories are viewed by some researchers as a possible solution to the problems of consciousness. Theories of holographic consciousness have also been proposed as a potential explanation for capabilities such as the brain's capacity to retain memories despite extensive damage, the human ability to rapidly discern and integrate large amounts of related data, and a number of documented phenomena which indicate non-locality as a property of consciousness. Non-local consciousness is frequently cited in connection with experiences of "cosmic consciousness," where individuals in meditative, trance, or altered states of consciousness report experiencing knowledge or consciousness beyond what their own minds would seem to be able to access or store. Origins While it has been suggested that the forerunner of holographic theories of consciousness can be found in the work of Leibniz, contemporary holographic theories of consciousness generally trace their origin to early attempts to use quantum mechanics to explain brain function. In the 1970s, a number of researchers invoked holography as a structure that could explain the distribution of memory within the brain. These theories later gained more credence with the discovery of quantum effects in neuron microtubules by Karl Pribram, suggesting the possibility of highly coherent informational states similar to those found in lasers and superconductors. Along with David Bohm, Pribram proposed the holonomic brain theory which describes information as stored throughout the brain in the form of waves which give rise to holographic images. Roger Penrose and Stuart Hameroff continued with this line of theory by hypothesizing that quantum activity inside neurons may have non-local interaction with other neurons, enabling "conscious events" when combined with a quantum hologram. Building on these theories, other researchers have since attempted to develop and test a number of variations of these hypotheses in diverse contexts, including altered states of consciousness, near-death experiences, and out of body experiences, in addition to seeking a better understanding of the nature of consciousness. 
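The "whole in every part" property invoked in the opening paragraph is usually explained through an optics analogy: each fragment of a hologram can reconstruct the entire scene, only at reduced resolution. The sketch below illustrates that storage property alone using a Fourier-domain stand-in for a hologram plane; it is purely an illustrative analogy, not a model endorsed by any of the theorists discussed, and the toy scene and fragment size are arbitrary.

```python
import numpy as np

# Illustrative analogy of holographic (distributed) information storage:
# treat the 2-D Fourier transform of an image as a stand-in for a hologram plane.
# Keeping only a small central fragment of that plane and inverting it still
# reconstructs the whole scene, just blurred -- "the whole in every part".

x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
scene = np.exp(-(xx**2 + yy**2) / 0.2) + 0.3 * np.cos(4 * np.pi * xx)  # toy "scene"

plane = np.fft.fftshift(np.fft.fft2(scene))      # stand-in "hologram" plane
mask = np.zeros_like(plane)
c = plane.shape[0] // 2
mask[c - 8:c + 8, c - 8:c + 8] = 1               # keep only a 16x16 central fragment

recon = np.fft.ifft2(np.fft.ifftshift(plane * mask)).real
corr = np.corrcoef(scene.ravel(), recon.ravel())[0, 1]
print(f"correlation between scene and fragment reconstruction: {corr:.2f}")
```

For the smooth toy scene above, the fragment reconstruction remains highly correlated with the original, which is the sense in which each region of a hologram-like record carries information about the whole.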
Theories Implicate/explicate order In his book Wholeness and the Implicate Order, theoretical physicist David Bohm describes a cosmology based on implicate and explicate orders, wherein the implicate order acts as a kind of unified substrate for reality (the explicate order), and which he likens to a hologram. In this view, the implicate order necessarily encompasses human consciousness. Drawing on the work of Pribram, Bohm concludes that the implicate order may render a distinction between matter and consciousness impossible. Bohm contrasts this perspective to the view of classical mechanics, which focuses on the behavior of individual entities like particles and fields, instead taking a view of reality centered on structures and processes which "project" the explicate reality. This leads to a more holistic and interconnected view of reality, which Bohm describes as "an order of undivided wholeness of the content of description similar to that indicated by the hologram rather than to an order of analysis of such content into separate parts..." In keeping with its description as a process, Bohm terms this order of undivided wholeness a "holomovement" (lit. "movement of the whole"), where the explicate order of phenomenal reality arises out of the movement of an interconnected implicate order which is analogous to a hologram in its structure and process. Quantum hologram theory of consciousness Edgar D. Mitchell and Robert Staretz developed a quantum hologram theory of consciousness which views information as being as fundamental to the universe as matter or energy. This theory hypothesizes that all material objects as well as organisms store information, and that objects emit waves containing information which can be recognized and processed by the brain. Mitchell and Staretz suggest that the movement of this information is not unidirectional, but that human consciousness can emit similar waves which can also play a role in shaping reality. Holographic theory of transpersonal consciousness Robert M. Anderson likens Bohm's distinction between implicate and explicate orders to the dichotomy between personal and transpersonal consciousness. Anderson situates personal and transpersonal consciousness in the context of Eastern mystical traditions and Western rational traditions, with Eastern traditions viewing individualistic, personal consciousness as an illusion, while Western traditions tend to view mystical or transcendent experiences of transpersonal consciousness as hallucinations resulting from aberrations in the brain. Anderson suggests that while consciousness may be a constant property of reality on the level of the implicate order, higher levels of holographic complexity may result in higher levels of consciousness, which make a lizard more conscious than a rock, or a human more conscious than a lizard. This hypothesis is proposed as an explanation for why humans in deep meditative states are able to experience ineffable "higher" states of consciousness; the mind ceases to be a vehicle for personal consciousness, but instead shifts to a highly synchronized harmonic brain state which is part of an underlying universal consciousness of much greater complexity. Quantum holography and the zero point field In connection with his work on magnetic resonance imaging, Walter Schemp developed a mathematical model called "quantum holography." 
The model demonstrated how information can be recovered and reconstituted from quantum fluctuations in the zero point field, also known as the quantum vacuum. This was extended to theorize the zero point field as a medium for information storage, giving rise to an emitter/absorber model of holography. Some theorists proposed this model as a potential explanation for the large amounts of information experienced during near-death experience life reviews. Syntergic theory Syntergic theory, proposed by Jacobo Grinberg-Zylberbaum, postulates that the brain gives rise to a neuronal field which is the source of consciousness. The neuronal field is conceived as interacting directly with an interconnected, information-dense fabric of reality termed "pre-space," which Grinberg-Zylberbaum describes as "a holographic, non-local lattice that has as a basic characteristic the attribute of consciousness." Grinberg-Zylberbaum proposed this theory as an explanation for the intense empathic connection that psychotherapists sometimes experience with patients. Holographic principle Mark Germine, in association with the California Institute of Integral Studies, outlined a holographic principle which he applies to the evolution of consciousness. Germine's theory is similar to other theories of holographic consciousness, but he elaborates on it by drawing on Jason Brown's theory of microgenesis. Microgenetic theory applies an evolutionary paradigm to the development of ideas, concepts, and mental constructs, which Germine applies to theorizing the evolutionary origins of consciousness. Quantum holographic quantum gravitational The quantum holographic/quantum gravitation model was developed by Dejan Rakovic and resembles other holographic theories of consciousness, but Rakovic theorizes that transitional and altered states of consciousness depend on Einstein-Rosen space-time bridges or wormholes. This approach is not entirely novel, as Penrose had previously proposed gravitationally induced wave function reduction as a possible explanation for non-local conscious experience. Rakovic's approach is unique, however, insofar as it combines concepts from quantum physics, holography, and information theory to describe consciousness as a fundamental property of the universe whereby free will arises from interactions between universal consciousness and the quantum vacuum or zero-point field. Holographic transdisciplinary framework for consciousness Tamar Levin's holographic transdisciplinary framework for consciousness attempts to use complex systems theory to extend the model of holographic consciousness to the study of a wider range of subject matter, including society, culture, and spirituality. This view considers consciousness as an integral property of the universe, and attempts to provide a framework for transcending dualities such as the mind-body and spiritual-material dichotomies. The framework conceptualizes consciousness as both a structure and a system, incorporating both metaphysical and physical layers. This supports the idea that human consciousness is not three dimensional but multi-dimensional, which allows incorporating non-local consciousness. Holofield and connectivity hypothesis Developed by Ervin Laszlo, the connectivity hypothesis describes the universe as consisting of A-dimension (corresponding to Bohm's implicate order) and M-dimension (material). The "A" in "A-dimension" is derived from the Vedic concept of Akasha. 
The A-dimension is conceived as a holographic informational field (holofield) fundamental to reality. This theory views information as more fundamental to the universe than energy. The process of reality perception in humans, according to this theory, can be seen as a constant interaction between the A-dimension and the M-dimension. Information from the A-field (the implicate, non-local, and holographic domain) becomes manifest in the M-dimension (the explicate, local, and material domain) that we perceive as our reality. This theory suggests that our consciousness is not confined to our brains or our bodies, but is part of this cosmic holographic field which connects humans to the cosmos. This interconnectedness and the holographic structure of the universe could potentially explain phenomena such as intuition, spontaneous healing, and other transpersonal experiences. Applications Psychotherapy Due to their holistic perspective, holographic theories of consciousness can accommodate novel approaches to psychology and therapy which consider mind, body, emotion, and spirituality as an interconnected continuum; Bohm's explicate/implicit cosmology, for example, was cited by Stanislav Grof as a major influence on the development of transpersonal psychology, including his method of holonomic breathwork. Radovic's quantum holographic quantum gravitational framework may have implications for advancing understanding of psychosomatic processes in the context of integrative medicine and transpersonal psychology. Radovic's model suggests that consciousness and free will can be understood in terms of quantum information processes and holographic principles, which could help to systematize psychosomatic treatment of traumas, phobias, disorders, and allergies in conjunction with acupuncture. Evolutionary theory Mark Germine argues that the evolution of consciousness is linked to the holographic principle of mind through a recursive process of successive applications of the same holographic process. According to Germine, the most fundamental levels of experiences— from the conformations of proteins and fields of electrons— exist as quantum potentials. Through recursions of the holographic process, these potentials manifest as higher levels of experience, including consciousness. Germine suggests that this process of successive orders of manifestation on the microscopic and submicroscopic levels is what drives the evolution of consciousness. Laszlo also believes that an informational field which may possess holographic properties is a potential explanation for why evolution appears to be informed rather than random. Andre Lohrey and Bruce Boreham view Bohm's concept of holoflux as potentially supporting Lynn Margulis' theory of endosymbiotic evolution. This view, drawing on the essential unity of Bohm's cosmology, emphasizes cooperative aspects of evolution as opposed to competitive mechanisms emphasized by classical Darwinian evolutionary theory. Altered states of consciousness research Some consciousness researchers have suggested using holographic theories of consciousness for investigating altered states of consciousness, including near-death experiences and out of body experiences. Ethnobotanist Terence McKenna also suggested holographic frameworks for consciousness as a potential method for investigating the effects of psychedelic substances. 
Gas discharge visualization has been used as a method for studying the behavior of the brain and nervous system during altered states of consciousness, with substantial differences in the signatures emitted by the nervous system in normal versus altered states of consciousness. Holographic theories of consciousness have been proposed as a framework for interpreting and drawing conclusions from data derived from these tests. Acupuncture Researchers at Zhejiang University in China and Siegen University in Germany detected an electromagnetic field composed of interference patterns of standing waves in the resonance cavity of the body. This field was inferred to be holographic, insofar as changes in the conductivity of the measurement current appear simultaneously on all acu-points as well as every point of the skin. Changes in resistance appear as soon as the organism undergoes a pathological, physiological, or psychological change. Beyond its implications for understanding the coherence of the nervous system, other researchers have attempted to use this discovery to develop new acupuncture techniques. References Consciousness Behavioral neuroscience
Holographic consciousness
Biology
2,774
71,470,090
https://en.wikipedia.org/wiki/Gracilariaceae
The Gracilariaceae is a small family of red algae, containing several genera of agarophytes. It has a cosmopolitan distribution: 24 species are found in China, six in Great Britain and Ireland, and some in Australia and Chile. They are normally found in intertidal bays, backwaters, and estuaries. The family has been extensively investigated over the last 30 years, and various studies have yielded comprehensive information on its life history, cultivation, taxonomy, and utilization (Bellorin et al. 2002, Rueness 2005). Studies on the structure of their reproductive organs and the phylogenetic relationships among species inferred from rbcL sequence analyses have produced three clades at the genus level, namely Gracilaria, Gracilariopsis, and Hydropuntia (Gurgel and Fredericq 2004). In 2012, the University of São Paulo, Brazil, set up the Gracilariaceae Germplasm Bank to use molecular markers for the identification of species. Genera As accepted by GBIF: Crassiphycus (7) Curdiea (3) Graacilaria (1) Gracilaria (122) Gracilariophila (2) Gracilariopsis (17) Hydropuntia (13) Melanthalia (3) Figures in brackets are the approximate number of species per genus. Uses They are economically important, as agar can be derived from many types of red seaweeds, including those from families such as Gelidiaceae, Gracilariaceae, Gelidiellaceae and Pterocladiaceae. Agar is a polysaccharide located in the inner part of the red algal cell wall. It is used in food, medicines, cosmetics, and in the therapeutic and biotechnology industries. References Other sources Bellorin AM, Buriyo A, Sohrabipour J, Oliveira MC, Oliveira EC (2008) Gracilariopsis mclachlanii sp. nov. and Gracilariopsis persica sp. nov. of the Gracilariaceae (Gracilariales, Rhodophyceae) from the Indian Ocean. J Phycol 44:1022–1032 Conklin KY, O'Doberty DC, Sherwood AR (2014) Hydropuntia perplexa, n. comb. (Gracilariaceae, Rhodophyta), first record of the genus in Hawaii. Pac Sci 68:421–434 Kamiya, M., Lindstrom, S.C., Nakayama, T., Yokoyama, A., Lin, S.-M., Guiry, M.D., Gurgel, F.D.G., Huisman, J.M., Kitayama, T., Suzuki, M., Cho, T.O. & Frey, W. 2017. Rhodophyta. In: Syllabus of Plant Families, 13th ed. Part 2/2: Photoautotrophic eukaryotic Algae. (Frey, W. Eds), pp. [i]–xii, [1]–171. Stuttgart: Borntraeger Science Publishers. ISBN 978-3-443-01094-2. Red algae families Edible algae
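The per-genus species counts listed under "Genera" can be tallied to give a rough size for the family. The sketch below simply restates the bracketed GBIF figures from the list above; the totals are approximate, as the article itself notes.

```python
# Tally of the approximate per-genus species counts listed above (GBIF figures);
# the numbers simply restate the bracketed values and are approximate.
genera = {
    "Crassiphycus": 7, "Curdiea": 3, "Graacilaria": 1, "Gracilaria": 122,
    "Gracilariophila": 2, "Gracilariopsis": 17, "Hydropuntia": 13, "Melanthalia": 3,
}
total = sum(genera.values())
print(f"{len(genera)} genera, roughly {total} species in the family")  # about 168 species
```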
Gracilariaceae
Biology
692
268,438
https://en.wikipedia.org/wiki/Excimer
An excimer (originally short for excited dimer) is a short-lived polyatomic molecule formed from two species that do not form a stable molecule in the ground state. In this case, formation of molecules is possible only if such atom is in an electronic excited state. Heteronuclear molecules and molecules that have more than two species are also called exciplex molecules (originally short for excited complex). Excimers are often diatomic and are composed of two atoms or molecules that would not bond if both were in the ground state. The lifetime of an excimer is very short, on the order of nanoseconds. Formation and decay Under the molecular orbital formalism, a typical ground-state molecule has electrons in the lowest possible energy levels. According to the Pauli principle, at most two electrons can occupy a given orbital, and if an orbital contains two electrons they must be in opposite spin states. The highest occupied molecular orbital is called the HOMO and the lowest unoccupied molecular orbital is called the LUMO; the energy gap between these two states is known as the HOMO–LUMO gap. If the molecule absorbs light whose energy is equal to this gap, an electron in the HOMO may be excited to the LUMO. This is called the molecule's excited state. Excimers are only formed when one of the dimer components is in the excited state. When the excimer returns to the ground state, its components dissociate and often repel each other. The wavelength of an excimer's emission is longer (smaller energy) than that of the excited monomer's emission. An excimer can thus be measured by fluorescent emissions. Because excimer formation is dependent on a bimolecular interaction, it is promoted by high monomer density. Low-density conditions produce excited monomers that decay to the ground state before they interact with an unexcited monomer to form an excimer. Usage note The term excimer (excited state dimer) is, strictly speaking, limited to cases in which a true dimer is formed; that is, both components of the dimer are the same molecule or atom. The term exciplex refers to the heterodimeric case; however, common usage expands excimer to cover this situation. Examples and use Heterodimeric diatomic complexes involving a noble gas and a halide, such as xenon chloride, are common in the construction of excimer lasers, which are excimers' most common application. These lasers take advantage of the fact that excimer components have attractive interactions in the excited state and repulsive interactions in the ground state. Emission of excimer molecules is also used as a source of spontaneous ultraviolet light (excimer lamps). The molecule pyrene is another canonical example of an excimer that has found applications in biophysics to evaluate the distance between biomolecules. In organic chemistry, many reactions occur through an exciplex, for example, those of simple arene compounds with alkenes. The reactions of benzene and their products depicted are a [2+2]cycloaddition to the ortho product (A), a [2+3]cycloaddition to the meta product (B) and the [2+4]cycloaddition to the para product (C) with simple alkenes such as the isomers of 2-butene. In these reactions, it is the arene that is excited. As a general rule, the regioselectivity is in favor of the ortho adduct at the expense of the meta adduct when the amount of charge transfer taking place in the exciplex increases. 
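The relation between an electronic energy gap and the emitted photon, used above to explain why excimer emission appears at a longer wavelength than monomer emission, is λ = hc/E. The sketch below makes that arithmetic explicit; the sample energies are illustrative placeholders only (they are not measured values for any particular excimer), chosen just to show that the lower-energy excimer emission comes out at a longer wavelength.

```python
# Minimal sketch of the energy/wavelength relation used above: a photon carrying
# energy E has wavelength lambda = h*c / E. Sample energies are hypothetical.

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def wavelength_nm(energy_ev: float) -> float:
    """Convert a transition energy in eV to the corresponding wavelength in nm."""
    return H * C / (energy_ev * EV) * 1e9

monomer_emission_ev = 3.2   # hypothetical monomer emission energy
excimer_emission_ev = 2.6   # hypothetical (smaller) excimer emission energy

print(f"monomer emission ~ {wavelength_nm(monomer_emission_ev):.0f} nm")
print(f"excimer emission ~ {wavelength_nm(excimer_emission_ev):.0f} nm (longer wavelength)")
```

With these placeholder values the excimer line falls near 480 nm versus roughly 390 nm for the monomer, matching the qualitative statement above that excimer emission is red-shifted relative to the excited monomer.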
Generation techniques For a noble gas dimer or noble gas halide, it takes a noble gas atom in an excited electronic state to form an excimer molecule. Sufficiently high energy (approximately 10 eV) is required to obtain a noble gas atom in the lowest excited electronic state, which enables the formation of an excimer molecule. The most convenient way to excite gases is by an electric discharge. That is why such excimer molecules are generated in a plasma (see excimer molecule formation) or through high energy electron beams. Fluorescence quenching Exciplexes provide one of the three dynamic mechanisms by which fluorescence is quenched. A regular exciplex has some charge-transfer (CT) character, and in the extreme case there are distinct radical ions with unpaired electrons. If the unpaired electrons can spin-pair to form a covalent bond, then the covalent bonding interaction can lower the energy of the charge transfer state. Strong CT stabilisation has been shown to lead to a conical intersection of this exciplex state with the ground state; a balance of steric effects, electrostatic interactions, stacking interactions, and relative conformations can determine the formation and accessibility of bonded exciplexes. As an exception to the conventional radical ion pair model, this mode of covalent bond formation is of interest to photochemistry research, as well as the many biological fields using fluorescence spectroscopy techniques. Evidence for the bonded exciplex intermediate has been given in studies of steric and Coulombic effects on the quenching rate constants and from extensive density functional theory computations that show a curve crossing between the ground state and the low-energy bonded exciplex state. See also References Photochemistry
Excimer
Chemistry
1,142
15,573,477
https://en.wikipedia.org/wiki/Downloadable%20content
Downloadable content (DLC) is additional content created for an already released video game, distributed through the Internet by the game's publisher. It can either be added for no extra cost or it can be a form of video game monetization, enabling the publisher to gain additional revenue from a title after it has been purchased, often using some type of microtransaction system. DLC can range from cosmetic content, such as skins, to new in-game content such as characters, levels, modes, and larger expansions that may contain a mix of such content as a continuation of the base game. In some games, multiple DLC (including future DLC not yet released) may be bundled as part of a "season pass"—typically at a discount in comparison to purchasing each DLC individually. While the Dreamcast was the first home console to support DLC (albeit in a limited form due to hardware and internet connection limitations), Sony's PlayStation 2 and Microsoft's Xbox helped to popularize the concept. Since the seventh generation of video game consoles, DLC has been a prevalent feature of most major video game platforms with internet connectivity. Since the popularization of microtransactions in online distribution platforms such as Steam, the term DLC has become synonymous with any form of paid content in video games, regardless of whether it constitutes the download of new content. Furthermore, this led to the creation of the oxymoronic term "on-disc DLC" for content included in the game's original files but locked behind a paywall. History Precursors to DLC The earliest form of downloadable content was the offering of full games, such as on the Atari 2600's GameLine service, which allowed users to download games using a telephone line. A similar service, Sega Channel, allowed for the downloading of games to the Sega Genesis over a cable line. While the GameLine and Sega Channel services allowed for the distribution of entire titles, they did not provide downloadable content for existing titles. Expansion packs were sold at retail for some PC games, which featured content such as additional levels, characters, or maps for a base game. They often required an installation of the original game in order to function, but some games (such as Half-Life) had "standalone" expansions, which were essentially spin-off games that reused engine code and assets from the original game. On consoles The Dreamcast was the first console to feature online support as a standard; DLC was available, though limited in size due to the narrowband connection and the size limitations of a memory card. These online features were still considered a breakthrough in video games. With the release of the PlayStation 2, Sony was the first company to implement downloadable content. Many PlayStation 2 titles, including Ratchet & Clank, Burnout Revenge, and Jak X, offered varying amounts of extra content, available from save data bonus content. Most of this content was available for free. With the advent of the Xbox, Microsoft was the second company to implement downloadable content. Many original Xbox Live titles, including Splinter Cell, Halo 2, and Ninja Gaiden, offered varying amounts of extra content, available for download through the Xbox Live service. Most of this content was available for free. The Xbox 360 (2005) included more robust support for digital distribution, including DLC downloads and purchases, via its Xbox Live Marketplace service. 
Microsoft believed that publishers would benefit by offering small pieces of content at a small cost ($1 to $5), rather than full expansion packs (~$20), as this would allow players to pick and choose what content they desired, providing revenue to the publishers. Microsoft also utilized a digital currency known as "Microsoft Points" for transactions, which could also be purchased through physical gift cards to avoid the banking fees associated with the small price points. The PlayStation 3 (2006) adopted the same approach with its downloadable hub, the PlayStation Store. Sony planned on having the bulk of its content purchased via many separate online microtransactions for PlayStation Network titles, including Gran Turismo HD Concept and Gran Turismo 5 Prologue. Nintendo has featured a sparser amount of downloadable content on its Wii Shop Channel, the bulk of which is accounted for by digital distribution of emulated Nintendo titles from previous generations. Music video games, such as titles from the Guitar Hero and Rock Band franchises, took significant advantage of downloadable content as a means of offering new songs to be played in-game. Harmonix claimed that Guitar Hero II would feature "more online content than anyone has ever seen in a game to this date." Rock Band features the largest number of downloadable items of any console video game, with a steady number of new songs that were added weekly between 2007 and 2013. Acquiring all the downloadable content for Rock Band would, as of July 12, 2012, cost $5,880.10. On personal computers As the popularity and speed of internet connections rose, so did the popularity of using the internet for digital distribution of media. User-created game mods and maps were distributed exclusively online, as they were mainly created by people without the infrastructure capable of distributing the content through physical media. In 1997, Cavedog offered a new unit every month as free downloadable content for their real-time strategy computer game Total Annihilation. Later PC digital distribution platforms, such as Games for Windows Marketplace and Steam, would add support for DLC in a similar manner to consoles. On handhelds Nokia phones of the late 1990s and early 2000s shipped with the side-scrolling shooter Space Impact, available on various models. With the introduction of WAP in 2000, additional downloadable content for the game, with extra levels, became available. The Nintendo Wi-Fi Connection service on the Nintendo DS could be used to obtain a form of DLC for certain games, such as Picross DS—where players could download puzzle "packs" of classic puzzles from previous Picross games (such as Mario's Picross), as well as downloadable user-generated content. Due to the Nintendo DS's use of cartridges and lack of dedicated storage, most "DLC" for DS games was limited in scope, or in some cases (such as Professor Layton and the Curious Village and Moero! Nekketsu Rhythm Damashii Osu! Tatakae! Ouendan 2), was already part of the game's data on the cartridge, and merely unlocked. The Nintendo 3DS supported the purchase of DLC for supported titles through the Nintendo eShop. Starting with iPhone OS 3, downloadable content became available for the platform via applications bought from the App Store. While this ability was initially only available to developers for paid applications, Apple eventually allowed for developers to offer this in free applications as well in October 2009. 
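The Microsoft Points pricing mentioned above can be illustrated with a small conversion sketch. The conversion rate used here, 80 points per US dollar, is the commonly cited figure and is treated as an assumption for illustration; the listed price points are hypothetical examples in the $1-to-$5 range described above.

```python
# Illustrative only: converts a Microsoft Points price to US dollars using the
# commonly cited rate of 80 points per USD (treated here as an assumption).
POINTS_PER_USD = 80

def points_to_usd(points: int) -> float:
    return points / POINTS_PER_USD

for price in (80, 240, 400, 1200):   # hypothetical example price points
    print(f"{price} points = ${points_to_usd(price):.2f}")
```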
On-disc DLC In some cases, a purchased DLC may not actually download new content to the device, but merely consists of data used to enable associated content that is already present within the game's data. DLC of this nature revealed via data mining is typically referred to as "on-disc DLC" or PULC (premium unlockable content). This practice has sometimes been considered controversial, with publishers being accused of using what is effectively a microtransaction to lock access to content that was already contained within the game as sold at retail. Data relating to future DLC may be included on-disc or downloaded during updates for technical reasons as well, either to ensure online multiplayer compatibility for existing content between players who have not yet purchased the new DLC, or as dormant support code for planned content that is still in development at the time of the release. Monetization Downloadable content is often offered for a price. Since Facebook games popularized the business model of microtransactions, some have criticized downloadable content as being overpriced and an incentive for developers to leave items out of the initial release, with The Elder Scrolls IV: Oblivion's horse armor DLC having faced a mixed reception upon its release for that reason. However, by 2009, the Horse Armor DLC was one of the top ten content packs that Bethesda had sold, which justified the DLC model for future games. Where a normal software disc may allow its license sold or traded, DLC is generally locked to a specific user's account and does not come with the ability to transfer that license to another user. In addition to individual content downloads, video game publishers sometimes offer a "season pass", which allows users to pre-order a selection of upcoming content over a specific time period, and ensuring the customer's ability to immediately obtain the content upon release. As users do not have the ability to fully preview the content before their purchase, there is a chance that the content of a season pass may not be of a sufficient quality to justify the purchase. In multiplayer games, season passes may also segregate the player base if it is the primary means of receiving gameplay content such as maps. Microsoft has been known to require developers to charge for their content, when the developers would rather release their content for free. Some content has even been withheld from release because the developer refused to charge the amount Microsoft required. Epic Games, known for continual support of their older titles with downloadable updates, believed that releasing downloadable content over the course of a game's lifetime helped increase sales throughout, and had succeeded well with that business-model in the past, but was required to implement fees for downloads when releasing content for their Microsoft-published game, Gears of War. As of 2010 the sale of DLC makes up around 20% of video games sales, a substantial portion of a developer's profit margin. Developers are beginning to use the sale of DLC for an already successful game series to fund the development of new IPs or sequels to existing games. Availability DLC is usually distributed through a console platform's online storefront, such as Microsoft Store, Nintendo eShop, PlayStation Store, or similar storefronts for PC games such as Steam. Platform exclusivity can also apply to DLC, with Activision having reserved a timed exclusivity period for DLC in the Call of Duty franchise to PlayStation consoles. 
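The "on-disc DLC" pattern described above can be sketched as a simple entitlement check: the content already ships with the game, and the purchase only records an entitlement that the game consults before exposing it. The sketch below is hypothetical, with made-up names, and does not correspond to any real platform's API.

```python
# Hypothetical sketch of the on-disc DLC pattern described above: the content is
# already present in the shipped game data, and a purchase merely toggles an
# entitlement flag that the game checks before exposing it. All names are made up.

SHIPPED_CONTENT = {"bonus_map": "<already on disc>", "extra_character": "<already on disc>"}

def load_content(name: str, owned_entitlements: set):
    if name not in SHIPPED_CONTENT:
        raise KeyError(f"{name} is not in the shipped game data")
    if name not in owned_entitlements:
        return None                      # present on disc, but locked behind the paywall
    return SHIPPED_CONTENT[name]

print(load_content("bonus_map", owned_entitlements=set()))          # None: not purchased
print(load_content("bonus_map", owned_entitlements={"bonus_map"}))  # unlocked
```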
Some time after a game's original release, a publisher may reissue the game at retail with all of its existing DLC included at no additional charge and, in some cases, exclusive content which may be released as DLC for existing owners in the future. The resulting SKU is often branded with a subtitle to distinguish it from the original release, such as "Game of the Year Edition", "Definitive Edition" or "Complete Edition". Destiny was reissued twice to coincide with its "Year 2" and "Year 3" milestones and associated DLC expansions The Taken King and Rise of Iron; a compilation of the game's existing DLC and The Taken King was released in 2015 under the title Destiny: The Taken King - Legendary Edition, while the game was re-issued again in 2016 as Destiny: The Collection to add Rise of Iron. There have also been cases where DLCs were intended to be part of the main game, but they were later stripped out of it in order to be sold as a separate feature. Tomb Raider: Underworld has been criticized for providing two DLCs, exclusive to the Xbox 360, that were supposedly removed from the original game. The Sims 4: My First Pet was likewise criticised for containing items that had seemingly been removed from the Cats & Dogs expansion, with the DLC requiring the downloadable expansion pack in order to work. PCGamesN described it as "a stuff pack for an expansion pack." In other media While video games are the origins of downloadable content, with movies, books and music also becoming more popular in the digital sphere, experimental DLC has also been attempted. Amazon's Kindle service for example allows updating ebooks, which allows authors to not only update and correct work, but also add content. Notes References Download managers Digital media Video game distribution Video game terminology
Downloadable content
Technology
2,449
21,543,730
https://en.wikipedia.org/wiki/Tutin%20%28toxin%29
Tutin is a poisonous plant derivative found in New Zealand tutu plants (several species in the genus Coriaria). It acts as a potent antagonist of the glycine receptor, and has powerful convulsant effects. It is used in scientific research into the glycine receptor. It is sometimes associated with outbreaks of toxic honey poisoning when bees feed on honeydew exudate from the sap-sucking passion vine hopper (Scolypopa australis) insect after the vine hoppers have been feeding on the sap of tutu bushes. Toxic honey is a rare event and is more likely to occur when comb honey is eaten directly from a hive that has been harvesting honeydew from passion vine hoppers feeding on tutu plants. History Tutin was first discovered as a honey contaminant in the late 19th century. Missionaries from overseas introduced the western honey bee (Apis mellifera) to New Zealand in 1839. A few decades later, people eating the local honey would suffer from symptoms like vomiting, headaches and confusion. At this point the neurotoxin was studied, and in the early 1900s its toxic effects were fully characterised. The toxin was known to come from the tutu plant. However, neither the nectar nor the pollen of the tutu plant contains this toxin, and these are the two parts that honey bees ingest. Eventually it was found that the passion vine hopper (Scolypopa australis), a pest insect, extracts sap from young shoots of the tutu plant and releases secretions, honeydew, that contain the tutin toxin. Honeybees will consume honeydew as a supplementary food source, thereby contaminating the honey they produce with this toxin. Further outbreaks of tutin poisoning would periodically appear from that point onwards. As late as 2008 a family had to be hospitalized due to severe symptoms caused by homegrown honey with tutin contamination. Structure and chemical properties Tutin is a polyoxygenated polycyclic sesquiterpene convulsant neurotoxin. Tutin is one of a series of chemically and pharmacologically similar compounds of which picrotoxinin and coriamyrtin have been the most studied. Conroy proposed the structure of picrotoxinin, which was confirmed by X-ray crystallographic studies that also determined the absolute configuration of the molecule. Karyone and Okuda proposed the tutin structure based on the picrotoxinin structure and chemical degradation studies. The structure of tutin including absolute stereochemistry was confirmed by X-ray crystal analysis together with chemical and chiroptical means. Tutin has a highly strained skeleton, including two epoxide rings and a lactone, which is susceptible to various rearrangements. Tutin has a characteristic intensely bitter taste. Tutin is very soluble in acetone, but dissolves moderately in chloroform and is insoluble in carbon disulfide or benzene. Addition of strong sulfuric acid to a few drops of a saturated aqueous solution of tutin results in a blood-red coloration. Isolation from nature In 1901, tutin was first isolated by Easterfield and Aston and identified as the convulsive poison present in the New Zealand species of Coriaria (tutu or toitoi in Maori). Easterfield and Aston used 1.5 kilograms of seeds and 11 kilograms of the air-dried Coriaria thymifolia plant (without roots) from Dunedin at the time of flowering in January. The seeds were pulverised and exhausted with carbon disulfide, removing a green drying oil. The plant was put through a chaff cutter and boiled with water. The mixture was treated with a large volume of ethanol. 
The ethanol precipitated inorganic salts, ellagic acid and a large amount of black matter. After distilling, the residue was extracted with diethyl ether. The crystals were recrystallized several times from water, which resulted in separation of the substance in characteristic needle forms; recrystallization from ethanol gave oblique-ended prisms. The final product contained the characteristic, highly poisonous, non-nitrogenous glucoside tutin as colourless crystals. Chemical synthesis of (+)-tutin In 1989, Wakamatsu and coworkers reported in detail the first total synthesis of (+)-tutin in a stereocontrolled manner. (+)-Tutin can be synthesized in a nine-step reaction process. First, a (-)-bromo alcohol was protected by silylation. After this step, conversion of the allylic bromide moiety into the allylic alcohol was achieved under Corey's conditions. Next, the hydroxyl moiety was introduced at C-2; the regio- and stereoselectivity of this intramolecular reaction was due to the use of the C-14 hydroxyl function, which gave the desired cyclic ether. Thereafter, the ethereal bond was cleaved, providing the allylic bromide. Subsequently, the silyl protecting group was removed by using tetra-n-butylammonium fluoride in THF. The intramolecular SN2 reaction at the allylic bromide moiety led to the formation of the epoxy olefin. Then, the epoxy olefin was converted into the bisepoxide in three steps: first, alkaline hydrolysis to give the alcohol; second, esterification to form the 2,2,2-trichloroethyl carbonate; and last, epoxidation. Thereafter, the bisepoxide was oxidized with ruthenium(VII) oxide, affording 2,2,2-trichloroethoxycarbonyl α-bromotutin. The final part of the synthesis of (+)-tutin is a reduction with zinc and ammonium chloride. Chemical reactions Acylation of the secondary alcohol 2-OH and double acetylation at both the 2-OH and C6-OH of tutin have been reported. In New Zealand toxic honey, two main structures of tutin conjugates were found: 2-(β-D-glucopyranosyl)-tutin and 2-[6’-(α-D-glucopyranosyl)-β-D-glucopyranosyl]-tutin. Chemical synthesis of 2-(β-D-glucopyranosyl)-tutin could be achieved via the β-O-glycosylation reaction between tutin and an activated sugar donor. Multiple methods of O-glycosylation have been published for the synthesis of complex glycosides with anomeric β-stereoselectivity. Mechanism of action GABA (γ-aminobutyric acid) is a major inhibitory neurotransmitter in the central nervous system of mammals. Tutin is an antagonist of the GABA receptors. By inhibiting these receptors, the sedative effect of this neurotransmitter is lessened, leading to intense stimulation of the nervous system. Based on extensive data, tutin was determined to be a non-competitive antagonist using an allosteric mechanism. Apart from GABA receptor inhibition, in vitro studies have also shown tutin to have an inhibitory effect on the glycine receptors of the neurons in the spinal cord. These receptors have inhibitory functions comparable to those of the GABA receptors. Lastly, investigation into similar toxins has shown them to be blockers for other ligand-gated ion channels. Therefore, it is suspected that tutin could also possess antagonistic properties against other ion channels. Metabolism Laboratory animal studies on the absorption, distribution, metabolism and excretion of tutin are not available. 
According to Fitchett and Malcolm (1909) and McNaughton and Goodwin (2008), the systemic absorption of purified tutin after oral ingestion appears rapid in animals, as clinical signs consistent with neurotoxicity were found to appear within less than 15 minutes in mice and after about one hour in dogs. Animals that received non-lethal doses showed a rapid recovery, suggesting fast elimination. The onset time of toxicity following the consumption of tutin-containing honey is, on the contrary, highly variable. In 2008, a median onset time of 7.5 hours was found for the 11 confirmed cases, with onset times ranging from half an hour to 17 hours after ingestion. Biological effects Tutin has a toxic effect on both mammals and insects. It was investigated as a possible rodenticide. In rats it had a lethal effect within one hour at a dose rate of 55 mg/kg body weight. However, it was recommended that a more specific toxin should be used. In humans it also has a toxic effect. Although the exact doses remain unknown, people have been incapacitated, hospitalised, or have even died after ingesting tutin. A study has been conducted in which six men were given a tutin dose of 1.8 μg/kg body weight. Although the effects were hardly felt by the volunteers, unusual serum concentrations were observed. A peak in tutin concentration was observed one hour after ingestion, and a second, larger and prolonged peak was observed around 15 hours after ingestion. The reasons for this observation have yet to be determined. Side effects of tutin intoxication include headaches, nausea, vomiting, dizziness and seizures. The biological activities of tutin have been reported to be nearly identical to those of the other picrotoxane sesquiterpenes, picrotoxinin and coriamyrtin. Symptoms of tutin poisoning include, for example, preliminary depression, salivation, a fall in the frequency of the pulse, increased breathing, and convulsions. The effect is due to an action on the medulla oblongata and basal ganglia of the brain. Toxicity The effects of tutin poisoning were described as salivation, a diminished heartbeat, increased respiratory activity and later, predominantly clonic seizures which in their early stages are limited to the fore part of the body. Results of published acute toxicity studies on various animals are of limited value because of the uncertainty in the impurity profile for the administered tutin. For instance, Palmer-Jones (1947) reported an LD50 of 20 mg/kg of tutin via oral administration in rats. Administration via subcutaneous (SC) and intraperitoneal (IP) routes showed a higher acute toxicity, with LD50 values of approximately 4 and 5 mg/kg. Little is known about the lethal dose in the average human, though tests have been performed on various animal species. For instance, intraperitoneal injection of tutin in rats has shown that concentrations of 3, 5 and 8 mg/kg were lethal whilst 1 mg/kg was non-lethal, with all rats showing symptoms such as muscle spasms and general seizures. Documented human exposure to tutin implied that a dose of about a milligram causes nausea and vomiting in a healthy, full-grown man. Effects on animals Tutin has been known to cause death in sheep and cattle belonging to the settlers of New Zealand. Therefore, extensive research on the effects of tutin on different animal species was done in the early 20th century. 
The symptoms after injection were more or less the same in all animals, and included rapid breathing, salivation, seizures and eventually death. The minimal lethal dose in cats and dogs was found to be around 1 mg/kg. In small laboratory animals like rats, rabbits and guinea pigs, the minimal lethal dose was a little higher, around 2.5 mg/kg. In young animals, the minimal lethal dose is lower. Birds were thought to be immune to tutin poisoning, because they feed on the berries of the tutu plant. After research, it became clear that birds have a high minimal lethal dose (around 10.25 mg/kg), but no absolute immunity. The apparent immunity under natural circumstances arises because, in order to reach a dose of 10.25 mg/kg, the birds would need to eat more of the berries than they physically can. The relatively high lethal dose can be explained by the way birds digest food. From the crop (a part of the throat in many birds where food is stored before going into the stomach), the veins go directly to the systemic circulation, instead of passing first through the liver as in mammals. References Convulsants Plant toxins Epsilon-lactones Alcohols Epoxides Alkene derivatives Glycine receptor antagonists GABAA receptor antagonists Spiro compounds Sesquiterpene lactones Neurotoxins
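The toxicity figures quoted above are given per kilogram of body weight, so the total dose implied for a given animal or person is a simple scaling. The sketch below restates a few of the article's per-kg figures; the example body weights are hypothetical, and the calculation is illustrative only, not a dosing or risk guide.

```python
# Scales the per-kilogram figures quoted above to a total dose for a given body
# weight. The mg/kg values restate the article; the body weights are hypothetical.

def total_dose_mg(dose_mg_per_kg: float, body_weight_kg: float) -> float:
    return dose_mg_per_kg * body_weight_kg

# Human volunteer study dose: 1.8 micrograms/kg = 0.0018 mg/kg
print(f"70 kg person at 1.8 ug/kg: {total_dose_mg(0.0018, 70):.3f} mg")   # ~0.126 mg total
# Minimal lethal dose reported for cats and dogs (~1 mg/kg), hypothetical 10 kg dog
print(f"10 kg dog at 1 mg/kg: {total_dose_mg(1.0, 10):.1f} mg")
```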
Tutin (toxin)
Chemistry
2,630
22,927,673
https://en.wikipedia.org/wiki/After%20the%20Software%20Wars
After the Software Wars is a book by Keith Curtis about free software and its importance in the computing industry, specifically about its impact on Microsoft and the proprietary software development model. The book is about the power of mass collaboration and the possibilities it opens up, illustrated by successful open source collaborative examples such as Linux and Wikipedia. Keith Curtis attended the University of Michigan, but dropped out to work as a programmer for Microsoft after meeting Bill Gates in 1993. He worked there for 11 years, then left after he grew bored. He then wrote and self-published After the Software Wars to explain the caliber of free and open source software and why he believes Linux is technically superior to any proprietary operating system. See also Lad (video game), an iOS puzzle game by Keith Curtis References External links 2009 non-fiction books Books about free software Software development philosophies Microsoft Works about the information economy
After the Software Wars
Technology
184
20,690,782
https://en.wikipedia.org/wiki/Rewilding
Rewilding is a form of ecological restoration aimed at increasing biodiversity and restoring natural processes. It differs from other forms of ecological restoration in that rewilding aspires to reduce human influence on ecosystems. It is also distinct from other forms of restoration in that, while it places emphasis on recovering geographically specific sets of ecological interactions and functions that would have maintained ecosystems prior to human influence, rewilding is open to novel or emerging ecosystems which encompass new species and new interactions. A key feature of rewilding is its focus on replacing human interventions with natural processes. Rewilding enables the return of intact, large mammal assemblages, to promote the restoration of trophic networks. This mechanism of rewilding is a process of restoring natural processes by introducing or re-introducing large mammals to promote resilient, self-regulating, and self-sustaining ecosystems. Large mammals can influence ecosystems by altering biogeochemical pathways as they contribute to unique ecological roles, they are landscape engineers that aid in shaping the structure and composition of natural habitats. Rewilding projects are often part of programs for habitat restoration and conservation biology, and should be based on sound socio-ecological theory and evidence. While rewilding initiatives can be controversial, the United Nations has listed rewilding as one of several methods needed to achieve massive scale restoration of natural ecosystems, which they say must be accomplished by 2030 as part of the 30x30 campaign. Origin The term rewilding was coined by members of the grassroots network Earth First!, first appearing in print in 1990. It was refined and grounded in a scientific context in a paper published in 1998 by conservation biologists Michael Soulé and Reed Noss. Soulé and Noss envisaged rewilding as a conservation method based on the concept of 'cores, corridors, and carnivores'. The key components of rewilding incorporate large core protected areas, keystone species, and ecological connectivity based on the theory that large predators play regulatory roles in ecosystems. '3Cs' rewilding therefore relied on protecting 'core' areas of wild land, linked together by 'corridors' allowing passage for 'carnivores' to move around the landscape and perform their functional role. Inside these cores, human development, especially the building of roads, is strictly limited. National parks and wilderness reserves are the most common types of 'core' areas. Soulé and fellow biologist John Terbough expanded on the concept of corridors in their book Continental Conservation. They determined that one size does not fit all: narrow, linear corridors might work for some smaller species, but if conservationists wanted to encourage the movement of large carnivores, they needed to make corridors wide enough to allow for daily and seasonal movement of both herds of prey and packs of their predators. The '3Cs' concept was developed further in 1999 and Earth First co-founder, Dave Foreman, subsequently wrote a full-length book on rewilding as a conservation strategy. History Rewilding was developed as a method to preserve functional ecosystems and reduce biodiversity loss, incorporating research in island biogeography and the ecological role of large carnivores. In 1967, The Theory of Island Biogeography by Robert H. MacArthur and Edward O. 
Wilson established the importance of considering the size and fragmentation of wildlife conservation areas, stating that protected species and areas remained vulnerable to extinctions if populations were small and isolated. In 1987, William D. Newmark's study of extinctions in national parks in North America added weight to the theory. The publications intensified debates on conservation approaches. With the creation of the Society for Conservation Biology in 1985, conservationists began to focus on reducing habitat loss and fragmentation. Supporters of rewilding initiatives range from individuals, small land owners, local non-governmental organizations and authorities, to national governments and international non-governmental organizations such as the International Union for Conservation of Nature. While rewilding efforts can be well regarded, the increased popularity of rewilding has generated controversy, especially in relation to large-scale projects. These have sometimes attracted criticism from academics and practicing conservationists, as well as government officials and business people. Nonetheless, a 2021 report for the launch of the UN Decade on Ecosystem Restoration, the United Nations listed rewilding as one of several restoration methods which they state should be used for ecosystem restoration of over 1 billion hectares. Guiding principles Since its origin, the term rewilding has been used as a signifier of particular forms of ecological restoration projects that have ranged widely in scope and geographic application. In 2021 the journal Conservation Biology published a paper by 33 coauthors from around the world. Titled 'Guiding Principles for Rewilding', researchers and project leaders from North America (Canada, Mexico and the United States) joined with counterparts in Europe (Denmark, France, Hungary, The Netherlands, Switzerland, and the UK), China, and South America (Chile and Colombia) to produce a unifying description, along with a set of ten guiding principles. The group wrote, 'Commonalities in the concept of rewilding lie in its aims, whereas differences lie in the methods used, which include land protection, connectivity conservation, removing human infrastructure, and species reintroduction or taxon replacement.' Referring to the span of project types they stated, 'Rewilding now incorporates a variety of concepts, including Pleistocene megafauna replacement, taxon replacement, species reintroductions, retrobreeding, release of captive-bred animals, land abandonment, and spontaneous rewilding.' Empowered by a directive from the International Union for the Conservation of Nature to produce a document on rewilding that reflected a global scale inventory of underlying goals as well as practices, the group sought a 'unifying definition', producing the following:'Rewilding is the process of rebuilding, following major human disturbance, a natural ecosystem by restoring natural processes and the complete or near complete food web at all trophic levels as a self-sustaining and resilient ecosystem with biota that would have been present had the disturbance not occurred. This will involve a paradigm shift in the relationship between humans and nature. The ultimate goal of rewilding is the restoration of functioning native ecosystems containing the full range of species at all trophic levels while reducing human control and pressures. Rewilded ecosystems should—where possible—be self-sustaining. 
That is, they require no or minimal management (i.e., natura naturans [nature doing what nature does]), and it is recognized that ecosystems are dynamic.' Ten principles were developed by the group: Rewilding utilizes wildlife to restore trophic interactions. Rewilding employs landscape-scale planning that considers core areas, connectivity, and co-existence. Rewilding focuses on the recovery of ecological processes, interactions, and conditions based on reference ecosystems. Rewilding recognizes that ecosystems are dynamic and constantly changing. Rewilding should anticipate the effects of climate change and where possible act as a tool to mitigate impacts. Rewilding requires local engagement and support. Rewilding is informed by science, traditional ecological knowledge, and other local knowledge. Rewilding is adaptive and dependent on monitoring and feedback. Rewilding recognizes the intrinsic value of all species and ecosystems. Rewilding requires a paradigm shift in the coexistence of humans and nature. A paper was published in 2024 that offered a "broad study of rewilding guidelines and interventions." Rewilding and climate change Rewilding can respond to both the causes and effects of climate change and has been posited as a 'natural climate solution'. Rewilding's creation of new ecosystems and restoration of existing ones can contribute to climate change mitigation and adaptation through, inter alia, carbon capture and storage, altering the Earth's albedo, natural flood management, reduction of wildfire risk, new habitat creation, and enabling or facilitating the movement of species to new, climate safe habitats, thus protecting biodiversity and maintaining functioning, climate resilient ecosystems. The functional roles animals perform in ecosystems, such as grazing, nutrient cycling and seed distribution, can influence the amount of carbon that soils and (marine and terrestrial) plants capture. The carbon cycle is altered through herbivores consuming vegetation, assimilating carbon within their own biomass, and releasing carbon by respiration and defecation after digestion. The most beneficial effects on biogeochemical cycling and ecosystem structure are reported through rewilding large herbivore species. A study in a tropical forest in Guyana found that an increase in mammal species from 5 to 35 increased tree and soil carbon storage by four to five times, compared to an increase of 3.5 to four times with an increase of tree species from 10 to 70. A separate study suggested that the loss of megafauna that eat fruits may be responsible for an up to 10% reduction in carbon storage in tropical forests. Furthermore, acceleration of nutrient cycling through browsing and grazing may increase local plant productivity and thereby maintain ecosystem productivity in grassy biomes. It is also posited that grazing and browsing reduces the risk of wildfires (which are significant contributors of GHG emissions and whose smoke can alter the planet's albedo - the Earth's ability to reflect heat from sunlight)). For example, the loss of wildebeest from the Serengeti led to an increase in un-grazed grass, leading to more frequent and intense fires, causing the grassland to turn from a carbon sink to a carbon source. When disease management practices restored the wildebeest population, the Serengeti returned to a carbon sink state. Rewilding's effect on albedo is not only through potential reduction of smoke from wildfires but also through the effects of grazing itself. 
By reducing woody cover through browsing and trampling, large herbivores expose more ground surface and thus increase the albedo effect, reducing local surface temperatures and creating a net surface cooling effect during spring and autumn. Other forms of ecological restoration as part of rewilding can also assist with mitigating climate change. For example, reforestation, afforestation and peat re-wetting can all contribute to carbon sequestration. While carbon sequestration could allow carbon offsetting and carbon trading as a way to monetize rewilding, there has been concern that the highly speculative nature of carbon markets encourages 'land grabbing' (i.e., buying large areas of land) and 'greenwashing' from natural capital investors and multi-national companies. Types of rewilding Passive rewilding Passive rewilding (also referred to as ecological rewilding) aims to restore natural ecosystem processes through minimal direct human management of the landscape or its total withdrawal. Active rewilding Active rewilding is an umbrella term used to describe a range of rewilding approaches all of which involve human intervention. These might include species reintroductions or translocations and/or habitat engineering and the removal of man-made structures. Pleistocene rewilding Pleistocene rewilding is the (re)introduction of extant Pleistocene megafauna, or the close ecological equivalents of extinct megafauna, to restore ecosystem function. Advocates of the approach maintain that ecosystems where species evolved in response to Pleistocene megafauna but now lack large mammals may be in danger of collapse. Meanwhile, critics argue that it is unrealistic to assume that ecological communities today are functionally similar to their state 10,000 years ago. Trophic rewilding Trophic rewilding is an ecological restoration strategy focused on restoring trophic interactions and complexity (specifically top-down and associated trophic cascades where a top consumer/predator controls the primary consumer population) through species (re)introductions, in order to promote self-regulating, biodiverse ecosystems. Urban rewilding Urban rewilding is a type of rewilding focused on the integration of nature into urban settings. Elements Ecosystem engineers Ecosystem engineers are ‘organisms that demonstrably modify the structure of their habitats’. Examples of ecosystem engineers in rewilding include beaver, elephants, bison, elk, cattle (as analogues for the extinct aurochs) and pigs (as analogues for wild boar). Keystone species A keystone species is a species that has a disproportionately large effect on its environment relative to its abundance. Predators Apex predators may be required in rewilding projects to ensure that browsing and grazing animals are kept from over-breeding/over-feeding thereby destroying vegetation complexity and exceeding the ecological carrying capacity of the rewilding area, as was seen in the mass-starvations which occurred at the Oostvaardersplassen rewilding project in the Netherlands. While predators play an important role in ecosystems, there is debate regarding the extent to which the control of prey populations is due to direct predation or a more indirect influence of predators (see Ecology of fear). For example, it is thought that wildebeest populations in the Serengeti are primarily controlled by food constraints despite the presence of many predators. 
Criticism Compatibility with economic activity Some national governments and officials within multilateral agencies such as the United Nations, express the view that 'excessive' rewilding, such as large rigorously enforced protected areas where no extraction activities are allowed, can be too restrictive on people's ability to earn sustainable livelihoods. The alternative view is that increasing ecotourism can provide employment. Conflicts with animal rights and welfare Rewilding has been criticized by animal rights scholars, such as Dale Jamieson, who argues that 'most cases of rewilding or reintroducing are likely to involve conflicts between the satisfaction of human preferences and the welfare of nonhuman animals'. Erica von Essen and Michael Allen, using Donaldson and Kymlicka's political animal categories framework, assert that wildness standards imposed on animals are arbitrary and inconsistent with the premise that wild animals should be granted sovereignty over the territories that they inhabit and the right to make decisions about their own lives. To resolve this, von Essen and Allen contend that rewilding needs to shift towards full alignment with mainstream conservation and welcome full sovereignty, or instead take full responsibility for the care of animals who have been reintroduced. Ole Martin Moen argues that rewilding projects should be brought to an end because they unnecessarily increase wild animal suffering and are expensive, and the funds could be better spent elsewhere. Erasure of environmental history The environmental historian Dolly Jørgensen argues that rewilding, as it currently exists, 'seeks to erase human history and involvement with the land and flora and fauna. Such an attempted split between nature and culture may prove unproductive and even harmful.' She calls for rewilding to be more inclusive to combat this. Jonathan Prior and Kim J. Ward challenge Jørgensen's criticism and provide examples of rewilding programs which 'have been developed and governed within the understanding that human and non-human world are inextricably entangled'. Farming Some farmers have been critical of rewilding for 'abandoning productive farmland when the world's population is growing'. Farmers have also attacked plans to reintroduce the lynx in the United Kingdom because of fears that reintroduction will lead to an increase in sheep predation. Harm to conservation Some conservationists have expressed concern that rewilding 'could replace the traditional protection of rare species on small nature reserves', which could potentially lead to an increase in habitat fragmentation and species loss. David Nogués-Bravo and Carsten Rahbek assert that the benefits of rewilding lack evidence and that such programs may inadvertently lead to 'de-wilding', through the extinction of local and global species. They also contend that rewilding programs may draw funding away from 'more scientifically supported conservation projects'. Many large conservation groups have built fundraising campaigns around the idea that once wildlife is gone, it’s gone for good; rewilding experts saying otherwise may confuse donors and lead to less money being funneled into conservation efforts. Governmental agencies overseeing land use and consumption are often heavily influenced by the interests of loggers, ranchers, and miners, so non-profit organizations are often at the forefront of conservation efforts, and a loss of funding could have major impacts on the protection of wildlife. 
There is also concern among conservationists that if the idea that wilderness can be restored becomes popular with the public, oil companies, real estate developers, and agribusinesses may be emboldened to step up land consumption, arguing that it can be restored later. Human-wildlife conflict The reintroduction of brown bears to Italy's Trentino province through the EU-funded Life Ursus project has led to growing tensions between humans and wildlife. While the project was initially celebrated as a conservation success, the bear population has expanded to over 100, leading to increased conflicts, including the fatal attack on Andrea Papi in 2023—the first modern death caused by a wild bear in Italy. This incident sparked fear among residents and prompted calls for stricter controls, including culling dangerous bears. Critics argue the conflict stems from poor management, inadequate public education, and a lack of preventive measures like bear-proof bins. Despite efforts to balance human safety and conservation, local communities remain deeply divided, with many pushing for limits on bear numbers and more decisive action against perceived threats. Rewilding in different locations Both grassroots groups and major international conservation organizations have incorporated rewilding into projects to protect and restore large-scale core wilderness areas, corridors (or connectivity) between them, and apex predators, carnivores, or keystone species. Projects include: the Yellowstone to Yukon Conservation Initiative in North America (also known as Y2Y), the European Green Belt (built along the former Iron Curtain), transboundary projects (including those in southern Africa funded by the Peace Parks Foundation), community-conservation projects (such as the wildlife conservancies of Namibia and Kenya), and projects organized around ecological restoration (including Gondwana Link, regrowing native bush in a hotspot of endemism in southwest Australia, and the Area de Conservacion Guanacaste, restoring dry tropical forest and rainforest in Costa Rica). North America In North America, a major project aims to restore the prairie grasslands of the Great Plains. The American Prairie is reintroducing bison on private land in the Missouri Breaks region of north-central Montana, with the goal of creating a prairie preserve larger than Yellowstone National Park. As of 2024, American Prairie's habitat spanned over 520,000 acres. Dam removal has led to the restoration of many river systems in the Pacific Northwest in an effort to restore salmon populations specifically but with other species in mind. As stated in an article on environmental law: 'These dam removals provide perhaps the best example of large-scale environmental remediation in the twenty-first century. [...] The result has been to put into motion ongoing rehabilitation efforts in four distinct river basins: the Elwha and White Salmon in Washington and the Sandy and Rogue in Oregon'. Yellowstone to Yukon Formally launched in 1997, Yellowstone to Yukon (Y2Y) was a conservation initiative that envisioned a wide corridor of protected land stretching from Canada’s Yukon territory, through national parks like Waterton in Canada and Glacier in the United States, all the way to the Greater Yellowstone ecoregion in the northern Rocky Mountains. Promoters of the project worked to discourage building of roads and other human developments that would impede the movement of large predators like wolves and grizzly bears. 
Y2Y used lobbying and education to promote its mission and get the public involved. Organizers set up conferences between rewilding groups in Canada and the United States, facilitated dialogue between conservationists and Native American groups, and maintained high visibility for the project by featuring in newspapers like the New York Times and the Washington Post. Activists involved in the project successfully lobbied for 24 highway crossing structures in the Banff area, allowing for safer movement of wildlife across the Trans-Canada Highway. Y2Y inspired other conservation groups to focus more of their efforts on lobbying to persuade government action, and led to an increase in corridor planning across North America. The South Coast Wildlands Project successfully convinced the California State Parks Agency to buy a 700-acre tract slated for development. The Algonquin to Adirondack initiative, modeled after Y2Y, has focused research efforts on improving connectivity around the Great Lakes Region. Conservation groups from the United States and Canada have worked together to plan a series of marine priority areas from Baja California to the Bering Sea, allowing both nations to protect species of mutual concern. Protecting Predators There have been multiple projects launched to protect North America’s carnivores, one of the main components of the ‘3 C’s’ approach to rewilding. Reed Noss, an early advocate for rewilding, began working on reserve designs as early as the 1980s to protect Florida’s largest predators: the Florida panther and the Florida black bear. Noss’ initial plan envisioned 60% of Florida’s land set aside for wildlife reserves, and proved so influential that the Florida State legislature set aside $3.2 billion to buy land for a network of reserves and corridors between them. At the same time, a group based in Washington D.C. called Defenders of Wildlife began promoting protection of predators across the country, including grizzly bears, wolves, and river otters. In 1987, they set up the Bailey Wildlife Foundation Wolf Compensation Trust to pay ranchers back for the loss of livestock due to predation in an attempt to raise support for rewilding among farmers, who are often some of the most vocal opponents of the conservation of large predators. In 1998, they launched another program to pay for fencing, alarms, and other methods that would protect livestock in a way that didn’t harm predators. However, this approach has been largely unsuccessful at bolstering the native wolf population because of continued shooting of wolves, both illegal and permitted by the USFWS. New York Fresh Kills landfill, located on Staten Island, was once home to 150 million tons of trash. However, plans created between 2001 and 2006 reimagined it as a 2,200-acre park, the largest park built in the state of New York in over a century. Construction began in 2008 to restore the area back to its original wetland ecosystem, complete with open waterways, sweet-gum swamps, prairies, and meadows of wildflowers. Part of the initial plans involved removing invasive reed species and replacing them with native marsh grasses. The project is slated to take up to thirty years to complete, with the end goal of combining ecological restoration with recreational activities. While planning for Fresh Kills Park, New York State initiated an even more ambitious program focused on protecting the broader ecosystem around Staten Island by restoring the Hudson River. 
In 2005, the organizations involved came up with a few goals for the project: re-invigorating the river’s fisheries, improving water quality by removing contaminants, and preserving shoreline and forested habitats upriver. When the project is complete, it will affect fifty thousand acres containing six different habitat types. Mexico In the Mexican state of Sonora, the Northern Jaguar Project bought 45,000 acres of land by 2007 devoted to protecting the northernmost breeding population of jaguars. The group also encouraged local people to help them monitor the population by offering a $500 reward for each photograph of a living cat taken by ranch owners who promised not to shoot jaguars on their property. In its first year, the program paid out $6,500 for photos of jaguars, mountain lions, and ocelots. Central America Paseo Pantera/Mesoamerican Biological Corridor In the early 1990s, the Wildlife Conservation Society proposed a plan for a major corridor project that would span from Southern Mexico down into Panama, connecting existing reserves, parks, and undisturbed forests of all seven Central American countries and the lower five Mexican states. They called the plan “Paseo Pantera,” or “the path of the panther,” named so because of the movement of mountain lions throughout the area. The plan attracted a lot of controversy: indigenous peoples were concerned that their land would be taken from them to be converted into parks, and some activists claimed that the program was setting the environment above human needs. These arguments caused the project to be reviewed and refashioned. In 1997, the new plan, renamed the “Mesoamerican Biological Corridor,” was unveiled as a conservation project that also promoted the welfare of indigenous people and local economies. Despite the changes, the Mesoamerican Corridor still had some flaws, most notably with regard to land use. The plan necessitated reaching agreements with numerous villages to decide what zoning for protected areas meant for the local people, how it would be enforced, and where hunting and fishing would be allowed. Rural people were largely unimpressed with the vague nature of the outline, so progress was slow. In 2005, the Central American Free Trade Agreement promised to develop many of the same areas the Mesoamerican Corridor sought to protect, but conservationists refused to oppose the development for fear of losing funding. By 2006, hundreds of millions of dollars had been spent on preserving the corridor, but only one small protected area had been created. Costa Rica Costa Rica’s Osa Peninsula is one of the most biodiverse places on the planet. In 1975, the Nature Conservancy worked with the Costa Rican government to create the first national park in the country: Corcovado. The park originally spanned 86,000 acres, nearly a third of the peninsula. The Nature Conservancy wanted to establish it as a refuge for the dozens of endemic species that occur in this small stretch of habitat. However, the project has faced many setbacks since its establishment. Conservationists quickly realized that it was too small to protect many critical species, including the jaguar, peccary, and harpy eagle. Gold was discovered in Corcovado around the same time as the park was established, and some of the natural areas within the park were illegally destroyed by miners. 
Programs to engage local people in conservation efforts quickly failed because of a lack of funding, causing people living on the border to become increasingly hostile towards the project. Lack of financial resources caused many people to resort to poaching within the park’s borders or shooting jaguars that ate their crops. Conservation groups hoped to solve these problems by launching another initiative, the Osa Biological Corridor project. The plan was designed to enlarge currently protected areas on the peninsula, and aimed to devote $10 million to developing community support for rewilding by providing education programs and new jobs protecting the reserves. South America Argentina In 1997, Douglas and Kris Tompkins created 'The Conservation Land Trust Argentina' with the goal of transforming the Iberá Wetlands. In 2018, thanks to a team of conservationists and scientists, and a donation of land by Kris Tompkins, an area was converted into a National Park, and jaguars (a species that had been extinct in the region for seven decades), anteaters and giant otters were reintroduced. A spin-off of the Tompkins Foundation, Rewilding Argentina, is an organization dedicated to the restoration of El Impenetrable National Park, in Chaco, Patagonia Park, in Santa Cruz, and the Patagonian coastal area in the province of Chubut, in addition to Iberá National Park. Brazil The red-rumped agouti and the brown howler monkey were reintroduced in Tijuca National Park (Rio de Janeiro state, Brazil) between 2010 and 2017 with the goal of restoring seed dispersal. Prior to the reintroductions, the national park did not have large or intermediate-sized seed dispersers; the increased dispersal of tree seeds following the reintroductions therefore had a significant effect on forest regeneration in the park. This is significant since the Tijuca National Park is part of the heavily fragmented Atlantic Forest and there is potential to restore many more seed dispersal interactions if seed-dispersing mammals and birds are reintroduced to forest patches where the tree species diversity remains high. The Cerrado-Pantanal Ecological Corridors Project was proposed in the 1990s to restore connectivity between two of Brazil’s core reserves: Emas National Park and the Pantanal, one of the world’s largest wetlands. It made significant progress in the early 2000s because of plans to conserve mainly areas with low human density. Another reason for wider support was a fund started to compensate farmers that lost livestock to the big cats that conservationists hope to protect using these corridors, and healthcare programs that provided free services to ranchers who committed to not killing critically endangered jaguars. Australia Colonisation has had a significant impact on Australia's native flora and fauna, and the introduction of red foxes and cats has devastated many of the smaller ground-dwelling mammals. The island state of Tasmania has become an important location for rewilding efforts because, as an island, it is easier to remove feral cat populations and manage other invasive species. The reintroduction and management of the Tasmanian devil in this state, and dingoes on the mainland, are being trialled in an effort to contain introduced predators, as well as over-populations of kangaroos. Gondwana Link, a plan conceived in 2002, was devised to connect two Australian national parks: Stirling Range and Fitzgerald River National Park. 
Much of this land had been severely degraded by harmful farming practices, and was barren of most plant and animal life. Organizers of the project worked on revegetating the land with native plant species, fifty of which were found nowhere else on Earth, in the hopes that they would attract wildlife back to the area. Five years later, they had planted over 100 species of native plants, and multiple reptile species had been spotted coming back to the region. By 2009, the Gondwana Link included over 23,000 acres of protected land. WWF-Australia runs a program called 'Rewilding Australia' whose projects include restoring the platypus in the Royal National Park, south of Sydney, eastern quolls in the Booderee National Park in Jervis Bay and at Silver Plains in Tasmania, and brush-tailed bettongs in the Marna Banggara project on the Yorke Peninsula in South Australia. Other projects around the country include: Barrington Wildlife Sanctuary, NSW; Mongo Valley, NSW; Bungador Stoney Rises Nature Reserve, Victoria; Mount Zero-Taravale Sanctuary, Queensland; Dirk Hartog Island National Park, Western Australia; Marna Banggara, SA; and Clarke Island/Lungtalanana, Tasmania. Europe In 2011, the 'Rewilding Europe' initiative was established with the aim of rewilding one million hectares of land in ten areas including the western Iberian Peninsula, Velebit, the Carpathians and the Danube delta by 2020. The project considers reintroductions of species that are still present in Europe such as the Iberian lynx, Eurasian lynx, grey wolf, European jackal, brown bear, chamois, Iberian ibex, European bison, red deer, griffon vulture, cinereous vulture, Egyptian vulture, great white pelican and horned viper, along with primitive domestic horse and cattle breeds as proxies for the extinct tarpan and aurochs (the wild ancestors of domestic horses and cattle), respectively. Since 2012, Rewilding Europe has been heavily involved in the Tauros Programme, which seeks to create a breed of cattle that resembles the aurochs by selectively breeding existing breeds of cattle. Projects also employ domestic water buffalo as a grazing analogue for the extinct European water buffalo. European Wildlife, established in 2008, advocates the establishment of a European Centre of Biodiversity at the German–Austrian–Czech borders, and the Chernobyl exclusion zone in Ukraine. European Green Belt The European Green Belt is a proposed rewilding zone that is envisioned running through over a dozen European countries using land that was historically part of the physical boundaries of the Iron Curtain. When completed, the European Green Belt will stretch over five thousand miles, from the Barents Sea off the northern coast of Norway to the Black Sea in southeast Europe. The corridor is composed of three main sections: the Fennoscandian Green Belt running through Norway, Finland, and Russia; the Central Green Belt located in parts of Germany, the Czech Republic, Austria, Slovakia, Hungary, Slovenia, and Italy; and the Balkan Green Belt in Macedonia, Romania, Bulgaria, Albania, Greece, and Turkey. It will link core reserves and parks like the Bavarian Forest in Germany, the Danube-March floodplains in Austria and Slovakia, and Sumava National Park in the Czech Republic. Proponents of the European Green Belt hope that it will increase ecotourism and sustainable farming practices across Europe. Austria The Biosphärenpark Wienerwald (Vienna Woods Biosphere Park) was created in Austria in 2003 with 37 Kernzonen (core zones) covering a total of 5,400 ha designated free from human interference. 
Britain Rewilding Britain, a charity founded in 2015, aims to promote rewilding in Britain and is a leading advocate of rewilding. Rewilding Britain has laid down 'five principles of rewilding' which it expects to be followed by affiliated rewilding projects. These are to support people and nature together, to 'let nature lead', to create resilient local economies, to 'work at nature's scale', and to secure benefits for the long-term. Celtic Reptile and Amphibian is a limited company established in 2020, with the aim of reintroducing extinct species of reptile and amphibian (such as the European pond turtle, moor frog, agile frog, common tree frog and pool frog) to Britain. Success has already been achieved with the captive breeding of the moor frog. A reintroduction trial of the European pond turtle to its historic, Holocene range in the East Anglian Fens, Brecks and Broads has been initiated, with support from the University of Cambridge. In 2020, nature writer Melissa Harrison reported a significant increase in attitudes supportive of rewilding among the British public, with plans recently approved for the release of European bison, Eurasian elk, and great bustard in England, along with calls to rewild as much as 20% of the land in East Anglia, and even return apex predators such as the Eurasian lynx, brown bear, and grey wolf. More recently, academic work on rewilding in England has highlighted that support for rewilding is by no means universal. As in other countries, rewilding in England remains controversial to the extent that some of its more ambitious aims are being 'domesticated' both in a proactive attempt to make it less controversial and in reactive response to previous controversy. Projects may also refer to their activity using terminology other than 'rewilding', possibly for political and diplomatic reasons, taking account of local sentiment or possible opposition. Examples include 'Sanctuary Nature Recovery Programme' (at Broughton) and 'nature restoration project', the preferred term used by the Cambrian Wildwood project, an area aspiring to encompass 7,000 acres in Wales. Notable rewilding sites include: Knepp Wildland. The 3,500 acre (1,400 hectare) Knepp Castle estate in West Sussex was the first major pioneer of rewilding in England, and started that land-management policy there in 2001 on land formerly used as dairy farmland. Rare species including common nightingale, turtle doves, peregrine falcons and purple emperor butterflies are now breeding at Knepp and populations of more common species are increasing. In 2019 a pair of white storks built a nest in an oak tree at Knepp. The storks were part of a group imported from Poland as a result of a programme to reintroduce the species to England run by the Roy Dennis Wildlife Foundation which has overseen reintroductions of other bird species to the UK. Broughton Hall Estate, Yorkshire. In 2021, approximately 1,100 acres (a third of the estate) was devoted to rewilding with advice from Prof. Alastair Driver of Rewilding Britain. Mapperton Estate, Dorset. In 2021, a 200 acre farm (one of the five farms composing the estate) began the process of rewilding. Alladale Wilderness Reserve, Sutherland, Scotland. This 23,000 acre estate hosts many wildlife species and engages in rewilding projects such as peatland and forest restoration, captive breeding of the Scottish wildcat, and reintroduction of the red squirrel. Visitors can engage in outdoor recreation and education programs. 
The British radio drama series The Archers featured rewilding areas in storylines in 2019 and 2020. In November 2023, Tatler described rewilding as being part of the worldview of the bopea ("bohemian peasant") movement, an elite British socio-cultural group. The Netherlands In the 1980s, analogue species (Konik ponies, Heck cattle and red deer) were introduced to the Oostvaardersplassen nature reserve, an area covering over , in order to (re)create a grassland ecology by keeping the landscape open by naturalistic grazing. This approach followed Vera's 'wood-pasture hypothesis' that grazing animals played a significant role in shaping European landscapes before the Neolithic period. Though not explicitly referred to as rewilding, many of the project's intentions were in line with those of rewilding. The case of the Oostvaardersplassen is considered controversial due to the lack of predators, and its management can be seen as having to contend with conflicting ideas regarding nature. Africa In the 1990s and early 2000s, several multi-nation rewilding projects were suggested across Africa. Some notable examples are: The Tri-National de la Sangha, a plan focused on joining three national parks in Cameroon, the Republic of the Congo, and the Central African Republic. The goal was to restore a large area of rainforest to protect the region’s forest elephants, lowland gorillas, and the historical territory of the Ba’Aka pygmy people. The Great Limpopo Transfrontier Park, proposed to protect elephants by expanding South Africa’s largest national park, Kruger, and connecting it to Zimbabwe’s Gonarezhou National Park and Mozambique’s Coutada 16, a previous hunting concession. The Kgalagadi Transfrontier Park, conceived to join two existing parks in Botswana and South Africa, protecting the wildlife that relied on the region’s desert habitat. This park, spanning over 14,000 square miles, was officially established in 2000. The Lubombo Transfrontier Conservation Area, designed to create a corridor for elephants through Mozambique, Eswatini, and South Africa. The reserve was formally established in 2000, and has been widely recognized for working with local communities and creating jobs in conservation. The Kavango-Zambezi Transfrontier Conservation Area (KAZA), the largest proposed wilderness reserve in the world, covering nearly 116,000 square miles. The project would connect thirty-six protected areas across five countries: Angola, Botswana, Namibia, Zambia, and Zimbabwe. KAZA was conceived with two main goals in mind: protecting the largest population of elephants in the world, and conserving scarce water resources by sustainably managing the region’s wetlands. Namibia In 1996, Namibia passed the Nature Conservation Act, a law that allowed communities of civilians to create their own protected wildlife conservancies to develop the country’s ecotourism sector. Conservancy creation was voluntary, but proved to be popular: by 2008, fifty-two conservancies were registered with the government, and fifteen more were seeking approval. By this time, one in four rural Namibians were involved in conservation, and around fifteen percent of the country’s land was protected. Conservancy committees were tasked with hiring park guards and rangers to crack down on illegal hunting, in exchange for limited hunting rights for conservancy members. The Namibian government relocated locally extirpated species to these newly protected areas, and community members monitored their flourishing population sizes. 
One notable success of the Nature Conservation Act is Salambala, a conservancy established in 1998. The region, covering only 359 square miles, went from having virtually no large game to boasting a population of elephants six hundred strong, a herd of fifteen hundred zebra, and three lion prides after twenty years. Surveys conducted in the conservancy showed a 47 percent increase in wildlife sightings between 2004 and 2007 alone. The local community was able to capitalize on the environmental success: by 2006, the community was earning thirty-seven times more revenue from tourism than it had been in 1998. Asia Nepal King Mahendra was crowned king of Nepal in 1955. An avid hunter, King Mahendra, along with his son, instituted Nepal’s first Western-style national park, the Royal Chitwan National Park, in 1973. Establishment of the park led to an increase in research being done on Nepal’s wildlife, including the Nepal Tiger Ecology Project, an eighteen-year-long field study conducted in Chitwan. Findings from this study convinced the Nepalese government to eventually enlarge the boundaries of Chitwan and join it with its neighboring Parsa and Valmiki wildlife reserves. In 1995, Nepal’s Parliament ratified bylaws that required 50 percent of the revenue from park entrance fees to go towards programs that would benefit local people, providing funding to build better schools and clinics and bolstering public support for parks. In 1993, the Terai Arc Landscape Program (TAL) was started to restore forested corridors between Chitwan, other Nepalese parks like Bardia National Park and Parsa Wildlife Reserve, and Indian reserves along the countries’ shared border. TAL’s goal was to add “buffer zones” around the established parks and create pathways between them to facilitate the movement of large species like elephants, tigers, and rhino. The project was initially successful, supporting over 600 endangered rhinos and attracting tens of thousands of tourists every year, but the success was disrupted by the Nepalese Civil War, which took place from 1996 to 2006. Hundreds of rhinos and tigers were killed during the war, as park guards became fewer and governmental conservation groups grew disorganized. By 2008, wildlife populations in the reserve began to grow again, but the war caused hundreds of thousands of dollars of damage to the project. Indonesia In 2001, conservationist Willie Smits began buying land from a former palm oil plantation that had been ecologically destroyed by logging. He, along with a group of Dayak villagers in Indonesia’s East Kalimantan province, replanted over twelve hundred species of trees on the land, which Smits renamed Samboja Lestari or “Everlasting Forest.” The project’s hope of returning the land to a tropical rainforest seems to be working: by 2009, temperature within the regrown forest had dropped by three to five degrees Celsius, humidity had risen by 10 percent, and rainfall had increased by 25 percent. Some 137 species of birds now reside on the land, up from only five species that had lived in the logged area. The replanted forest is also home to nine species of primates, as of 2009. See also Climate change mitigation effects of rewilding Environmental restoration Feral, a 2013 book about rewilding Great Green Wall (Africa) Involuntary park Natural landscape Permaculture Sea rewilding Species reintroduction Urban prairie Urban reforestation Urban rewilding Wildlife management References Further reading van der Land, Hans and Poortinga, Gerben (1986). 
Natuurbos in Nederland: een uitdaging, Instituut voor Natuurbeschermingseducatie. ISBN 90-70168-09-x (in Dutch) Foreman, Dave (2004). Rewilding North America: A Vision for Conservation in the 21st Century, Island Press. Fraser, Caroline (2010). Rewilding the World: Dispatches from the Conservation Revolution, Picador. Hawkins, Convery, Carver & Beyers, eds. (2023). Routledge Handbook of Rewilding, Routledge. Jepson, Paul and Blythe, Cain (2022). Rewilding: The Radical New Science of Ecological Recovery (The Illustrated Edition), The MIT Press. MacKinnon, James Bernard (2013). The Once and Future World: Nature As It Was, As It Is, As It Could Be, Houghton Mifflin Harcourt. Monbiot, George (2013). Feral: Rewilding the Land, the Sea, and Human Life, Penguin. Monbiot, George (2022). Regenesis: Feeding the World without Devouring the Planet, Penguin Books. Louys, Julien et al. (2014). "Rewilding the tropics, and other conservation translocations strategies in the tropical Asia-Pacific region". doi:10.1002/ece3.1287 Root-Bernstein, Meredith et al. (2017) "Rewilding South America: Ten key questions". doi:10.1016/j.pecon.2017.09.007 Pereira, Henrique M., & Navarro, Laetitia (2015). Rewilding European Landscapes, Springer. Pettorelli, Durant & du Troit, eds. (2019). Rewilding, Cambridge University Press. Tree, Isabella (2018), Wilding: The Return of Nature to a British Farm, Picador, Wilson, Edward Osborne (2017). Half-Earth: Our Planet's Fight for Life, Liveright (W.W. Norton). Wright, Susan (2018). SCOTLAND: A Rewilding Journey, Wild Media Foundation. Thulin, Carl-Gustaf, & Röcklinsberg, Helena (2020). "Ethical Considerations for Wildlife Reintroductions and Rewilding". External links Projects American Prairie Reserve Area de Conservacion Guanacaste, Costa Rica European Green Belt European Wildlife - European Centre of Biodiversity Gondwana Link Heal Rewilding Highlands Rewilding Lewa Wildlife Conservancy Peace Parks Foundation Pleistocene Park Rewilding Britain Rewilding Europe Rewilding Australia Rewilding Institute Self-willed land Scotland: The Big Picture Terai Arc Landscape Project (WWF) Wildland Network UK Wildlands Network N. America (formerly Wildlands project) Wisentgrazing-project, Holland Information Book on experimental methods to rewild forests with grazers and dead and decaying wood an docu-film about the reintroduction of wild horses 15 years after Rewilding the World: Dispatches from the Conservation Revolution "Rewilding the World: A Bright Spot for Biodiversity" Rewilding and Biodiversity: Complementary Goals for Continental Conservation, Michael Soulé & Reed Noss, Wild Earth, Wildlands Project Fall 1998 "For more wonder, rewild the world", George Monbiot's July 2013 TED talk Bengal Tiger relocated to Sariska from Ranthambore | Times of India Animal reintroduction Conservation biology
Rewilding
Biology
9,651
2,297,111
https://en.wikipedia.org/wiki/Belpaire%20firebox
The Belpaire firebox is a type of firebox used on steam locomotives. It was invented by Alfred Belpaire of Belgium in 1864. Today it generally refers to the shape of the outer shell of the firebox which is approximately flat at the top and square in cross-section, indicated by the longitudinal ridges on the top sides. However, it is the similar square cross-section inner firebox which provides the main advantages of this design i.e. it has a greater surface area at the top of the firebox where the heat is greatest, improving heat transfer and steam production, compared with a round-top shape. The flat firebox top would make supporting it against pressure more difficult (e.g. by means of girders, or stays) compared to a round-top. However, the use of a similarly shaped square outer boiler shell allows simpler perpendicular stays to be used between the shells. The Belpaire outer firebox is, nevertheless, more complicated and expensive to manufacture than a round-top version. Due to the increased expense involved in manufacturing this boiler shell, just two major US railroads adopted the Belpaire firebox, the Pennsylvania and the Great Northern. In Britain most locomotives employed the design after the 1920s, except notably those of the LNER. Description In steam boilers, the firebox is encased in a water jacket on five sides, (front, back, left, right and top) to ensure maximum heat transfer to the water. Stays are used to support the surfaces against the high pressure between the outside wall and the interior firebox wall, and partially to conduct heat into the boiler interior. In many boiler designs, the top of the boiler is cylindrical above the firebox, matching the contour of the rest of the boiler and naturally resisting boiler pressure more easily. In the Belpaire design, the outer upper boiler wall sheets are roughly parallel with the flat upper firebox sheets giving it a squarer shape. The advantage was a greater surface area for evaporation, and less susceptibility to priming (foaming), involving water getting into the cylinders, compared with the narrowing upper space of a classic cylindrical boiler. This allowed G.J. Churchward, the chief mechanical engineer of the Great Western Railway, to dispense with a steam dome to collect steam. Churchward also improved the Belpaire design, maximising the flow of water in a given size of boiler by tapering the firebox and boiler barrel outwards to the area of highest steam production at the front of the firebox. The shape of the Belpaire firebox also allows easier placement of the boiler stays, because they are at right angles to the sheets. Despite these claimed advantages, other locomotive boilers such as the LNER Engine Pacifics had flat-topped inner fireboxes with round-topped outer shells and with as good a thermal performance as the Belpaire type, without suffering major problems with staying between shells. In the USA, the Belpaire firebox was introduced in about 1882 or 83 by R. P. C. Sanderson, who at the time was working for the Shenandoah Valley Railroad (essentially a subsidiary of the Pennsylvania Railroad, since they shared the same financial backing from E. W. Clark & Co.). Sanderson was an Englishman (later naturalized as an American citizen) who had attained his engineering degree from Cassel in Germany in 1875. Having obtained knowledge of a special form of locomotive boiler (the Belpaire), Sanderson wrote to an old acquaintance from his college days who was working at the Henschel locomotive factory at Cassel. 
The acquaintance sent Sanderson a tracing of Henschel's latest Belpaire boiler. When shown the design, Charles Blackwell, Superintendent of Motive Power for the Shenandoah Valley Railroad, was very pleased and placed an order with the Baldwin and Grant Locomotive Works for two passenger engines, afterwards numbered 94 and 95, and five freight engines, afterwards numbered 56, 57, 58, 59, and 60. That marked the beginning of the use of the Belpaire-type locomotive boiler in the United States. The Pennsylvania Railroad used Belpaire fireboxes on nearly all of its steam locomotives. The distinctive square shape of the boiler cladding at the firebox end of locomotives practically became a "Pennsy" trademark, as otherwise only the Great Northern used Belpaire fireboxes in significant numbers in the USA. Gallery See also Wootten firebox References Steam locomotive fireboxes Steam boilers
Belpaire firebox
Engineering
924
35,274,013
https://en.wikipedia.org/wiki/NGC%205823
NGC 5823 (also known as Caldwell 88) is an open cluster in the southern constellation of Circinus, near (and extending across) its border with the constellation Lupus. It was discovered by Scottish astronomer James Dunlop in 1826. External links Open clusters Circinus Astronomical objects discovered in 1826 5823 Discoveries by James Dunlop
NGC 5823
Astronomy
68
17,940,038
https://en.wikipedia.org/wiki/Camunian%20rose
The Camunian rose is the name given to a particular symbol represented among the rock carvings of Camonica Valley (Brescia, Italy). It consists of a meandering closed line that winds around nine cup marks. It can be symmetrical, asymmetrical or form a swastika. Meaning and variations Many theories exist about its meaning: Emmanuel Anati suggests that it might symbolize a complex religious concept, perhaps a solar symbol linked to astral movement. In Val Camonica this motif dates back to the Iron Age, particularly from the 7th to 1st centuries BC. Only one doubtful case is datable to the Final Bronze Age (c. 1100 BC). These figures are placed mainly in the Middle Camonica Valley (Capo di Ponte, Foppe di Nadro, Sellero, Ceto and Paspardo), but numerous cases are in the Low Valley too (Darfo Boario Terme and Esine). The motif has been deeply studied by Paola Farina, who created a corpus of all the "Camunian roses" known in Val Camonica: she counted 84 "roses" engraved on 27 rocks. Three basic types have been determined: swastika type: the 9 cup-marks make a 5 by 5 cross; the contour forms four arms that bend about 90° and every arm includes one of the top cup-marks of the cross. Sixteen “roses” of this type have been found; asymmetric-swastika type: the arrangement of the 9 cup-marks is the same as in the previous type, but the contour is different, because only two arms bend 90°, while the other ones join together in a single bilobate arm. There are 12 “roses” of this type; quadrilobate type: the 9 cup-marks are aligned in three columns of three cups; the contour develops into four orthogonal and symmetric arms, and each one includes a cup-mark. It is the most widespread type of Camunian rose; 56 examples exist. Interpretation is not easy for a symbol belonging to a long-lost culture, but Paola Farina suggests that the "Camunian rose" originally had a solar meaning, which then developed into the wider meaning of a positive power bringing life and good luck. The name of the symbol, derived from its resemblance to a flower, is a modern invention. A stylized "Camunian rose" has become the symbol of the Lombardy region and is featured on its flag. See also Camunni Rock Drawings in Valcamonica Looped square Swastika Stone Lauburu Western use of the swastika in the early 20th century References Bibliography Farina, Paola (1998). La “rosa camuna” nell’arte rupestre della Valcamonica, NAB, 6, pp. 185–205. External links Symbols Rock art in Europe Swastika
Camunian rose
Mathematics
607
29,283,011
https://en.wikipedia.org/wiki/Immunophysics
Immunophysics is a novel interdisciplinary research field using immunological, biological, physical and chemical approaches to elucidate and modify immune-mediated mechanisms and to expand our knowledge on the pathomechanisms of chronic immune-mediated diseases such as arthritis, inflammatory bowel disease, asthma and chronic infections. Background Immune reactions are tightly regulated and usually self-limited. Dysregulation can result in chronic inflammatory diseases (immunochronicity). In addition to biochemical molecular mechanisms, physical factors influence the immune system. Such components include: Microenvironmental factors like tonicity, pH, oxygen pressure and the redox status of immune cells Mechanical factors, such as tissue pressure, cellular stiffness and cell motility Cell membrane physics such as membrane composition and particles The research field of immunophysics aims to investigate the influence of these physicochemical parameters on the function of the immune system in health and disease. Methods Immunophysical techniques include nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), dual-energy computed tomography, fluorescence-lifetime imaging microscopy, multispectral optoacoustic tomography (MSOT), high-throughput microfluidic cytometry, interferometric scattering microscopy (iSCAT) and cryogenic optical localization in 3D (COLD). Applications Immunophysical research is considered to open new perspectives for the investigation of the pathomechanisms of immune-mediated inflammatory diseases, help to develop novel detection methods and diagnostic tools in these diseases and advance the treatment possibilities of such diseases. See also Inflammation References External links Department of Medicine 3, Universitätsklinikum Erlangen, FAU Erlangen-Nürnberg Department of Medicine 1, Universitätsklinikum Erlangen, FAU Erlangen-Nürnberg Institute of Clinical Microbiology, Immunology and Hygiene, Universitätsklinikum Erlangen, FAU Erlangen-Nürnberg Department of Infection Biology, Universitätsklinikum Erlangen, FAU Erlangen-Nürnberg Department of Chemistry and Pharmacy, Universitätsklinikum Erlangen, FAU Erlangen-Nürnberg Max Planck Institute for the Science of Light BioMolecular Modeling & Design Laboratory Branches of immunology Medical physics
Immunophysics
Physics,Biology
495
61,351
https://en.wikipedia.org/wiki/Laurent%20polynomial
In mathematics, a Laurent polynomial (named after Pierre Alphonse Laurent) in one variable X over a field F is a linear combination of positive and negative powers of the variable with coefficients in F. Laurent polynomials in X form a ring denoted F[X, X^-1]. They differ from ordinary polynomials in that they may have terms of negative degree. The construction of Laurent polynomials may be iterated, leading to the ring of Laurent polynomials in several variables. Laurent polynomials are of particular importance in the study of complex variables. Definition A Laurent polynomial with coefficients in a field F is an expression of the form p = Σ_k p_k X^k, where X is a formal variable, the summation index k is an integer (not necessarily positive) and only finitely many coefficients p_k are non-zero. Two Laurent polynomials are equal if their coefficients are equal. Such expressions can be added, multiplied, and brought back to the same form by reducing similar terms. Formulas for addition and multiplication are exactly the same as for the ordinary polynomials, with the only difference that both positive and negative powers of X can be present: (Σ_i a_i X^i) + (Σ_i b_i X^i) = Σ_i (a_i + b_i) X^i and (Σ_i a_i X^i) · (Σ_j b_j X^j) = Σ_k (Σ_{i+j=k} a_i b_j) X^k. Since only finitely many coefficients a_i and b_j are non-zero, all sums in effect have only finitely many terms, and hence represent Laurent polynomials (a short computational sketch of these operations is given below). Properties A Laurent polynomial over F may be viewed as a Laurent series in which only finitely many coefficients are non-zero. The ring of Laurent polynomials F[X, X^-1] is an extension of the polynomial ring F[X] obtained by "inverting X". More rigorously, it is the localization of the polynomial ring in the multiplicative set consisting of the non-negative powers of X. Many properties of the Laurent polynomial ring follow from the general properties of localization. The ring of Laurent polynomials is a subring of the rational functions. The ring of Laurent polynomials over a field is Noetherian (but not Artinian). If R is an integral domain, the units of the Laurent polynomial ring R[X, X^-1] have the form uX^k, where u is a unit of R and k is an integer. In particular, if K is a field then the units of K[X, X^-1] have the form aX^k, where a is a non-zero element of K. The Laurent polynomial ring R[X, X^-1] is isomorphic to the group ring of the group Z of integers over R. More generally, the Laurent polynomial ring in n variables is isomorphic to the group ring of the free abelian group of rank n. It follows that the Laurent polynomial ring can be endowed with a structure of a commutative, cocommutative Hopf algebra. See also Jones polynomial References Commutative algebra Polynomials Ring theory
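The addition and multiplication rules above translate directly into a short program. The following is a minimal illustrative sketch, not part of the original article: it represents a Laurent polynomial over the rationals as a Python dictionary mapping integer exponents (possibly negative) to coefficients. The function names laurent_add and laurent_mul and the sample polynomials are hypothetical choices made for this example.

```python
# Minimal sketch: Laurent polynomial arithmetic over the rationals,
# storing a polynomial as {exponent: coefficient}; exponents may be negative.
from collections import defaultdict
from fractions import Fraction


def laurent_add(p, q):
    """Add two Laurent polynomials given as exponent -> coefficient dicts."""
    result = defaultdict(Fraction)
    for poly in (p, q):
        for exponent, coeff in poly.items():
            result[exponent] += coeff
    # Drop zero coefficients so only finitely many non-zero terms remain.
    return {e: c for e, c in result.items() if c != 0}


def laurent_mul(p, q):
    """Multiply two Laurent polynomials; exponents add, since X^i * X^j = X^(i+j)."""
    result = defaultdict(Fraction)
    for i, a in p.items():
        for j, b in q.items():
            result[i + j] += a * b
    return {e: c for e, c in result.items() if c != 0}


# Example: p = 2*X^-2 + 3 + X and q = X^-1 + 5*X^2.
p = {-2: Fraction(2), 0: Fraction(3), 1: Fraction(1)}
q = {-1: Fraction(1), 2: Fraction(5)}

print(laurent_add(p, q))  # 2*X^-2 + X^-1 + 3 + X + 5*X^2
print(laurent_mul(p, q))  # 2*X^-3 + 3*X^-1 + 11 + 15*X^2 + 5*X^3
```

Because exponents may be negative, multiplying by the dictionary {-1: Fraction(1)} acts as multiplication by X^-1, which mirrors the description of the Laurent polynomial ring as the polynomial ring with X inverted.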
Laurent polynomial
Mathematics
485
5,901,542
https://en.wikipedia.org/wiki/Nickel%28II%29%20nitrate
Nickel nitrate is the inorganic compound Ni(NO3)2 or any hydrate thereof. In the hexahydrate, the nitrate anions are not bonded to nickel. Other hydrates have also been reported: Ni(NO3)2.9H2O, Ni(NO3)2.4H2O, and Ni(NO3)2.2H2O. It is prepared by the reaction of nickel oxide with nitric acid: NiO + 2 HNO3 + 5 H2O → Ni(NO3)2.6H2O The anhydrous nickel nitrate is typically not prepared by heating the hydrates. Rather it is generated by the reaction of hydrates with dinitrogen pentoxide or of nickel carbonyl with dinitrogen tetroxide: Ni(CO)4 + 2 N2O4 → Ni(NO3)2 + 2 NO + 4 CO The hydrated nitrate is often used as a precursor to supported nickel catalysts. Structure Nickel(II) compounds with oxygenated ligands often feature octahedral coordination geometry. Two polymorphs of the tetrahydrate Ni(NO3)2.4H2O have been crystallized. In one the monodentate nitrate ligands are trans while in the other they are cis. Reactions and uses Nickel(II) nitrate is primarily used in electrotyping and electroplating of metallic nickel. In heterogeneous catalysis, nickel(II) nitrate is used to impregnate alumina. Pyrolysis of the resulting material gives forms of Raney nickel and Urushibara nickel. In homogeneous catalysis, the hexahydrate is a precatalyst for cross coupling reactions. References Nickel compounds Nitrates IARC Group 1 carcinogens Oxidizing agents
Nickel(II) nitrate
Chemistry
383
2,716,524
https://en.wikipedia.org/wiki/Antianalgesia
Antianalgesia is the ability of some endogenous chemicals (notably cholecystokinin and neuropeptide Y) to counter the effects of exogenous analgesics (such as morphine) or endogenous pain inhibiting neurotransmitters/modulators, such as the endogenous opioids. A learned form can be established using methods similar to the learning principle of conditioned inhibition, and has been demonstrated in rats. References Pain
Antianalgesia
Chemistry
102
47,180,610
https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Centre%20%28Ireland%29
The National Cyber Security Centre (NCSC) is a government computer security organisation in Ireland, an operational arm of the Department of the Environment, Climate and Communications. The NCSC was developed in 2013 and formally established by the Irish government in July 2015. It is responsible for Ireland's cyber security, with a primary focus on securing government networks, protecting critical national infrastructure, and assisting businesses and citizens in protecting their own systems. The NCSC incorporates the Computer Security Incident Response Team (CSIRT-IE). The NCSC is headquartered at Department of the Environment, Climate and Communications, Tom Johnson House, Haddington Road, D04 K7X4. Mandate and organisation The mandate for the NCSC includes; activities to reduce the vulnerability of critical systems and networks within the state to incidents and cyber-attacks; effective response when such attacks occur; responsibility for the protection of critical information infrastructure; establishing and maintaining cooperative relationships with national and international partners. Threats identified to Ireland's critical infrastructure and government networks include: lone individuals, activist groups, criminal groups, terrorist groups, and nation states seeking to gather intelligence or to damage or degrade infrastructure. Incidents arising through extreme weather, human error and hardware or software failure also pose significant risks to individuals, businesses and public administration. Work relating to the National Cyber Security Centre, and any records associated with the security of ICT systems in the state and outside it, are exempt from being disclosed under freedom of information (FOI). Richard Browne was appointed as the NCSC's director in January 2022, having served as acting director for the previous 18 months. Computer Security Incident Response Team (CSIRT-IE) The Computer Security Incident Response Team (CSIRT-IE) was established in late 2011 (prior to the official formation of the NCSC) within the Department of Communications, Energy and Natural Resources, and includes secondees from other government agencies. The main role of CSIRT-IE is to provide a 24/7 expert emergency response to computer security incidents across all public sector bodies, as well as to provide advice to reduce threat exposure. CSIRT-IE engages in emergency planning with government agencies overseen by the Office of Emergency Planning (OEP) within the Department of Defence and the Government Task Force on Emergency Planning, chaired by the Minister for Defence. CSIRT-IE shares information with the European Union Agency for Network and Information Security (ENISA). Outlining the future core aspects of the work of the NCSC, the government's National Cyber Security Strategy 2015-2017 states that the NCSC is to seek formal international accreditation for a Government CSIRT (g/CSIRT), expected in 2016, and accreditation will be sought for a formal National CSIRT (n/CSIRT), while also developing a capacity in the area of Industrial Control Systems and SCADA, which are used to run vital state networks such as electricity, water and telecommunications. Inter-departmental cooperation There is a strong culture of cooperation between the National Cyber Security Centre and the Irish Defence Forces in areas regarding technical skill sets, technical information sharing and exercise participation. 
Arrangements are due to be formalised by means of a Service Level Agreement with the Department of Defence, including a mechanism for the immediate sharing of technical expertise and information in the event of a major national cyber incident or emergency. The branch of the Irish military with responsibility for cyber defence is the Communications and Information Services Corps (CIS). The Garda Síochána, the national police service, is involved with the NCSC in a preventative and investigative capacity, with regard to national security and computer crime. Its liaison relationships with international security services are particularly helpful to the NCSC in identifying emerging threats and vulnerabilities, and establishing best practice preventative measures. There is to be a Memorandum of Understanding with the Department of Justice on this matter, and upcoming cyber legislation will support the work of the National Cyber Security Centre. There is also a Memorandum of Understanding with the Centre for Cybersecurity & Cybercrime Investigation (CCI) at University College Dublin, Europe's leading centre for research and education in cybersecurity, cybercrime and digital forensics. International cooperation In 2024 the NCSC took part in Locked Shields jointly with a team from South Korea, run by Cooperative Cyber Defence Centre of Excellence. The Irish team played the part of a cybersecurity team for the fictional state of Berylia, which was attacked by hackers from the fictional state of Crimsonia. Ireland joined CCDCOE in 2023 and took part for the first time in Locked Shields in 2024. Richard Browne said of the simulated attack "It’s like the 2021 incident but with very sophisticated actors at the other end, not just petty criminals like the HSE attack". See also Communications & Information Services Corps (CIS) Garda Bureau of Fraud Investigation (GBFI) Garda National Cyber Crime Bureau Office of Emergency Planning (OEP) Centre for Cybersecurity & Cybercrime Investigation (UCD CCI) Computer emergency response team National Cyber Security Centre (disambiguation) in other countries References External links National Cyber Security Strategy 2015-2017 (Department of Communications, Energy and Natural Resources) Irish intelligence agencies Signals intelligence agencies Cyberwarfare Cryptography organizations Information technology management Ireland Software engineering organizations Communications in the Republic of Ireland Telecommunications in the Republic of Ireland Cybercrime in the Republic of Ireland Computer security organizations
National Cyber Security Centre (Ireland)
Technology,Engineering
1,097
2,091,149
https://en.wikipedia.org/wiki/The%20Horns%20of%20Nimon
The Horns of Nimon is the fifth and final broadcast serial of the 17th season of the British science fiction television series Doctor Who, which was first broadcast in four weekly parts on BBC1 from 22 December 1979 to 12 January 1980. It is the last broadcast of David Brierley's voice as K9 (as John Leeson returned in the next season). The serial is set on the planets Skonnos and Crinoth. In the serial, minotaur-like aliens called the Nimons plot to invade Skonnos by creating a tunnel in time and space linked between two artificial black holes. Plot The declining Skonnan Empire is under control of a mysterious horned being called the Nimon. It resides inside a labyrinthine Power Complex on the planet Skonnos, and communicates only with the Skonnan leader, Soldeed, who reveres the Nimon as a god. The Nimon demands a regular tribute of young people, who are flown in from the nearby planet Aneth, as well as a supply of hymetusite crystals. A transport ship bearing the sacrifices from Aneth breaks down and becomes stranded in interplanetary space, close to a black hole. Outside the ship, the TARDIS materialises. The Fourth Doctor attempts to save the TARDIS from being drawn into the black hole by attaching it to the Skonnan ship with a force field. He and Romana then board the ship, leaving K9 behind. Once aboard they find a cargo of hymetusite crystals and a hold full of young prisoners from Aneth, led by Seth. The Doctor and Romana are captured at gunpoint by the co-pilot, who forces them to fix the ship using a hymetusite crystal. The Doctor returns to the TARDIS to get supplies, and becomes stranded when the ship's engines start. Steering the TARDIS away from the black hole, he travels to Skonnos. On Skonnos, the Nimon is enraged by the delayed sacrifice and threatens to withhold the promised armaments that will help rebuild the Skonnan Empire. The ship arrives, bearing the sacrifices and Romana, who are forced to carry the hymetusite crystals into the Power Complex. Within the labyrinth, the walls seem to shift and change, forcing them towards the Nimon. They discover desiccated husks of bodies, previous Anethans who have been drained of life. They meet the Nimon, who has the power to fire deadly laser beams out of his horns. Meanwhile, the TARDIS has materialised on Skonnos. The Doctor enters the labyrinth and distracts the Nimon, enabling Romana, Seth and Teka to escape. In the centre of the Power Complex, the Nimon operates a transit system, opening a tunnel through a pair of black holes. Large globes carrying two more Nimon appear. It is revealed that the Nimon are a parasitic race who travel via artificial black holes between planets, draining their resources, before moving on to conquer new worlds. They refer to this as "the Great Journey of Life". They are now abandoning the distant Planet Crinoth to take over Skonnos. Soldeed questions his faith when confronted with multiple Nimons. Romana accidentally travels through the tunnel to Crinoth, which she finds overrun with Nimons. She is assisted by an old man named Sezom, who gives her a mineral called jacenite which can be used to destroy Nimons. Sezom admits that he was the one who helped the Nimons take over, falling for their promises (much like Soldeed). He realized too late that the small tributes were only the start of destruction of the whole population. Romana is brought back to Skonnos. Amid a struggle, Seth has taken Soldeed's weapon, a ceremonial staff, and fitting it with the jacenite, he stuns the Nimons. 
K9, who has been held captive in Soldeed's laboratory, shoots the remaining Nimon. Soldeed is also shot by Seth, but sets off a self-destruct system to destroy the Power Complex. Guided by K9, the Doctor and his party escape from the labyrinth. The Skonnans evacuate their city as the Nimon Power Complex explodes. Seth and Teka take a spacecraft to return to Aneth, while the Nimon-infested Crinoth disintegrates. Production Outside references The Horns of Nimon is an analogue of the Classical Greek myth of Theseus and the Minotaur. Several names in Read's script allude to the original myth: the name of the planet Skonnos is close to Knossos; the Anethans, like the ancient Athenians, are sent to die in a Labyrinth; its guardian is Soldeed, whose name refers to Daedalus, designer of the Labyrinth at Knossos; the eponymous bull-headed Nimon is based on the Minotaur; and the Anethan hero Seth is based on Theseus. In the closing scenes of The Horns of Nimon, the Doctor alludes to the Ancient Greek story by reminding Seth to paint his ship white (in reference to Theseus's return to Athens), and insinuates that he was personally involved in the original events on Knossos, when he "caused quite a hoohah … Other times, other places". Broadcast and reception Paul Cornell, Martin Day, and Keith Topping gave a mixed review of the serial, stating "With its cheap design work, and a wonderfully watchable OTT performance from Graham Crowden, The Horns of Nimon is by turns brilliant and dull". In Doctor Who: The Television Companion, David J. Howe and Stephen James Walker noted that the serial had acquired a low reputation but they considered this to be undeserved. Although "admittedly a little more light hearted than usual", it did feature a performance by Tom Baker which was "rather more serious and intense here than in most other stories of a similar vintage". Production values were "no worse than on many other stories of this era, and rather better than on some" and the story was "ingenious and fun". Patrick Mulkern of Radio Times was very critical of the serial, which he described as "a turgid quagmire of vapid characters, amateur dramatics, mirthless antics and clattering sets". Although the script contained "interesting concepts", these were not portrayed well due to the "absurd" Nimon costumes. Mulkern also thought the cast gave "terrible performances", with the exception of Tom Baker and Lalla Ward. In A Critical History of Doctor Who on Television, John Kenneth Muir opined that Read's use of classical allusions to Greek mythology served little purpose, but noted that the December broadcast slot of The Horns of Nimon coincided with the British panto season, obliging the scriptwriter to include in-jokes. He considered the serial "memorable only in its superficial, mythology-based trappings, not in its content". Writing in 2017, Carey Fleiner linked the mythological theme of The Horns of Nimon to the idea of the monomyth popularized by the author Joseph Campbell in his 1949 book The Hero with a Thousand Faces. She noted that Campbell's writing had influenced George Lucas for his film Star Wars, released two years before The Horns of Nimon, and suggested that the popularity of Star Wars had inspired mythological content in a number of Doctor Who serials. Den of Geek's Andrew Blair selected The Horns of Nimon as one of the ten Doctor Who stories that would make great musicals. Commercial releases In print Terrance Dicks' novelisation was published by Target Books in October 1980.
Dicks begins with a history of the Skonnan Empire and Soldeed, culminating in the arrival of the Nimon. Original author Anthony Read completed a new novelisation for audiobook publisher AudioGO in 2013, but with that company's suspension of operations and Read's death in 2015, the likelihood of its eventual release is now unclear. An audiobook of the Terrance Dicks version was published by BBC audio in April 2024, read by Geoffrey Beevers. Home media The Horns of Nimon was released on VHS in June 2003. It was released in a DVD box set entitled Myths and Legends along with The Time Monster and Underworld in March 2010. In Region 1 North America DVD, Horns of Nimon as a single title, with extras and commentary, was released on 6 July 2010. A "no extras" DVD was released as part of the Doctor Who DVD Files in Issue 139 on 30 April 2014. References External links Reviews Past Times: The Horns of Nimon Review at Nebula One Target novelisation 1979 British television episodes 1980 British television episodes Cultural depictions of Theseus Doctor Who serials novelised by Terrance Dicks Fiction about black holes Fourth Doctor serials Minotaur in popular culture
The Horns of Nimon
Physics
1,831
3,295,074
https://en.wikipedia.org/wiki/Platinum%20silicide
Platinum silicide, also known as platinum monosilicide, is the inorganic compound with the formula PtSi. It is a semiconductor that turns into a superconductor when cooled to 0.8 K. Structure and bonding The crystal structure of PtSi is orthorhombic, with each silicon atom having six neighboring platinum atoms. The distances between the silicon and the platinum neighbors are as follows: one at a distance of 2.41 angstroms, two at a distance of 2.43 angstroms, one at a distance of 2.52 angstroms, and the final two at a distance of 2.64 angstroms. Each platinum atom has six silicon neighbors at the same distances, as well as two platinum neighbors, at distances of 2.87 and 2.90 angstroms. All of the distances over 2.50 angstroms are considered too far to really be involved in bonding interactions of the compound. As a result, it has been shown that two sets of covalent bonds compose the bonds forming the compound. One set is the three-center Pt–Si–Pt bond, and the other set the two-center Pt–Si bonds. Each silicon atom in the compound has one three-center bond and two two-center bonds. The thinnest film of PtSi would consist of two alternating planes of atoms, a single sheet of orthorhombic structures. Thicker layers are formed by stacking pairs of the alternating sheets. The mechanism of bonding in PtSi is more similar to that of pure silicon than that of pure platinum or Pt2Si, though experimentation has revealed metallic bonding character in PtSi that pure silicon lacks. Synthesis Methods PtSi can be synthesized in several ways. The standard method involves depositing a thin film of pure platinum onto silicon wafers and heating in a conventional furnace at 450–600 °C for half an hour in inert ambients. The process cannot be carried out in an oxygenated environment, as this results in the formation of an oxide layer on the silicon, preventing PtSi from forming. A secondary technique for synthesis requires a sputtered platinum film deposited on a silicon substrate. Due to the ease with which PtSi can become contaminated by oxygen, several variations of the methods have been reported. Rapid thermal processing has been shown to increase the purity of PtSi layers formed. Lower temperatures (200–450 °C) were also found to be successful; higher temperatures produce thicker PtSi layers, though temperatures in excess of 950 °C formed PtSi with increased resistivity due to clusters of large PtSi grains. Kinetics Regardless of the synthesis method employed, PtSi forms in the same way. When pure platinum is first heated with silicon, Pt2Si is formed. Once all the available Pt and Si are used and the only available surfaces are Pt2Si, the silicide will begin the slower reaction of converting into PtSi. The activation energy for the Pt2Si-forming reaction is around 1.38 eV, while it is 1.67 eV for PtSi. Oxygen is extremely detrimental to the reaction, as it will bind preferentially to Pt, limiting the sites available for Pt–Si bonding and preventing the silicide formation. An oxygen partial pressure as low as 10−7 has been found to be sufficient to slow the formation of the silicide. To avoid this issue, inert ambients are used, as well as small annealing chambers to minimize the amount of potential contamination. The cleanliness of the metal film is also extremely important, and unclean conditions result in poor PtSi synthesis. In certain cases an oxide layer can be beneficial. When PtSi is used as a Schottky barrier, an oxide layer prevents wear of the PtSi.
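The two activation energies quoted above can be put in perspective with a short calculation. The following Python sketch is not from the source: it assumes simple Arrhenius behaviour with equal pre-exponential factors and an anneal temperature of 550 °C, both purely illustrative assumptions, to show roughly how much faster the platinum-rich first step proceeds than the subsequent conversion to PtSi.

# Illustrative sketch (not from the source): compares Arrhenius factors for the two
# silicide-formation steps using the activation energies quoted above (1.38 eV and 1.67 eV).
# Equal pre-exponential factors and a 550 C anneal are assumptions made only for illustration.
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(ea_ev: float, temp_c: float) -> float:
    """Return exp(-Ea / kT) for an activation energy in eV at a Celsius temperature."""
    temp_k = temp_c + 273.15
    return math.exp(-ea_ev / (K_B_EV * temp_k))

anneal_c = 550.0  # assumed anneal temperature, within the 450-600 C furnace range above
f_pt2si = arrhenius_factor(1.38, anneal_c)  # first, faster step
f_ptsi = arrhenius_factor(1.67, anneal_c)   # second, slower step

# With equal pre-exponential factors, the ratio of the Arrhenius terms indicates how much
# faster the first step proceeds than the conversion to PtSi at this temperature (~60x).
print(f"Pt2Si step / PtSi step rate ratio at {anneal_c:.0f} C: {f_pt2si / f_ptsi:.0f}x")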
Applications PtSi is a semiconductor and a Schottky barrier with high stability and good sensitivity, and can be used in infrared detection, thermal imaging, or ohmic and Schottky contacts. Platinum silicide was most widely studied and used in the 1980s and 90s, but has become less commonly used, due to its low quantum efficiency. PtSi is now most commonly used in infrared detectors, due to the large size of wavelengths it can be used to detect. It has also been used in detectors for infrared astronomy. It can operate with good stability up to 0.05 °C. Platinum silicide offers high uniformity of arrays imaged. The low cost and stability makes it suited for preventative maintenance and scientific infrared imaging. See also HgCdTe Indium antimonide References Platinum(IV) compounds Semiconductor materials Infrared sensor materials Transition metal silicides
Platinum silicide
Chemistry
916
39,291
https://en.wikipedia.org/wiki/Postcondition
In computer programming, a postcondition is a condition or predicate that must always be true just after the execution of some section of code or after an operation in a formal specification. Postconditions are sometimes tested using assertions within the code itself. Often, postconditions are simply included in the documentation of the affected section of code. For example: The result of a factorial is always an integer and greater than or equal to 1. So a program that calculates the factorial of an input number would have postconditions that the result after the calculation be an integer and that it be greater than or equal to 1. Another example: a program that calculates the square root of an input number might have the postconditions that the result be a number and that its square be equal to the input. Postconditions in object-oriented programming In some software design approaches, postconditions, along with preconditions and class invariants, are components of the software construction method design by contract. The postcondition for any routine is a declaration of the properties which are guaranteed upon completion of the routine's execution. As it relates to the routine's contract, the postcondition offers assurance to potential callers that in cases in which the routine is called in a state in which its precondition holds, the properties declared by the postcondition are assured. Eiffel example The following example written in Eiffel sets the value of a class attribute hour based on a caller-provided argument a_hour. The postcondition follows the keyword ensure. In this example, the postcondition guarantees, in cases in which the precondition holds (i.e., when a_hour represents a valid hour of the day), that after the execution of set_hour, the class attribute hour will have the same value as a_hour. The tag "hour_set:" describes this postcondition clause and serves to identify it in case of a runtime postcondition violation.

set_hour (a_hour: INTEGER)
		-- Set `hour' to `a_hour'
	require
		valid_argument: 0 <= a_hour and a_hour <= 23
	do
		hour := a_hour
	ensure
		hour_set: hour = a_hour
	end

Postconditions and inheritance In the presence of inheritance, the routines inherited by descendant classes (subclasses) do so with their contracts, that is their preconditions and postconditions, in force. This means that any implementations or redefinitions of inherited routines also have to be written to comply with their inherited contracts. Postconditions can be modified in redefined routines, but they may only be strengthened. That is, the redefined routine may increase the benefits it provides to the client, but may not decrease those benefits. See also Precondition Design by contract Hoare logic Invariants maintained by conditions Database trigger References Programming constructs Formal methods Logic in computer science Mathematics of computing Articles with example Eiffel code
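For comparison with the Eiffel example, the factorial postconditions described earlier in the article can also be checked with plain assertions. The following Python sketch is added here for illustration only; the function name and the style of the checks are choices made for this example, not an established convention.

# Illustrative sketch: checking the factorial postconditions described above
# (the result is an integer and is greater than or equal to 1) with assertions.
def factorial(n: int) -> int:
    result = 1
    for i in range(2, n + 1):
        result *= i
    # Postconditions, tested as assertions within the code itself.
    assert isinstance(result, int), "postcondition violated: result must be an integer"
    assert result >= 1, "postcondition violated: result must be >= 1"
    return result

print(factorial(5))  # 120; both postconditions hold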
Postcondition
Mathematics,Engineering
630
475,202
https://en.wikipedia.org/wiki/Rg%20chromaticity
The rg chromaticity space, two dimensions of the normalized RGB space, is a chromaticity space, a two-dimensional color space in which there is no intensity information. In the RGB color space a pixel is identified by the intensity of red, green, and blue primary colors. Therefore, a bright red can be represented as (R, G, B) = (255, 0, 0), while a dark red may be (40, 0, 0). In the normalized RGB space, or rg space, a color is represented by the proportion of red, green, and blue in the color, rather than by the intensity of each. Since these proportions must always add up to a total of 1, we are able to quote just the red and green proportions of the color, and can calculate the blue value if necessary. Conversion between RGB and RG Chromaticity Given a color (R,G,B), where R, G, B = linear intensity of red, green and blue, this can be converted to a color (r, g, b), where r, g and b give the proportion of red, green and blue in the original color: r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B). The sum of r, g and b will always equal one; because of this property the b dimension can be thrown away without causing any loss in information. The reverse conversion is not possible with only two dimensions, as the intensity information is lost during the conversion to rg chromaticity, e.g. (1/3, 1/3, 1/3) has equal proportions of each color, but it is not possible to determine whether this corresponds to black, gray, or white. If R, G, B is normalized to the r, g, G color space, the conversion can be computed as follows: r = R/(R + G + B) and g = G/(R + G + B), while G is carried over unchanged as the intensity component. The conversion from rgG to RGB is the same as the conversion from xyY to XYZ. The conversion requires at least some information relative to the intensity of the scene. For this reason, if the G is preserved then the inverse is possible. Version used in computer vision Motivation Computer vision algorithms tend to suffer from varying imaging conditions. To make more robust computer vision algorithms, it is important to use an (approximately) color-invariant color space. Color-invariant color spaces are desensitized to disturbances in the image. One common problem in computer vision is a varying light source (color and intensity) between multiple images and within a single image. The rg colorspace is used out of a desire for invariance to the intensity of the illumination. For example, if a scene is lit by a spotlight, an object of a given color will change in apparent color as it moves across the scene. Where color is being used to track an object in an RGB image, this can cause problems. Removing the intensity component should keep the color constant. Practice In practice, computer vision uses an "incorrect" form of rg colorspace derived directly from gamma-corrected RGB, typically sRGB. As a result, full removal of intensity is not achieved and 3D objects still show some fringing. Illustration rg color space The r, g, and b chromaticity coordinates are ratios of one tristimulus value over the sum of all three tristimulus values. A neutral object implies equal values of the red, green and blue stimuli. The lack of luminance information in rg means there is only a single neutral point, where all three coordinates have equal value. The white point of the rg chromaticity diagram is defined by the point (1/3,1/3). The white point has one third red, one third green and the final third blue. On an rg chromaticity diagram the first quadrant, where all values of r and g are positive, forms a right triangle, with max r equal to 1 unit along the x axis and max g equal to 1 unit along the y axis. Connecting max r (1,0) to max g (0,1) forms a straight line with a slope of negative 1.
Any sample that falls on this line has no blue. Moving along the line from max r to max g shows a decrease in red and an increase in green in the sample, without blue changing. The further a sample lies from this line, the more blue is present in the sample being matched. CIE RGB RGB is a color mixture system. Once the color matching functions are determined, the tristimulus values can be determined easily. Since standardization is required to compare results, the CIE established standards to determine the color matching functions. The reference stimuli must be monochromatic lights R, G and B, with wavelengths of 700 nm, 546.1 nm and 435.8 nm respectively. The basic stimulus is white with an equal-energy spectrum; a ratio of 1.000:4.5907:0.0601 (R:G:B) is required to match the white point. Therefore, a white with equi-energy lights of 1.000 + 4.5907 + 0.0601 = 5.6508 lm can be matched by mixing together R, G and B. Guild and Wright used 17 subjects to determine the RGB color matching functions. RGB color matching serves as the basis for rg chromaticity. The RGB color matching functions are used to determine the tristimulus RGB values for a spectrum. Normalizing the RGB tristimulus values converts the tristimulus into rgb. Normalized RGB tristimulus values can be plotted on an rg chromaticity diagram. The color matching experiment works as follows: a monochromatic test light is matched by mixing amounts of the reference stimuli. The monochromatic test light is usually too saturated to be matched by the reference stimuli alone; to account for this, one reference stimulus is added to the test light itself to dull its saturation, and the amount of that stimulus is therefore recorded as negative. The reference stimuli can be defined as vectors in a three-dimensional space, which is defined as the color space, and any color can be reached by mixing given amounts of the reference stimuli. The need for negative amounts calls for color matching functions that are negative at certain wavelengths, which is why the color matching functions appear to have negative tristimulus values. rg chromaticity diagram The figure to the side is a plotted rg chromaticity diagram. Note the importance of the point E, which is defined as the white point, where r and g are equal with a value of 1/3. Next, notice that the straight line from (0,1) to (1,0) follows the expression y = -x + 1. As x (red) increases, y (green) decreases by the same amount. Any point on the line represents the limit in rg, and can be defined as a point that has no b information and is formed by some combination of r and g. Moving off the line towards E represents a decrease in r and g and an increase in b. Computer vision and digital imagery use only the first quadrant, because a computer cannot display negative RGB values. The range of RGB is 0–255 for most displays. But when forming color matches using real stimuli, negative values are needed, according to Grassmann's laws, to match all possible colors. This is why the rg chromaticity diagram extends in the negative r direction. Conversion xyY color system Avoiding negative color coordinate values prompted the change from rg to xy. Negative coordinates arise in rg space because, when matching a spectral sample, a match can only be created by adding a reference stimulus to the sample itself. The color matching functions r, g, and b are negative at certain wavelengths to allow any monochromatic sample to be matched. This is why in the rg chromaticity diagram the spectral locus extends into the negative r direction and ever so slightly into the negative g direction. On an xy chromaticity diagram the spectral locus is formed by all positive values of x and y.
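The conversions described above can be summarised in a short sketch, added here for illustration; the function names are arbitrary, and linear (not gamma-corrected) RGB input is assumed. The forward step normalises R, G, B by their sum; the inverse assumes the G channel has been preserved, mirroring the xyY-to-XYZ conversion.

# Illustrative sketch of the RGB-to-rg and rgG-to-RGB conversions described above.
# Assumes linear RGB values; the treatment of black is a convention chosen for this example.
def rgb_to_rg(R: float, G: float, B: float) -> tuple[float, float]:
    """Return (r, g) chromaticity; b = 1 - r - g can be recovered if needed."""
    s = R + G + B
    if s == 0:
        return (1 / 3, 1 / 3)  # convention: place black at the white point
    return (R / s, G / s)

def rgG_to_rgb(r: float, g: float, G: float) -> tuple[float, float, float]:
    """Invert the normalisation when the G (intensity) component was preserved."""
    if g == 0:
        return (0.0, 0.0, 0.0)
    R = (r / g) * G
    B = ((1 - r - g) / g) * G
    return (R, G, B)

r, g = rgb_to_rg(200, 100, 50)   # a reddish pixel
print(round(r, 3), round(g, 3))  # 0.571 0.286
print(rgG_to_rgb(r, g, 100))     # recovers approximately (200, 100, 50)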
See also RG color space CIE 1931 color space Trichromacy Imaginary color Grassmann's law Chromaticity Chrominance Image segmentation Computer vision References Color space
Rg chromaticity
Mathematics
1,624
11,465,780
https://en.wikipedia.org/wiki/Gymnosporangium%20kernianum
Gymnosporangium kernianum is a fungal plant pathogen. References Fungal plant pathogens and diseases Fungi described in 1911 Pucciniales Fungus species
Gymnosporangium kernianum
Biology
33
760,260
https://en.wikipedia.org/wiki/Paulo%20R.%20Holvorcem
Paulo Renato Centeno Holvorcem (born 10 July 1967) is a Brazilian amateur astronomer and mathematician who lives in Brasília, Brazil. He is a prolific discoverer of asteroids. He is credited by the Minor Planet Center with the discovery or co-discovery (with Charles W. Juels) of about 197 minor planets between 1998 and 2010. Holvorcem with Juels also discovered two comets: C/2002 Y1 (Juels-Holvorcem) and C/2005 N1 (Juels-Holvorcem). Holvorcem was also involved in the discovery of C/2011 K1 (Schwartz-Holvorcem). The main-belt asteroid 13421 Holvorcem was named after him on 9 January 2001 (). Paulo's SkySift pipeline software is used by many observatories across the world for the detection of minor planets and transients. List of discovered minor planets See also References External links Paulo Holvorcem's website CV of Paulo Holvorcem Minor Planet Center: Minor Planet Discoverers Harvard-Smithsonian Center for Astrophysics: 2003 Comet Awards Announced Amateur astronomers Brazilian astronomers Discoverers of asteroids Living people People from Campinas 1967 births
Paulo R. Holvorcem
Astronomy
262
27,708,280
https://en.wikipedia.org/wiki/Soviet%20space%20exploration%20history%20on%20Soviet%20stamps
Soviet space exploration history has been well documented on Soviet stamps. These Soviet stamps cover a broad spectrum of subjects related to the Soviet space program. While much of the focus has been placed on the nation's notable "firsts" in space flight, including: Earth orbiting satellite, Sputnik 1; animal in space, the dog Laika on Sputnik 2; human in space and Earth orbit, Yuri Gagarin on Vostok 1; first spacewalk, Alexei Leonov on Voskhod 2; woman in space, Valentina Tereshkova on Vostok 6; Moon impact, 1959, and uncrewed landing; space station; and interplanetary probe; numerous stamps have paid tribute to more general astronomical topics as well. Further reading Gurevich, Iakov Borisovich and Vladimir Il'ich Shcherbakov. Kosmicheskaia filateliia: katalog-spravochnik. Moscow: Radio i sviaz, 1986, 126p. Kvasnikov, IU. S. Rossiiskaia kosmonavtika na pochtovykh markakh 1951–1995. Moscow: Novosti Kosmonavtiki?, 1996, 154p. Kvasnikov, Yuri. Russian Cosmonautics On Postage Stamps. Falkirk: Astro Space Stamp Society, 1999, 35p. Reichman, James G. Soviet and Russian philatelic items related to dogs in space. Mesa, Arizona, James G. Reichman, 2012, 97p. Sashenkov, Evgeniĭ Petrovich. Sovetskaia kosmonavtika v filatelii: katalog-spravochnik. Moscow: Glavnaia Filatelisticheskaia kontora, Soiuzpechati Ministerstva sviazi SSSR, 1967, 138p. See also Postage stamps of the Soviet Union U.S. space exploration history on U.S. stamps Space Race References External links History of spaceflight Space program of the Soviet Union Postage stamps of the Soviet Union Topical postage stamps Works about outer space
Soviet space exploration history on Soviet stamps
Astronomy
437
43,865,943
https://en.wikipedia.org/wiki/Endolithic%20lichen
An endolithic lichen is a crustose lichen that grows inside solid rock, growing between the grains, with only the fruiting bodies exposed to the air. Morphology Although variation exists, many mycobiont species have three layers. The innermost layer is a loose web of hyphae. The intermediate layer hosts the photobiont. The photobiont is surrounded by distended hyphae. The outer layer is composed of more densely packed hyphae and calcium carbonate microcrystals. Weathering effects These lichens act to deteriorate the rock they grow on, contributing to weathering of the rock. This deterioration occurs across different substrates and species. Mycobiont Species Source: Acrocordia conoidea Petractis clausa Rinodina immersa Verrucaria baldensis Verrucaria marmorea Caloplaca luteominea ssp. bolandri References Lichenology
Endolithic lichen
Biology
198
25,225,295
https://en.wikipedia.org/wiki/Consumer%20neuroscience
Consumer neuroscience is the combination of consumer research with modern neuroscience. The goal of the field is to find neural explanations for consumer behaviors in individuals both with or without disease. Consumer research Consumer research has existed for more than a century and has been well established as a combination of sociology, psychology, and anthropology, and popular topics in the field revolve around consumer decision-making, advertising, and branding. For decades, however, consumer researchers had never been able to directly record the internal mental processes that govern consumer behavior; they always were limited to designing experiments in which they alter the external conditions in order to view the ways in which changing variables may affect consumer behavior (examples include changing the packaging or changing a subject’s mood). With the integration of neuroscience with consumer research, it is possible to go directly into the brain to discover the neural explanations for consumer behavior. The ability to record brain activity with electrodes and advances in neural imaging technology make it possible to determine specific regions of the brain that are responsible for critical behaviors involved in consumption. Consumer neuroscience is similar to neuroeconomics and neuromarketing, but subtle, yet distinct differences exist between them. Neuroeconomics is more of an academic field while neuromarketing and consumer neuroscience are more of an applied science. Neuromarketing focuses on the study of various marketing techniques and attempts to integrate neuroscience knowledge to help improve the efficiency and effectiveness of said marketing strategies. Consumer neuroscience is unique among the three because the main focus is on the consumer and how various factors affect individual preferences and purchasing behavior. Advertising Advertising and emotion Studies of emotion are crucial to advertising research as it has been shown that emotion plays a significant role in ad memorization. Classically in advertising research, the theory has been that emotion and ratio are represented in different regions of the brain, but neuroscience may be able to disprove this theory by showing that the ventromedial prefrontal cortex and the striatum play a role in bilateral emotion processing. The attractiveness of the advertisements correlates with specific changes in brain activity in various brain regions including the medial prefrontal cortex, posterior cingulate, nucleus accumbens and higher-order visual cortices. This may represent an interaction between the perceived attractiveness of the ad by the consumer and the emotions expressed by the people pictured in the advertisement. It has been suggested that ads that use people with positive emotions are perceived as attractive while ads using exclusively text or depicting people with neutral expressions may generally be viewed as unattractive. Unattractive ads activate the anterior insula, which plays a role in the processing of negative emotions. Both attractive and unattractive ads have been shown to be more memorable than ads described as ambiguously attractive, but more research is needed to determine how this translates to the overall brand perception in the eyes of the consumer and how this may impact future purchasing behavior. Mental processing of advertisements There are various studies that have been conducted to research the question of how consumers process and store the information presented in advertisements. 
Television commercials with scene durations lasting longer than 1.5 seconds have been shown to be more memorable one week later than scenes that last less than 1.5 seconds, and scenes that produce the quickest electrical response in the left frontal hemisphere have been shown to be more memorable as well. It has been suggested that the transfer of visual advertising inputs from short term memory to long term memory may take place in the left hemisphere, and highly memorable ads can be created by producing the fastest responses in the left hemisphere. However, these theories have been renounced by some who believe that the research findings may be attributed to extraneous and unmeasured factors. There is also evidence to suggest that a front to back difference in processing speed may be more influential on ad memorization than left to right differences. Research has shown that there are certain periods of commercials that are far more significant for the consumer in terms of establishing advertising effects. These short segments are referred to as “branding moments” and are thought to be the most engaging parts of the commercial. These moments can be identified using an EEG and analyzing alpha waves (8–13 Hz), beta waves (13–30 Hz) and theta waves (4–7 Hz). These results may suggest that the strength of a commercial with regard to its effect on the consumer can be evaluated by the strength of its unique branding moments, helping brands create more engaging and effective AR campaigns.  In addition, research has also found that a consequence of curiosity, in terms of advertising, is that an unsatisfied curiosity can lead to indulgent consumption in any domain. Affective vs. cognitive ads Affective advertising (using comedy, drama, suspense, etc.) activates the amygdala, the orbitofrontal cortices, and the brainstem whereas cognitive advertising (strict facts) mainly activates the posterior parietal cortex and the superior prefrontal cortices. Ambler and Burne in 1999 created the Memory-Affect-Cognition (MAC) theory to explain the processes involved in decision making. According to the theory, the majority of decisions are habitual and do not require affect or cognition; they require memory only. Most of the remaining decisions only require memory and affect; they do not require cognition. The main use for cognition is in the form of rationalization following a particular action, however, there are occasional instances in which memory, affect and cognition are all used in conjunction, such as during a debate about a particular choice. The above findings suggest a correlation exists between ad memorization and the degree of affective content within the advertisement, but it is still unclear how this translates to brand memory. Branding Brand associations Much of consumer research is devoted to studying the effect of brand associations on consumer preferences and how they manifest into brand memories. Brand memories can be defined as “everything that exists in the minds of customers with respect to a brand (e.g. thoughts, feelings, experiences, images, perceptions, beliefs and attitudes)”. Several studies have indicated there is not a designated area of the brain devoted to brand recognition. 
Studies have shown that different areas of the brain are activated when exposed to a brand as opposed to a person, and decisions regarding the evaluation of brands in different product categories activate the area of the brain responsible for semantic object processing rather than areas involved with the judgment of people. These two findings suggest that brands are not processed by the brain in the same manner as human personalities, indicating that personality theory cannot be used to explain brand preferences. Consumer neuroscience explains brand loyalty In a study of fMRI scans of loyal and less loyal customers it was found that in the case of loyal customers the presence of a particular brand serves as a reward during choice tasks, but less loyal customers do not exhibit the same reward pathway. It was also found that loyal customers had greater activation in the brain areas concerned with emotion and memory retrieval suggesting that loyal customers develop an affective bond with a particular brand, which serves as the primary motivation for repeat purchases. Brand loyalty has been shown to be the result of changes in neural activity in the striatum, which is part of the human action reward system. In order to become brand loyal the brain must make a decision of brand A over brand B, a process which relies on the brain to make predictions based upon expected reward and then evaluate the results to learn loyalty. The brain is required to remember both positive and negative outcomes of previous brand choices in order to accurately be able to make predictions regarding the expected outcome of future brand decisions. For example, a helpful salesman or a discount in price may serve as a reward to encourage future customer loyalty. It is thought that the amygdala and striatum are the two most prominent structures for predicting the outcomes of decisions, and that the brain learns to better predict in part by establishing a larger neural network in these structures. For recently-formed brand relationships, there is greater self-reported emotional arousal. Over time, that self-reported emotional arousal decreases and inclusion increases. When tested through skin conductance, increased emotional arousal for recently formed close relationships was found, but not for already established close brand relationships. Also, an association was found between insula activation (a brain area connected to urging, addiction, loss aversion, and interpersonal love), and established close relationships. Research shows that brand betrayal is neuro-physiologically different from brand dissatisfaction. Brand betrayal is associated with feelings of psychological loss, self-castigation over previous brand support, anger from indignation, and rumination. Thus, compared with brand dissatisfaction, brand betrayal is likely to be more harmful to both the brand and the person’s relationship with the brand. This makes brand betrayal more difficult for marketers to deflect, with longer-lasting consequences. In an attempt to model how the brain learns, a temporal difference learning algorithm has been developed which takes into account expected reward, stimuli presence, reward evaluation, temporal error, and individual differences. As yet this is a theoretical equation, but it may be solved in the near future. How branding affects consumers Brands serve to connect consumers to the products they are purchasing either by establishing an emotional connection or by creating a particular image. 
It has been shown that when consumers are forced to choose an item from a group in which a familiar brand is present the choice is much easier than when consumers are forced to choose from a group of entirely unfamiliar brands. One MRI study found that there was significantly increased activation in the brain reward centers including the orbitofrontal cortex, the ventral striatum and the anterior cingulate when consumers were looking at sports cars as compared to sedans (presumably because the status symbol associated with sports cars is rewarding in some way). Many corporations have conducted similar MRI studies to investigate the effect of their brand on consumers including Delta Air Lines, General Motors, Home Depot, Hallmark, and Motorola but the results have not been made public. A study by McClure et al. investigated the difference in branding between Coca-Cola and Pepsi. The study found that when the two drinks were tasted blind there was no difference in consumer preference between the brands. Both drinks produced equal activation in the ventromedial prefrontal cortex, which is thought to be activated because the taste is rewarding. When the subjects were informed of the brand names the consumers preferred Coke, and only Coke activated the ventromedial prefrontal cortex, suggesting that drinking the Coke brand is rewarding beyond simply the taste itself. More subjects preferred Coke when they knew it was Coke than when the taste testing was anonymous, which demonstrates the power of branding to influence consumer behavior. There was also significant activation in the hippocampus and dorsolateral prefrontal cortex when subjects knew they were drinking Coke. These brain structures are known to play a role in memory and recollection, which indicates they are helping the subjects to connect their present drinking experience to previous brand associations. The study proposes that there are two separate processes contributing to consumer decision making: the ventromedial prefrontal cortex responds to sensory inputs and the hippocampus and dorsolateral prefrontal cortex recall previous associations to cultural information. According to the results of this study, the Coke brand has much more firmly established itself as a rewarding experience. Packaging Consumer neuroscience research has also invested in how firms package their goods, how designers apply principles of aesthetics to package design, and how consumers neurophysiologically respond to packaged goods. One such finding is that the reaction time of a consumer's choice is significantly increased when the product has aesthetic packaging. Similarly, aesthetic packaging also leads to a product being chosen over a product in standard packaging, even if the standard-packaged product is from a well-known brand and is less expensive. When packaging is deemed aesthetic, there is an increase in activation in the nucleus accumbens and the ventromedial prefrontal cortex. Purchasing Research in consumer buying has focused on the identification of processes that contribute to an individual making a purchase. The brain does not contain a “buy button”, but rather recruits several processes during choice tasks, and studies report that the prefrontal cortex is heavily involved in limiting the emotions expressed during impulse buying. 
Reducing the effect of these executive control areas of the brain may contribute to changes in purchasing behavior, for example music may lead to reduced cognitive control which is why it has been shown to correlate with a higher percentage of unplanned purchases. Purchasing process Several MEG studies have been conducted to measure the neuronal correlates associated with decision making in order to investigate the underlying processes governing purchasing. The studies suggest that decisions involved with purchasing can be seen as occurring in two halves. The first half is concerned with memory recall and problem identification and recognition. The second half is associated with the purchasing decision itself; familiar brands produce different brain patterns than do nonfamiliar brands. The right parietal cortex is activated when consumers choose a familiar brand, which indicates the choice is at least partially intentional and behavior is influenced by prior experiences. Familiar vs. unfamiliar purchases When consumers select less well known products or products that are completely unfamiliar, several areas of the brain are activated to help with the decision making process that are not activated when consumers select more well known products. There is an increased synchronization between the right dorsolateral cortices (associated with consideration of multiple sources of information), there is increased activity in the right orbitofrontal cortex (associated with evaluation of rewards) and there is increased activity in the left inferior frontal cortex (associated with silent vocalization). Activation in these brain structures indicates that the decision between less well known products is difficult in some way. MEG findings also suggest that even repetitive daily shopping that is apparently simple actually relies on very complex neural mechanisms. Associated areas of the brain Ventromedial prefrontal cortex It has been shown that the ventromedial prefrontal cortex is heavily involved in decisions regarding brand-related preferences and individuals with damage to this region of the brain do not demonstrate normal brand-preference behavior. People with damage to the ventromedial prefrontal cortex have also been found to be more easily influenced by misleading advertisement. Amygdala and striatum It is thought that the amygdala and striatum are the two most prominent structures for predicting the outcomes of decisions, and that the brain learns to better make predictions in part by establishing a larger neural network in these structures. Hippocampus and dorsolateral prefrontal cortex The hippocampus and dorsolateral prefrontal cortex help consumers recall previous associations with cultural information and cultural expectations. These associations with prior information serve to modify consumer behavior and influence purchasing decisions. Real-world applications Consumer research provides a real-world application for neuroscience studies. Consumer studies help neuroscience to learn more about how healthy and unhealthy brain functions differ, which may assist in discovering the neural source of consumption-related dysfunctions and treat a variety of addictions. Additionally, studies are currently underway to investigate the neural mechanism of “anchoring”, which has been thought to contribute to obesity because people are more influenced by the behaviors of their peers than an internal standard. 
Discovering a neural source of anchoring may be the key to preventing behaviors that typically lead to obesity. Limitations Most of the consumer neuroscience studies involving brain scanning techniques have been conducted in medical or technological environments where such brain imaging devices are present. This is not a realistic environment for consumer decision making and may serve to skew the data relative to consumer decision making in normal consumer environments. Testing underlying neurophysiological principles is extraordinarily difficult from an experimental setup standpoint simply because it is unclear exactly how various factors are perceived in the human mind. An extremely comprehensive understanding of the neuroscientific testing techniques to be used is required to be able to establish proper controls and create an environment such that test subjects are not inadvertently exposed to unwanted stimuli that may bias results. There are many concerns over the value and the potential usage of consumer neuroscience data. The potential for enhanced consumer welfare is certainly present but equally present is the potential for the information to be used inappropriately for individual gain. The reaction to emerging study results in both the public and the media remains to be seen. In its current state, consumer neuroscience research is a compilation of only loosely related subjects that is unable, at this point, to produce any collective conclusions. References Neuroeconomics Market research Consumer behaviour
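The temporal-difference learning model mentioned in the brand-loyalty discussion above can be illustrated with a minimal sketch, added here for illustration; the update rule is the standard textbook TD(0) form, and the state names, rewards, learning rate and discount factor are assumptions chosen for the example rather than values from the consumer-neuroscience literature.

# Minimal temporal-difference (TD) learning sketch, illustrating the kind of
# expected-reward update discussed above. All states, rewards and parameters are illustrative.
alpha = 0.1  # learning rate (assumed)
gamma = 0.9  # discount factor (assumed)
values = {"brand_A": 0.0, "brand_B": 0.0}  # learned value of choosing each brand

def td_update(state: str, reward: float, next_value: float) -> None:
    """Standard TD(0) update: move the value estimate toward reward + discounted future value."""
    td_error = reward + gamma * next_value - values[state]
    values[state] += alpha * td_error

# Repeatedly rewarding brand_A (e.g. a satisfying purchase) raises its learned value,
# the kind of mechanism proposed above for the development of brand loyalty.
for _ in range(50):
    td_update("brand_A", reward=1.0, next_value=0.0)
    td_update("brand_B", reward=0.2, next_value=0.0)

print(values)  # brand_A's value approaches 1.0, brand_B's approaches 0.2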
Consumer neuroscience
Biology
3,348
34,256,618
https://en.wikipedia.org/wiki/IHTC
iHTC mobile phones are shanzhai (counterfeit) clones of HTC mobile phones. The iHTC clones are frequently sold in China and Hong Kong and can be confused with genuine HTC phones. iHTC's Windows Mobile phones have used the Huawei HiSilicon K3 chipset. References Mobile phones by company
IHTC
Technology
71
39,252,641
https://en.wikipedia.org/wiki/Foldase
In molecular biology, foldases are a particular kind of molecular chaperones that assist the non-covalent folding of proteins in an ATP-dependent manner. Examples of foldase systems are the GroEL/GroES and the DnaK/DnaJ/GrpE system. See also Holdase Chaperonin Co-chaperone References Molecular chaperones Protein biosynthesis
Foldase
Chemistry
83
14,708,275
https://en.wikipedia.org/wiki/CLASS%20B1359%2B154
CLASS B1359+154 is a quasar, or quasi-stellar object, that has a redshift of 3.235. A group of three foreground galaxies at a redshift of about 1 are behaving as gravitational lenses. The result is a rare example of a sixfold multiply imaged quasar. See also Twin Quasar Einstein Cross References External links Simbad data on QSO B1359+154 Image QSO B1359+154 Six-Image CLASS Gravitational Lens SIMBAD data Gravitationally lensed quasars Gravitational lensing Boötes
CLASS B1359+154
Physics,Astronomy
123
28,805,991
https://en.wikipedia.org/wiki/Wallbook
A Wallbook is a large printed book that is designed also to be mounted on a wall. For example, its design may be concertina folded so it can be read like a book or hung on a wall. Etymology The name was coined by Christopher Lloyd (world history author), creator of the 2010 The What on Earth? Wallbook which claims to be the first ever attempt to illustrate the entire history of everything from the Big Bang to the present day on a single timeline. Design and contents Reviewing the book for the Telegraph's Family Book Club, the writer Christopher Middleton encapsulates the work as a "7-foot, six-inch-long chart, which starts out some four billion years ago, with the explosion that triggered the Earth’s birth, and ends just a matter of months ago, with the election of Barack Obama and the financial crisis of 2007–2008". The What on Earth? Wallbook is notable for its use of a logarithmic timescale. At the beginning of the timeline 1 cm represents the passage of 1 billion years but by the end of the timeline the same space accounts for just five years. A total of 12 changes of scale accounts for how the whole of the past can be graphically represented on a single piece of paper. The wallbook’s 1,000 pictures and captions are arranged into 12 streams of colours which provide the backdrops along which the major events of natural and human history unfold. The section, "Space, Earth, Sky, Sea, Land and Humanity" accounts for the story of evolution while Asia, the Middle East, Europe, the Americas, Africa and Australasia convey the rise and fall of human civilisations. At the top of the timeline is a series of globes that start by showing the movement of the world’s continental plates but later chart the rise and fall of major human empires. Distribution The What on Earth? Wallbook was launched exclusively through The Daily Telegraph newspaper on Saturday 4 September 2010. Nomenclature Since the launch of the What on Earth? Wallbook the word wallbook has been defined in Macmillan's Open Dictionary as a new noun meaning: "a large printed book which can be mounted on a wall". References Book design Books by type
Wallbook
Engineering
459
55,963,154
https://en.wikipedia.org/wiki/SN%201996ah
SN 1996ah was a supernova located in the spiral galaxy NGC 5640 in the constellation of Camelopardalis. It was discovered on June 6, 1996, by American astronomer Jean Mueller, who was using the 1.2-m Oschin Schmidt telescope in the course of the second Palomar Sky Survey. SN 1996ah had a magnitude of about 18 and was located 5" west and 1" south of the center of NGC 5640. It was classified as a type Ia supernova. See also Supernova NGC 5640 Camelopardalis (constellation) References Supernovae 19960606 Camelopardalis
SN 1996ah
Chemistry,Astronomy
126
42,451
https://en.wikipedia.org/wiki/Pentaerythritol%20tetranitrate
Pentaerythritol tetranitrate (PETN), also known as PENT, pentyl, PENTA (ПЕНТА, primarily in Russian), TEN (tetraeritrit nitrate), corpent, or penthrite (or, rarely and primarily in German, as nitropenta), is an explosive material. It is the nitrate ester of pentaerythritol, and is structurally very similar to nitroglycerin. Penta refers to the five carbon atoms of the neopentane skeleton. PETN is a very powerful explosive material with a relative effectiveness factor of 1.66. When mixed with a plasticizer, PETN forms a plastic explosive. Along with RDX it is the main ingredient of Semtex. PETN is also used as a vasodilator drug to treat certain heart conditions, such as for management of angina. History Pentaerythritol tetranitrate was first prepared and patented in 1894 by an explosives manufacturer of Cologne, Germany. The production of PETN started in 1912, when the improved method of production was patented by the German government. PETN was used by the German military in World War I. It was also used in the MG FF/M autocannons and many other weapon systems of the Luftwaffe in World War II. Properties PETN is practically insoluble in water (0.01 g/100 mL at 50 °C), weakly soluble in common nonpolar solvents such as aliphatic hydrocarbons (like gasoline) or tetrachloromethane, but soluble in some other organic solvents, particularly in acetone (about 15 g/100 g of the solution at 20 °C, 55 g/100 g at 60 °C) and dimethylformamide (40 g/100 g of the solution at 40 °C, 70 g/100 g at 70 °C). It is a non-planar molecule that crystallizes in the space group P21c. PETN forms eutectic mixtures with some liquid or molten aromatic nitro compounds, e.g. trinitrotoluene (TNT) or tetryl. Due to steric hindrance of the adjacent neopentyl-like moiety, PETN is resistant to attack by many chemical reagents; it does not hydrolyze in water at room temperature or in weaker alkaline aqueous solutions. Water at 100 °C or above causes hydrolysis to the dinitrate; the presence of 0.1% nitric acid accelerates the reaction. The chemical stability of PETN is of interest because of the presence of PETN in aging weapons. Neutron radiation degrades PETN, producing carbon dioxide and some pentaerythritol dinitrate and trinitrate. Gamma radiation increases the thermal decomposition sensitivity of PETN, lowers the melting point by a few degrees Celsius, and causes swelling of the samples. Like other nitrate esters, the primary degradation mechanism is the loss of nitrogen dioxide; this reaction is autocatalytic. Studies have been performed on the thermal decomposition of PETN. In the environment, PETN undergoes biodegradation. Some bacteria denitrate PETN to the trinitrate and then the dinitrate, which is then further degraded. PETN has low volatility and low solubility in water, and therefore has low bioavailability for most organisms. Its toxicity is relatively low, and its transdermal absorption also seems to be low. It poses a threat to aquatic organisms. It can be degraded to pentaerythritol by iron. Production Production is by the reaction of pentaerythritol with concentrated nitric acid to form a precipitate which can be recrystallized from acetone to give processable crystals. Variations of a method first published in US Patent 2,370,437 by Acken and Vyverberg (1945 to Du Pont) form the basis of all current commercial production. PETN is manufactured by numerous manufacturers as a powder, or together with nitrocellulose and plasticizer as thin plasticized sheets (e.g. Primasheet 1000 or Detasheet).
PETN residues are easily detectable in hair of people handling it. The highest residue retention is on black hair; some residues remain even after washing. Explosive use The most common use of PETN is as an explosive with high brisance. It is a secondary explosive, meaning it is more difficult to detonate than primary explosives, so dropping or igniting it will typically not cause an explosion (at standard atmospheric pressure it is difficult to ignite and burns vigorously), but is more sensitive to shock and friction than other secondary explosives such as TNT or tetryl. Under certain conditions a deflagration to detonation transition can occur, just like that of ammonium nitrate. It is rarely used alone in military operations due to its lower stability, but is primarily used in the main charges of plastic explosives (such as C4) along with other explosives (especially RDX), booster and bursting charges of small caliber ammunition, in upper charges of detonators in some land mines and shells, as the explosive core of detonation cord. PETN is the least stable of the common military explosives, but can be stored without significant deterioration for longer than nitroglycerin or nitrocellulose. During World War II, PETN was most importantly used in exploding-bridgewire detonators for the atomic bombs. These exploding-bridgewire detonators gave more precise detonation compared to primacord. PETN was used for these detonators because it was safer than primary explosives like lead azide: while it was sensitive, it would not detonate below a threshold amount of energy. Exploding bridgewires containing PETN remain used in current nuclear weapons. In spark detonators, PETN is used to avoid the need for primary explosives; the energy needed for a successful direct initiation of PETN by an electric spark ranges between 10–60 mJ. Its basic explosion characteristics are: Explosion energy: 5810 kJ/kg (1390 kcal/kg), so 1 kg of PETN has the energy of 1.24 kg TNT. Detonation velocity: 8350 m/s (1.73 g/cm3), 7910 m/s (1.62 g/cm3), 7420 m/s (1.5 g/cm3), 8500 m/s (pressed in a steel tube) Volume of gases produced: 790 dm3/kg (other value: 768 dm3/kg) Explosion temperature: 4230 °C Oxygen balance: −6.31 atom -g/kg Melting point: 141.3 °C (pure), 140–141 °C (technical) Trauzl lead block test: 523 cm3 (other values: 500 cm3 when sealed with sand, or 560 cm3 when sealed with water) Critical diameter (minimal diameter of a rod that can sustain detonation propagation): 0.9 mm for PETN at 1 g/cm3, smaller for higher densities (other value: 1.5 mm) In mixtures PETN is used in a number of compositions. It is a major ingredient of theSemtex plastic explosive. It is also used as a component of pentolite, a castable mixture with TNT (usually 50/50 but may contain more TNT), which is, along with pure PETN, a common explosive for boosters for the blasting work (as in mining). The XTX8003 extrudable explosive, used in the W68 and W76 nuclear warheads, is a mixture of 80% PETN and 20% of Sylgard 182, a silicone rubber. It is often phlegmatized by addition of 5–40% of wax, or by polymers (producing polymer-bonded explosives); in this form it is used in some cannon shells up to 30 mm caliber, though it is unsuitable for higher calibers. It is also used as a component of some gun propellants and solid rocket propellants. Nonphlegmatized PETN is stored and handled with approximately 10% water content. 
PETN alone cannot be cast as it explosively decomposes slightly above its melting point, but it can be mixed with other explosives to form castable mixtures. PETN can be initiated by a laser. A pulse with duration of 25 nanoseconds and 0.5–4.2 joules of energy from a Q-switched ruby laser can initiate detonation of a PETN surface coated with a 100 nm thick aluminium layer in less than half of a microsecond. PETN has been replaced in many applications by RDX, which is thermally more stable and has a longer shelf life. PETN can be used in some ram accelerator types. Replacement of the central carbon atom with silicon produces Si-PETN, which is extremely sensitive. Terrorist and Military use Ten kilograms of PETN was used in the 1980 Paris synagogue bombing. In 1983, 307 people were killed after a truck bomb filled with PETN was detonated at the Beirut barracks. In 1983, the "Maison de France" house in Berlin was brought to a near-total collapse by the detonation of of PETN by terrorist Johannes Weinrich. On July 17, 1996, flight TWA 800 exploded and crashed in the Atlantic Ocean. Traces of PETN were found in the wreckage. In 1999, Alfred Heinz Reumayr used PETN as the main charge for his fourteen improvised explosive devices that he constructed in a thwarted attempt to damage the Trans-Alaska Pipeline System. In 2001, al-Qaeda member Richard Reid, the "Shoe Bomber", used PETN in the sole of his shoe in his unsuccessful attempt to blow up American Airlines Flight 63 from Paris to Miami. He had intended to use the solid triacetone triperoxide (TATP) as a detonator. In 2009, PETN was used in an attempt by al-Qaeda in the Arabian Peninsula to murder the Saudi Arabian Deputy Minister of Interior Prince Muhammad bin Nayef, by Saudi suicide bomber Abdullah Hassan al Asiri. The target survived and the bomber died in the blast. The PETN was hidden in the bomber's rectum, which security experts described as a novel technique. On 25 December 2009, PETN was found in the underwear of Umar Farouk Abdulmutallab, the "Underwear bomber", a Nigerian with links to al-Qaeda in the Arabian Peninsula. According to US law enforcement officials, he had attempted to blow up Northwest Airlines Flight 253 while approaching Detroit from Amsterdam. Abdulmutallab had tried, unsuccessfully, to detonate approximately of PETN sewn into his underwear by adding liquid from a syringe; however, only a small fire resulted. In the al-Qaeda in the Arabian Peninsula October 2010 cargo plane bomb plot, two PETN-filled printer cartridges were found at East Midlands Airport and in Dubai on flights bound for the US on an intelligence tip. Both packages contained sophisticated bombs concealed in computer printer cartridges filled with PETN. The bomb found in England contained of PETN, and the one found in Dubai contained of PETN. Hans Michels, professor of safety engineering at University College London, told a newspaper that of PETN—"around 50 times less than was used—would be enough to blast a hole in a metal plate twice the thickness of an aircraft's skin". In contrast, according to an experiment conducted by a BBC documentary team designed to simulate Abdulmutallab's Christmas Day bombing, using a Boeing 747 plane, even 80 grams of PETN was not sufficient to materially damage the fuselage. On 12 July 2017, 150 grams of PETN was found in the Assembly of Uttar Pradesh, India's most populous state. PETN was used by Israel in the manufacturing of pagers provided to Hezbollah. 
On September 17, 2024, the pagers detonated, killing 12 people and injuring thousands. Detection In the wake of terrorist PETN bomb plots, an article in Scientific American noted PETN is difficult to detect because it does not readily vaporize into the surrounding air. The Los Angeles Times noted in November 2010 that PETN's low vapor pressure makes it difficult for bomb-sniffing dogs to detect. Many technologies can be used to detect PETN, including chemical sensors, X-rays, infrared, microwaves and terahertz, some of which have been implemented in public screening applications, primarily for air travel. PETN is one of the explosive chemicals typically of interest in that area, and it belongs to a family of common nitrate-based explosive chemicals which can often be detected by the same tests. One detection system in use at airports involves analysis of swab samples obtained from passengers and their baggage. Whole-body imaging scanners that use radio-frequency electromagnetic waves, low-intensity X-rays, or T-rays of terahertz frequency that can detect objects hidden under clothing are not widely used because of cost, concerns about the resulting traveler delays, and privacy concerns. Both parcels in the 2010 cargo plane bomb plot were x-rayed without the bombs being spotted. Qatar Airways said the PETN bomb "could not be detected by x-ray screening or trained sniffer dogs". The Bundeskriminalamt received copies of the Dubai x-rays, and an investigator said German staff would not have identified the bomb either. New airport security procedures followed in the U.S., largely to protect against PETN. Medical use Like nitroglycerin (glyceryl trinitrate) and other nitrates, PETN is also used medically as a vasodilator in the treatment of heart conditions. These drugs work by releasing the signaling gas nitric oxide in the body. The heart medicine Lentonitrat is nearly pure PETN. Monitoring of oral usage of the drug by patients has been performed by determination of plasma levels of several of its hydrolysis products, pentaerythritol dinitrate, pentaerythritol mononitrate and pentaerythritol, in plasma using gas chromatography-mass spectrometry. See also Erythritol tetranitrate RE factor References Further reading Antianginals Explosive chemicals German inventions Nitrate esters
Pentaerythritol tetranitrate
Chemistry
2,980
3,568,378
https://en.wikipedia.org/wiki/World%27s%20Tallest%20Thermometer
The World's Tallest Thermometer is a landmark in Baker, California, US. It is a steel electric sign that commemorates the record high temperature of 134 degrees Fahrenheit recorded in nearby Death Valley on July 10, 1913. The sign's steel core is reinforced with poured concrete. It stands 134 feet tall and is capable of displaying a maximum temperature of 134 degrees, both of which are a reference to the temperature record. History It was built in 1991 by the Young Electric Sign Company of Salt Lake City, Utah, for Willis Herron, a Baker businessman who spent US$700,000 to build the thermometer next to his Bun Boy restaurant. Its height—134 feet—was in honor of the 134-degree record temperature set in nearby Death Valley on July 10, 1913. Soon after its construction, winds snapped the thermometer in half, and it was rebuilt. Two years later, severe gusts made the thermometer sway so much that its light bulbs popped out. Concrete was then poured inside the steel core to reinforce the monument. Herron sold the attraction and restaurant to another local businessman, Larry Dabour, who sold it in 2005. In September 2012, the owner at that time, Matt Pike, said that the power bill for its operation had reached US$8,000 per month and that he had turned the sign off due to the poor economy. In 2013, the thermometer and accompanying empty gift shop were listed for sale. The family of Willis Herron (who died in 2007) recovered ownership of the property in 2014 and stated their intention to make it operational again. The renovation was funded with sweat equity and a US$150,000 contribution from the owner's mother. The official re-lighting took place on July 10, 2014. In December 2016, EVgo announced that it would build the first US fast-charging station for electric vehicles, rated at up to 350 kW, at the site. The station is located in the rear parking area behind the thermometer, visible to travelers on Interstate 15. References External links Roadside America 1991 establishments in California Individual signs in the United States Landmarks in California Mojave Desert Roadside attractions in California Thermometers Towers completed in 1991 Tourist attractions in San Bernardino County, California
World's Tallest Thermometer
Technology,Engineering
444
28,639
https://en.wikipedia.org/wiki/Stop%20codon
In molecular biology, a stop codon (or termination codon) is a codon (nucleotide triplet within messenger RNA) that signals the termination of the translation process of the current protein. Most codons in messenger RNA correspond to the addition of an amino acid to a growing polypeptide chain, which may ultimately become a protein; stop codons signal the termination of this process by binding release factors, which cause the ribosomal subunits to disassociate, releasing the amino acid chain. While start codons need nearby sequences or initiation factors to start translation, a stop codon alone is sufficient to initiate termination. Properties Standard codons In the standard genetic code, there are three different termination codons: Alternative stop codons There are variations on the standard genetic code, and alternative stop codons have been found in the mitochondrial genomes of vertebrates, Scenedesmus obliquus, and Thraustochytrium. Reassigned stop codons The nuclear genetic code is flexible as illustrated by variant genetic codes that reassign standard stop codons to amino acids. Translation In 1986, convincing evidence was provided that selenocysteine (Sec) was incorporated co-translationally. Moreover, the codon partially directing its incorporation in the polypeptide chain was identified as UGA also known as the opal termination codon. Different mechanisms for overriding the termination function of this codon have been identified in prokaryotes and in eukaryotes. A particular difference between these kingdoms is that cis elements seem restricted to the neighborhood of the UAG codon in prokaryotes while in eukaryotes this restriction is not present. Instead such locations seem disfavored albeit not prohibited. In 2003, a landmark paper described the identification of all known selenoproteins in humans: 25 in total. Similar analyses have been run for other organisms. The UAG codon can translate into pyrrolysine (Pyl) in a similar manner. Genomic distribution Distribution of stop codons within the genome of an organism is non-random and can correlate with GC-content. For example, the E. coli K-12 genome contains 2705 TAA (63%), 1257 TGA (29%), and 326 TAG (8%) stop codons (GC content 50.8%). Also the substrates for the stop codons release factor 1 or release factor 2 are strongly correlated to the abundance of stop codons. Large scale study of bacteria with a broad range of GC-contents shows that while the frequency of occurrence of TAA is negatively correlated to the GC-content and the frequency of occurrence of TGA is positively correlated to the GC-content, the frequency of occurrence of the TAG stop codon, which is often the minimally used stop codon in a genome, is not influenced by the GC-content. Recognition Recognition of stop codons in bacteria have been associated with the so-called 'tripeptide anticodon', a highly conserved amino acid motif in RF1 (PxT) and RF2 (SPF). Even though this is supported by structural studies, it was shown that the tripeptide anticodon hypothesis is an oversimplification. Nomenclature Stop codons were historically given many different names, as they each corresponded to a distinct class of mutants that all behaved in a similar manner. These mutants were first isolated within bacteriophages (T4 and lambda), viruses that infect the bacteria Escherichia coli. Mutations in viral genes weakened their infectious ability, sometimes creating viruses that were able to infect and grow within only certain varieties of E. coli. 
amber mutations () They were the first set of nonsense mutations to be discovered, isolated by Richard H. Epstein and Charles Steinberg and named after their friend and graduate Caltech student Harris Bernstein, whose last name means "amber" in German (cf. Bernstein). Viruses with amber mutations are characterized by their ability to infect only certain strains of bacteria, known as amber suppressors. These bacteria carry their own mutation that allows a recovery of function in the mutant viruses. For example, a mutation in the tRNA that recognizes the amber stop codon allows translation to "read through" the codon and produce a full-length protein, thereby recovering the normal form of the protein and "suppressing" the amber mutation. Thus, amber mutants are an entire class of virus mutants that can grow in bacteria that contain amber suppressor mutations. Similar suppressors are known for ochre and opal stop codons as well. tRNA molecules carrying unnatural aminoacids have been designed to recognize the amber stop codon in bacterial RNA. This technology allows for incorporation of orthogonal aminoacids (such as p-azidophenylalanine) at specific locations of the target protein. ochre mutations () It was the second stop codon mutation to be discovered. Reminiscent of the usual yellow-orange-brown color associated with amber, this second stop codon was given the name of "ochre", an orange-reddish-brown mineral pigment. Ochre mutant viruses had a property similar to amber mutants in that they recovered infectious ability within certain suppressor strains of bacteria. The set of ochre suppressors was distinct from amber suppressors, so ochre mutants were inferred to correspond to a different nucleotide triplet. Through a series of mutation experiments comparing these mutants with each other and other known amino acid codons, Sydney Brenner concluded that the amber and ochre mutations corresponded to the nucleotide triplets "UAG" and "UAA". opal or umber mutations () The third and last stop codon in the standard genetic code was discovered soon after, and corresponds to the nucleotide triplet "UGA". To continue matching with the theme of colored minerals, the third nonsense codon came to be known as "opal", which is a type of silica showing a variety of colors. Nonsense mutations that created this premature stop codon were later called opal mutations or umber mutations. Mutations and disease Nonsense Nonsense mutations are changes in DNA sequence that introduce a premature stop codon, causing any resulting protein to be abnormally shortened. This often causes a loss of function in the protein, as critical parts of the amino acid chain are no longer assembled. Because of this terminology, stop codons have also been referred to as nonsense codons. Nonstop A nonstop mutation, also called a stop-loss variant, is a point mutation that occurs within a stop codon. Nonstop mutations cause the continued translation of an mRNA strand into what should be an untranslated region. Most polypeptides resulting from a gene with a nonstop mutation lose their function due to their extreme length and the impact on normal folding. Nonstop mutations differ from nonsense mutations in that they do not create a stop codon but, instead, delete one. Nonstop mutations also differ from missense mutations, which are point mutations where a single nucleotide is changed to cause replacement by a different amino acid. 
Nonstop mutations have been linked with many inherited diseases including endocrine disorders, eye disease, and neurodevelopmental disorders. Hidden stops Hidden stops are non-stop codons that would be read as stop codons if they were frameshifted +1 or −1. These prematurely terminate translation if the corresponding frame-shift (such as due to a ribosomal RNA slip) occurs before the hidden stop. It is hypothesised that this decreases resource wastage on nonfunctional proteins and the production of potential cytotoxins. Researchers at Louisiana State University propose the ambush hypothesis, that hidden stops are selected for. Codons that can form hidden stops are used in genomes more frequently compared to synonymous codons that would otherwise code for the same amino acid. Unstable rRNA in an organism correlates with a higher frequency of hidden stops. However, this hypothesis could not be validated with a larger data set. Stop-codons and hidden stops together are collectively referred as stop-signals. Researchers at University of Memphis found that the ratios of the stop-signals on the three reading frames of a genome (referred to as translation stop-signals ratio or TSSR) of genetically related bacteria, despite their great differences in gene contents, are much alike. This nearly identical genomic-TSSR value of genetically related bacteria may suggest that bacterial genome expansion is limited by their unique stop-signals bias of that bacterial species. Translational readthrough Stop codon suppression or translational readthrough occurs when in translation a stop codon is interpreted as a sense codon, that is, when a (standard) amino acid is 'encoded' by the stop codon. Mutated tRNAs can be the cause of readthrough, but also certain nucleotide motifs close to the stop codon. Translational readthrough is very common in viruses and bacteria, and has also been found as a gene regulatory principle in humans, yeasts, bacteria and drosophila. This kind of endogenous translational readthrough constitutes a variation of the genetic code, because a stop codon codes for an amino acid. In the case of human malate dehydrogenase, the stop codon is read through with a frequency of about 4%. The amino acid inserted at the stop codon depends on the identity of the stop codon itself: Gln, Tyr, and Lys have been found for the UAA and UAG codons, while Cys, Trp, and Arg for the UGA codon have been identified by mass spectrometry. Extent of readthrough in mammals have widely variable extents, and can broadly diversify the proteome and affect cancer progression. Use as a watermark In 2010, when Craig Venter unveiled the first fully functioning, reproducing cell controlled by synthetic DNA he described how his team used frequent stop codons to create watermarks in RNA and DNA to help confirm the results were indeed synthetic (and not contaminated or otherwise), using it to encode authors' names and website addresses. See also Genetic code Start codon Terminator (genetics) Null-terminated string References Molecular genetics Gene expression Protein biosynthesis
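The "hidden stop" and stop-signal counting ideas discussed above are easy to illustrate computationally. The sketch below scans the two shifted reading frames of a coding sequence for stop triplets; the sequence is invented purely for illustration, and the helper function is not taken from any particular bioinformatics library.

```python
# Find "hidden stops": stop triplets that appear when a coding sequence
# is read in a shifted (+1 or +2, i.e. -1) reading frame.
STOPS = {"TAA", "TAG", "TGA"}

def hidden_stops(seq, shift):
    """Return 0-based positions of stop triplets in the frame offset by `shift`."""
    return [i for i in range(shift, len(seq) - 2, 3) if seq[i:i + 3] in STOPS]

cds = "ATGCTGACCGGTTAA"          # made-up coding sequence ending in a real stop (TAA)
print(hidden_stops(cds, 1))      # [4]  -> "TGA" hidden in the +1 frame
print(hidden_stops(cds, 2))      # []   -> no hidden stops in the +2 frame
```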
Stop codon
Chemistry,Biology
2,125
27,169,851
https://en.wikipedia.org/wiki/Radiogenic%20nuclide
A radiogenic nuclide is a nuclide that is produced by a process of radioactive decay. It may itself be radioactive (a radionuclide) or stable (a stable nuclide). Radiogenic nuclides (more commonly referred to as radiogenic isotopes) form some of the most important tools in geology. They are used in two principal ways: In comparison with the quantity of the radioactive 'parent isotope' in a system, the quantity of the radiogenic 'daughter product' is used as a radiometric dating tool (e.g. uranium–lead geochronology). In comparison with the quantity of a non-radiogenic isotope of the same element, the quantity of the radiogenic isotope is used to define its isotopic signature (e.g. 206Pb/204Pb). This technique is discussed in more detail under the heading isotope geochemistry. Examples Some naturally occurring isotopes are entirely radiogenic, but all those are radioactive isotopes, with half-lives too short to have occurred primordially and still exist today. Thus, they are only present as radiogenic daughters of either ongoing decay processes, or else cosmogenic (cosmic ray induced) processes that produce them in nature freshly. A few others are naturally produced by nucleogenic processes (natural nuclear reactions of other types, such as neutron absorption). For radiogenic isotopes that decay slowly enough, or that are stable isotopes, a primordial fraction is always present, since all sufficiently long-lived and stable isotopes do in fact naturally occur primordially. An additional fraction of some of these isotopes may also occur radiogenically. Lead is perhaps the best example of a partly radiogenic substance, as all four of its stable isotopes (204Pb, 206Pb, 207Pb, and 208Pb) are present primordially, in known and fixed ratios. However, 204Pb is only present primordially, while the other three isotopes may also occur as radiogenic decay products of uranium and thorium. Specifically, 206Pb is formed from 238U, 207Pb from 235U, and 208Pb from 232Th. In rocks that contain uranium and thorium, the excess amounts of the three heavier lead isotopes allows the rocks to be "dated", thus providing a time estimate for when the rock solidified and the mineral held the ratio of isotopes fixed and in place. Another notable radiogenic nuclide is argon-40, formed from radioactive potassium. Almost all the argon in the Earth's atmosphere is radiogenic, whereas primordial argon is argon-36. Some nitrogen-14 is radiogenic, coming from the decay of carbon-14 (half-life around 5700 years), but the carbon-14 was formed some time earlier from nitrogen-14 by the action of cosmic rays. Other important examples of radiogenic elements are radon and helium, both of which form during the decay of heavier elements in bedrock. Radon is entirely radiogenic, since it has too short a half-life to have occurred primordially. Helium, however, occurs in the crust of the Earth primordially, since both helium-3 and helium-4 are stable, and small amounts were trapped in the crust of the Earth as it formed. Helium-3 is almost entirely primordial (a small amount is formed by natural nuclear reactions in the crust). Helium-3 can also be produced as the decay product of tritium (3H) which is a product of some nuclear reactions, including ternary fission. The global supply of helium (which occurs in gas wells as well as the atmosphere) is mainly (about 90%–99%) radiogenic, as shown by its factor of 10 to 100 times enrichment in radiogenic helium-4 relative to the primordial ratio of helium-4 to helium-3. 
This latter ratio is known from extraterrestrial sources, such as some Moon rocks and meteorites, which are relatively free of parental sources for helium-3 and helium-4. As noted in the case of lead-204, a radiogenic nuclide is often not radioactive. In this case, if its precursor nuclide has a half-life too short to have survived from primordial times, then the parent nuclide will be gone, and it is now known entirely by a relative excess of its stable daughter. In practice, this occurs for all radionuclides with half-lives less than about 50 to 100 million years. Such nuclides are formed in supernovas, but are known as extinct radionuclides, since they are not seen directly on the Earth today. An example of an extinct radionuclide is iodine-129; it decays to xenon-129, a stable isotope of xenon which appears in excess relative to other xenon isotopes. It is found in meteorites that condensed from the primordial Solar System dust cloud and trapped primordial iodine-129 (half-life 15.7 million years) sometime in a relatively short period (probably less than 20 million years) between the iodine-129's creation in a supernova and the formation of the Solar System by condensation of this dust. The trapped iodine-129 now appears as a relative excess of xenon-129. Iodine-129 was the first extinct radionuclide to be inferred, in 1960. Others are aluminium-26 (also inferred from extra magnesium-26 found in meteorites) and iron-60. Radiogenic nuclides used in geology The following table lists some of the most important radiogenic isotope systems used in geology, in order of decreasing half-life of the radioactive parent isotope. The values given for half-life and decay constant are the current consensus values in the Isotope Geology community. ** indicates ultimate decay product of a series. Units used in this table: Gyr = gigayear = 10⁹ years; Myr = megayear = 10⁶ years; kyr = kiloyear = 10³ years. Radiogenic heating Radiogenic heating occurs as a result of the release of heat energy from radioactive decay during the production of radiogenic nuclides. Radiogenic heating in the mantle and crust and primordial heat (resulting from planetary accretion) are the two main sources of heat in the Earth's interior. Most of the radiogenic heating in the Earth results from the decay of the daughter nuclei in the decay chains of uranium-238 and thorium-232, and from potassium-40. See also Geoneutrino Radiometric dating Stable nuclide References External links National Isotope Development Center Government supply of radionuclides; information on isotopes; coordination and management of isotope production, availability, and distribution Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development Radioactivity
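The parent–daughter comparison used for radiometric dating, as described above, reduces to the standard decay relation t = (1/λ)·ln(1 + D/P), where P is the surviving parent and D the accumulated radiogenic daughter. A minimal sketch of a single-pair model age follows; the uranium-238 half-life used is the commonly quoted textbook value (an assumption, since the article's table of constants is not reproduced here), and the daughter/parent ratio is invented for illustration.

```python
import math

# Single parent/daughter model age: t = (1/lambda) * ln(1 + D/P).
half_life_yr = 4.468e9                      # U-238, commonly quoted value (assumption)
decay_const = math.log(2) / half_life_yr    # lambda, per year

daughter_per_parent = 0.5                   # radiogenic 206Pb / remaining 238U (invented)
age_yr = math.log(1 + daughter_per_parent) / decay_const
print(f"model age ~ {age_yr / 1e9:.2f} billion years")   # ~2.61
```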
Radiogenic nuclide
Physics,Chemistry
1,449
15,013,410
https://en.wikipedia.org/wiki/Content%20delivery%20platform
A content delivery platform (CDP) is a software as a service (SaaS) content service, similar to a content management system (CMS), that utilizes embedded software code to deliver web content. Instead of the installation of software on client servers, a CDP feeds content through embedded code snippets, typically via JavaScript widget, Flash widget or server-side Ajax. Content delivery platforms are not content delivery networks, which are utilized for large web media and do not depend on embedded software code. A CDP is utilized for all types of web content, even text-based content. Alternatively, a content delivery platform can be utilized to import a variety of syndicated content into one central location and then re-purposed for web syndication. The term content delivery platform was coined by Feed.Us software architect John Welborn during a presentation to the Chicago Web Developers Association. In late 2007, two blog comment services launched utilizing CDP-based services. Intense Debate and Disqus both employ JavaScript widgets to display and collect blog comments on websites. Notable Content delivery platforms See also Web content management system Viddler, YouTube, Ustream embeddable streaming video References Computer networking Content management systems Website management
Content delivery platform
Technology,Engineering
248
1,788,755
https://en.wikipedia.org/wiki/Latin%20Library
The Latin Library is a website that collects public domain Latin texts. It is run by William L. Carey, adjunct professor of Latin and Roman Law at George Mason University. The texts have been drawn from different sources, are not intended for research purposes nor as substitutes for critical editions, and may contain errors. There are no translations at the site. See also Latin literature Corpus Corporum Library of Latin Texts References External links Latin-language literature Computing in classical studies American digital libraries
Latin Library
Technology
96
3,195,145
https://en.wikipedia.org/wiki/Binary%20icosahedral%20group
In mathematics, the binary icosahedral group 2I or is a certain nonabelian group of order 120. It is an extension of the icosahedral group I or (2,3,5) of order 60 by the cyclic group of order 2, and is the preimage of the icosahedral group under the 2:1 covering homomorphism of the special orthogonal group by the spin group. It follows that the binary icosahedral group is a discrete subgroup of Spin(3) of order 120. It should not be confused with the full icosahedral group, which is a different group of order 120, and is rather a subgroup of the orthogonal group O(3). In the algebra of quaternions, the binary icosahedral group is concretely realized as a discrete subgroup of the versors, which are the quaternions of norm one. For more information see Quaternions and spatial rotations. Elements Explicitly, the binary icosahedral group is given as the union of all even permutations of the following vectors: 8 even permutations of 16 even permutations of 96 even permutations of Here is the golden ratio. In total there are 120 elements, namely the unit icosians. They all have unit magnitude and therefore lie in the unit quaternion group Sp(1). The 120 elements in 4-dimensional space match the 120 vertices the 600-cell, a regular 4-polytope. Properties Central extension The binary icosahedral group, denoted by 2I, is the universal perfect central extension of the icosahedral group, and thus is quasisimple: it is a perfect central extension of a simple group. Explicitly, it fits into the short exact sequence This sequence does not split, meaning that 2I is not a semidirect product of { ±1 } by I. In fact, there is no subgroup of 2I isomorphic to I. The center of 2I is the subgroup { ±1 }, so that the inner automorphism group is isomorphic to I. The full automorphism group is isomorphic to S5 (the symmetric group on 5 letters), just as for - any automorphism of 2I fixes the non-trivial element of the center (), hence descends to an automorphism of I, and conversely, any automorphism of I lifts to an automorphism of 2I, since the lift of generators of I are generators of 2I (different lifts give the same automorphism). Superperfect The binary icosahedral group is perfect, meaning that it is equal to its commutator subgroup. In fact, 2I is the unique perfect group of order 120. It follows that 2I is not solvable. Further, the binary icosahedral group is superperfect, meaning abstractly that its first two group homology groups vanish: Concretely, this means that its abelianization is trivial (it has no non-trivial abelian quotients) and that its Schur multiplier is trivial (it has no non-trivial perfect central extensions). In fact, the binary icosahedral group is the smallest (non-trivial) superperfect group. The binary icosahedral group is not acyclic, however, as Hn(2I,Z) is cyclic of order 120 for n = 4k+3, and trivial for n > 0 otherwise, . Isomorphisms Concretely, the binary icosahedral group is a subgroup of Spin(3), and covers the icosahedral group, which is a subgroup of SO(3). Abstractly, the icosahedral group is isomorphic to the symmetries of the 4-simplex, which is a subgroup of SO(4), and the binary icosahedral group is isomorphic to the double cover of this in Spin(4). Note that the symmetric group does have a 4-dimensional representation (its usual lowest-dimensional irreducible representation as the full symmetries of the -simplex), and that the full symmetries of the 4-simplex are thus not the full icosahedral group (these are two different groups of order 120). 
The binary icosahedral group can be considered as the double cover of the alternating group denoted this isomorphism covers the isomorphism of the icosahedral group with the alternating group . Just as is a discrete subgroup of , is a discrete subgroup of the double over of , namely . The 2-1 homomorphism from to then restricts to the 2-1 homomorphism from to . One can show that the binary icosahedral group is isomorphic to the special linear group SL(2,5) — the group of all 2×2 matrices over the finite field F5 with unit determinant; this covers the exceptional isomorphism of with the projective special linear group PSL(2,5). Note also the exceptional isomorphism which is a different group of order 120, with the commutative square of SL, GL, PSL, PGL being isomorphic to a commutative square of which are isomorphic to subgroups of the commutative square of Spin(4), Pin(4), SO(4), O(4). Presentation The group 2I has a presentation given by or equivalently, Generators in the group of unit quaternions with these relations are given by Subgroups The only proper normal subgroup of 2I is the center { ±1 }. By the third isomorphism theorem, there is a Galois connection between subgroups of 2I and subgroups of I, where the closure operator on subgroups of 2I is multiplication by { ±1 }. is the only element of order 2, hence it is contained in all subgroups of even order: thus every subgroup of 2I is either of odd order or is the preimage of a subgroup of I. Besides the cyclic groups generated by the various elements (which can have odd order), the only other subgroups of 2I (up to conjugation) are: binary dihedral groups, Dic5=Q20=⟨2,2,5⟩, order 20 and Dic3=Q12=⟨2,2,3⟩ of order 12 The quaternion group, Q8=⟨2,2,2⟩, consisting of the 8 Lipschitz units forms a subgroup of index 15, which is also the dicyclic group Dic2; this covers the stabilizer of an edge. The 24 Hurwitz units form an index 5 subgroup called the binary tetrahedral group; this covers a chiral tetrahedral group. This group is self-normalizing so its conjugacy class has 5 members (this gives a map whose image is ). Relation to 4-dimensional symmetry groups The 4-dimensional analog of the icosahedral symmetry group Ih is the symmetry group of the 600-cell (also that of its dual, the 120-cell). Just as the former is the Coxeter group of type H3, the latter is the Coxeter group of type H4, also denoted [3,3,5]. Its rotational subgroup, denoted [3,3,5]+ is a group of order 7200 living in SO(4). SO(4) has a double cover called Spin(4) in much the same way that Spin(3) is the double cover of SO(3). Similar to the isomorphism Spin(3) = Sp(1), the group Spin(4) is isomorphic to Sp(1) × Sp(1). The preimage of [3,3,5]+ in Spin(4) (a four-dimensional analogue of 2I) is precisely the product group 2I × 2I of order 14400. The rotational symmetry group of the 600-cell is then [3,3,5]+ = ( 2I × 2I ) / { ±1 }. Various other 4-dimensional symmetry groups can be constructed from 2I. For details, see (Conway and Smith, 2003). Applications The coset space Spin(3) / 2I = S3 / 2I is a spherical 3-manifold called the Poincaré homology sphere. It is an example of a homology sphere, i.e. a 3-manifold whose homology groups are identical to those of a 3-sphere. The fundamental group of the Poincaré sphere is isomorphic to the binary icosahedral group, as the Poincaré sphere is the quotient of a 3-sphere by the binary icosahedral group. 
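The isomorphism with SL(2,5) mentioned above makes the order of 2I easy to verify by brute force. The short sketch below counts the 2×2 matrices over the field with five elements having determinant 1.

```python
from itertools import product

# Enumerate SL(2,5): 2x2 matrices over F_5 with determinant 1.
# Since 2I is isomorphic to SL(2,5), the count should be 120.
p = 5
order = sum(1 for a, b, c, d in product(range(p), repeat=4)
            if (a * d - b * c) % p == 1)
print(order)  # 120
```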
See also binary polyhedral group binary cyclic group, ⟨n⟩, order 2n binary dihedral group, ⟨2,2,n⟩, order 4n binary tetrahedral group, 2T=⟨2,3,3⟩, order 24 binary octahedral group, 2O=⟨2,3,4⟩, order 48 References 6.5 The binary polyhedral groups, p. 68 Notes Icosahedral
Binary icosahedral group
Physics
1,881
36,682,308
https://en.wikipedia.org/wiki/Stan%20Lee%27s%20World%20of%20Heroes
MarvelousTV (formerly known as Stan Lee's World of Heroes) is a YouTube-funded channel created by Stan Lee. History The first video posted onto World of Heroes, on April 17, 2012, was an episode of Fan Wars, one of the channel's programs. Other programs on the channel include Stan Lee's Super Model, Head Cases, Bad Days, Stan Lee's Academy of Heroes, Stan's Rants, UnCONventional, Geek D.I.Y., and Cocktails w/ Stan. Vuguru and POW! Entertainment, which pitched the idea for the channel and collaborated to create it, unveiled the channel at San Diego Comic-Con in 2012. The show Stan's Rants is based on Lee's old Soapbox column. Felicia Day has appeared in an episode of Cocktails w/ Stan. In 2012, Lee discussed World of Heroes in a Q&A session at the New York Comic Con. References External links - YouTube channel SL WOH.com Stan Lee's World of Heroes.com at archive.org Works by Stan Lee YouTube-funded channels YouTube channels launched in 2012
Stan Lee's World of Heroes
Technology
238
41,837,357
https://en.wikipedia.org/wiki/Pentanitratoaluminate
Pentanitratoaluminate is an anion of aluminium and nitrate groups with formula [Al(NO3)5]2− that can form salts called pentanitratoaluminates. It is unusual in being a complex with five nitrate groups, and in being a nitrate complex of a light element. Such a complex with five nitrate groups is called a pentanitratometallate. Related complexes There is a pentanitrato complex with cerium, (PPh3Et)2[Ce(NO3)5]. Tetranitratoaluminate and hexanitratoaluminate are related anions with aluminium at their core. Properties There are two different arrangements for the coordination of nitrate with aluminium in the complex. One nitrate is bonded to the aluminium through two oxygen atoms (bidentate), while the other four nitrate groups each link through only one oxygen atom (monodentate). The bidentate nitrate has an O–Al length of 1.98 Å. Another two Al–O bonds roughly in the same plane have a length of 1.89 Å. The other two Al–O bonds complete a distorted octahedral arrangement and pop out the top and bottom of the aluminium with a length of 1.93 Å. The bidentate-connected nitrate group is distorted so that the uncoordinated terminal oxygen bond is shorter than the coordinated oxygen–nitrogen distance (1.22 versus 1.31 Å). The angles are also warped, with the angle subtended at the nitrogen by the two coordinated oxygen atoms being 109°, and the angles between these oxygen atoms and the terminal oxygen being 126°. The whole nitrate is still planar along with the aluminium. Examples An example salt is caesium pentanitratoaluminate, Cs2[Al(NO3)5]. Caesium pentanitratoaluminate crystallises in the trigonal form with a = 11.16 Å, c = 10.02 Å, formula mass 602.85, and three molecules per unit cell. The unit cell volume is 1080 Å3, and the measured density is 2.69 g/cm3. The space group is P3121. It is a crystalline substance. Caesium pentanitratoaluminate has been formed by treating a mixture of caesium chloride and caesium tetrachloroaluminate with dinitrogen tetroxide and methyl nitrate, and pumping off the NOCl gas produced. If only caesium tetrachloroaluminate is used, omitting the CsCl, then the result contains solid Al(NO3)3·CH3CN as well. Tetramethylammonium pentanitratoaluminate has been made by recrystallising tetramethylammonium tetranitratoaluminate in acetonitrile over several weeks. It forms a waxy substance. References Aluminium complexes Nitrates Anions
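The crystallographic data quoted above can be cross-checked against the measured density with the usual relation ρ = Z·M/(N_A·V). A quick sketch using only the figures given in the text follows; the calculated X-ray density comes out a few percent above the measured value reported above.

```python
# X-ray density of Cs2[Al(NO3)5] from the quoted cell data: rho = Z*M / (N_A*V).
N_A = 6.022e23      # Avogadro's number, per mol
M = 602.85          # formula mass, g/mol (from the text)
Z = 3               # formula units per unit cell (from the text)
V = 1080e-24        # unit cell volume in cm^3 (1080 cubic angstroms)

rho = Z * M / (N_A * V)
print(f"calculated density ~ {rho:.2f} g/cm3")   # ~2.78, versus the measured 2.69 g/cm3
```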
Pentanitratoaluminate
Physics,Chemistry
609
61,680,371
https://en.wikipedia.org/wiki/North%20American%20Society%20of%20Toxinology
The North American Society of Toxinology (NAST) is a North America-based, international, multidisciplinary organization dedicated to the advancement of the science of all things venomous. It was founded in 2012 and is a 501(c)(3) organization under the U.S. Internal Revenue Service. Origin The progenitor meeting of Venom Week Symposiums, "Snakebites in the New Millennium," was held in 2005 in Omaha, NE. The first meeting named as Venom Week was held in 2007 in Tucson, AZ . and subsequent meetings were held on an ad hoc basis until 2012, when NAST was founded at the combined Venom Week IV/International Society on Toxinology meeting in Honolulu to take on the organizing, and become the primary sponsor of, future Venom Week Symposiums. Foundation At the Venom Week VI meeting (Kingsville, TX, 2018), in recognition of their roles in the founding of NAST, the "Founder Award" was given to James Armitage, Leslie Boyer, Sean Bush, Dan Keyler, Steven Seifert and Carl-Wilhelm Vogel. Organization and Membership Members include medical clinicians, veterinarians, basic scientists, zoo and collection managers, animal scientists, herpetologists, antivenom researchers and developers, and others with an interest in venomous animals and their venoms. Additional information regarding membership can be found on the membership page of the Society's website. The officers of the Society are elected by the members. Venom Week Meetings Venom Week Symposiums have been held in 2005 (Omaha, NE), 2007 (Tucson, AZ), 2009 (Albuquerque, NM), 2012 (Honolulu, HI)), 2016 (Greenville, NC), 2018 (Kingsville, TX), and 2020 (Gainesville, FL). Venom Week 2022 will be held in Scottsdale, AZ, July 18 - 21, 2022. Publication Toxicon is the official journal of the North American Society of Toxinology. The journal was started in 1963. It became the official journal of NAST in 2017 and is published monthly by Elsevier. References External links | NAST Home Page | Toxicon Home Page International scientific organizations Organizations established in 2012 Toxicology organizations 2012 establishments in the United States
North American Society of Toxinology
Environmental_science
460
3,560,385
https://en.wikipedia.org/wiki/Case%20fatality%20rate
In epidemiology, case fatality rate (CFR) – or sometimes more accurately case-fatality risk – is the proportion of people who have been diagnosed with a certain disease and end up dying of it. Unlike a disease's mortality rate, the CFR does not take into account the time period between disease onset and death. A CFR is generally expressed as a percentage. It is a measure of disease lethality, and thus may change with different treatments. CFRs are most often used for diseases with discrete, limited-time courses, such as acute infections. Terminology The mortality rate – often confused with the CFR – is a measure of the relative number of deaths (either in general, or due to a specific cause) within the entire population per unit of time. A CFR, in contrast, is the number of deaths among the number of diagnosed cases only, regardless of time or total population. From a mathematical point of view, by taking values between 0 and 1 or 0% and 100%, CFRs are actually a measure of risk (case fatality risk) – that is, they are a proportion of incidence, although they do not reflect a disease's incidence. They are neither rates, incidence rates, nor ratios (none of which are limited to the range 0–1). They do not take into account time from disease onset to death. Sometimes the term case fatality ratio is used interchangeably with case fatality rate, but they are not the same. A case fatality ratio is a comparison between two different case fatality rates, expressed as a ratio. It is used to compare the severity of different diseases or to assess the impact of interventions. Because the CFR does not measure frequency and is therefore not an incidence rate, some authors note that a more appropriate term is case fatality proportion. Example calculation If 100 people in a community are diagnosed with the same disease, and 9 of them subsequently die from the effects of the disease, the CFR would be 9%. If some of the cases have not yet resolved (neither died nor fully recovered) at the time of analysis, a later analysis might take into account additional deaths and arrive at a higher estimate of the CFR, if the unresolved cases were included as recovered in the earlier analysis. Alternatively, it might later be established that a higher number of people were subclinically infected with the pathogen, resulting in an IFR below the CFR. A CFR may only be calculated from cases that have been resolved through either death or recovery. The preliminary CFR of a newly occurring disease with a high daily increase and a long resolution time, for example, would be substantially lower than the final CFR if unresolved cases were not excluded from the calculation but simply added to the denominator. Infection fatality rate Like the case fatality rate, the term infection fatality rate (IFR) also applies to infectious diseases, but represents the proportion of deaths among all infected individuals, including all asymptomatic and undiagnosed subjects. It is closely related to the CFR, but attempts to additionally account for inapparent infections among healthy people. The IFR differs from the CFR in that it aims to estimate the fatality rate among all infected individuals: those with detected disease (cases) and those with undetected disease (the asymptomatic and untested group). Individuals who are infected but show no symptoms are said to have inapparent, silent or subclinical infections and may inadvertently infect others. By definition, the IFR cannot exceed the CFR, because the former adds asymptomatic cases to its denominator.
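The definitions above are simple proportions, so a tiny sketch suffices to make the CFR/IFR distinction concrete. The 100-case, 9-death figures are the ones from the example calculation in the text; the 300 total infections used for the IFR are an invented illustration of undetected infections enlarging the denominator.

```python
# Case fatality rate and infection fatality rate as simple proportions.
def cfr(deaths, diagnosed_cases):
    return 100.0 * deaths / diagnosed_cases

def ifr(deaths, all_infected):
    return 100.0 * deaths / all_infected

print(cfr(9, 100))   # 9.0  -> CFR of 9%, as in the example above
print(ifr(9, 300))   # 3.0  -> IFR of 3% if 200 further infections went undetected
```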
Examples Some examples will suggest the range of possible CFRs for diseases in the real world: The CFR for the Spanish (1918) flu was greater than 2.5%, while the Asian (1957-58) and Hong Kong (1968-69) flus both had a CFR of about 0.2%. As of , coronavirus disease 2019 has an overall CFR of %, although the CFRs of earlier strains of COVID-19 was around 2%, the CFRs for original SARS and MERS are about 11% and 34%, respectively. The CFR for yellow fever is about 5-6% (but 40-50% in severe cases). Legionnaires' disease has a CFR of about 15%. Left untreated, bubonic plague will have a CFR of up to 60%. With antibiotic treatment, the CFR for bubonic plague is 17%, pneumonic 29% and septicaemic 45%. Active tuberculosis, the infection with the highest mortality rate, has a CFR of 43% in the absence of HIV. Ebola virus disease, one of the infections with the highest lethality, has a CFR as high as 90%. Naegleriasis (also known as primary amoebic meningoencephalitis), has a CFR greater than 95%, with a few of the survivors having been treated with heroic doses of amphotericin B and other off-label drugs. Rabies has a CFR greater than 99% in unvaccinated individuals. A few people have survived either by being vaccinated (but after symptoms started, or else later than ideal), or more recently, by being put into a medically induced coma. See also List of human disease case fatality rates References External links Definitions of case fatality for coronary events in the WHO MONICA Project Swine flu: what do CFR, virulence and mortality rate mean? Epidemiology Rates Articles containing video clips Death
Case fatality rate
Environmental_science
1,178
14,755,183
https://en.wikipedia.org/wiki/Small%20nuclear%20ribonucleoprotein%20polypeptide%20N
Small nuclear ribonucleoprotein-associated protein N is a protein that in humans is encoded by the SNRPN gene. The protein encoded by this gene is one polypeptide of a small nuclear ribonucleoprotein complex and belongs to the snRNP SMB/SMN family. The protein plays a role in pre-mRNA processing, possibly tissue-specific alternative splicing events. Although individual snRNPs are believed to recognize specific nucleic acid sequences through RNA-RNA base pairing, the specific role of this family member is unknown. The protein arises from a bicistronic transcript that also encodes a protein identified as the SNRPN upstream reading frame (SNURF). Multiple transcription initiation sites have been identified and extensive alternative splicing occurs in the 5' untranslated region. Additional splice variants have been described but sequences for the complete transcripts have not been determined. The 5' UTR of this gene has been identified as an imprinting center. Alternative splicing or deletion caused by a translocation event in this paternally-expressed region is responsible for Prader-Willi syndrome due to parental imprint switch failure. SNRPN-methylation is used to detect uniparental disomy of chromosome 15. After fluorescent-in-situ-hybridization has confirmed the presence of either SNRPN or UBE3A (a neighboring gene that is also imprinted), the methylation test (of SNRPN) can reveal whether the patient has uniparental disomy. SNRPN is maternally methylated (silenced). UBE3A appears to be paternally methylated (silenced). References Further reading
Small nuclear ribonucleoprotein polypeptide N
Chemistry
366
7,322,992
https://en.wikipedia.org/wiki/Matterhorn%20%28ride%29
The Matterhorn or Flying Bobs, sometimes known by alternate names such as Musik Express or Terminator, is an amusement ride very similar to the Superbob, which consists of a number of cars attached to axles that swing in and out. The hill and valley shape of the ride causes a pronounced swinging motion: the faster the ride goes, the more dramatic the swinging motion. This ride is commonly seen at travelling funfairs. Most carnivals and parks require riders to be at least 42 inches tall. United States Rides are commonly known as "Flying Bobs". They can typically be found at carnivals, where another common name for them is the "Himalaya," but they also exist at amusement parks, such as the Flying Bobs at DelGrosso's Amusement Park, KonTiki at Six Flags New England and at Coney Island (Cincinnati), and Matterhorn at Cedar Point and Lake Winnepesaukah. The carnival rides are typically transported on two trucks. One is for the ride itself, and the other is for the swinging cars. All rides are essentially similar in concept, but have varying designs. Cars typically move forward and backward at varying intervals during the ride. The Allan Herschell Company made the first "Flying Bobs" in the 1960s. Chance-Morgan currently manufactures a few versions, called the "Alpine Bobs" or "Thunder Bolt." Mack manufactures the Matterhorn, Feria Swing, and Petersburg Schlittenfahrt (Sleigh Ride). Another common manufacturer of the Matterhorn is Bertazzon. It is currently unknown which company manufactured the other version, called "Rip Curl", at Fun Spot in Orlando, Florida. United Kingdom The common analog of the Matterhorn is the Music Express. The main difference between the two rides is the Music Express' use of a track, rather than axles. These versions can also be found in the United States, but have been discontinued by all manufacturers except Bertazzon. Ride manufacturers Bertazzon Chance Morgan Mack Rides Reverchon Industries See also Matterhorn External links Matterhorn History at the nfa. Database of Matterhorns travelling in the UK. Video featuring a Matterhorn. References Amusement rides
Matterhorn (ride)
Physics,Technology
457
8,422,037
https://en.wikipedia.org/wiki/Camillo%20Olivetti
Samuel David Camillo Olivetti (August 13, 1868 – December 1943) was an Italian electrical engineer and founder of Olivetti & Co., SpA., the Italian manufacturer of computers, printers and other business machines. The company was later run by his son Adriano. Biography Samuel David Camillo Olivetti was born in 1868 in a bourgeois Jewish family in Ivrea, Piedmont. His father, Salvador Benedetto, was a textile trader and his mother, Elvira Sacerdoti, who was from Modena, was a banker's daughter. From his father, Camillo Olivetti received the entrepreneurial style and a love for progress, while from his mother a love for languages (Elvira spoke four languages). His cousin was the painter Raffaele Pontremoli. when Camillo was twelve months old, his father died. His mother looked after him, and he was sent to the «Calchi Taeggi» boarding school in Milan. After secondary school he enrolled at the Royal Italian Industrial Museum (later Politecnico di Torino from 1906) and at the Technical Application School, where he attended electrotechnics courses led by Galileo Ferraris. Having graduated on 31 December 1891 in industrial engineering, Olivetti needed to improve his English and gain useful work experience. He stayed over a year in London where he worked for a company that produced measuring instruments for electrical quantities, while also being a mechanic. Upon his return to Turin, he became Ferraris's assistant. In 1893 he accompanied his teacher to the United States of America, who had been invited to lecture at the International Congress of Electrotechnics in Chicago. Olivetti acted as his interpreter. Together they visited the Thomas A. Edison laboratories at Llewellyn Park, New Jersey, where they met the brilliant American inventor in person. After this meeting, in 1893, Camillo wrote to his brother-in-law Carlo from Chicago: Camillo continued the journey from Chicago to San Francisco alone, carefully writing down the things he was discovering about the United States of America. His correspondence from the United States was published in 1968 with the title of American Letters : if the British industrial situation had already struck him, he found the American reality far superior, not only from an industrial point of view but also from a social point of view. After a few months in Palo Alto he began to know US universities better. As an assistant electrotechnical engineer at Stanford University (November 1893 - April 1894), Olivetti was able to experiment in the laboratory the different potential applications of the use of electricity. From that point the United States always represented for Olivetti the frontier of economic modernity, the model to refer to in Italy's own industrial progress: the vivid memory of the collection of American letters, published after his death in Biella in December 1943. See also Adriano Olivetti References and notes External links 1868 births 1943 deaths People from Ivrea 20th-century Italian Jews Electronics engineers Engineers from Turin Italian industrialists Olivetti people 19th-century Italian businesspeople 20th-century Italian businesspeople 19th-century Italian Jews
Camillo Olivetti
Engineering
630
37,651,828
https://en.wikipedia.org/wiki/Tate%20topology
In mathematics, the Tate topology is a Grothendieck topology of the space of maximal ideals of a k-affinoid algebra, whose open sets are the admissible open subsets and whose coverings are the admissible open coverings. References Algebraic geometry
Tate topology
Mathematics
56
208,407
https://en.wikipedia.org/wiki/Rurouni%20Kenshin
is a Japanese manga series written and illustrated by Nobuhiro Watsuki. The story begins in 1878, the 11th year of the Meiji era in Japan, and follows a former assassin of the Bakumatsu, known as Hitokiri Battosai. After his work against the , he becomes Himura Kenshin, a wandering swordsman who protects the people of Japan with a vow never to take another life. Watsuki wrote the series based on his desire to make a manga different from others being published at the time, with Kenshin being a former assassin and the story taking a more serious tone as it progressed. The manga was serialized in Shueisha's Weekly Shōnen Jump magazine from April 1994 to September 1999. The complete series originally consisted of 28 volumes, and was later republished in 22 volumes. It was adapted into an anime television series by Studio Gallop, Studio Deen, and SPE Visual Works, which aired from January 1996 to September 1998. In addition to an animated feature film, Rurouni Kenshin: The Motion Picture, two series of original video animations (OVAs) were also produced; Rurouni Kenshin: Trust & Betrayal, which adapted stories from the manga that were not featured in the anime, and Rurouni Kenshin: Reflection, a sequel to the manga. In 2017, Watsuki began publishing a direct sequel, Rurouni Kenshin: The Hokkaido Arc, in Jump Square. A second anime television series adaptation by Liden Films premiered in July 2023. In addition, other media based on the franchise has been produced, including a series of five live-action theatrical film adaptations, beginning with Rurouni Kenshin in 2012 and ending with Rurouni Kenshin: The Beginning in 2021, and video games for the PlayStation, PlayStation 2, and PlayStation Portable. Several art and guidebooks have been published, and writer Kaoru Shizuka has written three official light novels, which were published by Shueisha. The manga, as well as the first light novel and guidebook, have received a complete North American release by Viz Media. The Rurouni Kenshin manga has over 72 million copies in circulation as of 2019, making it one of the best-selling manga series of all time. The series has received praise from various publications for manga, anime, and other media, particularly for the characters' designs and historical setting. Plot The series takes place in 1878, eleven years after the beginning of the Meiji era. After participating in the Boshin War as the assassin , Himura Kenshin wanders the countryside of Japan, offering protection and aid to those in need as atonement for the murders he once committed. Having vowed to never kill again, he now wields a reverse-bladed katana. Upon arriving in Tokyo, he meets a young woman named Kamiya Kaoru, who is fighting a murderer who claims to be the and is tarnishing the name of the swordsmanship school that she teaches. Kenshin decides to help her and defeats the fake Battōsai. After discovering that Kenshin is the true , Kaoru offers him a place to stay at her dojo, noting that he is peace-loving and not cold-hearted, as his reputation had implied. Kenshin accepts and begins to form lifelong relationships with others, including Sagara Sanosuke, a former member of the Sekihō Army; Myōjin Yahiko, an orphan from a samurai family who also lives with Kaoru as her student; and doctor Takani Megumi, who has become involved in the opium trade. However, he also deals with old and new enemies, including the former leader of the Oniwabanshū, Shinomori Aoshi. 
After several months living in the dojo, Kenshin faces Saitō Hajime, a rival from Bakumatsu who is now a police officer. This challenge turns out to be a test to face his successor, Shishio Makoto, who plans to conquer Japan by destroying the Meiji Government, starting with Kyoto. Feeling that Shishio's faction may attack his friends, Kenshin meets Shishio alone to defeat him. However, many of his friends, including a young Oniwabanshū named Makimachi Misao, whom he meets during his travels, decide to help him in his fight. After his first meeting with him, Kenshin realizes that he must become stronger to defeat Shishio without becoming the cold assassin he was in the past and returns to the man who taught him kenjutsu, Hiko Seijūrō, to learn the school's final technique. Finally accepting the help of his friends, he defeats Shishio, who dies after exceeding the limits of his abnormal body condition, after which a reformed Shinomori stays in Kyoto with the surviving Oniwabanshū. When Kenshin and his friends return to Tokyo, he finds Yukishiro Enishi, who plans to take revenge. At this point, it is revealed that, during the Bakumatsu, Kenshin was to be married to Yukishiro Tomoe, who sought to avenge the death of her first fiancé, whom he had assassinated, but instead they fell in love and he proposed to her. Because she was related to the Edo guards who sought to kill Kenshin, they realized her deception and captured her to use as bait. In the final fight against the group's leader, Kenshin accidentally killed Tomoe after she took a blow meant for him. Seeking revenge for the death of his sister, Enishi kidnaps Kaoru and Kenshin and his friends set out to rescue her. A final battle between Kenshin and Enishi ensues, with Kenshin emerging victorious. Misao brings Tomoe's diary to Enishi, who keeps it in a village to hide along with his missing father. Four years later, Kenshin has married Kaoru and has a son named Himura Kenji. Now at peace with himself, Kenshin gives his reverse-blade sword to Yahiko as a ceremonial gift. Production One-shots A prototype series titled Rurouni: Meiji Swordsman Romantic Story appeared as a pair of separate short stories published in 1992 and 1993. The first story, published in December 1992 in the Weekly Shōnen Jump Winter Special issue of 1993, featured an earlier version of Kenshin stopping a crime lord from taking over the Kamiya family dojo. Watsuki described the first Rurouni story, echoing the "Megumi Arc," as a "pilot" for Rurouni Kenshin. According to Watsuki, the final Rurouni Kenshin series was not composed entirely of his free will. Describing the creation of historical stories as "hard," Watsuki initially wanted to make his next series in a contemporary setting. An editor approached Watsuki and asked him to make a new historical story. With the historical concept, Watsuki intended to use the Bakumatsu period from Moeyo Ken (Burn, O Sword) with a story akin to Sanshiro Sugata. Watsuki experimented with various titles, including Nishin (Two-Hearts) Kenshin, Yorozuya (Jack-of-All-Trades) Kenshin, and variations of "Rurouni" and "Kenshin" with different kanji in that order. The second Rurouni story, published in April 1993 in the Weekly Shōnen Jump 21–22 double issue of that year, featured Kenshin helping a wealthy girl named Raikōji Chizuru. Watsuki recalled experiencing difficulty when condensing "everything" into 31 pages for that story. 
He said that he "put all [his] soul into it," but sighed when looking back at it after the publication of the first Rurouni Kenshin volume. Watsuki said that the second Rurouni: Meiji Swordsman Romantic Story received mediocre reviews and about 200 letters. He referred to it as a "side story." The design model for Hiko Seijuro, Kenshin's master, in Rurouni Kenshin is the character of the same name from his one-shot manga "Crescent Moon of the Warring States," but Watsuki also added some influences from Hiken Majin Hajerun in Takeshi Obata's Arabian Lamp-Lamp. At the time, Watsuki said that he was fascinated by images of "manliness" and that Hiko was one of the first characters to reflect this fascination. Since Watsuki's debut work contained a tall, black-haired man in "showy" armor, he wanted to make a character "completely opposite" to the debut character; the new character ended up "coming out like a girl". According to Watsuki, he used "no real motif" when creating Kenshin and placed a cross-shaped scar on his face "not knowing what else to do". Like several characters, Kenshin was influenced by the Shinsengumi, with Kenshin being affected by Okita Sōji and Saitō Hajime in order to give him an air of mystery. Publication and influences Ever since writing his first storyboards for the manga, Watsuki did sketches of Kenshin's appearance and noticed that he looked like Kurama from Yoshihiro Togashi's manga YuYu Hakusho. Watsuki considered himself young back then and admitted that he always preferred drawing good-looking men, in contrast to Hiko Seijūrō from his earlier one-shot. Among the series it competed with, Watsuki felt Rurouni Kenshin was closer to YuYu Hakusho, which focused more on drama, than to action-driven titles like Dragon Ball. In contrast to the YuYu Hakusho young lead Yusuke Urameshi, Kenshin was written as an adult with a dark past based on the Edo period in order to make the series stand out. Nevertheless, the manga managed to compete well against other Weekly Shōnen Jump manga from the 1990s, including Slam Dunk, Dragon Ball and YuYu Hakusho. Kenshin's age was also a departure from common archetypes in manga, as the themes and lore in the story made him an adult in contrast to other, younger protagonists, with few precedents besides City Hunter. Nevertheless, he was careful about how he wrote these characters as they aged. During his childhood, Watsuki used to practice kendo, which influenced the making of the series. Although Watsuki developed various one-shots before the official serialization of the series, when naming the characters he based some of their names on places where he used to live; Makimachi Misao's "Makimachi" and Sanjō Tsubame, for example, are named after places in Niigata. After the manga started, an editor from Shueisha suggested that Watsuki look at the Samurai Shodown fighting games, which served as a source of influence for his characters. However, he tried to write a realistic series and avoid supernatural powers despite the young target demographic, with one of the few exceptions being Yukishiro Enishi's ability to perform double jumps to counter Kenshin's aerial techniques. When thinking about the ending, Watsuki always thought that a mass murderer should die even if he is on a journey of self-atonement, seeing Ashita no Joe as a famous example of a hero's death. When the manga series started to be published in Weekly Shōnen Jump, Watsuki had little hope for how the series would develop.
He planned to finish the story in approximately 30 chapters, ending with Kenshin's departure from Tokyo similarly to the one from volume 7. Kenshin's enemies would have been people from Kyoto who would send an assassin to kill Kenshin. When the Oniwabanshū were introduced during the serialization, Watsuki noted that the series could be longer as he had created various main characters. At that time, there was a survey, and the series had become very popular. For its seventh volume, Watsuki's boss suggested to him that it was time to make a longer story arc, which resulted in the creation of the fights between Kenshin and Shishio Makoto. The arc was only meant to be serialized for one year, but it ended up being one year and a half long. This arc was also done to develop Kenshin's character, as he considered him not to have a weak point. Watsuki commented that his artistic skills were honed with this arc, as he could draw everything he wanted to. The last arc from the manga was meant to be much shorter, but it turned out to be a fairly long one as he could not present it simplistically. Watsuki originally made this arc prior to the series' start, having already thought about how Kenshin's scar would have been made. Because of the dark style of the Kyoto arc, Watsuki created the comical Mikimachi Misao in order to contrast Kenshin's serious side. Being fascinated by the Shinsengumi, Watsuki designed the characters by basing their characteristics on those of the real Shinsengumi members and also using fictional representations of them and other historical characters from the Bakumatsu period of Japan. The historical characters were considered a hard task by Watsuki. Due to problems with the characterization of Sagara Sōzō, Watsuki decided to illustrate Saitō Hajime in his own style, avoiding the historical figure. He felt very good about Saitō's character, having noted that he fit very well in the manga. However, Watsuki mentioned that many Japanese fans of the Shinsengumi complained about the personality of Saitō, as he was made sadistic. Additionally, the final shot of Kenshin returning to Kaoru's dojo was inspired by the final shot of the Rurouni Kenshin anime's first opening theme, "Sobakasu", by Judy and Mary. In the final arc of the manga, Watsuki wanted to make the five comrades in this storyline as "scum-like" as possible. But because he created villains with no ideals or beliefs, it was difficult to portray them as an enjoyable read. The story took on a darker tone as most of the characters believed Kaoru was killed by Yukishiro Enishi, which made Kenshin question his own way of living and escape to a village of wanderers. Watsuki did not enjoy the angst in Kenshin, so his friend Myōjin Yahiko took over as the series' protagonist until Kenshin recovered. Even though the plot for the "remembrance episodes" of Kenshin's past was already set before serialization started, which was three and a half years before her debut, Watsuki was filled with regrets about how he portrayed Yukishiro Tomoe for unspecified reasons. The final villains, the Sū-shin, had no personality models and were created simply to "fill out the numbers". As the story advanced towards Kenshin's final battle, Watsuki realized that the other characters would have no "glamour" and created the Sū-shin on the spot. Ending Watsuki also had ideas to create a "Hokkaido episode, a sequel," but wanted to start a new manga, so he ended Rurouni Kenshin with the last arc he made. 
Due to the dark nature of Kenshin's life, Watsuki ended the manga with the Jinchu arc, afraid that if he continued writing, the series would no longer fit the magazine's young demographic. In 2012, Watsuki revealed that when he clashed with the editorial staff at the end of the series, his editor, Hisashi Sasaki, understood his intentions, saw that he was at his physical limit, and backed him up. He said it was out of respect and appreciation for the readers that he ended the series while it was still popular. Nevertheless, Watsuki was happy with how he ended Rurouni Kenshin, feeling it was a good place to end the narrative; in contrast, most series keep being pushed until they lose popularity and are cancelled, and Watsuki was glad Rurouni Kenshin did not end like that. For the series' ending, Watsuki conceived new designs with the potential for a sequel in the future. Initially, Watsuki had planned to make Kenshin's hair shorter before the end; however, he found this to be similar to the character Multi in To Heart. Additionally, Himura Kenji was introduced in the finale as the son of Kenshin and Kaoru; even though the character was "cliché", Watsuki felt that Kenji had to appear. An older Sanosuke was drafted by Watsuki to appear in the manga's finale, but this idea was scrapped. In the manga's final story arc, the design was used for Sanosuke's father, Higashidani Kamishimoemon. The author added that he felt attachment towards Enishi and that he would someday like to use Enishi in a future work. Another idea explored for a sequel was the handling of Yahiko as a teenager, for which Watsuki redesigned his appearance. He wanted Yahiko to impress readers so that he could be the protagonist of a possible sequel series. He said this goal influenced his design of Yahiko with Kenshin's physical appearance as well as Sanosuke's personality. He added Sanosuke's kanji to the back of his clothes and was pleased that various readers recognized it. Although he suggested he was not going to make a sequel, he said the main characters would be Yahiko, Sanjō Tsubame, and Tsukayama Yutarō. Watsuki thought about writing a story in which Yahiko and Tsubame would have a son, Myōjin Shin'ya, who would become a skilled swordsman. Themes The series' main theme is responsibility, as seen through Kenshin's actions, as he wants to atone for all the people he killed during the Bakumatsu by aiding innocent people while wielding a non-lethal sword. Marco Olivier from the Nelson Mandela Metropolitan University said that the sakabatō symbolizes Kenshin's oath not to kill again, an oath that other warriors appearing in the series repeatedly challenge. This theme also encourages former drug dealer Takani Megumi to become a doctor upon learning of Kenshin's past and actions. Another theme is power, which is mostly seen through Sagara Sanosuke and Myōjin Yahiko. However, like Megumi, these two characters are also influenced by the main character, as they wish to become stronger to assist Kenshin throughout the story. Additionally, the series discourages revenge, as seen in the final arc when Yukishiro Enishi believes he succeeded in getting his revenge on Kenshin but starts having hallucinations of his late sister with a sad expression on her face. As an "outlet" for Watsuki's kendo emotions, Yahiko "knows a pain that hero-types like Himura Kenshin and Sagara Sanosuke can never know". As a result, Yahiko was made a stronger character little by little to appeal to the young readership.
Watsuki eventually gave Yahiko a stronger characterization during the Kyoto arc, which surprised his readers. When questioned about the series' theme being Kenshin's self-redemption, Watsuki mentioned that when he was young, he used to read manga and that it influenced his writing of Rurouni Kenshin. He added that he wanted to make the story different from other comics as he considers the main character, Kenshin, neither a good nor evil character. Watsuki mentioned that, starting with volume 7, the series took on a more adult tone due to the various conflicts in the story, but commented that it was still influenced by the manga he had read. Throughout the series' development, Watsuki remained undecided about whether Kamiya Kaoru would die before the end. However, he later decided to keep Kaoru alive as he came to the conclusion that he wanted a happy ending and that the manga was aimed at young readers. In The Oxford Handbook of American Folklore and Folklife Studies, Kenshin is regarded as a "far cry" from American superheroes due to his androgynous look and self-deprecating personality. However, the character is said to be relatable to the Eastern audience through Kenshin's quest for redemption, which is called the main theme of the manga. The manga is further noted to have a balance between individualism and community. Watsuki said he was an "infatuated" type of person rather than a "passionate" kind of person; therefore, Rurouni Kenshin is a "Meiji Swordsman Story" as opposed to a "Meiji Love Story". According to the book Bringing Forth a World: Engaged Pedagogy in the Japanese University, the manga reflects the confusion of Japanese society after the economic disillusionment of the early 1990s. It confronts depictions of Japanese education in a manner that contrasts with school textbooks, especially given the series' young readership. Since the manga focuses on realism but is aimed at young readers, the series is notable for changing the portrayal of samurai in order to create a more optimistic take in comparison to real-life events. The unique handling of Kenshin led to the manga being described as "neo shonen" because of how different it was from previous Weekly Shōnen Jump series. Media Manga Written and illustrated by Nobuhiro Watsuki, Rurouni Kenshin was serialized in Shueisha's manga magazine Weekly Shōnen Jump from April 12, 1994, to September 21, 1999. The 255 individual chapters were collected and published in 28 volumes by Shueisha, with the first volume released on September 9, 1994, and the last on November 4, 1999. They re-released the series in a 22-volume edition between July 4, 2006, and May 2, 2007. Shueisha published a 14-volume edition between January 18 and July 18, 2012. A single-chapter follow-up featuring the character of Yahiko Myōjin, Yahiko no Sakabatō, was originally published in Weekly Shōnen Jump in 2000 after the conclusion of the series. Left out of the original volumes, it was added as an extra to the final release. In December 2011, Shueisha announced Watsuki would be putting his series, Embalming -The Another Tale of Frankenstein-, on hold to begin a "reboot" of Rurouni Kenshin, called Rurouni Kenshin: Restoration, as a tie-in to the live-action film. The series began in the June 2012 issue of Jump Square, which was released on May 2, 2012, and ended in the July 2013 issue on June 4, 2013. The reboot depicts the battles that were featured in the first live-action film.
Another special, "Act Zero", was published in Weekly Shōnen Jump in August 2012 as a prologue to Restoration and included in its first volume. In 2014, Watsuki wrote a two-chapter spin-off titled Rurouni Kenshin: Master of Flame for Jump SQ., which tells how Shishio met Yumi and formed the Juppongatana. Watsuki and his wife, Kaworu Kurosaki, collaborated on a two-chapter spin-off titled Rurouni Kenshin Side Story: The Ex-Con Ashitaro for the ninth anniversary of Jump SQ. in 2016. It acts as a prologue to Rurouni Kenshin: The Hokkaido Arc, which began in September 2017 as a sequel to the original manga series. In 2021, Watsuki created the manga that was exclusively shown at an exhibition celebrating the 25th anniversary of Rurouni Kenshin. It serves as an epilogue to chapter 81 of the original manga and shows the first time Kenshin used his sakabatō. The chapter was later adapted into the episode 34 of the 2023 Rurouni Kenshin anime series, "Sakabato First Attack". Rurouni Kenshin was licensed for an English-language release in North America by Viz Media. The first volume of the series was released on October 7, 2003. Although the first volumes were published on an irregular basis, since volume 7, Viz has established a monthly basis due to good sales and consumer demands. Therefore, the following volumes were published until July 5, 2006, when the final volume was released. "Yahiko no Sakabatō" was also published in English in Shonen Jump on August 1, 2006. Between January 29, 2008, and March 16, 2010, Viz re-released the manga in a nine-volume omnibus format called "Viz Big Edition", which collects three volumes in one. The ninth and final volume includes "Yahiko no Sakabato" and "Cherry Blossoms in Spring". They released a similar "3-in-1 Edition" across nine volumes between January 3, 2017, and January 1, 2019. Viz uses the actual ordering of Japanese names, with the family name or surname before the given name, within the series to reduce confusion and because Rurouni Kenshin is a historical series. Anime series An anime television series adaptation of Rurouni Kenshin, produced by SPE Visual Works and Fuji TV, animated by Studio Gallop (episodes 1–66) and Studio Deen (episodes 67–95), and directed by Kazuhiro Furuhashi, was broadcast on Fuji TV from January 1996 to September 1998. A second anime television series adaptation by Liden Films was announced at Jump Festa '22 in December 2021. The series' first season was broadcast from July to December 2023 on Fuji TV's Noitamina programming block. A second season, subtitled Kyoto Disturbance, premiered in October 2024. Animated film An anime film with an original story, titled , also known as Rurouni Kenshin: Requiem for Patriots, originally released in North America as Samurai X: The Motion Picture, premiered in December 1997. Original video animations A 4-episode original video animation (OVA), titled Rurouni Kenshin: Trust & Betrayal, which served as a prequel to the first anime television series, was released in 1999. A two-episode OVA titled Rurouni Kenshin: Reflection, which served as a sequel to the first anime television series, was released from 2001 to 2002. A two-episode OVA, Rurouni Kenshin: New Kyoto Arc, which remade the series' Kyoto arc, was released from 2011 to 2012. Live-action films Five live-action films have been released theatrically. The live-action film adaptation of Rurouni Kenshin was announced on June 28, 2011. 
Produced by Warner Bros., with actual film production done by Studio Swan, the films were directed by Keishi Ōtomo and starred Takeru Satoh (of Kamen Rider Den-O fame) as Kenshin, Munetaka Aoki as Sanosuke Sagara, and Emi Takei as Kaoru. The first film, titled Rurouni Kenshin, was released on August 25, 2012. In August 2013, it was announced that two sequels were being filmed simultaneously for release in 2014. Rurouni Kenshin: Kyoto Inferno and Rurouni Kenshin: The Legend Ends adapt the Kyoto arc of the manga. In April 2019, it was announced that two new live-action films would adapt the Remembrance/Tenchu and Jinchu arcs; the films, titled Rurouni Kenshin: The Final and Rurouni Kenshin: The Beginning, premiered in 2021. Stage shows In 2016, the Takarazuka Revue performed a musical adaptation of the manga called Rurouni Kenshin: Meiji Swordsman Romantic Story. The show ran from February to March and starred Seina Sagiri as Kenshin and Miyu Sakihi as Kaoru. The musical was written and directed by Shūichirō Koike. In 2018, a stage play adaptation was performed in the Shinbashi Enbujō theater in Tokyo and Shōchikuza theater in Osaka. Seina Sagiri returned to play Kenshin, while Moka Kamishiraishi played Kaoru. Kanō Sōzaburō, an original character introduced in the previous musical, made a return appearance, played by Mitsuru Matsuoka. Shūichirō Koike returned as the director and the script writer of the play. A stage musical adaptation of the manga's Kyoto arc was scheduled to be held from November to December 2020 at IHI Stage Around Tokyo, starring Teppei Koike as Himura Kenshin and Mario Kuroba as the antagonist Makoto Shishio, with Shūichirō Koike again serving as director and script writer. This stage musical was cancelled due to the COVID-19 pandemic. Art and guidebooks Two encyclopedias of the Rurouni Kenshin manga were released; the first was published in Japan on July 4, 1996, by Shueisha and in the United States by Viz Media on November 1, 2005. The second, released on December 15, 1999, includes the story "Cherry Blossoms in Spring", which details the fates of all of the Rurouni Kenshin characters. The story takes place years after the manga's conclusion, when Kenshin and Kaoru have married and have a young son, Kenji. Many of the series' major characters who have befriended Kenshin reunite with him or otherwise reveal their current whereabouts at a spring picnic. For the anime, three Kenshin Soushi artbooks were published from 1997 to 1998. While the first two were based on the TV series, the third one was based on the film. The latter was named Ishin Shishi no Requiem Art Book and was released along with the movie. Also released was the Rurouni-Art Book, which contained images from the OVAs. A guidebook for the series was published on June 4, 2007. Light novels The Rurouni Kenshin light novels were published by Shueisha's Jump J-Books line and co-written by Kaoru Shizuka. Most of them are original stories that were later adapted into the anime. Others are adaptations of manga and anime stories. The very first novel, Rurouni Kenshin: Voyage to the Moon World, which was published in Japan on October 10, 1996, and in North America on October 17, 2006, details another adventure involving the return of Tales of the Meiji Season 3's Beni-Aoi Arc characters like Kaishu Katsu and the Kamiya Dojo's third pupil, Daigoro. The second, Yahiko's Battle, was released on October 3, 1997. It retells various stories featured in the manga and anime series.
The third novel, TV Anime Shimabara Arc, was published on February 4, 1999. A novel adaptation of Rurouni Kenshin Cinema-ban, written by Watsuki's wife Kaoru Kurosaki and released on September 4, 2012, is a Japanese light novel version of the Restoration manga's new Kurogasa (Jin-E) arc, featuring Banshin and a different, younger Gein. Both are Ishin members of Enishi's team in the Jinchu/Tenchu (Judgment of Earth/Heaven) portion of the Enishi saga in the main manga series. Video games There have been five Rurouni Kenshin video games released for the PlayStation series of consoles. The first was released on November 29, 1996. It was developed by ZOOM Inc. and published by Sony Computer Entertainment. The game is a 3D fighting game with nine playable characters, with the plot being based on the first seven volumes of the manga. The second was released on December 18, 1997, and was re-released in the PlayStation The Best lineup on November 5, 1998. The game is a role-playing video game with an original story unrelated to either the manga or anime. The third is the only video game in the series for the PlayStation 2 console; its Japanese release was slated for September 13, 2006. The game has sold over 130,000 copies in Japan. The game was developed by Eighting and published by Banpresto. A 2D fighting game was released for the PlayStation Portable on March 10, 2011, and a sequel followed on August 30, 2012. Both games were developed by Natsume Co., Ltd. and published by Bandai Namco Games. Himura Kenshin also appears in the 2005 and 2006 Nintendo DS games Jump Super Stars and Jump Ultimate Stars as the sole battle character representing his series, while others are support characters and help characters. Kenshin and Shishio appeared as playable characters in the 2014 PlayStation 3 and PlayStation Vita game J-Stars Victory VS and in the 2019 game Jump Force for Windows, PlayStation 4, and Xbox One. Audio dramas Several drama CDs that adapt stories from the Rurouni Kenshin manga were released. They feature different voice actors than those who later worked on the anime adaptation. In Volume 5 of the manga, Watsuki stated that he was anticipating the third installment, which would adapt the Udō Jin-e arc. He expected it to be "pretty close" to his original, but with additional lines for Sanosuke and Yahiko. Merchandise Watsuki commented that there had been a lot of Rurouni Kenshin merchandise released for the Japanese market. He recommended that buyers consider quality before paying for merchandise items and that they consult their wallets and buy stuff that they feel is "worth it". Watsuki added that he liked the prototype for a stuffed Kenshin doll for the UFO catcher devices. Reception Sales and popularity Rurouni Kenshin has been highly popular, having sold over 55 million copies in Japan alone up until February 2012, making it one of Shueisha's top ten best-selling manga series. In 2014, it was reported that the series had 70 million copies in circulation. By December 2019, the manga had over 72 million copies in circulation, including digital releases. Volume 27 of the manga ranked second in the Viz Bookscan Top Ten during June 2006, while volumes 21 and 20 ranked second and tenth, respectively, in the Top 10 Graphic Novels of Viz of 2005. Rurouni Kenshin volume 24 ranked 116th on USA Today's best-selling book list for the week ending February 26, 2006. During the third quarter of 2003, Rurouni Kenshin ranked at the top of ICv2's Top 50 Manga Properties.
In the same poll from 2005, it was featured at the top once again based on sales from English volumes during 2004. In the Top Ten Manga Properties from 2006 from the same site, it ranked ninth. In November 2014, readers of Da Vinci magazine voted Rurouni Kenshin the thirteenth-greatest Weekly Shōnen Jump manga series of all time. On TV Asahi's Manga Sōsenkyo 2021 poll, in which 150,000 people voted for their top 100 manga series, Rurouni Kenshin ranked 31st. Critical response The manga has received praise and criticism from various publications. Mania Entertainment writer Megan Lavey found that the manga had a good balance between character development, comedy, and action scenes. She said that Watsuki's artwork improved as the series continued, noting that characters also had visible reactions during fights. Steve Raiteri from Library Journal praised the series for its characters and battles. However, he noted that some fights were too violent, so he recommended the series to older teenagers as well as adults. Otaku USA's Daryl Surat described the series as an example of a "neo-shōnen" series, where a shōnen series also appeals to a female audience; Surat stated that in such series, character designs are "pretty" for female audiences but not too "girly" for male audiences. Surat cited Shinomori Aoshi and Seta Sōjirō, characters who ranked highly in popularity polls even though, in Surat's view, Aoshi does not engage in "meaningful" battles and Sōjirō is a "kid". Surat explained that Aoshi appears "like a Clamp character wearing Gambit's coat" and that Sōjirō always smiles despite the abuse inflicted upon him. Surat said that the character designs for the anime television series were "toughened up a bit". He added that the budget for animation and music was "top-notch" because Sony backed the production. Watsuki's writing involving romance and Kenshin's hidden psychological weak points also earned positive responses from other sites, with AnimeNation comparing the series to Clamp's X based on its multiple story elements. In general, Mania found Watsuki's art appealing, as well as its evolution across the twenty-eight volumes: the female characters became more attractive, while the male characters grew simpler yet retained their early handsome looks. Because the series takes a darker tone in later story arcs, with Kenshin facing new threats as well as his own Battōsai self, Kat Kan from Voice of Youth Advocates recommended it to older teens. Kan also found that anime viewers would enjoy Watsuki's drawings due to the way he illustrates battles. This is mostly noted in the "Kyoto arc", where Mania Entertainment writer Megan Lavey applauded the fight between Himura Kenshin and anti-hero Saito Hajime, which acts as the prologue of that narrative. Mania remarks on the buildup Aoshi, Saito, and other characters bring to the story due to how they share similar goals in the same arc, with newcomer Misao helping to balance the style by bringing more comical interactions with the protagonist. Although the site Manga News enjoyed Seta Sojiro's fight and how it connected with Shishio's past, they said the sixteenth volume's best part was Kenshin's fight against Shishio due to the buildup and symbolism the two characters have. The eventual climax earned further praise for how menacing Shishio appears in the battle against his predecessor, although the reviewer questioned whether Kenshin would have been a superior opponent had he retained his original killer persona.
Critics expressed mixed opinions in regard to the final arc. Zac Bertschy from Anime News Network (ANN) praised the story from the manga but noted that by volume 18 of the series, Watsuki started to repeat the same type of villains who were united to kill Kenshin, similar to Trigun. Although he praised Watsuki's characters, he commented that some of them needed more consistency due to the various "bizarre" antagonists. Because Kaoru, Kenshin, and Sanosuke are absent for much of the Jinchu arc, Manga News described Aoshi as the star of the series' 24th volume, due to how he explores the mysteries behind Enishi's revenge; his subsequent actions made him stand out, most notably because he had been absent for multiple chapters. IGN reviewer A.E. Sparrow liked the manga's ending, praising how the storylines are resolved and how most of the supporting cast ends up. He also praised the series' characters, remarking that Kenshin "belongs in any top ten of manga heroes." Otaku USA reviewer Daryl Surat said that the manga's quality was good until the "Revenge Arc", where he criticized the storyline and the new characters. Carlo Santos from the same site praised Enishi and Kenshin's final fight despite finding the ending predictable. While also liking their final showdown, Megan Lavey from Mania Entertainment felt that the twist that happens shortly after the battle is over serves to show Enishi's lifelong trauma and, at the same time, Kenshin's compassion towards others. In the seventh chapter of Bringing Forth a World: Engaged Pedagogy in the Japanese University, "The Renegotiation of Modernity", media studies professor Maria Grajdian compared Kenshin's heroic nature as a wanderer to both Luke Skywalker and Harry Potter, due to how he wishes to protect the weak and sees nothing wrong with such a trait. This is heavily explored when Kenshin confronts the young Seta, who holds opposite values about how the strong should act, exemplifying the series' soft masculinity. There is also a balance between Kenshin's supernatural strength and his small frame, which had a major impact on the audience because of how likable the protagonist is. His introduction establishes his values regarding the sword, which also affect Kaoru's, Yahiko's, and Sanosuke's values upon their meetings. In doing so, Rurouni Kenshin laid "more than twenty years ago the foundation of a fresh paradigm of humanity based on tenderness and mutual acceptance as a counter-movement to the individualism, competition and efficiency that characterize the project of modernity". Legacy Before becoming an official manga author, Naruto's author, Masashi Kishimoto, decided that he should try creating a manga in a genre that Weekly Shōnen Jump had not published. However, during his years of college, Kishimoto started reading Hiroaki Samura's Blade of the Immortal and Rurouni Kenshin, which used the said genre. Kishimoto recalled that he had not been so surprised by a manga since reading Akira, and found that he was still not able to compete against them. Hideaki Sorachi cited Rurouni Kenshin as a major source of inspiration for his manga series, Gintama. He also commented that the series influenced the existence of modern historical works, such as manga and video games. Kenshin's design partially inspired Koyoharu Gotouge for the appearance of Tanjiro Kamado, protagonist of Demon Slayer: Kimetsu no Yaiba.
For the series' 25th anniversary in January 2021, 15 manga authors sent congratulatory messages: three of Watsuki's former assistants, Eiichiro Oda (One Piece), Hiroyuki Takei (Shaman King), and Shinya Suzuki (Mr. Fullswing); Nobuyuki Anzai (Flame of Recca); Riichiro Inagaki (Eyeshield 21); Takeshi Obata (Death Note); Masashi Kishimoto (Naruto); Mitsutoshi Shimabukuro (Toriko); Hideaki Sorachi (Gintama); Yasuhiro Nightow (Trigun); Kazuhiro Fujita (Ushio & Tora); Yusei Matsui (Assassination Classroom); and Kentaro Yabuki (Black Cat). In an interview for the event, Oda told Watsuki that Rurouni Kenshin is popular due to his loyalty to his fans. Watsuki commented that Kenshin's tendency to defeat his enemies without killing them became a common trend for protagonists from other Weekly Shōnen Jump series, such as Monkey D. Luffy from One Piece and Naruto Uzumaki from Naruto. Notes References Further reading External links at Viz Media Adventure anime and manga Anime and manga set in Tokyo Comics set in the 1870s Fiction about uxoricide Fiction set in 1878 Historical anime and manga Japanese serial novels Jump J-Books Madman Entertainment manga Manga adapted into films Martial arts anime and manga Meiji era in fiction Ninja in anime and manga Romance anime and manga Samurai in anime and manga Shueisha franchises Shueisha manga Shōnen manga Viz Media manga Viz Media novels Works about atonement
Rurouni Kenshin
Biology
8,948
65,762,255
https://en.wikipedia.org/wiki/Copenhagen%20Cabinetmakers%27%20Guild%20Exhibition
Copenhagen Cabinetmakers' Guild Exhibition (Danish: Københavns Snedkerlaugs Møbeludstilling) was an annual furniture exhibition and competition held from 1927 to 1966 that served as a well-known institution of Danish design and a vehicle for the emergence of the Danish Modern movement. Many recognizable icons of Danish Modern were first unveiled as prototypes at the exhibition, including Hans Wegner's Round Chair, Aksel Bender Madsen and Ejnar Larsen's Metropolitan chair, Børge Mogensen's Spokeback Chair, and Finn Juhl's Chieftain Chair. History The Exhibition was originally created out of fear that the Danish cabinetmaking craft industry would not be able to compete with more affordable furniture imports (primarily from Germany). After the Copenhagen Cabinetmakers Guild failed to lobby the Danish government to limit furniture imports, the organization established the exhibition in order to increase awareness of the traditional craft and dissuade consumers from purchasing the cheaper imports. The event sought to foster greater collaboration and experimentation between master cabinetmakers and architects. In some cases, these pairs established long-term working relationships, including Hans J. Wegner and Johannes Hansen, Finn Juhl and Niels Vodder, Ole Wanscher and A.J. Iversen, Jacob Kjær and Peder Moos, and Kaare Klint and Rud Rasmussen. In 1933, a design competition was added to the event format. When American journalists attended the event for the first time in 1949, their report was the first coverage of Danish Modern in the American media and helped foster international hype for the design trend in the 1950s. The Cabinetmakers' Guild held its final exhibition in 1966, after a decline in the Danish furniture craft meant that few cabinetmakers remained in Copenhagen to sustain it. Revival In 1981, the Snedkernes Efterårsudstilling was founded to revive the tradition of the defunct Cabinetmakers' Guild Exhibition: organizing an annual furniture exhibition for designers and manufacturers in Denmark. See also Danish Modern Stockholm Exhibition Further reading Grete Jalk, Dansk Møbelkunst gennem 40 aar : Københavns Snedkerlaugs møbeludstillinger [40 years of Danish furniture design; the Copenhagen Cabinetmakers' Guild exhibitions]. Taastrup: Teknologisk instituts forlag, 1987. References External links Copenhagen Cabinetmakers' Guild website Archival footage of the 1959 event on Dansk Kulturarv Exhibition Booklets from the Royal Library of Denmark's Digital collections Danish modern Design events Danish furniture History of furniture 20th century in Denmark Annual events in Denmark Recurring events established in 1927 Recurring events disestablished in 1966
Copenhagen Cabinetmakers' Guild Exhibition
Engineering
557
256,703
https://en.wikipedia.org/wiki/Leo%20Baekeland
Leo Hendrik Baekeland (November 14, 1863 – February 23, 1944) was a Belgian chemist. Educated in Belgium and Germany, he spent most of his career in the United States. He is best known for the inventions of Velox photographic paper in 1893 and Bakelite in 1907. He has been called "The Father of the Plastics Industry" for his invention of Bakelite, an inexpensive, non-flammable and versatile plastic, which marked the beginning of the modern plastics industry. Early life Leo Baekeland was born in Ghent, Belgium, on November 14, 1863, the son of a cobbler, Charles Baekeland, and a house maid, Rosalia Merchie. His siblings were: Elodia Maria Baekeland; Melonia Leonia Baekeland; Edmundus Baekeland; Rachel Helena Baekeland and Delphina Baekeland. He told The Literary Digest: "The name is a Dutch word meaning 'Land of Beacons.'" He spent much of his early life in Ghent, Belgium. He graduated with honours from the Ghent Municipal Technical School and was awarded a scholarship by the City of Ghent to study chemistry at Ghent University, which he entered in 1880. He acquired a PhD maxima cum laude at the age of 21. After a brief appointment as Professor of Physics and Chemistry at the Government Higher Normal School in Bruges (1887–1889), he was appointed associate professor of chemistry at Ghent University in 1889. Personal life Baekeland married Céline Swarts (1868–1944) on August 8, 1889, and they had two children. One of their grandsons, Brooks (whose father was George Washington Baekeland), married the model Barbara Daly (later known as Barbara Daly Baekeland) in 1942 and had one child, a boy named Anthony "Tony" Baekeland. Career In 1889, Baekeland and his wife Céline took advantage of a travel scholarship to visit universities in England and the United States. They visited New York City, where he met Professor Charles F. Chandler of Columbia University and Richard Anthony, of the E. and H.T. Anthony photographic company. Professor Chandler was influential in convincing Baekeland to stay in the United States. Baekeland had already invented a process to develop photographic plates using water instead of other chemicals, which he had patented in Belgium in 1887. Although this method was unreliable, Anthony saw potential in the young chemist and offered him a job. Baekeland worked for the Anthony company for two years, and in 1891 set up in business for himself as a consulting chemist. However, a spell of illness and disappearing funds made him rethink his actions, and he decided to return to his old interest of producing a photographic paper that would allow enlargements to be printed by artificial light. After two years of intensive effort, he perfected the process to produce the paper, which he named "Velox"; it was the first commercially successful photographic paper. At the time, the US was suffering a recession and there were no investors or buyers for his proposed new product, so Baekeland became partners with Leonard Jacobi and established the Nepera Chemical Company in Nepera Park, Yonkers, New York. In 1899, Jacobi, Baekeland, and Albert Hahn, a further associate, sold Nepera to George Eastman of the Eastman Kodak Co. for $750,000. Baekeland earned approximately $215,000 net through the transaction. With a portion of the money he purchased "Snug Rock", a house in Yonkers, New York, where he set up his own well-equipped laboratory. There, he later said, "in comfortable financial circumstances, a free man, ready to devote myself again to my favorite studies...
I enjoyed for several years that great blessing, the luxury of not being interrupted in one's favorite work."One of the requirements of the Nepera sale was, in effect, a non-compete clause: Baekeland agreed not to do research in photography for at least 20 years. He would have to find a new area of research. His first step was to go to Germany in 1900, for a "refresher in electrochemistry" at the Technical Institute at Charlottenburg. Upon returning to the United States, Baekeland was involved briefly but successfully in helping Clinton Paul Townsend and Elon Huntington Hooker to develop a production-quality electrolytic cell. Baekeland was hired as an independent consultant, with the responsibility of constructing and operating a pilot plant. Baekeland developed a stronger diaphragm cell for the chloralkali process, using woven asbestos cloth filled with a mixture of iron oxide, asbestos fibre, and iron hydroxide. Baekeland's improvements were important to the founding of Hooker Chemical Company and the construction of one of the world's largest electrochemical plants, at Niagara Falls. Baekeland was elected to the American Philosophical Society in 1935 and the United States National Academy of Sciences in 1936. Invention of Bakelite Having been successful with Velox, Baekeland set out to find another promising area for chemical development. As he had done with Velox, he looked for a problem that offered "the best chance for the quickest possible results". Asked why he entered the field of synthetic resins, Baekeland answered that his intention was to make money. By the 1900s, chemists had begun to recognize that many of the natural resins and fibers were polymeric, a term introduced in 1833 by Jöns Jacob Berzelius. Adolf von Baeyer had experimented with phenols and aldehydes in 1872, particularly pyrogallol and benzaldehyde. He created a "black guck" which he considered useless and irrelevant to his search for synthetic dyes. Baeyer's student, Werner Kleeberg, experimented with phenol and formaldehyde in 1891, but as Baekeland noted "could not crystallize this mess, nor purify it to constant composition, nor in fact do anything with it once produced". Baekeland began to investigate the reactions of phenol and formaldehyde. He familiarized himself with previous work and approached the field systematically, carefully controlling and examining the effects of temperature, pressure, and the types and proportions of materials used. The first application that appeared promising was the development of a synthetic replacement for shellac (made from the secretion of lac beetles). Baekeland produced a soluble phenol-formaldehyde shellac called "Novolak" but concluded that its properties were inferior. It never became a big market success, but is still used to this day (e. g. as a photoresist). Baekeland continued to explore possible combinations of phenol and formaldehyde, intrigued by the possibility that such materials could be used in molding. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced his dreamed-of hard moldable plastic: Bakelite. Bakelite was made from phenol, then known as carbolic acid, and formaldehyde. The chemical name of Bakelite is polyoxybenzylmethylenglycolanhydride. In compression molding, the resin is generally combined with fillers such as wood or asbestos, before pressing it directly into the final shape of the product. 
Baekeland's process patent for making insoluble products of phenol and formaldehyde was filed in July 1907, and granted on December 7, 1909. In February 1909, Baekeland officially announced his achievement at a meeting of the New York section of the American Chemical Society. In 1917, Baekeland became a professor by special appointment at Columbia University. The Smithsonian has documents from the county courthouse for Westchester County in White Plains, New York, indicating that he was admitted to U. S. Citizenship on December 16, 1919. In 1922, after patent litigation favorable to Baekeland, the General Bakelite Co., which he had founded in 1910, along with the Condensite Co. founded by Aylesworth, and the Redmanol Chemical Products Company founded by Lawrence V. Redman, were merged into the Bakelite Corporation. The invention of Bakelite marks the beginning of the age of plastics. Bakelite was the first plastic invented that retained its shape after being heated. Radios, telephones and electrical insulators were made of Bakelite because of its excellent electrical insulation and heat-resistance. Soon, its applications spread to most branches of industry. Baekeland received many awards and honors, both during his lifetime and beyond, including the Perkin Medal in 1916 and the Franklin Medal in 1940. In 1974 he was posthumously inducted into the Plastics Hall of Fame and in 1978 he was likewise inducted into the National Inventors Hall of Fame in Akron, Ohio. At the time of Baekeland's death in 1944, the world production of Bakelite was ca. 175,000 tons, and it was used in over 15,000 different products. He held more than 100 patents, including processes for the separation of copper and cadmium, and for the impregnation of wood. Later life and death As Baekeland grew older he became more eccentric, entering fierce battles with his son and presumptive heir over salary and other issues. He sold the General Bakelite Company to Union Carbide in 1939 and, at his son's prompting, he retired. He became a recluse, attempting to develop an immense tropical garden on his winter estate in Coconut Grove, Florida. He died of a stroke in a sanatorium in Beacon, New York, in 1944. Baekeland is buried in Sleepy Hollow Cemetery in Sleepy Hollow, New York. References Further reading Mercelis, Joris. (2020) Beyond Bakelite: Leo Baekeland and the Business of Science and Invention (MIT Press, 2020) online review External links Amsterdam Bakelite Collection Foundation The Baekeland fund A virtual Bakelite museum with a short biography of Leo Baekeland Virtual Bakelite Museum of Ghent 1907–2007 Time, Mar. 29, 1999, Chemist LEO BAEKELAND National Academy of Sciences Biographical Memoir 1863 births 1944 deaths 19th-century American inventors 20th-century American inventors Columbia University faculty 20th-century Belgian inventors Belgian emigrants to the United States Belgian chemists American chemists Scientists from Miami Scientists from Ghent Scientists from Yonkers, New York American people of Flemish descent Polymer scientists and engineers Ghent University alumni Burials at Sleepy Hollow Cemetery People from Beacon, New York Presidents of the Electrochemical Society Members of the American Philosophical Society Recipients of Franklin Medal
Leo Baekeland
Chemistry,Materials_science
2,213
143,335
https://en.wikipedia.org/wiki/Celestial%20navigation
Celestial navigation, also known as astronavigation, is the practice of position fixing using stars and other celestial bodies, enabling a navigator to accurately determine their actual current physical position in space or on the surface of the Earth without relying solely on estimated positional calculations, commonly known as dead reckoning. Celestial navigation is performed without using satellite navigation or other similar modern electronic or digital positioning means. Celestial navigation uses "sights," or timed angular measurements, taken typically between a celestial body (e.g., the Sun, the Moon, a planet, or a star) and the visible horizon. Celestial navigation can also take advantage of measurements between celestial bodies without reference to the Earth's horizon, such as when the Moon and other selected bodies are used in the practice called "lunars" or the lunar distance method, used for determining precise time when time is unknown. Celestial navigation by taking sights of the Sun and the horizon whilst on the surface of the Earth is commonly used, providing various methods of determining position, one of which is the popular and simple method called "noon sight navigation": a single observation of the exact altitude of the Sun and the exact time of that altitude (known as "local noon"), the highest point of the Sun above the horizon from the observer's position on any given day. This angular observation, combined with precise knowledge of the simultaneous time at the prime meridian, directly renders a latitude and longitude fix at the time and place of the observation by simple mathematical reduction. The Moon, a planet, Polaris, or one of the 57 other navigational stars whose coordinates are tabulated in any of the published nautical or air almanacs can also accomplish this same goal. Celestial navigation accomplishes its purpose by using angular measurements (sights) between celestial bodies and the visible horizon to locate one's position on the Earth, whether on land, in the air, or at sea. In addition, observations between stars and other celestial bodies accomplish the same results while in space; this was used in the Apollo space program and is still used on many contemporary satellites. Equally, celestial navigation may be used while on other planetary bodies to determine position on their surface, using their local horizon and suitable celestial bodies with matching reduction tables and knowledge of local time. For navigation by celestial means, when on the surface of the Earth at any given instant in time, a celestial body is located directly over a single point on the Earth's surface. The latitude and longitude of that point are known as the celestial body's geographic position (GP), the location of which can be determined from tables in the nautical or air almanac for that year. The measured angle between the celestial body and the visible horizon is directly related to the distance between the celestial body's GP and the observer's position. After some computations, referred to as "sight reduction," this measurement is used to plot a line of position (LOP) on a navigational chart or plotting worksheet, with the observer's position being somewhere on that line. The LOP is actually a short segment of a very large circle on Earth that surrounds the GP of the observed celestial body.
(An observer located anywhere on the circumference of this circle on Earth, measuring the angle of the same celestial body above the horizon at that instant of time, would observe that body to be at the same angle above the horizon.) Sights on two celestial bodies give two such lines on the chart, intersecting at the observer's position (actually, the two circles would result in two points of intersection arising from sights on two stars described above, but one can be discarded since it will be far from the estimated position; see the figure in the example below). Most navigators will use sights of three to five stars, if available, since that will result in only one common intersection and minimize the chance of error. That premise is the basis for the most commonly used method of celestial navigation, referred to as the "altitude-intercept method." At least three points must be plotted. The plotted lines will usually form a small triangle, with the exact position inside it. The accuracy of the sights is indicated by the size of the triangle. Joshua Slocum used both noon sight and star sight navigation to determine his current position during his voyage, the first recorded single-handed circumnavigation of the world. In addition, he used the lunar distance method (or "lunars") to determine and maintain known time at Greenwich (the prime meridian), thereby keeping his "tin clock" reasonably accurate and therefore his position fixes accurate. Celestial navigation can only determine longitude when the time at the prime meridian is accurately known. The more accurately time at the prime meridian (0° longitude) is known, the more accurate the fix; indeed, every four seconds of error in the time source (commonly a chronometer or, in aircraft, an accurate "hack watch") can lead to a positional error of one nautical mile. When time is unknown or not trusted, the lunar distance method can be used as a method of determining time at the prime meridian. A functioning timepiece with a second hand or digit, an almanac with lunar corrections, and a sextant are used. With no knowledge of time at all, a lunar calculation (given an observable Moon of respectable altitude) can provide time accurate to within a second or two with about 15 to 30 minutes of observations and mathematical reduction from the almanac tables. After practice, an observer can regularly derive and prove time using this method to within about one second, or one nautical mile, of navigational error due to errors ascribed to the time source. Example An example illustrating the concept behind the intercept method for determining position is shown to the right. (Two other common methods for determining one's position using celestial navigation are longitude by chronometer and ex-meridian methods.) In the adjacent image, the two circles on the map represent lines of position for the Sun and Moon at 12:00 GMT on October 29, 2005. At this time, a navigator on a ship at sea measured the Moon to be 56° above the horizon using a sextant. Ten minutes later, the Sun was observed to be 40° above the horizon. Lines of position were then calculated and plotted for each of these observations. Since both the Sun and Moon were observed at their respective angles from the same location, the navigator would have to be located at one of the two locations where the circles cross. In this case, the navigator is located either in the Atlantic Ocean, west of Madeira, or in South America, southwest of Asunción, Paraguay.
In most cases, determining which of the two intersections is the correct one is obvious to the observer because they are often thousands of miles apart. As it is unlikely that the ship is sailing across South America, the position in the Atlantic is the correct one. Note that the lines of position in the figure are distorted because of the map's projection; they would be circular if plotted on a globe. An observer at the Gran Chaco point would see the Moon at the left of the Sun, and an observer at the Madeira point would see the Moon at the right of the Sun. Angular measurement Accurate angle measurement has evolved over the years. One simple method is to hold the hand above the horizon with one's arm stretched out. The angular width of the little finger is just over 1.5 degrees at extended arm's length and can be used to estimate the elevation of the Sun from the horizon plane and therefore estimate the time until sunset. The need for more accurate measurements led to the development of a number of increasingly accurate instruments, including the kamal, astrolabe, octant, and sextant. The sextant and octant are most accurate because they measure angles from the horizon, eliminating errors caused by the placement of an instrument's pointers, and because their dual-mirror system cancels relative motions of the instrument, showing a steady view of the object and horizon. Navigators measure distance on the Earth in degrees, arcminutes, and arcseconds. A nautical mile is defined as 1,852 meters but is also (not accidentally) one arc minute of angle along a meridian on the Earth. Sextants can be read accurately to within 0.1 arcminutes, so the observer's position can be determined within (theoretically) 0.1 nautical miles (185.2 meters, or about 203 yards). Most ocean navigators, measuring from a moving platform under fair conditions, can achieve a practical accuracy of approximately 1.5 nautical miles (2.8 km), enough to navigate safely when out of sight of land or other hazards. Practical navigation Practical celestial navigation usually requires a marine chronometer to measure time, a sextant to measure the angles, an almanac giving schedules of the coordinates of celestial objects, a set of sight reduction tables to help perform the height and azimuth computations, and a chart of the region. With sight reduction tables, the only calculations required are addition and subtraction. Small handheld computers, laptops and even scientific calculators enable modern navigators to "reduce" sextant sights in minutes, by automating all the calculation and/or data lookup steps. Most people can master simpler celestial navigation procedures after a day or two of instruction and practice, even using manual calculation methods. Modern practical navigators usually use celestial navigation in combination with satellite navigation to correct a dead reckoning track, that is, a course estimated from a vessel's position, course, and speed. Using multiple methods helps the navigator detect errors and simplifies procedures. When used this way, a navigator, from time to time, measures the Sun's altitude with a sextant, then compares that with a precalculated altitude based on the exact time and estimated position of the observation. On the chart, the straight edge of a plotter can mark each position line. If the position line indicates a location more than a few miles from the estimated position, more observations can be taken to restart the dead-reckoning track. 
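For readers curious about what the automated "reduction" of a sextant sight involves, the following is a minimal illustrative sketch in Python of the altitude and azimuth calculation behind the intercept method, using the standard spherical-trigonometry formula sin(Hc) = sin(lat)·sin(dec) + cos(lat)·cos(dec)·cos(LHA). The assumed position, declination, Greenwich hour angle, observed altitude, and function names are hypothetical values chosen only for the example; they are not data from this article or from any almanac.

```python
# Minimal sketch of the altitude-intercept (Marcq St. Hilaire) computation that
# sight reduction tables or navigation software perform. All input values here
# are illustrative, not real almanac data.
import math

def computed_altitude_azimuth(lat_deg, lon_deg, dec_deg, gha_deg):
    """Return (Hc, Zn) in degrees for an assumed position and a body's
    declination/Greenwich hour angle, using standard spherical trigonometry."""
    lat, dec = math.radians(lat_deg), math.radians(dec_deg)
    # Local hour angle: GHA plus east longitude (west longitude is negative).
    lha = math.radians((gha_deg + lon_deg) % 360.0)
    sin_hc = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(lha))
    hc = math.asin(sin_hc)
    # True azimuth from north, resolved into the correct quadrant with atan2.
    z = math.atan2(-math.cos(dec) * math.sin(lha),
                   math.cos(lat) * math.sin(dec)
                   - math.sin(lat) * math.cos(dec) * math.cos(lha))
    return math.degrees(hc), math.degrees(z) % 360.0

# Hypothetical sight: assumed position 40 N, 30 W; body at dec 10 N, GHA 50.
hc, zn = computed_altitude_azimuth(40.0, -30.0, 10.0, 50.0)
ho = 55.3  # observed (corrected) sextant altitude in degrees, hypothetical
intercept_nm = (ho - hc) * 60.0  # 1 arcminute of altitude = 1 nautical mile
direction = "toward" if intercept_nm > 0 else "away from"
print(f"Hc = {hc:.2f} deg, Zn = {zn:.1f} deg, "
      f"intercept = {abs(intercept_nm):.1f} nm {direction} the body")
```

The printed intercept, in nautical miles, is the amount by which the position line is shifted toward or away from the body's geographic position along the azimuth, exactly as described for the intercept method above.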
In the event of equipment or electrical failure, taking Sun lines a few times a day and advancing them by dead reckoning allows a vessel to get a crude running fix sufficient to return to port. One can also use the Moon, a planet, Polaris, or one of 57 other navigational stars to track celestial positioning. Latitude Latitude was measured in the past either by measuring the altitude of the Sun at noon (the "noon sight") or by measuring the altitudes of any other celestial body when crossing the meridian (reaching its maximum altitude when due north or south), and frequently by measuring the altitude of Polaris, the north star (assuming it is sufficiently visible above the horizon, which it is not in the Southern Hemisphere). Polaris always stays within 1 degree of the celestial north pole. If a navigator measures the angle to Polaris and finds it to be 10 degrees from the horizon, then he is about 10 degrees north of the equator. This approximate latitude is then corrected using simple tables or almanac corrections to determine a latitude that is theoretically accurate to within a fraction of a mile. Angles are measured from the horizon because locating the point directly overhead, the zenith, is not normally possible. When haze obscures the horizon, navigators use artificial horizons, which are horizontal mirrors or pans of reflective fluid, especially mercury. In the latter case, the angle between the reflected image in the mirror and the actual image of the object in the sky is exactly twice the required altitude. Longitude If the angle to Polaris can be accurately measured, a similar measurement of a star near the eastern or western horizons would provide the longitude. The problem is that the Earth turns 15 degrees per hour, making such measurements dependent on time. A measure a few minutes before or after the same measure the day before creates serious navigation errors. Before good chronometers were available, longitude measurements were based on the transit of the moon or the positions of the moons of Jupiter. For the most part, these were too difficult to be used by anyone except professional astronomers. The invention of the modern chronometer by John Harrison in 1761 vastly simplified longitudinal calculation. The longitude problem took centuries to solve and was dependent on the construction of a non-pendulum clock (as pendulum clocks cannot function accurately on a tilting ship, or indeed a moving vehicle of any kind). Two useful methods evolved during the 18th century and are still practiced today: lunar distance, which does not involve the use of a chronometer, and the use of an accurate timepiece or chronometer. Presently, layperson calculations of longitude can be made by noting the exact local time (leaving out any reference for daylight saving time) when the Sun is at its highest point in Earth's sky. The calculation of noon can be made more easily and accurately with a small, exactly vertical rod driven into level ground—take the time reading when the shadow is pointing due north (in the northern hemisphere). Then take your local time reading and subtract it from GMT (Greenwich Mean Time), or the time in London, England. For example, a noon reading (12:00) near central Canada or the US would occur at approximately 6 p.m. (18:00) in London. The 6-hour difference is one quarter of a 24-hour day, or 90 degrees of a 360-degree circle (the Earth). 
Lunar distance An older but still useful and practical method of determining accurate time at sea, predating precise shipboard timekeeping and satellite-based time systems, is called "lunar distances," or "lunars." It was refined for daily use on board ships in the 18th century and used extensively for a relatively short period. Use declined through the middle of the 19th century as better and better timepieces (chronometers) became available to the average vessel at sea. Although most recently used only by sextant hobbyists and historians, it is now becoming more common in celestial navigation courses to reduce total dependence on GNSS systems, which might otherwise be the only accurate time source aboard a vessel. The method is designed for use when an accurate timepiece is not available or when timepiece accuracy is suspect during a long sea voyage. The navigator precisely measures the angle between the Moon and the Sun, or between the Moon and one of several stars near the ecliptic. The observed angle must be corrected for the effects of refraction and parallax, like any celestial sight. To make this correction, the navigator measures the altitudes of the Moon and Sun (or another star) at about the same time as the lunar distance angle. Only rough values for the altitudes are required. A calculation with suitable published tables (or longhand with logarithms and graphical tables) requires about 10 to 15 minutes' work to convert the observed angle(s) to a geocentric lunar distance. The navigator then compares the corrected angle against those listed in the appropriate almanac pages for every three hours of Greenwich time, using interpolation tables to derive intermediate values. The result is a difference in time between the time source (of unknown time) used for the observations and the actual prime meridian time (that of the "Zero Meridian" at Greenwich, also known as UTC or GMT). Knowing UTC/GMT, a further set of sights can be taken and reduced by the navigator to calculate their exact position on the Earth as a local latitude and longitude. Use of time The considerably more popular method was (and still is) to use an accurate timepiece to directly measure the time of a sextant sight. The need for accurate navigation led to the development of progressively more accurate chronometers in the 18th century (see John Harrison). Today, time is measured with a chronometer, a quartz watch, a shortwave radio time signal broadcast from an atomic clock, or the time displayed on a satellite time signal receiver. A quartz wristwatch normally keeps time within a half-second per day. If it is worn constantly, keeping it near body heat, its rate of drift can be measured against the radio time signal, and by compensating for this drift a navigator can keep time to better than a second per month. When time at the prime meridian (or another starting point) is accurately known, celestial navigation can determine longitude, and the more accurately latitude and time are known, the more accurate the longitude determination. 
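How much a clock error matters can be sketched numerically. The snippet below is illustrative only (ad hoc helper names, no almanac corrections): it converts a clock error into a longitude error at 15 degrees per hour and then into an east-west ground distance, scaled by the cosine of latitude and using one arcminute as one nautical mile; the results reproduce, approximately, the figures quoted in the next paragraph.

    import math

    NAUTICAL_MILE_M = 1852.0  # meters per arcminute of a great circle

    def longitude_error_deg(clock_error_s):
        """Angular longitude error caused by a clock error, at 15 deg/hour."""
        return clock_error_s / 3600.0 * 15.0

    def ground_error_m(clock_error_s, latitude_deg):
        """East-west ground distance corresponding to that longitude error."""
        arcmin = longitude_error_deg(clock_error_s) * 60.0
        return arcmin * NAUTICAL_MILE_M * math.cos(math.radians(latitude_deg))

    print(ground_error_m(1.0, 0.0))    # ~463 m per second of clock error at the equator
    print(ground_error_m(1.0, 45.0))   # ~328 m at 45 degrees latitude
    print(ground_error_m(1.0, 90.0))   # ~0 m at the poles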
The surface speed of the Earth due to its rotation, and hence the ground distance corresponding to a given difference in longitude, is latitude-dependent. At the poles, or latitude 90°, this surface speed falls to zero. At 45° latitude, one second of time is equivalent in longitude to roughly 330 meters (about 1,080 ft), and one-tenth of a second to roughly 33 meters (about 108 ft). At the slightly bulged-out equator, or latitude 0°, the surface speed of the Earth, and hence its equivalent in longitude, reaches its maximum of about 465 meters per second (about 1,530 ft/s). Traditionally, a navigator checked their chronometer(s) with their sextant at a geographic marker surveyed by a professional astronomer. This is now a rare skill, and most harbormasters cannot locate their harbor's marker. Ships often carried more than one chronometer. Chronometers were kept on gimbals in a dry room near the center of the ship. They were used to set a hack watch for the actual sight, so that no chronometers were ever exposed to the wind and salt water on deck. Winding and comparing the chronometers was a crucial duty of the navigator. Even today, it is still logged daily in the ship's deck log and reported to the captain before eight bells on the forenoon watch (shipboard noon). Navigators also set the ship's clocks and calendar. Two chronometers provided dual modular redundancy, allowing a backup if one ceased to work, but no error correction if the two displayed different times, since in case of contradiction between the two chronometers it would be impossible to know which one was wrong (the error detection obtained would be the same as having only one chronometer and checking it periodically: every day at noon against dead reckoning). Three chronometers provided triple modular redundancy, allowing error correction if one of the three was wrong, so the pilot would take the average of the two with closer readings (average precision vote). There is an old adage to this effect, stating: "Never go to sea with two chronometers; take one or three." Vessels engaged in survey work generally carried many more than three chronometers; for example, HMS Beagle carried 22 chronometers. Modern celestial navigation The celestial line of position concept was discovered in 1837 by Thomas Hubbard Sumner when, after one observation, he computed and plotted his longitude at more than one trial latitude in his vicinity and noticed that the positions lay along a line. Using this method with two bodies, navigators were finally able to cross two position lines and obtain their position, in effect determining both latitude and longitude. Later in the 19th century came the development of the modern (Marcq St. Hilaire) intercept method; with this method, the body height and azimuth are calculated for a convenient trial position and compared with the observed height. The difference in arcminutes is the nautical mile "intercept" distance that the position line needs to be shifted toward or away from the direction of the body's subpoint. (The intercept method uses the concept illustrated in the example in the "How it works" section above.) Two other methods of reducing sights are the longitude by chronometer and the ex-meridian method. While celestial navigation is becoming increasingly redundant with the advent of inexpensive and highly accurate satellite navigation receivers (GNSS), it was used extensively in aviation until the 1960s and in marine navigation until quite recently. However, since a prudent mariner never relies on any sole means of fixing their position, many national maritime authorities still require deck officers to show knowledge of celestial navigation in examinations, primarily as a backup for electronic or satellite navigation. 
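The intercept method mentioned above reduces to a short computation: an altitude is calculated for an assumed position and compared with the observed altitude, and the difference in arcminutes is the intercept in nautical miles. The sketch below uses the standard spherical-astronomy altitude formula; the variable names and sample numbers are illustrative only, and a real sight reduction also applies corrections (index error, dip, refraction, and so on) that are omitted here.

    import math

    def computed_altitude_deg(lat_deg, dec_deg, lha_deg):
        """Calculated altitude Hc for an assumed latitude, body declination and
        local hour angle: sin Hc = sin(lat) sin(dec) + cos(lat) cos(dec) cos(LHA)."""
        lat, dec, lha = (math.radians(v) for v in (lat_deg, dec_deg, lha_deg))
        sin_hc = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(lha))
        return math.degrees(math.asin(sin_hc))

    def intercept_nm(observed_alt_deg, lat_deg, dec_deg, lha_deg):
        """Intercept in nautical miles; positive means 'toward' the body."""
        hc = computed_altitude_deg(lat_deg, dec_deg, lha_deg)
        return (observed_alt_deg - hc) * 60.0   # arcminutes ~ nautical miles

    # Illustrative numbers only: an observed altitude of 50.0 degrees against this
    # assumed position gives an intercept of a few miles toward the body.
    print(intercept_nm(50.0, lat_deg=40.0, dec_deg=10.0, lha_deg=30.0))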
One of the most common current uses of celestial navigation aboard large merchant vessels is for compass calibration and error checking at sea when no terrestrial references are available. In 1980, French Navy regulations still required an independently operated timepiece on board so that, in combination with a sextant, a ship's position could be determined by celestial navigation. The U.S. Air Force and U.S. Navy continued instructing military aviators on celestial navigation use until 1997, because: celestial navigation can be used independently of ground aids. celestial navigation has global coverage. celestial navigation cannot be jammed (although it can be obscured by clouds). celestial navigation does not give off any signals that could be detected by an enemy. The United States Naval Academy (USNA) announced that it was discontinuing its course on celestial navigation (considered to be one of its most demanding non-engineering courses) from the formal curriculum in the spring of 1998. In October 2015, citing concerns about the reliability of GNSS systems in the face of potential hostile hacking, the USNA reinstated instruction in celestial navigation in the 2015 to 2016 academic year. At another federal service academy, the US Merchant Marine Academy, there was no break in instruction in celestial navigation as it is required to pass the US Coast Guard License Exam to enter the Merchant Marine. It is also taught at Harvard, most recently as Astronomy 2. Celestial navigation continues to be used by private yachtsmen, and particularly by long-distance cruising yachts around the world. For small cruising boat crews, celestial navigation is generally considered an essential skill when venturing beyond visual range of land. Although satellite navigation technology is reliable, offshore yachtsmen use celestial navigation as either a primary navigational tool or as a backup. Celestial navigation was used in commercial aviation up until the early part of the jet age; early Boeing 747s had a "sextant port" in the roof of the cockpit. It was only phased out in the 1960s with the advent of inertial navigation and Doppler navigation systems, and today's satellite-based systems which can locate the aircraft's position accurate to a 3-meter sphere with several updates per second. A variation on terrestrial celestial navigation was used to help orient the Apollo spacecraft en route to and from the Moon. To this day, space missions such as the Mars Exploration Rover use star trackers to determine the attitude of the spacecraft. As early as the mid-1960s, advanced electronic and computer systems had evolved enabling navigators to obtain automated celestial sight fixes. These systems were used aboard both ships and US Air Force aircraft, and were highly accurate, able to lock onto up to 11 stars (even in daytime) and resolve the craft's position to less than . The SR-71 high-speed reconnaissance aircraft was one example of an aircraft that used a combination of automated celestial and inertial navigation. These rare systems were expensive, however, and the few that remain in use today are regarded as backups to more reliable satellite positioning systems. Intercontinental ballistic missiles use celestial navigation to check and correct their course (initially set using internal gyroscopes) while flying outside the Earth's atmosphere. The immunity to jamming signals is the main driver behind this seemingly archaic technique. 
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique for space whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GNSS, this comparison would allow the vehicle to triangulate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. On 9 November 2016 the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission. Training Celestial navigation training equipment for aircraft crews combine a simple flight simulator with a planetarium. An early example is the Link Celestial Navigation Trainer, used in the Second World War. Housed in a high building, it featured a cockpit accommodating a whole bomber crew (pilot, navigator, and bombardier). The cockpit offered a full array of instruments, which the pilot used to fly the simulated airplane. Fixed to a dome above the cockpit was an arrangement of lights, some collimated, simulating constellations, from which the navigator determined the plane's position. The dome's movement simulated the changing positions of the stars with the passage of time and the movement of the plane around the Earth. The navigator also received simulated radio signals from various positions on the ground. Below the cockpit moved "terrain plates"—large, movable aerial photographs of the land below—which gave the crew the impression of flight and enabled the bomber to practice lining up bombing targets. A team of operators sat at a control booth on the ground below the machine, from which they could simulate weather conditions such as wind or clouds. This team also tracked the airplane's position by moving a "crab" (a marker) on a paper map. The Link Celestial Navigation Trainer was developed in response to a request made by the Royal Air Force (RAF) in 1939. The RAF ordered 60 of these machines, and the first one was built in 1941. The RAF used only a few of these, leasing the rest back to the US, where eventually hundreds were in use. See also Air navigation Aircraft periscope Astrodome (aeronautics) Astronautics Bowditch's American Practical Navigator Celestial pole Circle of equal altitude Ephemeris Geodetic astronomy GNSS Satellite navigation History of longitude List of proper names of stars List of selected stars for navigation Polar alignment Polynesian navigation Radio navigation Spherical geometry Star clock References External links Celestial Navigation Net Table of the 57 navigational stars with apparent magnitudes and celestial coordinates Inua Complete nautical Almanac and more Calculating Lunar Distances Backbearing.com Almanac, Sight Reduction Tables and more. Celestial Navigation in Petan.net Air Facts THE V-FORCE Air Navigation Sextants Sextant in a Douglas DC-8
Celestial navigation
Astronomy
5,515
32,238,093
https://en.wikipedia.org/wiki/Di-haem%20cytochrome%20c%20peroxidase
In molecular biology, the di-haem cytochrome c peroxidase family is a group of distinct cytochrome c peroxidases (CCPs) that contain two haem groups. Similar to other cytochrome c peroxidases, they reduce hydrogen peroxide to water using c-type haem as an oxidizable substrate. However, since they possess two, instead of one, haem prosthetic groups, this family of bacterial CCPs reduce hydrogen peroxide without the need to generate semi-stable free radicals. The two haem groups have significantly different redox potentials. The high potential (+320 mV) haem feeds electrons from electron shuttle proteins to the low potential (-330 mV) haem, where peroxide is reduced (indeed, the low potential site is known as the peroxidatic site). The CCP protein itself is structured into two domains, each containing one c-type haem group, with a calcium-binding site at the domain interface. This family also includes MauG proteins, whose similarity to di-haem CCP was previously recognised. References Protein families
Di-haem cytochrome c peroxidase
Biology
236
14,463
https://en.wikipedia.org/wiki/Harmonic%20mean
In mathematics, the harmonic mean is a kind of average, one of the Pythagorean means. It is the most appropriate average for ratios and rates such as speeds, and is normally only used for positive arguments. The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the numbers, that is, the generalized f-mean with . For example, the harmonic mean of 1, 4, and 4 is Definition The harmonic mean H of the positive real numbers is It is the reciprocal of the arithmetic mean of the reciprocals, and vice versa: where the arithmetic mean is The harmonic mean is a Schur-concave function, and is greater than or equal to the minimum of its arguments: for positive arguments, . Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged). The harmonic mean is also concave for positive arguments, an even stronger property than Schur-concavity. Relationship with other means For all positive data sets containing at least one pair of nonequal values, the harmonic mean is always the least of the three Pythagorean means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between. (If all values in a nonempty data set are equal, the three means are always equal.) It is the special case M−1 of the power mean: Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones. The arithmetic mean is often mistakenly used in places calling for the harmonic mean. In the speed example below for instance, the arithmetic mean of 40 is incorrect, and too big. The harmonic mean is related to the other Pythagorean means, as seen in the equation below. This can be seen by interpreting the denominator to be the arithmetic mean of the product of numbers n times but each time omitting the j-th term. That is, for the first term, we multiply all n numbers except the first; for the second, we multiply all n numbers except the second; and so on. The numerator, excluding the n, which goes with the arithmetic mean, is the geometric mean to the power n. Thus the n-th harmonic mean is related to the n-th geometric and arithmetic means. The general formula is If a set of non-identical numbers is subjected to a mean-preserving spread — that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases. Harmonic mean of two or three numbers Two numbers For the special case of just two numbers, and , the harmonic mean can be written as: or (Note that the harmonic mean is undefined if , i.e. .) In this special case, the harmonic mean is related to the arithmetic mean and the geometric mean by Since by the inequality of arithmetic and geometric means, this shows for the n = 2 case that H ≤ G (a property that in fact holds for all n). It also follows that , meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means. 
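A minimal computational sketch of these definitions may help; the helper functions below are illustrative only. They reproduce the worked example above (the harmonic mean of 1, 4 and 4 is 2), show the ordering H ≤ G ≤ A among the Pythagorean means, and check the two-number identity relating the three means.

    import math

    def harmonic_mean(xs):
        """Harmonic mean of positive numbers: n divided by the sum of reciprocals."""
        return len(xs) / sum(1.0 / x for x in xs)

    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def geometric_mean(xs):
        return math.exp(sum(math.log(x) for x in xs) / len(xs))

    data = [1, 4, 4]
    print(harmonic_mean(data))    # 2.0
    print(geometric_mean(data))   # ~2.52
    print(arithmetic_mean(data))  # 3.0   (H <= G <= A for positive data)

    # For two numbers, the geometric mean equals the geometric mean of the
    # arithmetic and harmonic means, i.e. G**2 == A * H.
    x, y = 3.0, 12.0
    A, G, H = arithmetic_mean([x, y]), geometric_mean([x, y]), harmonic_mean([x, y])
    print(G, math.sqrt(A * H))    # both 6.0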
Three numbers For the special case of three numbers, , and , the harmonic mean can be written as: Three positive numbers H, G, and A are respectively the harmonic, geometric, and arithmetic means of three positive numbers if and only if the following inequality holds Weighted harmonic mean If a set of weights , ..., is associated to the data set , ..., , the weighted harmonic mean is defined by The unweighted harmonic mean can be regarded as the special case where all of the weights are equal. Examples In analytic number theory Prime number theory The prime number theorem states that the number of primes less than or equal to is asymptotically equal to the harmonic mean of the first natural numbers. In physics Average speed In many situations involving rates and ratios, the harmonic mean provides the correct average. For instance, if a vehicle travels a certain distance d outbound at a speed x (e.g. 60 km/h) and returns the same distance at a speed y (e.g. 20 km/h), then its average speed is the harmonic mean of x and y (30 km/h), not the arithmetic mean (40 km/h). The total travel time is the same as if it had traveled the whole distance at that average speed. This can be proven as follows: Average speed for the entire journey = However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 40 km/h. Average speed for the entire journey The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed. For the arithmetic mean, the speed of each portion of the trip is weighted by the duration of that portion, while for the harmonic mean, the corresponding weight is the distance. In both cases, the resulting formula reduces to dividing the total distance by the total time.) However, one may avoid the use of the harmonic mean for the case of "weighting by distance". Pose the problem as finding "slowness" of the trip where "slowness" (in hours per kilometre) is the inverse of speed. When trip slowness is found, invert it so as to find the "true" average trip speed. For each trip segment i, the slowness si = 1/speedi. Then take the weighted arithmetic mean of the si's weighted by their respective distances (optionally with the weights normalized so they sum to 1 by dividing them by trip length). This gives the true average slowness (in time per kilometre). It turns out that this procedure, which can be done with no knowledge of the harmonic mean, amounts to the same mathematical operations as one would use in solving this problem by using the harmonic mean. Thus it illustrates why the harmonic mean works in this case. Density Similarly, if one wishes to estimate the density of an alloy given the densities of its constituent elements and their mass fractions (or, equivalently, percentages by mass), then the predicted density of the alloy (exclusive of typically minor volume changes due to atom packing effects) is the weighted harmonic mean of the individual densities, weighted by mass, rather than the weighted arithmetic mean as one might at first expect. 
To use the weighted arithmetic mean, the densities would have to be weighted by volume. Applying dimensional analysis to the problem while labeling the mass units by element and making sure that only like element-masses cancel makes this clear. Electricity If one connects two electrical resistors in parallel, one having resistance x (e.g., 60 Ω) and one having resistance y (e.g., 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of x and y (48 Ω): the equivalent resistance, in either case, is 24 Ω (one-half of the harmonic mean). This same principle applies to capacitors in series or to inductors in parallel. However, if one connects the resistors in series, then the average resistance is the arithmetic mean of x and y (50 Ω), with total resistance equal to twice this, the sum of x and y (100 Ω). This principle applies to capacitors in parallel or to inductors in series. As with the previous example, the same principle applies when more than two resistors, capacitors or inductors are connected, provided that all are in parallel or all are in series. The "conductivity effective mass" of a semiconductor is also defined as the harmonic mean of the effective masses along the three crystallographic directions. Optics As for other optic equations, the thin lens equation = + can be rewritten such that the focal length f is one-half of the harmonic mean of the distances of the subject u and object v from the lens. Two thin lenses of focal length f1 and f2 in series is equivalent to two thin lenses of focal length fhm, their harmonic mean, in series. Expressed as optical power, two thin lenses of optical powers P1 and P2 in series is equivalent to two thin lenses of optical power Pam, their arithmetic mean, in series. In finance The weighted harmonic mean is the preferable method for averaging multiples, such as the price–earnings ratio (P/E). If these ratios are averaged using a weighted arithmetic mean, high data points are given greater weights than low data points. The weighted harmonic mean, on the other hand, correctly weights each data point. The simple weighted arithmetic mean when applied to non-price normalized ratios such as the P/E is biased upwards and cannot be numerically justified, since it is based on equalized earnings; just as vehicles speeds cannot be averaged for a roundtrip journey (see above). In geometry In any triangle, the radius of the incircle is one-third of the harmonic mean of the altitudes. For any point P on the minor arc BC of the circumcircle of an equilateral triangle ABC, with distances q and t from B and C respectively, and with the intersection of PA and BC being at a distance y from point P, we have that y is half the harmonic mean of q and t. In a right triangle with legs a and b and altitude h from the hypotenuse to the right angle, is half the harmonic mean of and . Let t and s (t > s) be the sides of the two inscribed squares in a right triangle with hypotenuse c. Then equals half the harmonic mean of and . Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.) 
One application of this trapezoid result is in the crossed ladders problem, where two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at height A and the other leaning against the opposite wall at height B, as shown. The ladders cross at a height of h above the alley floor. Then h is half the harmonic mean of A and B. This result still holds if the walls are slanted but still parallel and the "heights" A, B, and h are measured as distances from the floor along lines parallel to the walls. This can be proved easily using the area formula of a trapezoid and area addition formula. In an ellipse, the semi-latus rectum (the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus. In other sciences In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision (true positives per predicted positive) and the recall (true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure). This is used in information retrieval because only the positive class is of relevance, while number of negatives, in general, is large and unknown. It is thus a trade-off as to whether the correct positive predictions should be measured in relation to the number of predicted positives or the number of real positives, so it is measured versus a putative number of positives that is an arithmetic mean of the two possible denominators. A consequence arises from basic algebra in problems where people or systems work together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps , which is equal to 2.4 hours, to drain the pool together. This is one-half of the harmonic mean of 6 and 4: . That is, the appropriate average for the two types of pump is the harmonic mean, and with one pair of pumps (two pumps), it takes half this harmonic mean time, while with two pairs of pumps (four pumps) it would take a quarter of this harmonic mean time. In hydrology, the harmonic mean is similarly used to average hydraulic conductivity values for a flow that is perpendicular to layers (e.g., geologic or soil) - flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity. In sabermetrics, a baseball player's Power–speed number is the harmonic mean of their home run and stolen base totals. In population genetics, the harmonic mean is used when calculating the effects of fluctuations in the census population size on the effective population size. The harmonic mean takes into account the fact that events such as population bottleneck increase the rate genetic drift and reduce the amount of genetic variation in the population. This is a result of the fact that following a bottleneck very few individuals contribute to the gene pool limiting the genetic variation present in the population for many generations to come. When considering fuel economy in automobiles two measures are commonly used – miles per gallon (mpg), and litres per 100 km. 
As the dimensions of these quantities are the inverse of each other (one is distance per volume, the other volume per distance) when taking the mean value of the fuel economy of a range of cars one measure will produce the harmonic mean of the other – i.e., converting the mean value of fuel economy expressed in litres per 100 km to miles per gallon will produce the harmonic mean of the fuel economy expressed in miles per gallon. For calculating the average fuel consumption of a fleet of vehicles from the individual fuel consumptions, the harmonic mean should be used if the fleet uses miles per gallon, whereas the arithmetic mean should be used if the fleet uses litres per 100 km. In the USA the CAFE standards (the federal automobile fuel consumption standards) make use of the harmonic mean. In chemistry and nuclear physics the average mass per particle of a mixture consisting of different species (e.g., molecules or isotopes) is given by the harmonic mean of the individual species' masses weighted by their respective mass fraction. Beta distribution The harmonic mean of a beta distribution with shape parameters α and β is: The harmonic mean with α < 1 is undefined because its defining expression is not bounded in [0, 1]. Letting α = β showing that for α = β the harmonic mean ranges from 0 for α = β = 1, to 1/2 for α = β → ∞. The following are the limits with one parameter finite (non-zero) and the other parameter approaching these limits: With the geometric mean the harmonic mean may be useful in maximum likelihood estimation in the four parameter case. A second harmonic mean (H1 − X) also exists for this distribution This harmonic mean with β < 1 is undefined because its defining expression is not bounded in [ 0, 1 ]. Letting α = β in the above expression showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. The following are the limits with one parameter finite (non zero) and the other approaching these limits: Although both harmonic means are asymmetric, when α = β the two means are equal. Lognormal distribution The harmonic mean ( H ) of the lognormal distribution of a random variable X is where μ and σ2 are the parameters of the distribution, i.e. the mean and variance of the distribution of the natural logarithm of X. The harmonic and arithmetic means of the distribution are related by where Cv and μ* are the coefficient of variation and the mean of the distribution respectively.. The geometric (G), arithmetic and harmonic means of the distribution are related by Pareto distribution The harmonic mean of type 1 Pareto distribution is where k is the scale parameter and α is the shape parameter. Statistics For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if it includes at least one term of the form 1/0). Sample distributions of mean and variance The mean of the sample m is asymptotically distributed normally with variance s2. The variance of the mean itself is where m is the arithmetic mean of the reciprocals, x are the variates, n is the population size and E is the expectation operator. Delta method Assuming that the variance is not infinite and that the central limit theorem applies to the sample then using the delta method, the variance is where H is the harmonic mean, m is the arithmetic mean of the reciprocals s2 is the variance of the reciprocals of the data and n is the number of data points in the sample. 
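Since the formulas in this subsection do not reproduce well in plain text, the following sketch shows one way the delta-method approximation can be implemented. It assumes the variance approximation Var(H) ≈ H^4 · s2 / n, which follows from applying the delta method to H = 1/m with the quantities defined above (m the arithmetic mean of the reciprocals, s2 their sample variance, n the sample size); treat it as an illustration rather than a definitive formula.

    def harmonic_mean_with_se(xs):
        """Harmonic mean of positive data and an approximate delta-method
        standard error, assuming Var(H) ~ H**4 * s2 / n, where s2 is the
        sample variance of the reciprocals (illustrative sketch only)."""
        n = len(xs)
        recips = [1.0 / x for x in xs]
        m = sum(recips) / n                      # arithmetic mean of reciprocals
        s2 = sum((r - m) ** 2 for r in recips) / (n - 1)
        H = 1.0 / m
        return H, (H ** 4 * s2 / n) ** 0.5

    print(harmonic_mean_with_se([1.0, 4.0, 4.0, 2.0, 8.0]))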
Jackknife method A jackknife method of estimating the variance is possible if the mean is known. This method is the usual 'delete 1' rather than the 'delete m' version. This method first requires the computation of the mean of the sample (m) where x are the sample values. A series of value wi is then computed where The mean (h) of the wi is then taken: The variance of the mean is Significance testing and confidence intervals for the mean can then be estimated with the t test. Size biased sampling Assume a random variate has a distribution f( x ). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length based or size biased sampling. Let μ be the mean of the population. Then the probability density function f*( x ) of the size biased population is The expectation of this length biased distribution E*( x ) is where σ2 is the variance. The expectation of the harmonic mean is the same as the non-length biased version E( x ) The problem of length biased sampling arises in a number of areas including textile manufacture pedigree analysis and survival analysis Akman et al. have developed a test for the detection of length based bias in samples. Shifted variables If X is a positive random variable and q > 0 then for all ε > 0 Moments Assuming that X and E(X) are > 0 then This follows from Jensen's inequality. Gurland has shown that for a distribution that takes only positive values, for any n > 0 Under some conditions where ~ means approximately equal to. Sampling properties Assuming that the variates (x) are drawn from a lognormal distribution there are several possible estimators for H: where Of these H3 is probably the best estimator for samples of 25 or more. Bias and variance estimators A first order approximation to the bias and variance of H1 are where Cv is the coefficient of variation. Similarly a first order approximation to the bias and variance of H3 are In numerical experiments H3 is generally a superior estimator of the harmonic mean than H1. H2 produces estimates that are largely similar to H1. Notes The Environmental Protection Agency recommends the use of the harmonic mean in setting maximum toxin levels in water. In geophysical reservoir engineering studies, the harmonic mean is widely used. See also Contraharmonic mean Generalized mean Harmonic number Rate (mathematics) Weighted mean Parallel summation Geometric mean Weighted geometric mean HM-GM-AM-QM inequalities Harmonic mean p-value Notes References External links Averages, Arithmetic and Harmonic Means at cut-the-knot Means
Harmonic mean
Physics,Mathematics
4,202
63,591,906
https://en.wikipedia.org/wiki/NGC%20635
NGC 635 is a spiral galaxy located in the constellation of Cetus about 626 million light years from the Milky Way. NGC 635 was discovered by the American astronomer Francis Leavenworth in 1885. It is also known as MCG-04-05-002 or PGC 6062, although in SIMBAD its New General Catalogue designation is not recognized. See also List of NGC objects (1–1000) References Spiral galaxies Cetus 0635 006062
NGC 635
Astronomy
99
57,711,620
https://en.wikipedia.org/wiki/Levchin%20Prize
The Levchin Prize for real-world cryptography is a prize given to people or organizations who are recognized for contributions to cryptography that have a significant impact on its practical use. The recipients are selected by the steering committee of the Real World Crypto (RWC) academic conference run by the International Association for Cryptologic Research (IACR) and announced at the RWC conference. The award was established in 2015 by Max Levchin, a software engineer and businessman who co-founded the financial technology company PayPal, and first awarded in January 2016. Two awards are presented every year, each on its own topic. While there is no formal rule, every year so far as of 2024, one of the two awards has recognized one or more individuals for theoretical advancements to cryptographic methods with a practical impact, while the other has recognized one or more individual or an organization for either the construction of practical systems or practical advancements in cryptanalysis. Recipients The following table lists the recipients of the Levchin Prize. See also List of computer science awards References Awards established in 2016 Cryptographers History of cryptography Computer science awards
Levchin Prize
Technology
229
32,893,827
https://en.wikipedia.org/wiki/Compaq%20C%20series
The Compaq C series was a series of Handheld PCs running on Windows CE 2.0, manufactured by Compaq in 1998. Description The C series replaced the Aero line of handheld PCs. It was succeeded, as with other HPCs manufactured by Compaq and HP, by the iPAQ line of Pocket PCs. The C series featured an integrated 33.6 kbit/s modem. For wireless data transfer, it sported an IrDA port. An upgrade to Windows CE 2.11 could be purchased from Compaq for US$109. Targus Inc. manufactured a leather portfolio case for the C series, under the Compaq brand. References C Windows CE Windows CE devices Computer-related introductions in 1998
Compaq C series
Technology
151
1,895,800
https://en.wikipedia.org/wiki/Riemann%20series%20theorem
In mathematics, the Riemann series theorem, also called the Riemann rearrangement theorem, named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, and rearranged such that the new series diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent. As an example, the series converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives which sums to infinity. Thus, the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum, such as which evaluates to ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum. History It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, . The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. He analyzed the alternating harmonic series, showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence, and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent. In the course of his analysis of Fourier series and the theory of Riemann integration, Bernhard Riemann gave a full characterization of the rearrangement phenomena. He proved that in the case of a convergent series which does not converge absolutely (known as conditional convergence), rearrangements can be found so that the new series converges to any arbitrarily prescribed real number. Riemann's theorem is now considered as a basic part of the field of mathematical analysis. For any series, one may consider the set of all possible sums, corresponding to all possible rearrangements of the summands. Riemann’s theorem can be formulated as saying that, for a series of real numbers, this set is either empty, a single point (in the case of absolute convergence), or the entire real number line (in the case of conditional convergence). In this formulation, Riemann’s theorem was extended by Paul Lévy and Ernst Steinitz to series whose summands are complex numbers or, even more generally, elements of a finite-dimensional real vector space. They proved that the set of possible sums forms a real affine subspace. Extensions of the Lévy–Steinitz theorem to series in infinite-dimensional spaces have been considered by a number of authors. Definitions A series converges if there exists a value such that the sequence of the partial sums converges to . That is, for any ε > 0, there exists an integer N such that if n ≥ N, then A series converges conditionally if the series converges but the series diverges. A permutation is simply a bijection from the set of positive integers to itself. 
This means that if is a permutation, then for any positive integer there exists exactly one positive integer such that In particular, if , then . Statement of the theorem Suppose that is a sequence of real numbers, and that is conditionally convergent. Let be a real number. Then there exists a permutation such that There also exists a permutation such that The sum can also be rearranged to diverge to or to fail to approach any limit, finite or infinite. Alternating harmonic series Changing the sum The alternating harmonic series is a classic example of a conditionally convergent series:is convergent, whereasis the ordinary harmonic series, which diverges. Although in standard presentation the alternating harmonic series converges to , its terms can be arranged to converge to any number, or even to diverge. One instance of this is as follows. Begin with the series written in the usual order, and rearrange and regroup the terms as: where the pattern is: the first two terms are 1 and −1/2, whose sum is 1/2. The next term is −1/4. The next two terms are 1/3 and −1/6, whose sum is 1/6. The next term is −1/8. The next two terms are 1/5 and −1/10, whose sum is 1/10. In general, since every odd integer occurs once positively and every even integers occur once negatively (half of them as multiples of 4, the other half as twice odd integers), the sum is composed of blocks of three which can be simplified as: Hence, the above series can in fact be written as: which is half the sum originally, and can only equate to the original sequence if the value were zero. This series can be demonstrated to be greater than zero by the proof of Leibniz's theorem using that the second partial sum is half. Alternatively, the value of which it converges to, cannot be zero. Hence, the value of the sequence is shown to depend on the order in which series is computed. It is true that the sequence: contains all elements in the sequence: However, since the summation is defined as and , the order of the terms can influence the limit. Getting an arbitrary sum An efficient way to recover and generalize the result of the previous section is to use the fact that where γ is the Euler–Mascheroni constant, and where the notation o(1) denotes a quantity that depends upon the current variable (here, the variable is n) in such a way that this quantity goes to 0 when the variable tends to infinity. It follows that the sum of q even terms satisfies and by taking the difference, one sees that the sum of p odd terms satisfies Suppose that two positive integers a and b are given, and that a rearrangement of the alternating harmonic series is formed by taking, in order, a positive terms from the alternating harmonic series, followed by b negative terms, and repeating this pattern at infinity (the alternating series itself corresponds to , the example in the preceding section corresponds to a = 1, b = 2): Then the partial sum of order (a + b)n of this rearranged series contains positive odd terms and negative even terms, hence It follows that the sum of this rearranged series is Suppose now that, more generally, a rearranged series of the alternating harmonic series is organized in such a way that the ratio between the number of positive and negative terms in the partial sum of order n tends to a positive limit r. 
Then, the sum of such a rearrangement will be and this explains that any real number x can be obtained as sum of a rearranged series of the alternating harmonic series: it suffices to form a rearrangement for which the limit r is equal . Proof Existence of a rearrangement that sums to any positive real M Riemann's description of the theorem and its proof reads in full: This can be given more detail as follows. Recall that a conditionally convergent series of real terms has both infinitely many negative terms and infinitely many positive terms. First, define two quantities, and by: That is, the series includes all an positive, with all negative terms replaced by zeroes, and the series includes all an negative, with all positive terms replaced by zeroes. Since is conditionally convergent, both the 'positive' and the 'negative' series diverge. Let be any real number. Take just enough of the positive terms so that their sum exceeds . That is, let be the smallest positive integer such that This is possible because the partial sums of the series tend to . Now let be the smallest positive integer such that This number exists because the partial sums of tend to . Now continue inductively, defining as the smallest integer larger than such that and so on. The result may be viewed as a new sequence Furthermore the partial sums of this new sequence converge to . This can be seen from the fact that for any , with the first inequality holding due to the fact that has been defined as the smallest number larger than which makes the second inequality true; as a consequence, it holds that Since the right-hand side converges to zero due to the assumption of conditional convergence, this shows that the 'th partial sum of the new sequence converges to as increases. Similarly, the 'th partial sum also converges to . Since the 'th, 'th, ... 'th partial sums are valued between the 'th and 'th partial sums, it follows that the whole sequence of partial sums converges to . Every entry in the original sequence appears in this new sequence whose partial sums converge to . Those entries of the original sequence which are zero will appear twice in the new sequence (once in the 'positive' sequence and once in the 'negative' sequence), and every second such appearance can be removed, which does not affect the summation in any way. The new sequence is thus a permutation of the original sequence. Existence of a rearrangement that diverges to infinity Let be a conditionally convergent series. The following is a proof that there exists a rearrangement of this series that tends to (a similar argument can be used to show that can also be attained). The above proof of Riemann's original formulation only needs to be modified so that is selected as the smallest integer larger than such that and with selected as the smallest integer larger than such that The choice of on the left-hand sides is immaterial, as it could be replaced by any sequence increasing to infinity. Since converges to zero as increases, for sufficiently large there is and this proves (just as with the analysis of convergence above) that the sequence of partial sums of the new sequence diverge to infinity. 
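The constructive argument above is easy to run numerically. The sketch below is an illustration only: it applies the greedy construction to the alternating harmonic series (a conditionally convergent series), appending unused positive terms while the partial sum is at or below the target and unused negative terms otherwise; because the terms tend to zero, the partial sums converge to the chosen target.

    def rearranged_partial_sum(target, n_terms=200000):
        """Greedy rearrangement of the alternating harmonic series
        1 - 1/2 + 1/3 - 1/4 + ... whose partial sums approach `target`."""
        positives = (1.0 / k for k in range(1, 10**9, 2))    # 1, 1/3, 1/5, ...
        negatives = (-1.0 / k for k in range(2, 10**9, 2))   # -1/2, -1/4, ...
        s = 0.0
        for _ in range(n_terms):
            s += next(positives) if s <= target else next(negatives)
        return s

    print(rearranged_partial_sum(1.2345))   # close to 1.2345
    print(rearranged_partial_sum(-3.0))     # close to -3.0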
Existence of a rearrangement that fails to approach any limit, finite or infinite The above proof only needs to be modified so that is selected as the smallest integer larger than such that and with selected as the smallest integer larger than such that This directly shows that the sequence of partial sums contains infinitely many entries which are larger than 1, and also infinitely many entries which are less than , so that the sequence of partial sums cannot converge. Generalizations Sierpiński theorem Given an infinite series , we may consider a set of "fixed points" , and study the real numbers that the series can sum to if we are only allowed to permute indices in . That is, we letWith this notation, we have: If is finite, then . Here means symmetric difference. If then . If the series is an absolutely convergent sum, then for any . If the series is a conditionally convergent sum, then by Riemann series theorem, . Sierpiński proved that rearranging only the positive terms one can obtain a series converging to any prescribed value less than or equal to the sum of the original series, but larger values in general can not be attained. That is, let be a conditionally convergent sum, then contains , but there is no guarantee that it contains any other number. More generally, let be an ideal of , then we can define . Let be the set of all asymptotic density zero sets , that is, . It's clear that is an ideal of . Proof sketch: Given , a conditionally convergent sum, construct some such that and are both conditionally convergent. Then, rearranging suffices to converge to any number in . Filipów and Szuca proved that other ideals also have this property. Steinitz's theorem Given a converging series of complex numbers, several cases can occur when considering the set of possible sums for all series obtained by rearranging (permuting) the terms of that series: the series may converge unconditionally; then, all rearranged series converge, and have the same sum: the set of sums of the rearranged series reduces to one point; the series may fail to converge unconditionally; if S denotes the set of sums of those rearranged series that converge, then, either the set S is a line L in the complex plane C, of the form or the set S is the whole complex plane C. More generally, given a converging series of vectors in a finite-dimensional real vector space E, the set of sums of converging rearranged series is an affine subspace of E. See also Agnew's theorem — describes all rearrangements that preserve convergence to the same sum for all convergent series References External links Mathematical series Theorems in real analysis Permutations Summability theory Bernhard Riemann
Riemann series theorem
Mathematics
2,732
30,839,887
https://en.wikipedia.org/wiki/Indium%20arsenide%20antimonide%20phosphide
Indium arsenide antimonide phosphide () is a semiconductor material. InAsSbP has been used as blocking layers for semiconductor laser structures, as well as for the mid-infrared light-emitting diodes and lasers, photodetectors and thermophotovoltaic cells. InAsSbP layers can be grown by heteroepitaxy on indium arsenide, gallium antimonide and other materials. See also Aluminium gallium indium phosphide Gallium indium arsenide antimonide phosphide References III-V semiconductors Indium compounds Arsenides Antimonides Phosphides III-V compounds
Indium arsenide antimonide phosphide
Physics,Chemistry,Materials_science
145
70,959,045
https://en.wikipedia.org/wiki/Amanda%20Paulovich
Amanda Grace Paulovich is an oncologist, and a pioneer in proteomics using multiple reaction monitoring mass spectrometry to study tailored cancer treatment. Education Paulovich received a BS in Biological Sciences from Carnegie Mellon University in 1988 and a PhD in Genetics from the University of Washington in 1996, under the direction of Leland Hartwell. She also received an MD from the University of Washington in 1998. Following her residency in Internal Medicine at Massachusetts General Hospital, she completed a Postdoctoral Fellowship in Computational Biology at the Massachusetts Institute of Technology Whitehead Center for Genomic Research in 2003, and a Fellowship in Medical Oncology at the Dana Farber Cancer Institute in 2004. Career Paulovich is a Professor in Clinical Research, an Aven Foundation Endowed Chair, and the Director of the Early Detection Initiative at the Fred Hutchinson Cancer Research Center. She was inducted into the American Society for Clinical Investigation in 2012. Paulovich is an expert in proteomics. Her targeted proteomics method uses multiple reaction monitoring mass spectrometry to target cancer biomarkers, with ongoing clinical trials, and the approach was named Method of the Year in 2012 by Nature Methods. She founded Precision Assays in 2016, whose rights to targeted assays were acquired by CellCarta in 2022. Awards 2014 Life Science Innovation Northwest Woman to Watch in Life Science Award 2015 Human Proteome Organization (HUPO) Distinguished Achievement in Proteomic Sciences Award Patent applications Identification and use of biomarkers for detection and quantification of the level of radiation exposure in a biological sample (2011) US 20130052668 A1 Compositions and methods for reliably detecting and/or measuring the amount of a modified target protein in a sample (2011) US 20130052669 A1 References Living people University of Washington alumni Carnegie Mellon University alumni American oncologists Mass spectrometrists University of Washington faculty 20th-century American women scientists Year of birth missing (living people) Fred Hutchinson Cancer Research Center people
Amanda Paulovich
Physics,Chemistry
398