Dataset columns: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
15,050,269
https://en.wikipedia.org/wiki/Operating%20context
An operating context (OC) for an application is the external environment that influences its operation. For a mobile application, the OC is defined by the hardware and software environment in the device, the target user, and other constraints imposed by various other stakeholders, such as a carrier. This concept differs from the operating system (OS) by the impact of these various other stakeholders. Example Here is an example of one device, with one operating system, changing its operating context without changing the OS. A user with a mobile phone changes SIM cards, removing card A, and inserting card B. The phone will now make any network calls over cell phone carrier B's network, rather than A's. Any applications running on the phone will run in a new operating context, and will often have to change functionality to adapt to the abilities, and business logic, of the new carrier. The network, spectrum, and wireless protocol all change in this example. These changes must be reflected back to the user, so the user knows what experience to expect, and thus these changes all change the user interface (UI) also. Hardware agnostic context Situations exist where one can program in a context with less concern about what hardware it will actually run on. Examples include Flash and Android. Unfortunately, it is also quite common for code written in a hardware-agnostic context to encounter hardware-specific bugs. This is especially common with software that interacts more directly with personal computer (PC) hardware or mobile phones. References Fragmentation of Mobile Applications Operating context is defined in this article See also List of operating systems Comparison of operating systems Operational context Computing terminology Mobile technology
Operating context
Technology
332
6,485,489
https://en.wikipedia.org/wiki/Trouser%20press
A trouser press, also referred to by the trademarked name Corby trouser press, is an electrical appliance used to smooth the wrinkles from a pair of trousers. They are commonly provided in hotel rooms worldwide, though may also be purchased for home use; they are generally associated with use by businessmen who require a formal appearance to their suit. RAF veteran Peter Corby, the inventor of the press, died in August 2021, at the age of 97. Trouser pressing process Most trouser creases occur on the bottom two-thirds of trouser legs, particularly around the back of the knee. Trouser presses are typically the tool for removing these creases without damaging the trousers. On a typical trouser press, the side levers are raised and the trousers placed between the pressing plate and the cushioned heating pad. The press is slowly closed, the trousers gently pulled so that they align properly and the press dial turned on for heat. The press heats to around regardless of model type. It can take roughly 15 to 45 minutes to press the trousers depending on the model type, and the thermostatically controlled heating pad will warm up and gently press out creases and wrinkles without scorching the trousers. Corby Trouser Press The Corby Trouser Press brand is the generic trademark for the product. John Corby Limited was established by John Corby in Windsor, Berkshire, in 1930 as a manufacturer of valet stands. These were later improved with the addition of a pressing area and the first Corby trouser press was launched. These subsequently became electrically heated during the 1960s. In 1977, John Corby Limited became part of what is now Jourdan plc and relocated to Andover, Hampshire in 1986. In 2005, the company moved manufacturing to the premises of a sister company, Suncrest Surrounds Limited in Peterlee, Co Durham. All sales, marketing and service operations continue to operate from Andover, though the business was acquired in 2009 by Fired Up Corporation Ltd, based in Huddersfield. The brand was later re-launched, reverting to its founding name of "Corby of Windsor". In popular culture During the 1960s the trouser press was an aspirational product for the British middle classes, and this led to a thread of satire and cultural references. The Bonzo Dog Doo-Dah Band recorded the song, "Trouser Press", for their 1968 album The Doughnut in Granny's Greenhouse, satirising 1960s consumerism, and making numerous references to the trouser press as emblematic of middle class life. Author and journalist Ira Robbins founded an influential alternative music magazine titled Trouser Press after the Bonzos' song, and his book The Trouser Press Record Guide: The Ultimate Guide to Alternative Music is a reference work on alternative and outlandish music first published in 1983, with a fourth edition published in 1991. The ubiquitous presence of the trouser press in British commercial hotels has made them a recurring theme, along with "tea and coffee making facilities", in British comedian Bill Bailey's monologues. The Tea, Coffee and Trouser Press Census tour diary is included as an extra feature on his Part Troll DVD. Bailey's Tinselworm show has a spoof infomercial in the style of Kraftwerk, Hosenbügler (German for trouser press), which sees Bailey and Kevin Eldon riding around the stage on Segways with trouser presses mounted on them.
It has also been featured in the British comedy I'm Alan Partridge during the episode "Basic Alan" in which Alan dismantles a Corby Trouser Press in his bored desperation. See also Clothes iron Dadeumi, a mechanical way to smooth clothing, once traditional in Korea Ironing Mangle (machine) References Home appliances
Trouser press
Physics,Technology
804
37,835,275
https://en.wikipedia.org/wiki/Edward%20Galloway
Edward Galloway (September 1840 – April 19, 1861) was the first soldier in the American Civil War to be mortally wounded, and the war's second death, after Private Daniel Hough. He was injured when a gun went off prematurely on April 14, 1861, during a 100-gun salute to the flag after the Battle of Fort Sumter. The explosion killed Hough, severely injured Galloway, and slightly injured four other men. He was taken to the Gibbes Hospital in Charleston, where he died five days later on April 19, 1861. Galloway was an Irish immigrant, born in Skibbereen, County Cork, Ireland. Military service Galloway served as a private in Battery E of the 1st United States Artillery Regiment. In January 1861, the regiment was relocated to Fort Sumter, where it would be stationed until the Battle of Fort Sumter in April of the same year. Death On April 12, 1861, Fort Sumter came under attack. It is unknown where Galloway served during the battle, but along with the rest of the garrison, he came through the battle unscathed and was present on April 14 during the 100-gun salute to the flag after the surrender, stationed at the 47th gun along with Private Daniel Hough. While Hough was loading the gun, a spark in the barrel made it explode prematurely. The explosion blew off Hough's right arm and killed him almost instantly. It also detonated ammunition stored next to the gun and wounded five other men, including Galloway. The salute was cut short at fifty guns. Galloway was severely wounded by the explosion, and was taken after the salute to the Gibbes Hospital in Charleston, where he died five days later. Variations on name There are several variations of his surname; the United States Registers of Enlistments in the U.S. Army, 1798–1914 recorded his name as Edward Gallwey. His brother, who was mortally wounded at Port Hudson and died in Baton Rouge on July 9, 1863, was Major Andrew Power Gallwey. References Sources 1840 births 1861 deaths United States Army soldiers Accidental deaths in South Carolina Union military personnel killed in the American Civil War Deaths from explosion
Edward Galloway
Chemistry
442
36,847,374
https://en.wikipedia.org/wiki/30%20Cygni
30 Cygni (ο1 Cygni) is a class A5III (white giant) star in the constellation Cygnus. Its apparent magnitude is 4.83 and it is approximately 610 light years away based on parallax. The Bayer letter ο (omicron) has been variously applied to two or three of the stars 30, 31, and 32 Cygni. 30 Cygni has sometimes been designated as ο1 Cygni with the other two stars being ο2 and ο3 respectively. For clarity, it is preferred to use the Flamsteed designation 30 Cygni rather than one of the Bayer designations. 30 Cygni is about six arc-minutes from 31 Cygni A and seven arc-minutes from 31 Cygni B. That pair is known as ο1 Cygni, while ο2 Cygni is a degree away. Both ο1 and ο2 are 4th magnitude stars. References Cygnus (constellation) A-type giants Cygni, 30 Durchmusterung objects 099639 7730 192514
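As a brief illustration of the parallax-to-distance relation mentioned above, the quoted figure of roughly 610 light years can be back-converted to the implied trigonometric parallax. This is only a sketch: the parallax value itself is not quoted in the text and is inferred here from d [parsec] = 1 / p [arcsec].

```python
# Back-calculate the parallax implied by the ~610 ly distance quoted above.
# The parallax figure is an inference for illustration, not from the article.
LY_PER_PARSEC = 3.2616
d_ly = 610
d_pc = d_ly / LY_PER_PARSEC          # ~187 parsec
p_arcsec = 1.0 / d_pc                # d [pc] = 1 / p [arcsec]
print(f"{d_pc:.0f} pc -> parallax ~ {p_arcsec * 1000:.1f} mas")
```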
30 Cygni
Astronomy
214
12,589,603
https://en.wikipedia.org/wiki/Transcription%20factor%20II%20E
Transcription factor II E (TFIIE) is one of several general transcription factors that make up the RNA polymerase II preinitiation complex. It is a tetramer of two alpha and two beta chains and interacts with TAF6/TAFII80, ATF7IP, and varicella-zoster virus IE63 protein. TFIIE recruits TFIIH to the initiation complex and stimulates the RNA polymerase II C-terminal domain kinase and DNA-dependent ATPase activities of TFIIH. Both TFIIH and TFIIE are required for promoter clearance by RNA polymerase. Transcription factor II E is encoded by the GTF2E1 and GTF2E2 genes. TFIIE is thought to be involved in DNA melting at the promoter: it contains a zinc ribbon motif that can bind single stranded DNA. See also TFIIH TFIIB TFIID References External links Molecular genetics Proteins Gene expression Transcription factors
Transcription factor II E
Chemistry,Biology
201
13,939,428
https://en.wikipedia.org/wiki/74181
The 74181 is a 4-bit slice arithmetic logic unit (ALU), implemented as a 7400 series TTL integrated circuit. Introduced by Texas Instruments in February 1970, it was the first complete ALU on a single chip. It was used as the arithmetic/logic core in the CPUs of many historically significant minicomputers and other devices. The 74181 represents an evolutionary step between the CPUs of the 1960s, which were constructed using discrete logic gates, and single-chip microprocessors of the 1970s. Although no longer used in commercial products, the 74181 later was used in hands-on computer architecture courses and is still referenced in textbooks and technical papers. Specifications The 74181 is a 7400 series medium-scale integration (MSI) TTL integrated circuit, containing the equivalent of 75 logic gates and most commonly packaged as a 24-pin DIP. The 4-bit wide ALU can perform all the traditional add / subtract / decrement operations with or without carry, as well as AND / NAND, OR / NOR, XOR, and shift. Many variations of these basic functions are available, for a total of 16 arithmetic and 16 logical operations on two four-bit words. Multiply and divide functions are not provided but can be performed in multiple steps using the shift and add or subtract functions. Shift is not an explicit function but can be derived from several available functions; e.g., selecting function "A plus A" with carry (M=0) will give an arithmetic left shift of the A input. The 74181 performs these operations on two four-bit operands, generating a four-bit result with carry in 22 nanoseconds (45 MHz). The 74S181 performs the same operations in 11 nanoseconds (90 MHz), while the 74F181 performs the operations in 7 nanoseconds (143 MHz) (typical). Multiple slices can be combined for arbitrarily large word sizes. For example, sixteen 74S181s and five 74S182 look ahead carry generators can be combined to perform the same operations on 64-bit operands in 28 nanoseconds (36 MHz). Although overshadowed by the performance of today's multi-gigahertz 64-bit microprocessors, this was quite impressive when compared to the sub-megahertz clock speeds of the early four- and eight-bit microprocessors. Implemented functions The 74181 implements all 16 possible logical functions with two variables. Its arithmetic functions include addition and subtraction with and without carry. It can be used with active-high data, in which a high logic level corresponds to 1, and active-low data, in which a low logic level corresponds to 1. Inputs and outputs There are four selection inputs, S0 to S3, to select the function. M is used to select between logical and arithmetic operation, and Cn is the carry-in. A and B are the four-bit data words to be processed. F is the four-bit result output. There are also P and G signals for a carry-lookahead adder, which can be implemented via one or several 74182 chips. Function table for output F In the function table, AND is denoted as a product, OR with a + sign, XOR with ⊕, logical NOT with an overbar, and arithmetic plus and minus using the words plus and minus. Significance The 74181 greatly simplified the development and manufacture of computers and other devices that required high speed computation during the 1970s through the early 1980s, and is still referenced as a "classic" ALU design. Prior to the introduction of the 74181, computer CPUs occupied multiple circuit boards and even very simple computers could fill multiple cabinets.
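To make the select and mode interface described above concrete, here is a minimal Python sketch of a 4-bit ALU slice in the spirit of the 74181. It is only an illustration: the arithmetic select codes and the logic-mode truth-table trick are assumptions chosen for clarity, not the actual 74181 function table or pin behaviour.

```python
# Minimal sketch of a 4-bit ALU "slice" in the spirit of the 74181: a mode bit
# chooses logic vs. arithmetic operation, and a 4-bit select code picks the
# function. This is NOT the real 74181 function table (which also depends on
# the active-high vs. active-low data convention); the encodings are assumed.

def alu_slice(a, b, select, mode, carry_in=0):
    """4-bit operands a, b (0..15); select 0..15; mode 1 = logic, 0 = arithmetic."""
    mask = 0xF
    a, b = a & mask, b & mask
    if mode:
        # Logic mode: treat `select` as the truth table of a two-variable
        # Boolean function, applied bitwise (covers all 16 logic functions).
        f = 0
        for bit in range(4):
            x, y = (a >> bit) & 1, (b >> bit) & 1
            f |= ((select >> ((x << 1) | y)) & 1) << bit
        return f, 0
    # Arithmetic mode: a few illustrative operations (select codes assumed).
    ops = {
        0b1001: a + b + carry_in,      # "A plus B plus carry"
        0b0110: a - b - 1 + carry_in,  # "A minus B minus 1 plus carry"
        0b1100: a + a + carry_in,      # "A plus A" -> arithmetic left shift of A
    }
    r = ops.get(select, a + carry_in)
    return r & mask, (r >> 4) & 1      # 4-bit result and carry-out

# Chaining two slices through the carry gives an 8-bit add: 0x39 + 0x17 = 0x50,
# mirroring how multiple 74181s are cascaded for wider word sizes.
lo, c = alu_slice(0x9, 0x7, 0b1001, mode=0)
hi, c = alu_slice(0x3, 0x1, 0b1001, mode=0, carry_in=c)
assert (hi << 4) | lo == 0x50
```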
The 74181 allowed an entire CPU and in some cases, an entire computer to be constructed on a single large printed circuit board. The 74181 occupies a historically significant stage between older CPUs based on discrete logic functions spread over multiple circuit boards and modern microprocessors that incorporate all CPU functions in a single chip. The 74181 was used in various minicomputers and other devices beginning in the 1970s, but as microprocessors became more powerful the practice of building a CPU from discrete components fell out of favour and the 74181 was not used in any new designs. Education By 1994, CPU designs based on the 74181 were not commercially viable due to the comparatively low price and high performance of microprocessors, but it was still useful for teaching computer organization and CPU design because it provided opportunities for hands-on design and experimentation. Digital Electronics with VHDL (Quartus II Version) review in Journal of Modern Engineering, Volume 7, Number 2, Spring 2007. A Minimal TTL Processor for Architecture Exploration a paper describing how the 74181 can be used to teach CPU architecture. A Hardware Lab for the Computer Organization Course at Small Colleges in 2003 used the 74LS181 in a lab class. 74181 + 74182 demonstration Java-based simulator APOLLO181 (by Gianluca.G, Italy 2012): a homemade educational processor made of TTL logics and bipolar memories, based upon the Bugbook I and II chips, in particular on the 74181. Build Your Computer using LOGIC & MEMORY, before the advent of microprocessor a video showing history and educational use of the 74181 ALU. A playable demo of the 74181 emulated in a physics simulator Computers Many computer CPUs and subsystems were based on the 74181, including several historically significant models. NOVA First widely available 16-bit minicomputer manufactured by Data General. The NOVA 1200 was the first commercial minicomputer in 1970 to use the 74181. Several models of the PDP-11 Most popular minicomputer of all time, manufactured by Digital Equipment Corporation. Xerox Alto The first computer to use the desktop metaphor and graphical user interface (GUI). VAX-11/780 The first VAX, the most popular 32-bit computer of the 1980s manufactured by Digital Equipment Corp. Three Rivers PERQ A commercial computer workstation influenced by the Xerox Alto and first released in 1979. Computer Automation Naked Mini LSI A computer that found use in LSI IC test equipment and process control. KMC11 Peripheral processor for Digital Equipment Corporation PDP-11. FPP-12 Floating-point unit for the Digital Equipment Corporation PDP-12. Wang 2200 CPU (one 74181 per CPU) and disk controller (two 74181s per controller) TI-990 Texas Instruments' series of 16-bit minicomputers. Honeywell option 1100 The so-called "scientific unit" option for Honeywell H200/H2000 series mainframes. Datapoint 2200 Version II and follow-on machines, the Datapoint 5500, 6600, and 1800/3800 The computer that defined the architecture for the Intel 8008. Cogar System 4 / Singer 1501 / ICL 1501 Intelligent Terminal Varian Data Machines V70 series of 16-bit minicomputers Other uses Vectorbeam Arcade game platform used by Cinematronics for various arcade games including Space Wars, Starhawk, Warrior, Star Castle and others uses three 25LS181 chips in its 12-bit processor. 
See also Arithmetic logic unit Microsequencer 7400-series integrated circuits List of 7400-series integrated circuits References External links Manufacturer's data sheets: Texas Instruments (and 74182 look-ahead carry generator) Signetics Philips Fairchild. Explanation of how the chip works Inside the vintage 74181 ALU chip: how it works and why it's so strange Inside the 74181 ALU chip: die photos and reverse engineering showing its floorplan and transistor layout of some of its gates Computer-related introductions in 1970 Bit-slice chips Digital circuits History of computing hardware Texas Instruments hardware
74181
Technology
1,654
8,253,417
https://en.wikipedia.org/wiki/Plasma%20modeling
Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single-particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic, and as a system of many particles. Single particle description The single-particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law. In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. Kinetic description The kinetic model is the most fundamental way to describe a plasma, producing a distribution function f(x, v, t), where the independent variables x and v are position and velocity, respectively. A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, by the Vlasov equation, which contains the self-consistent collective electromagnetic field, or by the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations. Fluid description To reduce the complexities in the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coefficients such as mobility, diffusion coefficient, averaged collision frequencies, and so on. To determine the transport coefficients, the velocity distribution function must be assumed/chosen. But this assumption can lead to a failure to capture some of the physics. Hybrid kinetic/fluid description Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid, and others kinetically. The hybrid model is sometimes applied in space physics, when the simulation domain exceeds thousands of ion gyroradius scales, making it impractical to solve kinetic equations for electrons. In this approach, magnetohydrodynamic fluid equations describe electrons, while the kinetic Vlasov equation describes ions. Gyrokinetic description In the gyrokinetic model, which is appropriate to systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius. This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications. Quantum mechanical methods Quantum methods are not yet very common in plasma modeling. They can be used to solve unique modeling problems, such as situations where other methods do not apply.
They involve the application of quantum field theory to plasma. In these cases, the electric and magnetic fields produced by particles are modeled as a field, a web of forces. Particles that move, or that are removed from the population, push and pull on this field. The mathematical treatment for this involves Lagrangian mathematics. Collisional-radiative modeling is used to calculate quantum state densities and the emission/absorption properties of a plasma. This plasma radiation physics is critical for the diagnosis and simulation of astrophysical and nuclear fusion plasma. It is one of the most general approaches and lies between the extrema of a local thermal equilibrium and a coronal picture. In a local thermal equilibrium, the population of excited states is distributed according to a Boltzmann distribution. However, this holds only if densities are high enough for an excited hydrogen atom to undergo many collisions such that the energy is distributed before the radiative process sets in. In a coronal picture, the timescale of the radiative process is small compared to that of the collisions, since densities are very small. The use of the term coronal equilibrium is ambiguous and may also refer to the non-transport ionization balance of recombination and ionization. The only thing they have in common is that a coronal equilibrium is not sufficient for tokamak plasma. Commercial plasma physics modeling codes Quantemol-VT VizGlow VizSpark CFD-ACE+ COMSOL LSP Magic Starfish USim VSim STAR-CCM+ See also Particle-in-cell References Computational physics
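To ground the single-particle description above, here is a minimal sketch of integrating the Lorentz force with the standard Boris scheme, showing the fast gyration about a guiding centre plus the slow E×B drift. The field values and particle parameters are illustrative assumptions, not taken from any particular plasma code.

```python
import numpy as np

# Minimal sketch: integrate the Lorentz force m dv/dt = q (E + v x B) for a
# single particle in *imposed* uniform fields, using the Boris pusher.

def boris_push(x, v, q_over_m, E, B, dt):
    """Advance position x and velocity v by one time step dt."""
    t = q_over_m * B * 0.5 * dt              # half-step magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + q_over_m * E * 0.5 * dt    # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # rotation by the magnetic field
    v_new = v_plus + q_over_m * E * 0.5 * dt # second half electric kick
    return x + v_new * dt, v_new

# Electron-like particle in uniform B (z) and E (y): fast circular motion about
# the guiding centre plus a slow E x B drift along x, as described above.
q_over_m = -1.0                              # normalised charge-to-mass ratio (assumed)
E = np.array([0.0, 0.1, 0.0])
B = np.array([0.0, 0.0, 1.0])
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    x, v = boris_push(x, v, q_over_m, E, B, dt=0.05)
print("final position:", x)                  # drifts along x at roughly E_y/B_z = 0.1
```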
Plasma modeling
Physics
975
25,388,427
https://en.wikipedia.org/wiki/Jamming%20avoidance%20response
The jamming avoidance response is a behavior of some species of weakly electric fish. It occurs when two electric fish with wave discharges meet – if their discharge frequencies are very similar, each fish shifts its discharge frequency to increase the difference between the two. By doing this, both fish prevent jamming of their sense of electroreception. The behavior has been most intensively studied in the South American species Eigenmannia virescens. It is also present in other Gymnotiformes such as Apteronotus, as well as in the African species Gymnarchus niloticus. The jamming avoidance response was one of the first complex behavioral responses in a vertebrate to have its neural circuitry completely specified. As such, it holds special significance in the field of neuroethology. Discovery The jamming avoidance response (JAR) was discovered by Akira Watanabe and Kimihisa Takeda in 1963. The fish they used was an unspecified species of Eigenmannia, which has a quasi-sinusoidal wave discharge of about 300 Hz. They found that when a sinusoidal electrical stimulus is emitted from an electrode near the fish, if the stimulus frequency is within 5 Hz of the fish's electric organ discharge (EOD) frequency, the fish alters its EOD frequency to increase the difference between its own frequency and the stimulus frequency. Stimuli above the fish's EOD frequency push the EOD frequency downwards, while frequencies below that of the fish push the EOD frequency upwards, with a maximum change of about ±6.5 Hz. This behavior was given the name "jamming avoidance response" several years later in 1972, in a paper by Theodore Bullock, Robert Hamstra Jr., and Henning Scheich. In 1975, Walter Heiligenberg discovered a JAR in the distantly-related Gymnarchus niloticus, the African knifefish, showing that the behavior had convergently evolved in two separate lineages. Behavior Eigenmannia and other weakly electric fish use active electrolocation – they can locate objects by generating an electric field and detecting distortions in the field caused by interference from those objects. Electric fish use their electric organ to create electric fields, and they detect small distortions of these fields using special electroreceptive organs in the skin. All fish with the JAR are wave-discharging fish that emit steady quasi-sinusoidal discharges. For the genus Eigenmannia, frequencies range from 240 to 600 Hz. The EOD frequency is very steady, typically with less than 0.3% variation over a 10-minute time span. If a neighboring sinusoidal electric field is discharging close to the fish's EOD frequency, it causes interference which results in sensory confusion in the fish and sufficient jamming to prevent it from electrolocating effectively. Eigenmannia typically are within the electric field range of three to five other fish of the same species at any time. If many fish are located near each other, it is beneficial for each fish to distinguish between their own signal and those of others; this can be done by increasing the frequency difference between their discharges. Therefore, it seems to be the function of the JAR to avoid sensory confusion among neighboring fish. To determine how close the stimulus frequency is to the discharge frequency, the fish compares the two frequencies using its electroreceptive organs, rather than comparing the discharge frequency to an internal pacemaker; in other words, the JAR relies only on sensory information. 
This was determined experimentally by silencing a fish's electric organ with curare, and then stimulating the fish with two external frequencies. The JAR, measured from the electromotor neurons in the spinal cord, depended only on the frequencies of the external stimuli, and not on the frequency of the pacemaker. Neurobiology Pathway in Eigenmannia (Gymnotiformes) Most of the JAR pathway in the South American Gymnotiformes has been worked out using Eigenmannia virescens as a model system. Sensory coding When the stimulus frequency and discharge frequency are close to each other, the two amplitude-time waves undergo interference, and the electroreceptive organs perceive a single wave with an intermediate frequency. In addition, the combined stimulus-EOD wave has a beat pattern, with the beat frequency equal to the frequency difference between the stimulus and EOD. Gymnotiforms have two classes of electroreceptive organs, the ampullary receptors and the tuberous receptors. Ampullary receptors respond to low-frequency stimulation less than 40 Hz and their role in the JAR is currently unknown. Tuberous receptors respond to higher frequencies, firing best near the fish's normal EOD frequency. Tuberous receptors themselves have two types, the T-unit and P-unit. The T-unit (T standing for time, meaning phase in the cycle) fires synchronously with the signal frequency by firing a spike on every cycle of the waveform. P-units (P standing for probability) tend to fire when the amplitude increases and fire less when it decreases. Under conditions of jamming, the P-unit fires on the amplitude peaks of the beat cycle where the two waves constructively interfere. So, a combined stimulus-EOD signal causes T-units to fire at the intermediate frequency, and causes P-unit firing to increase and decrease periodically with the beat. Processing in the brain The time-coding T-units converge onto neurons called spherical cells in the electrosensory lateral line lobe. By combining information from multiple T-units, the spherical cell is even more precise in its time coding. Amplitude-coding P-units converge onto pyramidal cells, also in the electrosensory lateral line lobe. Two types of pyramidal cells exist: excitatory E-units, which fire more when stimulated by P-units, and inhibitory I-units, which fire less when stimulated by inhibitory interneurons activated by P-units. Spherical cells and pyramidal cells then project to the torus semicircularis, a structure with many laminae (layers) in the mesencephalon. Phase and amplitude information are integrated here to determine whether the stimulus frequency is greater or less than the EOD frequency. Sign-selective neurons in the deeper layers of the torus semicircularis are selective to whether the frequency difference is positive or negative; any given sign-selective cell fires in one case but not in the other. Output Sign-selective cells input into the nucleus electrosensorius in the diencephalon, which then projects onto two different pathways. Neurons selective for a positive difference (stimulus greater than EOD) stimulate the prepacemaker nucleus, while neurons selective for a negative difference (stimulus less than EOD) inhibit the sublemniscal prepacemaker nucleus. Both prepacemaker nuclei send projections to the pacemaker nucleus, which ultimately controls the frequency of the EOD. Pathway in Gymnarchus (Osteoglossiformes) The neural pathway of JAR in Gymnarchus is nearly identical to that of the Gymnotiformes, with a few minor differences. 
S-units in Gymnarchus are time coders, like the T-units in Gymnotiformes. O-units code the signal's intensity, like P-units in Gymnotiformes, but respond over a narrower range of intensities. In Gymnarchus, phase differences between EOD and stimulus are calculated in the electrosensory lateral line lobe rather than in the torus semicircularis. Phylogeny and evolution of weakly electric fish There are two main orders of weakly electric fish, Gymnotiformes from South America and Osteoglossiformes from Africa. Electroreception most likely arose independently in the two lineages. Weakly electric fish are mostly pulse-dischargers, which do not perform the JAR, while some are wave-dischargers. Wave-discharge evolved in two taxa: the superfamily Apteronotoidea (order Gymnotiformes), and the species Gymnarchus niloticus (order Osteoglossiformes). Notable genera in Apteronotoidea that perform JAR include Eigenmannia and Apteronotus. Though they evolved the JAR separately, the South American and African taxa (boldface in the tree) have convergently evolved nearly identical neural computational mechanisms and behavioral responses to avoid jamming, with only minor differences. The phylogeny of the weakly electric fish clades, omitting non-electric and strongly-electric fishes, shows major events in their evolution. In the tree, "sp" means "a species" and "spp" means "multiple species". See also Electric fish Electroreception and electrogenesis References Further reading Heiligenberg, W. (1977) Principles of Electrolocation and Jamming Avoidance in Electric Fish: A Neuroethological Approach. Studies of Brain Function, Vol. 1. Berlin-New York: Springer Verlag. Heiligenberg, W. (1990) Electric Systems in Fish. Synapse 6:196-206. Heiligenberg, W. (1991) Neural Nets in Electric fish. MIT Press: Cambridge, Massachusetts. Kawasaki, M. (2009) Evolution of time-coding systems in weakly electric fishes. Zoological Science 26: 587-599. Animal nervous system Neuroethology
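Returning to the behavioural rule described above, a toy simulation can illustrate how two fish with nearly identical discharge frequencies drive them apart. This is only a sketch: the linear update rule and step size are assumptions for illustration, with the roughly 5 Hz trigger and ±6.5 Hz maximum shift loosely taken from the figures quoted earlier.

```python
# Toy model of the jamming avoidance response: two wave-type fish nudge their
# electric organ discharge (EOD) frequencies apart when the difference is small.
# Only the ~5 Hz trigger and ~6.5 Hz cap are loosely based on the article.

def jar_step(f_self, f_other, base, step=0.5, trigger=5.0, max_shift=6.5):
    """Shift f_self away from f_other, staying within base +/- max_shift."""
    diff = f_self - f_other
    if abs(diff) < trigger:
        direction = 1.0 if diff >= 0 else -1.0    # move away from the neighbour
        f_self += direction * step
        f_self = max(base - max_shift, min(base + max_shift, f_self))
    return f_self

f1, f2 = 300.0, 300.4          # two Eigenmannia-like fish at nearly the same frequency
base1, base2 = 300.0, 300.4
for _ in range(30):
    f1, f2 = jar_step(f1, f2, base1), jar_step(f2, f1, base2)
print(f1, f2)                  # the frequencies separate until jamming stops
```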
Jamming avoidance response
Biology
1,932
29,103,384
https://en.wikipedia.org/wiki/Mycena%20fuscoaurantiaca
Mycena fuscoaurantiaca is a species of mushroom in the family Mycenaceae. First reported as a new species in 2007, the diminutive mushroom is only found in Kanagawa, Japan, where it grows on dead fallen twigs in lowland forests dominated by hornbeam (Carpinus) and Chinese evergreen oak trees. The mushroom has a brownish-orange conical cap that has grooves extending to the center, and reaches up to in diameter. Its slender stem is colored similarly to the cap, and long—up to tall. Microscopic characteristics include the weakly amyloid spores (turning blue to black when stained with Melzer's reagent), the smooth, swollen cheilocystidia and pleurocystidia (cystidia on the gill edges and faces, respectively) with long rounded tips, the diverticulate hyphae of the cap cuticle, and the absence of clamp connections. Taxonomy, naming, and classification The mushroom was first collected by Japanese mycologist Haruki Takahashi in 1999 and, along with seven other Mycena species, identified as a new species in a 2007 publication. The specific epithet is derived from the Latin words fusco- (meaning "dark") and aurantiaca ("orange-yellow"), and refers to the color of the fruit bodies. Its Japanese name is Taisha-ashinagatake タイシャアシナガタケ(代赭足長茸). Takahashi suggests that the species is best classified in the section Fragilipedes, as defined by Dutch Mycena specialist Rudolph Arnold Maas Geesteranus. Within the section, the North American species M. subfusca appears to be closely related to M. fuscoaurantiaca. M. subfusca may be distinguished by its spindle- to broadly club-shaped cheilocystidia without a narrow neck, club-shaped to irregularly shaped caulocystidia, and lack of pleurocystidia. Description The cap, which reaches in diameter, is initially conical to convex to bell-shaped, but becomes flattened in age. It is radially grooved almost to the center, and somewhat hygrophanous (changing color as it loses or absorbs moisture). The cap surface is dry, minutely pruinose initially (that is, appearing as if covered with a fine white powder), but soon becomes smooth. The cap is brown to brownish-orange when young, with a somewhat darker center, and fades to paler toward the margin with age. The flesh is white, and up to 0.5 mm thick. It does not have any distinctive taste or odor. The stem is long by thick, cylindrical, centrally attached to the cap, slender, hollow, and dry. Its color is orange to brownish-orange, and it is initially pruinose, but later becomes smooth. The base of the stem is covered with coarse, stiff white hairs. The gills are adnexed (narrowly attached to the stem), and distantly spaced, with between 16 and 18 gills reaching the stem. The gills are up to 1.8 mm broad, thin, and pale brownish. The gill edges are pruinose, and the same color as the gill face. Microscopic characteristics The basidiospores are ellipsoid and measure 9–10.5 by 6–7 μm. They are smooth, thin-walled, colorless, and weakly amyloid. The basidia (spore-bearing cells) are 19–30 by 7–9 μm, club-shaped, and two-spored. The cheilocystidia (cystidia on the gill edge) are thin-walled, smooth, 25–47 by 3–20 μm, abundant, spindle-shaped with a prolonged thickened tip, smooth, and colorless or pale vinaceous. The pleurocystidia (cystidia on the gill face) are 27–75 by 5–20 μm, scattered, and similar in shape and color to the cheilocystidia. 
The hymenophoral tissue (tissue of the hymenium-bearing structure) is made of thin-walled hyphae that are 10–22 μm wide, cylindrical, often somewhat inflated, smooth, colorless, and dextrinoid (turning reddish to reddish-brown when stained with Melzer's reagent). The cap cuticle is made of parallel, bent-over hyphae that are 2–7 μm wide, and cylindrical. These hyphae are smooth or covered with scattered, warty or finger-like thin-walled brownish diverticulae. The layer of hyphae beneath the cap cuticle is arranged in a parallel manner, hyaline (translucent), and dextrinoid, containing short and inflated cells that measure up to 34 μm wide. The cuticle of the stem is made of parallel, bent-over hyphae that are 2–4 μm wide, cylindrical, smooth, brownish, and thin-walled. The flesh of the stem is composed of longitudinally running, cylindrical hyphae that are 8–20 μm wide, smooth, colorless, and dextrinoid. The strigose (stiff or bristly) hairs at the base of the stem are 2–6 μm wide, and arise directly from the stem cuticle. They are bent-over or erect, cylindrical, with rounded tips, sometimes flexuous (winding from side to side), smooth, colorless, and thin-walled. Clamp connections are absent in all tissues of this species. Habitat and distribution Mycena fuscoaurantiaca is known only from Kanagawa, Japan. It is found growing solitary to scattered on dead fallen twigs in lowland forests dominated by hornbeam carpinus (Carpinus tschonoskii) and Chinese evergreen oak (Quercus myrsinifolia). Fruit bodies appear in November. References External links The Agaricales in Southwestern Islands of Japan Images of the holotype specimen fuscoaurantiaca Fungi of Asia Fungi described in 2007 Flora of Japan Fungus species
Mycena fuscoaurantiaca
Biology
1,275
77,037,789
https://en.wikipedia.org/wiki/IRAS%2013218%2B0552
IRAS 13218+0552, also known as SFRS 263, is a galaxy merger located in the Virgo constellation. Its redshift is 0.202806, putting the object at 2.6 billion light-years away from Earth. It is a Seyfert galaxy and a luminous infrared galaxy. Characteristics IRAS 13218+0552 is classified as a Seyfert type 1.5 galaxy given its large [OIII] flux, although XMM-Newton did not observe it. Further studies instead classified it as a Seyfert type 2 galaxy, as it harbors a highly obscured active galactic nucleus rather than one of Seyfert 1 type. Moreover, it belongs to the ultraluminous infrared galaxy classification: according to IRAS, its luminosity falls in the range Lir = 10^12–10^13 L⊙, a population whose luminosity function is approximated by the power law Φ(L) ~ L^−2.35 [Mpc^−3 mag^−1]. Besides being a Seyfert galaxy and a luminous infrared galaxy, IRAS 13218+0552 also has a quasar nucleus that is notable for its extreme outflows and strong star formation. The system resulted from a collision between two gas-rich disk galaxies. Evidence shows the two galaxies orbited each other several times before merging; the signs left behind include distinct loops of glowing gas around the quasar's host. Apart from the loops of gas, IRAS 13218+0552 has a tidal tail feature and possibly a binary nucleus with a separation smaller than 1 kpc. Targeted surveys found that IRAS 13218+0552 hosts an OH megamaser (OHM), producing nonthermal emission from hydroxyl (OH) molecules, with the two main lines situated at 1665/1667 MHz and two satellite lines at 1612/1720 MHz. This might be caused by OHM emission being pumped by infrared radiation from the galaxy's environment and also by amplification of an intense radio continuum background. Its OH spectrum shows two prominent broad emission peaks with a rest-frame separation of 490 km s−1, suggesting an association with multiple nuclei. This makes IRAS 13218+0552 one of 119 OHMs found in ultraluminous galaxies as of 2014. References Virgo (constellation) Luminous infrared galaxies Galaxy mergers Quasars Seyfert galaxies 165618 IRAS catalogue objects
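As a quick check on the quoted redshift-to-distance conversion, here is a short sketch using astropy's bundled Planck 2018 cosmology. The choice of cosmological parameters is an assumption, since the text does not state which cosmology was used for the 2.6 billion light-year figure.

```python
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.202806  # redshift quoted in the article

# Light-travel (lookback) time and luminosity distance under Planck 2018 parameters.
lookback = Planck18.lookback_time(z)               # roughly 2.5 Gyr
d_lum = Planck18.luminosity_distance(z).to(u.Gpc)  # roughly 1 Gpc

print(f"light-travel time ~ {lookback:.2f}, i.e. ~ {lookback.value:.1f} billion light-years")
print(f"luminosity distance ~ {d_lum:.2f}")
# Broadly consistent with the ~2.6 billion light-years quoted for the galaxy.
```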
IRAS 13218+0552
Astronomy
517
28,409,956
https://en.wikipedia.org/wiki/The%20Power%20%28self-help%20book%29
The Power is a 2010 self-help and spirituality book written by Rhonda Byrne. It is a sequel to the 2006 book The Secret. The book was released on 17 August 2010 along with an audio-book based on it. The Power's mission statement is, "The philosophy and vision of the Secret is to bring joy to billions. To bring joy to the world, the Secret creates life-transforming tools in the mediums of books, films, and multi-media. With each creation from the Secret, we aim to share knowledge that is true, simple, and practical, and that will transform people's lives." The "Power" of the title is the power of love, the mainspring of the universe. A large portion of The Power describes how Byrne greets each blessed moment with overwhelming love and gratitude toward all creation. The book is based on the law of attraction and claims that positive thinking can create life-changing results such as increased happiness, health, and wealth. Byrne describes this as a fundamental universal law akin to gravity. The law of attraction The law of attraction states that whatever someone experiences in life is a direct result of their thoughts. Byrne states that it really is that simple. According to The Secret and The Power, one's thoughts and feelings have "magnetic properties" and "frequencies". They vibrate and resonate with the universe, somehow attracting events that share those frequencies. The three rules of the Law of Attraction, according to Byrne, are the following: "Ask", "Believe", and "Receive". As Byrne says, it means that: "Like attracts like. What that means in simple terms for your life is: what you give out, you receive back. Whatever you give out in life is what you receive back in life. Whatever you give, by the law of attraction, is exactly what you attract back to yourself." In other words, if you want good things to happen, be a good person, think positive thoughts. Criticisms The claims made by the book are highly controversial, and have been criticized by reviewers and readers. The book has also been heavily criticized by former believers and practitioners, with some claiming that the concept of the "Secret" was conceived by the author and that the only people generating wealth and happiness from it are the author and the publishers. Critics contend that the book is based on a pseudoscientific theory called the "law of attraction"—the principle that "like attracts like". In a harshly critical 2010 review, The New York Times stated: "The Power and The Secret are larded with references to magnets, energy and quantum mechanics. This last is a dead giveaway: whenever you hear someone appeal to impenetrable physics to explain the workings of the mind, run away—we already have disciplines called 'psychology' and 'neuroscience' to deal with those questions. Byrne's onslaught of pseudoscientific jargon serves mostly to establish an 'illusion of knowledge,' as social scientists call our tendency to believe we understand something much better than we really do." Jerry Adler wrote in Newsweek that The Power offers false hope to those in true need of more conventional assistance in their lives. References External links 2010 non-fiction books Atria Publishing Group books Australian non-fiction books New Thought literature Quantum mysticism Self-help books
The Power (self-help book)
Physics
687
72,314,003
https://en.wikipedia.org/wiki/Magnetic%20chicane
A magnetic chicane, also called a bunch compressor, helps form dense bunches of electrons in a free-electron laser. A magnetic chicane makes electrons detour slightly from their otherwise straight path, and in that way is similar to a chicane on a road. A magnetic chicane consists of four dipole magnets, giving electrons at the beginning of a bunch a longer path than electrons at the end of the bunch, thereby allowing the lagging electrons to catch up. Free-electron laser A free-electron laser depends upon a beam of tightly bunched electrons. Short bunches of electrons are produced by a photoinjector, but they quickly elongate, because electrons have negative charge and little mass, causing the bunch to expand. As the bunch is accelerated, the electrons gain mass and quickly approach the speed of light. After that, electrons at the end of the bunch cannot go any faster to catch up with electrons at the beginning of the bunch. Chirp This problem is solved by adjusting the phase of the driving electric field to more strongly add energy and mass to electrons at the trailing end of the bunch. This is called negative energy chirp, meaning the energy decreases along the direction of beam travel. Because the beam is traveling at almost the speed of light, the trailing electrons gain mass, rather than velocity. This results in a correlation between mass and position in the bunch. Chicane The chicane gives lagging electrons time to catch up. More massive electrons are deflected less by the magnetic field than lighter electrons, and therefore take a shorter path through the chicane, resulting in a shorter bunch. A chicane consists of four dipole magnets with the following roles: Deflects the beam slightly away from the central axis of the accelerator, with lighter electrons deflected more than more massive electrons. Deflects the beam in the opposite direction, making it parallel to the central axis, but with an offset. The offset is greatest for lighter electrons. Deflects the beam back towards the central axis. Deflects the beam so that it again travels along the direction of the central axis. Limitations In practice, bunch compression cannot be done in a single step. To avoid beam emittance blowup, beam compression is usually done by using two chicanes. References External links RF and Space Charge Emittance in Guns, a basic definition of emittance Space Charge Induced Beam Emittance Growth and Halo Formation Electron beam Free-electron lasers Accelerator physics
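A small geometric sketch can make the path-length argument above concrete. The bending angle, drift length, and the thin-kick approximation below are illustrative assumptions, not parameters of any particular machine.

```python
import math

# Thin-kick sketch of a four-dipole chicane: a particle with momentum p is bent
# by roughly theta0 * (p0 / p) in each dipole, and between dipoles 1-2 and 3-4
# it crosses a drift of length L at that angle, adding about L * theta^2 of
# extra path per drift. theta0 and L are assumed, illustrative values.

def extra_path(delta, theta0=0.1, L=1.0):
    """Extra path length (vs. the straight axis) for momentum p = p0 * (1 + delta)."""
    theta = theta0 / (1.0 + delta)           # stiffer (higher-energy) particles bend less
    return 2.0 * L * (1.0 / math.cos(theta) - 1.0)

head = extra_path(-0.01)   # head of the bunch: lower energy after the negative chirp
tail = extra_path(+0.01)   # tail of the bunch: higher energy
print(f"head detour {head * 1e3:.3f} mm, tail detour {tail * 1e3:.3f} mm")
# The head takes the longer detour, so the tail catches up and the bunch shortens.
```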
Magnetic chicane
Physics,Chemistry
508
10,938,277
https://en.wikipedia.org/wiki/Robert%20Forester%20Mushet
Robert Forester Mushet (8 April 1811 – 29 January 1891) was a British metallurgist and businessman, born on 8 April 1811, in Coleford, in the Forest of Dean, Gloucestershire, England. He was the youngest son of Scottish parents, Agnes Wilson and David Mushet, an ironmaster formerly of the Clyde, Alfreton and Whitecliff Ironworks. In 1818/1819, David Mushet built a foundry named Darkhill Ironworks in the Forest of Dean. Robert spent his formative years studying metallurgy with his father and took over the management of Darkhill in 1845. In 1848, he moved to the newly constructed Forest Steel Works on the edge of the Darkhill site where he carried out over ten thousand experiments in ten years before moving to the Titanic Steelworks in 1862. It seems that Mushet only began using his middle name 'Forester' in 1845, and only occasionally at first. In his later years he said he had been given the name from the Forest of Dean, although he variously spelled it both 'Forester' and 'Forrester'. In 1876, he was awarded the Bessemer Gold Medal by the Iron and Steel Institute, their highest award. Robert Mushet died on 29 January 1891 in Cheltenham. He is buried with his wife and daughter, Mary, in Cheltenham Cemetery. High quality steel In the summer of 1848, Henry Burgess, editor of The Bankers' Circular, brought to Mushet a lump of white crystallised metal which he said was found in Rhenish Prussia. ... "Being familiar with alloys of iron and manganese," says Mr. Mushet, "I at once recognized this lump of metal as an alloy of these two metals and, as such, of great value in the making of steel. Later, I found that the white metallic alloy was the product of steel ore, called also spathose iron ore, being, in fact, a double carbonate of iron and manganese found in the Rhenish mountains, and that it was most carefully selected and smelted in small blast furnaces, charcoal fuel alone being employed and the only flux used being lime. The metal was run from the furnace into shallow iron troughs similar to the old refiners' boxes, and the cakes thus formed, when cold and broken up, showed large and beautifully bright facets and crystals specked with minute spots of uncombined carbon. It was called, from its brightness, 'spiegel glanz' or spiegel eisen, i.e., looking-glass iron. Practically its analysis was: Iron, 86.25; manganese, 8.50; and carbon, 5.25; making a total of 100.00." Mushet carried out many experiments with the metal, discovering that a small amount added during the manufacture of steel rendered it more workable when heated. It was not until 1856, however, that he realised the true potential of this property when his friend Thomas Brown brought him a piece of steel, made using the Bessemer Process, asking if he could improve its poor quality. Mushet carried out experiments on the sample, based on those he had previously carried out with spiegeleisen. Henry Bessemer himself had realised that the problem of quality was due to impurities in the iron and concluded that the solution lay in knowing when to turn off the flow of air in his process; so that the impurities had been burned off, but just the right quantity of carbon remained. Despite spending tens of thousands of pounds on experiments, however, he could not find the answer. Mushet's solution was simple, but elegant; he first burnt off, as far as possible, all the impurities and carbon, then reintroduced carbon and manganese by adding an exact amount of spiegeleisen.
This had the effect of improving the quality of the finished product, increasing its malleability – its ability to withstand rolling and forging at high temperatures. "I saw then that the Bessemer process was perfected and that, with fair play, untold wealth would reward Mr. Bessemer and myself..." Mushet's dream was never to be fulfilled. While others made fortunes from his discoveries, he failed to capitalise on his successes and by 1866 was destitute and in ill health. In that year his 16-year-old daughter, Mary, travelled to London alone, to confront Bessemer at his offices, arguing that his success was based on the results of her father's work. Bessemer, whose own process for producing steel was not economically viable without Mushet's method for improving quality, decided to pay Mushet an annual pension of £300, a very considerable sum, which he paid for over 20 years; possibly with a view to keeping the Mushets from legal action. Steel rails In 1857, Mushet was the first to make durable rails of steel rather than cast iron, providing the basis for the development of rail transportation throughout the world in the late 19th century. The first of Mushet's steel rails was sent to Derby Midland railway station, where it was laid at a heavily used part of the station approach where the iron rails had to be renewed at least every six months, and occasionally every three. Six years later, in 1863, the rail seemed as perfect as ever, although some 700 trains had passed over it daily. During its 16-year "life", at least 1,252,000 detached engines and tenders, apart from trains, had passed over that rail. Dozzles When steel solidifies in a mould, uneven cooling causes a central cavity or 'pipe' to form in the casting. In 1861, Mushet invented the 'Dozzle'; a clay cone or sleeve, heated white hot and inserted into the top of the ingot mould near the end of the pour, and then filled with molten steel. Its purpose was to maintain a reservoir of molten steel, which drained down and filled the pipe as the casting cooled. Mushet claimed this, and other small inventions of his, saved the steelmakers of Sheffield 'many millions of pounds' (in 19th century money), yet he received neither payment nor recognition for these inventions. Dozzles, now called hot tops or feeder heads, are still in use today. Steel alloys In a second key advance in metallurgy, Mushet invented 'R Mushet's Special Steel' (RMS) in 1868. It was both the first true tool steel and the first air-hardening steel. Previously, the only way to make steel hard enough for machine tools had been to quench it, by rapid cooling in water. With self-hardening (or tungsten) steel, machine tools could run much faster and were able to cut harder metals than had been possible previously. RMS revolutionised the design of machine tools and the progress of industrial metalworking, and was the forerunner of high speed steel. See also Crucible steel Spathic iron ore References Bibliography Further reading Fred M. Osborn, The Story of the Mushets, London, Thomas Nelson & Sons (1952) 1811 births 1891 deaths Businesspeople in steel English inventors History of Sheffield British metallurgists People from Coleford, Gloucestershire People of the Industrial Revolution Bessemer Gold Medal
Robert Forester Mushet
Chemistry
1,514
1,947,769
https://en.wikipedia.org/wiki/Divination%20by%20Astrological%20and%20Meteorological%20Phenomena
The Divination by Astrological and Meteorological Phenomena, also known as the Book of Silk, is an ancient astronomical silk manuscript compiled by Chinese astronomers of the Western Han dynasty (202 BC – 9 AD) and found in the Mawangdui of Changsha, Hunan, China in 1973. It lists 29 comets (referred to as 彗星, huì xīng, literally broom stars) that appeared over a period of about 300 years. It is now exhibited in the Hunan Provincial Museum. Contents The Divination by Astrological and Meteorological Phenomena contains what archaeologists claim is the first definitive atlas of comets. There are roughly two dozen renderings of comets, some in fold-out/pop-up format. In some cases, the pages of the document roll out to be five feet long. Each comet's picture has a caption which describes an event its appearance corresponded to, such as "the death of the prince", "the coming of the plague", or "the three-year drought." One of the comets in the manuscript has four tails and resembles a swastika. In their 1985 book Comet, Carl Sagan and Ann Druyan argue that the appearance of a rotating comet with a four-pronged tail as early as 2,000 years BCE could explain why the swastika is found in the cultures of both the Old World and the pre-Columbian Americas. Bob Kobres, in a 1992 paper, contends that the swastika-like comet on the Han-dynasty manuscript was labelled a "long tailed pheasant star" (dixing) because of its resemblance to a bird's foot or footprint. See also Chinese astrology Chinese astronomy Chu Silk Manuscript Mawangdui Silk Texts References External links Ancient Chinese Astronomy Hunan Provincial Museum 3rd-century BC manuscripts 2nd-century BC manuscripts 1st-century BC manuscripts 1st-century manuscripts 1973 archaeological discoveries Astronomy books Astronomy in China Astrological texts History of Changsha
Divination by Astrological and Meteorological Phenomena
Astronomy
396
35,521,593
https://en.wikipedia.org/wiki/Morchella%20meiliensis
Morchella meiliensis is a species of fungus in the family Morchellaceae native to China. Taxonomy The species was described as new to science in 2006. The specific epithet meiliensis refers to Meili Snow Mountain in Yunnan, where the type specimen was collected. Description The fruit bodies have a conical cap measuring tall by wide. The surface has vertically arranged ridges that are dark brown to black in colour, while the rectangular to quadrangular pits between the ridges are merulioid (wrinkled with low, uneven ridges) and yellowish in colour. The flesh is thin, and lacks any distinctive taste or odour. The cylindrical stipe measures tall by thick. Initially whitish, it turns yellowish with a waxy sheen when dry. In deposit, ascospores are smooth, ellipsoid, hyaline (translucent), and measure 4.7–5.1 by 5.2–5.7 μm. They are thin-walled and contain oil droplets. Asci (spore-bearing cells) are eight-spored, cylindrical, and hyaline, and have dimensions of 5.2–5.9 μm wide by 91–94 μm long. The paraphyses are dark, club-shaped, and measure 4.2–5.2 by 40–65 μm. Similar species Morchella conica and M. angusticeps are similar in appearance to M. meiliensis, but the latter species can be distinguished by more lightly coloured ridges on the cap surface, the merulioid texture of the pits, and microscopically by the club-shaped paraphyses. Habitat and distribution Morchella meiliensis fruits on the ground in deciduous or mixed forests. It is known from Deqin County, Yunnan Province in China, where it grows at elevations of . References External links Edible fungi Fungi described in 2006 Fungi of Asia meiliensis Fungus species
Morchella meiliensis
Biology
396
1,705,815
https://en.wikipedia.org/wiki/Dark-energy%20star
A dark-energy star is a hypothetical compact astrophysical object, which a minority of physicists think might constitute an alternative explanation for observations of astronomical black hole candidates. The concept was proposed by physicist George Chapline. The theory states that infalling matter is converted into vacuum energy or dark energy, as the matter falls through the event horizon. The space within the event horizon would end up with a large value for the cosmological constant and have negative pressure to exert against gravity. There would be no information-destroying singularity. Theory In March 2005, physicist George Chapline claimed that quantum mechanics makes it a "near certainty" that black holes do not exist and are instead dark-energy stars. The dark-energy star is a different concept from that of a gravastar. Dark-energy stars were first proposed because in quantum physics, absolute time is required; however, in general relativity, an object falling towards a black hole would, to an outside observer, seem to have time pass infinitely slowly at the event horizon. The object itself would feel as if time flowed normally. In order to reconcile quantum mechanics with black holes, Chapline theorized that a phase transition in the phase of space occurs at the event horizon. He based his ideas on the physics of superfluids. As a column of superfluid grows taller, at some point, density increases, slowing down the speed of sound, so that it approaches zero. However, at that point, quantum physics makes sound waves dissipate their energy into the superfluid, so that the zero sound speed condition is never encountered. In the dark-energy star hypothesis, infalling matter approaching the event horizon decays into successively lighter particles. Nearing the event horizon, environmental effects accelerate proton decay. This may account for high-energy cosmic-ray sources and positron sources in the sky. When the matter falls through the event horizon, the energy equivalent of some or all of that matter is converted into dark energy. This negative pressure counteracts the mass the star gains, avoiding a singularity. The negative pressure also gives a very high number for the cosmological constant. Furthermore, 'primordial' dark-energy stars could form by fluctuations of spacetime itself, which is analogous to "blobs of liquid condensing spontaneously out of a cooling gas". This not only alters the understanding of black holes, but has the potential to explain the dark energy and dark matter that are indirectly observed. See also Black star (semiclassical gravity) Dark energy Dark matter Gravastar Stellar black hole References Sources External links MPIE Galactic Center Research (subscription only) Black holes Dark concepts in astrophysics Dark matter Hypothetical stars Quantum gravity Fringe physics Dark energy
Dark-energy star
Physics,Astronomy
557
48,309,206
https://en.wikipedia.org/wiki/Russula%20pyriodora
Russula pyriodora is a species of fungus in the family Russulaceae. Found in Finland, it was described as new to science in 2011 by Juhani Ruotsalainen. It associates mostly with birch (Betula spp.), but has also been recorded with alder (Alnus), spruce (Picea), and willow (Salix). Fruitbodies of the fungus resemble those of Russula betularum, but can be distinguished from that species by their distinctive pear odor. The holotype collection was made in the Kylmänpuro Nature Protection Area in August 2011. A rare species, the mushroom has usually been recorded in calcareous soil, beside brooks in forests. See also List of Russula species References External links pyriodora Fungi described in 2011 Fungi of Finland Fungus species
Russula pyriodora
Biology
171
22,819,730
https://en.wikipedia.org/wiki/Acaulospora%20alpina
Acaulospora alpina is a species of fungus in the family Acaulosporaceae. It forms arbuscular mycorrhiza and vesicles in roots. The fungus was discovered in Switzerland, in the rhizosphere of an alpine grassland. References Diversisporales Fungi described in 2006 Fungi of Europe Fungus species
Acaulospora alpina
Biology
75
34,259
https://en.wikipedia.org/wiki/Yet%20another
A naming convention used as a form of computer humour, especially among playful programmers, yet another is often abbreviated ya, Ya, or YA in the prefix of an acronym or backronym. The humorous prefix is an idiomatic qualifier in the name of a computer program, organization, or event, intended to raise affection and interest for something that is confessedly unoriginal or unnecessarily repeated. It is a programmers' in-joke, alluding to the esteem for perfection in programming culture reflected in software principles such as "Keep It Simple Stupid" (KISS) and "Don't Repeat Yourself" (DRY). Stephen C. Johnson is credited with establishing the naming convention in the late 1970s when he named his compiler-compiler yacc (Yet Another Compiler-Compiler), since he felt there were already numerous compiler-compilers in circulation at the time. Outside of computing, the YA construct has appeared in astronomy, where YAMOO means Yet Another Map of Orion. Examples Yabasic – Yet Another BASIC Yaboot – Yet another boot loader Yacc – Yet another compiler-compiler Yacas – Yet another computer algebra system YACP – Yet Another Chat Protocol YaDICs – Yet another Digital Image Correlation Software YADIFA – Yet Another DNS Implementation For All YAFFS – Yet Another Flash File System YAGO – Yet Another Great Ontology Yahoo! – Yet Another Hierarchical Officious Oracle (backronym) Yakuake – Yet Another Kuake YAM – Yet Another Mailer, an email client YAML – Yet Another Markup Language. Later redefined to YAML Ain't Markup Language, making it a recursive acronym Yandex – Yet another indexer, a web search engine and index YA-NewsWatcher – a Usenet client for classic Mac OS YANG – Yet Another Next Generation YAP – Yet Another Previewer, document previewer YAP – Yet Another Prolog, an implementation of the Prolog programming language YAPC – Yet Another Perl Conference YARN – Yet Another Resource Negotiator YARV – Yet Another Ruby VM YASARA – Yet Another Scientific Artificial Reality Application, a molecular modeling program Yasca – Yet another source code analyzer YAS – Yet Another Society, a non-profit organization organizing YAPCs YASS – Yet Another Similarity Searcher, a pairwise nucleotide sequence alignment tool with dotplot YaST – Yet another Setup Tool, an operating system installation and configuration wizard for SUSE Linux distributions Y.A.S.U. – Yet Another SecuROM Utility Yate – Yet Another Telephony Engine, VoIP software YAWC – Yet Another Wersion of Citadel YAWL – Yet Another Workflow Language, a business process modeling language for diagramming workflow patterns Yaws – Yet another web server See also Another (disambiguation) All articles starting with "Yet Another ..." or "Yet another ..." Reinventing the wheel References Computer jargon
Yet another
Technology
616
59,067,258
https://en.wikipedia.org/wiki/Moto%20E5
The Moto E5 is the 5th generation of the low-end Moto E family of Android smartphones developed by Motorola Mobility. It comprises three submodels: E5 Play, E5 and E5 Plus. They were released in April 2018. The base model costs $99, putting this phone in the budget segment of the smartphone market. The phone is often praised for its long battery life, although it tends to have low performance due to its dated processor and graphics hardware. Submodels comparison † Not counting the camera bump, which adds to the E5 Plus's depth/thickness. References Motorola smartphones Android (operating system) devices
Moto E5
Technology
132
62,977,003
https://en.wikipedia.org/wiki/NGC%20920
NGC 920 is a barred spiral galaxy in the Andromeda constellation. The celestial object was discovered on September 11, 1885 by the American astronomer Lewis A. Swift. See also List of NGC objects (1–1000) References External links Barred spiral galaxies 920 Andromeda (constellation) 009377
NGC 920
Astronomy
65
39,516,620
https://en.wikipedia.org/wiki/Alan%20G.%20Thomas%20%28scientist%29
Alan G. Thomas (1927–2019) was an international authority on the mechanics of rubbery materials, in particular their fracture mechanics properties. Along with Ronald S. Rivlin, he published the Rupture of Rubber series of articles, beginning in 1953. He was the first to apply Griffith's energy release rate criterion to the analysis of rubber's strength and fatigue behavior. Thomas attended Brasenose College, Oxford, to study physics, graduating in 1948. He then accepted a position at the British Rubber Producers' Research Association. His research director was Dr Ronald S. Rivlin, who suggested that he study the strength of rubber. He developed the theories of strength and crack growth in rubber, starting from the work of Alan Arnold Griffith. He demonstrated that Griffith's strain energy release rate provided a useful way to characterize the conditions at a crack tip, a problem that previously had been thought intractable due to the finite straining and nonlinearly elastic stress-strain behavior of rubber. Thomas was recognised with many prizes and medals, most notably the 1978 Colwyn Medal of the Institute of Materials, Minerals and Mining and the 1994 Charles Goodyear Medal of the American Chemical Society. His employer, the MRPRA, received the Prince Philip Award in 1990 for his pioneering work on earthquake bearings. He had been a visiting professor in the Materials Department at Queen Mary University of London since 1975. Prof Alan Thomas died in his sleep on 23 April 2019. References External links QMUL website British materials scientists Polymer scientists and engineers 1927 births 2019 deaths Alumni of Brasenose College, Oxford
Alan G. Thomas (scientist)
Chemistry,Materials_science
321
55,993,644
https://en.wikipedia.org/wiki/Business%20process%20outsourcing%20in%20China
The business process outsourcing industry in China, including IT and other outsourcing services for onshore and export markets, surpassed 1 trillion yuan (about $145 billion) in 2016 according to the Ministry of Commerce (MOC). History The outsourcing industry grew rapidly in China during the 2000s, starting from an "embryonic" scale. IDC, an IT industry consultancy, estimated in 2006 that while outsourcing of IT services was growing at 30% annually, the market size was only $586 million at the end of 2005. Most IT services then were offered to domestic companies, with offshore clients concentrated in Japan. By 2016, outsourcing was still growing fast at about 20% annually, driven by "cloud computing, big data, Internet of things and mobile Internet" according to Xinhua. IT services IT related services accounted for about half of the US$87 billion in total service outsourcing provided to export markets in 2015. The largest IT outsourcing companies based in China include ChinaSoft and Pactera. One of the largest China-focused outsourcing companies, though based in the US, is VXI Global. ChinaSoft is backed by Huawei. Pactera was formed in 2012 by the merger of two industry leaders, VanceInfo and HiSoft. VXI Global was acquired in 2016 by the Carlyle Group, a marquee private equity firm, for around US$1 billion. The deal was seen by Dow Jones as "making a big bet that the future of the outsourcing industry, long associated with India, will be in China." Painting Art production is outsourced to China either to mass-produce thousands of paintings or to execute original works based on instructions from foreign artists. A center for art outsourcing is Dafen Village in Shenzhen, well known for production of imitations of masterworks, but also home to artists who are commissioned to execute original works. At the high end of the art world, outsourcing to China is practiced by Kehinde Wiley, an American portrait painter, who opened a studio in Beijing in 2006. Between 4 and 10 of Wiley's helpers do the brushstrokes for his paintings. Cost has been a motivation for outsourcing to China, with an art blogger suggesting in 2007 that it allowed those with limited budgets to go from buying posters to buying oil paintings. Initially, Kehinde Wiley also opened his Beijing studio due to cost sensitivity, but by 2012 Wiley told New York magazine that saving costs was no longer the reason for relying on Beijing-based helpers. Global competitiveness China has the world's second largest outsourcing industry, taking up 33% of global market share, according to a news release from the Ministry of Commerce of the People's Republic of China in March 2017. The Business Processing Industry Association of India in March 2017 had concurring figures, assessing that China had the second largest outsourcing industry, behind India and ahead of the Philippines. References Industry in China China
Business process outsourcing in China
Technology,Engineering
608
50,127,829
https://en.wikipedia.org/wiki/Euromatrix
The EuroMatrix project ran from September 2006 to February 2009. The project aimed to develop and improve machine translation (MT) systems between all official languages of the European Union (EU). EuroMatrix was followed up by another project, EuroMatrixPlus (March 2009 to February 2012). Approach to translation EuroMatrix explored the use of linguistic knowledge in statistical machine translation. Statistical techniques were combined with a rule-based approach, resulting in a hybrid MT architecture. The project experimented with combining methods and resources from statistical MT, rule-based MT, shallow language processing and computational lexicography and morphology. Project objectives EuroMatrix focused on high-quality translation for the publication of technical, social, legal and political documents. It applied advanced MT technologies to all pairs of EU languages; languages of new and likely-to-become EU member states were also taken into account. Annual international evaluation Competitive annual international machine translation evaluation meetings ("MT marathons") were organized to bring together MT researchers. Participants of the marathons translated test sets with their systems. The test sets were then evaluated by manual as well as automatic metrics. MT marathons were multi-day events consisting of a summer school, lab lessons, research talks, workshops, open source conventions and research showcases. List of MT marathons Outcome Several tools and resources were created or supported by the project: Moses, an open source statistical machine translation engine Europarl Corpus, version 3 Results from Workshops on Statistical Machine Translation (2007, 2008, 2009) CzEng Corpus, version 0.7 Funding The EuroMatrix project was sponsored by the EU Information Society Technology programme. The total cost of the project was 2 358 747 €, of which the European Union contributed 2 066 388 €. Project members Internationally recognized research groups experienced in machine translation, as well as relevant industrial partners, participated in the project. The consortium included the University of Edinburgh (United Kingdom), Charles University (Czech Republic), Saarland University (Germany), the Center for the Evaluation of Language and Communication Technologies (Italy), MorphoLogic (Hungary), and GROUP Technologies AG (Germany). The project was coordinated by Hans Uszkoreit, a professor of Computational Linguistics at Saarland University. References External links official homepage EuroMatrixPlus official homepage College and university associations and consortia in Europe Information technology organizations based in Europe Machine translation
Euromatrix
Technology
492
60,557,698
https://en.wikipedia.org/wiki/Military%20geology
Military geology is the application of geological theory to warfare and the peacetime practices of the military. The formal practice of military geology began during the Napoleonic Wars; however, geotechnical knowledge has been applied since the earliest days of siege warfare. In modern warfare military geologists are used for terrain analysis, engineering, and the identification of resources. Military geologists have included both specially trained military personnel and civilians incorporated into the military. The peacetime application of military geology includes the building of infrastructure, typically during local emergencies or foreign peacekeeping deployments. Warfare can change the physical geology. Examples of this include artillery shattering the bedrock on the Western Front during World War I and the detonation of nuclear weapons creating new rock types. Military research has also led to many important geological discoveries. Terrain analysis Geologists have been employed since the Napoleonic Wars to provide an analysis of terrain which was expected to become a war theater, both in case of an upcoming battle and to assess the difficulty of logistical supply. Academically, it has been found that battles are likely to occur on rocks of Permian, Triassic, or Upper Carboniferous age, possibly due to their typical relief and drainage. More practically, geology has been used in identifying the best Allied invasion sites during World War II, including those in North Africa, Italy, and France. This included studying the properties of the sand of Normandy beaches, the tolerance of the soil in the hinterland to bombardment, the sediment of the English Channel sea floor, and the occurrence of landslides in Sicily. Likewise, German geologists created maps of southern England for Operation Sea Lion, identifying quarry locations and the suitability of rock types for excavating trenches, etc. In the Demilitarized Zone between North and South Korea, very rugged terrain is due to the structure of metamorphic rocks, while the best flat land is underlain by granite. During the Korean War, these flat areas were used as military staging grounds by the North Koreans. It has been suggested that an understanding of the fracture and foliation patterns of the metamorphic rocks could help a field commander. This field partially overlaps with military geography. For this reason the British Army employed geographers in this role until the end of 1941, when it joined international common practice and started using geologists. Geotechnical engineering Geologists have been involved in the construction of forts, tunnels, and bunkers both during military conflicts and in peacetime. This included digging tunnels in northeastern Italy and Austria during the so-called mine war in World War I. The rocks of the Dolomites are different from those in other theaters and specialists were required in order to design the tunnels. Explosives were then put in the tunnels and detonated, to cause rock falls and undercut enemy troops. Geology is also used in determining the likely resistance of enemy defenses to shelling and bombing. In World War II, this task was performed by the Allies as they advanced across German-occupied Europe, assessing the likely effect of bombing bridges and shelling defenses in light of the local geology. During peacetime, similar methods have been used, such as the decision to locate the American Strategic Petroleum Reserve in the salt domes of the Gulf Coast. 
Resource acquisition Geologists are used to determine both the location and accessibility of strategic and tactical resources during war. In the case of the D Day landings and the 2003 invasion of Iraq, ground water and aggregate were the two most important geological resources to identify for the campaign. The aggregate was required both as road metal and for the construction of airfields. Since 1966, the German Army has also been using geologists to mitigate and predict the environmental effects of civilian resource extraction. Forensics Geology has been used in many military intelligence investigations. During World War II, the American Military Geology Unit discovered the origin of balloon bombs which had been dispatched towards North America from Japan. They accomplished this by determining from which beach the sand in the balloon's ballast originated. Knowledge of rock types and seismic propagation also allows geologists to distinguish between natural earthquakes and those initiated by nuclear tests. Effect of warfare on rocks Military activity affects the physical geology. This was first noted through the intensive shelling on the Western Front during World War I, which caused the shattering of the bedrock and changed the rocks' permeability. New minerals, rocks, and landforms are also a byproduct of nuclear testing. Discoveries by military Military research has led to many geological discoveries; however, secrecy has often delayed some of the possible progress. The Austrian Army of World War I included geologists called Kriegsgeologen who were allowed to carry out non-military scientific investigation during the war. Discoveries have included new natural resource deposits and the mapping of magnetic stripes on the ocean floor, leading to the idea of plate tectonics. See also 24th Waffen Mountain Division of the SS Karstjäger Australian Mining Corps Military Geology Unit Tunnelling companies of the Royal Engineers U.S. Army Corps of Engineers References Geology Military geography Military engineering
Military geology
Engineering
1,006
46,207,323
https://en.wikipedia.org/wiki/Feature%20engineering
Feature engineering is a preprocessing step in supervised machine learning and statistical modeling which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability. Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists construct dimensionless numbers such as the Reynolds number in fluid dynamics, the Nusselt number in heat transfer, and the Archimedes number in sedimentation. They also develop first approximations of solutions, such as analytical solutions for the strength of materials in mechanics. Clustering One of the applications of feature engineering has been clustering of feature-objects or sample-objects in a dataset. In particular, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These methods include Non-Negative Matrix Factorization (NMF), Non-Negative Matrix-Tri Factorization (NMTF), Non-Negative Tensor Decomposition/Factorization (NTF/NTD), etc. The non-negativity constraints on the coefficients of the feature vectors mined by these algorithms yield a part-based representation, and the different factor matrices exhibit natural clustering properties. Several extensions of these feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering, and manifold learning to overcome inherent issues with these algorithms. Other classes of feature engineering algorithms leverage a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD), which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering); it is computationally robust to missing information, can detect shape- and scale-based outliers, and can handle high-dimensional data effectively. Coupled matrix and tensor decompositions are popular in multi-view feature engineering. Predictive modelling Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance. Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. 
Common causes include: Feature templates - implementing feature templates instead of coding new features Feature combinations - combinations that cannot be represented by a linear system Feature explosion can be limited via techniques such as: regularization, kernel methods, and feature selection. Automation Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016. Related academic literature can be roughly separated into two types: Multi-relational decision tree learning (MRDTL) uses a supervised algorithm that is similar to a decision tree. Deep Feature Synthesis uses simpler methods. Multi-relational decision tree learning (MRDTL) Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods to relational databases, handling complex data relationships across tables. It innovatively uses selection graphs as decision nodes, refined systematically until a specific termination criterion is reached. Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple id propagation. Open-source implementations There are a number of open-source libraries and tools that automate feature engineering on relational data and time series: featuretools is a Python library for transforming time series and relational data into feature matrices for machine learning. MCMD: An open-source feature engineering algorithm for joint clustering of multiple datasets. OneBM or One-Button Machine combines feature transformations on relational data with feature selection techniques. getML community is an open source tool for automated feature engineering on time series and relational data. It is implemented in C/C++ with a Python interface. It has been shown to be at least 60 times faster than tsflex, tsfresh, tsfel, featuretools or kats. tsfresh is a Python library for feature extraction on time series data. It evaluates the quality of the features using hypothesis testing. tsflex is an open source Python library for extracting features from time series data. Despite being 100% written in Python, it has been shown to be faster and more memory efficient than tsfresh, seglearn or tsfel. seglearn is an extension for multivariate, sequential time series data to the scikit-learn Python library. tsfel is a Python package for feature extraction on time series data. kats is a Python toolkit for analyzing time series data. Deep feature synthesis The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition. Feature stores The feature store is where features are stored and organized for the explicit purpose of being used either to train models (by data scientists) or to make predictions (by applications that have a trained model). It is a central location where one can create or update groups of features built from multiple data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features themselves but just retrieve them when they need them to make predictions. A feature store includes the ability to store the code used to generate features, apply the code to raw data, and serve those features to models upon request. 
Useful capabilities include feature versioning and policies governing the circumstances under which features can be used. Feature stores can be standalone software tools or built into machine learning platforms. Alternatives Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error. Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering. However, deep learning algorithms still require careful preprocessing and cleaning of the input data. In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process. See also Covariate Data transformation Feature extraction Feature learning Hashing trick Instrumental variables estimation Kernel method List of datasets for machine learning research Scale co-occurrence matrix Space mapping References Further reading Machine learning Data analysis
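The matrix-decomposition clustering described in the article can be illustrated with a brief sketch. This is a minimal example rather than any method from the article's sources: it assumes the NumPy and scikit-learn libraries, uses synthetic non-negative data, and uses the simple rule of assigning each sample to its dominant NMF component.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic non-negative data: three groups of 10 samples, each group
# loading mainly on a different pair of the 6 features ("parts").
parts = np.repeat(np.eye(3), 2, axis=1)          # 3 x 6 indicator of feature blocks
X = np.vstack([rng.poisson(lam=10 * parts[k] + 1, size=(10, 6)) for k in range(3)])

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # non-negative sample coefficients (30 x 3)
H = model.components_        # non-negative feature "parts"     (3 x 6)

# Because W is non-negative, assigning each sample to its dominant component
# yields a part-based clustering of the dataset.
labels = W.argmax(axis=1)
print(labels)
The same factor matrices could instead be examined column-wise (on H) to group features rather than samples, which corresponds to the feature-object clustering mentioned in the article.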
Feature engineering
Engineering
1,426
20,436,586
https://en.wikipedia.org/wiki/Sergei%20Tyablikov
Sergei Vladimirovich Tyablikov (; September 7, 1921 – March 17, 1968) was a Soviet theoretical physicist known for his significant contributions to statistical mechanics and solid-state physics, and for the development of the double-time Green function formalism. Biography Tyablikov was born in Klin, Russia. In 1944 he graduated from the Faculty of Physics at the Moscow State University (MSU) and started his postgraduate study with Anatoly Vlasov and later with Nikolay Bogoliubov at the Department of Theoretical Physics. In 1947 he obtained his PhD degree (Candidate of Sciences) with a thesis on the theory of crystallization and was appointed to the Steklov Institute of Mathematics, where he continued to work for the rest of his life. In 1954 he defended at the MSU his doctoral dissertation "Studies of the Polaron Theory" and obtained the degree of Doktor nauk (Doctor of Science, similar to Habilitation). From 1962 he was the Head of the Division of Statistical Mechanics in the Steklov Institute of Mathematics. In the period 1966–1968, Sergei Tyablikov also worked at the Joint Institute for Nuclear Research, where he was the first Head of the Statistical Mechanics and Theory of Condensed Matter Group at the Laboratory of Theoretical Physics. Research work During his postgraduate study in 1944–1947 he worked on the theory of crystallization, applying methods such as the diagonalization of bilinear forms in Bose or Fermi operators, which later became a common tool for theoretical physicists. After finishing his PhD he started to work on the problem of a particle interacting with a quantum field. This problem is directly related to polaron theory, the effect of impurities on the energy spectrum of superfluids, and other problems in condensed matter physics. He was involved in the development of the operator form of perturbation theory, approximate second quantization, the adiabatic approximation for systems with translational invariance, and other theoretical physics methods which play an important role in the theory of many-particle systems. From 1948, in collaboration with Nikolay Bogoliubov, he worked on the quantum theory of ferromagnetism and antiferromagnetism. In 1948 they developed a consistent theoretical polar model of metals. Later Tyablikov developed the first consistent quantum theory of magnetic anisotropy. His particularly important contribution to antiferromagnetism was the development of the method of quantum temperature Green's functions. In 1959, Sergei Tyablikov and Nikolay Bogoliubov published a paper which strongly influenced the development of many-body physics and specifically the quantum theory of magnetism. He also co-authored with V.L. Bonch-Bruevich the book The Green Function Method in Statistical Mechanics, the first book with a consistent exposition of the method of Green's functions. Publications Books Bonch-Bruevich V. L., Tyablikov S. V. (1962): The Green Function Method in Statistical Mechanics. North Holland Publishing Co. Tyablikov S. V. (1995): Methods in the Quantum Theory of Magnetism. (Translated to English) Springer; 1st edition. Selected papers References Sergei Vladimirovich Tyablikov Soviet Physics Uspekhi 11(4), 606–607 (January–February 1969). Biography of S. V. Tyablikov (1921–1968) at the Joint Institute for Nuclear Research. 1921 births 1968 deaths People from Klin Quantum physicists Soviet physicists Moscow State University alumni Theoretical physicists
Sergei Tyablikov
Physics
734
20,281,471
https://en.wikipedia.org/wiki/Christophe%20Breuil
Christophe Breuil (; born 1968) is a French mathematician who works in arithmetic geometry and algebraic number theory. Work With Fred Diamond, Richard Taylor and Brian Conrad, he proved the Taniyama–Shimura conjecture in 1999, which previously had been proved only for semistable elliptic curves by Andrew Wiles and Taylor in their proof of Fermat's Last Theorem. Later, he worked on the p-adic Langlands conjecture. Academic life Breuil attended schools in Brive-la-Gaillarde and Toulouse and studied from 1990 to 1992 at the École Polytechnique. In 1993, he obtained his DEA degree at the Paris-Sud 11 University located in Orsay. From 1993 to 1996 he conducted research at the École Polytechnique and taught simultaneously at the University of Paris-Sud, Orsay, and in 1996 received his PhD from the École Polytechnique, supervised by Jean-Marc Fontaine, with the thesis "Cohomologie log-cristalline et représentations galoisiennes p-adiques". In 1997, he gave the Cours Peccot at the Collège de France. In 2001 he obtained a habilitation degree entitled "Aspects entiers de la théorie de Hodge p-adique et applications" at Paris-Sud 11 University. Between 2002 and 2010 he was at the IHES. Since 2010 he has been in the Mathematics Department of the University of Paris-Sud as a Director of Research with the CNRS. In 2007–2008 he was a visiting professor at Columbia University. Awards and recognition In 1993 he was awarded the Prix Gaston Julia at the École Polytechnique. In 2002 he received a prize of the French Academy of Sciences, and in 2006 the Prix Dargelos Anciens Élèves of the École Polytechnique. He was an invited speaker at the International Congress of Mathematicians 2010 in Hyderabad, in the number theory section. References External links Living people 20th-century French mathematicians 21st-century French mathematicians Paris-Sud University alumni 1968 births Arithmetic geometers École Polytechnique alumni Fermat's Last Theorem Research directors of the French National Centre for Scientific Research
Christophe Breuil
Mathematics
442
67,028,109
https://en.wikipedia.org/wiki/Workpiece
A workpiece is a piece, often made of a single material, that is being processed into another desired shape (such as building blocks). The workpiece is usually a piece of relatively rigid material such as wood, metal, plastic, or stone. After a processing step, the workpiece may be moved on to further steps of processing. For example, a part can be made out of bar stock and later become part of a semi-finished product. The workpiece is often attached to the machine tool being used via a jig or fixture, for example to a milling machine via an angle plate, or to a lathe via a lathe faceplate. A vise is another example of a simple type of fixture used to fix workpieces. A workpiece may be subjected to various cutting operations, such as truing, making fillets, chamfers, countersinking, counterboring, etc. It may also receive various surface treatments and finishes. The term "workpiece" has established itself within crafts and the manufacturing industry, and connects the work or treatment and the object to be treated. A workbench is often used to hold a workpiece steady during work on it. See also Surface finishing Metalworking References External links Video demonstration of workpieces being attached to faceplates and angleplates Video demonstration of different methods of filing a metal workpiece Machining
Workpiece
Engineering
281
75,391,267
https://en.wikipedia.org/wiki/Olpasiran
Olpasiran (AMG890) is an experimental antisense therapy designed to lower the level of lipoprotein(a), which is believed to be a causal factor in the development of cardiovascular disease. The drug is developed by Amgen. References Amgen Antisense RNA
Olpasiran
Chemistry
63
1,516,465
https://en.wikipedia.org/wiki/Nota%20accusativi
Nota accusativi is a grammatical term for a particle (an uninflected word) that marks a noun as being in the accusative case. An example is the use of the preposition a in Spanish before an animate direct object. Esperanto Officially, in Esperanto, the suffix letter -n is used to mark the accusative, but a few modern speakers use the unofficial preposition na instead of the final -n. Hebrew In Hebrew the preposition et is used for definite nouns in the accusative. Such nouns may be made definite by the definite article ha-, by a possessive pronominal suffix, by standing within a genitive phrasing, or by being a proper name. A definite direct object ("I see the dog") therefore takes et, whereas an indefinite one ("I see a dog") takes no particle at all; this is a specialized use of et, since Hebrew does not use the particle unless the noun is definite. Japanese In Japanese, the particle を (pronounced o) is the direct object marker and marks the recipient of an action. Korean In Korean, the postposition 을 (eul) or 를 (reul) is the direct object marker and marks the recipient of an action: 을 is used when the previous syllable is closed, i.e. when it ends with a consonant, and 를 is used when the previous syllable is open, i.e. when it ends with a vowel. Toki Pona In Toki Pona, the word e is used to mark a direct object. Other languages Nota accusativi also exists in Armenian, Greek and other languages. In other languages, especially those with grammatical case, there is usually a separate form (for each declension if declensions exist) of the accusative case. The nota accusativi should not be confused with such case forms, as the term refers to a separate particle of the accusative case. See also Accusative case References Grammatical cases Parts of speech
Nota accusativi
Technology
421
76,012,741
https://en.wikipedia.org/wiki/Gate%20lice
Gate lice is a pejorative term used to describe a phenomenon observed among air travelers in which passengers gather in front of boarding gates before their designated boarding time. The term has gained recognition within the community of frequent flyers, particularly on platforms such as Flyertalk. The behavior can make the boarding process more cumbersome; for instance, it can lead to congestion, longer wait times for those who have priority boarding, and confusion. To avoid behaving in this manner, it is recommended to stay in one's seat until one's boarding zone is called. Contributing factors Various factors may contribute to gate lice behavior. Some attribute it to the inexperience of certain travelers who may not fully comprehend airline boarding procedures. Additionally, elite fliers with priority boarding privileges board early, forming clusters in front of the gate and contributing to congestion. Airport gate designs can also play a role; for example, at O'Hare International Airport the gate layouts are conducive to congestion. Baggage fees may also play a part, as some passengers seek to board early to secure overhead bin space and potentially avoid fees. Others seek overhead bin space to avoid the risk of lost luggage, or to store items required during the flight. Psychological factors may also play a role. When people see others crowding the boarding area, there may be a social tendency to move towards conformity. Overhead bin space may also be viewed as a limited resource, leading to competition. The underlying uncertainty and competition may lead to anxiety and hostility, while waiting in line may help bring a sense of control as well as relieve anxiety. Following the COVID-19 pandemic, the phenomenon has increased, possibly because travelers have become more anxious. Industry response Some airlines have implemented measures to address the challenges posed by gate lice. These include the creation of dedicated lanes for elite fliers and the removal of special pre-boarding privileges for families with small children. Various airlines, such as United, Continental, Delta, Northwest, and Southwest, have introduced priority boarding programs catering to specific customer groups. As of October 2024, American Airlines was testing a program in several U.S. airports that alerts gate agents to passengers who attempt to board before their assigned boarding group. The system creates an audible signal when the passenger's boarding pass is scanned before their boarding group is called. References Airports Travel
Gate lice
Physics
502
1,945,917
https://en.wikipedia.org/wiki/Sulfamic%20acid
Sulfamic acid, also known as amidosulfonic acid, amidosulfuric acid, aminosulfonic acid, sulphamic acid and sulfamidic acid, is a molecular compound with the formula H3NSO3. This colourless, water-soluble compound finds many applications. Sulfamic acid melts at 205 °C before decomposing at higher temperatures to water, sulfur trioxide, sulfur dioxide and nitrogen. Sulfamic acid (H3NSO3) may be considered an intermediate compound between sulfuric acid (H2SO4) and sulfamide (H4N2SO2), effectively replacing a hydroxyl (–OH) group with an amine (–NH2) group at each step. This pattern can extend no further in either direction without breaking down the sulfonyl (–SO2–) moiety. Sulfamates are derivatives of sulfamic acid. Production Sulfamic acid is produced industrially by treating urea with a mixture of sulfur trioxide and sulfuric acid (or oleum). The conversion is conducted in two stages, the first being sulfamation: OC(NH2)2 + SO3 → OC(NH2)(NHSO3H) OC(NH2)(NHSO3H) + H2SO4 → CO2 + 2 H3NSO3 In this way, approximately 96,000 tonnes were produced in 1995. Structure and reactivity The compound is well described by the formula H3NSO3, not the tautomer H2NSO2(OH). The relevant bond distances are 1.44 Å for the S=O and 1.77 Å for the S–N. The greater length of the S–N is consistent with a single bond. Furthermore, a neutron diffraction study located the hydrogen atoms, all three of which are 1.03 Å distant from the nitrogen. In the solid state, the molecule of sulfamic acid is well described by a zwitterionic form. Hydrolysis The crystalline solid is indefinitely stable under ordinary storage conditions; however, aqueous solutions of sulfamic acid slowly hydrolyse to ammonium bisulfate, according to the following reaction: H3NSO3 + H2O → [NH4]+[HSO4]− Its behaviour resembles that of urea, (H2N)2CO. Both feature amino groups linked to electron-withdrawing centres that can participate in delocalised bonding. Both liberate ammonia upon heating in water, with urea releasing CO2 while sulfamic acid releases sulfuric acid. Acid–base reactions Sulfamic acid is a moderately strong acid, Ka = 0.101 (pKa = 0.995). Because the solid is not hygroscopic, it is used as a standard in acidimetry (quantitative assays of acid content). H3NSO3 + NaOH → NaH2NSO3 + H2O Double deprotonation can be effected in liquid ammonia to give the [HNSO3]2− anion: H3NSO3 + 2 NH3 → [HNSO3]2− + 2 [NH4]+ Reaction with nitric and nitrous acids With nitrous acid, sulfamic acid reacts to give nitrogen: HNO2 + H3NSO3 → H2SO4 + N2 + H2O while with concentrated nitric acid, it affords nitrous oxide: HNO3 + H3NSO3 → H2SO4 + N2O + H2O Reaction with hypochlorite The reaction of excess hypochlorite ions with sulfamic acid or a sulfamate salt gives rise reversibly to both N-chlorosulfamate and N,N-dichlorosulfamate ions. HClO + H2NSO3H → ClNHSO3H + H2O HClO + ClNHSO3H ⇌ Cl2NSO3H + H2O Consequently, sulfamic acid is used as a hypochlorite scavenger in the oxidation of aldehydes with chlorite, such as the Pinnick oxidation. Reaction with alcohols Upon heating, sulfamic acid will react with alcohols to form the corresponding organosulfates. It is more expensive than other reagents for doing this, such as chlorosulfonic acid or oleum, but is also significantly milder and will not sulfonate aromatic rings. Products are produced as their ammonium salts. Such reactions can be catalyzed by the presence of urea. Without the presence of any catalysts, sulfamic acid will not react with ethanol at temperatures below 100 °C. 
ROH + H2NSO3H → ROS(O)2O− [NH4]+ An example of this reaction is the production of 2-ethylhexyl sulfate, a wetting agent used in the mercerisation of cotton, by combining sulfamic acid with 2-ethylhexanol. Applications Sulfamic acid is mainly a precursor to sweet-tasting compounds. Reaction with cyclohexylamine followed by addition of NaOH gives C6H11NHSO3Na, sodium cyclamate. Related compounds are also sweeteners, such as acesulfame potassium. Sulfamates have been used in the design of many types of therapeutic agents such as antibiotics, nucleoside/nucleotide human immunodeficiency virus (HIV) reverse transcriptase inhibitors, HIV protease inhibitors (PIs), anticancer drugs (steroid sulfatase and carbonic anhydrase inhibitors), anti-epileptic drugs, and weight loss drugs. Cleaning agent Sulfamic acid is used as an acidic cleaning agent and descaling agent, sometimes pure or as a component of proprietary mixtures, typically for metals and ceramics. For cleaning purposes, there are different grades based on application such as GP Grade, SR Grade and TM Grade. It is frequently used for removing rust and limescale, replacing the more volatile and irritating hydrochloric acid, which is cheaper. It is often a component of household descalers and of detergents used for removal of limescale; for example, Lime-A-Way Thick Gel contains up to 8% sulfamic acid and has pH 2.0–2.2. When compared to most of the common strong mineral acids, sulfamic acid has desirable water descaling properties, low volatility, and low toxicity. It forms water-soluble salts of calcium, nickel, and ferric iron. Sulfamic acid is preferable to hydrochloric acid in household use, due to its intrinsic safety. If inadvertently mixed with hypochlorite-based products such as bleach, it does not form chlorine gas, whereas the most common acids would; the reaction (neutralisation) with ammonia produces a salt, as depicted in the section above. It also finds applications in the industrial cleaning of dairy and brewhouse equipment. Although it is considered less corrosive than hydrochloric acid, corrosion inhibitors are often added to the commercial cleansers of which it is a component. It can be used as a descalant for descaling home coffee and espresso machines and in denture cleaners. Other uses Catalyst for esterification process Dye and pigment manufacturing Herbicide, as ammonium sulfamate Descalant for scale removal Coagulator for urea-formaldehyde resins Ingredient in fire extinguishing media. Sulfamic acid is the main raw material for ammonium sulfamate, which is a widely used herbicide and fire retardant material for household products. Pulp and paper industry as a chloride stabilizer Synthesis of nitrous oxide by reaction with nitric acid The deprotonated form (sulfamate) is a common counterion for nickel(II) in electroplating. Used to separate nitrite ions from a mixture of nitrite and nitrate ions (NO3− + NO2−) during qualitative analysis of nitrate by the brown ring test. Obtaining deep eutectic solvents with urea Silver polishing According to the label on the consumer product, the silver cleaning product TarnX contains thiourea, a detergent, and sulfamic acid. References Further reading Oxoacids Sulfur oxoacids Household chemicals Cleaning product components Sulfamates
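As a rough illustration of the acidimetry use mentioned above, the following sketch works through a standardization calculation. The sample mass and titre volume are hypothetical values chosen only for the example; the molar mass of H3NSO3 (about 97.09 g/mol) follows from standard atomic weights, and the 1:1 stoichiometry follows the NaOH reaction given earlier in the article.
# Hedged sketch: standardizing an NaOH solution against sulfamic acid.
M_SULFAMIC = 97.09           # g/mol, approximate molar mass of H3NSO3
sample_mass_g = 0.4855       # hypothetical weighed portion of sulfamic acid
titre_volume_l = 0.02500     # hypothetical NaOH volume needed to neutralize it

moles_acid = sample_mass_g / M_SULFAMIC        # ≈ 0.00500 mol
naoh_molarity = moles_acid / titre_volume_l    # 1:1 stoichiometry with NaOH
print(f"NaOH concentration ≈ {naoh_molarity:.4f} mol/L")   # ≈ 0.2000 mol/L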
Sulfamic acid
Chemistry,Technology
1,756
23,963,427
https://en.wikipedia.org/wiki/Acqua%20alta
An acqua alta (Italian for "high water"; plural: acque alte) is an exceptional tide peak that occurs periodically in the northern Adriatic Sea. The term is applied to such tides in the Italian region of Veneto. The peaks reach their maximum in the Venetian Lagoon, where they cause partial flooding of Venice and Chioggia; flooding also occurs elsewhere around the northern Adriatic, for instance at Grado and Trieste, but much less often and to a lesser degree. The phenomenon occurs mainly between autumn and spring, when the astronomical tides are reinforced by the prevailing seasonal winds that hamper the usual reflux. The main winds involved are the sirocco, which blows northbound along the Adriatic Sea, and the bora, which has a specific local effect due to the shape and location of the Venetian Lagoon. Causes Precise scientific parameters define the phenomenon called acqua alta, the most significant of which (i.e., the deviation in amplitude from a base measurement of "standard" tides) is measured by the hydrographic station located near the Basilica di Santa Maria della Salute. Supernormal tidal events can be categorized as: intense when the measured sea level is between 80 cm and 109 cm above the standard sea level (which was defined by averaging the measurements of sea level during the year 1897); very intense when the measured sea level is between 110 cm and 139 cm above the standard; exceptional high waters when the measured sea level reaches or exceeds 140 cm above the standard. Generally speaking, tide levels largely depend on three contributing factors: An astronomical component, which results from the movement and alignment of celestial bodies, principally the Moon, secondarily the Sun, and marginally other planets (with effects decreasing in relation to their distance from the Earth); this component is dependent upon the laws of astronomical mechanics and can be computed and accurately predicted for the long run (even years or decades) A geophysical component, primarily dependent upon the geometric shape of the basin, which amplifies or reduces the astronomical component and, because it is dependent upon the laws of physical mechanics, can also be computed and accurately predicted for the long run (even years or decades); A meteorological component, linked to a large set of variables, such as the direction and strength of winds, the location of barometric pressure fields and their gradients, precipitation, etc. Because of their complex interrelations and quasi-stochastic behavior, these variables cannot be accurately modeled in statistical terms. Consequently, this component can only be forecast for the very short run and is the principal determinant of acqua alta emergencies that catch Venetians unprepared. Two further contributing natural factors are subsidence, i.e. the natural sinking of the soil level, to which the lagoon is subject, and eustasy, i.e. the progressive rise of sea levels. While these phenomena would occur independently of human activity, their effects have increased because of inhabitation: the use of lagoonal water by the industries in Porto Marghera (now ceased) sped up subsidence, while global warming has been linked to increased eustasy. 
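The classification thresholds listed above translate directly into a small decision rule. The sketch below is only an illustration of those published categories; the function name and the sample values are invented for the example, and levels are in centimetres above the 1897 reference level.
def classify_tide(level_cm):
    # Thresholds follow the categories defined by the Venice hydrographic station.
    if level_cm >= 140:
        return "exceptional high water"
    if level_cm >= 110:
        return "very intense"
    if level_cm >= 80:
        return "intense"
    return "normal tide"

for level in (75, 95, 120, 194):   # 194 cm was the record of 4 November 1966
    print(level, "cm ->", classify_tide(level))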
Venice's "Tide Monitoring and Forecast Center" evaluates that the city has lost 23 cm in its elevation since 1897, the year of reference, 12 of which are attributable to natural causes (9 because of eustasy, 3 because of subsidence), 13 are due to the additional subsidence caused by human activity, while the "elastic recovery" of the soil has allowed the city to "gain back" 2 cm. Geophysical determinants linked to the Adriatic Sea The long and narrow rectangular shape of the Adriatic Sea is the source of an oscillating water motion (called seiche) along the basin's minor axis. The principal oscillation, which has a period of 21 hours and 30 minutes and an amplitude around 0.5 meters at the axis' extremities, supplements the natural tidal cycle, so that the Adriatic Sea has much more extreme tidal events than the rest of the Mediterranean. A secondary oscillation is also present, with an average period of 12 hours and 11 minutes. Because the timeframe of both oscillations is comparable to naturally occurring (yet independent) astronomical tides, the two effects overlap and reinforce each other. The combined effects are more significant at the perigees, which correspond to new moons, full moons and equinoxes. Should meteorological conditions (such as a strong scirocco wind blowing north along the major axis of the Adriatic basin) hamper the natural outflow of excess tidal water, high waters of greater magnitude can be expected in Venice. Specific characteristics of the Venetian lagoon The particular shape of the Venetian lagoon, the subsidence which has been affecting the soil in the coastal area, and the peculiar urban configuration all magnify the impact of the high waters on city dwellers and on the buildings. Furthermore, the northbound winds called bora and sirocco often blow directly towards the harbors that connect the lagoon to the Adriatic Sea, significantly slowing down (and, at times, completing blocking) the outflow of water from the lagoon toward the sea. When this occurs, the ebb is prevented inside the lagoon, so that the following high tide overlaps with the previous one, in a perverse self-supporting cycle. The creation of the industrial area of Porto Marghera, which lies immediately behind Venice, amplified the effects of high waters for two reasons: first, the land upon which the area is built was created by filling large parts of the lagoon where smaller islands just above sea level previously lay. These islands, called barene, acted as natural sponges (or "expansion tanks") when high tides occurred, absorbing a significant portion of the excess water. Second, a navigable channel was carved through the lagoon to allow oil tankers to reach the piers. This "Oil Channel" physically linked the sea to the coastal line, running through the harbor in Malamocco and crossing the lagoon for its entire width. This direct connection to the sea, which was obviously non-existent at the time of Venice's foundation, has subjected the city to more severe high tides. Porto Marghera and its facilities are not the only human-made contributors to higher tides. 
Rather, the municipality of Venice has published a study that suggests the following initiatives may have had an irreversible and catastrophic impact on the city's capacity to withstand acque alte in the future: the building of the Railroad Bridge (1841/1846) connecting Venice to the land, because its supporting pillars modify the natural motion of lagoonal water; the diversion of the river Brenta outside the Chioggia basin, which drained the 2.63 hectares of the river's delta that functioned as expansion tanks, absorbing extra lagoonal water during high tides; the building of offshore dammed piers (Porto di Malamocco, 1820/72; Porto di S. Nicolò, 1884/97; Porto di Chioggia, 1911/33), which obviously restrict the natural movement of water; the building of the Ponte della Libertà (1931/33), which connects Venice to the land; the building of the Riva dei Sette Martiri (1936/41), an extension to the Riva degli Schiavoni; the building of the artificial island Tronchetto used as a car and bus terminal (17 hectares, 1957/61); the doubling of the Railroad Bridge (1977). Acqua alta in Venice Affected portions of the city The flooding caused by the acqua alta is not uniform throughout the city of Venice because of several factors, such as the varying altitude of each zone above sea level, its distance from a channel, the relative heights of the sidewalks or pavements (fondamenta), the presence of full parapets (which act as dams) along the proximate channel, and the layout of the sewer and water drainage network (which acts as a channel for the flooding, as it is directly connected with the lagoon). These factors account for the severity and spread of a supernormal tidal phenomenon; as a city-commissioned study showed, a tide up to 90 cm above sea level leaves Venice virtually unaffected, while 50 cm of additional water affects more than half of the city. To assist pedestrian circulation during floods, the city installs a network of gangways (wide wooden planks on iron supports) on the main urban paths. This gangway system is generally set at 120 cm above the conventional sea level, and can flood as well when higher tides occur. Monitoring, alerting and control The Tide Monitoring and Forecast Centre of the City of Venice is fed information via a network of hydrographic stations, located in both the lagoon and the Adriatic Sea (on a scientific platform belonging to the Italian National Research Council, CNR). The centre's unique expertise on the phenomenon enables it to produce forecasts of remarkable accuracy, usually for the following 48 hours (longer forecasts are also issued, but tend to be less reliable, as discussed above), by analysing the meteorological and hydrographic data available. Forecasts are then announced to the population via the centre's website and dedicated phone lines, through local newspapers, on electronic displays, and at some stops of the vaporetti (public transport). When an acqua alta event is forecast, owners of commercial and residential property that is likely to be affected are contacted by phone (a free service provided by the municipality) or SMS. "Very intense" events warrant alerting the whole population, which is accomplished by sounding a dedicated system of sirens located throughout the city. 
On December 7, 2007, the alert system was modified (in Venice alone) to signal the magnitude of expected "very intense" tidal events to the population: sirens sound a first "await instructions" whistle to catch the population's attention, then produce a sequence of whistles whose number increases with the expected tide level (according to a published equivalence table). While not radically innovative, the new system communicates in greater detail the extent of the expected flooding to the population. The previous system, still used in the rest of the Venetian lagoon, only provides three levels of warning: the signal is sounded once for a tide above 110 cm., twice for tidal forecasts above 140 cm. and thrice for those above 160 cm. The new system was first used on March 24, 2008, communicating an accurately forecast tide level above 110 cm. Countermeasures The MOSE project (which stands for Modulo Sperimentale Elettromeccanico, i.e. "Experimental Electromechanical Module") has been under construction since 2003, the long time period partly because of budget constraints, and partly because of the sheer complexity of the undertaking. The project should significantly reduce the effects of "exceptional high waters" (but not those of lesser, yet detrimental, tidal events) by completing the installation of 79 separate 300-ton flaps hinged on the seabed between the lagoon and the Adriatic sea. While normally fully submerged and invisible, the flaps can be raised preemptively to create a temporary barrier, which is expected to protect the city from exceptional acqua alta. Statistics Regular scientific record-keeping of lagoonal water levels is considered to have begun in 1872, although some researchers suggest pushing this date to 1867, when an exceptional event (153 cm above sea level) was measured. However, because the first modern marigraph for regular tide monitoring was installed in Venice only in 1871, most documentation on the subject adopts the following year as the golden standard. The Venetian Institute for Science, Literature and Arts was appointed to the task by the newly formed Italian Kingdom, thus replacing the Magistrato alle Acque in 1866 upon annexing the city. The Institute ceased to exercise its monitoring and record-keeping functions in 1908, when the task, along with records and instruments, was passed to the Hydrographic Office of Venice. After the unprecedented acqua alta of 1966, the city set up a dedicated service to analyse data, monitor fluctuations, and forecast high tides, which is also charged with continuously keeping the population informed. Renamed Tide Monitoring and Forecast Center in 1980, the service has absorbed the record-keeping functions of the Hydrographic Office. Historical records Early records The first record of a large flood in the Venetian lagoon dates back to the so-called Rotta della Cucca, reported by Paul the Deacon as having occurred on October 17, 589. According to Paul, all rivers with mouths in the northern Adriatic, from the Tagliamento to the Po, overflowed at the same time, completely modifying the hydro-geologic equilibrium of the lagoon. Middle Ages The first documented description of acqua alta in Venice concerns the year 782 and is followed by other documented events in 840, 885, and 1102. In 1110 the water, following a violent sea storm (or, possibly, a seaquake and its subsequent tsunami), completely destroyed Metamauco (ancient name for Malamocco), Venice's political centre before the Doge's residence was moved to Rialto. 
Local chroniclers report that in 1240 "the water (that) flooded the streets (was) higher than a man". Other events are recorded to have occurred in 1268, 1280, 1282, and on December 20, 1283, which was probably an abnormally significant event, since a chronicle reported that Venice was "saved by a miracle". Chroniclers report that high tides occurred in 1286, 1297, and 1314; on February 15, 1340; on February 25, 1341; on January 18, 1386; and on May 31 and August 10, 1410. In the 15th century, high tides were recorded in 1419 and 1423, on May 11, 1428, and on October 10, 1430, as well as in 1444 and 1445. On November 10, 1442, the water is reported to have risen "four feet above the usual". Modern era High waters were recorded on May 29, 1511; in 1517; on October 16, 1521; on October 3 and, again, on December 20, 1535. Local chronicles also attest to floods occurring in 1543; on November 21, 1550; on October 12, 1559; and in 1599. The year 1600 was characterized by a high frequency of events, with floods on December 8 as well as December 18 and 19. The latter event was probably remarkable, since there are also records of very violent sea storms that, having "broken indeed the shores in several places, entered the towns of Lido Maggiore, Tre Porti, Malamocco, Chiozza, et cetera". Another noteworthy acqua alta took place on November 5, 1686. Several chronicles of the time, among them one written by a scientist, concur in reporting that "the waters reached the outdoor floor of ... [Sansovino's] Lodge", which is the monumental entrance to the Campanile di San Marco. A similar level was reached during the exceptional flood of November 4, 1966, which allowed scholars in the late 1960s to recreate a likely scenario for the 1686 flood. After accounting for the rebuilding of the Lodge after the 1902 fall of the Campanile and for subsidence, estimates concluded that the tide may have been as high as 254 cm above today's standard sea level. In the 18th century, records became more abundant and precise, reporting acque alte on December 21, 1727; New Year's Eve, 1738; October 7, 1729; November 5 and 28, 1742; October 31, 1746; November 4, 1748; October 31, 1749; October 9, 1750; Christmas Eve, 1792; and on Christmas Day, 1794. Finally, in the decades before the installation of the marigraphs, high waters are recorded to have occurred on December 5, 1839, as well as in 1848 (140 cm) and 1867 (153 cm). Exceptional high waters since 1923 These are the highest water levels documented by the Tide Monitoring and Forecast Centre of Venice: Maximum high tide level: 194 cm, recorded on November 4, 1966 Minimum ebb tide level: −121 cm, recorded on February 14, 1934 Maximum difference between a high tide and the following ebb tide: 163 cm, recorded on January 28, 1948 and on December 28, 1970 Maximum difference between an ebb tide and the following high tide: 146 cm, recorded on February 23–24, 1928 and on January 25, 1966 In popular culture In Kozue Amano's utopian science fantasy manga series Aria and its anime adaptation, acqua alta is a phenomenon that happens in the lands on Mars referred to as Neo Venezia. Donna Leon refers to acqua alta in multiple books in her Commissario Guido Brunetti Mystery Series, set in and around Venice. For example: In Acqua Alta (1996), book 5, acqua alta is an important plot point, as the title suggests. 
In Friends in High Places (2000), book 9, the residence of a bureaucrat who died mysteriously has a "high step the residents no doubt hoped would raise their front hall above the level of acqua alta", and inside, "There was a small entrance, little more than a metre wide, up from which rose two steps, further evidence of the Venetians' eternal confidence that they could outwit the tides that gnawed away perpetually at the foundations of the city. The room to which the steps led was clean and neat and surprisingly well lit for an apartment located on a piano rialzato (raised ground floor)". In the popular manga and anime One Piece by Eiichiro Oda, Season 8 is set in Water 7, a city inspired by Venice. This season introduces the phenomenon of Aqua Laguna, an annual event in which rising water levels completely flood the lower part of the city, causing massive damage to it. References Bibliography External links Venice Municipality - Tide Monitoring and Forecast Center (in Italian) APAT Agency for Environment Protection and Technical Services, Venice (in Italian) Adriatic Sea Floods in Italy Geography of Venice Hydrology
Acqua alta
Chemistry,Engineering,Environmental_science
3,737
20,249,418
https://en.wikipedia.org/wiki/Asymmetric%20graph
In graph theory, a branch of mathematics, an undirected graph is called an asymmetric graph if it has no nontrivial symmetries. Formally, an automorphism of a graph is a permutation σ of its vertices with the property that any two vertices u and v are adjacent if and only if σ(u) and σ(v) are adjacent. The identity mapping of a graph is always an automorphism, and is called the trivial automorphism of the graph. An asymmetric graph is a graph for which there are no other automorphisms. Note that the term "asymmetric graph" is not a negation of the term "symmetric graph," as the latter refers to a stronger condition than merely possessing nontrivial symmetries. Examples The smallest asymmetric non-trivial graphs have 6 vertices. The smallest asymmetric regular graphs have ten vertices; there exist asymmetric graphs that are 4-regular and 5-regular. One of the five smallest asymmetric cubic graphs is the twelve-vertex Frucht graph discovered in 1939. According to a strengthened version of Frucht's theorem, there are infinitely many asymmetric cubic graphs. Properties The class of asymmetric graphs is closed under complements: a graph G is asymmetric if and only if its complement is. Any n-vertex asymmetric graph can be made symmetric by adding and removing a total of at most n/2 + o(n) edges. Random graphs The proportion of graphs on n vertices with a nontrivial automorphism tends to zero as n grows, which is informally expressed as "almost all finite graphs are asymmetric". In contrast, again informally, "almost all infinite graphs have nontrivial symmetries." More specifically, countably infinite random graphs in the Erdős–Rényi model are, with probability 1, isomorphic to the highly symmetric Rado graph. Trees The smallest asymmetric tree has seven vertices: it consists of three paths of lengths 1, 2, and 3, linked at a common endpoint. In contrast to the situation for graphs, almost all trees are symmetric. In particular, if a tree is chosen uniformly at random among all trees on n labeled nodes, then with probability tending to 1 as n increases, the tree will contain some two leaves adjacent to the same node and will have symmetries exchanging these two leaves. References Graph families Graph
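The definition above lends itself to a direct, if inefficient, computational check for small graphs. The sketch below is illustrative only: the helper name and the particular 6-vertex example graph are assumptions chosen for demonstration rather than taken from the article, but the example can be verified to have no nontrivial automorphism, while a 4-cycle is included for contrast.

```python
from itertools import permutations

def automorphisms(n, edges):
    """Brute-force list of all automorphisms of a simple graph on vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    # A permutation p is an automorphism iff it maps every edge onto an edge;
    # since p is a bijection, mapping edges into the edge set suffices.
    return [p for p in permutations(range(n))
            if all(frozenset((p[u], p[v])) in edge_set for u, v in edge_set)]

# One small asymmetric example (hypothetical, chosen for illustration):
# a triangle 0-1-2 with a pendant path 2-3-4 and a pendant edge 1-5.
G6 = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (1, 5)]
print(len(automorphisms(6, G6)))  # 1 -> only the identity, so this graph is asymmetric

# A 4-cycle, by contrast, has 8 automorphisms (the symmetries of a square).
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(automorphisms(4, C4)))  # 8
```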
Asymmetric graph
Physics
483
5,024,592
https://en.wikipedia.org/wiki/Immune%20tolerance
Immune tolerance, also known as immunological tolerance or immunotolerance, refers to the immune system's state of unresponsiveness to substances or tissues that would otherwise trigger an immune response. It arises from prior exposure to a specific antigen and contrasts with the immune system's conventional role in eliminating foreign antigens. Depending on the site of induction, tolerance is categorized as either central tolerance, occurring in the thymus and bone marrow, or peripheral tolerance, taking place in other tissues and lymph nodes. Although the mechanisms establishing central and peripheral tolerance differ, their outcomes are analogous, ensuring immune system modulation. Immune tolerance is important for normal physiology and homeostasis. Central tolerance is crucial for enabling the immune system to differentiate between self and non-self antigens, thereby preventing autoimmunity. Peripheral tolerance plays a significant role in preventing excessive immune reactions to environmental agents, including allergens and gut microbiota. Deficiencies in either central or peripheral tolerance mechanisms can lead to autoimmune diseases, with conditions such as systemic lupus erythematosus, rheumatoid arthritis, type 1 diabetes, autoimmune polyendocrine syndrome type 1 (APS-1), and immunodysregulation polyendocrinopathy enteropathy X-linked syndrome (IPEX) as examples. Furthermore, disruptions in immune tolerance are implicated in the development of asthma, atopy, and inflammatory bowel disease. In the context of pregnancy, immune tolerance is vital for the gestation of genetically distinct offspring, as it moderates the alloimmune response sufficiently to prevent miscarriage. However, immune tolerance is not without its drawbacks. It can permit the successful infection of a host by pathogenic microbes that manage to evade immune elimination. Additionally, the induction of peripheral tolerance within the local microenvironment is a strategy employed by many cancers to avoid detection and destruction by the host's immune system. Historical background The phenomenon of immune tolerance was first described by Ray D. Owen in 1945, who noted that dizygotic twin cattle sharing a common placenta also shared a stable mixture of each other's red blood cells (though not necessarily 50/50), and retained that mixture throughout life. Although Owen did not use the term immune tolerance, his study showed the body could be tolerant of these foreign tissues. This observation was experimentally validated by Leslie Brent, Rupert E. Billingham and Peter Medawar in 1953, who showed that, by injecting foreign cells into fetal or neonatal mice, the mice could become accepting of future grafts from the same foreign donor. However, they were not thinking of the immunological consequences of their work at the time: as Medawar explains: "We did not set out with the idea in mind of studying the immunological consequences of the phenomenon described by Owen; on the contrary, we had been goaded by Dr. H.P. Donald into trying to devise a foolproof method of distinguishing monozygotic from dizygotic twins... ." However, these discoveries, and the host of allograft experiments and observations of twin chimerism they inspired, were seminal for the theories of immune tolerance formulated by Sir Frank Macfarlane Burnet and Frank Fenner, who were the first to propose the deletion of self-reactive lymphocytes to establish tolerance, now termed clonal deletion.
Burnet and Medawar were ultimately credited for "the discovery of acquired immune tolerance" and shared the Nobel Prize in Physiology or Medicine in 1960. Definitions and usage In their Nobel Lecture, Medawar and Burnet define immune tolerance as "a state of indifference or non-reactivity towards a substance that would normally be expected to excite an immunological response." Other more recent definitions have remained more or less the same. The 8th edition of Janeway's Immunobiology defines tolerance as "immunologically unresponsive...to another's tissues.". Immune tolerance encompasses the range of physiological mechanisms by which the body reduces or eliminates an immune response to particular agents. It is used to describe the phenomenon underlying discrimination of self from non-self, suppressing allergic responses, allowing chronic infection instead of rejection and elimination, and preventing attack of fetuses by the maternal immune system. Typically, a change in the host, not the antigen, is implied. Though some pathogens can evolve to become less virulent in host-pathogen coevolution, tolerance does not refer to the change in the pathogen but can be used to describe the changes in host physiology. Immune tolerance also does not usually refer to artificially induced immunosuppression by corticosteroids, lymphotoxic chemotherapy agents, sublethal irradiation, etc. Nor does it refer to other types of non-reactivity such as immunological paralysis. In the latter two cases, the host's physiology is handicapped but not fundamentally changed. Immune tolerance is formally differentiated into central or peripheral; however, alternative terms such as "natural" or "acquired" tolerance have at times been used to refer to establishment of tolerance by physiological means or by artificial, experimental, or pharmacological means. These two methods of categorization are sometimes confused, but are not equivalent—central or peripheral tolerance may be present naturally or induced experimentally. This difference is important to keep in mind. Central tolerance Central tolerance refers to the tolerance established by deleting autoreactive lymphocyte clones before they develop into fully immunocompetent cells. It occurs during lymphocyte development in the thymus and bone marrow for T and B lymphocytes, respectively. In these tissues, maturing lymphocytes are exposed to self-antigens presented by medullary thymic epithelial cells and thymic dendritic cells, or bone marrow cells. Self-antigens are present due to endogenous expression, importation of antigen from peripheral sites via circulating blood, and in the case of thymic stromal cells, expression of proteins of other non-thymic tissues by the action of the transcription factor AIRE. Those lymphocytes that have receptors that bind strongly to self-antigens are removed by induction of apoptosis of the autoreactive cells, or by induction of anergy, a state of non-activity. Weakly autoreactive B cells may also remain in a state of immunological ignorance where they simply do not respond to stimulation of their B cell receptor. Some weakly self-recognizing T cells are alternatively differentiated into natural regulatory T cells (nTreg cells), which act as sentinels in the periphery to calm down potential instances of T cell autoreactivity (see peripheral tolerance below). The deletion threshold is much more stringent for T cells than for B cells since T cells alone can cause direct tissue damage. 
Furthermore, it is more advantageous for the organism to let its B cells recognize a wider variety of antigen so it can produce antibodies against a greater diversity of pathogens. Since the B cells can only be fully activated after confirmation by more self-restricted T cells that recognize the same antigen, autoreactivity is held in check. This process of negative selection ensures that T and B cells that could initiate a potent immune response to the host's own tissues are eliminated while preserving the ability to recognize foreign antigens. It is the step in lymphocyte education that is key for preventing autoimmunity. Lymphocyte development and education are most active during fetal development but continue throughout life as immature lymphocytes are generated, slowing as the thymus degenerates and the bone marrow shrinks in adult life. Peripheral tolerance Peripheral tolerance develops after T and B cells mature and enter the peripheral tissues and lymph nodes. It is established by a number of partly overlapping mechanisms that mostly involve control at the level of T cells, especially CD4+ helper T cells, which orchestrate immune responses and give B cells the confirmatory signals they need in order to produce antibodies. Inappropriate reactivity toward normal self-antigen that was not eliminated in the thymus can occur, since the T cells that leave the thymus are relatively but not completely safe. Some will have receptors (TCRs) that can respond to self-antigens that are present in such high concentration outside the thymus that they can bind to "weak" receptors, or that the T cell did not encounter in the thymus (such as tissue-specific molecules like those in the islets of Langerhans, brain, or spinal cord that are not expressed by AIRE in thymic tissues). Those self-reactive T cells that escape intrathymic negative selection can inflict cell injury unless they are deleted or effectively muzzled in the peripheral tissue, chiefly by nTreg cells (see central tolerance above). Appropriate reactivity toward certain antigens can also be quieted by induction of tolerance after repeated exposure, or exposure in a certain context. In these cases, there is a differentiation of naïve CD4+ helper T cells into induced Treg cells (iTreg cells) in the peripheral tissue or nearby lymphoid tissue (lymph nodes, mucosa-associated lymphoid tissue, etc.). This differentiation is mediated by IL-2 produced upon T cell activation, and TGF-β from any of a variety of sources, including tolerizing dendritic cells (DCs), other antigen presenting cells, or, in certain conditions, surrounding tissue. Treg cells are not the only cells that mediate peripheral tolerance. Other regulatory immune cells include T cell subsets similar to but phenotypically distinct from Treg cells, including TR1 cells that make IL-10 but do not express Foxp3, TGF-β-secreting TH3 cells, as well as other less well-characterized cells that help establish a local tolerogenic environment. B cells also express CD22, a non-specific inhibitory receptor that dampens B cell receptor activation. A subset of B regulatory cells that makes IL-10 and TGF-β also exists. Some DCs can make Indoleamine 2,3-dioxygenase (IDO), which depletes the amino acid tryptophan needed by T cells to proliferate, thus reducing responsiveness. DCs also have the capacity to directly induce anergy in T cells that recognize antigen expressed at high levels and thus presented at steady-state by DCs.
In addition, FasL expression by immune-privileged tissues can result in activation-induced cell death of T cells. nTreg vs. iTreg cells The involvement of T cells, later classified as Treg cells, in immune tolerance was recognized in 1995 when animal models showed that CD4+ CD25+ T cells were necessary and sufficient for the prevention of autoimmunity in mice and rats. Initial observations showed that removal of the thymus of a newborn mouse resulted in autoimmunity, which could be rescued by transplantation of CD4+ T cells. A more specific depletion and reconstitution experiment established the phenotype of these cells as CD4+ and CD25+. Later in 2003, experiments showed that Treg cells were characterized by the expression of the Foxp3 transcription factor, which is responsible for the suppressive phenotype of these cells. It was assumed that, since the presence of the Treg cells originally characterized was dependent on the neonatal thymus, these cells were thymically derived. By the mid-2000s, however, evidence was accruing of conversion of naïve CD4+ T cells to Treg cells outside of the thymus. These were later defined as induced or iTreg cells to contrast them with thymus-derived nTreg cells. Both types of Treg cells quieten autoreactive T cell signaling and proliferation by cell-contact-dependent and -independent mechanisms. Contact-dependent mechanisms include granzyme or perforin secretion upon contact; upregulation of cAMP after contact, inducing anergy (reduced proliferation and IL-2 signaling); interaction with B7 on T cells; and downregulation of CD80/CD86 costimulatory molecules on antigen presenting cells upon interaction with CTLA-4 or lymphocyte function-associated antigen 1 (LFA-1). Contact-independent mechanisms include secretion of TGF-β, which sensitizes cells to suppression and promotes Treg-like cell differentiation; secretion of IL-10; and cytokine absorption leading to cytokine deprivation-mediated apoptosis. nTreg cells and iTreg cells, however, have a few important distinguishing characteristics that suggest they have different physiological roles: nTreg cells develop in the thymus; iTreg cells develop outside the thymus in chronically inflamed tissue, lymph nodes, spleen, and gut-associated lymphoid tissue (GALT). nTreg cells develop from Foxp3- CD25+ CD4+ cells while iTreg cells develop from Foxp3- CD25- CD4+ cells (both become Foxp3+ CD25+ CD4+). nTreg cells, when activated, require CD28 costimulation, while iTreg cells require CTLA-4 costimulation. nTreg cells are modestly specific for self-antigen while iTreg cells recognize allergens, commensal bacteria, tumor antigens, alloantigens, and self-antigens in inflamed tissue. Tolerance in physiology and medicine Allograft tolerance Immune recognition of non-self-antigens typically complicates transplantation and engrafting of foreign tissue from an organism of the same species (allografts), resulting in graft reaction. However, there are two general cases in which an allograft may be accepted. One is when cells or tissue are grafted to an immune-privileged site that is sequestered from immune surveillance (like in the eye or testes) or has strong molecular signals in place to prevent dangerous inflammation (like in the brain). The second is when a state of tolerance has been induced, either by previous exposure to the antigen of the donor in a manner that causes immune tolerance rather than sensitization in the recipient, or after chronic rejection.
Long-term exposure to a foreign antigen from fetal development or birth may result in establishment of central tolerance, as was observed in Medawar's mouse-allograft experiments. In usual transplant cases, however, such early prior exposure is not possible. Nonetheless, a few patients can still develop allograft tolerance upon cessation of all exogenous immunosuppressive therapy, a condition referred to as operational tolerance. CD4+ Foxp3+ Treg cells, as well as CD8+ CD28- regulatory T cells that dampen cytotoxic responses to grafted organs, are thought to play a role. In addition, genes involved in NK cell and γδT cell function associated with tolerance have been implicated in liver transplant patients. The unique gene signatures of these patients imply that their physiology may be predisposed toward immune tolerance. Fetal development The fetus has a different genetic makeup than the mother, as it also expresses its father's genes, and is thus perceived as foreign by the maternal immune system. Women who have borne multiple children by the same father typically have antibodies against the father's red blood cell and major histocompatibility complex (MHC) proteins. However, the fetus usually is not rejected by the mother, making it essentially a physiologically tolerated allograft. It is thought that the placental tissues which interface with maternal tissues not only try to escape immunological recognition by downregulating identifying MHC proteins but also actively induce a marked peripheral tolerance. Placental trophoblast cells express a unique Human Leukocyte Antigen (HLA-G) that inhibits attack by maternal NK cells. These cells also express IDO, which represses maternal T cell responses by amino acid starvation. Maternal T cells specific for paternal antigens are also suppressed by tolerogenic DCs and activated iTregs or cross-reacting nTregs. Some maternal Treg cells also release soluble fibrinogen-like protein 2 (sFGL2), which suppresses the function of DCs and macrophages involved in inflammation and antigen presentation to reactive T cells. These mechanisms altogether establish an immune-privileged state in the placenta that protects the fetus. A break in this peripheral tolerance results in miscarriage and fetal loss (for more information, see immune tolerance in pregnancy). The microbiome The skin and digestive tract of humans and many other organisms are colonized with an ecosystem of microorganisms that is referred to as the microbiome. Though in mammals a number of defenses exist to keep the microbiota at a safe distance, including a constant sampling and presentation of microbial antigens by local DCs, most organisms do not react against commensal microorganisms and tolerate their presence. Reactions are mounted, however, to pathogenic microbes and microbes that breach physiological barriers (epithelial barriers). Peripheral mucosal immune tolerance, mediated by iTreg cells and tolerogenic antigen-presenting cells, is thought to be responsible for this phenomenon. In particular, specialized gut CD103+ DCs that produce both TGF-β and retinoic acid efficiently promote the differentiation of iTreg cells in the gut lymphoid tissue. Foxp3- TR1 cells that make IL-10 are also enriched in the intestinal lining. A break in this tolerance is thought to underlie the pathogenesis of inflammatory bowel diseases like Crohn's disease and ulcerative colitis.
Oral tolerance Oral tolerance refers to a specific type of peripheral tolerance induced by antigens given by mouth and exposed to the gut mucosa and its associated lymphoid tissues. The intestine harbours many non-self-antigens that are able to induce an immune reaction. The immune system in the gut needs to refrain from responding to these antigens to prevent constant inflammation. On the other hand, the thin intestinal wall is vulnerable to pathogenic penetration. The immune system must maintain its responsiveness to pathogenic antigens to prevent infections. The immune system has developed mechanisms by which orally ingested antigens can suppress subsequent immune responses at a local and systemic level. Oral tolerance may have evolved to prevent hypersensitivity reactions to food proteins. Mechanisms of oral tolerance for food antigens The soluble antigens in the lumen of the intestine are transported to dendritic cells in the lamina propria. After receiving an antigen these dendritic cells migrate to the mesenteric lymph nodes. Here they interact with naïve T cells and induce differentiation into regulatory T cells. The newly differentiated regulatory T cells travel to the lamina propria, where they suppress the immune reaction against the recognized antigens. Antigen presentation to dendritic cells Dendritic cells play a crucial role in establishing oral tolerance for food antigens. The dendritic cells in the intestines cannot directly sample the antigens, as they are located behind the epithelial wall. There are different mechanisms by which the dendritic cells come into contact with the food antigens. Dissolved antigens can be taken up by enterocytes. The antigens are then partially degraded in the lysosomes. The partially degraded antigens are presented on MHCII after the lysosomes merge with MHCII-carrying endosomes. The MHCII-carrying vesicles are released on the basolateral surface of the enterocytes. Here dendritic cells can interact with the presented antigens. Another pathway of soluble antigen transport occurs through goblet cells. Goblet cell-associated antigen passages (GAP) transfer low molecular weight soluble antigens to CD103+ dendritic cells. CD103+ dendritic cells are associated with tolerance induction. CX3CR1+ macrophages extend in between enterocytes and directly take up antigens from the intestinal lumen. These macrophages are not capable of traveling to the mesenteric lymph nodes. They form gap junctions with CD103+ dendritic cells and transfer antigens to the dendritic cells. Regulatory T cells After antigen interaction the CD103+ dendritic cells travel to the mesenteric lymph nodes where they interact with their T cell population. Within the mesenteric lymph nodes the CD103+ dendritic cells will induce differentiation of the naïve T cell population into Foxp3+ regulatory T cells (iTregs). Under inflammatory conditions, CD103+ dendritic cells will induce Th1 cells instead. The local microenvironment determines whether CD103+ dendritic cells act in a tolerogenic or immunogenic manner. The differentiation into regulatory T cells is dependent on TGF-β and retinoic acid. Retinoic acid also programs the T cells to stay in the gut environment by inducing CCR9 and α4β7 expression. The mesenteric lymph node stromal cells also release retinoic acid and are required for gut localisation of the mesenteric lymph node T cell population. The differentiated regulatory T cells subsequently migrate to the lamina propria, where they multiply.
CX3CR1+ macrophages present in this environment secrete IL-10, which is required for the expansion of the regulatory T cell population. In the lamina propria the regulatory T cell population creates a tolerogenic environment toward food antigens. It is known that tolerance to food antigens is systemic. The mechanism that establishes this systemic tolerance is not yet fully understood. Other mechanisms of oral tolerance Oral tolerance is also established by inducing anergy or deletion of antigen-specific T cells. This process can take place in the liver. The liver is exposed to many food antigens through the portal vein and is therefore also a site of food tolerance induction. Upon high antigen exposure plasmacytoid dendritic cells from the liver and mesenteric lymph node can induce anergy or deletion of antigen-specific T cells. Anergic T cells are hyporesponsive to their specific antigen. Hypersensitivity and oral tolerance The hypo-responsiveness induced by oral exposure is systemic and can reduce hypersensitivity reactions in certain cases. Records from 1829 indicate that American Indians would reduce contact hypersensitivity from poison ivy by consuming leaves of related Rhus species; however, contemporary attempts to use oral tolerance to ameliorate autoimmune diseases like rheumatoid arthritis and other hypersensitivity reactions have been mixed. The systemic effects of oral tolerance may be explained by the extensive recirculation of immune cells primed in one mucosal tissue to other mucosal tissues, allowing extension of mucosal immunity. The same probably occurs for cells mediating mucosal immune tolerance. Allergy and hypersensitivity reactions in general are traditionally thought of as misguided or excessive reactions by the immune system, possibly due to broken or underdeveloped mechanisms of peripheral tolerance. Usually, Treg cells, TR1, and Th3 cells at mucosal surfaces suppress type 2 CD4 helper cells, mast cells, and eosinophils, which mediate allergic responses. Deficits in Treg cells or their localization to mucosa have been implicated in asthma and atopic dermatitis. Attempts have been made to reduce hypersensitivity reactions by oral tolerance and other means of repeated exposure. Repeated administration of the allergen in slowly increasing doses, subcutaneously or sublingually, appears to be effective for allergic rhinitis. Repeated administration of antibiotics, which can form haptens to cause allergic reactions, can also reduce antibiotic allergies in children. The tumor microenvironment Immune tolerance is an important means by which growing tumors, which have mutated proteins and altered antigen expression, prevent elimination by the host immune system. It is well recognized that tumors are a complex and dynamic population of cells composed of transformed cells as well as stromal cells, blood vessels, tissue macrophages, and other immune infiltrates. These cells and their interactions all contribute to the changing tumor microenvironment, which the tumor largely manipulates to be immunotolerant so as to avoid elimination. There is an accumulation of metabolic enzymes that suppress T cell proliferation and activation, including IDO and arginase, and high expression of tolerance-inducing ligands like FasL, PD-1, CTLA-4, and B7. Pharmacologic monoclonal antibodies targeted against some of these ligands have been effective in treating cancer.
Tumor-derived vesicles known as exosomes have also been implicated in promoting differentiation of iTreg cells and myeloid-derived suppressor cells (MDSCs), which also induce peripheral tolerance. In addition to promoting immune tolerance, other aspects of the microenvironment aid in immune evasion and induction of tumor-promoting inflammation. Evolution Though the exact evolutionary rationale behind the development of immunological tolerance is not completely known, it is thought to allow organisms to adapt to antigenic stimuli that will consistently be present instead of expending considerable resources fighting them off repeatedly. Tolerance in general can be thought of as an alternative defense strategy that focuses on minimizing the impact of an invader on host fitness, instead of on destroying and eliminating the invader. Such efforts may have a prohibitive cost on host fitness. In plants, where the concept was originally used, tolerance is defined as a reaction norm of host fitness over a range of parasite burdens, and can be measured from the slope of the line fitting these data. Immune tolerance may constitute one aspect of this defense strategy, though other types of tissue tolerance have been described. The advantages of immune tolerance, in particular, may be seen in experiments with mice infected with malaria, in which more tolerant mice have higher fitness at greater pathogen burdens. In addition, development of immune tolerance would have allowed organisms to reap the benefits of having a robust commensal microbiome, such as increased nutrient absorption and decreased colonization by pathogenic bacteria. Though it seems that the existence of tolerance is mostly adaptive, allowing an adjustment of the immune response to a level appropriate for the given stressor, it comes with important evolutionary disadvantages. Some infectious microbes take advantage of existing mechanisms of tolerance to avoid detection and/or elimination by the host immune system. Induction of regulatory T cells, for instance, has been noted in infections with Helicobacter pylori, Listeria monocytogenes, Brugia malayi, and other worms and parasites. Another important disadvantage of the existence of tolerance may be susceptibility to cancer progression. Treg cells inhibit anti-tumor NK cells. The injection of Treg cells specific for a tumor antigen also can reverse experimentally-mediated tumor rejection based on that same antigen. The prior existence of immune tolerance mechanisms, selected for their fitness benefits, facilitates their exploitation during tumor growth. Tradeoffs between immune tolerance and resistance Immune tolerance contrasts with resistance. Upon exposure to a foreign antigen, either the antigen is eliminated by the standard immune response (resistance), or the immune system adapts to the pathogen, promoting immune tolerance instead. Resistance typically protects the host at the expense of the parasite, while tolerance reduces harm to the host without having any direct negative effects on the parasite. Each strategy has its unique costs and benefits for host fitness. Evolution works to optimize host fitness, so whether elimination or tolerance occurs depends on which would benefit the organism most in a given scenario. If the antigen is from a rare, dangerous invader, the costs of tolerating its presence are high and it is more beneficial to the host to eliminate it.
Conversely, if experience (of the organism or its ancestors) has shown that the antigen is innocuous, then it would be more beneficial to tolerate the presence of the antigen rather than pay the costs of inflammation. Despite having mechanisms for both immune resistance and tolerance, any one organism may be overall more skewed toward a tolerant or resistant phenotype depending on individual variation in both traits due to genetic and environmental factors. In mice infected with malaria, different genetic strains of mice fall neatly along a spectrum of being more tolerant but less resistant or more resistant but less tolerant. Patients with autoimmune diseases also often have a unique gene signature and certain environmental risk factors that predispose them to disease. This may have implications for current efforts to identify why certain individuals may be disposed to or protected against autoimmunity, allergy, inflammatory bowel disease, and other such diseases. See also Evolutionary medicine § tradeoffs Immunotherapy Infectious tolerance Mithridatism Plant tolerance to herbivory References External links Immune Tolerance Network International Conference on Immune Tolerance Immunology
Immune tolerance
Biology
6,040
57,234,656
https://en.wikipedia.org/wiki/Kanrodai
The Kanrodai (甘露台) is a sacred entity in Tenrikyo and Tenrikyo-derived Japanese new religions such as Honmichi, Honbushin, and Daehan Cheolligyo. While Tenrikyo considers the Kanrodai to be a physical pillar, Honmichi gives a new interpretation in which the Kanrodai is embodied as a living person. Honbushin recognizes a Kanrodai on Kamiyama, a mountain in Okayama, as well as a human Kanrodai as its founder Ōnishi Tama. Tenrikyo In Tenrikyo, the Kanrodai (甘露台) is a hexagonal stand in the Divine Residence (Oyasato) of the Tenrikyo Church Headquarters in Tenri, Nara, Japan. It marks the Jiba. Adherents believe that when the hearts of human beings have been adequately purified through the Service, a sweet dew will fall from the heavens onto a vessel placed on top of the stand. Since 1875, there have been several different Kanrodai installed at the Jiba. June 1875: After Nakayama Miki identified the sacred spot of the Jiba, Iburi Izō made a two-metre high wooden kanrodai. 1881: Construction of a stone kanrodai began. However, construction stopped after only two tiers were made, and the police confiscated it in 1882. A pile of pebbles marked the Jiba afterwards. 1888: A wooden board Kanrodai with two tiers was built and placed at the Jiba. 1934: A complete 13-tier hinagata (雛形, or "model") Kanrodai measuring approximately 2.5 metres high was built and placed at the Jiba. It has been regularly replaced on special occasions. July 2000: Most recent replacement of the Kanrodai, as of 2005. Honmichi In Honmichi, the Kanrodai is a living person, namely the religion's founder Ōnishi Aijirō. Honbushin In Honbushin, the Kanrodai is located in a shrine on the summit of Kamiyama (神山), a mountain southeast of the city center of Okayama. Daehan Cheolligyo Unlike in Japanese Tenrikyo, Daehan Cheolligyo's adherents in South Korea pray directly to wooden kanrodai fixtures installed within the main halls of their respective churches (the one at the headquarters in Uijeongbu being much larger), rather than to mirrors from Shinto traditions, during localized services adapted to the Korean social environment. See also Asherah pole, Canaanite sacred tree or pole honouring Asherah, consort of El Axis mundi Baetylus, type of sacred standing stone Bema and bimah, elevated platform Benben Ceremonial pole Foundation Stone High place, raised place of worship Kami, central objects of worship in Shinto, some of which are natural phenomena and objects including stones Lingam, an abstract representation of the Hindu deity Shiva Matzevah, sacred pillar (Hebrew Bible) or Jewish headstone Omphalos of Delphi Peace pole Pole worship Totem pole References Tenrikyo Geographical centres Religious objects Mythological objects
Kanrodai
Physics,Mathematics
647
43,493,792
https://en.wikipedia.org/wiki/Deryk%20Osthus
Deryk Osthus is the Professor of Graph Theory at the School of Mathematics, University of Birmingham. He is known for his research in combinatorics, predominantly in extremal and probabilistic graph theory. Career Osthus earned a B.A. in mathematics from Cambridge University in 1996, followed by the Certificate of Advanced Studies in Mathematics (Part III) from Cambridge in 1997. He earned a PhD in theoretical computer science from Humboldt University of Berlin in 2000. From 2000 until 2004, he was a postdoctoral researcher in Berlin. He joined the University of Birmingham as a lecturer in 2004 and was promoted to senior lecturer in 2010. From 2011 to 2012, he was a reader in graph theory. He was appointed Professor in Graph Theory in 2012. Awards and honours Together with Daniela Kühn and Alain Plagne, he was one of the first winners of the European Prize in Combinatorics in 2003. Together with Kühn, he was a recipient of the 2014 Whitehead Prize of the London Mathematical Society for "their many results in extremal graph theory and related areas. Several of their papers resolve long-standing open problems in the area." In 2014, he was also an invited lecturer at the International Congress of Mathematicians in Seoul. Grants Osthus's research has been supported by a series of grants, beginning in the mid-2000s. In August 2007, he received his first grant, for "Graph expansion and applications", followed in October 2007 by a grant for "The regularity method for directed graphs." Further grants followed for "Problems in Extremal Graph Theory" (October 2010), "Edge-colourings and Hamilton decompositions of graphs" (June 2012), "Asymptotic properties of graphs" (December 2012), and "Randomized approaches to combinatorial packing and covering problems" (March 2015). Since January 2019, he has held a grant for "Approximate structure in large graphs and hypergraphs." Research Interests Osthus's research interests are in extremal graph theory, random graphs, randomized algorithms, structural graph theory, and Ramsey theory, and his work in these areas has resulted in a wide range of publications. His recent research has included results on Hamilton cycles and more general spanning substructures, as well as decompositions of graphs and hypergraphs. References External links Year of birth missing (living people) Living people Graph theorists Academics of the University of Birmingham Whitehead Prize winners Humboldt University of Berlin alumni
Deryk Osthus
Mathematics
674
777,462
https://en.wikipedia.org/wiki/Neuroblast
In vertebrates, a neuroblast or primitive nerve cell is a postmitotic cell that does not divide further, and which will develop into a neuron after a migration phase. In invertebrates such as Drosophila, neuroblasts are neural progenitor cells which divide asymmetrically to produce a neuroblast, and a daughter cell of varying potency depending on the type of neuroblast. Vertebrate neuroblasts differentiate from radial glial cells and are committed to becoming neurons. Neural stem cells, which only divide symmetrically to produce more neural stem cells, transition gradually into radial glial cells. Radial glial cells, also called radial glial progenitor cells, divide asymmetrically to produce a neuroblast and another radial glial cell that will re-enter the cell cycle. This mitosis occurs in the germinal neuroepithelium (or germinal zone), when a radial glial cell divides to produce the neuroblast. The neuroblast detaches from the epithelium and migrates while the radial glial progenitor cell produced stays in the lumenal epithelium. The migrating cell will not divide further and this is called the neuron's birthday. Cells with the earliest birthdays will only migrate a short distance. Those cells with later birthdays will migrate further to the more outer regions of the cerebral cortex. The positions that the migrated cells occupy will determine their neuronal differentiation. Formation Neuroblasts are formed by the asymmetric division of radial glial cells. They start to migrate as soon as they are born. Neurogenesis can only take place when neural stem cells have transitioned into radial glial cells. Differentiation Neuroblasts are mainly present as precursors of neurons during embryonic development; however, they also constitute one of the cell types involved in adult neurogenesis. Adult neurogenesis is characterized by neural stem cell differentiation and integration in the mature adult mammalian brain. This process occurs in the dentate gyrus of the hippocampus and in the subventricular zones of the adult mammalian brain. Neuroblasts are formed when a neural stem cell, which can differentiate into any type of mature neural cell (i.e. neurons, oligodendrocytes, astrocytes, etc.), divides and becomes a transit amplifying cell. Transit amplifying cells are slightly more differentiated than neural stem cells and can divide asymmetrically to produce postmitotic neuroblasts and glioblasts, as well as other transit amplifying cells. A neuroblast, a daughter cell of a transit amplifying cell, is initially a neural stem cell that has reached the "point of no return." A neuroblast has differentiated such that it will mature into a neuron and not any other neural cell type. Neuroblasts are being studied extensively as they have the potential to be used therapeutically to combat cell loss due to injury or disease in the brain, although their potential effectiveness is debated. Migration In the embryo neuroblasts form the middle mantle layer of the neural tube wall which goes on to form the grey matter of the spinal cord. The outer layer to the mantle layer is the marginal layer and this contains the myelinated axons from the neuroblasts forming the white matter of the spinal cord. The inner layer is the ependymal layer that will form the lining of the ventricles and central canal of the spinal cord. In humans, neuroblasts produced by stem cells in the adult subventricular zone migrate into damaged areas after brain injuries. 
However, they are restricted to the subtype of small interneuron-like cells, and it is unlikely that they contribute to functional recovery of striatal circuits. Clinical significance There are several disorders known as neuronal migration disorders that can cause serious problems. These arise from a disruption in the pattern of migration of the neuroblasts on their way to their target destinations. The disorders include lissencephaly, microlissencephaly, pachygyria, and several types of gray matter heterotopia. Neuroblast development in Drosophila In the fruit fly model organism Drosophila melanogaster, a neuroblast is a neural progenitor cell which divides asymmetrically to produce a neuroblast and either a neuron, a ganglion mother cell (GMC), or an intermediate neural progenitor, depending on the type of neuroblast. During embryogenesis, embryonic neuroblasts delaminate from either the procephalic neuroectoderm (for brain neuroblasts), or the ventral nerve cord neuroectoderm (for abdominal neuroblasts). During larval development, optic lobe neuroblasts are generated from a neuroectoderm called the Outer Proliferation Center. There are more than 800 optic lobe neuroblasts, 105 central brain neuroblasts, and 30 abdominal neuroblasts per hemisegment (a bilateral half of a segment). Neuroblasts undergo three known division types. Type 0 neuroblasts divide to give rise to a neuroblast, and a daughter cell which directly differentiates into a single neuron or glia. Type I neuroblasts give rise to a neuroblast and a ganglion mother cell (GMC), which undergoes a terminal division to generate a pair of sibling neurons. This is the most common form of cell division, and is observed in abdominal, optic lobe, and central brain neuroblasts. Type II neuroblasts give rise to a neuroblast and a transit amplifying Intermediate Neural Progenitor (INP). INPs divide in a manner similar to type I neuroblasts, producing an INP and a ganglion mother cell. While only 8 type II neuroblasts exist in the central brain, their lineages are both much larger and more complex than those of type I neuroblasts. The switch from pluripotent neuroblast to differentiated cell fate is facilitated by the proteins Prospero, Numb, and Miranda. Prospero is a transcription factor that triggers differentiation. It is expressed in neuroblasts, but is kept out of the nucleus by Miranda, which tethers it to the basal cortex of the cell. This also results in asymmetric division, where Prospero localizes in only one of the two daughter cells. After division, Prospero enters the nucleus, and the cell it is present in becomes the GMC. Neuroblasts are capable of giving rise to the vast neural diversity present in the fly brain using a combination of spatial and temporal restriction of gene expression that gives progeny born from each neuroblast a unique identity depending on both their parent neuroblast and their birth date. This is partly based on the position of the neuroblast along the Anterior/Posterior and Dorsal/Ventral axes, and partly on a temporal sequence of transcription factors that are expressed in a specific order as neuroblasts undergo sequential divisions. See also Neuroblastoma Posterior column List of human cell types derived from the germ layers References Embryology of nervous system Cell biology
Neuroblast
Biology
1,536
20,672,522
https://en.wikipedia.org/wiki/Boulder%20River%20Wilderness
Boulder River Wilderness is a wilderness area within the Mount Baker-Snoqualmie National Forest in the western Cascade Range of Washington state. Topography Boulder River Wilderness is made up of dense forests and steep ridges that rise to the summits of Three Fingers and Whitehorse Mountain. Elevations range from in the Boulder River Valley to the south peak of Three Fingers. South Peak is also home to an old fire lookout. This high ridge bears a narrow saw-toothed profile with several sharp summits, which include Liberty, Big Bear, and Whitehorse Mountains and Salish and Buckeye Peaks, all above in elevation. Several steep and heavily wooded ridges thrust out east and west from the central crest of the wilderness. Boulder River, a tributary to the North Fork Stillaguamish River, is the wilderness area's primary drainage and runs approximately through the northwest section of the wilderness. The Long Creek Research Natural Area on the south slope of Wiley Ridge is also protected within the wilderness boundary. Vegetation Common vegetation in Boulder River Wilderness includes old-growth Douglas fir, true fir, western hemlock, and western red cedar, as well as bigleaf maple, alder, willow, and devil's club. Sitka spruce can be found at the lowest elevations along the Boulder River. The Boulder River Wilderness contains some of the last substantial tracts of lowland virgin forest in Washington state. Wildlife Black bears, black-tailed deer, and elk inhabit the forest, and mountain goats can be found on the rocky shelves above the tree line. Hiking Boulder River Wilderness boasts approximately of trails, though the central core of the area remains rough and trailless. A short trail extends up Boulder River for through old-growth forest. Three short trails climb toward the high crest and eventually peter out. Another trail crosses the northeast corner of the Wilderness over Squire Creek Pass, with outstanding views of the high crest. See also List of U.S. Wilderness Areas List of old-growth forests References External links Boulder River Wilderness - Mt. Baker-Snoqualmie National Forest Boulder River Wilderness - Wilderness.net Boulder River Wilderness, Washington - Backpacker Magazine Wilderness areas of Washington (state) Old-growth forests Cascade Range Protected areas of Snohomish County, Washington Mount Baker-Snoqualmie National Forest Protected areas established in 1984 1984 establishments in Washington (state)
Boulder River Wilderness
Biology
472
12,501,792
https://en.wikipedia.org/wiki/Bisoxazoline%20ligand
Bis(oxazoline) ligands (often abbreviated BOX ligands) are a class of privileged chiral ligands containing two oxazoline rings. They are typically C2‑symmetric and exist in a wide variety of forms, with structures based around CH2 or pyridine linkers being particularly common (often generalised as BOX and PyBOX respectively). The coordination complexes of bis(oxazoline) ligands are used in asymmetric catalysis. These ligands are examples of C2-symmetric ligands. Synthesis The synthesis of oxazoline rings is well established and in general proceeds via the cyclisation of a 2‑amino alcohol with any of a number of suitable functional groups. In the case of bis(oxazoline)s, synthesis is most conveniently achieved by using bi-functional starting materials, as this allows both rings to be produced at once. Of the materials suitable, dicarboxylic or dinitrile compounds are the most commonly available and hence the majority of bis(oxazoline) ligands are produced from these materials. Part of the success of the BOX and PyBOX motifs lies in their convenient one-step synthesis from malononitrile and dipicolinic acid, which are commercially available at low expense. Chirality is introduced with the amino alcohols, as these are prepared from amino acids and hence are chiral (e.g. valinol). Catalytic applications In general, for methylene-bridged BOX ligands the stereochemical outcome is consistent with a twisted square planar intermediate that was proposed based on related crystal structures. The substituent at the oxazoline's 4-position blocks one enantiotopic face of the substrate, leading to enantioselectivity. This has been demonstrated in aldol-type reactions, but is applicable to a wide variety of reactions such as Mannich-type reactions, the ene reaction, Michael addition, the Nazarov cyclization, and the hetero-Diels-Alder reaction. On the other hand, two-point binding on a Lewis acid bearing the meridionally tridentate PyBOX ligand would result in a square pyramidal complex. A study using (benzyloxy)acetaldehyde as the electrophile showed that the stereochemical outcome is consistent with the carbonyl oxygen binding equatorially and the ether oxygen binding axially. Metal complexes incorporating bis(oxazoline) ligands are effective for a wide range of asymmetric catalytic transformations and have been the subject of numerous literature reviews. The neutral character of bis(oxazoline)s makes them well suited to use with noble metals, with copper complexes being particularly common. Their most important and commonly used applications are in carbon–carbon bond forming reactions. Carbon–carbon bond forming reactions bis(oxazoline) ligands have been found to be effective for a range of asymmetric cycloaddition reactions; this began with the very first application of BOX ligands in carbenoid cyclopropanations and has been expanded to include 1,3-dipolar cycloadditions and Diels-Alder reactions. Bisoxazoline ligands have also been found to be effective for aldol, Michael and ene reactions, amongst many others. Other reactions The success of bis(oxazoline) ligands for carbenoid cyclopropanations led to their application for aziridination. Another common reaction is hydrosilylation, which dates back to the first use of PyBOX ligands. Other niche applications include use as fluorination catalysts and in Wacker-type cyclisations. History Oxazoline ligands were first used for asymmetric catalysis in 1984 when Brunner et al.
showed a single example, along with a number of Schiff bases, as being effective for enantioselective carbenoid cyclopropanation. Schiff bases were prominent ligands at the time, having been used by Ryōji Noyori during the discovery of asymmetric catalysis in 1968 (for which he and William S. Knowles would later be awarded the Nobel Prize in Chemistry). Brunner's work was influenced by that of Tadatoshi Aratani, who had worked with Noyori, before publishing a number of papers on enantioselective cyclopropanation using Schiff bases. In this first usage the oxazoline ligand performed poorly, giving an ee of 4.9% compared to 65.6% from one of the Schiff base ligands. However, Brunner reinvestigated oxazoline ligands during research into the monophenylation of diols, leading to the development of chiral pyridine oxazoline ligands, which achieved ee's of 30.2% in 1986 and 45% in 1989. In the same year Andreas Pfaltz et al. reported the use of C2‑symmetric semicorrin ligands for enantioselective carbenoid cyclopropanations, achieving impressive results with ee's of between 92% and 97%. Reference was made to both Brunner's and Aratani's work; however, the design of the ligands was also largely based on his earlier work with various macrocycles. A disadvantage of these ligands, however, was that they required a multi-step synthesis with a low overall yield of approximately 30%. Brunner's work led to the development of the very first bisoxazolines by Nishiyama et al., who synthesised the first PyBOX ligands in 1989. These ligands were used in the hydrosilylation of ketones, achieving ee's of up to 93%. The first BOX ligands were reported a year later by Masamune et al. and were first used in copper-catalysed carbenoid cyclopropanation reactions, achieving ee's of up to 99% with 1% molar loadings. This was a remarkable result for the time and generated significant interest in the BOX motif. As the synthesis of 2-oxazoline rings was already well established at this time (literature reviews in 1949 and 1971), research proceeded quickly, with papers from new groups being published within a year and review articles being published by 1996. Today a considerable number of bis(oxazoline) ligands exist; structurally these are still largely based around the classic BOX and PyBOX motifs, however they also include a number of alternative structures, such as axially chiral compounds. See also Phosphinooxazolines (PHOX) Trisoxazolines (TRISOX) References Ligands Oxazolines
Bisoxazoline ligand
Chemistry
1,408
41,533,727
https://en.wikipedia.org/wiki/TouchBistro%20Inc.
TouchBistro Inc. is a Toronto-based software company that develops a restaurant point of sale system for the iPad. TouchBistro Inc. was founded by Alex Barrotti in 2010. History Barrotti had previously founded INEX, a web-based online storefront creator, which he sold to Infospace (now Blucora) in 1999 for $45 million. TouchBistro is an app that supports tableside ordering, custom restaurant layouts, custom menus, bill splitting, sales reports, and an unlimited number of order and cash register printers. It is available from the iTunes store. It has been the top-grossing Food and Beverage iTunes app in over 34 countries. TouchBistro does not require an Internet connection, communicating with printers and cash drawers via local Wi-Fi. PayPal Partnership In August 2013, TouchBistro partnered with PayPal to facilitate restaurant payments via smartphones. See also Point of sale companies References Point of sale companies Mobile technology Retail point of sale systems Business software
TouchBistro Inc.
Technology
206
41,197,921
https://en.wikipedia.org/wiki/Verbal%20overshadowing
Verbal overshadowing is a phenomenon where giving a verbal description of sensory input impairs formation of memories of that input. This was first reported by Schooler and Engstler-Schooler (1990) where it was shown that the effects can be observed across multiple domains of cognition which are known to rely on non-verbal knowledge and perceptual expertise. One example of this is memory, which has been known to be influenced by language. Seminal work by Carmichael and collaborators (1932) demonstrated that when verbal labels are connected to non-verbal forms during an individual's encoding process, it could potentially bias the way those forms are reproduced. Because of this, memory performance relying on reportable aspects of memory that encode visual forms should be vulnerable to the effects of verbalization. Initial findings Schooler and Engstler-Schooler (1990) were the first to report findings of verbal overshadowing. In their study, participants watched a video of a simulated robbery and were instructed to either verbally describe the robber or engage in a control task. Those who engaged in giving a verbal description were less likely to correctly identify the robber from a test lineup, compared to those who engaged in the control task. A larger effect was detected when the verbal description was provided 20, rather than 5, minutes after the video, and immediately before the test lineup. A meta-analysis by Meissner and Brigham (2008) supported the effects of verbal overshadowing, showing a small but reliably negative effect. General effects of verbal overshadowing The effects of verbal overshadowing have been generalized across multiple domains of cognition that are known to rely on non-verbal knowledge and perceptual expertise, such as memory. Memory has been known to be influenced by language. Seminal work by Carmichael and collaborators (1932) demonstrated that labels attached to, or associated with, non-verbal forms during memory encoding can affect the way the forms were subsequently reproduced. Because of this, memory performance that relies on reportable aspects of memory that encode visual forms should be vulnerable to the effects of verbalization. Pelizzon, Brandimonte, and Luccio (2002) found that visual memory representations appear to incorporate visual, spatial, and temporal characteristics. It is explained as follows: Hatano, Ueno, Kitagami, and Kawaguchi found that verbal overshadowing is likely to occur when participants verbally described targets in detail. Detailed verbal descriptions resulted in more frequently inaccurate descriptions that in turn created inaccurate representations in the memories of participants. Inaccuracies are also likely to occur when face recognition comes immediately after verbalization. Other forms of non-verbal knowledge affected by verbal overshadowing include the following: Verbalization of stimuli leads to the disruption of non-reportable processes that are necessary for achieving insight solutions, which are distinct from language processes. Schooler, Ohlsson, and Brooks (1993) found that face recognition requires information that cannot be adequately verbalized, giving rise to difficulty in describing factors in recognition judgments. Subjects were less effective in solving insight problems when compelled to put their thoughts in words, which suggests that language may interfere with thought. The verbal overshadowing effect was not seen when participants engaged in articulatory suppression. 
Performance was reduced in both the verbal and non-verbal description conditions. This is evidence that verbal encoding plays a role in face recognition. By testing with distracting faces presented between study and test, Lloyd-Jones and Brown (2008) suggested a dual-process approach to recognition memory took place, that verbalization influenced familiarity-based processes at first, but its effects were later seen on recollection, when discrimination between items became more difficult. Verbal overshadowing in facial recognition The verbal overshadowing effect can be found for facial recognition because faces are predominately processed in a holistic or configurable manner. (Tanaka & Farah, 1993; Tanaka & Sengco, 1997) Verbalizing one's memory for a face is done using a featural or analytic strategy, leading to a drift from the configurable information about the face and to impaired recognition performance. However, Fallshore & Schooler (1995) found that the verbal overshadowing effect was not found when participants described faces of races different from their own. A study by Brown and Lloyd-Jones (2003) found that there was no verbal overshadowing effect found in car descriptions; it was only seen in facial descriptions. The authors noted that descriptions were no different on any measure including accuracy. It is suggested that less expertise in verbalizing faces rather than cars invokes a stronger shift in verbal and featural processing. This supports the concept of a transfer inappropriate retrieval framework and addresses some limitations of the effect. Wickham and Swift (2006) suggested that the verbal overshadowing effect is not seen in describing all faces, and one aspect that determines this is distinctiveness. Results showed that typical faces produce verbal overshadowing, while distinctive faces did not. In studies of eyewitness reports, variation in response criteria given by participants influenced the quality of the descriptions generated and accuracy on identification task, known as the retrieval-based effect. Face recognition was also impaired when subjects described a familiar face, such as a parent, or when describing a previously seen but novel face. Dodson, Johnson, and Schooler (1997) found that recognition was also impaired when participants were provided with a description of a previously seen face, and they were able to ignore provided versus self-generated descriptions more easily. This finding of verbal overshadowing suggested that eyewitness recognition is not only affected by their own descriptions, but of descriptions heard from others, such other eyewitness testimonies. Voice recognition The verbal overshadowing effect has also been found to affect voice identification. Research shows that describing a non-verbal stimuli leads to a decrease in recognition accuracy. In an unpublished study by Schooler, Fiore, Melcher, and Ambadar (1996), participants listened to a tape-recorded voice, after which they were asked either to verbally describe it or to not do so, and then asked to distinguish the voice from 3 similar distractor voices. The results showed that verbal overshadowing impaired accuracy of recognition based on gut feeling, suggesting an overall verbal overshadowing for voice recognition. Due to the forensic relevance of voices heard over the telephone and harassing phone calls that are often a problem for police, Perfect, Hunt, and Harris (2002) examined the influence of three factors on accuracy and confidence in voice recognition from a line-up. 
They expected to find an effect, because voice represents a class of stimuli that is difficult to describe verbally. This meets Schooler et al.'s (1997) modality mismatch criterion, meaning that describing the speakers age, gender, or accent is difficult, making voice recognition susceptible to the verbal overshadowing phenomenon. It was found that the method of memory encoding had no impact on performance, and that hearing a telephone voice reduced confidence but did not affect accuracy. They also found that providing a verbal description impaired accuracy but had no effect on confidence. The data showed an effect of verbal overshadowing in voice recognition and provided yet another disassociation between confidence and performance. Although there was a difference in confidence level, witnesses were able to identify voices over the telephone as accurately as voices heard directly. The authors stated, "This effect is useful in the respect that it demonstrates that the lack of a confidence effect with verbal overshadowing is not due to low sensitivity of the confidence measure". (p. 979) The data from the study suggested that the main effect of verbal overshadowing was seen mainly in the telephone voices. They also stated, "However, because of the statistical limitations, it is perhaps best not to over-interpret this finding until it is replicated in a larger sample". (P.979) Perfect, Hunt, and Harris (2002) did a small-scale study that showed a reliable verbal overshadowing effect on voice identification, thus confirming previous research that showed verbally describing a to-be-recognized (non-verbal) stimulus leads to decrease in recognition accuracy without reducing confidence. This disassociation between performance and confidence offers scope to test theoretical accounts of the verbal overshadowing phenomena, and it is an issue that has been neglected so far. A more recent study by Wilson, Seale-Carlisle, and Mickes (2017) found that confidence is predictive of accuracy in verbal overshadowing. They found that high confidence identifications are lower in accuracy compared to what was observed in the lineups. Other results from their study concluded that police should encourage reporting crimes immediately and take down descriptions of perpetrators as soon as possible in order to reduce the effects of verbal overshadowing. Recognition criteria The verbal overshadowing effect may effect changes in recognition criteria rather than in processing style or underlying memory. One explanation for the effect is based on a shift in a person's recognition criteria, or increased hesitancy in choosing someone from a lineup. Verbalization leads witnesses to use more precise or exact recognition criteria, therefore lowering identification rates, the phenomena can be captured by a shift in recognition criteria. Placement of recognition criteria affects performance. With conservative criteria, people are unwilling to identify anyone in a lineup, but with liberal criteria, identification rates are greater. The criterion effect is persistent and known to play a large part in recognition paradigms that allow voluntary responses, moreso when there is a tradeoff between quantity and accuracy. However, with the criterion effect controlled, confidence and perceived difficulty cannot account for the effects of verbalization. The verbal overshadowing effect is caused by strict recognition criteria that only affect identification rates when a "not present" response is possible. 
Clare and Lewandowsky (2004) found that verbalization can have a positive effect on identification. Although resulting in fewer correct identifications from lineups, it also reduced the number of false identification rates within lineups. Because of this, verbalization may protect innocent suspects from being falsely identified as perpetrators, suggesting that not all effects of verbalization on eyewitnesses are bad. With the standard description instructions in place, verbal overshadowing occurs because people have become more reluctant to identify someone from a lineup after they provide a description of the perpetrator. The criterion-effect explanation is one that accounts for a large verbal overshadowing effect in optional lineups, the absence of a verbal overshadowing effect with forced choice identification, and the advantageous effect of verbalization with optional-choice lineups. Retrieval-based interference theory Some researchers (e.g. Hunt and Carroll, Clare and Lewandowsky) hypothesize that verbal overshadowing is caused by retrieval-based interference, which is a change to the original memory trace made during verbalization of the given memory. Verbalizing of a non-verbal stimulus brings a verbal memory representation of that stimulus. When tested, this interferes with the original perceptual memory representation on which accurate recognition performance depends. The content of verbalization influences the outcome of identification by retroactively interfering with the original memory trace. Studies have found that when participants were forced to provide detailed descriptions of perpetrators, even if this involved guessing, a larger verbal overshadowing effect resulted than when they were discouraged from guessing. The inaccuracy is thought to be caused by descriptions interfering with the earlier memory. Finger and Pezdek (1999) took this as a retroactive interference effect on memory, caused by higher verbalization when participants completed a complicated rather than an easy task. Meissner and Brigham (2001) showed that when participants were allowed to guess when forced to provide a detailed description of a robber, the size of the verbal overshadowing effect was greater than when they were discouraged from guessing. Verbalization interfered with performance on new and familiar faces, but it did not interfere with priming. They argued that verbalization encouraged a long-lasting shift toward greater visual processing of individual facial features at the expense of more global visual processing (which is, for the most part, beneficial in recognition of faces and important for discriminating faces from non-faces in the face-decision task. Verbally describing a visual memory of a face can interfere with subsequent visual recognition of that face. Retrieval-based interference has been challenged by several findings and by the fact that verbal overshadowing can have an effect on description beyond a specific face, to other, undescribed faces. Recoding interference hypothesis According to the recoding interference hypothesis, verbalizing non-verbal memory makes the visual representations less accurate. The recoding interference hypothesis predicts that verbal overshadowing will occur more readily if participants generate less accurate verbal descriptions. 
A computational model detected core processing principles of the recoding interference hypothesis to simulate facial recognition, and it reproduced these behavioral phenomena as well as verbal overshadowing, providing an account as to why target description accuracy does not linearly predict recognition accuracy. The study addressed the replicability issues in verbal overshadowing. Hatano et al. (2015) stated: The study found that verbalization changed the nature of representations, rather than shifting the types of processing. Then, this recoded representation was used for (or affected) subsequent visual recognition and resulted in a failure in computing item-specific information. The phenomena from the study are predictable from the recoding interference hypothesis. It is explained as follows by the authors: The researchers found that generated verbal descriptions affected not just the polarity (familiarity) values of "old" items but also those of "new" items, as a function of how accurately the descriptions captured the distractor faces. This was the reason why target-description accuracy in isolation does not necessarily predict the effect of verbal overshadowing in a linear fashion. The model links verbal and visual code as in the face-recognition model. Verbal descriptions predicted recognition impairment and indicated that theory about interference in the memory domain is potentially useful when discussing the verbalization effect on non-verbal recognition. The study also noted that it did not consider whether this account can be extended to the visual-imagery domain beyond facial recognition. Brandimonte and Collina (2008) conducted three experiments that support a retrieval based, recording-interference explanation of verbal overshadowing. They found that the effects of verbal overshadowing can be attenuated by reactivating featural aspects of a stimulus, with any cues that trigger the activation of featural representations. Transfer inappropriate retrieval hypothesis The transfer inappropriate retrieval (TIR) hypothesis states that activation of the verbal processes needed for a description stops the following application of nonverbal face recognition processes without changing the memory of the perpetrator. TIR theory does not expect the accuracy of verbalization to be related to accuracy of identification. All that is required for verbal overshadowing to happen is the act of verbalizing, to put a description into words, which produces an expected processing shift to an inappropriate style. This explains the generalization of interference to non-described faces as well as to described ones. The TIR hypothesis assumes the original memory trace of the target remains and becomes temporarily inaccessible, rather than being permanently changed by verbalization. Verbalization leads cognitive processing to an inappropriate style, which stops retrieval of the non-verbal information needed for facial recognition. Verbal overshadowing comes solely because "verbalization indices inappropriate processing operations which area incommensurate with the processes required for successful recognition performance, that is, there is a transfer inappropriate processing shift". A shift is not tied to a particular item that has been previously coded, but rather generalizes to new stimuli that have not been encountered before. 
It is proposed that verbalization requires a shift to verbal processing, and this shift obstructs the application of non-verbal (face-specific) processing in the following face recognition test. The key difference from other hypotheses is whether an operation-specific representation is postulated or not. Also referred to as the processing shift account, TIR proposes that verbal overshadowing does not comes from conflicting representations, but from the effect that verbal description has in leading participants to switch from an appropriate to an inappropriate mode of processing, which carries over to the recognition test. Hunt and Carroll (2008) reported results supporting this interpretation. They found that, According to this hypothesis, participants in the proximal imagining condition were more impaired because they encoded the target face at first using a holistic, non-verbal, and non-analytic process. After, a switch was made to explicit, analytic processing, to write or verbalize their description. They then failed to revert to critical non-verbal, holistic mode, which is more effective in making a recognition decision. Contrary to that, participants in the distal imagining condition experienced less disruption from having verbalized the faces prior to the recognition test, adopting instead a distal time perspective, which is known to facilitate abstract or holistic thinking. (Forster et al., 2004) The study found a verbal overshadowing effect when participants were forced to engage in extensive verbalization by making them fill out a blank, lined page with a description of the previously seen face. This effect was not observed when participants were not forced to engage in this extensive verbalization. This suggested that prevention of verbal overshadowing in real life situations can be effected with a manipulation, such as encouraging global thinking. Consistent with TIR, a study by Dehon, Vanootighem, and Bredart (2013) showed the absence of correlation between descriptor accuracy, vocabulary performance, and correct identification. Neither quality nor quantity of descriptors affected identification accuracy, which was only impacted by the act of verbally describing a face. The results held for the immediate test condition, "post-encoding delay", consistent with the hypothesis. This suggested that the content of description is irrelevant and that the verbal overshadowing effect occurred due to a shift in featural processing caused by the verbalization, thus supporting the TIR. Westerman and Larsen (1997) suggested that verbal overshadowing effects are more pervasive that initially believed. They showed that verbal description can impair face recognition when the described object is not the recognition target. This extends findings from Dodson et al. (1997) to a situation where face recognition is impaired by the description of a non-face object. Consistent with TIR, this points to a general shift in face-recognition processing as a result of producing a description of any object. Some problems with this theory are that identification accuracy can be affected even if no retrieval operations are involved and that unrelated nonverbal processes can alleviate the effect of verbalization on identification. Signal detection theory Signal detection theory views impaired recognition as caused by a reduced ability to discriminate, that reduced discriminability in test suspects is a consequence of describing the robber or perpetrator. Who is susceptible? 
The verbal overshadowing effect on face identification was found in children as well as adults, with neither accuracy of description, delay, nor target presence in lineup being found to be associated with accuracy. Age increased the number of accurate descriptors produced but not incorrect ones, suggesting that children produce less detailed but not less accurate descriptions than adults. This study holds for 7-8, 10-11, and 13-14 year olds. Further research is needed to detect under which conditions this phenomenon may hold, even in more "ecologically valid" situations. Older adults have been found to be less affected than young adults by the verbal overshadowing effect. In a study similar to Schooler et al. (1990), Kinlen, Adams-Price, and Henley (2007) showed the following: The findings suggested that verbal expertise, as seen in older adults, may decrease effects of verbal overshadowing in a face recognition task. See also Articulatory suppression Cognitive interview Decline effect References Cognitive psychology Face perception Memory Speech recognition
Verbal overshadowing
Biology
4,059
6,770,335
https://en.wikipedia.org/wiki/Car%E2%80%93Parrinello%20molecular%20dynamics
Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method. The CPMD method is one of the major methods for calculating ab-initio molecular dynamics (ab-initio MD or AIMD). Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics. In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms. AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effect. Therefore, Ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources. The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of the nuclei. CPMD and BOMD are different types of AIMD. However, whereas BOMD treats the electronic structure problem within the time-independent Schrödinger equation, CPMD explicitly includes the electrons as active degrees of freedom, via (fictitious) dynamical variables. The software is a parallelized plane wave / pseudopotential implementation of density functional theory, particularly designed for ab initio molecular dynamics. Car–Parrinello method The Car–Parrinello method is a type of molecular dynamics, usually employing periodic boundary conditions, planewave basis sets, and density functional theory, proposed by Roberto Car and Michele Parrinello in 1985 while working at SISSA, who were subsequently awarded the Dirac Medal by ICTP in 2009. In contrast to Born–Oppenheimer molecular dynamics wherein the nuclear (ions) degree of freedom are propagated using ionic forces which are calculated at each iteration by approximately solving the electronic problem with conventional matrix diagonalization methods, the Car–Parrinello method explicitly introduces the electronic degrees of freedom as (fictitious) dynamical variables, writing an extended Lagrangian for the system which leads to a system of coupled equations of motion for both ions and electrons. 
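The coupled propagation of ionic positions and fictitious electronic degrees of freedom described above can be illustrated with a schematic velocity-Verlet loop. The following is a minimal sketch, not the CPMD package itself: the toy function energy_and_gradients, its harmonic "energy functional", and the chosen masses and time step are placeholder assumptions standing in for a real plane-wave Kohn–Sham engine, and the orbital orthonormalisation step of a real calculation is omitted because the toy model uses plain coefficients rather than wavefunctions.

```python
import numpy as np

# Toy Car-Parrinello-style propagation: a nuclear coordinate R and a small
# vector of "orbital coefficients" c are advanced together with velocity
# Verlet. The energy below is a stand-in harmonic model, NOT a Kohn-Sham
# functional; it only illustrates how a small fictitious mass mu keeps c
# close to its R-dependent "ground state" c0(R) while R evolves.

def energy_and_gradients(R, c):
    """Placeholder energy E(R, c) with analytic gradients."""
    k, g = 1.0, 50.0
    c0 = np.tanh(R)                          # "ground-state" coefficients for this R
    E = 0.5 * k * R**2 + 0.5 * g * np.sum((c - c0) ** 2)
    dE_dR = k * R - g * np.sum(c - c0) * (1.0 - np.tanh(R) ** 2)
    dE_dc = g * (c - c0)
    return E, dE_dR, dE_dc

def cpmd_toy(steps=2000, dt=0.01, M=10.0, mu=0.05):
    """Propagate R and c with velocity Verlet; mu << M mimics adiabaticity."""
    R = 1.0
    c = np.tanh(R) * np.ones(1)              # start on the toy ground state
    vR, vc = 0.0, np.zeros_like(c)
    E, gR, gc = energy_and_gradients(R, c)
    for _ in range(steps):
        vR -= 0.5 * dt * gR / M              # half kick (force = -gradient)
        vc -= 0.5 * dt * gc / mu             # fictitious electron dynamics
        R += dt * vR                          # drift
        c += dt * vc
        E, gR, gc = energy_and_gradients(R, c)
        vR -= 0.5 * dt * gR / M              # second half kick
        vc -= 0.5 * dt * gc / mu
    return R, float(E)

if __name__ == "__main__":
    R_final, E_final = cpmd_toy()
    print(f"final R = {R_final:.4f}, final E = {E_final:.6f}")
```

In an actual Car–Parrinello run the analogous roles are played by the ionic positions, the plane-wave expansion coefficients of the Kohn–Sham orbitals, and the fictitious electronic mass, with an additional constraint step enforcing orbital orthonormality at every iteration.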
In this way, an explicit electronic minimization at each time step, as done in Born–Oppenheimer MD, is not needed: after an initial standard electronic minimization, the fictitious dynamics of the electrons keeps them on the electronic ground state corresponding to each new ionic configuration visited along the dynamics, thus yielding accurate ionic forces. In order to maintain this adiabaticity condition, it is necessary that the fictitious mass of the electrons is chosen small enough to avoid a significant energy transfer from the ionic to the electronic degrees of freedom. This small fictitious mass in turn requires that the equations of motion are integrated using a smaller time step than the one (1–10 fs) commonly used in Born–Oppenheimer molecular dynamics. Currently, the CPMD method can be applied to systems that consist of a few tens or hundreds of atoms and access timescales on the order of tens of picoseconds. General approach In CPMD the core electrons are usually described by a pseudopotential and the wavefunctions of the valence electrons are approximated by a plane wave basis set. The ground state electronic density (for fixed nuclei) is calculated self-consistently, usually using the density functional theory method. Kohn–Sham equations are often used to calculate the electronic structure, where electronic orbitals are expanded in a plane-wave basis set. Then, using that density, forces on the nuclei can be computed, to update the trajectories (using, e.g. the Verlet integration algorithm). In addition, however, the coefficients used to obtain the electronic orbital functions can be treated as a set of extra spatial dimensions, and trajectories for the orbitals can be calculated in this context. Fictitious dynamics CPMD is an approximation of the Born–Oppenheimer MD (BOMD) method. In BOMD, the electrons' wave function must be minimized via matrix diagonalization at every step in the trajectory. CPMD uses fictitious dynamics to keep the electrons close to the ground state, preventing the need for a costly self-consistent iterative minimization at each time step. The fictitious dynamics relies on the use of a fictitious electron mass (usually in the range of 400 – 800 a.u.) to ensure that there is very little energy transfer from nuclei to electrons, i.e. to ensure adiabaticity. Any increase in the fictitious electron mass resulting in energy transfer would cause the system to leave the ground-state BOMD surface. Lagrangian $\mathcal{L} = \frac{1}{2}\sum_I M_I \dot{\mathbf{R}}_I^2 + \frac{\mu}{2}\sum_i \int d\mathbf{r}\,|\dot{\psi}_i(\mathbf{r},t)|^2 - E[\{\psi_i\},\{\mathbf{R}_I\}]$, where $\mu$ is the fictitious mass parameter; E[{ψi},{RI}] is the Kohn–Sham energy density functional, which outputs energy values when given Kohn–Sham orbitals and nuclear positions. Orthogonality constraint $\int d\mathbf{r}\,\psi_i^*(\mathbf{r},t)\,\psi_j(\mathbf{r},t) = \delta_{ij}$, where δij is the Kronecker delta. Equations of motion The equations of motion are obtained by finding the stationary point of the Lagrangian under variations of ψi and RI, with the orthogonality constraint: $\mu\,\ddot{\psi}_i(\mathbf{r},t) = -\frac{\delta E}{\delta \psi_i^*(\mathbf{r},t)} + \sum_j \Lambda_{ij}\,\psi_j(\mathbf{r},t)$ and $M_I\,\ddot{\mathbf{R}}_I = -\nabla_{\mathbf{R}_I} E$, where Λij is a Lagrangian multiplier matrix to comply with the orthonormality constraint. Born–Oppenheimer limit In the formal limit where μ → 0, the equations of motion approach Born–Oppenheimer molecular dynamics: $0 = -\frac{\delta E}{\delta \psi_i^*(\mathbf{r},t)} + \sum_j \Lambda_{ij}\,\psi_j(\mathbf{r},t)$ and $M_I\,\ddot{\mathbf{R}}_I = -\nabla_{\mathbf{R}_I} E$. Software packages There are a number of software packages available for performing AIMD simulations. Some of the most widely used packages include: CP2K: an open-source software package for AIMD. Quantum Espresso: an open-source package for performing DFT calculations. It includes a module for AIMD. VASP: a commercial software package for performing DFT calculations. It includes a module for AIMD. 
Gaussian: a commercial software package that can perform AIMD. NWChem: an open-source software package for AIMD. LAMMPS: an open-source software package for performing classical and ab initio MD simulations. SIESTA: an open-source software package for AIMD. Application Studying the behavior of water near a hydrophobic graphene sheet. Investigating the structure and dynamics of liquid water at ambient temperature. Solving the heat transfer problems (heat conduction and thermal radiation) between Si/Ge superlattices. Probing the proton transfer along 1D water chains inside carbon nanotubes. Evaluating the critical point of aluminum. Predicting the amorphous phase of the phase-change memory material GeSbTe. Studying the combustion process of lignite-water systems. Computing and analyzing the IR spectra in terms of H-bond interactions. See also Computational physics Density functional theory Computational chemistry Molecular dynamics Quantum chemistry Ab initio quantum chemistry methods Quantum chemistry computer programs List of software for molecular mechanics modeling List of quantum chemistry and solid-state physics software CP2K References External links Car-Parrinello Molecular Dynamics about [CP2K Open Source Molecular Dynamics ] Density functional theory Density functional theory software Computational chemistry Computational chemistry software Molecular dynamics Molecular dynamics software Quantum chemistry Theoretical chemistry Mathematical chemistry Simulation software Scientific simulation software Physics software Science software Algorithms Computational physics Electronic structure methods
Car–Parrinello molecular dynamics
Physics,Chemistry,Mathematics
1,646
63,992,561
https://en.wikipedia.org/wiki/Sulfate%20nitrates
The sulfate nitrates are a family of double salts that contain both sulfate and nitrate ions (SO42−, NO3−). They are in the class of mixed anion compounds. A few rare minerals are in this class. Two sulfate nitrates are anthropogenic compounds, made accidentally as a result of human activity: in fertilizers that are a mix of ammonium nitrate and ammonium sulfate, and in the atmosphere, where polluting ammonia, nitrogen dioxide, and sulfur dioxide react with the oxygen and water there to form solid particles. The nitrate group (NO3−) can act as a ligand (the nitrato ligand), and complexes containing it can form salts with sulfate. List References Sulfates Nitrates Mixed anion compounds
Sulfate nitrates
Physics,Chemistry
153
21,922,970
https://en.wikipedia.org/wiki/Teichm%C3%BCller%E2%80%93Tukey%20lemma
In mathematics, the Teichmüller–Tukey lemma (sometimes named just Tukey's lemma), named after John Tukey and Oswald Teichmüller, is a lemma that states that every nonempty collection of finite character has a maximal element with respect to inclusion. Over Zermelo–Fraenkel set theory, the Teichmüller–Tukey lemma is equivalent to the axiom of choice, and therefore to the well-ordering theorem, Zorn's lemma, and the Hausdorff maximal principle. Definitions A family of sets $\mathcal{F}$ is of finite character provided it has the following properties: For each $X \in \mathcal{F}$, every finite subset of $X$ belongs to $\mathcal{F}$. If every finite subset of a given set $X$ belongs to $\mathcal{F}$, then $X$ belongs to $\mathcal{F}$. Statement of the lemma Let $A$ be a set and let $\mathcal{F} \subseteq \mathcal{P}(A)$. If $\mathcal{F}$ is of finite character and $X \in \mathcal{F}$, then there is a maximal $Y \in \mathcal{F}$ (according to the inclusion relation) such that $X \subseteq Y$. Applications In linear algebra, the lemma may be used to show the existence of a basis. Let V be a vector space. Consider the collection of linearly independent sets of vectors. This is a collection of finite character. Thus, a maximal set exists, which must then span V and be a basis for V. Notes References Brillinger, David R. "John Wilder Tukey" Families of sets Order theory Axiom of choice Lemmas in set theory
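The linear-algebra application above uses, without proof, the fact that the family of linearly independent subsets of V has finite character. A minimal verification of that step is sketched below as a standalone LaTeX fragment; the symbol $\mathcal{F}$ for the family is notation introduced here, not from the article.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Why the family of linearly independent subsets of a vector space V
% has finite character (the step used in the basis-existence argument).
Let $\mathcal{F}=\{\,S\subseteq V : S\ \text{is linearly independent}\,\}$.

\begin{enumerate}
  \item If $S\in\mathcal{F}$ and $T\subseteq S$ is finite, then $T$ is linearly
        independent, since any dependence among elements of $T$ would be a
        dependence among elements of $S$. Hence $T\in\mathcal{F}$.
  \item Conversely, suppose every finite subset of $S$ lies in $\mathcal{F}$.
        A nontrivial dependence $c_1 v_1+\dots+c_n v_n=0$ with
        $v_1,\dots,v_n\in S$ would already be a dependence in the finite
        subset $\{v_1,\dots,v_n\}$, a contradiction. Hence $S\in\mathcal{F}$.
\end{enumerate}

By the Teichm\"uller--Tukey lemma, $\mathcal{F}$ has a maximal element $B$;
maximality forces $B$ to span $V$, so $B$ is a basis.

\end{document}
```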
Teichmüller–Tukey lemma
Mathematics
286
74,149,338
https://en.wikipedia.org/wiki/Einsteinium%20hexafluoride
Einsteinium hexafluoride is a binary inorganic chemical compound of einsteinium and fluorine with the chemical formula . This is a hypothetical compound—its existence has been predicted theoretically, but the compound has yet to be isolated. Physical properties It is unlikely that the compound is stable. References Einsteinium compounds Hexafluorides Hypothetical chemical compounds Actinide halides
Einsteinium hexafluoride
Chemistry
78
996,739
https://en.wikipedia.org/wiki/Frances%20Power%20Cobbe
Frances Power Cobbe (4 December 1822 – 5 April 1904) was an Anglo-Irish writer, philosopher, religious thinker, social reformer, anti-vivisection activist and leading women's suffrage campaigner. She founded a number of animal advocacy groups, including the National Anti-Vivisection Society (NAVS) in 1875 and the British Union for the Abolition of Vivisection (BUAV) in 1898, and was a member of the executive council of the London National Society for Women's Suffrage. Life Frances Power Cobbe was a member of the prominent Cobbe family, descended from Archbishop Charles Cobbe, Primate of Ireland. She was born in Newbridge House in the family estate in present-day Donabate, County Dublin. Cobbe was educated mainly at home by governesses with a brief period at a school in Brighton. She studied English literature, French, German, Italian, music, and the Bible. She then read heavily in the family library especially in religion and theology, joined several subscription libraries, and studied Greek and geometry with a local clergyman. She organised her own study schedule and ended up very well educated. In the late 1830s Cobbe went through a crisis of faith. The humane theology of Theodore Parker, an American transcendentalist and abolitionist, restored her faith (she went on later to edit Parker's collected writings). She began to set out her ideas in what became an Essay on True Religion. Her father disapproved and for a while expelled her from the home. She kept studying and writing anyway and eventually revised the Essay into her first book, the Essay on Intuitive Morals. The first volume came out anonymously in 1855. In 1857 Cobbe's father died and left her an annuity. She took the chance to travel on her own around parts of Europe and the Near East. This took her to Italy where she met a community of similarly independent women: Isa Blagden with whom she went on briefly to share a house, the sculptor Harriet Hosmer, the poet Elizabeth Barrett Browning, the painter Rosa Bonheur, the scientist Mary Somerville and the Welsh sculptor who became her partner, Mary Lloyd (sculptor). In letters and published writing, Cobbe referred to Lloyd alternately as "husband," "wife," and "dear friend." Cobbe also formed a lasting attachment to Italy and went there regularly. She contributed many newspaper and journal articles on Italy, some of which became her 1864 book Italics. Returning to England Cobbe tried working at the Red Lodge Reformatory and living with the owner, Mary Carpenter, from 1858 to 1859. The turbulent relationship between the two meant that Cobbe left the school and moved out. Cobbe now focused on writing and began to publish her first articles in Victorian periodicals. She quickly became very successful and was able to support herself by writing. She and Lloyd began to live together in London. Cobbe kept up a steady stream of journal essays, many of them reissued as books. She became a leader writer for the London newspaper The Echo (London). Cobbe became involved in feminist campaigns for the vote, for women to be admitted to study at university on the same terms as men, and for married women's property rights. She was on the executive council of the London National Society for Women's Suffrage. Her 1878 essay Wife-Torture in England influenced the passage of the 1878 Matrimonial Causes Act, which gave women of violent husbands the right to a legal separation. 
Cobbe became very concerned about the rise of animal experimentation or vivisection and founded the Victoria Street Society, which later became the National Anti-Vivisection Society, in 1875. The organisation campaigned for laws to regulate vivisection. She and her allies had already prepared a draft bill, Henniker's Bill, presented to parliament in 1875. They proposed regular inspections of licensed premises and that experimenters must always use anaesthetics except under time-limited personal licences. In response, Charles Darwin, Thomas Henry Huxley, John Burdon Sanderson and others drafted a rival Playfair's Bill which proposed a lighter system of regulation. Ultimately the Cruelty to Animals Act 1876 introduced a compromise system. Cobbe found it so watered-down that she gave up on regulation and began to campaign for the abolition of vivisection. The anti-vivisection movement became split between the abolitionists and the moderates. Cobbe later came to think the Victoria Street Society had become too moderate and started the British Union for the Abolition of Vivisection in 1898. In 1884, Cobbe and Lloyd retired to Hengwrt in Wales. Cobbe stayed there after Lloyd died in 1896. Cobbe continued to publish and campaign right until her death. However, her friend, the writer Blanche Atkinson, wrote, "The sorrow of Miss Lloyd's death changed the whole aspect of existence for Miss Cobbe. The joy of life had gone. It had been such a friendship as is rarely seen – perfect in love, sympathy, and mutual understanding." They are buried together at Saint Illtyd Church Cemetery, Llanelltyd, Gwynedd, Wales. In her will, Cobbe bequeathed all the copyrights of her works to Atkinson. Thought and ideas In Cobbe's first book An Essay on Intuitive Morals, vol. 1, she combined Kantian ethics, theism, and intuitionism. She had encountered Kant in the early 1850s. She argued that the key concept in ethics is duty, that duties presuppose a moral law, and a moral law presupposes an absolute moral legislator - God. She argued that we know by intuition what the law requires us to do. We can trust our intuition because it is "God's tuition". We can do what the law requires because we have noumenal selves as well as being in the world of phenomena. She rejected eudaimonism and utilitarianism. Cobbe applied her moral theory to animal rights, first in The Rights of Man and the Claims of Brutes from 1863. She argued that humans may do harm to animals in order to satisfy real wants but not from mere "wantonness". For example, humans may eat meat but not kill birds for feathers to decorate hats. The harm or pain inflicted must be the minimum possible. For Cobbe this set limits to vivisection; for example, it must always be done under anaesthesia. Cobbe engaged with Darwinism. She had met the Darwin family in 1868. Emma Darwin liked her, saying "Miss Cobbe was very agreeable." Cobbe persuaded Charles Darwin to read Immanuel Kant's Metaphysics of Morals. Darwin had a review copy of Descent of Man sent to her (as well as to Alfred Russel Wallace and St. George Jackson Mivart). This led to her critique of Darwin, Darwinism in Morals, in The Theological Review in April 1871. Cobbe thought morality could not be explained by evolution and needed reference to God. Darwin could show why we do feel sympathy for others, but not why we ought to feel it. However, the debate with Darwin led Cobbe to revise her views about duties to animals. 
She started to think that sympathy was central and we must above all treat animals in ways that show sympathy for them. Vivisection violated this. She also introduced a distinction between sympathy and what she called heteropathy, similar to hostility or cruelty. She thought we naturally have cruel instincts that found an outlet in vivisection. Religion in contrast cultivated sympathy, but science was undermining it. This became part of a wide-ranging account of the direction of European civilisation. These were just some of the huge range of philosophical topics on which Cobbe wrote. They included aesthetics, philosophy of mind, philosophy of religion, history, pessimism, life after death, and many more. Her books included The Pursuits of Women (1863), Essays New and Old on Ethical and Social Subjects (1865), Darwinism in Morals, and other Essays (1872), The Hopes of the Human Race (1874), The Duties of Women (1881), The Peak in Darien, with some other Inquiries touching concerns of the Soul and the Body (1882), The Scientific Spirit of the Age (1888) and The Modern Rack: Papers on Vivisection (1889), as well as her autobiography. Legacy In the late nineteenth century Cobbe was very well known for her philosophical views. For example, Margaret Oliphant in The Victorian Age of English Literature, when discussing philosophy, said "There are few ladies to be found among these ranks, but the name of Miss Frances Power Cobbe may be mentioned as that of a clear writer and profound thinker". A portrait of her is included in a mural by Walter P. Starmer unveiled in 1921 in the church of St Jude-on-the-Hill in Hampstead Garden Suburb, London. Her name and picture (and those of 58 other women's suffrage supporters) are on the plinth of the statue of Millicent Fawcett in Parliament Square, London, unveiled in 2018. Her name is listed (as F. Power Cobbe) on the Reformers’ Memorial in Kensal Green Cemetery in London. The Animal Theology professorship at the Graduate Theological Foundation is named after Cobbe. Her philosophical contribution is now being rediscovered as part of the recovery of women in the history of philosophy. Bibliography The intuitive theory of morals. Theory of morals, 1855 Essays on the pursuits of Woman, 1863 The red flag in John bull's eyes, 1863 The cities of the past, 1864 Broken Lights: an Inquiry into the Present Condition and Future Prospects of Religious Faith, 1864 Religious duty, 1864 The confessions of a lost Dog, 1867 Dawning Lights : an Inquiry Concerning the Secular Results of the New Reformation, 1867 Criminals, Idiots, Women, and Minors, 1869 Alone to the Alone: Prayers for Theists, 1871 Darwinism in Morals, and Other Essays, 1872 The Hopes of the Human Race, 1874 The Moral Aspects of Vivisection, 1875 The Age of Science: A Newspaper of the Twenthies Century, 1877 The Duties of Women, 1881 The Peak in Darien, 1882 Life of Frances Power Cobbe as told by herself. Vol. I; Vol. II, 1894 See also Brown Dog affair Lizzy Lind af Hageby Caroline Earle White List of animal rights advocates Women and animal advocacy References Further reading Frances Power Cobbe, The Modern Rack: Papers on Vivisection. London: Swan Sonnenschein, 1889. Buettinger, Craig. "Women and antivivisection in late nineteenth century America", Journal of Social History, Vol. 30, No. 4 (Summer, 1997), pp. 857–872. Caine, Barbara. Victorian feminists. Oxford 1992 Hamilton, Susan. Frances Power Cobbe and Victorian Feminism. Palgrave Macmillan, 2006. Mitchell, Sally. 
Frances Power Cobbe: Victorian Feminist, Journalist, Reformer. University of Virginia Press, 2004. Rakow, Lana and Kramarae, Cheris. The Revolution in Words: Women's Source Library. London, Routledge 2003 Stone, Alison. Entries on Cobbe's philosophical thought, Encyclopedia of Concise Concepts by Women in Philosophy Encyclopedia of Concise Concepts by Women Philosophers - History Of Women Philosophers Stone, Alison (2022). Frances Power Cobbe. Cambridge University Press. Lori Williamson, Power and protest : Frances Power Cobbe and Victorian society. 2005. . A 320-page biography. Victorian feminist, social reformer and anti-vivisectionist, discussion on BBC Radio 4's Woman's Hour, 27 June 2005 State University of New York – Frances Power Cobbe (1822–1904) The archives of the British Union for the Abolition of Vivisection (ref U DBV) are held at the Hull History Centre. Details of holdings are on its online catalogue. External links Frances Power Cobbe archives at the National Library of Wales Frances 1822 births 1904 deaths British anti-vivisectionists Feminist writers Irish animal rights activists Irish feminists Irish non-fiction writers Irish women non-fiction writers Irish suffragists LGBTQ feminists LGBTQ philosophers Irish lesbian writers Non-Darwinian evolution People from Fingal Women of the Victorian era Irish women writers British social reformers British women philosophers British philosophers Irish women's rights activists 19th-century Irish women writers Irish women philosophers 19th-century Irish philosophers 19th-century British women writers Irish anti-vivisectionists Activists from Fingal
Frances Power Cobbe
Biology
2,566
7,070,302
https://en.wikipedia.org/wiki/Alagoas%20curassow
The Alagoas curassow (Mitu mitu) is a glossy-black, pheasant-like bird. It was formerly found in forests in Northeastern Brazil in what is now the states of Pernambuco and Alagoas, which is the origin of its common name. It is now extinct in the wild; there are about 130 individuals in captivity. German naturalist Georg Marcgrave first identified the Alagoas curassow in 1648 in its native range. Subsequently, the origin and legitimacy of the bird began to be questioned due to the lack of specimens. An adult female curassow was rediscovered in 1951, in the coastal forests of Alagoas. The Mitu mitu was then accepted as a separate species. At that time fewer than 60 birds were left in the wild, in the forests around São Miguel dos Campos. Several authors in the 1970s brought to light the growing destruction of its habitat and the rarity of the species. Even with these concerns, the last large forest remnants which contained native Mitu mitu were demolished for sugarcane agriculture. Description The Alagoas curassow measures approximately in length. Feathers covering its body are black and glossy, with a blue-purple hue. Specimens of Mitu mitu also have a large, bright red beak, flattened at its sides, with a white tip. The same red coloration is found on its legs and feet. The tips of its tail feathers are light brown in color, with chestnut-colored feathers under the tail. It has a unique grey-colored, crescent-shaped patch of bare skin covering its ears, a character not found in other curassows. This distinct coloration separates M. mitu from other curassow species. Sexual dimorphism is not pronounced: females tend to be lighter in color and slightly smaller in size. The birds can live for more than twenty-four years in captivity. Video recordings in captivity show that this cracid sporadically makes a high-pitched chirping sound. Population Since 1977, the entire Mitu mitu population has been in captivity. The population numbered 44 in 2000, and by 2008, there were 130 birds in two aviaries. About 35% of the birds were hybrids with M. tuberosum. Habitat and ecology Mitu mitu's native habitat is subtropical/tropical moist lowland primary forest, where it was known to consume fruit of Phyllanthus, Eugenia and "mangabeira." It is extinct and extirpated in its native range in Alagoas and Pernambuco states, Northeastern Brazil. Breeding habits Due to their absence in the wild and lack of study previously conducted on these cracids before their extinction in the wild, not much is known about their breeding habits outside of captivity. Alagoas curassow females begin reproducing at about 2 years old. In captivity, they produce about 2–3 eggs each year. There has been greater genetic variability among the Alagoas curassow since 1990, when hybrid breeding programs were introduced; Alagoas curassows were bred with closely related razor-billed curassows. Taxonomy The Alagoas curassow was first mentioned by German naturalist Georg Marcgrave in his work Historia Naturalis Brasiliae which was published in 1648. Because of the lack of information and specimens, it was considered conspecific with the common razor-billed curassow, until its rediscovery in 1951 in the Alagoas lowland forests, Brazil. Following the review of Pereira & Baker (2004), it is today believed to be a fairly basal lineage of its genus, related to the crestless curassow, the other Mitu species with brown eumelanin in the tail tip. 
Its lineage has been distinct since the Miocene-Pliocene boundary (approximately 5 million years ago), when it became isolated in refugia in the Atlantic Forest. Conservation efforts As this species is extinct in the wild, the total population of 130 birds only persists in two separate captive populations. A reintroduction plan is being organized, though it faces challenges. Even if the population could be bred to healthy numbers, the species would need to be reintroduced into a large natural geographical area. Human expansion and overpopulation has caused nearly all of the Alagoas curassow's natural habitat to be destroyed. One potential reintroduction site has been proposed. Precautions would have to be taken in order to prevent illegal hunting of the species after reintroduction. Status The Alagoas curassow became extinct in the wild due to deforestation and hunting. The last wild Alagoas curassow was seen and killed in 1984, or possibly 1987 or 1988. The captive population has been extensively hybridized with the razor-billed curassow, and there are several dozen purebred birds left. These are being maintained and bred in two privately owned professional aviaries in Brazil mainly due to lack of official interest owing to the long-standing doubt about the taxon's validity. Diet and interactions The Alagoas curassow is known to consume a diet of fruits and nuts. Although not much information is known about this species' interactions and behavior in the wild, the stomach contents of these birds were found to contain fruits specifically from the castelo tree. It has also been said that they enjoy fruits from the plant Clarisia racemosa. Generally, the female birds weigh less than the males and lay about 2–3 eggs a year. The average lifespan in captivity is about 24 years. The lack of knowledge about their behavior in the wild makes it difficult to know how the birds interact with other species. The impact of their introduction on interactions with other species is difficult to predict. For instance, the Chamek spider monkey also eats Clarisia racemosa, which could lead to competition with the Alagoas curassow. A lack of genetic diversity is another potential concern. Scientists have been controlling the sexual interactions within the species by pairing certain birds together in order to reduce hybridization and maintain the original Alagoas curassow. Future of the species With the objective to preserve the species and to increase genetic variability in the population, the "original" stock had their DNA examined by scientists in order to guide future pairings. Once a captive population has been successfully created, they can start being reintroduced back into the wild. The more ideal locations would be large forest remnants, such as those located at Usina Utinga-Leão and Usina Serra Grande. Footnotes References BirdLife International (2000): Alagoas Curassow. In: Threatened Birds of the World: 132. Lynx Edicions & BirdLife International, Barcelona & Cambridge, UK. Alagoas Curassow (Mitu Mitu). Arkive. Web. 24 October 2013. Kirwan, Guy M. Mitu Mitu. Neotropical Birds Online. Web. 24 October 2013. Further reading External links BirdLife Species Factsheet Video of Alagoas curassow in captivity Mitu (bird) Curassows Birds of the Atlantic Forest Endemic birds of Brazil Birds described in 1766 Species extinct in the wild Taxa named by Carl Linnaeus
Alagoas curassow
Biology
1,492
26,165,818
https://en.wikipedia.org/wiki/HD%20129445
HD 129445 (HIP 72203; LTT 5856) is a star located in the southern constellation Circinus. It has an apparent magnitude of 8.80, making it faintly visible in binoculars but not to the naked eye. The object is located relatively close, at a distance of 219 light-years based on Gaia DR3 parallax measurements, but it is drifting away with a spectroscopic radial velocity of . It has an absolute magnitude of +4.73, which is similar to the Sun's absolute magnitude of 4.83. Physical characteristics HD 129445 has a stellar classification of G6 V, indicating that it is an ordinary G-type main-sequence star like our Sun, albeit a bit cooler. It has 106% the mass of the Sun and 118% the radius of the Sun. It radiates 1.23 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a yellow hue when viewed in the night sky. HD 129445 is extremely metal-enriched, with an iron abundance more than twice that of the Sun, and it spins slowly with a projected rotational velocity of . It is slightly older than the Sun at an age of 4.94 billion years. Planetary system The star was observed by the Magellan Planet Search Program due to its absolute visual magnitude and high metallicity. The Magellan program conducted 17 Doppler velocity measurements, which span a full orbital period. The results led the program to detect a planet dubbed HD 129445 b. In 2023, the inclination and true mass of HD 129445 b were determined via astrometry. See also HD 152079 HD 164604 HD 175167 HD 86226 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Circinus CD-68 01403 129445 072203
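The apparent magnitude, absolute magnitude, and distance quoted above can be cross-checked with the distance modulus relation, $m - M = 5\log_{10}(d/10\,\mathrm{pc})$. This is only an illustrative back-of-the-envelope check that ignores interstellar extinction: with $m = 8.80$ and $M = 4.73$, the modulus is $4.07$, giving $d = 10^{(4.07+5)/5} \approx 65\ \mathrm{pc} \approx 213$ light-years, consistent to within rounding and extinction effects with the 219 light-years derived from the Gaia DR3 parallax.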
HD 129445
Astronomy
394
51,488,947
https://en.wikipedia.org/wiki/Faraday%20House
Faraday House Electrical Engineering College was created to train engineers in power generation and distribution. It was set up at a time before engineering was widely taught at universities, founded as an adjunct to a commercial company for supplying towns with electricity. It operated between 1890 and 1967, mainly at Southampton Row, London. Six of its alumni have been presidents of the Institution of Electrical Engineers. The Faraday House curriculum covered the whole electrical field, at a level less theoretical than the City and Guilds Institute at South Kensington, with the four-year course of study resulting in a D.F.H. (Diploma of Faraday House). The first year was spent at the college, then eight months at a mechanical engineering works, followed by five more terms at the college, and finally a period spent as a graduate apprentice at an electrical engineering works. Examinations were supervised by the Institution of Electrical Engineers, and two senior scholarships were offered; the Faraday (75 guineas per annum), and the Maxwell (40 guineas per annum). At a 1992 symposium held in his honour, the microscopist Vernon Ellis Cosslett, who lectured at the college from 1935 to 1939, during an interview with Tom Mulvey, of the Department of Electronic Engineering and Applied Physics at Aston University, Birmingham, related: "... Faraday House... an 'Engineering College for the sons of Gentlemen'... was set up in the 1880s before electrical engineering was respectable at universities; the engineering industry set it up on their own account and funded it themselves. They had a grand man in charge, one Alexander Robinson, a man of some eminence... running the thing very well at a level we would now call HNC, Higher National Certificate, Higher National Diploma level." The building has been occupied by Syracuse University's study abroad program since 2005. References 1890 establishments in England 1967 disestablishments in England Defunct universities and colleges in London Educational institutions established in 1890 Electrical engineering departments Engineering education in the United Kingdom Syracuse University buildings
Faraday House
Engineering
415
15,172
https://en.wikipedia.org/wiki/Internet%20slang
Internet slang (also called Internet shorthand, cyber-slang, netspeak, digispeak or chatspeak) is a non-standard or unofficial form of language used by people on the Internet to communicate with one another. A popular example of Internet slang is "lol" meaning "laugh out loud". Since Internet slang is constantly changing, it is difficult to provide a standardized definition. However, it can be understood to be any type of slang that Internet users have popularized, and in many cases, have coined. Such terms often originate with the purpose of saving keystrokes or to compensate for character limit restrictions. Many people use the same abbreviations in texting, instant messaging, and social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as ingroup Internet memes rather than time savers. Many people also use Internet slang in face-to-face, real life communication. Creation and evolution Origins Internet slang originated in the early days of the Internet, with some terms predating the Internet. The earliest forms of Internet slang assumed people's knowledge of programming and commands in a specific language. Internet slang is used in chat rooms, social networking services, online games, video games and in the online community. Since 1979, users of communications networks like Usenet created their own shorthand. Motivations The primary motivation for using a slang unique to the Internet is to ease communication. However, while Internet slang shortcuts save time for the writer, they take twice as long for the reader to understand, according to a study by the University of Tasmania. On the other hand, similar to the use of slang in traditional face-to-face speech or written language, slang on the Internet is often a way of indicating group membership. Internet slang provides a channel which facilitates and constrains the ability to communicate in ways that are fundamentally different from those found in other semiotic situations. Many of the expectations and practices which we associate with spoken and written language are no longer applicable. The Internet itself is ideal for new slang to emerge because of the richness of the medium and the availability of information. Slang is also thus motivated by the "creation and sustenance of online communities". These communities, in turn, play a role in solidarity or identification or an exclusive or common cause. David Crystal distinguishes among five areas of the Internet where slang is used: the Web itself, email, asynchronous chat (for example, mailing lists), synchronous chat (for example, Internet Relay Chat), and virtual worlds. The electronic character of the channel has a fundamental influence on the language of the medium. Options for communication are constrained by the nature of the hardware needed in order to gain Internet access. Thus, productive linguistic capacity (the type of information that can be sent) is determined by the preassigned characters on a keyboard, and receptive linguistic capacity (the type of information that can be seen) is determined by the size and configuration of the screen. Additionally, both sender and receiver are constrained linguistically by the properties of the internet software, computer hardware, and networking hardware linking them. 
Electronic discourse refers to writing that "very often reads as if it were being spoken – that is, as if the sender were writing talking". Types of slang Internet slang does not constitute a homogeneous language variety; rather, it differs according to the user and type of Internet situation. Audience design occurs in online platforms, and therefore online communities can develop their own sociolects, or shared linguistic norms. Within the language of Internet slang, there is still an element of prescriptivism, as seen in style guides, for example Wired Style, which are specifically aimed at usage on the Internet. Even so, few users consciously heed these prescriptive recommendations on CMC (Computer-mediated communication), but rather adapt their styles based on what they encounter online. Although it is difficult to produce a clear definition of Internet slang, the following types of slang may be observed. This list is not exhaustive. Views Many debates go on about how the use of slang on the Internet influences language outside of the digital sphere. Even though the direct causal relationship between the Internet and language has yet to be proven by any scientific research, Internet slang has invited split views on its influence on the standard of language use in non-computer-mediated communications. Prescriptivists tend to have the widespread belief that the Internet has a negative influence on the future of language, and that it could lead to a degradation of standards. Some would even attribute any decline of standard formal English to the increase in usage of electronic communication. It has also been suggested that the linguistic differences between Standard English and CMC can have implications for literacy education. This is illustrated by the widely reported example of a school essay submitted by a Scottish teenager, which contained many abbreviations and acronyms likened to SMS language. There was great condemnation of this style by the mass media as well as educationists, who expressed that this showed diminishing literacy or linguistic abilities. On the other hand, descriptivists have counter-argued that the Internet allows better expressions of a language. Rather than established linguistic conventions, linguistic choices sometimes reflect personal taste. It has also been suggested that as opposed to intentionally flouting language conventions, Internet slang is a result of a lack of motivation to monitor speech online. Hale and Scanlon describe language in emails as being derived from "writing the way people talk", and that there is no need to insist on 'Standard' English. English users, in particular, have an extensive tradition of etiquette guides, instead of traditional prescriptive treatises, that offer pointers on linguistic appropriateness. Using and spreading Internet slang also adds to the cultural currency of a language. It is important to the speakers of the language due to the foundation it provides for identifying within a group, and also for defining a person's individual linguistic and communicative competence. The result is a specialized subculture based on its use of slang. In scholarly research, attention has, for example, been drawn to the effect of the use of Internet slang in ethnography, and more importantly to how conversational relationships online change structurally because slang is used. In German, there is already considerable controversy regarding the use of anglicisms outside of CMC. 
This situation is even more problematic within CMC, since the jargon of the medium is dominated by English terms. An extreme example of an anti-anglicisms perspective can be observed from the chatroom rules of a Christian site, which bans all anglicisms ("" [Using anglicisms is strictly prohibited!]), and also translates even fundamental terms into German equivalents. Journalism In April 2014, Gawkers editor-in-chief Max Read instituted new writing style guidelines banning internet slang for his writing staff. Internet slang has, however, gained traction in other publications ranging from BuzzFeed to The Washington Post, attracting attention from younger viewers. Clickbait headlines have particularly sparked attention, originating from the rise of BuzzFeed in the journalistic sphere, which ultimately led to an online landscape populated with social media references and a shift in language use. Use beyond computer-mediated communication Internet slang has crossed from being mediated by the computer into other non-physical domains. Here, these domains are taken to refer to any domain of interaction where interlocutors need not be geographically proximate to one another, and where the Internet is not primarily used. Internet slang is now prevalent in telephony, mainly through short messages (SMS) communication. Abbreviations and interjections, especially, have been popularized in this medium, perhaps due to the limited character space for writing messages on mobile phones. Another possible reason for this spread is the convenience of transferring the existing mappings between expression and meaning into a similar space of interaction. At the same time, Internet slang has also taken a place as part of everyday offline language, among those with digital access. The nature and content of online conversation is brought forward to direct offline communication through the telephone and direct talking, as well as through written language, such as in writing notes or letters. Interjections, such as numerically based and abbreviated Internet slang, are not pronounced as they are written physically, nor are they replaced by any actual action. Rather, they become lexicalized and spoken like non-slang words in a "stage direction" like fashion, where the actual action is not carried out but substituted with a verbal signal. The notions of flaming and trolling have also extended outside the computer, and are used in the same circumstances of deliberate or unintentional implicatures. The expansion of Internet slang has been furthered through codification and the promotion of digital literacy. The subsequently existing and growing popularity of such references among those online as well as offline has thus advanced Internet slang literacy and globalized it. Awareness and proficiency in manipulating Internet slang in both online and offline communication indicates digital literacy, and teaching materials have even been developed to further this knowledge. A South Korean publisher, for example, has published a textbook that details the meaning and context of use for common Internet slang instances and is targeted at young children who will soon be using the Internet. Similarly, Internet slang has been recommended as language teaching material in second language classrooms in order to raise communicative competence by imparting some of the cultural value attached to a language that is available only in slang. 
Meanwhile, well-known dictionaries such as the ODE and Merriam-Webster have been updated with a significant and growing body of slang jargon. Besides common examples, lesser-known slang and slang with a non-English etymology have also found a place in standardized linguistic references. Along with these instances, the body of entries in user-contributed dictionaries such as Urban Dictionary has also grown. Codification seems to be qualified through frequency of use, and novel creations are often not accepted by other users of slang. Present Although Internet slang began as a means of "opposition" to mainstream language, its popularity with today's globalized digitally literate population has shifted it into a part of everyday language, where it also leaves a profound impact. Frequently used slang has also become conventionalised into memetic "unit[s] of cultural information". These memes in turn are further spread through their use on the Internet, prominently through websites. The Internet as an "information superhighway" is also catalysed through slang. The evolution of slang has also created a 'slang union' as part of a unique, specialised subculture. Such impacts are, however, limited and require further discussion, especially from the non-English world. This is because Internet slang is prevalent in languages more actively used on the Internet, like English, which is the Internet's lingua franca. Around the world In Japanese, the term moe has come into common use among slang users to mean something "preciously cute" and appealing. Aside from the more frequent abbreviations, acronyms, and emoticons, Internet slang also uses archaic words or the lesser-known meanings of mainstream terms. Regular words can also be altered into something with a similar pronunciation but altogether different meaning, or attributed new meanings altogether. Phonetic transcriptions are the transformation of words to how they sound in a certain language, and are used as internet slang. In places where logographic languages are used, such as China, a visual Internet slang exists, giving characters dual meanings, one direct and one implied. The Internet has helped people from all over the world to become connected to one another, enabling "global" relationships to be formed. As such, it is important for the various types of slang used online to be recognizable for everyone. It is also important to do so because of how other languages are quickly catching up with English on the Internet, following the increase in Internet usage in predominantly non-English speaking countries. In fact, as of January 2020, only approximately 25.9% of the online population is made up of English speakers. Different cultures tend to have different motivations behind their choice of slang, on top of the difference in language used. For example, in China, because of the tough Internet regulations imposed, users tend to use certain slang to talk about issues deemed sensitive to the government. These include using symbols to separate the characters of a word to avoid detection from manual or automated text pattern scanning and consequential censorship. An outstanding example is the use of the term river crab to denote censorship. River crab (hexie) is pronounced the same as "harmony"—the official term used to justify political discipline and censorship. As such, Chinese netizens reappropriate the official terms in a sarcastic way. 
Abbreviations are popular across different cultures, including countries like Japan, China, France, Portugal, etc., and are used according to the particular language the Internet users speak. Significantly, this same style of slang creation is also found in non-alphabetical languages, for example as a form of "e gao" or alternative political discourse. The difference in language often results in miscommunication, as seen in an onomatopoeic example, "555", which sounds like "crying" in Chinese, and "laughing" in Thai. A similar example is between the English "haha" and the Spanish "jaja", where both are onomatopoeic expressions of laughter, but the difference in language also means a different consonant is used for the same sound. For more examples of how other languages express "laughing out loud", see also: LOL In terms of culture, in Chinese, the numerically based onomatopoeia "770880" (), which means to 'kiss and hug you', is used. This is comparable to "XOXO", which many Internet users use. In French, "pk" or "pq" is used in the place of pourquoi, which means 'why'. This is an example of a combination of onomatopoeia and shortening of the original word for convenience when writing online. In conclusion, every country has its own language background and cultural differences, and hence tends to have its own rules and motivations for its own Internet slang. However, at present, there is still a lack of studies done by researchers on some differences between the countries. On the whole, the popular use of Internet slang has resulted in a unique online and offline community as well as a couple of sub-categories of "special internet slang which is different from other slang spread on the whole internet... similar to jargon... usually decided by the sharing community". It has also led to virtual communities marked by the specific slang they use and led to a more homogenized yet diverse online culture. Internet slang in advertisements Internet slang can make advertisements more effective. Two empirical studies have shown that Internet slang can help an advertisement promote a product or capture the crowd's attention, but did not increase sales of the product. However, using Internet slang in advertising may attract a certain demographic, and might not be the best choice depending on the product or goods. Furthermore, an overuse of Internet slang also negatively affects the brand through the perceived quality of the advertisement, but using an appropriate amount can be sufficient to draw more attention to the ad. According to the experiment, Internet slang helped capture the attention of consumers of necessity items. However, the demographic for luxury goods differs, and using Internet slang could cause the brand to lose credibility because the slang may be seen as inappropriate for such products. See also Roman and medieval abbreviations used to save space on manuscripts and epigraphs: References Further reading Alt URL External links Dictionaries of slang and abbreviations: All Acronyms FOLDOC, computing InternetSlang.com SlangInternet.com Internet Slangs Slang Dictionary SlangLang.net Slang.net Computer-mediated communication Internet memes Occupational cryptolects Slang by language
Internet slang
Technology
3,292
43,424,809
https://en.wikipedia.org/wiki/Nanoremediation
Nanoremediation is the use of nanoparticles for environmental remediation. It is being explored to treat ground water, wastewater, soil, sediment, or other contaminated environmental materials. Nanoremediation is an emerging industry; by 2009, nanoremediation technologies had been documented in at least 44 cleanup sites around the world, predominantly in the United States. In Europe, nanoremediation is being investigated by the EC funded NanoRem Project. A report produced by the NanoRem consortium has identified around 70 nanoremediation projects worldwide at pilot or full scale. During nanoremediation, a nanoparticle agent must be brought into contact with the target contaminant under conditions that allow a detoxifying or immobilizing reaction. This process typically involves a pump-and-treat process or in situ application. Some nanoremediation methods, particularly the use of nano zero-valent iron for groundwater cleanup, have been deployed at full-scale cleanup sites. Other methods remain in research phases. Applications Nanoremediation has been most widely used for groundwater treatment, with additional extensive research in wastewater treatment. Nanoremediation has also been tested for soil and sediment cleanup. Even more preliminary research is exploring the use of nanoparticles to remove toxic materials from gases. Groundwater remediation Currently, groundwater remediation is the most common commercial application of nanoremediation technologies. Using nanomaterials, especially zero-valent metals (ZVMs), for groundwater remediation is an emerging approach that is promising due to the availability and effectiveness of many nanomaterials for degrading or sequestering contaminants. Nanotechnology offers the potential to effectively treat contaminants in situ, avoiding excavation or the need to pump contaminated water out of the ground. The process begins with nanoparticles being injected into a contaminated aquifer via an injection well. The nanoparticles are then transported by groundwater flow to the source of contamination. Upon contact, nanoparticles can sequester contaminants (via adsorption or complexation), immobilizing them, or they can degrade the contaminants to less harmful compounds. Contaminant transformations are typically redox reactions. When the nanoparticle is the oxidant or reductant, it is considered reactive. The ability to inject nanoparticles to the subsurface and transport them to the contaminant source is imperative for successful treatment. Reactive nanoparticles can be injected into a well where they will then be transported down gradient to the contaminated area. Drilling and packing a well is quite expensive. Direct push wells cost less than drilled wells and are the most often used delivery tool for remediation with nanoiron. A nanoparticle slurry can be injected along the vertical range of the probe to provide treatment to specific aquifer regions. Surface water treatment The use of various nanomaterials, including carbon nanotubes and TiO2, shows promise for treatment of surface water, including for purification, disinfection, and desalination. Target contaminants in surface waters include heavy metals, organic contaminants, and pathogens. In this context, nanoparticles may be used as sorbents, as reactive agents (photocatalysts or redox agents), or in membranes used for nanofiltration. Trace contaminant detection Nanoparticles may assist in detecting trace levels of contaminants in field settings, contributing to effective remediation. 
Instruments that can operate outside of a laboratory often are not sensitive enough to detect trace contaminants. Rapid, portable, and cost-effective measurement systems for trace contaminants in groundwater and other environmental media would thus enhance contaminant detection and cleanup. One potential method is to separate the analyte from the sample and concentrate it to a smaller volume, easing detection and measurement. When small quantities of solid sorbents are used to absorb the target for concentration, this method is referred to as solid-phase microextraction. With their high reactivity and large surface area, nanoparticles may be effective sorbents to help concentrate target contaminants for solid-phase microextraction, particularly in the form of self-assembled monolayers on mesoporous supports. The mesoporous silica structure, made through a surfactant-templated sol-gel process, gives these self-assembled monolayers high surface area and a rigid open pore structure. This material may be an effective sorbent for many targets, including heavy metals such as mercury, lead, and cadmium, chromate and arsenate, and radionuclides such as 99Tc, 137Cs, uranium, and the actinides. Mechanism The small size of nanoparticles leads to several characteristics that may enhance remediation. Nanomaterials are highly reactive because of their high surface area per unit mass. Their small particle size also allows nanoparticles to enter small pores in soil or sediment that larger particles might not penetrate, granting them access to contaminants sorbed to soil and increasing the likelihood of contact with the target contaminant. Because nanomaterials are so tiny, their movement is largely governed by Brownian motion as compared to gravity. Thus, the flow of groundwater can be sufficient to transport the particles. Nanoparticles then can remain suspended in solution longer to establish an in situ treatment zone. Once a nanoparticle contacts the contaminant, it may degrade the contaminant, typically through a redox reaction, or adsorb to the contaminant to immobilize it. In some cases, such as with magnetic nano-iron, adsorbed complexes may be separated from the treated substrate, removing the contaminant. Target contaminants include organic molecules such as pesticides or organic solvents and metals such as arsenic or lead. Some research is also exploring the use of nanoparticles to remove excessive nutrients such as nitrogen and phosphorus. Materials A variety of compounds, including some that are used as macro-sized particles for remediation, are being studied for use in nanoremediation. These materials include zero-valent metals like zero-valent iron, calcium carbonate, carbon-based compounds such as graphene or carbon nanotubes, and metal oxides such as titanium dioxide and iron oxide. Nano zero-valent iron As of 2012, nano zero-valent iron (nZVI) was the nanoscale material most commonly used in bench and field remediation tests. nZVI may be mixed or coated with another metal, such as palladium, silver, or copper, that acts as a catalyst in what is called a bimetallic nanoparticle. nZVI may also be emulsified with a surfactant and an oil, creating a membrane that enhances the nanoparticle's ability to interact with hydrophobic liquids and protects it against reactions with materials dissolved in water. Commercial nZVI particle sizes may sometimes exceed true “nano” dimensions (100 nm or less in diameter). 
nZVI appears to be useful for degrading organic contaminants, including chlorinated organic compounds such as polychlorinated biphenyls (PCBs) and trichloroethene (TCE), as well as immobilizing or removing metals. nZVI and other nanoparticles that do not require light can be injected belowground into the contaminated zone for in situ groundwater remediation and, potentially, soil remediation. nZVI nanoparticles can be prepared by using sodium borohydride as the key reductant. NaBH4 (0.2 M) is added into FeCl3•6H2O (0.05 M) solution (~1:1 volume ratio). Ferric iron is reduced via the following reaction: 4Fe3+ + 3BH4− + 9H2O → 4Fe0 + 3H2BO3− + 12H+ + 6H2 Palladized Fe particles are prepared by soaking the nanoscale iron particles in an ethanol solution of 1 wt% palladium acetate ([Pd(C2H3O2)2]3). This causes the reduction and deposition of Pd on the Fe surface: Pd2+ + Fe0 → Pd0 + Fe2+ Similar methods may be used to prepare Fe/Pt, Fe/Ag, Fe/Ni, Fe/Co, and Fe/Cu bimetallic particles. With the above methods, nanoparticles of diameter 50-70 nm may be produced. The average specific surface area of Pd/Fe particles is about 35 m2/g. Ferrous iron salt has also been successfully used as the precursor. Titanium dioxide Titanium dioxide (TiO2) is also a leading candidate for nanoremediation and wastewater treatment, although as of 2010 it is reported to have not yet been expanded to full-scale commercialization. When exposed to ultraviolet light, such as in sunlight, titanium dioxide produces hydroxyl radicals, which are highly reactive and can oxidize contaminants. Hydroxyl radicals are used for water treatment in methods generally termed advanced oxidation processes. Because light is required for this reaction, TiO2 is not appropriate for underground in situ remediation, but it may be used for wastewater treatment or pump-and-treat groundwater remediation. TiO2 is inexpensive, chemically stable, and insoluble in water. TiO2 has a wide band gap energy (3.2 eV) that requires the use of UV light, as opposed to visible light only, for photocatalytic activation. To enhance the efficiency of its photocatalysis, research has investigated modifications to TiO2 or alternative photocatalysts that might use a greater portion of photons in the visible light spectrum. Potential modifications include doping TiO2 with metals, nitrogen, or carbon. Challenges When using in situ remediation the reactive products must be considered for two reasons. One reason is that a reactive product might be more harmful or mobile than the parent compound. Another reason is that the products can affect the effectiveness and/or cost of remediation. TCE (trichloroethylene), under reducing conditions by nanoiron, may sequentially dechlorinate to DCE (dichloroethene) and VC (vinyl chloride). VC is known to be more harmful than TCE, meaning this process would be undesirable. Nanoparticles also react with non-target compounds. Bare nanoparticles tend to clump together and also react rapidly with soil, sediment, or other material in ground water. For in situ remediation, this action inhibits the particles from dispersing in the contaminated area, reducing their effectiveness for remediation. Coatings or other treatment may allow nanoparticles to disperse farther and potentially reach a greater portion of the contaminated zone. Coatings for nZVI include surfactants, polyelectrolyte coatings, emulsification layers, and protective shells made from silica or carbon. 
Such designs may also affect the nanoparticles’ ability to react with contaminants, their uptake by organisms, and their toxicity. A continuing area of research involves the potential for nanoparticles used for remediation to disperse widely and harm wildlife, plants, or people. In some cases, bioremediation may be used deliberately at the same site or with the same material as nanoremediation. Ongoing research is investigating how nanoparticles may interact with simultaneous biological remediation. See also Green nanotechnology Nanofiltration References Nanotechnology and the environment
Nanoremediation
Materials_science
2,460
23,470,390
https://en.wikipedia.org/wiki/NetWeaver%20Developer
NetWeaver Developer is a knowledgebase development system. This article gives a brief history of the system, summarizes key features of the software, describes basic attributes of a NetWeaver knowledgebase, and provides secondary references that independently document some of the NetWeaver applications developed since the late 1980s (see the References section in this article, as well as applications documented for the EMDS system). First, though, a word about knowledgebases. While there are various ways of describing a knowledgebase, perhaps one of the more central concepts is that a knowledgebase provides a formal specification for interpreting information. Formal in this context means that the specification is ontologically committed to the semantics and syntax prescribed by a knowledgebase processor (aka, an engine). A brief history NetWeaver was created in late 1991 to ease knowledge engineering tasks by giving a graphical user interface to the ICKEE (IConic Knowledge Engineering Environment) inference engine developed at Penn State University by Bruce J. Miller and Michael C. Saunders. The first iterations were simply a visual representation of dependency networks stored in a LISP-like syntax. NetWeaver quickly evolved into an interactive interface where the visual environment was also capable of editing the dependency networks and saving them in the ICKEE file format. Eventually NetWeaver became "live" in the sense that it could evaluate the dependency networks in real time. NetWeaver basics A NetWeaver knowledgebase graphically represents a problem to be evaluated as networks of topics, each of which evaluates a proposition. The formal specification of each topic is graphically constructed, and composed of other topics (e.g., premises) related by logic operators such as and, or, not, etc. NetWeaver topics and operators return a continuous-valued "truth value" that expresses the strength of evidence that the operator and its arguments provide to a topic or to another logic operator. The specification of an individual NetWeaver topic supports potentially complex reasoning because both topics and logic operators may be specified as arguments to an operator. Considered in its entirety, the complete knowledgebase specification for a problem can be thought of as a mental map of the logical dependencies among propositions. In other words, the knowledgebase amounts to a formal logical argument in the classical sense. When logic meets graphics Cognitive theory suggests that human beings have two fundamental modes of reasoning: logical (albeit however informally some folks may do that when left to their own devices) and spatial. Interesting things happen when logic is implemented graphically. First, the knowledge of individual subject-matter experts engaged in knowledge engineering often is not fully integrated when dealing with complex problems, at least initially. Rather, this knowledge may exist in a somewhat more loosely organized state, a sort of knowledge soup with chunks of knowledge floating about in it. A common observation of knowledge engineers experienced in graphically designing knowledgebases is that the process of constructing a graphic representation of problem-solving knowledge in a formal logical framework seems to be synergistic, with new insights into the expert's knowledge emerging as the process unfolds. (At the moment, this assertion is largely anecdotal. 
It is, however, an important observation that is not limited to NetWeaver but applies to knowledge engineering more broadly.) Second, synergies similar to those observed in organizing the reasoning of individual subject-matter experts also can occur in knowledge engineering projects that require the interaction of multiple disciplines. For example, many different kinds of specialists may be involved in evaluating the overall health of a watershed. Use of a formal logic system, with well-defined syntax and semantics, allows specialists’ representation of their problem-solving approach to be expressed in a common language, which in turn facilitates understanding of how all the various perspectives of the different specialists fit together. About NetWeaver knowledgebases A NetWeaver knowledgebase has been defined by the developers as a network of networks (Miller and Saunders 2002). Each network corresponds to a topic of interest in the problem being evaluated by the knowledgebase. NetWeaver knowledgebases are object-based. There are two basic types of objects: networks and data links, each of which is represented in the logic structure by a programming object which has both state and behavior. The NetWeaver engine is a Windows dynamic link library (DLL) developed by Rules of Thumb, Inc. (North East, PA). NetWeaver Developer is an interface to the engine that is used for designing knowledgebases. Logic networks A knowledgebase represents knowledge about how to solve a problem in terms of the topics of interest in the problem domain, and relations among these topics. Each logic network in a NetWeaver knowledgebase represents a proposition about the condition of some ecosystem state or process. State - The key state variable of a logic network is its truth value, which expresses the degree to which evidence from antecedent networks and data links supports or refutes the proposition. Logically, network A is said to be antecedent to network B if B depends upon A, because network A must be evaluated before network B can be evaluated. Behavior - The basic function of a network is to evaluate the truth of its proposition. NetWeaver networks have three basic behaviors related to this function: They query their antecedents to determine the antecedents' state. They evaluate their own state, given the state of their antecedents. They inform higher-level networks that depend on them about their state. Data links A data link is an elementary dependency network with slightly modified behavior. State - Like a network, a data link may evaluate to a truth value, given a data input. A data link may also hold a data value that is subsequently transformed by mathematical operations defined for a calculated data link. Behavior - In NetWeaver Developer, data links prompt the user for data input. On receipt of data, data links evaluate their state, given the data input (simple data links), or pass the data value to a special data link that performs some transformation of input data (calculated data link). They inform higher-level networks that depend on them about their state. Truth values The truth value is the basic state variable of networks and data links. It expresses an observation's degree of membership in a set. Evaluations of degree of set membership are quantified in the semantics of fuzzy logic. 
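As a rough illustration (not drawn from NetWeaver's actual engine or file format), the Python sketch below shows one way a data link might map a raw datum onto a truth value in the interval [-1, 1] with a linear fuzzy ramp, and how min/max operators reproduce the AND/OR behavior described in the next paragraph. The breakpoints, proposition names, and the specific min/max combination rules are illustrative assumptions only.

```python
# A minimal sketch, assuming a linear "ramp" membership function and min/max
# combination rules; the real NetWeaver engine may compute truth values
# differently.  Truth values live in [-1, 1]: -1 = no support, 0 = undetermined,
# +1 = full support.

def ramp(value, full_false, full_true):
    """Map a datum onto [-1, 1]: -1 at/below full_false, +1 at/above full_true."""
    if value <= full_false:
        return -1.0
    if value >= full_true:
        return 1.0
    return -1.0 + 2.0 * (value - full_false) / (full_true - full_false)

def fuzzy_and(*truths):
    """The weakest antecedent limits the truth of an AND-type network."""
    return min(truths)

def fuzzy_or(*truths):
    """The strongest antecedent determines the truth of an OR-type network."""
    return max(truths)

# Hypothetical data links feeding a "stream condition is acceptable" network:
temperature_ok = ramp(17.5, full_false=10.0, full_true=20.0)   # 0.5 (partial support)
flow_ok = ramp(0.05, full_false=0.2, full_true=1.0)            # -1.0 (no support)

print(fuzzy_and(temperature_ok, flow_ok))   # -1.0: the AND proposition is refuted
print(fuzzy_or(temperature_ok, flow_ok))    # 0.5: the OR proposition is partly supported
```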
Equivalently, think of the truth value metric as expressing the degree to which evidence supports the proposition of the network or data link; in EMDS, the symbology for maps displaying network truth values is based on the concept of strength of evidence. For additional discussion on this topic, see Interpretation of Truth Values. Data links are frequently used to read a datum and evaluate its degree of membership in a concept that is quantified in a fuzzy argument (an argument that quantifies fuzzy set membership). Thus, in a data link the argument is a mathematical statement of a proposition. Some simple examples include: If the datum fully satisfies the argument, then the truth value of the data link is 1 (full support). If the datum is fully contrary to the argument, then the truth value of the data link is -1 (no support). If the datum partially satisfies the argument, then the truth value of the data link is in the open interval (-1, 1). Note in particular that negative truth values greater than -1 do not connote negative truth. Rather, such values connote low membership, or low support. If the data is not known, then the truth value of the data link is 0 (undetermined). Interpretation of truth values within networks must be treated more generally, because the truth value of a network may depend on several to many logic operators. Simple examples related to the two key logic operators, AND and OR, are: If all logical antecedents to an AND operator fully support the AND relation, then the truth value of the operator is 1 (full support). If any logical antecedent to an AND operator is fully contrary to the AND relation, then the truth value of the operator is -1 (no support). If any logical antecedent to an OR operator fully supports the OR relation, then the truth value is 1 (full support). If there is no evidence for or against an AND or OR relation, then the truth value of either operator is 0 (undetermined). As with data links, networks may also evaluate to partially true. Two conditions give rise to this condition in NetWeaver: One or more data items are missing and cannot be supplied, and therefore contribute a value of 0 to an AND. One or more data items that influence the truth value of a dependency network have been evaluated against a fuzzy argument and found not to have full membership in the fuzzy set defined by the fuzzy argument (the data provides only partial support for the proposition). Notes References Barr, N.B., R.S. Copeland, M. De Meyer, D. Masiga, H.G. Kibogo, M.K. Billah, E. Osir, R.A. Wharton, and B.A. McPheron. 2006. Molecular diagnostics of economically important Ceratitis fruit fly species (Diptera: Tephritidae) in Africa using PCR and RFLP analyses. Bulletin of Entomological Research, 96: 505–521. online Dai, J.J., S. Lorenzato and D. M. Rocke. 2004. A knowledge-based model of watershed assessment for sediment. Environmental Modelling & Software Volume 19: 423–433. online Galbraith, John M., Ray B. Bryant, Robert J. Ahrens. 1998. An Expert System for Soil Taxonomy. Soil Science Volume 163: 748–758. online Heaton, Jill S., Kenneth E. Nussear, Todd C. Esque, Richard D. Inman, Frank M. Davenport, Thomas E. Leuteritz, Philip A. Medica, Nathan W. Strout, Paul A. Burgess, and Lisa Benvenuti. 2008. Spatially explicit decision support for selecting translocation areas for Mojave desert tortoises. Biodiversity and Conservation 17:575–590. online Hu, Z.B., X.Y. He, Y.H. Li, J.J. Zhu, Y. Mu, and Z.X. Guan. 2007. 
Ying yong sheng tai xue bao (The journal of applied ecology) 18:2841-5. online Janssen, R., H. Goosen, M.L. Verhoeven, J.T.A. Verhoeven, A.Q.A. Omtzigt, and E. Maltby. 2005. Decision support for integrated wetland management, Environmental Modelling & Software Volume 20: 215–229. online Mendoza, G.A., and Ravi Prabhu. 2004. Fuzzy methods for assessing criteria and indicators of sustainable forest management. Ecological Indicators 3: 227–236. online Paterson, Barbara, Greg Stuart-Hill, Les G. Underhill, Tim T. Dunne, Britta Schinzel, Chris Brown, Ben Beytell, Fanuel Demas, Pauline Lindeque, Jo Tagg, and Chris Weaver. 2008. A fuzzy decision support tool for wildlife translocations into communal conservancies in Namibia, Environmental Modelling & Software Volume 23: 521–534. online Porter, Andrea, Adel Sadek, and Nancy Hayden. 2006. Fuzzy Geographic Information Systems for Phytoremediation Plant Selection. J. Envir. Engrg. 132: 120. online Saunders, M.C., T.J. Sullivan, B.L. Nash, K.A. Tonnessen, B.J. Miller. 2005. A knowledge-based approach for classifying lake water chemistry. Knowledge-Based Systems Volume 18: 47–54. online External links Ecosystem Management Decision Support (EMDS) System Rules of Thumb, Inc. Knowledge engineering
NetWeaver Developer
Engineering
2,515
30,762,165
https://en.wikipedia.org/wiki/Kepler-11f
Kepler-11f is an exoplanet (extrasolar planet) discovered in the orbit of the Sun-like star Kepler-11 by NASA's Kepler space telescope, which searches for planets that transit (cross in front of) their host stars. Kepler-11f is the fifth planet from its star, orbiting one quarter of the distance (.25 AU) of the Earth from the Sun every 47 days. It is the furthest of the first five planets in the system. Kepler-11f is the least massive of Kepler-11's six planets, at nearly twice the mass of Earth; it is about 2.6 times the radius of Earth. Along with planets d and e and unlike the two inner planets in the system, Kepler-11f has a density lower than that of water and comparable to that of Saturn. This suggests that Kepler-11f has a significant hydrogen–helium atmosphere. The Kepler-11 planets constitute the first system discovered with more than three transiting planets. Kepler-11f was announced to the public on February 2, 2011, after follow-up investigations at several observatories. Analysis of the planets and study results were published the next day in the journal Nature. Name and discovery Kepler-11, known as KOI-157 when it was first flagged for a transit event, is the planet's host star, and it is included in the planet's name to denote that. Because Kepler-11f was discovered with five other planets, the planets of Kepler-11 were sorted by distance from the host star; thus, since Kepler-11f is the fifth planet from its star, it was given the letter "f." The name "Kepler" is derived from the Kepler satellite, a NASA Earth-trailing spacecraft that constantly observes a small patch of sky between the constellations Cygnus and Lyra for stars that are transited by, in particular, terrestrial planets. As these planets cross in front of their host stars with respect to Earth, a small and periodic dip in the star's brightness occurs; this dip is noted by the spacecraft and tagged for future study. Scientists then analyze the transit event more carefully to verify if the planet actually exists and to gather information on the planet's orbit and composition (if possible). Follow-up observations were conducted at observatories at the W. M. Keck Observatory's Keck 1 telescope in Hawaii; the Shane and Hale telescopes in California; the Harlan J. Smith and Hobby–Eberly telescopes in Texas; telescopes at the WIYN (including MMT) and Whipple observatories in Arizona; and the Nordic Optical Telescope in the Canary Islands. The Spitzer Space Telescope was also used. According to NASA, Kepler-11's system is the most compact and the flattest system yet discovered, surpassing even the Solar System. Host star Kepler-11 is a G-type star, much like the Sun is, and is located 659 parsecs away in the Cygnus constellation. It has 95% the mass and 110% the radius of the Sun. Its mass and radius, combined with an approximate iron content (metallicity) of 0 and effective temperature of 5680 K, makes the star very similar to the Sun, though slightly more diffuse and slightly cooler. However, the star is approximately 1.74 times the age of the Sun, and is estimated to have existed for eight billion years. Kepler-11 has six known planets in orbit: Kepler-11b, Kepler-11c, Kepler-11d, Kepler-11e, Kepler-11f, and Kepler-11g. Kepler-11's five inner planets orbit closely to their host star, and their orbits would fit within that of Mercury's. With an apparent magnitude of 14.2, Kepler-11 cannot be seen with the naked eye. 
Characteristics Kepler-11f is, at 2.3 times the mass of Earth, the least massive of the six planets discovered in the orbit of Kepler-11, although the planet's mass may range from 1.1 to 4.5 Earth masses, or from approximately that of Earth to that of Kepler-10b, a rather large confidence interval. Its radius is the second smallest of the six planets discovered in the system at 2.61 times the radius of Earth. Kepler-11f has a density of about 0.7 g/cm3, comparable to that of the Solar System's least dense planet, Saturn. Kepler-11f is the fifth planet from Kepler-11, orbiting its host star every 46.68876 days at a distance of 0.25 AU. Its orbital eccentricity is unknown. In comparison, Mercury orbits the Sun every 87.97 days at a distance of 0.387 AU. Kepler-11f has an orbital inclination of 89.4°; it can be seen almost edge-on with respect to Earth. Its surface equilibrium temperature is 544 K, over twice the surface equilibrium temperature of Earth and about two-thirds the surface temperature of Venus. Kepler-11f's low density, characteristic of the outer planets of the system, suggests that a hydrogen–helium atmosphere is present on these planets, classifying it as a "gas dwarf" due to its small size and mass. The gaseous content of the planet is calculated at a few percent of its total mass, but the envelope accounts for 70-80% of the planet's total volume. Kepler-11f's low density is not shared by the planets Kepler-11b and Kepler-11c because stellar irradiation has reduced their atmospheres to a thin layer. The planets accreted such atmospheres because they formed within the first few million years of the system's existence, when a protoplanetary disk was still present. References Hot Neptunes Exoplanets with Kepler designations Exoplanets discovered in 2011 Transiting exoplanets Giant planets Cygnus (constellation) 11f f
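As an illustrative back-of-the-envelope check (not part of the source article), the quoted ~0.7 g/cm3 bulk density follows from the 2.3 Earth-mass and 2.61 Earth-radius figures above; the Python sketch below assumes standard values for Earth's mass and radius.

```python
import math

# Bulk density from the article's 2.3 Earth masses and 2.61 Earth radii.
M_earth = 5.972e24               # kg (standard value)
R_earth = 6.371e6                # m (standard value)

mass = 2.3 * M_earth
radius = 2.61 * R_earth
volume = (4.0 / 3.0) * math.pi * radius**3

density = mass / volume          # kg/m^3
print(round(density / 1000, 2))  # ~0.71 g/cm^3, matching the quoted ~0.7 g/cm^3
```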
Kepler-11f
Astronomy
1,231
2,359,020
https://en.wikipedia.org/wiki/Mathematical%20methods%20in%20electronics
Mathematical methods are integral to the study of electronics. Mathematics in electronics engineering Mathematical Methods in Electronics Engineering involves applying mathematical principles to analyze, design, and optimize electronic circuits and systems. Key areas include: Linear Algebra: Used to solve systems of linear equations that arise in circuit analysis. Applications include network theory and the analysis of electrical circuits using matrices and vector spaces Calculus: Essential for understanding changes in electronic signals. Used in the analysis of dynamic systems and control systems. Integral calculus is used in analyzing waveforms and signals. Differential Equations: Applied to model and analyze the behavior of circuits over time. Used in the study of filters, oscillators, and transient responses of circuits. Complex Numbers and Complex Analysis: Important for circuit analysis and impedance calculations. Used in signal processing and to solve problems involving sinusoidal signals. Probability and Statistics: Used in signal processing and communication systems to handle noise and random signals. Reliability analysis of electronic components. Fourier and Laplace Transforms: Crucial for analyzing signals and systems. Fourier transforms are used for frequency analysis and signal processing. Laplace transforms are used for solving differential equations and analyzing system stability. Numerical Methods: Employed for simulating and solving complex circuits that cannot be solved analytically. Used in computer-aided design tools for electronic circuit design. Vector Calculus: Applied in electromagnetic field theory. Important for understanding the behavior of electromagnetic waves and fields in electronic devices. Optimization: Techniques used to design efficient circuits and systems. Applications include minimizing power consumption and maximizing signal integrity. These methods are integral to systematically analyzing and improving the performance and functionality of electronic devices and systems. Mathematical methods applied in foundational electrical laws and theorems A number of fundamental electrical laws and theorems apply to all electrical networks. These include: Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be "induced" in the coil. Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity. Kirchhoff's Current Law: The sum of all currents entering a node is equal to the sum of all currents leaving the node, or the sum of total current at a junction is zero. Kirchhoff's voltage law: The directed sum of the electrical potential differences around a circuit must be zero. Ohm's Law: The voltage across a resistor is the product of its resistance and the current flowing through it, at constant temperature. Norton's Theorem: Any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's Theorem: Any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Millman's Theorem: The voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance. 
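As an illustration of the linear-algebra and circuit-law points above (not taken from the article's sources), the Python sketch below applies Kirchhoff's current law and Ohm's law to a small, hypothetical two-node resistor network and solves the resulting linear system with the node-voltage method; the component values are arbitrary.

```python
import numpy as np

# Hypothetical circuit: an ideal 5 V source feeds node 1 through R1; R2 ties
# node 1 to ground, R3 connects node 1 to node 2, and R4 ties node 2 to ground.
Vs = 5.0
R1, R2, R3, R4 = 1e3, 2e3, 1e3, 2e3
g1, g2, g3, g4 = 1/R1, 1/R2, 1/R3, 1/R4

# Kirchhoff's current law at each node yields the conductance-matrix equation G v = i.
G = np.array([[g1 + g2 + g3, -g3],
              [-g3,          g3 + g4]])
i = np.array([g1 * Vs, 0.0])          # current injected into node 1 by the source

v = np.linalg.solve(G, i)             # node voltages
print(v)                              # approximately [2.727, 1.818] volts

# Ohm's law then gives any branch current, e.g. the current through R3:
i_R3 = (v[0] - v[1]) / R3             # about 0.91 mA
```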
Analytical methods In addition to the foundational principles and theorems, several analytical methods are integral to the study of electronics: Network analysis (electrical circuits): Essential for comprehending capacitor and inductor behavior under changing voltage inputs, particularly significant in fields such as signal processing, power electronics, and control systems. This entails solving intricate networks of resistors through techniques like node-voltage and mesh-current methods. Signal analysis: Involves Fourier analysis, Nyquist–Shannon sampling theorem, and information theory, essential for understanding and manipulating signals in various systems. These methods build on the foundational laws and theorems to provide insights and tools for the analysis and design of complex electronic systems. See also Introduction to Electronics Georgia Tech University of California, Santa Cruz Electrical Engineering curriculum University of California, Berkeley Electrical Engineering curriculum (UCSC Catalog) (Berkeley Academic Guide) References Electronic engineering Applied mathematics
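To make the signal-analysis point above concrete (again an illustration, not material from the article's sources), the following Python sketch uses NumPy's FFT to recover the frequency content of a hypothetical sampled signal; the frequencies, sampling rate, and noise level are arbitrary choices.

```python
import numpy as np

# Hypothetical signal: 50 Hz and 120 Hz sinusoids plus a little noise,
# sampled at 1 kHz for one second.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.2 * np.random.default_rng(0).standard_normal(t.size)

# The discrete Fourier transform exposes the signal's frequency content.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The two strongest spectral peaks should sit near 50 Hz and 120 Hz.
strongest = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(strongest))
```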
Mathematical methods in electronics
Mathematics,Technology,Engineering
791
716,803
https://en.wikipedia.org/wiki/Quantale
In mathematics, quantales are certain partially ordered algebraic structures that generalize locales (point free topologies) as well as various multiplicative lattices of ideals from ring theory and functional analysis (C*-algebras, von Neumann algebras). Quantales are sometimes referred to as complete residuated semigroups. Overview A quantale is a complete lattice Q with an associative binary operation ∗ : Q × Q → Q, called its multiplication, satisfying the distributive properties x ∗ (⋁i yi) = ⋁i (x ∗ yi) and (⋁i yi) ∗ x = ⋁i (yi ∗ x) for all x and yi in Q (here the joins are taken over any index set I). The quantale is unital if it has an identity element e for its multiplication: x ∗ e = x = e ∗ x for all x in Q. In this case, the quantale is naturally a monoid with respect to its multiplication ∗. A unital quantale may be defined equivalently as a monoid in the category Sup of complete join-semilattices. A unital quantale is an idempotent semiring under join and multiplication. A unital quantale in which the identity is the top element of the underlying lattice is said to be strictly two-sided (or simply integral). A commutative quantale is a quantale whose multiplication is commutative. A frame, with its multiplication given by the meet operation, is a typical example of a strictly two-sided commutative quantale. Another simple example is provided by the unit interval [0,1] together with its usual multiplication. An idempotent quantale is a quantale whose multiplication is idempotent. A frame is the same as an idempotent strictly two-sided quantale. An involutive quantale is a quantale with an involution x ↦ x° that preserves joins: (⋁i xi)° = ⋁i (xi°). A quantale homomorphism is a map f : Q1 → Q2 between quantales that preserves joins and multiplication for all x, y, and xi in Q1: f(x ∗ y) = f(x) ∗ f(y) and f(⋁i xi) = ⋁i f(xi). See also Relation algebra References J. Paseka, J. Rosicky, Quantales, in: B. Coecke, D. Moore, A. Wilce, (Eds.), Current Research in Operational Quantum Logic: Algebras, Categories and Languages, Fund. Theories Phys., vol. 111, Kluwer Academic Publishers, 2000, pp. 245–262. M. Piazza, M. Castellan, Quantales and structural rules. Journal of Logic and Computation, 6 (1996), 709–724. K. Rosenthal, Quantales and Their Applications, Pitman Research Notes in Mathematics Series 234, Longman Scientific & Technical, 1990. Order theory
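As a small numerical illustration (not from the article's references), the unit-interval example above can be spot-checked in Python: with ordinary multiplication, joins of finite samples are just maxima, and multiplication should distribute over them. The sampling and tolerance below are arbitrary.

```python
import random

# Spot-check of x * (join of ys) == join of (x * y) on the quantale ([0, 1], sup, *),
# using max as the join of a finite sample of elements.
random.seed(0)
for _ in range(1000):
    x = random.random()
    ys = [random.random() for _ in range(5)]
    lhs = x * max(ys)                      # x * (join of the ys)
    rhs = max(x * y for y in ys)           # join of the products
    assert abs(lhs - rhs) < 1e-12

print("multiplication distributed over finite joins in every sampled case")
```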
Quantale
Mathematics
506
69,360,178
https://en.wikipedia.org/wiki/June%202075%20lunar%20eclipse
A partial lunar eclipse will occur at the Moon’s descending node of orbit on Friday, June 28, 2075, with an umbral magnitude of 0.6235. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A partial lunar eclipse occurs when one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Because the eclipse occurs only about 5.5 hours after perigee (on June 28, 2075, at 4:10 UTC), the Moon's apparent diameter will be larger than usual. Visibility The eclipse will be completely visible over eastern Australia, western North America, Antarctica, and the central and eastern Pacific Ocean, seen rising over east Asia and western Australia and setting over much of North and South America. Eclipse details Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2075 A penumbral lunar eclipse on January 2. A total solar eclipse on January 16. A partial lunar eclipse on June 28. An annular solar eclipse on July 13. A partial lunar eclipse on December 22. Metonic Preceded by: Lunar eclipse of September 9, 2071 Followed by: Lunar eclipse of April 16, 2079 Tzolkinex Preceded by: Lunar eclipse of May 17, 2068 Followed by: Lunar eclipse of August 8, 2082 Half-Saros Preceded by: Solar eclipse of June 22, 2066 Followed by: Solar eclipse of July 3, 2084 Tritos Preceded by: Lunar eclipse of July 28, 2064 Followed by: Lunar eclipse of May 28, 2086 Lunar Saros 121 Preceded by: Lunar eclipse of June 17, 2057 Followed by: Lunar eclipse of July 8, 2093 Inex Preceded by: Lunar eclipse of July 18, 2046 Followed by: Lunar eclipse of June 8, 2104 Triad Preceded by: Lunar eclipse of August 27, 1988 Followed by: Lunar eclipse of April 29, 2162 Lunar eclipses of 2074–2078 This eclipse is a member of a semester series. An eclipse in a semester series of lunar eclipses repeats approximately every 177 days and 4 hours (a semester) at alternating nodes of the Moon's orbit. The penumbral lunar eclipses on February 11, 2074 and August 7, 2074 occur in the previous lunar year eclipse set, and the penumbral lunar eclipses on April 27, 2078 and October 21, 2078 occur in the next lunar year eclipse set. Saros 121 Half-Saros cycle A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 128. See also List of lunar eclipses and List of 21st-century lunar eclipses References 2075-06 2075-06
June 2075 lunar eclipse
Astronomy
732
26,919,717
https://en.wikipedia.org/wiki/Electromerism
Electromerism is a type of isomerism between a pair of molecules (electromers, electro-isomers) differing in the way electrons are distributed among the atoms and the connecting chemical bonds. In some literature electromerism is equated to valence tautomerism, a term usually reserved for tautomerism involving reconnecting chemical bonds. One group of electromers are excited electronic states, but isomerism is usually limited to ground state molecules. Another group of electromers are also called redox isomers: metal ions that can exchange their oxidation state with their ligands (see non-innocent ligand). One of the first instances involved a cobalt(II)-quinone complex vs the related cobalt(III)-semiquinone species. Some metalloporphyrins exist as electromers, as does at least one related set of compounds without a metal. References Isomerism
Electromerism
Chemistry
185
2,019,981
https://en.wikipedia.org/wiki/Vapor%20pressures%20of%20the%20elements%20%28data%20page%29
Vapor pressure Notes Values are given in terms of temperature necessary to reach the specified pressure. Valid results within the quoted ranges from most equations are included in the table for comparison. A conversion factor is included into the original first coefficients of the equations to provide the pressure in pascals (CR2: 5.006, SMI: -0.875). Ref. SMI uses temperature scale ITS-48. No conversion was done, which should be of little consequence however. The temperature at standard pressure should be equal to the normal boiling point, but due to the considerable spread does not necessarily have to match values reported elsewhere. log refers to log base 10 (T/K) refers to temperature in Kelvin (K) (P/Pa) refers to pressure in Pascal (Pa) References CRC.a-m David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Vapor Pressure Uncertainties of several degrees should generally be assumed. (e) Indicates extrapolated values beyond the region of experimental data, subject to greater uncertainty. (i) Indicates values calculated from ideal gas thermodynamic functions. (s) Indicates the substance is solid at this temperature. As quoted from these sources: a - Lide, D.R., and Kehiaian, H.V., CRC Handbook of Thermophysical and Thermochemical Data, CRC Press, Boca Raton, Florida, 1994. b - Stull, D., in American Institute of Physics Handbook, Third Edition, Gray, D.E., Ed., McGraw Hill, New York, 1972. c - Hultgren, R., Desai, P.D., Hawkins, D.T., Gleiser, M., Kelley, K.K., and Wagman, D.D., Selected Values of Thermodynamic Properties of the Elements, American Society for Metals, Metals Park, OH, 1973. d - TRCVP, Vapor Pressure Database, Version 2.2P, Thermodynamic Research Center, Texas A&M University, College Station, TX. e - Barin, I., Thermochemical Data of Pure Substances, VCH Publishers, New York, 1993. f - Ohse, R.W. Handbook of Thermodynamic and Transport Properties of Alkali Metals, Blackwell Scientific Publications, Oxford, 1994. g - Gschneidner, K.A., in CRC Handbook of Chemistry and Physics, 77th Edition, p. 4-112, CRC Press, Boca Raton, Florida, 1996. h - . i - Wagner, W., and de Reuck, K.M., International Thermodynamic Tables of the Fluid State, No. 9. Oxygen, Blackwell Scientific Publications, Oxford, 1987. j - Marsh, K.N., Editor, Recommended Reference Materials for the Realization of Physicochemical Properties, Blackwell Scientific Publications, Oxford, 1987. k - l - m - CR2 David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition, online version. CRC Press. Boca Raton, Florida, 2003; Section 4, Properties of the Elements and Inorganic Compounds; Vapor Pressure of the Metallic Elements The equations reproduce the observed pressures to an accuracy of ±5% or better. Coefficients from this source: KAL National Physical Laboratory, Kaye and Laby Tables of Physical and Chemical Constants; Section 3.4.4, D. Ambrose, Vapour pressures from 0.2 to 101.325 kPa. Retrieved Jan 2006. SMI.a-s W.E. Forsythe (ed.), Smithsonian Physical Tables 9th ed., online version (1954; Knovel 2003). Table 363, Evaporation of Metals The equations are described as reproducing the observed pressures to a satisfactory degree of approximation. From these sources: a - K.K. Kelley, Bur. Mines Bull. 383, (1935). b - c - Brewer, The thermodynamic and physical properties of the elements, Report for the Manhattan Project, (1946). d - e - f - g - ; h - i - . j - k - Int. National Critical Tables, vol. 3, p. 306, (1928). 
l - m - n - o - p - q - r - s - H.A. Jones, I. Langmuir, Gen. Electric Rev., vol. 30, p. 354, (1927). See also Properties of chemical elements Chemical element data pages
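Many of the fitted correlations behind such tables reduce to, or are dominated by, a simple two-term form log10(P/Pa) = A − B/(T/K); adding log10(101325) ≈ 5.006 to a coefficient fitted in atmospheres is exactly the kind of pascal conversion mentioned in the notes above. The sketch below inverts such a two-term form to find the temperature at which a target pressure is reached; the coefficients shown are made up for illustration and are not taken from any of the cited sources.

```python
import math

def temperature_for_pressure(A, B, p_pa):
    """Invert log10(P/Pa) = A - B/(T/K) for T, the simple two-term
    vapour-pressure correlation form discussed above."""
    return B / (A - math.log10(p_pa))

# Illustrative (made-up) coefficients, not taken from any of the cited sources.
A, B = 9.6, 5200.0
for p in (1.0, 100.0, 101325.0):          # 1 Pa, 100 Pa, standard pressure
    print(f"P = {p:>9.0f} Pa  ->  T = {temperature_for_pressure(A, B, p):.0f} K")
```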
Vapor pressures of the elements (data page)
Chemistry
985
58,799,990
https://en.wikipedia.org/wiki/QV%20Andromedae
QV Andromedae (abbreviated to QV And, also known as HR 369 in the Bright Star Catalogue) is an Alpha2 Canum Venaticorum variable in the constellation Andromeda. Its maximum apparent visual magnitude is 6.6, so it can be seen by the naked eye under very favourable conditions. The brightness varies slightly following a periodic cycle of approximately 5.23 days. The stellar classification of this star is B9IIIpSi, where the pSi suffix indicates that the star shows peculiar chemical composition with stronger than usual silicon lines. This type of star is known as an Ap star, with the chemical peculiarities caused by strong magnetic fields and slow rotation leading to chemical stratification in the atmosphere. The star is rotating at a projected rotational velocity of 49 km/s, with up to 0.05 magnitude variation of brightness during one rotation cycle. This leads to the classification of the star as an Alpha2 Canum Venaticorum variable. The variability of QV Andromedae was first identified in 1975, and confirmed from Hipparcos photometry. It was assigned the variable star designation QV Andromedae in the 73rd namelist of variable stars in 1997. References Andromeda (constellation) 007546 Alpha2 Canum Venaticorum variables Andromedae, QV 0369 Durchmusterung objects 005939 Ap stars
QV Andromedae
Astronomy
289
51,551,398
https://en.wikipedia.org/wiki/Handbook%20of%20the%20New%20Zealand%20Flora
Handbook of the New Zealand Flora (abbreviated Handb. N. Zeal. Fl.) is a two volume work by English botanist Joseph Dalton Hooker with systematic botanical descriptions of plants native to New Zealand. The first part published in 1864 covers flowering plants, and the second part published in 1867 covers Hepaticae, mosses, lichens, fungi and algae. References Further reading New Zealand Botany in New Zealand Books about New Zealand
Handbook of the New Zealand Flora
Biology
90
44,058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is the production of nuclei other than those of the lightest isotope of hydrogen (hydrogen-1, 1H, having a single proton as a nucleus) during the early phases of the universe. This type of nucleosynthesis is thought by most cosmologists to have occurred from 10 seconds to 20 minutes after the Big Bang. It is thought to be responsible for the formation of most of the universe's helium (as isotope helium-4 (4He)), along with small fractions of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small fraction of the lithium isotope lithium-7 (7Li). In addition to these stable nuclei, two unstable or radioactive isotopes were produced: the heavy hydrogen isotope tritium (3H or T) and the beryllium isotope beryllium-7 (7Be). These unstable isotopes later decayed into 3He and 7Li, respectively, as above. Elements heavier than lithium are thought to have been created later in the life of the Universe by stellar nucleosynthesis, through the formation, evolution and death of stars. Characteristics There are several important characteristics of Big Bang nucleosynthesis (BBN): The initial conditions (neutron–proton ratio) were set in the first second after the Big Bang. The universe was very close to homogeneous at this time, and strongly radiation-dominated. The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate. It was widespread, encompassing the entire observable universe. The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory. In this field, for historical reasons it is customary to quote the helium-4 fraction by mass, symbol Y, so that 25% helium-4 means that helium-4 atoms account for 25% of the mass, but less than 8% of the nuclei would be helium-4 nuclei. Other (trace) nuclei are usually expressed as number ratios to hydrogen. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993. 
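As a rough illustration of how the baryon/photon number ratio "corresponds to the baryon density", the sketch below multiplies η ≈ 6 × 10−10 by the present-day CMB photon number density (about 411 photons per cm3 for a 2.725 K blackbody, a standard value) to obtain average baryon number and mass densities today; the calculation is order-of-magnitude only.

```python
# Rough order-of-magnitude check of what eta ~ 6e-10 implies today.
# n_gamma ~ 411 photons/cm^3 is the standard value for the 2.725 K CMB.
eta = 6e-10                 # baryon-to-photon number ratio
n_gamma = 411.0             # CMB photons per cm^3 today
m_proton_g = 1.67e-24       # grams, used as the typical baryon mass

n_baryon = eta * n_gamma            # baryons per cm^3
rho_baryon = n_baryon * m_proton_g  # grams per cm^3

print(f"baryon number density ~ {n_baryon:.1e} cm^-3")    # ~2.5e-7 cm^-3
print(f"baryon mass density   ~ {rho_baryon:.1e} g/cm^3") # ~4e-31 g/cm^3
```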
Important parameters The creation of light elements during BBN was dependent on a number of parameters; among these were the neutron–proton ratio (calculable from Standard Model physics) and the baryon–photon ratio. Neutron–proton ratio The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions: n + e+ ⇌ ν̄e + p and n + νe ⇌ p + e−. At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze-out temperature. At freeze out, the neutron–proton ratio was about 1/6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1/7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, because helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4. Baryon–photon ratio The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in the following main reactions: p + n → 2H + γ; 2H + p → 3He + γ; 2H + 2H → 3He + n; 2H + 2H → 3H + p; 2H + 3H → 4He + n; and 2H + 3He → 4He + p; along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur). Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon–photon ratio. That is, the larger the baryon–photon ratio, the more reactions there will be and the more efficiently deuterium will eventually be transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio. Sequence Big Bang nucleosynthesis began roughly 20 seconds after the Big Bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier.) This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decayed before fusing in the next few hundred seconds, so at the end of nucleosynthesis there were about seven protons for every neutron, and almost all the neutrons ended up in helium-4 nuclei.
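A minimal sketch of the nucleon bookkeeping described above, assuming (as the text does) that essentially every surviving neutron is bound into helium-4; starting from the quoted neutron–proton ratios it reproduces the roughly 25% mass fraction and the just-under-8% number fraction of helium-4.

```python
def helium_fractions(n_over_p):
    """Mass and number fractions of helium-4, assuming every remaining
    neutron ends up bound in a 4He nucleus (2 neutrons + 2 protons each)."""
    n, p = n_over_p, 1.0                 # neutrons and protons per proton
    he4 = n / 2.0                        # each 4He consumes 2 neutrons
    h = p - 2.0 * he4                    # protons left over as hydrogen
    mass_fraction = 4.0 * he4 / (n + p)  # 4 nucleons per 4He nucleus
    number_fraction = he4 / (he4 + h)    # fraction of atoms that are helium
    return mass_fraction, number_fraction

for ratio, label in [(1/6, "at freeze-out (no decay)"), (1/7, "after neutron decay")]:
    y, x = helium_fractions(ratio)
    print(f"n/p = {ratio:.3f} {label}: Y = {y:.3f}, He by number = {x:.3f}")
# n/p = 1/7 gives Y = 0.250 and a helium number fraction of 1/13 = 0.077
```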
One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before. As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7. History of theory The history of Big Bang nucleosynthesis began with the calculations of Ralph Alpher in the 1940s. Alpher published the Alpher–Bethe–Gamow paper that outlined the theory of light-element production in the early universe. Heavy elements Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang. The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be able to be detected in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7. Helium-4 Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. 
Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons or with itself). Once temperatures are lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combine quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little under 8% helium by number of atoms, and 25% helium by mass. One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it. An appeal to BBN theory is necessary to explain the helium-4 abundance, as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance were significantly different from 25%, this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance were much smaller than 25%, because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory. Deuterium Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain. There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory. During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe.
This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed. It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs. Producing deuterium by fission is also difficult. The problem here again is that deuterium is very unlikely to result from nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements. Lithium The amounts of lithium-7 and lithium-6 produced in the Big Bang are small: lithium-7 makes up about 10−9 of all primordial nuclides, and lithium-6 around 10−13. Measurements and status of theory The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the Big Bang. In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars). As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations? More recently, the question has changed: Precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations? The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between the abundance predicted by BBN using the WMAP/Planck baryon density and the abundance derived from Population II stars. The observed abundance is a factor of 2.4–4.3 below the theoretically predicted value.
This discrepancy, called the "cosmological lithium problem", is considered a problem for the original models, that have resulted in revised calculations of the standard BBN based on new nuclear data, and to various reevaluation proposals for primordial proton–proton nuclear reactions, especially the abundances of , versus . Non-standard scenarios In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos. There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino. See also Big Bang Chronology of the universe Nucleosynthesis Relic abundance Stellar nucleosynthesis Ultimate fate of the universe References External links For a general audience White, Martin: Overview of BBN Wright, Ned: BBN (cosmology tutorial) Big Bang nucleosynthesis on arxiv.org Academic articles Report-no: FERMILAB-Pub-00-239-A Jedamzik, Karsten, "Non-Standard Big Bang Nucleosynthesis Scenarios". Max-Planck-Institut für Astrophysik, Garching. Steigman, Gary, Primordial Nucleosynthesis: Successes And Challenges ; Forensic Cosmology: Probing Baryons and Neutrinos With BBN and the CBR ; and Big Bang Nucleosynthesis: Probing the First 20 Minutes R. A. Alpher, H. A. Bethe, G. Gamow, The Origin of Chemical Elements , Physical Review 73 (1948), 803. The so-called αβγ paper, in which Alpher and Gamow suggested that the light elements were created by hydrogen ions capturing neutrons in the hot, dense early universe. Bethe's name was added for symmetry These two 1948 papers of Gamow laid the foundation for our present understanding of big-bang nucleosynthesis R. A. Alpher and R. Herman, "On the Relative Abundance of the Elements," Physical Review 74 (1948), 1577. This paper contains the first estimate of the present temperature of the universe Java Big Bang element abundance calculator C. Pitrou, A. Coc, J.-P. Uzan, E. Vangioni, Precision big bang nucleosynthesis with improved Helium-4 predictions ; Nucleosynthesis Physical cosmological concepts Big Bang
Big Bang nucleosynthesis
Physics,Chemistry,Astronomy
4,459
210,844
https://en.wikipedia.org/wiki/Royal%20Swedish%20Academy%20of%20Engineering%20Sciences
The Royal Swedish Academy of Engineering Sciences (, IVA), founded on 24 October 1919 by King Gustaf V, is one of the royal academies in Sweden. The academy is an independent organisation, which promotes contact and exchange between business, research, and government, in Sweden and internationally. It is the world's oldest academy of engineering sciences.(OECD Reviews of Innovation Policy: Sweden 2012). Leadership The King is the patron of the academy. The following people have been presidents of IVA since its foundation in 1919: 1919–1940: Axel F. Enström 1941–1959: Edy Velander 1960–1970: Sven Brohult 1971–1982: Gunnar Hambraeus 1982–1994: Hans G. Forsberg 1995–2000: Kurt Östlund 1999–2001: (temporary) Enrico Deiaco 2001–2008: Lena Treschow Torell 2008–2017: Björn O. Nilsson 2017–2023: Tuula Teeri 2024-Present Sylvia Schwaag Serger Academy member Each year, outstanding scientists and engineers from universities and industries are elected into membership of IVA. Currently, the academy has 1000 Swedish and 300 foreign members. Foreign members are non-resident and non-citizen of Sweden. All new members are nominated by existing members. Focus areas The academy focuses on twelve areas of engineering sciences: Mechanical Engineering Electrical Engineering Building and Construction Chemical Engineering Mining and Materials Management Basic and Interdisciplinary Engineering Sciences Forest Technology Economics Biotechnology Education and Research Policy Information Technology Each focus area is addressed by a committee with a representative chair. Awards The academy awards several prizes, medals and scholarships: Large Gold Medal (since 1924) Gold Medal (since 1921) Brinell medal (Brinellmedaljen, since 1936, and named after Johan August Brinell) Gold Plaque (since 1951) Honorary Sign (since 1919) Axel F. Enstrom Medal (1959–1981) Prize for science in journalism (since 2015) Hans Werthén Fonden The King Carl XVI Gustafs 50-years-old Foundation See also List of Swedish scientists Royal Academy of Engineering, UK Royal Swedish Academy of Sciences, Sweden Spanish Royal Academy of Sciences, Spain National Academy of Engineering, USA References External links The Royal Swedish Academy of Engineering Sciences 1919 establishments in Sweden Engineering Sciences Science and technology in Sweden National academies of engineering
Royal Swedish Academy of Engineering Sciences
Engineering
472
70,432,824
https://en.wikipedia.org/wiki/Diselenolene%20metal%20complexes
1,2-Diselenolene metal complexes are a class of coordination compounds homologous to 1,2-dithiolene metal complexes and formally deriving from ene-1,2-diselenolato ligands. 1,2-Diselenolene (DSE) is a versatile ligand that forms complexes with various transition metals. The term refers to the metal complexes containing at least one five-membered heterocyclic molecule that contains two adjacent selenium atoms in a planar, anti-aromatic ring system. One of the earliest examples of DSE metal complexes was reported by Davison and Shawl. Since then, several other transition metals such as palladium, platinum, gold, and nickel have been shown to form complexes with DSE. The synthesis of DSE metal complexes is typically achieved by the reaction of a DSE proligand with a metal precursor in the presence of a suitable reducing agent. The choice of reducing agent depends on the oxidation state of the metal ion and the desired redox properties of the complex. The characterization of DSE metal complexes is typically performed by a combination of spectroscopic and electrochemical techniques. Infrared and Raman spectroscopy can be used to identify the vibrational modes of the ligand and the metal-ligand bonds. X-ray crystallography can provide valuable information about the molecular structure and bonding in the complex. Electrochemical techniques such as cyclic voltammetry and differential pulse voltammetry can be used to determine the redox properties of the complexes. The electronic properties of DSE metal complexes can be tuned by varying the metal center and the substitution pattern on the DSE ligand. For example, the replacement of one or both selenium atoms with sulfur or tellurium can alter the electronic properties of the complex. The substitution of alkyl or aryl groups on the DSE ligand can also affect the redox properties and stability of the complex. In recent years, there has been growing interest in the use of DSE metal complexes as catalysts for organic transformations. For example, the palladium complex [Pd(DSE)2] has been shown to be an effective catalyst for the Suzuki-Miyaura cross-coupling reaction. The platinum complex [Pt(DSE)2] has been used as a catalyst for the oxidation of alcohols and the reduction of nitroarenes. The unique electronic properties of DSE metal complexes make them promising candidates for catalytic applications in a wide range of organic transformations. In addition to their potential applications in catalysis, DSE metal complexes also exhibit interesting optoelectronic properties. For example, the palladium complex [Pd(DSE)2] has been shown to exhibit photoluminescence in the solid state. The gold complex [Au(DSE)2] has been used as a building block for the construction of luminescent materials. The unique electronic properties of DSE metal complexes make them promising candidates for applications in optoelectronics and photonics. Non-innocence In addition to the unique structural and electronic properties of 1,2-diselenolene metal complexes, their behavior as redox-active ligands has also been a subject of recent investigation. The term "non-innocent" has been used to describe ligands that undergo changes in oxidation state or electronic configuration upon coordination to a metal center, which can dramatically impact the reactivity and properties of the resulting complex. Several studies have demonstrated the non-innocence of 1,2-diselenolene ligands in their metal complexes. 
For example, the coordination of a 1,2-diselenolene ligand to a metal center can result in significant changes in the ligand's redox potential. In some cases, the ligand can become reduced upon coordination, leading to the formation of metal-ligand radical species. These radicals can then participate in a variety of redox processes, including the oxidation or reduction of other ligands or substrates. Non-innocent behavior has also been observed in the reactivity of 1,2-diselenolene metal complexes towards small molecules such as O2 and CO2. In some cases, coordination of these molecules to the metal center can induce changes in the electronic structure of the 1,2-diselenolene ligand, leading to the formation of reactive intermediates. These intermediates can then participate in a range of chemical reactions, including the activation of O2 for selective oxidation reactions. In addition to their redox and reactivity properties, 1,2-diselenolene metal complexes have also been investigated for their potential applications in areas such as catalysis, materials science, and electronics. For example, the unique electronic and optical properties of these complexes have led to their use as building blocks for the development of new organic electronic materials, such as OLEDs and solar cells. Furthermore, the non-innocence of 1,2-diselenolene ligands has been exploited for the development of novel catalytic systems. For instance, the use of 1,2-diselenolene-based ligands in transition metal catalysts has been shown to improve the selectivity and activity of a variety of catalytic reactions, including olefin polymerization and cross-coupling reactions. Overall, the combination of unique structural and electronic properties, as well as non-innocent behavior, make 1,2-diselenolene metal complexes a promising class of compounds for a wide range of applications in chemistry and materials science. Continued investigation into the fundamental properties and reactivity of these compounds is likely to lead to further discoveries and innovations in these fields. References Selenium compounds Heterocyclic compounds Transition metal compounds
Diselenolene metal complexes
Chemistry
1,157
2,004,122
https://en.wikipedia.org/wiki/Context-based%20access%20control
Context-based access control (CBAC) is a feature of firewall software, which intelligently filters TCP and UDP packets based on application layer protocol session information. It can be used for intranets, extranets and internets. CBAC can be configured to permit specified TCP and UDP traffic through a firewall only when the connection is initiated from within the network needing protection: the firewall inspects the outbound traffic and opens temporary holes for the corresponding return traffic. CBAC can, however, also be configured to inspect traffic for sessions that originate from either side of the firewall. This is the basic function of a stateful inspection firewall. Without CBAC, traffic filtering is limited to access list implementations that examine packets at the network layer, or at most, the transport layer. However, CBAC examines not only network layer and transport layer information but also examines the application-layer protocol information (such as FTP connection information) to learn about the state of the TCP or UDP session. This allows support of protocols that involve multiple channels created as a result of negotiations in the FTP control channel. Most of the multimedia protocols as well as some other protocols (such as FTP, RPC, and SQL*Net) involve multiple control channels. CBAC inspects traffic that travels through the firewall to discover and manage state information for TCP and UDP sessions. This state information is used to create temporary openings in the firewall's access lists to allow return traffic and additional data connections for permissible sessions (sessions that originated from within the protected internal network). CBAC works through deep packet inspection and hence Cisco calls it 'IOS firewall' in their Internetwork Operating System (IOS). CBAC also provides the following benefits: Denial-of-service prevention and detection Real-time alerts and audit trails See also References Computer access control Firewall software Packets (information technology) Data security Access control
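The behaviour described above is essentially stateful session tracking: outbound connections are remembered, and temporary openings are created for the matching return traffic. The following is a purely conceptual Python sketch of that idea, not Cisco IOS configuration and not a real firewall implementation.

```python
# Conceptual sketch of the stateful session tracking CBAC performs:
# inside-initiated connections are remembered, and only matching
# return traffic is allowed back through.  Illustration only.
sessions = set()   # (src_ip, src_port, dst_ip, dst_port) of inside-initiated sessions

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Inside host opens a connection: record it and let it through."""
    sessions.add((src_ip, src_port, dst_ip, dst_port))
    return True

def inbound(src_ip, src_port, dst_ip, dst_port):
    """Allow a packet from outside only if it is return traffic for a
    session that was initiated from the protected network."""
    return (dst_ip, dst_port, src_ip, src_port) in sessions

outbound("10.0.0.5", 51000, "203.0.113.7", 80)        # inside client -> web server
print(inbound("203.0.113.7", 80, "10.0.0.5", 51000))  # True: return traffic
print(inbound("198.51.100.9", 4444, "10.0.0.5", 22))  # False: unsolicited inbound
```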
Context-based access control
Technology,Engineering
408
33,074,744
https://en.wikipedia.org/wiki/Protein%20music
Protein music or, more broadly, genetic music (including DNA music) is a musical technique where music is composed by converting protein sequences or DNA sequences to musical notes. The earliest published references to genetic music in the scientific literature include a short correspondence by Hayashi and Munakata in Nature in 1984, a publication by geneticist Susumu Ohno and Midori Ohno (his wife and a musician) in Immunogenetics, and a paper in the journal Bioinformatics (then called Computer Applications in the Biosciences) co-authored by Ross D. King and Colin Angus (a member of the British psychedelic band The Shamen) in 1996. Shortly before the King and Angus publication, the French physicist and composer Joël Sternheimer (a singer also known by his stage name, Évariste) applied for a patent to use protein music to affect protein synthesis. The idea that music can affect protein synthesis is generally viewed as pseudoscientific by the molecular biology community, although the methods proposed by Sternheimer form the basis for software called Proteodyne. Applications for genetic music proposed in the scientific literature include aids to memorization and education. Theory The idea that genes and music exhibit similarities was noted even earlier than the scientific publications in the area by Douglas Hofstadter in Gödel, Escher, Bach. Hofstadter even proposes that meaning is constructed in proteins and in music. The ideas that support the possibility of creating harmonic music using this method are: The repetition process governs both the musical composition and the DNA sequence construction. The conformations and energetics of the protein secondary and tertiary structures at the atomic level. See also for full compositions made using this concept. Pink noise (the correlation structure "1/f spectra") has been found in both musical signals and DNA sequences. Models with duplication and mutation operations, such as the "expansion-modification model", are able to generate sequences with 1/f spectra. When DNA sequences are converted to music, it sounds musical. The Human Genome Project has revealed similar genetic themes not only between species, but also between proteins. Musical rendition of DNA and proteins is not only a music composition method, but also a technique for studying genetic sequences. Music is a way of representing sequential relationships in a type of informational string to which the human ear is keenly attuned. The analytic and educational potential of using music to represent genetic patterns has been recognized from secondary school to university level. Susumu Ohno and DNA music Susumu Ohno, one of the key figures in the development of protein music, proposed in the early 1980s that repetition is fundamental to the evolution of proteins. This idea was fundamental to his notion that the repetition in biological sequences would have parallels in music composition, leading Ohno to state that the "...all-pervasive principle of repetitious recurrence governs not only coding sequence construction but also human endeavor in musical composition." By implementing the concept of musical transformation in DNA sequences, and changing the fragments into musical scores, researchers are able to explore the repetitions in the sequences in terms of musical periodicities. The approach consists of assigning musical notes to nucleotide sequences, unveiling hidden patterns of relationship within genetic coding.
Music and DNA share similarities in their structure by exhibiting repeating units and motifs. Musical Patterns Periodicities and the principle of repetitious recurrences govern many aspects of life on this earth, including musical compositions and coding base sequences in genomes. This inherent similarity resulted in the effort to interconvert the two. One of music’s uses, from its creation by early Homo sapiens to the modern day, is as a time-keeping device. In Ohno’s rendition, a space and a line on the octave scale are assigned to each base, A, G, T, and C. His work compares and identifies parallels in genomic sequences and notable music from the early Baroque and Romantic periods. Beyond the parallels that can be found rhythmically in music and peptide sequences, musical patterns can be a valuable tool for identifying sequence patterns of interest. For example, work done by Robert P. Bywater and Jonathan N. Middleton has used melody generation software to identify protein folds from sequence data. Periodicities in genes and proteins Given the importance of repetition in music, it is logical to assume that deviations from purely random patterns are likely to be necessary to produce aesthetically pleasing sonic patterns. Indeed, the idea that repetition is key in the formation of functional proteins was central to Ohno's early work in the area of genetic music. The question of randomness in protein sequences has received substantial attention, with early work suggesting that protein sequences are effectively random (at least when viewed at the scale of proteomes). However, subsequent work suggests the existence of statistically important regularities in protein sequences, and experimental work has shown that periodicities can play a role in the origin of ordered proteins. Presumably, these periodicities are responsible for the aesthetically pleasing nature of music based on at least some proteins. Ohno suggested that one important deviation from randomness is palindromic amino acid sequences ("peptide palindromes") in DNA-binding proteins, such as the H1 histone. Another example of these periodic sequences is the dipeptidic repeats found in the per locus coding sequences of Drosophila melanogaster, which have been found in the mouse as well. Ohno argues that coding sequences behave periodically rather than as mere products of pure randomness, and that understanding this is key to unraveling the complexity behind genetic information; this view challenges the notion of randomness in biological processes and brings genetic sequences closer to music. Although peptide palindromes are important deviations from randomness, they are distinct from palindromic sequences in nucleic acids, which are sequences that read identically to the sequence in the same direction on complementary strands. Peptide palindromes, as defined by Ohno, are actually much more similar to palindromes in other contexts. For example, the mouse H1 histone palindrome highlighted by Ohno is KAVKPKAAKPKVAK (letters correspond to the standard one-letter amino acid codes); note that this sequence simply reads identically when written forwards or backwards and is unrelated to nucleic acid complementarity. Large-scale surveys of peptide palindromes indicate that they are present in many proteins but they are not necessarily associated with any specific protein structures. The relationship between peptide palindromes and protein music has not been studied at a large scale.
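To make the note-assignment idea concrete, the sketch below maps the four bases to four pitches and converts a short DNA fragment into MIDI note numbers, and also checks Ohno's peptide-palindrome property; the particular base-to-pitch assignment and the DNA fragment are arbitrary illustrations, not Ohno's published scheme.

```python
# Illustrative base-to-pitch mapping; the assignment is arbitrary and is
# not Ohno's published scheme.  MIDI note 60 = middle C.
BASE_TO_MIDI = {"A": 60, "C": 64, "G": 67, "T": 71}   # C, E, G, B

def dna_to_notes(sequence):
    """Convert a DNA string into a list of MIDI note numbers,
    skipping any character that is not A, C, G or T."""
    return [BASE_TO_MIDI[b] for b in sequence.upper() if b in BASE_TO_MIDI]

def is_palindrome(peptide):
    """Peptide palindrome in Ohno's sense: reads the same forwards and backwards."""
    return peptide == peptide[::-1]

print(dna_to_notes("ATGGCGT"))           # [60, 71, 67, 67, 64, 67, 71]
print(is_palindrome("KAVKPKAAKPKVAK"))   # True: the mouse H1 histone example above
```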
Practice Examples of simple protein structures converted to MIDI music files show the independence of protein music from any particular musical instrument, and the convenience of using protein structures in music composition. The software Algorithmic arts can convert raw genetic data (freely available for download on the web) to music. There are many examples of music generated by this software, both by its designer and by others. Several people have composed music using protein structures, and several students and professors have used music as a method to study proteins. The recording Sounds of HIV is a musical adaptation of the genetic material of HIV/AIDS. References Further reading Journal articles, arranged by post date: External links Clark, M. A. "A Protein Primer: A Musical Introduction to Protein Structure", WhoZoo.org. See: Henahan, Sean (03/20/98) "Protein Preludes", AccessExcellence.com. Your DNA Song "Protein Code to Music Translation", Music and DNA http://nsmn1.uh.edu/dgraur/ Susumu Ohno: Music Based on part of an Immunoglobulin Gene https://www.youtube.com/watch?v=9Q1EkWtff2I&list=PLgONlyj5hSN9z2ephX5AhLZJKVE0QmNXH&index=1 Additional information about Évariste (Joël Sternheimer) that includes a discussion of protein music: https://lachansonfrancaise.net/2016/02/05/chanteur-fou-ou-scientifique-chevronne-evariste/ (in French) Experimental music genres Musical techniques Proteins
Protein music
Chemistry
1,706
2,318,269
https://en.wikipedia.org/wiki/ECall
eCall (an abbreviation of "emergency call") is an initiative by the European Union, intended to bring rapid assistance to motorists involved in a collision anywhere within the European Union. The aim is for all new cars to incorporate a system that automatically contacts the emergency services in the event of a serious accident, sending location and sensor information. eCall was made mandatory in all new cars approved for manufacture within the European Union as of April 2018. History The concept of eCall was presented in 1999 by European civil servant Luc Tytgat, during the launch of the European Commission's Galileo project. One year earlier, 170 experts met in Brussels, invited by the Commission, to analyse the European dependence on the American GPS system, but also to gather civilian applications propositions. In 2001, the project was first presented as a European calling system, in the context of the German youth science competition Jugend forscht. In 2007, the project was postponed. In 2011, the project was pushed again by the European Commission. In the summer of 2013, the project was adopted and was scheduled to be completed by 1 October 2015. On 6 September 2013, trade associations operating in the automotive after market (like AIRC, CLEPA, FIA, FIGEAFA) welcomed the European Commission's eCall initiative and fully support the Europe-wide mandatory introduction of eCall by 2015 in all new type-approved cars and light commercial vehicles. AIRC (Association des Reparateurs en Carrosserie) General Secretary Karel Bukholczer said that eCall represents an important initiative to reduce fatalities and the severity of injuries on Europe's roads. Slovenia introduced eCall in December 2015. Italy deployed a pilot program in selected regions in May 2017, and Sweden adopted eCall in October 2017. Since 2018, eCall is part of an UNECE effort to standardize the devices with the UNECE Regulation 144 related to accident emergency call components (AECC), accident emergency call devices (AECD), and accident emergency call systems (AECS). The deployment of eCall devices was made mandatory in all new cars sold in the European Union on 1 April 2018. IP-based emergency services mechanisms are introduced to support the next generation of the pan-European in-vehicle emergency call service in May 2017. The de jure ITU-T Recommendation identifies requirements of an internet of things (IoT)-based automotive emergency response system (AERS), i.e. eCall, for factory preinstalled and aftermarket devices in March 2018. The ITU-T Recommendation identifies minimum set of data structure for automotive emergency response system: ITU-T Y.4467, and minimum set of data transfer protocol for automotive emergency response system: ITU-T Y.4468 in January 2020. In November 2020, new vehicles in United Arab Emirates started to have the eCall systems. Concept The eCall initiative aims to deploy a device installed in all vehicles that will automatically dial 112 in the event of a serious road accident, and wirelessly send airbag deployment and impact sensor information, as well as GPS or Galileo coordinates to local emergency agencies. A manual call button is also provided. eCall builds on E112. According to some estimates, eCall could reduce emergency response times by 40 percent in urban areas and by 50 percent in rural areas. Many companies are involved with telematics technology to use in different aspects of eCall including in-vehicle systems, wireless data delivery, and public safety answering point systems. 
Standardization of communication protocols and human language issues are some of the obstacles. Prototypes have been successfully tested with GPRS and in-band signalling over cellular networks. At the same time proprietary eCall solutions that rely on SMS exist already today from car makers such as BMW, PSA and Volvo Cars. The project is also supported by the European Automobile Manufacturers Association (ACEA), an interest group of European car, bus, and truck manufacturers, and ERTICO. Many of the stakeholder companies involved with telematics technology have membership in ERTICO or ACEA. Privacy concerns As with all schemes to add mandatory wireless transceivers to cars, there are privacy concerns to be addressed. Depending on the final implementation of the system, it may be possible for the system to become activated without an actual crash taking place. Also, the occupants of the car have no control over the remote activation of the microphone, making a car susceptible to eavesdropping. Similar initiatives In Russia, a fully interoperable system called ERA-GLONASS is being deployed, with the aim to require an eCall terminal and a GPS/GLONASS receiver in new vehicles by 2015–2017. In North America, a similar service is available from GM via their OnStar service, and by Ford with "Sync with Emergency Assistance". Essential patent As of 2023, Avanci maintains a portfolio of intellectual property required to meet legally required eCall functionality in automobiles at a cost of $3/vehicle for licensing. See also Enhanced 911 Event data recorder References External links 2015 establishments in Slovenia 2018 establishments in Europe Automotive safety Emergency communication Information technology projects Information technology organizations based in Europe
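As a rough illustration of the payload described in the Concept section, the sketch below bundles crash-sensor status, a manual-trigger flag and satellite coordinates into one record; the field names and structure are hypothetical and do not follow the standardised minimum set of data (MSD) encoding.

```python
from dataclasses import dataclass

@dataclass
class EmergencyMessage:
    """Hypothetical, simplified stand-in for the data an eCall unit sends;
    field names are illustrative and do not follow the standardised MSD format."""
    latitude: float           # from the GPS/Galileo receiver
    longitude: float
    airbag_deployed: bool     # airbag/impact sensor information
    manually_triggered: bool  # in-cabin eCall button pressed

    def summary(self):
        cause = "manual" if self.manually_triggered else "automatic (crash sensors)"
        return (f"eCall ({cause}) at {self.latitude:.5f}, {self.longitude:.5f}; "
                f"airbag deployed: {self.airbag_deployed}")

msg = EmergencyMessage(48.20849, 16.37208, airbag_deployed=True, manually_triggered=False)
print(msg.summary())
```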
ECall
Technology,Engineering
1,056
42,436,013
https://en.wikipedia.org/wiki/SGM%20Light
SGM Light A/S is a Danish manufacturer of LED lighting for the concert touring, entertainment, architectural, commercial, and industrial sectors. The company is based in Aarhus, Denmark. History SGM Technology for Lighting was founded in 1975 in Italy, by Gabriele Giorgi and Maurizio Guidi — the company name a truncation of ‘Societa Gabriele Maurizio’. In the early days they were known for producing a diverse catalogue of products for the emerging disco industry — ranging from illuminated dancefloor modules, ‘bubblesmoke’ machines and controllers — from their base in Pesaro. In April 2009 ownership was passed by president Gabriele Giorgi and his daughter Alessandra to long-standing Italian pro audio company, RCF Group. Peter Johansen was brought in to head up R&D in late 2010 — marking his return to the industry following a ten-year absence, after earlier setting up Martin Professional which he subsequently floated on the Copenhagen Stock Exchange. He recruited many of his former R&D team and quickly relocated the operation to Denmark — to work on innovations for the entertainment and architectural segments. In February 2012 Peter Johansen formed a consortium to acquire the company from the RCF Group, renaming it SGM A/S. Peter Johansen was joined by the LED specialists from his Danish R&D team led by engineer Finn Kallestrup and former members of his Martin Professional sales force as well as some of the company's Italian personnel. With R&D, administration and after-sales based in Denmark, manufacturing initially took place at bases in Italy, Thailand, and China before being consolidated into a single, purpose-built factory in Denmark. To broaden its marketing reach, SGM has set up subsidiaries in key territories around the world, including SGM Lighting Inc based in Orlando, FL and serving the North American market, SGM Deutschland and SGM UK. In November 2015, SGM A/S was restructured as SGM Light A/S with new ownership and new capital from an Italian company, while maintaining the management, the entire team behind the company, the product portfolio, and the distribution network. In October 2019, Peter Johansen stepped down from his role as SGM Light's CEO. Three Executive Directors (Torben Balmer, Mikkel Falk, and Ulrik Jakobsen) managed the company until 2021, when Ulrik Jakobsen became SGM's CEO. Entertainment Products SGM Light manufactures a wide range of IP65 or IP66 rated products for the entertainment industry. These products include moving heads, washes, strobes, blinders, and effect lighting products. Product releases (luminaries) 2010. X-5 (LED White strobe) - Product Manager: Peter Johansen 2011. P-5 (LED RGBW wash) - Product Manager: Peter Johansen 2011. P-5 White (LED White wash) - Product Manager: Peter Johansen 2012. XC-5 (LED RGB strobe) - Product Manager: Peter Johansen 2012. Q-7 (LED RGBW flood strobe blinder) - Product Manager: Peter Johansen 2012. LB-100 (LED RGB balls) - Product Manager: Peter Johansen 2012. LD-5 (LED RGB dome) - Product Manager: Peter Johansen 2012. LT-100 / LT-200 (LED RGB tubes) - Product Manager: Peter Johansen 2013. Q-7 White (LED White flood strobe blinder) - Product Manager: Peter Johansen 2013. G-Spot (LED RGB Spot moving head) - Product Manager: Peter Johansen 2013. SixPack (LED RGBA batten) - Product Manager: Peter Johansen 2014. G-Profile (LED RGB Profile moving head) - Product Manager: Morten Kristensen 2014. P-2 (LED RGBW wash) - Product Manager: Morten Kristensen 2014. Q-2 (LED RGBW wash) - Product Manager: Morten Kristensen 2014. 
Q-2 White (LED White wash) - Product Manager: Morten Kristensen 2015. G-1 Beam (LED White Beam moving head) - Product Manager: Morten Kristensen 2015. G-1 Wash (LED RGBW Wash moving head) - Product Manager: Morten Kristensen 2016. G-Wash (LED RGB Wash moving head) - Product Manager: Morten Kristensen 2016. G-4 Wash (LED RGB Wash moving head) - Product Manager: Peter Johansen 2016. G-4 Wash-Beam (LED RGB Wash moving head) - Product Manager: Peter Johansen 2016. S-4 Fresnel (LED RGB Fresnel) - Product Manager: Peter Johansen 2017. G-Spot Turbo / G-Profile Turbo (LED RGB Spot moving head) - Product Manager: Peter Johansen 2017. G-7 Spot (LED White Spot moving head) - Product Manager: Ben Díaz 2017. P-6 (LED RGBW wash) - Product Manager: Ben Díaz 2019: G-7 BeaSt (LED White Beam moving head) - Product Manager: Ben Díaz 2021: Q-8 (LED RGBW wash strobe blinder) - Product Manager: Ben Díaz 2023: Touring VPL (Touring Video Pixel Linear) Architectural Products Besides the entertainment focused product range, SGM Light also develops and builds IP66 rated lighting fixtures for architectural installations. These fixtures are marked POI (Permanent Outdoor Installation) and specifically developed to fulfill the requirements of installations on buildings, bridges, cruise ships, etc. Product releases (luminaries) 2015. R-2 / R-2 W (LED RGB and White rail light) - Product Manager: Morten Kristensen 2016. I-5 / I-5 W / I-5 R / I-5 G / I-5 B / I-5 UV (LED long-throw wash) - Product Manager: Peter Johansen 2016. I-2 / I-2 W / I-2 R / I-2 G / I-2 B / I-2 UV (LED long-throw wash) - Product Manager: Peter Johansen 2016. POI series: G-Spot, G-Profile, G-Wash, P, Q, i - Product Manager: Peter Johansen 2017. VPL series (LED RGB linear light) - Product Manager: Ben Díaz 2018. G-7 Spot POI / G-Spot Turbo POI / G-Profile Turbo POI (LED moving heads) - Product Manager: Ben Díaz 2018: P-6 POI (LED RGBW wash) - Product Manager: Ben Díaz 2020: G-7 BeaSt POI (LED White Beam moving head) - Product Manager: Ben Díaz 2021: Q-8 POI (LED RGBW wash strobe blinder) - Product Manager: Ben Díaz 2021: i-6 / i-6 POI (LED RGBW long-throw wash) - Product Manager: Ben Díaz Industry awards In October 2013 SGM won the PLASA Award for Innovation for developing the weather-resistant G-Spot LED moving head. In April 2015 SGM won the Prolight+Sound International Press Award (pipa) for best lighting product for the Q-7 flood, blind, strobe. References Electrical equipment manufacturers
SGM Light
Engineering
1,520
10,669,204
https://en.wikipedia.org/wiki/Energy%20independence
Energy independence is independence or autarky regarding energy resources, energy supply and/or energy generation by the energy industry. Energy dependence, in general, refers to mankind's general dependence on either primary or secondary energy for energy consumption (fuel, transport, automation, etc.). In a narrower sense, it may describe the dependence of one country on energy resources from another country. Energy dependence has been identified as one of several factors (energy sources diversification, energy suppliers diversification, energy sources fungibility, energy transport, market liquidity, energy resources, political stability, energy intensity, GDP) negatively contributing to energy security. Generally, a higher level of energy dependence is associated with higher risk, because of the possible interference of trade regulations, international armed conflicts, terrorist attacks, etc. Techniques for energy independence Renewable energy A study found that transition from fossil fuels to renewable energy systems reduces risks from mining, trade and political dependence because renewable energy systems don't need fuel – they depend on trade only for the acquisition of materials and components during construction. Renewable energy is found to be an efficient way to ensure energy independence and security. It also supports the transition to a low carbon economy and society. Ways to manage the variability of renewable energy – such as little solar power on cloudy days – include dispatchable generation and smart grids. Bioenergy hydropower and hydrogen energy could be used for such purposes alongside storage-options like batteries. Nuclear power Several countries are conducting extensive research and development programs around renewable energy sources like solar, wind, water, and nuclear energy in hopes to achieve energy independence. However, because solar, wind, and water cannot always be derived as an energy source, nuclear energy is seen as a near-universal alternative that is efficient, safe, and combats the climate crisis. Under the conceived notion that the expansion of and investment in nuclear energy power plants is a key step in the goal of achieving energy independence many countries, and companies, are supporting nuclear power research efforts. The International Thermonuclear Experimental Reactor (ITER), located in France, is an experimental tokamak nuclear fusion reactor that is a collaboration between 35 countries. This project was launched in 2007 and still under construction today. In 2020, the U.S. Department of Energy awarded $160 million in initial funding to TerraPower and X-energy to build advanced nuclear reactors that will be affordable to construct and operate. Both companies are expected to produce their product within 7 years. In that same tone, there are several other companies and institutions across the globe that are gaining attention from their nuclear power innovations and research efforts. Commonwealth Fusion Systems, founded in 2018, is focusing on the development of nuclear fusion. In 2020, The Energy Impact Center launched its OPEN100 project, the world's first open-source blueprint for the design, construction, and financing of nuclear power plants. General Fusion is a Canadian company currently developing a fusion power device, based on magnetized target fusion. Flibe Energy aims to tackle the future of nuclear energy by researching and developing the liquid fluoride thorium reactor (LFTR). 
In addition, safe and cost-effective storage of nuclear waste in the Waste Isolation Pilot Plant and full version of this underground storage in New Mexico is important for the nuclear fuel cycle. Global examples Energy independence is being attempted by large or resource-rich and economically-strong countries like the United States, Russia, China and the Near and Middle East, but it is so far an idealized status that at present can be only approximated by non-sustainable exploitation of a country's (non-renewable) natural resources. Another factor in reducing dependence is the addition of renewable energy sources to the energy mix. Usually, a country relies on local and global energy renewable and non-renewable resources, a mixed-model solution that presumes various energy sources and modes of energy transfer between countries like electric power transmission, oil transport (oil and gas pipelines and tankers), etc. The European dependence on Russian energy is a good example because Russia is Europe's main supplier of hard coal, crude oil, and natural gas. Oil wars in and between the Middle East, Russia, and the United States that have made markets unpredictable and volatile are also a great example as to why energy advocates and experts suggest countries invest in energy independence. The international dependence of energy resources exposes countries to vulnerability in every aspect of life — countries rely on energy for food, infrastructure, security, transportation, and more. In the Scottish Independence debate, energy independence is a key argument in favour of Scottish exit. Since the discovery of large oil fields, pro-independence proponents have used the tagline "It's Scotland's Oil" in campaigns. Scottish oil and gas production constitutes 82% of the UK's oil and gas. Accordingly, economic and political independence would be followed by high-stakes energy agreements, wherein some argue the fiscal power would lie with Scotland. Political independence would supposedly return decisions about the future of energy to the Scottish people, who are more likely to vote in favour of renewable energy on Scottish soil. Therefore, less reliance on international gas supplies, and a focus on low-emission local energy is a key tenet of the "Building a New Scotland" prospectus promoting Scottish Independence. See also Related concepts Energy resilience Energy security Energy development Energy policy Efficient energy use National efforts Making Sweden an Oil-Free Society United States energy independence Energy policy of Turkey India's three-stage nuclear power programme Phase-out of fossil fuel vehicles References External links https://www.iea.org/publications/freepublications/publication/KeyWorld_Statistics_2015.pdf Energy economics
Energy independence
Environmental_science
1,154
2,550,321
https://en.wikipedia.org/wiki/Gray%20iron
Gray iron, or grey cast iron, is a type of cast iron that has a graphitic microstructure. It is named after the gray color of the fracture it forms, which is due to the presence of graphite. It is the most common cast iron and the most widely used cast material based on weight. It is used for housings where the stiffness of the component is more important than its tensile strength, such as internal combustion engine cylinder blocks, pump housings, valve bodies, electrical boxes, and decorative castings. Grey cast iron's high thermal conductivity and specific heat capacity are often exploited to make cast iron cookware and disc brake rotors. The use of grey iron on brakes in freight trains has been greatly reduced in the European Union over concerns regarding noise pollution. Deutsche Bahn, for example, had replaced grey iron brakes on 53,000 of its freight cars (85% of their fleet) with newer, quieter models by 2019, in part to comply with a law that came into force in December 2020. Structure A typical chemical composition to obtain a graphitic microstructure is 2.5 to 4.0% carbon and 1 to 3% silicon by weight. Graphite may occupy 6 to 10% of the volume of grey iron. Silicon is important for making grey iron as opposed to white cast iron, because silicon is a graphite-stabilizing element in cast iron, which means it helps the alloy produce graphite instead of iron carbides; at 3% silicon almost no carbon is held in chemical form as iron carbide. Another factor affecting graphitization is the solidification rate; the slower the rate, the greater the time for the carbon to diffuse and accumulate into graphite. A moderate cooling rate forms a more pearlitic matrix, while a slow cooling rate forms a more ferritic matrix. To achieve a fully ferritic matrix the alloy must be annealed. Rapid cooling partly or completely suppresses graphitization and leads to the formation of cementite, which is called white iron. The graphite takes on the shape of a three-dimensional flake. In two dimensions, as a polished surface, the graphite flakes appear as fine lines. The graphite flakes have no appreciable strength, so they can be treated as voids. The tips of the flakes act as preexisting notches at which stresses concentrate, and the material therefore behaves in a brittle manner. The presence of graphite flakes makes the grey iron easily machinable, as the metal tends to crack easily across the graphite flakes. Grey iron also has very good damping capacity and hence it is often used as the base for machine tool mountings. Classifications In the United States, the most commonly used classification for gray iron is ASTM International standard A48. This orders gray iron into classes which correspond with its minimum tensile strength in thousands of pounds per square inch (ksi); e.g. class 20 gray iron has a minimum tensile strength of 20 ksi (about 140 MPa). Class 20 has a high carbon equivalent and a ferrite matrix. Higher-strength gray irons, up to class 40, have lower carbon equivalents and a pearlite matrix. Gray iron above class 40 requires alloying to provide solid solution strengthening, and heat treating is used to modify the matrix. Class 80 is the highest class available, but it is extremely brittle. ASTM A247 is also commonly used to describe the graphite structure. Other ASTM standards that deal with gray iron include ASTM A126, ASTM A278, and ASTM A319. In the automotive industry, the SAE International (SAE) standard SAE J431 is used to designate grades instead of classes. These grades are a measure of the tensile strength-to-Brinell hardness ratio. 
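Because the ASTM A48 class number is defined as the minimum tensile strength in ksi, converting a class to SI units is a direct unit change. The short sketch below is an illustrative helper (not part of any standard or library); the only assumed constant is the usual ksi-to-MPa conversion factor:

```python
# Illustrative sketch: ASTM A48 class number -> minimum tensile strength.
# The class number equals the minimum tensile strength in ksi by definition,
# so only a unit conversion is needed (1 ksi = 6.89476 MPa).

KSI_TO_MPA = 6.89476

def a48_min_tensile_strength(class_number: int) -> tuple[float, float]:
    """Return (ksi, MPa) minimum tensile strength for an ASTM A48 class."""
    if class_number <= 0:
        raise ValueError("ASTM A48 class numbers are positive (e.g. 20, 30, 40).")
    ksi = float(class_number)      # by definition of the class number
    return ksi, ksi * KSI_TO_MPA

if __name__ == "__main__":
    for cls in (20, 30, 40, 80):
        ksi, mpa = a48_min_tensile_strength(cls)
        print(f"Class {cls}: >= {ksi:.0f} ksi ({mpa:.0f} MPa)")
```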
The variation of the tensile modulus of elasticity among the various grades is a reflection of the percentage of graphite in the material: the graphite itself has neither strength nor stiffness, and the space it occupies acts like a void, thereby creating a spongy material. Advantages and disadvantages Gray iron is a common engineering alloy because of its relatively low cost and good machinability, which results from the graphite lubricating the cut and breaking up the chips. It also has good galling and wear resistance because the graphite flakes self-lubricate. The graphite also gives gray iron an excellent damping capacity because it absorbs the energy and converts it into heat. Grey iron cannot be worked (forged, extruded, rolled, etc.) even at elevated temperature. Gray iron also experiences less solidification shrinkage than other cast irons that do not form a graphite microstructure. The silicon promotes good corrosion resistance and increased fluidity when casting. Gray iron is generally considered easy to weld. Compared to the more modern iron alloys, gray iron has a low tensile strength and ductility; therefore, its impact and shock resistance is almost non-existent. See also Meehanite Notes References Further reading Cast iron Ferrous alloys Iron
Gray iron
Chemistry
1,049
57,241,168
https://en.wikipedia.org/wiki/International%20scientific%20committee%20on%20price%20history
The International Scientific Committee on Price History was created in 1929 by William Beveridge and Edwin Francis Gay after receiving a five-year grant from the Rockefeller Foundation. The national representatives were William Beveridge for Great Britain, Moritz John Elsas for Germany, Edwin Francis Gay for the United States, Earl J. Hamilton for Spain, Henri Hauser for France and Alfred Francis Pribram for Austria; later, Franciszek Bujak for Poland and Nicolaas Wilhelmus Posthumus for the Netherlands also joined; Arthur H. Cole was in charge of finances for the whole project. Books by the committee Hamilton (Earl J.), American Treasure and the Price Revolution in Spain (1501–1650), 1934. Hamilton (Earl J.), Money, Prices and Wages in Valencia, Aragon and Navarre (1351–1500), 1936. Hauser (Henri), Recherches et documents sur l’histoire des prix en France de 1500 à 1800, 1936. Elsas (Moritz John), Umriß einer Geschichte der Preise und Löhne in Deutschland vom ausgehenden Mittelalter bis zum Beginn des 19. Jarhunderts, 3 vol., 1936–1949. Přibram (Alfred Francis), Materialien zur Geschichte der Preise und Löhne in Österreich, 1938. Cole (Arthur Harrison), Wholesale Commodity Prices in the United States 1700–1861, 1938. Beveridge (William H.), Prices and Wages in England from the 12th to the 19th Century, 1939. Posthumus (Nicolaas), Nederlandsche Prijsgeschiedenis, 1943–1964. Hamilton (Earl J.), War and Prices in Spain (1651–1800), 1947. References Arthur H. Cole, Ruth Crandall, "The International Scientific Committee on Price History", The Journal of Economic History, 24/3, September 1964, p. 381–388. Olivier Dumoulin, "Aux origines de l'histoire des prix", Annales. Économies, sociétés, civilisations, 45/2, 1990, . Julien Demade, Produire un fait scientifique. Beveridge et le Comité international d'histoire des prix, Paris, Publications de la Sorbonne, 2018. Historiography 1930s in economic history History of science 1940s in economic history
International scientific committee on price history
Technology
493
9,591,673
https://en.wikipedia.org/wiki/Unglie
An unglie ("finger") is an obsolete unit of length equal to three-fourths of an inch (1.905 cm) that was used in India and Pakistan. After metrication in both countries, the unit became obsolete. See also List of customary units of measurement in South Asia References Units of length Customary units in India Obsolete units of measurement
Unglie
Mathematics
77
47,572,355
https://en.wikipedia.org/wiki/Marasmius%20aporpus
Marasmius aporpus is a species of fungus in the large agaric genus Marasmius. Found in Chile, it was described as new to science in 1969 by mycologist Rolf Singer. See also List of Marasmius species References External links aporpus Fungi described in 1969 Fungi of Chile Taxa named by Rolf Singer Fungus species
Marasmius aporpus
Biology
72
20,207,543
https://en.wikipedia.org/wiki/VUF-8430
VUF-8430 is a histamine agonist selective for the H4 subtype. References Histamine agonists Guanidines Thioureas
VUF-8430
Chemistry
37
15,936,151
https://en.wikipedia.org/wiki/Patatin
Patatin is a family of glycoproteins found in potatoes (Solanum tuberosum) and is also known as tuberin, as it is commonly found within vacuoles of parenchyma tissue in the tuber of the plant. They consist of about 366 amino acids and have an isoelectric point of 4.9. They have a molecular weight ranging from 40 to 45 kDa, but are commonly found as an 80 kDa dimer. The main function of patatin is as a storage protein, but it also has lipase activity and can cleave fatty acids from membrane lipids. The patatin protein makes up about 40% of the soluble protein in potato tubers. Members of this protein family have also been found in animals. Allergy Patatin is identified as a major cause of potato allergy. It has been found to be similar to latex, and when it comes into contact with open skin, an increase in immunoglobulin E has been observed, which causes allergic reactions and symptoms such as asthmatic symptoms or atopic dermatitis. It is unclear why the plant does this; however, it could be a potential defense mechanism against insects. Function Functionally, patatin serves as a key contributor to the antioxidant activity in potato tubers, which helps keep the potato fresh. Additionally, patatins function as acyl hydrolases, which break down different types of substrates. Notably, patatins also demonstrate β-1,3-glucanase activity, suggesting their involvement in breaking down polysaccharides. This diverse enzymatic activity contributes to the nutritional composition of the potato. Beyond their role as storage proteins, patatins play a significant part in the plant's defense mechanisms against pests and fungal pathogens. The galactolipase and β-1,3-glucanase activities exhibited by patatins are believed to contribute to the plant's resistance to external threats. This dual functionality underscores the importance of patatins in safeguarding the potato plant against potential environmental challenges. Beyond its role as a storage protein, patatin's functions extend to antioxidant activity and categorization as an esterase enzyme complex. It demonstrates enzymatic activity in lipid metabolism through lipid acyl hydrolases (LAHs) and acyl transferases. This activity varies across potato cultivars, extraction techniques, and fatty acid substrates. Structure The patatin genes are located at a single major locus, comprising both functional and non-functional genes. Patatin isoforms exhibit considerable variability among different potato cultivars. Patatin's primary residence in the vacuole, alongside protease inhibitor variants, positions it as a major player in potato tuber proteins. The ngLOC software predicts 296 vacuolar proteins, with 450 putative vacuolar proteins identified through mass spectrometric sequencing. Notably, the tuber vacuole is recognized as a protein storage vacuole, with a distinct absence of proteolytic or glycolytic enzymes. Structurally, patatin emerges as a tertiary stabilized protein, exhibiting stability up to 45 °C. Beyond this threshold, its secondary structure begins to unfold, with the α-helical portion denaturing at 55 °C. This vulnerability to temperature changes highlights the delicate balance in maintaining its structural integrity. Patatin's hydrolase activity, attributed to its parallel β-sheet core with a catalytic serine located in the nucleophilic elbow loop, places it within the hydrolase family. 
This core structure is crucial for its lipid acyl hydrolase (LAH) activity, providing insights into its enzymatic functions and potential participation in plant defenses. One study delves into the multifaceted properties of patatin, the predominant protein in potatoes, revealing its structural diversity through the identification of several isoforms. Notably rich in essential amino acids, patatin emerges as a valuable source of nutrition. The glycoprotein nature of patatin, characterized by O-linked glycosylation, incorporates various monosaccharides, including fucose, indicating a fucosylated glycan structural feature. The specific binding of patatin to AAL, a fucose-affine lectin, underscores its distinctive glycan composition. Moving beyond its molecular characteristics, the research explores the regulatory effects of patatin on lipid metabolism, fat catabolism, fat absorption, and lipase activity in zebrafish larvae subjected to high-fat feeding. Results suggest that patatin, at a concentration of 37.0 μg/mL, promotes lipid decomposition metabolism by 23% and exhibits inhibitory effects on lipase activity and fat absorption, positioning it as a potential natural constituent with anti-obesity properties. These findings illuminate the diverse facets of patatin, shedding light on its nutritional significance and its prospective role in combating obesity. Isoforms Patatin is a complex assembly of proteins represented by two multigene families: class I, found in large concentrations in the tuber, and class II, found in smaller concentrations throughout the potato plant. Isoforms A, B, C, and D exhibit charge-based differences, with isoform A presenting the lowest surface charge. These isoforms, homologous in nature, differ in molecular masses and ratios, showcasing their structural diversity. Glycosylation Patatin isoforms undergo glycosylation, impacting their molecular masses and contributing to variations between isoforms. Experimental discrepancies in molar mass differences indicate potential glycosylation between protein and carbohydrates in potatoes. This glycosylation may play a role in the protein's functional characteristics. Patatin-like phospholipase The patatin-like phospholipase (PNPLA) domain, found in patatin and related proteins, is widespread across diverse life forms, spanning eukaryotes and prokaryotes. These proteins are involved in a variety of biological functions, encompassing sepsis induction, host colonization, triglyceride metabolism, and membrane trafficking. Key features of PNPLA domain-containing proteins include their lipase and transacylase properties, signifying their significant roles in maintaining lipid and energy homeostasis across different organisms and biological contexts. References Potatoes
Patatin
Chemistry
1,317
34,017,528
https://en.wikipedia.org/wiki/Tension%20control%20bolt
A tension control bolt (TC bolt) is a heavy duty bolt used in steel frame construction. The head is usually domed and is not designed to be driven. The end of the shank has a spline on it which is engaged by a special power wrench which prevents the bolt from turning while the nut is tightened. When the appropriate tension is reached the spline shears off. See also Screw list Shear pin References Metalworking Threaded fasteners Torque
Tension control bolt
Physics
94
16,821,253
https://en.wikipedia.org/wiki/CFD-DEM
The CFD-DEM model, or Computational Fluid Dynamics / Discrete Element Method model, is a process used to model or simulate systems combining fluids with solids or particles. In CFD-DEM, the motion of the discrete solid or particle phase is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by the local averaged Navier–Stokes equations that can be solved using the traditional Computational Fluid Dynamics (CFD) approach. The interactions between the fluid phase and the solids phase are modeled by use of Newton's third law. The direct incorporation of CFD into DEM to study the gas fluidization process has so far been attempted by Tsuji et al. and most recently by Hoomans et al., Deb et al. and Peng et al. A recent overview of fields of application was given by Kieckhefen et al. Parallelization OpenMP has been shown by Amritkar et al. to be more efficient than MPI in performing coupled CFD-DEM calculations in a parallel framework. Recently, a multi-scale parallel strategy has been developed. Generally, the simulation domain is divided into many sub-domains and each process calculates only one sub-domain, using MPI to pass boundary information; for each sub-domain, the CPUs are used to solve the fluid phase while general-purpose GPUs are used to solve the movement of particles. However, in this computation method CPUs and GPUs work in serial. That is, the CPUs are idle while the GPUs are calculating the solid particles, and the GPUs are idle when the CPUs are calculating the fluid phase. To further accelerate the computation, the CPU and GPU computing can be overlapped using the shared memory of a Linux system. Thus, the fluid phase and particles can be calculated at the same time. Reducing computation cost using Coarse Grained Particles The computation cost of CFD-DEM is huge due to a large number of particles and small time steps to resolve particle-particle collisions. To reduce computation cost, many real particles can be lumped into a Coarse Grained Particle (CGP). The diameter of the CGP is calculated by the following equation: $d_{CGP} = \sqrt[3]{k}\, d_p$, where $k$ is the number of real particles in a CGP and $d_p$ is the diameter of a real particle. Then, the movement of CGPs can be tracked using DEM. In simulations using Coarse Grained Particles, the real particles in a CGP are subjected to the same drag force, same temperature and same species mass fractions. The momentum, heat and mass transfers between fluid and particles are first calculated using the diameter of real particles and then scaled by $k$ times. The value of $k$ is directly related to computation cost and accuracy. When $k$ is equal to unity, the simulation becomes DEM-based, achieving results that are of the highest possible accuracy. As this ratio increases, the speed of the simulation increases drastically but its accuracy deteriorates. Apart from an increase in speed, general criteria for selecting a value for this parameter are not yet available. However, for systems with distinct mesoscale structures, like bubbles and clusters, the parcel size should be small enough to resolve the deformation, aggregation, and breakage of bubbles or clusters. The process of lumping particles together reduces the collision frequency, which directly influences the energy dissipation. 
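As a concrete illustration of the coarse-graining step just described, the following sketch computes the coarse-grained particle diameter from the lumping factor k and scales a per-particle fluid-particle transfer term by k. The helper names are hypothetical, and the Stokes drag law used here is only a simple low-Reynolds-number stand-in for the drag correlations used in production CFD-DEM codes:

```python
import math

def cgp_diameter(d_real: float, k: int) -> float:
    """Coarse-grained particle diameter from volume conservation: d_CGP = k**(1/3) * d_real."""
    return (k ** (1.0 / 3.0)) * d_real

def scaled_drag_force(d_real: float, k: int, mu_fluid: float, slip_velocity: float) -> float:
    """
    Drag on one coarse-grained particle.
    The force is first evaluated for a single real particle (here: Stokes drag,
    F = 3*pi*mu*d*U, valid only at low particle Reynolds number) and then
    multiplied by k, the number of real particles represented by the CGP.
    """
    f_single = 3.0 * math.pi * mu_fluid * d_real * slip_velocity
    return k * f_single

if __name__ == "__main__":
    d_p = 100e-6   # real particle diameter [m]
    k = 1000       # real particles per coarse-grained particle
    print(f"CGP diameter: {cgp_diameter(d_p, k) * 1e6:.1f} micrometres")
    print(f"Scaled drag:  {scaled_drag_force(d_p, k, mu_fluid=1.8e-5, slip_velocity=0.1):.3e} N")
```

This sketch only covers the geometric and transfer-term scaling; the reduced collision frequency and energy dissipation noted above are a separate issue, addressed next.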
To account for this error, an effective restitution coefficient was proposed by Lu et al., based on the kinetic theory of granular flow, by assuming that the energy dissipation during collisions is identical for the original system and the coarse-grained system. References Computational physics
CFD-DEM
Physics
728
19,803,106
https://en.wikipedia.org/wiki/Nadir%20%28topography%29
In topography, a nadir is a point on a surface that is lower in elevation than all points immediately adjacent to it. Mathematically, a nadir is a local minimum of elevation. A nadir may be the lowest point of a dry basin or depression, or the deepest point of a body of water or ice. The nadir of a body of water is often called a "deep", as in the Challenger Deep, the nadir of the Earth's oceans. See also Depression (geology) Endorheic basin Geoid List of places on land with elevations below sea level Maxima and minima Summit (topography) (antonym) Topography References Cartography Geodesy Physical geography Surveying Topography
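Since a nadir is simply a local minimum of elevation, it can be located numerically on a gridded elevation model by comparing each cell with its immediate neighbours. The sketch below is an illustrative example only (the small elevation array is made up for demonstration) and does not refer to any particular GIS library:

```python
def find_nadirs(elevation):
    """Return (row, col) cells that are lower than all immediately adjacent cells."""
    rows, cols = len(elevation), len(elevation[0])
    nadirs = []
    for r in range(rows):
        for c in range(cols):
            neighbours = [
                elevation[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            ]
            # A nadir is strictly lower than every adjacent cell.
            if all(elevation[r][c] < n for n in neighbours):
                nadirs.append((r, c))
    return nadirs

if __name__ == "__main__":
    # Made-up elevations in metres; the -11034 cell mimics a deep such as the Challenger Deep.
    grid = [
        [120, 110, 130],
        [115, -11034, 125],
        [140, 135, 150],
    ]
    print(find_nadirs(grid))   # -> [(1, 1)]
```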
Nadir (topography)
Mathematics,Engineering
146
18,867,510
https://en.wikipedia.org/wiki/Conditional%20mutual%20information
In probability theory, particularly information theory, the conditional mutual information is, in its most basic form, the expected value of the mutual information of two random variables given the value of a third. Definition For random variables $X$, $Y$, and $Z$ with support sets $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, we define the conditional mutual information as $I(X;Y|Z) = \int_{\mathcal{Z}} D_{\mathrm{KL}}\left(P_{(X,Y)|Z} \,\|\, P_{X|Z} \otimes P_{Y|Z}\right) dP_Z$. This may be written in terms of the expectation operator: $I(X;Y|Z) = \mathbb{E}_Z\left[D_{\mathrm{KL}}\left(P_{(X,Y)|Z} \,\|\, P_{X|Z} \otimes P_{Y|Z}\right)\right]$. Thus $I(X;Y|Z)$ is the expected (with respect to $Z$) Kullback–Leibler divergence from the conditional joint distribution $P_{(X,Y)|Z}$ to the product of the conditional marginals $P_{X|Z}$ and $P_{Y|Z}$. Compare with the definition of mutual information. In terms of PMFs for discrete distributions For discrete random variables $X$, $Y$, and $Z$ with support sets $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, the conditional mutual information $I(X;Y|Z)$ is as follows $I(X;Y|Z) = \sum_{z\in\mathcal{Z}} p_Z(z) \sum_{y\in\mathcal{Y}} \sum_{x\in\mathcal{X}} p_{X,Y|Z}(x,y|z) \log \frac{p_{X,Y|Z}(x,y|z)}{p_{X|Z}(x|z)\,p_{Y|Z}(y|z)}$, where the marginal, joint, and/or conditional probability mass functions are denoted by $p$ with the appropriate subscript. This can be simplified as $I(X;Y|Z) = \sum_{z\in\mathcal{Z}}\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}} p_{X,Y,Z}(x,y,z) \log \frac{p_Z(z)\,p_{X,Y,Z}(x,y,z)}{p_{X,Z}(x,z)\,p_{Y,Z}(y,z)}$. In terms of PDFs for continuous distributions For (absolutely) continuous random variables $X$, $Y$, and $Z$ with support sets $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, the conditional mutual information is as follows $I(X;Y|Z) = \int_{\mathcal{Z}} p_Z(z) \int_{\mathcal{Y}} \int_{\mathcal{X}} p_{X,Y|Z}(x,y|z) \log \frac{p_{X,Y|Z}(x,y|z)}{p_{X|Z}(x|z)\,p_{Y|Z}(y|z)}\,dx\,dy\,dz$, where the marginal, joint, and/or conditional probability density functions are denoted by $p$ with the appropriate subscript. This can be simplified as $I(X;Y|Z) = \int_{\mathcal{Z}}\int_{\mathcal{Y}}\int_{\mathcal{X}} p_{X,Y,Z}(x,y,z) \log \frac{p_Z(z)\,p_{X,Y,Z}(x,y,z)}{p_{X,Z}(x,z)\,p_{Y,Z}(y,z)}\,dx\,dy\,dz$. Some identities Alternatively, we may write in terms of joint and conditional entropies as $I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)$. This can be rewritten to show its relationship to mutual information, $I(X;Y|Z) = I(X;Y,Z) - I(X;Z)$, usually rearranged as the chain rule for mutual information, $I(X;Y,Z) = I(X;Z) + I(X;Y|Z)$, or $I(X;Y,Z) = I(X;Y) + I(X;Z|Y)$. Another equivalent form of the above is $I(X;Y|Z) = H(X|Z) - H(X|Y,Z)$. Another equivalent form of the conditional mutual information is $I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z)$. Like mutual information, conditional mutual information can be expressed as a Kullback–Leibler divergence: $I(X;Y|Z) = D_{\mathrm{KL}}\left[\,p(X,Y,Z)\,\|\,p(X|Z)\,p(Y|Z)\,p(Z)\,\right]$. Or as an expected value of simpler Kullback–Leibler divergences: $I(X;Y|Z) = \mathbb{E}_Z\,D_{\mathrm{KL}}\left[\,p(X,Y|Z)\,\|\,p(X|Z)\,p(Y|Z)\,\right]$, $I(X;Y|Z) = \mathbb{E}_{Y,Z}\,D_{\mathrm{KL}}\left[\,p(X|Y,Z)\,\|\,p(X|Z)\,\right]$. More general definition A more general definition of conditional mutual information, applicable to random variables with continuous or other arbitrary distributions, will depend on the concept of regular conditional probability. Let $(\Omega, \mathcal{F}, \mathfrak{P})$ be a probability space, and let the random variables $X$, $Y$, and $Z$ each be defined as a Borel-measurable function from $\Omega$ to some state space endowed with a topological structure. Consider the Borel measure (on the σ-algebra generated by the open sets) in the state space of each random variable defined by assigning each Borel set the $\mathfrak{P}$-measure of its preimage in $\mathcal{F}$. This is called the pushforward measure $X_{*}\mathfrak{P} = \mathfrak{P}\big(X^{-1}(\cdot)\big)$. The support of a random variable is defined to be the topological support of this measure, i.e. $\mathrm{supp}\,X = \mathrm{supp}\,X_{*}\mathfrak{P}$. Now we can formally define the conditional probability measure given the value of one (or, via the product topology, more) of the random variables. Let $M$ be a measurable subset of $\Omega$ (i.e. $M \in \mathcal{F}$) and let $x \in \mathrm{supp}\,X$. Then, using the disintegration theorem: $\mathfrak{P}(M \mid X=x) = \lim_{U \ni x} \frac{\mathfrak{P}\big(M \cap X^{-1}(U)\big)}{\mathfrak{P}\big(X^{-1}(U)\big)}$, where the limit is taken over the open neighborhoods $U$ of $x$, as they are allowed to become arbitrarily smaller with respect to set inclusion. Finally we can define the conditional mutual information via Lebesgue integration: $I(X;Y|Z) = \int_{\Omega} \log\!\left(\frac{d\mathfrak{P}(\omega|X,Z)\,d\mathfrak{P}(\omega|Y,Z)}{d\mathfrak{P}(\omega|Z)\,d\mathfrak{P}(\omega|X,Y,Z)}\right) d\mathfrak{P}(\omega)$, where the integrand is the logarithm of a Radon–Nikodym derivative involving some of the conditional probability measures we have just defined. Note on notation In an expression such as $I(A;B|C)$, $A$, $B$, and $C$ need not necessarily be restricted to representing individual random variables, but could also represent the joint distribution of any collection of random variables defined on the same probability space. As is common in probability theory, we may use the comma to denote such a joint distribution, e.g. $I(A_0,A_1;B|C)$. Hence the use of the semicolon (or occasionally a colon or even a wedge $\wedge$) to separate the principal arguments of the mutual information symbol. 
(No such distinction is necessary in the symbol for joint entropy, since the joint entropy of any number of random variables is the same as the entropy of their joint distribution.) Properties Nonnegativity It is always true that $I(X;Y|Z) \ge 0$, for discrete, jointly distributed random variables $X$, $Y$ and $Z$. This result has been used as a basic building block for proving other inequalities in information theory, in particular, those known as Shannon-type inequalities. Conditional mutual information is also non-negative for continuous random variables under certain regularity conditions. Interaction information Conditioning on a third random variable may either increase or decrease the mutual information: that is, the difference $I(X;Y) - I(X;Y|Z)$, called the interaction information, may be positive, negative, or zero. This is the case even when random variables are pairwise independent. Such is the case when $Z = X \oplus Y$, where $X$ and $Y$ are independent, uniformly distributed binary random variables, in which case $X$, $Y$ and $Z$ are pairwise independent and in particular $I(X;Y) = 0$, but $I(X;Y|Z) = 1$ bit. Chain rule for mutual information The chain rule (as derived above) provides two ways to decompose $I(X;Y,Z)$: $I(X;Y,Z) = I(X;Z) + I(X;Y|Z) = I(X;Y) + I(X;Z|Y)$. The data processing inequality is closely related to conditional mutual information and can be proven using the chain rule. Interaction information The conditional mutual information is used to inductively define the interaction information, a generalization of mutual information, as follows: $I(X_1;\ldots;X_{n+1}) = I(X_1;\ldots;X_n) - I(X_1;\ldots;X_n \mid X_{n+1})$, where the conditional term $I(X_1;\ldots;X_n \mid X_{n+1})$ is the expected value, over $X_{n+1}$, of the interaction information of $X_1,\ldots,X_n$ conditioned on $X_{n+1}$. Because the conditional mutual information can be greater than or less than its unconditional counterpart, the interaction information can be positive, negative, or zero, which makes it hard to interpret. References Information theory Entropy and information
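To make the discrete definition above concrete, here is a small illustrative sketch (not from any reference implementation) that computes $I(X;Y|Z)$ in bits directly from a joint probability mass table with numpy, and checks it on the XOR example mentioned above, where $I(X;Y) = 0$ but $I(X;Y|Z) = 1$ bit:

```python
import numpy as np

def conditional_mutual_information(p_xyz: np.ndarray) -> float:
    """I(X;Y|Z) in bits from a joint PMF indexed as p[x, y, z]."""
    p_xyz = p_xyz / p_xyz.sum()                 # normalise defensively
    p_z = p_xyz.sum(axis=(0, 1))                # p(z)
    p_xz = p_xyz.sum(axis=1)                    # p(x, z)
    p_yz = p_xyz.sum(axis=0)                    # p(y, z)
    cmi = 0.0
    for x, y, z in np.ndindex(p_xyz.shape):
        p = p_xyz[x, y, z]
        if p > 0:
            # simplified discrete form: p(x,y,z) * log2[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]
            cmi += p * np.log2(p * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

if __name__ == "__main__":
    # X, Y independent fair bits, Z = X XOR Y: I(X;Y) = 0 but I(X;Y|Z) = 1 bit.
    p = np.zeros((2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            p[x, y, x ^ y] = 0.25
    print(conditional_mutual_information(p))    # ~1.0
```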
Conditional mutual information
Physics,Mathematics,Technology,Engineering
975
31,520,243
https://en.wikipedia.org/wiki/Anuj%20Batra
Anuj Batra is a research electrical engineer at Texas Instruments, specializing in ultrawideband wireless technology. He holds a BS in electrical engineering from Cornell University, an MS in electrical engineering from Stanford University, and a Ph.D. in electrical engineering from Georgia Tech. In 2004, he was recognized as a "young innovator" by inclusion in the MIT Technology Review's "TR100" list. References Living people Year of birth missing (living people) Electrical engineers Georgia Tech alumni Cornell University College of Engineering alumni Stanford University School of Engineering alumni American electrical engineers
Anuj Batra
Engineering
116
69,572,737
https://en.wikipedia.org/wiki/Baptist%20Memorial%20Hospital-Memphis%20%281912%E2%80%932000%29
The original Baptist Memorial Hospital was a 2,000-bed medical facility and complex of multiple hospital buildings located at 899 Madison Avenue in midtown Memphis, Tennessee. The facility closed in 2000 after 88 years of service, and was demolished in 2005. With the closure, Baptist transferred their last 12 patients to Baptist Memorial Hospital-Memphis (Formerly known as Baptist East) in eastern Memphis. Baptist later donated the land and buildings to Memphis Bioworks Foundation in 2002, and today the land is owned by The University of Tennessee Health Science Center. It was once the world's largest privately owned hospital. History and construction The 7-story 150-bed Baptist Memorial Hospital in midtown Memphis originally opened on July 22, 1912, and since then was expanded over the years to form what was the largest privately owned hospital in the United States by the mid 20th century. The idea for the hospital was formed at a Shelby County Baptist Association meeting in 1906 when Dr. H.P. Hurt of the Bellevue Baptist Church proposed a new Baptist-sponsored hospital. In 1914, the hospital was in debt and near closure due to a lack of patients. The hospitals superintendent A.E. Jennings raised $1 million to save the hospital. It was the first hospital to have a hotel for patients, and an office building for doctors. A.E. Jennings retired as superintendent in 1946 and Dr. Frank Groner became the hospitals new superintendent in 1946. In the 1970s, Baptist continued to grow, and in 1979, Baptist East, a satellite hospital, was completed. In 1980, Joseph Powell succeeded Frank Groner as the CEO and administrator, and Baptist began expanding its branches of hospitals across the mid-southern United States. In 1994, Joseph Powell retired as CEO and president and Stephen C. Reynolds took over as the new 4th CEO and president of Baptist. Physicians & Surgeons Building The Physicians & Surgeons Building (shortened to P&S building) was one of the original buildings, a 110-foot 9-story low-rise building located on 893-909 Madison Avenue. The building was originally constructed in 1919 as an addition for the Baptist Memorial Hospital, but went through several phases until its completion in 1937, and another addition in 1946. Its architecture was of neo-classical design. Madison East and Union East Tower The main hospital tower was a 255-foot tall, 924,000 square foot, 1,400-bed, 21-story X-shaped hospital building located on 899 Madison Avenue in the eastern part of the complex. It was constructed as an expansion of the original hospital buildings on Madison Avenue. Planning on the first tower expansion was started in 1953 and the 13-story Madison East wing was completed in 1956. As Baptist continued to grow, what was originally planned to grow over 20 years was expanded much faster in the early 1960s. In 1967, the Union east 21 story tower was completed, the Madison East wing was enlarged, and the hospital’s iconic X shape was finalized. Russwood Park fire and professional building expansions During this same time Russwood Park, a professional baseball park and stadium, was located across the street from the hospital prior to being destroyed by a fire on Easter Sunday, April 17, 1960. This fire also slightly damaged the Madison East tower, breaking several windows and putting the entire hospital at risk. The fire is considered one of Memphis’s largest historical fires, due to it being an all-wood construction stadium and the number of fire companies that ended up responding. 
With Russwood being a total loss, the baseball park was cleared, Baptist purchased the land and continued to grow. Baptist had a professional doctor’s building on Dudley Street that was built with the hospital addition in 1956, and they built their next set of professional office buildings, 910 and 920 Madison, in 1965 and 1975, with the hospital’s first parking garage attached. The hospital complex was finally completed with the addition of 930 Madison and Madison Plaza in 1991. After closing the hospital in 2000, Baptist donated the professional buildings to The University of Tennessee Health Science Center in 2002. These three buildings, 910, 920, and 930 Madison still stand today and are now academic and research facilities, as well as surgery centers, doctors offices, business offices, and food court. Closure & demolition After decades of expansion, Baptist Memorial Healthcare had spread to multiple towns across the mid-southern United States. Baptist East had completed significant expansions throughout the late 1980s and 1990s that shifted primary services to the East campus and essentially made it the primary hospital of the Baptist system. The original Baptist hospital buildings were in need of significant work and the original portions of the hospital had outlived their useful life. On November 17, 2000, the hospital closed and transferred its last 12 patients to other facilities, ending its 88 years of continual service. After closing the campus, Baptist donated the hospital facilities to Memphis Bioworks Foundation in 2002. Later, the land was donated to the University of Tennessee Health Science Center after Memphis Bioworks ceased operations. Demolition of the hospital facilities began in 2005. The Research Laboratory and Physicians & Surgeons Building were both imploded on May 8, 2005. The Madison and Union East tower was demolished via controlled implosion on November 6, 2005, at 6:45 AM by Chandler Demolition and Controlled Demolition, Inc. to make room for a planned biomedical research park. The research park never materialized, and today the land sits mostly vacant except for one building on Dudley Street and the current College of Pharmacy building. Notable births Lisa Marie Presley Brett Allred Notable deaths Elvis Presley See also Medical District, Memphis Baptist Memorial Hospital-Memphis List of hospitals in Tennessee References External links Display Location: Baptist Memorial Hospital (Downtown) - Urban Exploration Resource Landmark and Legend: Baptist Hospital, Medical Center Historic Memphis Hospitals - and Medical Centers Baptist Centennial Documentary 1912 establishments in Tennessee 2000 disestablishments in Tennessee Hospital buildings completed in 1937 Hospital buildings completed in 1967 Buildings and structures demolished in 2005 Buildings and structures demolished by controlled implosion Hospitals in Memphis, Tennessee
Baptist Memorial Hospital-Memphis (1912–2000)
Engineering
1,213
58,855,557
https://en.wikipedia.org/wiki/Geography%20of%20disability
Geography of disability is a multi-disciplinary branch of human geography which studies the experiences of people with disabilities and the extent to which disability in a population can be influenced by its geographical location. Potential components of studies in a geographical analysis include the environment, politics, incidental and additional supports, and the socio-economic landscape of the region being examined. This field has become increasingly important as policymakers have become aware of the need to ensure equal access to community resources for all individuals, regardless of mobility challenges. According to the World Health Organization, about 15 percent of the world's population lives with some form of disability; two to four percent have significant difficulties in functioning. The WHO report indicates that poverty, government investment in medical services, and individual access to health care impact disability rates in developed regions. A distinction between impairment and disability is also made. History Research surrounding disabilities has centered around a medical model, in that there is an assumed "able body" for the greater proportion of a population. With this classification, cognitive disabilities were an outlier; one example is hysteria. This perspective has de-emphasized the importance of the person in care, rather than the diagnoses or condition(s) they may have. During the late 20th century, geographers and sociologists began to adapt the field's understanding of disability from a medical point of view towards a socio-spatial determinant. In 1972, the Union of the Physically Impaired Against Segregation was formed by Lancastrian disability-rights advocate Paul Hunt and the South African writer and activist Vic Finkelstein in the wake of their struggles to remain independent adults while relying on social programmes to provide essential healthcare services. As the disability rights movement gathered support and gained political victories in a number of industrialized nations during the 1980s and 1990s, sociopolitical discussions also shifted to a more person-centered view of how to support people with disabilities in their communities. As Rob Imrie and Claire Edwards reflected in their 2016 paper, "The Geographies of Disability: Reflections on the Development of a Sub-Discipline", Simon Williams advocated a new definition of disability in a 1999 paper, calling disability "an emergent property, located, temporally speaking, in terms of the interplay between the biological reality of physiological impairment, structural conditioning (i.e. enablement/constraints) and sociocultural interaction/elaboration". With other nascent sociopolitical movements such as third-wave feminism and environmental justice in the early decades of the 21st century, geographical studies of disability have begun to examine the relationship between disability as a function of socioeconomic status (given the racialization of poverty in many Western nations) and physical, mental, and developmental diagnoses in areas with severe air or water pollution, or poor access to food and housing. Models of disability The medical model considers disability a physical problem: an incapability of a disabled person to perform activities of daily living like a non-disabled person. 
This model focuses on easing inconvenience and improving the daily experience of a person with disabilities, such as advanced assistive devices or mobility aids like wheelchairs for disabled people who live independently. The social model of disability evokes social integration by demonstrating difficulties faced by disabled people due to their physical or mental functioning differences. This model encourages the mainstream social and cultural structure to accommodate disabled individuals with more assistive infrastructure and improved social attitudes. Children and teenagers with learning difficulties are more likely to experience discrimination, such as rejection by mainstream schools. Youth aged 15–24 with special healthcare needs are ten times more prone to discrimination than disabled adults over age 65. Rob Imrie and Claire Edwards described how geographical-research methodology is used for social research on disabled people: Socioeconomic status and disability The logic of disability oppression closely parallels that of other groups. It is bound up with political-economic needs and belief systems of domination. Income Due to the wide range of severity in healthcare needs, many people with disabilities can work part- or full-time with sufficient support; those whose needs preclude gainful employment must rely on socialized healthcare programs. Neoliberal governments have sought to limit the growth of entitlement programs, with a common perception that recipients may have "cheated the system." Data from applicants to the Social Security Administration during the 1970s and 1980s indicated that of those who were denied coverage by the program, fewer than half returned to work. Public-health professionals recognize the need for supportive programs, but may be incentivized by governmental administration or local legislatures to reduce budget costs by weakening services. Social Security in the United States helps more than 20 million program participants to remain above the poverty line, with most in the program within 150 percent of the poverty line. Although median annual income for full-time workers in the United States has grown to over $56,000, the median benefits received annually by Social Security participants is just under $30,000. Similar to workforce financial disparities and pay gaps as a function of race, gender, and age, less than one-fifth of white Americans rely on Social Security for their main income; over 40 percent of Hispanic Americans and one-third of Black Americans do so. Incidence Areas with significant, widespread material wealth will have a high level of development if the wealth is allowed to remain in the community. Poorer urban areas lack the resources to fight illegal pollution, unfair housing practices, and other detrimental community policies whicht may disproportionately impact them. During the early 2000s, longitudinal data on the incidence of noninfectious diseases began to indicate the interaction between residence and the risk of developing special healthcare needs or disabilities later in life. U.S. neighborhoods most heavily impacted by redlining had higher incidences of pediatric asthma, lead levels, and obesity. Sixty percent of Hispanic Americans, 50 percent of African Americans, and 33 percent of whites live in areas which fail to meet two or more federal air-quality standards. 
Public-health data Research on disability by the Australian Institute of Health and Welfare found a strong positive correlation between residence in economically-disadvantaged areas and the probability of acquiring a mild or severe disability. This correlation is also seen in the United States. According to that country's Disability Statistics Annual Report, the distribution of people with disabilities aged 18 to 64 is concentrated in the Southeast (including Georgia, Tennessee, Louisiana and Arkansas). The theory that lower socio-economic status increases disability risk is supported in this region, the most economically disadvantaged in the United States. Another example is Memphis, Tennessee, a city with a poverty rate of 26.2 percent and one of the country's densest disability populations; a reported 12.6 to 17.8 percent of its working population (aged 18 to 64) live with some form of disability. The figures indicated that females were at greater risk of disability than males, regardless of age. According to Eurostat, European women were three percent more prone to chronic health problems and daily activity difficulties than men in 2011. Difficulties in obtaining a diagnosis For many children and adults, a key barrier to supportive services for their healthcare needs is receiving sufficient care to obtain a disability diagnosis. In the United States, the Health Resources and Services Administration have identified Healthcare Provider Shortage Areas: regions in need of more licensed primary, dental, or mental-healthcare providers to support its population. Harris County, Texas, which includes Greater Houston, needs 75 additional psychiatrists to meet the minimum required ratio of providers to patients in need. Without access to a provider of a diagnosis, patients may not be able to show required documentation to the state or federal agencies who are gatekeepers to social-welfare entitlement programs such as Medicaid and Social Security Disability Insurance. In Australia, this is known as a District of Workforce Shortage. According to 2021 data, nearly 300 districts across Australia have fewer than the national average of psychiatrists (4.6 per 100,000 people). In the United Kingdom, a 2021 study by the Royal College of Psychiatrists indicated that there were about 4,500 full-time psychiatrists to support the country's population of 56.5 million. Canada acknowledges a healthcare-workforce shortage, but has not published specific data on provider numbers. According to data collected from 2017 to 2021, there are over 3,000 speech language pathologists in Ontario (23.5 per 100,000 people) but 423 in Saskatchewan (35.9 per 100,000). Disability-rights advocates and labor activists have highlighted several causes of this workforce shortage. Chief among them are the cost of schooling (many clinical fields require postgraduate education and training) and the opportunity cost of pursuing an advanced degree when entering a separate field (or private industry) will be more lucrative long-term. Environmental politics of disability A disabled person's origin and living environment, including mobility, accessibility, space and living conditions, determine and impact their daily experience and their physical and mental disorders. Capacity and space Researchers have highlighted the need for a deeper understanding of the relationship between ability and accessibility in public and private spaces. 
In their 2014 paper, "Disability and Deleuze: An Exploration of Becoming and Embodiment in Children's Everyday Environments", Stephens and Ruddick write that neither space should be seen as solely supportive or un-supportive; every space a child (and later adult) may exist in will vary in support and services. Given the broad spectrum of physical, emotional, and social capabilities, urban planners and civil engineers have begun to shift design practices for public spaces to incorporate more stakeholder feedback from affected persons or invite discussion on their behalf. For example, an adult with severe cerebral palsy may be nonverbal and rely on a wheelchair for mobility; carers, family, and friends can provide feedback on private and public projects to best accommodate them. Accessible tourism Accessible tourism derives from the geography of disability. It strives for the right for those with disabilities to participate in tourism and promotes best accessibility practices by engaging representatives of the international tourism sector, disabled individuals, and non-governmental parties. Accessible tourism, which aims to promote "tourism without barriers", has a UNWTO publication. UNWTO has also published a variety of information, including manuals and recommendations on accessible tourism. The geographical model of disability was created during research into the geography of disability. Geographies of Disability Hate Crime Areas that are designated for disabled people tend to attract violence due to bystanders knowing disabled individuals will be located in these specific spaces. Disabled individuals face a wide range of harassment in different settings: "Harassment, name-calling and sometimes violence on streets, in shopping areas and parks, and in local neighbourhoods; harassment near, and damage to, people’s homes, access ramps, gardens and adapted cars; being shouted at and victimised for use of disabled parking spaces at shopping centres and for occupying wheelchair spaces on public transport; verbal abuse and being pushed past in shops, cafes and pubs; abuse in online spaces; being taunted outside care facilities; and abuse, violence and exploitation within institutional care, day centres and individuals’ homes" A survey was conducted where individuals who experienced hate crime were asked where bad things happen: "In Medway, Kent (SE England), people with learning disabilities described the following as “where bad things happen”: school, college or day centre (43%); in the street as they were walking somewhere (35%); in and around their home (28%); in their neighbourhood (28%); and on public transport (25%)" Another study by McClimens et al. in 2014 found that people with learning disabilities had fears about their personal safety which shaped the way they used Sheffield city centre, "Similarly, a study by McClimens et al. (2014) found that fears about personal safety shaped people with learning disabilities’ use of Sheffield city centre: certain places (e.g., near a homeless shelter), certain people (e.g., those begging for money) and certain times (e.g., after dark) made people fearful. As with Pain’s (1997) respondents, no one reported being a victim of crime. Their fears, however, were “real enough” and “they tend to avoid certain places and situations” (McClimens et al., 2014, p. 17). Pain concludes that fear of crime is “an extension of the discrimination and, in some cases, harassment which disabled people may face using urban spaces in everyday life” (1997, p. 241). 
As such, fear of crime has an arguably greater impact on many more people's lives than exceptional incidents of violence. "As Hall and Wilton argue, disabled parking bays, wheelchair spaces on public transport and other such "designated" disability spaces are not necessarily spaces of inclusion. It is the interactions of those using them that make them what they are – if the nature of the encounters within them are commonly negative, as with the above example, such spaces can become exclusionary." Because disabled individuals believe that the hate crime they experience will not be taken seriously by the police and others, they are reluctant to report it. There are possibilities to reshape the way disability hate crime is addressed and looked at: "reshape the ways in which the police and local authority agencies interpret and address hate crime, from the current focus on increasing reporting of incidents and prosecutions to prevention strategies, to identify and intervene in spaces and relations where hostility is likely to emerge. More positively, a relational perspective can demonstrate the potential of engendering positive connections and alliances, and spaces, between disabled and non-disabled people to reduce the likelihood of disability hate crime." Discrimination in law and policy During the shift to virtual spaces for work and schooling during the COVID-19 public-health emergency in early 2020, a number of disability advocates noted that many disabled workers have been denied telecommuting technology as companies cite cost control. Several countries have enacted policies to address discrimination against people with disabilities. According to a WHO report about disability barriers, in Australia the Disability Discrimination Act 1992 (DDA) prohibits direct or indirect discrimination against a person who is temporarily or permanently disabled or potentially disabled in employment, education, access to services and public places, and purchasing housing. References Disability studies Human geography Physical geography
Geography of disability
Environmental_science
2,897
14,817,828
https://en.wikipedia.org/wiki/RABGEF1
Rab5 GDP/GTP exchange factor is a protein that in humans is encoded by the RABGEF1 gene. RABGEF1 forms a complex with rabaptin-5 (RABPT5; MIM 603616) that is required for endocytic membrane fusion, and it serves as a specific guanine nucleotide exchange factor for RAB5 (RAB5A; MIM 179512) (Horiuchi et al., 1997) [supplied by OMIM]. References Further reading
RABGEF1
Chemistry
114
8,591,847
https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Scutum
This is the list of notable stars in the constellation Scutum, sorted by decreasing brightness. See also List of stars by constellation References List Scutum
List of stars in Scutum
Astronomy
33
2,549,891
https://en.wikipedia.org/wiki/IceCube%20Neutrino%20Observatory
The IceCube Neutrino Observatory (or simply IceCube) is a neutrino observatory developed by the University of Wisconsin–Madison and constructed at the Amundsen–Scott South Pole Station in Antarctica. The project is a recognized CERN experiment (RE10). Its thousands of sensors are located under the Antarctic ice, distributed over a cubic kilometer. Similar to its predecessor, the Antarctic Muon And Neutrino Detector Array (AMANDA), IceCube consists of spherical optical sensors called Digital Optical Modules (DOMs), each with a photomultiplier tube (PMT) and a single-board data acquisition computer which sends digital data to the counting house on the surface above the array. IceCube was completed on 18 December 2010. DOMs are deployed on strings of 60 modules each at depths between 1,450 and 2,450 meters into holes melted in the ice using a hot water drill. IceCube is designed to look for point sources of neutrinos in the teraelectronvolt (TeV) range to explore the highest-energy astrophysical processes. Construction IceCube is part of a series of projects developed and supervised by the University of Wisconsin–Madison. Collaboration and funding are provided by numerous other universities and research institutions worldwide. Construction of IceCube was only possible during the Antarctic austral summer from November to February, when permanent sunlight allows for 24-hour drilling. Construction began in 2005, when the first IceCube string was deployed and sufficient data was collected to verify that the optical sensors functioned correctly. In the 2005–2006 season, an additional eight strings were deployed, making IceCube the largest neutrino telescope in the world. Construction was completed on 17 December 2010. The total cost of the project was $279 million. As of 2024, plans for further upgrades to the array are in the federal approval process. If approved, the detectors for IceCube2 will each be eight times the size of those currently emplaced. The observatory will be able to detect more sources of particles, and discern their properties more finely at both lower and higher energy levels. Sub-detectors The IceCube Neutrino Observatory is composed of several sub-detectors which is also in addition to the main in-ice array. AMANDA, the Antarctic Muon And Neutrino Detector Array, was the first part built, and it served as a proof-of-concept for IceCube. AMANDA was turned off in May 2009. The IceTop array is a series of Cherenkov detectors on the surface of the glacier, with two detectors approximately above each IceCube string. IceTop is used as a cosmic ray shower detector, for cosmic ray composition studies and coincident event tests: if a muon is observed going through IceTop, it cannot be from a neutrino interacting in the ice. The Deep Core Low-Energy Extension is a densely instrumented region of the IceCube array which extends the observable energies below 100 GeV. The Deep Core strings are deployed at the center (in the surface plane) of the larger array, deep in the clearest ice at the bottom of the array (between 1760 and 2450 m deep). There are no Deep Core DOMs between 1850 and 2107 m depth, as the ice is not as clear in those layers. 
PINGU (Precision IceCube Next Generation Upgrade) is a proposed extension that will allow detection of low energy neutrinos (GeV energy scale), with uses including determining the neutrino mass hierarchy, precision measurement of atmospheric neutrino oscillation (both tau neutrino appearance and muon neutrino disappearance), and searching for WIMP annihilation in the Sun. A vision has been presented for a larger observatory, IceCube-Gen2. Experimental mechanism Neutrinos are electrically neutral leptons, and only interact very rarely with matter through the weak force. When they do react with the molecules of water in the ice via the charged current interaction, they create charged leptons (electrons, muons, or taus) corresponding to the flavor of the neutrino. These charged leptons can, if they are energetic enough, emit Cherenkov radiation. This happens when the charged particle travels through the ice faster than the speed of light in the ice, similar to the bow shock of a boat traveling faster than the waves it crosses. This light can then be detected by photomultiplier tubes within the digital optical modules making up IceCube. The detector signatures of the three charged leptons are distinct, and as such it is possible to determine the neutrino flavor of charged current events. On the other hand, if the neutrino scatters off the ice via the neutral current instead, the final state contains no information about the neutrino flavor since no charged lepton was created. The signals from the PMTs are digitized and then sent to the surface of the glacier on a cable. These signals are collected in a surface counting house, and some of them are sent north via satellite for further analysis. Since 2014, hard drives rather than tape store the balance of the data which is sent north once a year via ship. Once the data reaches experimenters, they can reconstruct kinematical parameters of the incoming neutrino. High-energy neutrinos may cause a large signal in the detector, pointing back to their origin. Clusters of such neutrino directions indicate point sources of neutrinos. Each of the above steps requires a certain minimum energy, and thus IceCube is sensitive mostly to high-energy neutrinos, in the range of 10⁷ to about 10²¹ eV. IceCube is more sensitive to muons than other charged leptons, because they are the most penetrating and thus have the longest tracks in the detector. Thus, of the neutrino flavors, IceCube is most sensitive to muon neutrinos. An electron resulting from an electron neutrino event typically scatters several times before losing enough energy to fall below the Cherenkov threshold; this means that electron neutrino events cannot typically be used to point back to sources, but they are more likely to be fully contained in the detector, and thus they can be useful for energy studies. These events are more spherical, or "cascade"-like, than "track"-like; muon neutrino events are more track-like. Tau leptons can also create cascade events, but they are short-lived and cannot travel very far before decaying, and are thus usually indistinguishable from electron cascades. A tau could be distinguished from an electron with a "double bang" event, where a cascade is seen both at the tau creation and decay. This is only possible with very high energy taus. Hypothetically, to resolve a tau track, the tau would need to travel at least from one DOM to an adjacent DOM (17 m) before decaying. 
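The scaling behind that requirement follows from relativistic time dilation: the laboratory-frame decay length is L = γcτ = (E/mτ)cτ. The sketch below is an illustrative calculation only (the tau mass and mean lifetime are standard particle-data values, not taken from IceCube software); it reproduces the rule of thumb quoted in the next sentence, roughly one metre of track per 20 TeV of tau energy:

```python
# Illustrative estimate of the tau decay length as a function of energy.
# Assumed constants (standard values, not IceCube analysis code):
TAU_MASS_GEV = 1.777           # tau rest mass [GeV]
TAU_LIFETIME_S = 2.9e-13       # tau mean lifetime [s]
SPEED_OF_LIGHT_M_S = 2.998e8   # [m/s]

def tau_decay_length_m(energy_gev: float) -> float:
    """Mean decay length L = gamma * c * tau for an ultra-relativistic tau."""
    gamma = energy_gev / TAU_MASS_GEV
    return gamma * SPEED_OF_LIGHT_M_S * TAU_LIFETIME_S

if __name__ == "__main__":
    for e in (2e4, 1e6):  # 20 TeV and 1 PeV, expressed in GeV
        print(f"E = {e:.0e} GeV -> decay length ~ {tau_decay_length_m(e):.1f} m")
    # 20 TeV gives roughly 1 m; PeV-scale taus travel tens of metres,
    # which is why double-bang searches focus on PeV energies.
```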
As the average lifetime of a tau is about 2.9 × 10⁻¹³ seconds, a tau traveling at near the speed of light would require 20 TeV of energy for every meter traveled. Realistically, an experimenter would need more space than just one DOM to the next to distinguish two cascades, so double bang searches are centered at PeV scale energies. Such searches are underway but have not so far isolated a double bang event from background events. Another way to detect lower energy tau neutrinos is through the "double pulse" signature, where a single DOM detects two distinct light arrival times corresponding to the neutrino interaction and tau decay vertices. One can also use machine learning (ML) techniques, such as convolutional neural networks, to distinguish the tau neutrino signal. In 2024 the IceCube collaboration published its findings of seven astrophysical tau neutrino candidates using such a technique. There is a large background of muons created not by neutrinos from astrophysical sources but by cosmic rays impacting the atmosphere above the detector. There are about 10⁶ times more cosmic ray muons than neutrino-induced muons observed in IceCube. Most of these can be rejected using the fact that they are traveling downwards. Most of the remaining (up-going) events are from neutrinos, but most of these neutrinos are from cosmic rays hitting the far side of the Earth; some unknown fraction may come from astronomical sources, and these neutrinos are the key to IceCube point source searches. Estimates predict the detection of about 75 upgoing neutrinos per day in the fully constructed IceCube detector. The arrival directions of these astrophysical neutrinos are the points with which the IceCube telescope maps the sky. To distinguish these two types of neutrinos statistically, the direction and energy of the incoming neutrino is estimated from its collision by-products. Unexpected excesses in energy or excesses from a given spatial direction indicate an extraterrestrial source. Experimental goals Point sources of high energy neutrinos A point source of neutrinos could help explain the mystery of the origin of the highest energy cosmic rays. These cosmic rays have energies high enough that they cannot be contained by galactic magnetic fields (their gyroradii are larger than the radius of the galaxy), so they are believed to come from extra-galactic sources. Astrophysical events which are cataclysmic enough to create such high energy particles would probably also create high energy neutrinos, which could travel to the Earth with very little deflection, because neutrinos interact so rarely. IceCube could observe these neutrinos: its observable energy range is about 100 GeV to several PeV. The more energetic an event is, the larger the volume in which IceCube may detect it; in this sense, IceCube is more similar to Cherenkov telescopes like the Pierre Auger Observatory (an array of Cherenkov detecting tanks) than it is to other neutrino experiments, such as Super-K (with inward-facing PMTs fixing the fiducial volume). IceCube is more sensitive to point sources in the northern hemisphere than in the southern hemisphere. It can observe astrophysical neutrino signals from any direction, but neutrinos coming from the direction of the southern hemisphere are swamped by the cosmic-ray muon background. Thus, early IceCube point source searches focus on the northern hemisphere, and the extension to southern hemisphere point sources takes extra work. 
Although IceCube is expected to detect very few neutrinos (relative to the number of photons detected by more traditional telescopes), it should have very high resolution with the ones that it does find. Over several years of operation, it could produce a flux map of the northern hemisphere similar to existing maps such as those of the cosmic microwave background or those from gamma-ray telescopes, which use particle terminology more like IceCube's. Likewise, KM3NeT could complete the map for the southern hemisphere. IceCube scientists may have detected their first neutrinos on 29 January 2006. Gamma-ray bursts coincident with neutrinos When protons collide with one another or with photons, the result is usually pions. Charged pions decay into muons and muon neutrinos whereas neutral pions decay into gamma rays. Potentially, the neutrino flux and the gamma-ray flux may coincide in certain sources such as gamma-ray bursts and supernova remnants, which would shed light on the still elusive nature of their origin. Data from IceCube is being used in conjunction with gamma-ray satellites like Swift or Fermi for this goal. IceCube has not observed any neutrinos in coincidence with gamma-ray bursts, but is able to use this search to constrain the neutrino flux to values less than those predicted by the current models. Indirect dark matter searches Weakly interacting massive particle (WIMP) dark matter could be gravitationally captured by massive objects like the Sun and accumulate in the core of the Sun. With a high enough density of these particles, they would annihilate with each other at a significant rate. The products of this annihilation could decay into neutrinos, which could be observed by IceCube as an excess of neutrinos from the direction of the Sun. This technique of looking for the decay products of WIMP annihilation is called indirect, as opposed to direct searches which look for dark matter interacting within a contained, instrumented volume. Solar WIMP searches are more sensitive to spin-dependent WIMP models than many direct searches, because the Sun is made of lighter elements than direct search detectors (e.g. xenon or germanium). IceCube has set better limits with the 22-string detector (about a quarter of the full detector) than the AMANDA limits. Neutrino oscillations IceCube can observe neutrino oscillations from atmospheric cosmic ray showers, over a baseline across the Earth. It is most sensitive at ~25 GeV, the energy range for which the DeepCore sub-array has been optimized. DeepCore consists of 6 strings deployed in the 2009–2010 austral summer with a closer horizontal and vertical spacing. In 2014, DeepCore data was used to determine the mixing angle θ₂₃ and the mass splitting Δm²₂₃. This measurement has since been improved with more data and improved detector calibration and data processing. As more data is collected and IceCube measurements are refined further, it may be possible to observe the characteristic modification of the oscillation pattern at ~15 GeV that determines the neutrino mass hierarchy. This mechanism for determining the mass hierarchy only works because the mixing angle θ₁₃ is large. Galactic supernovae Despite the fact that individual neutrinos expected from supernovae have energies well below the IceCube energy cutoff, IceCube could detect a local supernova. It would appear as a detector-wide, brief, correlated rise in noise rates. The supernova would have to be relatively close (within our galaxy) to get enough neutrinos before the 1/r² distance dependence took over.
IceCube is a member of the Supernova Early Warning System (SNEWS). Sterile neutrinos A signature of sterile neutrinos would be a distortion of the energy spectrum of atmospheric neutrinos around 1 TeV, for which IceCube is uniquely positioned to search. This signature would arise from matter effects as atmospheric neutrinos interact with the matter of the Earth. The described detection strategy, along with its South Pole position, could allow the detector to provide the first robust experimental evidence of extra dimensions predicted in string theory. Many extensions of the Standard Model of particle physics, including string theory, propose a sterile neutrino; in string theory this is made from a closed string. These could leak into extra dimensions before returning, making them appear to travel faster than the speed of light. An experiment to test this may be possible in the near future. Furthermore, if high-energy neutrinos create microscopic black holes (as predicted by some aspects of string theory), the result would be a shower of particles, increasing the number of "down" neutrinos while reducing the number of "up" neutrinos. In 2016, scientists at the IceCube detector did not find any evidence for the sterile neutrino. Results The IceCube collaboration has published flux limits for neutrinos from point sources, gamma-ray bursts, and neutralino annihilation in the Sun, with implications for the WIMP–proton cross section. A shadowing effect from the Moon has been observed. Cosmic ray protons are blocked by the Moon, creating a deficit of cosmic ray shower muons in the direction of the Moon. A small (under 1%) but robust anisotropy has been observed in cosmic ray muons. In November 2013 it was announced that IceCube had detected 28 neutrinos that likely originated outside the Solar System, among them a pair of high-energy neutrinos in the peta-electronvolt range, making them the highest-energy neutrinos discovered to that date. The pair were nicknamed "Bert" and "Ernie", after characters from the Sesame Street TV show. Later in 2013 the number of detections increased to 37 candidates, including a new high-energy neutrino at 2000 TeV, named "Big Bird". IceCube measured 10–100 GeV atmospheric muon neutrino disappearance in 2014, using three years of data taken May 2011 to April 2014, including DeepCore, determining the neutrino oscillation parameters Δm²₃₂ (in units of 10⁻³ eV²) and sin²(θ₂₃) for the normal mass hierarchy, with values comparable to other results. The measurement was improved using more data in 2017, and in 2019 atmospheric tau neutrino appearance was measured. The latest measurement, from 2023, with improved detector calibration and data processing, has resulted in more precise values of the oscillation parameters: Δm²₃₂ = (2.41 ± 0.07) × 10⁻³ eV² and sin²(θ₂₃) = 0.51 ± 0.05 (normal mass hierarchy). In July 2018, the IceCube Neutrino Observatory announced that it had traced an extremely high-energy neutrino that hit its detector in September 2017 back to its point of origin in the blazar TXS 0506+056, located 5.7 billion light-years away in the direction of the constellation Orion. The results had a statistical significance of 3–3.5σ. This was the first time that a neutrino detector had been used to locate an object in space, and indicated that a source of cosmic rays had been identified. In 2020, evidence of the Glashow resonance at 2.3σ (formation of the W boson in antineutrino–electron collisions) was announced.
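The characteristic energy of the Glashow resonance mentioned above follows from the W boson and electron masses; a short numerical check, using standard mass values (assumed here, not stated in the text):

# Resonant antineutrino energy for anti-nu_e + e- -> W- on an electron at rest:
# E_res = m_W^2 / (2 * m_e)
m_w_gev = 80.4         # W boson mass in GeV (standard value, an assumption here)
m_e_gev = 0.000511     # electron mass in GeV

e_res_gev = m_w_gev ** 2 / (2.0 * m_e_gev)
print(e_res_gev / 1.0e6)   # about 6.3, i.e. the resonance sits at ~6.3 PeV

The resonance is therefore only accessible to neutrinos at PeV-scale energies.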
In February 2021, the tidal disruption event (TDE) AT2019dsg was reported as a candidate neutrino source, and the TDE AT2019fdr was reported as a second candidate in June 2022. In November 2022, IceCube announced strong evidence of neutrino emission from the active galactic nucleus of Messier 77. It is the second source detected by IceCube after TXS 0506+056, and only the fourth known astrophysical neutrino source, the others being SN 1987A and the Sun. PKS 1424+240 and GB9 are other possible candidates. In June 2023, IceCube identified diffuse neutrino emission from the Galactic plane at the 4.5σ level of significance, providing a map of the galaxy in neutrinos. See also Antarctic Muon And Neutrino Detector Array Radio Ice Cherenkov Experiment ANTARES and KM3NeT, similar neutrino telescopes using deep-sea water instead of ice. Multimessenger astronomy References External links AMANDA at UCI IceCube experiment record on INSPIRE-HEP Astronomical observatories in the Antarctic Neutrino observatories Neutrino astronomy Particle experiments University of Wisconsin–Madison Cosmic-ray experiments 2010 establishments in Antarctica CERN experiments
IceCube Neutrino Observatory
Astronomy
3,969
14,777,007
https://en.wikipedia.org/wiki/KCNJ12
ATP-sensitive inward rectifier potassium channel 12 is a lipid-gated ion channel that in humans is encoded by the KCNJ12 gene. Function This gene encodes an inwardly rectifying K+ channel that may be blocked by divalent cations. This protein is thought to be one of multiple inwardly rectifying channels that contribute to the cardiac inward rectifier current (IK1). The gene is located within the Smith–Magenis syndrome region on chromosome 17. Interactions KCNJ12 has been shown to interact with: APBA1, CASK, DLG1, DLG2, DLG3, DLG4, LIN7A, LIN7B, and LIN7C. See also Inward-rectifier potassium channel References Further reading External links Ion channels
KCNJ12
Chemistry
168
37,665,378
https://en.wikipedia.org/wiki/Zeta%20Lupi
ζ Lupi (Latinised as Zeta Lupi) is the brighter component of a wide double star in the constellation Lupus, consisting of an orange-hued primary and a fainter secondary with a golden-yellow hue. It is visible to the naked eye with a combined apparent visual magnitude of 3.41. Based upon an annual parallax shift of 27.80 mas as seen from Earth, it is located 117.3 light-years from the Sun. This is a probable binary star system. As of 2013, the pair had an angular separation of 71.20 arcseconds along a position angle of 249°. The primary, component A, is an evolved G-type giant star with a visual magnitude of 3.50 and a stellar classification of G7 III. This is a red clump star, indicating that it is generating energy through the thermonuclear fusion of helium in its core region. It is 2.6 times more massive than the Sun and nine times larger. Zeta Lupi is 50 times more luminous than the Sun, emitting this energy from its photosphere at an effective temperature of 5114 K. The secondary, component B, has a visual magnitude of 6.74. References G-type giants Horizontal-branch stars Double stars Lupus (constellation) Lupi, Zeta Durchmusterung objects 134505 074395 5649
Zeta Lupi
Astronomy
288
9,304,399
https://en.wikipedia.org/wiki/Equilibrium%20selection
Equilibrium selection is a concept from game theory which seeks to address reasons for players of a game to select a certain equilibrium over another. The concept is especially relevant in evolutionary game theory, where the different methods of equilibrium selection correspond to different ideas of which equilibria will be stable and persistent for a player to play even in the face of deviations (and mutations) of the other players. This is important because there are various equilibrium concepts, and for many particular concepts, such as the Nash equilibrium, many games have multiple equilibria. Equilibrium Selection with Repeated Games A stage game is an n-player game where players choose from a finite set of actions, and there is a payoff profile for their choices. A repeated game consists of a number of repetitions of a stage game, played in discrete periods of time (Watson, 2013). A player's reputation affects the actions and behavior of the other players. In other words, how a player behaves in preceding rounds determines the actions of their opponents in subsequent rounds. An example is the interaction between an employee and an employer in which the employee shirks their responsibility for a short-term gain and then loses the bonus, which the employer discontinues after observing the employee's behavior (Watson, 2013). The dynamics of equilibrium selection for repeated games can be illustrated with a two-period game. With every action from the players in one period, a new subgame is initiated based on that action profile. For a subgame perfect equilibrium of the entire game, a Nash equilibrium of every subgame is required. Hence, in the last period of a repeated game, the players will choose a stage game Nash equilibrium. Equilibria stipulating actions that are not Nash equilibrium strategies of the stage game can still be supported in earlier periods. This can be achieved by establishing a reputation of "cooperation" in the preceding periods that leads to the opponent selecting a more favorable Nash equilibrium strategy in the final period. If a player builds a reputation of deviating instead of co-operating, then the opponent can "punish" the player by choosing a less favorable equilibrium in the final period of the repeated game. Focal point Another concept that can help to select an equilibrium is the focal point. This concept was first introduced by Thomas Schelling, a Nobel Prize-winning game theorist, in his book The Strategy of Conflict in 1960 (Schelling, 1960). When the participants are in a coordination game where the players do not have a chance to discuss their strategies beforehand, a focal point is a solution that somehow stands out as the natural answer. For example, in an experiment conducted in 1990 by Mehta et al. (1994), the researchers had the participants answer a questionnaire containing questions such as "name a year" or "name a city in England". The participants were asked to provide the first answer that came to their minds, and many provided their birth year or hometown city. However, when they had an incentive to coordinate – the participants were told they would be paid if they managed to answer the question the same way as an anonymous partner – most of them chose 1990 (the year at the time) and London (the largest city in the UK). These were not the first answers that came to mind, but they were the best bets for matching an anonymous partner without prior discussion.
In this case, the year 1990 or the city of London is the focal point of the game, helping the players select an equilibrium in this coordination game. Moreover, even in situations where the players are allowed to communicate with each other, such as a negotiation, a focal point can still be useful for selecting an appropriate equilibrium: when the negotiation is about to end, each player must make a last-minute decision about how aggressive to be and to what extent to trust their opponents (Hyde, 2017). Symmetries Various authors have studied games with symmetries (over players or actions), especially in the case of fully cooperative games. For example, the team game Hanabi has been considered, in which the different players and suits are symmetric. Relatedly, some authors have considered how equilibrium selection carries over between isomorphic games. Generally, it has been argued that a group of players should not choose strategies that arbitrarily break these symmetries, i.e., should play symmetric strategies to aim for symmetric equilibria. Similarly, they should play isomorphic games isomorphically. For example, in Hanabi, hints corresponding to the different suits should have analogous (symmetric) meanings. In team games, the optimal symmetric strategy is also a (symmetric) Nash equilibrium. While breaking symmetries may allow for higher utilities, it results in unnatural, inhuman strategies. Every finite symmetric game (including non-team games) has a symmetric Nash equilibrium. Examples of equilibrium selection concepts Risk & Payoff dominance Definition: Consider a situation where a game has multiple Nash equilibria (NE); these equilibria can be classified into two categories: A Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e. is less risky). A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. Explanation: A risk-dominant NE is chosen when a player wants to avoid large losses, while a payoff-dominant NE is chosen for an optimal payoff. Note that both risk-dominant and payoff-dominant equilibria are Nash equilibria of the game. Example: Consider a two-player game in normal form with two NEs, (U, L) and (D, R). Here (U, L) is the payoff-dominant NE, since this strategy profile returns the best overall payoff. However, given the uncertainty of an opponent's action, one of the players may consider a more conservative strategy (D for player 1 and R for player 2), which avoids the possibility of a large loss, i.e. receiving a payoff of 0 if the opponent deviates. Hence (D, R) is the risk-dominant NE. 1/2 dominance See also Pareto efficiency Equilibrium refinement References Harsanyi, John C. and Selten, Reinhard, A General Theory of Equilibrium Selection in Games, MIT Press (1988) Watson, J. (2013). Repeated Games and Reputation. In Strategy: An Introduction to Game Theory (3rd ed., pp. 291–305). Norton & Company. Hyde, T., 2017. Can Schelling's focal points help us understand high-stakes negotiations?. [online] Aeaweb.org. Available at: <https://www.aeaweb.org/research/can-schellings-focal-points-help-us-understand-high-stakes-negotiations> [Accessed 9 December 2021]. Mehta, J., Starmer, C., & Sugden, R. (1994). The Nature of Salience: An Experimental Investigation of Pure Coordination Games. The American Economic Review, 84(3), 658–673. Schelling, T. C. (1960). The Strategy of Conflict. Cambridge, Mass.
Game theory equilibrium concepts
Equilibrium selection
Mathematics
1,499
13,037,090
https://en.wikipedia.org/wiki/Trichloroethylsilane
Trichloroethylsilane is a compound with formula Si(C2H5)Cl3. Organochlorosilanes
Trichloroethylsilane
Chemistry
30
23,294,952
https://en.wikipedia.org/wiki/Diprophylline
Diprophylline (INN) or dyphylline (USAN) (trade names Dilor, Lufyllin) is a xanthine derivative with bronchodilator and vasodilator effects. It is used in the treatment of respiratory disorders like asthma, cardiac dyspnea, and bronchitis. It acts as an adenosine receptor antagonist and phosphodiesterase inhibitor. See also Xanthine References Adenosine receptor antagonists Diols Phosphodiesterase inhibitors Xanthines
Diprophylline
Chemistry
114
6,308,157
https://en.wikipedia.org/wiki/Nanoporous%20materials
Nanoporous materials consist of a regular organic or inorganic bulk phase in which a porous structure is present. Nanoporous materials exhibit pore diameters that are most appropriately quantified using units of nanometers. The diameter of pores in nanoporous materials is thus typically 100 nanometers or smaller. Pores may be open or closed, and pore connectivity and void fraction vary considerably, as with other porous materials. Open pores are pores that connect to the surface of the material whereas closed pores are pockets of void space within a bulk material. Open pores are useful for molecular separation techniques, adsorption, and catalysis studies. Closed pores are mainly used in thermal insulators and for structural applications. Most nanoporous materials can be classified as bulk materials or membranes. Activated carbon and zeolites are two examples of bulk nanoporous materials, while cell membranes can be thought of as nanoporous membranes. A porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame". The pores are typically filled with a fluid (liquid or gas). There are many natural nanoporous materials, but artificial materials can also be manufactured. One method of doing so is to combine polymers with different melting points, so that upon heating one polymer degrades. A nanoporous material with consistently sized pores has the property of letting only certain substances pass through, while blocking others. Classifications Classification By Size The term nanomaterials covers diverse forms of materials with various applications. According to IUPAC, porous materials are subdivided into three categories: Microporous materials: 0.2–2 nm Mesoporous materials: 2–50 nm Macroporous materials: 50–1000 nm These categories conflict with the classical definition of nanoporous materials, as they have pore diameters between 1 and 100 nm. This range covers all the classifications listed above. However, for the sake of simplicity, scientists choose to use the term nanomaterials and state the associated diameter instead. Microporous and mesoporous materials are distinguished as separate material classes owing to the distinct applications afforded by the pore sizes in these materials. Confusingly, the term microporous is used to describe materials with smaller pore sizes than materials commonly referred to simply as nanoporous. More correctly, microporous materials are better understood as a subset of nanoporous materials, namely materials that exhibit pore diameters smaller than 2 nm. Having pore diameters with length scales of molecules, such materials enable applications that require molecular selectivity such as filtration and separation membranes. Mesoporous materials, referring generally to materials with average pore diameters in the range 2–50 nm, are interesting as catalyst support materials and adsorbents owing to their high surface area to volume ratios. Sometimes classifying by size becomes difficult, as there could be porous materials that have various diameters. For example, microporous materials may have a few pores with 2 to 50 nm diameter due to random grain packing. These specifics must be taken into consideration when categorizing by pore size. Classification By Network Materials In addition to classification by size, nanoporous materials can be further classified into organic and inorganic network materials.
A network material is the structure that "hosts" the pores and is where the medium (gas or liquid) interacts with the substrate. While there are plenty of inorganic nanoporous membranes, there are few organic ones due to issues with stability. Organic Organic nanoporous materials are polymers made from elements such as boron, carbon, nitrogen, and oxygen. These materials are usually microporous, although mesoporous/microporous structures do exist. These include covalent organic frameworks (COFs), covalent triazine frameworks, polymers of intrinsic microporosity (PIMs), hyper cross-linked polymers (HCPs), and conjugated microporous polymers (CMPs). Each of these has different structures and manufacturing steps. In general, to create organic nanoporous materials, a monomer with more than two branches (i.e. covalent bonds) is dissolved in a solvent. After additional monomers are added and polymerization occurs, the solvent is removed and the remaining structure is considered a nanoporous material. Organic nanoporous materials can be further classified into crystalline and amorphous networks. Crystalline networks are materials that have well-defined pore sizes. The pore sizes are so well defined that simply by changing the monomer, one can obtain different pore sizes. COFs are an example of such a crystalline structure. In contrast, amorphous nanoporous materials have a distribution of pore sizes and are usually disordered. PIMs are an example. Both categories have various uses in gas sorption and catalysis reactions. Inorganic Inorganic nanoporous materials are porous materials that include the use of oxide-type, carbon, binary, and pure metal materials. Examples include zeolites, nanoporous alumina, and titania nanotubes. Zeolites are crystalline hydrated tectoaluminosilicates. This material is a combination of alkali/alkaline earth metals, alumina, and silica hydrates. These are used for ion-exchange beds and for water purification. Nanoporous alumina is a biocompatible material widely used in various dental and orthopedic implants. Titania nanotubes are also used in orthopedics but are special as they can form a titanium oxide layer upon exposure to oxygen. Because the surface of the material is oxide-protected, this material has excellent biocompatibility along with high mechanical strength. Applications Gas Storage/Sensing Gas storage is crucial for energy, medical, and environmental applications. Nanoporous materials enable a unique method of gas storage through adsorption. When the substrate and gas interact with each other, the gas molecules can physisorb or covalently bond with the nanoporous material, which is known as physical storage and chemical storage, respectively. While one may store gases in the bulk phase, such as in a bottle, nanoporous materials enable higher storage density, which is attractive for energy applications. One example of this application is hydrogen storage. With the onset of climate change, there is an increased interest in zero-emission vehicles, especially in fuel cell electric vehicles. By storing hydrogen at high densities using porous materials, one can increase the driving range of such vehicles. Another use case for nanoporous materials is as a substrate for gas sensors. For example, measuring the electrical resistivity of a porous metal can yield the exact concentration of an analyte species in gaseous form.
Since the resistivity of the substrate is proportional to the surface area of the porous media, using nanoporous materials will yield higher sensitivity in detecting trace gaseous species than their bulk counterparts. This is especially useful as nanoporous materials have a higher effective surface area normalized to the top-view surface area. Biological applications Nanoporous materials are used in biological applications as well. Enzyme-catalyzed reactions are widely used in biological applications for metabolism and for processing large molecules. Nanoporous materials offer the opportunity to embed enzymes in the porous substrate, which enhances the lifetime of the reactions for long-term implants. Another application is found in DNA sequencing. By coating an inorganic nanoporous membrane on an insulating material, nanopores can be utilized for single-molecule analysis. By threading DNA through these nanopores, one can read out the ionic current through the pore, which can be correlated with one of the four nucleotides. References Porous media Nanomaterials
Nanoporous materials
Materials_science,Engineering
1,610
14,091,260
https://en.wikipedia.org/wiki/Pr%C3%A9vost%20reaction
The Prévost reaction is a chemical reaction in which an alkene is converted by iodine and the silver salt of benzoic acid to a vicinal diol with anti stereochemistry. The reaction was discovered by the French chemist Charles Prévost (1899–1983). Reaction mechanism The reaction between silver benzoate (1) and iodine is very fast and produces a very reactive iodonium benzoate intermediate (2). The reaction of the iodonium salt (2) with an alkene gives another short-lived iodonium salt (3). Nucleophilic substitution (SN2) by the benzoate salt gives the ester (4). Another silver ion causes the neighboring group substitution of the benzoate ester to give the oxonium salt (5). A second SN2 substitution by the benzoate anion gives the desired diester (6). In the final step, hydrolysis of the ester groups gives the anti-diol. This outcome is the opposite of that of the related Woodward cis-hydroxylation, which gives syn addition. References See also Woodward cis-hydroxylation Organic redox reactions Substitution reactions Name reactions
Prévost reaction
Chemistry
252
2,468,460
https://en.wikipedia.org/wiki/Float-zone%20silicon
Float-zone silicon is very pure silicon obtained by vertical zone melting. The process was developed at Bell Labs by Henry Theuerer in 1955 as a modification of a method developed by William Gardner Pfann for germanium. In the vertical configuration molten silicon has sufficient surface tension to keep the charge from separating. The major advantage is crucibleless growth, which prevents contamination of the silicon from the vessel itself and therefore makes the process an inherently high-purity alternative to boule crystals grown by the Czochralski method. The concentrations of light impurities, such as carbon (C) and oxygen (O2), are extremely low. Another light impurity, nitrogen (N2), helps to control microdefects and also brings about an improvement in the mechanical strength of the wafers, and is now being intentionally added during the growth stages. The diameters of float-zone wafers are generally not greater than 200 mm due to the surface tension limitations during growth. A polycrystalline rod of ultrapure electronic-grade silicon is passed through an RF heating coil, which creates a localized molten zone from which the crystal ingot grows. A seed crystal is used at one end to start the growth. The whole process is carried out in an evacuated chamber or in an inert gas purge. The molten zone carries the impurities away with it and hence reduces impurity concentration (most impurities are more soluble in the melt than the crystal). Specialized doping techniques like core doping, pill doping, gas doping and neutron transmutation doping are used to incorporate a uniform concentration of desirable impurity. Float-zone silicon wafers may be irradiated with neutrons to turn them into an n-doped semiconductor. Application Float-zone silicon is typically used for power devices and detector applications, where high resistivity is required. It is highly transparent to terahertz radiation, and is usually used to fabricate optical components, such as lenses and windows, for terahertz applications. It is also used in the solar arrays of satellites, as it offers higher conversion efficiency. See also Bridgman–Stockbarger method Micro-pulling-down Laser-heated pedestal growth References Michael Riordan & Lillian Hoddeson (1997) Crystal Fire: the birth of the information age, page 230, W. W. Norton & Company. Industrial processes Semiconductor growth Silicon, Float-zone Methods of crystal growth
Float-zone silicon
Chemistry,Materials_science
490
61,588,164
https://en.wikipedia.org/wiki/Rollout%20%28drag%20racing%29
Rollout or rollout allowance is an adjustment in timed acceleration runs used by North American drag racing and enthusiast magazines to create approximate parity over time between historic 0 to 60 mph and 1/4 mile acceleration times and those measured today using the Global Positioning System (GPS). Historically, light gates were used at the beginning and end of acceleration runs. These measured the end of runs accurately, but only began timing once a vehicle began to move (enough to trigger the light gate). Since this was the standard method, published acceleration times reflected a consistent "rolling start" inaccuracy across races, records, road tests, and enthusiast magazine reviews. Since the error was impossible to eliminate and applied to all vehicles in all timed runs, it was simply ignored as a "net wash". It only became an issue with the advent of modern GPS, which records a speed run from a standing start. To create parity with the historic method (and historic record), a convention evolved in North America to approximate the rolling start by subtracting, from the total recorded elapsed time, the time it takes for a vehicle to cover its initial rollout distance. Further reading References Drag racing Measurement Car performance
Rollout (drag racing)
Physics,Mathematics
239
5,735,334
https://en.wikipedia.org/wiki/Planned%20giving
Planned giving (less commonly known as gift planning ) is an area of fundraising that refers to several specific gift types that can be funded with cash, equity, or property. These gift vehicles are commonly based on United States tax law, but Canada, the United Kingdom, and other nations are beginning to establish similar laws. In the United States the specific rules of planned giving are defined by the United States Congress and the Internal Revenue Service. History and etymology The term "planned giving" was coined in 1969 by Robert F. Sharpe, Sr.: "A donor usually considers a current gift to your institution as a cash outlay now. To make a deferred gift, a person decides to give at some future date, either a number of years from now or at death. A deferred gift is a present decision to make a future gift, evidenced by a legal contract. "While the name 'deferred giving' is best known to professionals in the field, it is not a term that communicates very much to the average donor. Therefore, we suggest the term 'planned giving.' When a person makes a planned gift, it suggests forethought." —Give & Take, a publication of the Sharpe Group, August 1972 Education The use of planned giving by colleges and universities was pioneered by Allen Hawley at Pomona College. In 1942, Hawley introduced what became known as the Pomona Plan, where members receive a lifetime annuity in exchange for donating to the college upon their death. The plan's model has since been adopted by many other institutions, although the annuity rates offered by Pomona remain among the highest. Usage Planned gifts are referred to as such because they require more planning, negotiation and counsel than many other gifts. Planned gifts can result in immediate income, income to charity over time or serve to delay a gift for life or other period of time while the donor or others retain income and/or access to the assets used to fund the gift. Because of the current or future charitable benefits, a number of state and/or federal income tax, capital gains, estate and gift benefits are associated with giving in this way. Parents who have a child with a disability should ensure that the inheritance they leave for their child does not affect their child's eligibility to social assistance programs such as the Ontario Disability Support Program (ODSP). A Henson trust can be useful to ensure this. Efforts to encourage planned gifts are popular among thousands of colleges, universities, hospitals, museums and community foundations in the United States. Funds generated through planned gifts are devoted to current funding needs as well as capital projects and endowments. Reports published during and after the Great Depression of the 1930s indicate that planned gifts provided a higher percentage of philanthropic dollars than in times of economic prosperity. See "Philanthropy in Uncertain Times - A Retrospective 1931-1949" and "Summary of Recent Research on Depression Giving," from the Sharpe Group. Research shows that planned giving may become considerably more important as a type of philanthropy in the United States due to the aging Baby Boomer population. This is often referred to as the "Great Wealth Transfer." See "Philanthropy's Missing Trillions in the Stanford Social Innovation Review. Types of planned gifts By far, the most commonly utilized planned gift is the bequest of property through a person's final will. 
Other types include: Charitable bargain sale Charitable Gift Annuity (CGA) Charitable Remainder Annuity Trust (CRAT) Charitable Remainder Unitrust (CRUT) Charitable remainder trust Charitable lead annuity trust Charitable lead trust Donor Managed Investment Account Pooled income fund Retained life estate Testamentary life income Assets to give Securities Business Interests Cash Life insurance Personal property Real estate Retirement plan References Marrick, Peter. 2009. The T.A.S.K: The Trusted Advisor's Survival Kit. LexisNexis Canada Inc. Philanthropy
Planned giving
Biology
786
77,618,701
https://en.wikipedia.org/wiki/8176%20aluminium%20alloy
8176 aluminium alloy is produced using iron, zinc and silicon as additives. It is used in power lines due to its high electrical conductivity. Chemical composition Applications Aluminium 8176 is used in building wiring and cables. References External links Material Properties Aluminium alloys
8176 aluminium alloy
Chemistry
55
199,706
https://en.wikipedia.org/wiki/String%20literal
A string literal or anonymous string is a literal for a string value in the source code of a computer program. Modern programming languages commonly use a quoted sequence of characters, formally "bracketed delimiters", as in x = "foo", where "foo" is a string literal with value foo. Methods such as escape sequences can be used to avoid the problem of delimiter collision (issues with brackets) and allow the delimiters to be embedded in a string. There are many alternate notations for specifying string literals, especially in complicated cases. The exact notation depends on the programming language in question. Nevertheless, there are general guidelines that most modern programming languages follow. Syntax Bracketed delimiters Most modern programming languages use bracket delimiters (also balanced delimiters) to specify string literals. Double quotations are the most common quoting delimiters used: "Hi There!" An empty string is literally written by a pair of quotes with no character at all in between: "" Some languages either allow or mandate the use of single quotations instead of double quotations (the string must begin and end with the same kind of quotation mark and the type of quotation mark may or may not give slightly different semantics): 'Hi There!' These quotation marks are unpaired (the same character is used as an opener and a closer), which is a hangover from the typewriter technology which was the precursor of the earliest computer input and output devices. In terms of regular expressions, a basic quoted string literal is given as: "[^"]*" This means that a string literal is written as: a quote, followed by zero, one, or more non-quote characters, followed by a quote. In practice this is often complicated by escaping, other delimiters, and excluding newlines. Paired delimiters A number of languages provide for paired delimiters, where the opening and closing delimiters are different. These also often allow nested strings, so delimiters can be embedded, so long as they are paired, but still result in delimiter collision for embedding an unpaired closing delimiter. Examples include PostScript, which uses parentheses, as in (The quick (brown fox)) and m4, which uses the backtick (`) as the starting delimiter, and the apostrophe (') as the ending delimiter. Tcl allows both quotes (for interpolated strings) and braces (for raw strings), as in "The quick brown fox" or {The quick {brown fox}}; this derives from the single quotations in Unix shells and the use of braces in C for compound statements, since a block of code is in Tcl syntactically the same thing as a string literal – that the delimiters are paired is essential for making this feasible. The Unicode character set includes paired (separate opening and closing) versions of both single and double quotations: “Hi There!” ‘Hi There!’ „Hi There!“ «Hi There!» These, however, are rarely used, as many programming languages will not register them (one exception is the paired double quotations which can be used in Visual Basic .NET). Unpaired marks are preferred for compatibility, as they are easier to type on a wide range of keyboards, and so even in languages where they are permitted, many projects forbid their use for source code. Whitespace delimiters String literals might be ended by newlines. One example is MediaWiki template parameters. {{Navbox |name=Nulls |title=[[wikt:Null|Nulls]] in [[computing]] }} There might be special syntax for multi-line strings.
In YAML, string literals may be specified by the relative positioning of whitespace and indentation. - title: An example multi-line string in YAML body : | This is a multi-line string. "special" metacharacters may appear here. The extent of this string is represented by indentation. No delimiters Some programming languages, such as Perl and PHP, allow string literals without any delimiters in some contexts. In the following Perl program, for example, red, green, and blue are string literals, but are unquoted: %map = (red => 0x00f, blue => 0x0f0, green => 0xf00); Perl treats non-reserved sequences of alphanumeric characters as string literals in most contexts. For example, the following two lines of Perl are equivalent: $y = "x"; $y = x; Declarative notation In the original FORTRAN programming language (for example), string literals were written in so-called Hollerith notation, where a decimal count of the number of characters was followed by the letter H, and then the characters of the string: 35HAn example Hollerith string literal This declarative notation style is contrasted with bracketed delimiter quoting, because it does not require the use of balanced "bracketed" characters on either side of the string. Advantages: eliminates text searching (for the delimiter character) and therefore requires significantly less overhead avoids the problem of delimiter collision enables the inclusion of metacharacters that might otherwise be mistaken as commands can be used for quite effective data compression of plain text strings Drawbacks: this type of notation is error-prone if used as manual entry by programmers special care is needed in case of multi byte encodings This is however not a drawback when the prefix is generated by an algorithm as is most likely the case. Constructor functions C++ has two styles of string, one inherited from C (delimited by "), and the safer std::string in the C++ Standard Library. The std::string class is frequently used in the same way a string literal would be used in other languages, and is often preferred to C-style strings for its greater flexibility and safety. But it comes with a performance penalty for string literals, as std::string usually allocates memory dynamically, and must copy the C-style string literal to it at run time. Before C++11, there was no literal for C++ strings (C++11 allows "this is a C++ string"s with the s at the end of the literal), so the normal constructor syntax was used, for example: std::string str = "initializer syntax"; std::string str("converting constructor syntax"); std::string str = string("explicit constructor syntax"); all of which have the same interpretation. Since C++11, there is also new constructor syntax: std::string str{"uniform initializer syntax"}; auto str = "constexpr literal syntax"s; Delimiter collision When using quoting, if one wishes to represent the delimiter itself in a string literal, one runs into the problem of delimiter collision. For example, if the delimiter is a double quote, one cannot simply represent a double quote itself by the literal """ as the second quote is interpreted as the end of the string literal, not as the value of the string, and similarly one cannot write "This is "in quotes", but invalid." as the middle quoted portion is instead interpreted as outside of quotes. There are various solutions, the most general-purpose of which is using escape sequences, such as "\"" or "This is \"in quotes\" and properly escaped.", but there are many other solutions. 
Paired quotes, such as braces in Tcl, allow nested strings, such as {foo {bar} zork} but do not otherwise solve the problem of delimiter collision, since an unbalanced closing delimiter cannot simply be included, as in {}}. Doubling up A number of languages, including Pascal, BASIC, DCL, Smalltalk, SQL, J, and Fortran, avoid delimiter collision by doubling up on the quotation marks that are intended to be part of the string literal itself: 'This Pascal string''contains two apostrophes''' "I said, ""Can you hear me?""" Dual quoting Some languages, such as Fortran, Modula-2, JavaScript, Python, and PHP allow more than one quoting delimiter; in the case of two possible delimiters, this is known as dual quoting. Typically, this consists of allowing the programmer to use either single quotations or double quotations interchangeably – each literal must use one or the other. "This is John's apple." 'I said, "Can you hear me?"' This does not allow having a single literal with both delimiters in it, however. This can be worked around by using several literals and using string concatenation: 'I said, "This is ' + "John's" + ' apple."' Python has string literal concatenation, so consecutive string literals are concatenated even without an operator, so this can be reduced to: 'I said, "This is '"John's"' apple."' Delimiter quoting C++11 introduced so-called raw string literals. They consist, essentially of R" end-of-string-id ( content ) end-of-string-id ", that is, after R" the programmer can enter up to 16 characters except whitespace characters, parentheses, or backslash, which form the end-of-string-id (its purpose is to be repeated to signal the end of the string, eos id for short), then an opening parenthesis (to denote the end of the eos id) is required. Then follows the actual content of the literal: Any sequence characters may be used (except that it may not contain a closing parenthesis followed by the eos id followed a quote), and finally – to terminate the string – a closing parenthesis, the eos id, and a quote is required. The simplest case of such a literal is with empty content and empty eos id: R"()". The eos id may itself contain quotes: R""(I asked, "Can you hear me?")"" is a valid literal (the eos id is " here.) Escape sequences don't work in raw string literals. D supports a few quoting delimiters, with such strings starting with q" plus an opening delimiter and ending with the respective closing delimiter and ". Available delimiter pairs are (), <>, {}, and []; an unpaired non-identifier delimiter is its own closing delimiter. The paired delimiters nest, so that q"(A pair "()" of parens in quotes)" is a valid literal; an example with the non-nesting / character is q"/I asked, "Can you hear me?"/". Similar to C++11, D allows here-document-style literals with end-of-string ids: q" end-of-string-id newline content newline end-of-string-id " In D, the end-of-string-id must be an identifier (alphanumeric characters). In some programming languages, such as sh and Perl, there are different delimiters that are treated differently, such as doing string interpolation or not, and thus care must be taken when choosing which delimiter to use; see different kinds of strings, below. Multiple quoting A further extension is the use of multiple quoting, which allows the author to choose which characters should specify the bounds of a string literal. For example, in Perl: qq^I said, "Can you hear me?"^ qq@I said, "Can you hear me?"@ qq§I said, "Can you hear me?"§ all produce the desired result. 
Although this notation is more flexible, few languages support it; other than Perl, Ruby (influenced by Perl) and C++11 also support these. A variant of multiple quoting is the use of here document-style strings. Lua (as of 5.1) provides a limited form of multiple quoting, particularly to allow nesting of long comments or embedded strings. Normally one uses [[ and ]] to delimit literal strings (initial newline stripped, otherwise raw), but the opening brackets can include any number of equal signs, and only closing brackets with the same number of signs close the string. For example: local ls = [=[ This notation can be used for Windows paths: local path = [[C:\Windows\Fonts]] ]=] Multiple quoting is particularly useful with regular expressions that contain usual delimiters such as quotes, as this avoids needing to escape them. An early example is sed, where in the substitution command s/regex/replacement/ the default slash / delimiters can be replaced by another character, as in s,regex,replacement, . Constructor functions Another option, which is rarely used in modern languages, is to use a function to construct a string, rather than representing it via a literal. This is generally not used in modern languages because the computation is done at run time, rather than at parse time. For example, early forms of BASIC did not include escape sequences or any other workarounds listed here, and thus one instead was required to use the CHR$ function, which returns a string containing the character corresponding to its argument. In ASCII the quotation mark has the value 34, so to represent a string with quotes on an ASCII system one would write "I said, " + CHR$(34) + "Can you hear me?" + CHR$(34) In C, a similar facility is available via sprintf and the %c "character" format specifier, though in the presence of other workarounds this is generally not used: char buffer[32]; snprintf(buffer, sizeof buffer, "This is %cin quotes.%c", 34, 34); These constructor functions can also be used to represent nonprinting characters, though escape sequences are generally used instead. A similar technique can be used in C++ with the std::string stringification operator. Escape sequences Escape sequences are a general technique for representing characters that are otherwise difficult to represent directly, including delimiters, nonprinting characters (such as backspaces), newlines, and whitespace characters (which are otherwise impossible to distinguish visually), and have a long history. They are accordingly widely used in string literals, and adding an escape sequence (either to a single character or throughout a string) is known as escaping. One character is chosen as a prefix to give encodings for characters that are difficult or impossible to include directly. Most commonly this is backslash; in addition to other characters, a key point is that backslash itself can be encoded as a double backslash \\ and for delimited strings the delimiter itself can be encoded by escaping, say by \" for ". A regular expression for such escaped strings can be given as follows, as found in the ANSI C specification: "(\\.|[^\\"])*" meaning "a quote; followed by zero or more of either an escaped character (backslash followed by something, possibly backslash or quote), or a non-escape, non-quote character; ending in a quote" – the only issue is distinguishing the terminating quote from a quote preceded by a backslash, which may itself be escaped. 
Multiple characters can follow the backslash, such as \uFFFF, depending on the escaping scheme. An escaped string must then itself be lexically analyzed, converting the escaped string into the unescaped string that it represents. This is done during the evaluation phase of the overall lexing of the computer language: the evaluator of the lexer of the overall language executes its own lexer for escaped string literals. Among other things, it must be possible to encode the character that normally terminates the string constant, plus there must be some way to specify the escape character itself. Escape sequences are not always pretty or easy to use, so many compilers also offer other means of solving the common problems. Escape sequences, however, solve every delimiter problem and most compilers interpret escape sequences. When an escape character is inside a string literal, it means "this is the start of the escape sequence". Every escape sequence specifies one character which is to be placed directly into the string. The actual number of characters required in an escape sequence varies. The escape character is on the top/left of the keyboard, but the editor will translate it, therefore it is not directly tapeable into a string. The backslash is used to represent the escape character in a string literal. Many languages support the use of metacharacters inside string literals. Metacharacters have varying interpretations depending on the context and language, but are generally a kind of 'processing command' for representing printing or nonprinting characters. For instance, in a C string literal, if the backslash is followed by a letter such as "b", "n" or "t", then this represents a nonprinting backspace, newline or tab character respectively. Or if the backslash is followed by 1-3 octal digits, then this sequence is interpreted as representing the arbitrary code unit with the specified value in the literal's encoding (for example, the corresponding ASCII code for an ASCII literal). This was later extended to allow more modern hexadecimal character code notation: "I said,\t\t\x22Can you hear me?\x22\n" Note: Not all sequences in the list are supported by all parsers, and there may be other escape sequences which are not in the list. Nested escaping When code in one programming language is embedded inside another, embedded strings may require multiple levels of escaping. This is particularly common in regular expressions and SQL query within other languages, or other languages inside shell scripts. This double-escaping is often difficult to read and author. Incorrect quoting of nested strings can present a security vulnerability. Use of untrusted data, as in data fields of an SQL query, should use prepared statements to prevent a code injection attack. In PHP 2 through 5.3, there was a feature called magic quotes which automatically escaped strings (for convenience and security), but due to problems was removed from version 5.4 onward. Raw strings A few languages provide a method of specifying that a literal is to be processed without any language-specific interpretation. This avoids the need for escaping, and yields more legible strings. Raw strings are particularly useful when a common character needs to be escaped, notably in regular expressions (nested as string literals), where backslash \ is widely used, and in DOS/Windows paths, where backslash is used as a path separator. The profusion of backslashes is known as leaning toothpick syndrome, and can be reduced by using raw strings. 
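For instance, a sketch in Python (other languages with raw-string literals behave analogously): a regular expression that matches one literal backslash must double the backslash once for the regex engine and once more for the ordinary string literal, while a raw string removes the second level of escaping.

import re

pattern_escaped = "\\\\"   # ordinary literal: four source characters denote a two-character string
pattern_raw = r"\\"        # raw literal: the two source characters are kept as-is

print(pattern_escaped == pattern_raw)                    # True: both hold backslash-backslash
print(re.search(pattern_raw, "C:\\Temp") is not None)    # True: matches the backslash in C:\Temp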
Compare escaped and raw pathnames in C#: "The Windows path is C:\\Foo\\Bar\\Baz\\" @"The Windows path is C:\Foo\Bar\Baz\" Extreme examples occur when these are combined – Uniform Naming Convention paths begin with \\, and thus an escaped regular expression matching a UNC name begins with 8 backslashes, "\\\\\\\\", due to needing to escape the string and the regular expression. Using raw strings reduces this to 4 (escaping in the regular expression), as in C# @"\\\\". In XML documents, CDATA sections allows use of characters such as & and < without an XML parser attempting to interpret them as part of the structure of the document itself. This can be useful when including literal text and scripting code, to keep the document well formed. <![CDATA[ if (path!=null && depth<2) { add(path); } ]]> Multiline string literals In many languages, string literals can contain literal newlines, spanning several lines. Alternatively, newlines can be escaped, most often as \n. For example: echo 'foo bar' and echo -e "foo\nbar" are both valid bash, producing: foo bar Languages that allow literal newlines include bash, Lua, Perl, PHP, R, and Tcl. In some other languages string literals cannot include newlines. Two issues with multiline string literals are leading and trailing newlines, and indentation. If the initial or final delimiters are on separate lines, there are extra newlines, while if they are not, the delimiter makes the string harder to read, particularly for the first line, which is often indented differently from the rest. Further, the literal must be unindented, as leading whitespace is preserved – this breaks the flow of the code if the literal occurs within indented code. The most common solution for these problems is here document-style string literals. Formally speaking, a here document is not a string literal, but instead a stream literal or file literal. These originate in shell scripts and allow a literal to be fed as input to an external command. The opening delimiter is <<END where END can be any word, and the closing delimiter is END on a line by itself, serving as a content boundary – the << is due to redirecting stdin from the literal. Due to the delimiter being arbitrary, these also avoid the problem of delimiter collision. These also allow initial tabs to be stripped via the variant syntax <<-END though leading spaces are not stripped. The same syntax has since been adopted for multiline string literals in a number of languages, most notably Perl, and are also referred to as here documents, and retain the syntax, despite being strings and not involving redirection. As with other string literals, these can sometimes have different behavior specified, such as variable interpolation. Python, whose usual string literals do not allow literal newlines, instead has a special form of string, designed for multiline literals, called triple quoting. These use a tripled delimiter, either ''' or """. These literals are especially used for inline documentation, known as docstrings. Tcl allows literal newlines in strings and has no special syntax to assist with multiline strings, though delimiters can be placed on lines by themselves and leading and trailing newlines stripped via string trim, while string map can be used to strip indentation. String literal concatenation A few languages provide string literal concatenation, where adjacent string literals are implicitly joined into a single literal at compile time. This is a feature of C, C++, D, Ruby, and Python, which copied it from C. 
Notably, this concatenation happens at compile time, during lexical analysis (as a phase following initial tokenization), and is contrasted with both run time string concatenation (generally with the + operator) and concatenation during constant folding, which occurs at compile time, but in a later phase (after phrase analysis or "parsing"). Most languages, such as C#, Java and Perl, do not support implicit string literal concatenation, and instead require explicit concatenation, such as with the + operator (this is also possible in D and Python, but illegal in C/C++ – see below); in this case concatenation may happen at compile time, via constant folding, or may be deferred to run time. Motivation In C, where the concept and term originate, string literal concatenation was introduced for two reasons: To allow long strings to span multiple lines with proper indentation in contrast to line continuation, which destroys the indentation scheme; and To allow the construction of string literals by macros (via stringizing). In practical terms, this allows string concatenation in early phases of compilation ("translation", specifically as part of lexical analysis), without requiring phrase analysis or constant folding. For example, the following are valid C/C++: char *s = "hello, " "world"; printf("hello, " "world"); However, the following are invalid: char *s = "hello, " + "world"; printf("hello, " + "world"); This is because string literals have array type, char [n] (C) or const char [n] (C++), which cannot be added; this is not a restriction in most other languages. This is particularly important when used in combination with the C preprocessor, to allow strings to be computed following preprocessing, particularly in macros. As a simple example: char *file_and_message = __FILE__ ": message"; will (if the file is called a.c) expand to: char *file_and_message = "a.c" ": message"; which is then concatenated, being equivalent to: char *file_and_message = "a.c: message"; A common use case is in constructing printf or scanf format strings, where format specifiers are given by macros. A more complex example uses stringification of integers (by the preprocessor) to define a macro that expands to a sequence of string literals, which are then concatenated to a single string literal with the file name and line number: #define STRINGIFY(x) #x #define TOSTRING(x) STRINGIFY(x) #define AT __FILE__ ":" TOSTRING(__LINE__) Beyond syntactic requirements of C/C++, implicit concatenation is a form of syntactic sugar, making it simpler to split string literals across several lines, avoiding the need for line continuation (via backslashes) and allowing one to add comments to parts of strings. For example, in Python, one can comment a regular expression in this way: re.compile("[A-Za-z_]" # letter or underscore "[A-Za-z0-9_]*" # letter, digit or underscore ) Problems Implicit string concatenation is not required by modern compilers, which implement constant folding, and causes hard-to-spot errors due to unintentional concatenation from omitting a comma, particularly in vertical lists of strings, as in: l = ['foo', 'bar' 'zork'] Accordingly, it is not used in most languages, and it has been proposed for deprecation from D and Python.
However, removing the feature breaks backwards compatibility, and replacing it with a concatenation operator introduces issues of precedence – string literal concatenation occurs during lexing, prior to operator evaluation, but concatenation via an explicit operator occurs at the same time as other operators, so parentheses may be needed to ensure the desired evaluation order. A subtler issue is that in C and C++ there are different types of string literals, and concatenation of these has implementation-defined behavior, which poses a potential security risk.
Different kinds of strings
Some languages provide more than one kind of string literal, with differing behavior. This is particularly used to indicate raw strings (no escaping), or to disable or enable variable interpolation, but it has other uses, such as distinguishing character sets. Most often this is done by changing the quoting character or adding a prefix or suffix. This is comparable to prefixes and suffixes on integer literals, such as to indicate hexadecimal numbers or long integers.
One of the oldest examples is in shell scripts, where single quotes indicate a raw string or "literal string", while double quotes have escape sequences and variable interpolation.
For example, in Python, raw strings are preceded by an r or R – compare 'C:\\Windows' with r'C:\Windows' (though a Python raw string cannot end in an odd number of backslashes). Python 2 also distinguishes two types of strings: 8-bit ASCII ("bytes") strings (the default), explicitly indicated with a b or B prefix, and Unicode strings, indicated with a u or U prefix; in Python 3, strings are Unicode by default and bytes are a separate type whose literals must be prefixed with b.
C#'s notation for raw strings is called @-quoting:
@"C:\Foo\Bar\Baz\"
While this disables escaping, it allows doubled quotes, which can be used to represent quotes within the string:
@"I said, ""Hello there."""
C++11 allows raw strings, Unicode strings (UTF-8, UTF-16, and UTF-32), and wide character strings, determined by prefixes. C++14 additionally adds literals for the library string type std::string, which is generally preferred to C-style strings.
In Tcl, brace-delimited strings are literal, while quote-delimited strings have escaping and interpolation.
Perl has a wide variety of strings, which are more formally considered operators and are known as quote and quote-like operators. These include both a usual syntax (fixed delimiters) and a generic syntax, which allows a choice of delimiters; these include:
'' "" `` // m// qr// s/// y///
q{} qq{} qx{} qw{} m{} qr{} s{}{} tr{}{} y{}{}
REXX uses suffix characters to specify characters or strings using their hexadecimal or binary code. E.g.,
'20'x
"0010 0000"b
"00100000"b
all yield the space character, avoiding the function call X2C(20).
String interpolation
In some languages, string literals may contain placeholders referring to variables or expressions in the current context, which are evaluated (usually at run time). This is referred to as variable interpolation, or more generally string interpolation. Languages that support interpolation generally distinguish string literals that are interpolated from ones that are not. For example, in sh-compatible Unix shells (as well as Perl and Ruby), double-quoted (quotation-delimited, ") strings are interpolated, while single-quoted (apostrophe-delimited, ') strings are not.
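As an illustrative Python sketch (the name is an arbitrary example), the f prefix plays a role similar to the shell's double quotes, while an unprefixed literal behaves like the shell's single quotes and leaves the placeholder text untouched.
name = "Nancy"
print(f"{name} said hello.")   # interpolated: Nancy said hello.
print("{name} said hello.")    # not interpolated: {name} said hello.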
Non-interpolated string literals are sometimes referred to as "raw strings", but this is distinct from "raw string" in the sense of escaping. For example, in Python, a string prefixed with r or R has no escaping or interpolation, a normal string (no prefix) has escaping but no interpolation, and a string prefixed with f or F has both escaping and interpolation. For example, the following Perl code:
$name = "Nancy";
$greeting = "Hello World";
print "$name said $greeting to the crowd of people.";
produces the output:
Nancy said Hello World to the crowd of people.
In this case, the metacharacter $ (not to be confused with the sigil in the variable assignment statements) is interpreted to indicate variable interpolation, and requires escaping if it is to be output literally. This should be contrasted with the printf function, which produces the same output using notation such as:
printf "%s said %s to the crowd of people.", $name, $greeting;
but does not perform interpolation: the %s is a placeholder in a printf format string, but the variables themselves are outside the string. This is also contrasted with "raw" strings:
print '$name said $greeting to the crowd of people.';
which produces the output:
$name said $greeting to the crowd of people.
Here the $ characters are not metacharacters, and are not interpreted to have any meaning other than plain text.
Embedding source code in string literals
Languages that lack flexibility in specifying string literals make it particularly cumbersome to write programming code that generates other programming code. This is particularly true when the generation language is the same as or similar to the output language. For example:
writing code to produce quines
generating an output language from within a web template
using XSLT to generate XSLT, or SQL to generate more SQL
generating a PostScript representation of a document for printing purposes, from within a document-processing application written in C or some other language
Nevertheless, some languages are particularly well-adapted to producing this sort of self-similar output, especially those that support multiple options for avoiding delimiter collision.
Using string literals as code that generates other code may have adverse security implications, especially if the output is based at least partially on untrusted user input. This is particularly acute in the case of Web-based applications, where malicious users can take advantage of such weaknesses to subvert the operation of the application, for example by mounting an SQL injection attack.
See also
Character literal
XML Literals
Sigil (computer programming)
Notes
References
External links
Literals In Programming
Source code Literal Articles with example Python (programming language) code
String literal
Mathematics,Technology
7,109
54,227,899
https://en.wikipedia.org/wiki/The%20Art%20of%20Unit%20Testing
The Art of Unit Testing is a 2009 book by Roy Osherove that covers unit test writing for software. It is written with .NET Framework examples, but the fundamentals can be applied by any developer. The second edition was published in 2013; it has two additional chapters, as well as reorganization and updating of chapters from the first edition. The second edition is still in print and is available at the Manning Publications website.
Reception
Reviews of both editions have been largely positive. A Slashdot book review says that "Osherove's book has something for all readers, regardless of their experience with unit testing." Ward Bell wrote: "It just arrived and I read it in one sitting. I am so pleased that I did. I’ll quarrel with it... but do not let that deter you from rushing to buy your own copy."
References
External links
The Art of Unit Testing book site, which contains free sample chapters and additional reading and resources.
2009 non-fiction books 2013 non-fiction books Manning Publications books
The Art of Unit Testing
Technology
211
17,992,940
https://en.wikipedia.org/wiki/Sodium%20chromate
Sodium chromate is the inorganic compound with the formula Na2CrO4. It exists as a yellow hygroscopic solid, which can form tetra-, hexa-, and decahydrates. It is an intermediate in the extraction of chromium from its ores.
Production and reactivity
It is obtained on a vast scale by roasting chromium ores in air in the presence of sodium carbonate:
2 Cr2O3 + 4 Na2CO3 + 3 O2 → 4 Na2CrO4 + 4 CO2
This process converts the chromium into a water-extractable form, leaving behind iron oxides. Calcium carbonate is typically included in the mixture to improve oxygen access and to keep silicon and aluminium impurities in an insoluble form. The process temperature is typically around 1100 °C. For laboratory and small-scale preparations, a mixture of chromite ore, sodium hydroxide and sodium nitrate may be used instead, reacting at lower temperatures (even 350 °C in the corresponding potassium chromate system). Subsequent to its formation, the chromate salt is converted to sodium dichromate, the precursor to most chromium compounds and materials. The industrial route to chromium(III) oxide involves reduction of sodium chromate with sulfur.
Acid-base behavior
It converts to sodium dichromate when treated with acids:
2 Na2CrO4 + 2 HCl → Na2Cr2O7 + 2 NaCl + H2O
Further acidification affords chromium trioxide:
Na2CrO4 + H2SO4 → CrO3 + Na2SO4 + H2O
Uses
Aside from its central role in the production of chromium from its ores, sodium chromate is used as a corrosion inhibitor in the petroleum industry and as a dyeing auxiliary in the textile industry. It is a diagnostic pharmaceutical used in determining red blood cell volume. In organic chemistry, sodium chromate is used as an oxidant, converting primary alcohols to carboxylic acids and secondary alcohols to ketones. Sodium chromate is a strong oxidizer.
Safety
As with other Cr(VI) compounds, sodium chromate is carcinogenic. The compound is also corrosive, and exposure may produce severe eye damage or blindness. Exposure may also impair fertility, cause heritable genetic damage and harm unborn children.
See also
Chromate and dichromate
References
Further reading
Chromates Sodium compounds Oxidizing agents
Sodium chromate
Chemistry
523