id: int64 (580 to 79M)
url: string (length 31 to 175)
text: string (length 9 to 245k)
source: string (length 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
11,158,178
https://en.wikipedia.org/wiki/Fly%20Me%20to%20the%20Moon%20%282008%20film%29
Fly Me to the Moon is a 2008 animated science fiction comedy film about three flies who stow away aboard Apollo 11 and travel to the Moon. It was directed by Ben Stassen and written by Domonic Paris. The film was released in digital 3-D in Belgium on 30 January 2008, and in the US and Canada on 15 August. The film was also released in IMAX 3-D in the US and Canada on 8 August. The film serves as a fictionalized retelling of the 1969 Apollo 11 mission, incorporating a story of three young flies that stow away on the rocket to fulfill their dream of going to the Moon, while their families take on a group of Soviet flies who try to sabotage the mission. Fly Me to the Moon was produced by nWave Pictures in association with Illuminata Pictures, and distributed by Summit Entertainment and Vivendi Visual Entertainment in the United States. The film received generally negative reviews from critics and was a box office disappointment, grossing only $41.7 million against a $25 million budget. Plot The narrator explains that in 1957, the Soviet Union launched Earth's first satellite, Sputnik 1, into orbit. In 1961, while NASA was putting a chimpanzee aboard Mercury-Atlas 5, cosmonaut Yuri Gagarin became the first person to go to space. Feeling a sense of urgency to overtake the Soviets in the space race, U.S. President John F. Kennedy addressed a joint session of Congress on May 25, 1961, setting the goal of landing a man on the Moon and returning him safely to the Earth before the decade was out. Eight years later, in 1969, an 11-year-old fly named Nat and his two best friends, I.Q. and Scooter, build a "fly-sized" rocket in a field across from Cape Canaveral, Florida, where Apollo 11 sits on the Kennedy Space Center Launch Complex 39. For as long as Nat can remember, his grandfather, Amos, has told him of his many adventures, such as his daring rescue of Amelia Earhart when she crossed the Atlantic Ocean on her historic 1932 solo flight. Wanting to be an adventurer like his grandpa, Nat tells his friends his plan to get aboard Apollo 11 and go to the Moon. With some reluctance, they agree to join him. The next morning, the three flies make it into Launch Control and stow away inside the space helmets of astronauts Neil Armstrong, Buzz Aldrin, and Michael Collins. Moments later, Flight Director Gene Kranz in Houston's Mission Control Center gives the go for launch. As the Saturn V rocket climbs through the atmosphere and reaches its Earth parking orbit, Nat's, Scooter's, and I.Q.'s mothers faint upon hearing from Grandpa that their sons will be in space for a week. Grandpa, Nat's mother, and the others watch TV to get news of their offspring's adventures. As the astronauts appear on camera, the heroic flies wave in the background, visible to other flies but barely seen by humans – except for the attentive NASA flight controller Steve Bales, who informs Armstrong of the "contaminants" on board. In the Soviet Union, there are other flies watching TV – Soviet flies who cannot tolerate the idea that American flies might get to the Moon first. Special Soviet operatives are enlisted to interfere with the American mission, including an operative named Yegor. Fortunately, Nadia, a Soviet fly, hears Scooter calling out the name of Amos, whom she met in Paris and loved many years ago. Aboard the Command Module Columbia, as the trans-lunar injection burn begins, the spacecraft is violently rocked. 
There is a short circuit in the service module that must be fixed manually or the ship will not be able to complete its mission. Nat and I.Q. fly through a maze of wires, find the problem, and repair it just in time. Unaware of the flies' aid, the astronauts continue the mission: they perform the maneuver to turn Columbia around to dock with the Lunar Module Eagle and pull it away from the spent S-IVB rocket. Just as the flies congratulate each other, they are sprayed with a numbing aerosol and held captive in a test tube. The flies manage to break out of the vial. Nat sneaks into Armstrong's helmet as he enters Eagle, which lands on Mare Tranquillitatis. From inside Armstrong's helmet, Nat beams with every awe-inspiring historic step. I.Q. and Scooter join him on the surface inside Aldrin's helmet. After a climactic rescue in which Nat brings Scooter back to Columbia, Eagle is jettisoned. Back on Earth, other plots are being set in motion. After more than 30 years apart, Nadia arrives in America, visits Amos, and tells him and Nat's mother about the Soviet plot to divert the mission. Amos takes off with a vow to save the mission. At Mission Control, the Soviet operatives prepare to alter the descent codes. Unaware of the potential danger, the Apollo 11 astronauts and the flies prepare to come back home. Amos, Nat's mother, Nadia, their friend Louie, and two local teens, Ray and Butch, join forces to stop Yegor and the Soviet plot as Columbia approaches Earth's atmosphere. With their combined efforts, the Soviet threat is crushed, saving the astronauts and the flies. After a short period of radio silence due to ionization blackout, Columbia splashes down safely in the Pacific Ocean, where it is recovered by USS Hornet. Nat, I.Q., and Scooter return to their junkyard as heroes. At the film's end, the real Buzz Aldrin appears and explains that no flies were on board during the historic flight, and that it is scientifically impossible for a bug to go to space. Cast Trevor Gagnon as Nat McFly Philip Daniel Bolden as I.Q. David Gore as Scooter Christopher Lloyd as Amos McFly Kelly Ripa as Mrs. McFly Nicollette Sheridan as Nadia Tim Curry as Yegor Ed Begley Jr. as Poopchev Adrienne Barbeau as Scooter's Mom Robert Patrick as Louie Mimi Maynard as I.Q.'s Mom Buzz Aldrin as himself Cam Clarke as Ray Scott Menville as Butch Steve Kramer as Leonid Doug Stone as Russian Newscaster Max Burkholder as Mom's Maggot Jessica Gee as Maggot #1 Mona Marshall as Maggot #2 Barbara Goodson as Maggot #3 Gregg Berger as Pale Russian Flies Charles Rocket as Mission Control 1961 Production The total production budget of Fly Me to the Moon was €17.3 million (about $25.2 million). nWave financed about 75% of the budget itself. The rest was raised from investors, who could benefit from Belgium's Tax Shelter system. The Flanders Audiovisual Fund contributed €100,000 ($146,100), 10% of its annual budget for animation. Apart from the feature-length version, two further versions of the film exist. The 49-minute Attraction version was released across theme parks starting in the summer of 2007. Venues showing this version, which features added 4-D effects, include Isla Magica in Spain, Mirabilandia in Italy, Bellewaerde in Belgium, Bakken and Planetariet in Denmark, and Blackpool Pleasure Beach in the UK, as well as the Adler Planetarium in Chicago and the Museum of Science in Boston. This version of the film omits the subplot about the attempt by Russian flies to sabotage the mission. 
The 13-minute Ride version is featured at Six Flags Great Adventure in New Jersey and Six Flags Over Texas in Texas. Fly Me to the Moon marked the final film role of actor Charles Rocket; it was released three years after his death in 2005. Rocket voiced Mission Control in the scenes set in 1961. Release Fly Me to the Moon was distributed in the U.S. by Summit Entertainment and in the U.K. by Momentum Pictures. As IMAX 3-D films usually run around an hour or less, some scenes were cut from the IMAX version. The IMAX version starts with the opening scene, which shows the first monkey being launched into space. It then cuts to Nat sneaking out to meet his friends and sneak into the command center, omitting the scene in which Nat and Amos discuss Amelia Earhart. The IMAX version also cuts the Soviet subplot. Fly Me to the Moon was released on DVD in North America on 2 December 2008. Two versions were released, a standard 2-D version and a 3-D version of the film that includes two pairs of 3-D glasses. Bonus features on both versions include an interactive game, production notes, and more. Box office Fly Me to the Moon was released in 12 IMAX 3-D theaters on 8 August 2008 in Canada and the United States, and in a further 18 on 15 August. The film was released widely in 3-D-equipped theaters on 15 August. It earned $704,000 on opening day in 452 theaters and $1,900,523 in its opening weekend, debuting at number 12. As of November 2009, the film had grossed $41,412,008 worldwide. Reception Fly Me to the Moon received predominantly negative reviews from critics. The film holds a 20% approval rating on the review aggregator website Rotten Tomatoes, based on 84 reviews with an average rating of 3.95/10. The site's consensus reads: "Flatly animated and indifferently scripted, Fly Me to the Moon offers little for audiences not very young children". On Metacritic, the film has a score of 36 out of 100 based on 21 critics, indicating "generally unfavorable reviews". See also Apollo 11 in popular culture List of animated feature-length films List of computer-animated films List of 3-D films RealD 3D List of IMAX films Animals in space Zond 5, a 1968 Soviet Moon mission which included fruit fly eggs and two tortoises Fe, Fi, Fo, Fum, and Phooey, five mice who traveled to the Moon on Apollo 17 References External links 2008 films Belgian animated science fiction films American animated science fiction films 2008 computer-animated films 2008 3D films 3D animated films 2000s American animated films Belgian animated feature films Belgian alternate history films 4D films Animals in space Animated adventure films 2000s children's fantasy films Comedy films based on actual events Alternate history films Films about Apollo 11 Films directed by Ben Stassen Films set in 1969 Films set in the 1960s Films scored by Ramin Djawadi Lionsgate animated films Summit Entertainment animated films Summit Entertainment films Cold War films Films with live action and animation Cultural depictions of Buzz Aldrin Cultural depictions of Neil Armstrong Cultural depictions of Michael Collins (astronaut) Cultural depictions of Amelia Earhart Animated films about flies 2000s English-language films 2000s Belgian films English-language fantasy films Animated films based on songs
Fly Me to the Moon (2008 film)
Chemistry,Biology
2,269
8,440,165
https://en.wikipedia.org/wiki/European%20Council%20of%20Applied%20Sciences%20and%20Engineering
The European Council of Applied Sciences, Technologies and Engineering (Euro-CASE) is a European non-profit organization which brings together more than 20 European national academies of engineering, applied sciences and technology. The organization provides a European forum for exchange and consultation between European institutions, industry, research, and national governments. The mission of the organization is to pursue, encourage and maintain excellence in the fields of engineering, applied sciences and technology, and to promote their science, art and practice. Member academies Austrian Academy of Sciences (Österreichische Akademie der Wissenschaften) (Austria) The Royal Academies for Science and the Arts of Belgium (Class of Technical Sciences) - (before 2009: Royal Belgian Academy Council of Applied Sciences) (Belgium) Croatian Academy of Engineering (Akademija tehničkih znanosti Hrvatske) (HATZ) (Croatia) Engineering Academy of the Czech Republic (Strojírenská Akademie České Republiky) (EACR) (Czech Republic) Danish Academy of Technical Sciences (Akademiet for de Tekniske Videnskaber) (ATV) (Denmark) Technology Academy Finland (Tekniikan Akatemia) (TAF) (Finland) French Academy of Technologies (Académie des technologies) (France) German Academy of Science and Engineering (Deutsche Akademie der Technikwissenschaften) (Acatech) (Germany) Technical Chamber of Greece (Τεχνικό Επιμελητήριο Ελλάδας) (TEE-TCG) (Greece) Hungarian Academy of Engineering (Magyar Mérnöki Akadémia) (MMA) (Hungary) Irish Academy of Engineering (IAE) (Ireland) National Research Council (Italy) (Consiglio Nazionale delle Ricerche) (CISAI/FAST) (Italy) Netherlands Academy of Technology and Innovation (NATI) (The Netherlands) Norwegian Academy of Technological Sciences (Norges Tekniske Vitenskapsakademi) (NTVA) (Norway) Polish Academy of Sciences (Polska Akademia Nauk) (PAN) (Poland) Portuguese Academy of Engineering (Academia de Engenharia) (Portugal) Technical Sciences Academy of Romania (Academia de Științe Tehnice din România) (ASTR) (Romania) Engineering Academy of Slovenia (Inženirska akademija Slovenije) (IAS) (Slovenia) Royal Academy of Engineering of Spain (Real Academia de Ingeniería) (RAI) (Spain) Royal Swedish Academy of Engineering Sciences (Kungliga Ingenjörsvetenskapsakademien) (IVA) (Sweden) Swiss Academy of Engineering Sciences (Schweizerische Akademie der Technischen Wissenschaften) (SATW) (Switzerland) Royal Academy of Engineering (RAEng) (United Kingdom) See also Academia Europaea acatech European IST Grand Prize European Research Advisory Board European Research Area (ERA) European Research Council Information Society Technologies Advisory Group (ISTAG) External links European Council of Applied Sciences and Engineering Engineering organizations Science and technology in Europe
European Council of Applied Sciences and Engineering
Engineering
660
32,034,345
https://en.wikipedia.org/wiki/Double%20affine%20Hecke%20algebra
In mathematics, a double affine Hecke algebra, or Cherednik algebra, is an algebra containing the Hecke algebra of an affine Weyl group, given as the quotient of the group ring of a double affine braid group. Double affine Hecke algebras were introduced by Cherednik, who used them to prove Macdonald's constant term conjecture for Macdonald polynomials. Infinitesimal Cherednik algebras have significant implications in representation theory and have found applications in particle physics and chemistry. References A. A. Kirillov, Lectures on affine Hecke algebras and Macdonald's conjectures, Bull. Amer. Math. Soc. 34 (1997), 251–292. Macdonald, I. G., Affine Hecke algebras and orthogonal polynomials, Cambridge Tracts in Mathematics, 157, Cambridge University Press, Cambridge, 2003. x+175 pp. Algebras Representation theory
Double affine Hecke algebra
Mathematics
182
46,692,671
https://en.wikipedia.org/wiki/Dysidenin
Dysidenin is an alkaloid toxin derived from the marine sponge Lamellodysidea herbacea and has been identified as lethal to certain fish species and other marine organisms. The toxic mechanism of dysidenin is linked to its ability to inhibit iodide transport in thyroid cells, a process crucial for thyroid hormone synthesis and subsequent metabolic regulation. Inhibition of iodide transport can disrupt thyroid function, causing a range of metabolic issues. This aspect of dysidenin not only sheds light on ecological interactions within marine environments but also suggests potential medical applications under controlled conditions. There is notable regional variation in the metabolites produced by Lamellodysidea herbacea: dysidenin has been isolated from samples collected on the Great Barrier Reef but was absent in samples from the Caroline Islands. This geographical variation in metabolite expression could be attributed to differences in environmental conditions, predator-prey interactions, or microbial symbiont communities, and it highlights the adaptive metabolomic responses of marine sponges to their local ecological settings. It also underscores the need for a deeper understanding of how spatial differences in marine ecosystems influence the chemical ecology of resident organisms. Dysidenin was first isolated in 1977. Over the ensuing decades, the elucidation of its structural and toxicological properties has contributed to the broader field of marine natural products, a domain rich in potential pharmacological candidates and ecological indicators. Dysidenin also serves as a reminder of the link between the health of marine habitats and the biodiscovery potential they hold; understanding it as both an ecological factor and a potential biomedical resource calls for a multidisciplinary approach anchored in rigorous analytical examination. References Toxins Halogen-containing natural products Carboxamides 2-Thiazolyl compounds Trichloromethyl compounds Organochlorides Halogen-containing alkaloids
Dysidenin
Chemistry,Environmental_science
523
28,752,069
https://en.wikipedia.org/wiki/Saffman%E2%80%93Delbr%C3%BCck%20model
The Saffman–Delbrück model describes a lipid membrane as a thin layer of viscous fluid, surrounded by a less viscous bulk liquid. This picture was originally proposed to determine the diffusion coefficient of membrane proteins, but has also been used to describe the dynamics of fluid domains within lipid membranes. The Saffman–Delbrück formula is often applied to determine the size of an object embedded in a membrane from its observed diffusion coefficient, and is characterized by the weak logarithmic dependence of the diffusion constant on the object radius. Origin In a three-dimensional highly viscous liquid, a spherical object of radius $a$ has diffusion coefficient $D_{3D} = \frac{k_B T}{6 \pi \eta a}$ by the well-known Stokes–Einstein relation. By contrast, the diffusion coefficient of a circular object embedded in a two-dimensional fluid diverges; this is Stokes' paradox. In a real lipid membrane, the diffusion coefficient may be limited by: (1) the size of the membrane, (2) the inertia of the membrane (finite Reynolds number), or (3) the effect of the liquid surrounding the membrane. Philip Saffman and Max Delbrück calculated the diffusion coefficient for these three cases, and showed that Case 3 was the relevant effect. Saffman–Delbrück formula The diffusion coefficient of a cylindrical inclusion of radius $a$ in a membrane with thickness $h$ and viscosity $\eta_m$, surrounded by bulk fluid with viscosity $\eta_w$, is: $D_{SD} = \frac{k_B T}{4 \pi \eta_m h}\left[\ln\left(\frac{L_{SD}}{a}\right) - \gamma\right]$, where the Saffman–Delbrück length $L_{SD} = \frac{h \eta_m}{2 \eta_w}$ and $\gamma \approx 0.5772$ is the Euler–Mascheroni constant. Typical values of $L_{SD}$ are 0.1 to 10 micrometres. This result is an approximation applicable for radii $a \ll L_{SD}$, which is appropriate for proteins (radii of a few nanometres), but not for micrometre-scale lipid domains. The Saffman–Delbrück formula predicts that diffusion coefficients depend only weakly on the size of the embedded object; for example, with $L_{SD}$ of a few micrometres, increasing $a$ from 1 nm to 10 nm reduces the diffusion coefficient by only about 30%. Beyond the Saffman–Delbrück length Hughes, Pailthorpe, and White extended the theory of Saffman and Delbrück to inclusions of arbitrary radius; for $a \gg L_{SD}$, the diffusion coefficient falls off as the inverse of the radius, approaching $D \approx \frac{k_B T}{16 \eta_w a}$. A useful interpolating formula that reproduces the correct diffusion coefficients between these two limits has been published; note that the original version of that formula contains a typographical error, and the corrected values given in the correction to that article should be used. Experimental studies Though the Saffman–Delbrück formula is commonly used to infer the sizes of nanometer-scale objects, recent controversial experiments on proteins have suggested that the diffusion coefficient's dependence on radius should be $1/a$ instead of $\ln(L_{SD}/a)$. However, for larger objects (such as micrometre-scale lipid domains), the Saffman–Delbrück model (with the extensions above) is well established. Extending Saffman–Delbrück for Hydrodynamic Coupling of Proteins within Curved Lipid Bilayer Membranes The Saffman–Delbrück approach has also been extended in recent works for modeling hydrodynamic interactions between proteins embedded within curved lipid bilayer membranes, such as in vesicles and other structures. These works use related formulations to study the roles of the membrane hydrodynamic coupling and curvature in the collective drift-diffusion dynamics of proteins within bilayer membranes. Various models of the protein inclusions within curved membranes have been developed, including models based on series truncations, immersed boundary methods, and fluctuating hydrodynamics. References Biophysics Proteins Membrane biology
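The following is a minimal Python sketch of the Saffman–Delbrück formula as written above; the membrane thickness, viscosities, and protein radii used in the example are illustrative assumptions, not values taken from the original papers.

```python
import math

def saffman_delbruck_D(a, h, eta_m, eta_w, T=298.15):
    """Saffman-Delbruck diffusion coefficient (m^2/s) of a cylindrical
    inclusion of radius a (m) in a membrane of thickness h (m) and
    viscosity eta_m (Pa s), surrounded by bulk fluid of viscosity eta_w (Pa s).
    Only valid in the Saffman-Delbruck regime a << L_SD."""
    kB = 1.380649e-23                   # Boltzmann constant, J/K
    gamma = 0.5772156649                # Euler-Mascheroni constant
    L_SD = eta_m * h / (2.0 * eta_w)    # Saffman-Delbruck length, m
    if a >= 0.1 * L_SD:
        raise ValueError("formula assumes a << L_SD")
    return kB * T / (4.0 * math.pi * eta_m * h) * (math.log(L_SD / a) - gamma)

# Assumed illustrative parameters: 4 nm thick membrane, membrane viscosity
# 0.5 Pa s, water-like bulk viscosity 1 mPa s, giving L_SD = 1 micrometre.
for a in (1e-9, 10e-9):
    D = saffman_delbruck_D(a, h=4e-9, eta_m=0.5, eta_w=1e-3)
    print(f"a = {a*1e9:4.0f} nm  ->  D = {D*1e12:5.2f} um^2/s")
```

Running the sketch illustrates the weak size dependence discussed above: a tenfold increase in radius reduces the diffusion coefficient by only a few tens of percent.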
Saffman–Delbrück model
Physics,Chemistry,Biology
689
28,105,685
https://en.wikipedia.org/wiki/Christensen%20failure%20criterion
The Christensen failure criterion is a material failure theory for isotropic materials that attempts to span the range from ductile to brittle materials. It has a two-property form calibrated by the uniaxial tensile and compressive strengths T and C. The theory was developed by Stanford professor Richard M. Christensen and first published in 1997. Description The Christensen failure criterion is composed of two separate subcriteria representing competitive failure mechanisms. When expressed in principal stress components, it is given by: Polynomial invariants failure criterion For $0 \le T/C \le 1$: $\left(\frac{1}{T}-\frac{1}{C}\right)\left(\sigma_1+\sigma_2+\sigma_3\right)+\frac{1}{2TC}\left[\left(\sigma_1-\sigma_2\right)^2+\left(\sigma_2-\sigma_3\right)^2+\left(\sigma_3-\sigma_1\right)^2\right]\le 1 \qquad (1)$ Coordinated Fracture Criterion For $0 \le T/C \le 1/2$: $\sigma_1 \le T,\quad \sigma_2 \le T,\quad \sigma_3 \le T \qquad (2)$ The geometric form of (1) is that of a paraboloid in principal stress space. The fracture criterion (2) (applicable only over the partial range 0 ≤ T/C ≤ 1/2) cuts slices off the paraboloid, leaving three flattened elliptical surfaces on it. The fracture cutoff is vanishingly small at T/C = 1/2 but it grows progressively larger as T/C diminishes. The organizing principle underlying the theory is that all isotropic materials admit a distinct classification system based upon their T/C ratio. The comprehensive failure criterion (1) and (2) reduces to the Mises criterion at the ductile limit, T/C = 1. At the brittle limit, T/C = 0, it reduces to a form that cannot sustain any tensile components of stress. Many cases of verification have been examined over the complete range of materials from extremely ductile to extremely brittle types. Also, examples of applications have been given. Related criteria distinguishing ductile from brittle failure behaviors have been derived and interpreted. Applications have been given by Ha to the failure of the isotropic, polymeric matrix phase in fiber composite materials. See also Strength of materials material failure theory Von Mises yield criterion Mohr–Coulomb theory References Mechanical failure Plasticity (physics) Solid mechanics Mechanics
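As a complement, here is a minimal Python sketch that evaluates the two subcriteria in the principal-stress form given above; the strengths and stress states in the example are arbitrary illustrative numbers, and the function is a reading aid under those assumptions rather than a reference implementation of the published theory.

```python
def christensen_fails(s1, s2, s3, T, C):
    """Return True if the principal stress state (s1, s2, s3), tension positive,
    violates the Christensen criterion for a material with uniaxial tensile
    strength T and compressive strength C (same units as the stresses)."""
    if not (0.0 < T <= C):
        raise ValueError("requires 0 < T <= C")
    # Polynomial-invariants (paraboloid) criterion, valid for 0 <= T/C <= 1
    poly = ((1.0 / T - 1.0 / C) * (s1 + s2 + s3)
            + ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / (2.0 * T * C))
    failed = poly > 1.0
    # Coordinated fracture criterion, applied only when T/C <= 1/2:
    # no principal stress may exceed the tensile strength T
    if T / C <= 0.5:
        failed = failed or max(s1, s2, s3) > T
    return failed

# Illustrative checks for a brittle-like material (T = 50, C = 250, so T/C = 0.2):
print(christensen_fails(49.0, 0.0, 0.0, T=50.0, C=250.0))    # False: just below T
print(christensen_fails(60.0, 0.0, 0.0, T=50.0, C=250.0))    # True: exceeds the fracture cutoff
print(christensen_fails(-240.0, 0.0, 0.0, T=50.0, C=250.0))  # False: still inside the paraboloid
```

A quick sanity check on the form used here: under uniaxial tension the paraboloid criterion reaches its limit exactly at a stress of T, and under uniaxial compression exactly at −C, so the expression is calibrated by the two strengths in the way the text describes.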
Christensen failure criterion
Materials_science,Engineering
380
2,052,015
https://en.wikipedia.org/wiki/Strainmeter
A strainmeter is an instrument used by geophysicists to measure the deformation of the Earth. Linear strainmeters measure the changes in the distance between two points, using either a solid piece of material (over a short distance) or a laser interferometer (over a long distance, up to several hundred meters). The type using a solid length standard was invented by Benioff in 1932, using an iron pipe; later instruments used rods made of fused quartz. Modern instruments of this type can make measurements of length changes over very small distances, and are commonly placed in boreholes to measure small changes in the diameter of the borehole. Another type of borehole instrument detects changes in a volume filled with fluid (such as silicone oil). The most common type is the dilatometer invented by Sacks and Evertson in the USA (patent 3,635,076); a design that uses specially shaped volumes to measure the strain tensor has been developed by Sakata in Japan. All these types of strainmeters can measure deformation over frequencies from a few Hz to periods of days, months, and years. This allows them to measure signals at lower frequencies than can be detected with seismometers. Most strainmeter records show signals from the earth tides, and seismic waves from earthquakes. At longer periods, they can also record the gradual accumulation of stress caused by plate tectonics, the release of this stress in earthquakes, and rapid changes of stress following earthquakes. The most extensive network of strainmeters is installed in Japan; it includes mostly quartz-bar instruments in tunnels and borehole strainmeters, with a few laser instruments. Starting in 2003 there has been a major effort (the Plate Boundary Observatory) to install many more strainmeters along the Pacific/North-America plate boundary in the United States. The aim is to install about 100 borehole strainmeters, primarily in Washington, Oregon and California, and five laser strainmeters, all in California. See also Deformation monitoring Deformation (physics) Extensometer Infinitesimal strain theory References External links Piñon Flat Observatory, CA: laser strainmeters GTSM Technologies, AUS: borehole strainmeters Plate Boundary Observatory US Geological Survey, see under Fault Monitoring Geophysics Seismology instruments Length, distance, or range measuring devices
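As a small illustration of what a linear strainmeter records, the sketch below converts a baseline length and a measured length change into dimensionless strain; the 100 m baseline and 1 μm length change are assumed numbers chosen only to show the orders of magnitude involved.

```python
def linear_strain(L0, dL):
    """Dimensionless linear strain from a baseline length L0 and a measured
    change in length dL (same units): strain = dL / L0."""
    return dL / L0

# Assumed example: a 100 m laser-strainmeter baseline changing by 1 micrometre
# corresponds to a strain of 1e-8, roughly the order of magnitude of earth-tide signals.
print(linear_strain(100.0, 1e-6))   # 1e-08
```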
Strainmeter
Physics,Technology,Engineering
476
13,609,521
https://en.wikipedia.org/wiki/Hydnellum%20peckii
Hydnellum peckii is a fungus in the genus Hydnellum of the family Bankeraceae. It is a hydnoid species, producing spores on the surface of vertical spines or tooth-like projections that hang from the undersurface of the fruit bodies. It is found in North America and Europe, and was recently discovered in Iran (2008) and Korea (2010). Hydnellum peckii is a mycorrhizal species, and forms mutually beneficial relationships with a variety of coniferous trees, growing on the ground singly, scattered, or in fused masses. The fruit bodies typically have a funnel-shaped cap with a white edge, although the shape can be highly variable. Young, moist fruit bodies can "bleed" bright red guttation droplets that contain a pigment known to have anticoagulant properties similar to heparin. The unusual appearance of the young fruit bodies has earned the species several descriptive common names, including strawberries and cream, the bleeding Hydnellum, the bleeding tooth fungus, the red-juice tooth, and the Devil's tooth. Although Hydnellum peckii fruit bodies are readily identifiable when young, they become brown and nondescript when they age. Taxonomy The species was first described scientifically by American mycologist Howard James Banker in 1913. Italian mycologist Pier Andrea Saccardo placed the species in the genus Hydnum in 1925, while Walter Henry Snell and Esther Amelia Dick placed it in Calodon in 1956; Hydnum peckii (Banker) Sacc. and Calodon peckii Snell & E.A. Dick are synonyms of Hydnellum peckii. The fungus is classified in the stirps (species thought to be descendants of a common ancestor) Diabolum of the genus Hydnellum, a grouping of similar species with the following shared characteristics: flesh that is marked with concentric lines that form alternating pale and darker zones (zonate); an extremely peppery taste; a sweetish odor; spores that are ellipsoid, and not amyloid (that is, not absorbing iodine when stained with Melzer's reagent), acyanophilous (not staining with the reagent Cotton Blue), and covered with tubercules; the presence of clamp connections in the hyphae. Etymology The specific epithet honors mycologist Charles Horton Peck. The fungus is known in the vernacular by several names, including "strawberries and cream", the "bleeding Hydnellum", the "red-juice tooth", "Peck's hydnum", the "bleeding tooth fungus", and the "devil's tooth". Description As in all mushroom-producing fungi, the fruit bodies (sporocarps) are the reproductive structures that are produced from fungal mycelium when the appropriate environmental conditions of temperature, humidity and nutrient availability are met. Hydnellum peckii is a stipitate hydnoid fungus, meaning that it has a cap atop a stipe (stem), and a form resembling a Hydnum—characterized by a teeth-like hymenium, rather than gills or pores on the underside of the cap. Fruit bodies growing closely together often appear to fuse together (this is called "confluence"). They can reach a height of up to . Fresh fruit bodies exude a striking, thick red fluid when they are moist, present even in young specimens, which are lumplike in appearance. The cap's surface is convex to flattened, more or less uneven and sometimes slightly depressed in the center. It is usually densely covered with "hairs" that give it a texture similar to felt or velvet; these hairs are sloughed off in age, leaving the caps of mature specimens smooth. Its shape varies from somewhat round to irregular, , or even as much as wide as a result of confluence. 
The cap is initially whitish, but later turns slightly brownish, with irregular dark-brown to nearly black blotches where it is bruised. In maturity, the surface is fibrous and tough, scaly and jagged, grayish brown in the upper part of the cap, and somewhat woody. The flesh is a pale pinkish brown. The spore print is brown. The spines are slender, cylindrical and tapering (terete), less than long, and become shorter closer to the cap edge. They are crowded together, with typically between three and five teeth per square millimeter. Pinkish white initially, they age to a grayish brown. The stem is thick, very short, and often deformed. It becomes bulbous where it penetrates the ground, and may root into the soil for several centimeters. Although it may reach up to in total length, and is wide, only about appear above ground. The upper part is covered with the same teeth found on the underside of the cap, whereas the lower part is hairy and often encases debris from the forest floor. The odor of the fruit body has been described as "mild to disagreeable", or, as Banker suggested in his original description, similar to hickory nuts. Microscopic features In deposit, the spores appear brown. Viewing them with a light microscope reveals finer details of their structure: they are roughly spherical but end abruptly in a small point, their surfaces are covered with small, wart-like nodules, and their size is between 5.0–5.3 by 4.0–4.7 μm. The spores are inamyloid, meaning they do not absorb iodine when stained with Melzer's reagent. Hydnellum peckii's cells (the hyphae) also present various characters useful for its characterization. The hyphae that form the cap are hyaline (translucent), smooth, thin-walled, and 3–4 μm thick. They collapse when dry, but may be readily revived with a weak (2%) solution of potassium hydroxide. Those in the cap form an intricate tangle with a tendency to run longitudinally. They are divided into cellular compartments (septa) and have clamp connections—short branches connecting one cell to the previous cell to allow passage of the products of nuclear division. The basidia, the spore-bearing cells in the hymenium, are club-shaped, four-spored, and measure 35–40 by 4.7–6 μm. Similar species Hydnellum diabolus (the species epithet is given the neuter diabolum in some publications) has a very similar appearance, so much so that some consider it and H. peckii to be synonymous; H. diabolus is said to have a sweetish pungent odor that is lacking in H. peckii. The differences between the two species are amplified in mature specimens: H. diabolus has an irregularly thickened stem, while the stem of H. peckii is thickened by a "definite spongy layer". Additionally, old specimens of H. peckii have a smooth cap, while H. diabolus is tomentose. The related species H. pineticola also exudes pink droplets of liquid when young and moist. Commonly found growing under conifers in northeastern North America, H. pineticola tastes "unpleasant", but not acrid. Fruit bodies tend to grow singly, rather than in fused clusters, and, unlike H. peckii, they do not have bulbous stems. Molecular analysis based on the sequences of the internal transcribed spacer DNA of several Hydnellum species placed H. peckii as most closely related to H. ferrugineum and H. spongiosipes. Abortiporus biennis has no teeth but produces red droplets from exposed pores. 
Ecology Hydnellum peckii is a mycorrhizal fungus, and as such establishes a mutualistic relationship with the roots of certain trees (referred to as "hosts"), in which the fungus exchanges minerals and amino acids extracted from the soil for fixed carbon from the host. The subterranean hyphae of the fungus grow a sheath of tissue around the rootlets of a broad range of tree species, in an intimate association that is especially beneficial to the host (termed ectomycorrhizal), as the fungus produces enzymes that mineralize organic compounds and facilitate the transfer of nutrients to the tree. The ectomycorrhizal structures of H. peckii are among a few in the Bankeraceae that have been studied in detail. They are characterized by a plectenchymatous mantle—a layer of tissue made of hyphae tightly arranged in a parallel orientation, or palisade, and which rarely branch or overlap each other. These hyphae, along with adhering mineral soil particles, are embedded in a gelatinous matrix. The hyphae of the ectomycorrhizae can become chlamydospores, an adaptation that helps the fungus tolerate unfavorable conditions. Chlamydospores of H. peckii have a peculiar structure—markedly distinct from those of other Bankeraceae—with thick, smooth inner walls and an outer wall that is split radially into warts. The most striking characteristic of the ectomycorrhizae as a whole is the way the black outer layers of older sections are shed, giving a "carbonized appearance". The majority of the underground biomass of the fungus is concentrated near the surface, most likely as "mycelial mats"—dense clusters of ectomycorrhizae and mycelium. The mycelium is also known to extend far beyond the site of the fruit bodies, as far as away. Molecular techniques have been developed to help with conservation efforts of stipitate hydnoid fungi, including H. peckii. While the distribution of the fungus has traditionally been determined by counting the fruit bodies, this method has a major drawback in that fruit bodies are not produced consistently every year, and the absence of fruit bodies is not an indication of the absence of its mycelium in the soil. More modern techniques using the polymerase chain reaction to assess the presence of the fungal DNA in the soil have helped alleviate the issues in monitoring the presence and distribution of fungal mycelia. Habitat and distribution The fruit bodies of Hydnellum peckii are found growing solitary, scattered, or clustered together on the ground under conifers, often among mosses and pine needle litter. H. peckii is a "late-stage" fungus that, in boreal forests dominated by jack pine, typically begins associating with more mature hosts once the canopy has closed. A preference for mountainous or subalpine ecosystems has been noted. The fungus has a wide distribution in North America, and is particularly common in the Pacific Northwest; its range extends north to Alaska and east to North Carolina. In the Puget Sound area of the U.S. state of Washington, it is found in association with Douglas-fir, fir, and hemlock. Along the Oregon Coast it has been collected under lodgepole pine. In addition to North America, the mushroom is widespread in Europe, and its presence has been documented in Italy, Germany, and Scotland. The species is common in the latter location, but becoming increasingly rare in several European countries, such as Norway, The Netherlands, and the Czech Republic. 
Increased pollution in central Europe has been suggested as one possible factor in the mushroom's decline there. Reports from Iran in 2008 and Korea in 2010 were the first outside Europe and North America. Uses Fruit bodies of H. peckii have been described as resembling "Danish pastry topped with strawberry jam". Hydnellum species are not known to be poisonous, but they are not particularly edible due to their foul taste. This acrid taste persists even in dried specimens. The fruit bodies of this and other Hydnellum species are prized by mushroom dyers. The colors may range from beige when no mordant is used, to various shades of blue or green depending on the mordant added. Chemistry Screening of an extract of Hydnellum peckii revealed the presence of an effective anticoagulant, named atromentin (2,5-dihydroxy-3,6-bis(4-hydroxyphenyl)-1,4-benzoquinone), and similar in biological activity to the well-known anticoagulant heparin. Atromentin also possesses antibacterial activity, inhibiting the enzyme enoyl-acyl carrier protein reductase (essential for the biosynthesis of fatty acids) in the bacteria Streptococcus pneumoniae. Hydnellum peckii can bioaccumulate the metal caesium. In one Swedish field study, as much as 9% of the total caesium of the topmost of soil was found in the fungal mycelium. In general, ectomycorrhizal fungi, which grow most prolifically in the upper organic horizons of the soil or at the interface between the organic and mineral layers, are involved in the retention and cycling of caesium-137 in organic-rich forest soils. References External links Inedible fungi peckii Fungi described in 1912 Fungi of North America Fungi of Europe Fungi of Asia Fungi of Western Asia Fungus species Fungi used for fiber dyes
Hydnellum peckii
Biology
2,766
68,476,662
https://en.wikipedia.org/wiki/Tylopilus%20glutinosus
Tylopilus glutinosus is a species of fungus in the family Boletaceae. It is the first species of the genus to be reported from Bangladesh. This species is putatively associated with Shorea robusta. External links References glutinosus Fungi described in 2021 Fungi of Bangladesh Fungus species
Tylopilus glutinosus
Biology
59
42,033,522
https://en.wikipedia.org/wiki/Yuri%20Pavlovich%20Shvets
Yuri Pavlovich Shvets (1902, Poltava – 1972) was a Soviet cinematic artist, famous for his art and scenery, especially for fantasy and science-fiction films. As a youth, Shvets held several jobs, then studied at and graduated from both the Music and Drama Institute of Mykola Lysenko and the Arts Institute in Kiev. The scientific accuracy of Shvets' work was praised by the Russian rocket scientist Konstantin Tsiolkovsky when the two met in 1934. Shvets did artwork for over fifty films, including: (1935) Kosmicheskiy reys (Cosmic Journey or Space flight) (1935) Novii Gulliver (The New Gulliver) (1959) Nebo Zovyot (The Sky Calls) (1951) Вселенная (Universe) (1965) Luna (Moon) (1968) Марс (Mars) References 1902 births 1972 deaths Artists from Poltava Soviet production designers Science fiction artists Ukrainian speculative fiction artists Space artists 20th-century Ukrainian painters 20th-century Ukrainian male artists Ukrainian male painters Mass media people from Poltava
Yuri Pavlovich Shvets
Astronomy
237
4,521,890
https://en.wikipedia.org/wiki/Control%20loop
A control loop is the fundamental building block of control systems in general and industrial control systems in particular. It consists of the process sensor, the controller function, and the final control element (FCE) which controls the process necessary to automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP). There are two common classes of control loop: open loop and closed loop. In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building. In a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. In the case of the boiler analogy, this would utilize a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat. A closed-loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers. Open-loop and closed-loop Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback). In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler, but the controlled variable should be the building temperature, but is not because this is open-loop control of the boiler, which does not give closed-loop control of the temperature. In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers. The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero." Likewise; "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control." Other examples An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. 
The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers. A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position.) As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road. In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop, the controller affects the system output, which in turn is measured and fed back to the controller. Application The accompanying diagram shows a control loop with a single PV input, a control function, and the control output (CO) which modulates the action of the final control element (FCE) to alter the value of the manipulated variable (MV). In this example, a flow control loop is shown, but can be level, temperature, or any one of many process parameters which need to be controlled. The control function shown is an "intermediate type" such as a PID controller which means it can generate a full range of output signals anywhere between 0-100%, rather than just an on/off signal. In this example, the value of the PV is always the same as the MV, as they are in series in the pipeline. However, if the feed from the valve was to a tank, and the controller function was to control the level using the fill valve, the PV would be the tank level, and the MV would be the flow to the tank. The controller function can be a discrete controller or a function block in a computerised control system such as a distributed control system or a programmable logic controller. In all cases, a control loop diagram is a very convenient and useful way of representing the control function and its interaction with plant. In practice at a process control level, control loops are normally abbreviated using standard symbols in a Piping and instrumentation diagram, which shows all elements of the process measurement and control based on a process flow diagram. At a detailed level the control loop connection diagram is created to show the electrical and pneumatic connections. This greatly aids diagnostics and repair, as all the connections for a single control function are on one diagram. 
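To make the open-loop versus closed-loop distinction concrete, the toy simulation below (a sketch, not any particular controller design) drives a crude first-order "car" model either with a locked throttle or with a proportional feedback controller; the plant model, gain, and disturbance values are invented purely for illustration.

```python
def simulate(closed_loop, setpoint=25.0, hill_drag=2.0, steps=200, dt=0.1):
    """Toy cruise-control loop. The 'plant' is a car whose speed relaxes toward
    (throttle - hill_drag). With closed_loop=True a proportional controller
    recomputes the throttle from the error each step; otherwise the throttle
    stays locked at a fixed value (open loop)."""
    speed = 0.0
    throttle = 10.0      # fixed throttle used in the open-loop case
    kp = 4.0             # proportional gain (illustrative)
    for _ in range(steps):
        if closed_loop:
            error = setpoint - speed       # compare PV (speed) with SP
            throttle = kp * error          # controller output (P-only)
        speed += dt * ((throttle - hill_drag) - speed)   # crude first-order plant
    return speed

print("open loop  :", round(simulate(False), 2))  # settles wherever the locked throttle puts it
print("closed loop:", round(simulate(True), 2))   # settles much closer to the 25 setpoint
```

As the run shows, the feedback loop largely compensates for the disturbance (the "hill") that the open-loop controller cannot see, although a proportional-only controller still leaves a steady-state offset; adding integral action, as in a PI or PID controller, removes that residual error.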
Loop and control equipment tagging To aid unique identification of equipment, each loop and its elements are identified by a "tagging" system and each element has a unique tag identification. Based on the standards ANSI/ISA S5.1 and ISO 14617-6, the identifications consist of up to 5 letters. The first identification letter is for the measured value, the second is a modifier, the third indicates the passive/readout function, the fourth the active/output function, and the fifth is the function modifier. This is followed by the loop number, which is unique to that loop. For instance, FIC045 means it is the Flow Indicating Controller in control loop 045. This is also known as the "tag" identifier of the field device, which is normally given to the location and function of the instrument. The same loop may have FT045 - which is the flow transmitter in the same loop. For the reference designation of any equipment in industrial systems, the standard IEC 61346 (Industrial systems, installations and equipment and industrial products — Structuring principles and reference designations) can be applied. References Control engineering Control loop theory
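A minimal sketch of how such a tag can be split into its letter code and loop number is shown below; the handful of letter meanings included are an illustrative subset only (the full letter tables are defined in ANSI/ISA S5.1), and the function is not an implementation of the standard.

```python
import re

# Illustrative subset of tag letters; the complete tables are in ANSI/ISA S5.1.
FIRST_LETTER = {"F": "Flow", "L": "Level", "P": "Pressure", "T": "Temperature"}
SUCCEEDING = {"I": "Indicating", "C": "Controller", "T": "Transmitter"}

def parse_tag(tag):
    """Split a loop tag such as 'FIC045' into measured variable, functions,
    and loop number."""
    m = re.fullmatch(r"([A-Z]{1,5})(\d+)", tag)
    if m is None:
        raise ValueError(f"not a recognisable tag: {tag}")
    letters, loop = m.groups()
    measured = FIRST_LETTER.get(letters[0], "?")
    functions = [SUCCEEDING.get(ch, "?") for ch in letters[1:]]
    return measured, functions, loop

print(parse_tag("FIC045"))   # ('Flow', ['Indicating', 'Controller'], '045')
print(parse_tag("FT045"))    # ('Flow', ['Transmitter'], '045')
```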
Control loop
Engineering
1,608
23,443,340
https://en.wikipedia.org/wiki/Wayside%20shrine
A wayside shrine is a religious image, usually in some sort of small shelter, placed by a road or pathway, sometimes in a settlement or at a crossroads, but often in the middle of an empty stretch of country road, or at the top of a hill or mountain. They have been a feature of many cultures, including Chinese folk religious communities, Catholic and Orthodox Europe and some Asian regions. The origins of wayside shrines Wayside shrines were often erected to honor the memory of the victim of an accident, which explains their prevalence near roads and paths; in Carinthia, for example, they often stand at crossroads. Some commemorate a specific incident near the place; either a death in an accident or escape from harm. Other icons commemorate the victims of the plague. The very grand medieval English Eleanor crosses were erected by the husband of Queen Eleanor of Castile to commemorate the nightly resting places of her body on its journey back to London in the 1290s. Some make it clear by an inscription or notice that a specific dead person is commemorated, but most do not. Wayside shrines were also erected along old pilgrim routes, such as the Via Sacra that leads from Vienna to Mariazell. Some mark parish or other boundaries, such as the edge of a landholding, or have a function as convenient markers for travelers to find their way. Shrines and calvaries are furthermore frequently noted on maps and therefore represent important orientation aids. Europe The pre-Christian cultures of Europe had similar shrines of various types; many runestones may have been in this category, though they are often in the nature of a memorial to a dead person. Few Christian shrines survive in predominantly Protestant countries, but they remain common in many parts of Catholic and Orthodox Europe, often being repaired or replaced as they fall into disrepair, and relocated as roads are moved or widened. The most common subjects are a plain cross or a crucifix, or an image of the Virgin Mary, but saints or other scenes may also be shown. The surviving large stone high crosses of Celtic Christianity, and the related stone Anglo-Saxon crosses (mostly damaged or destroyed after the Protestant Reformation) are sometimes outside churches, but often not, and these may have functioned as preaching crosses, or in some cases just been wayside shrines. The calvaires of Brittany in France are especially large stone shrines showing the Crucifixion, but these are typically in villages. In Greece they may be called kandilakia (Greek: καντηλάκια) or εικονοστάσιο στην άκρη του δρόμου (ikonostásio stin akri tu drómu, literally "shrine at the roadside"). They are commonly built in the memory of a fatal car accident and usually include a photograph of the victim(s), their namesake Saint and sometimes personal items. They may also be built by car accident survivors to thank the saint who protected them. Poland is one of the few European countries where the custom of singing Maytime hymns, majówki, at wayside shrines still survives. Asia Wayside shrines exist throughout India alongside other features of public faith, including lingams, ghats, and kunds. This creates what is described by Vinayak Bharne as a faithscape, a human landscape defined by the role of religion in the public sphere. The majority of these shrines are Hindu, and their public nature and rootedness to place leads them to be described as key expressions of working-class religiosity. 
Wayside shrines provide meeting points for the micro-communities who partake in religious practice as well as maintenance of the objects. Their presence within an increasingly urban environment creates a "parallel urbanism" which refutes secular notions of religion ebbing away as a society becomes more developed. Types of shrines Wayside shrines are found in a variety of styles, ranging from simpler column shrines and Schöpflöffel shrines to more elaborate chapel-shrines. Some have only flat painted surfaces, while other shrines are decorated with reliefs or with religious statues. Some feature a small kneeling platform, so that the faithful may pray in front of the image. A common wayside shrine seen throughout the Alpine regions of Europe, especially Germany, Austria and northern Italy, is the Alpine style crucifix wayside shrine. This style often has elaborate wood carvings and usually consists of a crucifix surrounded by a roof and shelter. Column shrines A column shrine (also called Marterl, Helgenstöckli, or Wegstock) normally resembles a pole or a pillar, made either of wood or of masonry, and is sometimes capped with a roof. The Austrian/south German designation Marterl hearkens back to the Greek martyros 'martyr'. In a setting resembling a tabernacle, there is usually a picture or a figure of Christ or a saint. For this reason, flowers or prayer candles are often placed on or at the foot of the shrine. In Germany, they are most common in Franconia, in the Catholic parts of Baden, Swabia, in the Alpine regions and Catholic areas of the historical region of Eichsfeld and in Upper Lusatia. In Austria, they are to be found in the Alpine regions, as well as in great numbers in the Weinviertel, the Mühlviertel and in the Waldviertel. There are also similar structures in the South Bohemian Region and the South Moravian Region. In Czech, column shrines are traditionally called "boží muka" (= divine sufferings). Schöpflöffel shrines In the Eifel in particular, shrines that consist of a pillar with a niche for a depiction of a saint are known as Schöpflöffel (German for 'ladle' or 'serving spoon'). Some of these icons date from the Late Middle Ages, but for the most part were put up in the 16th century. Near Arnstadt in Thuringia, there is a medieval shrine that is over two metres tall and that has two niches. According to a legend recorded by Ludwig Bechstein, this shrine was once a giant's spoon, and it is therefore known as the Riesenlöffel. Chapel-shrines Chapel-shrines, built to resemble a small building, are common in Slovenia. They are generally too small to accommodate people and often have only a niche (occasionally, a small altar) to display a depiction of a saint. The main two varieties generally distinguished in Slovenia are the open chapel-shrine, which has no doors, and the closed chapel-shrine, which has a door. The closed chapel-shrine is the older form, with examples known from the 17th century onward. The earliest open chapel-shrines date from the 19th century. Also known in Slovenia are the belfry chapel-shrine and the polygonal chapel-shrine. Chapel-shrines, known as kapliczka, are also often found in Poland. In the Czech Republic, chapel-shrines are called výklenková kaple 'niche chapels' and are characterized as a type of chapel (kaple) in Czech. In Moravia, they are also called poklona 'bow, tribute'. 
See also Bathtub Madonna Burmese pagoda Icon corner Minerva's Shrine, Chester Roadside memorial Roman temple of Alcántara Shigandang References External links Roadside "proskynetaria" by the German photographer Wilfried Jakisch Wayside shrine Catholic sculpture Catholic architecture Christian symbols Architectural elements Greek Orthodoxy
Wayside shrine
Technology,Engineering
1,566
65,387,845
https://en.wikipedia.org/wiki/C6H4N2S
The molecular formula C6H4N2S may refer to: 1,2,3-Benzothiadiazole, a benzene ring that is fused to a 1,2,3-thiadiazole 2,1,3-Benzothiadiazole, a benzene ring that is fused to a 1,2,5-thiadiazole
C6H4N2S
Chemistry
97
92,847
https://en.wikipedia.org/wiki/Cherokee%20spiritual%20beliefs
Cherokee spiritual beliefs are held in common among the Cherokee people – Native American peoples who are Indigenous to the Southeastern Woodlands, and today live primarily in communities in North Carolina (the Eastern Band of Cherokee Indians), and Oklahoma (the Cherokee Nation and United Keetoowah Band of Cherokee Indians). Some of the beliefs, and the stories and songs in which they have been preserved, exist in slightly different forms in the different communities in which they have been preserved. But for the most part, they still form a unified system of theology. Principal beliefs To the traditional Cherokee, spirituality is woven into the fabric of everyday life. The physical world is not separated from the spiritual world. They are one and the same. In her book Cherokee Women: Gender and Culture Change, 1700–1835, historian Theda Perdue wrote of the Cherokee's historical beliefs:"The Cherokee did not separate spiritual and physical realms but regarded them as one, and they practiced their religion in a host of private daily observances as well as in public ceremonies." Cherokee cosmology traditionally includes a conception of the universe being composed of three distinct but connected worlds: the Upper World and the Under World, which are the domains of the spirits, and This World, where humans live. Unlike some other religions, in the Cherokee belief system, humans do not rule or have dominion over the earth, plants or animals. Instead, humans live in coexistence with all of creation. Humans mediate between all worlds in an attempt to maintain balance between them. Plants, animals, and other features of the natural world such as rivers, mountains, caves and other formations on the earth all have spiritual powers and attributes. Theda Perdue and Michael Green write in their book The Columbia Guide to American Indians of the Southeast,"These features served as mnemonic devices to remind them of the beginning of the world, the spiritual forces that inhabited it, and their responsibilities to it." Perdue also outlines the ways that Cherokee culture persisted through multiple attempts by Christian missionaries to convert them. Their strong ties to Selu, the corn mother in their creation story, put women in a position of power in their communities as harvesters of corn, a role they did not give up easily. Sacred fire Fire is important in traditional Cherokee beliefs, as well as in other Indigenous cultures of the Southeastern United States. In his book Where the Lightning Strikes: The Lives of American Indian Sacred Places, anthropologist Peter Nabokov writes: "Fire was the medium of transformation, turning offerings into gifts for spiritual intercessors for the four quarters of the earth." From The Cherokee People by T.Mails, the sacred fire was a special gift to the Cherokee people and in their dance around that blessed fire they would become united as one mind. Balance To the traditional Cherokee, the concept of balance is central in all aspects of social and ceremonial life. In Cherokee Women: Gender and Culture Change, 1700–1835, Theda Perdue writes: "In this belief system, women balanced men just as summer balanced winter, plants balanced animals, and farming balanced hunting." Sickness and healing Author John Reid, in his book titled A Law of Blood: The Primitive Law of the Cherokee Nation, writes: "All human diseases were imposed by animals in revenge for killing and each species had invented a disease with which to plague man." 
According to Reid, some believed animal spirits who had been treated badly could retaliate by sending bad dreams to the hunter. These would cause the hunter to lose their appetite, become sick and die. To prevent this from happening, the hunter must follow traditional protocols when hunting, to honor the animal and spiritual world and continually maintain balance. Purity and sacred places Ritual purification is traditionally important for ceremonial and ongoing spiritual balance. Bathing in rivers, year-round, is one traditional method, even in the winter when ice is on the river. Anthropologist Peter Nabokov writes of a river known as "Long Man": "For the Cherokee who bathed in his body, who drank from him and invoked his curative powers, the Long Man always helped them out." He went on to say: "At every critical turn in a man's life, the river's blessings were imparted through the 'going to the water' rite, which required prayers that were lent spiritual force with 'new water' from free-flowing streams." Creation beliefs The first people were a brother and sister. Once, the brother hit his sister with a fish and told her to multiply. Following this, she gave birth to a child every seven days and soon there were too many people, so women were forced to have just one child every year. The Story of Corn and Medicine The Story of Corn and Medicine begins with the creation of the earth and animals. Earth was created out of mud that grew into land. Animals began exploring the earth, and it was the Buzzard that created valleys and mountains in the Cherokee land by the flapping of his wings. After some time, the earth became habitable for the animals, once the mud of the earth had dried and the sun had been raised up for light. According to the Cherokee medicine ceremony, the animals and plants had to stay awake for seven nights. The reasons for this are not well known. Only the owl, panther, bat, and unnamed others were able to fulfill the requirements of the ceremony, so these animals were given the gift of night vision, which allowed them to hunt easily at night. Similarly, the only trees able to remain awake for the seven days were the cedar, pine, spruce, holly, laurel, and oak. These trees were given the gift of staying green year-round. The first woman argued with the first man and left their home. The first man, helped by the sun, tried tempting her with blueberries and blackberries to return, but was unsuccessful. He finally persuaded her to return by giving her strawberries. Humans began to hunt animals and quickly grew in numbers. The population grew so rapidly that a rule was established that women could have only one child per year. Two early humans, a man and his wife, were Kanáti and Selu. Their names meant "The Lucky Hunter" and "Corn", respectively. Kanáti would hunt and bring an animal home for Selu to prepare. Kanáti and Selu had a child, and their child befriended another boy who had been created out of the blood of the slaughtered animals. The family treated this boy like one of their own, except they called him "The Wild Boy". Kanáti consistently brought animals home when he went hunting, and one day, the boys decided to secretly follow him. They discovered that Kanáti would move a rock concealing a cave, and an animal would come out of the cave only to be killed by Kanáti. The boys secretly returned to the rock and opened the entrance to the cave. The boys did not realize that when the cave was opened, many different animals escaped. 
Kanáti saw the animals and realized what must have happened. He journeyed to the cave and sent the boys home so he could try to catch some of the escaped animals for eating. This explains why people must hunt for food now. The boys returned to Selu, who went to get food from the storehouse. She instructed the boys to wait behind while she was gone, but they disobeyed and followed her. They discovered Selu's secret, which was that she would rub her stomach to fill baskets with corn, and she would rub her sides to fill baskets with beans. Selu knew her secret was out and made the boys one last meal. She and Kanáti then explained to the boys that the two of them would die because their secrets had been discovered. Along with Kanáti and Selu dying, the easy life the boys had become accustomed to would also die. However, if the boys dragged Selu's body seven times outside a circle, and then seven times over the soil within the circle, a crop of corn would appear in the morning if the boys stayed up all night to watch. The boys did not fulfill the instructions completely, which is why corn can only grow in certain places around the earth. Today, corn is still grown, but it does not come overnight. During the early times, the plants, animals, and people all lived together as friends, but the dramatic population growth of humans crowded the earth, leaving the animals with no room to roam. Humans also would kill the animals for meat or trample them for being in the way. As a punishment for these horrendous acts, the animals created diseases to infect the humans. Like other creatures, the plants decided to meet, and they came to the conclusion that the animals' actions were too harsh and that they would provide a cure for every disease. This explains why all kinds of plant life help to cure many varieties of diseases. Medicine was created in order to counteract the animals' punishments. The Thunder beings The Cherokee believe that there is the Great Thunder and his sons, the two Thunder Boys, who live in the land of the west above the sky vault. They dress in lightning and rainbows. The priests pray to the thunder and he visits the people to bring rain and blessings from the South. It was believed that the thunder beings who lived close to the Earth's surface in the cliffs, mountains, and waterfalls could harm the people, and at times they did. These other thunders are always plotting mischief. Medicine and disease It is said that all plants, animals, beasts and people once lived in harmony with no separation between them. At this time, the animals were bigger and stronger, until the humans became more powerful. When the human population increased, so did the weapons, and the animals no longer felt safe. The animals decided to hold a meeting to discuss what should be done to protect themselves. The Bears met first and decided that they would make their own weapons like the humans, but this only led to further chaos. Next, the Deer gathered to discuss their plan of action, and they came to the conclusion that if a hunter were to kill a Deer, the hunter would develop a disease. The only way to avoid this disease was to ask the Deer's spirit for forgiveness. Another requirement was that the people only kill when necessary. The council of Birds, Insects and small animals met next, and they decided that humans were too cruel, therefore they concocted many diseases with which to infect them. 
The plants heard what the animals were planning and since they were always friendly with the humans, they vowed that for every disease made by the animals, they would create a cure. Every plant serves a purpose and the only way to find the purpose is to discover it for yourself. When a medicine man does not know what medicine to use, the spirits of the plants instruct him. Origins of fire Fire is a very important tool in everyday use. The first written account of the Cherokee fire origin story was recorded by the Westerner James Mooney. This appears to be when the spider heroine was first named "Water Spider." However, the Cherokee storyteller made sure to also describe the spider: "This is not the water spider that looks like a mosquito, but the other one, with black downy hair and red stripes on her body." Modern Cherokee language forums agree the character's actual name is ᏗᎵᏍᏙᏗ "dilsdohdi" or a derivation of that word, which means scissors or a scissoring action, referring to the motion this stocky spider uses to move across water. Phidippus johnsoni, the red-backed jumping spider, is most likely the actual spider that inspired the character in this Cherokee legend, as it is endemic to the original Cherokee homelands and has the body features and colors described in the legends as well as in the ancient bone etchings of the character. Unetlanvhi The Cherokee revere the Great Spirit Unetlanvhi (ᎤᏁᏝᏅᎯ "Creator"), who presides over all things and created the Earth. The Unetlanvhi is omnipotent, omnipresent, and omniscient, and is said to have made the earth to provide for its children, and should be of equal power to Dâyuni'sï, the Water Beetle. The Wahnenauhi Manuscript adds that God is Unahlahnauhi (ᎤᏀᎳᎿᎤᎯ "Maker of All Things") and Kalvlvtiahi (ᎧᎸᎸᏘᎠᎯ "The One Who Lives Above"). In most oral and written Cherokee theology the Great Spirit is not personified as having human characteristics or a physical human form. Other venerated spirits Uktena (ᎤᎧᏖᎾ): A horned serpent Tlanuwa (ᏝᏄᏩ): A giant raptor Signs, visions, and dreams The Cherokee traditionally hold that signs, visions, dreams, and powers are all gifts of the spirits, and that the world of humans and the world of the spirits are intertwined, with the spirit world presiding over both. Spiritual beings can come in the form of animal or human and are considered a part of daily life. One group of spiritual beings is spoken of as the Little People, and they can only be seen by humans when they want to be seen. It is said that they choose who they present themselves to and appear as any other Cherokee would, except that they are small with very long hair. The Little People can be helpful but one should be cautious while interacting with them because they can be very deceptive. It is not common to talk about an experience one has with the Little People. Instead, one might relay an incident that happened to someone else. It is said that if you bother the Little People too often you will become confused in your day-to-day life. Although they possess healing powers and helpful hints, the Little People are not to be disturbed. Evil Traditionally there is no universal evil spirit in Cherokee theology. An Asgina (ᎠᏍᎩᎾ) is any sort of spirit, but it is usually considered to be a malevolent one. Kalona Ayeliski (ᎪᎳᏅ ᎠᏰᎵᏍᎩ "Raven Mockers") are spirits who prey on the souls of the dying and torment their victims until they die, after which they eat the hearts of their victims. 
Kalona Ayeliski are invisible, except to a medicine man, and the only way to protect a potential victim is to have a medicine man who knows how to drive Kalona Ayeliski off, since they are scared of him. U'tlun'ta (ᎤᏢᏔ "Spearfinger") is a monster and witch said to live along the eastern side of Tennessee and western part of North Carolina. She has a sharp forefinger on her right hand, which resembles a spear or obsidian knife, which she uses to cut her victims. Her mouth is stained with blood from the livers she has eaten. She is also known as Nûñ'yunu'ï, which means "Stone-dress", for her stone-like skin. Uya (ᎤᏯ), sometimes called Uyaga (ᎤᏯᎦ), is an evil earth spirit which is invariably opposed to the forces of right and light. References Jack Frederick Kilpatrick. The Wahnenauhi Manuscript: Historical Sketches of the Cherokee. Washington: Government Printing Office, 1966. Jack Frederick Kilpatrick, Anna Gritts Kilpatrick. Notebook of a Cherokee Shaman. Washington: Smithsonian Institution Press, 1970. External links Myths of the Cherokees, by James Mooney (1888), The Journal of American Folklore 1 (2): 97–108. Creation myths
Cherokee spiritual beliefs
Astronomy
3,165
7,013,253
https://en.wikipedia.org/wiki/Ajmaline
Ajmaline (also known by trade names Gilurytmal, Ritmos, and Aritmina) is an alkaloid that is classified as a 1-A antiarrhythmic agent. It is often used to induce arrhythmic contraction in patients suspected of having Brugada syndrome. Individuals suffering from Brugada syndrome will be more susceptible to the arrhythmogenic effects of the drug, and this can be observed on an electrocardiogram as an ST elevation. The compound was first isolated by Salimuzzaman Siddiqui in 1931 from the roots of Rauvolfia serpentina. He named it ajmaline, after Hakim Ajmal Khan, one of the most illustrious practitioners of Unani medicine in South Asia. Ajmaline can be found in most species of the genus Rauvolfia as well as Catharanthus roseus. In addition to Southeast Asia, Rauvolfia species have also been found in tropical regions of India, Africa, South America, and some oceanic islands. Other indole alkaloids found in Rauvolfia include reserpine, ajmalicine, serpentine, corynanthine, and yohimbine. While 86 alkaloids have been discovered throughout Rauvolfia vomitoria, ajmaline is mainly isolated from the stem bark and roots of the plant. Due to the low bioavailability of ajmaline, a semisynthetic propyl derivative called prajmaline (trade name Neo-gilurythmal) was developed that induces similar effects to its predecessor but has better bioavailability and absorption. Biosynthesis Ajmaline is widely dispersed among 25 plant genera, but is of significant concentration in the Apocynaceae family. Ajmaline is a monoterpenoid indole alkaloid, composed of an indole derived from tryptophan and a terpenoid derived from the iridoid glucoside secologanin. Secologanin is introduced from the triose phosphate/pyruvate pathway. Tryptophan decarboxylase (TDC) converts tryptophan into tryptamine. Strictosidine synthase (STR) uses a Pictet–Spengler reaction to form strictosidine from tryptamine and secologanin. Strictosidine is oxidized by P450-dependent sarpagan bridge enzymes (SBE) to make polyneuridine aldehyde. Of the sarpagan-type alkaloids, polyneuridine is a key entry into the ajmalan-type alkaloids. Polyneuridine aldehyde is converted by polyneuridine aldehyde esterase (PNAE) to 16-epi-vellosimine, which is acetylated to vinorine by vinorine synthase (VS). Vinorine is oxidized by vinorine hydroxylase (VH) to make vomilenine. Vomilenine reductase (VR) reduces vomilenine to 1,2-dihydrovomilenine, using the cofactor NADPH. 1,2-Dihydrovomilenine is reduced by 1,2-dihydrovomilenine reductase (DHVR) to 17-O-acetylnorajmaline, with the same cofactor as VR: NADPH. 17-O-Acetylnorajmaline is deacetylated by acetylajmalan esterase (AAE) to form norajmaline. Finally, norajmaline methyl transferase (NAMT) methylates norajmaline, yielding the final compound, ajmaline. Mechanism of action Ajmaline was first discovered to lengthen the refractory period of the heart by blocking sodium ion channels, but it has also been noted that it is able to interfere with the hERG (human Ether-a-go-go-Related Gene) potassium ion channel. In both cases, ajmaline causes the action potential to become longer and ultimately leads to bradycardia. When ajmaline reversibly blocks hERG, repolarization occurs more slowly because it is harder for potassium to leave the cell when fewer channels are unblocked, therefore making the QT interval longer. Ajmaline also prolongs the QRS interval, since it can also act as a sodium channel blocker, therefore making it take longer for the membrane to depolarize in the first place. 
In both cases, ajmaline causes the action potential to become longer. Slower depolarization or repolarization results in a lengthened QT interval (the refractory period), and therefore it takes more time for the membrane potential to fall below the threshold level so that the action potential can be re-fired. Even if another stimulus is present, an action potential cannot occur again until after complete repolarization. Ajmaline causes action potentials to be prolonged, therefore slowing the firing of the conducting myocytes, which ultimately slows the beating of the heart. Diagnosis of Brugada syndrome Brugada syndrome is a genetic disease that can result from mutations in the sodium ion channel gene (SCN5A) of the myocytes in the heart. Brugada syndrome can result in ventricular fibrillation and potentially death. It is a major cause of sudden unexpected cardiac death in young, otherwise healthy people. While the characteristic patterns of Brugada syndrome on an electrocardiogram may be seen regularly, often the abnormal pattern is only seen spontaneously due to unknown triggers or after challenge with particular drugs. Ajmaline is used intravenously to test for Brugada syndrome since both the drug and the syndrome affect the sodium ion channel. In an affected person given ajmaline, the electrocardiogram would show the characteristic pattern of the syndrome, in which the ST segment is abnormally elevated above the baseline. Due to complications that could arise with the ajmaline challenge, a specialized doctor should perform the administration in a specialized center capable of extracorporeal membrane oxygenator support. See also Salimuzzaman Siddiqui (1897–1994) Pakistani organic chemist Hellmuth Kleinsorge (1920–2001) German medical doctor References Alkaloids found in Rauvolfia Antiarrhythmic agents Diagnostic cardiology HERG blocker Quinolizidine alkaloids Secondary alcohols Sodium channel blockers Tryptamine alkaloids Unani medicine
Ajmaline
Chemistry
1,372
62,170,079
https://en.wikipedia.org/wiki/Bzigo
Bzigo is a technology startup company that develops autonomous devices for pest control. The company was founded by Nadav Benedek and Saar Wilf, who are both alumni of the Israel Defense Forces' intelligence Unit 8200. Technology The Bzigo device scans a room for mosquitoes using specialized optics and computer vision algorithms to identify flight patterns. Once it detects that a mosquito has landed, the device marks its location with a pointer and sends a message to a phone application, allowing the recipient to locate the pest and kill it. References External links Insect control Consumer electronics brands Smart devices Indoor positioning system American companies established in 2016 Electronics companies established in 2016
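The workflow described above (scan the room, detect a landing, mark the spot, notify the user) can be pictured as a simple tracking loop. The sketch below is purely illustrative and is not based on Bzigo's actual software; the camera, vision, pointer, and notification functions are hypothetical callables supplied by the caller.

```python
# Illustrative sketch of a mosquito detection-and-notification loop.
# Not Bzigo's implementation: capture_frame, find_moving_specks,
# aim_pointer_at, and notify_app are hypothetical callables passed in by the caller.

import time

def track_and_report(capture_frame, find_moving_specks, aim_pointer_at, notify_app,
                     min_still_seconds=2.0):
    """Follow small moving specks; when one stays put long enough, report a landing."""
    last_positions = {}   # track_id -> (x, y) from the previous frame
    still_since = {}      # track_id -> time when the speck stopped moving

    while True:
        frame = capture_frame()                           # grab an image of the room
        for track_id, (x, y) in find_moving_specks(frame).items():
            prev = last_positions.get(track_id)
            if prev is not None and abs(x - prev[0]) < 2 and abs(y - prev[1]) < 2:
                # The speck has barely moved: it may have landed.
                still_since.setdefault(track_id, time.time())
                if time.time() - still_since[track_id] >= min_still_seconds:
                    aim_pointer_at(x, y)                  # mark the landing spot
                    notify_app(f"Mosquito landed near ({x}, {y})")
                    still_since.pop(track_id)
            else:
                still_since.pop(track_id, None)           # still flying
            last_positions[track_id] = (x, y)
```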
Bzigo
Technology
134
3,072,495
https://en.wikipedia.org/wiki/Dust-Off
Dust-Off is a brand of dust cleaner (refrigerant-based propellant cleaner, which is not compressed air but is often incorrectly called "canned air"). The product usually contains difluoroethane, although some versions use tetrafluoroethane or tetrafluoropropene as the propellant. It is used to blow particles and dust from computers, keyboards, photography equipment, and electronics, as well as many everyday household items including windows, blinds, and collectibles. Dust-Off is manufactured by Falcon Safety Products located in Branchburg, NJ. History Dust-Off was developed and introduced in 1970 by an employee at Falcon Safety Products who discovered that the pressurized blasts used to sound the alarm in the company's signal horns could also remove dust from photography equipment and film without having to touch the surface. The Dust-Off compressed gas duster was first introduced to the photography market in 1970, and was marketed as a tool to blow foreign matter from photographic equipment and negatives that would not damage photographic prints during development. Due to the rise of personal computer use in the 1980s, Falcon developed Dust-Off II as a cleaning device to help remove damaging dust and lint from the new technology, including screens, keyboards, CPUs, and fans. Recently, the Dust-Off brand has expanded to encompass a line of cleaners for electronic and home office equipment, with a large number of products dedicated to cleaning smartphones, tablets, PDAs, HD monitors, and TV screens. Products in the Dust-Off line include screen sprays and microfiber cleaning cloths. Inhalant abuse and efforts at deterrence Difluoroethane is an intoxicant if inhaled, and is highly addictive. Compressed gas duster products gained attention for their abuse as inhalants, as used by teenagers in the movie Thirteen. A warning email was circulated by Sgt. Jeff Williams, a police officer in Cleveland whose son, Kyle, died after inhaling Dust-Off in Painesville Township, Ohio. Wrestler Mike "Mad Dog" Bell died of a heart attack brought on by inhalation of the difluoroethane in Dust-Off. To deter inhalation, Falcon was the first duster manufacturer to add a bitterant to the product, which makes it less palatable to inhale but has not halted abuse. The company has also participated in inhalant abuse awareness campaigns with Sgt. Williams and the Alliance for Consumer Education to educate the public on the dangers of huffing, which includes the abuse of 1,400 different products. These efforts may have contributed to inhalant abuse being on a 10-year downward trend according to some indicators. Nevertheless, 2011 data indicate that 11% of high school students report at least one incident of inhalant abuse. References External links Official Dust-Off Website Fotospeed UK Dust Off Snopes: Adolescents huffing from cans of Dust-Off brand compressed air have died Common Inhalants abused (Internet Archive) National Institute on Drug Abuse National Inhalant Prevention Coalition 1-(800) 269-4237 Cleaning products Computer peripherals
Dust-Off
Chemistry,Technology
650
1,760,159
https://en.wikipedia.org/wiki/Mannich%20base
A Mannich base is a beta-amino-ketone, which is formed in the reaction of an amine, formaldehyde (or another aldehyde) and a carbon acid. The Mannich base is the end product of the Mannich reaction, which is a nucleophilic addition reaction of a non-enolizable aldehyde and any primary or secondary amine to produce a resonance-stabilized imine (iminium ion or imine salt). The addition of a carbanion from a CH-acidic compound (any enolizable carbonyl compound, amide, carbamate, hydantoin or urea) to the imine gives the Mannich base. Reactivity With primary or secondary amines, Mannich bases react with additional aldehyde and carbon acid to form the larger adducts HN(CH2CH2COR)2 and N(CH2CH2COR)3. With multiple acidic hydrogen atoms on the carbon acid, higher adducts are also possible. Ammonia can be split off in an elimination reaction to form enals and enones. References Amines Ketones
Mannich base
Chemistry
233
18,586,750
https://en.wikipedia.org/wiki/Taksu%20Cheon
Taksu Cheon is a Japanese physicist notable for his work on quantum game theory and the foundations of quantum mechanics. Education He graduated from Kunitachi High School in 1976. He obtained his BSc in 1980, his MSc in 1982, and his PhD, under Akito Arima, in 1985, all from the University of Tokyo. His PhD thesis topic was in the area of theoretical nuclear physics. Career In 1985, he was a Yukawa Research Associate at Osaka University. In 1986, he was a Visiting Assistant Professor at the University of Georgia, and in 1987 he was a research associate at the University of Maryland, College Park. In 1989, he was appointed as an INS Research Associate at the University of Tokyo. Presently he is Professor of Theoretical Physics at the Kochi University of Technology, Japan. See also Quantum Aspects of Life External links Cheon's math genealogy Cheon's homepage Living people Scientists from Tokyo Japanese physicists Quantum physicists Academic staff of Osaka University 1958 births
Taksu Cheon
Physics
194
4,784,672
https://en.wikipedia.org/wiki/Greater%20sciatic%20foramen
The greater sciatic foramen is an opening (foramen) in the posterior human pelvis. It is formed by the sacrotuberous and sacrospinous ligaments. The piriformis muscle passes through the foramen and occupies most of its volume. The greater sciatic foramen is wider in women than in men. Structure It is bounded as follows: anterolaterally by the greater sciatic notch of the ilium. posteromedially by the sacrotuberous ligament. inferiorly by the sacrospinous ligament and the ischial spine. superiorly by the anterior sacroiliac ligament. Function The piriformis, which exits the pelvis through the foramen, occupies most of its volume. The following structures also exit the pelvis through the greater sciatic foramen: the superior and inferior gluteal vessels and nerves, the internal pudendal vessels and pudendal nerve, the sciatic nerve, the posterior femoral cutaneous nerve, and the nerves to the obturator internus and quadratus femoris muscles. See also Lesser sciatic foramen References External links Anatomy Bones of the pelvis
Greater sciatic foramen
Biology
190
14,858,275
https://en.wikipedia.org/wiki/Interposer
An interposer is an electrical interface that routes between one socket or connection and another. The purpose of an interposer is to spread a connection to a wider pitch or to reroute a connection to a different connection. An interposer can be made of either silicon or organic (printed circuit board-like) material. The word interposer comes from the Latin "interpōnere", meaning "to put between". Interposers are often used in BGA packages, multi-chip modules and high bandwidth memory. A common example of an interposer is one that routes an integrated circuit die to a BGA package, such as in the Pentium II. This is done through various substrates, both rigid and flexible, most commonly FR4 for rigid and polyimide for flexible. Silicon and glass are also being evaluated as integration methods. Interposer stacks are also a widely accepted, cost-effective alternative to 3D ICs. There are already several products with interposer technology in the market, notably the AMD Fiji/Fury GPU and the Xilinx Virtex-7 FPGA. In 2016, CEA Leti demonstrated their second generation 3D-NoC technology, which combines small dies ("chiplets"), fabricated at the FDSOI 28 nm node, on a 65 nm CMOS interposer. Another example of an interposer is the adapter used to plug a SATA drive into a SAS backplane with redundant ports. While SAS drives have two ports that can be used to connect to redundant paths or storage controllers, SATA drives only have a single port. Directly, they can only connect to a single controller or path. SATA drives can be connected to nearly all SAS backplanes without adapters, but using an interposer with port-switching logic provides path redundancy. See also Die preparation Integrated circuit Semiconductor fabrication References Integrated circuits
Interposer
Technology,Engineering
386
11,422,247
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z50
In molecular biology, Small nucleolar RNA Z50 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z50 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. snoRNA Z50 was originally cloned from mouse brain tissues. References External links Small nuclear RNA
Small nucleolar RNA Z50
Chemistry
190
55,197,974
https://en.wikipedia.org/wiki/Federated%20Wireless
Federated Wireless is an American-based wireless communications company headquartered in Arlington County, Virginia. The company is "commercializing CBRS spectrum for 4G and 5G wireless systems". Federated was founded in 2012 by Jeffrey H. Reed, Charles Clancy, Robert McGwier and Joseph Mitola, who subsequently co-applied for a number of patents relating to the operation of shared spectrum for wireless networks. The company was created to develop technology to enable the operation of Citizens Broadband Radio Service (CBRS), and is "backed by communications industry stalwarts" such as Charter Communications, American Tower Corporation and Arris. Iyad Tarazi, who had left an executive position at Sprint Corporation in a March 2014 restructuring, joined the company as CEO in September 2014. Federated Wireless, a subsidiary of Allied Minds, provides innovative cloud-based wireless infrastructure solutions to extend the access of carrier networks. In late 2013, Federated Wireless was one of four organizations named as new members by the Wireless Innovation Forum, along with Google, Nordia Soft and the research organization Idaho National Laboratory. The company also supports implementation of a "fully functional [Spectrum Access System] (SAS), capable of managing the proposed three-tier framework" for CBRS spectrum sharing. In August 2016, Federated Wireless, along with Google, Nokia, Intel, Qualcomm and Ruckus Wireless, launched the CBRS Alliance to "foster the ecosystem" around the 3.5 GHz band. Federated Wireless is on the CBRS Alliance board of directors, with director Sarosh Vesuna as the organization's treasurer. The company has worked closely with others in the broadband communications space "to develop standards and equipment to bring the idea to life". In September 2017, Federated Wireless launched its Spectrum Controller. In April 2018, Verizon Communications announced that it was working with companies including Federated "on system testing across the 3.5GHz CBRS spectrum band". In May 2018, Verizon Communications announced the deployment of CBRS in its commercial network in Florida with Federated Wireless, Ericsson and Qualcomm. See also Federal Communications Commission Wireless security Wireless WAN Wireless access point References External links Official Web Page of Federated Wireless Wireless Wireless networking American companies established in 2012 Companies based in Virginia 2012 establishments in Virginia
Federated Wireless
Technology,Engineering
473
11,511,193
https://en.wikipedia.org/wiki/Son%20preference
Son preference is the ancient and cross-cultural human preference for male (rather than female) offspring. Son preference has been demonstrated across all social classes, from "succession laws in royal families to land inheritance in peasant families." Sons are considered both a status symbol and a genetic and economic competitive advantage. Son preference can influence birth rates and thus population growth. Parents will continue having children until they have produced the desired number of sons; there is no equivalent behavior in respect to daughters. Families with sons have been shown to have increased levels of "marital stability and marital satisfaction," and the presence of sons may increase paternal involvement in child-rearing. In the 21st century, son preference has been broadly documented in South and East Asia, but is also observable in Western countries. An example of son preference is demonstrated by the traditions of the Igbo people of Nigeria: "The status of a man is assessed in part by his number of sons. A man with many sons is viewed as a wealthy or an accomplished man." Igbo men that die without fathering sons are seen as having been "unaccomplished or a misfit" and are not given ceremonial second burials. Son preference is culturally mediated and expression of it may change with circumstances. For example, demonstrations of son preference declined in "subsequent generations" of Turkish immigrants to Germany. Additionally, researchers have found that increasing levels of "gender indifference" and decreasing levels of son preference, for example as documented in Taiwan since 1990, can be correlated to maternal educational levels. Son preference in Asian-immigrant households in the United States is higher amongst couples from the same country and higher in mixed-origin marriages where the male partner is the immigrant. Son preference may result in sex selection practices. Birth of daughters can result in gender disappointment in societies that have strong son preference. Daughter preference or son preference is sometimes expressed by higher levels of household investment in offspring of preferred gender. See also Female infanticide Son preference in China Patrilineality Patronymic Male heir Human Y-chromosome DNA haplogroup Human reproductive ecology References Gender Human reproduction Kinship and descent Sociology Patriarchy
Son preference
Biology
434
61,664,297
https://en.wikipedia.org/wiki/Theorem%20of%20absolute%20purity
In algebraic geometry, the theorem of absolute (cohomological) purity is an important theorem in the theory of étale cohomology. It states: given a regular scheme X over some base scheme, a closed immersion i: Z → X of a regular scheme of pure codimension r, an integer n that is invertible on the base scheme, and a locally constant étale sheaf F with finite stalks and values in Z/nZ, then for each integer m ≥ 0, the map H^m(Z; F) → H^{m+2r}_Z(X; F(r)) is bijective, where the map is induced by cup product with the cycle class of Z. The theorem was introduced in SGA 5 Exposé I, § 3.1.4. as an open problem. Later, Thomason proved it for large n and Gabber in general. See also purity (algebraic geometry) References Fujiwara, K.: A proof of the absolute purity conjecture (after Gabber). Algebraic geometry 2000, Azumino (Hotaka), pp. 153–183, Adv. Stud. Pure Math. 36, Math. Soc. Japan, Tokyo, 2002 R. W. Thomason, Absolute cohomological purity, Bull. Soc. Math. France 112 (1984), no. 3, 397–406. MR 794741 Algebraic geometry
Theorem of absolute purity
Mathematics
248
63,771,745
https://en.wikipedia.org/wiki/Duplornaviricota
Duplornaviricota is a phylum of RNA viruses, which contains all double-stranded RNA viruses, except for those in phylum Pisuviricota. Characteristic of the group is a viral capsid composed of 60 homo- or heterodimers of capsid protein on a pseudo-T=2 lattice. Duplornaviruses infect both prokaryotes and eukaryotes. The name of the group derives from Italian duplo which means double (a reference to double-stranded), rna for the type of virus, and -viricota which is the suffix for a virus phylum. Classes The following classes are recognized: Chrymotiviricetes Resentoviricetes Vidaverviricetes References Viruses
Duplornaviricota
Biology
163
29,426,820
https://en.wikipedia.org/wiki/Lantern%20relation
In geometric topology, a branch of mathematics, the lantern relation is a relation that appears between certain Dehn twists in the mapping class group of a surface. The most general version of the relation involves seven Dehn twists. The relation was discovered by Dennis Johnson in 1979. General form The general form of the lantern relation involves seven Dehn twists in the mapping class group of a disk with three holes, as shown in the figure on the right. According to the relation, d_x d_y d_z = d_1 d_2 d_3 d_4, where d_x, d_y, and d_z are the right-handed Dehn twists around the blue curves x, y, and z, and d_1, d_2, d_3, d_4 are the right-handed Dehn twists around the four red curves. Note that the Dehn twists d_1, d_2, d_3, d_4 on the right-hand side all commute (since the curves are disjoint), so the order in which they appear does not matter. However, the cyclic order of the three Dehn twists on the left does matter: d_x d_y d_z = d_y d_z d_x = d_z d_x d_y. Also, note that the equalities written above are actually equalities up to homotopy or isotopy, as is usual in the mapping class group. General surfaces Though we have stated the lantern relation for a disk with three holes, the relation appears in the mapping class group of any surface in which such a disk can be embedded in a nontrivial way. Depending on the setting, some of the Dehn twists appearing in the lantern relation may be homotopic to the identity function, in which case the relation involves fewer than seven Dehn twists. The lantern relation is used in several different presentations for the mapping class groups of surfaces. References External links Sketches of Topology – The Lantern Relation Geometric topology Homeomorphisms
Lantern relation
Mathematics
336
4,551,850
https://en.wikipedia.org/wiki/Subtropical%20front
A subtropical front is a surface water mass boundary or front, which is a narrow zone of transition between air masses of contrasting density, air masses of different temperatures, or different water vapour concentrations. It is also characterized by an abrupt change in wind direction and speed across its surface between water systems, which are distinguished by temperature and salinity. The subtropical front separates the more saline subtropical waters from the fresher sub-Antarctic waters. Subtropical frontal zone A subtropical frontal zone (STFZ) has a large seasonal cycle and is located on the eastern side of ocean basins. It is made up of multiple weak sea surface temperature (SST) fronts, aligned northwest–southeast and spread over a large latitudinal span. On the far eastern side of basins, the subtropical frontal zone becomes narrower and temperature gradients stronger, but still much weaker than across the dynamical subtropical frontal zone. A dynamical frontal zone sits at the southern limit of the saline subtropical waters on the western sides of basins. There are no water mass boundaries or fronts correlated with the sea surface temperature at the subtropical frontal zone, either at the surface or beneath it. The structure of a subtropical frontal zone results in the formation of a positive wind stress curl, the curl of the shear stress exerted by wind on the surface of the water. The areas of most positive wind stress curl are characterized by very weak sea surface temperature gradients, and likely correspond to regions of mode water. Northern subtropical front The Northern subtropical front is found in the Pacific Ocean between 25° and 30° north latitude. North Atlantic subtropical fronts The North Atlantic subtropical fronts show seasonal variability. Front occurrences are highest during early spring in the western region, and front probability is lower in late spring to early summer in the eastern region. The strength of the fronts differs with the seasons, building when the fronts move southward during the winter and spring, and weakening when they move northward during the summer. North Pacific subtropical fronts The North Pacific subtropical fronts are characterized by wind-driven submesoscale subduction. Due to the constant thermohaline circulation at the fronts, cold water flows near the surface and the bottom of the ocean. There are alternating fluxes throughout the year, influenced by jet streams, which cause temperatures in these areas to differ. Southern subtropical front The Southern subtropical front forms where warm, salty subtropical waters meet Antarctic waters, and is found in all three ocean basins. A commonly used criterion is that the salinity at a depth of 100 m drops below 34.9 practical salinity units. South Atlantic subtropical frontal zone A characteristic of the South Atlantic subtropical frontal zone, between 15°W and 5°E, is the transition from subtropical to sub-polar waters. This transition constrains the flow of the South Atlantic Current, which is bounded by a distinct front. See also Ocean current References External links Southern Subtropical Front Physical oceanography
Subtropical front
Physics,Chemistry
579
41,525,316
https://en.wikipedia.org/wiki/Charlotte%27s%20Web%20%28cannabis%29
Charlotte's Web is a brand of high-cannabidiol (CBD), low-tetrahydrocannabinol (THC) products derived from industrial hemp and marketed as dietary supplements and cosmetics under federal law of the United States. It is produced by Charlotte's Web, Inc. in Colorado. Hemp-derived products do not induce the psychoactive "high" typically associated with recreational marijuana strains that are high in THC. Charlotte's Web hemp-derived products contain less than 0.3% THC. Charlotte's Web is named after Charlotte Figi whose story had led to her being described as "the girl who is changing medical marijuana laws across America." Her parents and physicians say she experienced a reduction of her epileptic seizures brought on by Dravet syndrome after her first dose of medical marijuana at five years of age. Her usage of Charlotte's Web was first featured in the 2013 CNN documentary "Weed". Media coverage increased demand for products high in CBD, which have been used to treat epilepsy in toddlers and children. One of the initial strains developed by the Stanley Brothers was originally called "Hippie's Disappointment" as it was a strain that had high CBD and could not induce a "high". While initially anecdotal reports sparked interest in treatment with cannabinoids, there was not enough evidence to draw conclusions with certainty about their safety or efficacy. In 2018, Epidiolex (cannabidiol as the therapeutic ingredient) oral solution was approved by the FDA for two types of epilepsy. History Charlotte's Web was a strain developed by the Stanley brothers (Joel, Jesse, Jon, Jordan, Jared and Josh) through crossbreeding a strain of marijuana with industrial hemp. This process created a variety with less tetrahydrocannabinol (THC) and more cannabidiol (CBD) than typical varieties of marijuana. The Stanley brothers grow the plants at their farm and greenhouses. A CBD rich oil is extracted from the harvested plants and concentrated through rotary evaporation. As it is so low in THC, the variety was originally called "Hippie's Disappointment". It is a less profitable plant with "close to no value to traditional marijuana consumers." Medical uses Evidence In 2014, there was little evidence about the safety or efficacy of cannabinoids in the treatment of epilepsy. A 2014 Cochrane review did not find enough evidence to draw conclusions about its use. A 2014 review by the American Academy of Neurology similarly concluded that "data are insufficient to support or refute the efficacy of cannabinoids for reducing seizure frequency." The Cochrane review suggests cannabinoids be reserved for people with symptoms that are not controllable by other means, who have been evaluated by EEG-video monitoring to confirm diagnosis, and are not eligible for better-established treatments such as surgery and neurostimulation. A second review described four placebo-controlled trials of cannabidiol including 48 people with a disease that was not manageable by other means. Three out of four trials reported some reduction in seizures, but no comparison with placebo was possible due to the small number of people in the trials. The drugs were well tolerated. A third review found that no reliable conclusions about the effect of cannabis on epilepsy could be drawn due to the poor quality of available data, but further research may be warranted because of the good safety profile observed in small clinical trials. Statements Due to the anecdotal nature of the health claims being made, medical bodies have published statements of concern. 
A 2014 position statement by the American Epilepsy Society stated: The recent anecdotal reports of positive effects of the marijuana derivative cannabidiol for some individuals with treatment-resistant epilepsy give reason for hope. However, we must remember that these are only anecdotal reports, and robust scientific evidence for the use of marijuana is lacking... at present, the epilepsy community does not know if marijuana is a safe and effective treatment, nor do they know the long-term effects that marijuana will have on learning, memory, and behavior, especially in infants and young children. Cannabis-derived products were not mentioned in the National Institute for Health and Care Excellence epilepsy treatment guidelines in 2012. Society and culture Legal status With the main ingredient classified as "industrial hemp" in the United States (under the Agriculture Improvement Act of 2018), Charlotte's Web Oil and other CBD products are legal in all 50 states, as long as the THC content is less than 0.3%. The publicity associated with Charlotte's Web has inspired a number of legislative bills, some of which are in the planning stages, and others that have been proposed or actually passed. Children, as "uniquely powerful advocates for medicinal pot across the country," have inspired "the movement to legalize medicinal marijuana," a movement which "has a face like Charlotte's – and it's a young one that's hard to ignore. Lawmakers across the country are pushing legislation to legalize marijuana oil as a treatment for children with epilepsy." On March 20, 2014, the Florida House of Representatives Budget Committee passed the "so-called Charlotte's Web measure (CS/HB 843)" designed to limit prosecutors' ability to prosecute those in possession of low THC/high CBD marijuana ("0.5 percent or less of tetrahydrocannabinol and more than 15 percent of cannabidiol") used for treating seizures. The law took effect July 1, 2014. Since then, Florida legislators have passed a bill with bipartisan support legalizing the use of Charlotte's Web, and Governor Rick Scott signed the "Compassionate Medical Cannabis Act of 2014" (SB 1030) into law on June 6, 2014. The law is also referred to as the "Charlotte's Web" law. The law specifies the number of distribution centers, which types of nurseries can grow the plants, requires various other controls, and provides funding for research. Federal legislation was introduced in 2014 (U.S. 113th Congress 2013–2014) but was never brought to a vote and died in committee. Rep. Scott Perry (R-PA-4) introduced H.R. 1635, the Charlotte's Web Medical Access Act of 2015, to the U.S. 114th Congress (2015–2016) with 62 bipartisan co-sponsors. It was referred to the House Committee on Energy and Commerce, the House Committee on the Judiciary, the Subcommittee on Health and the United States House Judiciary Subcommittee on Crime, Terrorism, Homeland Security and Investigations, but was not brought to a vote. On October 31, 2017, the FDA sent warning letters to four CBD marketers, including Stanley Brothers Social Enterprises, LLC (d/b/a CW Hemp), the producer of Charlotte's Web. They were warned "against making medical claims about cannabidiol (CBD). The agency also took issue with the businesses marketing CBD products as dietary supplements". Etymology Charlotte's Web is named after an American girl, Charlotte Figi, who developed Dravet syndrome (also known as severe myoclonic epilepsy of infancy or SMEI) as a baby. 
By age three, Figi was severely disabled and having 300 grand mal seizures a week despite treatment. Her parents learned about another child with Dravet Syndrome, who had been using a different type of medical marijuana since June 2011, and decided to try marijuana. Her parents and physicians said that she improved immediately. She followed a regular regimen that used a solution of the high-CBD marijuana extract in olive oil. She was given the oil under her tongue or in her food. Her parents said in 2013 that her epilepsy had improved so that she had only about four seizures per month, and she was able to engage in normal childhood activities. The type now named after Figi was not the first type her parents tried. As their original supply, a type called R4 that is also high in CBD and low in THC, was running out, they contacted the Stanley brothers. From the Stanleys' stock, they chose the high-CBD variety that has since been renamed to Charlotte's Web. Charlotte's story has been featured on two CNN documentaries, The Doctors TV show, 60 Minutes Australia, and Dateline NBC, among many other sources. An article in the National Journal detailing the role of several children as "uniquely powerful advocates for medicinal pot across the country" described Charlotte as the "first poster child for the issue...." Her story has led to her being described as "the girl who is changing medical marijuana laws across America," as well as the "most famous example of medicinal hemp use". On November 13, 2019, Charlotte was the first child featured on the cover of High Times magazine in her "Namesake" role as a "High Times Female 50" award nominee. Charlotte Figi died on April 7, 2020. Publicity and demand When Charlotte was five years old, her story was featured in the August 11, 2013, CNN documentary "Weed", hosted by Sanjay Gupta. On November 24, 2013, Paige Figi was a guest on The Doctors TV show, where Charlotte's story was told. She was also featured in Gupta's March 11, 2014, CNN documentary "Weed 2: Cannabis Madness". The extract received more publicity on October 6, 2014, when The Doctors TV show again featured a story about usage of Charlotte's Web. The physicians called for a change of the Federal classification. Sanjay Gupta has also expressed his support for Charlotte's Web on The Doctors TV show. On the October 17, 2014, episode of the ABC TV series The View, Paige Figi and Joel Stanley were interviewed by Whoopi Goldberg and Nicolle Wallace. The CNN documentaries received widespread publicity and popularized Charlotte's Web as a possible treatment for epilepsy and other conditions. Colorado has legalized both the medicinal and recreational use of marijuana, and many parents have flocked there with their suffering children in search of Charlotte's Web and other forms of medical marijuana. In November 2013, CBS Denver reported that "[t]here is now a growing community of 93 families with epileptic children using marijuana daily. Hundreds are on a waiting list and thousands are calling." In October 2014, Time noted the Stanley brothers had a waiting list of "more than 12,000 families." They have been termed "marijuana refugees", "part of a migration of families uprooting their lives and moving to Colorado, where the medicinal use of marijuana is permitted...forced to flee states where cannabis is off limits." 
In November 2014, David Nutt mentioned Charlotte's Web in the Royal Pharmaceutical Society's Pharmaceutical Journal, where he appealed for "the UK government [to] acts on evidence, allowing the use of medicinal cannabis and reducing barriers to its research." Families who say they have run out of pharmaceutical options have moved to Colorado to access Charlotte's Web. The demand has spurred calls for more research to determine whether these products actually do what is claimed. Amy Brooks-Kayal, vice president of the American Epilepsy Society, stated that epileptic seizures may come and go without any obvious explanation, and that Charlotte's web could cause developmental harm. She recommended that parents relocate so that their affected children could have access to one of the nation's top pediatric epilepsy centers rather than move to Colorado. The product has been described as the "country's most famous brand of CBD oil", the "largest selling CBD oil in the country", and the "number one brand", with 7% of the market. In October 2022, Charlotte's Web became the official CBD supplier for Major League Baseball with a multi-year contract. Distribution In November 2013, Josh Stanley said that Charlotte's web was 0.5% THC and 17% CBD, and that it "is as legal as other hemp products already sold in stores across Utah, including other oils, clothing, and hand creams, but is illegal, federally, to take across state lines." The legalities of selling the product to people who transport it across state lines are complicated, with difficulties for both the sellers and transporters. References External links Charlotte's Dr. moves to Israel to continue research Weed: Dr. Sanjay Gupta Reports (full transcript), CNN, Full CNN video Weed 2: Cannabis Madness, Dr. Sanjay Gupta Reports (full transcript), CNN, Full CNN video Anticonvulsants Cannabis strains Companies listed on the Toronto Stock Exchange Hemp Medicinal use of cannabis
Charlotte's Web (cannabis)
Biology
2,628
1,959,911
https://en.wikipedia.org/wiki/Imperia%20%28statue%29
Imperia is a statue at the entrance of the harbour of Konstanz, Germany, commemorating the Council of Constance that took place there between 1414 and 1418. The concrete statue is high, weighs , and stands on a pedestal that rotates around its axis once every four minutes. It was created by Peter Lenk and clandestinely erected in 1993. The erection of the statue caused controversy, but it was on the private property of a rail company that did not object to its presence. Eventually, it became a widely-known landmark of Konstanz. Imperia shows a woman holding two men on her hands. Although the two men resemble Pope Martin V (elected during the council) and Emperor Sigismund (who called the council), and they wear the papal tiara and imperial crown, Lenk has stated that these figures "are not the Pope and not the Emperor, but fools who have acquired the insignia of secular and spiritual power. And to what extent the real popes and emperors were also fools, I leave to the historical education of the viewer." The statue refers to a short story by Balzac, "La Belle Impéria". The story is a harsh satire of the Catholic clergy's morals, where Imperia seduces cardinals and princes at the Council of Constance and has power over them all. The historical Imperia that served as the source material of Balzac's story was a well-educated Italian courtesan who died in 1512, nearly 100 years after the council, and never visited Konstanz. References Further reading Helmut Weidhase: Imperia. Konstanzer Hafenfigur. Konstanz: Stadler 1997. External links "Imperia im Hafen Konstanz" . Peter Lenk (sculptor). Retrieved February 22, 2017. Text of Les contes drolatiques by Balzac, including "La belle Impéria" 3D-model of Imperia 1993 establishments in Germany 1993 sculptures Buildings and structures in Konstanz (district) Colossal statues Concrete sculptures in Germany German satire Outdoor sculptures in Germany Satirical sculptures Sculptures in Baden-Württemberg Sculptures of men in Germany Sculptures of women in Germany Seduction Statues in Germany
Imperia (statue)
Physics,Mathematics
453
21,876,751
https://en.wikipedia.org/wiki/Optical%20sorting
Optical sorting (sometimes called digital sorting) is the automated process of sorting solid products using cameras and/or lasers. Depending on the types of sensors used and the software-driven intelligence of the image processing system, optical sorters can recognize an object's color, size, shape, structural properties and chemical composition. The sorter compares objects to user-defined accept/reject criteria to identify and remove defective products and foreign material (FM) from the production line, or to separate product of different grades or types of materials. Optical sorters are in widespread use in the food industry worldwide, with the highest adoption in processing harvested foods such as potatoes, fruits, vegetables and nuts, where the technology achieves non-destructive, 100 percent inspection in-line at full production volumes. The technology is also used in pharmaceutical manufacturing and nutraceutical manufacturing, tobacco processing, waste recycling and other industries. Compared to manual sorting, which is subjective and inconsistent, optical sorting helps improve product quality, maximize throughput and increase yields while reducing labor costs. History Optical sorting is an idea that first came out of the desire to automate industrial sorting of agricultural goods like fruits and vegetables. Before automated optical sorting technology was conceived in the 1930s, companies like Unitec were producing wooden machinery to assist in the mechanical sorting of fruit. In 1931, a company known as "the Electric Sorting Company" was incorporated and began the creation of the world's first color sorters, which were being installed and used in Michigan's bean industry by 1932. In 1937, optical sorting technology had advanced to allow for systems based on a two-color principle of selection. The next few decades saw the installation of new and improved sorting mechanisms, like gravity feed systems, and the implementation of optical sorting in more agricultural industries. In the late 1960s, optical sorting began to be implemented in new industries beyond agriculture, like the sorting of ferrous and non-ferrous metals. By the 1990s, optical sorting was being used heavily in the sorting of solid wastes. With the large technological revolution happening in the late 1990s and early 2000s, optical sorters were being made more efficient via the implementation of new optical sensors, like CCD, UV, and IR cameras. Today, optical sorting is used in a wide variety of industries and, as such, is implemented with a varying selection of mechanisms to assist in that specific sorter's task. The sorting system In general, optical sorters feature four major components: the feed system, the optical system, image processing software, and the separation system. The objective of the feed system is to spread products into a uniform monolayer so products are presented to the optical system evenly, without clumps, at a constant velocity. The optical system includes lights and sensors housed above and/or below the flow of the objects being inspected. The image processing system compares objects to user-defined accept/reject thresholds to classify objects and actuate the separation system. The separation system — usually compressed air for small products and mechanical devices for larger products, like whole potatoes — pinpoints objects while in-air and deflects the objects to be removed into a reject chute, while the good product continues along its normal trajectory. 
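As a rough illustration of how the image-processing stage drives the separation system, the sketch below compares each detected object against user-defined accept/reject criteria and fires an air ejector for rejects. It is a minimal, hypothetical example: the object fields, threshold values, and ejector interface are assumptions for illustration, not any vendor's API.

```python
# Minimal sketch of threshold-based accept/reject logic in an optical sorter.
# Object fields, thresholds, and the ejector callable are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    color_deviation: float    # distance from the expected product color (0..1)
    defect_area_mm2: float    # total surface area flagged as defective
    is_foreign: bool          # structural-property sensor flagged foreign material

@dataclass
class RejectCriteria:
    max_color_deviation: float = 0.15
    max_defect_area_mm2: float = 4.0

def should_reject(obj: DetectedObject, criteria: RejectCriteria) -> bool:
    """Compare one object against the user-defined accept/reject criteria."""
    return (obj.is_foreign
            or obj.color_deviation > criteria.max_color_deviation
            or obj.defect_area_mm2 > criteria.max_defect_area_mm2)

def sort_stream(objects, criteria, fire_ejector):
    """Route each classified object: rejects are deflected, good product passes."""
    for obj in objects:
        if should_reject(obj, criteria):
            fire_ejector(obj)   # deflect into the reject chute
        # otherwise the object continues along its normal trajectory
```

In a real sorter the criteria would be set per product and per defect class through the machine's user interface; the sketch only shows the general accept/reject decision flow.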
The ideal sorter to use depends on the application. Therefore, the product's characteristics and the user's objectives determine the ideal sensors, software-driven capabilities and mechanical platform. Sensors Optical sorters require a combination of lights and sensors to illuminate and capture images of the objects so the images can be processed. The processed images will determine if the material should be accepted or rejected. There are camera sorters, laser sorters and sorters that feature a combination of the two on one platform. Lights, cameras, lasers and laser sensors can be designed to function within visible light wavelengths as well as the infrared (IR) and ultraviolet (UV) spectrums. The optimal wavelengths for each application maximize the contrast between the objects to be separated. Cameras and laser sensors can differ in spatial resolution, with higher resolutions enabling the sorter to detect and remove smaller defects. Cameras Monochromatic cameras detect shades of gray from black to white and can be effective when sorting products with high-contrast defects. Sophisticated color cameras with high color resolution are capable of detecting millions of colors to better distinguish more subtle color defects. Trichromatic color cameras (also called three-channel cameras) divide light into three bands, which can include red, green and/or blue within the visible spectrum as well as IR and UV. The interaction of different materials with parts of the electromagnetic spectrum makes these contrasts more evident than they appear to the naked human eye. Coupled with intelligent software, sorters that feature cameras are capable of recognizing each object's color, size and shape, as well as the color, size, shape and location of a defect on a product. Some intelligent sorters even allow the user to define a defective product based on the total defective surface area of any given object. Lasers While cameras capture product information based primarily on material reflectance, lasers and their sensors are able to distinguish a material's structural properties along with its color. This structural property inspection allows lasers to detect a wide range of organic and inorganic foreign material such as insects, glass, metal, sticks, rocks and plastic, even if they are the same color as the good product. Lasers can be designed to operate within specific wavelengths of light, whether in the visible spectrum or beyond. For example, lasers can detect chlorophyll by stimulating fluorescence using specific wavelengths, a process that is very effective for removing foreign material from green vegetables. Camera/laser combinations Sorters equipped with cameras and lasers on one platform are generally capable of identifying the widest variety of attributes. Cameras are often better at recognizing color, size and shape, while laser sensors identify differences in structural properties to maximize foreign material detection and removal. Hyperspectral imaging Driven by the need to solve previously impossible sorting challenges, a new generation of optical sorters features multispectral and hyperspectral imaging. Like trichromatic cameras, multispectral and hyperspectral cameras collect data from the electromagnetic spectrum. Unlike trichromatic cameras, which divide light into three bands, hyperspectral systems can divide light into hundreds of narrow bands over a continuous range that covers a vast portion of the electromagnetic spectrum. 
This opens the door for more detailed analysis that leads to a more consistent product. Using IR alone might detect some defects, but combining it with a broader range of the spectrum makes it more effective. Compared to the three data points per pixel collected by trichromatic cameras, hyperspectral cameras can collect hundreds of data points per pixel, which are combined to create a unique spectral signature (also called a fingerprint) for each object. When complemented by capable software intelligence, a hyperspectral sorter processes those fingerprints to enable sorting based on the chemical composition of the product. This is an emerging area of chemometrics. Software-driven intelligence Once the sensors capture the object's response to the energy source, image processing is used to manipulate the raw data. The image processing extracts and categorizes information about specific features. The user then defines accept/reject thresholds that are used to determine what is good and bad in the raw data flow. The art and science of image processing lies in developing algorithms that maximize the effectiveness of the sorter while presenting a simple user-interface to the operator. Object-based recognition is a classic example of software-driven intelligence. It allows the user to define a defective product based on where a defect lies on the product and/or the total defective surface area of an object. It offers more control in defining a wider range of defective products. When used to control the sorter's ejection system, it can improve the accuracy of ejecting defective products. This improves product quality and increases yields. New software-driven capabilities are constantly being developed to address the specific needs of various applications. As computing hardware becomes more powerful, new software-driven advancements become possible. Some of these advancements enhance the effectiveness of sorters to achieve better results while others enable completely new sorting decisions to be made. Platforms The considerations that determine the ideal platform for a specific application include the nature of the product – large or small, wet or dry, fragile or unbreakable, round or easy to stabilize – and the user's objectives. In general, products smaller than a grain of rice and as large as whole potatoes can be sorted. Throughputs range from less than 2 metric tons of product per hour on low-capacity sorters to more than 35 metric tons of product per hour on high-capacity sorters. Channel sorters The simplest optical sorters are channel sorters, a type of color sorter that can be effective for products that are small, hard, and dry with a consistent size and shape, such as rice and seeds. For these products, channel sorters offer an affordable solution and ease of use with a small footprint. Channel sorters feature monochromatic or color cameras and remove defects and foreign material based only on differences in color. For products that cannot be handled by a channel sorter – such as soft, wet, or nonhomogeneous products – and for processors that want more control over the quality of their product, freefall sorters (also called waterfall or gravity-fed sorters), chute-fed sorters, or belt sorters are better suited. These more sophisticated sorters often feature advanced cameras and/or lasers that, when complemented by capable software intelligence, detect objects' size, shape, color, structural properties, and chemical composition.
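As a rough illustration of two ideas described above — per-pixel spectral fingerprints and object-based recognition by defective surface area — the sketch below scores each pixel by the spectral angle between its spectrum and a reference signature, then rejects the object if too large a fraction of its area deviates. The function names, thresholds and the choice of the spectral angle as the similarity measure are assumptions of the example, not a description of any particular sorter's software.

```python
# Illustrative sketch only; thresholds and measures are hypothetical.
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle between a pixel's spectrum and a reference spectrum (1-D arrays of band intensities)."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def reject_object(cube: np.ndarray, reference: np.ndarray,
                  angle_threshold: float = 0.15, max_defect_fraction: float = 0.05) -> bool:
    """
    cube: (height, width, bands) hyperspectral pixels belonging to one object.
    A pixel counts as defective if its spectrum deviates from the good-product reference
    by more than angle_threshold; the object is rejected if the defective surface area
    exceeds max_defect_fraction of its total area.
    """
    h, w, _ = cube.shape
    angles = np.array([[spectral_angle(cube[i, j], reference) for j in range(w)]
                       for i in range(h)])
    defect_fraction = float(np.mean(angles > angle_threshold))
    return defect_fraction > max_defect_fraction
```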
Freefall and chute-fed sorters Freefall sorters inspect product in-air during the freefall and chute-fed sorters stabilize product on a chute prior to in-air inspection. The major advantages of freefall and chute-fed sorters, compared to belt sorters, are a lower price point and lower maintenance. These sorters are often most suitable for nuts and berries as well as frozen and dried fruits, vegetables, potato strips and seafood, in addition to waste recycling applications that require mid-volume throughputs. Belt sorters Belt sorting platforms are often preferred for higher capacity applications such as vegetable and potato products prior to canning, freezing or drying. The products are often stabilized on a conveyor belt prior to inspection. Some belt sorters inspect products from above the belt, while other sorters also send products off of the belt for an in-air inspection. These sorters can either be designed to achieve traditional two-way sorting or three-way sorting if two ejector systems with three outfeed streams are equipped. ADR systems A fifth type of sorting platform, called an automated defect removal (ADR) system, is specifically for potato strips (French fries). Unlike other sorters that eject products with defects from the production line, ADR systems identify defects and actually cut the defects from the strips. The combination of an ADR system followed by a mechanical nubbin grader is another type of optical sorting system because it uses optical sensors to identify and remove defects. Single-file inspection systems The platforms described above all operate with materials in bulk; meaning they do not need the materials to be in a single-file to be inspected. In contrast, a sixth type of platform, used in the pharmaceutical industry, is a single-file optical inspection system. These sorters are effective in removing foreign objects based on differences in size, shape and color. They are not as popular as the other platforms due to decreased efficiency. Mechanical graders For products that require sorting only by size, mechanical grading systems are used because sensors and image processing software is not necessary. These mechanical grading systems are sometimes referred to as sorting systems, but should not be confused with optical sorters that feature sensors and image processing systems. Practical usage Waste and recycling Optical sorting machines can be used to identify and discard manufacturing waste, such as metals, drywall, cardboard, and various plastics. In the metal industry, optical sorting machines are used to discard plastics, glass, wood, and other non-needed metals. The plastic industry uses optical sorting machines to not only discard various materials like those listed, but also different types of plastics. Optical sorting machines discard different types of plastics by distinguishing resin types. Resin types that optical sorting machines can identify are: PET, HDPE, PP, PVC, LDPE, and others. Most recyclables are in the form of bottles. Optical sorting also aids in recycling since the discarded materials are stored in bins. Once a bin is full of a given material, it can be sent to the appropriate recycling facility. Optical sorting machines’ ability to distinguish between resin types also aids in the process of plastic recycling because there are different methods used for each plastic type. 
Food and drink In the coffee industry, optical sorting machines are used to identify and remove underdeveloped coffee beans called quakers; quakers are beans that contain mostly carbohydrates and sugars. A more accurate calibration yields a lower total number of defective products. Some coffee companies like Counter Culture use these machines in addition to pre-existing sorting methods in order to create a better-tasting cup of coffee. One limitation is that someone has to program these machines by hand to identify defective products. However, this science is not limited to coffee beans; food items such as mustard seeds, fruits, wheat, and hemp can all be processed through optical sorting machines. In the wine manufacturing process, grapes and berries are sorted like coffee beans. Grape sorting is used to ensure that no unripe or green parts of the plant are involved in the wine making process. In the past, manual sorting via sorting tables was used to separate the defective grapes from the sound grapes. Now, mechanical harvesting provides a higher effectiveness rate compared to manual sorting. At different points in the line, materials are sorted out via several optical sorting machines. Each machine is looking for various materials of differing shapes and sizes. The berries or grapes can then be sorted accordingly using a camera, a laser, or a form of LED technology with regard to the shape and form of the given fruit. The sorting machine then discards any unnecessary elements. Pharmaceuticals In the pharmaceutical sector, optical sorting ensures the production of high-quality and safe medications. The technology meticulously inspects tablets and capsules to detect and remove defects such as cracks, chips, discoloration, and size deviations. It also eliminates foreign contaminants like metal particles or plastic fragments that may have entered during manufacturing. By automating the inspection process, optical sorters reduce human error and labor costs while maintaining compliance with stringent regulatory standards, ultimately safeguarding consumer health and brand reputation. Additionally, in medical laboratories, optical sorters aid in the sorting and analysis of biological samples, such as cells or bacteria cultures. The high-speed analysis and sorting capabilities of these machines improve diagnostic accuracy, research efficiency, and overall laboratory productivity. See also Food grading Food safety Food technology References Industrial machinery Food processing Image processing Recycling Applications of computer vision
Optical sorting
Engineering
3,082
2,140,957
https://en.wikipedia.org/wiki/Huggies%20Pull-Ups
Pull-Ups is a brand of disposable diapers made under the Huggies brand of baby products. The product was first introduced in 1989 and became popular with the slogan "I'm a big kid now!" The training pants are marketed with purple packaging: boys' designs are blue and currently feature characters from the Disney Junior show Mickey Mouse Funhouse; girls' designs are purple with the Disney Junior show Minnie's Bow-Toons characters. Huggies Pull-Ups variations Huggies Pull-Ups have been distributed in four different types, a lineup that has remained unchanged since 2011 (not counting the renaming of Wetness Liner). Learning Designs In March 2005, the original Huggies Pull-Ups were renamed Learning Designs after the small pictures that fade when they become wet. Wetness Liner Wetness Liner Pull-Ups Training Pants were first distributed in 2005 as a competitor to the now defunct Pampers Feel 'N Learn. These Pull-Ups were much like Learning Designs Pull-Ups, except that a special liner was added. This liner is placed on the inside of the Pull-Up, where the wearer is most likely to wet, and is sensitive to urine. When the Wetness Liner is exposed to urine, it causes the wearer to feel uncomfortable and learn that they should not wet themselves and should use the toilet instead. Wetness Liner Pull-Ups also have the Learning Designs, which also fade when the wearer wets the Pull-Up. Cool-Alert name change In 2006, the Wetness Liner Pull-Ups were replaced by Cool-Alert. This variation has remained in the lineup since. GoodNites GoodNites are used to control bedwetting. In 2008, GoodNites disposable underwear split from the Pull-Ups brand and merged with the Huggies brand. Then, in 2011, GoodNites split from the Huggies brand and became its own brand, sharing its name with the product. Night-Time At the same time that Wetness Liner was renamed Cool Alert, Pull-Ups introduced Night-Time Pull-Ups. The Night-Time Pull-Ups were very much like a regular Pull-Ups pant, except that they have more absorbency and bedtime designs featuring Toy Story characters for boys and girls. The Night-Time Pull-Ups are not available in the 4T-5T or 5T-6T sizes. Potty training The main use for Pull-Ups Training Pants is as an aid for toilet training toddlers and to help them learn not to wet. Up until 2000, Pull-Ups Training Pants were nothing more than diapers that go off and on like underwear, but since 2000 there have been several changes to them. The first was the addition of magic stars/flowers (renamed Learning Designs on March 2, 2005), placed on the inside only in 2005-2007 and on the front of the pant, which fade when the wearer wets the pant. These serve to discourage wetting and to motivate the wearer to stay dry in time to make it to the potty; if the wearer stays dry, the stars/flowers stay on the Pull-Up. Next was the addition of Easy-Open sides. These allow the sides of the Pull-Up to still go off and on like underwear, while enabling parents to easily open the Pull-Up to check whether the wearer has soiled it, or to quickly change a messy Pull-Up. Though many enjoy this feature, some parents have criticized it for causing the Pull-Up to rip too easily. History 1989 Huggies introduced Pull-Ups brand disposable training pants. 1991 The first Pull-Ups commercial aired on television, and "I'm a big kid now!" became the brand's main slogan.
1992 Single-sex Pull-Ups training pants were introduced with customized absorbency placed where boys and girls wet the most and also gender-specific prints: vehicles for boys and animals for girls. 1994 GoodNites disposable underwear for older children were introduced. Leak guards were added to handle wetness better than any other training pant. 1995 A back label was added to the pants to distinguish the obverse from the reverse. 1996 Realistic underwear designs were introduced, with a fly front style for boys and lace style for girls. 1997 Disney character designs were introduced, starring Mickey Mouse for boys and Minnie Mouse for girls. 1999 GoodNites introduces an XL size fitting kids well over 100 lbs, which, like all GoodNites of this era, is offered in all white only. 2000 Pull-Ups added a wetness indicator on the front of the pants to tell whether or not the wearer is wet. 2003 Toy Story and Disney Princesses designs debuted for boys and girls respectively. The slogan that was used in the original late-1980s and early-1990s commercials, "I'm a big kid now!", was recycled for the product's new commercials. 2004 Single-sex underwear was introduced with customized absorbency placed where boys and girls wet the most and also gender-specific prints. 2005 Training pants with a Wetness Liner were introduced, which are similar to the Learning Designs training pants but contain a liner that lets the wearer feel when they are wet, as the liner has an unpleasant feel when wet. 2006 Night-Time training pants were introduced, Wetness Liner training pants were renamed to Cool-Alert, and Cars designs were introduced for boys to match the film's release. 2008 GoodNites halts its connection with Pull-Ups and is now linked to Huggies and Kimberly-Clark. 2009 The infamous Potty Dance debuted on the airwaves. This was deemed appalling due to suggestive movement of pelvic areas, and was subsequently pulled and replaced with a non-offensive version. Flying stars were added to the bright orange background. 2010 Pull-Ups offered a phone call service associated with Disney. As a reward for finishing potty training, the parent of the wearer could request a phone call in which the caller pretended to be a Disney Princess or Toy Story character. This limited-time offer is now defunct. Cars designs were replaced with Toy Story 3 designs which corresponded with the latter movie's release. 2011 Toy Story 3 designs were replaced with Cars 2 designs which corresponded with the latter film's release. GoodNites halts its connection with Huggies but is still connected with Kimberly-Clark. 2012 Minnie Mouse returned on some girls' Pull-Ups. The sides on boys' Pull-Ups were recolored from blue to red. 2013 The sides of boys' Pull-Ups were recolored from red to blue. The sides of girls' Pull-Ups were recolored from pink to purple. Monsters University designs were added for both genders to correspond with the film's release. Cinderella was replaced by Ariel on girls' Learning Designs and by Rapunzel on girls' Night*Time Pull-Ups. Toy Story designs return for boys. March 31, 2013 Pull-Ups Cool Alert was discontinued in the United States. 2014 Monsters University, Minnie Mouse, and Toy Story designs were replaced with Doc McStuffins designs for girls and Jake and the Never Land Pirates designs for boys. Pull-Ups Cool Alert returns exclusively online at Amazon, Diapers.com, Drugstore.com, Walmart, Sam’s Club, Target and Peapod.com. Pull-Ups made their training pants more absorbent.
2015 Sofia the First designs debut for girls. Mickey Mouse returned on some boys' Pull-Ups. Minnie Mouse returned on some girls' Pull-Ups. Mater returned on some boys' Night*Time Pull-Ups. Cinderella returned on some girls' Night*Time Pull-Ups. Cool Alert is available for retailers everywhere under its new name Cool & Learn. Pull-Ups' iconic "Big Kid" child photo is removed from packaging. Pull-Ups resembles its 2009 logo sans a yellow outline. 2016 Whisker Haven Tales with the Palace Pets designs debut for girls and Kion from The Lion Guard designs debut for boys. Night*Time Pull-Ups introduced Miles Callisto from Miles from Tomorrowland designs for boys. Belle returns on some girls' Night*Time Pull-Ups. 2017 The Lion Guard designs are replaced with Lightning McQueen and Jackson Storm designs in a single case. Whisker Palace Pets designs are replaced with Doc McStuffins and Minnie Mouse designs in a single case. Cars 3 designs are added for both genders to correspond with the film's release. Toy Story designs replaced the Miles from Tomorrowland prints on the Night*Time variant. 2018 The Lion Guard designs are replaced with Mickey and the Roadster Racers designs. 2019 12M-24M sizes are introduced. Toy Story 4 designs are added for both genders to correspond with the film's release. 2020 New packaging is introduced with the following changes: The iconic "Big Kid" child photo returns; however, instead of using a potty or laying in a bed, the child is simply smiling while wearing the training pants. The base color for both genders is purple, with accents of blue or pink for boys and girls, respectively. Pull-Ups refreshed its logo, with a flat design and small tweaking on the lettering. Toy Story 4 designs were replaced by Mickey Mouse designs on boys' Learning Designs and by Cars designs on boys' Cool & Learn Pull-Ups. Toy Story 4 designs were replaced by Minnie Mouse designs on girls' Learning Designs and by Belle from Beauty and the Beast designs on girls' Cool & Learn Pull-Ups. A plant-based line titled "New Leaf" was introduced with Frozen II designs for both genders. Controversy The Cool-Alert Pull-Ups had a controversial issue regarding that the wearer likely will either get a rash or not feel the cooling effect when (s)he wets the pant. The 2009 Potty Dance commercial had aggravated parents due to its suggestive dancing, mainly, when the toddlers put their hands on their genitalia and make circular motions with their hips. This has been pulled from airwaves and replaced with a more appropriate version by Ralph's World, which replaces the offensive movements with sidesteps. Competition Ever since Huggies Pull-Ups became popular, several other brands tried to copy their product. The first competitor besides store brand training pants were Pampers Trainers made from 1993 until 1995. In 2002, Pampers introduced "Easy Ups" training pants. The Pampers brand also had training pants with a wetness liner called "Feel 'N Learn" which were made from 2004 until 2007. Luvs also had a line of training pants made in the 1990s. Sponsorships Pull-Ups are the official sponsor of ESPN Radio's coverage of Major League Baseball, as well as Westwood One's coverage of Sunday Night Football, and on many terrestrial broadcast television stations and children's TV brands, including Nickelodeon, Disney, Cartoon Network, Universal Kids (formerly known as (PBS Kids) Sprout), etc. 
See also Huggies Toilet training External links Pull-Ups Training Pants Official Website Products introduced in 1989 Kimberly-Clark brands Diaper brands Toilet training
Huggies Pull-Ups
Biology
2,299
780,817
https://en.wikipedia.org/wiki/Lightweight%20markup%20language
A lightweight markup language (LML), also termed a simple or humane markup language, is a markup language with simple, unobtrusive syntax. It is designed to be easy to write using any generic text editor and easy to read in its raw form. Lightweight markup languages are used in applications where it may be necessary to read the raw document as well as the final rendered output. For instance, a person downloading a software library might prefer to read the documentation in a text editor rather than a web browser. Another application for such languages is to provide for data entry in web-based publishing, such as blogs and wikis, where the input interface is a simple text box. The server software then converts the input into a common document markup language like HTML. History Lightweight markup languages were originally used on text-only displays which could not display characters in italics or bold, so informal methods to convey this information had to be developed. This formatting choice was naturally carried forth to plain-text email communications. Console browsers may also resort to similar display conventions. In 1986 international standard SGML provided facilities to define and parse lightweight markup languages using grammars and tag implication. The 1998 W3C XML is a profile of SGML that omits these facilities. However, no SGML document type definition (DTD) for any of the languages listed below is known. Types Lightweight markup languages can be categorized by their tag types. Like HTML (<b>bold</b>), some languages use named elements that share a common format for start and end tags (e.g. BBCode [b]bold[/b]), whereas proper lightweight markup languages are restricted to ASCII-only punctuation marks and other non-letter symbols for tags, but some also mix both styles (e.g. Textile bq. ) or allow embedded HTML (e.g. Markdown), possibly extended with custom elements (e.g. MediaWiki <ref>'''source'''</ref>). Most languages distinguish between markup for lines or blocks and for shorter spans of texts, but some only support inline markup. Some markup languages are tailored for a specific purpose, such as documenting computer code (e.g. POD, reST, RD) or being converted to a certain output format (usually HTML or LaTeX) and nothing else, others are more general in application. This includes whether they are oriented on textual presentation or on data serialization. Presentation oriented languages include AsciiDoc, atx, BBCode, Creole, Crossmark, Djot, Epytext, Haml, JsonML, MakeDoc, Markdown, Org-mode, POD (Perl), reST (Python), RD (Ruby), Setext, SiSU, SPIP, Xupl, Texy!, Textile, txt2tags, UDO and Wikitext. Data serialization oriented languages include Curl (homoiconic, but also reads JSON; every object serializes), JSON, and YAML. Comparison of language features Markdown's own syntax does not support class attributes or id attributes; however, since Markdown supports the inclusion of native HTML code, these features can be implemented using direct HTML. (Some extensions may support these features.) txt2tags' own syntax does not support class attributes or id attributes; however, since txt2tags supports inclusion of native HTML code in tagged areas, these features can be implemented using direct HTML when saving to an HTML target. DokuWiki does not support HTML import natively, but HTML to DokuWiki converters and importers exist and are mentioned in the official documentation. 
DokuWiki does not support class or id attributes, but can be set up to support HTML code, which does support both features. HTML code support was built-in before release 2023-04-04. In later versions, HTML code support can be achieved through plugins, though it is discouraged. Comparison of implementation features Comparison of lightweight markup language syntax Inline span syntax Although usually documented as yielding italic and bold text, most lightweight markup processors output semantic HTML elements em and strong instead. Monospaced text may either result in semantic code or presentational tt elements. Few languages make a distinction, e.g. Textile, or allow the user to configure the output easily, e.g. Texy. LMLs sometimes differ for multi-word markup where some require the markup characters to replace the inter-word spaces (infix). Some languages require a single character as prefix and suffix, other need doubled or even tripled ones or support both with slightly different meaning, e.g. different levels of emphasis. Gemtext does not have any inline formatting, monospaced text (called preformatted text in the context of Gemtext) must have the opening and closing ``` on their own lines. Emphasis syntax In HTML, text is emphasized with the <em> and <strong> element types, whereas <i> and <b> traditionally mark up text to be italicized or bold-faced, respectively. Microsoft Word and Outlook, and accordingly other word processors and mail clients that strive for a similar user experience, support the basic convention of using asterisks for boldface and underscores for italic style. While Word removes the characters, Outlook retains them. Editorial syntax In HTML, removed or deleted and inserted text is marked up with the <del> and <ins> element types, respectively. However, legacy element types <s> or <strike> and <u> are still also available for stricken and underlined spans of text. AsciiDoc, ATX, Creole, MediaWiki, PmWiki, reST, Slack, Textile, Texy! and WhatsApp do not support dedicated markup for underlining text. Textile does, however, support insertion via the +inserted+ syntax. AsciiDoc, ATX, Creole, MediaWiki, PmWiki, reST, Setext and Texy! do not support dedicated markup for striking through text. DokuWiki supports HTML-like <del>stricken</del> syntax, even with embedded HTML disabled. Programming syntax Quoted computer code is traditionally presented in typewriter-like fonts where each character occupies the same fixed width. HTML offers the semantic <code> and the deprecated, presentational <tt> element types for this task. Mediawiki and Gemtext do not provide lightweight markup for inline code spans. Heading syntax Headings are usually available in up to six levels, but the top one is often reserved to contain the same as the document title, which may be set externally. Some documentation may associate levels with divisional types, e.g. part, chapter, section, article or paragraph. Most LMLs follow one of two styles for headings, either Setext-like underlines or atx-like line markers, or they support both. Underlined headings Level 1 Heading =============== Level 2 Heading --------------- Level 3 Heading ~~~~~~~~~~~~~~~ The first style uses underlines, i.e. repeated characters (e.g. equals =, hyphen - or tilde ~, usually at least two or four times) in the line below the heading text. RST determines heading levels dynamically, which makes authoring more individual on the one hand, but complicates merges from external sources on the other hand. 
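The dynamic level assignment described for RST can be sketched in a few lines of Python. This is an illustrative toy, not the parser of any particular tool: it treats a line as an underlined heading when the following line repeats a single punctuation character at least as long as the heading text, and it assigns levels in the order the underline characters are first encountered.

```python
# Toy detector for underline-style ("setext"/RST-like) headings; not any tool's real parser.
UNDERLINE_CHARS = set("=-~^\"'*+#")

def find_underlined_headings(lines):
    """Yield (level, heading_text) pairs from a list of source lines."""
    level_of_char = {}  # underline character -> heading level, assigned on first use
    for prev, cur in zip(lines, lines[1:]):
        text = prev.rstrip()
        rule = cur.rstrip()
        if (text and rule and len(rule) >= 2 and len(rule) >= len(text)
                and len(set(rule)) == 1 and rule[0] in UNDERLINE_CHARS):
            ch = rule[0]
            level_of_char.setdefault(ch, len(level_of_char) + 1)
            yield level_of_char[ch], text

# Example:
# list(find_underlined_headings(["Title", "=====", "", "Section", "-------"]))
# -> [(1, "Title"), (2, "Section")]
```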
Prefixed headings # Level 1 Heading ## Level 2 Heading ## ### Level 3 Heading ### The second style is based on repeated markers (e.g. hash #, equals = or asterisk *) at the start of the heading itself, where the number of repetitions indicates the (sometimes inverse) heading level. Most languages also support the reduplication of the markers at the end of the line, but whereas some make them mandatory, others do not even expect their numbers to match. Org-mode supports indentation as a means of indicating the level. BBCode does not support section headings at all. POD and Textile choose the HTML convention of numbered heading levels instead. Microsoft Word supports auto-formatting paragraphs as headings if they do not contain more than a handful of words, no period at the end and the user hits the enter key twice. For lower levels, the user may press the tabulator key the according number of times before entering the text, i.e. one through eight tabs for heading levels two through nine. Link syntax Hyperlinks can either be added inline, which may clutter the code because of long URLs, or with named alias or numbered id references to lines containing nothing but the address and related attributes and often may be located anywhere in the document. Most languages allow the author to specify text Text to be displayed instead of the plain address http://example.com and some also provide methods to set a different link title Title which may contain more information about the destination. LMLs that are tailored for special setups, e.g. wikis or code documentation, may automatically generate named anchors (for headings, functions etc.) inside the document, link to related pages (possibly in a different namespace) or provide a textual search for linked keywords. Most languages employ (double) square or angular brackets to surround links, but hardly any two languages are completely compatible. Many can automatically recognize and parse absolute URLs inside the text without further markup. Gemtext and setext links must be on a line by themselves, they cannot be used inline. Org-mode's normal link syntax does a text search of the file. You can also put in dedicated targets with <<id>>. List syntax HTML requires an explicit element for the list, specifying its type, and one for each list item, but most lightweight markup languages need only different line prefixes for the bullet points or enumerated items. Some languages rely on indentation for nested lists, others use repeated parent list markers. Microsoft Word automatically converts paragraphs that start with an asterisk *, hyphen-minus - or greater-than bracket > followed by a space or horizontal tabulator as bullet list items. It will also start an enumerated list for the digit 1 and the case-insensitive letters a (for alphabetic lists) or i (for roman numerals), if they are followed by a period ., a closing round parenthesis ), a greater-than sign > or a hyphen-minus - and a space or tab; in case of the round parenthesis an optional opening one ( before the list marker is also supported. Languages differ on whether they support optional or mandatory digits in numbered list items, which kinds of enumerators they understand (e.g. decimal digit 1, roman numerals i or I, alphabetic letters a or A) and whether they support to keep explicit values in the output format. Some Markdown dialects, for instance, will respect a start value other than 1, but ignore any other explicit value. 
Slack assists the user in entering enumerated and bullet lists, but does not actually format them as such, i.e. it just includes a leading digit followed by a period and a space or a bullet character • in front of a line. Historical formats The following lightweight markup languages, while similar to some of those already mentioned, have not yet been added to the comparison tables in this article: EtText: circa 2000 Grutatext: circa 2002 See also Comparison of document-markup languages Comparison of documentation generators Lightweight programming language Markdown Wikitext References External links Computing-related lists Data serialization formats Markup language comparisons Markup languages de:Auszeichnungssprache#Lightweight Markup Language
Lightweight markup language
Technology
2,505
66,678,752
https://en.wikipedia.org/wiki/I%C3%B1upiaq%20numerals
The Iñupiaq language has a vigesimal (base-20) numeral system, with words for numerals up to 20¹² (a bit over 4 quadrillion). Numerals are built from a small number of root words and a number of compounding suffixes. The following list gives the various numerals of the language, omitting only the higher derivatives ending in the suffix , which subtracts one from the value of the stem. (See Iñupiaq language#Numerals.) They are transcribed both in the vigesimal Kaktovik digits that were designed for Iñupiaq and in the decimal Hindu-Arabic digits. Apart from the subtractive suffix , which has no counterpart in Kaktovik notation, and the idiosyncratic root word 'six', Iñupiaq numerals are closely represented by the Kaktovik digits. References Numerals Inupiat language
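As a quick illustration of the base-20 place values and the sub-base-5 structure of the Kaktovik digits described above, the following Python sketch converts a non-negative integer into its vigesimal digits and splits a single digit into its stroke counts. The function names are simply illustrative choices for this example.

```python
# Illustrative sketch of base-20 place values and the five-and-one structure of Kaktovik digits.
def to_vigesimal(n: int):
    """Return the base-20 digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, d = divmod(n, 20)
        digits.append(d)
    return digits[::-1]

def kaktovik_strokes(digit: int):
    """Split a single base-20 digit (0-19) into its (fives, ones) stroke counts."""
    return divmod(digit, 5)

# Example: 2024 = 5*400 + 1*20 + 4, so
# to_vigesimal(2024) -> [5, 1, 4]; kaktovik_strokes(5) -> (1, 0)
```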
Iñupiaq numerals
Mathematics
203
7,903,176
https://en.wikipedia.org/wiki/Doubly%20fed%20electric%20machine
Doubly fed electric machines, Doubly fed induction generator (DFIG), or slip-ring generators, are electric motors or electric generators, where both the field magnet windings and armature windings are separately connected to equipment outside the machine. By feeding adjustable frequency AC power to the field windings, the magnetic field can be made to rotate, allowing variation in motor or generator speed. This is useful, for instance, for generators used in wind turbines. Additionally, DFIG-based wind turbines offer the ability to control active and reactive power. Introduction Doubly fed electrical generators are similar to AC electrical generators, but have additional features which allow them to run at speeds slightly above or below their natural synchronous speed. This is useful for large variable speed wind turbines, because wind speed can change suddenly. When a gust of wind hits a wind turbine, the blades try to speed up, but a synchronous generator is locked to the speed of the power grid and cannot speed up. So large forces are developed in the hub, gearbox, and generator as the power grid pushes back. This causes wear and damage to the mechanism. If the turbine is allowed to speed up immediately when hit by a wind gust, the stresses are lower with the power from the wind gust still being converted to useful electricity. One approach to allowing wind turbine speed to vary is to accept whatever frequency the generator produces, convert it to DC, and then convert it to AC at the desired output frequency using an inverter. This is common for small house and farm wind turbines. But the inverters required for megawatt-scale wind turbines are large and expensive. Doubly fed generators are another solution to this problem. Instead of the usual field winding fed with DC, and an armature winding where the generated electricity comes out, there are two three-phase windings, one stationary and one rotating, both separately connected to equipment outside the generator. Thus, the term doubly fed is used for this kind of machines. One winding is directly connected to the output, and produces 3-phase AC power at the desired grid frequency. The other winding (traditionally called the field, but here both windings can be outputs) is connected to 3-phase AC power at variable frequency. This input power is adjusted in frequency and phase to compensate for changes in speed of the turbine. Adjusting the frequency and phase requires an AC to DC to AC converter. This is usually constructed from very large IGBT semiconductors. The converter is bidirectional, and can pass power in either direction. Power can flow from this winding as well as from the output winding. History With its origins in wound rotor induction motors with multiphase winding sets on the rotor and stator, respectively, which were invented by Nikola Tesla in 1888, the rotor winding set of the doubly fed electric machine is connected to a selection of resistors via multiphase slip rings for starting. However, the slip power was lost in the resistors. Thus means to increase the efficiency in variable speed operation by recovering the slip power were developed. In Krämer (or Kraemer) drives the rotor was connected to an AC and DC machine set that fed a DC machine connected to the shaft of the slip ring machine. Thus the slip power was returned as mechanical power and the drive could be controlled by the excitation currents of the DC machines. 
The drawback of the Krämer drive is that the machines need to be overdimensioned in order to cope with the extra circulating power. This drawback was corrected in the Scherbius drive, where the slip power is fed back to the AC grid by motor generator sets. The rotating machinery used for the rotor supply was heavy and expensive. An improvement in this respect was the static Scherbius drive, where the rotor was connected to a rectifier-inverter set constructed first from mercury arc-based devices and later on with semiconductor diodes and thyristors. In the schemes using a rectifier, power flow was possible only out of the rotor because of the uncontrolled rectifier. Moreover, only sub-synchronous operation as a motor was possible. Another concept using a static frequency converter had a cycloconverter connected between the rotor and the AC grid. The cycloconverter can feed power in both directions and thus the machine can be run at both sub- and oversynchronous speeds. Large cycloconverter-controlled, doubly fed machines have been used to run single phase generators feeding  Hz railway grid in Europe. Cycloconverter-powered machines can also run the turbines in pumped storage plants. Today the frequency changer used in applications up to a few tens of megawatts consists of two back-to-back connected IGBT inverters. Several brushless concepts have also been developed in order to get rid of the slip rings that require maintenance. Doubly fed induction generator The doubly fed induction generator (DFIG) is a generating principle widely used in wind turbines. It is based on an induction generator with a multiphase wound rotor and a multiphase slip ring assembly with brushes for access to the rotor windings. It is possible to avoid the multiphase slip ring assembly, but there are problems with efficiency, cost and size. A better alternative is a brushless wound-rotor doubly fed electric machine. The principle of the DFIG is that the stator windings are connected to the grid, while the rotor windings are connected to the converter via slip rings and a back-to-back voltage source converter that controls both the rotor and the grid currents. Thus rotor frequency can freely differ from the grid frequency (50 or 60 Hz). By using the converter to control the rotor currents, it is possible to adjust the active and reactive power fed to the grid from the stator independently of the generator's turning speed. The control principle used is either two-axis current vector control or direct torque control (DTC). DTC has turned out to have better stability than current vector control, especially when high reactive currents are required from the generator. The doubly fed generator rotors are typically wound with 2 to 3 times the number of turns of the stator. This means that the rotor voltages will be higher and the currents respectively lower. Thus in the typical ±30% operational speed range around the synchronous speed, the rated current of the converter is accordingly lower, which leads to a lower cost of the converter. The drawback is that controlled operation outside the operational speed range is impossible because of the higher than rated rotor voltage. Further, the voltage transients due to grid disturbances (three- and two-phase voltage dips, especially) will also be magnified. In order to prevent high rotor voltages (and high currents resulting from these voltages) from destroying the insulated-gate bipolar transistors and diodes of the converter, a protection circuit (called crowbar) is used.
The crowbar will short-circuit the rotor windings through a small resistance when excessive currents or voltages are detected. In order to be able to continue the operation as quickly as possible an active crowbar has to be used. The active crowbar can remove the rotor short in a controlled way and thus the rotor side converter can be started only after 20–60ms from the start of the grid disturbance when the remaining voltage stays above 15% of the nominal voltage. Thus, it is possible to generate reactive current to the grid during the rest of the voltage dip and in this way help the grid to recover from the fault. For zero voltage ride through, it is common to wait until the dip ends because it is otherwise not possible to know the phase angle where the reactive current should be injected. As a summary, a doubly fed induction machine is a wound-rotor doubly fed electric machine and has several advantages over a conventional induction machine in wind power applications. First, as the rotor circuit is controlled by a power electronics converter, the induction generator is able to both import and export reactive power. This has important consequences for power system stability and allows the machine to support the grid during severe voltage disturbances (low-voltage ride-through; LVRT). Second, the control of the rotor voltages and currents enables the induction machine to remain synchronized with the grid while the wind turbine speed varies. A variable speed wind turbine utilizes the available wind resource more efficiently than a fixed speed wind turbine, especially during light wind conditions. Third, the cost of the converter is low when compared with other variable speed solutions because only a fraction of the mechanical power, typically 25–30%, is fed to the grid through the converter, the rest being fed to grid directly from the stator. The efficiency of the DFIG is very good for the same reason. See also Variable-frequency transformer References External links Electric motors Electrical generators
Doubly fed electric machine
Physics,Technology,Engineering
1,842
26,635,074
https://en.wikipedia.org/wiki/Axillary%20lines
The axillary lines are the anterior axillary line, midaxillary line and the posterior axillary line. The anterior axillary line is a coronal line on the anterior torso marked by the anterior axillary fold. It is the imaginary line that runs down from the point midway between the middle of the clavicle and the lateral end of the clavicle. The V5 ECG lead is placed on the anterior axillary line, horizontally even with V4. The midaxillary line is a coronal line on the torso between the anterior and posterior axillary lines. It is a landmark used in thoracentesis and for placing the V6 electrode of the 10-electrode ECG. The posterior axillary line is a coronal line on the posterior torso marked by the posterior axillary fold. Additional images See also List of anatomical lines References External links http://www.meddean.luc.edu/Lumen/MedEd/MEDICINE/PULMONAR/apd/lines.htm Anatomy
Axillary lines
Biology
222
3,360,343
https://en.wikipedia.org/wiki/Ostrowski%27s%20theorem
In number theory, Ostrowski's theorem, due to Alexander Ostrowski (1916), states that every non-trivial absolute value on the rational numbers is equivalent to either the usual real absolute value or a -adic absolute value. Definitions Two absolute values and on the rationals are defined to be equivalent if they induce the same topology; this can be shown to be equivalent to the existence of a positive real number such that (Note: In general, if is an absolute value, is not necessarily an absolute value anymore; however if two absolute values are equivalent, then each is a positive power of the other.) The trivial absolute value on any field K is defined to be The real absolute value on the rationals is the standard absolute value on the reals, defined to be This is sometimes written with a subscript 1 instead of infinity. For a prime number , the -adic absolute value on is defined as follows: any non-zero rational can be written uniquely as , where and are coprime integers not divisible by , and is an integer; so we define Proof The following proof follows the one of Theorem 10.1 in Schikhof (2007). Let be an absolute value on the rationals. We start the proof by showing that it is entirely determined by the values it takes on prime numbers. From the fact that and the multiplicativity property of the absolute value, we infer that . In particular, has to be 0 or 1 and since , one must have . A similar argument shows that . For all positive integer , the multiplicativity property entails . In other words, the absolute value of a negative integer coincides with that of its opposite. Let be a positive integer. From the fact that and the multiplicativity property, we conclude that . Let now be a positive rational. There exist two coprime positive integers and such that . The properties above show that . Altogether, the absolute value of a positive rational is entirely determined from that of its numerator and denominator. Finally, let be the set of prime numbers. For all positive integer , we can write where is the p-adic valuation of . The multiplicativity property enables one to compute the absolute value of from that of the prime numbers using the following relationship We continue the proof by separating two cases: There exists a positive integer such that ; or For all integer , one has . First case Suppose that there exists a positive integer such that Let be a non-negative integer and be a positive integer greater than . We express in base : there exist a positive integer and integers such that for all , and . In particular, so . Each term is smaller than . (By the multiplicative property, , then using the fact that is a digit, write so by the triangle inequality, .) Besides, is smaller than . By the triangle inequality and the above bound on , it follows: Therefore, raising both sides to the power , we obtain Finally, taking the limit as tends to infinity shows that Together with the condition the above argument leads to regardless of the choice of (otherwise implies ). As a result, all integers greater than one have an absolute value strictly greater than one. Thus generalizing the above, for any choice of integers and greater than or equal to 2, we get i.e. By symmetry, this inequality is an equality. In particular, for all , , i.e. . Because the triangle inequality implies that for all positive integers we have , in this case we obtain more precisely that . 
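For reference, the definitions discussed above correspond to the following standard formulas, stated here in the usual notation as a reading aid alongside the proof.

```latex
% Standard formulas, in the usual notation, for the absolute values discussed above.
\begin{align*}
&\text{Equivalence: } |x|_{2} = |x|_{1}^{\,c} \ \text{ for all } x \in \mathbf{Q}, \text{ for some fixed } c > 0.\\
&\text{Trivial: } |x| = \begin{cases} 0 & \text{if } x = 0,\\ 1 & \text{if } x \neq 0.\end{cases}\\
&\text{Real: } |x|_{\infty} = \begin{cases} x & \text{if } x \ge 0,\\ -x & \text{if } x < 0.\end{cases}\\
&\text{$p$-adic: } \Bigl|\, p^{n}\tfrac{a}{b} \,\Bigr|_{p} = p^{-n} \ \text{ for } a,b \text{ coprime and not divisible by } p, \qquad |0|_{p} = 0.
\end{align*}
```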
As per the above result on the determination of an absolute value by its values on the prime numbers, we easily see that for all rational , thus demonstrating equivalence to the real absolute value. Second case Suppose that for all integer , one has . As our absolute value is non-trivial, there must exist a positive integer for which Decomposing on the prime numbers shows that there exists such that . We claim that in fact this is so for one prime number only. Suppose per contra that and are two distinct primes with absolute value strictly less than 1. Let be a positive integer such that and are smaller than . By Bézout's identity, since and are coprime, there exist two integers and such that This yields a contradiction, as This means that there exists a unique prime such that and that for all other prime , one has (from the hypothesis of this second case). Let . From , we infer that . (And indeed in this case, all positive give absolute values equivalent to the p-adic one.) We finally verify that and that for all other prime , . As per the above result on the determination of an absolute value by its values on the prime numbers, we conclude that for all rational , implying that this absolute value is equivalent to the -adic one. Another Ostrowski's theorem Another theorem states that any field, complete with respect to an Archimedean absolute value, is (algebraically and topologically) isomorphic to either the real numbers or the complex numbers. This is sometimes also referred to as Ostrowski's theorem. See also Valuation (algebra) References Theorems in algebraic number theory
Ostrowski's theorem
Mathematics
1,069
37,281,073
https://en.wikipedia.org/wiki/Boletus%20subvelutipes
Boletus subvelutipes, commonly known as the red-mouth bolete, is a bolete fungus in the family Boletaceae. It is found in Asia and North America, where it fruits on the ground in a mycorrhizal association with both deciduous and coniferous trees. Its fruit bodies (mushrooms) have a brown to reddish-brown cap, bright yellow cap flesh, and a stem covered by furfuraceous to punctate ornamentation and dark red hairs at the base. Its flesh instantly stains blue when cut, but slowly fades to white. The fruit bodies are poisonous, causing gastroenteritis if consumed. Taxonomy The species was originally described by American mycologist Charles Horton Peck in 1889 from specimens collected in Saratoga, New York. In 1947 Rolf Singer described form glabripes from specimens he collected in Alachua County, Gainesville, Florida. Synonyms include names resulting from generic transfers to the genera Suillus by Otto Kuntze in 1888, and to Suillelus by William Alphonso Murrill in 1948. The mushroom is commonly known as the "red-mouth bolete". In his original description, Peck called it the "velvety-stemmed bolete". Description The cap is initially convex, but flattens out as it matures, attaining a diameter of wide. The cap surface is dry, with a velvet-like texture when young, sometimes developing cracks in maturity. The cap color ranges from cinnamon-brown to yellow-brown to reddish brown or reddish orange to orange-yellow. The bright yellow flesh has no distinctive odor, and a taste ranging from mild to slightly acidic. The pore surface on the underside of the cap is variably colored: in young specimens, this ranges from red to brownish red to dark maroon-red, or red-orange to orange; the color fades in older individuals. The circular pores number about 2 per millimeter, and the tubes comprising the hymenophore are deep. The stem is long by thick, and nearly equal in width throughout its length. It is solid (i.e., not hollow) with a furfuraceous surface (appearing to be covered in bran-like particles), and mature individuals usually have short, stiff hairs at the base. All parts of the mushroom – cap, pore surface, flesh, and stipe – will quickly stain dark blue if injured or cut. Boletus subvelutipes produces a dark olive-brown spore print. Spores are roughly spindle-shaped to somewhat swollen in the middle, smooth, and measure 13–18 by 5–6.5 μm. The fruit bodies are poisonous, causing gastroenteritis if consumed. The mushrooms, however, can be used in mushroom dyeing to produce beige or light brown colors, depending on the mordant used. Similar species Boletus gansuensis, found in the Gansu Province of China, is similar in appearance to B. subvelutipes. The Chinese species can be distinguished by longer and narrower spores measuring 12.0–15.5 by 6.0–7.0 μm, smaller fruit bodies with a cap diameter of and shorter tubes up to deep. Habitat and distribution The fruit bodies of Boletus subvelutipes grow on the ground singly, scattered, or in groups. A mycorrhizal species, the fungus associates with deciduous trees, typically oak, and also with pines such as hemlock. Fruit bodies have a strong ability to capture and neutralize the chemical methyl mercaptan, one of the main odiferous compounds associated with bad breath. This ability is conferred largely by the pigment variegatic acid. In North America, its distribution includes eastern Canada and extends south to Florida and west to Minnesota. It is also in Mexico. In Asia, it has also been found in the central highlands of Taiwan and in Japan.
See also List of North American boletes References External links subvelutipes Fungi described in 1889 Fungi of Asia Fungi of North America Poisonous fungi Taxa named by Charles Horton Peck Fungus species
Boletus subvelutipes
Biology,Environmental_science
837
17,926,485
https://en.wikipedia.org/wiki/Secondary%20poisoning
Secondary poisoning, or relay toxicity, is the poisoning that results when one organism comes into contact with or ingests another organism that has poison in its system. It typically occurs when a predator eats an animal, such as a mouse, rat, or insect, that has previously been poisoned by a commercial pesticide. If the level of toxicity in the prey animal is sufficiently high, it will harm the predator. Animals susceptible to secondary poisoning include humans, pets such as cats and dogs, as well as wild birds. Pesticides Various pesticides such as rodenticides may cause secondary poisoning. Some pesticides require multiple feedings spanning several days; this increases the time a target organism continues to move after ingestion, raising the risk of secondary poisoning of a predator. Most slow-acting poisons for pests have cumulative effects and so can cause secondary poisoning and environmental pollution. References Toxicology
Secondary poisoning
Environmental_science
183
22,497,840
https://en.wikipedia.org/wiki/Abalone%20%28molecular%20mechanics%29
Abalone is a general-purpose molecular dynamics and molecular graphics program for simulations of biomolecules under periodic boundary conditions in explicit (flexible SPC water model) or implicit water models. It is mainly designed to simulate protein folding and DNA-ligand complexes in the AMBER force field. Key features 3D molecular graphics Automatic Force Field generator for bioelements: H, C, N, O Building and editing chemical structures Library of building blocks Force fields: Assisted Model Building with Energy Refinement (AMBER) 94, 96, 99SB, 03; Optimized Potentials for Liquid Simulations (OPLS) Geometry optimizing Molecular dynamics with multiple time step integrator Hybrid Monte Carlo Replica exchange Interface with quantum chemistry - ORCA, NWChem, Firefly (PC GAMESS), CP2K GPU accelerated molecular modeling See also References External links Benchmarking Monte Carlo software Computational chemistry software Molecular dynamics software Molecular modelling software Science software
Abalone (molecular mechanics)
Chemistry
190
36,722,187
https://en.wikipedia.org/wiki/Ajka%20Crystal
Ajka Crystal is a Hungarian manufacturer of crystal created in 1878 by Bernard Neumann. The company, one of the biggest in Central Europe, produces unique, handmade pieces of glass art. Ajka Crystal also goes under the name of "The Romanov Collection" in the United States. Ajka Crystal exports 90% of the factory's total production – both in tableware (stemware, tumblers etc...) and in giftware (vases, bowls) – for brands such as Wedgwood, Tiffany's, Rosenthal, Waterford Crystal, Polo Ralph Lauren, Christian Dior, Moser and other high-end French crystal manufacturers. Ajka Crystal is located in Ajka, Hungary. References Glassmaking companies Manufacturing companies of Hungary Hungarian brands Glass trademarks and brands Veszprém County 1895 establishments in Austria-Hungary
Ajka Crystal
Materials_science,Engineering
172
69,680,294
https://en.wikipedia.org/wiki/CCIR%20System%20K
CCIR System K is an analog broadcast television system used in countries that adopted CCIR System D on VHF, and in Benin, Guinea, Republic of the Congo, Togo, Senegal, Burkina Faso, Burundi, Ivory Coast, Gabon, Comoros, Democratic Republic of the Congo, Madagascar, Mali, Nigeria, Réunion, Rwanda, Chad, Central African Republic, French Polynesia, New Caledonia, Wallis and Futuna, Guadeloupe, Martinique, Saint Pierre and Miquelon and French Guiana. It is identical to System D in most respects. Used only for UHF frequencies, it is paired with the SECAM or PAL color systems. Specifications Some of the important specs are listed below. Frame rate: 25 Hz Interlace: 2/1 Field rate: 50 Hz Lines/frame: 625 Line rate: 15.625 kHz Visual bandwidth: 6 MHz Vision modulation: Negative Preemphasis: 50 μs Sound modulation: FM Sound offset: +6.5 MHz Channel bandwidth: 8 MHz Television channels were arranged as follows: System K1 French overseas departments and territories used a variation named System K1 for broadcast in VHF. UHF channels were similar to K. See also Broadcast television systems Television transmitter Transposer Notes and references External links World Analogue Television Standards and Waveforms Fernsehnormen aller Staaten und Gebiete der Welt ITU-R recommendations Television technology Video formats Broadcast engineering CCIR System
CCIR System K
Technology,Engineering
288
7,023,718
https://en.wikipedia.org/wiki/Albedo%20%28alchemy%29
In alchemy, albedo, or leucosis, is the second of the four major stages of the Magnum Opus, along with nigredo, citrinitas and rubedo. It is a Latinized term meaning "whiteness". Following the chaos or massa confusa of the nigredo stage, the alchemist undertakes a purification in albedo, which is literally referred to as ablutio – the washing away of impurities. This phase is concerned with "bringing light and clarity to the prima materia (the First Matter)". In this process, the subject is divided into two opposing principles to be later coagulated to form a unity of opposites or coincidentia oppositorum during rubedo. Alchemists also applied it to an individual's soul after the first phase was completed, which entailed the decay of matter. In Medieval literature, which developed an intricate system of images and symbols for alchemy, the dove often represented this stage, while the raven symbolized nigredo. Titus Burckhardt interprets the albedo as the end of the lesser work, corresponding to a spiritualization of the body, claiming that the goal of this portion of the process is to regain the original purity and receptivity of the soul. Psychology Psychologist Carl Jung equated the albedo with unconscious contrasexual soul images; the anima in men and animus in women. It is a phase where insight into shadow projections is realized, and inflated ego and unneeded conceptualizations are removed from the psyche. Another interpretation describes albedo as an experience of awakening that involves a shift in consciousness where the world becomes more than just an individual's ego, his family, or country. References Nigel Hamilton. "The Alchemical Process of Transformation." 1985. Notes Alchemical processes
Albedo (alchemy)
Chemistry
382
8,469,771
https://en.wikipedia.org/wiki/CCL13
Chemokine (C-C motif) ligand 13 (CCL13) is a small cytokine belonging to the CC chemokine family. Its gene is located on human chromosome 17 within a large cluster of other CC chemokines. CCL13 induces chemotaxis in monocytes, eosinophils, T lymphocytes, and basophils by binding cell surface G-protein linked chemokine receptors such as CCR2, CCR3 and CCR5. Activity of this chemokine has been implicated in allergic reactions such as asthma. CCL13 can be induced by the inflammatory cytokines interleukin-1 and TNF-α. References Cytokines
CCL13
Chemistry
157
22,771,431
https://en.wikipedia.org/wiki/Seraphinite
Seraphinite is a trade name for a particular form of clinochlore, a member of the chlorite group. Seraphinite apparently acquired its name from its resemblance to feathers, a result of its chatoyancy; the name refers to the biblical seraphs, or seraphim angels. With some specimens the resemblance is quite strong, with shorter down-like feathery growths leading into longer "flight feathers"; the resemblance even spurs fanciful marketing phrases like "silver plume seraphinite." Seraphinite is generally dark green to gray in color, is chatoyant, and has a hardness between 2 and 4 on the Mohs scale of mineral hardness. It is mined in a limited area of eastern Siberia in Russia, occurring in the Korshunovskoye iron skarn deposit in the Irkutskaya Oblast of Eastern Siberia. Russian mineralogist Nikolay Koksharov (1818-1892 or 1893) is often credited with its discovery. References Gemstones Phyllosilicates
Seraphinite
Physics
216
633,593
https://en.wikipedia.org/wiki/Carbon%20steel
Carbon steel is a steel with carbon content from about 0.05 up to 2.1 percent by weight. The definition of carbon steel from the American Iron and Steel Institute (AISI) states: no minimum content is specified or required for chromium, cobalt, molybdenum, nickel, niobium, titanium, tungsten, vanadium, zirconium, or any other element to be added to obtain a desired alloying effect; the specified minimum for copper does not exceed 0.40%; or the specified maximum for any of the following elements does not exceed: manganese 1.65%; silicon 0.60%; and copper 0.60%. As the carbon content percentage rises, steel has the ability to become harder and stronger through heat treating; however, it becomes less ductile. Regardless of the heat treatment, a higher carbon content reduces weldability. In carbon steels, the higher carbon content lowers the melting point. The term is also used to refer to steel that is not stainless steel; in this use carbon steel may include alloy steels. High-carbon steel has many uses, such as milling machines, cutting tools (such as chisels) and high-strength wires. These applications require a much finer microstructure, which improves toughness. Properties Carbon steel is often divided into two main categories: low-carbon steel and high-carbon steel. It may also contain other elements, such as manganese, phosphorus, sulfur, and silicon, which can affect its properties. Carbon steel can be easily machined and welded, making it versatile for various applications. It can also be heat treated to improve its strength, hardness, and durability. Carbon steel is susceptible to rust and corrosion, especially in environments with high moisture levels and/or salt. It can be shielded from corrosion by coating it with paint, varnish, or other protective material. Alternatively, a stainless steel alloy containing chromium can be used instead, as chromium provides excellent corrosion resistance. Carbon steel can be alloyed with other elements to improve its properties, such as by adding chromium and/or nickel to improve its resistance to corrosion and oxidation or adding molybdenum to improve its strength and toughness at high temperatures. It is an environmentally friendly material, as it is easily recyclable and can be reused in various applications. It is energy-efficient to produce, as it requires less energy than other metals such as aluminium and copper. Type Mild or low-carbon steel Mild steel (iron containing a small percentage of carbon, strong and tough but not readily tempered), also known as plain-carbon steel and low-carbon steel, is now the most common form of steel because its price is relatively low while it provides material properties that are acceptable for many applications. Mild steel contains approximately 0.05–0.30% carbon, making it malleable and ductile. Mild steel has a relatively low tensile strength, but it is cheap and easy to form. Surface hardness can be increased with carburization. The density of mild steel is approximately 7.85 g/cm3 (7,850 kg/m3) and its Young's modulus is about 200 GPa. Low-carbon steels display yield-point runout, where the material has two yield points. The first yield point (or upper yield point) is higher than the second and the yield drops dramatically after the upper yield point. If a low-carbon steel is only stressed to some point between the upper and lower yield point then the surface develops Lüders bands. Low-carbon steels contain less carbon than other steels and are easier to cold-form, making them easier to handle. 
Typical applications of low carbon steel are car parts, pipes, construction, and food cans. High-tensile steel High-tensile steels are low-carbon steels, or steels at the lower end of the medium-carbon range, which have additional alloying ingredients in order to increase their strength, wear properties or specifically tensile strength. These alloying ingredients include chromium, molybdenum, silicon, manganese, nickel, and vanadium. Impurities such as phosphorus and sulfur have their maximum allowable content restricted. 41xx steel 4140 steel 4145 steel 4340 steel 300M steel EN25 steel – 2.5% nickel-chromium-molybdenum steel EN26 steel Higher-carbon steels Carbon steels which can successfully undergo heat-treatment have a carbon content in the range of 0.30–1.70% by weight. Trace impurities of various other elements can significantly affect the quality of the resulting steel. Trace amounts of sulfur in particular make the steel red-short, that is, brittle and crumbly at high working temperatures. Low-alloy carbon steel, such as A36 grade, contains about 0.05% sulfur and melts around 1,426–1,538 °C. Manganese is often added to improve the hardenability of low-carbon steels. These additions turn the material into a low-alloy steel by some definitions, but AISI's definition of carbon steel allows up to 1.65% manganese by weight. There are two types of higher-carbon steel: high-carbon steel and ultra-high-carbon steel. The reason for the limited use of high-carbon steel is that it has extremely poor ductility and weldability and has a higher cost of production. The applications best suited to high-carbon steels are springs, farm implements, and the production of a wide range of high-strength wires. AISI classification The following classification method is based on the American AISI/SAE standard. Other international standards include DIN (Germany), GB (China), BS/EN (UK), AFNOR (France), UNI (Italy), SS (Sweden), UNE (Spain), JIS (Japan), the ASTM standards, and others. Carbon steel is broken down into four classes based on carbon content: Low-carbon steel Low-carbon steel (plain carbon steel) has 0.05 to 0.15% carbon content. Medium-carbon steel Medium-carbon steel has approximately 0.3–0.5% carbon content. It balances ductility and strength and has good wear resistance. It is used for large parts, forging and automotive components. High-carbon steel High-carbon steel has approximately 0.6 to 1.0% carbon content. It is very strong, and is used for springs, edged tools, and high-strength wires. Ultra-high-carbon steel Ultra-high-carbon steel has approximately 1.25–2.0% carbon content. These steels can be tempered to great hardness and are used for special purposes such as (non-industrial-purpose) knives, axles, and punches. Most steels with more than 2.5% carbon content are made using powder metallurgy. Heat treatment The purpose of heat treating carbon steel is to change the mechanical properties of steel, usually ductility, hardness, yield strength, or impact resistance. Note that the electrical and thermal conductivity are only slightly altered. As with most strengthening techniques for steel, Young's modulus (elasticity) is unaffected. All treatments of steel trade ductility for increased strength and vice versa. Iron has a higher solubility for carbon in the austenite phase; therefore all heat treatments, except spheroidizing and process annealing, start by heating the steel to a temperature at which the austenitic phase can exist. 
The steel is then quenched (heat drawn out) at a moderate to low rate, allowing carbon to diffuse out of the austenite and form iron carbide (cementite) while leaving ferrite, or at a high rate, trapping the carbon within the iron and thus forming martensite. The rate at which the steel is cooled through the eutectoid temperature (about 727 °C) affects the rate at which carbon diffuses out of austenite and forms cementite. Generally speaking, cooling swiftly will leave iron carbide finely dispersed and produce a fine-grained pearlite, while cooling slowly will give a coarser pearlite. Cooling a hypoeutectoid steel (less than 0.77 wt% C) results in a lamellar-pearlitic structure of iron carbide layers with α-ferrite (nearly pure iron) between. If it is a hypereutectoid steel (more than 0.77 wt% C) then the structure is full pearlite with small grains (larger than the pearlite lamellae) of cementite formed on the grain boundaries. A eutectoid steel (0.77% carbon) will have a pearlite structure throughout the grains with no cementite at the boundaries. The relative amounts of constituents are found using the lever rule; a short illustrative sketch follows the heat-treatment subsections below. The following is a list of the types of heat treatments possible: Spheroidizing Spheroidite forms when carbon steel is heated to approximately 700 °C for over 30 hours. Spheroidite can form at lower temperatures, but the time needed drastically increases, as this is a diffusion-controlled process. The result is a structure of rods or spheres of cementite within a primary structure (ferrite or pearlite, depending on which side of the eutectoid composition the steel lies). The purpose is to soften higher-carbon steels and allow more formability. This is the softest and most ductile form of steel. Full annealing A hypoeutectoid carbon steel (carbon composition smaller than the eutectoid one) is heated to slightly above the austenitic temperature (A3), whereas a hypereutectoid steel is heated to a temperature above the eutectoid one (A1), for a certain number of hours; this ensures all the ferrite transforms into austenite (although cementite might still exist in hypereutectoid steels). The steel must then be cooled slowly, in the realm of 20 °C (36 °F) per hour. Usually it is just furnace cooled, where the furnace is turned off with the steel still inside. This results in a coarse pearlitic structure, which means the "bands" of pearlite are thick. Fully annealed steel is soft and ductile, with no internal stresses, which is often necessary for cost-effective forming. Only spheroidized steel is softer and more ductile. Process annealing A process used to relieve stress in a cold-worked carbon steel with less than 0.3% C. The steel is usually heated to below the lower critical temperature and held for about 1 hour, though sometimes higher temperatures are used. Isothermal annealing It is a process in which hypoeutectoid steel is heated above the upper critical temperature. This temperature is maintained for a time and then reduced to below the lower critical temperature and is again maintained. It is then cooled to room temperature. This method eliminates any temperature gradient. Normalizing Carbon steel is heated to above its upper critical temperature and held for about 1 hour; this ensures the steel completely transforms to austenite. The steel is then air-cooled, a faster cooling rate than furnace cooling. This results in a fine, more uniform pearlitic structure. Normalized steel has a higher strength than annealed steel; it has a relatively high strength and hardness. 
Quenching Carbon steel with at least 0.4 wt% C is heated to normalizing temperatures and then rapidly cooled (quenched) in water, brine, or oil to the critical temperature. The critical temperature is dependent on the carbon content, but as a general rule is lower as the carbon content increases. This results in a martensitic structure, a form of steel that possesses a super-saturated carbon content in a deformed body-centered cubic (BCC) crystalline structure, properly termed body-centered tetragonal (BCT), with much internal stress. Thus quenched steel is extremely hard but brittle, usually too brittle for practical purposes. These internal stresses may cause stress cracks on the surface. Quenched steel is approximately three times harder (four with more carbon) than normalized steel. Martempering (marquenching) Martempering is not actually a tempering procedure, hence the term marquenching. It is a form of isothermal heat treatment applied after an initial quench, typically in a molten salt bath, at a temperature just above the "martensite start temperature". At this temperature, residual stresses within the material are relieved and some bainite may be formed from the retained austenite which did not have time to transform into anything else. In industry, this is a process used to control the ductility and hardness of a material. With longer marquenching, the ductility increases with a minimal loss in strength; the steel is held in this solution until the inner and outer temperatures of the part equalize. Then the steel is cooled at a moderate speed to keep the temperature gradient minimal. Not only does this process reduce internal stresses and stress cracks, but it also increases impact resistance. Tempering This is the most common heat treatment encountered, because the final properties can be precisely determined by the temperature and time of the tempering. Tempering involves reheating quenched steel to a temperature below the eutectoid temperature and then cooling. The elevated temperature allows very small amounts of spheroidite to form, which restores ductility but reduces hardness. Actual temperatures and times are carefully chosen for each composition. Austempering The austempering process is the same as martempering, except the quench is interrupted and the steel is held in the molten salt bath at a temperature above the martensite start temperature, and then cooled at a moderate rate. The resulting microstructure, called bainite, is acicular and gives the steel great strength (but less than martensite), greater ductility, higher impact resistance, and less distortion than martensitic steel. The disadvantage of austempering is that it can be used on only a few steels, and it requires a special salt bath. Case hardening Case hardening processes harden only the exterior of the steel part, creating a hard, wear-resistant skin (the "case") but preserving a tough and ductile interior. Carbon steels are not very hardenable, meaning they cannot be hardened throughout thick sections. Alloy steels have better hardenability, so they can be through-hardened and do not require case hardening. This property of carbon steel can be beneficial, because it gives the surface good wear characteristics but leaves the core flexible and shock-absorbing. 
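The carbon-content classes and the lever rule referred to above lend themselves to a short illustration. The following sketch is illustrative only: the class boundaries are the approximate ranges quoted in the classification section (the quoted ranges leave small gaps, which are glossed over here), and the phase-boundary compositions (0.022 wt% C for ferrite, 6.70 wt% C for cementite, 0.77 wt% C for the eutectoid) are standard textbook values rather than figures taken from this entry.

```python
# Illustrative sketch only; boundaries are approximate and the phase
# compositions are standard textbook values, not data from this article.

def classify_carbon_steel(wt_pct_c: float) -> str:
    """Rough classification by carbon content (approximate AISI-style ranges)."""
    if wt_pct_c < 0.05:
        return "below the usual carbon-steel range"
    if wt_pct_c <= 0.15:
        return "low-carbon steel"
    if wt_pct_c <= 0.30:
        return "mild steel"
    if wt_pct_c <= 0.50:
        return "medium-carbon steel"
    if wt_pct_c <= 1.00:
        return "high-carbon steel"
    if wt_pct_c <= 2.00:
        return "ultra-high-carbon steel"
    return "usually made by powder metallurgy"

def lever_rule(c_overall: float, c_low: float, c_high: float) -> tuple[float, float]:
    """Mass fractions of the low- and high-carbon constituents."""
    f_high = (c_overall - c_low) / (c_high - c_low)
    return 1.0 - f_high, f_high

C_FERRITE, C_CEMENTITE, C_EUTECTOID = 0.022, 6.70, 0.77  # wt% C (textbook values)

carbon = 0.40  # a hypothetical medium-carbon steel, slowly cooled
print(classify_carbon_steel(carbon))
ferrite, cementite = lever_rule(carbon, C_FERRITE, C_CEMENTITE)     # total phase fractions
pro_ferrite, pearlite = lever_rule(carbon, C_FERRITE, C_EUTECTOID)  # microconstituent fractions
print(f"ferrite ~{ferrite:.2f}, cementite ~{cementite:.2f}, pearlite ~{pearlite:.2f}")
```

For a 0.40 wt% C steel this gives roughly 94% ferrite and 6% cementite overall, split into about half proeutectoid ferrite and half pearlite, which is consistent with the slow-cooling behaviour described above.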
Forging temperature of steel See also Aermet Cold working Eglin steel (a low-cost precipitation-hardened high-strength steel) Forging Hot working Maraging steel (precipitation-hardened high-strength steels) Welding (high-strength steels) References Bibliography Steels Metallurgical processes
Carbon steel
Chemistry,Materials_science
3,045
65,915,759
https://en.wikipedia.org/wiki/Lazarus%20Ercker
Lazarus Ercker (c. 1530 – 1594) was a Bohemian metallurgist and assay master of a mint near Prague who wrote some of the earliest known treatises on metallurgy, entitled Beschreibung allerfürnemisten mineralischen Ertzt und Berckwercksarten (1574) and Münzbuch, wie es mit den Münzen gehalten sind (1563). Life Ercker was born at St. Annenberg (Annaberg, Saxony) around 1530 and studied at the University of Wittenberg between 1547 and 1548. Around 1554 he became an assayer at Dresden through the patronage of Elector Augustus with the influence of Johann Neese (a relative of his wife). In 1558 he became master of the mint at Goslar for Prince Henry of Brunswick. In 1567 his wife died and he tried to return to Dresden. His brother-in-law Caspar Richter helped him get a job as a tester at Kutna Hora near Prague. His 1574 book Beschreibung allerfürnemisten mineralischen Ertzt und Berckwercksarten described the production of alloys and the refining of several metals including silver, gold, copper, antimony, bismuth, tin, lead and mercury. Ercker's book contains the first recorded use of the word "wolfram", for a mineral found in Saxony which Ercker thought contained tin and which was only much later identified as an ore of the element tungsten. The book went through several editions and led to his appointment as courier for mining affairs under Emperor Maximilian II. Under Rudolf II he became master of the mint in Prague and was knighted (and known as Lazarus Ercker von Schreckenfels) on 10 March 1586. The eighth edition of his book, published in 1672, was retitled Aula subterranea alias Probierbuch. Ercker's 1574 book was translated into English by Sir John Pettus as Fleta Minor in 1683, with the original woodcuts redrawn with some modifications. His book was also plagiarized by Georg Engelhardt von Löhneyss in his Bericht vom Bergwerck (1617). References External links Fleta Minor by Sir John Pettus Scanned editions of Ercker's publication Metallurgists 16th-century people from Bohemia 1530s births 1594 deaths
Lazarus Ercker
Chemistry,Materials_science
504
33,270,868
https://en.wikipedia.org/wiki/Eigenmoments
EigenMoments are a set of moments that are orthogonal, noise-robust, invariant to rotation, scaling and translation, and sensitive to the underlying distribution. Their application can be found in signal processing and computer vision as descriptors of the signal or image. The descriptors can later be used for classification purposes. They are obtained by performing orthogonalization, via eigen analysis, on geometric moments. Framework summary EigenMoments are computed by performing eigen analysis on the moment space of an image, maximizing the signal-to-noise ratio in the feature space in the form of a Rayleigh quotient. This approach has several benefits in image processing applications: The dependency of moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis on the moment space. The ability of EigenMoments to take into account the distribution of the image makes them more versatile and adaptable for different genres. Generated moment kernels are orthogonal, and therefore analysis on the moment space becomes easier. Transformation with orthogonal moment kernels into moment space is analogous to projection of the image onto a number of orthogonal axes. Noisy components can be removed. This makes EigenMoments robust for classification applications. Optimal information compaction can be obtained and therefore only a few moments are needed to characterize the images. Problem formulation Assume that a signal vector is taken from a certain distribution having correlation , i.e. where E[.] denotes expected value. The dimension of the signal space, n, is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space of lower dimensionality. This is performed by a two-step linear transformation: where is the transformed signal, a fixed transformation matrix which transforms the signal into the moment space, and the transformation matrix which we are going to determine by maximizing the SNR of the feature space resided by . For the case of geometric moments, X would be the monomials. If , a full-rank transformation would result; however, usually we have and . This is especially the case when is of high dimension. Finding that maximizes the SNR of the feature space: where N is the correlation matrix of the noise signal. The problem can thus be formulated as subject to constraints: where is the Kronecker delta. It can be observed that this maximization is a Rayleigh quotient by letting and , and therefore it can be written as: , Rayleigh quotient Optimization of the Rayleigh quotient has the form: and and ; both are symmetric and is positive definite and therefore invertible. Scaling does not change the value of the objective function, hence an additional scalar constraint can be imposed on , and no solution is lost when the objective function is optimized. This constrained optimization problem can be solved using a Lagrange multiplier: subject to Equating the first derivative to zero, we have: which is an instance of the generalized eigenvalue problem (GEP). The GEP has the form: for any pair that is a solution to the above equation, is called a generalized eigenvector and is called a generalized eigenvalue. Finding a pair that satisfies these equations produces a result which optimizes the Rayleigh quotient. One way of maximizing the Rayleigh quotient is therefore to solve the generalized eigenvalue problem; a short illustrative sketch of this construction appears at the end of this entry. 
Dimension reduction can be performed by simply choosing the first components , , with the highest values for out of the components, and discarding the rest. This transformation can be interpreted as rotating and scaling the moment space, transforming it into a feature space with maximized SNR; therefore, the first components are the components with the highest SNR values. The other way to look at this solution is to use the concept of simultaneous diagonalization instead of the generalized eigenvalue problem. Simultaneous diagonalization Let and as mentioned earlier. We can write as two separate transformation matrices: can be found by first diagonalizing B: , where is a diagonal matrix sorted in increasing order. Since is positive definite, thus . We can discard those eigenvalues that are large and retain those close to 0, since this means the energy of the noise is close to 0 in this space; at this stage it is also possible to discard those eigenvectors that have large eigenvalues. Let be the first columns of ; now , where is the principal submatrix of . Let and hence: . whitens and reduces the dimensionality from to . The transformed space resided by is called the noise space. Then, we diagonalize : , where . is the matrix with the eigenvalues of on its diagonal. We may retain all the eigenvalues and their corresponding eigenvectors, since most of the noise has already been discarded in the previous step. Finally the transformation is given by: where diagonalizes both the numerator and denominator of the SNR, , and the transformation of the signal is defined as . Information loss To find the information loss when we discard some of the eigenvalues and eigenvectors we can perform the following analysis: Eigenmoments Eigenmoments are derived by applying the above framework to geometric moments. They can be derived for both 1D and 2D signals. 1D signal If we let , i.e. the monomials, after the transformation we obtain geometric moments, denoted by the vector , of the signal , i.e. . In practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples; therefore parametric approaches are utilized. One such model can be defined as: , where . This model of correlation can be replaced by other models; however, this model covers general natural images. Since does not affect the maximization it can be dropped. The correlation of noise can be modelled as , where is the energy of noise. Again can be dropped because the constant does not have any effect on the maximization problem. Using the computed A and B and applying the algorithm discussed in the previous section, we find a set of transformed monomials which produces the moment kernels of EM. The moment kernels of EM decorrelate the correlation in the image. , and are orthogonal: Example computation Taking the dimension of the moment space as and the dimension of the feature space as , we will have: and 2D signal The derivation for a 2D signal is the same as for a 1D signal, except that conventional geometric moments are directly employed to obtain the set of 2D EigenMoments. The definition of geometric moments of order for a 2D image signal is: , which can be denoted as . Then the set of 2D EigenMoments is: , where is a matrix that contains the set of EigenMoments: . EigenMoment invariants (EMI) In order to obtain a set of moment invariants we can use normalized geometric moments instead of . Normalized geometric moments are invariant to rotation, scaling and translation, and are defined by: where: is the centroid of the image and . 
in this equation is a scaling factor depending on the image. is usually set to 1 for binary images. See also Computer vision Signal processing Image moment References External links implementation of EigenMoments in Matlab Signal processing Computer vision
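Because many of the formulas in this entry have not survived transcription, the following sketch only illustrates the general 1D construction described above: a monomial (geometric-moment) basis is orthogonalized by solving the generalized eigenvalue problem that maximizes a Rayleigh-quotient SNR. The exponential correlation model, the white-noise model, the parameter rho, and all dimensions are illustrative assumptions, not values taken from the article.

```python
# Minimal, illustrative sketch of 1D EigenMoment kernels via a generalized
# eigenvalue problem. Correlation/noise models and parameters are assumptions.
import numpy as np
from scipy.linalg import eigh

def eigenmoment_kernels(n: int, k: int, rho: float = 0.9) -> np.ndarray:
    """Return a (k, n) matrix whose rows project a length-n signal onto k EigenMoments."""
    t = np.linspace(-1.0, 1.0, n)
    X = np.vander(t, N=k, increasing=True).T      # monomial (geometric-moment) basis, shape (k, n)
    C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # assumed signal correlation
    N = np.eye(n)                                  # assumed white-noise correlation
    A = X @ C @ X.T                                # "signal" matrix in moment space
    B = X @ N @ X.T                                # "noise" matrix in moment space
    vals, vecs = eigh(A, B)                        # generalized eigenproblem A w = lambda B w
    order = np.argsort(vals)[::-1]                 # keep the highest-SNR directions first
    W = vecs[:, order]
    return W.T @ X                                 # kernels acting directly on the signal

# Example: project a noisy signal onto the first 4 EigenMoment kernels.
kernels = eigenmoment_kernels(n=64, k=4)
signal = np.sin(np.linspace(0, np.pi, 64)) + 0.05 * np.random.randn(64)
features = kernels @ signal
```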
Eigenmoments
Technology,Engineering
1,457
14,018,049
https://en.wikipedia.org/wiki/Influence%20of%20the%20IBM%20PC%20on%20the%20personal%20computer%20market
Following the introduction of the IBM Personal Computer (IBM PC) in 1981, many other personal computer architectures became extinct within just a few years. It led to a wave of IBM PC compatible systems being released. Before the IBM PC's introduction Before the IBM PC was introduced, the personal computer market was dominated by systems using the 6502 and Z80 8-bit microprocessors, such as the TRS 80, PET, and Apple II, which used proprietary operating systems, and by computers running CP/M. After IBM introduced the IBM PC, it was not until 1984 that IBM PC and clones became the dominant computers. In 1983, Byte forecast that by 1990, IBM would command only 11% of business computer sales. Commodore was predicted to hold a slim lead in a highly competitive market, at 11.9%. Around 1978, several 16-bit CPUs became available. Examples included the Data General mN601, the Fairchild 9440, the Ferranti F100-L, the General Instrument CP1600 and CP1610, the National Semiconductor INS8900, Panafacom's MN1610, Texas Instruments' TMS9900, and, most notably, the Intel 8086. These new processors were expensive to incorporate in personal computers, as they used a 16-bit data bus and needed rare (and thus expensive) 16-bit peripheral and support chips. More than 50 new business-oriented personal computer systems came on the market in the year before IBM released the IBM PC. Very few of them used a 16- or 32-bit microprocessor, as 8-bit systems were generally believed by the vendors to be perfectly adequate, and the Intel 8086 was too expensive to use. Some of the main manufacturers selling 8-bit business systems during this period were: Acorn Computers Apple Computer Atari Inc. Commodore International Cromemco Digital Equipment Corporation Durango Systems Inc. Hewlett-Packard InterSystems Morrow Designs North Star Computers Ohio Scientific Olivetti Processor Technology Sharp South West Technical Products Corporation Tandy Corporation Zenith Data Systems/Heathkit The IBM PC On August 12, 1981, IBM released the IBM Personal Computer. One of the most far-reaching decisions made for IBM PC was to use an open architecture, leading to a large market for third party add-in boards and applications; but finally also to many competitors all creating "IBM-compatible" machines. The IBM PC used the then-new Intel 8088 processor. Like other 16-bit CPUs, it could access up to 1 megabyte of RAM, but it used an 8-bit-wide data bus to memory and peripherals. This design allowed use of the large, readily available, and relatively inexpensive family of 8-bit-compatible support chips. IBM decided to use the Intel 8088 after first considering the Motorola 68000 and the Intel 8086, because the other two were considered to be too powerful for their needs. Although already established rivals like Apple and Radio Shack had many advantages over the company new to microcomputers, IBM's reputation in business computing allowed the IBM PC architecture to take a substantial market share of business applications, and many small companies that sold IBM-compatible software or hardware rapidly grew in size and importance, including Tecmar, Quadram, AST Research, and Microsoft. As of mid-1982, three other mainframe and minicomputer companies sold microcomputers, but unlike IBM, Hewlett-Packard, Xerox, and Control Data Corporation chose the CP/M operating system. Many other companies made "business personal computers" using their own proprietary designs, some still using 8-bit microprocessors. 
The ones that used Intel x86 processors often used the generic, non-IBM-compatible specific version of MS-DOS or CP/M-86, just as 8-bit systems with an Intel 8080 compatible CPU normally used CP/M. The use of MS-DOS on non-IBM PC compatible systems Within a year of the IBM PC's introduction, Microsoft—the developer of its primary operating system, IBM PC DOS—licensed the operating system generically as MS-DOS to over 70 other companies. One of the first computers to achieve 100% PC compatibility was the Compaq Portable, released in November 1982; it remained the most compatible clone into 1984. Before the PC dominated the market, however, most systems were not clones of the IBM PC design, but had different internal designs, and ran Digital Research's CP/M. The IBM PC was difficult to obtain for several years after its introduction. Many makers of MS-DOS computers intentionally avoided full IBM compatibility because they expected that the market for what InfoWorld described as "ordinary PC clones" would decline. They feared the fate of companies that sold computers plug-compatible with IBM mainframes in the 1960s and 1970s—many of which went bankrupt after IBM changed specifications—and believed that a market existed for personal computers with a similar selection of software to the IBM PC, but with better hardware. While Microsoft used a sophisticated installer with its DOS programs like Multiplan that provided device drivers for many non IBM PC-compatible computers, most other software vendors did not. Columbia University discussed the difficulty of having Kermit support many different clones and MS-DOS computers. Peter Norton, who earlier had encouraged vendors to write software that ran on many different computers, by early 1985 admitted—after experiencing the difficulty of doing so while rewriting Norton Utilities—that "there's no practical way for most software creators to write generic software". Dealers found carrying multiple versions of software for clones of varying levels of compatibility to be difficult. To get the best results out of the 8088's modest performance, many popular software applications were written specifically for the IBM PC. The developers of these programs opted to write directly to the computer's (video) memory and peripheral chips, bypassing MS-DOS and the BIOS. For example, a program might directly update the video refresh memory, instead of using MS-DOS calls and device drivers to alter the appearance of the screen. Many notable software packages, such as the spreadsheet program Lotus 1-2-3, and Microsoft's Microsoft Flight Simulator 1.0, directly accessed the IBM PC's hardware, bypassing the BIOS, and therefore did not work on computers that were even trivially different from the IBM PC. This was especially common among PC games. As a result, the systems that were not fully IBM PC-compatible could not run this software, and quickly became obsolete. Rendered obsolete with them was the CP/M-inherited concept of OEM versions of MS-DOS meant to run (through BIOS calls) on non IBM-PC hardware. Cloning the PC BIOS In 1984, Phoenix Technologies began licensing its clone of the IBM PC BIOS. The Phoenix BIOS and competitors such as AMI BIOS made it possible for anyone to market a PC compatible computer, without having to develop a compatible BIOS like Compaq. 
Decline of the Intel 80186 Although based on the i8086 and enabling the creation of relatively low-cost x86-based systems, the Intel 80186 quickly lost appeal for x86-based PC builders because the supporting circuitry inside the Intel 80186 chip was incompatible with those used in the standard PC chipset as implemented by IBM. It was very rarely used in personal computers after 1982. Domination of the clones "Is it PC compatible?" In February 1984 BYTE described how "the personal computer market seems to be shadowed under a cloud of compatibility: the drive to be compatible with the IBM Personal Computer family has assumed near-fetish proportions", which it stated was "inevitable in the light of the phenomenal market acceptance of the IBM PC". The magazine cited the announcement by North Star in fall 1983 of its first PC-compatible microcomputer. Founded in 1976, North Star had long been successful with 8-bit S-100 bus products, and had introduced proprietary 16-bit products, but now the company acknowledged that the IBM PC had become a "standard", one which North Star needed to follow. BYTE described the announcement as representative of the great impact IBM had made on the industry: The magazine expressed concern that "IBM's burgeoning influence in the PC community is stifling innovation because so many other companies are mimicking Big Blue". Admitting that "it's what our dealers asked for", Kaypro also introduced the company's first IBM compatible that year. Tandy—which had once had as much as 60% of the personal-computer market, but had attempted to keep technical information secret to monopolize software and peripheral sales—also began selling non-proprietary computers; four years after its Jon Shirley predicted to InfoWorld that the new IBM PC's "major market would be IBM addicts", the magazine in 1985 similarly called the IBM compatibility of the Tandy 1000 "no small concession to Big Blue's dominating stranglehold" by a company that had been "struggling openly in the blood-soaked arena of personal computers". The 1000 was compatible with the PC but not compatible with its own Tandy 2000 MS-DOS computer. IBM's mainframe rivals, the BUNCH, introduced their own compatibles, and when Hewlett-Packard introduced the Vectra InfoWorld stated that the company was "responding to demands from its customers for full IBM PC compatibility". Mitch Kapor of Lotus Development Corporation said in 1984 that "either you have to be PC-compatible or very special". "Compatibility has proven to be the only safe path", Microsoft executive Jim Harris stated in 1985, while InfoWorld wrote that IBM's competitors were "whipped into conformity" with its designs, because of "the total failure of every company that tried to improve on the IBM PC". Customers only wanted to run PC applications like 1-2-3, and developers only cared about the massive PC installed base, so any non-compatible—no matter its technical superiority—from a company other than Apple failed for lack of customers and software. Compatibility became so important that Dave Winer joked that year (referring to the PC AT's incomplete compatibility with the IBM PC), "The only company that can introduce a machine that isn't PC compatible and survive is IBM". By 1985, the shortage of IBM PCs had ended, causing financial difficulties for many vendors of compatibles; nonetheless, Harris said, "The only ones that have done worse than the compatibles are the noncompatibles". 
The PC standard was similarly dominant in Europe, with Honeywell Bull, Olivetti, and Ericsson selling compatibles and software companies focusing on PC products. By the end of the year PC Magazine stated that even IBM could no longer introduce a rumored proprietary, non-compatible operating system. Noting that the company's unsuccessful PCjr's "cardinal sin was that it wasn't PC compatible", the magazine wrote that "backward compatibility [with the IBM PC] is the single largest concern of hardware and software developers. The user community is too large and demanding to accept radical changes or abandon solutions that have worked in the past." Within a few years of the introduction of fully compatible PC clones, almost all rival business personal computer systems, and alternate x86 using architectures, were gone from the market. Despite the inherent dangers of an industry based on a de facto "standard", a thriving PC clone industry emerged. The only other non-IBM PC-compatible systems that remained were those systems that were classified as home computers, such as the Apple II, or business systems that offered features not available on the IBM PC, such as a high level of integration (e.g., bundled accounting and inventory) or fault-tolerance and multitasking and multi-user features. Wave of inexpensive clones Compaq's prices were comparable to IBM's, and the company emphasized its PC compatibles' features and quality to corporate customers. From mid-1985, what Compute! described as a "wave" of inexpensive clones from American and Asian companies caused prices to decline; by the end of 1986, the equivalent to a real IBM PC with 256K RAM and two disk drives cost as little as , lower than the price of the Apple IIc. Consumers began purchasing DOS computers for the home in large numbers; Tandy estimated that half of its 1000 sales went to homes, the new Leading Edge Model D comprised 1% of the US home-computer market that year, and toy and discount stores sold a clone manufactured by Hyundai, the Blue Chip PC, like a stereo—without a demonstrator model or salesman. Tandy and other inexpensive clones succeeded with consumers—who saw them as superior to lower-end game machines—where IBM failed two years earlier with the PCjr. They were as inexpensive as home computers of a few years earlier, and comparable in price to the Amiga, Atari ST, and Apple IIGS. Unlike the PCjr, clones were as fast as or faster than the IBM PC and highly compatible so users could bring work home; the large DOS software library reassured those worried about orphaned technology. Consumers used them for both spreadsheets and entertainment, with the former ability justifying buying a computer that could also perform the latter. PCs and compatibles also gained a significant share of the educational market, while longtime leader Apple lost share. At the January 1987 Consumer Electronics Show, both Commodore and Atari announced their own clones. By 1987 the PC industry was growing so quickly that the formerly business-only platform had become the largest and most important market for computer game companies, outselling games for the Apple II or Commodore 64. With the EGA video card, an inexpensive clone was comparable or even better than some other computers for games. MS-DOS software was 77% of all personal computer software sold by dollar value in the third quarter of 1988, up 47% year over year. By 1989 80% of readers of Compute! owned DOS computers, and the magazine announced "greater emphasis on MS-DOS home computing". 
IBM's influence on the industry decreased, as competition increased and rivals introduced computers that improved on IBM's designs while maintaining compatibility. In 1986 the Compaq Deskpro 386 was the first computer based on the Intel 80386. In 1987 IBM unsuccessfully attempted to regain leadership of the market with the Personal System/2 line and proprietary MicroChannel Architecture. Clones conquer the home By 1990, Computer Gaming World told a reader complaining about the many reviews of IBM PC compatible games that "most companies are attempting to get their MS-DOS products out the door, first". It reported that in the US, MS-DOS comprised 65% of the computer-game market, the Amiga at 10%, and all other computers, including the Macintosh, were below 10% and declining. The Amiga and most others, such as the ST and various MSX2 computers, remained on the market until PC compatibles gained sufficient multimedia capabilities to compete with home computers. With the advent of inexpensive versions of the VGA video card and the Sound Blaster sound card (and its clones), most of the remaining home computers were driven from the market. The market in 1990 was more diverse outside the United States, but MS-DOS and Windows machines nonetheless came to dominate by the end of the decade. By 1995, other than the Macintosh, almost no new consumer-oriented systems were sold that were not IBM PC clones. Throughout the 1990s Apple transitioned the Macintosh from proprietary expansion interfaces to standards such as IDE, PCI, and USB. In 2006, Apple switched the Macintosh to the Intel x86 architecture, allowing them to optionally boot into Microsoft Windows, while still retaining unique design elements to support Apple's Mac OS X operating system. In 2008, Sid Meier listed the IBM PC as one of the three most important innovations in the history of video games. Systems launched shortly after the IBM PC Shortly after the IBM PC was released, an obvious split appeared between systems that opted to use an x86-compatible processor, and those that chose another architecture. Almost all of the x86 systems provided a version of MS-DOS. The others used many different operating systems, although the Z80-based systems typically offered a version of CP/M. The common usage of MS-DOS unified the x86-based systems, promoting growth of the x86/MS-DOS "ecosystem". As the non-x86 architectures died off, and x86 systems standardized into fully IBM PC compatible clones, a market filled with dozens of different competing systems was reduced to a near-monoculture of x86-based, IBM PC compatible, MS-DOS systems. x86-based systems (using OEM-specific versions of MS-DOS) Early after the launch of the IBM PC in 1981, there were still dozens of systems that were not IBM PC-compatible, but did use Intel x86 chips. They used Intel 8088, 8086, or 80186 processors, and almost without exception offered an OEM version of MS-DOS (as opposed to the OEM version customized for IBM's use). However, they generally made no attempt to copy the IBM PC's architecture, so these machines had different I/O addresses, a different system bus, different video controllers, and other differences from the original IBM PC. These differences, which were sometimes rather minor, were used to improve upon the IBM PC's design, but as a result of the differences, software that directly manipulated the hardware would not run correctly. 
In most cases, the x86-based systems that did not use a fully IBM PC compatible design did not sell well enough to attract support from software manufacturers, though a few computer manufacturers arranged for compatible versions of popular applications to be developed and sold specifically for their machines. Fully IBM PC-compatible clones appeared on the market shortly thereafter, as the advantages of cloning became impossible to ignore. But before that, some of the more notable systems that were x86-compatible, but not real clones, were: the ACT Apricot by ACT the Dulmont Magnum the Epson QX-16 the Seequa Chameleon the HP-150 by Hewlett-Packard and the later HP 95LX, HP 100LX, HP 200LX, HP 1000CX, HP OmniGo 700LX, HP OmniGo 100, and HP OmniGo 120. the Hyperion by Infotech Cie used its own H-DOS OEM version of MS-DOS and was, for a time, licensed but never manufactured by Commodore, as its first PC compatible. the MBC-550 by Sanyo had many differences, including non-interchangeability of diskettes and non-standard ROM location. the DG-One by Data General was an early laptop with full 80x25 LCD screen that could boot some generic DOSes but worked best with their OEM version of MS-DOS, and had some hardware incompatibilities (especially in the serial I-O chip) as part of the compromise to reduce power consumption. Later models were more compatible with generic PC clones. the DG/10 by Data General had two processors, one an Intel 8086, running a heavily modified version of MS-DOS (alternatively: CP/M-86) in a patented closely coupled arrangement with Data General's own microECLIPSE (the 8086 "invisibly" calling the microECLIPSE whenever it needed access to some peripherals, such as disks, while the 8086 had control over other peripherals such as the screen). the 80186-based Mindset graphics computer the Morrow Designs' Morrow Pivot the MZ-5500 by Sharp the Decision Mate V from NCR Corporation; its version of MS-DOS was called NCR-DOS the MikroMikko 2 by Nokia the NorthStar Advantage the PC-9801 systems from NEC the Rainbow 100 from DEC had both an 8088 and a Zilog Z80 for Digital Research's CP/M-80 Operating System the RM Nimbus by RM plc the Tandy 2000 by RadioShack had an Intel 80186 the Texas Instruments TI Professional the Torch Graduate by Torch Computers the Tulip System-1 by Tulip the Victor 9000 by Sirius Systems Technology the :YES by Philips was late on the market, ran DOS Plus and MS-DOS, but by using an 80186 it was incompatible with IBM's PC the Z-100 by Zenith with an MS-DOS OEM version named Z-DOS Non-x86-based systems Not all manufacturers immediately switched to the Intel x86 microprocessor family and MS-DOS. A few companies continued releasing systems based on non-Intel architectures. Some of these systems used a 32-bit microprocessor, the most popular being the Motorola 68000. Others continued to use 8-bit microprocessors. Many of these systems were eventually forced out of the market by the onslaught of the IBM PC clones, although their architectures may have had superior capabilities, especially in the area of multimedia. 
Other non-x86-based systems available at the IBM PC's launch Apple II and Apple II+ Commodore PET and CBM series Atari 400/800 Cromemco CS-1 Intertec's Compustar II VPU Model 20 Corvus Concept Kaypro 10 Fujitsu Micro 16s Micro Decision by Morrow Designs MTU-130 by Micro Technology Unlimited Xerox 820 RoadRunner from MicroOffice TRS-80 Model II and TRS-80 Model III See also Wintel Open standard Dominant design History of computing hardware (1960s–present) Timeline of DOS operating systems Comparison of DOS operating systems List of computers running CP/M References External links Dedicated to the preservation and restoration of the IBM 5150 personal computer OLD-COMPUTERS.COM : The Museum History of computing hardware IBM PC compatibles
Influence of the IBM PC on the personal computer market
Technology
4,496
28,752,682
https://en.wikipedia.org/wiki/ChloroFilms
ChloroFilms was an international amateur plant biology film contest held four times between 2008 and 2010. The contest was created by Daniel Cosgrove at Pennsylvania State University with the goal of promoting independent videos that could both educate the audience about plant biology and capture their attention. It was funded by the American Society of Plant Biologists, the Botanical Society of America and the Canadian Botanical Association, and awarded over $13,000 in cash prizes in 2009 and 2010. Winners In the first contest, a grand prize of $1000 was awarded to "Fertile Eyes," a collaboration between Ela Lamblin and Anna Edlund which informed viewers about pollination and plant fertilization via the medium of song and dance. Runners up, each awarded $500, included "Fantastic Vesicle Traffic" by Daniel von Wangenheim, "La Bloomba" by Kris Holmes, "PSI — Are My Soybeans Wearing Different Genes?" by Burkhard Schulz, and "The Carnivorous Syndrome in 3D" by Mike Wilder. The second contest's grand prize winner was "Kenaf Callus Hoedown" by Noah Flanigan, a stop motion video that illustrates the process of cultivating plant tissue and features "lively and quirky music." The third contest's grand prize winner was "Arabidopsis Flower in 3D" by David Livingston, a technical video showing a 3D reconstruction of the internal anatomy of flowers. The fourth contest had a two-way tie for grand prize, awarded to "All Things Algae" by Terry Woodford–Thomas and "Seed Imbibition" by Robert Lewis Gerten. References Science competitions
ChloroFilms
Technology
335
35,888,668
https://en.wikipedia.org/wiki/Pumping%20%28audio%29
In sound recording and reproduction, and music, pumping or gain pumping is a creative misuse of compression, the "audible unnatural level changes associated primarily with the release of a compressor." There is no correct way to produce pumping, and according to Alex Case, the effect may result from selecting "too slow or too fast...or too, um, medium" attack and release settings. The technique is common in rock and electronic dance music. A celebrated example is Phil Selway's (Radiohead) drum track on "Exit Music (For a Film)", and clear examples include the electro percussion loop in Radiohead's "Idioteque", Benny Benassi's "Finger Food", and the ride cymbals on Portishead's "Pedestal". Side-chain pumping is a more advanced technique using a compressor's side-chain feature which, "uses the amplitude envelope (dynamics profile) of one track as a trigger for a compressor used in another track." When the amplitude of a note of the side-chained instrument surpasses the threshold setting of the compressor it attenuates the compressed instrument, producing volume swells offset from the side-chained note by a selected release time. Found in house, techno, IDM, hip hop, dubstep, and drum 'n' bass, Eric Prydz's "Call On Me" is credited with popularizing the technique, though Daft Punk's "One More Time" contributed, while clear examples include Madonna's "Get Together" and Benny Benassi's "My Body (feat. Mia J)". References Audio engineering Music production
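As a rough illustration of the side-chain idea described above, the sketch below attenuates one track whenever the envelope of another (the trigger) exceeds a threshold, producing the characteristic volume swells on release. The envelope follower, the gain law, and all parameter values (attack, release, threshold, ratio) are illustrative assumptions, one common way of modelling a side-chained compressor, and are not taken from any of the recordings mentioned.

```python
# Minimal side-chain "pumping" sketch: the kick track's envelope ducks the pad.
# All parameter values are illustrative, not taken from the article.
import numpy as np

def envelope(x, sr, attack=0.005, release=0.120):
    """One-pole envelope follower on the absolute signal."""
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env = np.zeros_like(x)
    prev = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > prev else a_rel
        prev = coeff * prev + (1.0 - coeff) * v
        env[i] = prev
    return env

def sidechain_compress(track, trigger, sr, threshold=0.2, ratio=4.0):
    """Duck `track` when the envelope of `trigger` exceeds `threshold` (linear-domain compressor)."""
    env = envelope(trigger, sr)
    desired = np.where(env > threshold, threshold + (env - threshold) / ratio, env)
    gain = np.where(env > 1e-9, desired / np.maximum(env, 1e-9), 1.0)
    return track * gain

# Example: a sustained pad ducked by a 2 Hz kick pattern.
sr = 44100
t = np.arange(sr * 2) / sr
pad = 0.5 * np.sin(2 * np.pi * 220 * t)
kick = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 2 * t) > 0.95)
pumped_pad = sidechain_compress(pad, kick, sr)
```

The longer the release time chosen in the envelope follower, the more gradually the ducked track swells back up after each trigger note, which is the audible "pumping" the article describes.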
Pumping (audio)
Engineering
345
1,861,611
https://en.wikipedia.org/wiki/Su%20Song
Su Song (, 1020–1101), courtesy name Zirong (), was a Chinese polymathic scientist and statesman. Excelling in a variety of fields, he was accomplished in mathematics, astronomy, cartography, geography, horology, pharmacology, mineralogy, metallurgy, zoology, botany, mechanical engineering, hydraulic engineering, civil engineering, invention, art, poetry, philosophy, antiquities, and statesmanship during the Song dynasty (960–1279). Su Song was the engineer for a hydro-mechanical astronomical clock tower in medieval Kaifeng, which employed an early escapement mechanism. The escapement mechanism of Su's clock tower had been invented by Tang dynasty Buddhist monk Yi Xing and government official Liang Lingzan in 725 AD to operate a water-powered armillary sphere, although Su's armillary sphere was the first to be provided with a mechanical clock drive. Su's clock tower also featured the oldest known endless power-transmitting chain drive, called the tian ti (), or "celestial ladder", as depicted in his horological treatise. The clock tower had 133 different clock jacks to indicate and sound the hours. Su Song's treatise about the clock tower, Xinyi Xiangfayao (), has survived since its written form in 1092 and official printed publication in 1094. The book has been analyzed by many historians, such as the British biochemist, historian, and sinologist Joseph Needham. The clock itself, however, was dismantled by the invading Jurchen army in 1127 AD, and although attempts were made to reassemble it, the tower was never successfully reinstated. The Xinyi Xiangfayao was Su's best-known treatise, but the polymath compiled other works as well. He completed a large celestial atlas of several star maps, several terrestrial maps, as well as a treatise on pharmacology. The latter discussed related subjects on mineralogy, zoology, botany, and metallurgy. European Jesuit visitors to China like Matteo Ricci and Nicolas Trigault briefly wrote about Chinese clocks with wheel drives, but others mistakenly believed that the Chinese had never advanced beyond the stage of the clepsydra, incense clock, and sundial. They thought that advanced mechanical clockworks were new to China and that these mechanisms were something valuable that Europeans could offer to the Chinese. Although not as prominent as in the Song period, contemporary Chinese texts of the Ming dynasty (1368–1644) described a relatively unbroken history of mechanical clocks in China, from the 13th century to the 16th. However, Su Song's clock tower still relied on the use of a waterwheel to power it, and was thus not fully mechanical like late medieval European clocks. Life and works Career as a scholar-official Su Song was of Hokkien ancestry who was born in modern-day Fujian, near medieval Quanzhou. Like his contemporary, Shen Kuo (1031–1095), Su Song was a polymath, a person whose expertise spans a significant number of different fields of study. It was written by his junior colleague and Hanlin scholar Ye Mengde (1077–1148) that in Su's youth, he mastered the provincial exams and rose to the top of the examination list for writing the best article on general principles and structure of the Chinese calendar. From an early age, his interests in astronomy and calendrical science led him onto a distinguished path as a state bureaucrat. In his spare time, he was fond of writing poetry, which he used to praise the works of artists such as the painter Li Gonglin (1049–1106). 
He was also an antiquarian and collector of old artworks from previous dynasties. In matters of administrative government, he had attained the rank of Ambassador and President of the Ministry of Personnel at the capital of Kaifeng, and was known also as an expert in administration and finance. After serving in the Ministry of Personnel, he became a Minister of Justice in 1086. He was appointed as a distinguished editor for the Academy of Scholarly Worthies, where in 1063 he edited, redacted, commented on, and added a preface for the classic work Huainanzi of the Han dynasty (202 BC–220 AD). Eventually, Su rose to the post of Vice President of the Chancellery Secretariat. Among many honorable positions and titles conferred upon him, Su Song was also one of the 'Deputy Tutors of the Heir Apparent'. At court, he chose to distance himself from the political rivalries of the Conservatives, led by Prime Minister Sima Guang (1019–1086), and the Reformists, led by Prime Minister Wang Anshi (1021–1086); although many of his associates were of the Conservative faction. In 1077, he was dispatched on a diplomatic mission to the Liao dynasty of the Khitan people to the north, sharing ideas about calendrical science, as the Liao state had created its own calendar in 994 AD. In a finding that reportedly embarrassed the court, Su Song acknowledged to the emperor that the calendar of the Khitan people was in fact a bit more accurate than their own, resulting in the fining and punishment of officials in the Bureau of Astronomy and Calendar. Su was supposed to travel north to Liao and arrive promptly for a birthday celebration and feast on a day which coincided with the winter solstice of the Song calendar, but was actually a day behind the Liao calendar. Historian Liu Heping states that Emperor Zhezong of Song sponsored Su Song's clocktower in 1086 in order to compete with the Liao for "scientific and national superiority." In 1081, the court instructed Su Song to compile into a book the diplomatic history of Song-Liao relations, an elaborate task that, once complete, filled 200 volumes. With his extensive knowledge of cartography, Su Song was able to settle a heated border dispute between the Song and Liao dynasties. Astronomy Su Song also created a celestial atlas (in five separate maps), which had the hour circles between the xiu (lunar mansions) forming the astronomical meridians, with stars marked in an equidistant cylindrical projection on each side of the equator, and thus, was in accordance to their north polar distances. Furthermore, Su Song must have taken advantage of the astronomical findings of his political rival and contemporary astronomer Shen Kuo. Su Song's fourth star map places the position of the pole star halfway between Tian shu (−350 degrees) and the current Polaris; this was the more accurate calculation (by 3 degrees) that Shen Kuo had made when he observed the pole star over a period of three months with his width-improved sighting tube. There were many star maps written before Song's book, but the star maps published by Su represent the oldest extant star maps in printed form. Pharmacology, botany, zoology, and mineralogy In 1070, Su Song and a team of scholars compiled and edited the Bencao Tujing ('Illustrated Pharmacopoeia', original source material from 1058 to 1061), which was a groundbreaking treatise on pharmaceutical botany, zoology, and mineralogy. 
In compiling information for pharmaceutical knowledge, Su Song worked with such notable scholars as Zhang Yuxi, Lin Yi, Zhang Dong, and many others. This treatise documented a wide range of pharmaceutical practices, including the use of ephedrine as a drug. It includes valuable information on metallurgy and the steel and iron industries during 11th century China. He created a systematic approach to listing various different minerals and their use in medicinal concoctions, such as all the variously known forms of mica that could be used to cure ills through digestion. He wrote of the subconchoidal fracture of native cinnabar, signs of ore beds, and provided description on crystal form. Similar to the ore channels formed by circulation of ground water written of by the later German scientist Georgius Agricola, Su Song made similar statements concerning copper carbonate, as did the earlier Rihua Bencao of 970 with copper sulphate. Su's book was also the first pharmaceutical treatise written in China to describe the flax, Urtica thunbergiana, and Corchoropsis tomentosa (crenata) plants. According to Edward H. Schafer, Su accurately described the translucent quality of fine realgar, its origin from pods found in rocky river gorges, its matrix being pitted with holes and having a deep red, almost purple color, and that the mineral varied in sizes ranging from the size of a pea to a walnut. Citing evidence from an ancient work by Zheng Xuan (127–200), Su believed that physicians of the ancient Zhou dynasty (1046–256 BC) used realgar as a remedy for ulcers. As believed in Su's day, the "five poisons" used by Zhou era physicians for this purpose were thought to be cinnabar, realgar, chalcanthite, alum, and magnetite. Su made systematic descriptions of animals and the environmental regions they could be found, such as different species of freshwater, marine, and shore crabs. For example, he noted that the freshwater crab species Eriocher sinensis could be found in the Huai River running through Anhui, in waterways near the capital city, as well as reservoirs and marshes of Hebei. Su's book was preserved and copied into the Bencao Gangmu of the Ming dynasty (1368–1644) physician and pharmacologist Li Shizhen (1518–1593). Horology and mechanical engineering Su Song compiled one of the greatest Chinese horological treatises of the Middle Ages, surrounding himself with an entourage of notable engineers and astronomers to assist in various projects. Xinyi Xiangfayao (lit. "Essentials of a New Method for Mechanizing the Rotation of an Armillary Sphere and a Celestial Globe"), written in 1092, was the final product of his life's achievements in horology and clockwork. The book included 47 different illustrations of great detail of the mechanical workings for his astronomical clock tower. Su Song's greatest project was the 40-foot-tall water-powered astronomical clock tower constructed in Kaifeng, the wooden pilot model completed in 1088, the bronze components cast by 1090, while the wholly finished work was completed by 1094 during the reign of Emperor Zhezong of Song. The emperor had previously commissioned Han Gonglian, Acting Secretary of the Ministry of Personnel, to head the project, but the leadership position was instead handed down to Su Song. The emperor ordered in 1086 for Su to reconstruct the hun yi, or "armillary clock", for a new clock tower in the capital city. Su worked with the aid of Han Gong-lian, who applied his extensive knowledge of mathematics to the construction of the clock tower. 
A small-scale wooden model was first crafted by Su Song so that its intricate parts could be tested before they were applied to the full-scale clock tower. In the end, the clock tower had many impressive features: a hydro-mechanical, rotating armillary sphere crowning the top level and weighing some 10 to 20 tons; a bronze celestial globe in the middle level that was 4.5 feet in diameter; mechanically timed, rotating mannequins dressed in miniature Chinese clothes that exited through miniature opening doors to announce the time of day by presenting designated reading plaques, ringing bells and gongs, or beating drums; a sophisticated use of oblique gears and an escapement mechanism; and an exterior facade in the form of a fanciful Chinese pagoda. Upon its completion, the tower was called the Shui Yun Yi Xiang Tai, or "Tower for the Water-Powered Sphere and Globe". Joseph Needham describes the tower at length. Years after Su's death, the capital city of Kaifeng was besieged and captured in 1127 by the Jurchens of the Manchuria-based Jin dynasty during the Jin–Song wars. The clock tower was dismantled piece by piece by the Jurchens, who carted its components back to their own capital in modern-day Beijing. However, due to the complexity of the tower, they were unable to piece it back together. The new Emperor Gaozong of Song instructed Su's son, Su Xie, to construct a new astronomical clock tower in its place, and Su Xie set to work studying his father's texts with a team of other experts. However, they too were unsuccessful in creating another clock tower, and Su Xie was convinced that Su Song had purposefully left essential components out of his written work and diagrams so that others would not steal his ideas. As the sinologist and historian Derk Bodde points out, Su Song's astronomical clock did not lead to a new generation of mass-produced clockworks throughout China, since his work was largely a government-sponsored endeavor for the use of astronomers and astrologers in the imperial court. Yet the mechanical legacy of Su Song did not end with his work. In about 1150, the writer Xue Jixuan noted that there were four types of clocks in his day: the basic water clock, the incense clock, the sundial, and the clock with 'revolving and snapping springs' ('gun tan'). The rulers of the succeeding Yuan dynasty (1279–1368 AD) had a vested interest in the advancement of mechanical clockworks. The astronomer Guo Shoujing helped restore the Beijing Ancient Observatory beginning in 1276, where he crafted a water-powered armillary sphere and clock with clock jacks fully implemented and sounding the hours. Complex gearing for uniquely Chinese clockworks continued into the Ming dynasty (1368–1644), with new designs driven by falling sand instead of water to provide motive power to the wheel drive, and some Ming clocks perhaps featured reduction gearing rather than the earlier escapement of Su Song. The earliest such sand-clock design was made by Zhan Xiyuan around 1370; it featured not only the scoop wheel of Su Song's device but also a new addition of a stationary dial face over which a pointer circulated, much like new European clocks of the same period. Su Song's escapement mechanism In Su Song's waterwheel linkwork device, the action of the escapement's arrest and release was achieved by gravity exerted periodically as the continuous flow of liquid filled containers of a limited size.
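The timing of such an escapement can be illustrated with a small calculation. The Python sketch below is only a simplified model: the inflow rate and the weight at which a scoop trips the checking fork are assumed, hypothetical values, not figures taken from Su Song's treatise. It shows only that a constant inflow into containers of fixed capacity yields evenly spaced releases of the wheel.

# Minimal sketch of Su Song's arrest-and-release action: a scoop fills from a
# constant flow until its weight trips the checking fork, the wheel advances,
# and an empty scoop swings into place. All numeric values are assumptions.
def release_times(n_releases, inflow_kg_per_s=0.25, trip_weight_kg=6.0, dt=0.1):
    """Return the times (in seconds) of the first n wheel releases."""
    times, t, water = [], 0.0, 0.0
    while len(times) < n_releases:
        t += dt
        water += inflow_kg_per_s * dt          # steady delivery from the constant-level tank
        if water >= trip_weight_kg:            # scoop now outweighs the escapement counterweight
            times.append(round(t, 1))
            water = 0.0                        # the filled scoop falls away; an empty one follows
    return times

print(release_times(5))   # evenly spaced, e.g. [24.0, 48.0, 72.0, 96.0, 120.0]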
In a single line of evolution, Su Song's clock therefore united the concepts of the clepsydra and the mechanical clock into one device run by mechanics and hydraulics. Su Song wrote about this concept in his memorial to the emperor. In his writing, Su Song credited, as the predecessor of his working clock, the hydraulic-powered armillary sphere of Zhang Heng (78–139 AD), an earlier Chinese scientist. Su Song was also strongly influenced by the earlier armillary sphere created by Zhang Sixun (976 AD), who likewise employed an escapement mechanism and used liquid mercury instead of water in the waterwheel of his astronomical clock tower (since liquid mercury would not freeze during winter and would not corrode and rust metal components over time). However, Su Song noted in his writing that after Zhang's death no one was able to replicate his device, just as no one would later be able to replicate Su's own. The mechanical clockworks for Su Song's astronomical tower featured a great driving-wheel that was 11 feet in diameter, carrying 36 scoops, into each of which water poured at a uniform rate from the "constant-level tank" (Needham, Fig. 653). The main driving shaft of iron, with its cylindrical necks supported on iron crescent-shaped bearings, ended in a pinion, which engaged a gear wheel at the lower end of the main vertical transmission shaft. Joseph Needham gives a general description of the clock tower itself alongside his Fig. 650, while Fig. 656 displays the upper and lower norias with their tanks and the manual wheel for operating them. Fig. 657 reproduces, at a small scale, the basic illustration of the escapement mechanism from Su's own book, with Needham's caption: "The 'celestial balance' or escapement mechanism of Su Song's clockwork (Xinyi Xiangfayao, ch. 3, p. 18b)". The latter figure carefully labels the right upper lock; the upper link; the left upper lock; the axle or pivot; the long chain; the upper counterweight; the sump; the checking fork of the lower balancing lever; the coupling tongue; and the main (i.e., lower) counterweight. Figure 658 displays a more intricate and more revealing half-page scale drawing of Su Song's large escapement mechanism, labeling these individual parts as they interact with one another: the arrested spoke; the left upper lock; the scoop being filled by the water jet from the constant-level tank; the small counterweight; the checking fork, tripped by a projection pin on the scoop and forming the near end of the lower balancing lever with its lower counterweight; the coupling tongue, connected by the long chain with the upper balancing lever, which has at its far end the upper counterweight and at its near end a short length of chain connecting it with the upper lock beneath it; and the right upper lock. The endless chain drive The world's oldest illustrated depiction of an endless power-transmitting chain drive is from Su Song's horological treatise. It was used in the clockworks for coupling the main drive shaft to the armillary sphere gearbox (rotating three small pinions), as seen in Needham's Fig. 410 and Fig. 652. This belonged to the uppermost end of the main vertical transmission shaft, incorporating right-angle gears and oblique gears connected by a short idling shaft. The toothed ring gear called the diurnal motion gear ring was fitted around the shell of the armillary sphere along the declination parallel near the southern pole.
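Given the wheel described above, a short back-of-the-envelope calculation shows the overall step-down the transmission and gearbox would have to provide for the celestial globe to turn once per day. Only the figure of 36 scoops comes from the text; the 24-second release interval in this Python sketch is an assumption used purely for illustration.

# Rough timing estimate for the driving wheel and the gearing to the celestial globe.
release_interval_s = 24       # assumed interval between escapement releases (illustrative only)
scoops_per_revolution = 36    # from the description of the great driving-wheel
seconds_per_day = 86_400

wheel_period_s = release_interval_s * scoops_per_revolution   # 864 s per wheel revolution
wheel_revs_per_day = seconds_per_day / wheel_period_s         # 100 revolutions per day

print(f"driving-wheel period: {wheel_period_s / 60:.1f} minutes")
print(f"required reduction from wheel to globe: about {wheel_revs_per_day:.0f}:1")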
Although the ancient Greek Philo of Byzantium (3rd century BC) featured a sort of endless belt for his magazine arcuballista, it did not transmit continuous power, and the more likely influence on Su Song's chain drive was the continuously driven chain pump, known in China since the Han dynasty (202 BC–220 AD). Su Song describes this chain drive in his horological treatise. In addition, the motion gear rings and the upper drive wheel both had 600 teeth, a precision that allowed Su to divide the day into 600 equal measured units; each tooth therefore corresponded to 2 minutes and 24 seconds. Su Song's armillary sphere In Joseph Needham's third volume of Science and Civilization in China, Fig. 159 reproduces the drawing of Su Song's armillary sphere (as depicted in his 11th-century treatise), complete with three 'nests', or layers, of mechanically rotated rings. It was the earlier Chinese astronomer Li Chun-feng of the Tang dynasty who in 633 AD created the first armillary sphere with three layers to calibrate multiple aspects of astronomical observation. Su Song's armillary sphere has often been compared to that of the 13th-century monarch Alfonso X of Castile in Spain. The chief difference was that Alfonso's instrument featured an arrangement for making measurements of the azimuth and altitude, which was present in the Arabic tradition, while Su Song's armillary sphere was duly graduated. For the drawing of Su's armillary sphere, the components are listed as follows. The outer nest: the meridian circle, the horizon circle, and the outer equator circle. The middle nest: the solstitial colure circle, the ecliptic circle, and the diurnal motion gear-ring, connecting with the power-drive. The inner nest: the polar-mounted declination ring or hour-angle circle, with a sighting tube attached to it and strengthened by a diametral brace. Other parts: the vertical column concealing the transmission shaft, the supporting columns in the form of dragons, the cross-piece of the base incorporating water-levels, the south polar pivot, and the north polar pivot. Transmission of Su's text and his legacy When Su Song's Xinyi Xiangfayao was written in 1092 and the horological monograph was finalized and presented in 1094, his work was published and widely printed in the north (see woodblock printing and the movable type of Bi Sheng). In the south, his work was not widely printed and circulated until Shi Yuanzhi of Jiangsu had it printed there in 1172. When presenting his clock tower design to the Emperor Zhezong, Su Song equated the constant flow of water with the continuous movements of the heavens, the latter of which symbolized the unceasing power of the emperor. This appealed to the emperor, who featured artwork representing the clock tower on vehicles of major imperial processions, as illustrated in the Illustration of the Imperial Grand Carriage Procession of 1053. The later Ming dynasty/Qing dynasty scholar Qian Zeng (1629–1699) held an old volume of Su's work, which he faithfully reproduced in a newly printed edition, taking special care to avoid any rewording of, or inconsistency with, the original text. It was later reprinted again by Zhang Xizu (1799–1844). Su Song's treatise on astronomical clockwork was not the only one made in China during his day, as the Song Shi (compiled in 1345) records the written treatise of the Shuiyunhun Tianjiyao (Wade–Giles: Shui Yun Hun Thien Chi Yao; lit.
Essentials of the [Technique of] making Astronomical Apparatus revolve by Water-Power), written by Juan Taifa. However, this treatise no longer survives. In the realm of modern research, the British biochemist and historian of Chinese science Joseph Needham (1900–1995) (known as Li Yuese in China) conducted extensive research and analysis of Su Song's texts and various achievements in his Science and Civilization in China book series. Joseph Needham also related many detailed passages from Su's contemporary medieval Chinese sources on the life of Su and his achievements known in his day. In 1956, John Christiansen reconstructed a model of Su Song's clocktower in a famous drawing, which garnered attention in the West towards 11th-century Chinese engineering. A miniature model of Su Song's clock was reconstructed by John Cambridge and is now on display at the National Science Museum at South Kensington, London. In China, the clocktower was reconstructed to one-fifth its actual scale by Wang Zhenduo, who worked for the Chinese Historical Museum in Beijing in the 1950s. See also Cartographers Chinese inventions and discoveries Chinese writers Clock tower Mineralogists Technology of the Song dynasty Water clock Zhang Heng, second-century inventor of water-powered armillary sphere References Citations Sources Bodde, Derk (1991). Chinese Thought, Society, and Science. Honolulu: University of Hawaii Press. Bowman, John S. (2000). Columbia Chronologies of Asian History and Culture. New York: Columbia University Press. Breslin, Thomas A. (2001). Beyond Pain: The Role of Pleasure and Culture in the Making of Foreign Affairs. Westport: Praeger Publishers. Ceccarelli, Marco (2004). International Symposium on History of Machines and Mechanisms. New York: Kluwer Academic Publishers. Edwards, Richard. "Li Gonglin's Copy of Wei Yan's 'Pasturing Horses'," Artibus Asiae (Volume 53, Number 1/2, 1993): 168–181; 184–194. Fry, Tony (2001). The Architectural Theory Review: Archineering in Chinatime. Sydney: University of Sydney. Harrist, Robert E., Jr. "The Artist as Antiquarian: Li Gonglin and His Study of Early Chinese Art," Artibus Asiae (Volume 55, Number 3/4, 1995): 237–280. Liu, Heping. ""The Water Mill" and Northern Song Imperial Patronage of Art, Commerce, and Science," The Art Bulletin (Volume 84, Number 4, 2002): 566–595. Needham, Joseph, Wang Ling & Lu Gwei-Djen (1986) [1965], Science and Civilization in China, Taipei: Caves Books, Ltd. (reprint edition of Cambridge & New York: Cambridge University Press). Volume 3: Mathematics and the Sciences of the Heavens and the Earth. Volume 4: Physics and Physical Technology, Part 2: Mechanical Engineering Volume 4: Physics and Physical Technology, Part 3: Civil Engineering and Nautics Volume 6: Biology and Biological Technology, Part 1, Botany Roth, Harold D. "Text and Edition in Early Chinese Philosophical Literature," Journal of the American Oriental Society (Volume 113, Number 2, 1993): 214–227. Schafer, Edward H. "Orpiment and Realgar in Chinese Technology and Tradition," Journal of the American Oriental Society (Volume 75, Number 2, 1955): 73–89. Sivin, Nathan (1995). Science in Ancient China: Researches and Reflections. Brookfield, Vermont: VARIORUM, Ashgate Publishing. Unschuld, Paul U. (2003). Nature, Knowledge, Imagery in an Ancient Chinese Medical Text. Berkeley: University of California Press. West, Stephen H. 
"Cilia, Scale and Bristle: The Consumption of Fish and Shellfish in The Eastern Capital of The Northern Song," Harvard Journal of Asiatic Studies (Volume 47, Number 2, 1987): 595–634. Wittfogel, Karl A. and Feng Chia-Sheng. "History of Chinese Society Liao (907–1125)," Transactions of the American Philosophical Society (Volume 36, 1946): i–xv; 1–752. Wright, David Curtis (2001) The History of China. Westport: Greenwood Press. Wu, Jing-nuan (2005). An Illustrated Chinese Materia Medica. New York: Oxford University Press. Xi, Zezong. "Chinese Studies in the History of Astronomy, 1949–1979," Isis (Volume 72, Number 3, 1981): 456–470. External links Su Song's Clock 1088 Su Song in the Encyclopædia Britannica Su Song at Bookrags.com 1020 births 1101 deaths 11th-century Chinese astronomers 11th-century Chinese calligraphers 11th-century Chinese scientists 11th-century Chinese historians 11th-century Chinese mathematicians 11th-century Chinese philosophers 11th-century diplomats 11th-century inventors Artists from Fujian Astronomical instrument makers Biologists from Fujian Chemists from Fujian Chinese antiquarians Chinese antiques experts Chinese art collectors Chinese cartographers Chinese civil engineers Chinese hydrologists Chinese inventors Chinese mechanical engineers Chinese metallurgists Chinese geologists Chinese naturalists Chinese pharmacologists Chinese scientific instrument makers Engineers from Fujian Historians from Fujian Hokkien scientists Hydraulic engineers Mathematicians from Fujian Medieval Chinese geographers Mineralogists Philosophers from Fujian Politicians from Quanzhou Song dynasty diplomats Song dynasty historians Song dynasty philosophers Song dynasty science writers Technical writers Writers from Fujian
Su Song
Astronomy
5,605
243,407
https://en.wikipedia.org/wiki/Caledonian%20Canal
The Caledonian Canal connects the Scottish east coast at Inverness with the west coast at Corpach near Fort William in Scotland. The canal was constructed in the early nineteenth century by Scottish engineer Thomas Telford. Route The canal runs some from northeast to southwest and reaches above sea level. Only one third of the entire length is man-made, the rest being formed by Loch Dochfour, Loch Ness, Loch Oich, and Loch Lochy. These lochs are located in the Great Glen, on a geological fault in the Earth's crust. There are 29 locks (including eight at Neptune's Staircase, Banavie), four aqueducts and 10 bridges in the course of the canal. Northern section The canal starts at its north-eastern end at Clachnaharry Sea Lock, built at the end of a man-made peninsula to ensure that boats could always reach the deep water of the Beauly Firth. Because the peninsula is built with mud foundations, it has required regular maintenance ever since. Next to the lock is the former lock-keeper's house, a two-storey building with a single-storey bothy at its western end and an enclosed garden. At an unknown date, the house was divided into two, and in 2005 the eastern half became offices for Scottish Canals. The building is a category C listed structure. At the opposite end of the peninsula to the sea lock, a swing bridge carries the Far North railway line across the canal. The first bridge at the site was designed by Joseph Mitchell for the Inverness and Ross-shire Railway, and was constructed of wrought iron in 1862. This was replaced by a similar structure in 1909, made of steel, which is long. The bridge, together with the adjacent signal box, designed by Mackenzie and Holland for the Highland Railway in 1890, is a category B listed structure. Clachnaharry lock is next to the swing bridge, and is bordered by a smiddy and workshops, dating from the canal's construction in 1810, and extended in 1840–50. The smiddy still contains two hearths for forging metal. Muirtown Basin provides moorings at the eastern end of the canal, which takes a right-angled turn at the far end to pass under a swing bridge carrying the A862 road to reach the flight of four Muirtown Locks. From the top of the flight, the canal continues southwards to Tomnahurich swing bridge, which carries the A82 road over the canal. The original timber bridge was replaced by a steel structure in 1938, designed by Crouch and Hogg, which included a control box, as levels of traffic on the road and canal were increasing. Nearby is the former bridge keeper's cottage, built in the 1820s. It was no longer needed to house a bridge keeper after 1965, and after being empty for some years, was converted into self-catering accommodation. A short distance to the south a second swing bridge was opened in June 2021, as part of the Inverness West Link project. A new control building was built between the two bridges, so that both could be controlled from a single location. The concept behind the second bridge was that traffic could be diverted to use one of the bridges while the other was open for canal traffic. The canal from Muirtown Locks to Dochgarroch Lock is a scheduled monument, as is Dochgarroch Lock, which was designed to protect the canal from flooding caused by fluctuating levels in the River Ness, which flows over a weir to the south of the lock. A two-storey lock keeper's cottage and barn dating from the 1850s overlooks the lock. 
The final section of canal to Loch Dochfour is also scheduled, and the designated area includes the weir for the River Ness, and the west bank of Loch Dochfour as far as Loch Ness. At its southern end is Bona Lighthouse, built as a house by Thomas Telford in 1815, and altered around 1848 to act as a lighthouse. The building is octagonal, with two storeys, and is a rare example of an inland light. There was once a ferry across the Bona channel at this point. In 1844, the channel was made deeper and wider when barges pulled by horses were replaced by steam tugs, and there is evidence that the building once included four stables. The canal section from Clachnaharry Sea Lock to Bona is long, and from there it is over along Loch Ness to the next canal section at Fort Augustus. Middle section At the foot of Loch Ness, the Caledonian Canal leaves the west bank of the loch, with the River Oich to the north and the River Tarff to the south. The A82 road crosses the canal on a swing bridge, at the foot of the five locks that rise through the centre of the village of Fort Augustus. In 1975 only the locks were included in the scheduled monument designation, but that was subsequently extended to include all of the canal between them and Loch Ness, including the lighthouse at the entrance to this section of canal. At the top of the locks, a road beside the canal is spanned by a railway bridge, which was constructed to carry an intended line from Fort Augustus to Inverness. The bridge was in use from 1903 to 1906, when the railway project was abandoned. As well as the bridge, some piers built to carry the line over the River Oich can also be seen, although these are in a poor state of repair. The canal to Kytra Lock and the lock itself are both scheduled monuments. The canal continues to Cullochy Lock, where to the north of the structure there is a single storey lock keeper's cottage on the west bank which dates from the building of the canal, and a pair of two-storey houses for the lock keepers, dating from around 1830, on the east bank. On the east side of the lock is a single-storey storehouse dating from 1815 to 1820, which has three bays and internally is divided into a store and an office for the lock keeper. The final part of the canal on this section runs from the lock to Loch Oich, and includes the abutments of a small swing bridge which formerly provided an accommodation crossing over the canal. The more modern Aberchalder swing bridge carries the A82 over the canal. A single-storey three bay cottage designed by Telford once provided housing for the bridge keeper. Nearby is the old bridge that carried the road over the River Oich, which was probably designed by James Dredge in the 1850s. It is a chain suspension bridge, and a long dry stone causeway provides access from the east. The A82 crosses the river on a three-arched concrete bridge designed by Mears and Carus-Wilson and constructed in 1932. Loch Oich is the summit of the canal, at above sea level, and despite its relatively small size compared to Loch Ness and Loch Lochy, it provides most of the of water that is required each day to keep the canal operating. The short section of canal between Loch Oich and Loch Lochy is crossed by a plate girder swing bridge carrying the A82 road at its northern end, and ends at a pair of locks at its southern end. Two jetties project out into Loch Lochy, and much of the structure is built on reclaimed land. 
A single-storey store with basement stands to the east of the locks, which was probably used to store materials and to provide stabling for horses while the canal was being constructed. On the west bank is Glenjade Cottage, dating from 1840 to 1860, which was once a pair of cottages but was converted into a single dwelling in the late 20th century. Ivy Cottage was also built for lock keepers, but was completed for the opening of the canal, and so is older than Glenjade cottages. Southern section Loch Lochy is about long, and the next canal section starts at its southern end. Its entrance is marked by a small lighthouse, while to the east of the canal is the Mucomir Cut, which delivers water from the loch to Mucomir hydroelectric station and the River Lochy. There are two locks at Gairlochy, the upper one being the only lock which was not built for the opening of the canal. It was added in 1844, and was constructed by Jackson and Bean. Between the locks is Telford House, a large lock keeper's house built in 1811–13, and much larger than most of the other houses built along the canal. The B8084 road crosses the canal on a swing bridge at the upstream end of Gairlochy lower lock, but it is unclear whether it is part of the scheduled monument, since it is not specifically mentioned in the listing, whereas most bridges are either included or excluded. About further downstream is Moy Swing Bridge, an original accommodation bridge. It consists of two halves, and the lock keeper from Gairlochy opens the eastern half by operating a capstan, and then crosses the canal by punt to perform a similar action on the western half. A three-bay single-storey cottage, dating from around 1820, survives on the eastern bank of the canal next to the bridge. The section below Moy Bridge includes four aqueducts which carry the canal over local rivers. These are the Loy aqueduct, the Muirshearlich aqueduct, the Sheangain aqueduct and the Mount Alexander aqueduct. The Loy aqueduct consists of three parallel arches, long, with the River Loy passing through the centre arch, and the side arches used for pedestrians and animals, although they sometimes carry flood flow from the river. Torcastle aqueduct is similar but slightly shorter at , and carries the Allt Sheangain in two of the arches, with the third used as a roadway. Mount Alexander aqueduct only has two arches, one used by the Allt Mor, and the other a footpath. Soon the canal arrives at the top of Neptunes Staircase, a flight of eight locks that drop the level of the canal by in the space of . Halfway down the flight to the west is another house similar to that at Gairlochy, but split to provide accommodation for two lock keepers. On the eastern bank is a smithy and sawpits, dating from the 1820s, and a workshop dating from 1880 to 1890. Neptunes Staircase was originally fitted with 36 capstans each of which had to be rotated 20 times to operate the lock gates and sluices, but progress through the structure was sped up in the 20th century when the flight was mechanised. At the foot of the locks, a swing bridge carries the A830 road over the canal, and a steel bow truss swing bridge, built in 1901 by Simpson and Wilson, carries the West Highland Line from to . The canal turns to the west to reach a pair of staircase locks at Corpach, which is followed by a small basin and the final sea lock to allow boats to access Loch Linnhe. 
Names The canal has several names in Scottish Gaelic including Amar-Uisge/Seòlaid a' Ghlinne Mhòir ("Waterway of the Great Glen"), Sligh'-Uisge na h-Alba ("Waterway of Scotland") and a literal translation (An) Canàl Cailleannach. History In 1620, a Highland prophet called the Brahan Seer predicted that full-rigged ships would one day be sailing round the back of Tomnahurich, near Inverness, at a time when the only navigable route near the location was the River Ness, on the other side of Tomnahurich. Engineers started to look at the feasibility of a canal to connect Loch Linnhe near Fort William to the Moray Firth near Inverness in the 18th century, with Captain Edward Burt rejecting the idea in 1726, as he thought the mountains would channel the wind and make navigation too precarious. The Commissioners of Forfeited Estates had originally been set up to handle the seizure and sale of land previously owned by those who had been convicted of treason following the Jacobite rising of 1715. By 1773, they had turned their attention to helping the fishing industry, and commissioned the inventor and mechanical engineer James Watt to make a survey of the route. He published a report in 1774, which suggested that a deep canal from Fort William to Inverness, passing through Loch Lochy, Loch Oich, Loch Ness and Loch Dochfour, would require 32 locks, and could be built for £164,032. He emphasised the benefits to the fishing industry, of a shorter and safer route from the east to the west coast of Scotland, and the potential for supplying the population with cheaper corn, but again, thought that winds on the lochs might be a problem. William Fraser, when proposing his own scheme for a canal in 1793, announced that "nature had finished more than half of it already". At the time, much of the Highlands were depressed as a result of the Highland Clearances, which had deprived many of their homes and jobs. Laws had been introduced which sought to eradicate the local culture, including bans on wearing tartan, playing the bagpipes, and speaking Scottish Gaelic. Many emigrated to Canada or elsewhere, or moved to the Scottish Lowlands. Crop failures in 1799 and 1800 brought distress to many, and prompted a new wave of emigrants to leave. The engineer Thomas Telford was asked to investigate the problem of emigration in 1801, and in 1802 published his report, which suggested that the main cause was landowners who had previously kept cattle creating vast sheep-farms. Realising that direct government action to confront the issue would be seen as interference, he therefore suggested that a programme of public works, involving roads, bridges, and canals, would be a way to provide jobs for people who had been displaced by the sheep farming, and to stimulate industry, fishery, and agriculture. Telford consulted widely with shipowners, who favoured a canal instead of the hazardous journey around the north of Scotland via Cape Wrath and the Pentland Firth. He obtained advice from Captain Gwynn of the Royal Navy, who stated that Loch Ness and Loch Lochy were sufficiently deep for any size of boat, and had safe anchorages if winds proved to be a problem, but that Loch Oich would need to be made deeper, as it was shallow in places. He established that Loch Garry, to the west of Invergarry, and Loch Quoich, beyond that, would provide an adequate water supply. He estimated that a canal suitable for ships with a draught of could be built in seven years, and would cost around £350,000 . 
An additional benefit would be the protection that the canal offered to shipping from attacks by French privateers. Telford also looked at the possibility of a canal to link Loch Eil to Loch Shiel, both to the west of Fort William, but ruled out the scheme because of the depth of cuttings that would have been required. The canal, as well as a number of projects to build roads, harbours and bridges, was the first time that public works of this sort had been funded by the government. Telford had convinced them that it was feasible, and that employing local people on it would help to stop the tide of emigration, but no one considered whether it would pay its way when it was completed. Planning On 27 July 1803, an act of Parliament, the (43 Geo. 3. c. 102) was passed to authorise the project, and carried the title: An Act for granting to his Majesty the Sum of £20,000, towards defraying the Expense of making an Inland Navigation from the western to the eastern Sea, by Inverness and Fort William; and for taking the necessary steps towards executing the same. The act appointed commissioners, to oversee the project, and some funding to enable the work to start. Less than a year later, on 29 June 1804, the commissioners obtained a second act of Parliament, the (44 Geo. 3. c. 62) which granted them £50,000 per year of government money, payable in two instalments, to fund the ongoing work. Provision was made for private investors to buy shares in the scheme, for any amount over £50, and as well as building the main line of the canal, the engineers could alter Loch Garry, Loch Quoich and Loch Arkaig, to improve their function as reservoirs. Telford was asked to survey, design and build the waterway. He worked with William Jessop on the survey, and the two men oversaw the construction until Jessop died in 1814. The canal was expected to take seven years to complete, and to cost £474,000, to be funded by the Government, but both estimates were inadequate. Telford understood the need for competent men to be involved in such a grand project, and convinced most of those who had been involved with him on building the Ellesmere Canal and the Pontcysyllte Aqueduct to move north to Scotland. He ensured that Jessop became the consulting engineer, while Matthew Davidson, who was a stonemason from Dumfrieshire and had been Superintendent of Works on the Ellesmere project, became the resident engineer at the Clachnaharry end, near Inverness. The Corpach end, near Fort William, was managed by John Telford, who is thought not to have been related to Thomas, but was known to him from Ellesmere. At the time, Telford's scheme for the development of the Highlands was the largest programme of works ever undertaken for a specific area in Britain. Telford was responsible to the Caledonian Canal Commissioners in London for the canal work, and to the Commissioners for Roads and Bridges for the construction of roads. He visited the Highlands twice a year, to plan the work and inspect the progress, and was always on the move, to the extent that the Canal Commissioners allowed him to choose the dates when he would find it convenient to meet with them. In the Highlands, Telford faced a number of problems, in that the canal was to be built in an area where people lived a subsistence lifestyle, managing by keeping a few cows and paying low rents. They had virtually no knowledge of wheeled vehicles, and no construction skills, but were hardy and willing to learn. 
While surveying the route for the canal, Telford agreed to increase the size of the locks to accommodate 32-gun frigates for the Royal Navy, and Jessop insisted that the locks should be built of stone, rather than having turf sides, as Telford had suggested. The highest point on the route was at Laggan, between Lock Oich and Loch Lochy. A deep cutting was required, so that the canal continued at the same level as Loch Oich. The loch would need dredging, because it was too shallow in places, but it was fed by water from Loch Garry and Loch Quoich to the west, which would provide a suitable supply for the canal. To reduce the depth of cutting between Loch Oich and Loch Lochy, a dam would be built at the south end of Loch Lochy, to raise its level by . A new channel for the River Lochy would be cut, allowing it to flow into the River Spean, so that its previous course could be used for the canal. Similarly, heading north from Loch Oich, parts of the canal would be constructed in the bed of the River Oich, which would be diverted to the east. At the northern end of Loch Ness, the channel through Loch Dochfour would have to be made deeper, and a weir was to be constructed at its northern end, to maintain the loch at the same level as Loch Ness. Telford and Jessop had a long list of things to do, because of the lack of construction skills in the region where the canal was built. As well as the normal surveying, inspection and payment duties, they had to train local people in how to become workers. They were required to source all of the building materials, to construct workshops and settlements for the workers, design tools and waggons to be used by the workers, and in some cases, ensure they had supplies of food and drink. All of the money provided by Parliament passed through Telford's personal bank account. Because of the remoteness of the location, construction was started at both ends, so that completed sections could be used to bring in the materials for the middle sections. In order to help the Highlanders to learn the habits of paid employment, Telford appointed organisers and pace-setters, who would impart skill and activity to the other workers, and wherever possible, the work was done by piecework, so that earthworks were paid at 6 pence (2.5p) per cubic yard, cutting rock in Corpach Basin was paid at two shillings (10p) per cubic yard, and rubble masonry work was paid at 11 shillings (55p) per cubic yard, for example. The number of men employed fluctuated widely, not least because many would take time off to attend to peat cutting, herring fishing or harvesting. John Telford was upset because many of his men did not return after the harvest, but they were not used to working during the winter months. Many saw working on the canal as a way to supplement their meagre income, not as a way to escape from their subsistence livelihood. Construction At Clachnaharry, to the west of Inverness, Davidson was overseeing the construction. Clachnaharry Lock was the first to be constructed at the eastern end of the canal, being completed in 1807 by Simpson and Cargill. Simpson was another of Telford's recruits from the Ellesmere project. Muirtown Basin was also completed in the same year. It was , and construction was aided by the fact that its bottom was above the level of low tides, and so it was relatively easy to keep the works dry. 
The road from the basin into Inverness was renamed Telford Street, and Simpson and Cargill built a row of houses for overseers and contractors, including themselves, to live in. To build the sea lock, two banks were built out into the Beauly Firth across the deep mud. Two tramways were constructed along the banks to bring rubble and earth to extend them. By the time the banks were completed, the price of foreign timber to construct a coffer dam had risen so much that work was postponed. The four Muirtown locks were finished in 1808. During this time, Davidson noticed that the sea banks were settling into the mud, and the idea of turning the two banks into a peninsula and then excavating the lock into it was formulated. It is unclear whether the concept was Telford's, Jessop's or Davidson's, but it saved the expense of building a coffer dam. The peninsula was allowed to settle for six months before excavation began, and a Boulton and Watt steam engine was used to keep the lock pit dry during the work. The structure was completed in 1812, three years later than Davidson's original estimate. At Corpach, near Fort William, John Telford faced a number of problems. The entrance lock and the basin were built on rock, and this entailed excavating rock below the level of Loch Linnhe. A steam engine was ordered from Boulton and Watt at Birmingham to keep the work area dry, and an embankment was built beyond the sea lock, which served as a quay for incoming materials until the lock was constructed. Masons built several buildings at Corpach, but then moved on to building the aqueducts to carry the canal over rivers, since the lower canal needed to be completed to enable materials to be brought to the great flight of locks at Banavie. The work was to prove a serious challenge to John Telford's health, and he died in 1807, to be replaced by Alexander Easton. He was buried in Kilmallie churchyard, where his ornate grave, now in a dilapidated condition, can still be seen. At Banavie, two houses for lock keepers were built by Simpson and Wilson before work on the locks started; these were occupied by masons during the estimated four years that it would take to finish the flight. The stonework was largely completed by 1811, three and a half years after work started. By early 1810, the steam engine at Corpach was ready, and the coffer dam to enable the sea lock to be built was completed by mid-1810, after considerable difficulty. Completing the lock was a priority, because the steam engine had to be kept running until the gates could hold back the sea, and it was the first lock to become operational, being completed just before the sea lock at Clachnaharry. By 1816, the canal was complete as far as Loch Lochy, but could not be used until the level of the loch was raised, and that depended on work further along the canal being completed. The ground through which the canal was cut was variable, and further difficulties were experienced with the construction of the locks, the largest ever built at the time. There were also problems with the labour force, with high levels of absence, particularly during and after the potato harvest and the peat cutting season. This led Telford to bring in Irish navvies to manage the shortfall, which attracted further criticism, since one of the main aims of the project was to reduce unemployment in the Highlands. The canal finally opened in 1822, having taken an extra 12 years to complete, and cost £910,000.
Over 3,000 local people had been employed in its construction, but the draught had been reduced from in an effort to save costs. In the meantime, shipbuilding had advanced, with the introduction of steam-powered iron-hulled ships, many of which were by that time too big to use the canal. The Royal Navy did not need to use the canal either, as Napoleon had been defeated at Waterloo in 1815, and the perceived threat to shipping when the canal was started was now gone. Operation Before long, defects in some of the materials used became apparent, and part of Corpach double lock collapsed in 1843. This led to a decision to close the canal to allow repairs to be carried out, and the depth was increased to at the same time. The work was designed by Telford's associate James Walker, carried out by Jackson and Bean of Aston, Birmingham and completed between 1843 and 1847 at a cost of £136,089. However, not all of the traffic expected to return to using the canal did so. Commercially, the venture was not a success, but the dramatic scenery through which it passes led to it becoming a tourist attraction. Queen Victoria took a trip along it in 1873, and the publicity surrounding the trip resulted in a large increase in visitors to the region and the canal. The arrival of the railways at Fort William, Fort Augustus and Inverness did little to harm the canal, as trains were scheduled to connect with steamboat services. There was an upsurge in commercial traffic during the First World War, when components for the construction of mines were shipped through the canal on their way from America to “U.S. Naval Base 18” (Muirtown Basin, Inverness), and fishing boats used it to avoid possible enemy action on the longer route around the north of Scotland. During this period there was 24-hour operation, facilitated by buoyage and lighting throughout its length. Ownership passed to the Ministry of Transport in 1920, and then to British Waterways in 1962. Improvements were made, with the locks being mechanised between 1964 and 1969. By 1990, the canal was in obvious need of restoration, with lock walls bulging, and it was estimated that repairs would cost £60 million. With no prospect of the Government funding this, British Waterways devised a repair plan, and between 1995 and 2005, sections of the canal were drained each winter. Stainless steel rods were used to tie the double-skinned lock walls together, and over 25,000 tonnes of grout were injected into the lock structures. All of the lock gates were replaced, and the result was a canal whose structures were probably in a better condition than they had ever been. In 1993, British Waterways and Parks Canada agreed to twin the canal with the Rideau Canal in Ontario, Canada. The canal is now a scheduled monument, and attracts over 500,000 visitors each year. British Waterways, who work with the Highland Council and Forestry and Land Scotland through the Great Glen Ways Initiative, were hoping to increase this number to over 1 million by 2012. There are many ways for tourists to enjoy the canal, such as taking part in the Great Glen Rally, cycling along the tow-paths, or cruising on hotel barges. Points of interest See also Banavie Pier railway station Göta Canal—Sister canal in Sweden References Citations General and cited references External links Illustrated description of the Caledonian Canal. 
Canals opened in 1822 Transport in Inverness Transport in Highland (council area) Canals in Scotland Ship canals Works of Thomas Telford Historic Civil Engineering Landmarks 1822 establishments in Scotland Lochaber Scheduled monuments in Scotland Scottish Canals Loch Ness
Caledonian Canal
Engineering
5,922
1,024,033
https://en.wikipedia.org/wiki/Potassium%20perchlorate
Potassium perchlorate is the inorganic salt with the chemical formula KClO4. Like other perchlorates, this salt is a strong oxidizer when the solid is heated at high temperature, although it usually reacts very slowly in solution with reducing agents or organic substances. This colorless crystalline solid is a common oxidizer used in fireworks, ammunition percussion caps, and explosive primers, and is used variously in propellants, flash compositions, stars, and sparklers. It has been used as a solid rocket propellant, although in that application it has mostly been replaced by the higher-performing ammonium perchlorate. KClO4 has a relatively low solubility in water (1.5 g in 100 mL of water at 25 °C). Production Potassium perchlorate is prepared industrially by treating an aqueous solution of sodium perchlorate with potassium chloride. This single precipitation reaction exploits the low solubility of KClO4, which is about 1/100 of that of NaClO4 (209.6 g/100 mL at 25 °C). It can also be produced by bubbling chlorine gas through a solution of potassium chlorate and potassium hydroxide, and by the reaction of perchloric acid with potassium hydroxide; the latter route is not widely used because of the dangers of perchloric acid. Another preparation involves the electrolysis of a potassium chlorate solution, causing KClO4 to form and precipitate at the anode. This procedure is complicated by the low solubility of both potassium chlorate and potassium perchlorate, the latter of which may precipitate onto the electrodes and impede the current. Oxidizing properties KClO4 is an oxidizer in the sense that it exothermically "transfers oxygen" to combustible materials, greatly increasing their rate of combustion relative to that in air. Thus, it reacts with glucose to give carbon dioxide, water, and potassium chloride: 3 KClO4 + C6H12O6 → 6 CO2 + 6 H2O + 3 KCl The conversion of solid glucose into hot gaseous products is the basis of the explosive force of this and other such mixtures. With sugar, KClO4 yields a low explosive, provided the necessary confinement; otherwise such mixtures simply deflagrate with an intense purple flame characteristic of potassium. Flash compositions used in firecrackers usually consist of a mixture of aluminium powder and potassium perchlorate. This mixture, sometimes called flash powder, is also used in ground and air fireworks. As an oxidizer, potassium perchlorate can be used safely in the presence of sulfur, whereas potassium chlorate cannot. The greater reactivity of chlorate is typical – perchlorates are kinetically poorer oxidants. Chlorate produces chloric acid (HClO3), which is highly unstable and can lead to premature ignition of the composition. Correspondingly, perchloric acid (HClO4) is quite stable. In one commercial use, potassium perchlorate is mixed 50/50 with potassium nitrate to make Pyrodex, a black powder substitute; when not compressed within a muzzle-loading firearm or in a cartridge, it burns at a sufficiently slow rate to avoid being categorized with black powder as a "low explosive" and to be classified instead as a "flammable" material. Debated medical use Potassium perchlorate can be used as an antithyroid agent to treat hyperthyroidism, usually in combination with one other medication. This application exploits the similar ionic radius and hydrophilicity of perchlorate and iodide.
The administration of known goitrogenic substances can also be used preventively to reduce the biological uptake of iodine, whether the nutritional non-radioactive iodine-127 or radioactive iodine (most commonly iodine-131, with a half-life of 8.02 days), since the body cannot distinguish between different iodine isotopes. The perchlorate ion, a common water contaminant in the USA owing to the aerospace industry, has been shown to reduce iodine uptake and is therefore classified as a goitrogen. The perchlorate ion is a competitive inhibitor of the process by which iodide is actively accumulated into the thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 7 micrograms per kilogram per day (μg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition", hence its classification as a goitrogen). The reduction of the iodide pool by perchlorate has a dual effect – reduction of excess hormone synthesis and hyperthyroidism, on the one hand, and reduction of thyroid inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland. Treatment of thyrotoxicosis (including Graves' disease) with 600–2,000 mg potassium perchlorate (430–1,400 mg perchlorate) daily for periods of several months or longer was once common practice, particularly in Europe, and perchlorate use at lower doses to treat thyroid problems continues to this day. Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/d was discovered not to control thyrotoxicosis in all subjects. Current regimens for the treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days. Prophylaxis with perchlorate-containing water at a concentration of 17 ppm, corresponding to an intake of 0.5 mg/(kg·d) for a person of 70 kg consuming 2 litres of water per day, was found to reduce the baseline radioiodine uptake by 67%. This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study, in which subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm (i.e. a daily ingestion of 10 mg of perchlorate ions), an average 38% reduction in the uptake of iodine was observed. However, since the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure has been estimated at approximately 0.5 mg/(kg·d), as in the above paragraph, a 67% reduction of iodine uptake would be expected in them. Studies of chronically exposed workers have thus far failed to detect any abnormalities of thyroid function, including the uptake of iodine. This may well be attributable to sufficient daily exposure to, or intake of, stable iodine-127 among these workers, together with the short (8-hour) biological half-life of perchlorate in the body. Purposefully adding perchlorate ions to a public water supply at a dosage of 0.5 mg/(kg·d), or a water concentration of 17 ppm, would therefore be grossly inadequate to completely block the uptake of iodine-131 (half-life = 8.02 days).
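The intake figures quoted above follow from simple unit conversions. The short Python sketch below reproduces that arithmetic under the same assumptions stated in the text (a 70 kg adult drinking 2 litres of water per day); it is only an arithmetic check of the cited numbers, not dosing guidance.

# Reproduce the perchlorate intake arithmetic cited in the prophylaxis studies.
def daily_intake_mg(concentration_ppm, litres_per_day=2.0):
    """Water concentration in ppm (mg/L) times daily consumption gives mg per day."""
    return concentration_ppm * litres_per_day

def dose_mg_per_kg_day(concentration_ppm, body_mass_kg=70.0, litres_per_day=2.0):
    return daily_intake_mg(concentration_ppm, litres_per_day) / body_mass_kg

print(daily_intake_mg(17))                     # 34 mg/day, roughly the "35 mg" cited
print(round(dose_mg_per_kg_day(17), 2))        # 0.49, i.e. the 0.5 mg/(kg·d) figure
print(round(dose_mg_per_kg_day(250), 2))       # 7.14, the level discussed in the next paragraph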
To be truly beneficial to the population in preventing bioaccumulation during exposure to iodine-131 contamination, independent of the availability of iodate or iodide compounds, perchlorate ion concentrations in a regional water supply would need to be much higher: at least 7.15 mg per kg of body weight per day, or a water concentration of 250 ppm, assuming people drink 2 litres of water per day. The distribution of perchlorate tablets, or the addition of perchlorate to the water supply, would need to continue for 80–90 days (roughly ten half-lives of 8.02 days each) after the release of iodine-131. After this time, the radioactive iodine-131 would have decayed to less than 1/1000 of its initial activity, at which point the danger from the biological uptake of iodine-131 is essentially over. Limitations and criticisms Perchlorate administration could thus represent a possible alternative to the distribution of iodide tablets in the event of a large-scale nuclear accident releasing significant quantities of iodine-131 into the atmosphere. However, the advantages are not always clear and would depend on the extent of the hypothetical accident. As with the intake of stable iodide to rapidly saturate the thyroid gland before it accumulates radioactive iodine-131, a careful cost-benefit analysis must first be carried out by the nuclear safety authorities. Indeed, blocking the thyroid activity of a whole population for three months can also have negative consequences for human health, especially for young children. The decision to administer perchlorate, or stable iodine, therefore cannot be left to individual initiative and falls under the authority of the government in the case of a major nuclear accident. Adding perchlorate or iodide directly to the public drinking water is also probably as restrictive as distributing tablets. See also Chlorate Iodide References Further reading External links WebBook page for KClO4 Potassium compounds Perchlorates Pyrotechnic oxidizers
Potassium perchlorate
Chemistry
1,975
11,420,709
https://en.wikipedia.org/wiki/C0299%20RNA
The C0299 RNA family consists of a group of Shigella flexneri and Escherichia coli RNA genes which are 78 bases in length and are found between the hlyE and umuD genes. The function of this RNA is unknown. See also C0343 RNA C0465 RNA C0719 RNA References External links Non-coding RNA
C0299 RNA
Chemistry
79
31,288,404
https://en.wikipedia.org/wiki/NetTutor
NetTutor is a Web-based online tutoring service, founded in 1995, in Tampa, Florida. All NetTutor operations are conducted at Link-Systems International’s main office in Tampa, Florida. History The NetTutor website, trademark, and interface technology are owned by Link-Systems International (LSI), a privately held distance-learning software corporation in Tampa, Florida launched in 1995 with the goal of making academic resources available on the Web. The company was incorporated in the State of Florida on February 27, 1996. NetTutor was the firm's first product and went live later that year, making it possibly the first private online tutoring service to provide tutoring in which the learner could choose tutoring that is either synchronous, with tutor and learner simultaneously online, or asynchronous, where the learner submits questions and receives a tutor's response via direct messaging. LSI began to lease the technology supporting NetTutor (also under the NetTutor name) in the following year. LSI also develops, maintains, and leases hosted access to the proprietary Java-based whiteboard-style interface (WorldWideWhiteboard) with which NetTutor conducts tutoring. Textbook publishers NetTutor was apparently the first online tutoring service to integrate with textbooks. Access to NetTutor, for instance, has been packaged with certain McGraw-Hill math, science, and accounting books since approximately 1997. Over the subsequent years, NetTutor has been packaged with higher education textbooks published by John Wiley and Sons, Pearson, Cengage Learning, and Bedford, Freeman and Worth. Research on the NetTutor interface Early research into NetTutor was conducted by educators eager to employ technology in their own teaching. Consequently, it focuses on technical issues such as usability and robustness, but also on the ability of participants to express themselves in effective online discussion of specialized subjects, especially mathematics. A study at Hampton University in 1999 concluded that NetTutor could effectively support activities such as online office hours. The whiteboard-like nature of the NetTutor interface (today marketed separately by LSI as the WorldWideWhiteboard) offered tools to support subject-specific online chat and to illustrate concepts. In 2004, researchers at Stony Brook University found that “despite some flaws, according to our research NetTutor remains the only workable math-friendly e-learning communication system." Similar results were found using NetTutor technology and tutors at Utah Valley State College (in a study describing the use of NetTutor as "one of the earliest synchronous models for math tutoring") and at the University of Idaho, in a study beginning in 2005—showing increasing acceptance of Web-based online tutoring in the university setting. Usage By 2007, LSI claimed that its NetTutor tutors had conducted over one million online tutorial sessions and by 2016, NetTutor had conducted more than three million tutorial sessions. The service has expanded from its initial ties with the textbook publishing industry and now directly reaches learners in a variety of environments, such as at college-track high school programs, for-profit schools, programs associated with the labor movement, public universities, and community colleges. Features Learners acquire access to NetTutor by directly purchasing tutoring time from the NetTutor website, purchasing a textbook which has a NetTutor support package, or through their school. 
The NetTutor service is typically integrated into an existing virtual learning environment such as a publisher Web portal; a learning management system like Blackboard, Moodle, or Sakai; or a specific campus tutoring website. NetTutor assistance is of the "academic-assistance" type. Conversations take place in a shared virtual whiteboard environment equipped with a toolbar for inserting math, chemistry, accounting, or English proofing symbols. Learners may submit their writing or questions for tutor review, or may choose an available live tutor and engage in synchronous discussion. Learners may save or print out their live tutorial sessions, but live tutoring is exclusively one-on-one, so the possible benefits of a discussion involving a group of peers are not directly available. This mode of access opens NetTutor to several criticisms: that tutors have an interest in exhausting the tutoring hours paid for in order to get learners to purchase more; that the tutor may rush the session by simply providing an answer, both to complete more sessions and in a way that enables the learner to engage in academic dishonesty; or that the tutor may not be committed to the learner’s goals. NetTutor claims to have elaborate tutor vetting and training programs and to make all learning resources available to tutors. LSI also agrees upon detailed tutoring guidelines with representatives of its institutional clients. It also provides support for learners who struggle to use the interface, and it focuses on helping students find answers on their own rather than simply providing correct answers. LSI, apparently in response to several controversies that surround the use of distance education and online tutoring, has taken some measures to assure users of the academic value of NetTutor. Recent research published about NetTutor suggests that offering students the use of online tutoring as a resource in a traditional "brick-and-mortar" setting leads to an increase in student persistence and achievement. Notes References Collison, G., Elbaum, B., Haavind, S. & Tinker, R. (2000). Facilitating online learning: Effective strategies for moderators. Atwood Publishing, Madison. Hewitt, Beth L. (2010). The online writing conference: a guide for teachers and tutors. Boynton/Cook Heinemann, Portsmouth, NJ. Jacques, D., and Salmon, G. (2007). Learning in Groups: A Handbook for on and off line environments. Routledge, London and New York. American educational websites Online tutoring Educational math software Science education software
NetTutor
Mathematics
1,222
34,905,537
https://en.wikipedia.org/wiki/Hybrid%20masonry
Hybrid masonry is a new type of building system that uses engineered, reinforced masonry to brace frame structures. Typically, hybrid masonry is implemented with concrete masonry panels used to brace steel frame structures. The basic concept is to attach a reinforced concrete masonry panel to a structural steel frame such that some combination of gravity forces, story shears and overturning moments can be transferred to the masonry. The structural engineer can choose from three different types of hybrid masonry (I, II, or III) and two different reinforcement anchorage types (a & b). In conventional steel frame building systems, the vertical force resisting steel frame system is supported in the lateral direction by steel bracing or an equivalent system. When the architectural plans call for concrete masonry walls to be placed within the frame, extra labor is required to ensure the masonry fits around the steel frame. Usually, this placement does not take advantage of the structural properties of the masonry panels. In hybrid masonry, the masonry panels take the place of conventional steel bracing, utilizing the structural properties of reinforced concrete masonry walls. The system was first introduced by David Biggs, PE in 2007 at the 10th North American Masonry Conference and was based on historical masonry construction and the practice of anchoring masonry walls in steel frames for out of plane strength. Types There are five different configurations of hybrid masonry. They consist of three different main types with two subsets; however, the first type does not allow both subsets. The three types consist of different constraint conditions within the steel frame and the two subsets are based on the anchoring of the vertical reinforcement in the masonry panel. Type I Type I hybrid masonry has no direct contact with the surround steel frame. For this reason, there are not two subsets of this configuration. Lateral forces are transmitted to the masonry panel through steel plates that are connected to the floor beams and attached to the wall with a through-bolt. The hole in the plate for the through-bolt is slotted so that gravity loads are not transmitted to the masonry panel; the vertical loads travel solely through the steel frame. Thus, the masonry panel takes only the story shear from the above floors and acts as a one story shear wall. The steel plates can be designed in two manners. If the design engineer wishes the steel plate to be the weak point, the plate can be designed as a fuse. The fuse would dissipate energy after yielding and be easily replaced after an extreme event. Alternatively, the plate can be designed so it does not yield before the masonry panel experiences significant damage. A strong plate would localize the damage to the masonry. Type II Type II hybrid masonry is constrained vertically by the steel frame; however, the sides of the panel still have a gap between the steel and the masonry. The vertical contact transfers the gravity load from the beam into the masonry panel, increasing its flexural and shear strength. Instead of the plates transferring the lateral force from the steel to the masonry, shear studs are welded to the bottom side of the beam. Grout is then used to fill the space between the masonry and the steel beam. With this contact, the wall is subject to story shears, gravity loads, as well as overturning moments much like a continuous shear wall. The two subsets of Type II hybrid masonry are Type IIa and Type IIb. 
The difference between the two systems is whether the vertical reinforcement is anchored into the base or to the steel beam. In Type IIa, the vertical reinforcement is anchored and can develop tension forces along its length. The vertical reinforcement is not anchored in Type IIb hybrid masonry and consequently the rebar cannot take the force on the tension side of the wall. Instead, the top of the wall undergoes compression. Type III Similar to Type II hybrid masonry, Type III hybrid masonry has vertical confinement. In addition to the vertical contact with the beam, contact with the columns is also used for horizontal confinement. Shear studs are welded on the insides of the columns to transfer vertical forces that are the result of axial load in the columns as well as shear in the wall. The wall system resembles infill masonry in terms of confinement in the steel, yet differs in that it is grouted and reinforced, allowing for a more ductile response. Research Prof. Ian Robertson at the University of Hawaiʻi at Mānoa (UHM) has developed and tested the steel plate connections between the steel frame and the masonry. The result was a design method for a fuse plate that yields along its entire length, with the aim of creating a ductile, energy-dissipating fuse. Bolt pullout tests were also performed at UHM to validate the strength of the masonry with through-bolts. A research team at Rice University develops computational models for the study of hybrid masonry structural systems. Profs. Larry A. Fahnestock and Daniel P. Abrams are performing full-scale experimental tests on hybrid masonry at the Network for Earthquake Engineering Simulation (NEES) site at the University of Illinois at Urbana-Champaign. See also Masonry Concrete masonry unit Steel Braced Frame Structural engineering References NCMA 2009: National Concrete Masonry Association, Hybrid Concrete Masonry Design TEK 14-9A. Herndon, VA, 2009 External links Hybrid Masonry Research Page Tech Briefs Hybrid Masonry Videos Construction Masonry Structural steel
Hybrid masonry
Engineering
1,057
2,825,553
https://en.wikipedia.org/wiki/Photosensitizer
Photosensitizers are light absorbers that alter the course of a photochemical reaction. They usually are catalysts. They can function by many mechanisms, sometimes they donate an electron to the substrate, sometimes they abstract a hydrogen atom from the substrate. At the end of this process, the photosensitizer returns to its ground state, where it remains chemically intact, poised to absorb more light. One branch of chemistry which frequently utilizes photosensitizers is polymer chemistry, using photosensitizers in reactions such as photopolymerization, photocrosslinking, and photodegradation. Photosensitizers are also used to generate prolonged excited electronic states in organic molecules with uses in photocatalysis, photon upconversion and photodynamic therapy. Generally, photosensitizers absorb electromagnetic radiation consisting of infrared radiation, visible light radiation, and ultraviolet radiation and transfer absorbed energy into neighboring molecules. This absorption of light is made possible by photosensitizers' large de-localized π-systems, which lowers the energy of HOMO and LUMO orbitals to promote photoexcitation. While many photosensitizers are organic or organometallic compounds, there are also examples of using semiconductor quantum dots as photosensitizers. Theory Mechanistic considerations Photosensitizers absorb light (hν) and transfer the energy from the incident light into another nearby molecule either directly or by a chemical reaction. Upon absorbing photons of radiation from incident light, photosensitizers transform into an excited singlet state. The single electron in the excited singlet state then flips in its intrinsic spin state via Intersystem crossing to become an excited triplet state. Triplet states typically have longer lifetimes than excited singlets. The prolonged lifetime increases the probability of interacting with other molecules nearby. Photosensitizers experience varying levels of efficiency for intersystem crossing at different wavelengths of light based on the internal electronic structure of the molecule. Parameters For a molecule to be considered a photosensitizer: The photosensitizer must impart a physicochemical change upon a substrate after absorbing incident light. Upon imparting a chemical change, the photosensitizer returns to its original chemical form. It is important to differentiate photosensitizers from other photochemical interactions including, but not limited to, photoinitiators, photocatalysts, photoacids and photopolymerization. Photosensitizers utilize light to enact a chemical change in a substrate; after the chemical change, the photosensitizer returns to its initial state, remaining chemically unchanged from the process. Photoinitiators absorb light to become a reactive species, commonly a radical or an ion, where it then reacts with another chemical species. These photoinitiators are often completely chemically changed after their reaction. Photocatalysts accelerate chemical reactions which rely upon light. While some photosensitizers may act as photocatalysts, not all photocatalysts may act as photosensitizers. Photoacids (or photobases) are molecules which become more acidic (or basic) upon the absorption of light. Photoacids increase in acidity upon absorbing light and thermally reassociate back into their original form upon relaxing. Photoacid generators undergo an irreversible change to become an acidic species upon light absorption. Photopolymerization can occur in two ways. 
Photopolymerization can occur directly wherein the monomers absorb the incident light and begin polymerizing, or it can occur through a photosensitizer-mediated process where the photosensitizer absorbs the light first before transferring energy into the monomer species. History Photosensitizers have existed within natural systems for as long as chlorophyll and other light sensitive molecules have been a part of plant life, but studies of photosensitizers began as early as the 1900s, where scientists observed photosensitization in biological substrates and in the treatment of cancer. Mechanistic studies related to photosensitizers began with scientists analyzing the results of chemical reactions where photosensitizers photo-oxidized molecular oxygen into peroxide species. The results were understood by calculating quantum efficiencies and fluorescent yields at varying wavelengths of light and comparing these results with the yield of reactive oxygen species. However, it was not until the 1960s that the electron donating mechanism was confirmed through various spectroscopic methods including reaction-intermediate studies and luminescence studies. The term photosensitizer does not appear in scientific literature until the 1960s. Instead, scientists would refer to photosensitizers as sensitizers used in photo-oxidation or photo-oxygenation processes. Studies during this time period involving photosensitizers utilized organic photosensitizers, consisting of aromatic hydrocarbon molecules, which could facilitate synthetic chemistry reactions. However, by the 1970s and 1980s, photosensitizers gained attraction in the scientific community for their role within biologic processes and enzymatic processes. Currently, photosensitizers are studied for their contributions to fields such as energy harvesting, photoredox catalysis in synthetic chemistry, and cancer treatment. Types of photosensitization processes There are two main pathways for photosensitized reactions. Type I In Type I photosensitized reactions, the photosensitizer is excited by a light source into a triplet state. The excited, triplet state photosensitizer then reacts with a substrate molecule which is not molecular oxygen to both form a product and reform the photosensitizer. Type I photosensitized reactions result in the photosensitizer being quenched by a different chemical substrate than molecular oxygen. Type II In Type II photosensitized reactions, the photosensitizer is excited by a light source into a triplet state. The excited photosensitizer then reacts with a ground state, triplet oxygen molecule. This excites the oxygen molecule into the singlet state, making it a reactive oxygen species. Upon excitation, the singlet oxygen molecule reacts with a substrate to form a product. Type II photosensitized reaction result in the photosensitizer being quenched by a ground state oxygen molecule which then goes on to react with a substrate to form a product. Composition of photosensitizers Photosensitizers can be placed into 3 generalized domains based on their molecular structure. These three domains are organometallic photosensitizers, organic photosensitizers, and nanomaterial photosensitizers. Organometallic Organometallic photosensitizers contain a metal atom chelated to at least one organic ligand. The photosensitizing capacities of these molecules result from electronic interactions between the metal and ligand(s). Popular electron-rich metal centers for these complexes include Iridium, Ruthenium, and Rhodium. 
These metals, as well as others, are common metal centers for photosensitizers due to their highly filled d-orbitals, or high d-electron counts, to promote metal to ligand charge transfer from pi-electron accepting ligands. This interaction between the metal center and the ligand leads to a large continuum of orbitals within both the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) which allows for excited electrons to switch multiplicities via intersystem crossing.   While many organometallic photosensitizer compounds are made synthetically, there also exists naturally occurring, light-harvesting organometallic photosensitizers as well. Some relevant naturally occurring examples of organometallic photosensitizers include Chlorophyll A and Chlorophyll B. Organic Organic photosensitizers are carbon-based molecules which are capable of photosensitizing. The earliest studied photosensitizers were aromatic hydrocarbons which absorbed light in the presence of oxygen to produce reactive oxygen species. These organic photosensitizers are made up of highly conjugated systems which promote electron delocalization. Due to their high conjugation, these systems have a smaller gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) as well as a continuum of orbitals within the HOMO and LUMO. The smaller band gap and the continuum of orbitals in both the conduction band and the valence band allow for these materials to enter their triplet state more efficiently, making them better photosensitizers. Some notable organic photosensitizers which have been studied extensively include benzophenones, methylene blue, rose Bengal, flavins, pterins and others. Nanomaterials Quantum dots Colloidal quantum dots are nanoscale semiconductor materials with highly tunable optical and electronic properties. Quantum dots photosensitize via the same mechanism as organometallic photosensitizers and organic photosensitizers, but their nanoscale properties allow for greater control in distinctive aspects. Some key advantages to the use of quantum dots as photosensitizers includes their small, tunable band gap which allows for efficient transitions to the triplet state, and their insolubility in many solvents which allows for easy retrieval from a synthetic reaction mixture. Nanorods Nanorods, similar in size to quantum dots, have tunable optical and electronic properties. Based on their size and material composition, it is possible to tune the maximum absorption peak for nanorods during their synthesis. This control has led to the creation of photosensitizing nanorods. Applications Medical Photodynamic therapy Photodynamic therapy utilizes Type II photosensitizers to harvest light to degrade tumors or cancerous masses. This discovery was first observed back in 1907 by Hermann von Tappeiner when he utilized eosin to treat skin tumors. The photodynamic process is predominantly a noninvasive technique wherein the photosensitizers are put inside a patient so that it may accumulate on the tumor or cancer. When the photosensitizer reaches the tumor or cancer, wavelength specific light is shined on the outside of the patient's affected area. This light (preferably near infrared frequency as this allows for the penetration of the skin without acute toxicity) excites the photosensitizer's electrons into the triplet state. 
Upon excitation, the photosensitizer begins transferring energy to neighboring ground state triplet oxygen to generate excited singlet oxygen. The resulting excited oxygen species then selectively degrades the tumor or cancerous mass. In February 2019, medical scientists announced that iridium attached to albumin, creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light (a process called photodynamic therapy), destroy the cancer cells. Energy sources Dye sensitized solar cells In 1972, scientists discovered that chlorophyll could absorb sunlight and transfer energy into electrochemical cells. This discovery eventually led to the use of photosensitizers as sunlight-harvesting materials in solar cells, mainly through the use of photosensitizer dyes. Dye Sensitized Solar cells utilize these photosensitizer dyes to absorb photons from solar light and transfer energy rich electrons to the neighboring semiconductor material to generate electric energy output. These dyes act as dopants to semiconductor surfaces which allows for the transfer of light energy from the photosensitizer to electronic energy within the semiconductor. These photosensitizers are not limited to dyes. They may take the form of any photosensitizing structure, dependent on the semiconductor material to which they are attached. Hydrogen generating catalysts Via the absorption of light, photosensitizers can utilize triplet state transfer to reduce small molecules, such as water, to generate Hydrogen gas. As of right now, photosensitizers have generated hydrogen gas by splitting water molecules at a small, laboratory scale. Synthetic chemistry Photoredox chemistry In the early 20th century, chemists observed that various aromatic hydrocarbons in the presence of oxygen could absorb wavelength specific light to generate a peroxide species. This discovery of oxygen's reduction by a photosensitizer led to chemists studying photosensitizers as photoredox catalysts for their roles in the catalysis of pericyclic reactions and other reduction and oxidation reactions. Photosensitizers in synthetic chemistry allow for the manipulation of electronic transitions within molecules through an externally applied light source. These photosensitizers used in redox chemistry may be organic, organometallic, or nanomaterials depending on the physical and spectral properties required for the reaction. Biological effects of photosensitizers Photosensitizers that are readily incorporated into the external tissues can increase the rate at which reactive oxygen species are generated upon exposure to UV light (such as UV-containing sunlight). Some photosensitizing agents, such as St. John's Wort, appear to increase the incidence of inflammatory skin conditions in animals and have been observed to slightly reduce the minimum tanning dose in humans. Some examples of photosensitizing medications (both investigatory and approved for human use) are: St. John's Wort 9-me-bc Doxepin Amoxapine Ethinyl estradiol See also Artificial photosynthesis Photosensitivity Photodynamic therapy Photocatalysis Dye-sensitized solar cell Photoredox catalysis Light harvesting materials Photoswitch References External links Drug delivery devices Photochemistry
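The singlet-to-triplet picture described in the Theory section above can be made concrete with a toy rate-equation model. The sketch below is illustrative only: the three states (ground S0, excited singlet S1, triplet T1) follow the article, but every rate constant is an invented round number rather than a value for any particular photosensitizer.

```python
# Toy kinetics for the ground state S0, excited singlet S1 and triplet T1.
# All rate constants below are invented for illustration only.
k_exc = 1.0e6    # S0 -> S1 excitation rate (1/s), proportional to light intensity
k_fl  = 1.0e8    # S1 -> S0 fluorescence / internal conversion (1/s)
k_isc = 5.0e8    # S1 -> T1 intersystem crossing (1/s)
k_t   = 1.0e3    # T1 -> S0 decay, including transfer to substrate or oxygen (1/s)

s0, s1, t1 = 1.0, 0.0, 0.0       # population fractions
dt, steps = 1.0e-10, 200_000     # simple forward-Euler integration, ~20 microseconds

for _ in range(steps):
    ds0 = -k_exc * s0 + k_fl * s1 + k_t * t1
    ds1 = k_exc * s0 - (k_fl + k_isc) * s1
    dt1 = k_isc * s1 - k_t * t1
    s0, s1, t1 = s0 + ds0 * dt, s1 + ds1 * dt, t1 + dt1 * dt

print(f"S0={s0:.4f}  S1={s1:.2e}  T1={t1:.4f}")
# Because intersystem crossing competes well with fluorescence and the triplet
# decays several orders of magnitude more slowly than the singlet, nearly all
# of the excited population ends up parked in the long-lived triplet state.
```

The only point of the numbers is the separation of timescales: a long-lived triplet population is what makes the Type I and Type II transfer steps described above feasible.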
Photosensitizer
Chemistry
2,760
2,945,005
https://en.wikipedia.org/wiki/Network-to-network%20interface
In telecommunications, a network-to-network interface (NNI) is an interface that specifies signaling and management functions between two networks. An NNI circuit can be used for interconnection of signaling (e.g., SS7), Internet Protocol (IP) (e.g., MPLS) or ATM networks. In networks based on MPLS or GMPLS, an NNI is used for the interconnection of core provider routers (class 4 or higher). In the case of GMPLS, the type of interconnection can vary across back-to-back, eBGP or mixed NNI connection scenarios, depending on the type of VRF exchange used for interconnection. In the case of back-to-back interconnection, a VRF is needed to create VLANs and, subsequently, sub-interfaces (VLAN headers for Ethernet and DLCI headers for Frame Relay network packets) on each interface used for the NNI circuit. In the case of eBGP NNI interconnection, IP routers are configured to exchange VRF records dynamically, without creating VLANs. An NNI can also be used for interconnection of two VoIP nodes. In mixed or full-mesh scenarios, other NNI types are possible. NNI interconnection is encapsulation-independent, but Ethernet and Frame Relay are commonly used. See also User–network interface Asynchronous Transfer Mode References Network management
Network-to-network interface
Technology,Engineering
308
63,200,483
https://en.wikipedia.org/wiki/Wulf%20Bernard%20Kunkel
Wulf Bernard Kunkel (6 February 1923, Eichenau, Germany – 3 September 2013, Oakland, California) was a German-born American physicist, specializing in plasma physics, especially "the development of ion beams for plasma heating". Kunkel attended the International Quaker School Eerde in Eerde, Ommen in the Netherlands. During the Second World War, he studied physics at the University of Amsterdam. After the war ended, he studied physics at the University of California, Berkeley (UC Berkeley), where he graduated with a bachelor's degree in 1948 and a PhD in 1951. From 1951 to 1956 he worked at UC Berkeley's Institute of Engineering Research. In 1956 he joined the UC Radiation Laboratory (renamed in 1958 the Lawrence Radiation Laboratory and now the Lawrence Berkeley National Laboratory, LBNL) and also became a lecturer in the physics department at UC Berkeley. There he was a full professor from 1967 to 1991, when he retired as professor emeritus. From 1971 to 1991 he served as leader of LBNL's fusion research program. Kunkel was a Guggenheim Fellow for the two academic years 1955–1956 and 1972–1973. He was elected a Fellow of the American Physical Society in 1955. He received Germany's Alexander von Humboldt Award in 1982. Upon his death he was survived by his widow, three children, and two grandchildren. Selected publications A 1979 paper which "describes the use and future of high-power neutral-beam injectors to heat plasma." References 1923 births 2013 deaths University of California, Berkeley alumni University of California, Berkeley College of Letters and Science faculty Lawrence Berkeley National Laboratory people Plasma physicists 20th-century American physicists 20th-century German physicists Fellows of the American Physical Society German emigrants to the United States
Wulf Bernard Kunkel
Physics
350
5,077,439
https://en.wikipedia.org/wiki/Ugly%20duckling%20theorem
The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969. Mathematical formula Suppose there are n things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the n objects. There are 2^n such ways, the size of the power set of n objects. One might try to use the number of classes two objects have in common as a measure of their similarity. However, this does not work. Any two objects have exactly the same number of classes in common if we can form any possible class, namely 2^(n−1) (half the total number of classes there are). To see that this is so, one may imagine each class is represented by an n-bit string (or binary-encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are 2^n such strings. As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first 2^(n−1) numbers will have bit #1 set to zero, and the second 2^(n−1) will have it set to one. Within each of those blocks, the top 2^(n−2) will have bit #2 set to zero and the other 2^(n−2) will have it set to one, so they agree on two blocks of 2^(n−2), or on half of all the cases, no matter which two elements one picks. So if we have no preconceived bias about which categories are better, everything is then equally similar (or equally dissimilar). The number of predicates simultaneously satisfied by two non-identical elements is constant over all such pairs. Thus, some kind of inductive bias is needed to make judgements that prefer certain categories over others. Boolean functions Let x_1, x_2, ..., x_n be a set of n vectors of k booleans each. The ugly duckling is the vector which is least like the others. Given the booleans, this can be computed using Hamming distance. However, the choice of boolean features to consider could have been somewhat arbitrary. Perhaps there were features derivable from the original features that were important for identifying the ugly duckling. The set of booleans in the vector can be extended with new features computed as boolean functions of the k original features. The only canonical way to do this is to extend it with all possible Boolean functions. The resulting completed vectors have 2^(2^k) features. The ugly duckling theorem states that there is no ugly duckling because any two completed vectors will either be equal or differ in exactly half of the features. Proof. Let x and y be two vectors. If they are the same, then their completed vectors must also be the same because any Boolean function of x will agree with the same Boolean function of y. If x and y are different, then there exists a coordinate i where the i-th coordinate of x differs from the i-th coordinate of y.
Now the completed features contain every Boolean function on the k Boolean variables, with each one appearing exactly once. Viewing these Boolean functions as polynomials in k variables over GF(2), segregate the functions into pairs (f + x_i, f), where f + x_i contains the i-th coordinate as a linear term and f is without that linear term. Now, for every such pair, x and y will agree on exactly one of the two functions: because the i-th coordinates of x and y differ and the two functions differ only by the linear term x_i, if x and y agree on one member of the pair they must disagree on the other, and vice versa. (This proof is believed to be due to Watanabe.) Discussion A possible way around the ugly duckling theorem would be to introduce a constraint on how similarity is measured by limiting the properties involved in classification, for instance, between A and B. However, Medin et al. (1993) point out that this does not actually resolve the arbitrariness or bias problem, since the respect in which A is similar to B "varies with the stimulus context and task, so that there is no unique answer to the question of how similar is one object to another". For example, "a barberpole and a zebra would be more similar than a horse and a zebra if the feature striped had sufficient weight. Of course, if these feature weights were fixed, then these similarity relations would be constrained". Yet the property "striped" as a weight 'fix' or constraint is arbitrary itself, meaning: "unless one can specify such criteria, then the claim that categorization is based on attribute matching is almost entirely vacuous". Stamos (2003) remarked that some judgments of overall similarity are non-arbitrary in the sense that they are useful. Unless some properties are considered more salient, or 'weighted' more important than others, everything will appear equally similar, hence Watanabe (1986) wrote: "any objects, in so far as they are distinguishable, are equally similar". In a weaker setting that assumes infinitely many properties, Murphy and Medin (1985) give an example of two putatively classified things, plums and lawnmowers. According to Woodward, the ugly duckling theorem is related to Schaffer's Conservation Law for Generalization Performance, which states that all algorithms for learning Boolean functions from input/output examples have the same overall generalization performance as random guessing. The latter result is generalized by Woodward to functions on countably infinite domains. See also No free lunch in search and optimization No free lunch theorem Identity of indiscernibles – Classification (discernibility) is possible (with or without a bias), but there cannot be separate objects or entities that have all their properties in common. New riddle of induction Notes Theorems Arguments Machine learning Ontology Metaphors referring to birds 1960s neologisms Bias
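The counting argument in the theorem can be checked by brute force. The short sketch below (Python, with an arbitrary four-object universe whose names are invented for the example) enumerates all 2^n subsets and confirms that every pair of distinct objects is treated alike, both included or both excluded, by exactly 2^(n−1) of them.

```python
from itertools import chain, combinations

def all_classes(objects):
    """Every subset of the universe, i.e. every possible extensional class."""
    objs = list(objects)
    return list(chain.from_iterable(combinations(objs, r) for r in range(len(objs) + 1)))

universe = ["duckling", "swan1", "swan2", "goose"]   # n = 4, so 2**4 = 16 classes
classes = all_classes(universe)

for a, b in combinations(universe, 2):
    agree = sum(1 for c in classes if (a in c) == (b in c))
    print(f"{a} / {b}: alike in {agree} of {len(classes)} classes")
# Every pair prints 8 of 16, i.e. 2**(n-1), regardless of which pair is chosen.
```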
Ugly duckling theorem
Engineering
1,310
142,425
https://en.wikipedia.org/wiki/Herpetology
Herpetology (from Greek ἑρπετόν herpetón, meaning "reptile" or "creeping animal") is a branch of zoology concerned with the study of amphibians (including frogs, salamanders, and caecilians (Gymnophiona)) and reptiles (including snakes, lizards, turtles, crocodilians, and tuataras). Birds, which are cladistically included within Reptilia, are traditionally excluded here; the separate scientific study of birds is the subject of ornithology. The precise definition of herpetology is the study of ectothermic (cold-blooded) tetrapods. This definition of "herps" (otherwise called "herptiles" or "herpetofauna") excludes fish; however, it is not uncommon for herpetological and ichthyological scientific societies to collaborate. For instance, groups such as the American Society of Ichthyologists and Herpetologists have co-published journals and hosted conferences to foster the exchange of ideas between the fields. Herpetological societies are formed to promote interest in reptiles and amphibians, both captive and wild. Herpetological studies can offer benefits relevant to other fields by providing research on the role of amphibians and reptiles in global ecology. For example, by monitoring amphibians that are very sensitive to environmental changes, herpetologists record visible warnings that significant climate changes are taking place. Although they can be deadly, some toxins and venoms produced by reptiles and amphibians are useful in human medicine. Currently, some snake venom has been used to create anti-coagulants that work to treat strokes and heart attacks. Naming and etymology The word herpetology is from Greek: ἑρπετόν, herpetón, "creeping animal" and , -logia, "knowledge". "Herp" is a vernacular term for non-avian reptiles and amphibians. It is derived from the archaic term "herpetile", with roots back to Linnaeus's classification of animals, in which he grouped reptiles and amphibians in the same class. There are over 6700 species of amphibians and over 9000 species of reptiles. Despite its modern taxonomic irrelevance, the term has persisted, particularly in the names of herpetology, the scientific study of non-avian reptiles and amphibians, and herpetoculture, the captive care and breeding of reptiles and amphibians. Subfields The field of herpetology can be divided into areas dealing with particular taxonomic groups such as frogs and other amphibians (batrachology), snakes (ophiology or ophidiology), lizards (saurology) and turtles (cheloniology, chelonology, or testudinology). More generally, herpetologists work on functional problems in the ecology, evolution, physiology, behavior, taxonomy, or molecular biology of amphibians and reptiles. Amphibians or reptiles can be used as model organisms for specific questions in these fields, such as the role of frogs in the ecology of a wetland. All of these areas are related through their evolutionary history, an example being the evolution of viviparity (including behavior and reproduction). Careers Career options in the field of herpetology include lab research, field studies and surveys, assistance in veterinary and medical procedures, zoological staff, museum staff, and college teaching. In modern academic science, it is rare for an individual to solely consider themselves to be a herpetologist. Most individuals focus on a particular field such as ecology, evolution, taxonomy, physiology, or molecular biology, and within that field ask questions pertaining to or best answered by examining reptiles and amphibians. 
For example, an evolutionary biologist who is also a herpetologist may choose to work on an issue such as the evolution of warning coloration in coral snakes. Modern herpetological writers include Mark O'Shea and Philip Purser. Modern herpetological showmen include Jeff Corwin, Steve Irwin (popularly known as the "Crocodile Hunter"), and Austin Stevens, popularly known as "Austin Snakeman" in the TV series Austin Stevens: Snakemaster. Herpetology is an established hobby around the world due to the varied biodiversity in many environments. Many amateur herpetologists coin themselves as "herpers". Study Most colleges or universities do not offer a major in herpetology at the undergraduate or the graduate level. Instead, persons interested in herpetology select a major in the biological sciences. The knowledge learned about all aspects of the biology of animals is then applied to an individual study of herpetology. Journals Herpetology research is published in academic journals including Ichthyology & Herpetology, founded in 1913 (under the name Copeia in honour of Edward Drinker Cope); Herpetologica, founded in 1936; Reptiles and amphibians, founded in 1990; and Contemporary Herpetology, founded in 1997 and stopped publishing in 2009. See also Herping List of herpetologists List of herpetology academic journals Reptile Database AmphibiaWeb References Further reading Adler, Kraig (1989). Contributions to the History of Herpetology. Society for the Study of Amphibians and Reptiles (SSAR). Eatherley, Dan (2015). Bushmaster: Raymond Ditmars and the Hunt for the World's Largest Viper. New York: Arcade. 320 pp. . Goin, Coleman J.; Goin, Olive B.; Zug, George R. (1978). Introduction to Herpetology, Third Edition. San Francisco: W. H. Freeman and Company. xi + 378 pp. . External links Iranian Herpetological Studies Institute (IHSI) Field Herpetology Guide American Society of Ichthyologists and Herpetologists Herpetological Conservation and Biology Societas Europaea Herpetologica Distribution Maps for European Reptiles and Amphibians Center for North American Herpetology over 500 species of reptiles and amphibians European Field Herping Community New Zealand Herpetology Chicago Herpetological Society Biology of the Reptilia is an online copy of the full text of a 22-volume 13,000-page summary of the state of research of reptiles. HerpMapper is a database of reptile and amphibian sightings Amphibian and Reptile Atlas of Peninsular California, San Diego Natural History Museum A Primer on Reptiles and Amphibians Field Herp Forum Subfields of zoology Scoutcraft
Herpetology
Biology
1,347
25,059,913
https://en.wikipedia.org/wiki/Swedish%20Open%20Cultural%20Heritage
SOCH (Swedish Open Cultural Heritage) is a web service used for searching and retrieving data from the museum and historical environment sectors in Sweden. SOCH aggregates metadata from different central, regional and local databases so that applications can search and present cultural heritage data via an open API. The aim is to make it easier for developers to build applications that exploit SOCH. As of March 2013, more than ten different applications had been built using the SOCH API. One of the first applications built on SOCH was a mobile phone application displaying ancient monuments on a map layer. A number of museums are also building applications on SOCH in order to make more than their own collections available online. In 2012 commercial applications started to appear using SOCH data. SOCH is operated and developed at the Swedish National Heritage Board (SNHB). SNHB has used the SOCH API for its own applications, http://www.kringla.nu and http://www.platsr.se. External links SOCH Description Swedish National Heritage Board Cultural heritage of Sweden Databases in Sweden
Swedish Open Cultural Heritage
Technology
219
760,294
https://en.wikipedia.org/wiki/S/2003%20J%204
is a natural satellite of Jupiter. It was discovered by a team of astronomers from the University of Hawaii led by Scott S. Sheppard in 2003. is about 2 km in diameter, and orbits Jupiter at an average distance of 23,000,000 km in 669 days, at an inclination of 149° to the ecliptic, in a retrograde direction and with an eccentricity of 0.497. It belongs to the Pasiphae group, irregular retrograde moons orbiting Jupiter at distances ranging between 22.8 and 24.1 Gm, and with inclinations ranging between 144.5° and 158.3°. This moon was considered lost until late 2020, when it was recovered by amateur astronomers Kai Ly and Sam Deen in archival images from 2001-2018. The recovery of the moon was announced by the Minor Planet Center on 13 January 2021. References Pasiphae group Moons of Jupiter Irregular satellites Astronomical objects discovered in 2003 Moons with a retrograde orbit Recovered astronomical objects
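As a rough sanity check on the orbital figures quoted above, Kepler's third law applied to the quoted mean distance gives a period in the same ballpark as the quoted 669 days. The sketch below treats the 23,000,000 km mean distance as if it were a semi-major axis and assumes GM for Jupiter of roughly 1.267e17 m³/s²; the several-percent mismatch is to be expected, since this distant retrograde orbit is strongly perturbed by the Sun and a mean distance is not the same thing as an osculating semi-major axis.

```python
# Rough Keplerian consistency check for the quoted orbit of this moon.
import math

GM_JUPITER = 1.267e17          # m^3 s^-2, approximate gravitational parameter
a = 23_000_000 * 1_000         # quoted mean distance, converted to metres

period_s = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER)   # Kepler's third law
print(f"Keplerian period: {period_s / 86_400:.0f} days (quoted: 669 days)")
```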
S/2003 J 4
Astronomy
201
16,781,876
https://en.wikipedia.org/wiki/HD%2093083%20b
HD 93083 b is an extrasolar planet orbiting the K-type subgiant star HD 93083 in the constellation Antlia. It is probably much less massive than Jupiter, although only the minimum mass is known. The planet's mean distance from the star is about half Earth's distance from the Sun, and the orbit is slightly eccentric. This planet was discovered by the HARPS search team. The planet HD 93083 b is named Melquíades. The name was selected by Colombia in the NameExoWorlds campaign during the 100th anniversary of the IAU. Melquíades is a fictional character in the novel One Hundred Years of Solitude, who walks around Macondo (the name given to the host star HD 93083). HD 93083 b lies within the habitable zone of its host star. Stability analysis reveals that the orbits of Earth-sized planets located in HD 93083 b's Trojan points would be stable for long periods of time. See also High Accuracy Radial Velocity Planet Searcher or HARPS References External links Corot Simulation Exoplanets discovered in 2005 Giant planets Antlia Exoplanets detected by radial velocity Exoplanets with proper names Giant planets in the habitable zone
HD 93083 b
Astronomy
250
17,232,430
https://en.wikipedia.org/wiki/Aluminized%20cloth
Aluminized cloth is a material designed to reflect thermal radiation. Applications include fire proximity suits, emergency space blankets, protection in molten metal handling, and insulation for building and containers. Aluminium powder was added to aircraft dope which was then used to give a shiny finish to fabric-covered aircraft, so protecting them from sunlight. The Hindenburg airship was treated in this way and it has been suggested that the aluminium powder made the skin more combustible and so caused or contributed to the Hindenburg disaster. This theory is controversial and experiments have been conducted to test the hypothesis. See also Reflectivity Thermal insulation MythBusters (2007 season)#Hindenburg Mystery References Safety clothing
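A back-of-the-envelope grey-body estimate shows why the low emissivity (high reflectivity) of an aluminized surface matters for the applications listed above. The emissivity values and temperatures below are assumed, typical-order figures rather than measurements of any particular product.

```python
# Simplified grey-body estimate of radiant heat absorbed by a surface facing a
# hot source, using the Stefan-Boltzmann law. For a grey surface, absorptivity
# equals emissivity, so a shiny aluminized layer absorbs far less radiant heat.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiant_flux(emissivity, t_source_k, t_surface_k):
    """Net radiant flux absorbed by a grey surface inside a hot enclosure."""
    return emissivity * SIGMA * (t_source_k**4 - t_surface_k**4)

hot, surface = 1100.0, 310.0   # assumed: flames around 1100 K, surface near 37 C
for name, eps in [("aluminized", 0.05), ("plain fabric", 0.90)]:
    print(f"{name:12s}: {net_radiant_flux(eps, hot, surface) / 1000:.1f} kW/m^2")
```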
Aluminized cloth
Physics
138
4,533,952
https://en.wikipedia.org/wiki/Comet%20%28programming%29
Comet is a web application model in which a long-held HTTPS request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins. The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time. The use of Comet techniques in web development predates the use of the word Comet as a neologism for the collective techniques. Comet is known by several other names, including Ajax Push, Reverse Ajax, Two-way-web, HTTP Streaming, and HTTP server push among others. The term Comet is not an acronym, but was coined by Alex Russell in his 2006 blog post. In recent years, the standardisation and widespread support of WebSocket and Server-sent events has rendered the Comet model obsolete. History Early Java applets The ability to embed Java applets into browsers (starting with Netscape Navigator 2.0 in March 1996) made two-way sustained communications possible, using a raw TCP socket to communicate between the browser and the server. This socket can remain open as long as the browser is at the document hosting the applet. Event notifications can be sent in any format text or binary and decoded by the applet. The first browser-to-browser communication framework The very first application using browser-to-browser communications was Tango Interactive, implemented in 1996–98 at the Northeast Parallel Architectures Center (NPAC) at Syracuse University using DARPA funding. TANGO architecture has been patented by Syracuse University. TANGO framework has been extensively used as a distance education tool. The framework has been commercialized by CollabWorx and used in a dozen or so Command&Control and Training applications in the United States Department of Defense. First Comet applications The first set of Comet implementations dates back to 2000, with the Pushlets, Lightstreamer, and KnowNow projects. Pushlets, a framework created by Just van den Broecke, was one of the first open source implementations. Pushlets were based on server-side Java servlets, and a client-side JavaScript library. Bang Networks a Silicon Valley start-up backed by Netscape co-founder Marc Andreessen had a lavishly-financed attempt to create a real-time push standard for the entire web. In April 2001, Chip Morningstar began developing a Java-based (J2SE) web server which used two HTTP sockets to keep open two communications channels between the custom HTTP server he designed and a client designed by Douglas Crockford; a functioning demo system existed as of June 2001. The server and client used a messaging format that the founders of State Software, Inc. assented to coin as JSON following Crockford's suggestion. The entire system, the client libraries, the messaging format known as JSON and the server, became the State Application Framework, parts of which were sold and used by Sun Microsystems, Amazon.com, EDS and Volkswagen. In March 2006, software engineer Alex Russell coined the term Comet in a post on his personal blog. The new term was a play on Ajax (Ajax and Comet both being common household cleaners in the USA). 
In 2006, some applications exposed those techniques to a wider audience: Meebo’s multi-protocol web-based chat application enabled users to connect to AOL, Yahoo, and Microsoft chat platforms through the browser; Google added web-based chat to Gmail; JotSpot, a startup since acquired by Google, built Comet-based real-time collaborative document editing. New Comet variants were created, such as the Java-based ICEfaces JSF framework (although they prefer the term "Ajax Push"). Others that had previously used Java-applet based transports switched instead to pure-JavaScript implementations. Implementations Comet applications attempt to eliminate the limitations of the page-by-page web model and traditional polling by offering two-way sustained interaction, using a persistent or long-lasting HTTP connection between the server and the client. Since browsers and proxies are not designed with server events in mind, several techniques to achieve this have been developed, each with different benefits and drawbacks. The biggest hurdle is the HTTP 1.1 specification, which states "this specification... encourages clients to be conservative when opening multiple connections". Therefore, holding one connection open for real-time events has a negative impact on browser usability: the browser may be blocked from sending a new request while waiting for the results of a previous request, e.g., a series of images. This can be worked around by creating a distinct hostname for real-time information, which is an alias for the same physical server. This strategy is an application of domain sharding. Specific methods of implementing Comet fall into two major categories: streaming and long polling. Streaming An application using streaming Comet opens a single persistent connection from the client browser to the server for all Comet events. These events are incrementally handled and interpreted on the client side every time the server sends a new event, with neither side closing the connection. Specific techniques for accomplishing streaming Comet include the following: Hidden iframe A basic technique for dynamic web application is to use a hidden iframe HTML element (an inline frame, which allows a website to embed one HTML document inside another). This invisible iframe is sent as a chunked block, which implicitly declares it as infinitely long (sometimes called "forever frame"). As events occur, the iframe is gradually filled with script tags, containing JavaScript to be executed in the browser. Because browsers render HTML pages incrementally, each script tag is executed as it is received. Some browsers require a specific minimum document size before parsing and execution is started, which can be obtained by initially sending 1–2 kB of padding spaces. One benefit of the iframes method is that it works in every common browser. Two downsides of this technique are the lack of a reliable error handling method, and the impossibility of tracking the state of the request calling process. XMLHttpRequest The XMLHttpRequest (XHR) object, a tool used by Ajax applications for browser–server communication, can also be pressed into service for server–browser Comet messaging by generating a custom data format for an XHR response, and parsing out each event using browser-side JavaScript; relying only on the browser firing the onreadystatechange callback each time it receives new data. Ajax with long polling None of the above streaming transports work across all modern browsers without negative side-effects. 
This forces Comet developers to implement several complex streaming transports, switching between them depending on the browser. Consequently, many Comet applications use long polling, which is easier to implement on the browser side, and works, at minimum, in every browser that supports XHR. As the name suggests, long polling requires the client to poll the server for an event (or set of events). The browser makes an Ajax-style request to the server, which is kept open until the server has new data to send to the browser, which is sent to the browser in a complete response. The browser initiates a new long polling request in order to obtain subsequent events. IETF RFC 6202 "Known Issues and Best Practices for the Use of Long Polling and Streaming in Bidirectional HTTP" compares long polling and HTTP streaming. Specific technologies for accomplishing long-polling include the following: XMLHttpRequest long polling For the most part, XMLHttpRequest long polling works like any standard use of XHR. The browser makes an asynchronous request of the server, which may wait for data to be available before responding. The response can contain encoded data (typically XML or JSON) or Javascript to be executed by the client. At the end of the processing of the response, the browser creates and sends another XHR, to await the next event. Thus the browser always keeps a request outstanding with the server, to be answered as each event occurs. Script tag long polling While any Comet transport can be made to work across subdomains, none of the above transports can be used across different second-level domains (SLDs), due to browser security policies designed to prevent cross-site scripting attacks. That is, if the main web page is served from one SLD, and the Comet server is located at another SLD (which does not have cross-origin resource sharing enabled), Comet events cannot be used to modify the HTML and DOM of the main page, using those transports. This problem can be sidestepped by creating a proxy server in front of one or both sources, making them appear to originate from the same domain. However, this is often undesirable for complexity or performance reasons. Unlike iframes or XMLHttpRequest objects, script tags can be pointed at any URI, and JavaScript code in the response will be executed in the current HTML document. This creates a potential security risk for both servers involved, though the risk to the data provider (in our case, the Comet server) can be avoided using JSONP. A long-polling Comet transport can be created by dynamically creating script elements, and setting their source to the location of the Comet server, which then sends back JavaScript (or JSONP) with some event as its payload. Each time the script request is completed, the browser opens a new one, just as in the XHR long polling case. This method has the advantage of being cross-browser while still allowing cross-domain implementations. Alternatives Browser-native technologies are inherent in the term Comet. Attempts to improve non-polling HTTP communication have come from multiple sides: The HTML 5 draft specification produced by the Web Hypertext Application Technology Working Group (WHATWG) specifies so called server-sent events, which defines a new JavaScript interface EventSource and a new MIME type text/event-stream. All major browsers except Microsoft Internet Explorer include this technology. 
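As a concrete illustration of the long-polling pattern just described, here is a minimal server-side sketch using Python's standard http.server; it is not the API of any particular Comet framework, and the /events endpoint and payloads are invented. Each GET request is held open until an event is available or a timeout expires, then answered with a complete response, after which the client is expected to issue a new request straight away.

```python
import json
import queue
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

events = queue.Queue()   # application code pushes events here; a real service
                         # would keep one queue (or channel) per connected client

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/events":
            self.send_error(404)
            return
        try:
            # Hold the request open until an event arrives (the long poll) ...
            payload = {"event": events.get(timeout=25)}   # stay under proxy timeouts
        except queue.Empty:
            payload = {"event": None}                     # ... or time out politely
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def demo_events():
    for i in range(3):
        time.sleep(5)
        events.put(f"server event {i}")

if __name__ == "__main__":
    threading.Thread(target=demo_events, daemon=True).start()
    ThreadingHTTPServer(("localhost", 8080), LongPollHandler).serve_forever()
```

A browser client would simply loop: issue an XHR (or fetch) to the endpoint, handle the JSON body when the response completes, and immediately open the next request, exactly as described for XMLHttpRequest long polling above.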
The HTML 5 WebSocket API working draft specifies a method for creating a persistent connection with a server and receiving messages via an onmessage callback. The Bayeux protocol by the Dojo Foundation. It leaves browser-specific transports in place, and defines a higher-level protocol for communication between browser and server, with the aim of allowing re-use of client-side JavaScript code with multiple Comet servers, and allowing the same Comet server to communicate with multiple client-side JavaScript implementations. Bayeux is based on a publish/subscribe model, so servers supporting Bayeux have publish/subscribe built-in. The BOSH protocol by the XMPP standards foundation. It emulates a bidirectional stream between the browser and server by using two synchronous HTTP connections. The JSONRequest object, proposed by Douglas Crockford, would be an alternative to the XHR object. Use of plugins, such as Java applets or the proprietary Adobe Flash (using RTMP protocol for data streaming to Flash applications). These have the advantage of working identically across all browsers with the appropriate plugin installed and need not rely on HTTP connections, but the disadvantage of requiring the plugin to be installed Google announced a new Channel API for Google App Engine, implementing a Comet-like API with the help of a client JavaScript library on the browser. This API has been deprecated. See also Push technology Pull technology Notes References External links * Ajax (programming) Web 2.0 neologisms Web development
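For comparison with the Comet transports above, the server-sent events alternative mentioned in this section boils down to one long-lived response with the text/event-stream content type, consumed in the browser through the EventSource interface. Below is a minimal Python sketch of such an endpoint; the /stream path and the payloads are invented for illustration, and a production server would keep emitting events rather than stop after ten.

```python
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/stream":
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        try:
            for i in range(10):                            # demo: ten events, one per second
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()                         # push each record immediately
                time.sleep(1)
        except BrokenPipeError:
            pass                                           # browser closed its EventSource

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8081), SSEHandler).serve_forever()
```

On the browser side this is consumed with new EventSource('/stream') and an onmessage handler; reconnection is handled by the browser itself rather than by application code, as it is in the long-polling case.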
Comet (programming)
Engineering
2,413
2,147,685
https://en.wikipedia.org/wiki/Point%20process
In statistics and probability theory, a point process or point field is a set of a random number of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Point processes on the real line form an important special case that is particularly amenable to study, because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, location of radio stations in a telecommunication network or of searches on the world-wide web. General point processes on a Euclidean space can be used for spatial data analysis, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics and others. Conventions Since point processes were historically developed by different communities, there are different mathematical interpretations of a point process, such as a random counting measure or a random set, and different notations. The notations are described in detail on the point process notation page. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear. Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or n-dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so point process is also called a random point field. Mathematics In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points. Definition To define general point processes, we start with a probability space (Ω, F, P), and a measurable space (S, 𝒮) where S is a locally compact second countable Hausdorff space and 𝒮 is its Borel σ-algebra. Consider now an integer-valued locally finite kernel K from (Ω, F) into (S, 𝒮), that is, a mapping Ω × 𝒮 → Z≥0 such that: For every ω ∈ Ω, K(ω, ·) is an (integer-valued) locally finite measure on S. For every B ∈ 𝒮, ω ↦ K(ω, B) is a random variable over Z≥0. This kernel defines a random measure in the following way. We would like to think of K as defining a mapping ξ : Ω → M(S) which maps ω to a measure (namely, ξ(ω) := K(ω, ·)), where M(S) is the set of all locally finite measures on S. Now, to make this mapping measurable, we need to define a σ-field over M(S). This σ-field is constructed as the minimal σ-algebra so that all evaluation maps of the form π_B : μ ↦ μ(B), where B ∈ 𝒮 is relatively compact, are measurable. Equipped with this σ-field, ξ is then a random element, where for every ω ∈ Ω, ξ(ω) is a locally finite measure over S.
Now, by a point process on S we simply mean an integer-valued random measure (or equivalently, integer-valued kernel) ξ constructed as above. The most common example for the state space S is the Euclidean space Rn or a subset thereof, where a particularly interesting special case is given by the real half-line [0,∞). However, point processes are not limited to these examples and may among other things also be used if the points are themselves compact subsets of Rn, in which case ξ is usually referred to as a particle process. Despite the name, a point process is not necessarily a stochastic process, since S might not be a subset of the real line, even though the name might suggest that ξ is a stochastic process. Representation Every instance (or event) of a point process ξ can be represented as ξ = Σ_{i=1}^{n} δ_{X_i}, where δ denotes the Dirac measure, n is an integer-valued random variable and the X_i are random elements of S. If the X_i are almost surely distinct (or, equivalently, ξ({x}) ≤ 1 almost surely for all x ∈ S), then the point process is known as simple. Another different but useful representation of an event (an event in the event space, i.e. a series of points) is the counting notation, where each instance is represented as a counting function N(t), an integer-valued function of continuous time: N(t1, t2) is the number of events in the observation interval (t1, t2]. It is sometimes denoted by N_{t1,t2}, and N(t) or N_t mean N(0, t]. Expectation measure The expectation measure Eξ (also known as mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is, (Eξ)(B) := E(ξ(B)) for every Borel subset B of S. Laplace functional The Laplace functional Ψ_N(f) of a point process N is a map from the set of all positive-valued functions f on the state space of N to [0, ∞), defined as follows: Ψ_N(f) = E[exp(−N(f))], where N(f) denotes the integral of f with respect to N. Laplace functionals play a role similar to that of characteristic functions for random variables. One important theorem says that two point processes have the same law if their Laplace functionals are equal. Moment measure The n-th power of a point process, ξ^n, is defined on the product space S^n as follows: ξ^n(A_1 × ⋯ × A_n) := ξ(A_1) ⋯ ξ(A_n). By the monotone class theorem, this uniquely defines the product measure on (S^n, B(S^n)). The expectation E(ξ^n(⋅)) is called the n-th moment measure. The first moment measure is the mean measure. Let S = R^d. The joint intensities of a point process ξ w.r.t. the Lebesgue measure are functions ρ^(k) : (R^d)^k → [0, ∞) such that for any disjoint bounded Borel subsets B_1, …, B_k, E(ξ(B_1) ⋯ ξ(B_k)) = ∫_{B_1} ⋯ ∫_{B_k} ρ^(k)(x_1, …, x_k) dx_1 ⋯ dx_k. Joint intensities do not always exist for point processes. Given that moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities. Indeed, this has been shown in many cases. Stationarity A point process ξ on R^d is said to be stationary if ξ + x := Σ_i δ_{X_i + x} has the same distribution as ξ for all x ∈ R^d. For a stationary point process, the mean measure is Eξ(⋅) = λ‖⋅‖ for some constant λ ≥ 0, where ‖⋅‖ stands for the Lebesgue measure. This λ is called the intensity of the point process. A stationary point process on R^d has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measure, refer to Chapter 12 of Daley & Vere-Jones. Stationarity has been defined and studied for point processes in more general spaces than R^d. Transformations A point process transformation is a function that maps a point process to another point process. Examples We shall see some examples of point processes in R^d. Poisson point process The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process.
A Poisson (counting) process on the line can be characterised by two properties: the number of points (or events) in disjoint intervals are independent and have a Poisson distribution. A Poisson point process can also be defined using these two properties. Namely, we say that a point process $\xi$ is a Poisson point process if the following two conditions hold: 1) $\xi(B_1), \ldots, \xi(B_n)$ are independent for disjoint subsets $B_1, \ldots, B_n$. 2) For any bounded subset $B$, $\xi(B)$ has a Poisson distribution with parameter $\lambda \|B\|$, where $\|\cdot\|$ denotes the Lebesgue measure. The two conditions can be combined and written as follows: For any disjoint bounded subsets $B_1, \ldots, B_n$ and non-negative integers $k_1, \ldots, k_n$ we have that $\Pr[\xi(B_i) = k_i, \; 1 \leq i \leq n] = \prod_{i} e^{-\lambda \|B_i\|} \frac{(\lambda \|B_i\|)^{k_i}}{k_i!}$. The constant $\lambda$ is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter $\lambda$. It is a simple, stationary point process. To be more specific one calls the above point process a homogeneous Poisson point process. An inhomogeneous Poisson process is defined as above but by replacing $\lambda \|B\|$ with $\int_{B} \lambda(x) \, dx$, where $\lambda(\cdot)$ is a non-negative function on $\mathbb{R}^{d}$. Cox point process A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use random measures in place of $\lambda \|B\|$. More formally, let $\Lambda$ be a random measure. A Cox point process driven by the random measure $\Lambda$ is the point process $\xi$ with the following two properties: Given $\Lambda(\cdot)$, $\xi(B)$ is Poisson distributed with parameter $\Lambda(B)$ for any bounded subset $B$. For any finite collection of disjoint subsets $B_1, \ldots, B_n$ and conditioned on $\Lambda$ we have that $\xi(B_1), \ldots, \xi(B_n)$ are independent. It is easy to see that Poisson point processes (homogeneous and inhomogeneous) follow as special cases of Cox point processes. The mean measure of a Cox point process is $E\xi(\cdot) = E\Lambda(\cdot)$ and thus in the special case of a Poisson point process, it is $\lambda \|\cdot\|$. For a Cox point process, $\Lambda(\cdot)$ is called the intensity measure. Further, if $\Lambda(\cdot)$ has a (random) density (Radon–Nikodym derivative) $\lambda(\cdot)$, i.e., $\Lambda(B) = \int_{B} \lambda(x) \, dx$, then $\lambda(\cdot)$ is called the intensity field of the Cox point process. Stationarity of the intensity measures or intensity fields implies the stationarity of the corresponding Cox point processes. There have been many specific classes of Cox point processes that have been studied in detail such as: Log-Gaussian Cox point processes: for a Gaussian random field Shot noise Cox point processes: for a Poisson point process and kernel Generalised shot noise Cox point processes: for a point process and kernel Lévy based Cox point processes: for a Lévy basis and kernel, and Permanental Cox point processes: for k independent Gaussian random fields 's Sigmoidal Gaussian Cox point processes: for a Gaussian random field and random By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality: $\operatorname{Var}\bigl(\xi(B)\bigr) \geq \operatorname{Var}\bigl(\xi_{\alpha}(B)\bigr)$ for all bounded Borel subsets $B$, where $\xi_{\alpha}$ stands for a Poisson point process with intensity measure $\alpha(\cdot) = E\Lambda(\cdot)$. Thus points are distributed with greater variability in a Cox point process compared to a Poisson point process. This is sometimes called the clustering or attractive property of the Cox point process. Determinantal point processes An important class of point processes, with applications to physics, random matrix theory, and combinatorics, is that of determinantal point processes. 
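The greater variability of Cox processes noted above is easy to see numerically. The sketch below (illustrative only, not from the article) builds the simplest kind of Cox process, a mixed Poisson process whose driving measure is a random multiple of Lebesgue measure, and compares the count variance with that of a Poisson process having the same mean measure. The Gamma mixing distribution and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
T, reps = 10.0, 20000

# Homogeneous Poisson point process with intensity 2: counts over (0, T]
poisson_counts = rng.poisson(2.0 * T, size=reps)

# Mixed-Poisson Cox process: draw a random level X ~ Gamma(2, 1) (mean 2),
# then, conditionally on X, the count over (0, T] is Poisson(X * T),
# so the mean measure matches the Poisson case above.
X = rng.gamma(shape=2.0, scale=1.0, size=reps)
cox_counts = rng.poisson(X * T)

print(poisson_counts.mean(), poisson_counts.var())  # ~20 and ~20
print(cox_counts.mean(), cox_counts.var())          # ~20 but variance ~220: over-dispersed
```

The extra variance comes entirely from the randomness of the driving measure, which is exactly the "clustering" effect described above.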
Hawkes (self-exciting) processes A Hawkes process , also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as where is a kernel function which expresses the positive influence of past events on the current value of the intensity process , is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and is the time of occurrence of the i-th event of the process. Geometric processes Given a sequence of non-negative random variables , if they are independent and the cdf of is given by for , where is a positive constant, then is called a geometric process (GP). The geometric process has several extensions, including the α- series process and the doubly geometric process. Point processes on the real half-line Historically the first point processes that were studied had the real half line R+ = [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems, in which the points represented events in time, such as calls to a telephone exchange. Point processes on R+ are typically described by giving the sequence of their (random) inter-event times (T1, T2, ...), from which the actual sequence (X1, X2, ...) of event times can be obtained as If the inter-event times are independent and identically distributed, the point process obtained is called a renewal process. Intensity of a point process The intensity λ(t | Ht) of a point process on the real half-line with respect to a filtration Ht is defined as Ht can denote the history of event-point times preceding time t but can also correspond to other filtrations (for example in the case of a Cox process). In the -notation, this can be written in a more compact form: The compensator of a point process, also known as the dual-predictable projection, is the integrated conditional intensity function defined by Related functions Papangelou intensity function The Papangelou intensity function of a point process in the -dimensional Euclidean space is defined as where is the ball centered at of a radius , and denotes the information of the point process outside . Likelihood function The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as Point processes in spatial statistics The analysis of point pattern data in a compact subset S of Rn is a major object of study within spatial statistics. Such data appear in a broad range of disciplines, amongst which are forestry and plant ecology (positions of trees or plants in general) epidemiology (home locations of infected patients) zoology (burrows or nests of animals) geography (positions of human settlements, towns or cities) seismology (epicenters of earthquakes) materials science (positions of defects in industrial materials) astronomy (locations of stars or galaxies) computational neuroscience (spikes of neurons). The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibit complete spatial randomness (i.e. are a realization of a spatial Poisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition. In contrast, many datasets considered in classical multivariate statistics consist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial). 
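The Hawkes conditional intensity introduced earlier in this section lost its symbols in extraction; a commonly used concrete special case is the exponential kernel, for which the conditional intensity is λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)). The sketch below simulates such a process by Ogata-style thinning. The kernel choice and all parameter values are illustrative assumptions, not something taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_hawkes(mu, alpha, beta, T):
    """Ogata-style thinning for a Hawkes process with an exponential kernel:
    lambda(t) = mu + sum over past events t_i of alpha*exp(-beta*(t - t_i)).
    Requires alpha < beta for the process to be subcritical."""
    events = []
    t = 0.0
    while t < T:
        # The intensity only decays between events, so its value just after
        # the current time is an upper bound until the next event occurs.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)        # candidate from a rate-lam_bar Poisson process
        if t >= T:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:       # accept (thin) with probability lam_t / lam_bar
            events.append(t)
    return np.array(events)

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
# With branching ratio alpha/beta = 2/3, the expected count is roughly
# mu*T / (1 - alpha/beta) = 150 (ignoring edge effects).
print(len(events))
```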
Apart from the applications in spatial statistics, point processes are one of the fundamental objects in stochastic geometry. Research has also focussed extensively on various models built on point processes such as Voronoi tessellations, random geometric graphs, and Boolean models. See also Empirical measure Random measure Point process notation Point process operation Poisson process Renewal theory Invariant measure Transfer operator Koopman operator Shift operator Notes References Statistical data types Spatial processes
Point process
Mathematics
2,823
10,941,726
https://en.wikipedia.org/wiki/Fructose-bisphosphate%20aldolase
Fructose-bisphosphate aldolase (), often just aldolase, is an enzyme catalyzing a reversible reaction that splits the aldol, fructose 1,6-bisphosphate, into the triose phosphates dihydroxyacetone phosphate (DHAP) and glyceraldehyde 3-phosphate (G3P). Aldolase can also produce DHAP from other (3S,4R)-ketose 1-phosphates such as fructose 1-phosphate and sedoheptulose 1,7-bisphosphate. Gluconeogenesis and the Calvin cycle, which are anabolic pathways, use the reverse reaction. Glycolysis, a catabolic pathway, uses the forward reaction. Aldolase is divided into two classes by mechanism. The word aldolase also refers, more generally, to an enzyme that performs an aldol reaction (creating an aldol) or its reverse (cleaving an aldol), such as Sialic acid aldolase, which forms sialic acid. See the list of aldolases. Mechanism and structure Class I proteins form a protonated Schiff base intermediate linking a highly conserved active site lysine with the DHAP carbonyl carbon. Additionally, tyrosine residues are crucial to this mechanism in acting as stabilizing hydrogen acceptors. Class II proteins use a different mechanism which polarizes the carbonyl group with a divalent cation like Zn2+. The Escherichia coli galactitol operon protein, gatY, and N-acetyl galactosamine operon protein, agaY, which are tagatose-bisphosphate aldolase, are homologs of class II fructose-bisphosphate aldolase. Two histidine residues in the first half of the sequence of these homologs have been shown to be involved in binding zinc. The protein subunits of both classes each have an α/β domain folded into a TIM barrel containing the active site. Several subunits are assembled into the complete protein. The two classes share little sequence identity. With few exceptions only class I proteins have been found in animals, plants, and green algae. With few exceptions only class II proteins have been found in fungi. Both classes have been found widely in other eukaryotes and in bacteria. The two classes are often present together in the same organism. Plants and algae have plastidal aldolase, sometimes a relic of endosymbiosis, in addition to the usual cytosolic aldolase. A bifunctional fructose-bisphosphate aldolase/phosphatase, with class I mechanism, has been found widely in archaea and in some bacteria. The active site of this archaeal aldolase is also in a TIM barrel. In gluconeogenesis and glycolysis Gluconeogenesis and glycolysis share a series of six reversible reactions. In gluconeogenesis glyceraldehyde-3-phosphate is reduced to fructose 1,6-bisphosphate with aldolase. In glycolysis fructose 1,6-bisphosphate is made into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate through the use of aldolase. The aldolase used in gluconeogenesis and glycolysis is a cytoplasmic protein. Three forms of class I protein are found in vertebrates. Aldolase A is preferentially expressed in muscle and brain; aldolase B in liver, kidney, and in enterocytes; and aldolase C in brain. Aldolases A and C are mainly involved in glycolysis, while aldolase B is involved in both glycolysis and gluconeogenesis. Some defects in aldolase B cause hereditary fructose intolerance. The metabolism of free fructose in liver exploits the ability of aldolase B to use fructose 1-phosphate as a substrate. Archaeal fructose-bisphosphate aldolase/phosphatase is presumably involved in gluconeogenesis because its product is fructose 6-phosphate. 
In the Calvin cycle The Calvin cycle is a carbon fixation pathway; it is part of photosynthesis, which converts carbon dioxide and other compounds into glucose. It and gluconeogenesis share a series of four reversible reactions. In both pathways 3-phosphoglycerate (3-PGA or 3-PG) is reduced to fructose 1,6-bisphosphate with aldolase catalyzing the last reaction. A fifth reaction, catalyzed in both pathways by fructose 1,6-bisphosphatase, hydrolyzes the fructose 1,6-bisphosphate to fructose 6-phosphate and inorganic phosphate. The large decrease in free energy makes this reaction irreversible. In the Calvin cycle aldolase also catalyzes the production of sedoheptulose 1,7-bisphosphate from DHAP and erythrose 4-phosphate. The chief products of the Calvin cycle are triose phosphate (TP), which is a mixture of DHAP and G3P, and fructose 6-phosphate. Both are also needed to regenerate RuBP. The aldolase used by plants and algae in the Calvin cycle is usually a plastid-targeted protein encoded by a nuclear gene. Reactions Aldolase catalyzes fructose 1,6-bisphosphate ⇌ DHAP + G3P, and also sedoheptulose 1,7-bisphosphate ⇌ DHAP + erythrose 4-phosphate, and fructose 1-phosphate ⇌ DHAP + glyceraldehyde. Aldolase is used in the reversible trunk of gluconeogenesis/glycolysis: 2(PEP + NADH + H+ + ATP + H2O) ⇌ fructose 1,6-bisphosphate + 2(NAD+ + ADP + Pi). Aldolase is also used in the part of the Calvin cycle shared with gluconeogenesis, with the irreversible phosphate hydrolysis at the end catalyzed by fructose 1,6-bisphosphatase: 2(3-PG + NADPH + H+ + ATP + H2O) ⇌ fructose 1,6-bisphosphate + 2(NADP+ + ADP + Pi); fructose 1,6-bisphosphate + H2O → fructose 6-phosphate + Pi. In gluconeogenesis 3-PG is produced by enolase and phosphoglycerate mutase acting in series: PEP + H2O ⇌ 2-PG ⇌ 3-PG. In the Calvin cycle 3-PG is produced by RuBisCO: RuBP + CO2 + H2O → 2(3-PG). G3P is produced by phosphoglycerate kinase acting in series with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in gluconeogenesis, and in series with glyceraldehyde-3-phosphate dehydrogenase (NADP+) (phosphorylating) in the Calvin cycle: 3-PG + ATP ⇌ 1,3-bisphosphoglycerate + ADP; 1,3-bisphosphoglycerate + NAD(P)H + H+ ⇌ G3P + Pi + NAD(P)+. Triose-phosphate isomerase maintains DHAP and G3P in near equilibrium, producing the mixture called triose phosphate (TP): G3P ⇌ DHAP. Thus both DHAP and G3P are available to aldolase. Moonlighting properties Aldolase has also been implicated in many "moonlighting" or non-catalytic functions, based upon its binding affinity for many other proteins including F-actin, α-tubulin, light chain dynein, WASP, Band 3 anion exchanger, phospholipase D (PLD2), glucose transporter GLUT4, inositol trisphosphate, V-ATPase and ARNO (a guanine nucleotide exchange factor of ARF6). These associations are thought to be predominantly involved in cellular structure; however, involvement in endocytosis, parasite invasion, cytoskeleton rearrangement, cell motility, membrane protein trafficking and recycling, signal transduction and tissue compartmentalization has also been explored. References Further reading External links Tolan Laboratory at Boston University Protein domains Lyases Moonlighting proteins Glycolysis enzymes Glycolysis
Fructose-bisphosphate aldolase
Chemistry,Biology
1,830
36,217,354
https://en.wikipedia.org/wiki/Laboratory%20developed%20test
Laboratory developed test (LDT) is a term used to refer to a certain class of in vitro diagnostics (IVDs) that, in the U.S., were traditionally regulated under the Clinical Laboratory Improvement Amendments program. Laboratory-developed tests (LDTs) are a class of in vitro diagnostics (IVDs) designed, manufactured, and used within a single laboratory. They are employed for various medical diagnoses and research applications, offering advantages in flexibility and fostering innovation in the diagnostics field. United States In the United States, the Food and Drug Administration (FDA) has determined that while such tests qualify as medical devices, these products could enter the market without prior approval from the agency. In 2014, the FDA announced that it would start regulating some LDTs. In general, however, it had not done so as of April 2019. Because LDTs do not require the FDA 510(k) clearance that other diagnostic tests do, opponents have viewed them as a regulatory loophole. Direct-to-consumer Direct-to-consumer tests are regulated as medical devices, although they are not necessarily reviewed by the FDA. 23andMe direct-to-consumer genetic tests were originally offered as LDTs, but the FDA challenged that and forced the company to submit the test for approval as a class II medical device. Companies Several companies offer lab-developed tests. Prominent companies developing LDT solutions include Adaptive Biotechnologies Corporation, Quest Diagnostics, Roche, and Illumina. Market Overview The global market for laboratory-developed testing (LDT) is experiencing significant growth, with a projected value of US$4,582.6 million by 2030, up from US$3,518.7 million in 2023 (a CAGR of 3.8%). This growth is driven by advancements in genetic testing, the increasing demand for personalized medicine, and the ongoing expansion of the healthcare and diagnostics sectors. References Medical technology
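As a quick arithmetic check (a sketch, not part of the source), the quoted CAGR is consistent with the two market figures; the only assumption is a seven-year 2023 to 2030 horizon.

```python
# Values taken from the paragraph above; the 7-year horizon is the only assumption.
start, end, years = 3518.7, 4582.6, 2030 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~3.8%, matching the stated CAGR
```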
Laboratory developed test
Biology
402
36,484,676
https://en.wikipedia.org/wiki/CmapTools
CmapTools is concept mapping software developed by the Florida Institute for Human and Machine Cognition (IHMC). It allows users to easily create graphical nodes representing concepts, and to connect nodes using lines and linking words to form a network of interrelated propositions that represent knowledge of a topic. The software has been used in classrooms and research labs, and in corporate training. Use CmapTools supports the various uses of concept maps. Multiple links can be added to each concept to form a dynamic map that opens web pages or local documents. Each added link is assigned a category, chosen by the user from a provided list of types (such as URLs, documents, and images), to help with organization; links are then grouped and stacked by category under the chosen concept. Other concept maps can also be linked to concepts, letting the user build a strong navigation tool. Several connected maps can form a knowledge base, for example of a company structure, a repository of standards, personal contacts and other important general information. References External links Graph drawing software Concept mapping software Mind-mapping software Concept- and mind-mapping software programmed in Java
CmapTools
Technology
258
7,925
https://en.wikipedia.org/wiki/David%20Hume
David Hume (; born David Home; – 25 August 1776) was a Scottish philosopher, historian, economist, and essayist who was best known for his highly influential system of empiricism, philosophical scepticism and metaphysical naturalism. Beginning with A Treatise of Human Nature (1739–40), Hume strove to create a naturalistic science of man that examined the psychological basis of human nature. Hume followed John Locke in rejecting the existence of innate ideas, concluding that all human knowledge derives solely from experience. This places him with Francis Bacon, Thomas Hobbes, John Locke, and George Berkeley as an empiricist. Hume argued that inductive reasoning and belief in causality cannot be justified rationally; instead, they result from custom and mental habit. We never actually perceive that one event causes another but only experience the "constant conjunction" of events. This problem of induction means that to draw any causal inferences from past experience, it is necessary to presuppose that the future will resemble the past; this metaphysical presupposition cannot itself be grounded in prior experience. An opponent of philosophical rationalists, Hume held that passions rather than reason govern human behaviour, famously proclaiming that "Reason is, and ought only to be the slave of the passions." Hume was also a sentimentalist who held that ethics are based on emotion or sentiment rather than abstract moral principle. He maintained an early commitment to naturalistic explanations of moral phenomena and is usually accepted by historians of European philosophy to have first clearly expounded the is–ought problem, or the idea that a statement of fact alone can never give rise to a normative conclusion of what ought to be done. Hume denied that humans have an actual conception of the self, positing that we experience only a bundle of sensations, and that the self is nothing more than this bundle of perceptions connected by an association of ideas. Hume's compatibilist theory of free will takes causal determinism as fully compatible with human freedom. His philosophy of religion, including his rejection of miracles, and of the argument from design for God's existence, were especially controversial for their time. Hume left a legacy that affected utilitarianism, logical positivism, the philosophy of science, early analytic philosophy, cognitive science, theology, and many other fields and thinkers. Immanuel Kant credited Hume as the inspiration that had awakened him from his "dogmatic slumbers." Early life Hume was born on 26 April 1711, as David Home, in a tenement on the north side of Edinburgh's Lawnmarket. He was the second of two sons born to Catherine Home (née Falconer), daughter of Sir David Falconer of Newton, Midlothian and his wife Mary Falconer (née Norvell), and Joseph Home of Chirnside in the County of Berwick, an advocate of Ninewells. Joseph died just after David's second birthday. Catherine, who never remarried, raised the two brothers and their sister on her own. Hume changed his family name's spelling in 1734, as the surname 'Home' (pronounced as 'Hume') was not well-known in England. Hume never married and lived partly at his Chirnside family home in Berwickshire, which had belonged to the family since the 16th century. His finances as a young man were very "slender", as his family was not rich; as a younger son he had little patrimony to live on. 
Hume attended the University of Edinburgh at an unusually early age, either 12 or possibly as young as 10, at a time when 14 was the typical age. Initially, Hume considered a career in law, because of his family. However, in his words, he came to have: ...an insurmountable aversion to everything but the pursuits of Philosophy and general Learning; and while [my family] fanceyed I was poring over Voet and Vinnius, Cicero and Virgil were the Authors which I was secretly devouring. He had little respect for the professors of his time, telling a friend in 1735 that "there is nothing to be learnt from a Professor, which is not to be met with in Books". He did not graduate. "Disease of the learned" At around age 18, Hume made a philosophical discovery that opened up to him "a new Scene of Thought", inspiring him "to throw up every other Pleasure or Business to apply entirely to it". As he did not recount what this scene exactly was, commentators have offered a variety of speculations. One prominent interpretation among contemporary Humean scholarship is that this new "scene of thought" was Hume's realisation that Francis Hutcheson's theory of moral sense could be applied to the understanding of morality as well. From this inspiration, Hume set out to spend a minimum of 10 years reading and writing. He soon came to the verge of a mental breakdown, first starting with a coldness (which he attributed to a "Laziness of Temper") that lasted about nine months. Scurvy spots later broke out on his fingers, persuading Hume's physician to diagnose him with the "Disease of the Learned". Hume wrote that he "went under a Course of Bitters and Anti-Hysteric Pills", taken along with a pint of claret every day. He also decided to have a more active life to better continue his learning. His health improved somewhat, but in 1731, he was afflicted with a ravenous appetite and palpitations. After eating well for a time, he went from being "tall, lean and raw-bon'd" to being "sturdy, robust [and] healthful-like." Indeed, Hume would become well known for being obese and having a fondness for good port and cheese, often using them as philosophical metaphors for his conjectures. Career Despite having noble ancestry, Hume had no source of income and no learned profession by age 25. As was common at his time, he became a merchant's assistant, despite having to leave his native Scotland. He travelled via Bristol to La Flèche in Anjou, France. There he had frequent discourse with the Jesuits of the College of La Flèche. Hume was derailed in his attempts to start a university career by protests over his alleged "atheism", also lamenting that his literary debut, A Treatise of Human Nature, "fell dead-born from the press." However, he found literary success in his lifetime as an essayist, and a career as a librarian at the University of Edinburgh. These successes provided him much-needed income at the time. His tenure there, and the access to research materials it provided, resulted in Hume's writing the massive six-volume The History of England, which became a bestseller and the standard history of England in its day. For over 60 years, Hume was the dominant interpreter of English history. He described his "love for literary fame" as his "ruling passion" and judged his two late works, the so-called "first" and "second" enquiries, An Enquiry Concerning Human Understanding and An Enquiry Concerning the Principles of Morals, as his greatest literary and philosophical achievements. 
He would ask his contemporaries to judge him on the merits of the later texts alone, rather than on the more radical formulations of his early, youthful work, dismissing his philosophical debut as juvenilia: "A work which the Author had projected before he left College." Despite Hume's protestations, a consensus exists today that his most important arguments and philosophically distinctive doctrines are found in the original form they take in the Treatise. Though he was only 23 years old when starting this work, it is now regarded as one of the most important in the history of Western philosophy. 1730s Hume worked for four years on his first major work, A Treatise of Human Nature, subtitled "Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects", completing it in 1738 at age 28. Although many scholars today consider the Treatise to be Hume's most important work and one of the most important books in Western philosophy, critics in Great Britain at the time described it as "abstract and unintelligible". As Hume had spent most of his savings during those four years, he resolved "to make a very rigid frugality supply [his] deficiency of fortune, to maintain unimpaired [his] independency, and to regard every object as contemptible except the improvements of [his] talents in literature". Despite the disappointment, Hume later wrote: "Being naturally of a cheerful and sanguine temper, I soon recovered from the blow and prosecuted with great ardour my studies in the country." There, in an attempt to make his larger work better known and more intelligible, he published An Abstract of a Book lately Published as a summary of the main doctrines of the Treatise, without revealing its authorship. This work contained the same ideas, but with a shorter and clearer explanation. Although there has been some academic speculation as to the pamphlet's true author, it is generally regarded as Hume's creation. 1740s After the publication of Essays Moral and Political in 1741 (included in the later edition as Essays, Moral, Political, and Literary), Hume applied for the Chair of Pneumatics and Moral Philosophy at the University of Edinburgh. However, the position was given to William Cleghorn after Edinburgh ministers petitioned the town council not to appoint Hume because he was seen as an atheist. In 1745, during the Jacobite risings, Hume tutored the Marquess of Annandale, an engagement that ended in disarray after about a year. The Marquess could not follow Hume's lectures, his father saw little need for philosophy, and on a personal level, the Marquess found Hume's dietary tendencies to be bizarre. Hume then started his great historical work, The History of England, which took fifteen years and ran to over a million words. During this time, he was also involved with the Canongate Theatre through his friend John Home, a preacher. In this context, he associated with Lord Monboddo and other thinkers of the Scottish Enlightenment in Edinburgh. From 1746, Hume served for three years as secretary to General James St Clair, who was envoy to the courts of Turin and Vienna. At that time Hume wrote Philosophical Essays Concerning Human Understanding, later published as An Enquiry Concerning Human Understanding. Often called the First Enquiry, it proved little more successful than the Treatise, perhaps because of the publication of his short autobiography My Own Life, which "made friends difficult for the first Enquiry". 
By the end of this period Hume had attained his well-known corpulent stature; "the good table of the General and the prolonged inactive life had done their work", leaving him "a man of tremendous bulk". In 1749 he went to live with his brother in the countryside, although he continued to associate with the aforementioned Scottish Enlightenment figures. 1750s–1760s Hume's religious views were often suspect and, in the 1750s, it was necessary for his friends to avert a trial against him on the charge of heresy, specifically in an ecclesiastical court. However, he "would not have come and could not be forced to attend if he said he was not a member of the Established Church". Hume failed to gain the chair of philosophy at the University of Glasgow due to his religious views. By this time, he had published the Philosophical Essays, which were decidedly anti-religious. This represented a turning point in his career and the various opportunities made available to him. Even Adam Smith, his personal friend who had vacated the Glasgow philosophy chair, was against his appointment out of concern that public opinion would be against it. In 1761, all his works were banned on the Index Librorum Prohibitorum. Hume returned to Edinburgh in 1751. In the following year, the Faculty of Advocates hired him to be their Librarian, a job in which he would receive little to no pay, but which nonetheless gave him "the command of a large library". This resource enabled him to continue historical research for The History of England. Hume's volume of Political Discourses, written in 1749 and published by Kincaid & Donaldson in 1752, was the only work he considered successful on first publication. In 1753, Hume moved from his house on Riddles Court on the Lawnmarket to a house on the Canongate at the other end of the Royal Mile. Here he lived in a tenement known as Jack's Land, immediately west of the still surviving Shoemakers Land. Eventually, with the publication of his six-volume The History of England between 1754 and 1762, Hume achieved the fame that he coveted. The volumes traced events from the Invasion of Julius Caesar to the Revolution of 1688 and was a bestseller in its day. Hume was also a longtime friend of bookseller Andrew Millar, who sold Hume's History (after acquiring the rights from Scottish bookseller Gavin Hamilton), although the relationship was sometimes complicated. Letters between them illuminate both men's interest in the success of the History. In 1762 Hume moved from Jack's Land on the Canongate to James Court on the Lawnmarket. He sold the house to James Boswell in 1766. Later life From 1763 to 1765, Hume was invited to attend Lord Hertford in Paris, where he became secretary to the British embassy in France. Hume was well received among Parisian society, and while there he met with Isaac de Pinto. In 1765, Hume served as a chargé d'affaires in Paris, writing "despatches to the British Secretary of State". He wrote of his Paris life, "I really wish often for the plain roughness of The Poker Club of Edinburgh... to correct and qualify so much lusciousness." Upon returning to Britain in 1766, Hume wrote a letter to Lord Hertford after being asked to by George Colebrooke; the letter informed Lord Hertford that he had an opportunity to invest in one of Colebrooke's slave plantations in the West Indies, though Hertford ultimately decided not to do so. 
In June of that year, Hume facilitated the purchase of a slave plantation in Martinique on behalf of his friend, the wine merchant John Stewart, by writing to the colony's governor Victor-Thérèse Charpentier. According to Felix Waldmann, a former Hume Fellow at the University of Edinburgh, Hume's "puckish scepticism about the existence of religious miracles played a significant part in defining the critical outlook which underpins the practice of modern science." Waldmann also argued that Hume's views "served to reinforce the institution of racialised slavery in the later 18th century." In 1766, Hume left Paris to accompany Jean-Jacques Rousseau to England. Once there, he and Rousseau fell out, leaving Hume sufficiently worried about the damage to his reputation from the quarrel with Rousseau that he would author an account of the dispute, titling it "A concise and genuine account of the dispute between Mr. Hume and Mr. Rousseau". In 1767, Hume was appointed Under Secretary of State for the Northern Department. Here, he wrote that he was given "all the secrets of the Kingdom". In 1769 he returned to James' Court in Edinburgh, where he would live from 1771 until his death in 1776. Hume's nephew and namesake, David Hume of Ninewells (1757–1838), was a co-founder of the Royal Society of Edinburgh in 1783. He was a Professor of Scots Law at Edinburgh University and rose to be Principal Clerk of Session in the Scottish High Court and Baron of the Exchequer. He is buried with his uncle in Old Calton Cemetery. Autobiography In the last year of his life, Hume wrote an extremely brief autobiographical essay titled "My Own Life", summing up his entire life in "fewer than 5 pages"; it contains many interesting judgments that have been of enduring interest to subsequent readers of Hume. Donald Seibert (1984), a scholar of 18th-century literature, judged it a "remarkable autobiography, even though it may lack the usual attractions of that genre. Anyone hankering for startling revelations or amusing anecdotes had better look elsewhere." Despite condemning vanity as a dangerous passion, in his autobiography Hume confesses his belief that the "love of literary fame" had served as his "ruling passion" in life, and claims that this desire "never soured my temper, notwithstanding my frequent disappointments". One such disappointment Hume discusses in this account is in the initial literary reception of the Treatise, which he claims to have overcome by means of the success of the Essays: "the work was favourably received, and soon made me entirely forget my former disappointment". Hume, in his own retrospective judgment, argues that his philosophical debut's apparent failure "had proceeded more from the manner than the matter". He thus suggests that "I had been guilty of a very usual indiscretion, in going to the press too early." Hume also provides an unambiguous self-assessment of the relative value of his works: that "my Enquiry concerning the Principles of Morals; which, in my own opinion (who ought not to judge on that subject) is of all my writings, historical, philosophical, or literary, incomparably the best." He also wrote of his social relations: "My company was not unacceptable to the young and careless, as well as to the studious and literary", noting of his complex relation to religion, as well as to the state, that "though I wantonly exposed myself to the rage of both civil and religious factions, they seemed to be disarmed in my behalf of their wonted fury". 
He goes on to profess of his character: "My friends never had occasion to vindicate any one circumstance of my character and conduct." Hume concludes the essay with a frank admission: I cannot say there is no vanity in making this funeral oration of myself, but I hope it is not a misplaced one; and this is a matter of fact which is easily cleared and ascertained. Death Diarist and biographer James Boswell saw Hume a few weeks before his death from a form of abdominal cancer. Hume told him that he sincerely believed it a "most unreasonable fancy" that there might be life after death. Hume asked that his body be interred in a "simple Roman tomb", requesting in his will that it be inscribed only with his name and the year of his birth and death, "leaving it to Posterity to add the Rest". David Hume died at the southwest corner of St. Andrew's Square in Edinburgh's New Town, at what is now 21 Saint David Street. A popular story, consistent with some historical evidence and with the help of coincidence, suggests that the street was named after Hume. His tomb stands, as he wished it, on the southwestern slope of Calton Hill, in the Old Calton Cemetery. Adam Smith later recounted Hume's amusing speculation that he might ask Charon, Hades' ferryman, to allow him a few more years of life in order to see "the downfall of some of the prevailing systems of superstition". The ferryman replied, "You loitering rogue, that will not happen these many hundred years.… Get into the boat this instant." Writings A Treatise of Human Nature begins with the introduction: "'Tis evident, that all the sciences have a relation, more or less, to human nature.… Even Mathematics, Natural Philosophy, and Natural Religion, are in some measure dependent on the science of Man." The science of man, as Hume explains, is the "only solid foundation for the other sciences" and that the method for this science requires both experience and observation as the foundations of a logical argument. In regards to this, philosophical historian Frederick Copleston (1999) suggests that it was Hume's aim to apply to the science of man the method of experimental philosophy (the term that was current at the time to imply natural philosophy), and that "Hume's plan is to extend to philosophy in general the methodological limitations of Newtonian physics." Until recently, Hume was seen as a forerunner of logical positivism, a form of anti-metaphysical empiricism. According to the logical positivists (in summary of their verification principle), unless a statement could be verified by experience, or else was true or false by definition (i.e., either tautological or contradictory), then it was meaningless. Hume, on this view, was a protopositivist, who, in his philosophical writings, attempted to demonstrate the ways in which ordinary propositions about objects, causal relations, the self, and so on, are semantically equivalent to propositions about one's experiences. Many commentators have since rejected this understanding of Humean empiricism, stressing an epistemological (rather than a semantic) reading of his project. According to this opposing view, Hume's empiricism consisted in the idea that it is our knowledge, and not our ability to conceive, that is restricted to what can be experienced. Hume thought that we can form beliefs about that which extends beyond any possible experience, through the operation of faculties such as custom and the imagination, but he was sceptical about claims to knowledge on this basis. 
Impressions and ideas A central doctrine of Hume's philosophy, stated in the very first lines of the Treatise of Human Nature, is that the mind consists of perceptions, or the mental objects which are present to it, and which divide into two categories: "All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call impressions and ideas." Hume believed that it would "not be very necessary to employ many words in explaining this distinction", which commentators have generally taken to mean the distinction between feeling and thinking. Controversially, Hume, in some sense, may regard the distinction as a matter of degree, as he takes impressions to be distinguished from ideas on the basis of their force, liveliness, and vivacity (what Henry E. Allison (2008) calls the "FLV criterion"). Ideas are therefore "faint" impressions. For example, experiencing the painful sensation of touching a hot pan's handle is more forceful than simply thinking about touching a hot pan. According to Hume, impressions are meant to be the original form of all our ideas. From this, Don Garrett (2002) has coined the term copy principle, referring to Hume's doctrine that all ideas are ultimately copied from some original impression, whether it be a passion or sensation, from which they derive. Simple and complex After establishing the forcefulness of impressions and ideas, these two categories are further broken down into simple and complex: "simple perceptions or impressions and ideas are such as admit of no distinction nor separation", whereas "the complex are the contrary to these, and may be distinguished into parts". When looking at an apple, a person experiences a variety of colour-sensations, what Hume notes as a complex impression. Similarly, a person experiences a variety of taste-sensations, tactile-sensations, and smell-sensations when biting into an apple, with the overall sensation being, again, a complex impression. Thinking about an apple allows a person to form complex ideas, which are made of similar parts as the complex impressions they were developed from, but which are also less forceful. Hume believes that complex perceptions can be broken down into smaller and smaller parts until perceptions are reached that have no parts of their own, and these perceptions are thus referred to as simple. Principles of association Regardless of how boundless it may seem, a person's imagination is confined to the mind's ability to recombine the information it has already acquired from the body's sensory experience (the ideas that have been derived from impressions). In addition, "as our imagination takes our most basic ideas and leads us to form new ones, it is directed by three principles of association, namely, resemblance, contiguity, and cause and effect": The principle of resemblance refers to the tendency of ideas to become associated if the objects they represent resemble one another. For example, someone looking at an illustration of a flower can conceive an idea of the physical flower because the idea of the illustrated object is associated with the physical object's idea. The principle of contiguity describes the tendency of ideas to become associated if the objects they represent are near to each other in time or space, such as when the thought of a crayon in a box leads one to think of the crayon contiguous to it. 
The principle of cause and effect refers to the tendency of ideas to become associated if the objects they represent are causally related, which explains how remembering a broken window can make someone think of a ball that had caused the window to shatter. Hume elaborates more on the last principle, explaining that, when somebody observes that one object or event consistently produces the same object or event, that results in "an expectation that a particular event (a 'cause') will be followed by another event (an 'effect') previously and constantly associated with it". Hume calls this principle custom, or habit, saying that "custom...renders our experience useful to us, and makes us expect, for the future, a similar train of events with those which have appeared in the past". However, even though custom can serve as a guide in life, it still only represents an expectation. In other words: Experience cannot establish a necessary connection between cause and effect, because we can imagine without contradiction a case where the cause does not produce its usual effect…the reason why we mistakenly infer that there is something in the cause that necessarily produces its effect is because our past experiences have habituated us to think in this way. Continuing this idea, Hume argues that "only in the pure realm of ideas, logic, and mathematics, not contingent on the direct sense awareness of reality, [can] causation safely…be applied—all other sciences are reduced to probability". He uses this scepticism to reject metaphysics and many theological views on the basis that they are not grounded in fact and observations, and are therefore beyond the reach of human understanding. Induction and causation The cornerstone of Hume's epistemology is the problem of induction. This may be the area of Hume's thought where his scepticism about human powers of reason is most pronounced. The problem revolves around the plausibility of inductive reasoning, that is, reasoning from the observed behaviour of objects to their behaviour when unobserved. As Hume wrote, induction concerns how things behave when they go "beyond the present testimony of the senses, or the records of our memory". Hume argues that we tend to believe that things behave in a regular manner, meaning that patterns in the behaviour of objects seem to persist into the future, and throughout the unobserved present. Hume's argument is that we cannot rationally justify the claim that nature will continue to be uniform, as justification comes in only two varieties—demonstrative reasoning and probable reasoning—and both of these are inadequate. With regard to demonstrative reasoning, Hume argues that the uniformity principle cannot be demonstrated, as it is "consistent and conceivable" that nature might stop being regular. Turning to probable reasoning, Hume argues that we cannot hold that nature will continue to be uniform because it has been in the past. As this is using the very sort of reasoning (induction) that is under question, it would be circular reasoning. Thus, no form of justification will rationally warrant our inductive inferences. Hume's solution to this problem is to argue that, rather than reason, natural instinct explains the human practice of making inductive inferences. He asserts that "Nature, by an absolute and uncontroulable necessity has determin'd us to judge as well as to breathe and feel." In 1985, and in agreement with Hume, John D. 
Kenyon writes: Reason might manage to raise a doubt about the truth of a conclusion of natural inductive inference just for a moment ... but the sheer agreeableness of animal faith will protect us from excessive caution and sterile suspension of belief. Others, such as Charles Sanders Peirce, have demurred from Hume's solution, while some, such as Kant and Karl Popper, have thought that Hume's analysis has "posed a most fundamental challenge to all human knowledge claims". The notion of causation is closely linked to the problem of induction. According to Hume, we reason inductively by associating constantly conjoined events. It is the mental act of association that is the basis of our concept of causation. At least three interpretations of Hume's theory of causation are represented in the literature: the logical positivist; the sceptical realist; and the quasi-realist. Hume acknowledged that there are events constantly unfolding, and humanity cannot guarantee that these events are caused by prior events or are independent instances. He opposed the widely accepted theory of causation that 'all events have a specific course or reason'. Therefore, Hume crafted his own theory of causation, formed through his empiricist and sceptic beliefs. He split causation into two realms: "All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of Fact." Relations of Ideas are a priori and represent universal bonds between ideas that mark the cornerstones of human thought. Matters of Fact are dependent on the observer and experience. They are often not universally held to be true among multiple persons. Hume was an Empiricist, meaning he believed "causes and effects are discoverable not by reason, but by experience". He goes on to say that, even with the perspective of the past, humanity cannot dictate future events because thoughts of the past are limited, compared to the possibilities for the future. Hume's separation between Matters of Fact and Relations of Ideas is often referred to as "Hume's fork." Hume explains his theory of causation and causal inference by division into three different parts. In these three branches he explains his ideas and compares and contrasts his views to his predecessors. These branches are the Critical Phase, the Constructive Phase, and Belief. In the Critical Phase, Hume denies his predecessors' theories of causation. Next, he uses the Constructive Phase to resolve any doubts the reader may have had while observing the Critical Phase. "Habit or Custom" mends the gaps in reasoning that occur without the human mind even realising it. Associating ideas has become second nature to the human mind. It "makes us expect for the future, a similar train of events with those which have appeared in the past". However, Hume says that this association cannot be trusted because the span of the human mind to comprehend the past is not necessarily applicable to the wide and distant future. This leads him to the third branch of causal inference, Belief. Belief is what drives the human mind to hold that expectancy of the future is based on past experience. Throughout his explanation of causal inference, Hume is arguing that the future is not certain to be repetition of the past and that the only way to justify induction is through uniformity. 
The logical positivist interpretation is that Hume analyses causal propositions, such as "A causes B", in terms of regularities in perception: "A causes B" is equivalent to "Whenever A-type events happen, B-type ones follow", where "whenever" refers to all possible perceptions. In his Treatise of Human Nature, Hume wrote: Power and necessity…are…qualities of perceptions, not of objects…felt by the soul and not perceiv'd externally in bodies. This view is rejected by sceptical realists, who argue that Hume thought that causation amounts to more than just the regular succession of events. Hume said that, when two events are causally conjoined, a necessary connection underpins the conjunction: Shall we rest contented with these two relations of contiguity and succession, as affording a complete idea of causation? By no means…there is a necessary connexion to be taken into consideration. Angela Coventry writes that, for Hume, "there is nothing in any particular instance of cause and effect involving external objects which suggests the idea of power or necessary connection" and "we are ignorant of the powers that operate between objects". However, while denying the possibility of knowing the powers between objects, Hume accepted the causal principle, writing: "I never asserted so absurd a proposition as that something could arise without a cause." It has been argued that, while Hume did not think that causation is reducible to pure regularity, he was not a fully-fledged realist either. Simon Blackburn calls this a quasi-realist reading, saying that "Someone talking of cause is voicing a distinct mental set: he is by no means in the same state as someone merely describing regular sequences." In Hume's words, "nothing is more usual than to apply to external bodies every internal sensation, which they occasion". 'Self' Empiricist philosophers, such as Hume and Berkeley, favoured the bundle theory of personal identity. In this theory, "the mind itself, far from being an independent power, is simply 'a bundle of perceptions' without unity or cohesive quality". The self is nothing but a bundle of experiences linked by the relations of causation and resemblance; or, more accurately, the empirically warranted idea of the self is just the idea of such a bundle. According to Hume: This view is supported by, for example, positivist interpreters, who have seen Hume as suggesting that terms such as "self", "person", or "mind" refer to collections of "sense-contents". A modern-day version of the bundle theory of the mind has been advanced by Derek Parfit in his Reasons and Persons. However, some philosophers have criticised Hume's bundle-theory interpretation of personal identity. They argue that distinct selves can have perceptions that stand in relation to similarity and causality. Thus, perceptions must already come parcelled into distinct "bundles" before they can be associated according to the relations of similarity and causality. In other words, the mind must already possess a unity that cannot be generated, or constituted, by these relations alone. Since the bundle-theory interpretation portrays Hume as answering an ontological question, philosophers like Galen Strawson see Hume as not very concerned with such questions and have queried whether this view is really Hume's. Instead, Strawson suggests that Hume might have been answering an epistemological question about the causal origin of our concept of the self. 
In the Appendix to the Treatise, Hume declares himself dissatisfied with his earlier account of personal identity in Book 1. Corliss Swain notes that "Commentators agree that if Hume did find some new problem" when he reviewed the section on personal identity, "he wasn't forthcoming about its nature in the Appendix." One interpretation of Hume's view of the self, argued for by philosopher and psychologist James Giles, is that Hume is not arguing for a bundle theory, which is a form of reductionism, but rather for an eliminative view of the self. Rather than reducing the self to a bundle of perceptions, Hume rejects the idea of the self altogether. On this interpretation, Hume is proposing a "no-self theory" and thus has much in common with Buddhist thought (see anattā). Psychologist Alison Gopnik has argued that Hume was in a position to learn about Buddhist thought during his time in France in the 1730s. Practical reason Practical reason relates to whether standards or principles exist that are also authoritative for all rational beings, dictating people's intentions and actions. Hume is mainly considered an anti-rationalist, denying the possibility for practical reason, although other philosophers such as Christine Korsgaard, Jean Hampton, and Elijah Millgram claim that Hume is not so much of an anti-rationalist as he is just a sceptic of practical reason. Hume denied the existence of practical reason as a principle because he claimed reason does not have any effect on morality, since morality is capable of producing effects in people that reason alone cannot create. As Hume explains in A Treatise of Human Nature (1740): Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason." Since practical reason is supposed to regulate our actions (in theory), Hume denied practical reason on the grounds that reason cannot directly oppose passions. As Hume puts it, "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." Reason is less significant than any passion because reason has no original influence, while "A passion is an original existence, or, if you will, modification of existence." Practical reason is also concerned with the value of actions rather than the truth of propositions, so Hume believed that reason's shortcoming of affecting morality proved that practical reason could not be authoritative for all rational beings, since morality was essential for dictating people's intentions and actions. Ethics Hume's writings on ethics began in the 1740 Treatise and were refined in his An Enquiry Concerning the Principles of Morals (1751). He understood feeling, rather than knowing, as that which governs ethical actions, stating that "moral decisions are grounded in moral sentiment." Arguing that reason cannot be behind morality, he wrote: Morals excite passions, and produce or prevent actions. Reason itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason. Hume's moral sentimentalism was shared by his close friend Adam Smith, and the two were mutually influenced by the moral reflections of their older contemporary, Francis Hutcheson. Peter Singer claims that Hume's argument that morals cannot have a rational basis alone "would have been enough to earn him a place in the history of ethics." 
Hume also put forward the is–ought problem, later known as Hume's Law, denying the possibility of logically deriving what ought to be from what is. According to the Treatise (1740), in every system of morality that Hume has read, the author begins by stating facts about the world as it is but always ends up suddenly referring to what ought to be the case. Hume demands that a reason should be given for inferring what ought to be the case, from what is the case. This is because it "seems altogether inconceivable, how this new relation can be a deduction from others." Hume's theory of ethics has been influential in modern-day meta-ethical theory, helping to inspire emotivism, and ethical expressivism and non-cognitivism, as well as Allan Gibbard's general theory of moral judgment and judgments of rationality. Aesthetics Hume's ideas about aesthetics and the theory of art are spread throughout his works, but are particularly connected with his ethical writings, and also the essays "Of the Standard of Taste" and "Of Tragedy" (1757). His views are rooted in the work of Joseph Addison and Francis Hutcheson. In the Treatise (1740), he touches on the connection between beauty and deformity and vice and virtue. His later writings on the subject continue to draw parallels of beauty and deformity in art with conduct and character. In "Standard of Taste", Hume argues that no rules can be drawn up about what is a tasteful object. However, a reliable critic of taste can be recognised as objective, sensible and unprejudiced, and as having extensive experience. "Of Tragedy" addresses the question of why humans enjoy tragic drama. Hume was concerned with the way spectators find pleasure in the sorrow and anxiety depicted in a tragedy. He argued that this was because the spectator is aware that he is witnessing a dramatic performance. There is pleasure in realising that the terrible events that are being shown are actually fiction. Furthermore, Hume laid down rules for educating people in taste and correct conduct, and his writings in this area have been very influential on English and Anglo-Saxon aesthetics. Free will, determinism, and responsibility Hume, along with Thomas Hobbes, is cited as a classical compatibilist about the notions of freedom and determinism. Compatibilism seeks to reconcile human freedom with the mechanist view that human beings are part of a deterministic universe, which is completely governed by physical laws. Hume, on this point, was influenced greatly by the scientific revolution, particularly by Sir Isaac Newton. Hume argued that the dispute between freedom and determinism continued over 2000 years due to ambiguous terminology. He wrote: "From this circumstance alone, that a controversy has been long kept on foot…we may presume that there is some ambiguity in the expression," and that different disputants use different meanings for the same terms. Hume defines the concept of necessity as "the uniformity, observable in the operations of nature; where similar objects are constantly conjoined together," and liberty as "a power of acting or not acting, according to the determinations of the will." He then argues that, according to these definitions, not only are the two compatible, but liberty requires necessity. For if our actions were not necessitated in the above sense, they would "have so little in connexion with motives, inclinations and circumstances, that one does not follow with a certain degree of uniformity from the other." 
But if our actions are not thus connected to the will, then our actions can never be free: they would be matters of "chance; which is universally allowed to have no existence." Australian philosopher John Passmore writes that confusion has arisen because "necessity" has been taken to mean "necessary connexion." Once this has been abandoned, Hume argues that "liberty and necessity will be found not to be in conflict one with another." Moreover, Hume goes on to argue that in order to be held morally responsible, it is required that our behaviour be caused or necessitated, for, as he wrote: Actions are, by their very nature, temporary and perishing; and where they proceed not from some cause in the character and disposition of the person who performed them, they can neither redound to his honour, if good; nor infamy, if evil. For Hume, the link between causality and our capacity to make a rational decision is an inference of the mind. Human beings assess a situation based upon certain predetermined events and from that form a choice. Hume believes that this choice is made spontaneously. Hume calls this form of decision making the liberty of spontaneity. Education writer Richard Wright considers that Hume's position rejects a famous moral puzzle attributed to French philosopher Jean Buridan. The Buridan's ass puzzle describes a donkey that is hungry. This donkey has separate bales of hay on both sides, which are at equal distances from him. The problem concerns which bale the donkey chooses. Buridan was said to believe that the donkey would die, because he has no autonomy. The donkey is incapable of forming a rational decision as there is no motive to choose one bale of hay over the other. However, human beings are different, because a human who is placed in a position where he is forced to choose one loaf of bread over another will make a decision to take one in lieu of the other. For Buridan, humans have the capacity of autonomy, and he recognises that the choice that is ultimately made will be based on chance, as both loaves of bread are exactly the same. However, Wright says that Hume completely rejects this notion, arguing that a human will spontaneously act in such a situation because he is faced with impending death if he fails to do so. Such a decision is not made on the basis of chance, but rather on necessity and spontaneity, given the prior predetermined events leading up to the predicament. Hume's argument is supported by modern-day compatibilists such as R. E. Hobart, a pseudonym of philosopher Dickinson S. Miller. However, P. F. Strawson argued that the issue of whether we hold one another morally responsible does not ultimately depend on the truth or falsity of a metaphysical thesis such as determinism. This is because our so holding one another is a non-rational human sentiment that is not predicated on such theses. Religion Philosopher Paul Russell (2005) contends that Hume wrote "on almost every central question in the philosophy of religion", and that these writings "are among the most important and influential contributions on this topic." Touching on the philosophy, psychology, history, and anthropology of religious thought, Hume's 1757 dissertation "The Natural History of Religion" argues that the monotheistic religions of Judaism, Christianity, and Islam all derive from earlier polytheistic religions. He went on to suggest that all religious belief "traces, in the end, to dread of the unknown".
Hume had also written on religious subjects in the first Enquiry, as well as later in the Dialogues Concerning Natural Religion. Religious views Although he wrote a great deal about religion, Hume's personal views have been the subject of much debate. Some modern critics have described Hume's religious views as agnostic or have described him as a "Pyrrhonian skeptic". Contemporaries considered him to be an atheist, or at least un-Christian, enough so that the Church of Scotland seriously considered bringing charges of infidelity against him. Evidence of his un-Christian beliefs can especially be found in his writings on miracles, in which he attempts to separate historical method from the narrative accounts of miracles. Nevertheless, modern scholars have tended to dismiss the claims of Hume's contemporaries describing him as an atheist as coming from religiously intolerant people who did not understand Hume’s philosophy. The fact that contemporaries suspected him of atheism is exemplified by a story Hume liked to tell: The best theologian he ever met, he used to say, was the old Edinburgh fishwife who, having recognized him as Hume the atheist, refused to pull him out of the bog into which he had fallen until he declared he was a Christian and repeated the Lord's prayer. However, in works such as "Of Superstition and Enthusiasm", Hume specifically seems to support the standard religious views of his time and place. This still meant that he could be very critical of the Catholic Church, dismissing it with the standard Protestant accusations of superstition and idolatry, as well as dismissing as idolatry what his compatriots saw as uncivilised beliefs. He also considered extreme Protestant sects, the members of which he called "enthusiasts", to be corrupters of religion. By contrast, in "The Natural History of Religion", Hume presents arguments suggesting that polytheism had much to commend it over monotheism. Additionally, when mentioning religion as a factor in his History of England, Hume uses it to show the deleterious effect it has on human progress. In his Treatise of Human Nature, Hume wrote: "Generally speaking, the errors in religions are dangerous; those in philosophy only ridiculous." Lou Reich (1998) argues that Hume was a religious naturalist and rejects interpretations of Hume as an atheist. Paul Russell (2008) writes that Hume was plainly sceptical about religious belief, although perhaps not to the extent of complete atheism. He suggests that Hume's position is best characterised by the term "irreligion," while philosopher David O'Connor (2013) argues that Hume's final position was "weakly deistic". For O'Connor, Hume's "position is deeply ironic. This is because, while inclining towards a weak form of deism, he seriously doubts that we can ever find a sufficiently favourable balance of evidence to justify accepting any religious position." He adds that Hume "did not believe in the God of standard theism ... but he did not rule out all concepts of deity", and that "ambiguity suited his purposes, and this creates difficulty in definitively pinning down his final position on religion". Design argument One of the traditional topics of natural theology is that of the existence of God, and one of the a posteriori arguments for this is the argument from design or the teleological argument. 
The argument is that the existence of God can be proved by the design that is obvious in the complexity of the world, which Encyclopædia Britannica states is "the most popular", because it is: ...the most accessible of the theistic arguments ... which identifies evidences of design in nature, inferring from them a divine designer ... The fact that the universe as a whole is a coherent and efficiently functioning system likewise, in this view, indicates a divine intelligence behind it. In An Enquiry Concerning Human Understanding, Hume wrote that the design argument seems to depend upon our experience, and its proponents "always suppose the universe, an effect quite singular and unparalleled, to be the proof of a Deity, a cause no less singular and unparalleled". Philosopher Louise E. Loeb (2010) notes that Hume is saying that only experience and observation can be our guide to making inferences about the conjunction between events. However, according to Hume: We observe neither God nor other universes, and hence no conjunction involving them. There is no observed conjunction to ground an inference either to extended objects or to God, as unobserved causes. Hume also criticised the argument in his Dialogues Concerning Natural Religion (1779). Hume proposes a finite universe with a finite number of particles. Given infinite time, these particles could randomly fall into any arrangement, including our seemingly designed world. A century later, the idea of order without design was rendered more plausible by Charles Darwin's discovery that the adaptations of the forms of life result from the natural selection of inherited characteristics. For philosopher James D. Madden, it is "Hume, rivaled only by Darwin, [who] has done the most to undermine in principle our confidence in arguments from design among all figures in the Western intellectual tradition". Finally, Hume discussed a version of the anthropic principle, which is the idea that theories of the universe are constrained by the need to allow for man's existence in it as an observer. Hume has his sceptical mouthpiece Philo suggest that there may have been many worlds, produced by an incompetent designer, whom he called a "stupid mechanic". In his Dialogues Concerning Natural Religion, Hume wrote: Many worlds might have been botched and bungled throughout an eternity, ere this system was struck out: much labour lost: many fruitless trials made: and a slow, but continued improvement carried on during infinite ages in the art of world-making. American philosopher Daniel Dennett has suggested that this mechanical explanation of teleology, although "obviously ... an amusing philosophical fantasy", anticipated the notion of natural selection, the 'continued improvement' being like "any Darwinian selection algorithm". Problem of miracles In his discussion of miracles, Hume argues that we should not believe miracles have occurred and that they do not therefore provide us with any reason to think God exists. In An Enquiry Concerning Human Understanding (Section 10), Hume defines a miracle as "a transgression of a law of nature by a particular volition of the Deity, or by the interposition of some invisible agent". Hume says we believe an event that has frequently occurred is likely to occur again, but we also take into account those instances where the event did not occur: A wise man ... considers which side is supported by the greater number of experiments. ... 
A hundred instances or experiments on one side, and fifty on another, afford a doubtful expectation of any event; though a hundred uniform experiments, with only one that is contradictory, reasonably beget a pretty strong degree of assurance. In all cases, we must balance the opposite experiments ... and deduct the smaller number from the greater, in order to know the exact force of the superior evidence. Hume discusses the testimony of those who report miracles. He wrote that testimony might be doubted even from some great authority in case the facts themselves are not credible: "[T]he evidence, resulting from the testimony, admits of a diminution, greater or less, in proportion as the fact is more or less unusual." Although Hume leaves open the possibility for miracles to occur and be reported, he offers various arguments against this ever having happened in history. He points out that people often lie, and they have good reasons to lie about miracles occurring either because they believe they are doing so for the benefit of their religion or because of the fame that results. Furthermore, people by nature enjoy relating miracles they have heard without caring for their veracity and thus miracles are easily transmitted even when false. Also, Hume notes that miracles seem to occur mostly in "ignorant and barbarous nations" and times, and the reason they do not occur in the civilised societies is that such societies are not awed by what they know to be natural events. Hume recognises that over a long period of time, various coincidences can provide the appearance of intention. Finally, the miracles of each religion argue against all other religions and their miracles, and so even if a proportion of all reported miracles across the world fit Hume's requirement for belief, the miracles of each religion make the other less likely. Hume was extremely pleased with his argument against miracles in his Enquiry. He states, "I flatter myself, that I have discovered an argument of a like nature, which, if just, will, with the wise and learned, be an everlasting check to all kinds of superstitious delusion, and consequently, will be useful as long as the world endures." Thus, Hume's argument against miracles had a more abstract basis founded upon the scrutiny not merely of miracles but of all forms of belief systems. It is a commonsense notion of veracity based upon epistemological evidence, and founded on a principle of rationality, proportionality and reasonability. The criterion of assessment in Hume's system of belief is the balance of probability: whether something is more likely than not to have occurred. Since the weight of empirical experience contradicts the notion of the existence of miracles, such accounts should be treated with scepticism. Further, the myriad accounts of miracles contradict one another, as some people who receive miracles will aim to prove the authority of Jesus, whereas others will aim to prove the authority of Muhammad or some other religious prophet or deity. These various differing accounts weaken the overall evidential power of miracles. Despite all this, Hume observes that belief in miracles is popular, and that "the gazing populace… receive greedily, without examination, whatever soothes superstition, and promotes wonder." Critics have argued that Hume's position assumes the character of miracles and natural laws prior to any specific examination of miracle claims, so that it amounts to a subtle form of begging the question.
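The arithmetic Hume appeals to above, deducting the smaller body of experience from the greater and proportioning belief to what remains, can be given a rough numerical gloss. The Python sketch below is only an illustration written for this article, not Hume's own calculus: the function name and the 0-to-1 "assurance" scale are assumptions introduced here for clarity.

```python
def assurance(favourable: int, contrary: int) -> float:
    """Crude gloss on Hume's rule of 'deducting the smaller number from the
    greater': the net weight of evidence as a fraction of all observed
    instances (0 = wholly doubtful, 1 = full assurance).
    The 0-1 scale is an illustrative assumption, not Hume's own measure."""
    total = favourable + contrary
    if total == 0:
        return 0.0  # no experience on either side
    return abs(favourable - contrary) / total

# Hume's two examples from Section 10 of the Enquiry:
print(assurance(100, 50))  # 0.33... -> "a doubtful expectation"
print(assurance(100, 1))   # ~0.98  -> "a pretty strong degree of assurance"
```

On this toy scale, testimony for a miracle would have to outweigh the uniform experience that stands behind a law of nature, which is why Hume concludes that no reported miracle has ever met the standard.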
To assume that testimony is a homogeneous reference group seems unwise: to compare private miracles with public miracles, unintellectual observers with intellectual observers, and those who have little to gain and much to lose with those with much to gain and little to lose is not convincing to many. Indeed, many have argued that miracles not only do not contradict the laws of nature but require the laws of nature in order to be intelligible as miraculous, and thus presuppose rather than subvert the law of nature. For example, William Adams remarks that "there must be an ordinary course of nature before anything can be extraordinary. There must be a stream before anything can be interrupted." They have also noted that it requires an appeal to inductive inference, as none have observed every part of nature nor examined every possible miracle claim, for instance those in the future. This, in Hume's philosophy, was especially problematic. Little appreciated is the voluminous literature either foreshadowing Hume, in the likes of Thomas Sherlock, or directly responding to and engaging with him, from William Paley, William Adams, John Douglas, John Leland, and George Campbell, among others. Regarding the latter, it is rumoured that, having read Campbell's Dissertation, Hume remarked that "the Scotch theologue had beaten him." Hume's main argument concerning miracles is that miracles by definition are singular events that differ from the established laws of nature. Such natural laws are codified as a result of past experiences. Therefore, a miracle is a violation of all prior experience and thus incapable on this basis of reasonable belief. However, the probability that something has occurred in contradiction of all past experience should always be judged to be less than the probability that either one's senses have deceived one, or that the person recounting the miraculous occurrence is lying or mistaken; all of these, Hume would say, are things of which we have past experience. For Hume, this refusal to grant credence does not guarantee correctness. He offers the example of an Indian Prince, who, having grown up in a hot country, refuses to believe that water has frozen. By Hume's lights, this refusal is not wrong and the prince "reasoned justly;" it is presumably only when he has had extensive experience of the freezing of water that he has warrant to believe that the event could occur. So, for Hume, either the miraculous event will become a recurrent event or else it will never be rational to believe it occurred. The connection to religious belief is left unexplained throughout, except for the close of his discussion where Hume notes the reliance of Christianity upon testimony of miraculous occurrences. He makes an ironic remark that anyone who "is moved by faith to assent" to revealed testimony "is conscious of a continued miracle in his own person, which subverts all principles of his understanding, and gives him a determination to believe what is most contrary to custom and experience." Hume writes that "All the testimony which ever was really given for any miracle, or ever will be given, is a subject of derision." As a historian of England From 1754 to 1762 Hume published The History of England, a six-volume work that extends (according to its subtitle) "From the Invasion of Julius Caesar to the Revolution in 1688." Inspired by Voltaire's sense of the breadth of history, Hume widened the focus of the field away from merely kings, parliaments, and armies, to literature and science as well.
He argued that the quest for liberty was the highest standard for judging the past, and concluded that after considerable fluctuation, England at the time of his writing had achieved "the most entire system of liberty that was ever known amongst mankind". It "must be regarded as an event of cultural importance. In its own day, moreover, it was an innovation, soaring high above its very few predecessors." Hume's History of England made him famous as a historian before he was ever considered a serious philosopher. In this work, Hume uses history to tell the story of the rise of England and what led to its greatness and the disastrous effects that religion has had on its progress. For Hume, the history of England's rise may give a template for others who would also like to rise to its current greatness. Hume's The History of England was profoundly impacted by his Scottish background. The science of sociology, which is rooted in Scottish thinking of the eighteenth century, had never before been applied to British philosophical history. Because of his Scottish background, Hume was able to bring an outsider's lens to English history that the insulated English whigs lacked. Hume's coverage of the political upheavals of the 17th century relied in large part on the Earl of Clarendon's History of the Rebellion and Civil Wars in England (1646–69). Generally, Hume took a moderate royalist position and considered revolution unnecessary to achieve necessary reform. Hume was considered a Tory historian and emphasised religious differences more than constitutional issues. Laird Okie explains that "Hume preached the virtues of political moderation, but ... it was moderation with an anti-Whig, pro-royalist coloring." For "Hume shared the ... Tory belief that the Stuarts were no more high-handed than their Tudor predecessors". "Even though Hume wrote with an anti-Whig animus, it is, paradoxically, correct to regard the History as an establishment work, one which implicitly endorsed the ruling oligarchy". Historians have debated whether Hume posited a universal unchanging human nature, or allowed for evolution and development. The debate between Tory and the Whig historians can be seen in the initial reception to Hume's History of England. The whig-dominated world of 1754 overwhelmingly disapproved of Hume's take on English history. In later editions of the book, Hume worked to "soften or expunge many villainous whig strokes which had crept into it." Hume did not consider himself a pure Tory. Before 1745, he was more akin to an "independent whig." In 1748, he described himself as "a whig, though a very skeptical one." This description of himself as in between whiggism and toryism, helps one understand that his History of England should be read as his attempt to work out his own philosophy of history. Robert Roth argues that Hume's histories display his biases against Presbyterians and Puritans. Roth says his anti-Whig pro-monarchy position diminished the influence of his work, and that his emphasis on politics and religion led to a neglect of social and economic history. Hume was an early cultural historian of science. His short biographies of leading scientists explored the process of scientific change. He developed new ways of seeing scientists in the context of their times by looking at how they interacted with society and each other. He covers over forty scientists, with special attention paid to Francis Bacon, Robert Boyle, and Isaac Newton. 
Hume particularly praised William Harvey, writing of his treatise on the circulation of the blood: "Harvey is entitled to the glory of having made, by reasoning alone, without any mixture of accident, a capital discovery in one of the most important branches of science." The History became a best-seller and made Hume a wealthy man who no longer had to take up salaried work for others. It was influential for nearly a century, despite competition from imitations by Smollett (1757), Goldsmith (1771) and others. By 1894, there were at least 50 editions as well as abridgements for students, and illustrated pocket editions, probably produced specifically for women. Political theory Many of Hume's political ideas, such as limited government, private property when there is scarcity, and constitutionalism, are first principles of liberalism. Thomas Jefferson banned the History from the University of Virginia, feeling that it had "spread universal toryism over the land." By comparison, Samuel Johnson thought Hume to be "a Tory by chance [...] for he has no principle. If he is anything, he is a Hobbist." A major concern of Hume's political philosophy is the importance of the rule of law. He also stresses throughout his political essays the importance of moderation in politics, public spirit, and regard to the community. Throughout the period of the American Revolution, Hume had varying views. For instance, in 1768 he encouraged total revolt on the part of the Americans. In 1775, he became certain that a revolution would take place and said that he believed in the American principle and wished the British government would let them be. Hume's influence on some of the Founders can be seen in Benjamin Franklin's suggestion at the Philadelphia Convention of 1787 that no high office in any branch of government should receive a salary, which is a suggestion Hume had made in his emendation of James Harrington's Oceana. The legacy of religious civil war in 18th-century Scotland, combined with the relatively recent memory of the 1715 and 1745 Jacobite risings, had fostered in Hume a distaste for enthusiasm and factionalism. These appeared to him to threaten the fragile and nascent political and social stability of a country that was deeply politically and religiously divided. Hume thought that society is best governed by a general and impartial system of laws; he is less concerned about the form of government that administers these laws, so long as it does so fairly. However, he also clarified that a republic must produce laws, while "monarchy, when absolute, contains even something repugnant to law." Hume expressed suspicion of attempts to reform society in ways that departed from long-established custom, and he counselled people not to resist their governments except in cases of the most egregious tyranny. However, he resisted aligning himself with either of Britain's two political parties, the Whigs and the Tories, explaining that "my views of things are more conformable to Whig principles; my representations of persons to Tory prejudices". The scholar Jerry Z. Muller argues that Hume's political thought has characteristics that later became typical of American and British conservatism, which takes a more positive view of capitalism than conservatism does elsewhere. Canadian philosopher Neil McArthur writes that Hume believed that we should try to balance our demands for liberty with the need for strong authority, without sacrificing either.
McArthur characterises Hume as a "precautionary conservative," whose actions would have been "determined by prudential concerns about the consequences of change, which often demand we ignore our own principles about what is ideal or even legitimate." Hume supported the liberty of the press, and was sympathetic to democracy, when suitably constrained. American historian Douglass Adair has argued that Hume was a major inspiration for James Madison's writings, and the essay "Federalist No. 10" in particular. Hume offered his view on the best type of society in an essay titled "Idea of a Perfect Commonwealth", which lays out what he thought was the best form of government. He hoped that "in some future age, an opportunity might be afforded of reducing the theory to practice, either by a dissolution of some old government, or by the combination of men to form a new one, in some distant part of the world". He defended a strict separation of powers, decentralisation, extending the franchise to anyone who held property of value and limiting the power of the clergy. The system of the Swiss militia was proposed as the best form of protection. Elections were to take place on an annual basis and representatives were to be unpaid. Political philosophers Leo Strauss and Joseph Cropsey, writing of Hume's thoughts about "the wise statesman", note that he "will bear a reverence to what carries the marks of age." Also, if he wishes to improve a constitution, his innovations will take account of the "ancient fabric", in order not to disturb society. In the political analysis of philosopher George Holland Sabine, the scepticism of Hume extended to the doctrine of government by consent. He notes that "allegiance is a habit enforced by education and consequently as much a part of human nature as any other motive." In the 1770s, Hume was critical of British policies toward the American colonies and advocated for American independence. He wrote in 1771 that "our union with America…in the nature of things, cannot long subsist." Contributions to economic thought Hume expressed his economic views in his Political Discourses, which were incorporated in Essays and Treatises as Part II of Essays, Moral and Political. To what extent he was influenced by Adam Smith is difficult to assess; however, both of them held similar principles, supported by appeals to historical events. At the same time, Hume did not set out a concrete system of economic theory of the kind found in Smith's Wealth of Nations. However, he introduced several new ideas around which the "classical economics" of the 18th century was built. Through his discussions on politics, Hume developed many ideas that are prevalent in the field of economics. These include ideas on private property, inflation, and foreign trade. Referring to his essay "Of the Balance of Trade", economist Paul Krugman (2012) has remarked that "David Hume created what I consider the first true economic model." In contrast to Locke, Hume believes that private property is not a natural right. Hume argues it is justified because resources are limited. Private property would be an unjustified, "idle ceremonial," if all goods were unlimited and available freely. Hume also believed in an unequal distribution of property, because perfect equality would destroy the ideas of thrift and industry. Perfect equality would thus lead to impoverishment. David Hume anticipated modern monetarism. First, Hume contributed to the quantity theory of money and to the theory of the interest rate.
Hume has been credited with being the first to prove that, on an abstract level, there is no quantifiable amount of nominal money that a country needs to thrive. He understood that there was a difference between nominal and real money. Second, Hume has a theory of causation which fits in with the Chicago-school "black box" approach. According to Hume, cause and effect are related only through correlation. Hume shared the belief with modern monetarists that changes in the supply of money can affect consumption and investment. Lastly, Hume was a vocal advocate of a stable private sector, though his economic philosophy also had some non-monetarist aspects. Having a stated preference for rising prices, for instance, Hume considered government debt to be a sort of substitute for actual money, referring to such debt as "a kind of paper credit." He also favoured heavy taxation, believing that it increases effort. Hume's economic approach evidently resembles his other philosophies, in that he does not choose one side definitively, but sees shades of gray in the situation. Legacy Due to Hume's vast influence on contemporary philosophy, a large number of approaches in contemporary philosophy and cognitive science are today called "Humean." The writings of Thomas Reid, a Scottish philosopher and contemporary of Hume, were often critical of Hume's scepticism. Reid formulated his common sense philosophy, in part, as a reaction against Hume's views. Hume influenced, and was influenced by, the Christian philosopher Joseph Butler. Hume was impressed by Butler's way of thinking about religion, and Butler may well have been influenced by Hume's writings. Attention to Hume's philosophical works grew after the German philosopher Immanuel Kant, in his Prolegomena to Any Future Metaphysics (1783), credited Hume with awakening him from his "dogmatic slumber." According to Arthur Schopenhauer, "there is more to be learned from each page of David Hume than from the collected philosophical works of Hegel, Herbart and Schleiermacher taken together." A. J. Ayer, while introducing his classic exposition of logical positivism in 1936, claimed that his views were "the logical outcome of the empiricism of Berkeley and David Hume". Albert Einstein, in 1915, wrote that he was inspired by Hume's positivism when formulating his theory of special relativity. Hume's problem of induction was also of fundamental importance to the philosophy of Karl Popper. In his autobiography, Unended Quest, he wrote: "Knowledge ... is objective; and it is hypothetical or conjectural. This way of looking at the problem made it possible for me to reformulate Hume's problem of induction." This insight resulted in Popper's major work The Logic of Scientific Discovery. In his Conjectures and Refutations, he wrote that he "approached the problem of induction through Hume", since Hume was "perfectly right in pointing out that induction cannot be logically justified". Hume's rationalism in religious subjects influenced, via German-Scottish theologian Johann Joachim Spalding, the German neology school and rational theology, and contributed to the transformation of German theology in the Age of Enlightenment. Hume pioneered a comparative history of religion, tried to explain various rites and traditions as being based on deception, and challenged various aspects of rational and natural theology, such as the argument from design.
Danish theologian and philosopher Søren Kierkegaard adopted "Hume's suggestion that the role of reason is not to make us wise but to reveal our ignorance," though taking it as a reason for the necessity of religious faith, or fideism. The "fact that Christianity is contrary to reason…is the necessary precondition for true faith." Political theorist Isaiah Berlin, who has also pointed out the similarities between the arguments of Hume and Kierkegaard against rational theology, has written about Hume's influence on what Berlin calls the counter-Enlightenment and on German anti-rationalism. Berlin has also once said of Hume that "no man has influenced the history of philosophy to a deeper or more disturbing degree." In 2003, philosopher Jerry Fodor described Hume's Treatise as "the founding document of cognitive science." Hume engaged with contemporary intellectuals including Jean-Jacques Rousseau, James Boswell, and Adam Smith (who acknowledged Hume's influence on his economics and political philosophy). Morris and Brown (2019) write that Hume is "generally regarded as one of the most important philosophers to write in English." In September 2020, the David Hume Tower, a University of Edinburgh building, was renamed to 40 George Square; this was following a campaign led by students of the university to rename it, in objection to Hume's writings related to race. Works 1734. A Kind of History of My Life. – MSS 23159 National Library of Scotland. A letter to an unnamed physician, asking for advice about "the Disease of the Learned" that then afflicted him. Here he reports that at the age of eighteen "there seem'd to be open'd up to me a new Scene of Thought" that made him "throw up every other Pleasure or Business" and turned him to scholarship. 1739–1740. A Treatise of Human Nature: Being an Attempt to introduce the experimental Method of Reasoning into Moral Subjects. Hume intended to see whether the Treatise of Human Nature met with success, and if so, to complete it with books devoted to Politics and Criticism. However, as Hume explained, "It fell dead-born from the press, without reaching such distinction as even to excite a murmur among the zealots" and so his further project was not completed. 1740. An Abstract of a Book lately Published: Entitled A Treatise of Human Nature etc. Anonymously published, but almost certainly written by Hume in an attempt to popularise his Treatise. This work is of considerable philosophical interest as it spells out what Hume considered "The Chief Argument" of the Treatise, in a way that seems to anticipate the structure of the Enquiry concerning Human Understanding. 1741. Essays, Moral, Political, and Literary (2nd ed.) A collection of pieces written and published over many years, though most were collected together in 1753–54. Many of the essays are on politics and economics; other topics include aesthetic judgement, love, marriage and polygamy, and the demographics of ancient Greece and Rome. The Essays show some influence from Addison's Tatler and The Spectator, which Hume read avidly in his youth. 1745. A Letter from a Gentleman to His Friend in Edinburgh: Containing Some Observations on a Specimen of the Principles concerning Religion and Morality, said to be maintain'd in a Book lately publish'd, intituled A Treatise of Human Nature etc. Contains a letter written by Hume to defend himself against charges of atheism and scepticism, while applying for a chair at Edinburgh University. 1742. "Of Essay Writing." 1748. 
An Enquiry Concerning Human Understanding. Contains reworking of the main points of the Treatise, Book 1, with the addition of material on free will (adapted from Book 2), miracles, the Design Argument, and mitigated scepticism. Of Miracles, section X of the Enquiry, was often published separately. 1751. An Enquiry Concerning the Principles of Morals. A reworking of material on morality from Book 3 of the Treatise, but with a significantly different emphasis. It "was thought by Hume to be the best of his writings." 1752. Political Discourses (part II of Essays, Moral, Political, and Literary within the larger Essays and Treatises on Several Subjects, vol. 1). Included in Essays and Treatises on Several Subjects (1753–56) reprinted 1758–77. 1752–1758. Political Discourses/Discours politiques 1757. Four Dissertations – includes 4 essays: "The Natural History of Religion" "Of the Passions" "Of Tragedy" "Of the Standard of Taste" 1754–1762. The History of England – sometimes referred to as The History of Great Britain. More a category of books than a single work, Hume's history spanned "from the invasion of Julius Caesar to the Revolution of 1688" and went through over 100 editions. Many considered it the standard history of England in its day. 1760. "Sister Peg" Hume claimed to have authored an anonymous political pamphlet satirizing the failure of the British Parliament to create a Scottish militia in 1760. Although the authorship of the work is disputed, Hume wrote Alexander Carlyle in early 1761 claiming authorship. The readership of the time attributed the work to Adam Ferguson, a friend and associate of Hume's who has been sometimes called "the founder of modern sociology." Some contemporary scholars concur in the judgment that Ferguson, not Hume, was the author of this work. 1776. "My Own Life." Penned in April, shortly before his death, this autobiography was intended for inclusion in a new edition of Essays and Treatises on Several Subjects. It was first published by Adam Smith, who claimed that by doing so he had incurred "ten times more abuse than the very violent attack I had made upon the whole commercial system of Great Britain." 1777. "Essays on Suicide and the Immortality of the Soul." 1779. Dialogues Concerning Natural Religion. Published posthumously by his nephew, David Hume the Younger. Being a discussion among three fictional characters concerning the nature of God, and is an important portrayal of the argument from design. Despite some controversy, most scholars agree that the view of Philo, the most sceptical of the three, comes closest to Hume's own. See also Age of Enlightenment George Anderson Human science Hume Studies Hume's principle Humeanism Mencius Scientific scepticism The Missing Shade of Blue References Notes Citations Bibliography Anderson, R. F. (1966). Hume's First Principles, University of Nebraska Press, Lincoln. Bongie, L. L. (1998). David Hume – Prophet of the Counter-Revolution. Liberty Fund, Indianapolis Broackes, Justin (1995). Hume, David, in Ted Honderich (ed.) The Oxford Companion to Philosophy, New York, Oxford University Press Daiches D., Jones P., Jones J. (eds). The Scottish Enlightenment: 1730–1790 A Hotbed of Genius The University of Edinburgh, 1986. In paperback, The Saltire Society, 1996 Einstein, A. (1915) Letter to Moritz Schlick, Schwarzschild, B. (trans. & ed.) in The Collected Papers of Albert Einstein, vol. 8A, R. Schulmann, A. J. Fox, J. Illy, (eds.) Princeton University Press, Princeton, NJ (1998), p. 220. Flew, A. (1986). 
David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford. Fogelin, R. J. (1993). Hume's scepticism. In Norton, D. F. (ed.) (1993). The Cambridge Companion to Hume, Cambridge University Press, pp. 90–116. Graham, R. (2004). The Great Infidel – A Life of David Hume. John Donald, Edinburgh. Harwood, Sterling (1996). "Moral Sensibility Theories", in The Encyclopedia of Philosophy (Supplement) (New York: Macmillan Publishing Co.). Hume, D. (1751). An Enquiry Concerning the Principles of Morals. David Hume, Essays Moral, Political, and Literary edited with preliminary dissertations and notes by T.H. Green and T.H. Grose, 1:1–8. London: Longmans, Green 1907. Hume, D. (1752–1758). Political Discourses:Bilingual English-French (translated by Fabien Grandjean). Mauvezin, France, Trans-Europ-Repress, 1993, 22 cm, V-260 p. Bibliographic notes, index. Husserl, E. (1970). The Crisis of European Sciences and Transcendental Phenomenology, Carr, D. (trans.), Northwestern University Press, Evanston. Klibansky, Raymond and Mossner, Ernest C. (eds.) (1954). New Letters of David Hume. Oxford: Oxford University Press. Kolakowski, L. (1968). The Alienation of Reason: A History of Positivist Thought. Doubleday: Garden City. Penelhum, T. (1993). Hume's moral philosophy. In Norton, D. F. (ed.), (1993). The Cambridge Companion to Hume, Cambridge University Press, pp. 117–147. Phillipson, N. (1989). Hume, Weidenfeld & Nicolson, London. Popkin, Richard H. (1993) "Sources of Knowledge of Sextus Empiricus in Hume's Time" Journal of the History of Ideas, Vol. 54, No. 1. (Jan. 1993), pp. 137–141. Popkin, R. & Stroll, A. (1993) Philosophy. Reed Educational and Professional Publishing Ltd, Oxford. Popper. K. (1960). Knowledge without authority. In Miller D. (ed.), (1983). Popper, Oxford, Fontana, pp. 46–57. Robbins, Lionel (1998). A History of Economic Thought: The LSE Lectures. Edited by Steven G. Medema and Warren J. Samuels. Princeton University Press, Princeton, NJ. Robinson, Dave & Groves, Judy (2003). Introducing Political Philosophy. Icon Books. . Russell, B. (1946). A History of Western Philosophy. London, Allen and Unwin. Russell, Paul, "Hume on Free Will", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), online. Sgarbi, M. (2012). "Hume's Source of the 'Impression-Idea' Distinction", Anales del Seminario de Historia de la Filosofía, 2: 561–576 Spencer, Mark G., ed. David Hume: Historical Thinker, Historical Writer (Penn State University Press; 2013) 282 pages; Interdisciplinary essays that consider his intertwined work as historian and philosopher Spiegel, Henry William, (1991). The Growth of Economic Thought, 3rd Ed., Durham: Duke University Press. Stroud, B. (1977). Hume, Routledge: London & New York. Taylor, A. E. (1927). David Hume and the Miraculous, Leslie Stephen Lecture. Cambridge, pp. 53–54. reprinted in his Philosophical Studies (1934) Further reading Ardal, Pall (1966). Passion and Value in Hume's Treatise, Edinburgh, Edinburgh University Press. Bailey, Alan & O'Brien, Dan (eds.) (2012). The Continuum Companion to Hume, New York: Continuum. Bailey, Alan & O'Brien, Dan. (2014). Hume's Critique of Religion: Sick Men's Dreams, Dordrecht: Springer. Beauchamp, Tom & Rosenberg, Alexander (1981). Hume and the Problem of Causation, New York, Oxford University Press. Beveridge, Craig (1982), review of The Life of David Hume by Ernest Campbell Mossner, in Murray, Glen (ed.), Cencrastus No. 8, Spring 1982, p. 46, Campbell Mossner, Ernest (1980). 
The Life of David Hume, Oxford University Press. Gilles Deleuze (1953). Empirisme et subjectivité. Essai sur la Nature Humaine selon Hume, Paris: Presses Universitaires de France; trans. Empiricism and Subjectivity, New York: Columbia University Press, 1991. Demeter, Tamás (2014). "Natural Theology as Superstition: Hume and the Changing Ideology of Moral Inquiry." In Demeter, T. et al. (eds.), Conflicting Values of Inquiry, Leiden: Brill. Garrett, Don (1996). Cognition and Commitment in Hume's Philosophy. New York & Oxford: Oxford University Press. Gaskin, J.C.A. (1978). Hume's Philosophy of Religion. Humanities Press International. Harris, James A. (2015). Hume: An Intellectual Biography. Cambridge: Cambridge University Press. Hesselberg, A. Kenneth (1961). Hume, Natural Law and Justice. Duquesne Review, Spring 1961, pp. 46–47. Kail, P. J. E. (2007) Projection and Realism in Hume's Philosophy, Oxford: Oxford University Press. Kemp Smith, Norman (1941). The Philosophy of David Hume. London: Macmillan. Norton, David Fate (1982). David Hume: Common-Sense Moralist, Sceptical Metaphysician. Princeton: Princeton University Press. Norton, David Fate & Taylor, Jacqueline (eds.) (2009). The Cambridge Companion to Hume, Cambridge: Cambridge University Press. Radcliffe, Elizabeth S. (ed.) (2008). A Companion to Hume, Malden: Blackwell. Rosen, Frederick (2003). Classical Utilitarianism from Hume to Mill (Routledge Studies in Ethics & Moral Theory). Russell, Paul (1995). Freedom and Moral Sentiment: Hume's Way of Naturalizing Responsibility. New York & Oxford: Oxford University Press. Russell, Paul (2008). The Riddle of Hume's Treatise: Skepticism, Naturalism and Irreligion. New York & Oxford: Oxford University Press. Stroud, Barry (1977). Hume, London & New York: Routledge. (Complete study of Hume's work parting from the interpretation of Hume's naturalistic philosophical programme). Wei, Jua (2017). Commerce and Politics in Hume’s History of England, Woodbridge: Boydell and Brewer online review Willis, Andre C (2015). Toward a Humean True Religion: Genuine Theism, Moderate Hope, and Practical Morality, University Park: Penn State University Press. Wilson, Fred (2008). The External World and Our Knowledge of It : Hume's critical realism, an exposition and a defence, Toronto: University of Toronto Press. External links The David Hume Collection at McGill University Library Books by David Hume at the Online Books Page Hume Texts Online searchable texts, with related resources Peter Millican. Papers and Talks on Hume Peter Millican. Research Translations of philosophical classics into contemporary English, from English, Latin, French and German. 
David Hume: My Own Life and Adam Smith: obituary of Hume Bibliography of Hume's influence on Utilitarianism The Hume Society, publishes Hume Studies and holds conferences 1711 births 1776 deaths 18th-century Scottish male writers 18th-century British philosophers 18th-century British diplomats 18th-century British economists 18th-century British essayists 18th-century Scottish educators 18th-century Scottish historians Action theorists Alumni of the University of Edinburgh British diplomats British male essayists British male non-fiction writers British sceptics Burials at Old Calton Burial Ground Civil servants from Edinburgh British consciousness researchers and theorists Conservatism Criticism of rationalism British critics of religions Critics of the Catholic Church Deist philosophers Diplomats from Edinburgh Empiricists Enlightenment philosophers Epistemologists Freethought writers Historians of England History of economic thought Members of the Philosophical Society of Edinburgh Metaphilosophers Ontologists People of the Scottish Enlightenment Philosophers from Edinburgh Philosophers of art Philosophers of economics British philosophers of education Philosophers of history Philosophers of identity Philosophers of logic Philosophers of mathematics Philosophers of mind Philosophers of psychology Philosophers of religion Philosophers of science Philosophers of social science Philosophy writers Preclassical economists Scottish economists Scottish educational theorists Scottish ethicists Scottish deists Scottish diplomats Scottish essayists Scottish humanists Scottish libertarians Scottish librarians Scottish logicians Scottish monarchists Scottish philosophers Scottish political philosophers Secular humanists Skeptic philosophers Social philosophers Theorists on Western civilization Virtue ethicists Writers about activism and social change Writers about religion and science Writers from Edinburgh
David Hume
Mathematics
18,368
45,456,809
https://en.wikipedia.org/wiki/Penicillium%20duclauxii
Penicillium duclauxii is an anamorph species of the genus Penicillium that produces xenoclauxin and duclauxin. Description Colonies on CYA on day 7 are 2.5–3 cm in diameter, somewhat radially striated, with white and yellow mycelium, fluffy, with synnemata, and non-spore-bearing or weakly spore-bearing. There is no exudate. The reverse of the colonies is olive-brown in the center, shading to corn-yellow along the edge. A yellow soluble pigment is released into the medium. On malt extract agar (MEA), colonies have white mycelium and are velvety, with synnemata along the edges and sparse sporulation in gray-green tones. Exudate and soluble pigment are not released. The reverse is brown, brown-yellow closer to the edge. On yeast extract sucrose agar (YES), colonies have white mycelium, are concentrically striated, and are non-spore-bearing. Soluble pigment is not released; the reverse of the colonies is olive-brown, grading to gray-yellow along the edges. Conidiophores are two-tiered (biverticillate) brushes with a smooth-walled stipe 15–50 μm long and 3–4 μm thick. Metulae in the terminal whorl number 2–6 and are divergent, 8.5–15 μm long. Phialides are needle-shaped, 3–8 in a bundle, 9–15 × 2–3.5 μm. Conidia are ellipsoidal, smooth to barely rough, 3–4 × 1.5–3.5 μm. See also List of Penicillium species References Further reading duclauxii Fungi described in 1891 Fungus species
Penicillium duclauxii
Biology
384
16,184,365
https://en.wikipedia.org/wiki/Kurt%20Sch%C3%BCtte
Kurt Schütte (14 October 1909 – 18 August 1998) was a German mathematician who worked on proof theory and ordinal analysis. The Feferman–Schütte ordinal, which he showed to be the precise ordinal bound for predicativity, is named after him. He was the doctoral advisor of 16 students, including Wolfgang Bibel, Wolfgang Maaß, Wolfram Pohlers, and Martin Wirsing. Publications Beweistheorie, Springer, Grundlehren der mathematischen Wissenschaften, 1960; new edition trans. into English as Proof Theory, Springer-Verlag 1977 Vollständige Systeme modaler und intuitionistischer Logik, Springer 1968 with Wilfried Buchholz: Proof Theory of Impredicative Subsystems of Analysis, Bibliopolis, Naples 1988 with Helmut Schwichtenberg: Mathematische Logik, in Fischer, Hirzebruch et al. (eds.) Ein Jahrhundert Mathematik 1890-1990, Vieweg 1990 References External links Kurt Schütte at the Mathematics Genealogy Project 1909 births 1998 deaths People from Salzwedel People from the Province of Saxony Mathematical logicians 20th-century German mathematicians
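For context on the ordinal named above, a standard characterisation can be given in terms of the Veblen hierarchy; the notation below is supplied here for illustration and is not drawn from this article.

```latex
% A standard characterisation of the Feferman–Schütte ordinal \Gamma_0.
% Here \varphi_0(\beta) = \omega^{\beta}, and for \alpha > 0 the function
% \varphi_\alpha enumerates the common fixed points of all \varphi_\gamma
% with \gamma < \alpha (the Veblen hierarchy).
\Gamma_0 \,=\, \min\{\alpha : \varphi_\alpha(0) = \alpha\}
```

Γ₀ is thus the first ordinal that cannot be reached from below by the Veblen functions, which is the sense in which Schütte (and, independently, Feferman) identified it as the limit of predicative provability.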
Kurt Schütte
Mathematics
264
5,557,538
https://en.wikipedia.org/wiki/Abell%202667
Abell 2667 is a galaxy cluster. It is one of the most luminous galaxy clusters known in the X-ray waveband at a redshift of about 0.2, and is a well-known gravitational lens. On 2 March 2007, a team of astronomers reported the detection of the Comet Galaxy in this cluster. This galaxy is being ripped apart by the cluster's gravitational field and harsh environment. The finding sheds light on the mysterious process by which gas-rich spiral-shaped galaxies might evolve into gas-poor irregular or elliptical-shaped galaxies over billions of years. See also Abell catalogue List of Abell clusters X-ray astronomy References External links Hubble Space Telescope Spitzer Space Telescope ESA news 2667 Galaxy clusters Gravitational lensing Abell richness class 3 Sculptor (constellation)
Abell 2667
Astronomy
161
39,068
https://en.wikipedia.org/wiki/Digital%20electronics
Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. This is in contrast to analog electronics which work primarily with analog signals. Despite the name, digital electronics designs include important analog design considerations. Digital electronic circuits are usually made from large assemblies of logic gates, often packaged in integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions. History The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705) and he also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brainchild of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924. Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs. In his 1937 master's thesis, Claude Shannon demonstrated that electrical applications of Boolean algebra could construct any logical numerical relationship, ultimately laying the foundations of digital computing and digital circuits. The thesis, which won the 1939 Alfred Noble Prize, is considered to be arguably the most important master's thesis ever written. The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer. Purely electronic digital logic was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming. At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes. Their "transistorised computer", the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers.
Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space. In 1955, Carl Frosch and Lincoln Derick discovered silicon dioxide surface passivation effects. In 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field-effect transistors, the first planar transistors, in which drain and source were adjacent at the same surface. At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni, who would later invent the planar process in 1959 while at Fairchild Semiconductor. At Bell Labs, J.R. Ligenza and W.G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high-quality Si/SiO2 stack and published their results in 1960. Following this research at Bell Labs, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. The team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958. Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was Hoerni's planar process. The MOSFET's advantages include high scalability, affordability, low power consumption, and high transistor density. Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains, the basis for electronic digital signals, in contrast to BJTs, which more slowly generate analog signals resembling sine waves. Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits. The MOSFET revolutionized the electronics industry, and is the most common semiconductor device. In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip. Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed, and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total number of transistors produced up to 2018 has been estimated to be 13 sextillion.
The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS). Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s and 2000s. Properties An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise. For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s. In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain. With computer-controlled digital systems, new functions can be added through software revision and no hardware changes are needed. Often this can be done outside of the factory by updating the product's software. This way, the product's design errors can be corrected even after the product is in a customer's hands. Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data provided too many errors do not occur. In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which increases the complexity of the circuits, for example by requiring heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards. Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal. If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click.
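As a rough numerical illustration of two of the points above, that extra binary digits buy a more precise representation, and that an isolated bit error in uncompressed pulse-code modulation stays small and local, here is a short Python sketch. The sine-wave test signal, the bit widths and the corrupted sample index are arbitrary choices made for this example.

```python
import math

def quantize(x: float, bits: int) -> int:
    """Map a sample in [-1, 1] to an unsigned integer code of the given bit width."""
    levels = 2 ** bits - 1
    return round((x + 1) / 2 * levels)

def decode(code: int, bits: int) -> float:
    """Map an integer code back to a value in [-1, 1]."""
    levels = 2 ** bits - 1
    return code / levels * 2 - 1

samples = [math.sin(2 * math.pi * n / 200) for n in range(200)]

# More binary digits per sample -> smaller worst-case quantization error.
for bits in (4, 8, 12):
    err = max(abs(s - decode(quantize(s, bits), bits)) for s in samples)
    print(f"{bits:2d} bits: max quantization error ~ {err:.5f}")

# A flipped low-order bit in one PCM sample produces only a small, local glitch.
bits = 8
codes = [quantize(s, bits) for s in samples]
codes[100] ^= 0b00000001                      # corrupt the least significant bit
glitch = abs(decode(codes[100], bits) - samples[100])
print(f"error introduced by the bit flip: ~ {glitch:.5f}")
```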
But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption. Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data. Construction A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors but thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates. Another form of digital circuit is constructed from lookup tables, (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software. Integrated circuits consist of multiple transistors on one silicon chip and are the least expensive way to make a large number of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board which is a board that holds electrical components, and connects them together with copper traces. Design Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system. Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic. Representation A digital circuit's input-output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91–1984). A low-level representation uses an equivalent circuit of electronic switches (usually transistors). Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. However, a sequential system has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, to produce a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test. Sequential systems divide into two further subcategories. 
"Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip flops that store inputted voltages as a bit only when the clock changes. Synchronous systems The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic. Asynchronous systems Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates. Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters. Asynchronous logic components can be hard to design because all possible states, in all possible timings must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable—that is—real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components. Register transfer systems Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog. In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses. Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs. 
Computer design The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines. Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core. Another was offered by ARM Holdings. They do not, however, have any speed advantages because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally pure radio noise. They are used in some radio-sensitive mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode. Computer architecture Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones. Design issues in digital circuits Digital circuits are made from analog components. The design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances. Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses (pulses that do not reach valid threshold voltages). Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability, where a change to the input violates the setup time for a digital input latch. Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity. Automated design tools Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA). Simple truth table-style descriptions of logic are often optimized with EDA tools that automatically produce reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer.
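As a toy illustration of the redundancy removal such tools perform (this is an exhaustive truth-table comparison, not the Espresso or Quine–McCluskey algorithm, and the expression is a made-up example), the function A·B + A·B' can be checked to collapse to just A:

# Brute-force equivalence check over all input combinations.
from itertools import product

def original_expr(a, b):
    return (a and b) or (a and not b)   # two AND gates feeding an OR gate

def minimized_expr(a, b):
    return a                            # the redundant logic removed

assert all(bool(original_expr(a, b)) == bool(minimized_expr(a, b))
           for a, b in product((0, 1), repeat=2))
print("A·B + A·B' is equivalent to A for every input combination")

Production minimizers do the same job heuristically, since exhaustive comparison becomes infeasible as the number of inputs grows.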
Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations. To automate costly engineering processes, some EDA tools can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals. Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board. Parts of tool flows are debugged by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors. The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section). Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Software designed for manufacturability adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast. Design for testability There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the design circuit meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws. A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states.
Obviously, factory testing every state of such a machine is unfeasible, for even if testing each state only took a microsecond, there are more possible states than there are microseconds since the universe began! Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design-for-test circuitry and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine. Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, and minimize the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good machine result. In a board-test environment, serial-to-parallel testing has been formalized as the JTAG standard. Trade-offs Cost Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable. The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic. With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in more complicated designs with respect to the underlying digital logic but nevertheless reduces the number of components, board size, and even power consumption. Reliability Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections and increase reliability. Defect and failure rates tend to increase along with the total number of component pins. The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system. The reliability of a logic gate can be described by its mean time between failure (MTBF). Digital machines first became useful when the MTBF for a switch increased above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out, or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2 × 10¹⁰ hours). This level of reliability is required because integrated circuits have so many logic gates.
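Some simple arithmetic shows why per-gate MTBF has to be so enormous. The sketch below uses a deliberately naive series-reliability model with assumed gate counts: it treats the machine as failed when any gate fails and assumes independent, constant failure rates, so the machine-level MTBF is roughly the per-gate MTBF divided by the number of gates. Real designs also rely on margins, screening and redundancy, so the numbers are only indicative.

# Naive series-reliability sketch (assumed figures, for illustration only).
gate_mtbf_hours = 82e9          # per-gate MTBF quoted in the text
for gate_count in (1e3, 1e6, 1e9):
    machine_mtbf_hours = gate_mtbf_hours / gate_count
    print(f"{gate_count:12,.0f} gates -> roughly "
          f"{machine_mtbf_hours / (24 * 365):,.2f} years between failures")

Under this model a million-gate machine built from 82-billion-hour gates would still fail about once a decade, and a billion-gate chip would need even better devices, which is exactly the point made above.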
Fan-out Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs. The minimum practical fan-out is about five. Modern electronic logic gates using CMOS transistors for switches have higher fan-outs. Speed The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at gigahertz rates, and some laboratory systems switch even faster. Logic families Digital design started with relay logic, which is slow. Occasionally a mechanical failure would occur. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages. Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours. The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power supplies with alternating layers of NPN and PNP transistors to increase the fan-out. Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs. Emitter-coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers, such as the Illiac IV, made up of many medium-scale components. By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z. Recent developments In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes. The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements. See also De Morgan's laws Logical effort Logic optimization Microelectronics Unconventional computing Notes References Further reading Douglas Lewin, Logical Design of Switching Circuits, Nelson, 1974. R. H. Katz, Contemporary Logic Design, The Benjamin/Cummings Publishing Company, 1994. P. K. Lala, Practical Digital Logic Design and Testing, Prentice Hall, 1996. Y. K. Chan and S. Y. Lim, "Synthetic Aperture Radar (SAR) Signal Generation", Progress In Electromagnetics Research B, Vol. 1, 269–290, 2008, Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka 75450, Malaysia.
External links Digital Circuit Projects: An Overview of Digital Circuits Through Implementing Integrated Circuits (2014) MIT OpenCourseWare introduction to digital design class materials ("6.004: Computation Structures") Electronic design Electronic design automation
Digital electronics
Engineering
6,181
69,248,558
https://en.wikipedia.org/wiki/Ecosection
An ecosection is a biogeographic unit smaller than an ecoregion that contains minor physiographic, macroclimatic or oceanographic variations. It is a virtual ecological zone used in the Canadian province of British Columbia, which contains 139 ecosections that vary from pure terrestrial units to pure marine units. See also Bioregion Ecological classification References Biogeography Ecology terminology
Ecosection
Biology
76
7,932,577
https://en.wikipedia.org/wiki/Tigase
Tigase is an open source (GNU AGPL-3.0-only) project started by Artur Hefczyc in October 2004 to develop an XMPP server implementation in Java. Initially the goal was to develop a fully compliant XMPP server with backward compatibility with an informal XMPP specification. In time the project was split into smaller parts – a server implementation, XML tools containing a parser for XML streams, and a test suite with a built-in scripting language. In summer 2006, a client-side library and application in Java joined the Tigase project. In November 2013, Tigase added a REST API layer project, and later HTTP tools (AdminUI). In 2018 the IoT1 cloud was launched, bringing XMPP and the Tigase software together to facilitate communication between IoT devices. Tigase is currently in active development - on 19 December 2022 Tigase XMPP Server 8.3.0 was released. Subprojects Tigase now consists of the following subprojects: Server Server-side related projects Tigase XMPP Server – main XMPP server implementation Tigase XMLTools – XML tools, including a parser for XML streams and a simple XML database Tigase Utils – repository with common files used in other Tigase subprojects Tigase TestSuite – suite of functional tests for XMPP servers Tigase XMPP Server Command Line Management Tool – command line management tool Tigase MUC – component for creating group chatrooms Tigase PubSub – implementation of the XEP-0060: Publish-Subscribe extension Tigase Message Archiving – server component for the Tigase XMPP Server implementing XEP-0136: Message Archiving Tigase Socks5 Proxy – XEP-0065: SOCKS5 Bytestreams implementation for the Tigase XMPP Server, a component allowing file transfer between clients Tigase STUN – implementation of the STUN protocol Tigase HTTP API – HTTP component providing a REST API, a web-based installer and AdminUI. Client Tigase JaXMPP – XMPP client library Tigase Swift XMPP client library – XMPP library written in Swift StorkIM – Android XMPP client BeagleIM – macOS XMPP client SiskinIM – iOS XMPP client See also Extensible Messaging and Presence Protocol References External links Tigase homepage Projects tracking site Instant messaging server software Software using the GNU Affero General Public License Software using the GNU General Public License
Tigase
Technology
536
317,552
https://en.wikipedia.org/wiki/Cover%20%28topology%29
In mathematics, and more particularly in set theory, a cover (or covering) of a set X is a family of subsets of X whose union is all of X. More formally, if C = {Uα : α ∈ A} is an indexed family of subsets Uα of X (indexed by the set A), then C is a cover of X if the union of the Uα equals X. Thus the collection C is a cover of X if each element of X belongs to at least one of the subsets Uα. Definition Covers are commonly used in the context of topology. If the set X is a topological space, then a cover C of X is a collection of subsets of X whose union is the whole space X. In this case C is said to cover X, or the sets in C are said to cover X. If Y is a (topological) subspace of X, then a cover of Y is a collection C of subsets of X whose union contains Y. That is, C is a cover of Y if the union of the sets in C contains Y as a subset. Here, Y may be covered with either sets in Y itself or sets in the parent space X. A cover of X is said to be locally finite if every point of X has a neighborhood that intersects only finitely many sets in the cover. Formally, C = {Uα : α ∈ A} is locally finite if, for any point x of X, there exists some neighborhood N(x) of x such that the set of indices α for which Uα meets N(x) is finite. A cover of X is said to be point finite if every point of X is contained in only finitely many sets in the cover. A cover is point finite if it is locally finite, though the converse is not necessarily true. Subcover Let C be a cover of a topological space X. A subcover of C is a subset of C that still covers X. The cover C is said to be an open cover if each of its members is an open set (that is, each member of C is contained in T, where T is the topology on X). A simple way to get a subcover is to omit the sets contained in another set in the cover. Consider specifically open covers. Let B be a topological basis of X and O be an open cover of X. First, take the collection D of all members of B that are contained in some member of O. Then D is a refinement of O. Next, for each V in D one may select a member U(V) of O containing V (requiring the axiom of choice). Then {U(V) : V ∈ D} is a subcover of O. Hence the cardinality of a subcover of an open cover can be as small as that of any topological basis. Hence, second countability implies that a space is Lindelöf. Refinement A refinement of a cover C of a topological space X is a new cover D of X such that every set in D is contained in some set in C. Formally, D = {Vβ : β ∈ J} is a refinement of C = {Uα : α ∈ A} if for every β in J there exists an α in A such that Vβ ⊆ Uα. In other words, there is a refinement map φ from J to A satisfying Vβ ⊆ Uφ(β) for every β in J. This map is used, for instance, in the Čech cohomology of X. Every subcover is also a refinement, but the opposite is not always true. A subcover is made from the sets that are in the cover, but omitting some of them; whereas a refinement is made from any sets that are subsets of the sets in the cover. The refinement relation on the set of covers of X is transitive and reflexive, i.e. a preorder. It is never asymmetric for a nonempty X. Generally speaking, a refinement of a given structure is another that in some sense contains it. Examples are to be found when partitioning an interval (one partition of an interval is refined by another that adds further subdivision points), considering topologies (the standard topology in Euclidean space being a refinement of the trivial topology). When subdividing simplicial complexes (the first barycentric subdivision of a simplicial complex is a refinement), the situation is slightly different: every simplex in the finer complex is a face of some simplex in the coarser one, and both have equal underlying polyhedra. Yet another notion of refinement is that of star refinement.
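For readers who prefer a concrete check of these definitions, the short Python sketch below verifies the cover, subcover and refinement conditions on small finite sets. It is purely illustrative: the sets are made up, and finite examples cannot capture the topological content of the definitions above, only the set-theoretic bookkeeping.

# Finite illustration of covers, subcovers and refinements.
X = {1, 2, 3, 4, 5}

def is_cover(family, X):
    # A family covers X when the union of its members contains all of X.
    return set().union(*family) >= X

def is_refinement(D, C):
    # D refines C when every member of D lies inside some member of C.
    return all(any(V <= U for U in C) for V in D)

C = [{1, 2, 3}, {3, 4}, {4, 5}, {5, 1}]
print(is_cover(C, X))                       # True
print(is_cover([{1, 2, 3}, {3, 4}], X))     # False: the point 5 is left uncovered

subcover = [{1, 2, 3}, {4, 5}]              # a subset of C that still covers X
print(is_cover(subcover, X))                # True

D = [{1}, {2, 3}, {3}, {4}, {5}]            # a refinement of C, but not a subcover
print(is_refinement(D, C), is_cover(D, X))  # True True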
Compactness The language of covers is often used to define several topological properties related to compactness. A topological space is said to be: compact if every open cover has a finite subcover (or equivalently, if every open cover has a finite refinement); Lindelöf if every open cover has a countable subcover (or equivalently, if every open cover has a countable refinement); metacompact if every open cover has a point-finite open refinement; and paracompact if every open cover admits a locally finite open refinement. For some more variations see the above articles. Covering dimension A topological space X is said to be of covering dimension n if every open cover of X has a point-finite open refinement such that no point of X is included in more than n+1 sets in the refinement, and if n is the minimum value for which this is true. If no such minimal n exists, the space is said to be of infinite covering dimension. See also References Introduction to Topology, Second Edition, Theodore W. Gamelin & Robert Everist Greene, Dover Publications, 1999. External links Topology General topology Families of sets
Cover (topology)
Physics,Mathematics
1,007
14,963,396
https://en.wikipedia.org/wiki/ProjectExplorer
ProjectExplorer is a documentary short film series. The films, directed and produced by ProjectExplorer's Founder, Jenny M Buccos, focus on histories and cultures of foreign places and people using interviews with subject experts, artists, and public figures including Archbishop Desmond Tutu, Dr. John Kani, Greg Marinovich, and Sipho “Hotstix” Mabuse. Produced for a child and young adult audience, segments in each series depict everyday life and the challenges and concerns of those living in the locations and regions featured. Each film is 2–4 minutes in length, with each series containing approximately 40 films. The ProjectExplorer series is distributed internationally without charge via the web by ProjectExplorer, LTD. an American not-for-profit organization. Three series have been produced and distributed. In fall 2009, ProjectExplorer's third series, Jordan, received a GOLD level Parents' Choice Award for excellence in web programming. Film series Shakespeare's England (2006) The first series was filmed in London, Stratford-upon-Avon, and New York City. The series includes more than 30 film segments. United Kingdom locations and individuals include: The London Eye The Tower of London The Whitechapel Bell Foundry, which demonstrates the process of making a bell Simon Hughes, Member of Parliament and President of the Liberal Democrats The Old Vic The Royal Shakespeare Company The National Archives (UK) Segments filmed in New York City include: Michael Cumpsty discusses and performs monologues from Hamlet (while starring in the Classic Stage Company production) Michael Stuhlbarg discusses and performs a monologue from Macbeth South Africa (2007) Filmed in Johannesburg, Cape Town, and KwaZulu Natal, the series contains over 40 film segments including: Ntate Thabong Phosa, a lesiba player from Lesotho. Due to the rarity of lesiba players globally, this is one of the only publicly available examples of the lesiba played on film. A Robben Island piece, filmed at the cell in which Nelson Mandela was held for 18 of his 27-year imprisonment. JSE Securities Exchange with Leigh Roberts, correspondent for CNBC Africa. A 3-part series on HIV/AIDS with amfAR Director of Research, Dr. Rowena Johnson. Dr. Johnson discusses high cost of anti-retroviral drugs and testing in South Africa. The June 16, 1976 Soweto Uprising, with archival film footage and photography from SABC and The Sowetan newspaper. Prominent South Africans featured in the series: Dr. John Kani, Chairperson of the Apartheid Museum and TONY Award Winning Actor Musician Sipho “Hotstix” Mabuse Former U.N. Ambassador Dave A. Steward, Executive Director of the FW de Klerk Foundation Director and producer, Duma Ndlovu Malcolm Purkey, Artistic Director of the Market Theatre South Africa, Part II (2008) Filmed in Johannesburg, Cape Town, and New York City, the series contains over 10 film segments. Prominent South Africans featured in the series: Archbishop Desmond Tutu, Nobel Peace Prize laureate Photojournalist Greg Marinovich, Pulitzer Prize winner and co-author of The Bang-Bang Club Vusi Mahlasela, musician Author, Max du Preez Jordan (2008) Filmed in Amman, Petra, Umm Qais, Jerash, Madaba, Bethany, the Dead Sea, and New York City, the series contains more than 45 film segments. 
Jordan series segments include: A tour of the throne room of King Abdullah II, at Raghadan Palace Sharing mansaf with a Bedouin family in the Wadi Rum desert The UNRWA Jabal Hussein refugee camp The Siq, Treasury, and Monastery at Petra The ruins of Gadara at Umm Qais Jerash, the capital and largest city of Jordan's Jerash Governorate Madaba, home of the Madaba Map and the mosaic capital of Jordan The archaeological site at Bethany Traditional clothing from Salt and Ma'an The reintroduction into the wild of the endangered Arabian Oryx The Desert Castles The science of the Dead Sea Her Royal Highness Princess Basma bint Ali and her Royal Botanic Garden References External links Project Explorer official website Digital media 2006 American television series debuts 2008 American television series endings 2000s American documentary television series
ProjectExplorer
Technology
880
77,137,103
https://en.wikipedia.org/wiki/PKS%201148-001
PKS 1148-001, also known as UM 458 and 4C -00.47, is a quasar located in the constellation of Virgo. Its redshift of 1.979 places the object at an estimated 10.2 billion light-years from Earth. Using interplanetary scintillations and very-long-baseline interferometry it was determined that the radio source associated with the quasar has an apparent size of 0.1 arcseconds. A one-sided jet has been observed on the milliarcsecond scale. The most accepted theory for the creation of radio jets is the presence of a supermassive black hole which accretes material. References Quasars 4C objects Blazars Virgo (constellation) Supermassive black holes Starburst galaxies Active galaxies
PKS 1148-001
Physics,Astronomy
180
69,894,582
https://en.wikipedia.org/wiki/Coniophora%20gelatinosa
Coniophora gelatinosa is a species of fungus belonging to the Coniophora genus. It was first documented in 1892 by English mycologist George Edward Massee under the name Aldridgea gelatinosa, and belonged to the Aldridgea genus. In 1908 it was renamed to Coniophora gelatinosa by English mycologist Worthington George Smith. References Coniophoraceae Fungi described in 1892 Fungus species
Coniophora gelatinosa
Biology
92
55,865,407
https://en.wikipedia.org/wiki/Net%20Neutrality%20%28Last%20Week%20Tonight%20with%20John%20Oliver%29
"Net Neutrality" is the first segment devoted to net neutrality in the United States of the HBO news satire television series Last Week Tonight with John Oliver. It aired for 13 minutes on June 1, 2014, as part of the fifth episode of Last Week Tonight's first season. During this segment, as well Oliver's follow-up segment entitled "Net Neutrality II", comedian John Oliver discusses the threats to net neutrality. Under the administration of President Barack Obama, the Federal Communications Commission (FCC) was considering two options for net neutrality in early 2014. The FCC proposed permitting fast and slow broadband lanes, which would compromise net neutrality, but was also considering reclassifying broadband as a telecommunication service, which would preserve net neutrality. After a surge of comments supporting net neutrality that were inspired by Oliver's episode, the FCC voted to reclassify broadband as a utility in 2015. Context Last Week Tonight Prior to the 2014 segment about net neutrality, Last Week Tonight had only aired four episodes, all of which were complex investigations of obscure problems. Bloomberg News called Last Week Tonight's approach "hardly a tried-and-true recipe for TV success." The late New York Times columnist David Carr commented that prior to the net neutrality segment, he thought Oliver's comedic style would "never work." 2014 fast-lane proposal In January 2014, the United States Circuit Court of the District of Columbia provided a ruling in the case of Verizon v. FCC, in which Verizon Communications, an internet service provider (ISP), sued the Federal Communications Commission for violating its rights under the United States Constitution. The FCC had passed the Open Internet Order in 2010 following the outcome of Comcast Corp. v. FCC, where it was found that the FCC could not censure Comcast's interference with their customers' peer-to-peer traffic. The order was meant as a further step toward ensuring net neutrality in the sense that ISPs could not block or discriminate against lawfully operated websites, apps, or web services. The ruling in Verizon v. FCC was that the FCC could not enforce net neutrality rules as long as service providers were not identified as "common carriers". However, the FCC was given permission to regulate broadband and craft more specific rules that stop short of identifying service providers as common carriers. The ruling created a dispute as to whether net neutrality could be guaranteed under existing law, or if reclassification of ISPs was needed to ensure net neutrality. FCC chair Tom Wheeler stated that the FCC had the authority under Section 706 of the Telecommunications Act of 1996 to regulate ISPs. However, others including President Barack Obama supported reclassifying ISPs using the Communications Act of 1934. Their reclassification would move ISPs from being a general provision, which fell under the act's Title I, to a common carrier, which fell under the act's Title II. Critics of Section 706 pointed out that the section has no clear mandate to guarantee equal access to content provided over the internet, while subsection 202(a) of the Communications Act stated that common carriers cannot "make any unjust or unreasonable discrimination in charges, practices, classifications, regulations, facilities, or services." Advocates of net neutrality generally supported reclassifying ISPs under Title II, while FCC leadership and ISPs generally opposed such reclassification. 
The FCC stated that if it reclassified ISPs as common carriers, the commission would selectively enforce Title II, so that only sections relating to broadband would apply to ISPs. In April 2014, the FCC proposed a set of new regulations that, among other things, would allow ISPs to levy charges on websites in exchange for faster connection speeds. The "fast lane", as the proposal was called, would prioritize that website's internet connection over those of other websites that did not pay, although the ISP could not outright block web users from accessing websites that did not pay for "fast lanes". In addition, in enacting these "fast lanes", ISPs had to divulge whether they were promoting the content of sponsors or affiliates. This was at least the FCC's third attempt to create internet fast lanes. By May 2014, the FCC was considering two options: permitting fast and slow broadband lanes, thereby compromising net neutrality; or reclassifying broadband as a telecommunication service, thereby preserving net neutrality. Draft plans for the "fast-lane" option were approved, with three Democratic FCC commissioners voting to have the public review the proposal, and two Republican commissioners voting against public feedback. The FCC's proposal was heavily criticized for its two-tier, preferential system, whose very core would go against the principle of net neutrality. The director of the Common Cause organization's Media and Democracy Reform Initiative compared the FCC proposal to "toll roads" that "represent Washington at its worst." A reporter for The Verge wrote that these regulations "would destroy net neutrality" precisely because they slowed down traffic. In response, Wheeler said that any statements saying that the proposed regulations would restrict the open Internet were "flat out wrong". Episode Description Oliver delivered his 13-minute segment about net neutrality on June 1, 2014, as part of the show's main segment. He introduces the subject by praising "the internet, a.k.a. the electronic cat database," and noting how easy it is to buy merchandise such as coyote urine on the internet compared to buying these items in person. Oliver uses the coyote-urine analogy as a way to segue into a discussion of Wheeler's net-neutrality proposal. He pans "net neutrality" as a seemingly uninteresting topic, saying that videotaped FCC meetings about the issue might seem very boring "even by C-SPAN standards." Oliver then introduces the concept of net neutrality as something where all data is given the same priority regardless of its creator. He states that the Internet's relative equality, up to that point, had allowed startup companies to supersede bigger companies. Oliver introduces the topic of how "the Internet is not broken, and the FCC is taking steps to fix that". The segment then displays some news clippings and broadcasts that explain the FCC's priority-lane proposal. Oliver returns to the segment, and he protests vehemently against the proposed rules, jokingly stating that the rules would ensure "my startup video streaming service, Nutflix, a one-stop resource for videos of men getting hit in the nuts", would not be able to compete with larger companies like Netflix. He then takes a more serious approach, stating that the proposal would allow large ISPs such as Verizon and Comcast to buy the "fast-lane" data more easily compared to smaller ISPs with fewer funds.
Oliver rebuts a telecommunication lawyer's claim that it would be a "fast-lane-versus-hyperspeed-lane" contrast, stating that the proposed rules were more comparable to Olympic gold medalist sprinter Usain Bolt versus "Usain bolted to an anchor". The comedian refutes telecommunications companies' claims that they would not slow down other web traffic to get more internet users to subscribe to their services instead. Oliver points out an example in which Comcast slowed down Netflix download speeds in 2013 and 2014 unless Netflix paid Comcast a smooth-streaming fee. From October 2013 until Netflix finally agreed to pay in February 2014, Netflix download speeds for Comcast customers had slowed up to 25%, compared to on other ISPs where download speeds had consistently increased in the same time period. Oliver compared it to a "mob shakedown." The comedian then says that the fight to keep net neutrality is so important that pro-net-neutrality activists are on the same side as corporations like Google, Netflix, Amazon, and Facebook, an alliance which Oliver describes as very unlikely. He compares this to Lex Luthor knocking on his nemesis Superman's apartment door for an offer to team up to "get rid of the asshole in apartment 3B". Oliver then says that the only entities that would benefit from the rule change were the cable companies who are lobbying Congress, including Comcast, who is the second-largest congressional lobbyist. Oliver says that President Barack Obama had been seen golfing with Comcast's CEO Brian Roberts, as well as invited Roberts to a fundraiser dinner. He also states that Obama's nomination of Tom Wheeler, a former cable and wireless lobbyist, for the FCC Chairman position was "the equivalent of needing a babysitter and hiring a dingo". Oliver cites a 2010 FCC report on broadband, and says that 96% of Americans have at most two cable broadband providers to choose from. The segment then displays a clip of Roberts saying that if Comcast were to merge with another major ISP like Time Warner, there would be no reduction in competition. Oliver responds, "you could not be describing a monopoly more clearly if you were wearing a metal top hat", a player token used in the game Monopoly. Then the segment shows a graphic of Ookla Speed Test that shows a list of countries, sorted by their average broadband speed. The U.S., ranking 31st on the list, had an average speed slower than Estonia, a country Oliver described as "still worried about Shrek attacks". Oliver goes on to point out that Comcast and Time Warner had the lowest customer satisfaction ratings of any corporation in America, according to the quarterly American Customer Satisfaction Index that was released two weeks prior to the segment. He says that ISPs were not being truthful when they said they are committed to an open internet, and that representatives for the ISPs describe their plans in such a boring way that it goes unnoticed by many Americans. Oliver quips, "The cable companies have figured out the great truth of America: if you want to do something evil, put it inside something boring", comparing it to Apple Inc. putting Mein Kampf inside their user agreement. At the end of the segment, Oliver displays the web address for the FCC's comment section. He delivers an exhortation toward "the Internet commenters out there", saying that "we need you to get out there and, for once in your life, focus your indiscriminate rage in a useful direction. Seize your moment, my lovely trolls, turn on caps lock, and fly my pretties! 
Fly! Fly!" Aftermath The segment received 800,000 views on YouTube in two days, while the TV broadcast saw over 1 million views. The segment was thought to spur over 45,000 comments on the FCC's electronic filing page about the net neutrality proposal. The FCC also received an extra 300,000 comments in an email inbox designated specifically for the proposal. By comparison, the proposal with the second highest number of comments had 2,000 such responses. The day after the episode, the FCC comment page experienced a surge in traffic. Shortly after the first segment aired, the FCC website crashed, and Last Week Tonight viewers noted that the website's commenting function was not working. A spokeswoman for the FCC said that it was "unclear if the high volume was directly related to the John Oliver segment". Bloomberg News wrote that even though the segment was only a small part of the net-neutrality debate, as compared to the electronic mailing lists convincing tens of millions of people to vote against the proposed rules, it "gave a bump to a political movement" and ultimately helped to reverse the FCC's position in regards to net neutrality. Soraya Nadia McDonald of The Washington Post stated that Oliver "may be just the firebrand activist we’re looking for" in regards to the net-neutrality debate. Terrance F. Ross of The Atlantic wrote, "John Oliver’s segment on net neutrality this past June perfectly summed up what his HBO show Last Week Tonight is so good at: transcending apathy." Not all commentators had positive reviews of the segment. Jon Healey of the Los Angeles Times wrote that "Oliver misled his audience badly on a couple of key points", saying that the federal courts would not allow the FCC to unfairly discriminate between different forms of web traffic; that large ISPs would not need the new rules to implement a speed-tiered system; and that Wheeler had left open the possibility of outlawing the ISPs' promotion of certain websites for a fee. He stated that in the case of Netflix versus Comcast, the problem had been a third-party transit provider who had argued with Comcast over the price and amount of data that the ISP would provide. Robert McMillan of Wired said that "complaints about a fast-lane don't make much sense" because large websites like Google and Facebook already benefited from "fast lanes", albeit in the form of large servers embedded in the ISPs' Internet exchange points. He wrote that instead of advocating against a change that had already occurred, internet users should look for ways to increase ISPs' competitiveness. Chairman Wheeler himself responded to the segment, praising it as "creative" but saying "I am not a dingo". Wheeler said, "I think that it represents the high level of interest that exists in the topic in the country, and that's good." However, he also stated that the segment did not talk about the FCC's plan to reinstate the open-internet protections that had been halted in an appeals court earlier that year. The University of Delaware's Center for Political Communication conducted a study in which it concluded that viewers of late-night shows were generally more informed about the net neutrality issue than regular cable news viewers. The study found that knowledge of the net neutrality debate was highest among Last Week Tonight viewers and lowest among Fox News viewers. 
According to the study, 74% of Last Week Tonight watchers heard about net neutrality, of which 29% heard "a lot" about the issue, compared to 52% of Fox News watchers, only 7% of which heard "a lot". The "Net Neutrality" segment increased Last Week Tonight's viewership to approximately 4 million per episode by the end of the first season, and contributed to its popularity in U.S. late-night television. In November 2014, after the season had ended, David Carr of The New York Times wrote that the show had become "a smash" since the segment first aired. Carr stated that the "Net Neutrality" segment had helped convince FCC leadership to support net neutrality. Effect on net-neutrality debate In September 2014, the Pew Research Center found that the FCC filing page received 3,076 comments the week before the June 1 segment, and that there were another 79,838 comments posted the week immediately afterward. Google searches for the term "net neutrality" rose in popularity that week compared to the previous and following weeks. Two interns analyzing the data for the Pew Research Center wrote that the sudden rise in the number of comments on the FCC net-neutrality page could not be attributed to cable or printed news media, since these outlets' coverage of net neutrality was more infrequent than in previous weeks. Ultimately, less than 1 percent of the proposal's total 800,000 comments could be classified as "clearly opposed to net neutrality", with the majority either indicating support, taking no particular position, or being irrelevant comments. The Verge later requested that the FCC publish emails related to the Last Week Tonight episode under the Freedom of Information Act. Of the emails that were released, most were positively critical of the video. In one exchange, a CBS executive sent a link to FCC employees, who joked about "Nutflix" and Usain Bolt. One of the FCC employees said, "We had a good laugh about it. The cable companies... not so much." When one reporter satirically asked if Chairman Wheeler commented on the "dingo" quip, an FCC spokesperson said "Hey John, no, no comment on that" with a smiley emoticon. This prompted Oliver to create a subsequent video parodying the FCC's response. A Twitter policy spokesman said, "We all agreed that John Oliver’s brilliant net neutrality segment explained a very complex policy issue in a simple, compelling way that had a wider reach than many expensive advocacy campaigns." On February 26, 2015, the FCC voted to apply the "common carrier" designation of the Communications Act of 1934 and Section 706 of the Telecommunications act of 1996 to the internet. The decision was driven partly because most Americans only had one high-speed internet provider available in their areas. On the same day, the FCC also voted to preempt state laws in North Carolina and Tennessee that limited the ability of local governments in those states to provide broadband services to potential customers outside of their service areas. While the latter ruling affected only those two states, the FCC indicated that the agency would make similar rulings if it received petitions from localities in other states. In response to ISPs and opponents, FCC Chairman Wheeler said, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." 
On March 12, 2015, the FCC released the specific details of its new net neutrality rules, which included prohibiting content blocking, slower connections to websites, and "fast and slow lanes". It was thought that Oliver's segment had a major role in the decision, which was the opposite of the FCC's original "lane" proposal. On April 13, 2015, the final rule was published. Updates since "Net Neutrality" After Donald Trump won the 2016 United States presidential election, he appointed Republican FCC board member Ajit Pai as chairman of the FCC. Pai announced proposals to scrap Title II shortly after his appointment on the grounds that higher regulation of the internet led to decreased business. This marked a turnaround from the previous FCC's position under Chairman Wheeler. In May 2017, the FCC successfully voted to proceed with a plan to remove the net neutrality rules enacted under the Obama administration. Like the 2014 proposal vote, this vote was also partisan, with one Democratic board member opposing the removal and two Republicans supporting it. The vote caused John Oliver to release a second segment on the subject three years later, entitled "Net Neutrality II". See also 2014 in American television References 2014 in American television Net neutrality Last Week Tonight with John Oliver segments
Net Neutrality (Last Week Tonight with John Oliver)
Engineering
3,753
54,098,843
https://en.wikipedia.org/wiki/Caustic%20ingestion
Caustic ingestion occurs when someone accidentally or deliberately ingests a caustic or corrosive substance. Depending on the nature of the substance, the duration of exposure and other factors, it can lead to varying degrees of damage to the oral mucosa, the esophagus, and the lining of the stomach. The severity of the injury can be determined by endoscopy of the upper digestive tract, although CT scanning may be more useful to determine whether surgery may be required. During the healing process, strictures of the esophagus may form, which may require therapeutic dilatation and insertion of a stent. Signs and symptoms Immediate manifestations of caustic substance ingestions include erosions of mucosal surfaces of the gastrointestinal tract or airway (which can cause bleeding if the erosions extend to a blood vessel), mouth and tongue swelling, drooling or hypersalivation, nausea, vomiting, dyspnea, dysphonia/aphonia, and irritation of the eyes and skin. Perforation of the esophagus can lead to mediastinitis, while perforation of the stomach or bowel can lead to peritonitis. Swelling of the airway or laryngospasm can occur, leading to compromised breathing. Injuries affecting the respiratory system include aspiration pneumonia and laryngeal sores. Signs of respiratory compromise include stridor and a change in a person's voice. Later manifestations of caustic substance ingestions include esophageal strictures or stenosis, which can result in chronic pain and malnutrition. Esophageal strictures more commonly occur after more severe mucosal injury, occurring in 71% and 100% of grade 2b and grade 3 mucosal lesions, respectively. Remote manifestations of caustic ingestions include esophageal cancer. People who have a history of caustic substance ingestion are 1,000–3,000 times more likely to develop esophageal cancer, with most cases occurring 10–30 years after the ingestion. Pathophysiology Acids with a pH of less than 2 or alkalis with a pH above 12 are capable of causing the most extensive injuries in ingestions. Alkalis damage tissue by saponifying fats, leading to liquefaction necrosis, which allows the alkalis to reach deeper tissues. Acids denature proteins via coagulation necrosis; this type of necrosis is thought to prevent the acid from reaching deeper tissues. Clinically, the pH, concentration and volume of the ingested substance, in addition to the duration of contact with tissue and the percentage of body surface area involved, determine the severity of the injury. Diagnosis Classification The severity of injuries to the mucosa of the gastrointestinal tract is commonly rated using the Zargar criteria. Treatment Common treatments used for toxic substance ingestions are ineffective, or are even harmful, when implemented in ingestions of caustic substances. Clinical attempts to empty the stomach can cause further injuries. Activated charcoal does not neutralize caustics and can also obscure endoscopic visualization. There is no known clinical benefit of neutralization of the caustic substances; neutralization releases heat as well as causing gaseous distention and vomiting, all of which can worsen injuries. Signs of airway compromise, including decreased level of consciousness, stridor, change in voice, and inability to control oral secretions, necessitate intubation and mechanical ventilation. IV fluids are often needed to maintain hydration and replace insensible water losses.
Endoscopy should be done within the first 24–48 hours of ingestion, as subsequent wound softening increases the risk of perforation. Endoscopically inserted nasogastric tubes can serve as a stent to prevent esophageal strictures as well as allow tube feedings. A CT scan, often enhanced with contrast, can also be used to evaluate injuries. The most common surgical methods of treatment in children include esophageal dilation and esophageal replacement and, less commonly, implantation of an esophageal stent. Epidemiology In general, most ingestions in children involve exploratory ingestions of small amounts of caustic substances, with the rare exception being cases of child abuse where larger amounts are often ingested. Caustic ingestions in adults usually involve larger amounts of ingested material during attempts at self-harm. Due to the greater amount of material usually ingested, injuries are often more severe in the intentional ingestions of adolescents and adults as compared to those of children. Commonly ingested substances include ammonium hydroxide (found in general cleaner and grease remover), sodium hydroxide or potassium hydroxide (found in drain opener or oven cleaner), sodium hypochlorite (bleach), oxalic acid (metal polish) and hydrochloric acid (toilet bowl cleaner). Storage of caustic substances in water or drink containers is a risk factor for accidental ingestion of these materials, particularly in children. Boys of preschool age are at the greatest risk of accidental caustic ingestion. Prevention Preventative measures have been recommended that are intended to decrease the risk of accidental ingestion of caustic substances, including: Keeping caustic substances in locked cabinets or on upper shelves Not storing chemical substances in food or drink containers Not keeping large amounts of detergent in the home Not referring to a drug as "candy" when giving it as medication Keeping the phone number for poison control in the home Keeping caustic substances in labelled containers References Gastroenterology Corrosion Toxic effects of substances chiefly nonmedicinal as to source
Caustic ingestion
Chemistry,Materials_science,Environmental_science
1,199
43,386,105
https://en.wikipedia.org/wiki/Victor-Am%C3%A9d%C3%A9e%20Lebesgue
Victor-Amédée Lebesgue, sometimes written Le Besgue, (2 October 1791, Grandvilliers (Oise) – 10 June 1875, Bordeaux (Gironde)) was a mathematician working on number theory. He was elected a member of the Académie des sciences in 1847. See also Catalan's conjecture Proof of Fermat's Last Theorem for specific exponents Lebesgue–Nagell type equations Publications References LEBESGUE, Victor Amédée 1791 births 1875 deaths 19th-century French mathematicians Number theorists
Victor-Amédée Lebesgue
Mathematics
112
51,409,459
https://en.wikipedia.org/wiki/Intelsat%2033e
Intelsat 33e, also known as IS-33e, was a high throughput (HTS) geostationary communications satellite operated by Intelsat and designed and manufactured by Boeing Space Systems on the BSS 702MP satellite bus. It was the second satellite of the EpicNG service, and covered Europe, Africa and most of Asia from 60° East longitude, where it replaced Intelsat 904. It had a mixed C-band, Ku-band and Ka-band payload, with all bands featuring wide beams and the C- and Ku-bands also featuring spot beams. After nearly eight years in service, the satellite broke into at least 57 pieces on 19 October 2024. As of December 2024, over 700 pieces of debris have been detected. Satellite description Intelsat 33e was designed and manufactured by Boeing on the Boeing 702MP satellite bus. It had a launch mass of and a design life of more than 15 years. When stowed for launch, the satellite measured . It was powered by two solar arrays, each with four panels of triple-junction GaAs solar cells. The 702MP platform was designed to generate between 6 kW and 12 kW, but Intelsat 33e was designed to generate 13 kW at the end of its design life. Its payload was the second high throughput EpicNG deployment. The EpicNG design is characterized by frequency reuse through a mix of frequencies and polarizations across small spot beams, applied not only to the classical high-throughput satellite (HTS) Ka-band but also to the Ku-band and C-band. The EpicNG series also keeps wide beams, to offer high throughput and broadcast capabilities in the same satellite. In the case of Intelsat 33e, the C-band side had 20 transponders with a total downlink bandwidth of 2,670 MHz. The spot beams offered high bandwidth for Europe, Central Africa, the Middle East, Asia and Australia, and a wide beam covered sub-Saharan Africa. The Ku-band had 249 transponder equivalents, for a total downlink bandwidth of 9,194 MHz. The Ku-band spot beams covered Europe, Africa, the Middle East and Asia, while a wide beam was able to broadcast to Europe, the Middle East and Asia. The Ka-band payload had 450 MHz of bandwidth on a global beam centered at its position. History In July 2009, Intelsat became the first customer of the Boeing 702MP satellite bus, when it placed an order for four spacecraft, Intelsat 21, Intelsat 22, Intelsat 27 and the first EpicNG satellite, Intelsat 29e. In May 2013, Intelsat made a second order for an additional four EpicNG satellites, the first of which would be Intelsat 33e. On 15 July 2016, Senior Space Program Managers Richard Laurie and Brian Sing blogged that they had been at the Boeing factory overseeing the transport preparations for Intelsat 33e to French Guiana. There it would join another Intelsat satellite, Intelsat 36, for integration on the Ariane 5 ECA launcher, which was expected to launch on 24 August 2016. On 22 July 2016, Intelsat announced that Intelsat 33e had arrived at the Guiana Space Center for launch preparations. It also announced that not only communication clients but also aeronautical and maritime mobility clients were awaiting the satellite's service. On 27 July 2016, it was explained that the satellite had traveled by truck from the factory to an airport in California, where it was loaded onto an Antonov 124. It flew to Florida for a refuelling stop and then flew straight to Kourou airport. At the French launch site, even though Intelsat owned both passengers of the Ariane 5 ECA VA-232 flight, they had separate launch teams. 
Each satellite was built by a different manufacturer and had its own supervising team within Intelsat. On 24 August 2016, at 22:16:01 UTC, after a slight delay due to a rocket issue, the Ariane 5 ECA VA-232 flight launched from Guiana Space Center ELA-3, with Intelsat 33e and Intelsat 36. At 22:44 UTC, Intelsat 33e separated from the rocket's upper stage. After 41 minutes of flight, both satellites had separated successfully. Intelsat confirmed that it had received the satellites' signals as expected after separation. Arianespace estimated the insertion orbit as 248.7 km × 35,858 km × 5.98°, very close to the target of 249.0 km × 35,879 km × 6.00°. On 9 September 2016, Intelsat announced that due to a malfunction in the LEROS-1c primary thruster, it would require more time for orbit raising and thus the service date had been moved from the last quarter of 2016 to the first of 2017. On 22 September 2016, insurance officials estimated that the main propulsion failure would not reduce the on-orbit life of the spacecraft by more than 18 months. This could translate to an insurance claim by Intelsat of around 10% (1.5 years) of the satellite's service life, which could have a value close to US$40 million. Intelsat 33e entered service on 29 January 2017, three months later than planned. In August 2017, another propulsion issue appeared, leading to larger-than-expected propellant usage to control the satellite attitude during north–south station-keeping maneuvers. This issue reduced the orbital lifetime by about 3.5 years. Disintegration Late on 19 October 2024, U.S. Space Command reported that the satellite had broken up into about 20 pieces at approximately 04:30 UTC that morning. At least 700 pieces of space debris associated with the event have since been detected. Intelsat declared the satellite a total loss on 21 October 2024. The 2024 loss was not insured, unlike the earlier malfunction of the satellite in 2016. The satellite's predecessor, Intelsat 29e, also suffered a premature failure and was rendered inoperable after only three years in service. References Communications satellites in geostationary orbit Satellites using the BSS-702 bus Spacecraft launched in 2016 Intelsat satellites Ariane commercial payloads Spacecraft that broke apart in space
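As a rough illustration of what the insertion-orbit figures above mean, the following sketch computes the period of the reported 248.7 km × 35,858 km transfer orbit from Kepler's third law. It is not part of the mission data; the gravitational parameter and Earth radius are standard assumed constants.

import math

# Illustrative check of the insertion orbit quoted above (248.7 km x 35,858 km).
# Assumed constants: Earth's gravitational parameter and mean equatorial radius.
MU_EARTH = 3.986004418e14   # m^3/s^2
R_EARTH = 6_378_137.0       # m

perigee_alt_m = 248.7e3
apogee_alt_m = 35_858e3

# Semi-major axis of the transfer orbit
a = R_EARTH + (perigee_alt_m + apogee_alt_m) / 2.0

# Kepler's third law: orbital period T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"transfer-orbit period: {period_s / 3600:.2f} hours")   # roughly 10.6 hours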
Intelsat 33e
Technology
1,275
9,578,494
https://en.wikipedia.org/wiki/High-speed%20flight
In high-speed flight, the assumptions of incompressibility of the air used in low-speed aerodynamics no longer apply. In subsonic aerodynamics, the theory of lift is based upon the forces generated on a body and a moving gas (air) in which it is immersed. At airspeeds below about , air can be considered incompressible in regards to an aircraft, in that, at a fixed altitude, its density remains nearly constant while its pressure varies. Under this assumption, air acts the same as water and is classified as a fluid. Subsonic aerodynamic theory also assumes the effects of viscosity (the property of a fluid that tends to prevent motion of one part of the fluid with respect to another) are negligible, and classifies air as an ideal fluid, conforming to the principles of ideal-fluid aerodynamics such as continuity, Bernoulli's principle, and circulation. In reality, air is compressible and viscous. While the effects of these properties are negligible at low speeds, compressibility effects in particular become increasingly important as airspeed increases. Compressibility (and to a lesser extent viscosity) is of paramount importance at speeds approaching the speed of sound. In these transonic speed ranges, compressibility causes a change in the density of the air around an airplane. During flight, a wing produces lift by accelerating the airflow over the upper surface. This accelerated air can, and does, reach supersonic speeds, even though the airplane itself may be flying at a subsonic airspeed (Mach number < 1.0). At some extreme angles of attack, in some airplanes, the speed of the air over the top surface of the wing may be double the airplane's airspeed. It is, therefore, entirely possible to have both supersonic and subsonic airflows on an airplane at the same time. When flow velocities reach sonic speeds at some locations on an airplane (such as the area of maximum camber on the wing), further acceleration will result in the onset of compressibility effects such as shock wave formation, drag increase, buffeting, stability, and control difficulties. Subsonic flow principles are invalid at all speeds above this point. See also Coffin corner (aerodynamics) Critical Mach number Drag divergence Mach number References Sources Airspeed
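A small sketch of the quantities discussed above: the local speed of sound and the Mach number follow from the static air temperature via the ideal-gas relation a = sqrt(γRT). The numerical values used here (temperature, airspeed) are illustrative assumptions, not taken from any particular aircraft.

import math

# Minimal sketch: local speed of sound and Mach number from static air temperature,
# using the ideal-gas relation a = sqrt(gamma * R * T). Values below are illustrative.
GAMMA = 1.4          # ratio of specific heats for air
R_AIR = 287.05       # J/(kg*K), specific gas constant for dry air

def speed_of_sound(temp_kelvin: float) -> float:
    """Speed of sound in m/s for dry air at the given static temperature."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

def mach_number(true_airspeed_ms: float, temp_kelvin: float) -> float:
    """Mach number = true airspeed divided by the local speed of sound."""
    return true_airspeed_ms / speed_of_sound(temp_kelvin)

# Example: 250 m/s true airspeed in the cold air near the tropopause (about 216.65 K)
print(f"a = {speed_of_sound(216.65):.1f} m/s")        # ~295 m/s
print(f"Mach = {mach_number(250.0, 216.65):.2f}")     # ~0.85, in the transonic range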
High-speed flight
Physics
482
31,746,499
https://en.wikipedia.org/wiki/Boletellus%20fibuliger
Boletellus fibuliger is a species of fungus in the family Boletaceae. Found in Venezuela, it was described as new to science in 1983 by mycologist Rolf Singer. References External links Fungi described in 1983 Fungi of Venezuela fibuliger Taxa named by Rolf Singer Fungus species
Boletellus fibuliger
Biology
62
25,493,347
https://en.wikipedia.org/wiki/Noncommutative%20algebraic%20geometry
Noncommutative algebraic geometry is a branch of mathematics, and more specifically a direction in noncommutative geometry, that studies the geometric properties of formal duals of non-commutative algebraic objects such as rings as well as geometric objects derived from them (e.g. by gluing along localizations or taking noncommutative stack quotients). For example, noncommutative algebraic geometry is supposed to extend a notion of an algebraic scheme by suitable gluing of spectra of noncommutative rings; depending on how literally and how generally this aim (and a notion of spectrum) is understood in noncommutative setting, this has been achieved in various level of success. The noncommutative ring generalizes here a commutative ring of regular functions on a commutative scheme. Functions on usual spaces in the traditional (commutative) algebraic geometry have a product defined by pointwise multiplication; as the values of these functions commute, the functions also commute: a times b equals b times a. It is remarkable that viewing noncommutative associative algebras as algebras of functions on "noncommutative" would-be space is a far-reaching geometric intuition, though it formally looks like a fallacy. Much of the motivation for noncommutative geometry, and in particular for the noncommutative algebraic geometry, is from physics; especially from quantum physics, where the algebras of observables are indeed viewed as noncommutative analogues of functions, hence having the ability to observe their geometric aspects is desirable. One of the values of the field is that it also provides new techniques to study objects in commutative algebraic geometry such as Brauer groups. The methods of noncommutative algebraic geometry are analogs of the methods of commutative algebraic geometry, but frequently the foundations are different. Local behavior in commutative algebraic geometry is captured by commutative algebra and especially the study of local rings. These do not have a ring-theoretic analogue in the noncommutative setting; though in a categorical setup one can talk about stacks of local categories of quasicoherent sheaves over noncommutative spectra. Global properties such as those arising from homological algebra and K-theory more frequently carry over to the noncommutative setting. History Classical approach: the issue of non-commutative localization Commutative algebraic geometry begins by constructing the spectrum of a ring. The points of the algebraic variety (or more generally, scheme) are the prime ideals of the ring, and the functions on the algebraic variety are the elements of the ring. A noncommutative ring, however, may not have any proper non-zero two-sided prime ideals. For instance, this is true of the Weyl algebra of polynomial differential operators on affine space: The Weyl algebra is a simple ring. Therefore, one can for instance attempt to replace a prime spectrum by a primitive spectrum: there are also the theory of non-commutative localization as well as descent theory. This works to some extent: for instance, Dixmier's enveloping algebras may be thought of as working out non-commutative algebraic geometry for the primitive spectrum of an enveloping algebra of a Lie algebra. Another work in a similar spirit is Michael Artin’s notes titled “noncommutative rings”, which in part is an attempt to study representation theory from a non-commutative-geometry point of view. 
The key insight to both approaches is that irreducible representations, or at least primitive ideals, can be thought of as "non-commutative points". Modern viewpoint using categories of sheaves As it turned out, starting from, say, primitive spectra, it was not easy to develop a workable sheaf theory. One might imagine this difficulty is because of a sort of quantum phenomenon: points in a space can influence points far away (and in fact, it is not appropriate to treat points individually and view a space as a mere collection of the points). Due to the above, one accepts a paradigm implicit in Pierre Gabriel's thesis and partly justified by the Gabriel–Rosenberg reconstruction theorem (after Pierre Gabriel and Alexander L. Rosenberg) that a commutative scheme can be reconstructed, up to isomorphism of schemes, solely from the abelian category of quasicoherent sheaves on the scheme. Alexander Grothendieck taught that to do geometry one does not need a space, it is enough to have a category of sheaves on that would-be space; this idea has been transmitted to noncommutative algebra by Yuri Manin. There are also weaker reconstruction theorems from the derived categories of (quasi)coherent sheaves, motivating derived noncommutative algebraic geometry (see just below). Derived algebraic geometry Perhaps the most recent approach is through deformation theory, placing non-commutative algebraic geometry in the realm of derived algebraic geometry. As a motivating example, consider the one-dimensional Weyl algebra over the complex numbers C. This is the quotient of the free ring C<x, y> by the relation xy - yx = 1. This ring represents the polynomial differential operators in a single variable x; y stands in for the differential operator ∂x. This ring fits into a one-parameter family given by the relations xy - yx = α. When α is not zero, then this relation determines a ring isomorphic to the Weyl algebra. When α is zero, however, the relation is the commutativity relation for x and y, and the resulting quotient ring is the polynomial ring in two variables, C[x, y]. Geometrically, the polynomial ring in two variables represents the two-dimensional affine space A2, so the existence of this one-parameter family says that affine space admits non-commutative deformations to the space determined by the Weyl algebra. This deformation is related to the symbol of a differential operator and to the fact that A2 is the cotangent bundle of the affine line. (Studying the Weyl algebra can lead to information about affine space: The Dixmier conjecture about the Weyl algebra is equivalent to the Jacobian conjecture about affine space.) In this line of approach, the notion of operad, a set or space of operations, becomes prominent: in the introduction to , Francis writes: Proj of a noncommutative ring One of the basic constructions in commutative algebraic geometry is the Proj construction of a graded commutative ring. This construction builds a projective algebraic variety together with a very ample line bundle whose homogeneous coordinate ring is the original ring. Building the underlying topological space of the variety requires localizing the ring, but building sheaves on that space does not. By a theorem of Jean-Pierre Serre, quasi-coherent sheaves on Proj of a graded ring are the same as graded modules over the ring up to finite dimensional factors. The philosophy of topos theory promoted by Alexander Grothendieck says that the category of sheaves on a space can serve as the space itself. 
Consequently, in non-commutative algebraic geometry one often defines Proj in the following fashion: Let R be a graded C-algebra, and let Mod-R denote the category of graded right R-modules. Let F denote the subcategory of Mod-R consisting of all modules of finite length. Proj R is defined to be the quotient of the abelian category Mod-R by F. Equivalently, it is a localization of Mod-R in which two modules become isomorphic if, after taking their direct sums with appropriately chosen objects of F, they are isomorphic in Mod-R. This approach leads to a theory of non-commutative projective geometry. A non-commutative smooth projective curve turns out to be a smooth commutative curve, but for singular curves or smooth higher-dimensional spaces, the non-commutative setting allows new objects. See also Derived noncommutative algebraic geometry Q-category quasi-free algebra Notes References M. Artin, J. J. Zhang, Noncommutative projective schemes, Advances in Mathematics 109 (1994), no. 2, 228–287, doi. Yuri I. Manin, Quantum groups and non-commutative geometry, CRM, Montreal 1988. Yuri I Manin, Topics in noncommutative geometry, 176 pp. Princeton 1991. A. Bondal, M. van den Bergh, Generators and representability of functors in commutative and noncommutative geometry, Moscow Mathematical Journal 3 (2003), no. 1, 1–36. A. Bondal, D. Orlov, Reconstruction of a variety from the derived category and groups of autoequivalences, Compositio Mathematica 125 (2001), 327–344 doi John Francis, Derived Algebraic Geometry Over -Rings O. A. Laudal, Noncommutative algebraic geometry, Rev. Mat. Iberoamericana 19, n. 2 (2003), 509--580; euclid. Fred Van Oystaeyen, Alain Verschoren, Non-commutative algebraic geometry, Springer Lect. Notes in Math. 887, 1981. Fred van Oystaeyen, Algebraic geometry for associative algebras, Marcel Dekker 2000. vi+287 pp. A. L. Rosenberg, Noncommutative algebraic geometry and representations of quantized algebras, MIA 330, Kluwer Academic Publishers Group, Dordrecht, 1995. xii+315 pp. M. Kontsevich, A. Rosenberg, Noncommutative smooth spaces, The Gelfand Mathematical Seminars, 1996--1999, 85--108, Gelfand Math. Sem., Birkhäuser, Boston 2000; arXiv:math/9812158 A. L. Rosenberg, Noncommutative schemes, Compositio Mathematica 112 (1998) 93--125, doi; Underlying spaces of noncommutative schemes, preprint MPIM2003-111, dvi, ps; MSRI lecture Noncommutative schemes and spaces (Feb 2000): video Pierre Gabriel, Des catégories abéliennes, Bulletin de la Société Mathématique de France 90 (1962), p. 323-448, numdam Zoran Škoda, Some equivariant constructions in noncommutative algebraic geometry, Georgian Mathematical Journal 16 (2009), No. 1, 183--202, arXiv:0811.4770. Dmitri Orlov, Quasi-coherent sheaves in commutative and non-commutative geometry, Izv. RAN. Ser. Mat., 2003, vol. 67, issue 3, 119–138 (MPI preprint version dvi, ps) M. Kapranov, Noncommutative geometry based on commutator expansions, Journal für die reine und angewandte Mathematik 505 (1998), 73-118, math.AG/9802041. Further reading A. Bondal, D. Orlov, Semi-orthogonal decomposition for algebraic varieties_, PreprintMPI/95–15, alg-geom/9506006 Tomasz Maszczyk, Noncommutative geometry through monoidal categories, math.QA/0611806 S. 
Mahanta, On some approaches towards non-commutative algebraic geometry, math.QA/0501166 Ludmil Katzarkov, Maxim Kontsevich, Tony Pantev, Hodge theoretic aspects of mirror symmetry, arxiv/0806.0107 Dmitri Kaledin, Tokyo lectures "Homological methods in non-commutative geometry", pdf, TeX; and (similar but different) Seoul lectures External links MathOverflow, Theories of Noncommutative Geometry Algebraic geometry Noncommutative geometry
Noncommutative algebraic geometry
Mathematics
2,492
14,427,484
https://en.wikipedia.org/wiki/GPR26
Probable G-protein coupled receptor 26 is a protein that in humans is encoded by the GPR26 gene. GPR26 expression is found to peak perinatally, when the visual system is first challenged, and contains a 53 kb LD-block enriched for association with introgressed Neanderthal-derived SNPs. Additionally, it is known to form oligomeric structures with the 5-HT1a receptor. References Further reading G protein-coupled receptors
GPR26
Chemistry
98
38,676,583
https://en.wikipedia.org/wiki/C40H58
{{DISPLAYTITLE:C40H58}} The molecular formula C40H58 (molar mass: 538.89 g/mol, exact mass: 538.4539 u) may refer to: Neurosporene α-Zeacarotene β-Zeacarotene Molecular formulas
C40H58
Physics,Chemistry
68
56,147,496
https://en.wikipedia.org/wiki/Give%20a%20dog%20a%20bad%20name%20and%20hang%20him
Give a dog a bad name and hang him is an English proverb. Its meaning is that if a person's reputation has been besmirched, then he will suffer difficulty and hardship. A similar proverb is he that has an ill name is half hanged. The proverb dates back to the 18th century or before. In 1706, John Stevens recorded it as "Give a Dog an ill name and his work is done". In 1721, James Kelly had it as a Scottish proverb – "Give a Dog an ill Name, and he'll soon be hanged. Spoken of those who raise an ill Name on a Man on purpose to prevent his Advancement." In Virginia, it appeared as an old saying in the Norfolk Herald in 1803 – "give a dog a bad name and hang him". The observation is due to negativity bias – that people are apt to think poorly of others on weak evidence. This is then reinforced by confirmation bias as people give more weight to evidence that supports a preconception than evidence which contradicts it. See also Character assassination Scapegoat References 1700s neologisms 18th-century quotations Harassment and bullying Defamation English proverbs Metaphors referring to dogs
Give a dog a bad name and hang him
Biology
251
1,621,441
https://en.wikipedia.org/wiki/Astylar
Astylar (from Gr. ἀ-, privative, and στῦλος, a column) is an architectural term given to design which uses neither columns nor pilasters for decorative purposes; thus the Riccardi and Strozzi palaces in Florence are astylar in their design, as opposed to Palladio's palaces at Vicenza, which are columnar. References Architectural terminology
Astylar
Engineering
84
2,746,898
https://en.wikipedia.org/wiki/Articulated%20robot
An articulated robot is a robot with rotary joints that has 6 or more Degrees of Freedom. This is one of the most commonly used robots in industry today (many examples can be found from legged robots or industrial robots). Articulated robots can range from simple 6 Degree of Freedom structures to systems with 10 or more interacting joints and materials. They are powered by a variety of means, including electric motors. Some types of robots, such as robotic arms, can be articulated or non-articulated. Articulated robots in action Definitions Articulated Robot: See Figure. An articulated robot uses all the three revolute joints to access its work space. Usually the joints are arranged in a “chain”, so that one joint supports another further in the chain. Continuous Path: A control scheme whereby the inputs or commands specify every point along a desired path of motion. The path is controlled by the coordinated motion of the manipulator joints. Degrees Of Freedom (DOF): The number of independent motions in which the end effector can move, defined by the number of axes of motion of the manipulator. Gripper: A device for grasping or holding, attached to the free end of the last manipulator link; also called the robot’s hand or end-effector. Payload: The maximum payload is the amount of weight carried by the robot manipulator at reduced speed while maintaining rated precision. Nominal payload is measured at maximum speed while maintaining rated precision. These ratings are highly dependent on the size and shape of the payload. Pick And Place Cycle: See Figure. Pick and place Cycle is the time, in seconds, to execute the following motion sequence: Move down one inch, grasp a rated payload; move up one inch; move across twelve inches; move down one inch; ungrasp; move up one inch; and return to start location. Reach: The maximum horizontal distance from the center of the robot base to the end of its wrist. Accuracy: See Figure. The difference between the point that a robot is trying to achieve and the actual resultant position. Absolute accuracy is the difference between a point instructed by the robot control system and the point actually achieved by the manipulator arm, while repeatability is the cycle-to-cycle variation of the manipulator arm when aimed at the same point. Repeatability: See Figure. The ability of a system or mechanism to repeat the same motion or achieve the same points when presented with the same control signals. The cycle-to-cycle error of a system when trying to perform a specific task. Repeatability of this robot lies between 0.1 to 0.5mm with the payload between 5kg to 100kg. Resolution: See Figure. The smallest increment of motion or distance that can be detected or controlled by the control system of a mechanism. The resolution of any joint is a function of encoder pulses per revolution and drive ratio, and dependent on the distance between the tool center point and the joint axis. Robot Program: A robot communication program for IBM and compatible personal computers. Provides terminal emulation and utility functions. This program can record all of the user memory, and some of the system memory to disk files. Maximum Speed: The compounded maximum speed of the tip of a robot moving at full extension with all joints moving simultaneously in complementary directions. This speed is the theoretical maximum and should under no circumstances be used to estimate cycle time for a particular application. 
A better measure of real world speed is the standard twelve inch pick and place cycle time. For critical applications, the best indicator of achievable cycle time is a physical simulation. Servo Controlled: Controlled by a driving signal which is determined by the error between the mechanism's present position and the desired output position. Via Point: A point through which the robot's tool should pass without stopping; via points are programmed in order to move beyond obstacles or to bring the arm into a lower inertia posture for part of the motion. Work Envelope: A three-dimensional shape that defines the boundaries that the robot manipulator can reach; also known as reach envelope. See also Degrees of freedom (engineering) Articulated soft robotics Robotics suite Industrial robot Robotic arms and cranes used in spaceflight: Canadarm, which was used on the Space Shuttle Mobile Servicing System (MSS), also known as the Canadarm2, used on the ISS The Japanese Remote Manipulator System, used on the ISS JEM module Kibo Dextre, also known as the Special Purpose Dexterous Manipulator (SPDM), used on the ISS Strela, a manually operated arm used on the Russian Orbital Segment (ROS) of the ISS to perform similar tasks as the Mobile Servicing System European Robotic Arm, a fifth robotic arm installed on the ISS in 2021 References Robot kinematics -
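To make the definitions above (degrees of freedom, reach, work envelope) concrete, here is a minimal forward-kinematics sketch for a planar arm with three revolute joints. The link lengths and joint angles are hypothetical; a real articulated robot has six or more joints moving in three dimensions, but the joint-by-joint chaining is the same idea.

import math

# Minimal sketch of forward kinematics for a planar arm with three revolute joints.
# Link lengths and joint angles are made up for illustration.
LINK_LENGTHS = [0.4, 0.3, 0.2]   # metres

def forward_kinematics(joint_angles):
    """Return the (x, y) position of the end effector for the given joint angles (radians)."""
    x = y = 0.0
    cumulative_angle = 0.0
    for length, angle in zip(LINK_LENGTHS, joint_angles):
        cumulative_angle += angle          # each joint rotates relative to the previous link
        x += length * math.cos(cumulative_angle)
        y += length * math.sin(cumulative_angle)
    return x, y

# Reach (the fully stretched arm) is simply the sum of the link lengths:
print(f"max reach: {sum(LINK_LENGTHS):.2f} m")

# Example pose: shoulder at 30 deg, elbow at -45 deg, wrist at 15 deg
pose = forward_kinematics([math.radians(a) for a in (30, -45, 15)])
print(f"end effector at x={pose[0]:.3f} m, y={pose[1]:.3f} m")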
Articulated robot
Engineering
985
28,132,779
https://en.wikipedia.org/wiki/Oka%20coherence%20theorem
In mathematics, the Oka coherence theorem, proved by , states that the sheaf of holomorphic functions on (and subsequently the sheaf of holomorphic functions on a complex manifold ) is coherent. See also Cartan's theorems A and B Several complex variables GAGA Oka–Weil theorem Weierstrass preparation theorem Note References Theorems in complex analysis Theorems in complex geometry
Oka coherence theorem
Mathematics
87
17,954,807
https://en.wikipedia.org/wiki/Campaign%20for%20Drawing
The Big Draw, formerly the Campaign for Drawing, is a British registered charity that promotes drawing and visual literacy. It was founded in 2000 by the Guild of St George, and is now an independent charity. The Big Draw believes that drawing is a universal language that can unite people across generations, backgrounds and borders. It is inspired by the Victorian artist and writer John Ruskin, whose mission was not to teach people how to draw, but how to see. An arts educational charity, the Campaign demonstrates that drawing is a life skill: an essential tool for learning, expression and invention. Its publications for teachers and other educators provide comprehensive evidence that drawing supports formal and informal learning. The charity leads a programme of advocacy, empowerment and engagement, and is the driving force behind The Big Draw Festival – the world's biggest celebration of drawing. The charity supports established and emerging artists through The John Ruskin Prize and exhibition, and regular events, awards and competitions. The Big Draw manages collaborative research projects, campaigns and educational conferences on visual literacy, digital technology and STEAM (Science, Technology, Engineering, Arts and Maths). The Big Draw Festival The Big Draw charity is the founder and driving force behind The Big Draw Festival, which takes place each year in over 20 countries around the world. Events often take place at notable venues throughout the UK including The British Museum, The National Gallery and the Victoria and Albert Museum, as well as schools, community centres, parks and village halls. The 2013 Big Draw highlight event offered visitors 20 activities in the new Queen Elizabeth Olympic Park. Previous launches were held at the Natural History Museum, the V&A (twice), Trafalgar Square, St Pancras International Station, the Wellcome Collection and the British Library. Partnerships The Big Draw receives no core funding. Previously, it has been sponsored by bodies as diverse as NESTA, Arts Council England (ACE), Crayola, Daler-Rowney, Esmée Fairbairn Foundation, Paul Hamlyn Foundation, Barbara Whatmore Charitable Trust, Financial Times, Heritage Lottery Fund, National Lottery, Persil, Puffin, Royal Academy of Engineering, Royal Institute of British Architects, Derwent and Cass Art. Patrons Quentin Blake CBE Lord Norman Foster David Hockney CH Andrew Marr Sir Roger Penrose OM Gerald Scarfe Posy Simmonds MBE Chris Riddell Narinder Sagoo Bob and Roberta Smith RA See also List of European art awards References Drawing Visual arts education British visual arts awards Art and design organizations
Campaign for Drawing
Engineering
505
17,014,742
https://en.wikipedia.org/wiki/Desktop%20outsourcing
Desktop outsourcing is the process in which an organization contracts a third party to maintain and manage parts of its IT infrastructure. Contracts vary in depth and can range from Computer hard- and software maintenance to Desktop virtualisation, SaaS-implementations and Helpdesk operation. It is estimated, that 32% of U.S. and Canadian IT organisations make use of desktop outsourcing in 2014. Recent market reports suggest the adoption of BYOD policies to allow the end-user a free choice of devices in their working environment may increase this market share. Viability Justification for desktop outsourcing could include shifting focus and energy to areas of core competency, reducing staffing costs, and the routine maintenance, upgrades, and repairs associated with managing multitudes of PC systems and servers. (Applegate et al. 2007). Managers may also seek desktop outsourcing as a method of simplifying organisational structures to cut costs associated with them. For smaller companies it might also be more viable financially to outsource their desktop at a set price per machine, rather than creating an entire internal IT department. Recent market growth can also be attributed to the decreasing price of hardware, making replacement more favourable than repairs. Possible risks when desktop outsourcing are ensuring continued support for old and unique systems the company depends on, specifically if any of the systems in question were internally developed. This may cause the contractor to be unable to fulfill their contractual obligations, as in the case of the US Navy outsourcing their IT systems to EDS in 2003. Market situation A 2012 TechNavio market report forecasts that the desktop outsourcing market will grow by 4.65% yearly, between 2012 and 2016. Atos, CSC, HP and IBM are considered the leading desktop outsourcing vendors for that time frame. A Gartner report from 2013, however, considers the desktop outsourcing market to be in decline, with growth only occurring on the Latin American and Asia/Pacific markets. Examples While consolidating School Districts in 2003, New York City brought in Dell to assess and consolidate their IT systems. At the time it was unclear how many devices actually existed in the network. Being largely successful, this netted Dell a $20 million yearly contract to keep on managing the School Districts systems. NASA's Kennedy Space Center had its IT services, approximately 22,000 devices, outsourced in 1998 for $30 million a year on a 3-year contract. The US Treasury outsourced their 1643 desktop and 700 portable seats in 1999 for around $27 million yearly. References Corporate Information Strategy and Management. Applegate, Lynda, Austin, Robert, McFarlan, Warren F. (2007). New York: McGraw Hill Companies, Inc. Littman, Dan. "Outsourcing the desktop: Outsourcing desktop management can shave costs while bringing relief to an assortment of infrastructure management headaches". IDG Publishing Network, Inc. 6 February 2006. 18 April 2008.<>. Gartner. "Desktop Outsourcing". gartner.com <>. External links Outsourcing the desktop - Outsourcing desktop management can shave costs while bringing relief to an assortment of infrastructure management headaches.(case studies on outsourcing) Business process Business terms Information technology management
Desktop outsourcing
Technology
674
18,025,004
https://en.wikipedia.org/wiki/Clandestine%20abuse
Clandestine abuse is sexual, psychological, or physical abuse "that is kept secret for a purpose, concealed, or underhanded." Child sexual abuse is often kept secret: Prevention While it is not always possible to stop every case of clandestine abuse, it may be possible to prevent many incidents through policies in youth organizations. The social isolation model asserts that: The BSA policy states: Other policies of the BSA state: Drug crimes A person, especially a child, may be abused in secret because the victim has witnessed another clandestine crime, such as a working Methamphetamine laboratory. The FBI concluded that "A coordinated multidisciplinary team is critical to ensure that the needs of meth’s youngest victims are met and that adequate information is available to prosecute child endangerment cases successfully." See also Cruelty Emotional blackmail Victimology References External links Guidelines for Department of Justice personnel on how to treat crime victims and witnesses NY State Coalition Against DV NY Crime Victim's Board Albany County Crime Victim and Sexual Violence Center Abuse Aggression Secrecy
Clandestine abuse
Biology
214
22,631,822
https://en.wikipedia.org/wiki/Euler%E2%80%93Lotka%20equation
In the study of age-structured population growth, probably one of the most important equations is the Euler–Lotka equation. Based on the age demographic of females in the population and female births (since in many cases it is the females that are more limited in the ability to reproduce), this equation allows for an estimation of how a population is growing. The field of mathematical demography was largely developed by Alfred J. Lotka in the early 20th century, building on the earlier work of Leonhard Euler. The Euler–Lotka equation, derived and discussed below, is often attributed to either of its origins: Euler, who derived a special form in 1760, or Lotka, who derived a more general continuous version. The equation in discrete time is given by 1 = Σ_(a=1)^ω λ^(−a) ℓ(a) b(a), where λ is the discrete growth rate, ℓ(a) is the fraction of individuals surviving to age a and b(a) is the number of offspring born to an individual of age a during the time step. The sum is taken over the entire life span of the organism. Derivations Lotka's continuous model A.J. Lotka in 1911 developed a continuous model of population dynamics as follows. This model tracks only the females in the population. Let B(t)dt be the number of births during the time interval from t to t+dt. Also define the survival function ℓ(a), the fraction of individuals surviving to age a. Finally define b(a) to be the birth rate for mothers of age a. The product B(t−a)ℓ(a) therefore denotes the number density of individuals born at t−a and still alive at t, while B(t−a)ℓ(a)b(a) denotes the number of births in this cohort, which suggests the following Volterra integral equation for B: B(t) = ∫_0^t B(t−a) ℓ(a) b(a) da. We integrate over all possible ages to find the total rate of births at time t. We are in effect finding the contributions of all individuals of age up to t. We need not consider individuals born before the start of this analysis since we can just set the base point low enough to incorporate all of them. Let us then guess an exponential solution of the form B(t) = Q e^(rt). Plugging this into the integral equation gives Q e^(rt) = ∫_0^t Q e^(r(t−a)) ℓ(a) b(a) da or, after dividing through by Q e^(rt), 1 = ∫_0^t e^(−ra) ℓ(a) b(a) da. This can be rewritten in the discrete case by turning the integral into a sum, producing 1 = Σ e^(−ra) ℓ(a) b(a) with the sum taken between the lower and upper boundary ages for reproduction; defining the discrete growth rate λ = e^r, we obtain the discrete time equation derived above: 1 = Σ_(a=1)^ω λ^(−a) ℓ(a) b(a), where ω is the maximum age. We can extend these limits since b(a) vanishes beyond the reproductive boundaries. From the Leslie matrix Let us write the Leslie matrix with the per capita fecundities f_a in its top row and the survival probabilities s_a to the next age class on its subdiagonal. Note that s_a can be written in terms of ℓ(a), the probability of surviving to age a, as s_a = ℓ(a+1)/ℓ(a), and that f_a ℓ(a) is the number of births at age a weighted by the probability of surviving to age a. Now if we have stable growth, the growth of the system is an eigenvalue λ of the matrix, since applying the matrix to the stable age distribution simply multiplies each age class by λ. Therefore, we can use this relationship row by row to derive expressions for the population in each age class in terms of the values in the matrix and λ. Introducing the notation n_a(t) for the population in age class a at time t, we have n_(a+1)(t+1) = s_a n_a(t). However, stable growth also gives n_(a+1)(t+1) = λ n_(a+1)(t). This implies that n_(a+1)(t) = (s_a/λ) n_a(t). By the same argument we find that n_a(t) = (s_(a−1)/λ) n_(a−1)(t), and continuing inductively we conclude that generally n_a(t) = n_1(t) (s_1 s_2 ⋯ s_(a−1))/λ^(a−1) = n_1(t) ℓ(a)/λ^(a−1), taking ℓ(1) = 1. Considering the top row, we get λ n_1(t) = Σ_a f_a n_a(t). Now we may substitute our previous work for the n_a(t) terms and obtain λ n_1(t) = Σ_a f_a n_1(t) ℓ(a)/λ^(a−1). Substituting the definition of the per-capita fertility, f_a = b(a), and dividing through by the left hand side, the sum collapses to 1 = Σ_a λ^(−a) ℓ(a) b(a), which is the desired result. Analysis of expression From the above analysis we see that the Euler–Lotka equation is in fact the characteristic polynomial of the Leslie matrix. 
We can analyze its solutions to find information about the eigenvalues of the Leslie matrix (which has implications for the stability of populations). Considering the continuous expression f(r) = ∫ e^(−ra) ℓ(a) b(a) da as a function of r, we can examine the solutions of f(r) = 1. We notice that as r tends to negative infinity the function grows to positive infinity, and as r tends to positive infinity the function approaches 0. Differentiating under the integral sign, the first derivative picks up a factor of −a in the integrand and is therefore negative, while the second derivative picks up a factor of a² and is positive. The function is then decreasing, concave up and takes on all positive values. It is also continuous by construction, so by the intermediate value theorem it takes the value 1 exactly once. Therefore, there is exactly one real solution, which is the dominant eigenvalue of the matrix and the equilibrium growth rate. The same derivation applies to the discrete case. Relationship to replacement rate of populations If we let λ = 1, the sum in the discrete formula becomes the net reproduction rate (replacement rate) of the population: the population exactly replaces itself when this sum equals 1. Further reading Demography Leonhard Euler Integral equations
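A short numerical sketch of the discrete Euler–Lotka equation above, using a small hypothetical life table (the survivorship and fecundity numbers are made up for illustration, and NumPy/SciPy are assumed to be available). It solves for λ by root finding and confirms that the result matches the dominant eigenvalue of the corresponding Leslie matrix.

import numpy as np
from scipy.optimize import brentq

# Discrete Euler-Lotka equation: 1 = sum_a lambda**(-a) * l(a) * b(a)
ages = np.array([1, 2, 3, 4], dtype=float)   # age classes
l = np.array([1.0, 0.7, 0.4, 0.1])           # l(a): fraction surviving to age a (l(1) = 1)
b = np.array([0.0, 1.2, 1.5, 0.8])           # b(a): offspring per individual of age a

def euler_lotka(lam: float) -> float:
    return np.sum(lam ** (-ages) * l * b) - 1.0

lam_root = brentq(euler_lotka, 0.1, 10.0)    # bracket a sign change and find the root

# Leslie matrix: fecundities b(a) in the top row, survival probabilities
# s_a = l(a+1) / l(a) on the subdiagonal.
s = l[1:] / l[:-1]
L = np.zeros((4, 4))
L[0, :] = b
L[np.arange(1, 4), np.arange(0, 3)] = s

lam_eig = np.max(np.linalg.eigvals(L).real)
print(f"Euler-Lotka root:           {lam_root:.4f}")
print(f"Leslie dominant eigenvalue: {lam_eig:.4f}")   # the two should agree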
Euler–Lotka equation
Mathematics,Environmental_science
947
5,638
https://en.wikipedia.org/wiki/Combustion
Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. The study of combustion is known as combustion science. Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure): 2 H2(g) + O2(g) → 2 H2O(g). Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, and the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law. Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous. Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process. 
Types Complete and incomplete Complete In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant. Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts above a certain flame temperature, and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess. In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.77 moles of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel. The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine. Incomplete Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide. For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide. The design of combustion devices, such as burners and internal combustion engines, can improve the quality of combustion. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards. The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. 
Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today. Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than formation of carbon dioxide so complete combustion is greatly preferred especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen. Problems associated with incomplete combustion Environmental problems These oxides combine with water and oxygen in the atmosphere, creating nitric acid and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. Due to its formation of certain nutrients that are less available to plants such as calcium and phosphorus, it reduces the productivity of the ecosystem and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground level ozone, a major component of smog. Human health problems Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs which then binds with hemoglobin in human's red blood cells. This reduces the capacity of red blood cells that carry oxygen throughout the body. Smoldering Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires. Spontaneous Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion. Turbulent Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. 
In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O. For example, the stoichiometric combustion of methane in oxygen is: CH4 + 2 O2 → CO2 + 2 H2O. Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a "nitrogen" to oxygen ratio of 3.77, i.e. (100% − 20.95%) / 20.95%, where the oxygen content of air is 20.95% by volume. The general equation then becomes: CxHy + (x + y/4) (O2 + 3.77 N2) → x CO2 + (y/2) H2O + 3.77 (x + y/4) N2. For example, the stoichiometric combustion of methane in air is: CH4 + 2 O2 + 7.54 N2 → CO2 + 2 H2O + 7.54 N2. The stoichiometric composition of methane in air is 1 / (1 + 2 + 7.54) = 9.49% vol. Analogous balanced equations can be written for fuels containing oxygen (CHO), sulfur (CHOS), nitrogen (CHONS) or fluorine (CHOF) in air: fuel-bound oxygen reduces the air requirement, fuel sulfur appears in the products as sulfur dioxide, fuel nitrogen as molecular nitrogen, and fuel fluorine as hydrogen fluoride. Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is sufficiently high. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2; CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O. For example, when one mole of propane is burned with 120% of the stoichiometric amount of air, the combustion products contain about 3.3% O2. At moderate flame temperatures the equilibrium combustion products contain only hundredths of a percent of these minor species, while at higher temperatures their concentrations rise severalfold. Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. 
The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: CxHy (fuel) + z O2 (oxygen) → a CO2 (carbon dioxide) + b CO (carbon monoxide) + c H2O (water) + d H2 (hydrogen). When z falls below roughly 50% of the stoichiometric value, methane can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: a + b = 3; Hydrogen: 2c + 2d = 8; Oxygen: 2a + b + c = 8. These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H2O → CO2 + H2, with equilibrium constant K = (a·d)/(b·c); at the combustion temperature in question, the value of K is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase under these conditions when z is less than 30% of the stoichiometric value, at which point the combustion products consist mostly of carbon monoxide and hydrogen (more than 98% combined), with only about 0.5% of other species. Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity. Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders. 
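The propane example worked earlier in this section can be checked numerically. The sketch below solves the three element balances together with the water-gas shift equilibrium (K = 0.728), assuming SciPy is available; it should reproduce the quoted composition of roughly 42.4% H2O, 29.0% CO2, 14.7% H2 and 13.9% CO.

import numpy as np
from scipy.optimize import fsolve

# One mole of C3H8 burned with four moles of O2 (z = 80% of stoichiometric),
# with products CO2 (a), CO (b), H2O (c) and H2 (d) at water-gas-shift equilibrium.
K = 0.728

def residuals(vars):
    a, b, c, d = vars
    return [
        a + b - 3.0,       # carbon balance: 3 C atoms in propane
        2*c + 2*d - 8.0,   # hydrogen balance: 8 H atoms
        2*a + b + c - 8.0, # oxygen balance: 8 O atoms from 4 O2
        a*d - K*b*c,       # water-gas shift equilibrium: (CO2*H2)/(CO*H2O) = K
    ]

a, b, c, d = fsolve(residuals, x0=[1.0, 2.0, 3.0, 1.0])
total = a + b + c + d      # should come to seven moles of combustion gas
for name, n in [("H2O", c), ("CO2", a), ("H2", d), ("CO", b)]:
    print(f"{name}: {100*n/total:.1f}%")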
Combustion management

Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicate its heat content (enthalpy), so keeping its quantity low minimizes heat loss.

In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.

The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.

Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.

Reaction mechanism

Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible only with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.

Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxyl radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals.
There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde, produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but it is also economically useful for the production of syngas.

Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.

The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).

Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions. The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a wide spread of time scales, which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers. Therefore, a variety of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:
The Relaxation Redistribution Method (RRM)
The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments
The invariant-constrained equilibrium edge preimage curve method
A few variational approaches
The Computational Singular Perturbation (CSP) method and further developments
The Rate-Controlled Constrained Equilibrium (RCCE) and Quasi-Equilibrium Manifold (QEM) approach
The G-Scheme
The Method of Invariant Grids (MIG)

Kinetic modelling

Kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials, for instance by using thermogravimetric analysis.

Temperature

Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).

In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following (a rough estimate is sketched below):
the heating value;
the stoichiometric air-to-fuel ratio;
the specific heat capacity of fuel and air;
the air and fuel inlet temperatures.
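For a rough feel of how these quantities combine, the following Python sketch (not from the article; the property values are approximate, assumed figures) estimates the adiabatic flame temperature of methane burned in air using a constant average flue-gas heat capacity and neglecting dissociation, so it overestimates tabulated values.

```python
# Minimal first-law sketch (illustrative, assumed property values):
# adiabatic flame temperature of CH4 in air, treating the flue gas as having
# a constant average heat capacity and ignoring dissociation.

LHV = 50.0e6          # J per kg of CH4 (lower heating value, approximate)
AFR_STOICH = 17.2     # kg of air per kg of CH4 at stoichiometry (approximate)
CP_FLUE = 1300.0      # J/(kg*K), rough average heat capacity of the flue gas
T_INLET = 298.0       # K, air and fuel inlet temperature

def adiabatic_flame_temperature(excess_air=0.0):
    """Estimate T_ad: all heat of combustion goes into heating the products."""
    mass_products = 1.0 + AFR_STOICH * (1.0 + excess_air)  # kg per kg of fuel
    return T_INLET + LHV / (mass_products * CP_FLUE)

print(f"{adiabatic_flame_temperature():.0f} K at stoichiometry")
print(f"{adiabatic_flame_temperature(0.15):.0f} K with 15% excess air")
# Excess air lowers the estimated flame temperature, consistent with the
# trends described in the text.
```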
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one. For common fossil fuels burned with ambient-temperature air at a stoichiometric air ratio, the adiabatic combustion temperature is highest for coal, slightly lower for oil, and lower still for natural gas.

In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.

Instabilities

Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by redesigning the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the emissions; however, running the combustion lean makes it very susceptible to combustion instability.

The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability. It is evaluated using the Rayleigh Index over one cycle of instability,

G = \frac{1}{T}\int_{T} q'(t)\, p'(t)\, dt

where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximized. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency, which minimizes the Rayleigh Index.

See also

Related concepts
Air–fuel ratio
Autoignition temperature
Chemical looping combustion
Deflagration
Detonation
Dust explosion
Explosion
Fire
Flame
Global warming
Heterogeneous combustion
Markstein number
Phlogiston theory (historical)
Spontaneous combustion

Machines and equipment
Boiler
Bunsen burner
External combustion engine
Furnace
Gas turbine
Internal combustion engine
Rocket engine

Scientific and engineering societies
International Flame Research Foundation
The Combustion Institute

Other
Combustible dust
Biomass burning
List of light sources
Open burning of waste
Stubble burning