Dataset columns: id — int64, ranging 39 to 79M · url — string, length 31–227 · text — string, length 6–334k · source — string, length 1–150 · categories — list, length 1–6 · token_count — int64, ranging 3 to 71.8k · subcategories — list, length 0–30
1,531,739
https://en.wikipedia.org/wiki/Electric%20form%20factor
The electric form factor is the Fourier transform of the electric charge distribution in a nucleon. Nucleons (protons and neutrons) are made of up and down quarks, which carry charges of +2/3 and −1/3, respectively. The study of form factors falls within the regime of perturbative QCD. The idea originated with the young William Thomson. See also Form factor (disambiguation) References Electrodynamics
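In the non-relativistic (Breit-frame) picture implied above, the form factor and the charge density are a Fourier pair; a minimal sketch of that relation, with $\rho(\mathbf{r})$ the nucleon charge density and $\mathbf{q}$ the momentum transfer:

$$G_E(\mathbf{q}^2) = \int \rho(\mathbf{r})\, e^{i\mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}^3 r$$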
Electric form factor
[ "Physics", "Mathematics" ]
99
[ "Electrodynamics", "Particle physics stubs", "Particle physics", "Dynamical systems" ]
1,531,742
https://en.wikipedia.org/wiki/Magnetic%20form%20factor
In electromagnetism, a magnetic form factor is the Fourier transform of a magnetization distribution in space. See also Atomic form factor, for the form factor relevant to magnetic diffraction of free neutrons by unpaired outer electrons of an atom. Electric form factor Form factor (quantum field theory) External links Magnetic form factors, Andrey Zheludev, HFIR Center for Neutron Scattering, Oak Ridge National Laboratory "The magnetic form factor of the neutron", E.E.W. Bruins, November 1996 Electromagnetism
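By analogy with the electric case above, a minimal sketch of the relation, assuming a normalized magnetization density $M(\mathbf{r})$ of the scatterer:

$$F_M(\mathbf{q}) = \int M(\mathbf{r})\, e^{i\mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}^3 r$$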
Magnetic form factor
[ "Physics", "Materials_science" ]
112
[ "Electromagnetism", "Materials science stubs", "Physical phenomena", "Fundamental interactions", "Electromagnetism stubs" ]
1,531,781
https://en.wikipedia.org/wiki/Rarita%E2%80%93Schwinger%20equation
In theoretical physics, the Rarita–Schwinger equation is the relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941. In modern notation it can be written as $\left(\epsilon^{\mu\kappa\rho\nu}\gamma_5\gamma_\kappa\partial_\rho - im\sigma^{\mu\nu}\right)\psi_\nu = 0$, where $\epsilon^{\mu\kappa\rho\nu}$ is the Levi-Civita symbol, $\gamma_5$ and $\gamma_\kappa$ are Dirac matrices (with $\kappa = 0,1,2,3$), $m$ is the mass, $\sigma^{\mu\nu} \equiv \tfrac{i}{2}[\gamma^\mu,\gamma^\nu]$, and $\psi_\nu$ is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the $\left(\tfrac{1}{2},\tfrac{1}{2}\right)\otimes\left[\left(\tfrac{1}{2},0\right)\oplus\left(0,\tfrac{1}{2}\right)\right]$ representation of the Lorentz group, or rather, its $\left(1,\tfrac{1}{2}\right)\oplus\left(\tfrac{1}{2},1\right)$ part. This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian $\mathcal{L} = -\tfrac{1}{2}\,\bar{\psi}_\mu\left(\epsilon^{\mu\kappa\rho\nu}\gamma_5\gamma_\kappa\partial_\rho - im\sigma^{\mu\nu}\right)\psi_\nu$, where the bar above $\psi_\mu$ denotes the Dirac adjoint. This equation controls the propagation of the wave function of composite objects such as the delta baryons (Δ) or of the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally. The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation $\psi_\mu \to \psi_\mu + \partial_\mu\epsilon$, where $\epsilon$ is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino. "Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist. Equations of motion in the massless case Consider a massless Rarita–Schwinger field described by the Lagrangian density $\mathcal{L}_{RS} = -\tfrac{1}{2}\,\bar{\psi}_\mu\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho$, where the sum over spin indices is implicit, the $\psi_\mu$ are Majorana spinors, and $\gamma^{\mu\nu\rho}$ denotes the antisymmetrized product of three gamma matrices. To obtain the equations of motion we vary the Lagrangian with respect to the fields $\psi_\mu$; using the Majorana flip properties we see that the second and first terms on the RHS of the variation are equal, concluding that $\delta\mathcal{L}_{RS} = -\,\delta\bar{\psi}_\mu\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho$ plus unimportant boundary terms. Imposing $\delta\mathcal{L}_{RS} = 0$ we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads $\gamma^{\mu\nu\rho}\partial_\nu\psi_\rho = 0$. The gauge symmetry of the massless Rarita–Schwinger equation allows the choice of the gauge $\gamma^\mu\psi_\mu = 0$, reducing the equations to a massless Dirac equation for each vector component together with a transversality constraint. A solution with spins 1/2 and 3/2 is obtained by decomposing the field (with projectors built from the spatial Laplacian) into a doubly transverse part, carrying spin 3/2, and a part satisfying the massless Dirac equation, therefore carrying spin 1/2. Drawbacks of the equation The current description of massive, higher-spin fields through either Rarita–Schwinger or Fierz–Pauli formalisms is afflicted with several maladies. Superluminal propagation As in the case of the Dirac equation, electromagnetic interaction can be added by promoting the partial derivative to the gauge covariant derivative: $\partial_\mu \to D_\mu = \partial_\mu - ieA_\mu$. In 1969, Velo and Zwanziger showed that the Rarita–Schwinger Lagrangian coupled to electromagnetism leads to an equation with solutions representing wavefronts, some of which propagate faster than light. In other words, the field then suffers from acausal, superluminal propagation; consequently, the quantization in interaction with electromagnetism is essentially flawed. In extended supergravity, though, Das and Freedman have shown that local supersymmetry solves this problem. References Sources Collins P.D.B., Martin A.D., Squires E.J., Particle physics and cosmology (1989) Wiley, Section 1.6. Eponymous equations of physics Quantum field theory Spinors Partial differential equations Fermions Mathematical physics
Rarita–Schwinger equation
[ "Physics", "Materials_science", "Mathematics" ]
763
[ "Quantum field theory", "Matter", "Equations of physics", "Fermions", "Applied mathematics", "Theoretical physics", "Eponymous equations of physics", "Quantum mechanics", "Condensed matter physics", "Mathematical physics", "Subatomic particles" ]
7,010,754
https://en.wikipedia.org/wiki/QuickSilver%20%28project%29
The QuickSilver project at Cornell University is an AFRL-funded effort to build a platform in support of a new generation of scalable, secure, reliable distributed computing applications able to "regenerate" themselves after failure. The project also receives DARPA funding under the SRS program; partners include the United States Air Force, Raytheon, Microsoft, IBM, and Amazon. The principal investigators are Cornell Professors Kenneth P. Birman, Johannes Gehrke, and Paul Francis. External links Project home page with links to the over 140 published papers from 1999–2006. DARPA
QuickSilver (project)
[ "Technology" ]
121
[ "Computing stubs" ]
7,010,827
https://en.wikipedia.org/wiki/Auxiliary%20label
An auxiliary label (also called cautionary and advisory label or prescription drug warning label) is a label added to a dispensed medication package by a pharmacist in addition to the usual prescription label. These labels are intended to provide supplementary information regarding the safe administration, use, and storage of the medication. Auxiliary labels provide information which can augment but not replace verbal counselling from a pharmacist. History Auxiliary labels became popular during the second half of the nineteenth century. In 2013, the first recommendations for auxiliary label usage in the United States were published as USP Chapter <17>. This included a recommendation to limit the use of auxiliary labels to evidence-based labels with critical information, and without pictures unless evidence shows increased efficacy when a picture is used. It is further recommended that labels are placed in a manner obvious to the patient without having to turn or rotate the package. Contents Auxiliary labels are small stickers consisting of one or more lines of text intended to enhance patient knowledge, with or without a pictogram. The directions for use included on the standard prescription label are typically limited to direct administration information, such as how often, when, and how to take the medication. As such, auxiliary labels are used for additional information that is not included in those directions for use printed on the label, or information which cannot fit on the prescription label itself due to limited space. Overall, auxiliary labels contain information intended to promote proper medication adherence through reminders about important information that will be seen anytime the bottle is picked up. They should be designed to be as simple as possible, written in plain language, and understandable for people with low health literacy. Sometimes auxiliary labels are used not to add additional information to the packaging, but instead to reinforce information with a pictorial representation of the instructions for use. This may consist of a pictorial representation of the frequency of use, the time of day to take the medication, the administration route, or other information. Picture representations of directions can be useful for patients with low literacy, or who have trouble reading and comprehending text instructions due to age, eyesight, or language barriers. As some medications must be stored under specific conditions (such as in the original container with desiccant, or refrigerated), auxiliary labels may be used to reinforce these storage requirements to ensure the medication does not degrade through improper storage. Because some people may have difficulty swallowing medications whole, auxiliary labels may be used to provide advice on solutions, such as whether the medication can be chewed, crushed, or cut. Another use of auxiliary labels is important information on side effects or drug/food interactions. A 2016 study found that many patients consider side effects to be important information to be included on a prescription package and that auxiliary labels are a good tool to provide this information on the packaging itself (as opposed to a separate information sheet/handout). The same study found that patients associated the use of red as a highlight color with information regarding warnings, allergies, or side effects. Usage Deciding what auxiliary labels are suitable for a particular prescription requires knowledge of the drug's classification, interactions, and side effects.
One study of auxiliary label usage found that about 80% of dispensed prescriptions would benefit from at least one auxiliary label to reinforce information, or provide additional important information aside from the directions for use. The most common auxiliary labels on prescriptions include "May cause drowsiness" and "Alcohol may intensify the effect of this medication". There is no standard for how to place auxiliary labels on a prescription, but they should be placed so that they will be visible and intelligible in the normal course of medication usage. Auxiliary labels may be placed on a prescription vial vertically, horizontally, or on the vial cap, which is called "interactive placement". Placement of the label in an interactive manner, where the patient must interact with it to open the vial, increases the chance the label is noticed and considered by the patient. Both horizontal and interactive placement are superior to vertical placement, because a vertically placed label requires rotating the vial to read. One study in 2007 found that 82% of prescriptions had auxiliary labels placed vertically, requiring the bottle to be tilted to read the text. The same study found a wide variation in coloring used on auxiliary labels from different pharmacies, and that between 8% and 25% of prescriptions filled had no warning labels at all. The use of auxiliary labels does not substitute for pharmacist consultation about medications, nor for any supplemental medication guides or handouts recommended or required to be distributed with a drug. Auxiliary labels should only be used to remind or enhance instructions for use or warnings that have already been given by the pharmacist or doctor to the patient verbally. Effectiveness Auxiliary labels can commonly be misinterpreted, especially when multi-step or multi-part instructions are present on one label. Misinterpretation of auxiliary labels can occur when patients are unable to understand the wording of the label, and thus assume an instruction based on the pictogram or color of the label. In addition to misinterpretation, some studies have found that most patients ignore auxiliary labels on prescriptions completely, especially those with low health literacy. This may be due in part to the belief that information presented on the bottle is not important, or due to the manner in which the labels are affixed to the vial. When auxiliary labels are used as a reminder to the patient of important information, failure to understand and follow the instructions from auxiliary labels can result in treatment failure or adverse effects. The effectiveness of auxiliary labels can vary greatly between different label formats and specific text, with a 2006 survey finding that one common multi-step, complex label ("Do not take dairy products, antacids, or iron preparations within 1 hour of this medication") was interpreted correctly only 7.6% of the time. The overall effectiveness of auxiliary labels depends on the number of labels affixed, the design of the label, and their positioning on the medication package or vial. Simplifying the content and number of auxiliary labels can improve patient comprehension. In the United States, labels are commonly stocked only in English, which can decrease the chance of understanding in areas with significant non-English-speaking populations. Only one third of auxiliary labels in the United States are available in languages other than English.
Common elements considered to increase the chance of effectiveness of an auxiliary label include a single-step instruction, easy-to-read text (for example, a low Lexile score), clear and simple icons (if present), color used to represent severity, and clarity of the instruction being represented. Font size and style, including boldface or capitalization patterns, can also impact the effectiveness of an auxiliary label. The effectiveness of auxiliary labels is also increased when pharmacists explicitly point out their presence on the package and explain the importance of each of the warnings presented on the auxiliary labels. It has also been recommended that people with low health literacy, and low literacy in general, be consulted during the design process for auxiliary labels to improve the chance of comprehension and effectiveness. References Pharmacy Labels
Auxiliary label
[ "Chemistry" ]
1,463
[ "Pharmacology", "Pharmacy" ]
7,010,925
https://en.wikipedia.org/wiki/Protein%20K%20%28porin%29
Protein K is a porin expressed in some pathogenic strains of E. coli bacteria. It has a molecular weight of about 40 kDa and is localized to the outer membrane, through which it allows both inorganic and organic ions to pass. The addition of Protein K to the outer membrane has been shown to increase the rate of uptake of nutrients and the growth rate relative to the parental porin-deficient (porin−) strain. The strains in which Protein K has been identified are encapsulated, or surrounded by a polysialic acid capsule that renders them more resistant to phagocytosis by cells in the immune system. References Whitfield C, Hancock RE, Costerton JW. (1983). Outer membrane protein K of Escherichia coli: purification and pore-forming properties in lipid bilayer membranes. J Bacteriol 156(2): 873-879. Sutcliffe J, Blumenthal R, Walter A, Foulds J. (1983). Escherichia coli outer membrane protein K is a porin. J Bacteriol 156(2): 867-872. Bliss JM, Silver RP. (1996). Coating the surface: a model for expression of capsular polysialic acid in Escherichia coli K1. Mol Microbiol 21:221. Outer membrane proteins
Protein K (porin)
[ "Chemistry" ]
328
[ "Biochemistry stubs", "Protein stubs" ]
7,011,097
https://en.wikipedia.org/wiki/Ricoh%20XR-P
The Ricoh XR-P is a 35mm single-lens reflex (SLR) camera introduced in 1984. Specifications The XR-P's lens system is the Ricoh System RK mount. Shutter speeds range from 16 seconds to 1/2000 second, plus B and TV. It has a self-timer of 10 seconds (zero seconds for left-hand shutter operation), and an interval timer of 2 seconds, 15 seconds, or 60 seconds. The viewfinder's field of view covers 93% of the frame, with 0.88× magnification with a 50mm f/1.4 standard lens. The viewfinder display includes exposure adjustment, AE lock, manual mode, program mode, TV mode, overexposure and underexposure marks, a shutter speed indicator, a low-battery warning, and the programmed f-stop. References Ricoh XR-P Multi-Program users manual, Ricoh Company, Ltd., Tokyo Ricoh XR-P The "Long Course", Ricoh Corporation, West Caldwell, NJ Single-lens reflex cameras
Ricoh XR-P
[ "Technology" ]
211
[ "System cameras", "Single-lens reflex cameras" ]
7,011,270
https://en.wikipedia.org/wiki/Warm%20dense%20matter
Warm dense matter, abbreviated WDM, can refer to either equilibrium or non-equilibrium states of matter in a (loosely defined) regime of temperature and density between condensed matter and hot plasma. It can be defined as the state that is too dense to be described by weakly coupled plasma physics yet too hot to be described by condensed matter physics. In this state, the potential energy of the Coulomb interaction between electrons and ions is on the same order of magnitude as (or even significantly exceeds) their thermal energy, while the latter is comparable to the Fermi energy. Typically, WDM has a density somewhere between and a temperature on the order of several thousand kelvins (somewhere between , in the units favored by practitioners). WDM is expected in the interiors of giant planets, brown dwarfs, and small stars. WDM is routinely formed in the course of intense-laser–target interactions (including inertial confinement fusion research), particle-beam–target interactions, and in other setups where condensed matter is quickly heated to become a strongly interacting plasma. As such, WDM physics is also relevant to the ablation of metals (atmospheric entry from space, laser machining of materials, etc.). WDM created using ultra-fast laser pulses may for a short time exist in a two-temperature non-equilibrium form in which a small fraction of electrons are very hot, with a temperature well above that of the bulk matter. See also Liquid metals References Plasma theory and modeling
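The regime described above is often quantified with two dimensionless parameters; a minimal sketch of this standard bookkeeping (the symbols below are conventional, not taken from the article) uses the Coulomb coupling parameter and the electron degeneracy parameter,

$$\Gamma = \frac{E_{\text{Coulomb}}}{k_B T} = \frac{Z^2 e^2}{4\pi\varepsilon_0\, a\, k_B T}, \qquad \theta = \frac{k_B T}{E_F},$$

with $a$ the mean inter-ion distance and $E_F$ the Fermi energy; WDM loosely corresponds to $\Gamma \gtrsim 1$ together with $\theta \sim 1$.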
Warm dense matter
[ "Physics" ]
303
[ "Plasma theory and modeling", "Plasma physics" ]
7,011,453
https://en.wikipedia.org/wiki/Autochem
AutoChem is NASA-released software that constitutes an automatic computer code generator and documenter for chemically reactive systems, written by David Lary from 1993 to the present. It was designed primarily for modeling atmospheric chemistry, and in particular, for chemical data assimilation. The user selects a set of chemical species. AutoChem then searches chemical reaction databases for these species and automatically constructs the ordinary differential equations (ODEs) that describe the chemical system. AutoChem symbolically differentiates the time derivatives to give the Jacobian matrix, and symbolically differentiates the Jacobian matrix to give the Hessian matrix and the adjoint. The Jacobian matrix is required by many algorithms that solve the ordinary differential equations numerically, particularly when the ODEs are stiff. The Hessian matrix and the adjoint are required for four-dimensional variational data assimilation (4D-Var). AutoChem documents the whole process in a set of LaTeX and PDF files. The reactions involving the user-specified constituents are extracted by the first AutoChem preprocessor program, called Pick. This subset of reactions is then used by the second AutoChem preprocessor program, RoC (rate of change), to generate the time derivatives, Jacobian, and Hessian. Once the two preprocessor programs have run to completion, all the Fortran 90 code necessary for modeling and assimilating the kinetic processes has been generated. A huge observational database of many different atmospheric constituents from a host of platforms is available from the AutoChem site. AutoChem has been used to perform long-term chemical data assimilation of atmospheric chemistry. This assimilation was automatically documented by the AutoChem software and is available online at CDACentral. Data quality is always an issue for chemical data assimilation, in particular the presence of biases. To identify and understand the biases it is useful to compare observations using probability distribution functions. Such an analysis is available online at PDFCentral, which was designed for the validation of observations from the NASA Aura satellite. See also Chemical kinetics CHEMKIN Cantera Chemical WorkBench Kinetic PreProcessor (KPP) SpeedCHEM References Computational chemistry software Chemical kinetics Environmental chemistry
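AutoChem itself emits Fortran 90, but the core idea described above — build symbolic rate equations from a reaction list, then differentiate them symbolically to obtain the Jacobian a stiff solver needs — is easy to sketch. The following is a hypothetical illustration in Python with sympy; the toy mechanism and rate names are invented, not taken from AutoChem's databases:

```python
# A minimal sketch (not AutoChem itself): build symbolic time derivatives
# for a toy reaction set, then derive the Jacobian by symbolic
# differentiation, as AutoChem does for its user-selected species.
import sympy as sp

# Toy mechanism: A + B -> C (rate k1), C -> A + B (rate k2)
A, B, C = sp.symbols("A B C", nonnegative=True)
k1, k2 = sp.symbols("k1 k2", positive=True)

# Time derivatives (the ODE right-hand sides) from mass-action kinetics
dA = -k1 * A * B + k2 * C
dB = -k1 * A * B + k2 * C
dC = k1 * A * B - k2 * C

rhs = sp.Matrix([dA, dB, dC])
species = sp.Matrix([A, B, C])

# Symbolic Jacobian, needed by stiff ODE solvers (and, differentiated
# once more, the Hessian needed for 4D-Var)
jac = rhs.jacobian(species)
print(jac)
```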
Autochem
[ "Chemistry", "Environmental_science" ]
451
[ "Chemical reaction engineering", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Environmental chemistry", "Computational chemistry", "Computational chemistry stubs", "nan", "Chemical kinetics", "Physical chemistry stubs" ]
7,011,824
https://en.wikipedia.org/wiki/Biotechnology%20in%20pharmaceutical%20manufacturing
Biotechnology is the use of living organisms to develop useful products. Biotechnology is often used in pharmaceutical manufacturing. Notable examples include the use of bacteria to produce products such as insulin or human growth hormone. Other examples include the use of transgenic pigs to produce hemoglobin for use in humans. Human insulin Amongst the earliest uses of biotechnology in pharmaceutical manufacturing is the use of recombinant DNA technology to modify Escherichia coli bacteria to produce human insulin, which was performed at Genentech in 1978. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. While generally efficacious in the treatment of diabetes, animal-derived insulin is not identical to human insulin and may therefore produce allergic reactions. Genentech researchers produced artificial genes for each of the two protein chains that comprise the insulin molecule. The artificial genes were "then inserted... into plasmids... among a group of genes that" are activated by lactose. Thus, the insulin-producing genes were also activated by lactose. The recombinant plasmids were inserted into Escherichia coli bacteria, which were "induced to produce 100,000 molecules of either chain A or chain B human insulin." The two protein chains were then combined to produce insulin molecules. Human growth hormone Prior to the use of recombinant DNA technology to modify bacteria to produce human growth hormone, the hormone was manufactured by extraction from the pituitary glands of cadavers, as animal growth hormones have no therapeutic value in humans. Production of a single year's supply of human growth hormone required up to fifty pituitary glands, creating significant shortages of the hormone. In 1979, scientists at Genentech produced human growth hormone by inserting DNA coding for human growth hormone into a plasmid that was implanted in Escherichia coli bacteria. The gene that was inserted into the plasmid was created by reverse transcription of the mRNA found in pituitary glands to complementary DNA. HaeIII, a type of restriction enzyme which acts at restriction sites "in the 3' noncoding region" and at the 23rd codon in complementary DNA for human growth hormone, was used to produce "a DNA fragment of 551 base pairs which includes coding sequences for amino acids 24–191 of HGH." Then "a chemically synthesized DNA 'adaptor' fragment containing an ATG initiation codon..." was produced with the codons for the first through 23rd amino acids in human growth hormone. The "two DNA fragments... [were] combined to form a synthetic-natural 'hybrid' gene." The use of entirely synthetic methods of DNA production to produce a gene that would be translated to human growth hormone in Escherichia coli would have been exceedingly laborious due to the significant length of the amino acid sequence of human growth hormone. However, if the cDNA reverse transcribed from the mRNA for human growth hormone were inserted directly into the plasmid inserted into the Escherichia coli, the bacteria would translate regions of the gene that are not translated in humans, thereby producing a "pre-hormone containing an extra 26 amino acids" which might be difficult to remove.
Human blood clotting factors Prior to the development and FDA approval of a means to produce human blood clotting factors using recombinant DNA technologies, human blood clotting factors were produced from donated blood that was inadequately screened for HIV. Thus, HIV infection posed a significant danger to patients with hemophilia who received human blood clotting factors: Most reports indicate that 60 to 80 percent of patients with hemophilia who were exposed to factor VIII concentrates between 1979 and 1984 are seropositive for HIV by [the] Western blot assay. As of May 1988, more than 659 patients with hemophilia had AIDS... The first human blood clotting factor to be produced in significant quantities using recombinant DNA technology was Factor IX, which was produced using transgenic Chinese hamster ovary cells in 1986. Lacking a map of the human genome, researchers obtained a known sequence of the RNA for Factor IX by examining the amino acids in Factor IX: Microsequencing of highly purified... [Factor IX] yielded sufficient amino acid sequence to construct oligonucleotide probes. The known sequence of Factor IX RNA was then used to search for the gene coding for Factor IX in a library of the DNA found in the human liver, since it was known that blood clotting factors are produced by the human liver: A unique oligonucleotide... homologous to Factor IX mRNA... was synthesized and labeled... The resultant probe was used to screen a human liver double-stranded cDNA library... Complete two-stranded DNA sequences of the... [relevant] cDNA... contained all of the coding sequence COOH-terminal of the eleventh codon (11) and the entire 3'-untranslated sequence. This sequence of cDNA was used to find the remaining DNA sequences comprising the Factor IX gene by searching the DNA in the X chromosome: A genomic library from a human XXXX chromosome was prepared... and screen[ed] with a Factor IX cDNA probe. Hybridizing recombinant phage were isolated, plaque-purified, and the DNA isolated. Restriction mapping, Southern analysis, and DNA sequencing permitted identification of five recombinant phage containing inserts which, when overlapped at common sequences, coded the entire 35kb Factor IX gene. Plasmids containing the Factor IX gene, along with plasmids with a gene that codes for resistance to methotrexate, were inserted into Chinese hamster ovary cells via transfection. Transfection involves the insertion of DNA into a eukaryotic cell. Unlike the analogous process of transformation in bacteria, transfected DNA is not ordinarily integrated into the cell's genome, and is therefore not usually passed on to subsequent generations via cell division. Thus, in order to obtain a "stable" transfection, a gene which confers a significant survival advantage must also be transfected, causing the few cells that did integrate the transfected DNA into their genomes to increase their population as cells that did not integrate the DNA are eliminated. In the case of this study, "grow[th] in increasing concentrations of methotrexate" promoted the survival of stably transfected cells, and diminished the survival of other cells. The Chinese hamster ovary cells that were stably transfected produced significant quantities of Factor IX, which was shown to have substantial coagulant properties, though of a lesser degree than Factor IX produced from human blood: The specific activity of the recombinant Factor IX was measured on the basis of direct measurement of the coagulant activity...
The specific activity of recombinant Factor IX was 75 units/mg... compared to 150 units/mg measured for plasma-derived Factor IX... In 1992, the FDA approved Factor VIII produced using transgenic Chinese hamster ovary cells, the first such blood clotting factor produced using recombinant DNA technology to be approved. Transgenic farm animals Recombinant DNA techniques have also been employed to create transgenic farm animals that can produce pharmaceutical products for use in humans. For instance, pigs that produce human hemoglobin have been created. While blood from such pigs could not be employed directly for transfusion to humans, the hemoglobin could be refined and employed to manufacture a blood substitute. Paclitaxel (Taxol) Bristol-Myers Squibb manufactures paclitaxel using Penicillium raistrickii and plant cell fermentation (PCF). Artemisinin Transgenic yeast are used to produce artemisinin, as well as a number of insulin analogs. See also Molecular Biotechnology (journal) Bacillus isolates Fungal isolates Medicinal molds Sponge isolates Streptomyces isolates References Drug manufacturing Biotechnology
Biotechnology in pharmaceutical manufacturing
[ "Biology" ]
1,700
[ "nan", "Biotechnology" ]
7,012,204
https://en.wikipedia.org/wiki/Communication%20endpoint
A communication endpoint is a type of communication network node. It is an interface exposed by a communicating party or by a communication channel. An example of the latter type of communication endpoint is a publish–subscribe topic or a group in group communication systems. See also Connection-oriented communication Data terminal equipment Dial peer End system Host (network) Terminal (telecommunication) References Computing terminology Telecommunications
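A concrete illustration of the first kind of endpoint (an interface exposed by a communicating party): in the BSD sockets API, an endpoint is identified by an (address, port) pair. The sketch below is a minimal example, and the address and port values are arbitrary:

```python
# A party exposes a communication endpoint by binding an (address, port)
# pair and listening on it. Values here are illustrative only.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))   # the endpoint: address + port
server.listen()
print("listening on", server.getsockname())
```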
Communication endpoint
[ "Technology" ]
80
[ "Information and communications technology", "Computing terminology", "Computer network stubs", "Telecommunications", "Computing stubs" ]
7,012,532
https://en.wikipedia.org/wiki/Richard%20Paul%20Pavlick
Richard Paul Pavlick (February 13, 1887 – November 11, 1975) was a retired postal worker from New Hampshire who stalked Senator and U.S. president-elect John F. Kennedy, with the intent of assassinating him. On December 11, 1960, in Palm Beach, Florida, Pavlick positioned himself to carry out the assassination by blowing up Kennedy and himself with dynamite, but delayed the attempt because Kennedy was with his wife Jacqueline and their two young children. He was arrested before he was able to stage another attempt. Personal background Pavlick was born on February 13, 1887, in Belmont, New Hampshire. After serving in the United States Army during World War I, he worked as a postal worker in Boston, Massachusetts, before retiring and relocating to Belmont in the 1950s. Pavlick had no family. He became known at local public meetings for his angry political rants, which included complaints that the American flag was not being displayed appropriately; he also criticized the government and hated Catholics, focusing much of his anger on the Kennedy family and their wealth. Assassination plan After Kennedy defeated Vice President Richard Nixon in the 1960 presidential election, 73-year-old Pavlick decided to kill Kennedy. He turned his property over to a local youth camp, loaded his meager possessions into his 1950 Buick, and disappeared. Soon after, Belmont's postmaster began receiving bizarre postcards from Pavlick stating that the town would soon hear from him "in a big way". Noticing that the postmarked dates and locations matched Kennedy's movements, the postmaster contacted the Secret Service; the Secret Service interviewed locals and learned of Pavlick's previous outbursts and that he had recently purchased dynamite. During his travels, Pavlick had visited the Kennedy compound at Hyannis Port, Massachusetts, and photographed the Kennedy home while also checking out the compound's security. He also surveyed the Kennedy residence in Georgetown. Shortly before 10 a.m. on Sunday, December 11, as Kennedy was preparing to leave for Mass at St. Edward Church in Palm Beach, Pavlick waited in his dynamite-laden 1950 Buick, hoping to detonate it and cause a fatal explosion. However, Pavlick changed his mind after seeing Kennedy with his wife, Jacqueline, and the couple's two small children. Pavlick said, "I did not want to harm her or the children." While waiting for another opportunity over the next few days, Pavlick visited the church to learn its interior, but the Secret Service had informed local Palm Beach police to look out for Pavlick's automobile. Four days later, on December 15, Palm Beach police officer Lester Free spotted Pavlick's vehicle crossing the Royal Poinciana Bridge. After his arrest, Pavlick said, "Kennedy money bought the White House and the Presidency. I had the crazy idea I wanted to stop Kennedy from being President." On January 27, 1961, Pavlick was committed to the federal medical center in Springfield, Missouri, then was indicted for threatening Kennedy's life seven weeks later. According to Ted Sorensen, Kennedy "was merely bemused" when he found out about Pavlick. Later life Charges against Pavlick were dropped on December 2, 1963, ten days after Kennedy's assassination in Dallas, Texas. Judge Emett Clay Choate ruled that Pavlick was mentally ill—unable to distinguish between right and wrong in his actions—and ordered that he remain in a psychiatric hospital.
The federal government also dropped charges in August 1964, and Pavlick was eventually released from the New Hampshire State Hospital on December 13, 1966. Pavlick died at age 88 on November 11, 1975, at the Veterans Administration Hospital in Manchester, New Hampshire. In popular culture Pavlick was portrayed by Kent Broadhurst in the 1983 miniseries Kennedy, but his age is inaccurately portrayed as being 36, rather than the actual 73. In 2013, the Military Channel produced a hypothetical documentary, What If...? Armageddon 1962, in which Pavlick managed to kill Kennedy, and Lyndon B. Johnson's inept handling of the Cuban Missile Crisis resulted in a nuclear exchange. References External links "The Kennedy Assassin Who Failed", by Dan Lewis, Smithsonian.com, December 6, 2012. 1887 births 1975 deaths United States Army personnel of World War I Failed assassins of presidents of the United States Presidency of John F. Kennedy People from Belmont, New Hampshire Military personnel from New Hampshire American failed assassins Stalking Anti-Catholicism in the United States People acquitted by reason of insanity United States Postal Service people American Protestants
Richard Paul Pavlick
[ "Biology" ]
943
[ "Behavior", "Aggression", "Stalking" ]
7,012,714
https://en.wikipedia.org/wiki/Cav1.2
{{DISPLAYTITLE:Cav1.2}} Calcium channel, voltage-dependent, L type, alpha 1C subunit (also known as Cav1.2) is a protein that in humans is encoded by the CACNA1C gene. Cav1.2 is a subunit of the L-type voltage-dependent calcium channel. Structure and function This gene encodes an alpha-1 subunit of a voltage-dependent calcium channel. Calcium channels mediate the influx of calcium ions (Ca2+) into the cell upon membrane depolarization (see membrane potential and calcium in biology). The alpha-1 subunit consists of 24 transmembrane segments and forms the pore through which ions pass into the cell. The calcium channel consists of a complex of alpha-1, alpha-2/delta and beta subunits in a 1:1:1 ratio. The S3-S4 linkers of Cav1.2 determine the gating phenotype and modulated gating kinetics of the channel. Cav1.2 is widely expressed in smooth muscle, pancreatic cells, fibroblasts, and neurons. However, it is particularly important and well known for its expression in the heart, where it mediates L-type currents that trigger calcium-induced calcium release from sarcoplasmic reticulum stores via ryanodine receptors. It activates at about −30 mV and helps define the shape of the action potential in cardiac and smooth muscle. The protein encoded by this gene binds to and is inhibited by dihydropyridine. In the arteries of the brain, high levels of calcium in mitochondria elevate the activity of nuclear factor kappa B (NF-κB) and transcription of CACNA1C, and functional Cav1.2 expression increases. Cav1.2 also regulates levels of osteoprotegerin. CaV1.2 is inhibited by the action of STIM1. Regulation The activity of CaV1.2 channels is tightly regulated by the Ca2+ signals they produce. An increase in intracellular Ca2+ concentration is implicated in Cav1.2 facilitation, a form of positive feedback called Ca2+-dependent facilitation that amplifies Ca2+ influx. Increasing intracellular Ca2+ concentration has also been implicated in the opposite effect, Ca2+-dependent inactivation. These activation and inactivation mechanisms both involve Ca2+ binding to calmodulin (CaM) at the IQ domain in the C-terminal tail of these channels. Cav1.2 channels are arranged in clusters of eight, on average, in the cell membrane. When calcium ions bind to calmodulin, which in turn binds to a Cav1.2 channel, the Cav1.2 channels within a cluster can interact with each other. This results in channels working cooperatively: they open at the same time to allow more calcium ions to enter, and then close together to allow the cell to relax. Clinical significance Mutations in the CACNA1C gene, including a single-nucleotide polymorphism located in the third intron of the Cav1.2 gene, are associated with a variant of long QT syndrome called Timothy syndrome, more broadly with other CACNA1C-related disorders, and also with Brugada syndrome. Large-scale genetic analyses have shown the possibility that CACNA1C is associated with bipolar disorder and subsequently also with schizophrenia. Also, a CACNA1C risk allele has been associated with a disruption in brain connectivity in patients with bipolar disorder, while not, or only to a minor degree, in their unaffected relatives or healthy controls. In a first study in an Indian population, the schizophrenia-associated genome-wide association study (GWAS) SNP was found not to be associated with the disease. Furthermore, the main effect of rs1006737 was found to be associated with spatial ability efficiency scores.
Subjects with genotypes carrying the risk allele of rs1006737 (G/A and A/A) were found to have higher spatial ability efficiency scores than those with the G/G genotype. While among healthy controls those with G/A and A/A genotypes had higher spatial memory processing speed scores than those with G/G genotypes, the former had lower scores than the latter among schizophrenia subjects. In the same study, the genotype carrying the risk allele of rs1006737, namely A/A, was associated with significantly lower aligned-rank-transformed Abnormal Involuntary Movement Scale (AIMS) scores for tardive dyskinesia (TD). Interactive pathway map See also Calcium channel Calcium channel associated transcriptional regulator References Further reading External links GeneReviews/NIH/NCBI/UW entry on Brugada syndrome GeneReviews/NIH/NCBI/UW entry on Timothy Syndrome Ion channels Biology of bipolar disorder
Cav1.2
[ "Chemistry" ]
1,036
[ "Neurochemistry", "Ion channels" ]
7,012,846
https://en.wikipedia.org/wiki/Kir2.1
{{DISPLAYTITLE:Kir2.1}} The Kir2.1 inward-rectifier potassium channel is a lipid-gated ion channel encoded by the KCNJ2 gene. Clinical significance A defect in this gene is associated with Andersen–Tawil syndrome. A mutation in the KCNJ2 gene has also been shown to cause short QT syndrome. In research In neurogenetics, Kir2.1 is used in Drosophila research to inhibit neurons, as overexpression of this channel hyperpolarizes cells. In optogenetics, a trafficking sequence from Kir2.1 has been added to halorhodopsin to improve its membrane localization. The resulting protein, eNpHR3.0, is used in optogenetic research to inhibit neurons with light. Expression of the Kir2.1 gene in human HEK293 cells induces a transient outward current, creating a steady membrane potential close to the reversal potential of potassium. Interactions Kir2.1 has been shown to interact with: DLG4, Interleukin 16, and TRAK2 References Further reading External links GeneReviews/NCBI/NIH/UW entry on Andersen–Tawil syndrome OMIM entries on Andersen–Tawil syndrome Ion channels
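The "reversal potential of potassium" mentioned above is given by the Nernst equation; as a standard reminder (the ionic concentrations shown are typical mammalian values used for illustration, not taken from the article):

$$E_K = \frac{RT}{zF}\ln\frac{[\mathrm{K}^+]_{\text{out}}}{[\mathrm{K}^+]_{\text{in}}} \approx 26.7\ \mathrm{mV}\times\ln\frac{5\ \mathrm{mM}}{140\ \mathrm{mM}} \approx -89\ \mathrm{mV} \quad (37^\circ\mathrm{C},\ z = 1)$$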
Kir2.1
[ "Chemistry" ]
269
[ "Neurochemistry", "Ion channels" ]
7,013,043
https://en.wikipedia.org/wiki/Cadalene
Cadalene or cadalin (4-isopropyl-1,6-dimethylnaphthalene) is a polycyclic aromatic hydrocarbon with a chemical formula C15H18 and a cadinane skeleton. It is derived from generic sesquiterpenes, and ubiquitous in essential oils of many higher plants. Cadalene, together with retene, simonellite and ip-iHMN, is a biomarker of higher plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene to cadalene in sediments can reveal the proportion of the family Pinaceae in the biosphere. References Petroleum products Naphthalenes Sesquiterpenes Biomarkers Isopropyl compounds
Cadalene
[ "Chemistry", "Biology" ]
165
[ "Petroleum", "Biomarkers", "Petroleum products" ]
7,013,200
https://en.wikipedia.org/wiki/Encainide
Encainide (trade name Enkaid) is a class Ic antiarrhythmic agent. It is no longer used because of its frequent proarrhythmic side effects. Synthesis See also Iferanserin Cardiac Arrhythmia Suppression Trial References Antiarrhythmic agents Benzanilides 4-Methoxyphenyl compounds Piperidines Sodium channel blockers Withdrawn drugs
Encainide
[ "Chemistry" ]
88
[ "Drug safety", "Withdrawn drugs" ]
7,013,240
https://en.wikipedia.org/wiki/Simonellite
Simonellite (1,1-dimethyl-1,2,3,4-tetrahydro-7-isopropyl phenanthrene) is a polycyclic aromatic hydrocarbon with a chemical formula C19H24. It is similar to retene. Simonellite occurs naturally as an organic mineral derived from diterpenes present in conifer resins. It is named after its discoverer, Vittorio Simonelli (1860–1929), an Italian geologist. It forms colorless to white orthorhombic crystals. It occurs in Fognano, Tuscany, Italy. Simonellite, together with cadalene, retene and ip-iHMN, is a biomarker of higher plants, which makes it useful for paleobotanic analysis of rock sediments. See also Fichtelite Retene References Organic minerals Phenanthrenes Biomarkers Diterpenes Isopropyl compounds
Simonellite
[ "Chemistry", "Biology" ]
203
[ "Organic compounds", "Biomarkers", "Organic minerals" ]
7,013,253
https://en.wikipedia.org/wiki/Ajmaline
Ajmaline (also known by the trade names Gilurytmal, Ritmos, and Aritmina) is an alkaloid that is classified as a class Ia antiarrhythmic agent. It is often used to provoke arrhythmic activity in patients suspected of having Brugada syndrome. Individuals suffering from Brugada syndrome are more susceptible to the arrhythmogenic effects of the drug, and this can be observed on an electrocardiogram as an ST elevation. The compound was first isolated by Salimuzzaman Siddiqui in 1931 from the roots of Rauvolfia serpentina. He named it ajmaline after Hakim Ajmal Khan, one of the most illustrious practitioners of Unani medicine in South Asia. Ajmaline can be found in most species of the genus Rauvolfia, as well as in Catharanthus roseus. In addition to Southeast Asia, Rauvolfia species have also been found in tropical regions of India, Africa, South America, and some oceanic islands. Other indole alkaloids found in Rauvolfia include reserpine, ajmalicine, serpentine, corynanthine, and yohimbine. While 86 alkaloids have been discovered throughout Rauvolfia vomitoria, ajmaline is mainly isolated from the stem bark and roots of the plant. Due to the low bioavailability of ajmaline, a semisynthetic propyl derivative called prajmaline (trade name Neo-gilurythmal) was developed; it induces effects similar to its predecessor but has better bioavailability and absorption. Biosynthesis Ajmaline is widely dispersed among 25 plant genera, but is found in significant concentration in the Apocynaceae family. Ajmaline is a monoterpenoid indole alkaloid, composed of an indole derived from tryptophan and a terpenoid derived from the iridoid glucoside secologanin. Secologanin is introduced from the triose phosphate/pyruvate pathway. Tryptophan decarboxylase (TDC) remodels tryptophan into tryptamine. Strictosidine synthase (STR) uses a Pictet–Spengler reaction to form strictosidine from tryptamine and secologanin. Strictosidine is oxidized by P450-dependent sarpagan bridge enzymes (SBE) to make polyneuridine aldehyde. Of the sarpagan-type alkaloids, polyneuridine aldehyde is a key entry into the ajmalan-type alkaloids. Polyneuridine aldehyde is hydrolyzed by polyneuridine aldehyde esterase (PNAE), and the resulting acid decarboxylates to give 16-epi-vellosimine, which is acetylated to vinorine by vinorine synthase (VS). Vinorine is oxidized by vinorine hydroxylase (VH) to make vomilenine. Vomilenine reductase (VR) reduces vomilenine to 1,2-dihydrovomilenine, using the cofactor NADPH. 1,2-Dihydrovomilenine is reduced by 1,2-dihydrovomilenine reductase (DHVR) to 17-O-acetylnorajmaline, with the same cofactor as VR: NADPH. 17-O-Acetylnorajmaline is deacetylated by acetylajmalan esterase (AAE) to form norajmaline. Finally, norajmaline methyltransferase (NAMT) methylates norajmaline, yielding the desired compound: ajmaline. Mechanism of action Ajmaline was first discovered to lengthen the refractory period of the heart by blocking sodium ion channels, but it has also been noted that it can interfere with the hERG (human Ether-à-go-go-Related Gene) potassium ion channel. In both cases, ajmaline causes the action potential to become longer and ultimately leads to bradycardia. When ajmaline reversibly blocks hERG, repolarization occurs more slowly, because less potassium can leave the cell through the remaining unblocked channels; this lengthens the QT interval. Ajmaline also prolongs the QRS interval, since as a sodium channel blocker it makes the membrane take longer to depolarize in the first place.
In both cases, ajmaline causes the action potential to become longer. Slower depolarization or repolarization results in a lengthened QT interval (encompassing the refractory period), and it therefore takes more time for the membrane potential to fall below the threshold level so that an action potential can be fired again. Even if another stimulus is present, an action potential cannot occur again until after complete repolarization. Ajmaline thus prolongs action potentials, slowing the firing of the conducting myocytes, which ultimately slows the beating of the heart. Diagnosis of Brugada syndrome Brugada syndrome is a genetic disease that can result from mutations in the sodium ion channel (gene SCN5A) of the myocytes in the heart. Brugada syndrome can result in ventricular fibrillation and potentially death. It is a major cause of sudden unexpected cardiac death in young, otherwise healthy people. While the characteristic patterns of Brugada syndrome on an electrocardiogram may be seen regularly, often the abnormal pattern is only seen spontaneously due to unknown triggers or after challenge with particular drugs. Ajmaline is used intravenously to test for Brugada syndrome, since both affect the sodium ion channel. In an afflicted person challenged with ajmaline, the electrocardiogram shows the characteristic pattern of the syndrome, in which the ST segment is abnormally elevated above the baseline. Due to complications that could arise during the ajmaline challenge, a specialized doctor should perform the administration in a specialized center capable of extracorporeal membrane oxygenation support. See also Salimuzzaman Siddiqui (1897–1994), Pakistani organic chemist Hellmuth Kleinsorge (1920–2001), German medical doctor References Alkaloids found in Rauvolfia Antiarrhythmic agents Diagnostic cardiology HERG blocker Quinolizidine alkaloids Secondary alcohols Sodium channel blockers Tryptamine alkaloids Unani medicine
Ajmaline
[ "Chemistry" ]
1,372
[ "Quinolizidine alkaloids", "Alkaloids by chemical classification", "Tryptamine alkaloids" ]
7,013,264
https://en.wikipedia.org/wiki/Lorajmine
Lorajmine (17-monochloroacetylajmaline) is a drug that is a potent sodium channel blocker (more specifically, a class Ia antiarrhythmic agent) that was used for treating arrhythmia. It is derived from ajmaline, an alkaloid from the roots of Rauvolfia serpentina, by synthetically adding a chloroacetate residue. References Sodium channel blockers Secondary alcohols Carboxylate esters Indole alkaloids
Lorajmine
[ "Chemistry" ]
111
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
7,013,274
https://en.wikipedia.org/wiki/Prajmaline
Prajmaline (Neo-gilurythmal) is a class Ia antiarrhythmic agent which has been available since the 1970s. Class Ia drugs increase the time one action potential lasts in the heart. Prajmaline is a semi-synthetic propyl derivative of ajmaline, with a higher bioavailability than its predecessor. It acts to stop arrhythmias of the heart through a frequency-dependent block of cardiac sodium channels. Mechanism Prajmaline causes a resting block in the heart. A resting block is the depression of a person's Vmax after a resting period. This effect is seen more in the atrium than the ventricle. The effects of some class I antiarrhythmics are only seen in a patient who has a normal heart rate (~1 Hz), due to a phenomenon called reverse use dependence: the higher the heart rate, the less effect prajmaline will have. Uses Prajmaline has been used to treat a number of cardiac disorders, including coronary artery disease, angina, paroxysmal tachycardia and Wolff–Parkinson–White syndrome. Prajmaline has been indicated in the treatment of certain disorders where other antiarrhythmic drugs were not effective. Administration Prajmaline can be administered orally, parenterally or intravenously. Only a limited effect has been observed three days after the last dose; therefore, it has been suggested that treatment of arrhythmias with prajmaline must be continuous to see acceptable results. Pharmacokinetics The main metabolites of prajmaline are 21-carboxyprajmaline and hydroxyprajmaline. Twenty percent of the drug is excreted in the urine unchanged. The daily therapeutic dose is 40–80 mg. The distribution half-life is 10 minutes. Plasma protein binding is 60%. Oral bioavailability is 80%. The elimination half-life is 6 hours. The volume of distribution is 4–5 L/kg. Side effects There are no significant adverse side effects of prajmaline when taken alone and at a proper dosage. Patients who are taking other treatments for their symptoms (e.g. beta blockers and nifedipine) have developed minor transient conduction defects when given prajmaline. Overdose An overdose of prajmaline is possible. The range of symptoms seen during a prajmaline overdose includes: no symptoms, nausea/vomiting, bradycardia, tachycardia, hypotension, and death. Other potential uses Due to prajmaline's sodium channel-blocking properties, it has been shown to protect rat white matter from anoxia (82 ± 15%). The concentration used causes little suppression of the preanoxic response. References Alkaloids Sodium channel blockers Secondary alcohols
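As an illustration of what the quoted elimination half-life implies (a generic first-order elimination model, not a claim from the article), plasma concentration after a dose decays as

$$C(t) = C_0 \, 2^{-t/t_{1/2}}, \qquad t_{1/2} = 6\ \mathrm{h},$$

so roughly one quarter of the peak concentration remains 12 hours after dosing.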
Prajmaline
[ "Chemistry" ]
610
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
7,013,296
https://en.wikipedia.org/wiki/Sparteine
Sparteine is a class Ia antiarrhythmic agent and sodium channel blocker. It is an alkaloid and can be extracted from scotch broom. It is the predominant alkaloid in Lupinus mutabilis, and is thought to chelate the bivalent metals calcium and magnesium. It is not FDA approved for human use as an antiarrhythmic agent, and it is not included in the Vaughan Williams classification of antiarrhythmic drugs. It is also used as a chiral ligand in organic chemistry, especially in syntheses involving organolithium reagents. Biosynthesis Sparteine is a lupin alkaloid containing a tetracyclic bis-quinolizidine ring system derived from three C5 chains of lysine, or more specifically, L-lysine. The first intermediate in the biosynthesis is cadaverine, the decarboxylation product of lysine catalyzed by the enzyme lysine decarboxylase (LDC). Three units of cadaverine are used to form the quinolizidine skeleton. The mechanism of formation has been studied enzymatically, as well as with tracer experiments, but the exact route of synthesis still remains unclear. Tracer studies using 13C-15N doubly labeled cadaverine have shown that three units of cadaverine are incorporated into sparteine and that two of the C-N bonds from two of the cadaverine units remain intact. The observations have also been confirmed using 2H NMR labeling experiments. Enzymatic evidence then showed that the three molecules of cadaverine are transformed to the quinolizidine ring via enzyme-bound intermediates, without the generation of any free intermediates. Originally, it was thought that conversion of cadaverine to the corresponding aldehyde, 5-aminopentanal, was catalyzed by the enzyme diamine oxidase. The aldehyde then spontaneously converts to the corresponding Schiff base, Δ1-piperideine. Coupling of two molecules occurs between the two tautomers of Δ1-piperideine in an aldol-type reaction. The imine is then hydrolyzed to the corresponding aldehyde/amine. The primary amine is then oxidized to an aldehyde, followed by formation of the imine to yield the quinolizidine ring. Via 17-oxosparteine synthase More recent enzymatic evidence has indicated the presence of 17-oxosparteine synthase (OS), a transaminase enzyme. The deaminated cadaverine is not released from the enzyme; thus it can be assumed that the enzyme catalyzes the formation of the quinolizidine skeleton in a channeled fashion. The synthesis of 17-oxosparteine requires four units of pyruvate as the NH2 acceptors and produces four molecules of alanine. Both lysine decarboxylase and the quinolizidine skeleton-forming enzyme are localized in chloroplasts. See also Lupinus Lupin poisoning References External links Antiarrhythmic agents Quinolizidine alkaloids Sodium channel blockers
Sparteine
[ "Chemistry" ]
675
[ "Quinolizidine alkaloids", "Alkaloids by chemical classification" ]
7,013,570
https://en.wikipedia.org/wiki/Glycosynthase
The term glycosynthase refers to a class of proteins that have been engineered to catalyze the formation of a glycosidic bond. Glycosynthases are derived from glycosidase enzymes, which catalyze the hydrolysis of glycosidic bonds. They were traditionally formed from retaining glycosidases by mutating the active site nucleophilic amino acid (usually an aspartate or glutamate) to a small non-nucleophilic amino acid (usually alanine or glycine). More modern approaches use directed evolution to screen for amino acid substitutions that enhance glycosynthase activity. The first glycosynthase Two discoveries led to the development of glycosynthase enzymes. The first was that a change of the active site nucleophile of a glycosidase from a carboxylate to another amino acid resulted in a properly folded protein that had no hydrolase activity. The second discovery was that some glycosidase enzymes were able to catalyze the hydrolysis of glycosyl fluorides that had the incorrect anomeric configuration. The enzymes underwent a transglycosidation reaction to form a disaccharide, which was then a substrate for hydrolase activity. The first reported glycosynthase was a mutant of the Agrobacterium sp. β-glucosidase / galactosidase in which the nucleophile glutamate 358 was mutated to an alanine by site-directed mutagenesis. When incubated with α-glycosyl fluorides and an acceptor sugar, it was found to catalyze the transglycosidation reaction without any hydrolysis. This glycosynthase was used to synthesize a series of di- and trisaccharide products with yields between 64% and 92%. Reaction mechanism The mechanism of a glycosynthase is similar to the hydrolysis reaction of retaining glycosidases, except that no covalent enzyme intermediate is formed. Mutation of the active site nucleophile to a non-nucleophilic amino acid prevents the formation of a covalent intermediate. An activated glycosyl donor with a good anomeric leaving group (often a fluoride) is required. The leaving group is displaced by an alcohol of the acceptor sugar, aided by the active site general base amino acid of the enzyme. Modern extensions The first glycosynthase was a retaining exoglycosidase that catalyzed the formation of β 1-4 linked glycosides of glucose and galactose. Glycosynthase enzymes have since been expanded to include mutants of endoglycosidases, as well as mutants of inverting glycosidases. Substrates of glycosynthases include glucose, galactose, mannose, xylose, and glucuronic acid. Modern methods to prepare glycosynthases use directed evolution to introduce modifications which improve the enzyme's function. This process was made possible by the development of high-throughput screens for glycosynthase activity. Limitations Glycosynthases have been useful for the preparation of oligosaccharides; however, their use suffers from certain limitations. First, glycosynthases can only be used to synthesize glycosidic linkages for which there is a known glycosidase. That glycosidase must also first be converted into a glycosynthase, which is not always possible. Second, the product of the glycosynthase reaction is often a better substrate for the glycosynthase than the starting material, resulting in the formation of multiple products of varying lengths. Finally, glycosynthases are specific for the donor sugar but often have loose specificity for the acceptor sugar. This can result in different regioselectivity depending on the acceptor, yielding products with different glycosidic linkages. One example is the Agrobacterium sp.
β-glucosynthase, which forms a β-1,4-glycoside with glucose as the acceptor, but forms a β-1,3-glycoside with xylose as the acceptor. See also Glucosidase Glycoside hydrolase family 1 References Carbohydrate chemistry Carbohydrates Glycobiology
Glycosynthase
[ "Chemistry", "Biology" ]
970
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Biochemistry", "Glycobiology" ]
7,013,578
https://en.wikipedia.org/wiki/Clothing%20sizes
Clothing sizes are the sizes with which garments sold off-the-shelf are labeled. Sizing systems vary based on the country and the type of garment, such as dresses, tops, skirts, and trousers. There are three approaches: Body dimensions: The label states the range of body measurements for which the product was designed. (For example: bike helmet label stating "head girth: 56–60 cm".) Product dimensions: The label states characteristic dimensions of the product. (For example: jeans label stating inner leg length of the jeans in centimetres or inches (not inner leg measurement of the intended wearer).) Ad hoc sizes: The label states a size number or code with no obvious relationship to any measurement. (For example: Size 12, XL.) Children's clothes sizes are sometimes described by the age of the child, or, for infants, the weight. Traditionally, clothes have been labelled using many different ad hoc size systems, which has resulted in sizing methods that vary between manufacturers and between countries; sizes have also drifted over time with changing demographics and increasing rates of obesity, a phenomenon known as vanity sizing. This results in country-specific and vendor-specific labels incurring additional costs, and can make internet or mail order difficult. Some new standards for clothing sizes being developed are therefore based on body dimensions, such as the EN 13402 "Size designation of clothes". History of standard clothing sizes Before the invention of clothing sizes in the early 1800s, all clothing was made to fit individuals by either tailors or makers of clothing in their homes. Then garment makers noticed that the range of human body dimensions was relatively small (for their demographic). Because of the drape and ease of the fabric, not all measurements are required to obtain a well-fitting garment in most styles. Sizes were based on: Horizontal torso measurements, which include the neck circumference, the shoulder width, the bustline measurements – over-bust circumference, the full bust circumference, the bust-point separation, and the under-bust (rib-cage) circumference – the natural waist circumference, the upper hip circumference and the lower hip circumference. Vertical torso measurements, which include the back (neck-waist) length, the shoulder-waist length (not the same as the back length, due to the slope of the shoulder), the bust-shoulder length, the bust-waist length, and the two hip-waist lengths. Sleeve measurements, which include the under-arm and over-arm lengths, the fore-arm length, the wrist circumference and the biceps circumference. Pit-to-pit measurement is not a tailoring measurement but a finished-garment measure, used in the second-hand internet marketplace: generally the straight-line measurement across the garment, laid flat, at the bottom of the armpits. 
Standards International standards There are several ISO standards for size designation of clothes, but most of them are being revised and replaced by one of the parts of ISO 8559 which closely resembles European Standard EN 13402: ISO 3635:1981, Size designation of clothes: Definitions and body measurement procedure (withdrawn, replaced by ISO 8559-1) ISO 3636:1977, Size designation of clothes: Men's and boys' outerwear garments (withdrawn, replaced by ISO 8559-2) ISO 3637:1977, Size designation of clothes: Women's and girls' outerwear garments (withdrawn, replaced by ISO 8559-2) ISO 3638:1977, Size designation of clothes: Infants' garments (withdrawn, replaced by ISO 8559-2) ISO 4415:1981, Size designation of clothes: Men's and boys' underwear, nightwear and shirts (withdrawn, replaced by ISO 8559-2) ISO 4416:1981, Size designation of clothes: Women's and girls' underwear, nightwear, foundation garments and shirts (withdrawn, replaced by ISO 8559-2) ISO 4417:1977, Size designation of clothes: Headwear (withdrawn, replaced by ISO 8559-2) ISO 4418:1978, Size designation of clothes: Gloves (withdrawn, replaced by ISO 8559-2) ISO 5971:1981, 2017, Size designation of clothes: Pantyhose ISO 7070:1982, Size designation of clothes - Hosiery ISO 8559:1989, Garment construction and anthropometric surveys: Body dimensions (withdrawn, replaced by ISO 8559-1) ISO 8559-1:2017, Size designation of clothes: Part 1: Anthropometric definitions for body measurement ISO 8559-2:2017, Size designation of clothes: Part 2: Primary and secondary dimension indicators ISO 8559-3:2018, Size designation of clothes: Part 3: Methodology of the creation of the body measurement tables and intervals ISO 8559-4:2023, Size designation of clothes: Part 4: Determination of the coverage ratios of body measurement tables ISO/TR 10652:1991, Standard sizing systems for clothes (withdrawn) Asian standards Chinese standards GB 1335-81 GB/T 1335.1-2008 Size designation of clothes - Men GB/T 1335.2-2008 Size designation of clothes - Women GB/T 1335.3-2008 Size designation of clothes - Children GB/T 2668-2002 Sizes for coats, jackets and trousers GB/T 14304-2002 Sizes for woolen garments Japanese standards JIS L 4001 (1997) Sizing systems for infants' garments JIS L 4002 (1997) Sizing systems for boys' garments JIS L 4003 (1997) Sizing systems for girls' garments JIS L 4004 (1997) Sizing systems for men's garments JIS L 4005 (1997) Sizing systems for women's garments JIS L 4006 (1997) Sizing systems for foundation garments JIS L 4007 (1997) Sizing systems for Hosiery and Pantyhose Korean standards KS K 0050 (2009) Men's wear KS K 0051 (2004) Women's wear KS K 0052 Infants KS K 0059 Headgear KS K 0070 Brassiere KS K 0037 Dress Shirts KS K 0088 Socks Thai standards Wacoal (1981, 1987) Australian standards L9 - Women's clothing - Apparel Manufacturers Association of NSW - 1959-1970 AS1344-1972, 1975, 1997 Size coding scheme for women's clothing AS1182 - 1980 - Size coding scheme for infants and children's clothing European standards The European Standards Organisation (CEN) produced a series of standards, prefixed with EN 13402: EN 13402-1: Terms, definitions and body measurement procedure (2001, withdrawn and replaced by ISO 8559-1:2020) EN 13402-2: Primary and secondary dimensions (2002, withdrawn and replaced by ISO 8559-2:2020) EN 13402-3: Size designation of clothes. 
Body measurements and intervals (2004, 2007, 2014, 2017) EN 13402-4: Coding system (2006) These are intended to replace the existing national standards of the 33 member states. It is currently in common use for children's clothing, but not yet for adults. The third standard EN 13402-3 seeks to address the problem of irregular or vanity sizing through offering an SI-unit-based labelling system, which will also pictographically describe the dimensions a garment is designed to fit, per the ISO 3635 standard. German standards DOB-Verband (1983) French standards AFNOR NF G 03-001 (1977) - Human body - Vocabulary - Pictogram; AFNOR EXP G 03-002 (1977) - Women Measures AFNOR EXP G 03-003 (1977) - Men Measures AFNOR EXP G 03-006 (1978) - Measures of babies and young children AFNOR EXP G 03-007 (1977) - Size designation of clothes for men, women and children AFNOR NF G 03-008 (1984) - Tights - Sizes - Designation - Marking Russian standards GOST R 53230-2008 (ISO 4415-1981) Size designation of clothes. Men's and boys' underwear, nightwear and shirts British standards BS 3666:1982 Specification for size designation of women's wear BS 6185:1982 Specification for size designation of men's wear BS 3666:1982, the standard for women's clothing, is rarely followed by manufacturers as it defines sizes in terms of hip and bust measurements only within a limited range. This has resulted in variations between manufacturers and a tendency towards vanity sizing. Yugoslavian standards Slovenia, Croatia, Bosnia and Herzegovina, North Macedonia and Serbia still use the JUS (F.G0.001 1979, F.G0.002 1979, F.G0.003 1979) standards developed in the former Yugoslavia. In addition to typical girth measurements, clothing is also marked to identify which of five height bands (X-Short, Short, Medium, Tall, X-Tall) and which of three body types (Slim, Normal, or Full) it is designed to fit. American standards US standards CS-151-50 - Infants, Babies, Toddlers and Children's clothing CS 215-58 - Women's Clothing (1958) PS 36-70 - Boys Clothing (1971) PS 42-70 – Women's Clothing (1971) PS 45-71 - Young Men's clothing PS 54-72 - Girls Clothing ASTM D5585-95 (2001) ASTM D6829-02 (2008) ASTM D5585-11 (2011) (withdrawn, 2020) ASTM D6240-98 ASTM D6960-04 – Women's Plus sizes (2004) There is no mandatory clothing size or labeling standard in the US, though a series of voluntary standards have been in place since the 1930s. The US government, however, did attempt to establish a system for women's clothing in 1958 when the National Bureau of Standards published Body Measurements for the Sizing of Women's Patterns and Apparel. The guidelines were made a commercial standard and were even updated in 1970. But the guide was eventually downgraded to a voluntary standard until it was abolished altogether in 1983. The private organization ASTM International started to release its own recommended size charts in the 1990s. Since then, the common US misses sizes have not had stable dimensions. Clothing brands and manufacturers size their products according to their preferences. For example, two size 10 dresses from different companies, or even from the same company, may have grossly different dimensions; and both are almost certainly larger than the size 10 dimensions described in the US standard. Vanity sizing may be partly responsible for this deviation (which began in earnest in the 1980s). 
Women Comparison table Inch-based women's sizes (US/UK) British (UK) and American (US) standard dress sizes, s, are calculated by bust circumference, b, measured in inches, as follows:
US: s = b − 28
UK: s = b − 24
Korean women's sizes Japanese women's sizes Note: a Japanese dress marked 13-Y-PP or 13-Y-P would be designed for someone with an 89 cm bust and 89 cm hips, while a dress marked 13-B-T would be targeted at a taller individual with 105 cm hips, but the same 89 cm bust. The B fitting adds 12 cm and the T height modifier 4 cm to the base hip measurement: 89 + 16 = 105 cm. Additionally, there is a set of age-based waist adjustments, such that a dress marketed at someone in their 60s may allow for a waist 9 cm larger than a dress, of the same size, marketed at someone in their 20s. The age-based adjustments allow for up to a 3 cm increase in girth per decade of life. Continental European women's sizes Italian (IT), French (FR) and German (DE) standard dress sizes, s, are calculated by bust circumference, b, and body height, h, both measured in centimetres, as follows:
IT: s = b/2
FR: s = b/2 − 4 = (b − 8)/2
DE: s = b/2 − 6 = (b − 12)/2
short, petite, h < 164: s′ = s/2 = b/4 − 3
tall, h > 170: s′ = 2 × s = b − 12
French sizes are also used by Belgian manufacturers and retailers, while German sizes are also used by Austrian, Dutch and Scandinavian ones. Men Comparison tables Continental European men's sizes French (FR) and German (DE) standard suit sizes, s, are calculated by chest circumference, b, measured in centimetres, as follows:
FR: s = b/2 + 0.5 = (b + 1)/2
DE: s = b/2 − 0.5 = (b − 1)/2
short, stocky (kurz, untersetzt): s′ = s/2 = b/4 − 0.25 = (b − 1)/4
portly (Bauchgröße): s′ = s + 1 = b/2 + 0.5 = (b + 1)/2
tall, lean (lang, schlank): s′ = 2 × (s − 1) = b − 3
French sizes are also used by Belgian manufacturers and retailers, while German sizes are also used by Austrian, Dutch and Scandinavian ones. Size dividers Size dividers are used by clothing stores to help customers find the right size. Like index cards, they are found on racks between sizes. There are three basic types: the rectangular, round and the king size. Among the stores that use them are Marshalls and TJ Maxx. Inclusive sizing Inclusive sizing is the practice of having clothing ranges which do not make a distinction between "regular sizes" and "plus sizes". See also Anthropometry Bra size Bust/waist/hip measurements Female body shape Petite size Shoe size Size zero References External links Retail processes and techniques 19th-century fashion 20th-century fashion 21st-century fashion Clothing controversies Dresses Fashion design
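To make the dress-size formulas above concrete, here is a minimal illustrative sketch in Python (an editorial example; the function names are invented, and real garments vary by manufacturer and with vanity sizing): it applies the inch-based UK/US formulas and the centimetre-based continental formulas given above.

```python
# Illustrative dress-size calculator based on the formulas given above.
# A sketch for exposition only; not part of any sizing standard.

def uk_us_dress_sizes(bust_inches):
    """Inch-based sizes: US s = b - 28, UK s = b - 24."""
    return {"US": bust_inches - 28, "UK": bust_inches - 24}

def continental_dress_sizes(bust_cm):
    """Centimetre-based sizes: IT s = b/2, FR s = b/2 - 4, DE s = b/2 - 6."""
    return {
        "IT": bust_cm / 2,
        "FR": bust_cm / 2 - 4,
        "DE": bust_cm / 2 - 6,
    }

# Example: a 36-inch bust (roughly 92 cm).
print(uk_us_dress_sizes(36))        # {'US': 8, 'UK': 12}
print(continental_dress_sizes(92))  # {'IT': 46.0, 'FR': 42.0, 'DE': 40.0}
```

Following the German convention above, a petite variant halves the base size (s′ = s/2) and a tall variant doubles it (s′ = 2 × s).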
Clothing sizes
[ "Physics", "Mathematics", "Engineering" ]
2,884
[ "Sizes in clothing", "Fashion design", "Physical quantities", "Quantity", "Size", "Design" ]
7,013,607
https://en.wikipedia.org/wiki/Glycoside%20hydrolase
In biochemistry, glycoside hydrolases (also called glycosidases or glycosyl hydrolases) are a class of enzymes which catalyze the hydrolysis of glycosidic bonds in complex sugars. They are extremely common enzymes, with roles in nature including degradation of biomass such as cellulose (cellulase), hemicellulose, and starch (amylase), in anti-bacterial defense strategies (e.g., lysozyme), in pathogenesis mechanisms (e.g., viral neuraminidases) and in normal cellular function (e.g., trimming mannosidases involved in N-linked glycoprotein biosynthesis). Together with glycosyltransferases, glycosidases form the major catalytic machinery for the synthesis and breakage of glycosidic bonds. Occurrence and importance Glycoside hydrolases are found in essentially all domains of life. In prokaryotes, they are found both as intracellular and extracellular enzymes that are largely involved in nutrient acquisition. One of the important occurrences of glycoside hydrolases in bacteria is the enzyme beta-galactosidase (LacZ), which is involved in regulation of expression of the lac operon in E. coli. In higher organisms glycoside hydrolases are found within the endoplasmic reticulum and Golgi apparatus where they are involved in processing of N-linked glycoproteins, and in the lysosome as enzymes involved in the degradation of carbohydrate structures. Deficiency in specific lysosomal glycoside hydrolases can lead to a range of lysosomal storage disorders that result in developmental problems or death. Glycoside hydrolases are found in the intestinal tract and in saliva where they degrade complex carbohydrates such as lactose, starch, sucrose and trehalose. In the gut they are found as glycosylphosphatidylinositol-anchored enzymes on epithelial cells. The enzyme lactase is required for degradation of the milk sugar lactose and is present at high levels in infants, but in most populations it decreases after weaning or during childhood, potentially leading to lactose intolerance in adulthood. The enzyme O-GlcNAcase is involved in removal of N-acetylglucosamine groups from serine and threonine residues in the cytoplasm and nucleus of the cell. The glycoside hydrolases are involved in the biosynthesis and degradation of glycogen in the body. Classification Glycoside hydrolases are classified into EC 3.2.1 as enzymes catalyzing the hydrolysis of O- or S-glycosides. Glycoside hydrolases can also be classified according to the stereochemical outcome of the hydrolysis reaction: thus they can be classified as either retaining or inverting enzymes. Glycoside hydrolases can also be classified as exo or endo acting, dependent upon whether they act at the (usually non-reducing) end or in the middle, respectively, of an oligo/polysaccharide chain. Glycoside hydrolases may also be classified by sequence or structure-based methods. Sequence-based classification Sequence-based classifications are one of the most powerful predictive methods for suggesting function for newly sequenced enzymes for which function has not been biochemically demonstrated. A classification system for glycosyl hydrolases, based on sequence similarity, has led to the definition of more than 100 different families. This classification is available on the CAZy (CArbohydrate-Active EnZymes) web site. The database provides a series of regularly updated sequence-based classifications that allow reliable prediction of mechanism (retaining/inverting), active site residues and possible substrates. 
The online database is supported by CAZypedia, an online encyclopedia of carbohydrate active enzymes. Based on three-dimensional structural similarities, the sequence-based families have been classified into 'clans' of related structure. Recent progress in glycosidase sequence analysis and 3D structure comparison has allowed the proposal of an extended hierarchical classification of the glycoside hydrolases. Mechanisms Inverting glycoside hydrolases Inverting enzymes utilize two enzymic residues, typically carboxylate residues, that act as acid and base respectively, as shown below for a β-glucosidase. The product of the reaction initially has an axial orientation at C1, although spontaneous changes of conformation can subsequently occur. Retaining glycoside hydrolases Retaining glycosidases operate through a two-step mechanism, with each step resulting in inversion, for a net retention of stereochemistry. Again, two residues are involved, which are usually enzyme-borne carboxylates. One acts as a nucleophile and the other as an acid/base. In the first step, the nucleophile attacks the anomeric centre, resulting in the formation of a glycosyl enzyme intermediate, with acidic assistance provided by the acidic carboxylate. In the second step, the now deprotonated acidic carboxylate acts as a base and assists a nucleophilic water to hydrolyze the glycosyl enzyme intermediate, giving the hydrolyzed product. The mechanism is illustrated below for hen egg white lysozyme. An alternative mechanism for hydrolysis with retention of stereochemistry can occur that proceeds through a nucleophilic residue that is bound to the substrate, rather than being attached to the enzyme. Such mechanisms are common for certain N-acetylhexosaminidases, which have an acetamido group capable of neighboring group participation to form an intermediate oxazoline or oxazolinium ion. This mechanism proceeds in two steps through individual inversions to lead to a net retention of configuration. A variant neighboring group participation mechanism has been described for endo-α-mannanases that involves 2-hydroxyl group participation to form an intermediate epoxide. Hydrolysis of the epoxide leads to a net retention of configuration. Nomenclature and examples Glycoside hydrolases are typically named after the substrate that they act upon. Thus glucosidases catalyze the hydrolysis of glucosides and xylanases catalyze the cleavage of the xylose based homopolymer xylan. Other examples include lactase, amylase, chitinase, sucrase, maltase, neuraminidase, invertase, hyaluronidase and lysozyme. Uses Glycoside hydrolases are predicted to gain increasing roles as catalysts in biorefining applications in the future bioeconomy. These enzymes have a variety of uses including degradation of plant materials (e.g., cellulases for degrading cellulose to glucose, which can be used for ethanol production), in the food industry (invertase for manufacture of invert sugar, amylase for production of maltodextrins), and in the paper and pulp industry (xylanases for removing hemicelluloses from paper pulp). Cellulases are added to detergents for the washing of cotton fabrics and assist in the maintenance of colours through removing microfibres that are raised from the surface of threads during wear. 
In organic chemistry, glycoside hydrolases can be used as synthetic catalysts to form glycosidic bonds through either reverse hydrolysis (thermodynamic approach), where the equilibrium position is reversed; or by transglycosylation (kinetic approach), whereby retaining glycoside hydrolases can catalyze the transfer of a glycosyl moiety from an activated glycoside to an acceptor alcohol to afford a new glycoside. Mutant glycoside hydrolases termed glycosynthases have been developed that can achieve the synthesis of glycosides in high yield from activated glycosyl donors such as glycosyl fluorides. Glycosynthases are typically formed from retaining glycoside hydrolases by site-directed mutagenesis of the enzymic nucleophile to some other less nucleophilic group, such as alanine or glycine. Another group of mutant glycoside hydrolases termed thioglycoligases can be formed by site-directed mutagenesis of the acid-base residue of a retaining glycoside hydrolase. Thioglycoligases catalyze the condensation of activated glycosides and various thiol-containing acceptors. Various glycoside hydrolases have shown efficacy in degrading matrix polysaccharides within the extracellular polymeric substance (EPS) of microbial biofilms. Medically, biofilms afford infectious microorganisms a variety of advantages over their planktonic, free-floating counterparts, including greatly increased tolerances to antimicrobial agents and the host immune system. Thus, degrading the biofilm may increase antibiotic efficacy, and potentiate host immune function and healing ability. For example, a combination of alpha-amylase and cellulase was shown to degrade polymicrobial bacterial biofilms from both in vitro and in vivo sources, and increase antibiotic effectiveness against them. Inhibitors Many compounds are known that can act to inhibit the action of a glycoside hydrolase. Nitrogen-containing, 'sugar-shaped' heterocycles have been found in nature, including deoxynojirimycin, swainsonine, australine and castanospermine. From these natural templates many other inhibitors have been developed, including isofagomine and deoxygalactonojirimycin, and various unsaturated compounds such as PUGNAc. Inhibitors that are in clinical use include the anti-diabetic drugs acarbose and miglitol, and the antiviral drugs oseltamivir and zanamivir. Some proteins have been found to act as glycoside hydrolase inhibitors. See also Mucopolysaccharidoses Glucosidase Lysozyme Glycosyltransferase List of glycoside hydrolase families Clans of glycoside hydrolases Hierarchical classification of the TIM-barrel type glycoside hydrolases References External links Cazypedia, an online encyclopedia of the "CAZymes," the carbohydrate-active enzymes and binding proteins involved in the synthesis and degradation of complex carbohydrates Carbohydrate-Active enZYmes Database ExPASy classification Carbohydrates Carbohydrate chemistry EC 3.2.1 Glycobiology
Glycoside hydrolase
[ "Chemistry", "Biology" ]
2,299
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Biochemistry", "Glycobiology" ]
7,014,404
https://en.wikipedia.org/wiki/Transport%20Accident%20Investigation%20Commission
The Transport Accident Investigation Commission (TAIC) is a transport safety body of New Zealand. It has its headquarters on the 7th floor of 10 Brandon Street in Wellington. The agency investigates aviation, marine, and rail accidents and incidents occurring in New Zealand, with a view to avoiding similar occurrences in the future, rather than ascribing blame to any person. It does not investigate road accidents except where they affect the safety of aviation, marine, or rail (e.g. level crossing or car ferry accidents). It was established by an act of the Parliament of New Zealand (the Transport Accident Investigation Commission Act 1990) on 1 September 1990. TAIC's legislation, functions and powers were modelled on, and share some similarities with, those of the National Transportation Safety Board (USA) and the Transportation Safety Board (Canada). It is a standing Commission of Inquiry and an independent Crown entity, and reports to the minister of transport. The TAIC initially investigated aviation accidents only; its jurisdiction was extended in 1992 to cover railway accidents and later in 1995 to cover marine accidents. In May 2006, the Aviation Industry Association claimed that too often the organisation did not find the true cause of accidents, after TAIC released the results of a second investigation into a fatal helicopter crash at Taumarunui in 2001. The commission rejected the criticism, CEO Lois Hutchinson citing the results of a March 2003 audit by the International Civil Aviation Organization. Ron Chippindale, who investigated the Mount Erebus Disaster, was Chief Inspector of Accidents from 1990 to 31 October 1998. He was succeeded as chief investigator of accidents by Capt. Tim Burfoot, then by John Mockett in 2002, Tim Burfoot again in 2007, Aaron Holman in 2019, Harald Hendel in 2020, and Naveen Kozhuppakalam in 2022. Peer agencies in other countries Australian Transport Safety Bureau Aviation and Railway Accident Investigation Board – South Korea Dutch Safety Board – Netherlands Taiwan Transportation Safety Board – Taiwan Japan Transport Safety Board National Transportation Safety Board – United States National Transportation Safety Committee – Indonesia Safety Investigation Authority – Finland Swedish Accident Investigation Authority – Sweden Swiss Transportation Safety Investigation Board – Switzerland Transportation Safety Board of Canada Transport Safety Investigation Bureau – Singapore References External links New Zealand Rail accident investigators New Zealand independent crown entities 1990 establishments in New Zealand Transport organisations based in New Zealand
Transport Accident Investigation Commission
[ "Technology" ]
466
[ "Railway accidents and incidents", "Rail accident investigators" ]
7,014,430
https://en.wikipedia.org/wiki/N-Octyl%20bicycloheptene%20dicarboximide
N-Octyl bicycloheptene dicarboximide (MGK 264) is an ingredient in some common pesticides. It has no intrinsic pesticidal qualities itself, but rather is a synergist enhancing the potency of pyrethroid ingredients. It is used in a variety of household and veterinary products. MGK 264 is starting to appear on pesticide monitoring lists in states legalizing and mandating pesticide monitoring in medical and recreational cannabis. This is most likely due to the very large amounts of pyrethroids used on cannabis crops and the likelihood that MGK 264 is used alongside them to maximize yield. References External links PDF of EPA document on this chemical N-Octyl Bicycloheptene Dicarboximide (MGK-264) Reregistration, Environmental Protection Agency, 2006 Insecticides Imides Cycloalkenes
N-Octyl bicycloheptene dicarboximide
[ "Chemistry" ]
183
[ "Imides", "Functional groups" ]
7,014,943
https://en.wikipedia.org/wiki/Virtual%20queue
A virtual queue is a concept used in inbound call centers and other businesses to improve wait times for users. Call centers use an Automatic Call Distributor (ACD) to distribute incoming calls to specific resources (agents) in the center. ACDs hold queued calls in First In, First Out order until agents become available. Virtual queue systems allow callers to receive callbacks instead of waiting in an ACD queue. This solution is analogous to the “fast lane” option used at amusement parks, such as Disney's FastPass, in which a computerized system allows park visitors to secure their place in a “virtual queue” rather than waiting in a physical queue. In brick-and-mortar retail and the business world, virtual queuing systems for large organizations, similar to the FastPass and Six Flags' Flash Pass, have been in use since 1999 and 2001 respectively. For small businesses, virtual queue management solutions come in two types: (a) SMS text notification services and (b) apps on smartphones and tablet devices, with in-app notification and remote queue status views. The online queue, often referred to as a virtual waiting room, is the brainchild of UK inventor and entrepreneur Matt King, whose 2005 patented process EP1751954B1 has been described as the first online solution to prevent visitor web surges and crashes; an earlier online queue for the same purpose had, however, been invented and patented by Masanori Kubo in 2000. The term virtual waiting room was coined by Akamai Technologies in 2004 for its edge-computing web-based service to prevent visitor web surges and crashes, which, used among others by leading online ticketing agencies, proved valuable to MLB.com's (Major League Baseball) successful ticket sales. Overview While there are several different varieties of virtual queuing systems, a standard First In, First Out system that maintains the customer's place in line is set to monitor queue conditions until the Estimated Wait Time (EWT) exceeds a predetermined threshold. When the threshold is exceeded, the system intercepts incoming calls before they enter the queue. It informs customers of their EWT and offers the option of receiving a callback in the same amount of time as if they waited on hold. If customers choose to remain in the queue, their calls are routed directly to the queue. Customers who opt for a callback are prompted to enter their phone number and then hang up the phone. A “virtual placeholder” maintains the customers' position in the queue while the ACD queue is worked off. The virtual queuing system monitors the rate at which calls in queue are worked off and launches an outbound call to the customer moments before the virtual placeholder is due to reach the top of the queue. When the callback is answered by the customer, the system asks for confirmation that the correct person is on the line and ready to speak with an agent. Upon receiving confirmation, the system routes the call to the next available agent resource, who handles it as a normal inbound call. Call centers do not measure this "virtual queue" time as "queue time" because the caller is free to pursue other activities instead of listening to hold music and announcements. The voice circuit is released between the ACD and the telecommunications network, so the call does not accrue any queue time or telecommunications charges. 
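To illustrate the callback mechanics described above, here is a minimal sketch in Python (an editorial example; the class, method names and thresholds are invented for illustration and do not correspond to any vendor's ACD API): it keeps a first-in, first-out queue, offers a callback once the estimated wait time exceeds a threshold, and preserves each caller's place with a virtual placeholder.

```python
# Minimal sketch of a virtual queue with callbacks (illustrative only).
from collections import deque

AVG_HANDLE_TIME = 180   # assumed average seconds an agent spends per call
EWT_THRESHOLD = 300     # offer a callback once estimated wait exceeds this

class VirtualQueue:
    def __init__(self):
        self.queue = deque()  # FIFO holding live calls and virtual placeholders

    def estimated_wait(self):
        # Naive Estimated Wait Time: queue depth times average handle time.
        return len(self.queue) * AVG_HANDLE_TIME

    def arrive(self, caller_id, phone_number):
        if self.estimated_wait() > EWT_THRESHOLD:
            # Caller opts for a callback and hangs up; a virtual
            # placeholder keeps their position in the FIFO queue.
            self.queue.append(("callback", caller_id, phone_number))
        else:
            self.queue.append(("live", caller_id, phone_number))

    def next_for_agent(self):
        # Calls are worked off in FIFO order; when a placeholder reaches
        # the front, the system launches an outbound call to the customer.
        kind, caller_id, phone_number = self.queue.popleft()
        if kind == "callback":
            print(f"Dialing {phone_number} to return {caller_id}'s call")
        return caller_id

q = VirtualQueue()
for i in range(4):
    q.arrive(f"caller-{i}", f"555-010{i}")
while q.queue:
    q.next_for_agent()
```

In a real ACD the estimated wait would be derived from live agent staffing and call-handling statistics rather than a fixed average.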
Universal Queue Universal queue (UQ) is a concept in contact center design whereby multiple communications channels (such as telephone, fax and email) are integrated into a single 'universal queue' to standardize processing and handling, enabling coherent customer relations management (CRM). UQ is generally used for standardised routing, recording, handling, reporting, and management of all communications in a contact center (or across an entire organisation). Although UQ was discussed at least as far back as 2004, difficulties in implementing this system prevented its widespread uptake. As of 2008, there is little data available online regarding existing UQ implementations. Applications Some utility companies (electric, natural gas, telecommunications, and cable television) use virtual queuing to manage seasonal peaks in call center traffic, as well as unexpected traffic spikes due to weather or service interruptions. Call centers that process inbound telesales calls use virtual queuing to reduce the number of abandoned calls. Customer care organizations use virtual queuing to enhance service levels. Insurance claims processing centers use virtual queuing to manage unforeseen peaks due to natural disasters. Various amusement parks around the world have employed a similar virtual queue system for guests wishing to queue for their amusement rides. One of the most notable examples, Disney's Fastpass, issues guests a ticket which details a time for the guest to return and board the attraction. More recent virtual queue systems have utilized technology such as the Q-Bot to reserve a place in the queue for guests. Implementations of such a system include the Q-Bot at Legoland parks, the Flash Pass at Six Flags parks and the Q4U at Dreamworld. Virtual queueing apps allow small businesses to operate their virtual queue from an application. Their customers take a virtual queue number and wait remotely instead of waiting on-premises. During the Covid-19 pandemic, virtual queuing became more popular in order to support businesses while store capacity was limited. Companies such as Qudini have provided customers a way to join a queue by scanning a QR code, granting them permission to wait at a safe distance from other customers. Covid-19 also encouraged hospitals to implement virtual queue systems that maintain social distancing. References Dan Merriman, The Total Economic Impact Of Virtual Hold’s Virtual Queuing Solutions, Forrester Research, 2006 David Maister, The Psychology Of Waiting In Lines, 1985 Mukta Kampllikar, Losing Wait, TMTC Journal of Management, 2005 Greg Levin, The Viability of Virtual Queuing Tools, CallCenter Magazine, 2006 Eric Camulli, How to Optimize Skills-Based Routing Using a Virtual Queue, Connections Magazine, Jan/Feb 2007 Jon Arnold, Virtual Queuing – the End of Music on Hold?, Focus, Dec 2010 Shai Berger, Virtual Queuing Reaches A Turning Point, April 2012 Padraig McTiernan, The Business Case for Virtual Queueing, June 2012 Tom Oristian, Virtual Queuing Simulator, Aug 2012 Computer telephony integration Telemarketing
Virtual queue
[ "Technology" ]
1,295
[ "Information technology", "Computer telephony integration" ]
7,015,629
https://en.wikipedia.org/wiki/John%20Wyatt%20%28inventor%29
John Wyatt (April 1700 – 29 November 1766), an English inventor, was born near Lichfield and was related to Sarah Ford, Doctor Johnson's mother. A carpenter by trade, he began work in Birmingham on the development of a spinning machine. In 1733 he was working in the mill at New Forge (Powells) Pool, Sutton Coldfield, attempting to spin the first cotton thread ever spun by mechanical means. His principal partner was Lewis Paul, and together they developed the concept of elongating cotton threads by running them through rollers and then stretching them through a faster second set of rollers. They produced the first ever roller spinning machine, but it was not very successful. Paul took out patents in 1738 and in 1758, the year before he died. In 1757 the Rev. John Dyer of Northampton recognised the importance of the Paul and Wyatt cotton spinning machine in his poem The Fleece (Dyer, p. 99): A circular machine, of new design In conic shape: it draws and spins a thread Without the tedious toil of needless hands. A wheel invisible, beneath the floor, To ev'ry member of th' harmonious frame, Gives necessary motion. One intent O'erlooks the work; the carded wool, he says, So smoothly lapped around those cylinders, Which gently turning, yield it to yon cirque Of upright spindles, which with rapid whirl Spin out in long extent an even twine. Wyatt went to work for Matthew Boulton in his foundry in Birmingham. There he invented and produced a weighing machine and experimented with donkey power to run his spinning machine. He was brought down by his debts and was made bankrupt. Despite their failures, their ideas laid the foundations for others who followed, particularly Sir Richard Arkwright. In Das Kapital, Karl Marx wrote: Als John Wyatt 1735 seine Spinnmaschine und mit ihr die industrielle Revolution des 18. Jahrhunderts ankündigte, erwähnte er mit keinem Wort, daß statt eines Menschen ein Esel die Maschine treibe, und dennoch fiel diese Rolle dem Esel zu. Eine Maschine, "um ohne Finger zu spinnen", lautete sein Programm.(89) (When John Wyatt announced his spinning machine in 1735, and with it the industrial revolution of the 18th century, he did not mention with a single word that a donkey rather than a man would drive the machine, and yet this role fell to the donkey. A machine "to spin without fingers" was his programme.) See also Cotton-spinning machinery#History References Sources English inventors Textile engineers 1700 births 1766 deaths Industrial Revolution in England People of the Industrial Revolution People from Lichfield Spinning 18th-century English engineers
John Wyatt (inventor)
[ "Engineering" ]
564
[ "Textile engineers", "Textile engineering" ]
7,015,763
https://en.wikipedia.org/wiki/Hardy%27s%20inequality
Hardy's inequality is an inequality in mathematics, named after G. H. Hardy. Its discrete version states that if $a_1, a_2, a_3, \dots$ is a sequence of non-negative real numbers, then for every real number $p > 1$ one has
$$\sum_{n=1}^\infty \left(\frac{a_1 + a_2 + \cdots + a_n}{n}\right)^p \le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^\infty a_n^p.$$
If the right-hand side is finite, equality holds if and only if $a_n = 0$ for all $n$. An integral version of Hardy's inequality states the following: if $f$ is a measurable function with non-negative values, then
$$\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_0^\infty f(x)^p\,dx.$$
If the right-hand side is finite, equality holds if and only if $f(x) = 0$ almost everywhere. Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. The original formulation was in an integral form slightly different from the above.
Statements
General discrete Hardy inequality The general weighted one dimensional version reads as follows: if $p > 1$, $a_n \ge 0$ and $\lambda_n > 0$ with $\Lambda_n = \lambda_1 + \cdots + \lambda_n$, then
$$\sum_{n=1}^\infty \lambda_n \left(\frac{\lambda_1 a_1 + \cdots + \lambda_n a_n}{\Lambda_n}\right)^p \le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^\infty \lambda_n a_n^p.$$
General one-dimensional integral Hardy inequality The general weighted one dimensional version reads as follows:
If $p \ge 1$ and $\alpha + \tfrac{1}{p} < 1$, then
$$\int_0^\infty \left(y^{\alpha - 1}\int_0^y x^{-\alpha} f(x)\,dx\right)^p dy \le \frac{1}{\left(1 - \alpha - \tfrac{1}{p}\right)^p} \int_0^\infty f(x)^p\,dx.$$
If $p \ge 1$ and $\alpha + \tfrac{1}{p} > 1$, then
$$\int_0^\infty \left(y^{\alpha - 1}\int_y^\infty x^{-\alpha} f(x)\,dx\right)^p dy \le \frac{1}{\left(\alpha + \tfrac{1}{p} - 1\right)^p} \int_0^\infty f(x)^p\,dx.$$
Multidimensional Hardy inequalities with gradient
Multidimensional Hardy inequality around a point In the multidimensional case, Hardy's inequality can be extended to $L^p$-spaces, taking the form
$$\left\|\frac{u}{|x|}\right\|_{L^p(\mathbb{R}^n)} \le \frac{p}{n-p}\,\|\nabla u\|_{L^p(\mathbb{R}^n)}, \qquad u \in C_c^\infty(\mathbb{R}^n),$$
where $1 \le p < n$, and where the constant $\tfrac{p}{n-p}$ is known to be sharp; by density it extends then to the Sobolev space $W^{1,p}(\mathbb{R}^n)$. Similarly, if $p > n$, then one has for every $u \in C_c^\infty(\mathbb{R}^n)$
$$\left(1 - \frac{n}{p}\right)^p \int_{\mathbb{R}^n} \frac{|u(x) - u(0)|^p}{|x|^p}\,dx \le \int_{\mathbb{R}^n} |\nabla u(x)|^p\,dx.$$
Multidimensional Hardy inequality near the boundary If $\Omega \subsetneq \mathbb{R}^n$ is a nonempty convex open set, then for every $p > 1$ and every $u \in W^{1,p}_0(\Omega)$,
$$\int_\Omega \frac{|u(x)|^p}{\operatorname{dist}(x, \partial\Omega)^p}\,dx \le \left(\frac{p}{p-1}\right)^p \int_\Omega |\nabla u(x)|^p\,dx,$$
and the constant cannot be improved.
Fractional Hardy inequality If $1 \le p < \infty$ and $0 < \lambda < \infty$, $\lambda \ne 1$, there exists a constant $C > 0$ such that for every $f : (0, \infty) \to \mathbb{R}$ satisfying $\int_0^\infty |f(x)|^p x^{-\lambda}\,dx < \infty$, one has
$$\int_0^\infty \frac{|f(x)|^p}{x^\lambda}\,dx \le C \int_0^\infty \int_0^\infty \frac{|f(x) - f(y)|^p}{|x - y|^{1+\lambda}}\,dx\,dy.$$
Proof of the inequality
Integral version (integration by parts and Hölder) Hardy's original proof begins with an integration by parts: writing $F(x) = \int_0^x f(t)\,dt$, one gets
$$\int_0^\infty \left(\frac{F(x)}{x}\right)^p dx = \frac{p}{p-1}\int_0^\infty \left(\frac{F(x)}{x}\right)^{p-1} f(x)\,dx.$$
Then, by Hölder's inequality,
$$\int_0^\infty \left(\frac{F(x)}{x}\right)^{p-1} f(x)\,dx \le \left(\int_0^\infty \left(\frac{F(x)}{x}\right)^p dx\right)^{\frac{p-1}{p}} \left(\int_0^\infty f(x)^p\,dx\right)^{\frac{1}{p}},$$
and the conclusion follows.
Integral version (scaling and Minkowski) A change of variables $t = sx$ gives
$$\left(\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx\right)^{\frac{1}{p}} = \left(\int_0^\infty \left(\int_0^1 f(sx)\,ds\right)^p dx\right)^{\frac{1}{p}},$$
which is less than or equal to
$$\int_0^1 \left(\int_0^\infty f(sx)^p\,dx\right)^{\frac{1}{p}} ds$$
by Minkowski's integral inequality. Finally, by another change of variables, the last expression equals
$$\int_0^1 s^{-\frac{1}{p}}\,ds \left(\int_0^\infty f(x)^p\,dx\right)^{\frac{1}{p}} = \frac{p}{p-1}\left(\int_0^\infty f(x)^p\,dx\right)^{\frac{1}{p}}.$$
Discrete version: from the continuous version Assuming the right-hand side to be finite, we must have $a_n \to 0$ as $n \to \infty$. Hence, for any positive integer $j$, there are only finitely many terms bigger than $1/j$. This allows us to construct a decreasing sequence $b_1 \ge b_2 \ge \cdots$ containing the same positive terms as the original sequence (but possibly no zero terms). Since $a_1 + a_2 + \cdots + a_n \le b_1 + b_2 + \cdots + b_n$ for every $n$, it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining $f(x) = b_n$ if $n - 1 < x \le n$ and $f(x) = 0$ otherwise. Indeed, one has
$$\int_0^\infty f(x)^p\,dx = \sum_{n=1}^\infty b_n^p$$
and, for $n - 1 < x \le n$, there holds
$$\frac{1}{x}\int_0^x f(t)\,dt \ge \frac{b_1 + \cdots + b_n}{n}$$
(the last inequality is equivalent to $(n-1)\,b_n \le b_1 + \cdots + b_{n-1}$, which is true as the new sequence is decreasing) and thus
$$\sum_{n=1}^\infty \left(\frac{b_1 + \cdots + b_n}{n}\right)^p \le \int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx.$$
Discrete version: Direct proof Let $p > 1$ and let $a_1, \dots, a_N$ be positive real numbers. Set $\alpha_n = \tfrac{1}{n}(a_1 + \cdots + a_n)$. First we prove the inequality
$$\sum_{n=1}^N \alpha_n^p \le \frac{p}{p-1}\sum_{n=1}^N \alpha_n^{p-1} a_n. \tag{*}$$
Let $\Delta_n$ be the difference between the $n$-th terms in the right-hand side and left-hand side of (*), that is, $\Delta_n := \tfrac{p}{p-1}\alpha_n^{p-1} a_n - \alpha_n^p$. We have:
$$\Delta_n = \frac{p}{p-1}\,\alpha_n^{p-1}\bigl(n\alpha_n - (n-1)\alpha_{n-1}\bigr) - \alpha_n^p$$
or
$$\Delta_n = \left(\frac{np}{p-1} - 1\right)\alpha_n^p - \frac{(n-1)p}{p-1}\,\alpha_n^{p-1}\alpha_{n-1}.$$
According to Young's inequality we have:
$$\alpha_n^{p-1}\alpha_{n-1} \le \frac{\alpha_{n-1}^p}{p} + (p-1)\,\frac{\alpha_n^p}{p},$$
from which it follows that:
$$\Delta_n \ge \frac{n}{p-1}\,\alpha_n^p - \frac{n-1}{p-1}\,\alpha_{n-1}^p.$$
By telescoping we have:
$$\sum_{n=1}^N \Delta_n \ge \frac{N}{p-1}\,\alpha_N^p \ge 0,$$
proving (*). Applying Hölder's inequality to the right-hand side of (*) we have:
$$\sum_{n=1}^N \alpha_n^{p-1} a_n \le \left(\sum_{n=1}^N a_n^p\right)^{\frac{1}{p}}\left(\sum_{n=1}^N \alpha_n^p\right)^{\frac{p-1}{p}},$$
from which we immediately obtain:
$$\sum_{n=1}^N \alpha_n^p \le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^N a_n^p.$$
Letting $N \to \infty$ we obtain Hardy's inequality. See also Carleman's inequality Notes References External links Inequalities Theorems in real analysis
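As a quick numerical illustration of the discrete inequality, here is a short Python sketch (an editorial example, not part of the original article): it compares the two sides of Hardy's inequality for a truncated sample sequence and several exponents p.

```python
# Numerical sanity check of the discrete Hardy inequality:
#   sum_n ((a_1 + ... + a_n) / n)^p  <=  (p/(p-1))^p * sum_n a_n^p
# for a truncated non-negative sequence (illustrative sketch only).
from itertools import accumulate

def hardy_sides(a, p):
    """Return (lhs, rhs) of the discrete Hardy inequality for sequence a."""
    means = [s / n for n, s in enumerate(accumulate(a), start=1)]
    lhs = sum(m ** p for m in means)
    rhs = (p / (p - 1)) ** p * sum(x ** p for x in a)
    return lhs, rhs

# Example: a_n = 1 / n^2, truncated at N = 10000 terms.
a = [1.0 / n ** 2 for n in range(1, 10001)]
for p in (1.5, 2.0, 3.0):
    lhs, rhs = hardy_sides(a, p)
    assert lhs <= rhs
    print(f"p = {p}: LHS = {lhs:.6f} <= RHS = {rhs:.6f}")
```

The inequality also holds for finite sequences, so truncating the series is harmless here.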
Hardy's inequality
[ "Mathematics" ]
655
[ "Theorems in mathematical analysis", "Mathematical theorems", "Theorems in real analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
7,015,856
https://en.wikipedia.org/wiki/World%20Trade%20Center%20controlled%20demolition%20conspiracy%20theories
Some conspiracy theories contend that the collapse of the World Trade Center was caused not solely by the airliner crash damage that occurred as part of the September 11 attacks and the resulting fire damage but also by explosives installed in the buildings in advance. Controlled demolition theories make up a major component of 9/11 conspiracy theories. Early advocates such as physicist Steven E. Jones, architect Richard Gage, software engineer Jim Hoffman, and theologian David Ray Griffin proposed that the aircraft impacts and resulting fires alone could not have weakened the buildings sufficiently to initiate the catastrophic collapse and that the buildings would have neither collapsed completely nor at the speeds they did without additional energy involved to weaken their structures. The National Institute of Standards and Technology (NIST) and the magazine Popular Mechanics examined and rejected these theories. Specialists in structural mechanics and structural engineering accept the model of a fire-induced, gravity-driven collapse of the World Trade Center buildings, an explanation that does not involve the use of explosives. NIST "found no corroborating evidence for alternative hypotheses suggesting that the WTC towers were brought down by controlled demolition using explosives planted prior to Sept. 11, 2001." Professors Zdeněk Bažant of Northwestern University, Thomas Eagar of the Massachusetts Institute of Technology, and James Quintiere of the University of Maryland have also dismissed the controlled-demolition conspiracy theory. In 2006, Jones suggested that thermite or super-thermite may have been used by government insiders with access to such materials and to the buildings themselves to demolish the buildings. In April 2009, Jones, Danish chemist Niels H. Harrit, and seven other authors published a paper in The Open Chemical Physics Journal, causing the editor, Prof. Marie-Paule Pileni, to resign as she accused the publisher of printing it without her knowledge; this article was titled 'Active Thermitic Material Discovered in Dust from the 9/11 World Trade Center Catastrophe', and stated that they had found evidence of nano-thermite in samples of the dust that was produced during the collapse of the World Trade Center towers. NIST responded that there was no "clear chain of custody" to prove that the four samples of dust came from the WTC site. Jones invited NIST to conduct its own studies using its own known "chain of custody" dust, but NIST did not investigate. History The controlled demolition conspiracy theories were first suggested in September 2001. Eric Hufschmid's book, Painful Questions: An Analysis of the September 11th Attack, in which the controlled demolition theory is explicitly advocated, was published in September 2002. David Ray Griffin and Steven E. Jones are the best known advocates of the theory. Griffin's book The New Pearl Harbor, published in 2004, has become a reference work for the 9/11 Truth movement. In the same year, Griffin published the book The 9/11 Commission Report: Omissions and Distortions, in which he argues that flaws in the commission's report amount to a cover-up by government officials and says that the Bush administration was complicit in the 9/11 attacks. Steven E. Jones has been another voice of the proponents of demolition theories. In 2006, he published the paper "Why Indeed Did the WTC Buildings Completely Collapse?". 
On September 7, 2006, Brigham Young University placed Jones on paid leave citing the "increasingly speculative and accusatory nature" of his statements, pending an official review of his actions. Six weeks later, Jones retired from the university. The structural engineering faculty at the university issued a statement which said that they "do not support the hypotheses of Professor Jones". In its final report, NIST stated that it "found no corroborating evidence for alternative hypotheses suggesting that the WTC towers were brought down by controlled demolition using explosives planted prior to Sept. 11, 2001. NIST also did not find any evidence that missiles were fired at or hit the towers. Instead, photographs and videos from several angles clearly show that the collapse initiated at the fire and impact floors and that the collapse progressed from the initiating floors downward until the dust clouds obscured the view" and posted a FAQ about related issues on its website in August 2006. Allegations of controlled demolition have been found to be devoid of scientific merit by mainstream engineering scholarship. The magazine Popular Mechanics also found the theories lacked scientific support in its special report "Debunking the 9/11 Myths". Articles, letters and comments by controlled demolition advocates have been published in scientific and engineering journals. In April 2008, a letter titled "Fourteen Points of Agreement with Official Government Reports on the World Trade Center Destruction," was published by Steven E. Jones, Frank Legge, Kevin Ryan, Anthony Szamboti and James Gourley in The Open Civil Engineering Journal. A few months later, in July 2008, an article titled "Environmental anomalies at the World Trade Center: evidence for energetic materials," was published by Ryan, Gourley and Jones in the Environmentalist. Later that same year, in October 2008, the Journal of Engineering Mechanics published a comment by chemical engineer and attorney James R. Gourley, in which he describes what he considered fundamental errors in a 2007 paper on the mechanics of progressive collapse by Bažant and Verdure. In the same issue, Bažant and Le rebutted Gourley's arguments, finding his criticisms scientifically incorrect. They suggested future critics should "become acquainted with the relevant material from an appropriate textbook on structural mechanics" or risk "misleading and wrongly influencing the public with incorrect information." In April 2009, Danish chemist Niels H. Harrit, of the University of Copenhagen, and eight other authors published a paper in The Open Chemical Physics Journal, titled, "Active Thermitic Material Discovered in Dust from the 9/11 World Trade Center Catastrophe." The paper concludes that chips consisting of unreacted and partially reacted super-thermite, or nano-thermite, appear to be present in samples of the dust. The editor in chief of the publication subsequently resigned. Internet websites and videos have contributed to the growth of the movement of individuals supporting the theory that planted explosives destroyed the World Trade Center. The website of Architects and Engineers for 9/11 Truth cites the membership of over 2,400 architects and engineers. The controlled demolition theory often includes allegations that U.S. government insiders planned and / or participated in the destruction of the WTC in order to justify the invasion of Iraq and Afghanistan. 
The theory features prominently in popular entertainment-type movies, such as Loose Change, as well as documentaries such as 9/11: Blueprint for Truth, by San Francisco-area architect Richard Gage. While the mainstream press has a significant history of dismissing conspiracy theories (e.g., in 2006, the magazine New York reported that a "new generation of conspiracy theorists is at work on a secret history of New York's most terrible day."), the theory has been supported by a number of popular actors, musicians and politicians, including Charlie Sheen, Willie Nelson, former Governor of Minnesota Jesse Ventura, talkshow host Rosie O'Donnell, and actors Ed Asner and Daniel Sunjata. Propositions and hypotheses Main towers On September 11, the North Tower (1 WTC) was hit by American Airlines Flight 11 and the South Tower (2 WTC) was hit by United Airlines Flight 175, both Boeing 767 aircraft. The South Tower collapsed 56 minutes after the impact, and the North Tower collapsed 102 minutes after. An investigation by NIST concluded that the collapse was caused by a combination of damage to support columns and fire insulation from the aircraft impacts and the weakening of columns and floors by jet fuel ignited fires. NIST also found "no corroborating evidence for alternative hypotheses suggesting that the WTC towers were brought down by controlled demolition using explosives planted prior to September 11, 2001". Jones, among others, points to many descriptions by individuals working on the WTC rubble pile suggesting the presence of molten steel in the pile and a stream of molten metal that poured out of the South Tower before it collapsed as evidence of temperatures beyond those produced by the fire. Jones has argued that the molten metal may have been elemental iron, a product of a thermite reaction. Jones and other researchers analyzed samples of dust from the World Trade Center buildings and reported finding evidence of nano-thermite in the dust. Jones informed NIST of his findings and NIST responded that there was no "clear chain of custody" proving that the dust indeed came from the WTC site. Jones invited NIST to conduct its own studies with dust under custody of NIST itself, but NIST has not done so. NIST found that the condition of the steel in the wreckage of the towers does not provide conclusive information on the condition of the building before the collapse and concluded that the material coming from the South Tower was molten aluminum from the plane, which would have melted at lower temperatures than steel. NIST also pointed out that cutting through the vertical columns would require planting an enormous amount of explosives inconspicuously in highly secured buildings, then igniting it remotely while keeping it in contact with the columns. The Energetic Materials Research and Testing Center performed a test with conventional thermite and was unable to cut a vertical column, despite the column being much smaller than those used in the World Trade Center. Jones and others have responded that they do not believe that thermite was used, but rather a form of thermite called nano-thermite, a nanoenergetic material developed for military use, propellants, explosives, or pyrotechnics. Historically, explosive applications for traditional thermites have been limited by their relatively slow energy release rates. But because nano-thermites are created from reactant particles with proximities approaching the atomic scale, energy release rates are far improved. 
The NIST report provides an analysis of the structural response of the building only up to the point where collapse begins, and asserts that the enormous kinetic energy transferred by the falling part of the building makes progressive collapse inevitable once an initial collapse occurs. A paper by Zdeněk Bažant indicates that once collapse began, the kinetic energy imparted by a falling upper section onto the floor below was an order of magnitude greater than that which the lower section could support. Engineers who have investigated the collapses generally agree that controlled demolition is not required to understand the structural response of the buildings. While the top of one of the towers did tilt significantly, it could not ultimately have fallen into the street, they argue, because any such tilting would place sufficient stress on the lower story (acting as a pivot) that it would collapse long before the top had sufficiently shifted its center of gravity. Indeed, they argue, there is very little difference between progressive collapse with or without explosives in terms of the resistance that the structures could provide after collapse began. Controlled demolition of a building to code requires weeks of preparation, including laying large quantities of explosive and cutting through beams, which would have rendered the building highly dangerous and which would have to be done without attracting the attention of the thousands of people who worked in the building. Controlled demolition is traditionally done from the bottom of buildings rather than the top, although there are exceptions depending on structural design. There is little dispute that the collapse started high up at the point where the aircraft struck. Furthermore, any explosives would have to withstand the impact of the airliners. Members of the group Scholars for 9/11 Truth have collected eyewitness accounts of flashes and loud explosions immediately before the fall. Eyewitnesses have repeatedly reported of explosions happening before the collapse of the WTC towers, and the organization "International Center for 9/11 Studies" has published videos obtained from NIST, together with indications about when such explosions could be heard. There are many types of loud sharp noises that are not caused by explosives, and seismographic records of the collapse do not show evidence of explosions. Jones and others have argued that horizontal puffs of smoke seen during the collapse of the towers would indicate that the towers had been brought down by controlled explosions. NIST attributes these puffs to air pressure, created by the decreasing volume of the falling building above, traveling down elevator shafts and exiting from the open elevator shaft doors on lower levels. In September 2011, Iranian president Mahmoud Ahmadinejad, who holds a PhD in Transportation Engineering and Planning, said that it would have been impossible for two jetliners to bring down the towers simply by hitting them and that some kind of planned explosion must have taken place. Al-Qaida sharply criticized Ahmadinejad in their English-language publication, Inspire, calling his assertions "a ridiculous belief that stands in the face of all logic and evidence". 7 World Trade Center Proponents of World Trade Center controlled demolition theories allege that 7 World Trade Center—a 47-story skyscraper that stood across Vesey Street north of the main part of the World Trade Center site—was intentionally destroyed with explosives. 
Unlike the Twin Towers, 7 World Trade Center was not hit by a plane, although it was hit by debris from the Twin Towers and was damaged by fires which burned for seven hours, until it collapsed completely at about 5:20 p.m. on the evening of September 11 (a new building has been erected on the site of the old and opened in May 2006). Several videos of the collapse event exist in the public domain, thus enabling comparative analysis from different angles of perspective. Proponents typically say the collapse of 7 World Trade Center was not mentioned in the 9/11 Commission Report and that the federal body charged with investigating the event, NIST, required seven years to conduct its investigation and issue a report. In November 2010, Fox News reporter Geraldo Rivera hosted members of a television ad campaign called "BuildingWhat?", a series of commercials in which 9/11 family members ask questions about 7 World Trade Center and call for an investigation into its collapse. Rivera called the television ads "not so easy to dismiss as those demonstrators were," and stated that, "If explosives were involved, that would mean the most obnoxious protesters in recent years ... were right." Days later, Rivera appeared on the program Freedom Watch with legal analyst Judge Andrew Napolitano on the Fox Business Network to discuss the BuildingWhat? TV ad campaign. Napolitano stated, "It's hard for me to believe that [7 World Trade Center] came down by itself. I was gratified to see Geraldo Rivera investigating it." Some proponents of World Trade Center controlled demolition theories suggest that 7 WTC was demolished because it may have served as an operational center for the demolition of the Twin Towers, while others suggest that government insiders may have wanted to destroy key files held in the building pertaining to corporate fraud. The WTC buildings housed dozens of federal, state and local government agencies. According to a statement reported by the BBC, Loose Change film producer Dylan Avery thinks the destruction of the building was suspicious because it housed some unusual tenants, including a clandestine CIA office on the 25th floor, an outpost of the U.S. Secret Service, the Securities and Exchange Commission, and New York City's emergency command center. The former chief counter-terrorism adviser to the President, Richard Clarke, does not think that 7 WTC is mysterious, and said that anyone could have rented floor space in the building. At the time, no steel frame high rise had ever before collapsed because of a fire, although there had been previous cases of collapses or partial collapses of smaller steel buildings due to fire. However, the ability of such a building to be completely destroyed by fire would be demonstrated by the collapse of the Plasco Building in Tehran in 2017 and the Wilton Paes de Almeida Building in São Paulo, Brazil, the following year. In addition, NIST claims debris ejected during the collapse of 1 WTC caused significant structural damage in 7 WTC before the fire. BBC News reported the collapse of 7 WTC twenty minutes before it actually fell. The BBC has stated that many news sources were reporting the imminent collapse of 7 WTC on the day of the attacks. Jane Standley, the reporter who announced the collapse prematurely, called it a "very small and very honest mistake" caused by her thinking on her feet after being confronted with a report she had no way of checking. 
In the PBS documentary America Rebuilds, which aired in September 2002, Larry Silverstein, the owner of 7 WTC and leaseholder and insurance policy holder for the remainder of the WTC complex, recalled a discussion with the fire department in which doubts about containing the fires were expressed. Silverstein recalled saying, "We've had such terrible loss of life, maybe the smartest thing to do is pull it". "They made that decision to pull", he recalled, "and we watched the building collapse." Silverstein issued a statement that it was the firefighting team, not the building, that was to be pulled, contradicting theorists' allegation that "pull" was used in a demolition-related sense. NIST report In 2002, the National Institute of Standards and Technology (NIST) began a general investigation into the collapse of the World Trade Center but soon made a decision to focus first on the collapse of the Twin Towers. A draft version of its final report on the collapse of 7 WTC was released in August 2008. The agency has blamed the slowness of this investigation on the complexity of the computer model it used, which simulated the collapse from the moment it begins all the way to the ground; and NIST says the time taken on the investigation into 7 WTC is comparable to the time taken to investigate an aircraft crash. The agency also says another 80 boxes of documents related to 7 WTC were found and had to be analyzed. These delays fueled suspicion among those already questioning the official account of the September 11 attacks that the agency was struggling to come up with a plausible conclusion. NIST released its final report on the collapse of 7 World Trade Center on November 20, 2008. Investigators used videos, photographs and building design documents to come to their conclusions. The investigation could not include physical evidence as the materials from the building lacked characteristics allowing them to be positively identified and were therefore disposed of prior to the initiation of the investigation. The report concluded that the building's collapse was due to the effects of the fires which burned for almost seven hours. The fatal blow to the building came when the 13th floor collapsed, weakening a critical steel support column that led to catastrophic failure, and extreme heat caused some steel beams to lose strength, causing further failures throughout the building until the entire structure succumbed. Also cited as a factor was the collapse of the nearby towers, which broke the city water main, leaving the sprinkler system in the bottom half of the building without water. NIST considered the possibility that 7 WTC was brought down with explosives and concluded that a blast event did not occur, that the "use of thermite [...] to sever columns in 7 WTC on 9/11/01 was unlikely". The investigation cited as evidence the claim that no blast was audible on recordings of the collapse and that no blast was reported by witnesses, stating that it would have been audible at a level of 130-140 decibels at a distance of half a mile. Demolition proponents say eyewitnesses repeatedly reported explosions happening before the collapse of the towers, and have published videos obtained from NIST, together with indications of when such explosions can purportedly be heard, in support of their claims of explosions before the collapse. NIST also concluded that it is unlikely that the quantities of thermite needed could have been carried into the building undetected. 
Demolition advocates have responded that they do not claim that thermite was used, but rather nano-thermite, which is far more powerful than thermite. Finally, NIST investigated and ruled out the theory that fires fed by the large amount of diesel fuel stored in the building caused the collapse. UAF study University of Alaska Fairbanks (UAF) Professor of Civil Engineering J. Leroy Hulsey subsequently led a four-year (2015–2019) investigation funded by Architects and Engineers for 9/11 Truth titled "A Structural Reevaluation of the Collapse of World Trade Center 7", taking advantage of the improvement in computing resources since NIST's study. The UAF provides a 256 GB downloadable file that contains "All input data, results data, and simulations that were used or generated during this study." Hulsey's group concluded in their final report that fire did not cause the collapse, attributing it instead to the near-simultaneous failure of every column in the building. Criticism The American Society of Civil Engineers Structural Engineering Institute issued a statement calling for further discussion of NIST's recommendations, and Britain's Institution of Structural Engineers published a statement in May 2002 welcoming the FEMA report, noting that the report expressed similar views to those held by its group of professionals. Following the publication of Jones' paper "Why Indeed Did the WTC Buildings Completely Collapse?", Brigham Young University responded to Jones' "increasingly speculative and accusatory" statements by placing him on paid leave in September 2006, thereby relieving him of two classes, pending a review of his statements and research. Six weeks later, Jones retired from the university. The structural engineering faculty at the university issued a statement which said that they "do not support the hypotheses of Professor Jones". On September 22, 2005, Jones gave a seminar on his hypotheses to a group of his colleagues from the Department of Physics and Astronomy at BYU. According to Jones, all but one of his colleagues agreed after the seminar that an investigation was in order, and the lone dissenter came to agreement with Jones' suggestions the next day. Northwestern University Professor of Civil Engineering Zdeněk Bažant, who was the first to offer a published peer-reviewed theory of the collapses, noted "a few outsiders claiming a conspiracy with planted explosives" as an exception to the general agreement. Bažant and Verdure trace such "strange ideas" to a "mistaken impression" that safety margins in design would make the collapses impossible. One of the effects of a more detailed modeling of the progressive collapse, they say, could be to "dispel the myth of planted explosives". Indeed, Bažant and Verdure have proposed examining data from controlled demolitions in order to better model the progressive collapse of the towers, suggesting that progressive collapse and controlled demolition are not two separate modes of failure (as the controlled-demolition conspiracy theory assumes). Thomas Eagar, a professor of materials science and engineering at the Massachusetts Institute of Technology, also dismissed the controlled-demolition conspiracy theory. Eagar remarked, "These people (in the 9/11 truth movement) use the 'reverse scientific method.' They determine what happened, throw out all the data that doesn't fit their conclusion, and then hail their findings as the only possible conclusion." 
Regarding Jones' theory that nanothermite was used to bring down the towers, and the assertion that thermite and nanothermite composites found in the dust and debris following the collapse of the three buildings constituted evidence that explosives brought down the buildings, Brent Blanchard, author of "A History of Explosive Demolition in America", states that questions about the viability of Jones' theories remain unanswered, such as the fact that no demolition personnel noticed any telltale signs of thermite during the eight months of debris removal following the towers' collapse. Blanchard also stated that a verifiable chain of possession needs to be established for the tested beams, which did not occur with the beams Jones tested, raising questions of whether the metal pieces tested could have been cut away from the debris pile with acetylene torches, shears, or other potentially contaminated equipment while on site, or exposed to trace amounts of thermite or other compounds while being handled, while in storage, or while being transferred from Ground Zero to memorial sites. Dave Thomas of Skeptical Inquirer magazine, noting that the residue in question was claimed to be thermitic because of its iron oxide and aluminum composition, pointed out that these substances are found in many items common to the towers. Thomas stated that in order to cut through a vertical steel beam, special high-temperature containment must be added to prevent the molten iron from dropping down, and that the thermite reaction is too slow for it to be practically used in building demolition. Thomas pointed out that when Jesse Ventura hired New Mexico Tech to conduct a demonstration showing nanothermite slicing through a large steel beam, the nanothermite produced copious flame and smoke but no damage to the beam, even though it was in a horizontal, and therefore optimal, position. Preparing a building for a controlled demolition takes considerable time and effort. The tower walls would have had to be opened on dozens of floors. Thousands of pounds of explosives, fuses and ignition mechanisms would need to be sneaked past security and placed in the towers without the tens of thousands of people working in the World Trade Center noticing. Referring to a conversation with Stuart Vyse, a professor of psychology, an article in the Hartford Advocate asks, "How many hundreds of people would you need to acquire the explosives, plant them in the buildings, arrange for the airplanes to crash [...] and, perhaps most implausibly of all, never breathe a single word of this conspiracy?" World Trade Center developer Larry Silverstein said, "Hopefully this thorough report puts to rest the various 9/11 conspiracy theories, which dishonor the men and women who lost their lives on that terrible day." Upon presentation of NIST's detailed report on the failure of Building 7, Richard Gage, leader of the group Architects & Engineers for 9/11 Truth, asked, "How much longer do we have to endure the coverup of how Building 7 was destroyed?", to which Dr. S. Shyam Sunder, the lead NIST investigator, said he could not explain why the skepticism would not die. "I am really not a psychologist," he said. "Our job was to come up with the best science." 
James Quintiere, professor of fire protection engineering at the University of Maryland, who does not believe explosives brought down the towers, questioned how the agency came to its conclusions, remarking, "They don't have the expertise on explosives," though he added that NIST wasted time employing outside experts to consider it. References External links FEMA World Trade Center Building Performance Study NIST and the World Trade Center 9/11 Commission Report Debunking 9/11 Conspiracy theories and Controlled Demolition Myths Journal of Debunking 9/11 Conspiracy Theories Answering the questions of Architects & Engineers for 9/11 Truth Journal of 9/11 Studies 9/11 conspiracy theories World Trade Center Demolition Pseudoscience
World Trade Center controlled demolition conspiracy theories
[ "Engineering" ]
5,493
[ "Construction", "Demolition" ]
7,016,138
https://en.wikipedia.org/wiki/Belum-Temengor
Belum-Temengor is the largest continuous forest complex in Peninsular Malaysia. Specifically, it is located in the Malaysian state of Perak (Hulu Perak) and crosses into Southern Thailand. Belum-Temenggor is divided into two sections: Belum is located up north, right by the Malaysia–Thailand border, while Temenggor is south of Belum. The Royal Belum State Park is entirely contained within the forest complex. Bang Lang National Park is on the Thailand side of the border. Description Belum-Temenggor is believed to have been in existence for over 130 million years, making it one of the world's oldest rainforests, older than both the Amazon and the Congo. In the heart of the forest lies the manmade lake of Tasik Temenggor, covering 15,200 hectares and dotted with hundreds of islands. The area has been identified as an Environmentally Sensitive Area (ESA) Rank 1 under the Malaysian National Physical Plan and recognized by BirdLife International as an Important Bird Area. The Malaysian federal government has labelled the area as a whole an essential water catchment area and part of the Central Forest Spine, and plans to protect the forest under the Malaysian National Forestry Act. Despite that, between the two, only part of the Belum Forest Reserve has been gazetted as a state park, while the rest is production forest open for development. Temenggor in particular is facing considerable deforestation due to logging. Environmental organizations such as the Malaysian Nature Society and the World Wildlife Fund have been lobbying both the state and the federal government to gazette the area as a park. The state government of Perak, however, has resisted the effort, citing that logging provides the state with more than RM 30 million in revenue. Nevertheless, the state government gazetted part of the Belum forest reserve as a state park on May 3, 2007. There is a plan to convert the natural forest to plantation forest along the East–West Highway. Fauna and flora Belum-Temenggor's relatively untouched forest is home to a wide variety of flora and fauna, including 14 of the world's most threatened mammals, among them the Malayan tiger, Indian elephant, white-handed gibbon, Malayan sun bear and tapir. Other animals include seladang, wild boars, numerous species of deer, pythons and cobras. As of 2019, due to poaching and the depletion of prey, the number of tigers in the Belum-Temengor Forest Reserve had declined by about 60 percent over a period of 7–8 years, from approximately 60 to 23. Belum-Temenggor is home to over 300 avian species. It is the only existing forest where all 10 species of hornbill that inhabit Malaysia are found, namely the white-crowned hornbill, bushy-crested hornbill, wrinkled hornbill, wreathed hornbill, plain-pouched hornbill, black hornbill, Oriental pied hornbill, rhinoceros hornbill, great hornbill and helmeted hornbill. In the forest, one can also find 3,000 species of flowering plants, including 3 species of Rafflesia, the world's largest flower. Malaysia more broadly is home to a variety of insect and arthropod species; notable examples include the stalk-eyed fly, violin fly, lantern fly, and a variety of stick insects. The brown marmorated stink bug is native to parts of East Asia and has become an invasive species in Europe and North America; scientists are considering introducing the parasitoid wasp Trissolcus japonicus, which preys on the eggs of the stink bug. See also Geography of Malaysia Temenggor Lake References External links The Malaysian Nature Society. 
"Every name helps keep our forests intact. We need more." Novista. "Temenggor - Biodiversity In The Face of Danger" Belum Temenggor Official website Eco Adventure Royal Belum Travel Ideas from Virtual Malaysia The Treasure that is Royal Belum Rainforest A Journey to Royal Belum A photo essay eco adventure Travel Ideas for Royal Belum Hulu Perak District Important Bird Areas of Malaysia Titiwangsa Mountains Nature sites of Malaysia Old-growth forests
Belum-Temengor
[ "Biology" ]
859
[ "Old-growth forests", "Ecosystems" ]
7,016,168
https://en.wikipedia.org/wiki/Token%20Ring
Token Ring is a physical and data link layer computer networking technology used to build local area networks. It was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5. It uses a special three-byte frame called a token that is passed around a logical ring of workstations or servers. This token passing is a channel access method providing fair access for all stations, and eliminating the collisions of contention-based access methods. Token Ring was a successful technology, particularly in corporate environments, but was gradually eclipsed by the later versions of Ethernet. Gigabit Token Ring was standardized in 2001. History A wide range of different local area network technologies were developed in the early 1970s, of which one, the Cambridge Ring, had demonstrated the potential of a token passing ring topology, and many teams worldwide began working on their own implementations. At the IBM Zurich Research Laboratory Werner Bux and Hans Müller, in particular, worked on the design and development of IBM's Token Ring technology, while early work at MIT led to the Proteon ProNet-10 Token Ring network in 1981, the same year that workstation vendor Apollo Computer introduced their proprietary Apollo Token Ring (ATR) network running over 75-ohm RG-6U coaxial cabling. Proteon later evolved a version that ran on unshielded twisted pair cable. 1985 IBM launch IBM launched their own proprietary Token Ring product on October 15, 1985. It ran at 4 Mbit/s, and attachment was possible from IBM PCs, midrange computers and mainframes. It used a convenient star-wired physical topology and ran over shielded twisted-pair cabling. Shortly thereafter it became the basis for the IEEE 802.5 standard. During this time, IBM argued that Token Ring LANs were superior to Ethernet, especially under load, but these claims were debated. In 1988, the faster 16 Mbit/s Token Ring was standardized by the 802.5 working group. An increase to 100 Mbit/s was standardized and marketed during the wane of Token Ring's existence and was never widely used. While a 1000 Mbit/s (Gigabit Token Ring) standard was approved in 2001, no products were ever brought to market and standards activity came to a standstill as Fast Ethernet and Gigabit Ethernet dominated the local area networking market. Comparison with Ethernet Early Ethernet and Token Ring both used a shared transmission medium. They differed in their channel access methods. These differences have become immaterial, as modern Ethernet networks consist of switches and point-to-point links operating in full-duplex mode. Token Ring and legacy Ethernet have some notable differences: Token Ring access is more deterministic, compared to Ethernet's contention-based CSMA/CD. Ethernet supports a direct cable connection between two network interface cards by the use of a crossover cable or through auto-sensing if supported. Token Ring does not inherently support this feature and requires additional software and hardware to operate on a direct cable connection setup. Token Ring eliminates collisions by the use of a single-use token, with early token release to alleviate the downtime. Legacy Ethernet alleviates collisions by carrier-sense multiple access and by the use of an intelligent switch; primitive Ethernet devices like hubs could precipitate collisions due to repeating traffic blindly. 
Token Ring network interface cards contain all of the intelligence required for speed autodetection and routing, and can drive themselves on many Multistation Access Units (MAUs) that operate without power (most MAUs operate in this fashion, only requiring a power supply for LEDs). Ethernet network interface cards can theoretically operate on a passive hub to a degree, but not as a large LAN, and the issue of collisions is still present. Token Ring employs access priority, in which certain nodes can have priority over the token. Unswitched Ethernet did not have a provision for an access priority system; all nodes had equal access to the transmission medium. Multiple identical MAC addresses are supported on Token Ring (a feature used by S/390 mainframes); switched Ethernet cannot support duplicate MAC addresses without reprimand. Token Ring was more complex than Ethernet, requiring a specialized processor and licensed MAC/LLC firmware for each interface. By contrast, Ethernet included both the (simpler) firmware and the lower licensing cost in the MAC chip. The cost of a Token Ring interface using the Texas Instruments TMS380C16 MAC and PHY was approximately three times that of an Ethernet interface using the Intel 82586 MAC and PHY. Initially both networks used expensive cable, but once Ethernet was standardized for unshielded twisted pair with 10BASE-T (Cat 3) and 100BASE-TX (Cat 5(e)), it had a distinct advantage and sales of it increased markedly. Even more significant when comparing overall system costs was the much-higher cost of router ports and network cards for Token Ring vs Ethernet. The emergence of Ethernet switches may have been the final straw. Operation Stations on a Token Ring LAN are logically organized in a ring topology, with data being transmitted sequentially from one ring station to the next and a control token circulating around the ring controlling access. Similar token passing mechanisms are used by ARCNET, token bus, 100VG-AnyLAN (802.12) and FDDI, and they have theoretical advantages over the CSMA/CD of early Ethernet. A Token Ring network can be modeled as a polling system where a single server provides service to queues in a cyclic order. Access control The data transmission process goes as follows: Empty information frames are continuously circulated on the ring. When a computer has a message to send, it seizes the token. The computer will then be able to send the frame. The frame is then examined by each successive workstation. The workstation that identifies itself to be the destination for the message copies it from the frame and changes the token back to 0. When the frame gets back to the originator, it sees that the token has been changed to 0 and that the message has been copied and received. It removes the message from the frame. The frame continues to circulate as an empty frame, ready to be taken by a workstation when it has a message to send (a simulation sketch of this cycle appears below). Multistation Access Units and Controlled Access Units Physically, a Token Ring network is wired as a star, with 'MAUs' in the center, 'arms' out to each station, and the loop going out-and-back through each. A MAU could present in the form of a hub or a switch; since Token Ring had no collisions many MAUs were manufactured as hubs. Although Token Ring runs on LLC, it includes source routing to forward packets beyond the local network. 
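The token-passing cycle described under Access control above can be illustrated with a short simulation. This is a minimal sketch, not IBM's implementation: the station class, frame representation, and scheduling are all invented for illustration.

```python
from collections import deque

# Minimal sketch of token-ring access: one token circulates, and a station
# may transmit only while holding the token. All names are illustrative.
class Station:
    def __init__(self, name):
        self.name = name
        self.outbox = deque()   # messages waiting to be sent: (dest, payload)
        self.inbox = []

def circulate(stations, rounds):
    n = len(stations)
    token_at = 0                        # index of the station holding the token
    for _ in range(rounds):
        holder = stations[token_at]
        if holder.outbox:
            dest, payload = holder.outbox.popleft()
            # The frame travels around the ring; the addressed station copies it.
            for hop in range(1, n + 1):
                st = stations[(token_at + hop) % n]
                if st.name == dest:
                    st.inbox.append((holder.name, payload))
            # The frame returns to the originator, which removes it and
            # releases a free token to the next station downstream.
        token_at = (token_at + 1) % n   # token passes downstream

ring = [Station(name) for name in ("A", "B", "C", "D")]
ring[0].outbox.append(("C", "hello"))
circulate(ring, rounds=4)
print(ring[2].inbox)   # [('A', 'hello')]
```

Because only the token holder transmits, no two stations ever send at once, which is the deterministic, collision-free property contrasted with CSMA/CD earlier in this article.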
The majority of MAUs are configured in a 'concentration' configuration by default, but later MAUs also supported a feature allowing them to act as splitters rather than exclusively as concentrators, such as on the IBM 8226. Later, IBM would release Controlled Access Units that could support multiple MAU modules, known as Lobe Attachment Modules (LAMs). The CAUs supported features such as Dual-Ring Redundancy for alternate routing in the event of a dead port, modular concentration with LAMs, and multiple interfaces like most later MAUs. This offered a more reliable setup and remote management than an unmanaged MAU hub. Cabling and interfaces Cabling is generally IBM "Type-1", a heavy two-pair 150-ohm shielded twisted-pair cable. This was the basic cable for the "IBM Cabling System", a structured cabling system that IBM hoped would be widely adopted. Unique hermaphroditic connectors, referred to as IBM Data Connectors in formal writing or colloquially as Boy George connectors, were used. The connectors have the disadvantage of being quite bulky, requiring at least panel space, and being relatively fragile. The advantages of the connectors are that they are genderless and have superior shielding compared with standard unshielded 8P8C. Connectors at the computer were usually DE-9 female. Several other types of cable existed, such as Type 2 and Type 3 cable. In later implementations of Token Ring, Cat 4 cabling was also supported, so 8P8C (RJ45) connectors were used on the MAUs, CAUs and NICs, with many of the network cards supporting both 8P8C and DE-9 for backwards compatibility. Technical details Frame types Token When no station is sending a frame, a special token frame circles the loop. This special token frame is repeated from station to station until arriving at a station that needs to send data. Tokens are three octets in length and consist of a start delimiter, an access control octet, and an end delimiter. Abort frame Used by the sending station to abort transmission. Data Data frames carry information for upper-layer protocols, while command frames contain control information and have no data for upper-layer protocols. Data and command frames vary in size, depending on the size of the Information field. Starting delimiter – The starting delimiter consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self-clocking and has a transition for every encoded bit 0 or 1, the J and K codings violate this and will be detected by the hardware. Both the Starting Delimiter and Ending Delimiter fields are used to mark frame boundaries. Access control – This byte field consists of the following bits, from most significant to least significant bit order: P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and the R bits are reserved (reservation) bits. Frame control – A one-byte field that contains bits describing the data portion of the frame contents, indicating whether the frame contains data or control information. In control frames, this byte specifies the type of control information. Frame type – 01 indicates an LLC frame, IEEE 802.2 (data), and the control bits are ignored; 00 indicates a MAC frame, and the control bits indicate the type of MAC control frame. Destination address – A six-byte field used to specify the destination's physical address(es). 
Source address – Contains the physical address of the sending station. It is a six-byte field holding either the locally assigned address (LAA) or universally assigned address (UAA) of the sending station adapter. Data – A variable-length field of 0 or more bytes containing MAC management data or upper-layer information; the maximum allowable size depends on ring speed, up to a maximum length of 4,500 bytes. Frame check sequence – A four-byte field used to store the calculation of a CRC for frame integrity verification by the receiver. Ending delimiter – The counterpart to the starting delimiter, this field marks the end of the frame and consists of the following bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the error bit. Frame status – A one-byte field used as a primitive acknowledgment scheme indicating whether the frame was recognized and copied by its intended receiver. A = 1, address recognized; C = 1, frame copied. Active and standby monitors Every station in a Token Ring network is either an active monitor (AM) or standby monitor (SM) station. There can be only one active monitor on a ring at a time. The active monitor is chosen through an election or monitor contention process. The monitor contention process is initiated when the following happens: a loss of signal on the ring is detected. an active monitor station is not detected by other stations on the ring. a particular timer on an end station expires, as in the case when a station hasn't seen a token frame in the past 7 seconds. When any of the above conditions takes place and a station decides that a new monitor is needed, it will transmit a claim token frame, announcing that it wants to become the new monitor. If that token returns to the sender, it may become the monitor. If some other station tries to become the monitor at the same time, then the station with the highest MAC address will win the election process. Every other station becomes a standby monitor. All stations must be capable of becoming an active monitor station if necessary. The active monitor performs a number of ring administration functions. The first function is to operate as the master clock for the ring in order to provide synchronization of the signal for stations on the wire. Another function of the AM is to insert a 24-bit delay into the ring, to ensure that there is always sufficient buffering in the ring for the token to circulate. A third function for the AM is to ensure that exactly one token circulates whenever there is no frame being transmitted, and to detect a broken ring. Lastly, the AM is responsible for removing circulating frames from the ring. Token insertion process Token Ring stations must go through a 5-phase ring insertion process before being allowed to participate in the ring network. If any of these phases fail, the Token Ring station will not insert into the ring and the Token Ring driver may report an error. Phase 0 (Lobe Check) – A station first performs a lobe media check: it is wrapped at the MSAU and sends 2,000 test frames down its transmit pair, which loop back to its receive pair. The station checks to ensure it can receive these frames without error. Phase 1 (Physical Insertion) – A station then sends a 5-volt signal to the MSAU to open the relay. Phase 2 (Address Verification) – A station then transmits MAC frames with its own MAC address in the destination address field of a Token Ring frame. 
When the frame returns, and if the Address Recognized (AR) and Frame Copied (FC) bits in the frame status are set to 0 (indicating that no other station currently on the ring uses that address), the station must participate in the periodic (every 7 seconds) ring poll process. This is where stations identify themselves on the network as part of the MAC management functions. Phase 3 (Participation in ring poll) – A station learns the address of its Nearest Active Upstream Neighbour (NAUN) and makes its address known to its nearest downstream neighbour, leading to the creation of the ring map. The station waits until it receives an AMP or SMP frame with the AR and FC bits set to 0. When it does, the station flips both bits (AR and FC) to 1, if enough resources are available, and queues an SMP frame for transmission. If no such frames are received within 18 seconds, the station reports a failure to open and de-inserts from the ring. If the station successfully participates in a ring poll, it proceeds into the final phase of insertion, request initialization. Phase 4 (Request Initialization) – Finally a station sends out a special request to a parameter server to obtain configuration information. This frame is sent to a special functional address, typically a Token Ring bridge, which may hold timer and ring number information the new station needs to know. Optional priority scheme In some applications there is an advantage to being able to designate one station as having a higher priority. Token Ring specifies an optional scheme of this sort, as does the CAN bus (widely used in automotive applications), but Ethernet does not. In the Token Ring priority MAC, eight priority levels, 0–7, are used. When the station wishing to transmit receives a token or data frame with a priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority. The station does not immediately transmit; the token circulates around the medium until it returns to the station. Upon sending and receiving its own data frame, the station downgrades the token priority back to the original priority. Eight access priorities and traffic types are defined for devices that support 802.1Q and 802.1p (a decoding sketch of the access control octet and the priority rule appears at the end of this article). Interconnection with Ethernet Bridging solutions for Token Ring and Ethernet networks included the AT&T StarWAN 10:4 Bridge, the IBM 8209 LAN Bridge and the Microcom LAN Bridge. Alternative connection solutions incorporated a router that could be configured to dynamically filter traffic, protocols and interfaces, such as the IBM 2210-24M Multiprotocol Router, which contained both Ethernet and Token Ring interfaces. Operating system support In 2012, David S. Miller merged a patch to remove Token Ring networking support from the Linux kernel. See also IBM PC Network Protocol Wars - The battle between Internet and OSI standards in the 1980s References General External links IEEE 802.5 Web Site Troubleshooting Cisco Router Token Ring Interfaces Futureobservatory.org discussion of IBM's failure in Token Ring technology What if Ethernet had failed? Network topology Local area networks IEEE 802 IBM PC compatibles IEEE standards Serial buses Link protocols Systems Network Architecture
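As a small worked example of the frame format and priority scheme described in this article, the sketch below decodes the access control octet (bit layout P,P,P,T,M,R,R,R, most significant bit first, as given above) and applies the reservation rule from the optional priority scheme. The function names and the example octet are illustrative.

```python
# Decode the access control octet described above (bits P,P,P,T,M,R,R,R,
# most significant bit first). Names and sample values are illustrative.
def decode_access_control(octet):
    return {
        "priority":    (octet >> 5) & 0b111,  # P bits
        "token_bit":   (octet >> 4) & 0b1,    # T bit, per the field description above
        "monitor_bit": (octet >> 3) & 0b1,    # M bit, set by the active monitor
        "reservation": octet & 0b111,         # R bits
    }

# Priority scheme: a station may raise the reservation bits, but only if
# its desired priority exceeds the reservation currently in the octet.
def try_reserve(octet, desired_priority):
    current = octet & 0b111
    if desired_priority > current:
        octet = (octet & ~0b111) | desired_priority
    return octet

ac = 0b01010010                  # priority 2, frame in transit, reservation 2
print(decode_access_control(ac))
print(bin(try_reserve(ac, 5)))   # reservation raised to 5
```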
Token Ring
[ "Mathematics", "Technology" ]
3,490
[ "Network topology", "Computer standards", "Topology", "IEEE standards" ]
7,016,898
https://en.wikipedia.org/wiki/Photosynthetic%20picoplankton
Photosynthetic picoplankton or picophytoplankton is the fraction of the photosynthetic phytoplankton of cell sizes between 0.2 and 2 μm (i.e. picoplankton). It is especially important in the central oligotrophic regions of the world oceans, which have very low concentrations of nutrients. History 1952: Description of the first truly picoplanktonic species, Chromulina pusilla, by Butcher. This species was renamed in 1960 to Micromonas pusilla, and a few studies have found it to be abundant in temperate oceanic waters, although very little such quantification data exists for eukaryotic picophytoplankton. 1979: Discovery of marine Synechococcus by Waterbury and confirmation with electron microscopy by Johnson and Sieburth. 1982: The same Johnson and Sieburth demonstrate the importance of small eukaryotes by electron microscopy. 1983: W.K.W. Li and colleagues, including Trevor Platt, show that a large fraction of marine primary production is due to organisms smaller than 2 μm. 1986: Discovery of "prochlorophytes" by Chisholm and Olson in the Sargasso Sea, named in 1992 as Prochlorococcus marinus. 1994: Discovery in the Thau lagoon in France of the smallest photosynthetic eukaryote known to date, Ostreococcus tauri, by Courties. 2001: Through sequencing of the ribosomal RNA gene extracted from marine samples, several European teams discover that eukaryotic picoplankton are highly diverse. This finding followed on the first discovery of such eukaryotic diversity in 1998 by Rappe and colleagues at Oregon State University, who were the first to apply rRNA sequencing to eukaryotic plankton in the open ocean, where they discovered sequences that seemed distant from known phytoplankton. The cells containing DNA matching one of these novel sequences were recently visualized and further analyzed using specific probes and found to be broadly distributed. Methods of study Because of its very small size, picoplankton is difficult to study by classic methods such as optical microscopy. More sophisticated methods are needed. Epifluorescence microscopy allows researchers to detect certain groups of cells possessing fluorescent pigments, such as Synechococcus, which possesses phycoerythrin. Flow cytometry measures the size ("forward scatter") and fluorescence of 1,000 to 10,000 cells per second. It makes it very easy to determine the concentration of the various picoplankton populations in marine samples. Three groups of cells (Prochlorococcus, Synechococcus and picoeukaryotes) can be distinguished. For example, Synechococcus is characterized by the double fluorescence of its pigments: orange for phycoerythrin and red for chlorophyll. Flow cytometry also allows researchers to sort out specific populations (for example Synechococcus) in order to put them in culture or to make more detailed analyses. Analysis of photosynthetic pigments such as chlorophyll or carotenoids by high-performance liquid chromatography (HPLC) allows researchers to determine the various groups of algae present in a sample. Molecular biology techniques: Cloning and sequencing of genes such as that of ribosomal RNA, which allows researchers to determine the total diversity within a sample. DGGE (denaturing gradient gel electrophoresis), which is faster than the previous approach, allows researchers to get an idea of the overall diversity within a sample. In situ hybridization (FISH) uses fluorescent probes recognizing specific taxa, for example a species, a genus or a class. 
The original description of Micromonas pusilla as a single species is now thought to cover a number of different cryptic species, a finding that has been confirmed by a genome sequencing project of two strains led by researchers at the Monterey Bay Aquarium Research Institute. Quantitative PCR can be used, as FISH, to determine the abundance of specific groups. It has the main advantage of allowing the rapid analysis of a large number of samples simultaneously, but requires more sophisticated controls and calibrations. Composition Three major groups of organisms constitute photosynthetic picoplankton: Cyanobacteria belonging to the genus Synechococcus, of a size of 1 μm (micrometer), were first discovered in 1979 by J. Waterbury (Woods Hole Oceanographic Institution). They are quite ubiquitous, but most abundant in relatively mesotrophic waters. Cyanobacteria belonging to the genus Prochlorococcus are particularly remarkable. With a typical size of 0.6 μm, Prochlorococcus was discovered only in 1988 by two American researchers, Sallie W. (Penny) Chisholm (Massachusetts Institute of Technology) and R.J. Olson (Woods Hole Oceanographic Institution). In spite of its small size, this photosynthetic organism is undoubtedly the most abundant on the planet: its density can reach up to 100 million cells per liter, and it can be found down to a depth of 150 m throughout the intertropical belt. Picoplanktonic eukaryotes are the least well known, as demonstrated by the recent discovery of major groups. Andersen created in 1993 a new class of brown algae, the Pelagophyceae. More surprising still, the discovery in 1994 of a eukaryote of very small size, Ostreococcus tauri, dominating the phytoplanktonic biomass of a French brackish lagoon (étang de Thau), showed that these organisms can also play a major ecological role in coastal environments. In 1999 yet another class of algae was discovered, the Bolidophyceae, genetically very close to diatoms but quite different morphologically. At present, about 50 species are known, belonging to several classes. Algal classes containing picoplankton species: Chlorophyceae (Nannochloris); Prasinophyceae (Micromonas, Ostreococcus, Pycnococcus); Prymnesiophyceae (Imantonia); Pelagophyceae (Pelagomonas); Bolidophyceae (Bolidomonas); Dictyochophyceae (Florenciella). The use of molecular approaches, implemented since the 1990s for bacteria, was applied to the photosynthetic picoeukaryotes only about 10 years later, around 2000. It revealed a very wide diversity and brought to light the importance of the following groups in the picoplankton: Prasinophyceae, Haptophyta and Cryptophyta. In temperate coastal environments, the genus Micromonas (Prasinophyceae) seems dominant. However, in numerous oceanic environments, the dominant species of eukaryotic picoplankton remain unknown. Ecology Each picoplanktonic population occupies a specific ecological niche in the oceanic environment. The Synechococcus cyanobacterium is generally abundant in mesotrophic environments, such as near the equatorial upwelling or in coastal regions. The Prochlorococcus cyanobacterium replaces it when the waters become impoverished in nutrients (i.e., oligotrophic). On the other hand, in temperate regions such as the North Atlantic Ocean, Prochlorococcus is absent because the cold waters prevent its development. The diversity of eukaryotes derives from their presence in a large variety of environments. 
In oceanic regions, they are often observed at depth, at the base of the well-lit layer (the "euphotic" layer). In coastal regions, certain picoeukaryotes such as Micromonas dominate. As with larger plankton, their abundance follows a seasonal cycle with a maximum in summer. Thirty years ago, it was hypothesized that the speed of division of micro-organisms in central oceanic ecosystems was very slow, of the order of one week or one month per generation. This hypothesis was supported by the fact that the biomass (estimated, for example, from the chlorophyll content) was very stable over time. However, with the discovery of the picoplankton, it was found that the system was much more dynamic than previously thought. In particular, small predators a few micrometres in size, which ingest picoplanktonic algae as quickly as they are produced, were found to be ubiquitous. This extremely sophisticated predator-prey system is nearly always at equilibrium and results in a quasi-constant picoplankton biomass. This close equivalence between production and consumption makes it extremely difficult to measure precisely the speed at which the system turns over. In 1988, two American researchers, Carpenter and Chang, suggested estimating the speed of cell division of phytoplankton by following the course of DNA replication by microscopy. By replacing the microscope with a flow cytometer, it is possible to follow the DNA content of picoplankton cells over time. This allowed researchers to establish that picoplankton cells are highly synchronous: they replicate their DNA and then divide all at the same time at the end of the day. This synchronization could be due to the presence of an internal biological clock. Genomics In the 2000s, genomics made it possible to go a step further. Genomics consists of determining the complete genome sequence of an organism and listing every gene present. It is then possible to get an idea of the metabolic capacities of the targeted organism and to understand how it adapts to its environment. To date, the genomes of several types of Prochlorococcus and Synechococcus, and of one strain of Ostreococcus, have been determined. The complete genomes of two different Micromonas strains revealed that they were quite different (different species) and had similarities with land plants. Several other cyanobacteria and small eukaryotes (Bathycoccus, Pelagomonas) are being sequenced. In parallel, genome analyses have begun to be done directly on oceanic samples (ecogenomics or metagenomics), giving access to large sets of genes from uncultivated organisms. Genomes of photosynthetic picoplankton strains that have been sequenced to date: Prochlorococcus MED4, MIT9312, MIT9313, NATL2A, CC9605 and CC9901 (sequenced at JGI) and SS120 (Genoscope); Synechococcus WH8102 (JGI), WH7803 and RCC307 (Genoscope) and CC9311 (TIGR); Ostreococcus OTTH95 (Genoscope); Micromonas RCC299 and CCMP1545 (JGI). See also Bacterioplankton List of eukaryotic picoplankton species Nanophytoplankton Phytoplankton Picoeukaryote Notes and references Bibliography Cyanobacteria Zehr, J. P., Waterbury, J. B., Turner, P. J., Montoya, J. P., Omoregie, E., Steward, G. F., Hansen, A. & Karl, D. M. 2001. Unicellular cyanobacteria fix N2 in the subtropical North Pacific Ocean. 
Nature 412:635-8 Eukaryotes Butcher, R. 1952. Contributions to our knowledge of the smaller marine algae. J. Mar. Biol. Assoc. UK. 31:175-91. Manton, I. & Parke, M. 1960. Further observations on small green flagellates with special reference to possible relatives of Chromulina pusilla Butcher. J. Mar. Biol. Assoc. UK. 39:275-98. Eikrem, W., Throndsen, J. 1990. The ultrastructure of Bathycoccus gen. nov. and B. prasinos sp. nov., a non-motile picoplanktonic alga (Chlorophyta, Prasinophyceae) from the Mediterranean and Atlantic. Phycologia 29:344-350 Chrétiennot-Dinet, M. J., Courties, C., Vaquer, A., Neveux, J., Claustre, H., et al. 1995. A new marine picoeucaryote: Ostreococcus tauri gen et sp nov (Chlorophyta, Prasinophyceae). Phycologia 34:285-292 Sieburth, J. M., M. D. Keller, P. W. Johnson, and S. M. Myklestad. 1999. Widespread occurrence of the oceanic ultraplankter, Prasinococcus capsulatus (Prasinophyceae), the diagnostic "Golgi-decapore complex" and the newly described polysaccharide "capsulan". J. Phycol. 35: 1032-1043. Not, F., Valentin, K., Romari, K., Lovejoy, C., Massana, R., Töbe, K., Vaulot, D. & Medlin, L. K. 2007. Picobiliphytes, a new marine picoplanktonic algal group with unknown affinities to other eukaryotes. Science 315:252-4. Vaulot, D., Eikrem, W., Viprey, M. & Moreau, H. 2008. The diversity of small eukaryotic phytoplankton (≤3 μm) in marine ecosystems. FEMS Microbiol. Rev. 32:795-820. Ecology Platt, T., Subba-Rao, D. V. & Irwin, B. 1983. Photosynthesis of picoplankton in the oligotrophic ocean. Nature 300:701-4. Stomp M, Huisman J, de Jongh F, Veraart AJ, Gerla D, Rijkeboer M, Ibelings BW, Wollenzien UIA, Stal LJ. 2004. Adaptive divergence in pigment composition promotes phytoplankton biodiversity. Nature 432: 104-107. Campbell, L., Nolla, H. A. & Vaulot, D. 1994. The importance of Prochlorococcus to community structure in the central North Pacific Ocean. Limnol. Oceanogr. 39:954-61. Molecular Biology and Genomes Rappé, M. S., P. F. Kemp, and S. J. Giovannoni. 1995. Chromophyte plastid 16S ribosomal RNA genes found in a clone library from Atlantic Ocean seawater. J. Phycol. 31: 979-988. Biological oceanography Planktology Aquatic ecology Cyanobacteria Algae
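As an illustration of the flow-cytometry approach described under Methods of study above, the three picoplankton groups are typically separated by simple gates on scatter and fluorescence channels. The following is a minimal toy sketch; all field names and threshold values are invented for illustration and do not come from any real instrument or dataset.

```python
# Toy classification of flow-cytometry events into the three picoplankton
# groups discussed above. Thresholds are purely illustrative.
def classify(event):
    size = event["forward_scatter"]      # proxy for cell size
    orange = event["orange_fluor"]       # phycoerythrin signal (Synechococcus)
    red = event["red_fluor"]             # chlorophyll signal (all phototrophs)

    if red < 0.1:
        return "non-photosynthetic / noise"
    if orange > 0.5:                     # double fluorescence: orange + red
        return "Synechococcus"
    if size > 1.0:                       # larger chlorophyll-only cells
        return "picoeukaryote"
    return "Prochlorococcus"             # small, dim, red-only cells

events = [
    {"forward_scatter": 0.3, "orange_fluor": 0.05, "red_fluor": 0.4},
    {"forward_scatter": 0.8, "orange_fluor": 0.9,  "red_fluor": 0.7},
    {"forward_scatter": 1.6, "orange_fluor": 0.1,  "red_fluor": 1.2},
]
for e in events:
    print(classify(e))   # Prochlorococcus, Synechococcus, picoeukaryote
```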
Photosynthetic picoplankton
[ "Biology" ]
3,363
[ "Aquatic ecology", "Algae", "Cyanobacteria", "Ecosystems" ]
7,017,083
https://en.wikipedia.org/wiki/Heterotrophic%20picoplankton
Heterotrophic picoplankton is the fraction of plankton composed of cells between 0.2 and 2 μm that do not perform photosynthesis. They form an important component of many biogeochemical cycles. Cells can be either: prokaryotes Archaea form a major part of the picoplankton in the Antarctic and are abundant in other regions of the ocean. Archaea have also been found in freshwater picoplankton, but do not appear to be so abundant in these environments. eukaryotes Cell structure Nucleic acid content in cells Heterotrophic picoplankton can be divided into two broad categories: high nucleic acid (HNA) content cells and low nucleic acid (LNA) content cells. Nucleic acids are large biomolecules that store and express genomic information. HNA picoplankton dominate in waters that are eutrophic to mesotrophic, while LNA picoplankton dominate in stratified oligotrophic environments. The proportion of HNA picoplankton to LNA picoplankton is a defining characteristic of bacterioplankton communities. Addition of glyphosate, a common herbicide that causes increased levels of phosphorus when introduced to aquatic systems, causes an increase in the ratio of HNA to LNA bacteria. Nucleic acids are a costly compound for cells to synthesize, and the increased bioavailable phosphorus in the system likely allows HNA bacteria to rapidly synthesize more nucleic acids and divide. HNA bacterioplankton are larger and more active than LNA picoplankton. HNA cells also have higher specific metabolic and growth rates, likely allowing these types of bacterioplankton to better utilize and exploit sudden increases in nutrients within the water column. The relative abundance of HNA to LNA cells is related to overall system productivity, specifically chlorophyll concentration, though other factors likely also contribute to bacterioplankton distribution. Biogeochemical cycling Dissolved organic matter Heterotrophic picoplankton play a critical role in nutrient and carbon recycling in ecological food webs by transforming and mineralizing organic matter. Aquatic dissolved organic matter is one of the largest organic pools on Earth and a major part of the carbon cycle. The majority of dissolved organic matter is either resistant to transformation or semi-labile, limiting the availability of these compounds to biodegradation. Water bodies accumulate dissolved organic matter via both allochthonous sources, mainly decaying terrestrial plants and soil organic matter, and autochthonous sources, mainly phytoplankton and macrophytes. As major decomposers of organic matter, heterotrophic bacterioplankton act as an important link between detritus, dissolved organic matter, and higher trophic levels in aquatic systems. Bacterioplankton degrade particulate organic matter into smaller compounds and either assimilate and absorb them or expel them as inorganic carbon. Both of these processes promote the transformation of matter within the aquatic system, drive energy flow, and are important components of the overall quality of a water body. Heterotrophic bacteria community structure and functionality is used to assess the trophic status and quality of freshwater systems. References Biological oceanography Planktology Aquatic ecology
Heterotrophic picoplankton
[ "Biology" ]
707
[ "Aquatic ecology", "Ecosystems" ]
8,672,923
https://en.wikipedia.org/wiki/HR%203803
HR 3803 or N Velorum (N Vel) is a 3rd-magnitude star on the border between the southern constellations Carina and Vela. Based upon parallax measurements, it is approximately from Earth. It has a spectral classification of K5 III, indicating that it has evolved away from the main sequence and is now a giant star. At this evolutionary stage, N Velorum has expanded to 66 times the size of the Sun and is emitting 776 times its luminosity. Its effective temperature is 3,964 K, about 30% cooler than the Sun's, which gives it the typical orange hue of K-type stars. In 1752, French astronomer Nicolas Louis de Lacaille divided the former constellation Argo Navis into three separate constellations, and then referenced its stars by extending Bayer's system of star nomenclature; this star was given the designation N Velorum. In 1871 Benjamin Apthorp Gould discovered this star to be variable, but this occurred prior to the standardization of variable star nomenclature by German astronomer Friedrich Wilhelm Argelander during the nineteenth century, so it does not fall into the standard range of variable star designations. References Velorum, N 082668 K-type giants Vela (constellation) 3803 046701 Durchmusterung objects
HR 3803
[ "Astronomy" ]
273
[ "Vela (constellation)", "Constellations" ]
8,672,942
https://en.wikipedia.org/wiki/HD%2074272
HD 74272 is a star in the constellation Vela. It has the Bayer designation n Velorum, while HD 74272 is the identifier from the Henry Draper catalogue. This is a white hued star that is faintly visible to the naked eye with an apparent visual magnitude of 4.74. It is located at a distance of approximately 1,800 light years from the Sun based on parallax. The star is drifting further away with a radial velocity of +17 km/s. This is an aging, massive bright giant star with a stellar classification of A5 II. It is an estimated 30 million years old with 8.8 times the mass of the Sun. Having exhausted the supply of hydrogen at its core, it has expanded to around 33 times the radius of the Sun. The star is radiating 3,287 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 7,595 K. References A-type bright giants Vela (constellation) Velorum, n Durchmusterung objects 074272 046701 3452
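As a quick plausibility check (my own arithmetic, assuming a solar effective temperature of about 5,772 K), the quoted radius and temperature reproduce the quoted luminosity through the Stefan–Boltzmann relation:

$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\text{eff}}}{T_\odot}\right)^{4} \approx 33^{2}\left(\frac{7595}{5772}\right)^{4} \approx 1089 \times 3.0 \approx 3.3 \times 10^{3},$$

in good agreement with the listed 3,287 solar luminosities; the small difference comes from rounding the radius.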
HD 74272
[ "Astronomy" ]
229
[ "Vela (constellation)", "Constellations" ]
8,672,984
https://en.wikipedia.org/wiki/Password%20fatigue
Password fatigue is the feeling experienced by many people who are required to remember an excessive number of passwords as part of their daily routine, such as to log in to a computer at work, undo a bicycle lock or conduct banking from an automated teller machine. The concept is also known as password chaos or, more broadly, as identity chaos. Causes The increasing prominence of information technology and the Internet in employment, finance, recreation and other aspects of people's lives, and the ensuing introduction of secure transaction technology, has led to people accumulating a proliferation of accounts and passwords. According to a survey conducted in February 2020 by the password manager NordPass, a typical user has 100 passwords. Some factors causing password fatigue are: unexpected demands that a user create a new password unexpected demands that a user create a new password that uses a particular pattern of letters, digits, and special characters demands that the user type the new password twice frequent and unexpected demands for the user to re-enter their password throughout the day as they surf to different parts of an intranet blind typing, both when responding to a password prompt and when setting a new password. Responses Some companies are well organized in this respect and have implemented alternative authentication methods, or have adopted technologies so that a user's credentials are entered automatically. However, others may not focus on ease of use, or may even worsen the situation, by constantly implementing new applications with their own authentication systems. Single sign-on software (SSO) can help mitigate this problem by requiring users to remember only one password to an application that in turn will automatically give access to several other accounts, with or without the need for agent software on the user's computer. A potential disadvantage is that loss of a single password will prevent access to all services using the SSO system, and moreover theft or misuse of such a password presents a criminal or attacker with many targets. Integrated password management software - Many operating systems provide a mechanism to store and retrieve passwords by using the user's login password to unlock an encrypted password database. Microsoft Windows provides Credential Manager to store usernames and passwords used to log on to websites or other computers on a network; iOS, iPadOS, and macOS share a Keychain feature that provides this functionality; and similar functionality is present in the GNOME and KDE open source desktops. In addition, web browser developers have added similar functionality to all the major browsers. However, if the user's system is corrupted, stolen or compromised, the user can also lose access to sites where they rely on the password store or recovery features to remember their login data. Third-party (add-on) password management software such as KeePass and Password Safe can help mitigate the problem of password fatigue by storing passwords in a database encrypted with a single password. However, this presents problems similar to those of single sign-on in that losing the single password prevents access to all the other passwords, while someone else gaining it will have access to them. Password recovery - The majority of password-protected web services provide a password recovery feature that will allow users to recover their passwords via the email address (or other information) tied to that account. 
However, this system has itself become a target of social engineering attacks by criminals. These criminals obtain enough information about the target to impersonate them and request a reset email, which is then redirected through other means to an account under the attacker's control, enabling the attacker to hijack the account. Passwordless authentication - One solution to eliminate password fatigue is to get rid of passwords entirely. Passwordless authentication services such as Okta, Transmit Security and Secret Double Octopus replace passwords with alternative verification methods such as biometric authentication or security tokens. Unlike SSO or password management software, passwordless authentication does not require a user to create or remember a password at any point. Innovative approaches As password fatigue continues to challenge users, notable advances in password management techniques have emerged to alleviate this burden. These innovative approaches provide alternatives to traditional password-based authentication systems. Here are some notable strategies: Biometric Authentication Biometric authentication methods offer a seamless and secure alternative to traditional passwords, including fingerprint recognition, facial recognition, and iris scanning. Users can authenticate their identities without remembering complex passwords by leveraging unique biological characteristics. Companies like Okta and Transmit Security have developed robust biometric authentication solutions, reducing reliance on traditional passwords. Security Tokens Security tokens, also referred to as hardware tokens or authentication tokens, add an extra layer of security beyond passwords. These physical devices generate a one-time passcode or cryptographic key that users input alongside their passwords for authentication. This two-factor authentication (2FA) method enhances security while reducing the cognitive load of managing multiple passwords. Secret Double Octopus is a notable provider of security token solutions. Passwordless Authentication Passwordless authentication services represent a significant shift in authentication methods by eliminating the need for passwords. Instead, these services utilize alternative verification methods, such as biometric authentication, security keys, or magic email links. By removing passwords from the equation, passwordless authentication significantly simplifies the user experience and reduces the risk of password-related security breaches. Okta, Transmit Security, and Secret Double Octopus are pioneering providers of passwordless authentication solutions. Behavioral Biometrics Emerging technologies in behavioral biometrics analyze unique behavioral patterns, such as typing speed, mouse movements, and touchscreen interactions, for user authentication. By continuously monitoring these behavioral signals, the system can accurately verify a user's identity without requiring an explicit authentication action. Behavioral biometrics provide a seamless authentication experience while minimizing the cognitive load associated with traditional password-based systems. These innovative approaches offer promising alternatives to traditional password management techniques, delivering enhancements in security, usability, and user convenience. As technology advances, further progress in authentication methods will effectively address the ongoing challenge of password fatigue. 
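The one-time passcodes mentioned under Security Tokens above are commonly generated with the TOTP algorithm (RFC 6238). Below is a minimal sketch using only the Python standard library; the shared secret is a made-up example, and real deployments add secure provisioning, clock-drift windows, and rate limiting.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # current 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example shared secret (illustrative only; base32-encoded).
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on both the shared secret and the current time, a stolen password alone is not enough to authenticate, which is the second factor the section above describes.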
See also BugMeNot Decision fatigue Identity management Password manager Password strength Security question Usability of web authentication systems Notes External links Noguchi, Yuki. Access Denied, Washington Post, 23 September 2006. Catone, Josh. Bad Form: 61% Use Same Password for Everything, 17 January 2008. Data security Password authentication
Password fatigue
[ "Engineering" ]
1,284
[ "Cybersecurity engineering", "Data security" ]
8,672,988
https://en.wikipedia.org/wiki/HD%2072108
HD 72108 (A Vel, A Velorum) is a star system in the constellation Vela. It is approximately 1640 light years from Earth. The primary component, HD 72108 A, is a blue-white B-type subgiant with an apparent magnitude of +5.33. It is a spectroscopic binary, whose components are separated by 0.176 arcseconds. At a distance of 4 arcseconds away is the third component, the magnitude +7.7 HD 72108 B. The fourth component, HD 72108 C has an apparent magnitude of +9.3, and is 19 arcseconds from the primary. References Velorum, A 072108 B-type subgiants 4 Spectroscopic binaries Vela (constellation) 3358 041616 CD-47 04004
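For a sense of scale (my own arithmetic from the quoted figures): at approximately 1,640 light-years, about 500 parsecs, an angular separation of θ arcseconds corresponds to a projected separation of roughly d(pc) × θ astronomical units:

$$s \approx d\,[\text{pc}]\;\theta\,[''] \ \text{AU}: \quad 0.176'' \approx 89\ \text{AU}, \quad 4'' \approx 2{,}000\ \text{AU}, \quad 19'' \approx 9{,}600\ \text{AU}.$$

These are projected (minimum) separations; the true three-dimensional separations may be larger.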
HD 72108
[ "Astronomy" ]
182
[ "Vela (constellation)", "Constellations" ]
8,672,997
https://en.wikipedia.org/wiki/HD%2075063
HD 75063 is a single star in the southern constellation of Vela. It has the Bayer designation of a Velorum, while HD 75063 is the identifier from the Henry Draper Catalogue. This is a naked-eye star with an apparent visual magnitude of 3.87 and a white hue. The star is located at a distance of approximately 1,900 light-years from the Sun based on parallax measurements and has an absolute magnitude of −4.89. It is drifting further away with a radial velocity of +23 km/s. This object has been given stellar classifications of A1 III and A0 II, matching a massive A-type giant or bright giant star, respectively. It is an estimated 31 million years old and is spinning with a projected rotational velocity of 30 km/s. The star has 8.6 times the mass of the Sun and around 4.5 times the Sun's radius. The star is radiating 8,670 times the Sun's luminosity from its photosphere at an effective temperature of . References A-type giants A-type bright giants Vela (constellation) Velorum, a Durchmusterung objects 075063 043023 3487
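The quoted apparent and absolute magnitudes are mutually consistent with the quoted distance (my own arithmetic; 1,900 light-years is roughly 580 parsecs), via the distance modulus:

$$M = m - 5\log_{10}\!\left(\frac{d}{10\ \text{pc}}\right) \approx 3.87 - 5\log_{10}(58) \approx 3.87 - 8.82 \approx -4.95,$$

close to the listed absolute magnitude of −4.89; the small difference reflects rounding of the distance.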
HD 75063
[ "Astronomy" ]
254
[ "Vela (constellation)", "Constellations" ]
8,673,632
https://en.wikipedia.org/wiki/Ronde-bosse
Ronde-bosse, en ronde bosse or encrusted enamel is an enamelling technique developed in France in the late 14th century that produces small three-dimensional figures, or reliefs, largely or entirely covered in enamel. The new method involved the partial concealment of the underlying gold, or sometimes silver, from which the figure was formed. It differs from older techniques which all produced only enamel on a flat or curved surface, and mostly, like champlevé, normally used non-precious metals, such as copper, which were gilded to look like gold. In the technique of enamel en ronde-bosse small figures are created in gold or silver and their surfaces lightly roughened to provide a key for the enamel, which is applied as a paste and fired. In places the framework may only be wire. The term derives from the French term émail en ronde bosse ("enamel in the round"); however in French en ronde bosse merely means "in the round" and is used of any sculpture; in English ronde bosse or en ronde bosse, though usually treated as foreign terms and italicised, are specifically used of the enamel technique, and in recent decades have largely replaced the older English term "encrusted enamel". The technique rapidly reached maturity and produced a group of "exceptionally grand French and Burgundian court commissions, chiefly made c. 1400 but apparently continuing into the second quarter of the fifteenth century". These include the Goldenes Rössl ("Golden Pony") in Altötting, Bavaria, the most famous of the group, the Holy Thorn Reliquary in the British Museum, the Montalto Reliquary, the "Tableau of the Trinity" in the Louvre (possibly made in London), and a handful of other religious works, but the great majority of pieces recorded in princely inventories have been destroyed to recover their gold. After this period smaller works continued to be produced, and there was a revival of larger works c. 1500-1520, although it is not clear where these were made. The technique was used on parts of a relatively large sculpture in Benvenuto Cellini's famous Salt Cellar (1543, Vienna) and remained common through to the Baroque, usually in small works and jewellery. The Russian House of Fabergé made much use of the technique from the 19th century until the Russian Revolution. The technique can be used with both translucent and opaque enamel, but more commonly the latter; translucent enamel is mostly found on reliefs using ronde bosse, such as a plaque with the Entombment of Christ in the Metropolitan Museum of Art, New York. In the works from around 1400, the recently developed white enamel usually predominates. Notes References Campbell, Marian. An Introduction to Medieval Enamels, 1983, HMSO for V&A Museum, Osborne, Harold (ed), The Oxford Companion to the Decorative Arts, 1975, OUP, Stratford, Jenny, and others, Richard II's Treasure; the Riches of a Medieval King, from The Institute of Historical Research and Royal Holloway, University of London. Images of several pieces in ronde-bosse on these pages under "Items": "Image of St Michael", "The swan badge and the Dunstable Swan", "Brooches" External links Morse with the Trinity, c. 1400, National Gallery of Art, Washington Saint Catherine of Alexandria in ronde-bosse; and The Dead Christ with the Virgin, Saint John, and Angels, ca. 1390–1405, both from the Metropolitan Museum of Arts, Vitreous enamel
Ronde-bosse
[ "Chemistry" ]
746
[ "Coatings", "Vitreous enamel" ]
8,673,754
https://en.wikipedia.org/wiki/List%20of%20second%20moments%20of%20area
The following is a list of second moments of area of some shapes. The second moment of area, also known as area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The unit of dimension of the second moment of area is length to the fourth power, L⁴, and should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia. Second moments of area Please note that for the second moment of area equations in the below table: I_x = ∬ y² dA and I_y = ∬ x² dA. Parallel axis theorem The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance (d) between the axes. See also List of moments of inertia List of centroids Second polar moment of area References Area moment of inertia Area moments of inertia Moment (physics)
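As a compact statement of the parallel axis theorem just described, with I_x the second moment about the centroidal axis, A the cross-sectional area and d the perpendicular distance between the axes (the rectangle in the comment is a standard worked case, not an entry taken from this list):

```latex
I_{x'} = I_x + A d^2
% e.g. for a rectangle of base b and height h, taken about its own base:
% I_{x'} = \frac{b h^3}{12} + (b h)\left(\frac{h}{2}\right)^2 = \frac{b h^3}{3}
```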
List of second moments of area
[ "Physics", "Mathematics", "Engineering" ]
234
[ "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Moment (physics)" ]
8,674,784
https://en.wikipedia.org/wiki/Soricomorpha
Soricomorpha (from Greek "shrew-form") is a formerly used taxon within the class of mammals. In the past it formed a significant group within the former order Insectivora. However, Insectivora was shown to be polyphyletic and various new orders were split off from it, including Afrosoricida (tenrecs, golden moles, otter shrews), Macroscelidea (elephant shrews), and Erinaceomorpha (hedgehogs and gymnures), with the four remaining extant and recent families of Soricomorpha shown here then being treated as a separate order. Insectivora was left empty and disbanded. Subsequently, Soricomorpha itself was shown to be paraphyletic, because Soricidae shared a more recent common ancestor with Erinaceidae than with other soricomorphs. The combination of Soricomorpha and Erinaceidae, referred to as order Eulipotyphla, has been shown to be monophyletic. Living members of the group range in size from the Etruscan shrew, at about and , to the Cuban solenodon, at about and . Soricomorpha Family Soricidae (shrews) Subfamily Crocidurinae: (white-toothed shrews) Subfamily Soricinae: (red-toothed shrews) Subfamily Myosoricinae: (African white-toothed shrews) Family Talpidae: (moles and close relatives) Subfamily Scalopinae (New World moles and close relatives) Subfamily Talpinae (Old World moles and close relatives) Subfamily Uropsilinae (Chinese shrew-like moles) Family Solenodontidae: solenodons (rare primitive eulipotyphlans of the Caribbean; two extant species) Family † Nesophontidae: West Indian shrews (recently extinct eulipotyphlans of the Caribbean) Family † Heterosoricidae genus †Atasorex genus †Dinosorex genus †Domnina genus †Gobisorex genus †Heterosorex genus †Ingentisorex genus †Lusorex genus †Paradomnina genus †Quercysorex Family † Nyctitheriidae References Taxa named by William King Gregory Extant Eocene first appearances Paraphyletic groups Eulipotyphla Obsolete mammal taxa
Soricomorpha
[ "Biology" ]
508
[ "Phylogenetics", "Paraphyletic groups" ]
8,674,917
https://en.wikipedia.org/wiki/Nanobatteries
Nanobatteries are fabricated batteries employing technology at the nanoscale, particles that measure less than 100 nanometers or 10⁻⁷ meters. These batteries may be nano in size or may use nanotechnology in a macro scale battery. Nanoscale batteries can be combined to function as a macrobattery such as within a nanopore battery. Traditional lithium-ion battery technology uses active materials, such as cobalt-oxide or manganese oxide, with particles that range in size between 5 and 20 micrometers (5000 and 20000 nanometers – over 100 times nanoscale). It is hoped that nano-engineering will improve many of the shortcomings of present battery technology, such as volume expansion and power density. Background A battery converts chemical energy to electrical energy and is composed of three general parts: Anode (negative electrode) Cathode (positive electrode) Electrolyte The anode and cathode have two different chemical potentials, which depend on the reactions that occur at either terminus. The electrolyte can be a solid or liquid that is ionically conductive. The boundary between the electrode and electrolyte is called the solid-electrolyte interphase (SEI). Connecting a circuit across the electrodes causes the chemical energy stored in the battery to be converted to electrical energy. Limitations of current battery technology A battery's ability to store charge is dependent on its energy density and power density. It is important that charge can remain stored and that a maximum amount of charge can be stored within a battery. Cycling and volume expansion are also important considerations. While many other types of batteries exist, current battery technology is based on lithium-ion intercalation technology for its high power and energy densities, long cycle life and no memory effects. These characteristics have led lithium-ion batteries to be preferred over other battery types. To improve a battery technology, cycling ability and energy and power density must be maximized and volume expansion must be minimized. During lithium intercalation, the volume of the electrode expands, causing mechanical strain. The mechanical strain compromises the structural integrity of the electrode, causing it to crack. Nanoparticles can decrease the amount of strain placed on a material when the battery undergoes cycling, as the volume expansion associated with nanoparticles is less than the volume expansion associated with microparticles. The smaller volume expansion associated with nanoparticles also improves the reversibility of the battery: the ability of the battery to undergo many cycles without losing charge. In current lithium-ion battery technology, lithium diffusion rates are slow. Through nanotechnology, faster diffusion rates can be achieved. Nanoparticles require shorter distances for the transport of electrons, which leads to faster diffusion rates and a higher conductivity, which ultimately leads to a greater power density. Advantages of nanotechnology Using nanotechnology in the manufacture of batteries offers the following benefits: Increasing the available power from a battery and decreasing the time required to recharge a battery. These benefits are achieved by coating the surface of an electrode with nanoparticles, increasing the surface area of the electrode, thereby allowing more current to flow between the electrode and the chemicals inside the battery.
Nanomaterials can be used as a coating to separate the electrodes from any liquids in the battery, when the battery is not in use. In the current battery technology, the liquids and solids interact, causing a low level discharge. This decreases the shelf life of a battery. Disadvantages of nanotechnology Nanotechnology provides its own challenges in batteries: Nanoparticles have low density and high surface area. The greater the surface area, the more likely reactions are to occur at the surface with the air. This serves to destabilize the materials in the battery. Owing to nanoparticles' low density, a higher interparticle resistance exists, decreasing the electrical conductivity of the material. Nanomaterials can be difficult to manufacture, increasing their cost. While nanomaterials may greatly improve the abilities of a battery, they may be cost-prohibitive to make. Active and past research Much research has been performed surrounding lithium-ion batteries to maximize their potential. In order to properly harness clean energy resources, such as solar power, wind power and tidal energy, batteries capable of storing massive amounts of energy used in grid energy storage are required. Lithium iron phosphate electrodes are being researched for potential applications to grid energy storage. Electric vehicles are another technology requiring improved batteries. Electric vehicle batteries currently require long charge times, effectively prohibiting their use in long-distance electric cars. Nanostructured anode materials Graphite and SEI The anode in lithium-ion batteries is almost always graphite. Graphite anodes need to improve their thermal stability and create a higher power capability. Graphite and certain electrolytes can undergo reactions that reduce the electrolyte and create an SEI (Solid Electrolyte Interphase), effectively reducing the potential of the battery. Nanocoatings at the SEI are currently being researched to stop these reactions from occurring. In Li-ion batteries, the SEI is necessary for thermal stability, but hinders the flow of lithium ions from the electrode to the electrolyte. Park et al. have developed a nanoscale polydopamine coating such that the SEI no longer interferes with the electrode; instead the SEI interacts with the polydopamine coating. Graphene and other carbon materials Graphene has been studied extensively for its use in electrochemical systems such as batteries since its first isolation in 2004. Graphene offers high surface area and good conductivity. In current lithium-ion battery technology, the 2D networks of graphite inhibit smooth lithium-ion intercalation; the lithium ions must travel around the 2D graphite sheets to reach the electrolyte. This slows the charging rates of the battery. Porous graphene materials are currently being studied to improve this problem. Porous graphene involves either formation of defects in the 2D sheet or the creation of a 3D graphene-based porous superstructure. As an anode, graphene would provide space for expansion such that the problem of volume expansion does not occur. 3D graphene has shown extremely high lithium ion extraction rates, indicating a high reversible capacity. As well, the random "house-of-cards" arrangement of the graphene anode would allow lithium ions to be stored not only on the internal surface of graphene, but also in the nanopores that exist between the single layers of graphene. Raccichini et al. also outlined the drawbacks of graphene and graphene-based composites.
Graphene shows a large irreversible capacity loss during the first lithiation step. As graphene has a large surface area, this results in a large initial irreversible capacity. They proposed that this drawback was so large that graphene-based cells are “unfeasible”. Research is still being done on graphene in anodes. Carbon nanotubes have been used as electrodes for batteries that use intercalation, like lithium-ion batteries, in an effort to improve capacity. Titanium oxides Titanium oxides are another anode material that has been researched for applications to electric vehicles and grid energy storage. However, low electronic and ionic conductivity, as well as the high cost of titanium oxides, have made this material less favorable than other anode materials. Silicon-based anodes Silicon-based anodes have also been researched, namely for their higher theoretical capacity than that of graphite. Silicon-based anodes have high reaction rates with the electrolyte, low volumetric capacity and an extremely large volume expansion during cycling. However, recent work has been done to decrease volume expansion in silicon-based anodes. By creating a sphere of conductive carbon around the silicon particle, Liu et al. have shown that this small structural change leaves enough room for the silicon to expand and contract without imposing mechanical strain on the electrode. Nanostructured cathode materials Carbon nanostructures have been used to increase the capability of electrodes, namely the cathode. In LiSO2 batteries, carbon nanostructuring was able to theoretically increase the energy density of the battery by 70% over current lithium-ion battery technology. In general, lithium alloys have been found to have a higher theoretical energy density than lithium ions. Traditionally, LiCoO2 has been used as the cathode in lithium-ion batteries. The first successful alternative cathode for use in electric vehicles has been LiFePO4. LiFePO4 has shown increased power density, a longer lifetime and improved safety over LiCoO2. Graphene Graphene could be used to improve the electrical conductivity of cathode materials. LiCoO2, LiMn2O4 and LiFePO4 are all commonly used cathode materials in lithium-ion batteries. These cathode materials are typically mixed with other carbon-composite materials to improve their rate capability. As graphene has a higher electrical conductivity than these other carbon-composite materials, like carbon black, graphene has a greater ability to improve these cathode materials than other carbon-composite additives. Piao et al. have specifically studied porous graphene in relation to plain graphene. Porous graphene combined with LiFePO4 was advantageous over plain graphene combined with LiFePO4, for improved cycle stability. Porous graphene created good pore channels for the diffusion of lithium ions and prevented the buildup of LiFePO4 particles. Raccichini et al. suggested graphene-based composites as cathodes in sodium-ion batteries. Sodium ions are too large to fit into the typical graphite lattice, so graphene would allow sodium ions to intercalate. Graphene has also been suggested to fix some of the problems related to lithium-sulphur batteries. Problems associated with lithium-sulphur batteries include dissolution of the intermediate in the electrolyte, large volume expansion and poor electrical conductivity. Graphene has been mixed with sulphur at the cathode in an attempt to improve the capacity, stability and conductivity of these batteries.
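To make the graphite-versus-silicon comparison above concrete, the sketch below (in Python, an assumption, as the article contains no code) computes the standard theoretical gravimetric capacity Q = x·F/(3.6·M), where x is the number of lithium ions stored per host formula unit, F the Faraday constant and M the molar mass of the host formula unit; the host stoichiometries LiC6 and Li15Si4 are the usual textbook assumptions, not figures stated in this article:

```python
F = 96485.3  # Faraday constant, coulombs per mole of charge

def theoretical_capacity_mAh_per_g(li_per_formula_unit, molar_mass_host):
    """Theoretical gravimetric capacity of an anode host in mAh/g.
    One mole of Li carries F coulombs; 1 mAh = 3.6 C, hence the 3.6 factor."""
    return li_per_formula_unit * F / (3.6 * molar_mass_host)

# Graphite stores one Li per C6 unit (6 * 12.011 g/mol)
print(round(theoretical_capacity_mAh_per_g(1, 6 * 12.011)))     # ~372 mAh/g
# Silicon stores up to Li15Si4, i.e. 3.75 Li per Si atom (28.0855 g/mol)
print(round(theoretical_capacity_mAh_per_g(3.75, 28.0855)))     # ~3579 mAh/g
```

The order-of-magnitude gap between the two printed values is what motivates the silicon-anode research described above, despite silicon's volume-expansion problem.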
Conversion electrodes Conversion electrodes are electrodes where chemical ionic bonds are broken and reformed. A transformation of the crystalline structure of the molecules also occurs. In conversion electrodes, three lithium ions can be accommodated for every metal ion, whereas the current intercalation technology can only accommodate one lithium ion for every metal ion. Larger lithium to metal ion ratios indicate increased battery capacity. A disadvantage of conversion electrodes is their large voltage hysteresis. Mapping Balke et al. are aiming to understand the intercalation mechanism for lithium-ion batteries at the nanoscale. This mechanism is understood at the microscale, but behavior of matter changes depending on the size of the material. Zhu et al. are also mapping the intercalation of lithium ions at the nanoscale using scanning probe microscopy. Mathematical models for lithium battery intercalation have been calculated and are still under investigation. Whittingham suggested that there was no single mechanism by which lithium ions move through the electrolyte of the battery. The movement depended on a variety of factors including, but not limited to, particle size, the thermodynamic state or metastable state of the battery and whether the reaction operated continuously. Their experimental data for LiFePO4 – FePO4 suggested the movement of Li-ions in a curved path rather than a linear straight jump within the electrolyte. Intercalation mechanisms have been studied for polyvalent cations as well. Lee et al. have studied and determined the proper intercalation mechanism for rechargeable zinc batteries. Stretchable electronics Research has also been done to use carbon nanotube fiber springs as electrodes. LiMn2O4 and Li4Ti5O12 are the nanoparticles that have been used as the cathode and anode respectively, and have demonstrated the ability to stretch to 300% of their original length. Applications for stretchable electronics include energy storage devices and solar cells. Printable batteries Researchers at the University of California, Los Angeles have successfully developed a "nanotube ink" for manufacturing flexible batteries using printed electronics techniques. A network of carbon nanotubes has been used as a form of electronically conducting nanowires in the cathode of a zinc-carbon battery. Using nanotube ink, the carbon cathode tube and manganese oxide electrolyte components of the zinc-carbon battery can be printed as different layers on a surface, over which an anode layer of zinc foil can be printed. This technology replaces charge collectors like metal sheets or films with a random array of carbon nanotubes. The carbon nanotubes add conductance. Thin and flexible batteries can be manufactured that are less than a millimeter thick. Although discharge currents of the batteries are at present below the level of practical use, the nanotubes in the ink allow the charge to conduct more efficiently than in a conventional battery, such that the nanotube technology could lead to improvements in battery performance. Technology like this is applicable to solar cells, supercapacitors, light-emitting diodes and smart radio frequency identification (RFID) tags. Researching companies Toshiba By using nanomaterial, Toshiba has increased the surface area of the lithium and widened the bottleneck, allowing the particles to pass through the liquid and recharge the battery more quickly.
Toshiba states that it tested a new battery by discharging and fully recharging one thousand times at 77 °C and found that it lost only one percent of its capacity, an indication of a long battery life. Toshiba's battery is 3.8 mm thick, 62 mm high and 35 mm deep. A123Systems A123Systems has also developed a commercial nano Li-ion battery. A123 Systems claims their battery has the widest temperature range at . Much like Toshiba's nanobattery, A123 Li-ion batteries charge to "high capacity" in five minutes. Safety is a key feature touted by the A123 technology, with a video on their website of a nail drive test, in which a nail is driven through a traditional Li-ion battery and an A123 Li-ion battery: the traditional battery flames up and bubbles at one end, while the A123 battery simply emits a wisp of smoke at the penetration site. Thermal conductivity is another selling point for the A123 battery, with the claim that the A123 battery offers 4 times higher thermal conductivity than conventional Lithium-Ion cylindrical cells. The nanotechnology they employ is a patented nanophosphate technology. Valence Also in the market is Valence Technology, Inc. The technology they are marketing is Saphion Li-ion technology. Like A123, they are using a nanophosphate technology, and different active materials than traditional Li-ion batteries. Altair AltairNano has also developed a nanobattery with a one-minute recharge. The advance that Altair claims to have made is in the optimization of nano-structured lithium titanate spinel oxide (LTO). U.S. Photonics U.S. Photonics is in the process of developing a nanobattery utilizing "environmentally friendly" nanomaterials for both the anode and cathode as well as arrays of individual nano-sized cell containers for the solid polymer electrolyte. U.S. Photonics has received a National Science Foundation SBIR phase I grant for development of nanobattery technology. Sony Sony produced the first cobalt-based lithium-ion battery in 1991. Since the inception of this first Li-ion battery, research on nanobatteries has been underway, with Sony continuing their strides into the nanobattery field. See also Supercapacitor Nanoelectronics Nanotechnology List of battery types References External links https://web.archive.org/web/20140712040425/http://accelerating.org/articles/phevfuture.html https://web.archive.org/web/20061209094343/http://www.accelerating.org/newsletter/2005/31may05.html http://www.technewsworld.com/story/hardware/41889.html http://www.a123systems.com http://www.valence.com/ https://web.archive.org/web/20070710213510/http://www.altairnano.com/markets_amps.html Overview of Nanobatteries at UnderstandingNano Website Nanoelectronics Battery types
Nanobatteries
[ "Materials_science" ]
3,466
[ "Nanotechnology", "Nanoelectronics" ]
8,676,520
https://en.wikipedia.org/wiki/Polymethine
Polymethines are compounds made up from an odd number of methine groups (CH) bound together by alternating single and double bonds. Compounds made up from an even number of methine groups are known as polyenes. Polymethine dyes Cyanines are synthetic dyes belonging to polymethine group. Anthocyanidins are natural plant pigments belonging to the group of the polymethine dyes. Polymethines are fluorescent dyes that may be attached to nucleic acid probes for different uses, e.g., to accurately count reticulocytes. References Alkenes
Polymethine
[ "Chemistry" ]
128
[ "Organic compounds", "Alkenes" ]
8,676,888
https://en.wikipedia.org/wiki/Sodium%20permanganate
Sodium permanganate is the inorganic compound with the formula NaMnO4. It is closely related to the more commonly encountered potassium permanganate, but it is generally less desirable, because it is more expensive to produce. It is mainly available as the monohydrate. This salt absorbs water from the atmosphere and has a low melting point. Being about 15 times more soluble than KMnO4, sodium permanganate finds some applications where very high concentrations of MnO4− are sought. Preparation and properties Sodium permanganate cannot be prepared analogously to the route to KMnO4 because the required intermediate manganate salt, Na2MnO4, does not form. Thus less direct routes are used, including conversion from KMnO4. Sodium permanganate behaves similarly to potassium permanganate. It dissolves readily in water to give deep purple solutions, evaporation of which gives prismatic purple-black glistening crystals of the monohydrate NaMnO4·H2O. The potassium salt does not form a hydrate. Because of its hygroscopic nature, it is less useful in analytical chemistry than its potassium counterpart. It can be prepared by the reaction of manganese dioxide with sodium hypochlorite: 2 MnO2 + 3 NaClO + 2 NaOH → 2 NaMnO4 + 3 NaCl + H2O Applications Because of its high solubility, its aqueous solutions are used as a drilled-hole debris remover and etchant in printed circuitry, though with limited utility. It is gaining popularity in water treatment for taste, odor, and zebra mussel control. The V-2 rocket used it in combination with hydrogen peroxide to drive a steam turbopump. As an oxidizer, sodium permanganate is used in environmental remediation of soil and groundwater contaminated with chlorinated solvents using the remediation technology in situ chemical oxidation, also referred to as ISCO. References Sodium compounds Permanganates Oxidizing agents Disinfectants
Sodium permanganate
[ "Chemistry" ]
426
[ "Redox", "Oxidizing agents", "Permanganates" ]
8,677,967
https://en.wikipedia.org/wiki/Tricholoma%20magnivelare
Tricholoma magnivelare, commonly known as the matsutake, white matsutake, ponderosa mushroom, pine mushroom, or American matsutake, is a gilled mushroom found east of the Rocky Mountains in North America growing in coniferous woodland. These ectomycorrhizal fungi are typically edible species that exist in a symbiotic relationship with various species of pine, commonly jack pine. They belong to the genus Tricholoma, which includes the closely related East Asian songi or matsutake as well as the Western matsutake (T. murrillianum) and Meso-American matsutake (T. mesoamericanum). Taxonomy Until recently, the name Tricholoma magnivelare described all matsutake mushrooms found in North America. Since the early 2000s, molecular data has indicated the presence of separate species in the prior group, with only those found in the Eastern United States and Canada retaining the name T. magnivelare. Description The cap ranges from in width, and is white with reddish-yellow or brown spots. The stalk is tall and 2–6 cm wide. The spores are white. The mycelium is thought to be parasitized by the plant Allotropa virgata, which primarily feeds on matsutake. Chemical ecology This mushroom is noted for its distinctive odour/flavour. The major compound identified from fresh sporocarps is the fragrant compound, methyl cinnamate. Also, alpha-pinene and bornyl acetate are present in trace amounts in uncrushed samples. Tissue disruption of the sporocarp produces large amounts of 1-octen-3-ol, a compound found in many mushrooms that has a typical mushroom-like odour. Both methyl cinnamate and 1-octen-3-ol have been shown to be potent banana slug (Ariolimax columbianus) antifeedants. Cultures of the secondary mycelium of T. magnivelare did not have any of the compounds found in the sporocarp. The major volatile component of mycelial cultures is 3,5-dichloro-4-methoxybenzaldehyde. 3,5-Dichloro-4-methoxybenzyl alcohol and hexanal were identified as minor components from these cultures. These chlorinated compounds inhibit fungal metabolism: fungal cell wall growth by chitin synthase and melanin biosynthesis. These compounds may keep other fungi from taking over the tree roots that T. magnivelare colonizes. Similar species Similar species in the genus include Tricholoma apium, T. caligatum, T. focale, and T. vernaticum. Other similar species include Catathelasma imperiale, C. ventricosum, Russula brevipes, and the poisonous Amanita smithiana. Uses While tough, the mushroom can be eaten both raw and cooked and is considered choice. In recent years, globalization and wider social acceptability of mushroom hunting have made collection of pine mushrooms widely popular in North America. However, serious poisonings have resulted from confusion of this mushroom with poisonous white Amanita species. Local mushroom hunters sell their harvest daily to local depots, which rush them to airports. The mushrooms are then shipped fresh by air to Asia where demand is high and prices are at a premium. See also List of North American Tricholoma List of Tricholoma species References External links Mushroom-Collecting.com - Matsutake Mykoweb profile of T. magnivelare magnivelare Edible fungi Fungi described in 1873 Fungi of North America Taxa named by Charles Horton Peck Fungus species
Tricholoma magnivelare
[ "Biology" ]
776
[ "Fungi", "Fungus species" ]
8,678,512
https://en.wikipedia.org/wiki/Toftness%20device
The Toftness Radiation Detector was a quack instrument used by some chiropractors. It was patented by Irwing N. Toftness in 1971, and was banned from use in the United States in 1982. Toftness claimed that it detected electromagnetic radiation emanating from vertebral subluxations. The device had multiple forms, but a common configuration consisted of a plastic cylinder with a series of plastic lenses inside, as well as a clear plastic "detection plate". The operator would rub their finger against the detection plate while the device was held close to an area of the spine, and report the degree of perceived resistance against the movement of their fingers. An increase in perceived resistance would indicate which area of the body required chiropractic manipulation. Specifically, Toftness made the claim in his 1971 patent that "what is sensed by the operator is a friction or dragging sensation which retards the passage of a finger or fingers over the surface of the deflection plate." Toftness devices were banned by the United States District Court in Wisconsin in January 1982. The Court issued a permanent nationwide injunction against the manufacture, promotion, sale, lease, distribution, shipping, delivery, or use of the Toftness Radiation Detector, or any product which utilizes the same principles as the Toftness Radiation Detector. The United States Court of Appeals for the Seventh Circuit upheld the decision in 1984. According to the United States Food and Drug Administration, the Toftness Radiation Detectors were misbranded under the Food, Drug, and Cosmetic Act because they could not be used safely or effectively for their intended purposes. The devices were purportedly being used to assist with the diagnosis and treatment of injuries, without FDA approval. In 2013, David Toftness, nephew of Irwing N. Toftness, and the Toftness Post-Graduate School of Chiropractic were fined for shipping the devices across state borders. See also Chiropractic Dowsing N-ray Pathological science References External links Disciplinary Action against Harold J. Dykema, D.C. Pseudoscience Chiropractic Radioactive quackery
Toftness device
[ "Chemistry" ]
436
[ "Radioactive quackery", "Radioactivity" ]
8,678,691
https://en.wikipedia.org/wiki/Spring%20house
A spring house, or springhouse, is a small building, usually of a single room, constructed over a spring. While the original purpose of a springhouse was to keep the spring water clean by excluding fallen leaves, animals, etc., the enclosing structure was also used for refrigeration before the advent of ice delivery and, later, electric refrigeration. The water of the spring maintains a constant cool temperature inside the spring house throughout the year. Food that would otherwise spoil, such as meat, fruit, or dairy products, could be kept there, safe from animal depredations as well. Springhouses thus often also served as pumphouses, milkhouses and root cellars. The Tomahawk Spring spring house at Tomahawk, West Virginia, was listed on the National Register of Historic Places in 1994. Gallery See also Ice house (building) Smokehouse Windcatcher References External links Cooling technology Food preservation House types Semi-subterranean structures Springs (hydrology) Vernacular architecture
Spring house
[ "Environmental_science" ]
206
[ "Hydrology", "Springs (hydrology)" ]
8,680,227
https://en.wikipedia.org/wiki/B%C3%BCrgi%E2%80%93Dunitz%20angle
The Bürgi–Dunitz angle (BD angle) is one of two angles that fully define the geometry of "attack" (approach via collision) of a nucleophile on a trigonal unsaturated center in a molecule, originally the carbonyl center in an organic ketone, but now extending to aldehyde, ester, and amide carbonyls, and to alkenes (olefins) as well. The angle was named after crystallographers Hans-Beat Bürgi and Jack D. Dunitz, its first senior investigators. Practically speaking, the Bürgi–Dunitz and Flippin–Lodge angles were central to the development of understanding of chiral chemical synthesis, and specifically of the phenomenon of asymmetric induction during nucleophilic attack at hindered carbonyl centers (see the Cram–Felkin–Anh and Nguyen models). Additionally, the stereoelectronic principles that underlie nucleophiles adopting a prescribed range of Bürgi–Dunitz angles may contribute to the conformational stability of proteins and are invoked to explain the stability of particular conformations of molecules in one hypothesis of a chemical origin of life. Definition In the attack (addition) of a nucleophile (Nu) on a carbonyl, the BD angle is defined as the Nu-C-O bond angle. The BD angle adopted during an approach by a nucleophile to a trigonal unsaturated electrophile depends primarily on the molecular orbital (MO) shapes and occupancies of the unsaturated carbon center (e.g., carbonyl center), and only secondarily on the molecular orbitals of the nucleophile. Of the two angles which define the geometry of nucleophilic "attack", the second describes the "offset" of the nucleophile's approach toward one of the two substituents attached to the carbonyl carbon or other electrophilic center, and was named the Flippin–Lodge angle (FL angle) by Clayton Heathcock after his contributing collaborators Lee A. Flippin and Eric P. Lodge. These angles are generally construed to mean the angle measured or calculated for a given system, and not the historically observed value range for the original Bürgi–Dunitz aminoketones, or an idealized value computed for a particular system (such as hydride addition to formaldehyde, image at left). That is, the BD and FL angles of the hydride-formaldehyde system produce a given pair of values, while the angles observed for other systems may vary relative to this simplest of chemical systems. Measurement The original Bürgi-Dunitz measurements were of a series of intramolecular amine-ketone carbonyl interactions, in crystals of compounds bearing both functionalities—e.g., methadone and protopine. These gave a narrow range of BD angle values (105 ± 5°); corresponding computations—molecular orbital calculations of the SCF-LCAO-type—describing the approach of the s-orbital of a hydride anion (H−) to the pi-system of the simplest aldehyde, formaldehyde (H2C=O), gave a BD angle value of 107°. Hence, Bürgi, Dunitz, and thereafter many others noted that the crystallographic measurements of the aminoketones and the computational estimate for the simplest nucleophile-electrophile system were quite close to a theoretical ideal, the tetrahedral angle (internal angles of a tetrahedron, 109.5°), and so consistent with a geometry understood to be important to developing transition states in nucleophilic attacks at trigonal centers.
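Since the BD angle is simply the Nu–C–O bond angle, it can be computed directly from atomic coordinates by the usual vector method. The sketch below (in Python, an assumption, as the article contains no code) does exactly that; the coordinates in the usage line are made-up values chosen to give an approach near 107°, not data from any crystal structure:

```python
import math

def bd_angle(nu, c, o):
    """Burgi-Dunitz angle in degrees: the Nu-C-O angle at the carbonyl carbon."""
    v1 = [a - b for a, b in zip(nu, c)]  # vector from C to the nucleophile Nu
    v2 = [a - b for a, b in zip(o, c)]   # vector from C to the carbonyl oxygen O
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical coordinates in angstroms: C at the origin, O along +x,
# and a nucleophile approaching ~3 A above the carbonyl plane.
print(round(bd_angle((-0.87, 0.0, 2.86), (0.0, 0.0, 0.0), (1.21, 0.0, 0.0)), 1))  # ~106.9
```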
In the structure of methadone (above, left), note the tertiary amine projecting to the lower right, and the carbonyl (CO) group at the center, which engage in an intramolecular interaction in the crystal structure (after rotation around the single bonds connecting them, during the crystallization process). Similarly, in the structure of protopine (above, center), note the tertiary amine at the center of the molecule, part of a ten-membered ring, and the CO group opposite it on the ring; these engage in an intramolecular interaction allowed by changes in the torsion angles of the atoms of the ring. Theory The convergence of observed BD angles can be viewed as arising from the need to maximize overlap between the highest occupied molecular orbital (HOMO) of the nucleophile, and the lowest unoccupied molecular orbital (LUMO) of the unsaturated, trigonal center of the electrophile. (See, in comparison, the related inorganic chemistry concept of the angular overlap model.) In the case of addition to a carbonyl, the HOMO is often a p-type orbital (e.g., on an amine nitrogen or halide anion), and the LUMO is generally understood to be the antibonding π* molecular orbital perpendicular to the plane containing the ketone C=O bond and its substituents (see figure at right above). The BD angle observed for nucleophilic attack is believed to approach the angle that would produce optimal overlap between HOMO and LUMO (based on the principle of the lowering of resulting new molecular orbital energies after such mixing of orbitals of similar energy and symmetry from the participating reactants). At the same time, the nucleophile avoids overlap with other orbitals of the electrophilic group that are unfavorable for bond formation (not apparent in image at right, above, because of the simplicity of the R=R'=H in formaldehyde). Complications Electrostatic and Van der Waals interactions To understand cases of real chemical reactions, the HOMO-LUMO-centered view is modified by understanding of further complex, electrophile-specific repulsive and attractive electrostatic and Van der Waals interactions that alter the altitudinal BD angle, and bias the azimuthal Flippin-Lodge angle toward one substituent or the other (see graphic above). Linear and rotational dynamics BD angle theory was developed based on "frozen" interactions in crystals where the impacts of dynamics at play in the system (e.g., easily changed torsional angles) may be negligible. However, most reaction chemistry of general interest and utility takes place via collisions of molecules rapidly tumbling in solution; accordingly, the dynamics of each situation are sampled effectively, and so are reflected in the outcomes of the reactions. Constrained environments in enzymes and nanomaterials Moreover, in constrained reaction environments such as in enzyme and nanomaterial binding sites, early evidence suggests that BD angles for reactivity can be quite distinct, since reactivity concepts assuming orbital overlaps during random collision are not directly applicable. For instance, the BD value determined for enzymatic cleavage of an amide by a serine protease (subtilisin) was 88°, quite distinct from the hydride-formaldehyde value of 107°; moreover, compilation of literature crystallographic BD angle values for the same reaction mediated by different protein catalysts clustered at 89 ± 7° (i.e., only slightly offset from directly above or below the carbonyl carbon).
At the same time, the subtilisin FL value was 8°, and FL angle values from the careful compilation clustered at 4 ± 6° (i.e., only slightly offset from directly behind the carbonyl; see the Flippin–Lodge angle article). See also Flippin–Lodge angle References Physical organic chemistry
Bürgi–Dunitz angle
[ "Chemistry" ]
1,615
[ "Physical organic chemistry" ]
16,001,928
https://en.wikipedia.org/wiki/Concrete%20step%20barrier
A concrete step barrier is a safety barrier used on the central reservation of motorways and dual carriageways as an alternative to the standard steel crash barrier. United Kingdom With effect from January 2005 and based primarily on safety grounds, the UK National Highways policy is that all new motorway schemes are to use high-containment concrete barriers in the central reserve. All existing motorways will introduce concrete barriers into the central reserve as part of ongoing upgrades and through replacement when these systems have reached the end of their useful life. This change of policy applies only to barriers in the central reserve of high-speed roads and not to verge-side barriers. Other routes will continue to use steel barriers. Government policy ensures that all future central-reserve crash barriers on UK motorways will be made of concrete unless there are overriding circumstances. Ireland The use of the concrete step barrier has become widespread in Ireland. As of 2017, of motorways use this barrier. Some motorways such as parts of the M8 and M6 have had the crash barrier since their original construction. Other motorways had it installed as part of their upgrade (M50). Hong Kong Steel guard rails (since the 2000s, in thrie-beam form) and concrete profile barriers are the barrier systems used on expressways in the territory. The beam barrier designs are based on American and Australian designs, and the concrete profiles on European standards. Degradation processes Various types of aggregate may undergo chemical reactions in concrete, leading to damaging expansive phenomena. The most common are those containing reactive silica, which can react with the alkalis in concrete. Amorphous silica is one of the most reactive mineral components in some aggregates, e.g. those containing opal, chalcedony, or flint. Following the alkali-silica reaction (ASR), an expansive gel can form that creates extensive cracks and damage to structural members. See also Jersey barrier Constant-slope barrier F-shape barrier Road-traffic safety Traffic barrier References Concrete Road safety Protective barriers
Concrete step barrier
[ "Engineering" ]
400
[ "Structural engineering", "Concrete" ]
16,002,406
https://en.wikipedia.org/wiki/The%20Complete%20Book%20of%20Outer%20Space
The Complete Book of Outer Space is a 1953 collection of essays about space exploration edited by Jeffrey Logan. It first appeared as a magazine, published by Maco Magazine Corp. The first book publication was by Gnome Press in 1953 in an edition of 3,000 copies. Contents Preface, by Kenneth MacLeish "A Preview of the Future: Introduction", by Jeffrey Logan "Development of the Space Ship", by Willy Ley "Station in Space", by Wernher von Braun "Space Medicine", by Heinz Haber "Space Suits", by Donald H. Menzel "The High Altitude Program", by Robert P. Haviland "History of the Rocket Engine", by James H. Wyld "Legal Aspects of Space Travel", by Oscar Schachter "Exploitation of the Moon", by Hugo Gernsback "Life Beyond the Earth", by Willy Ley "Interstellar Flight", by Leslie R. Shepard "The Spaceship in Science Fiction", by Jeffrey Logan "Plea for a Coordinated Space Program", by Wernher von Braun "The Flying Saucer Myth", by Jeffrey Logan "The Panel of Experts" "Chart of the Moon Voyage" "Chart of the Voyage to Mars" "Timetables and Weights" "A Space Travel Dictionary" Reception Groff Conklin of Galaxy Science Fiction said in 1954 that The Complete Book of Outer Space was "a fascinating collection" of pictures and text "of varying value ... but generally an exciting one". References Sources 1953 books Spaceflight books Gnome Press books
The Complete Book of Outer Space
[ "Astronomy" ]
320
[ "Outer space", "Astronomy book stubs", "Astronomy stubs", "Outer space stubs" ]
16,002,442
https://en.wikipedia.org/wiki/DNA%20adenine%20methyltransferase%20identification
DNA adenine methyltransferase identification, often abbreviated DamID, is a molecular biology protocol used to map the binding sites of DNA- and chromatin-binding proteins in eukaryotes. DamID identifies binding sites by expressing the proposed DNA-binding protein as a fusion protein with DNA methyltransferase. Binding of the protein of interest to DNA localizes the methyltransferase in the region of the binding site. Adenine methylation does not occur naturally in eukaryotes and therefore adenine methylation in any region can be concluded to have been caused by the fusion protein, implying the region is located near a binding site. DamID is an alternate method to ChIP-on-chip or ChIP-seq. Description Principle N6-methyladenine (m6A) is the product of the addition of a methyl group (CH3) at position 6 of the adenine. This modified nucleotide is absent from the vast majority of eukaryotes, with the exception of C. elegans, but is widespread in bacterial genomes, as part of the restriction modification or DNA repair systems. In Escherichia coli, adenine methylation is catalyzed by the adenine methyltransferase Dam (DNA adenine methyltransferase), which catalyses adenine methylation exclusively in the palindromic sequence GATC. Ectopic expression of Dam in eukaryotic cells leads to methylation of adenine in GATC sequences without any other noticeable side effect. Based on this, DamID consists of fusing Dam to a protein of interest (usually a protein that interacts with DNA, such as a transcription factor) or a chromatin component. The protein of interest thus targets Dam to its cognate in vivo binding site, resulting in the methylation of neighboring GATCs. The presence of m6A, coinciding with the binding sites of the proteins of interest, is revealed by methyl PCR. Methyl PCR In this assay the genome is digested by DpnI, which cuts only methylated GATCs. Double-stranded adapters with a known sequence are then ligated to the ends generated by DpnI. Ligation products are then digested by DpnII. This enzyme cuts non-methylated GATCs, ensuring that only fragments flanked by consecutive methylated GATCs are amplified in the subsequent PCR. A PCR with primers matching the adaptors is then carried out, leading to the specific amplification of genomic fragments flanked by methylated GATCs. Specificities of DamID versus chromatin immunoprecipitation Chromatin immunoprecipitation (ChIP) is an alternative method to assay protein binding at specific loci of the genome. Unlike ChIP, DamID does not require a specific antibody against the protein of interest. On the one hand, this allows the mapping of proteins for which no such antibody is available. On the other hand, this makes it impossible to specifically map posttranslationally modified proteins. Another fundamental difference is that ChIP assays where the protein of interest is at a given time, whereas DamID assays where it has been. The reason is that m6A stays in the DNA after the Dam fusion protein goes away. For proteins that are either bound or unbound on their target sites this does not change the big picture. However, this can lead to strong differences in the case of proteins that slide along the DNA (e.g. RNA polymerase). Known biases and technical issues Plasmid methylation bias Depending on how the experiment is carried out, DamID can be subject to plasmid methylation biases. Because plasmids are usually amplified in E. coli where Dam is naturally expressed, they are methylated on every GATC.
In transient transfection experiments, the DNA of those plasmids is recovered along with the DNA of the transfected cells, meaning that fragments of the plasmid are amplified in the methyl PCR. Every sequence of the genome that shares homology or identity with the plasmid may thus appear to be bound by the protein of interest. In particular, this is true of the open reading frame of the protein of interest, which is present in both the plasmid and the genome. In microarray experiments, this bias can be used to ensure that the proper material was hybridized. In stable cell lines or fully transgenic animals, this bias is not observed as no plasmid DNA is recovered. Apoptosis Apoptotic cells degrade their DNA in a characteristic nucleosome ladder pattern. This generates DNA fragments that can be ligated and amplified during the DamID procedure (van Steensel laboratory, unpublished observations). The influence of these nucleosomal fragments on the binding profile of a protein is not known. Resolution The resolution of DamID is a function of the availability of GATC sequences in the genome. A protein can only be mapped to within the interval between two consecutive GATC sites. The median spacing between GATC fragments is 205 bp in Drosophila (FlyBase release 5), 260 in mouse (Mm9), and 460 in human (HG19). A modified protocol (DamIP), which combines immunoprecipitation of m6A with a Dam variant with less specific target site recognition, may be used to obtain higher resolution data. Cell-type specific methods A major advantage of DamID over ChIP-seq is that protein binding sites can be profiled in a particular cell type in vivo without requiring the physical separation of a subpopulation of cells. This allows for investigation into developmental or physiological processes in animal models. Targeted DamID The targeted DamID (TaDa) approach uses the phenomenon of ribosome reinitiation to express Dam-fusion proteins at appropriately low levels for DamID (i.e. Dam is non-saturating, thus avoiding toxicity). This construct can be combined with cell-type specific promoters resulting in tissue-specific methylation. This approach can be used to assay transcription factor binding in a cell type of interest or alternatively, Dam can be fused to Pol II subunits to determine binding of RNA polymerase and thus infer cell-specific gene expression. Targeted DamID has been demonstrated in Drosophila and mouse cells. FRT/FLP-out DamID Cell-specific DamID can also be achieved using recombination mediated excision of a transcriptional terminator cassette upstream of the Dam-fusion protein. The terminator cassette is flanked by FRT recombination sites which can be removed when combined with tissue specific expression of FLP recombinase. Upon removal of the cassette, the Dam-fusion is expressed at low levels under the control of a basal promoter. Variants As well as detection of standard protein-DNA interactions, DamID can be used to investigate other aspects of chromatin biology. Split DamID This method can be used to detect co-binding of two factors to the same genomic locus. The Dam methylase may be expressed in two halves which are fused to different proteins of interest. When both proteins bind to the same region of DNA, the Dam enzyme is reconstituted and is able to methylate the surrounding GATC sites. Chromatin accessibility Due to the high activity of the enzyme, expression of untethered Dam results in methylation of all regions of accessible chromatin.
This approach can be used as an alternative to ATAC-seq or DNase-seq. When combined with cell-type specific DamID methods, expression of untethered Dam can be used to identify cell-type specific promoter or enhancer regions. RNA-DNA interactions A DamID variant known as RNA-DamID can be used to detect interactions between RNA molecules and DNA. This method relies on the expression of a Dam-MCP fusion protein which is able to bind to an RNA that has been modified with MS2 stem-loops. Binding of the Dam-fusion protein to the RNA results in detectable methylation at sites of RNA binding to the genome. Long-range regulatory interactions DNA sequences distal to a protein binding site may be brought into physical proximity through looping of chromosomes. For example, such interactions mediate enhancer and promoter function. These interactions can be detected through the action of Dam methylation. If Dam is targeted to a specific known DNA locus, distal sites brought into proximity due to the 3D configuration of the DNA will also be methylated and can be detected as in conventional DamID. Single cell DamID DamID is usually performed on around 10,000 cells (although it has been demonstrated with fewer). This means that the data obtained represents the average binding, or probability of a binding event across that cell population. A DamID protocol for single cells has also been developed and applied to human cells. Single cell approaches can highlight the heterogeneity of chromatin associations between cells. References Further reading External links Frequently asked questions about DamID Genetics techniques Molecular biology
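As a toy illustration of the resolution limit discussed above, where binding can only be localized to the interval between consecutive GATC sites, the following sketch (Python is an assumption; the protocol itself involves no code) scans a sequence for GATC and reports the mean spacing. The random sequence is purely illustrative: for uniform random DNA a given 4-mer occurs on average every 4⁴ = 256 bp, the same order of magnitude as the genomic medians quoted above.

```python
import random

def gatc_spacings(seq):
    """Distances between consecutive GATC sites; DamID can only localize
    a binding event to within one such interval."""
    sites, i = [], seq.find("GATC")
    while i != -1:
        sites.append(i)
        i = seq.find("GATC", i + 1)
    return [b - a for a, b in zip(sites, sites[1:])]

random.seed(0)
toy_genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
gaps = gatc_spacings(toy_genome)
print(sum(gaps) / len(gaps))   # mean spacing, ~256 bp for random DNA
```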
DNA adenine methyltransferase identification
[ "Chemistry", "Engineering", "Biology" ]
1,879
[ "Genetics techniques", "Biochemistry", "Genetic engineering", "Molecular biology" ]
16,003,010
https://en.wikipedia.org/wiki/Mocmex
Mocmex is a trojan that was found in a digital photo frame in February 2008. It was the first serious computer virus on a digital photo frame. The virus was traced back to a group in China. Overview Mocmex collects passwords for online games. The virus is able to recognize and block antivirus protection from more than a hundred security companies and the Windows built-in firewall. Mocmex downloads files from remote locations and hides randomly named files on infected computers. Therefore, the virus is difficult to remove. Furthermore, it spreads to other portable storage devices that are plugged into an infected computer. Industry experts describe the writers of the Trojan Horse as professionals and describe Mocmex as a "nuclear bomb of malware". Protection Though Mocmex can be described as a serious virus, protection is not hard. First of all, updated antivirus programs will recognize Mocmex's signature and quarantine it. Another precaution is to check a digital photo frame for malware on a Macintosh or Linux machine before plugging it into a computer with Windows, or to disable autorun on Windows. Effects A large proportion of digital photo frames are manufactured in China, particularly in Shenzhen. The negative publicity that followed media reports of the Chinese virus is expected to have negative effects on Chinese manufacturers. The Mocmex incident came just a few months after quality problems with toys manufactured in China drew the attention of Western countries, contributing to a low-quality image for Chinese products. References Digital photography Display technology Trojan horses Hacking in the 2000s
Mocmex
[ "Engineering" ]
311
[ "Electronic engineering", "Display technology" ]
16,003,024
https://en.wikipedia.org/wiki/Maxus%20%28rocket%29
Maxus is a sounding rocket that is used in the MAXUS microgravity rocket programme, a joint venture between Swedish Space Corporation and EADS Astrium Space Transportation used by ESA. It is launched from Esrange Space Center in Sweden and provides access to microgravity for up to 14 minutes. Technical characteristics Overall length: 15.5 m Overall mass: 12 400 kg Payload mass: approx. 800 kg Max. velocity: 3500 m/s Max. acceleration: 15 g Propellant mass: 10 042 kg Motor burn time: 63 s Microgravity: up to 14 minutes Apogee: > 700 km Thrust (max. in vacuum): 500 kN Missions See also Texus Maser Rexus/Bexus Esrange References Sounding rockets of Sweden
Maxus (rocket)
[ "Astronomy" ]
161
[ "Rocketry stubs", "Astronomy stubs" ]
16,004,109
https://en.wikipedia.org/wiki/Pavilion%20Lake
Pavilion Lake is a freshwater lake located in Marble Canyon, British Columbia, Canada, home to colonies of freshwater microbialites. Location and local communities It is located between the towns of Lillooet and Cache Creek (29.44 kilometres WNW, as the crow flies, from Cache Creek) and lies along BC Highway 99, 8.85 highway kilometres (northeast then southeast) from Pavilion, British Columbia. There is a small community of lakeshore residences, some recreational and seasonal only, located on the lake's eastern shore adjacent to the highway. The lake is overlooked by the cliffs of Marble Canyon, which is the southern buttress of the Marble Range, and the forests of the northernmost Clear Range. Also overlooking the lake is Chimney Rock (K'lpalekw in Secwepemc'tsn, "Coyote's Penis"), which, like the lake and the canyon, has spiritual significance to the adjoining native communities, the Tskwaylaxw people of Pavilion and the Bonaparte band of Secwepemc at Upper Hat Creek. One of the rancheries and a rodeo and pow-wow ground of the Pavilion Band is located at Marble Canyon's south entrance. The lake area and its foreshore were added to Marble Canyon Provincial Park in order to protect its special scientific and heritage values. Characteristics The lake demonstrates karst hydrology, with underground inflows from Marble Canyon creeks. The lake has generally low biological productivity, and is classified as ultraoligotrophic. It also features a high degree of water clarity. The lake is covered with ice annually, and is dimictic, going through two thermal overturns per year. The lake reaches a maximum depth of 65 meters below the surface. It is also a hard water lake, due to its high mineral content. Microbialites and scientific research Part of a karst formation, the lake is most notable for being home to colonies of microbialites, a type of stromatolite. Colonies of microbialites grow at depths of 5 to 55 meters. Low sedimentation rates may allow for continued development of these colonies. One estimate puts microbialite growth at 0.05 mm per year within the last 1,000 years. Research at Pavilion Lake has suggested that most biological activity in microbialite structures occurs near the surface of these structures. The lake's harsh geochemical environment prevents the development of metazoan grazers, also allowing for microbialite development. The lake has been the subject of astrobiology research by NASA, the Canadian Space Agency, and research institutions from around the world. The research falls under the umbrella of the Pavilion Lake Research Project. The Pavilion Lake Research Project has used the site to help train Canadian Space Agency astronauts. See also List of lakes of British Columbia Fraser Canyon Fountain, British Columbia Marble Canyon Pavilion Indian Band (Tskwalaxw First Nation) References External links Pavilion Lake Research Project website Map and pictures from SFU site description of site from SFU website Location map and images from Nature magazine Article in Astrobiology Magazine Astrobiology Lillooet Country Thompson Country Lakes of British Columbia Unincorporated settlements in British Columbia
Pavilion Lake
[ "Astronomy", "Biology" ]
637
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
16,004,359
https://en.wikipedia.org/wiki/Littlewood%E2%80%93Richardson%20rule
In mathematics, the Littlewood–Richardson rule is a combinatorial description of the coefficients that arise when decomposing a product of two Schur functions as a linear combination of other Schur functions. These coefficients are natural numbers, which the Littlewood–Richardson rule describes as counting certain skew tableaux. They occur in many other mathematical contexts, for instance as multiplicities in the decomposition of tensor products of finite-dimensional representations of general linear groups, or in the decomposition of certain induced representations in the representation theory of the symmetric group, or in the area of algebraic combinatorics dealing with Young tableaux and symmetric polynomials. Littlewood–Richardson coefficients depend on three partitions, say λ, μ and ν, of which λ and μ describe the Schur functions being multiplied, and ν gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c^ν_{λμ} such that s_λ s_μ = Σ_ν c^ν_{λμ} s_ν. The Littlewood–Richardson rule states that c^ν_{λμ} is equal to the number of Littlewood–Richardson tableaux of skew shape ν/λ and of weight μ. History The Littlewood–Richardson rule was first stated by Littlewood and Richardson, but though they claimed it as a theorem they only proved it in some fairly simple special cases. A later author claimed to complete their proof, but his argument had gaps, though it was so obscurely written that these gaps were not noticed for some time, and his argument was reproduced in a later book. Some of the gaps were subsequently filled. The first rigorous proofs of the rule were given four decades after it was found, after the necessary combinatorial theory had been developed in work on the Robinson–Schensted correspondence. There are now several short proofs of the rule, including ones using Bender–Knuth involutions. The Littelmann path model has been used to generalize the Littlewood–Richardson rule to other semisimple Lie groups. The Littlewood–Richardson rule is notorious for the number of errors that appeared prior to its complete, published proof. Several published attempts to prove it are incomplete, and it is particularly difficult to avoid errors when doing hand calculations with it: even the original example contains an error. Littlewood–Richardson tableaux A Littlewood–Richardson tableau is a skew semistandard tableau with the additional property that the sequence obtained by concatenating its reversed rows is a lattice word (or lattice permutation), which means that in every initial part of the sequence any number i occurs at least as often as the number i + 1. Another equivalent (though not quite obviously so) characterization is that the tableau itself, and any tableau obtained from it by removing some number of its leftmost columns, has a weakly decreasing weight. Many other combinatorial notions have been found that turn out to be in bijection with Littlewood–Richardson tableaux, and can therefore also be used to define the Littlewood–Richardson coefficients. Example Consider the case that λ = (2,1), μ = (3,2,1) and ν = (4,3,2). Then the fact that c^ν_{λμ} = 2 can be deduced from the fact that the two tableaux shown at the right are the only two Littlewood–Richardson tableaux of shape ν/λ and weight μ. Indeed, since the last box on the first nonempty line of the skew diagram can only contain an entry 1, the entire first line must be filled with entries 1 (this is true for any Littlewood–Richardson tableau); in the last box of the second row we can only place a 2 by column strictness and the fact that our lattice word cannot contain any larger entry before it contains a 2.
A more geometrical description The condition that the sequence of entries read from the tableau in a somewhat peculiar order form a lattice word can be replaced by a more local and geometrical condition. Since in a semistandard tableau equal entries never occur in the same column, one can number the copies of any value from right to left, which is their order of occurrence in the sequence that should be a lattice word. Call the number so associated to each entry its index, and write an entry i with index j as i[j]. Now if some Littlewood–Richardson tableau contains an entry i > 1 with index j, then that entry i[j] should occur in a row strictly below that of i − 1[j] (which certainly also occurs, since the entry i − 1 occurs at least as often as the entry i does). In fact the entry i[j] should also occur in a column no further to the right than that same entry i − 1[j] (which at first sight appears to be a stricter condition). If the weight of the Littlewood–Richardson tableau is fixed beforehand, then one can form a fixed collection of indexed entries, and if these are placed in a way respecting those geometric restrictions, in addition to those of semistandard tableaux and the condition that indexed copies of the same entries should respect right-to-left ordering of the indices, then the resulting tableaux are guaranteed to be Littlewood–Richardson tableaux. An algorithmic form of the rule The Littlewood–Richardson rule as stated above gives a combinatorial expression for individual Littlewood–Richardson coefficients, but gives no indication of a practical method to enumerate the Littlewood–Richardson tableaux in order to find the values of these coefficients. Indeed, for given λ, μ, ν there is no simple criterion to determine whether any Littlewood–Richardson tableaux of shape ν/λ and of weight μ exist at all (although there are a number of necessary conditions, the simplest of which is |λ| + |μ| = |ν|); therefore it seems inevitable that in some cases one has to go through an elaborate search, only to find that no solutions exist. Nevertheless, the rule leads to a quite efficient procedure to determine the full decomposition of a product of Schur functions, in other words to determine all coefficients c^ν_{λμ} for fixed λ and μ, but varying ν. This fixes the weight μ of the Littlewood–Richardson tableaux to be constructed and the "inner part" λ of their shape, but leaves the "outer part" ν free. Since the weight is known, the set of indexed entries in the geometric description is fixed. Now for successive indexed entries, all possible positions allowed by the geometric restrictions can be tried in a backtracking search. The entries can be tried in increasing order, while among equal entries they can be tried by decreasing index. The latter point is the key to the efficiency of the search procedure: the entry i[j] is then restricted to be in a column to the right of i[j + 1], but no further to the right than i − 1[j] (if such entries are present). This strongly restricts the set of possible positions, but always leaves at least one valid position for i[j]; thus every placement of an entry will give rise to at least one complete Littlewood–Richardson tableau, and the search tree contains no dead ends. A similar method can be used to find all coefficients c^ν_{λμ} for fixed λ and ν, but varying μ.
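For illustration, here is a naive version of that full decomposition, written as a continuation of the previous sketch. It reuses lr_tableaux from above, and the names partitions and schur_product are again hypothetical; rather than the efficient backtracking search just described, it simply enumerates every candidate outer shape ν and counts tableaux by brute force, which is adequate only for very small partitions:

def partitions(n, max_part, max_len):
    # All partitions of n with parts at most max_part and at most max_len parts.
    if n == 0:
        yield ()
        return
    if max_len == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first, max_len - 1):
            yield (first,) + rest

def schur_product(lam, mu):
    # Return {nu: c^nu_{lam,mu}} for s_lam * s_mu, with lam and mu nonempty
    # partitions given as tuples. The crude bounds nu_1 <= lam_1 + |mu| and
    # len(nu) <= len(lam) + len(mu) keep the candidate list finite.
    n = sum(lam) + sum(mu)
    coeffs = {}
    for nu in partitions(n, lam[0] + sum(mu), len(lam) + len(mu)):
        if len(nu) < len(lam) or any(nu[i] < lam[i] for i in range(len(lam))):
            continue  # nu must contain lam
        c = sum(1 for _ in lr_tableaux(nu, lam, mu))
        if c:
            coeffs[nu] = c
    return coeffs

# Reproduces the expansion of S21 S21 quoted in the examples below,
# including the coefficient 2 of S321.
print(schur_product((2, 1), (2, 1)))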
Littlewood–Richardson coefficients The Littlewood–Richardson coefficients c^ν_{λμ} appear in the following interrelated ways: They are the structure constants for the product in the ring of symmetric functions with respect to the basis of Schur functions, s_λ s_μ = Σ_ν c^ν_{λμ} s_ν, or equivalently c^ν_{λμ} is the inner product of s_ν and s_λ s_μ. They express skew Schur functions in terms of Schur functions: s_{ν/λ} = Σ_μ c^ν_{λμ} s_μ. The c^ν_{λμ} appear as intersection numbers on a Grassmannian: σ_λ σ_μ = Σ_ν c^ν_{λμ} σ_ν, where σ_μ is the class of the Schubert variety of a Grassmannian corresponding to μ. c^ν_{λμ} is the number of times the irreducible representation Vλ ⊗ Vμ of the product of symmetric groups S|λ| × S|μ| appears in the restriction of the representation Vν of S|ν| to S|λ| × S|μ|. By Frobenius reciprocity this is also the number of times that Vν occurs in the representation of S|ν| induced from Vλ ⊗ Vμ. The c^ν_{λμ} appear in the decomposition of the tensor product of two Schur modules (irreducible representations of special linear groups). c^ν_{λμ} is the number of standard Young tableaux of shape ν/μ that are jeu de taquin equivalent to some fixed standard Young tableau of shape λ. c^ν_{λμ} is the number of Littlewood–Richardson tableaux of shape ν/λ and of weight μ. c^ν_{λμ} is the number of pictures between μ and ν/λ. Special cases Pieri's formula Pieri's formula, which is the special case of the Littlewood–Richardson rule in the case when one of the partitions has only one part, states that s_μ s_n = Σ_λ s_λ, where s_n is the Schur function of a partition with one row and the sum is over all partitions λ obtained from μ by adding n elements to its Ferrers diagram, no two in the same column. Rectangular partitions If both partitions are rectangular in shape, the sum is also multiplicity free. Fix a, b, p, and q positive integers with p ≥ q. Denote by (a^p) the partition with p parts of length a. The partitions indexing nontrivial components of s_{(a^p)} s_{(b^q)} are those partitions λ of length at most p + q whose parts satisfy explicit inequalities determined by a, b, p and q. Generalizations Reduced Kronecker coefficients of the symmetric group The reduced Kronecker coefficient of the symmetric group is a generalization of c^ν_{λμ} to three arbitrary Young diagrams λ, μ, ν, which is symmetric under permutations of the three diagrams. Skew Schur functions The Littlewood–Richardson rule has been extended to skew Schur functions as follows: the product expands as a sum over all tableaux T on μ/ν such that for all j, the sequence of integers λ + ω(T≥j) is non-increasing, where ω is the weight, each such T contributing the Schur function indexed by λ + ω(T). Newell-Littlewood numbers Newell-Littlewood numbers are defined from Littlewood–Richardson coefficients by the cubic expression N_{μ,ν,λ} = Σ_{α,β,γ} c^μ_{αβ} c^ν_{βγ} c^λ_{γα}. Newell-Littlewood numbers give some of the tensor product multiplicities of finite-dimensional representations of classical Lie groups of the types B, C, D. The non-vanishing condition on Young diagram sizes leads to the constraints that |μ| + |ν| + |λ| must be even and that the triangle inequalities |μ| ≤ |ν| + |λ|, |ν| ≤ |μ| + |λ| and |λ| ≤ |μ| + |ν| must hold. Newell-Littlewood numbers are generalizations of Littlewood–Richardson coefficients in the sense that N_{μ,ν,λ} = c^λ_{μν} whenever |μ| + |ν| = |λ|. Newell-Littlewood numbers that involve a Young diagram with only one row obey a Pieri-type rule: N_{μ,(n),λ} is the number of ways to remove some boxes from μ (no two in the same column), then add boxes (no two in the same column) to make λ, with n boxes removed or added in total. Newell-Littlewood numbers are the structure constants of an associative and commutative algebra whose basis elements are partitions, with the product μ × ν = Σ_λ N_{μ,ν,λ} λ. For example, (1) × (1) = ∅ + (2) + (1,1).
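Continuing the earlier sketches (again with hypothetical helper names, and reusing partitions and schur_product from above), the cubic expression can be evaluated directly for small partitions; the early return below mirrors the parity and triangle-inequality constraints just described:

def lr_coeff(lam, mu, nu):
    # c^nu_{lam,mu}; products with the empty partition are handled directly.
    if not lam:
        return int(tuple(nu) == tuple(mu))
    if not mu:
        return int(tuple(nu) == tuple(lam))
    return schur_product(lam, mu).get(tuple(nu), 0)

def newell_littlewood(mu, nu, lam):
    # N_{mu,nu,lam} via the cubic expression; tiny inputs only.
    a2 = sum(mu) + sum(lam) - sum(nu)  # twice |alpha|
    b2 = sum(mu) + sum(nu) - sum(lam)  # twice |beta|
    g2 = sum(nu) + sum(lam) - sum(mu)  # twice |gamma|
    if a2 % 2 or min(a2, b2, g2) < 0:  # parity and triangle inequalities
        return 0
    return sum(lr_coeff(alpha, beta, mu)
               * lr_coeff(beta, gamma, nu)
               * lr_coeff(gamma, alpha, lam)
               for alpha in partitions(a2 // 2, a2 // 2, a2 // 2)
               for beta in partitions(b2 // 2, b2 // 2, b2 // 2)
               for gamma in partitions(g2 // 2, g2 // 2, g2 // 2))

# The product quoted above: (1) x (1) = {} + (2) + (1,1).
for lam in [(), (2,), (1, 1)]:
    print(lam, newell_littlewood((1,), (1,), lam))  # each prints 1

When |μ| + |ν| = |λ| the size constraints force β = ∅, and the sum collapses to the single Littlewood–Richardson coefficient c^λ_{μν}, which is the sense in which these numbers generalize the c^ν_{λμ}.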
Examples The examples of Littlewood–Richardson coefficients below are given in terms of products of Schur polynomials Sπ, indexed by partitions π, using the formula SλSμ = Σν c^ν_{λμ} Sν. All coefficients with |ν| at most 4 are given by:
S0Sπ = Sπ for any π, where S0 = 1 is the Schur polynomial of the empty partition
S1S1 = S2 + S11
S2S1 = S3 + S21
S11S1 = S111 + S21
S3S1 = S4 + S31
S21S1 = S31 + S22 + S211
S2S2 = S4 + S31 + S22
S2S11 = S31 + S211
S111S1 = S1111 + S211
S11S11 = S1111 + S211 + S22
Most of the coefficients for small partitions are 0 or 1, which happens in particular whenever one of the factors is of the form Sn or S11...1, because of Pieri's formula and its transposed counterpart. The simplest example with a coefficient larger than 1 happens when neither of the factors has this form: S21S21 = S42 + S411 + S33 + 2S321 + S3111 + S222 + S2211. For larger partitions the coefficients become more complicated. For example, S321S321 = S642 +S6411 +S633 +2S6321 +S63111 +S6222 +S62211 +S552 +S5511 +2S543 +4S5421 +2S54111 +3S5331 +3S5322 +4S53211 +S531111 +2S52221 +S522111 +S444 +3S4431 +2S4422 +3S44211 +S441111 +3S4332 +3S43311 +4S43221 +2S432111 +S42222 +S422211 +S3333 +2S33321 +S333111 +S33222 +S332211 with 34 terms and total multiplicity 62, and the largest coefficient is 4. S4321S4321 is a sum of 206 terms with total multiplicity 930, and the largest coefficient is 18. S54321S54321 is a sum of 1433 terms with total multiplicity 26704, and the largest coefficient (that of S86543211) is 176. S654321S654321 is a sum of 10873 terms with total multiplicity 1458444 (so the average value of the coefficients is more than 100, and they can be as large as 2064). The original example given by Littlewood and Richardson in their 1934 paper was (after correcting for 3 tableaux they found but forgot to include in the final sum) S431S221 = S652 + S6511 + S643 + 2S6421 + S64111 + S6331 + S6322 + S63211 + S553 + 2S5521 + S55111 + 2S5431 + 2S5422 + 3S54211 + S541111 + S5332 + S53311 + 2S53221 + S532111 + S4432 + S44311 + 2S44221 + S442111 + S43321 + S43222 + S432211 with 26 terms coming from the following 34 tableaux: ....11 ....11 ....11 ....11 ....11 ....11 ....11 ....11 ....11 ...22 ...22 ...2 ...2 ...2 ...2 ... ... ... .3 . .23 .2 .3 . .22 .2 .2 3 3 2 2 3 23 2 3 3 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ...12 ...12 ...12 ...12 ...2 ...1 ...1 ...2 ...1 .23 .2 .3 . .13 .22 .2 .1 .2 3 2 2 2 3 23 23 2 3 3 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ...2 ...2 ...2 ... ... ... ... ... .1 .3 . .12 .12 .1 .2 .2 2 1 1 23 2 22 13 1 3 2 2 3 3 2 2 3 3 .... .... .... .... .... .... .... .... ...1 ...1 ...1 ...1 ...1 ... ... ... .12 .12 .1 .2 .2 .11 .1 .1 23 2 22 13 1 22 12 12 3 3 2 2 3 23 2 3 3 Calculating skew Schur functions is similar. For example, the 15 Littlewood–Richardson tableaux for ν=5432 and λ=331 are
...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11
...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2  ...2
.11   .11   .11   .12   .11   .12   .13   .13   .23   .13   .13   .12   .12   .23   .23
12    13    22    12    23    13    12    24    14    14    22    23    33    13    34
so S5432/331 = Σμ c^{5432}_{331,μ} Sμ = S52 + S511 + S4111 + S2221 + 2S43 + 2S3211 + 2S322 + 2S331 + 3S421. Notes References Zbl 0019.25102 External links An online program, decomposing products of Schur functions using the Littlewood–Richardson rule Algebraic combinatorics Invariant theory Representation theory Symmetric functions
Littlewood–Richardson rule
[ "Physics", "Mathematics" ]
3,628
[ "Symmetry", "Group actions", "Algebra", "Combinatorics", "Fields of abstract algebra", "Symmetric functions", "Representation theory", "Algebraic combinatorics", "Invariant theory" ]
16,005,213
https://en.wikipedia.org/wiki/Clinical%20Associate%20%28Psychology%29
In Scotland, a Clinical Associate is a shortened designation for a Clinical Associate in Applied Psychology (CAAP). A Clinical Associate is a specialist regulated mental health professional whose duties include assessing, formulating, and treating clients, all within specified ranges of conditions and ages. Clinical Associates work either in primary care adult mental health settings or in a range of settings working with children, young people, and their families. History & Development The role of Clinical Associate was first introduced in 2005. Following consultation with National Health Service Scotland (NHS Scotland), NHS Education for Scotland (NES) commissioned a new master's level training program designed to equip graduate psychologists with the competencies required to deliver circumscribed psychological services. The role of Clinical Associate was developed with the aim of increasing access to primary care psychological services in two main specialities: child and adolescent therapies in primary care and adult therapies in primary care. The University of Stirling and the University of Dundee developed the initial one-year Clinical Associate training scheme sponsored by NHS Education for Scotland (NES). Since inception, Master of Science (MSc) applied training programs in Scotland for Clinical Associates have been developed in two specialties: Psychological Therapy in Primary Care (adults) and Early Interventions for Children and Young People. The master's level training program focused on Psychological Therapy in Primary Care (adults) is intended to equip trainees with the ability to assess and treat adults experiencing more common mental health disorders (such as anxiety and depression) while under clinical supervision. The master's level training program focused on Early Interventions for Children and Young People is intended to train the skills required to assess and treat children, young people, and their families experiencing more common mental health disorders, with a strong emphasis on the early years and early intervention. Although other forms of therapy are also explored, currently the emphasis of existing Clinical Associate in Applied Psychology (CAAP) programs is on a cognitive behavioral therapy (CBT) based approach. Professional Training and Certification to Practice The training required to practice as a Clinical Associate consists of an MSc in either Psychological Therapies in Primary Care (adults) or in Applied Psychology for Children and Young People. Clinical Associate candidates must have a British Psychological Society (BPS) accredited psychology undergraduate degree. It is intended that the undergraduate psychology knowledge base will be further developed during the postgraduate training in one of the Clinical Associate MSc level programs. Currently, it is required that the master's level training programs encompass a theoretical, research, and applied foundation of psychology. The knowledge acquired during the master's level training will then be applied to the needs of specific client groups. Training in the existing Clinical Associate MSc programs generally takes place during a little over one year of full-time study. Once admitted to training, academic (non-clinical) requirements must be demonstrated to master's (MSc) level, including the production of an original piece of research which will contribute to the scientific understanding of a relevant area.
Requirements to practice are similar to those of Clinical Psychology in the UK (within the appropriate speciality) and include demonstration of competence in clinical practice, assessment, diagnosis, formulation, treatment, and research design and evaluation. While enrolled in one of the master's level training programs, Clinical Associates receive a salary from the NHS Health Boards in Scotland. At the end of the training, candidates who have successfully completed the program are awarded a master's degree in Applied Psychology (as a Clinical Associate), often with a specific client group listed as expertise. Clinical Associates have a circumscribed nature of expertise and must often consult with senior colleagues (fully licensed psychology practitioners), under whose support and supervision they practice. After graduation, master's level Clinical Associates are classified as having the expertise to work within a specific client group and with specific psychological disorders. As in all psychological fields, it is expected that Clinical Associates remain aware of ongoing research regarding (at least) their specialized client groups. It is intended that, following completion of one of the master's level training programs in Applied Psychology (as a Clinical Associate), newly appointed Clinical Associates will apply for Clinical Associate in Applied Psychology (CAAP) job posts in the NHS of Scotland. While award of the MSc in Applied Psychology (Clinical Associate) confers eligibility for a Clinical Associate in Applied Psychology post in the NHS of Scotland, there is no guarantee of employment following training. Master's in Applied Psychology (Clinical Associate) vs. Doctorate in Clinical Psychology/Counselling Psychology The Doctorate in Clinical Psychology and that in Counselling Psychology are recognized by the British Psychological Society (BPS) as the threshold levels required to work as a largely autonomous scientist practitioner, while the Master's in Applied Psychology (Clinical Associate) is more limited with regard to the level of academic preparation and range of certified practice. Clinical Associate practitioners operate in a relatively circumscribed manner within a particular specialty, while holders of a Doctorate in Clinical Psychology can work in a range of environments and specialties. In addition, Applied Psychology (Clinical Associate) practitioners are required to work under the supervision of a qualified Clinical Psychologist/Counselling Psychologist, while those with a Doctorate in Clinical Psychology or Counselling Psychology can work "unsupervised", aside from the requirements of professional practice. Similar Roles in the United States Although the label of "Clinical Associate" or "Clinical Associate in Applied Psychology (CAAP)" is unique to Scotland, there are other countries that also allow holders of master's degrees in Clinical Psychology to practice in somewhat limited capacities. In the United States, there are a number of U.S. schools that offer master's degree programs in Clinical Psychology. These programs often take 2 to 3 years to complete post-Bachelor's degree, and the training usually emphasizes theory and treatment over research, quite often with a focus on school or couples and family counseling.
While many graduates of master's-level training programs go on to earn their Doctorate in Clinical Psychology, a large number choose to go directly into practice—often as a Marriage and Family Therapist (MFT), Licensed Psychological Associate (LPA), or Licensed Professional Counselor (LPC). When working under the supervision of a doctoral psychologist, master's graduates can work as Psychological Assistants in clinical, counseling, or research settings. Most master's degree programs do not require an undergraduate major in psychology, but do require coursework in introductory psychology, experimental psychology, and statistics. References External links Scottish Subject Benchmark Statement - Clinical Psychology and Applied Psychology (Clinical Associate) Scotland University of Stirling - MSc in Psychological Therapy in Primary Care (Adult) Program Description Letter regarding "Applied Psychology and Psychologists in NHS Scotland" from the Scottish Government's Director of Health and Social Care Integration University of Dundee MSc in Psychological Therapy in Primary Care (Adult) Program Description University of Edinburgh - MSc in Applied Psychology for Children & Young People Program Description U.S. Bureau of Labor and Statistics, Occupational Outlook Handbook—Psychologists Clinical psychology
Clinical Associate (Psychology)
[ "Biology" ]
1,413
[ "Behavioural sciences", "Behavior", "Clinical psychology" ]
16,005,814
https://en.wikipedia.org/wiki/Nickel%28II%29%20chromate
Nickel(II) chromate (NiCrO4) is an acid-soluble compound, red-brown in color, with high tolerance for heat. It and the ions that compose it have been linked to tumor formation and gene mutation, particularly in wildlife. Synthesis Nickel(II) chromate can be formed in the lab by heating a mixture of chromium(III) oxide and nickel oxide at between 700 °C and 800 °C under oxygen at 1000 atm pressure. It can be produced at 535 °C and 7.3 bar oxygen, but the reaction takes days to complete. If the pressure is too low, or the temperature is too high (above 660 °C), then the nickel chromium spinel NiCr2O4 forms instead. Karin Brandt also claimed to make nickel chromate using a hydrothermal technique. Precipitates of Ni2+ ions with chromate produce a brown substance that contains water. Properties The structure of nickel chromate is the same as that of chromium vanadate, CrVO4. Crystals have an orthorhombic structure with unit cell sizes a = 5.482 Å, b = 8.237 Å, c = 6.147 Å. The cell volume is 277.6 Å3, with four formula units per unit cell. Nickel chromate is dark in colour, unlike most other chromates, which are yellow. The infrared spectrum of nickel chromate shows two sets of absorption bands. The first includes lines at 925, 825, and 800 cm−1 due to Cr–O stretching, and the second has lines at 430, 395, and 365 cm−1 (the last very weak) due to Cr–O rocking and bending, together with a line at 310 cm−1 produced by Ni–O stretching. Reaction When heated at lower oxygen pressure, around 600 °C, nickel chromate decomposes to the nickel chromite spinel, nickel oxide and oxygen: 4 NiCrO4 → 2 NiCr2O4 + 2 NiO + 3 O2 (gas) Related Nickel chromates can also crystallize with ligands. For instance, with 1,10-phenanthroline it can form triclinic olive-colored crystals of [Ni(1,10-phenanthroline)CrO4•3H2O]•H2O, orange crystals of Ni(1,10-phenanthroline)3Cr2O7•3H2O, and yellow powdered Ni(1,10-phenanthroline)3Cr2O7•8H2O. References Nickel compounds Chromates Oxidizing agents
Nickel(II) chromate
[ "Chemistry" ]
540
[ "Chromates", "Redox", "Oxidizing agents", "Salts" ]
16,006,094
https://en.wikipedia.org/wiki/Gabexate
Gabexate is a serine protease inhibitor which is used therapeutically (as gabexate mesilate) in the treatment of pancreatitis and disseminated intravascular coagulation, and as a regional anticoagulant for haemodialysis. References Guanidines Benzoate esters Ethyl esters
Gabexate
[ "Chemistry" ]
76
[ "Guanidines", "Functional groups" ]
16,006,394
https://en.wikipedia.org/wiki/Food%20vs.%20fuel
Food versus fuel is the dilemma regarding the risk of diverting farmland or crops for biofuels production to the detriment of the food supply. The biofuel and food price debate involves wide-ranging views and is a long-standing, controversial one in the literature. There is disagreement about the significance of the issue, what is causing it, and what can or should be done to remedy the situation. This complexity and uncertainty are due to the large number of impacts and feedback loops that can positively or negatively affect the price system. Moreover, the relative strengths of these positive and negative impacts vary in the short and long terms, and involve delayed effects. The academic side of the debate is also blurred by the use of different economic models and competing forms of statistical analysis. Biofuel production has increased in recent years. Some commodities, like maize (corn), sugar cane or vegetable oil, can be used either as food, feed, or to make biofuels. For example, since 2006, a portion of land that was formerly used to grow food crops in the United States is now used to grow corn for biofuels, and a larger share of corn is destined for ethanol production, reaching 25% in 2007. Oil price increases since 2003, the desire to reduce oil dependency, and the need to reduce greenhouse gas emissions from transportation have together increased global demand for biofuels. Increased demand tends to improve financial returns on production, making biofuel more profitable and attractive than food production. This, in turn, leads to greater resource inputs to biofuel production, with correspondingly reduced resources put towards the production of food. Global food security issues may result from such economic disincentives to large-scale agricultural food production. There is, in addition, potential for the destruction of habitats as pressure increases to convert land to agriculture for the production of biofuel. Environmental groups have raised concerns about these potential harms for some years, but the issues drew widespread attention worldwide due to the 2007–2008 world food price crisis. Second-generation biofuels could potentially provide solutions to these negative effects. For example, they may allow for combined farming for food and fuel, and electricity could be generated simultaneously. This could be especially beneficial for developing countries and rural areas in developed countries. Some research suggests that biofuel production can be significantly increased without the need for increased acreage. Biofuels are not a new phenomenon. Before industrialisation, horses were the primary (and probably the secondary) source of power for transportation and physical work, and they required food. The growing of crops for horses (typically oats) to carry out physical work is comparable to the growing of crops for biofuels used in engines. However, the earlier, pre-industrial "biofuel" crops were grown at a smaller scale. Brazil has been considered to have the world's first sustainable biofuels economy, and its government claims Brazil's sugar cane-based ethanol industry did not contribute to the 2008 food crisis. A World Bank policy research working paper released in July 2008 concluded that "large increases in biofuel production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher".
However, a 2010 study also by the World Bank concluded that their previous study may have overestimated the contribution of biofuel production, as "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialization of commodities") may have been partly responsible for the 2007/08 spike." A 2008 independent study by the OECD also found that the impact of biofuels on food prices is much smaller. Food price inflation From 1974 to 2005, real food prices (adjusted for inflation) dropped by 75%. Food commodity prices were relatively stable after reaching lows in 2000 and 2001. Therefore, recent rapid food price increases are considered extraordinary. A World Bank policy research working paper published in July 2008 found that the increase in food commodity prices was led by grains, with sharp price increases in 2005 despite record crops worldwide. From January 2005 until June 2008, maize prices almost tripled, wheat increased 127 percent, and rice rose 170 percent. The increase in grain prices was followed by increases in fat and oil prices in mid-2006. On the other hand, the study found that sugar cane production has increased rapidly, and it was large enough to keep sugar price increases small except for 2005 and early 2006. The paper concluded that biofuels produced from grains, in combination with other related factors, have raised food prices by between 70 and 75 percent, but ethanol produced from sugar cane has not contributed significantly to the recent increase in food commodities prices. An economic assessment report published by the OECD in July 2008 found that "the impact of current biofuel policies on world crop prices, largely through increased demand for cereals and vegetable oils, is significant but should not be overestimated. Current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years." Corn is used to make ethanol, and corn prices went up by a factor of three in less than 3 years (measured in US dollars). Reports in 2007 linked stories as diverse as food riots in Mexico due to rising prices of corn for tortillas and reduced profits at Heineken, the large international brewer, to the increasing use of corn (maize) grown in the US Midwest for ethanol production. (In the case of beer, the barley area was cut in order to increase corn production; barley is not currently used to produce ethanol.) Wheat is up by almost a factor of 3 in three years, while soybeans are up by a factor of 2 in two years (both measured in US dollars). As corn is commonly used as feed for livestock, higher corn prices lead to higher prices for animal source foods. Vegetable oil is used to make biodiesel and has about doubled in price in the last couple of years. The prices are roughly tracking crude oil prices. The 2007–2008 world food price crisis is blamed partly on the increased demand for biofuels. During the same period, rice prices went up by a factor of 3 even though rice is not directly used in biofuels. The USDA expects the 2008/2009 wheat season to be a record crop and 8% higher than the previous year. They also expect rice to have a record crop. Wheat prices have dropped from a high of over $12 per bushel in early 2008 to under $8 per bushel in May. Rice has also dropped from its highs.
According to a 2008 report from the World Bank, the production of biofuel pushed food prices up. These conclusions were supported by the Union of Concerned Scientists in their September 2008 newsletter, in which they remarked that the World Bank analysis "contradicts U.S. Secretary of Agriculture Ed Schafer's assertion that biofuels account for only a small percentage of rising food prices". According to the October Consumer Price Index released on November 19, 2008, food prices continued to rise in October 2008 and were 6.3 percent higher than in October 2007. Since July 2008, fuel costs had dropped by nearly 60 percent. Proposed causes Ethanol fuel as an oxygenate additive The demand for ethanol fuel produced from field corn was spurred in the U.S. by the discovery that methyl tertiary butyl ether (MTBE) was contaminating groundwater. MTBE use as an oxygenate additive was widespread due to mandates under the Clean Air Act Amendments of 1990 to reduce carbon monoxide emissions. As a result, by 2006, MTBE use in gasoline was banned in almost 20 states. There was also concern that widespread and costly litigation might be taken against the U.S. gasoline suppliers, and a 2005 decision refusing legal protection for MTBE opened a new market for ethanol fuel, the primary substitute for MTBE. At a time when corn prices were around US$2 a bushel, corn growers recognized the potential of this new market and delivered accordingly. This demand shift took place at a time when oil prices were already rising significantly. Other factors The fact that food prices went up at the same time fuel prices went up is not surprising and should not be entirely blamed on biofuels. Energy costs are a significant cost for fertilizer, farming, and food distribution. Also, China and other countries have had significant increases in their imports as their economies have grown. Sugar is one of the main feedstocks for ethanol, and its prices are down from two years ago. Part of the food price increase for international food commodities measured in US dollars is due to the dollar being devalued. Protectionism is also an important contributor to price increases. 36% of world grain goes as fodder to feed animals, rather than people. Over long periods of time, population growth and climate change could cause food prices to go up. However, these factors have been around for many years, and food prices have jumped in the last three years, so their contribution to the current problem is minimal. Government regulations of food and fuel markets The governments of France, Germany, the United Kingdom, and the United States have supported biofuels with tax breaks, mandated use, and subsidies. These policies have the unintended consequence of diverting resources from food production and leading to surging food prices and the potential destruction of natural habitats. Fuel for agricultural use often does not carry fuel taxes (farmers get duty-free petrol or diesel fuel). Biofuels may have subsidies and low or no retail fuel taxes, and they compete with retail gasoline and diesel, whose prices include substantial taxes. The net result is that it is possible for a farmer to use more than a gallon of fuel to make a gallon of biofuel and still make a profit. There have been thousands of scholarly papers analyzing how much energy goes into making ethanol from corn and how that compares to the energy in the ethanol.
A World Bank policy research working paper concluded that food prices rose by 35 to 40 percent between 2002 and 2008, of which 70 to 75 percent is attributable to biofuels. The "month-by-month" five-year analysis disputes that increases in global grain consumption and droughts were responsible for significant price increases, reporting that these had only a marginal impact. Instead, the report argues that the EU and US drive for biofuels has had by far the biggest impact on food supply and prices, as increased production of biofuels in the US and EU was supported by subsidies and tariffs on imports, and considers that without these policies, price increases would have been smaller. This research also concluded that Brazil's sugar cane-based ethanol has not raised sugar prices significantly, and recommends removing tariffs on ethanol imports by both the US and EU to allow more efficient producers such as Brazil and other developing countries, including many African countries, to produce ethanol profitably for export to meet the mandates in the EU and the US. An economic assessment published by the OECD in July 2008 agrees with the World Bank report recommendations regarding the negative effects of subsidies and import tariffs, but finds that the estimated impact of biofuels on food prices is much smaller. The OECD study found that trade restrictions, mainly through import tariffs, protect the domestic industry from foreign competitors but impose a cost burden on domestic biofuel users and limit alternative suppliers. The report is also critical of the limited reduction of greenhouse gas emissions achieved from biofuels based on feedstocks used in Europe and North America, finding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8% by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80% compared to fossil fuels. The assessment calls for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs. Oil price increases Oil price increases since 2003 resulted in increased demand for biofuels. Transforming vegetable oil into biodiesel is not very hard or costly, so there is a profitable arbitrage situation if vegetable oil is much cheaper than diesel. Diesel is also made from crude oil, so vegetable oil prices are partially linked to crude oil prices. Farmers can switch to growing vegetable oil crops if those are more profitable than food crops. So all food prices are linked to vegetable oil prices and, in turn, to crude oil prices. A World Bank study concluded that oil prices and a weak dollar explain 25–30% of the total price rise between January 2002 and June 2008. Demand for oil is outstripping the supply of oil, and oil depletion is expected to cause crude oil prices to go up over the next 50 years. Record oil prices are inflating food prices worldwide, including those of foods that have no relation to biofuels, such as rice and fish. In Germany and Canada, it is now much cheaper to heat a house by burning grain than by using fuel derived from crude oil. With oil at $120 per barrel, a saving of a factor of 3 on heating costs is possible. When crude oil was at $25/barrel there was no economic incentive to switch to a grain-fed heater. From 1971 to 1973, around the time of the 1973 oil crisis, corn and wheat prices went up by a factor of 3. There was no significant biofuel usage at that time.
US government policy Some argue that the US government's policy of encouraging ethanol from corn is the main cause of food price increases. US federal government ethanol subsidies total $7 billion per year, or $1.90 per gallon of ethanol. Since ethanol provides only 55% as much energy per gallon as gasoline, the subsidy works out to roughly $1.90 / 0.55 ≈ $3.45 per gasoline-equivalent gallon. Corn is used to feed chickens, cows, and pigs, so higher corn prices lead to higher prices for chicken, beef, pork, milk, cheese, etc. U.S. Senators introduced the BioFuels Security Act in 2006. "It's time for Congress to realize what farmers in America's heartland have known all along: that we have the capacity and ingenuity to decrease our dependence on foreign oil by growing our own fuel", said U.S. Senator for Illinois Barack Obama. Two-thirds of U.S. oil consumption is due to the transportation sector. The Energy Independence and Security Act of 2007 has a significant impact on U.S. energy policy. With the high profitability of growing corn, more and more farmers switch to growing corn until the profitability of other crops matches that of corn. So the ethanol/corn subsidies drive up the prices of other farm crops as well. The US, an important exporter of food stocks, will convert 18% of its grain output to ethanol in 2008. Across the US, 25% of the whole corn crop went to ethanol in 2007. The percentage of corn going to biofuel is expected to go up. Since 2004, a US subsidy has been paid to companies that blend biofuel and regular fuel. The European biofuel subsidy is paid at the point of sale. Companies import biofuel to the US, blend in 1% or even 0.1% regular fuel, and then ship the blended fuel to Europe, where it can get a second subsidy. These blends are called B99 or B99.9 fuel. The practice is called "splash and dash". The imported fuel may even come from Europe to the US, get 0.1% regular fuel added, and then go back to Europe. For B99.9 fuel the US blender gets a subsidy of $0.999 per gallon. The European biodiesel producers have urged the EU to impose punitive duties on these subsidized imports. In 2007, US lawmakers were also looking at closing this loophole. Freeze on first generation biofuel production The prospects for the use of biofuels could change in a relatively dramatic way in 2014. Petroleum trade groups petitioned the EPA in August 2013 to consider a reduction of the renewable biofuel content required in transportation fuels. On November 15, 2013, the United States EPA announced a review of the proportion of ethanol that should be required by regulation. The standards established by the Energy Independence and Security Act of 2007 could be modified significantly. The announcement allows sixty days for the submission of commentary about the proposal. Journalist George Monbiot has argued for a 5-year freeze on biofuels while their impact on poor communities and the environment is assessed. A 2007 UN report on biofuel also raises issues regarding food security and biofuel production. Jean Ziegler, then UN Special Rapporteur on the Right to Food, concluded that while the arguments for biofuels in terms of energy efficiency and climate change are legitimate, the effects for the world's hungry of transforming wheat and maize crops into biofuel are "absolutely catastrophic", and terms such use of arable land a "crime against humanity". Ziegler also calls for a five-year moratorium on biofuel production. Ziegler's proposal for a five-year ban was rejected by the U.N.
Secretary-General Ban Ki-moon, who called for a comprehensive review of the policies on biofuels, and said that "just criticising biofuel may not be a good solution". Food surpluses exist in many developed countries. For example, the UK wheat surplus was around 2 million tonnes in 2005. This surplus alone could produce sufficient bioethanol to replace around 2.5% of the UK's petroleum consumption, without requiring any increase in wheat cultivation or reduction in food supply or exports. However, above a few percent, there would be direct competition between first generation biofuel production and food production. This is one reason why many view second-generation biofuels as increasingly important. Non-food crops for biofuel There are different types of biofuels and different feedstocks for them, and it has been proposed that only non-food crops be used for biofuel. This avoids direct competition for commodities like corn and edible vegetable oil. However, as long as farmers are able to derive a greater profit by switching to biofuels, they will. The law of supply and demand predicts that if fewer farmers are producing food, the price of food will rise. Second-generation biofuels use lignocellulosic raw material such as forest residues (sometimes referred to as brown waste and black liquor from Kraft process or sulfite process pulp mills). Third generation biofuels (biofuel from algae) use non-edible raw material sources that can be used for biodiesel and bioethanol. It has long been recognized that the huge supply of agricultural cellulose, the lignocellulosic material commonly referred to as "Nature's polymer", would be an ideal source of material for biofuels and many other products. Lignocellulose is composed of lignin and monomer sugars such as glucose, fructose, arabinose, galactose, and xylose, and these constituents are very valuable in their own right. To this point in history, some methods have been commonly used to coax "recalcitrant" cellulose to separate or hydrolyse into its lignin and sugar parts: treatment with steam explosion, supercritical water, enzymes, acids, and alkalis. All these methods involve heat or chemicals, are expensive, have lower conversion rates, and produce waste materials. In recent years the rise of "mechanochemistry" has resulted in the use of ball mills and other mill designs to reduce cellulose to a fine powder in the presence of a catalyst, a common bentonite or kaolinite clay, that will hydrolyse the cellulose quickly, and with low energy input, into pure sugar and lignin. Still only at the pilot stage, this promising technology offers the possibility that an agricultural economy might be able to shed its requirement to refine oil for transportation fuels. This would be a major improvement in carbon neutral energy sources and would allow the continued use of internal combustion engines on a large scale. Biodiesel Soybean oil, which represents only half of the domestic raw materials available for biodiesel production in the United States, is one of many raw materials that can be used to produce biodiesel. Non-food crops like Camelina, Jatropha, seashore mallow and mustard, used for biodiesel, can thrive on marginal agricultural land where many trees and crops will not grow, or would produce only slow growth and poor yields. Camelina can be used virtually in its entirety: it can be harvested and crushed for oil, and the remaining parts can be used to produce high-quality omega-3-rich animal feed, fiberboard, and glycerin.
Camelina does not take away from land currently being utilized for food production. Most camelina acres are grown in areas that were previously not utilized for farming. For example, areas that receive limited rainfall and cannot sustain corn or soybeans without the addition of irrigation can grow camelina and add to their profitability. Jatropha cultivation provides benefits for local communities: cultivation and fruit picking by hand is labour-intensive and needs around one person per hectare. In parts of rural India and Africa this provides much-needed jobs: about 200,000 people worldwide now find employment through jatropha. Moreover, villagers often find that they can grow other crops in the shade of the trees. Their communities will avoid importing expensive diesel and there will be some for export too. NBB's Feedstock Development program is addressing production of arid variety crops, algae, waste greases, and other feedstocks on the horizon to expand available material for biodiesel in a sustainable manner. Bioalcohols Cellulosic ethanol is a type of biofuel produced from lignocellulose, a material that comprises much of the mass of plants. Corn stover, switchgrass, miscanthus and woodchip are some of the more popular non-edible cellulosic materials for ethanol production. Commercial investment in such second-generation biofuels began in 2006/2007, and much of this investment went beyond pilot-scale plants. Cellulosic ethanol commercialization is moving forward rapidly. The world's first commercial wood-to-ethanol plant began operation in Japan in 2007, with a capacity of 1.4 million liters/year. The first wood-to-ethanol plant in the United States is planned for 2008 with an initial output of 75 million liters/year. Other second-generation biofuels may be commercialized in the future and compete less with food. Synthetic fuel can be made from coal or biomass and may be commercialized soon. Bioprotein Protein-rich feed for cattle, fish, and poultry can be produced from biogas or natural gas, which is presently used as a fuel source. Cultivating the bacterium Methylococcus capsulatus on natural gas produces protein-rich feed with a tiny land and water footprint. The carbon dioxide produced as a by-product by these plants can also be put to use in the cheaper production of algae oil or spirulina from algaculture, which could in the future displace crude oil's prime position. With these proven technologies, abundant natural gas or biogas availability could support global food security by producing highly nutritious food products without water pollution or greenhouse gas (GHG) emissions. Biofuel from food byproducts and coproducts Biofuels can also be produced from the waste byproducts of food-based agriculture (such as citrus peels or used vegetable oil) to manufacture an environmentally sustainable fuel supply and reduce waste disposal costs. A growing percentage of U.S. biodiesel production is made from waste vegetable oil (recycled restaurant oils) and greases. Collocation of a waste generator with a waste-to-ethanol plant can reduce the waste producer's operating cost, while creating a more profitable ethanol production business. This innovative collocation concept is sometimes called holistic systems engineering. Collocation disposal elimination may be one of the few cost-effective, environmentally sound biofuel strategies, but its scalability is limited by the availability of appropriate waste generation sources.
For example, millions of tons of wet Florida and California citrus peels cannot supply billions of gallons of biofuel. Due to the higher cost of transporting ethanol, it is a local partial solution, at best. Biofuel subsidies and tariffs Some people have claimed that ending subsidies and tariffs would enable sustainable development of a global biofuels market. Taxing biofuel imports while letting petroleum in duty-free does not fit with the goal of encouraging biofuels. Ending mandates, subsidies, and tariffs would end the distortions that current policy is causing. The US ethanol tariff and some US ethanol subsidies are currently set to expire over the next couple of years. The EU is rethinking its biofuels directive due to environmental and social concerns. On 18 January 2008 the UK House of Commons Environmental Audit Committee raised similar concerns, and called for a moratorium on biofuel targets. Germany ended its subsidy of biodiesel on 1 January 2008 and started taxing it. Reduce farmland reserves and set asides To avoid overproduction and to prop up farmgate prices for agricultural commodities, the EU has long had farm subsidy programs to encourage farmers not to produce and to leave productive acres fallow. The 2008 crisis prompted proposals to bring some of the reserve farmland back into use; the area in use actually increased by 0.5%, but today these areas are once again out of use. According to Eurostat, 18 million hectares have been abandoned since 1990, 7.4 million hectares are currently set aside, and the EU has recently decided to set aside another 5–7% in so-called Ecological Focus Areas, corresponding to 10–12 million hectares. In spite of this reduction in land use, the EU is a net exporter of, for example, wheat. The American Bakers Association has proposed reducing the amount of farmland held in the US Conservation Reserve Program, which currently covers tens of millions of acres. In Europe about 8% of the farmland is in set-aside programs. Farmers have proposed freeing up all of this for farming. Two-thirds of the farmers who were on these programs in the UK are not renewing when their terms expire. Sustainable production of biofuels Second-generation biofuels are now being produced from the cellulose in dedicated energy crops (such as perennial grasses), forestry materials, the co-products from food production, and domestic vegetable waste. Advances in the conversion processes will almost certainly improve the sustainability of biofuels, through better efficiencies and reduced environmental impact of producing biofuels, from both existing food crops and from cellulosic sources. Lord Ron Oxburgh suggests that responsible production of biofuels has several advantages: Produced responsibly they are a sustainable energy source that need not divert any land from growing food nor damage the environment; they can also help solve the problems of the waste generated by Western society; and they can create jobs for the poor where previously there were none. Produced irresponsibly, they at best offer no climate benefit and, at worst, have detrimental social and environmental consequences. In other words, biofuels are pretty much like any other product. Far from creating food shortages, responsible production and distribution of biofuels represents the best opportunity for sustainable economic prospects in Africa, Latin America and impoverished Asia. Biofuels offer the prospect of real market competition and oil price moderation.
Crude oil would be trading 15 per cent higher and gasoline would be as much as 25 per cent more expensive, if it were not for biofuels. A healthy supply of alternative energy sources will help to combat gasoline price spikes. Continuation of the status quo An additional policy option is to continue the current trends of government incentives for these types of crops, in order to further evaluate the effects on food prices over a longer period of time, given the relatively recent onset of the biofuel production industry. Additionally, by virtue of the newness of the industry, it can be assumed that, as in other startup industries, techniques and alternatives will be developed quickly if there is sufficient demand for the alternative fuels and biofuels. What could result from the shock to food prices is a very quick move toward some of the non-food biofuels listed above among the other policy alternatives. Impact on developing countries Demand for fuel in rich countries is now competing against demand for food in poor countries. The increase in world grain consumption in 2006 was due to the increase in use for fuel, not human consumption. The grain required to fill a fuel tank with ethanol will feed one person for a year. Several factors combine to make recent grain and oilseed price increases impact poor countries more: Poor people buy more grains (e.g. wheat), and are more exposed to grain price changes. Poor people spend a higher portion of their income on food, so increasing food prices influence them more. Aid organizations which buy food and send it to poor countries see more need when prices go up but are able to buy less food on the same budget. The impact is not all negative. The Food and Agriculture Organization (FAO) recognizes the potential opportunities that the growing biofuel market offers to small farmers and aquaculturers around the world and has recommended small-scale financing to help farmers in poor countries produce local biofuel. On the other hand, poor countries that do substantial farming have increased profits due to biofuels. If vegetable oil prices double, the profit margin could more than double. In the past rich countries have been dumping subsidized grains at below-cost prices into poor countries, hurting the local farming industries. With biofuels using grains, the rich countries no longer have grain surpluses to get rid of. Farming in poor countries is seeing healthier profit margins and expanding. Interviews with local farmers in southern Ecuador provide strong anecdotal evidence that the high price of corn is encouraging the burning of tropical forests in order to grow more. The destruction of tropical forests now accounts for 20% of all greenhouse gas emissions. National Corn Growers Association US government subsidies for making ethanol from corn have been attacked as the main cause of the food vs fuel problem. To defend itself, the National Corn Growers Association has published its views on this issue. It considers the "food vs fuel" argument to be a fallacy that is "fraught with misguided logic, hyperbole and scare tactics." Claims made by the NCGA include: Corn growers have produced and will continue to produce enough corn so that supply and demand meet and there is no shortage. Farmers make their planting decisions based on signals from the marketplace. If demand for corn is high and projected revenue-per-acre is strong relative to other crops, farmers will plant more corn.
In 2007 US farmers planted 19% more acres with corn than they did in 2006. The U.S. has doubled corn yields over the last 40 years and expects to double them again in the next 20 years. With twice as much corn from each acre, corn can be put to new uses without taking food from the hungry or causing deforestation. US consumers buy things like corn flakes where the cost of the corn per box is around 5 cents. Most of the cost is packaging, advertising, shipping, etc. Only about 19% of US retail food prices can be attributed to the actual cost of food inputs like grains and oilseeds. So if the price of a bushel of corn goes up, there may be no noticeable impact on US retail food prices. The US retail food price index has gone up only a few percent per year and is expected to continue to have very small increases. Most of the corn produced in the US is field corn, not sweet corn, and is not digestible by humans in its raw form. Most corn is used for livestock feed and not human food, even the portion that is exported. Only the starch portion of corn kernels is converted to ethanol. The rest (protein, fat, vitamins and minerals) is passed through to the feed co-products or human food ingredients. One of the most significant and immediate benefits of higher grain prices is a dramatic reduction in federal farm support payments. According to the U.S. Department of Agriculture, corn farmers received $8.8 billion in government support in 2006. Because of higher corn prices, payments are expected to drop to $2.1 billion in 2007, a 76 percent reduction. While the EROEI and economics of corn-based ethanol are a bit weak, it paves the way for cellulosic ethanol, which should have much better EROEI and economics. While basic nourishment is clearly important, fundamental societal needs of energy, mobility, and energy security are too. If farmers' crops can help their country in these areas also, it seems right to do so. Since reaching record high prices in June 2008, corn prices fell 50% by October 2008, declining sharply together with other commodities, including oil. According to a Reuters article, "Analysts, including some in the ethanol sector, say ethanol demand adds about 75 cents to $1.00 per bushel to the price of corn, as a rule of thumb. Other analysts say it adds around 20 percent, or just under 80 cents per bushel at current prices. Those estimates hint that $4 per bushel corn might be priced at only $3 without demand for ethanol fuel." These industry sources consider that a speculative bubble in the commodity markets holding positions in corn futures was the main driver behind the observed hike in corn prices affecting food supply. Controversy within the international system The United States and Brazil lead the industrial world in global ethanol production, with Brazil as the world's largest exporter and biofuel industry leader. In 2006 the U.S. produced 18.4 billion liters (4.86 billion gallons), closely followed by Brazil with 16.3 billion liters (4.3 billion gallons), together producing 70% of the world's ethanol market and nearly 90% of ethanol used as fuel. These countries are followed by China with 7.5% and India with 3.7% of the global market share.
Since 2007, the concerns, criticisms and controversy surrounding the food vs biofuels issue have reached the international system, mainly heads of state and inter-governmental organizations (IGOs), such as the United Nations and several of its agencies, particularly the Food and Agriculture Organization (FAO) and the World Food Programme (WFP); the International Monetary Fund; the World Bank; and agencies within the European Union. The 2007 controversy: Ethanol diplomacy in the Americas In March 2007, "ethanol diplomacy" was the focus of President George W. Bush's Latin American tour, in which he and Brazil's president, Luiz Inácio Lula da Silva, were seeking to promote the production and use of sugar cane based ethanol throughout Latin America and the Caribbean. The two countries also agreed to share technology and set international standards for biofuels. The Brazilian sugar cane technology transfer will permit various Central American countries, such as Honduras, Nicaragua, Costa Rica and Panama, several Caribbean countries, and various Andean countries tariff-free trade with the U.S. thanks to existing concessionary trade agreements. Even though the U.S. imposes a US$0.54 tariff on every gallon of imported ethanol, the Caribbean nations and countries in the Central American Free Trade Agreement are exempt from such duties if they produce ethanol from crops grown in their own countries. The expectation is that, using Brazilian technology for refining sugar cane based ethanol, such countries could become exporters to the United States in the short term. In August 2007, Brazil's President toured Mexico and several countries in Central America and the Caribbean to promote Brazilian ethanol technology. This alliance between the U.S. and Brazil generated some negative reactions. While Bush was in São Paulo as part of the 2007 Latin American tour, Venezuela's President Hugo Chavez, from Buenos Aires, dismissed the ethanol plan as "a crazy thing" and accused the U.S. of trying "to substitute the production of foodstuffs for animals and human beings with the production of foodstuffs for vehicles, to sustain the American way of life." Chavez' complaints were quickly followed by then Cuban President Fidel Castro, who wrote that "you will see how many people among the hungry masses of our planet will no longer consume corn." "Or even worse", he continued, "by offering financing to poor countries to produce ethanol from corn or any other kind of food, no tree will be left to defend humanity from climate change." Daniel Ortega, Nicaragua's President, and one of the preferential recipients of Brazil's technical aid, said that "we reject the gibberish of those who applaud Bush's totally absurd proposal, which attacks the food security rights of Latin Americans and Africans, who are major corn consumers"; however, he voiced support for sugar cane based ethanol during Lula's visit to Nicaragua. The 2008 controversy: Global food prices As a result of the international community's concerns regarding the steep increase in food prices, on 14 April 2008, Jean Ziegler, the United Nations Special Rapporteur on the Right to Food, at the Thirtieth Regional Conference of the Food and Agriculture Organization (FAO) in Brasília, called biofuels a "crime against humanity", a claim he had previously made in October 2007, when he called for a 5-year ban on the conversion of land for the production of biofuels.
The previous day, at the annual International Monetary Fund and World Bank Group meeting in Washington, D.C., the World Bank's President, Robert Zoellick, stated that "While many worry about filling their gas tanks, many others around the world are struggling to fill their stomachs. And it's getting more and more difficult every day." Luiz Inácio Lula da Silva gave a strong rebuttal, calling both claims "fallacies resulting from commercial interests", putting the blame instead on U.S. and European agricultural subsidies, and arguing that the problem was restricted to U.S. ethanol produced from maize. He also said that "biofuels aren't the villain that threatens food security". In the middle of this new wave of criticism, Hugo Chavez reaffirmed his opposition and said that he is concerned that "so much U.S.-produced corn could be used to make biofuel, instead of feeding the world's poor", calling the U.S. initiative to boost ethanol production during a world food crisis a "crime". German Chancellor Angela Merkel said the rise in food prices is due to poor agricultural policies and changing eating habits in developing nations, not biofuels as some critics claim. On the other hand, British Prime Minister Gordon Brown called for international action and said Britain had to be "selective" in supporting biofuels, and depending on the UK's assessment of biofuels' impact on world food prices, "we will also push for change in EU biofuels targets". Stavros Dimas, European Commissioner for the Environment, said through a spokeswoman that "there is no question for now of suspending the target fixed for biofuels", though he acknowledged that the EU had underestimated problems caused by biofuels. On 29 April 2008, U.S. President George W. Bush declared during a press conference that "85 percent of the world's food prices are caused by weather, increased demand and energy prices", and recognized that "15 percent has been caused by ethanol". He added that "the high price of gasoline is going to spur more investment in ethanol as an alternative to gasoline. And the truth of the matter is it's in our national interests that our farmers grow energy, as opposed to us purchasing energy from parts of the world that are unstable or may not like us." Regarding the effect of agricultural subsidies on rising food prices, Bush said that "Congress is considering a massive, bloated farm bill that would do little to solve the problem. The bill Congress is now considering would fail to eliminate subsidy payments to multi-millionaire farmers"; he continued, "this is the right time to reform our nation's farm policies by reducing unnecessary subsidies". Just a week before this new wave of international controversy began, U.N. Secretary-General Ban Ki-moon had commented that several U.N. agencies were conducting a comprehensive review of the policy on biofuels, as the world food price crisis might trigger global instability. He said "We need to be concerned about the possibility of taking land or replacing arable land because of these biofuels", then added "While I am very much conscious and aware of these problems, at the same time you need to constantly look at having creative sources of energy, including biofuels. Therefore, at this time, just criticising biofuel may not be a good solution. I would urge we need to address these issues in a comprehensive manner." The Secretary-General rejected Jean Ziegler's proposal for a five-year ban.
A report released by Oxfam in June 2008 criticized the biofuel policies of high-income countries as a solution neither to the climate crisis nor to the oil crisis, while contributing to the food price crisis. The report concluded that, of all biofuels available in the market, Brazilian sugarcane ethanol is not very effective, but it is the most favorable biofuel in the world in terms of cost and greenhouse gas balance. The report discusses some existing problems and potential risks, and urges the Brazilian government to exercise caution to avoid jeopardizing its environmental and social sustainability. The report also says that: "Rich countries spent up to $15 billion last year supporting biofuels while blocking cheaper Brazilian ethanol, which is far less damaging for global food security." A World Bank research report published in July 2008 found that from June 2002 to June 2008 "biofuels and the related consequences of low grain stocks, large land use shifts, speculative activity and export bans" pushed prices up by 70 percent to 75 percent. The study found that higher oil prices and a weak dollar explain 25–30% of the total price rise. The study said that "large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher". The Renewable Fuels Association (RFA) published a rebuttal based on the version leaked before its formal release. The RFA critique considers the analysis highly subjective, arguing that the author "estimates the impact of global food prices from the weak dollar and the direct and indirect effect of high petroleum prices and attributes everything else to biofuels". An economic assessment by the OECD, also published in July 2008, agrees with the World Bank report regarding the negative effects of subsidies and trade restrictions, but found that the impact of biofuels on food prices is much smaller. The OECD study is also critical of the limited reduction of greenhouse gas emissions achieved from biofuels produced in Europe and North America, concluding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8 percent by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80 percent compared to fossil fuels. The assessment calls on governments for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs. The OECD study concluded that "current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years." Another World Bank research report, published in July 2010, found that their previous study may have overestimated the contribution of biofuel production, as the paper concluded that "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialization of commodities") may have been partly responsible for the 2007/08 spike". See also Biodiesel Biofuel Biofuel advocacy groups Bioplastics: impact on food Commodity price shocks Corn stoves Deforestation Distillers grains Ethanol economy Ethanol fuel in Australia Ethanol fuel in Brazil Ethanol fuel in Sweden Ethanol fuel in the Philippines Ethanol fuel in the United States Food security Food vs.
feed Methanol economy Methanol fuel Malthusian catastrophe Oil depletion Vegetable oil economy World Agricultural Supply and Demand Estimates (monthly report) 2007–2008 world food price crisis References Bibliography. See Chapter 7, Food, Farming, and Land Use. External links Avoiding Bioenergy Competition for Food Crops and Land FAO World Food Situation World Food Security: the Challenges of Climate Change and Bioenergy Global Trade and Environmental Impact Study of the EU Biofuels Mandate by the International Food Policy Institute (IFPRI) March 2010 Policy Research Working Paper WPS 5371: Placing the 2006/08 Commodity Price Boom into Perspective, July 2010 Reconciling food security and bioenergy: priorities for action, Global Change Biology Bioenergy Journal, June 2016. Towards Sustainable Production and Use of Resources: Assessing Biofuels, United Nations Environment Programme, October 2009 Biofuels Peak oil Energy and the environment Energy economics Dilemmas Climate change and agriculture Environmental ethics
Food vs. fuel
[ "Environmental_science" ]
9,378
[ "Energy economics", "Environmental social science", "Environmental ethics" ]
16,006,868
https://en.wikipedia.org/wiki/Carbohydrazide
Carbohydrazide is the chemical compound with the formula OC(N2H3)2. It appears as a white solid that is soluble in water, but not in many organic solvents, such as ethanol, ether or benzene. It decomposes upon melting. A number of carbazides are known where one or more N-H groups are replaced by other substituents. They occur widely in drugs, herbicides, plant growth regulators, and dyestuffs. Production Industrially the compound is produced by treatment of urea with hydrazine: OC(NH2)2 + 2 N2H4 → OC(N2H3)2 + 2 NH3 It can also be prepared by reactions of other C1-precursors with hydrazine, such as carbonate esters. It can be prepared from phosgene, but this route cogenerates the hydrazinium salt [N2H5]Cl and results in some diformylation. Carbazic acid is also a suitable precursor: H2NNHCO2H + N2H4 → OC(N2H3)2 + H2O Structure The molecule is nonplanar. All nitrogen centers are at least somewhat pyramidal, indicative of weaker C-N pi-bonding. The C-N and C-O distances are about 1.36 and 1.25 Å, respectively. Industrial uses Oxygen scrubber: carbohydrazide is used to remove oxygen in boiler systems. Oxygen scrubbers prevent corrosion. Precursor to polymers: carbohydrazide can be used as a curing agent for epoxide-type resins. Photography: carbohydrazide is used in the silver halide diffusion process as one of the toners. Carbohydrazide is used to stabilize color developers that produce images of the azo-methine and azine classes. Jet fuel: carbohydrazide can be used as a component in jet fuels, as a large amount of heat is produced when the material is burned. Carbohydrazide has been used to develop ammunition propellants, stabilize soaps, and is used as a reagent in organic synthesis. Salts of carbohydrazide, such as the nitrate, dinitrate and perchlorate, can be used as secondary explosives. Complex salts of carbohydrazide, like bis(carbohydrazide)diperchloratocopper(II) and tris(carbohydrazide)nickel(II) perchlorate, can be used as primary explosives in laser detonators. Hazards Heating carbohydrazide may result in an explosion. Carbohydrazide is harmful if swallowed and irritating to the eyes, respiratory system, and skin. Carbohydrazide is toxic to aquatic organisms. References Cleaning product components Hydrazides
Carbohydrazide
[ "Technology" ]
610
[ "Components", "Cleaning product components" ]
16,007,103
https://en.wikipedia.org/wiki/The%20Oil%20Factor
The Oil Factor, alternatively known as Behind the War on Terror, is a 2004 documentary film written and directed by Gerard Ungerman and Audrey Brohy, narrated by Ed Asner. The documentary analyzes the development of some global events since the beginning of the century (especially after the 9/11 terrorist attacks) from the perspective of oil and oil-abundant regions. The documentary aspires to offer an unconventional point of view on the reasons, aspects and motives of this war and the direction of current US foreign policy. Interviews Interviewees featured in The Oil Factor include: Zbigniew Brzezinski, former US DoD adviser Noam Chomsky, professor at MIT Gary Schmitt, executive director of the Project for a New American Century Paul Bremer, head of the Coalition Provisional Authority in Iraq Karen Kwiatkowski, retired military adviser in The Pentagon Azees Al-Hakim, member of the current Iraqi government Michael C. Ruppert, author of From the Wilderness, studying the peak oil issue (among others) Randa Habib, director of the French press agency in Jordan Gen. Pierre-Marie Gallois, energy-strategy analyst David Mulholland, editor of a magazine focused on military technology Ahmed Rashid, author of The Taliban Locations The filmmakers shot in several locations in Afghanistan, Pakistan and Iraq (besides the United States), interviewing local people and local authorities mostly on the influences and ramifications of Operation Enduring Freedom and President Bush's 'spreading of democracy' in the respective regions. Introductory presuppositions During the film, the spectator is given some presuppositions and axioms that become the basis for the filmmakers' argumentation, such as: Oil is indispensable to every aspect of our modern lives. World food production is 95% dependent on hydro-carbon energy. Demand for oil is and will be growing as new markets (e.g. India and China) gain in strength and local consumers start to demand a higher standard of living. 75% of the world's oil discoveries are located in the Middle East, which is also roughly the share of its oil the United States needs to import. From 2010 on, the economies of some continents or world regions will run out of oil, and that will make them utterly dependent on foreign oil supply. The film says: "The reality is, however, that major conflicts are likely to erupt before any of these players actually runs out of oil." Argumentation On the basis of these presuppositions, the film tries to examine the steps taken in the name of US foreign policy from this point of view. It refers to the motives of the United States in 2000 to build new military bases in the Middle East in order to increase their strategic power. It comes to the conclusion that the best candidate for this was Iraq, the country with the world's second biggest oil reserves, whose military had been weakened by a dozen years of bombing on a weekly basis. According to David Mulholland, political power in the region is determined by the control of oil exports from this country. The Oil Factor is also sceptical about the consequences of all current wars, both for local populations and for American soldiers. Iraq The documentary first analyzes the development of support among Iraqi citizens, which deteriorated approximately a year and a half after the invasion.
It also touches on the issue of the more than 320 tons of American depleted-uranium munitions used since the first Gulf War and their consequences for local inhabitants, as well as the discrepancy between Paul Bremer's pledge to provide truly democratic elections and the choice of pro-American representatives who represent neither the Shiite majority nor Islamists as such. Regarding the Shiites (also present in neighboring Iran, a current economic enemy of the US), it mentions the Iran–Contra affair and Shiite support of Hezbollah as aspects the United States "will not tolerate". Afghanistan The part dedicated to the invasion of Afghanistan (Operation Enduring Freedom), introduced by the film as a "war virtually forgotten by media", begins with the rhetorical question of why the coalition units invaded the "extremely poor and desolated country", and why this military operation, allegedly waged to capture Osama bin Laden and other Al Qaeda members, involves such a vast concentration of American military technology and the building of big permanent military bases, supposing this search and destroy mission would last for decades. The documentary answers this question via Ahmed Rashid: according to his reasoning, the clandestine reason is the coming struggle for dwindling energy sources like oil and natural gas, abundantly present in the Central Asian states of Turkmenistan, Uzbekistan, Kyrgyzstan and Kazakhstan. The players in this struggle are Russia, China and the United States of America. While both China and Russia border at least some of the mentioned countries, the USA does not, and if it wants to import Central Asian oil or gas, it has to establish a pipeline to the Indian Ocean. Such a pipeline would have to run through Pakistan and Afghanistan. While Pakistani authorities would not object to the construction, Afghan Taliban members and local warlords pose a threat to the pipeline's continued existence. American presence in oil-rich regions In the last part of The Oil Factor, the filmmakers turn to coalition (and especially US) soldiers, take a negative view of media campaigns that aid the recruitment of more young American men into the US Army, and argue that clandestine agents are the best-known and proven way to fight terrorism, rather than the large-scale conventional waging of war. Karen Kwiatkowski concludes: "If you draw a map that connects the dots between all of the bases that we have done since the Cold War ended, what you see is American military hegemony - covering 90 per cent of global energy resources." See also Michael Klare Trans-Afghanistan Pipeline Books Chossudovsky, M.: War and Globalisation: The truth about September 11 Brzezinski, Z.: The Grand Chessboard: American Primacy and Its Geo strategic Imperatives, Basic Books, 1998 External links Official site The Oil Factor: Behind the War on Terror on IMDb Oil Factor on Google Video Short review on Democracy Now! 2005 films Films about terrorism Petroleum politics Documentary films about petroleum 2000s English-language films
The Oil Factor
[ "Chemistry" ]
1,264
[ "Petroleum", "Petroleum politics" ]
16,008,235
https://en.wikipedia.org/wiki/Centurion%20Guard
Centurion Guard is a PC hardware and software-based security product, developed by Centurion Technologies. It was first released in 1996. There were several different releases and versions of this product, and many were distributed in computers donated to libraries by the Bill & Melinda Gates Foundation. Operating system compatibility Microsoft Windows 7 Microsoft Windows Vista Microsoft Windows XP References External links Centurion Technologies Computer security Proprietary software
Centurion Guard
[ "Technology" ]
82
[ "Computer security stubs", "Computing stubs" ]
16,008,334
https://en.wikipedia.org/wiki/Election%20Markup%20Language
Election Markup Language (EML) is an XML-based standard to support end-to-end management of election processes. History of EML The OASIS Election and Voter Services Technical Committee, which met for the first time in May 2001, was chartered "To develop a standard for the structured interchange of data among hardware, software, and service providers who engage in any aspect of providing election or voter services to public or private organizations. The services performed for such elections include but are not limited to voter role[sic]/membership maintenance (new voter registration, membership and dues collection, change of address tracking, etc.), citizen/membership credentialing, redistricting, requests for absentee/expatriate ballots, election calendaring, logistics management (polling place management), election notification, ballot delivery and tabulation, election results reporting and demographics." To help establish context for the specifics contained in the XML schemas that make up EML, the Committee also developed a generic end-to-end election process model, initially based on work by election.com, whose CTO chaired the first meetings. Overview of EML Voting is one of the foundations of democratic processes. In addition to providing for the orderly transfer of power, it also cements the citizen's trust and confidence in an organization or government when it operates efficiently. Access to standardized information in the voting process for voters, as well as standardized data interchange, can better facilitate verification and oversight of election procedures. Standards for clear, robust and precisely understood processes help promote confidence in the results. Election data interchange standardization fosters an open marketplace that stimulates cost-effective delivery and adoption of new technology without obsolescing existing investments. However, traditional verification methods and oversight will continue to be vital, and in fact these things become more critical with the use of technology. A healthy democracy requires participation from citizens and continuous independent monitoring of processes, procedures and outcomes. The OASIS EML standard seeks to help facilitate transparency, access and involvement for citizens in the election process. The primary function of an electronic voting system is to capture voter preferences reliably and securely and then report results accurately, while meeting legal requirements for privacy. The process of vote capture occurs between 'a voter' (individual person) and 'an e-voting system' (machine). It is critical that any election system be able to prove that a voter's choice is captured correctly and anonymously, and that the vote is not subject to tampering, manipulation or other sources of undue influence. These universal democratic principles can be summarized as a list of fundamental requirements, or 'six commandments', for electronic voting systems: Keep each voter's choice an inviolable secret. Allow each eligible voter to vote only once, and only for those offices for which he/she is authorized to cast a vote. Do not permit tampering with the voting system's operations, nor allow voters to sell their votes. Report all votes accurately. The voting system shall remain operable throughout each election. Keep an audit trail to detect any breach of [2] and [4] but without violating [1]. EML was developed following these guidelines.
Design of EML The goal of the committee is to develop an Election Markup Language (EML) for end-to-end use within the election process. This is a set of data and message definitions described as a set of XML schemas, covering a wide range of transactions that occur during various phases and stages of the life cycle of an election. To achieve this, the committee decided that it required a common terminology and definition of election processes that could be understood internationally. The committee therefore started by defining the generic election process models described here. These processes are illustrative, covering the vast majority of election types and forming a basis for defining the Election Markup Language itself. EML has been designed such that elections that do not follow this process model should still be able to use EML as a basis for the exchange of election-related messages. EML is focused on defining open, secure, standardised and interoperable interfaces between components of election systems, thereby providing transparent and secure interfaces between various parts of an election system. The scope of election security, integrity and audit included in these interface descriptions and the related discussions is intended to cover security issues pertinent only to the standardised interfaces, not the internal or external security requirements of the various components of election systems. The security requirement for the election system design, implementation or evaluation must be placed within the context of the vulnerabilities and threats analysis of a particular election scenario. As such, the references to security within EML are not to be taken as comprehensive requirements for all election systems in all election scenarios, nor as recommendations of sufficiency of approach when addressing all the security aspects of election system design, implementation or evaluation. In fact, the data security mechanisms described in EML documentation are all optional, enabling compliance with EML without regard for system security at all. It is anticipated that implementers may develop a complementary document for a specific election scenario, which refines the security issues defined in this document and determines their specific strategy and approach by leveraging what EML provides. EML is meant to assist and enable the election process and does not require any changes to traditional methods of conducting elections. The extensibility of EML makes it possible to adjust to various e-democracy processes without affecting the process. Conceptually, EML simply enables the exchange of data between the various end-to-end election stages and processes in a standardized way. The solution outlined in EML is non-proprietary and will work as a template for any election scenario using electronic systems for all or part of the process. The objective is to introduce a uniform and reliable way to allow election systems to interact with each other. The OASIS EML standard is intended to reinforce public confidence in the election process and to facilitate the job of democracy builders by introducing guidelines for the selection or evaluation of future election systems. For more details on the EML approach see the formal OASIS standard specification. Versions of EML EML v7.0 was adopted as an OASIS Committee Specification in October 2011. EML v6.0 was adopted as an OASIS Committee Specification in August 2010. EML 5.0 was adopted as an OASIS Standard in December 2007.
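As a rough illustration of the kind of structured interchange these schemas standardize, the following Python sketch assembles a small election-results message; the element names are simplified stand-ins chosen for readability, not the actual element vocabulary of the EML 5.0 schemas.

# Illustrative only: element names are simplified stand-ins, not the EML 5.0 vocabulary.
import xml.etree.ElementTree as ET

def build_results_message(election_id, counts):
    """Serialize per-candidate vote counts as a small XML document."""
    root = ET.Element("ElectionResults", attrib={"ElectionId": election_id})
    for candidate, votes in counts.items():
        selection = ET.SubElement(root, "Selection")
        ET.SubElement(selection, "Candidate").text = candidate
        ET.SubElement(selection, "ValidVotes").text = str(votes)
    return ET.tostring(root, encoding="unicode")

print(build_results_message("2007-council", {"Alice": 1203, "Bob": 987}))

In a real deployment each such message would be validated against the corresponding EML XSD schema before interchange, which is what gives the format its interoperability guarantees.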
EML-related technologies EML utilizes a number of existing standards: Extensible Markup Language (XML): EML templates are expressed in standardized XML. XML Schema: EML utilizes XSD Schema for defining the information structures supporting the election processes. xNAL: eXtensible Name and Address (xNAL) Specifications and Description Document (v3.0), Customer Information Quality Technical Committee, OASIS, July 2009. UK's APD: Address and Personal Details Fragment v1.1, Technology Policy Team, e-Government Unit, Cabinet Office UK, 1 March 2002. XML-DSig: XML-Signature Syntax and Processing, Donald Eastlake et al., World Wide Web Consortium, 10 June 2008. VoiceXML: Voice Extensible Markup Language (VoiceXML) Version 2.0, Scott McGlashan et al., World Wide Web Consortium, 16 March 2004. EML endorsements and users Ron Rivest, computer scientist and member of the Technical Guidelines Development Committee of the US Election Assistance Commission, was quoted as saying "EML is an example of the kind of consensus-based, publicly available common format that enables the exchange of electronic records between different components in election systems." EML is used by the Australian Electoral Commission for the release of up-to-date counts for federal elections through their "Media Feed". Dutch election law mandates the use of EML, or more specifically the EML_NL dialect, which is based on EML 5.0. See also OASIS XML References External links OASIS Election Markup Language Technical Committee Cover Pages: Executive Overview of EML OASIS wiki resources site for EML Election technology XML-based standards
Election Markup Language
[ "Technology" ]
1,733
[ "Computer standards", "XML-based standards" ]
16,008,582
https://en.wikipedia.org/wiki/OGLE-2006-BLG-109Lc
OGLE-2006-BLG-109Lc is an extrasolar planet approximately 4,925 light-years away in the constellation of Sagittarius. The planet was detected orbiting the star OGLE-2006-BLG-109L in 2008 by a research team using gravitational microlensing. The host star is about 50% the mass of the Sun and the planet is about 90% the mass of Saturn. See also Optical Gravitational Lensing Experiment or OGLE 47 Ursae Majoris b OGLE-2005-BLG-390Lb OGLE-2006-BLG-109Lb References External links Sagittarius (constellation) Exoplanets discovered in 2008 Giant planets Exoplanets detected by microlensing
OGLE-2006-BLG-109Lc
[ "Astronomy" ]
156
[ "Sagittarius (constellation)", "Constellations" ]
16,008,681
https://en.wikipedia.org/wiki/OGLE-2006-BLG-109Lb
OGLE-2006-BLG-109Lb is an extrasolar planet approximately 4,920 light-years away in the constellation of Sagittarius. The planet was detected orbiting the star OGLE-2006-BLG-109L in 2008 by a research team using gravitational microlensing. See also Optical Gravitational Lensing Experiment or OGLE 47 Ursae Majoris b OGLE-2005-BLG-390Lb OGLE-2006-BLG-109Lc References External links Sagittarius (constellation) Exoplanets discovered in 2008 Giant planets Exoplanets detected by microlensing
OGLE-2006-BLG-109Lb
[ "Astronomy" ]
132
[ "Sagittarius (constellation)", "Constellations" ]
16,009,385
https://en.wikipedia.org/wiki/Taub%E2%80%93NUT%20space
The Taub–NUT metric is an exact solution to Einstein's equations. It may be considered a first attempt at finding the metric of a spinning black hole. It is sometimes also used in homogeneous but anisotropic cosmological models formulated in the framework of general relativity. The underlying Taub space was found by Abraham Haskel Taub in 1951, and extended to a larger manifold by Ezra Newman, Theodore Unti and Louis Tamburino in 1963, whose initials form the "NUT" of "Taub–NUT". Description Taub's solution is an empty space solution of Einstein's equations with topology R×S3 and metric (or equivalently line element) ds² = −dt²/U(t) + 4l²U(t)(dψ + cos θ dφ)² + (t² + l²)(dθ² + sin²θ dφ²), where U(t) = (2mt + l² − t²)/(t² + l²) and m and l are positive constants. Taub's metric has coordinate singularities where U(t) = 0, i.e. at t = m ± √(m² + l²), and Newman, Tamburino and Unti showed how to extend the metric across these surfaces. Related work Kerr metric When Roy Kerr developed the Kerr metric for spinning black holes in 1963, he ended up with a four-parameter solution, one of which was the mass and another the angular momentum of the central body. One of the two other parameters was the NUT parameter, which he threw out of his solution because he found it to be nonphysical, since it caused the metric not to be asymptotically flat, while other sources interpret it either as a gravomagnetic monopole parameter of the central mass or as a twisting property of the surrounding spacetime. Misner spacetime A simplified 1+1-dimensional version of the Taub–NUT spacetime is the Misner spacetime. References Notes Exact solutions in general relativity
Taub–NUT space
[ "Physics", "Mathematics" ]
316
[ "Exact solutions in general relativity", "Mathematical objects", "Equations" ]
16,009,751
https://en.wikipedia.org/wiki/Lexus%20Link
Lexus Link, launched in October 2000, is a subscription-based safety and security service from Lexus. It has been offered as a factory-installed option on certain Lexus models (LX, GX, LS, and GS), offering call-center-based telematics services to owners of equipped vehicles in the United States and Canada. The second-generation Lexus Link system utilizes a dedicated cellular phone (dual-mode CDMA/analog), Global Positioning Satellite (GPS) technology and 24-hour live-operator support. In 2009, an expanded system with added functionality, Lexus Enform with Safety Connect, succeeded Lexus Link. History The first generation Lexus Link system was a private-labeled brand of OnStar, operating on Verizon Wireless' cellular network, available as a factory-installed option on the following vehicles in Model Years 2001-04: LS 430 ('01-'04), GX 470 ('03-'04), LX 470 ('03-'04), SC 430 ('03-'04) and RX 330 ('04). The first generation system was analog-only and is no longer operational. The second generation Lexus Link system was launched in October 2005 as a private-label brand of OEM Telematics Services, available as a factory-installed option on MY 2006 and later LX and GX (vehicles produced October 1, 2005 and later) and MY 2007 and later LX, GX, LS and GS vehicles, and uses dual-mode (digital/analog) technology operating on Verizon Wireless' cellular network. The principal difference between the two generations is this move from analog-only to dual-mode cellular operation. Services Lexus Link is offered in the continental U.S. and Alaska. Different service packages are offered to customers. While safety and security are the main purpose of the Lexus Link system, further services include, depending on the service package, driving directions, information assistance, traffic, weather, stock quotes, and Personal Calling. Analog sunset Due to the growth and acceptance of digital cellular systems, many cellular carriers have abandoned analog coverage in favor of digital service. The Federal Communications Commission (FCC) ruled that cellular telephone companies operating in the United States are no longer required to provide analog service after February 2008. As a result, beginning January 1, 2008, Lexus Link service in the U.S. and Canada was only made available to vehicles equipped with dual-mode (analog/digital) equipment. Since the first-generation Lexus Link system uses analog cellular technology and cannot be modified for digital operation, Lexus offered to disable the Lexus Link system and remove the button panel from the vehicle at no cost for owners of model year 2001–2004 vehicles. See also Advanced Automatic Collision Notification BMW Assist Dashtop mobile GPS tracking LoJack MVEDR OnStar G-Book References External links Lexus Link Lexus Global Positioning System Vehicle telematics Vehicle safety technologies Crime prevention Automotive technology tradenames
Lexus Link
[ "Technology", "Engineering" ]
616
[ "Global Positioning System", "Wireless locating", "Aircraft instruments", "Aerospace engineering" ]
16,011,006
https://en.wikipedia.org/wiki/Worst-case%20circuit%20analysis
Worst-case circuit analysis (WCCA or WCA) is a cost-effective means of screening a design to ensure with a high degree of confidence that potential defects and deficiencies are identified and eliminated prior to and during test, production, and delivery. It is a quantitative assessment of equipment performance, accounting for manufacturing, environmental and aging effects. In addition to a circuit analysis, a WCCA often includes stress and derating analysis, failure modes, effects and criticality analysis (FMECA) and reliability prediction (MTBF). The specific objective is to verify that the design is robust enough to provide operation which meets the system performance specification over the design life under worst-case conditions and tolerances (initial, aging, radiation, temperature, etc.). Stress and derating analysis is intended to increase reliability by providing sufficient margin relative to the allowable stress limits. This reduces overstress conditions that may induce failure, and reduces the rate of stress-induced parameter change over life. It determines the maximum applied stress to each component in the system. General information A worst-case circuit analysis should be performed on all circuitry that is safety- or financially critical. Worst-case circuit analysis is an analysis technique which, by accounting for component variability, determines circuit performance under a worst-case scenario (under extreme environmental or operating conditions). Environmental conditions are defined as external stresses applied to each circuit component; these include temperature, humidity and radiation. Operating conditions include external electrical inputs, component quality level, interaction between parts, and drift due to component aging. WCCA helps in the process of building design reliability into hardware for long-term field operation. Electronic piece-parts fail in two distinct modes: out-of-tolerance drift, in which the circuit continues to operate, though with degraded performance, until it ultimately exceeds the circuit's required operating limits; and catastrophic failure. Catastrophic failures may be minimized through MTBF, stress and derating, and FMECA analyses that help to ensure that all components are properly derated, as well as that degradation occurs "gracefully". A WCCA permits one to predict and judge the circuit performance limits under all combinations of part tolerances. There are many reasons to perform a WCCA, several of which bear directly on schedule and cost. Methodology Worst-case analysis is the analysis of a device (or system) that assures that the device meets its performance specifications. These analyses typically account for tolerances due to initial component tolerance, temperature tolerance, age tolerance and environmental exposures (such as radiation for a space device). The beginning-of-life analysis comprises the initial tolerance and provides the data sheet limits for the manufacturing test cycle. The end-of-life analysis provides the additional degradation resulting from the aging and temperature effects on the elements within the device or system. This analysis is usually performed using SPICE, but mathematical models of individual circuits within the device (or system) are needed to determine the sensitivities or the worst-case performance. A computer program is frequently used to total and summarize the results.
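As a minimal illustration of what such a program does, the sketch below applies the two classic tolerance-combination methods, extreme value analysis (EVA) and root-sum-square (RSS), to a resistive voltage divider; all component values and tolerances are illustrative.

# EVA vs RSS tolerance analysis for a voltage divider: Vout = Vin * R2 / (R1 + R2).
from itertools import product

VIN = 10.0
R1, R2 = 10_000.0, 10_000.0   # nominal ohms (illustrative)
TOL = 0.05                    # +/-5% end-of-life tolerance per resistor (illustrative)

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

nominal = vout(R1, R2)

# EVA: evaluate every combination of tolerance extremes.
corners = [vout(R1 * (1 + s1 * TOL), R2 * (1 + s2 * TOL)) for s1, s2 in product((-1, 1), repeat=2)]

# RSS: combine (sensitivity * tolerance) contributions in quadrature, with
# sensitivities to fractional part changes estimated by small perturbations.
eps = 1e-6
sens_r1 = (vout(R1 * (1 + eps), R2) - nominal) / eps
sens_r2 = (vout(R1, R2 * (1 + eps)) - nominal) / eps
rss = ((sens_r1 * TOL) ** 2 + (sens_r2 * TOL) ** 2) ** 0.5

print(f"nominal: {nominal:.3f} V")
print(f"EVA worst case: {min(corners):.3f} .. {max(corners):.3f} V")    # 4.750 .. 5.250
print(f"RSS spread:     {nominal - rss:.3f} .. {nominal + rss:.3f} V")  # 4.823 .. 5.177

EVA is the more conservative bound, since it assumes every part sits at its tolerance extreme simultaneously, while RSS gives a statistically likely spread; a WCCA report typically quotes both.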
A WCCA follows these steps: Generate/obtain a circuit model. Obtain correlation data to validate the model. Determine the sensitivity to each component parameter. Determine the component tolerances. Calculate the variance of each component parameter as sensitivity times absolute tolerance. Use at least two methods of analysis (e.g. hand analysis and SPICE or Saber, or SPICE and measured data) to assure the result. Generate a formal report to convey the information produced. The design is broken down into the appropriate functional sections. A mathematical model of the circuit is developed and the effects of various part/system tolerances are applied. The circuit's EVA and RSS results are determined for beginning-of-life and end-of-life states. These results are used to calculate part stresses and are applied to other analyses. In order for the WCCA to be useful throughout the product's life cycle, it is extremely important that the analysis be documented in a clear and concise format. This will allow for future updates and review by others than the original designer. A compliance matrix is generated that clearly identifies the results and all issues. References External links WCCA: a simple comparison of different methods, DOI: 10.13140/RG.2.2.13287.75689 Mil-Std 785B has a short section on WCCA Why Perform a Worse Case Analysis Aerospace Corporation - Aerospace Corp. Mission Assurance Improvement Workshop: Electrical Design Worst-Case Circuit Analysis: Guidelines and Draft Standard (REV A) (MAIW), TOR-2013-00297 European Cooperation for Space Standardization, see Worst case circuit performance analysis - ECSS-Q-30-01A and ECSS-Q-HB-30-01A, and Dependability ECSS-Q-ST-30C Reliability analysis
Worst-case circuit analysis
[ "Engineering" ]
976
[ "Reliability analysis", "Reliability engineering" ]
16,011,394
https://en.wikipedia.org/wiki/Live%20Communications%20Server%202005
Live Communications Server 2005 (LCS 2005), codenamed Vienna, is the second version of a SIP-based instant messaging and presence server, after Live Communications Server 2003. LCS 2005 was first released in 2005, and was updated with new features with Service Pack 1. LCS 2005 has been superseded by Microsoft Office Communications Server 2007. Overview This product allows SIP clients to exchange IMs and presence using the SIMPLE protocol. The client software also allows two clients to set up audio/video sessions, application sharing, and file transfer sessions. The product was released in two editions, Standard Edition and Enterprise Edition. The Standard Edition uses a Microsoft SQL Server Desktop Engine (MSDE) (included with the product) to store configuration and user data. Enterprise Edition uses a full version of Microsoft SQL Server (purchased separately). New features in this version compared to the 2003 release include the ability to leverage SQL Server and remote user access. Presence is conveyed as levels of availability to communicate. Presence levels supported by LCS: Online Busy Do not disturb Be right back Away In a Meeting These presence levels are controlled manually and automatically. Automatic presence changes can be triggered by the following events: Locking the workstation -> Away Screen saver launches -> Away User does not touch keyboard or mouse for a configured time -> Away User is in full screen mode -> Do not disturb A user is busy, according to the user's calendar on the Microsoft Exchange Server -> In a meeting Dependencies Microsoft Active Directory Storage of server configuration data Authentication Kerberos NTLM PKI MTLS - used for server to server connections TLS - optionally used for client to server connections Microsoft SQL Server Storage of server configuration data User contact list User watcher list Client Software Microsoft Office Communicator 2005 Windows Messenger Server Roles Both editions of the server software can be installed into several distinct roles: Home Server Director Access Proxy Branch Office Proxy Application Proxy Home Server In Standard Edition, this server role is designed to host data for the users. The user's data is stored in a SQL database on the backend server (in Enterprise Edition) or on the Home Server itself (in Standard Edition). The server stores each user's list of contacts and watchers. The contact list is the list of users the end user has added in client software in order to facilitate the sending of IMs and the monitoring of presence. The watcher list is the list of other users that have added this user to their contact list. Director This optional server role is designed to act as a kind of traffic cop when more than one Home Server role is deployed or when remote users are set up to connect to the Home Server. This server does not host any user data, but knows which server each user is homed on, and can therefore redirect or proxy the request. Access Proxy This server role is required to allow remote SIP clients to connect from the internet. This server role would traditionally be deployed in a DMZ network. The server's job is to scan the SIP traffic and only allow communication that the server has been configured to allow to traverse to the internal network. The traffic is sent either directly to the internal Home Server or to a Director that sends the traffic to the appropriate Home Server, based on the user the message is destined to.
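As a rough sketch of the Access Proxy decision just described (illustrative logic only, not Microsoft's implementation; the domain allow-list and server name are hypothetical):

# Illustrative sketch of the Access Proxy routing decision described above.
ALLOWED_DOMAINS = {"example.com"}    # SIP domains the proxy is configured to accept (hypothetical)
DIRECTOR = "director.example.com"    # internal Director that knows each user's Home Server (hypothetical)

def route_inbound(sip_to):
    """Decide where an inbound SIP request from the internet should be forwarded."""
    user, _, domain = sip_to.partition("@")
    if domain not in ALLOWED_DOMAINS:
        raise PermissionError(f"rejecting traffic for unconfigured domain {domain!r}")
    # The Access Proxy holds no user data, so it hands the request to the Director
    # (or directly to a Home Server in single-server deployments).
    return DIRECTOR

print(route_inbound("alice@example.com"))   # -> director.example.com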
Branch Office Proxy This role is used to aggregate connections from branch office clients across a single Transport Layer Security (TLS) encrypted link, allowing many remote clients to share a single communication channel. Application Proxy This server role is designed to allow 3rd-party developers to leverage the Live Communications Server SIP stack with custom code running on top of it. This allows 3rd parties to make a gateway server that could be used to communicate with a PBX or other internal telephony infrastructure without having to create a fully functioning SIP stack. Public IM Connectivity (PIC) This is a feature that allows organizations to IM and share presence information between their existing base of Live Communications Server-enabled users and contacts using public IM services provided by MSN, AOL and Yahoo!. This feature was introduced with LCS 2005 Service Pack 1 in April 2005. External links Live Communications Server 2005 - Technet Live Communications Server 2005 - Technical Reference Live Communications Server 2005 - SDK Instant messaging server software
Live Communications Server 2005
[ "Technology" ]
880
[ "Instant messaging", "Instant messaging server software" ]
16,012,095
https://en.wikipedia.org/wiki/Architecture%20studio
Architecture studio is a class in an undergraduate or graduate professional architecture program (such as a Bachelor of Architecture or Master of Architecture program) in which students receive hands-on instruction in architectural design. Typically, architecture studio classes include distinctive educational techniques, such as "desk crits" (project critiques delivered at a student's desk) and "juries", meetings of the students with several tutors around the students' work, in a multi-layered open discussion in which all students are expected to participate. Typically, the studio is equipped with drafting tables, pin-up boards, and a smart board. References Architectural education
Architecture studio
[ "Engineering" ]
127
[ "Architectural education", "Architecture" ]
16,013,871
https://en.wikipedia.org/wiki/New%20Zealand%20Threat%20Classification%20System
The New Zealand Threat Classification System is used by the Department of Conservation to assess conservation priorities of species in New Zealand. The system was developed because the IUCN Red List, a similar conservation status system, had some shortcomings for the unique requirements of conservation ranking in New Zealand. Plants, animals, and fungi are evaluated, though the last of these has yet to be published. Algae were assessed in 2005 but have not been reassessed since. Other protists have not been evaluated. Categories Species that are ranked are assigned categories: Threatened This category has three major divisions: Nationally Critical - equivalent to the IUCN category of Critically Endangered Nationally Endangered - equivalent to the IUCN category of Endangered Nationally Vulnerable - equivalent to the IUCN category of Vulnerable At Risk This has four categories: Declining Recovering Relict Naturally Uncommon Other categories Introduced and Naturalised These are any species that were deliberately or accidentally introduced into New Zealand. Migrant Migrant species are those that visit New Zealand as part of their life cycle. Vagrant Vagrants are taxa that are rare in New Zealand, have made their own way there, and do not breed successfully. Coloniser These taxa have arrived in New Zealand without human help and reproduce successfully. Data Deficient This category lists taxa for which insufficient information is available to make an assessment of conservation status. Extinct Taxa for which there is no reasonable doubt that no individuals exist are ranked as Extinct; for these lists, only species that have become extinct since 1840 are listed. Not Threatened If taxa fit into none of the other categories they are listed in the Not Threatened category. Qualifiers A series of qualifiers is used to give additional information on the threat classification. See also Conservation in New Zealand References External links Department of Conservation's New Zealand Threat Classification System website NZTCS Database New Zealand Threat Classification System past conservation status lists and manuals Nature conservation in New Zealand Biota by conservation status system
New Zealand Threat Classification System
[ "Biology" ]
370
[ "Biota by conservation status", "Biota by conservation status system" ]
16,014,461
https://en.wikipedia.org/wiki/Multiple%20satellite%20imaging
Multiple satellite imaging is the process of using multiple satellites to gather more information than a single satellite, so that a better estimate of the desired source is possible. Something that cannot be resolved with one telescope might be visible with two or more telescopes. Background Interferometry is the process of combining waves in such a way that they constructively interfere. When two or more independent sources detect a signal at the same given frequency, those signals can be combined and the result is better than each one individually. An overview of Astronomical interferometers and a History of astronomical interferometry can be referenced from their respective pages. The NASA Origins Program was created in the 1990s to ultimately search for the origin of the universe. The theory the Origins Program is based on is that, since light travels at a constant speed until it is absorbed by something, there is still light that was part of the first light ever created traveling about the universe, and ultimately some of that light is coming in the general direction of Earth. So a satellite system capable of collecting light from the beginning of the universe would be able to tell us more about where we came from. There is also the constant search for life in other worlds. A satellite system using the interferometric technologies mentioned above would be able to have a much higher resolution than any of the current deep space imaging systems. A space system also reduces the amount of interference, due to the lack of an atmosphere. Future NASA is currently focused on the Vision for Space Exploration and has reduced funding for scientific unmanned space exploration in favor of human exploration. These budget cuts have slowed multiple satellite imaging development, and relevant scientific missions such as Project Prometheus and the Terrestrial Planet Finder have ended as well, but research continues.
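A toy numerical illustration of the basic benefit, assuming only that the satellites' noise is independent: combining several detections of the same signal yields a better estimate than any single detection. (Interferometric resolution gains come from baseline geometry rather than simple averaging; this sketch shows only the noise-reduction aspect.)

# Toy sketch: averaging a signal seen by N satellites with independent noise
# reduces the residual noise roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)               # the "desired source"

def residual_noise(n_satellites, noise_sigma=1.0):
    observations = signal + rng.normal(0, noise_sigma, (n_satellites, t.size))
    combined = observations.mean(axis=0)          # combine the independent detections
    return np.std(combined - signal)

for n in (1, 4, 16):
    print(f"{n:2d} satellites -> residual noise ~ {residual_noise(n):.3f}")
# Prints roughly 1.0, 0.5, 0.25: a 1/sqrt(N) improvement.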
References Image processing Interferometers Interferometry Space telescopes Satellite imagery
Multiple satellite imaging
[ "Astronomy", "Technology", "Engineering" ]
1,402
[ "Space telescopes", "Interferometers", "Measuring instruments" ]
16,015,428
https://en.wikipedia.org/wiki/Molecular%20conductance
Molecular conductance, or the conductance of a single molecule, is a physical quantity in molecular electronics. Molecular conductance is dependent on the surrounding conditions (e.g. pH, temperature, pressure), as well as the properties of the measuring device. Many experimental techniques have been developed in an attempt to measure this quantity directly, but theorists and experimentalists still face many challenges. Recently, a great deal of progress has been made in the development of reliable conductance-measuring techniques. These techniques can be divided into two categories: molecular film experiments, which measure groups of tens of molecules, and single-molecule-measuring experiments. Molecular film experiments Molecular film experiments generally consist of sandwiching a thin layer of molecules between two electrodes, which are used to measure the conductance through the layer. Two of the most successful implementations of this concept have been the bulk electrode approach and the use of nanoelectrodes. In the bulk electrode approach, a molecular film is typically immobilized onto one electrode and an upper electrode is brought into contact with it, allowing for a measure of current flow as a function of applied bias voltage. The nanoelectrode class of experiments, by creatively utilizing equipment such as atomic force microscope tips and small-radius wires, is able to perform the same sorts of current versus applied bias measurements but on a much smaller number of molecules as compared to the bulk electrode approach. For instance, the tip of an atomic force microscope can be used as a top electrode and, given the nano-scale radius of curvature of the tip, the number of molecules measured is drastically cut. The difficulties encountered in these experiments have come mainly in dealing with such thin layers of molecules, which often results in problems with short-circuiting the electrodes. Single-molecule measurement More recently, single-molecule-measurement experiments have been developed that are giving experimenters a better look at molecular conductance. These fall under the categories of scanning probe techniques, which involve a fixed electrode, and mechanically formed junction techniques. One example of a mechanically formed junction experiment involves using a movable electrode to make contact with and then pull away from an electrode surface coated with a single layer of molecules. As the electrode is removed from the surface, the molecules that had bonded between the two electrodes begin to detach until eventually one molecule is connected. The atomic-level geometry of the tip-electrode contact has an effect on the conductance and can change from one run of the experiment to the next, so a histogram approach is required. Forming a junction in which the precise contact geometry is known has been one of the main difficulties with this approach. Applications An important first step toward the goal of building electronic devices on the molecular level is the ability to measure and control the electric current through an individual molecule. Based on the anticipated continuation of Moore's Law, which is expected to carry the miniaturization of transistors on integrated circuits into the atomic scale within the next 10 to 20 years, this goal of single-molecule-level circuit design is likely to become widespread throughout the semiconductor industry.
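Single-molecule conductances are conventionally reported in units of the conductance quantum G0 = 2e^2/h (a standard convention, though not stated above), and the histogram approach mentioned above can be sketched on synthetic data:

# Sketch of the histogram approach described above, using synthetic data.
import numpy as np

e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s
G0 = 2 * e**2 / h      # conductance quantum, ~7.75e-5 S
print(f"G0 = {G0:.4e} S")

# Pretend each break-junction run yields one single-molecule conductance value;
# run-to-run variation in contact geometry scatters the values (synthetic here),
# so only the histogram peak, not any single run, is meaningful.
rng = np.random.default_rng(1)
runs = 10 ** rng.normal(loc=-3.0, scale=0.3, size=2000)     # in units of G0

counts, edges = np.histogram(np.log10(runs), bins=50)
i = np.argmax(counts)
peak = 10 ** ((edges[i] + edges[i + 1]) / 2)
print(f"histogram peak near {peak:.2e} G0")                 # recovers the ~1e-3 G0 center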
Other applications focus on the insight provided by these experiments in the area of charge transport, which is a recurrent phenomenon in many chemical and biological processes. This sort of insight gives researchers the ability to read the chemical information stored in a single molecule electronically, which can then be used in a wide variety of chemical and biosensor applications. References Molecular electronics
Molecular conductance
[ "Chemistry", "Materials_science" ]
696
[ "Nanotechnology", "Molecular physics", "Molecular electronics" ]
2,175,313
https://en.wikipedia.org/wiki/Motion%20lines
In comics and art more broadly, motion lines (also known as movement lines, action lines, speed lines, or zip ribbons) are the abstract lines that appear behind a moving object or person, parallel to its direction of movement, to make it appear as if it is moving quickly. They are common in Japanese manga and anime, of which Speed Racer is a classic example. Lines depicting wind and the trajectory of missiles appear in art as early as the 16th century. By the 19th century artists were drawing naturally occurring speed lines when showing the passage of an object through water or snow, but it was not until the 1870s that artists like Wilhelm Busch and Adolphe Willette began drawing motion lines to depict the movement of objects through air. The French artist Ernest Montaut is usually credited with the invention of speed lines. He used the technique freely in his posters which were produced at a time when auto racing, speedboat racing and aircraft races were in their infancy. The effect is similar to the blur caused by panning in still photography. Carmine Infantino was one of the best known practitioners of motion lines, particularly in his illustration of Silver Age Flash comics. The use of motion lines in art is similar to the lines showing mathematical vectors, which are used to indicate direction and force. A similar effect is found in long-exposure photography, where a camera can capture lights as they move through time and space, blurred along the direction of motion. See also Nude Descending a Staircase, No. 2, for Marcel Duchamp's use of a painterly technique to the same effect Grawlixes References Comics terminology Linear motion
Motion lines
[ "Physics" ]
327
[ "Physical phenomena", "Motion (physics)", "Linear motion" ]
2,175,469
https://en.wikipedia.org/wiki/Non-line-of-sight%20propagation
Non-line-of-sight (NLOS) radio propagation occurs outside of the typical line-of-sight (LOS) between the transmitter and receiver, such as in ground reflections. Near-line-of-sight (also NLOS) conditions refer to partial obstruction by a physical object present in the innermost Fresnel zone. Obstacles that commonly cause NLOS propagation include buildings, trees, hills, mountains, and, in some cases, high voltage electric power lines. Some of these obstructions reflect certain radio frequencies, while some simply absorb or garble the signals; but, in either case, they limit the use of many types of radio transmissions, especially when the power budget is low. Lower power levels at a receiver reduce the chance of successfully receiving a transmission. Low levels can have at least three basic causes: a low transmit level, for example Wi-Fi power levels; a far-away transmitter, such as a 3G base station or a TV transmitter at the edge of its intended range; and obstruction between the transmitter and the receiver, leaving no clear path. NLOS lowers the effective received power. Near-line-of-sight can usually be dealt with using better antennas, but non-line-of-sight usually requires alternative paths or multipath propagation methods. How to achieve effective NLOS networking has become one of the major questions of modern computer networking. Currently, the most common method for dealing with NLOS conditions on wireless computer networks is simply to circumvent the NLOS condition and place relays at additional locations, sending the content of the radio transmission around the obstructions. Some more advanced NLOS transmission schemes now use multipath signal propagation, bouncing the radio signal off other nearby objects to get to the receiver. Non-line-of-sight (NLOS) is a term often used in radio communications to describe a radio channel or link where there is no visual line of sight (LOS) between the transmitting antenna and the receiving antenna. In this context LOS is taken either as a straight line free of any form of visual obstruction, even if it is actually too distant to see with the unaided human eye, or as a virtual LOS, i.e., a straight line through visually obstructing material that still leaves sufficient transmission for radio waves to be detected. There are many electrical characteristics of the transmission media that affect the radio wave propagation and therefore the quality of operation of a radio channel, if it is possible at all, over an NLOS path. The acronym NLOS has become more popular in the context of wireless local area networks (WLANs) and wireless metropolitan area networks such as WiMAX because the capability of such links to provide a reasonable level of NLOS coverage greatly improves their marketability and versatility in the typical urban environments where they are most frequently used. However, NLOS covers many other subsets of radio communications. The influence of a visual obstruction on an NLOS link may be anything from negligible to complete suppression. An example might apply to a LOS path between a television broadcast antenna and a roof-mounted receiving antenna. If a cloud passed between the antennas the link could actually become NLOS but the quality of the radio channel could be virtually unaffected. If, instead, a large building were constructed in the path, making it NLOS, the channel might be impossible to receive.
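The innermost (first) Fresnel zone mentioned above has a radius given by the standard free-space formula r_n = sqrt(n·λ·d1·d2/(d1 + d2)). The sketch below applies it; the formula is textbook material rather than something derived in this article, and the 2.4 GHz example link is invented for illustration.

```python
import math

def fresnel_radius(n, freq_hz, d1_m, d2_m):
    """Radius of the n-th Fresnel zone at a point d1 from the
    transmitter and d2 from the receiver (free space, d1, d2 >> r)."""
    wavelength = 3.0e8 / freq_hz  # c / f
    return math.sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 10 km link at 2.4 GHz: ~17.7 m should stay clear
print(round(fresnel_radius(1, 2.4e9, 5_000, 5_000), 1))
```

A path is usually treated as near-LOS rather than fully NLOS when most of this first zone is unobstructed.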
Beyond line-of-sight (BLOS) is a related term often used in the military to describe radio communications capabilities that link personnel or systems too distant or too fully obscured by terrain for LOS communications. These radios utilize active repeaters, groundwave propagation, tropospheric scatter links, and ionospheric propagation to extend communication ranges from a few kilometers to a few thousand kilometers. Background Radio waves as plane electromagnetic waves From Maxwell's equations we find that radio waves, as they exist in free space in the far field or Fraunhofer region, behave as plane waves. In plane waves the electric field, magnetic field and direction of propagation are mutually perpendicular. To understand the various mechanisms that allow successful radio communications over NLOS paths we must consider how such plane waves are affected by the object or objects that visually obstruct the otherwise LOS path between the antennas. It is understood that the terms radio far field waves and radio plane waves are interchangeable. What is line-of-sight? By definition, line of sight is the visual line of sight, which is determined by the ability of the average human eye to resolve a distant object. Our eyes are sensitive to light, but optical wavelengths are very short compared to radio wavelengths. Optical wavelengths range from about 400 nanometers (nm) to 700 nm, but radio wavelengths range from approximately 1 millimeter (mm) at 300 GHz to 30 kilometers (km) at 10 kHz. Even the shortest radio wavelength is therefore about 1,400 times longer than the longest optical wavelength. For typical communications frequencies up to about 10 GHz, the difference is on the order of 60,000 times, so it is not always reliable to compare visual obstructions, such as might suggest an NLOS path, with the same obstructions as they might affect a radio propagation path. NLOS links may be either simplex (transmission is in one direction only), duplex (transmission is in both directions simultaneously) or half-duplex (transmission is possible in both directions but not simultaneously). Under normal conditions, all radio links, including NLOS, are reciprocal—which means that the effects of the propagation conditions on the radio channel are identical whether it operates in simplex, duplex, or half-duplex. However, propagation conditions on different frequencies are different, so traditional duplex with different uplink and downlink frequencies is not necessarily reciprocal. Effect of obstruction size In general, the way a plane wave is affected by an obstruction depends on the size of the obstruction relative to its wavelength and the electrical properties of the obstruction. For example, a hot air balloon with multi-wavelength dimensions passing between the transmit and receive antennas could be a significant visual obstruction but is unlikely to affect the NLOS radio propagation much, assuming it is constructed from fabric and full of hot air, both of which are good insulators. Conversely, a metal obstruction of dimensions comparable to a wavelength would cause significant reflections. When considering obstruction size, we assume its electrical properties are the most common intermediate or lossy type.
Broadly, there are three approximate sizes of obstruction in relationship to a wavelength to consider in a possible NLOS path: those that are much smaller than a wavelength; those of the same order as a wavelength; and those much larger than a wavelength. If the obstruction dimensions are much smaller than the wavelength of the incident plane wave, the wave is essentially unaffected. For example, a low frequency (LF) broadcast, also known as long wave, at about 200 kHz has a wavelength of 1500 m and is not significantly affected by most average-size buildings, which are much smaller. If the obstruction dimensions are of the same order as a wavelength, there is a degree of diffraction around the obstruction and possibly some transmission through it. The incident radio wave could be slightly attenuated and there might be some interaction between the diffracted wavefronts. If the obstruction has dimensions of many wavelengths, the effect on the incident plane waves depends heavily on the electrical properties of the material that forms the obstruction. Effect of electrical properties of obstructions The electrical properties of the material forming an obstruction to radio waves could range from a perfect conductor at one extreme to a perfect insulator at the other. Most materials have both conductor and insulator properties. They may be mixed: for example, many NLOS paths result from the LOS path being obstructed by reinforced concrete buildings constructed from concrete and steel. Concrete is quite a good insulator when dry and steel is a good conductor. Alternatively the material may be a homogeneous lossy material. The parameter that describes to what degree a material is a conductor or insulator is known as the loss tangent, $\tan\delta$, given by $\tan\delta = \frac{\sigma}{\omega \varepsilon_0 \varepsilon_r}$ where $\sigma$ is the conductivity of the material in siemens per meter (S/m), $\omega = 2\pi f$ is the angular frequency of the RF plane wave in radians per second (rad/s) and $f$ is its frequency in hertz (Hz), $\varepsilon_0$ is the absolute permittivity of free space in farads per meter (F/m) and $\varepsilon_r$ is the relative permittivity of the material (also known as the dielectric constant) and has no units. Good conductors (poor insulators) If $\tan\delta \gg 1$ the material is a good conductor (poor insulator) and substantially reflects the radio waves that are incident upon it with almost the same power. Therefore, virtually no RF power is absorbed by the material itself and virtually none is transmitted, even if it is very thin. All metals are good conductors and there are of course many examples that cause significant reflections of radio waves in the urban environment, for example bridges, metal clad buildings, storage warehouses, aircraft and electrical power transmission towers or pylons. Good insulators (poor conductors) If $\tan\delta \ll 1$ the material is a good insulator or dielectric (poor conductor) and substantially transmits waves that are incident upon it. Virtually no RF power is absorbed but some can be reflected at its boundaries, depending on its relative permittivity compared to that of free space, which is unity. This uses the concept of intrinsic impedance, which is described below. There are few large physical objects that are also good insulators, with the interesting exception of fresh water icebergs, but these do not usually feature in most urban environments. However, large volumes of gas generally behave as dielectrics. Examples of these are regions of the Earth's atmosphere, which gradually reduce in density at increasing altitudes up to 10 to 20 km.
At greater altitudes, from about 50 km to 200 km, various ionospheric layers also behave like dielectrics and are heavily dependent on the influence of the Sun. Ionospheric layers are not neutral gases but plasmas. Plane waves and intrinsic impedance Even if an obstruction is a perfect insulator, it may have some reflective properties on account of its relative permittivity differing from that of the atmosphere. Electrical materials through which plane waves may propagate have a property called intrinsic impedance ($\eta$) or electromagnetic impedance, which is analogous to the characteristic impedance of a cable in transmission line theory. The intrinsic impedance of a homogeneous material is given by: $\eta = \sqrt{\frac{\mu_0 \mu_r}{\varepsilon_0 \varepsilon_r}}$ where $\mu_0 \mu_r$ is the absolute permeability in henries per meter (H/m), $\mu_0$ is a constant fixed at $4\pi \times 10^{-7}$ H/m and $\mu_r$ is the relative permeability (unitless); $\varepsilon_0 \varepsilon_r$ is the absolute permittivity in farads per meter (F/m), $\varepsilon_0$ is a constant fixed at $8.854 \times 10^{-12}$ F/m and $\varepsilon_r$ is the relative permittivity or dielectric constant (unitless). For free space $\mu_r = 1$ and $\varepsilon_r = 1$, therefore the intrinsic impedance of free space is given by $\eta_0 = \sqrt{\mu_0 / \varepsilon_0}$, which evaluates to approximately 377 $\Omega$. Reflection losses at dielectric boundaries In an analogy of plane wave theory and transmission line theory, the reflection coefficient is a measure of the level of reflection at the boundary when a plane wave passes from one dielectric medium to another. For example, if the intrinsic impedances of the first and second media were $\eta_1$ and $\eta_2$ respectively, the reflection coefficient of medium 2 relative to 1, $\Gamma_{21}$, is given by: $\Gamma_{21} = \frac{\eta_2 - \eta_1}{\eta_2 + \eta_1}$ The logarithmic measure in decibels (dB) of how the transmitted RF signal over the NLOS link is affected by such a reflection is given by $20 \log_{10} \lvert \Gamma_{21} \rvert$. Intermediate materials with finite conductivity Most materials of the type affecting radio wave transmission over NLOS links are intermediate: they are neither good insulators nor good conductors. Radio waves incident upon an obstruction comprising a thin intermediate material are partly reflected at both the incident and exit boundaries and partly absorbed, depending on the thickness. If the obstruction is thick enough the radio wave might be completely absorbed. Because of the absorption, these are often called lossy materials, although the degree of loss is usually extremely variable and often very dependent on the level of moisture present. They are often heterogeneous and comprise a mixture of materials with various degrees of conductor and insulator properties. Such examples are hills, valley sides, mountains (with substantial vegetation) and buildings constructed from stone, brick or concrete but without reinforced steel. The thicker they are, the greater the loss. For example, a wall absorbs much less RF power from a normally incident wave than a building constructed from the same material. Modes Passive random reflections Passive random reflections are achieved when plane waves are subject to one or more reflective paths around an object that makes an otherwise LOS radio path into NLOS. The reflective paths might be caused by various objects that could either be metallic (very good conductors such as a steel bridge or an airplane) or relatively good conductors to plane waves such as large expanses of concrete building sides, walls etc. Sometimes this is considered a brute force method because, on each reflection, the plane wave undergoes a transmission loss that must be compensated for by a higher output power from the transmit antenna compared to if the link had been LOS.
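The loss tangent, intrinsic impedance and reflection-coefficient formulas above are straightforward to evaluate numerically. The sketch below is a minimal illustration; the copper conductivity and the εr = 4 "wall" are textbook-style values chosen for demonstration, not measurements quoted in this article.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
EPS0 = 8.854e-12           # permittivity of free space, F/m

def loss_tangent(sigma, eps_r, freq_hz):
    """tan(delta) = sigma/(omega*eps0*eps_r): >> 1 means a good
    conductor (reflector), << 1 a good insulator (transmits)."""
    return sigma / (2 * math.pi * freq_hz * EPS0 * eps_r)

def intrinsic_impedance(eps_r, mu_r=1.0):
    """eta = sqrt(mu0*mu_r / (eps0*eps_r)) for a low-loss material."""
    return math.sqrt(MU0 * mu_r / (EPS0 * eps_r))

def reflection_db(eps_r1, eps_r2):
    """Reflected level relative to the incident wave, in dB, at
    normal incidence on a dielectric boundary: 20*log10|Gamma21|."""
    eta1, eta2 = intrinsic_impedance(eps_r1), intrinsic_impedance(eps_r2)
    gamma = (eta2 - eta1) / (eta2 + eta1)
    return 20 * math.log10(abs(gamma))

print(f"{loss_tangent(5.8e7, 1.0, 2.4e9):.2e}")  # copper at 2.4 GHz: enormous
print(round(intrinsic_impedance(1.0)))           # free space: ~377 ohms
print(round(reflection_db(1.0, 4.0), 1))         # air -> eps_r = 4: ~ -9.5 dB
```

The last figure shows that even a lossless dielectric boundary reflects an appreciable fraction of the incident field, which is why passive reflections off building faces can support an NLOS link.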
However, the technique is cheap and easy to employ and passive random reflections are widely exploited in urban areas to achieve NLOS links. Communication services that use passive reflections include WiFi, WiMax, WiMAX MIMO, mobile (cellular) communications and terrestrial broadcast to urban areas. Passive repeaters Passive repeaters may be used to achieve NLOS links by deliberately installing a precisely designed reflector at a critical position to provide a path around the obstruction. However, they are unacceptable in most urban environments due to the bulky reflector requiring critical positioning at perhaps an inaccessible location or at one not acceptable to the planning authorities or the owner of the building. Passive reflector NLOS links also incur substantial loss because the received signal is a 'double inverse-square law' function of the transmit signal, with one inverse-square factor for each hop between the transmit antenna, the reflector and the receive antenna. However, they have been successfully used in rural mountainous areas to extend the range of LOS microwave links around mountains, thus creating NLOS links. In such cases the installation of the more usual active repeater was usually not possible due to problems in obtaining a suitable power supply. Active repeaters An active repeater is a powered piece of equipment essentially comprising a receiving antenna, a receiver, a transmitter and a transmitting antenna. If the ends of the NLOS link are at positions A and C, the repeater is located at position B where links A-B and B-C are in fact LOS. The active repeater may simply amplify the received signal and re-transmit it unaltered at either the same frequency or a different frequency. The former case is simpler and cheaper but requires good isolation between the two antennas to avoid feedback; however, it does mean that the ends of the NLOS link at A and C do not need to change the receive frequency from that used for a LOS link. A typical application might be to repeat or re-broadcast signals for vehicles using car radios in tunnels. A repeater that changes frequency would avoid any feedback problems but would be more difficult to design and expensive, and it would require a receiver to change frequency when moving from the LOS to the NLOS zone. A communications satellite is an example of an active repeater that does change frequency. Communications satellites, in most cases, are in geosynchronous orbit at an altitude of 22,300 miles (about 35,900 km) above the Equator. Groundwave propagation Application of the Poynting vector to vertically polarized plane waves at LF (30 kHz to 300 kHz) and VLF (3 kHz to 30 kHz) indicates that a component of the field is propagated a few meters into the surface of the Earth. The propagation is very low loss, and communication over thousands of kilometers on NLOS links is possible. However, such low frequencies by definition (Nyquist–Shannon sampling theorem) support only very low bandwidths, so this type of communication is not widely used. Tropospheric modes Radio waves in the VHF and UHF bands can travel somewhat beyond the visual horizon due to refraction in the troposphere, the bottom layer of the atmosphere below 20 km (12 miles). This is due to changes in the refractive index of air with temperature and pressure. Tropospheric delay is a source of error in radio ranging techniques, such as the Global Positioning System (GPS).
In addition, unusual conditions can sometimes allow propagation at greater distances: Tropospheric refraction The obstruction that creates an NLOS link may be the Earth itself, such as would exist if the other end of the link was beyond the optical horizon. A very useful property of the Earth's atmosphere is that, on average, the density of air gas molecules reduces as the altitude increases up to approximately 30 km. Its relative permittivity or dielectric constant reduces steadily from about 1.00536 at the Earth's surface. To model the change in refractive index with altitude, the atmosphere may be approximated as many thin air layers, each of which has a slightly smaller refractive index than the one below. The trajectory of radio waves progressing through such an atmosphere model is, at each interface, analogous to that of optical beams passing from one optical medium to another as predicted by Snell's law. When the beam passes from a higher to a lower refractive index it tends to be bent or refracted away from the normal at the boundary, according to Snell's law. When the curvature of the Earth is taken into account it is found that, on average, radio waves whose initial trajectory is towards the optical horizon follow a path that does not return to the Earth's surface at the horizon, but slightly beyond it. The distance from the transmit antenna to where the wave does return is approximately equivalent to the optical horizon had the Earth's radius been 4/3 of its actual value. The '4/3 Earth radius' model is a useful rule of thumb for radio communication engineers when designing such an NLOS link. The 4/3 Earth radius rule of thumb is an average for the Earth's atmosphere, assuming it is reasonably homogenised and free of temperature inversion layers or unusual meteorological conditions. NLOS links that exploit atmospheric refraction typically operate at frequencies in the VHF and UHF bands, including FM and TV terrestrial broadcast services. Anomalous propagation The phenomenon described above, that the atmospheric refractive index, relative permittivity or dielectric constant gradually reduces with increasing height, is on account of the reduction of the atmospheric air density with increasing height. Air density is also a function of temperature, which ordinarily also reduces with increasing height. However, these are only average conditions; local meteorological conditions can create phenomena such as temperature inversion layers, where a warm layer of air settles above a cool layer. At the interface between them exists a relatively abrupt change in refractive index, from a smaller value in the cool layer to a larger value in the warm layer. By analogy with the optical Snell's law, this can cause significant reflections of radio waves back towards the Earth's surface, where they are further reflected, thus causing a ducting effect. The result is that radio waves can propagate well beyond their intended service area with less than normal attenuation. This effect is only apparent in the VHF and UHF spectra and is often exploited by amateur radio enthusiasts to achieve communications over abnormally long distances for the frequencies involved. For commercial communication services it cannot be exploited because it is unreliable (the conditions can form and disperse in minutes) and it can cause interference well outside of the normal service area.
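The 4/3 Earth radius rule of thumb described above translates directly into a radio-horizon estimate via the standard approximation d ≈ sqrt(2·k·R·h) for an antenna at height h over an effective Earth radius k·R. The formula is general engineering knowledge rather than something stated in this article, and the 30 m mast below is a made-up example.

```python
import math

EARTH_RADIUS_M = 6.371e6

def horizon_km(antenna_height_m, k=4/3):
    """Distance to the horizon for an antenna at the given height,
    over an effective Earth radius k*R: k=1 gives the geometric
    (optical) horizon, k=4/3 the standard radio horizon."""
    return math.sqrt(2 * k * EARTH_RADIUS_M * antenna_height_m) / 1000

h = 30.0  # antenna height in metres (hypothetical mast)
print(round(horizon_km(h, k=1), 1))  # optical horizon: ~19.6 km
print(round(horizon_km(h), 1))       # radio horizon (4/3 Earth): ~22.6 km
```

The roughly 15% extra range is the average refraction gain the rule of thumb captures.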
Temperature inversion and anomalous propagation can occur at most latitudes, but they are more common in tropical climates than temperate climates, usually associated with high pressure areas (anticyclones). Tropospheric ducting Sudden changes in the atmosphere's vertical moisture content and temperature profiles can, on random occasions, make UHF, VHF and microwave signals propagate hundreds of kilometers beyond the normal radio horizon, and in ducting mode even farther. The inversion layer is mostly observed over high pressure regions, but there are several tropospheric weather conditions which create these randomly occurring propagation modes. The altitude of the inversion layer differs between non-ducting and ducting events, and the duration of the events is typically from several hours up to several days. Higher frequencies experience the most dramatic increase of signal strengths, while on low-VHF and HF the effect is negligible. Propagation path attenuation may be below free-space loss. Some of the lesser inversion types, related to warm ground and cooler air moisture content, occur regularly at certain times of the year and time of day. A typical example could be the late summer, early morning tropospheric enhancements that bring in signals from distances up to a few hundred kilometers for a couple of hours, until undone by the Sun's warming effect. Tropospheric scattering (troposcatter) At VHF and higher frequencies, small variations (turbulence) in the density of the atmosphere can scatter some of the normally line-of-sight beam of radio frequency energy back toward the ground. In tropospheric scatter (troposcatter) communication systems a powerful beam of microwaves is aimed above the horizon, and a high gain antenna over the horizon, aimed at the section of the troposphere through which the beam passes, receives the tiny scattered signal. Troposcatter systems can achieve over-the-horizon communication between stations hundreds of kilometers apart, and the military developed networks such as the White Alice Communications System covering all of Alaska before the 1960s, when communication satellites largely replaced them. A tropospheric scatter NLOS link typically operates at a few gigahertz using potentially very high transmit powers (typically 3 kW to 30 kW, depending on conditions), very sensitive receivers and very high gain, usually fixed, large reflector antennas. The transmit beam is directed into the troposphere just above the horizon with sufficient power flux density that gas and water vapour molecules cause scattering in a region in the beam path known as the scatter volume. Some components of the scattered energy travel in the direction of the receiver antennas and form the received signal. Since there are very many particles to cause scattering in this region, the Rayleigh fading statistical model may usefully predict behaviour and performance in this kind of system. Rain scattering Rain scattering is purely a microwave propagation mode and is best observed around 10 GHz, but it extends down to a few gigahertz, the limit being the size of the scattering particles relative to the wavelength. This mode scatters signals mostly forwards and backwards when using horizontal polarization and mostly sideways when using vertical polarization. Forward-scattering typically yields propagation ranges of 800 km (500 miles). Scattering from snowflakes and ice pellets also occurs, but scattering from ice without a watery surface is less effective.
The most common application of this phenomenon is microwave rain radar, but rain scatter propagation can be a nuisance, causing unwanted signals to intermittently propagate where they are not anticipated or desired. Similar reflections may also occur from insects, though at lower altitudes and shorter range. Rain also causes attenuation of point-to-point and satellite microwave links. Attenuation values up to 30 dB have been observed at 30 GHz during heavy tropical rain. Lightning scattering Lightning scattering has sometimes been observed on VHF and UHF over distances of about 500 km (300 miles). The hot lightning channel scatters radio waves for a fraction of a second. The RF noise burst from the lightning makes the initial part of the open channel unusable, and the ionization disappears quickly because of recombination at low altitude and high atmospheric pressure. Although the hot lightning channel is briefly observable with microwave radar, no practical use for this mode has been found in communications. Ionospheric propagation The mechanism of ionospheric propagation in supporting NLOS links is similar to that for atmospheric refraction but, in this case, the radio wave refraction occurs not in the atmosphere but in the ionosphere at much greater altitudes. Like its tropospheric counterpart, ionospheric propagation can sometimes be statistically modelled using Rayleigh fading. The ionosphere extends from altitudes of approximately 50 km to 400 km and is divided into distinct plasma layers denoted D, E, F1, and F2 in increasing altitude. Refraction of radio waves by the ionosphere rather than the atmosphere can therefore allow NLOS links of much greater distance for just one refraction path or 'hop' via one of the layers. Under certain conditions radio waves that have undergone one hop may reflect off the Earth's surface and experience more hops, so increasing the range. The positions of these layers and their ion densities are significantly controlled by the Sun's incident radiation and therefore change diurnally, seasonally and during sunspot activity. Marconi's discovery in the early 20th century that radio waves could travel beyond the horizon prompted extensive studies of ionospheric propagation for the next 50 years or so, which have yielded various HF link channel prediction tables and charts. Frequencies that are affected by ionospheric propagation range from approximately 500 kHz to 50 MHz, but the majority of such NLOS links operate in the 'short wave' or high frequency (HF) bands between 3 MHz and 30 MHz. In the latter half of the twentieth century, alternative means of communicating over large NLOS distances were developed, such as satellite communications and submarine optical fiber, both of which potentially carry much larger bandwidths than HF and are much more reliable. Despite their limitations, HF communications only need relatively cheap, crude equipment and antennas, so they are mostly used as backups to main communications systems and in sparsely populated remote areas where other methods of communication are not cost effective. Discussion Skywave propagation, also referred to as skip, is any of the modes that rely on reflection and refraction of radio waves from the ionosphere. The ionosphere is the region of the atmosphere, described above, that contains layers of charged particles (ions) which can refract a radio wave back toward the Earth.
A radio wave directed at an angle into the sky can be reflected back to Earth beyond the horizon by these layers, allowing long-distance radio transmission. The F2 layer is the most important ionospheric layer for long-distance, multiple-hop HF propagation, though the F1, E, and D layers also play significant roles. The D layer, when present during sunlight periods, causes a significant amount of signal loss, as does the E layer, whose maximum usable frequency can rise to 4 MHz and above and thus block higher frequency signals from reaching the F2 layer. The layers, or more appropriately "regions", are directly affected by the Sun on a daily diurnal cycle, a seasonal cycle and the 11-year sunspot cycle, and they determine the utility of these modes. During solar maxima, or sunspot highs and peaks, the whole HF range up to 30 MHz can be used usually around the clock, and F2 propagation up to 50 MHz is observed frequently, depending upon daily solar flux values. During solar minima, or minimum sunspot counts down to zero, propagation of frequencies above 15 MHz is generally unavailable. The claim is commonly made that two-way HF propagation along a given path is reciprocal, that is, if the signal from location A reaches location B at a good strength, the signal from location B will be similar at station A because the same path is traversed in both directions. However, the ionosphere is far too complex and constantly changing to support the reciprocity theorem. The path is never exactly the same in both directions. In brief, conditions at the two end-points of a path generally cause dissimilar polarization shifts, hence dissimilar splits into ordinary rays and extraordinary rays (Pedersen rays) which have different propagation characteristics due to differences in ionization density, shifting zenith angles, effects of the Earth's magnetic dipole contours, antenna radiation patterns, ground conditions, and other variables. Forecasting of skywave modes is of considerable interest to amateur radio operators and commercial marine and aircraft communications, and also to shortwave broadcasters. Real-time propagation can be assessed by listening for transmissions from specific beacon transmitters. Finite absorption If an object that changes a LOS link to NLOS is not a good conductor but an intermediate material, it absorbs some of the RF power incident upon it. However, if it has finite thickness the absorption is also finite, and the resulting attenuation of the radio waves may be tolerable, so an NLOS link may be set up using radio waves that actually pass through the material. As an example, WLANs often use finite-absorption NLOS links to communicate between a WLAN access point and WLAN client(s) in the typical office environment. The radio frequencies used, typically a few gigahertz (GHz), normally pass through a few thin office walls and partitions with tolerable attenuation. After many such walls, though, or after a few thick concrete or similar (non-metallic) walls, the NLOS link becomes unworkable. Meteor scattering Meteor scattering relies on reflecting radio waves off the intensely ionized columns of air generated by meteors. While this mode is of very short duration, often only from a fraction of a second to a couple of seconds per event, digital meteor burst communications allows remote stations to communicate with a station that may be hundreds of miles away, without the expense required for a satellite link. This mode is most generally useful on VHF frequencies between 30 and 250 MHz.
Auroral backscatter Intense columns of auroral ionization at 100 km (60 mi) altitudes within the auroral oval backscatter radio waves, including those on HF and VHF. Backscatter is angle-sensitive: the incident ray must be very close to a right angle with the magnetic field line of the column. Random motions of electrons spiraling around the field lines create a Doppler spread that broadens the spectra of the emission to be more or less noise-like, depending on how high a radio frequency is used. The radio-auroras are observed mostly at high latitudes and rarely extend down to middle latitudes. The occurrence of radio-auroras depends on solar activity (flares, coronal holes, CMEs), and annually the events are more numerous during solar cycle maxima. Radio aurora includes the so-called afternoon radio aurora, which produces stronger but more distorted signals; after the Harang minimum, the late-night radio aurora (sub-storming phase) returns with variable signal strength and lesser Doppler spread. The propagation range for this predominantly backscatter mode extends up to about 2000 km (1250 miles) in the east–west plane, but the strongest signals are observed most frequently from the north at nearby sites on the same latitudes. Rarely, a strong radio-aurora is followed by auroral-E, which resembles both propagation types in some ways. Sporadic-E propagation Sporadic E (Es) propagation occurs on the HF and VHF bands. It must not be confused with ordinary HF E-layer propagation. Sporadic-E at mid-latitudes occurs mostly during the summer season, from May to August in the northern hemisphere and from November to February in the southern hemisphere. There is no single cause for this mysterious propagation mode. The reflection takes place in a thin sheet of ionization around 90 km (55 miles) in height. The ionization patches drift westwards at speeds of a few hundred kilometers per hour. There is a weak periodicity noted during the season, and typically Es is observed on 1 to 3 successive days and remains absent for a few days before recurring. Es does not occur during the small hours; the events usually begin at dawn, with a peak in the afternoon and a second peak in the evening. Es propagation is usually gone by local midnight. Observation of radio propagation beacons operating around 28.2 MHz, 50 MHz and 70 MHz indicates that the maximum observed frequency (MOF) for Es is found to be around 30 MHz on most days during the summer season, but sometimes the MOF may shoot up to 100 MHz or even more in ten minutes, then decline slowly during the next few hours. The peak phase includes oscillation of the MOF with a periodicity of approximately 5 to 10 minutes. The propagation range for a single Es hop is typically 1000 to 2000 km (600 to 1250 miles), but with multiple hops double the range is observed. The signals are very strong but also subject to slow, deep fading. Airplane scattering Airplane scattering (or most often reflection) is observed on VHF through microwaves and, besides back-scattering, yields momentary propagation up to 500 km (300 miles) even in mountainous terrain. The most common back-scatter applications are air-traffic radar, bistatic forward-scatter guided-missile and airplane-detecting trip-wire radar, and the US space radar. Earth–Moon–Earth communication Other effects Diffraction Knife-edge diffraction is the propagation mode where radio waves are bent around sharp edges. For example, this mode is used to send radio signals over a mountain range when a line-of-sight path is not available.
However, the angle cannot be too sharp or the signal will not diffract. The diffraction mode requires increased signal strength, so higher power or better antennas will be needed than for an equivalent line-of-sight path. Diffraction depends on the relationship between the wavelength and the size of the obstacle; in other words, on the size of the obstacle measured in wavelengths. Lower frequencies diffract around large smooth obstacles such as hills more easily. For example, in many cases where VHF (or higher frequency) communication is not possible due to shadowing by a hill, it is still possible to communicate using the upper part of the HF band, where the surface wave is of little use. Diffraction phenomena by small obstacles are also important at high frequencies. Signals for urban cellular telephony tend to be dominated by ground-plane effects as they travel over the rooftops of the urban environment. They then diffract over roof edges into the street, where multipath propagation, absorption and diffraction phenomena dominate. Absorption Low-frequency radio waves travel easily through brick and stone, and VLF even penetrates sea-water. As the frequency rises, absorption effects become more important. At microwave or higher frequencies, absorption by molecular resonances in the atmosphere (mostly from water, H2O, and oxygen, O2) is a major factor in radio propagation. For example, in the 58–60 GHz band there is a major absorption peak which makes this band useless for long-distance use. This phenomenon was first discovered during radar research in World War II. Above about 400 GHz, the Earth's atmosphere blocks most of the spectrum up to ultraviolet light, which is blocked by ozone, although visible light and some of the near-infrared are transmitted. Heavy rain and falling snow also affect microwave absorption. Effect on positioning In most recent localization systems, it is assumed that the received signals propagate through a LOS path. Infringement of this assumption can result in inaccurate positioning data. For time-of-arrival-based localization systems, when the direct path is blocked the emitted signal can arrive at the receiver only through NLOS paths. The NLOS error is defined as the extra distance travelled by the received signal with respect to the LOS path. The NLOS error is always positively biased, with a magnitude dependent on the propagation environment. References Further reading Bullington, K.; "Radio Propagation Fundamentals"; Bell System Technical Journal Vol. 36 (May 1957); pp. 593–625. "Technical Planning Parameters and Methods for Terrestrial Broadcasting" (April 2004); Australian Broadcasting Authority. External links Research on "Non-line-of-sight (NLOS) Localisation for Indoor Environments" by CMR at UNSW Radio frequency propagation
Non-line-of-sight propagation
[ "Physics" ]
7,326
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
2,175,504
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28statistics%29
In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis. It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters. The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Karl Pearson. Method Suppose that the parameter $\theta = (\theta_1, \theta_2, \ldots, \theta_k)$ characterizes the distribution of the random variable $W$. Suppose the first $k$ moments of the true distribution (the "population moments") can be expressed as functions of the $\theta$s: $\mu_j = \mathrm{E}[W^j] = g_j(\theta_1, \ldots, \theta_k), \quad j = 1, \ldots, k.$ Suppose a sample of size $n$ is drawn, resulting in the values $w_1, \ldots, w_n$. For $j = 1, \ldots, k$, let $\widehat{\mu}_j = \frac{1}{n} \sum_{i=1}^{n} w_i^j$ be the j-th sample moment, an estimate of $\mu_j$. The method of moments estimator for $\theta$, denoted by $\widehat{\theta}$, is defined to be the solution (if one exists) to the equations: $\widehat{\mu}_j = g_j(\widehat{\theta}_1, \ldots, \widehat{\theta}_k), \quad j = 1, \ldots, k.$ The method described here for single random variables generalizes in an obvious manner to multiple random variables, leading to multiple choices for the moments to be used. Different choices generally lead to different solutions [5], [6]. Advantages and disadvantages The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased. It is an alternative to the method of maximum likelihood. However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Due to this easy computability, method-of-moments estimates may be used as the first approximation to the solutions of the likelihood equations, and successive improved approximations may then be found by the Newton–Raphson method. In this way the method of moments can assist in finding maximum likelihood estimates. In some cases, infrequent with large samples but less infrequent with small samples, the estimates given by the method of moments are outside of the parameter space (as shown in the example below); it does not make sense to rely on them then. That problem never arises in the method of maximum likelihood. Also, estimates by the method of moments are not necessarily sufficient statistics, i.e., they sometimes fail to take into account all relevant information in the sample. When estimating other structural parameters (e.g., parameters of a utility function, instead of parameters of a known probability distribution), appropriate probability distributions may not be known, and moment-based estimates may be preferred to maximum likelihood estimation. Alternative method of moments The equations to be solved in the method of moments (MoM) are in general nonlinear and there are no generally applicable guarantees that tractable solutions exist. But there is an alternative approach to using sample moments to estimate data model parameters in terms of known dependence of model moments on these parameters, and this alternative requires the solution of only linear equations or, more generally, tensor equations.
This alternative is referred to as the Bayesian-Like MoM (BL-MoM), and it differs from the classical MoM in that it uses optimally weighted sample moments. Considering that the MoM is typically motivated by a lack of sufficient knowledge about the data model to determine likelihood functions and associated a posteriori probabilities of unknown or random parameters, it is odd that there exists a type of MoM that is Bayesian-Like. But the particular meaning of Bayesian-Like leads to a problem formulation in which the required knowledge of a posteriori probabilities is replaced with required knowledge of only the dependence of model moments on unknown model parameters, which is exactly the knowledge required by the traditional MoM [1],[2],[5]–[9]. The BL-MoM also uses knowledge of a priori probabilities of the parameters to be estimated, when available, but otherwise uses uniform priors. The BL-MoM has been reported on only in the applied statistics literature in connection with parameter estimation and hypothesis testing using observations of stochastic processes for problems in information and communications theory and, in particular, communications receiver design in the absence of knowledge of likelihood functions or associated a posteriori probabilities [10] and references therein. In addition, the restatement of this receiver design approach for stochastic process models as an alternative to the classical MoM for any type of multivariate data is available in tutorial form at the university website [11, page 11.4]. The applications in [10] and references demonstrate some important characteristics of this alternative to the classical MoM, and a detailed list of relative advantages and disadvantages is given in [11, page 11.4], but the literature is missing direct comparisons in specific applications of the classical MoM and the BL-MoM. Examples An example application of the method of moments is to estimate polynomial probability density distributions. In this case, an approximating polynomial of order $m$ is defined on an interval $[a, b]$. The method of moments then yields a system of equations whose solution involves the inversion of a Hankel matrix. Proving the central limit theorem Let $X_1, X_2, \ldots$ be independent random variables with mean 0 and variance 1, and let $S_n = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i$. We can compute the even moments of $S_n$ as $\mathrm{E}[S_n^{2k}] = \frac{(2k-1)!!\, n(n-1)\cdots(n-k+1)}{n^k} + o(1),$ with the odd moments vanishing in the limit. Explicit expansion shows that the numerator, $(2k-1)!!\, n(n-1)\cdots(n-k+1)$, is the number of ways to select $k$ distinct pairs of balls by picking one each from $2k$ buckets, each containing balls numbered from $1$ to $n$. In the limit, all moments converge to those of a standard normal distribution. Further analysis then shows that this convergence in moments implies a convergence in distribution. Essentially this argument was published by Chebyshev in 1887. Uniform distribution Consider the uniform distribution on the interval $[a, b]$, $U(a, b)$. If $W \sim U(a, b)$ then we have $\mu_1 = \mathrm{E}[W] = \frac{a+b}{2}$ and $\mu_2 = \mathrm{E}[W^2] = \frac{a^2 + ab + b^2}{3}.$ Solving these equations gives $\widehat{a} = \widehat{\mu}_1 - \sqrt{3\left(\widehat{\mu}_2 - \widehat{\mu}_1^2\right)}, \qquad \widehat{b} = \widehat{\mu}_1 + \sqrt{3\left(\widehat{\mu}_2 - \widehat{\mu}_1^2\right)}.$ Given a set of samples we can use the sample moments $\widehat{\mu}_1$ and $\widehat{\mu}_2$ in these formulae in order to estimate $a$ and $b$. Note, however, that this method can produce inconsistent results in some cases. For example, the set of samples $\{0, 0, 0, 0, 1\}$ results in the estimate $\widehat{b} \approx 0.89$, even though the sample contains the value $1 > \widehat{b}$, and so it is impossible for the set to have been drawn from $U(\widehat{a}, \widehat{b})$ in this case. See also Generalized method of moments Decoding methods References References needing to be wikified [4] Pearson, K. (1936), "Method of Moments and Method of Maximum Likelihood", Biometrika 28(1/2), 35–59. [5] Lindsay, B.G. & Basak P. (1993). "Multivariate normal mixtures: a fast consistent method of moments", Journal of the American Statistical Association 88, 468–476.
[6] Quandt, R.E. & Ramsey, J.B. (1978). "Estimating mixtures of normal distributions and switching regressions", Journal of the American Statistical Association 73, 730–752. [7] https://real-statistics.com/distribution-fitting/method-of-moments/ [8] Hansen, L. (1982). "Large sample properties of generalized method of moments estimators", Econometrica 50, 1029–1054. [9] Lindsay, B.G. (1982). "Conditional score functions: some optimality results", Biometrika 69, 503–512. [10] Gardner, W.A. (1981). "Design of nearest prototype signal classifiers", IEEE Transactions on Information Theory 27 (3), 368–372. [11] Cyclostationarity Probability distribution fitting Moment (mathematics)
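The uniform-distribution estimator derived in the examples above is easy to express in code. The sketch below is illustrative (not drawn from any of the references listed): it solves the two moment equations for U(a, b) and reproduces the inconsistency noted for the sample {0, 0, 0, 0, 1}.

```python
import numpy as np

def uniform_mom(samples):
    """Method-of-moments estimates (a_hat, b_hat) for U(a, b),
    solving E[W] = (a+b)/2 and E[W^2] = (a^2 + a*b + b^2)/3."""
    x = np.asarray(samples, dtype=float)
    m1 = x.mean()           # first sample moment
    m2 = np.mean(x ** 2)    # second sample moment
    half_width = np.sqrt(3.0 * (m2 - m1 ** 2))
    return float(m1 - half_width), float(m1 + half_width)

rng = np.random.default_rng(1)
print(uniform_mom(rng.uniform(2.0, 7.0, size=10_000)))  # close to (2.0, 7.0)
print(uniform_mom([0, 0, 0, 0, 1]))  # b_hat ~ 0.89 < max(sample) = 1
```

The second call shows the estimate falling outside the parameter space: the fitted interval cannot contain the observed value 1.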
Method of moments (statistics)
[ "Physics", "Mathematics" ]
1,583
[ "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Moment (physics)" ]
2,175,518
https://en.wikipedia.org/wiki/Siemens%20star
A Siemens star, or spoke target, is a device used to test the resolution of optical instruments, printers, and displays. It consists of a pattern of bright "spokes" on a dark background that radiate from a common center and become wider as they get further from it. In concept, the spokes only meet at the exact center of the star – the spokes, and the gaps between them, become narrower the closer to the center one looks, but they never touch except at the center. When printed or displayed on a device with limited resolution, however, the spokes touch at some distance from the center. The smallest gap visible is limited by the smallest dot of ink the printer can produce, making the Siemens star a useful tool for comparing two printers' resolutions (DPI). Similarly, it can be applied to a camera's optical resolution by taking photographs of a Siemens star printed at high resolution and comparing photographs from different cameras, to see which retains detail closest to the center. In the field of video production, where it is often called a back focus chart, the Siemens star is widely used to adjust the back focus of removable lenses. It is also used during film or video shoots to help set the focus in special situations. Siemens stars are similar to the sunburst pattern used as a background in graphic design, as in the Japanese Naval Ensign, Russian Air Force flag and Jordanian Royal Standard. They are useful in drawing the eye to a point on the page. Under optical blur from defocus, a Siemens star (like any periodic pattern) gives rise to the phenomenon of spurious resolution above the resolution limit, i.e. toward the center of the Siemens star. (Spurious resolution appears similar to aliasing, but it is a purely optical phenomenon, so it occurs without need of pixels.) This results in inverted polarity of the stripe pattern: black stripes appear in the place of white stripes and vice versa (and further polarity inversions occur further inward). (The illustration under Optical transfer function shows spurious resolution caused by blurring.) When looking at the Siemens star with slightly blurred vision, e.g., without spectacles or with defocus from staring, this is seen as a shimmering ring around the Siemens star's center that changes size with viewing distance. The star was developed by Siemens & Halske AG (today Siemens) in the 1930s to test the lenses of Siemens narrow-film cameras. See also Secchi disk References External links ISO 15775 chart (pdf) resolution test chart featuring a vector Siemens star Siemens star with n=314 sectors for high resolution and easy calculation (vector graphic in PDF) Visual perception Optical illusions
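A Siemens star is straightforward to synthesize. The following minimal sketch is an illustration only (not a reference implementation of any standard chart such as the ISO 15775 star linked above): it renders alternating bright and dark sectors whose width grows with radius, so the point where a given output device merges them reveals its resolution limit.

```python
import numpy as np
import matplotlib.pyplot as plt

def siemens_star(n_spokes=36, size=512):
    """Render a Siemens star as a boolean image: bright spokes
    radiate from the centre and widen with increasing radius."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    theta = np.arctan2(y, x)
    star = np.cos(n_spokes * theta) > 0   # alternating angular sectors
    star &= (x ** 2 + y ** 2) <= 1        # clip the pattern to a disc
    return star

plt.imshow(siemens_star(), cmap="gray")
plt.axis("off")
plt.savefig("siemens_star.png", dpi=150)
```

Rendering the same array at different resolutions (the `size` and `dpi` parameters) shows the central merging effect the article describes.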
Siemens star
[ "Physics" ]
548
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,175,523
https://en.wikipedia.org/wiki/Glucomannan
Glucomannan is a water-soluble polysaccharide that is considered a dietary fiber. It is a hemicellulose component in the cell walls of some plant species. Glucomannan is a food additive used as an emulsifier and thickener. It is a major source of mannan oligosaccharide (MOS) found in nature, the other being galactomannan, which is insoluble. Products containing glucomannan, under a variety of brand names, are marketed as dietary supplements with claims they can relieve constipation and help lower cholesterol levels. Since 2010 they have been legally marketed in Europe as helping with weight loss for people who are overweight and eating a calorie-restricted diet, but there is no good evidence that glucomannan helps weight loss. Glucomannan may lower LDL cholesterol by up to 10 percent, according to one meta-study. Supplements containing glucomannans pose a risk of choking and bowel obstruction if they are not taken with sufficient water. Other adverse effects include diarrhea, belching, and bloating; in one study people taking glucomannans had higher triglyceride levels. Glucomannans are also used to supplement animal feed for farmed animals, to cause the animals to gain weight more quickly. Chemistry Glucomannan is mainly a straight-chain polymer, with a small amount of branching. The component sugars are β-(1→4)-linked D-mannose and D-glucose in a ratio of 1.6:1. The degree of branching is about 8%, through β-(1→6)-glucosyl linkages. Glucomannan with α-(1→6)-linked galactose units in side branches is called galactoglucomannan. Biological function In the yeast cell wall, mannan oligosaccharides are present in complex molecules that are linked to the protein moiety. There are two main locations of mannan oligosaccharides in the surface area of the Saccharomyces cerevisiae cell wall. They can be attached to the cell wall proteins as part of –O and –N glycosyl groups, and they also constitute elements of large α-D-mannose polysaccharides (α-D-mannans), which are built of α-(1,2)- and α-(1,3)-D-mannose branches (from 1 to 5 rings long) attached to long α-(1,6)-D-mannose chains. This combination of functionalities, involving mannan oligosaccharide–protein conjugates and highly hydrophilic, structurally variable 'brush-like' mannan oligosaccharide structures that can fit various receptors of animal digestive tracts and receptors on the surface of bacterial membranes, determines these molecules' bioactivity. Mannan oligosaccharide–protein conjugates are involved in interactions with the animal's immune system and as a result enhance immune system activity. They also play a role in animal antioxidant and antimutagenic defense. Natural sources Glucomannan comprises 40% by dry weight of the roots, or corm, of the konjac plant. Another culinary source is salep, ground from the roots of certain orchids and used in Greek and Turkish cuisine. However, these orchid species are protected throughout the EU and the trade of salep is strictly forbidden. Glucomannan is also a hemicellulose that is present in large amounts in the wood of conifers and in smaller amounts in the wood of dicotyledons. Glucomannan is also a constituent of bacterial, plant and yeast cell walls, with differences in the branches or glycosidic linkages in the linear structure. Uses Human food additive Glucomannan is a food additive used as an emulsifier and thickener with the E number E425(ii).
Glucomannan-rich salep powder is responsible for the unique textural properties of salep dondurma, a mastic-flavored stretchable and chewy ice cream of Turkish origin. Konjac, also rich in glucomannan, is widely used for its jelly-like texture. It is found in shirataki noodles, in fruit jelly snacks (which carry a choking risk), and as a substitute for gelatin. Human dietary supplement Glucomannan is an ingredient in a variety of dietary supplement products marketed with claims that they aid in weight loss, but medical research has found no good evidence to support its use for this purpose. The claim is that it makes a gel when mixed with water, which can take up space in the stomach and linger there longer than water alone would, inducing a person to feel full after having eaten a smaller amount of food. In Europe and Canada, glucomannan dietary supplements can be marketed with claims to lower cholesterol levels and to relieve constipation. Data from a randomized controlled clinical trial suggests that glucomannan dietary supplements help regulate the hormone ghrelin and might help control appetite in people with Type 2 diabetes. Health risks A health advisory was released by Health Canada stating the following: "Natural health products containing the ingredient glucomannan in tablet, capsule or powder form, which are currently on the Canadian market, have a potential for harm if taken without at least 250 ml or 8 ounces of water or other fluid. The risk includes choking and/or blockage of the throat, esophagus or intestine, according to international adverse reaction case reports. It is also important to note that these products should not be taken immediately before going to bed." Other adverse effects include diarrhea, belching, and bloating; in one study people taking glucomannans had higher triglyceride levels. Consumer issues Several companies have been determined by the Federal Trade Commission (FTC) or the Food and Drug Administration (FDA) to have, at some time, violated the Federal Food, Drug, and Cosmetic Act. The companies include Vitacost, PediaLean, Herbal Worldwide Holdings, BioTrim, and others. The company Obesity Research Institute, the marketer of FiberThin, Zylotrim, Propolene and Lipozene, settled FTC charges that their misleading weight-loss claims violated federal laws by agreeing to pay $1.5 million in consumer redress. In 2001, a number of jelly-type candy products containing konjac-derived glucomannan were barred from import by the U.S. Food and Drug Administration due to choking hazards. Dietary supplements for animals Glucomannan is also used as a dietary supplement for farmed animals in order to improve the efficiency with which they convert feed into body weight (the feed conversion ratio). The effect of mannan oligosaccharides on animal performance has been analysed in meta-analyses for poultry, pigs, and calves. References External links Glucomannan information from Natural Medicines Comprehensive Database via RxList Polysaccharides Edible thickening agents Natural gums
Glucomannan
[ "Chemistry" ]
1,522
[ "Carbohydrates", "Polysaccharides" ]
2,175,587
https://en.wikipedia.org/wiki/Laetiporus%20sulphureus
Laetiporus sulphureus is a species of bracket fungus (fungi that grow on trees) found in Europe and North America. Its common names are sulphur polypore, sulphur shelf, and chicken-of-the-woods. Its fruit bodies grow as striking golden-yellow shelf-like structures on tree trunks and branches. Old fruitbodies fade to pale beige or pale grey. The undersurface of the fruit body is made up of tubelike pores rather than gills. Laetiporus sulphureus is a saprophyte and occasionally a weak parasite, causing brown cubical rot in the heartwood of trees on which it grows. Unlike many bracket fungi, it is edible when young, although adverse reactions have been reported. Taxonomy Laetiporus sulphureus was first described as Boletus sulphureus by French mycologist Pierre Bulliard in 1789. It has had many synonyms and was finally given its current name in 1920 by American mycologist William Murrill. Laetiporus means "with bright pores" and sulphureus means "the colour of sulphur". Investigations in North America have shown that there are several similar species within what has been considered L. sulphureus, and that the true L. sulphureus may be restricted to regions east of the Rocky Mountains. Phylogenetic analyses of ITS and nuclear large subunit and mitochondrial small subunit rDNA sequences from North American collections have delineated five distinct clades within the core Laetiporus clade. Sulphureus clade I contains white-pored L. sulphureus isolates, while Sulphureus clade II contains yellow-pored L. sulphureus isolates. Description The fruiting body emerges directly from the trunk of a tree and is initially knob-shaped, but soon expands to fan-shaped shelves, typically growing in overlapping tiers. It is sulphur-yellow to bright orange in color and has a suedelike texture. Old fruitbodies fade to tan or whitish. The shelves vary considerably in size and can grow quite thick. The fertile surface is sulphur-yellow with small pores or tubes and produces a white spore print. When fresh, the flesh is succulent with a strong fungal aroma and exudes a yellowish, transparent juice, but soon becomes dry and brittle. Distribution and habitat Laetiporus sulphureus is widely distributed across Europe (April to November) and North America, although its range may be restricted to areas east of the Rockies. It grows on dead or mature hardwoods and has been reported from a very wide variety of host trees, such as Quercus, Prunus, Pyrus, Populus, Salix, Robinia, and Fagus, and occasionally also from conifers, from August to October or later, sometimes as early as June. In the Mediterranean region, this species is usually found on Ceratonia and Eucalyptus. It can usually be found growing in clusters. Parasitism The fungus causes brown cubical rot of heartwood in the roots, tree base and stem. After infection, the wood is at first discolored yellowish to red but subsequently becomes reddish-brown and brittle. At the final stages of decay, the wood can be rubbed like powder between the fingers. Guinness world record A record-setting specimen was found in the New Forest, Hampshire, United Kingdom, on 15 October 1990. Cultivation Compared with species such as Agaricus bisporus (Swiss brown mushroom) and the oyster mushroom, commercial cultivation of Laetiporus occurs at a much smaller and less mechanized scale. Uses Due to its taste, Laetiporus sulphureus has been called the chicken polypore and chicken-of-the-woods (not to be confused with Grifola frondosa, the so-called hen-of-the-woods).
Many people think that the mushroom tastes like crab or lobster, leading to the nickname lobster-of-the-woods. The authors of Mushrooms in Color said that the mushroom tastes good sautéed in butter or prepared in a cream sauce served on toast or rice. It is highly regarded in Germany and North America. Young specimens are edible if they exude large amounts of a clear to pale yellow watery liquid. Only the young outer edges of larger specimens should be collected, as older portions tend to be tough, unpalatable, and bug-infested. The mushroom should not be eaten raw. Certain species of deer consume this type of mushroom. Adverse effects Some people have experienced gastrointestinal upset after eating this mushroom, and it should not be consumed raw. Severe adverse reactions, including vomiting and fever, can occur in about 10% of the population, but this is now thought to be the result of confusion with morphologically similar species such as Laetiporus huroniensis, which grows on hemlock trees, and L. gilbertsonii, which grows on Eucalyptus. Bioactivity The fungus produces the Laetiporus sulphureus lectin (LSL), which exhibits haemolytic and haemagglutination activities. Haemolytic lectins are sugar-binding proteins that lyse and agglutinate cells. These biochemical activities are promoted when bound to carbohydrates. See also Polypore References External links Wood-decay fungi Edible fungi Fungi described in 1789 Fungi of Europe Fungi of North America sulphureus Fungus species
Laetiporus sulphureus
[ "Biology" ]
1,126
[ "Fungi", "Fungus species" ]
2,175,615
https://en.wikipedia.org/wiki/Mixed%20potential%20theory
Mixed potential theory is a theory used in electrochemistry that combines the potentials and currents of different redox constituents into a single 'weighted' electrode potential at which the net current is zero. In other words, a mixed potential is an electrode potential resulting from the simultaneous action of more than one redox couple while the net electrode current is zero. IUPAC definition According to the IUPAC definition, the mixed potential is the potential of an electrode (measured against a suitable reference electrode, often the standard hydrogen electrode) when an appreciable fraction of the anodic or cathodic current arises from two or more different redox couples, but the total current on the electrode remains zero. References Electrochemistry Electrochemical potentials Chemistry theories
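As a rough numerical illustration of this definition (not part of the IUPAC text), one can model two redox couples with Butler–Volmer kinetics and search for the potential at which their summed current is zero. All parameter values below (equilibrium potentials, exchange current densities, transfer coefficient) are hypothetical, chosen only to make the sketch runnable:

```python
import math

# Minimal numeric sketch of a mixed potential: two hypothetical redox couples,
# each with Butler-Volmer (Tafel-type) kinetics; the mixed potential is the
# electrode potential where the total (net) current is zero. All parameters
# below are illustrative, not measured data.

F, R, T = 96485.0, 8.314, 298.15  # C/mol, J/(mol K), K

def couple_current(E, E_eq, i0, alpha=0.5, n=1):
    """Net current density of one redox couple at electrode potential E (V),
    using the Butler-Volmer expression with exchange current density i0."""
    eta = E - E_eq  # overpotential of this couple
    return i0 * (math.exp(alpha * n * F * eta / (R * T))
                 - math.exp(-(1 - alpha) * n * F * eta / (R * T)))

def total_current(E):
    # Hypothetical anodic couple (e.g., metal dissolution) plus a more noble
    # cathodic couple (e.g., oxidant reduction).
    return couple_current(E, E_eq=-0.44, i0=1e-2) + couple_current(E, E_eq=0.40, i0=1e-3)

# Bisection between the two equilibrium potentials for the zero-crossing:
lo, hi = -0.44, 0.40
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if total_current(lo) * total_current(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(f"mixed potential ~ {0.5 * (lo + hi):.3f} V (net current = 0)")
```

Because the total current is monotonically increasing in E for this model, the zero crossing (and hence the mixed potential) is unique and a simple bisection suffices.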
Mixed potential theory
[ "Chemistry" ]
147
[ "Electrochemical potentials", "Electrochemistry", "nan", "Electrochemistry stubs", "Physical chemistry stubs" ]
2,175,656
https://en.wikipedia.org/wiki/Mowich%20Lake
Mowich Lake is a lake located in the northwestern corner of Mount Rainier National Park in Washington state at an elevation of . The name "Mowich" derives from the Chinook Jargon word for deer. Access to the lake is provided by a long unpaved road, which opens to vehicles from mid-June to early July. Mowich Lake is also a busy campground during the summer, with 30 walk-in tent camping spots. Bathrooms, tables, and trash bins are provided. From the Mowich campground, hikers can reach the Wonderland Trail, Eunice Lake, Tolmie Peak, Spray Park, and Spray Falls. Old-growth trees, waterfalls, creeks, cliffs, and wildflower meadows are also located in and around the area. Fishing is generally poor at Mowich Lake because the area is not stocked with fish and provides no habitat for natural spawning. This body of water was named Crater Lake in 1883 by the geologist Bailey Willis under the belief that it was formed in a volcanic crater; however, I. C. Russell later wrote that the basin was instead shaped by ice erosion, and Willis concurred. Correspondents to the U.S. Board on Geographic Names (BGN) proposed Mowich to avoid ambiguity with the well-known Crater Lake to the south in Oregon. Mowich Lake was designated the official name by the BGN in 1919. The name of its outflow, Crater Creek, remained unchanged. References External links Mount Rainier National Park - Carbon and Mowich (U.S. National Park Service) Old-growth forests Lakes of Washington (state) Chinook Jargon place names Mount Rainier National Park Lakes of Pierce County, Washington
Mowich Lake
[ "Biology" ]
338
[ "Old-growth forests", "Ecosystems" ]
2,175,836
https://en.wikipedia.org/wiki/Hering%20illusion
The Hering illusion is one of the geometrical-optical illusions and was discovered by the German physiologist Ewald Hering in 1861. When two straight and parallel lines are presented in front of a radial background (like the spokes of a bicycle), the lines appear as if they were bowed outwards. The Orbison illusion is one of its variants, while the Wundt illusion produces a similar but inverted effect. There are several possible explanations for why the radiating pattern produces this perceptual distortion. Hering ascribed the illusion to an overestimation of the angle made at the points of intersection. If true, then the straightness of the parallel lines yields to that of the radiating lines, implying that there is a hierarchical ordering among components of such an illusion. Others have suggested that angle overestimation results from lateral inhibition in the visual cortex, while others have postulated a bias inherent in extrapolating 3D angle information from 2D projections. A different framework suggests that the Hering illusion (and several other geometric illusions) is caused by the temporal delays with which the visual system must cope. In this framework, the visual system extrapolates current information to "perceive the present": instead of providing a conscious image of how the world was ~100 ms in the past (when signals first struck the retina), the visual system estimates how the world is likely to look in the next moment. In the case of the Hering illusion, the radial lines trick the visual system into thinking it is moving forward. Since we are not actually moving and the figure is static, we misperceive the straight lines as curved, the way they would appear in the next moment. It is possible that both frameworks are compatible. The Hering illusion can also be induced by a background of optic flow (such as dots flowing out from the center of a screen, creating the illusion of forward motion through a starfield). Importantly, the bowing direction is the same whether the flow moves inward or outward. This result is consistent with a role for networks of visual orientation-tuned neurons (e.g., simple cells in primary visual cortex) in the spatial warping. In this framework, under the common condition of forward ego-motion, it is possible that spatial warping counteracts the disadvantage of neural latencies. However, it also demonstrates that any spatial warping that counteracts neural delays is not a precise, on-the-fly computation, but instead a heuristic achieved by a simple mechanism that succeeds under normal circumstances. Researcher Mark Changizi explained the illusion in a 2008 article: "Evolution has seen to it that geometric drawings like this elicit in us premonitions of the near future. The converging lines toward a vanishing point (the spokes) are cues that trick our brains into thinking we are moving forward as we would in the real world, where the door frame (a pair of vertical lines) seems to bow out as we move through it and we try to perceive what that world will look like in the next instant." References Optical illusions
Hering illusion
[ "Physics" ]
630
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,175,985
https://en.wikipedia.org/wiki/Ehrenstein%20illusion
The Ehrenstein illusion is an optical illusion of brightness or color perception. The visual phenomenon was studied by the German psychologist Walter H. Ehrenstein (1899–1961), who originally wanted to modify the theory behind the Hermann grid illusion. In discovering the optical illusion, Ehrenstein found that grating patterns of straight lines that stop at a certain point appear to have a brighter centre compared to the background. Ehrenstein published his book Modifications of the Brightness Phenomenon of L. Hermann to disprove Hermann's theory. He argued that these illusions were not caused by a contrast effect, but rather by a brightness effect that needed to be further explored. The effect The Ehrenstein illusion can be classified as a brightness illusion. The borders of a shape affect the observed brightness of an image surface. These effects vary from person to person and can be further enhanced by changing the background of the configuration or the surroundings of the image surface. The observer perceives the luminance of a surface to be brighter, despite it being identical to the background. Illusory contour Sometimes the "Ehrenstein" is associated with an illusory contour figure in which the ends of the dark segments produce the illusion of circles or squares. The apparent circular figures at the centre of the configuration are the same colour as the background, but appear brighter. The brightness effect disappears when the line segments are joined with a circular disk. This fits the characteristics of an optical illusion, as the asymmetrical perceptual sensations we perceive have no physical origin. A similar effect of illusory contour is seen in figures such as the Kanizsa triangle. Variations Ehrenstein carried out variations of the original illusion to test how the strength of the perception can change. In one variation, he changed the thickness of the lines. He found that the lines could be made very thin and the illusion would still remain. The brightness of the centre increases with the thickness of the lines. However, when the lines become so broad that the central white line becomes enclosed, the illusion loses its bright appearance and is no longer visible. Paradox of shape perception In 1954, further variations of the original Ehrenstein illusion found that the sides of a square take on an apparent curved shape when placed inside a pattern of concentric circles. This optical illusion uses geometric factors to create an illusory contour of the shape, unlike the other configurations, which use illusory brightness. Ehrenstein also found that reducing the size of the overall figure enhanced the paradox and made the contour appear thicker. The shapes in the image remain constant despite small changes in the overall characteristics of the configuration. The monochrome colours of the images further enhance the illusion of the square becoming curved. Theories and explanations Gestalt psychology can be used to explain theories of illusory contours. It assumes a bottom-up approach to complex thinking, meaning people take in and process available information before arriving at a final conclusion. When processing an illusion, preconceived expectations and beliefs about an image or shape are then applied to explain the unknown of the illusion. Cognitive interpretations of a physical stimulus are constructed from the expectations of what we know, rather than what we visually see.
This means that when viewing an optical illusion or configuration, we intrinsically reconstruct the image as a whole using any cognitive or symbolic cues available. German psychologist Wolfgang Köhler described this behaviour as "explaining away." If sightings cannot be explained immediately, the brain invents alternative interpretations, which have no factual or perceptual evidence behind them, as a way to fill in the gaps. This theory stems from the concept of cognitive dissonance. When the brain holds two conflicting beliefs at the same time, it will do anything to change them so that they are consistent with one another. We do this because we feel a sense of discomfort when our norms and expectations are violated. This is seen in the Ehrenstein illusions. The absence of physical stimuli, such as incomplete grids, makes the mind uncomfortable. In order to resolve this issue, the brain produces images of circles and squares to create order and consistency. Visual processing of incomplete or confusing stimuli therefore relies on intrinsic problem-solving processes to restore information. Criticisms Gestalt psychology and cognitive dissonance theories of illusions have been criticised for their unsatisfactory explanations of brightness effects. Although they explore how and when the illusory brightness or contour can vary, there is limited explanation of how the illusory form emerges. Therefore, current descriptions of the Ehrenstein illusion are not sufficient, as they do not account for the visual and perceptual mechanisms of the phenomenon. According to the German neurologist Lothar Spillman, further research is required to create a new theory which accounts for both the characteristics of the illusory stimulus and the perceptual processing of the individual. There is also conflicting evidence against the current explanations of perception. Gestalt psychology assumes that the brain must have some kind of previous expectation which forms the basis of the illusion, before a conscious interpretation of the stimulus takes place. However, recent computational studies suggest that the representation of the stimuli is built up by the brain through the competition and cooperation of visual neurons. Once again, further neurophysiological and psychological research is required to assess whether the current theories are an accurate model of the Ehrenstein illusion. References Optical illusions
Ehrenstein illusion
[ "Physics" ]
1,104
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,176,027
https://en.wikipedia.org/wiki/Calvatia%20gigantea
Calvatia gigantea, commonly known in English as the giant puffball, is a puffball mushroom commonly found in meadows, fields, and deciduous forests in late summer and autumn. It is found in temperate areas throughout the world. Description According to the Missouri Department of Conservation, Calvatia gigantea typically grows up to wide and high. According to First Nature, it "can grow to 80 cm diameter and weigh several kilograms". A specimen weighing over was recorded in Thunder Bay, Ontario, Canada. The interior of an immature puffball is white, while that of a mature specimen is greenish brown. The fruiting body of a puffball mushroom develops within a few weeks and soon begins to decompose and rot (at which point it is dangerous to eat). Unlike most mushrooms, all the spores of the giant puffball are created inside the fruiting body; large specimens can easily contain several trillion. The spores are yellowish, smooth, and 3–5 μm in size. Similar fungi Giant puffballs resemble the poisonous earthball (Scleroderma citrinum). The latter is distinguished by a much firmer, elastic fruiting body and an interior that becomes dark purplish-black with white reticulation early in development. Taxonomy The classification of this species has been revised in recent years. First Nature explains that "puffballs, earthballs, earthstars, stinkhorns and several other kinds of fungi were once thought to be related and were known as the gasteromycetes or 'stomach' fungi, because the fertile material develops inside spherical or pear-shaped fruitbodies." However, many mycologists now believe that "the gasteromycetes" do not share a single ancestor; they are polyphyletic. Today, some authors place the giant puffball and other members of the genus Calvatia in the order Agaricales. The giant puffball has also been placed in two other genera, Lycoperdon and Langermannia, in years past. The current view is that the giant puffball belongs in Calvatia. Conservation status The giant puffball is widespread and common in the United Kingdom. It is protected in parts of Poland and is of conservation concern in Norway. Uses Cooking The large white mushrooms are edible when young, as are all true puffballs, but they can cause digestive issues if the spores have begun to form, as indicated by the color of the flesh being yellowish or greenish-brown instead of pure white. The Lovesick Lake Native Women's Association explains that an overripe puffball "will fall apart when touched or if cut open" and should be discarded. Immature gilled species still contained within their universal veil can be lookalikes for puffballs. Many such species are poisonous or even deadly. To distinguish puffballs from such poisonous fungi, they must be cut open; edible puffballs will have a solid white interior and have "no gills or other imperfections". Medical Puffballs are a known styptic and have long been used as wound dressing, either in powdered form or as slices 3 cm thick. Authors Hui-Yeng Y. Yap, Mohammad Farhan Ariffeen Rosli, et al. found evidence to suggest that C. gigantea was "traditionally used by American Indians, Nigerian and German folks" for this purpose. The authors, however, did not specify the preferred form of wound dressing (e.g., powdered or sliced). New Zealand Māori used it to stem bleeding and treat burns; it was also a food source.
References Further reading External links Video footage of mature Giant Puffballs The Giant Puffball Volatiles of the Giant Puffball Mushroom (Calvatia gigantea) Lycoperdaceae Fungi of Europe Fungi of North America Edible fungi Fungi found in fairy rings Puffballs Taxa named by August Batsch Taxa named by Christiaan Hendrik Persoon gigantea Fungi used for fiber dyes Fungus species
Calvatia gigantea
[ "Biology" ]
813
[ "Fungi", "Fungus species" ]
2,176,031
https://en.wikipedia.org/wiki/Healthcare%20Effectiveness%20Data%20and%20Information%20Set
The Healthcare Effectiveness Data and Information Set (HEDIS) is a widely used set of performance measures in the managed care industry, developed and maintained by the National Committee for Quality Assurance (NCQA). HEDIS was designed to allow consumers to compare health plan performance to other plans and to national or regional benchmarks. Although not originally intended for trending, HEDIS results are increasingly used to track year-to-year performance. HEDIS is one component of NCQA's accreditation process, although some plans submit HEDIS data without seeking accreditation. An incentive for many health plans to collect HEDIS data is a Centers for Medicare and Medicaid Services (CMS) requirement that health maintenance organizations (HMOs) submit Medicare HEDIS data in order to provide HMO services for Medicare enrollees under a program called Medicare Advantage. HEDIS was originally titled the "HMO Employer Data and Information Set" as of version 1.0 of 1991. In 1993, Version 2.0 of HEDIS was known as the "Health Plan Employer Data and Information Set". Version 3.0 of HEDIS was released in 1997. In July 2007, NCQA announced that the meaning of "HEDIS" would be changed to "Healthcare Effectiveness Data and Information Set." In current usage, the "reporting year" after the term "HEDIS" is one year following the year reflected in the data; for example, the "HEDIS 2009" reports, available in June 2009, contain analyses of data collected from "measurement year" January–December 2008. Structure The 90 HEDIS measures are divided into six "domains of care": Effectiveness of Care Access/Availability of Care Experience of Care Utilization and Relative Resource Use Health Plan Descriptive Information Measures Collected Using Electronic Clinical Data Systems Measures are added, deleted, and revised annually. For example, a measure for the length of stay after giving birth was deleted after legislation mandating minimum length of stay rendered this measure nearly useless. Increased attention to medical care for seniors prompted the addition of measures related to glaucoma screening and osteoporosis treatment for older adults. Other health care concerns covered by HEDIS are immunizations, cancer screenings, treatment after heart attacks, diabetes, asthma, flu shots, access to services, dental care, alcohol and drug dependence treatment, timeliness of handling phone calls, prenatal and postpartum care, mental health care, well-care or preventive visits, inpatient utilization, drug utilization, and distribution of members by age, sex, and product lines. New measures in HEDIS 2013 are “Asthma Medication Ratio,” “Diabetes Screening for People With Schizophrenia and Bipolar Disorder Who Are Using Antipsychotic Medications,” “Diabetes Monitoring for People With Diabetes and Schizophrenia,” “Cardiovascular Monitoring for People With Cardiovascular Disease and Schizophrenia,” and “Adherence to Antipsychotic Medications for Individuals With Schizophrenia.” Data collection Most HEDIS data is collected through surveys, medical charts and insurance claims for hospitalizations, medical office visits and procedures. Survey measures must be conducted by an NCQA-approved external survey organization. Clinical measures use the administrative or hybrid data collection methodology, as specified by NCQA. Administrative data are electronic records of services, including insurance claims and registration systems from hospitals, clinics, medical offices, pharmacies and labs. 
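As a toy illustration of the administrative method, a claims-based rate is essentially a numerator count divided by an eligible-population count. The member records, field names, antigen list, and the simplified eligibility rule below are all hypothetical; real HEDIS specifications define age anchors, continuous-enrollment rules, and code sets in far more detail. The Childhood Immunization Status example described in the next paragraph is the real measure this sketch loosely mimics:

```python
# Toy sketch of an administrative (claims-based) rate in the HEDIS style:
# rate = numerator / eligible population. All records, field names, and the
# eligibility rule here are hypothetical simplifications.

members = [
    {"id": 1, "age": 2, "months_enrolled": 14, "immunizations": {"MMR", "DTaP"}},
    {"id": 2, "age": 2, "months_enrolled": 24, "immunizations": {"MMR"}},
    {"id": 3, "age": 3, "months_enrolled": 30, "immunizations": {"MMR", "DTaP"}},
]

REQUIRED = {"MMR", "DTaP"}  # stand-in for the real antigen list

def eligible(m):
    # Denominator: 2-year-olds enrolled for at least a year (simplified).
    return m["age"] == 2 and m["months_enrolled"] >= 12

denominator = [m for m in members if eligible(m)]
numerator = [m for m in denominator if REQUIRED <= m["immunizations"]]

rate = len(numerator) / len(denominator)
print(f"rate: {len(numerator)}/{len(denominator)} = {rate:.0%}")  # 1/2 = 50%
```

The hybrid method described below would then supplement the claims-derived `immunizations` sets for a random sample with data abstracted from medical records before recomputing the same rate.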
For example, a measure titled Childhood Immunization Status requires health plans to identify 2-year-old children who have been enrolled for at least a year. The plans report the percentage of children who received specified immunizations. Plans may collect data for this measure by reviewing insurance claims or automated immunization records, but this method will not include immunizations received at community clinics that do not submit insurance claims. For this measure, plans are allowed to select a random sample of the population and supplement claims data with data from medical records. By doing so, plans may identify additional immunizations and report more favorable and accurate rates. However, the hybrid method is more costly, time-consuming and requires nurses or medical record reviewers who are authorized to review confidential medical records. As of 2019, NCQA is transitioning data collection to a digital process that uses existing electronic data sources rather than surveys and manual data collection. The first six measures available for HEDIS Electronic Clinical Data System (ECDS) reporting include some related to depression, unhealthy alcohol use, and immunization status. Reporting HEDIS results must be audited by an NCQA-approved auditing firm for public reporting. NCQA has an on-line reporting tool called Quality Compass that is available for a fee of several thousand dollars. It provides detailed data on all measures and is intended for employers, consultants and insurance brokers who purchase health insurance for groups. NCQA's web site includes a summary of HEDIS results by health plan. NCQA also collaborates annually with U.S. News & World Report to rank HMOs using an index that combines many HEDIS measures and accreditation status. The "Best Health Plans" list is published in the magazine in October and is available on the magazine's web site. Other local business organizations, governmental agencies and media report HEDIS results, usually when they are released in the fall. Advantages and disadvantages Advantages Proponents cite the following advantages of HEDIS measures: HEDIS measures undergo a selection process that has been described as "rigorous"(p. 205). Steps in the process include assessment of a measure's "importance, scientific soundness and feasibility"; field testing; public comment; a one-year trial period in which results are not reported publicly; and evaluation of publicly reported measures by "statistical analysis, review of audit results and user comments". HEDIS data are useful for "evaluating current performance and setting goals". In some studies, attainment of HEDIS measures is associated with cost-effective practices or with better health outcomes. In a 2002 study, HEDIS measures "generally reflect[ed] cost-effective practices". A 2003 study of Medicare managed care plans determined that plan-level health outcomes were associated with HEDIS measures. An "Acute Outpatient Depression Indicator" score based on a HEDIS measure predicted improvement in depression severity in one 2005 study. As stated in a 2006 Institute of Medicine (IOM) report, "HEDIS measures focus largely on processes of care"; the strengths of process measures include the facts that they "reflect care that patients actually receive," thereby leading to "buy-in from providers," and that they are "directly actionable for quality improvement activities"(p. 179). HEDIS measures are "widely known and accepted"(p. 205). The NCQA claims that over 90% of U.S. 
health plans use HEDIS measures. Disadvantages HEDIS was described in 1995 as "very controversial". Criticisms of HEDIS measures have included: HEDIS measures do not account for many important aspects of health care quality. They count only a select set of healthcare interventions, for specific at-risk patient populations, which can imply that institutions and providers are giving adequate care; their purpose is to verify that a minimally acceptable level of care is given to specific at-risk populations. In 1998, HEDIS measures were said to "offer little insight into... [a health] plan's ability to treat serious illnesses". However, no current studies or evidence have been offered to support this, and HEDIS compliance measures are updated yearly to reflect best practices in healthcare. A 2002 study found "there are numerous non-HEDIS interventions with some evidence of cost effectiveness, particularly interventions to promote healthy behaviors". This may speak more to the specificity of defined best-practice interventions for at-risk populations than to anecdotal interventions with "some evidence"; measures are revised every year to include newer and more effective best-practice guidelines. According to a 2005 study, HEDIS-Medicaid 3.0 measures covered only 22% of the services recommended by the second U.S. Preventive Services Task Force (USPSTF). Some groups believe that attempts by health care providers to improve their HEDIS measures may cause harm to patients, though without citing studies, specifying which interventions, or reporting what "harm"; they have not offered any solutions. As of 2001, there was concern that the asthma HEDIS measure may "encourag[e] more casual prescribing of controller medications" and may place emphasis "on the prescribing of a controller medication rather than on its actual use". There is a risk of hypoglycemia if a provider strives to meet the HEDIS measure concerning a hemoglobin A1c (HbA1c) level of <7% that was adopted in 2006 for HEDIS 2007. NCQA later decided to not report results of the HbA1c<7% measure publicly in 2008, to modify the HbA1c<7% measure for HEDIS 2009 "by adding exclusions for members within a specific age cohort and with certain comorbid conditions," and to add a new HbA1c<8% measure. There is a possible conflict of interest because NCQA "works closely with the managed-care industry". Furthermore, approximately half of NCQA's budget is derived from accreditation fees, "which may create an incentive against setting [HEDIS] standards too high". The process to develop the measures is not completely "transparent," that is, "information about existing conditions, decisions and actions" is not completely "accessible, visible and understandable". In some cases, attainment of HEDIS measures is not proven to be associated with better health outcomes, although, again, no supporting studies have been offered for this criticism. In 2004, a multi-site study determined that persons with persistent asthma per the HEDIS definition at the time had more "asthma-related adverse events" if they were classified by HEDIS as having appropriate asthma therapy than if they did not have appropriate therapy. The cause of this "unexpected" finding was thought to be that some people with intermittent asthma were miscategorized by HEDIS as having persistent asthma. A 2008 study of 1056 adults with asthma found that "compliance with the HEDIS asthma measure is not favorably associated with relevant patient-oriented outcomes" such as scores on an Asthma Control Test. Although "glaucoma screening in older adults" is a current HEDIS measure, the USPSTF found "insufficient evidence to recommend for or against screening adults for glaucoma" in 2005; as of 2008, the American Academy of Ophthalmology was attempting to convince the USPSTF to review its statement. Furthermore, a 2006 Cochrane review ("last assessed as up-to-date" in 2009) concluded that there was "insufficient evidence to recommend population based screening" for glaucoma because no pertinent randomized controlled trials exist. One summary of the Cochrane review was "population-based screening for glaucoma... is not clinically or cost-effective". These articles, however, are not necessarily applicable, since HEDIS requires biennial or yearly diabetic eye exams (which include screening for glaucoma and optic nerve damage) only for the specific adult patient population of people with diabetes; this coincides with accepted ophthalmic care guidelines. A 2001 IOM report noted that "there is incomplete reporting of [HEDIS] measures and health plans resulting in lack of representativeness at the national level"(p. 205). As stated in the 2006 IOM report, the limitations of HEDIS process measures include "sample size constraints for condition-specific measures," "may be confounded by patient compliance and other factors," and "variable extent to which process measures link to important patient outcomes"(p. 179). References HEDIS Measures and Technical Resources External links 2016 HEDIS specification. HEDIS Electronic Clinical Data System (ECDS) Reporting. U.S. News & World Report. America's best health plans. Health plan report card. Washington, D.C.: National Committee for Quality Assurance. HEDIS & quality measurement. Washington, D.C.: National Committee for Quality Assurance. The ironic conundrum of the preference sensitive measures, P4P and HEDIS criteria. Disease Management Care Blog, 2009 Mar 8. Dalzell MD. Will integrity of HEDIS data improve with '98 version? Managed Care, 1998 February. Health informatics Managed care
Healthcare Effectiveness Data and Information Set
[ "Biology" ]
2,539
[ "Health informatics", "Medical technology" ]
2,176,091
https://en.wikipedia.org/wiki/Gaseous%20ionization%20detector
Gaseous ionization detectors are radiation detection instruments used in particle physics to detect the presence of ionizing particles, and in radiation protection applications to measure ionizing radiation. They use the ionising effect of radiation upon a gas-filled sensor. If a particle has enough energy to ionize a gas atom or molecule, the resulting electrons and ions cause a current flow which can be measured. Gaseous ionisation detectors form an important group of instruments used for radiation detection and measurement. This article gives a quick overview of the principal types, and more detailed information can be found in the articles on each instrument. The accompanying plot shows the variation of ion pair generation with varying applied voltage for constant incident radiation. There are three main practical operating regions, one of which each type utilises. Types The three basic types of gaseous ionization detectors are 1) ionization chambers, 2) proportional counters, and 3) Geiger–Müller tubes. All of these have the same basic design of two electrodes separated by air or a special fill gas, but each uses a different method to measure the total number of ion pairs that are collected. The strength of the electric field between the electrodes and the type and pressure of the fill gas determine the detector's response to ionizing radiation. Ionization chamber Ionization chambers operate at a low electric field strength, selected such that no gas multiplication takes place. The ion current is generated by the creation of "ion pairs", each consisting of an ion and an electron. The ions drift to the cathode while the free electrons drift to the anode under the influence of the electric field. This current is independent of the applied voltage if the device is being operated in the "ion chamber region". Ion chambers are preferred for high radiation dose rates because they have no "dead time", a phenomenon which affects the accuracy of the Geiger–Müller tube at high dose rates. The advantages are a good uniform response to gamma radiation, an accurate overall dose reading, the capability of measuring very high radiation rates, and the fact that sustained high radiation levels do not degrade the fill gas. The disadvantages are 1) a low output that requires a sophisticated electrometer circuit and 2) operation and accuracy that are easily affected by moisture. Proportional counter Proportional counters operate at a slightly higher voltage, selected such that discrete avalanches are generated. Each ion pair produces a single avalanche, so that an output current pulse is generated which is proportional to the energy deposited by the radiation. This is the "proportional counting" region. The term "gas proportional detector" (GPD) is generally used in radiometric practice, and the ability to detect particle energy is particularly useful when using large area flat arrays for alpha and beta particle detection and discrimination, such as in installed personnel monitoring equipment. The wire chamber is a multi-electrode form of proportional counter used as a research tool. The advantages are the ability to measure the energy of radiation and provide spectrographic information, the ability to discriminate between alpha and beta particles, and the fact that large area detectors can be constructed. The disadvantages are that the anode wires are delicate and can lose efficiency in gas-flow detectors due to deposition, that efficiency and operation are affected by the ingress of oxygen into the fill gas, and that measurement windows are easily damaged in large area detectors.
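To make the dead-time limitation mentioned above (for ion chambers versus Geiger–Müller tubes) concrete, the following sketch applies the standard non-paralyzable dead-time correction n = m / (1 − mτ). The measured rates and the dead-time value are illustrative orders of magnitude, not data for any particular tube:

```python
# Sketch of the standard non-paralyzable dead-time correction, illustrating
# why counters with a significant dead time under-read at high rates. The
# measured rate m and dead time tau below are illustrative values only.

def true_rate(measured_rate, dead_time):
    """Estimate the true event rate n from the measured rate m using the
    non-paralyzable model: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("detector saturated: m*tau >= 1")
    return measured_rate / (1.0 - loss)

tau = 100e-6  # 100 microsecond dead time, a typical order for a GM tube
for m in (1e2, 1e3, 5e3, 9e3):  # measured counts per second
    n = true_rate(m, tau)
    print(f"measured {m:7.0f} cps -> true ~{n:8.0f} cps "
          f"({100 * (n - m) / n:.1f}% of events lost)")
```

At low rates the correction is negligible, but near m·τ = 1 the tube misses most events, which is why ion chambers are preferred at very high dose rates.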
Micropattern gaseous detectors (MPGDs) are high-granularity gaseous detectors with sub-millimeter distances between the anode and cathode electrodes. The main advantages of these microelectronic structures over traditional wire chambers include count rate capability, time and position resolution, granularity, stability and radiation hardness. Examples of MPGDs are the microstrip gas chamber, the gas electron multiplier and the Micromegas detector. Geiger–Müller tube Geiger–Müller tubes are the primary components of Geiger counters. They operate at an even higher voltage, selected such that each ion pair creates an avalanche, but by the emission of UV photons multiple avalanches are created which spread along the anode wire, and the adjacent gas volume ionizes from as little as a single ion pair event. This is the "Geiger region" of operation. The current pulses produced by the ionising events are passed to processing electronics which can derive a visual display of count rate or radiation dose and, usually in the case of hand-held instruments, an audio device producing clicks. The advantages are that they are cheap and robust detectors available in a large variety of sizes for many applications, that the tube produces a large output signal requiring minimal electronic processing for simple counting, and that the overall gamma dose can be measured when an energy-compensated tube is used. The disadvantages are that they cannot measure the energy of the radiation (no spectrographic information), that they will not measure high radiation rates due to dead time, and that sustained high radiation levels will degrade the fill gas. Guidance on detector type usage The UK Health and Safety Executive has issued a guidance note on selecting the correct portable instrument for the application concerned. This covers all radiation instrument technologies and is useful in selecting the correct gaseous ionisation detector technology for a measurement application. Everyday use Ionization-type smoke detectors are gaseous ionization detectors in widespread use. A small source of radioactive americium is placed so that it maintains a current between two plates that effectively form an ionisation chamber. If smoke gets between the plates where ionization is taking place, the ionized gas can be neutralized, leading to a reduced current. The decrease in current triggers a fire alarm. See also Stopping power of radiation particles References Particle detectors Ionising radiation detectors
Gaseous ionization detector
[ "Technology", "Engineering" ]
1,123
[ "Ionising radiation detectors", "Radioactive contamination", "Particle detectors", "Measuring instruments" ]
2,176,160
https://en.wikipedia.org/wiki/Interval%20arithmetic
(Figure: tolerance function (turquoise) and interval-valued approximation (red).) Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing bounds on functions. Numerical methods involving interval arithmetic can guarantee reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities. Mathematically, instead of working with an uncertain real-valued variable x, interval arithmetic works with an interval [a, b] that defines the range of values that x can have. In other words, any value of the variable x lies in the closed interval between a and b. A function f, when applied to x, produces an interval [c, d] which includes all the possible values of f(x) for all x ∈ [a, b]. Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems. Introduction The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range, since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset. This treatment is typically limited to real intervals, so quantities in the form [a, b] = {x ∈ ℝ : a ≤ x ≤ b}, where a ≤ b, are allowed. With one of a, b infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number r can be interpreted as the interval [r, r], intervals and real numbers can be freely combined. Example Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person has a weight of exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval [79.5, 80.5]. The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since BMI varies continuously with weight and always increases over the specified weight interval, the true BMI must lie within the interval [24.537, 24.846]. Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.
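A minimal sketch of the interval reasoning in this example, written with plain Python floats; outward rounding of the endpoints is ignored here for brevity (it is discussed later under rounded interval arithmetic):

```python
# Minimal sketch of the BMI example: the scale reading 80 kg is treated as
# the interval [79.5, 80.5]. Because BMI = weight / height^2 increases
# monotonically with weight (height held fixed), evaluating at the interval
# endpoints bounds the true BMI.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

height = 1.80                        # height assumed exact in this first example
weight_lo, weight_hi = 79.5, 80.5    # interval implied by a 1 kg precision scale

lo, hi = bmi(weight_lo, height), bmi(weight_hi, height)
print(f"BMI lies in [{lo:.3f}, {hi:.3f}]")  # [24.537, 24.846] -> entirely below 25
```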
The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range may include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion. The range of BMI examples could be reported as, say, [24.5, 24.9], since this interval is a superset of the calculated interval. The range could not, however, be reported as [24.6, 24.8], as this interval does not contain all possible BMI values. Multiple intervals Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795]. Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore exist in the interval [79.5/1.795^2, 80.5/1.785^2] ≈ [24.673, 25.266]. In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion. Interval operators A binary operation ∘ on two intervals, such as addition or multiplication, is defined by [x1, x2] ∘ [y1, y2] = {x ∘ y : x ∈ [x1, x2], y ∈ [y1, y2]}. In other words, it is the set of all possible values of x ∘ y, where x and y are in their corresponding intervals. If ∘ is monotone for each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is [x1, x2] ∘ [y1, y2] = [min(x1 ∘ y1, x1 ∘ y2, x2 ∘ y1, x2 ∘ y2), max(x1 ∘ y1, x1 ∘ y2, x2 ∘ y1, x2 ∘ y2)], provided that x ∘ y is defined for all x ∈ [x1, x2] and y ∈ [y1, y2]. For practical applications, this can be simplified further: Addition: [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2] Subtraction: [x1, x2] − [y1, y2] = [x1 − y2, x2 − y1] Multiplication: [x1, x2] · [y1, y2] = [min(x1·y1, x1·y2, x2·y1, x2·y2), max(x1·y1, x1·y2, x2·y1, x2·y2)] Division: [x1, x2] / [y1, y2] = [x1, x2] · (1 / [y1, y2]), where 1 / [y1, y2] = [1/y2, 1/y1] if 0 ∉ [y1, y2]; 1 / [y1, 0] = [−∞, 1/y1]; 1 / [0, y2] = [1/y2, ∞]; and 1 / [y1, y2] = [−∞, 1/y1] ∪ [1/y2, ∞] when y1 < 0 < y2. The last case loses useful information about the exclusion of (1/y1, 1/y2). Thus, it is common to work with [−∞, 1/y1] and [1/y2, ∞] as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ⋃i [ai, bi]. The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite. Interval multiplication often only requires two multiplications. If x1 and y1 are nonnegative, [x1, x2] · [y1, y2] = [x1·y1, x2·y2]. The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest. With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a, b, x) = a·x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]: f(a, b, x) = [1, 2] · [2, 3] + [5, 7] = [2, 6] + [5, 7] = [7, 13]. Notation To shorten the notation of intervals, brackets can be used: [x] can represent an interval. Note that in such a compact notation, the single-point interval [x, x] should not be confused with a general interval. For the set of all intervals, we can use [ℝ] as an abbreviation. For a vector of intervals ([x]1, ..., [x]n) we can use a bold font: [x]. Elementary functions Interval functions beyond the four basic operators may also be defined. For monotonic functions in one variable, the range of values is simple to compute. If f is monotonically increasing (resp. decreasing) in the interval [x1, x2], then f(x) ≤ f(y) (resp. f(x) ≥ f(y)) for all x, y ∈ [x1, x2] such that x ≤ y.
The range corresponding to the interval can be therefore calculated by applying the function to its endpoints: From this, the following basic features for interval functions can easily be defined: Exponential function: for Logarithm: for positive intervals and Odd powers: , for odd For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, for should produce the interval when But if is taken by repeating interval multiplication of form then the result is wider than necessary. More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints , of an interval, together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at or for , respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values—namely -1, 0, and 1. Interval extensions of general functions In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If is a function from a real vector to a real number, then is called an interval extension of if This definition of the interval extension does not give a precise result. For example, both and are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, should be chosen as it gives the tightest possible result. Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators. The Taylor interval extension (of degree ) is a times differentiable function defined by for some , where is the -th order differential of at the point and is an interval extension of the Taylor remainder.The vector lies between and with , is protected by . Usually one chooses to be the midpoint of the interval and uses the natural interval extension to assess the remainder. The special case of the Taylor interval extension of degree is also referred to as the mean value form. Complex interval arithmetic An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers. Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages. The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. 
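As one illustration of the rectangular variant, the sketch below represents a complex interval as a pair of real intervals (real part, imaginary part) and applies the ordinary complex addition and multiplication formulas using real-interval operations. This is one possible design, not a standard library API; the result is always an enclosure of the true range, but it can overestimate it (the wrapping effect discussed later):

```python
# Rectangular complex intervals: z = (real interval, imaginary interval),
# with real intervals as (lo, hi) tuples. An illustrative sketch; exact
# outward rounding is ignored.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def mul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def cadd(z, w):
    return (add(z[0], w[0]), add(z[1], w[1]))

def cmul(z, w):  # (a+bi)(c+di) = (ac - bd) + (ad + bc)i, interval version
    a, b = z
    c, d = w
    return (sub(mul(a, c), mul(b, d)), add(mul(a, d), mul(b, c)))

z = ((1.0, 2.0), (0.0, 1.0))   # real part in [1, 2], imaginary part in [0, 1]
w = ((0.5, 0.5), (-1.0, 1.0))  # thin real part, wide imaginary part
print("z + w =", cadd(z, w))
print("z * w =", cmul(z, w))
```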
It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties, of ordinary complex conjugates, do not hold for complex interval conjugates. Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but with the expense that we have to sacrifice other useful properties of ordinary arithmetic. Interval methods The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account. Rounded interval arithmetic To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. The range of values of the function for and are for example . Where the same calculation is done with single-digit precision, the result would normally be . But , so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of would be lost. Instead, the outward rounded solution is used. The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down). The required external rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval can be added. Dependency problem The so-called "dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals. As an illustration, take the function defined by The values of this function over the interval are As the natural interval extension, it is calculated as: which is slightly larger; we have instead calculated the infimum and supremum of the function over There is a better expression of in which the variable only appears once, namely by rewriting as addition and squaring in the quadratic. So the suitable interval calculation is and gives the correct values. In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and if is continuous inside the box. However, not every function can be rewritten this way. 
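The dependency problem just described can be reproduced in a few lines. The concrete function f(x) = x² + x over [−1, 1] is an illustrative choice consistent with the "addition and squaring" rewrite mentioned above (the rewrite being x² + x = (x + 1/2)² − 1/4, in which x occurs only once):

```python
# The natural interval extension of f(x) = x^2 + x over x in [-1, 1] treats
# the x^2 term and the x term as if they were independent intervals, which
# widens the result; the single-occurrence rewrite recovers the exact range.

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sqr(x):
    # Tight range of t^2 for t in the interval x = (lo, hi).
    lo = 0.0 if x[0] <= 0.0 <= x[1] else min(x[0] ** 2, x[1] ** 2)
    return (lo, max(x[0] ** 2, x[1] ** 2))

x = (-1.0, 1.0)

# Natural extension: x^2 and x enter independently -> overestimation.
print("natural extension of x^2 + x :", add(sqr(x), x))  # (-1.0, 2.0)

# Single-occurrence rewrite: x^2 + x = (x + 1/2)^2 - 1/4 -> exact range.
rewritten = add(sqr(add(x, (0.5, 0.5))), (-0.25, -0.25))
print("rewrite (x + 1/2)^2 - 1/4    :", rewritten)        # (-0.25, 2.0)
```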
The dependency of the problem causing over-estimation of the value range can go as far as covering a large range, preventing more meaningful conclusions. An additional increase in the range stems from the solution of areas that do not take the form of an interval vector. The solution set of the linear system is precisely the line between the points and Using interval methods results in the unit square, This is known as the wrapping effect. Linear interval systems A linear interval system consists of a matrix interval extension and an interval vector . We want the smallest cuboid , for all vectors which there is a pair with and satisfying. . For quadratic systems – in other words, for – there can be such an interval vector , which covers all possible solutions, found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities and repeatedly in the calculation, it can produce poor results for some problems. Hence using the result of the interval-valued Gauss only provides first rough estimates, since although it contains the entire solution set, it also has a large area outside it. A rough solution can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the -th row of the interval extension of the linear equation. can be determined by the variable if the division is allowed. It is therefore simultaneously. and . So we can now replace by , and so the vector by each element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system one can often try multiplying it by an appropriate rational matrix with the resulting matrix equation. left to solve. If one chooses, for example, for the central matrix , then is outer extension of the identity matrix. These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to use an interval-linear system on finite (albeit large) real number equivalent linear systems. If all the matrices are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors. This is only suitable for systems of smaller dimension, since with a fully occupied matrix, real matrices need to be inverted, with vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed. Interval Newton method An interval variant of Newton's method for finding the zeros in an interval vector can be derived from the average value extension. For an unknown vector applied to , gives. . For a zero , that is , and thus, must satisfy. . This is equivalent to . An outer estimate of can be determined using linear methods. In each step of the interval Newton method, an approximate starting value is replaced by and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros of were in the initial range if a Newton step produces the empty set. The method converges on all zeros in the starting region. 
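A compact sketch of the interval Newton step N([x]) = (m − f(m)/F'([x])) ∩ [x] follows, using f(x) = x² − 2 on [−2, 2] as an illustrative choice and omitting outward rounding. When the derivative interval F'([x]) contains zero, the extended division returns two half-lines, so the step splits the box; this is how distinct zeros become separated, as the text goes on to note:

```python
# Interval Newton sketch for f(x) = x^2 - 2 on [-2, 2] (illustrative choice;
# outward rounding omitted). Intervals are (lo, hi) tuples.

def intersect(x, y):
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo <= hi else None

def ext_div(c, d):
    """Extended division c / [d] for a point c and an interval d that may
    contain zero; returns a list of (possibly unbounded) interval pieces."""
    if d[0] > 0 or d[1] < 0:              # 0 not in d: ordinary division
        q = (c / d[0], c / d[1])
        return [(min(q), max(q))]
    parts = []                            # 0 in d: up to two half-lines
    if d[1] > 0:
        parts.append((c / d[1], float("inf")) if c > 0 else (float("-inf"), c / d[1]))
    if d[0] < 0:
        parts.append((float("-inf"), c / d[0]) if c > 0 else (c / d[0], float("inf")))
    return parts

def newton_step(x, f, dF):
    m = 0.5 * (x[0] + x[1])               # midpoint of the current box
    boxes = []
    for piece in ext_div(f(m), dF(x)):    # N = m - f(m)/F'([x]), then cut
        cand = intersect(x, (m - piece[1], m - piece[0]))
        if cand:
            boxes.append(cand)
    return boxes

f = lambda t: t * t - 2
dF = lambda x: (2 * x[0], 2 * x[1])       # interval extension of f'(t) = 2t

boxes = [(-2.0, 2.0)]
for _ in range(8):
    boxes = [b for x in boxes for b in newton_step(x, f, dF)]
print(boxes)  # two narrow intervals around -sqrt(2) and +sqrt(2)
```

On the very first step the derivative interval [−4, 4] contains zero, so the box [−2, 2] splits into one piece left of −0.5 and one right of 0.5; the subsequent steps contract each piece quadratically around its zero.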
Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method. As an example, consider the function , the starting range , and the point . We then have and the first Newton step gives. . More Newton steps are used separately on and . These converge to arbitrarily small intervals around and . The Interval Newton method can also be used with thick functions such as , which would in any case have interval results. The result then produces intervals containing . Bisection and covers The various interval methods deliver conservative results as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals. Covering an interval vector by smaller boxes so that is then valid for the range of values. So, for the interval extensions described above the following holds: Since is often a genuine superset of the right-hand side, this usually leads to an improved estimate. Such a cover can be generated by the bisection method such as thick elements of the interval vector by splitting in the center into the two intervals and If the result is still not suitable then further gradual subdivision is possible. A cover of intervals results from divisions of vector elements, substantially increasing the computation costs. With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension. Application Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation, or stability analysis) to treat estimates with no exact numerical value. Rounding error analysis Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current calculation of rounding errors directly: Error = for a given interval . Interval analysis adds to rather than substituting for traditional methods for error reduction, such as pivoting. Tolerance analysis Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. If the behavior of such a system affected by tolerances satisfies, for example, , for and unknown then the set of possible solutions. , can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered. Fuzzy interval arithmetic Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements and , intermediate values are also possible, to which real numbers are assigned. 
Application Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation, or stability analysis) to treat estimates with no exact numerical value. Rounding error analysis Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current calculation of rounding errors directly: Error = abs(a − b) for a given interval [a, b]. Interval analysis adds to rather than substituting for traditional methods for error reduction, such as pivoting. Tolerance analysis Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for p ∈ [p] and unknown x, then the set of possible solutions {x | there exists p ∈ [p] with f(x, p) = 0} can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered. Fuzzy interval arithmetic Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements x ∈ [x] and x ∉ [x], intermediate values are also possible, to which real numbers μ ∈ [0, 1] are assigned. Here μ = 1 corresponds to definite membership while μ = 0 is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval. For fuzzy arithmetic only a finite number of discrete affiliation stages μ_i ∈ [0, 1] are considered. The form of such a distribution for an indistinct value can then be represented by a nested sequence of intervals [x(1)] ⊃ [x(2)] ⊃ … ⊃ [x(k)]. The interval [x(i)] corresponds exactly to the fluctuation range for the stage μ_i. The appropriate distribution for a function f(x_1, …, x_n) concerning indistinct values x_1, …, x_n and the corresponding sequences [x_1(i)], …, [x_n(i)] can be approximated by the sequence [y(1)] ⊇ … ⊇ [y(k)], where the [y(i)] = f([x_1(i)], …, [x_n(i)]) can be calculated by interval methods. The value [y(1)] corresponds to the result of an interval calculation. Computer-assisted proof Warwick Tucker used interval arithmetic in order to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor. Thomas Hales used interval arithmetic in order to solve the Kepler conjecture. History Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten. Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer; intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958). The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding. Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, though Moore found the first non-trivial applications. In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch and Karl Nickel at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, researchers there explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. Classical methods in this field often have the problem of determining the largest (or smallest) global value: they could only find a local optimum and could not find better values. Helmut Ratschek and Jon George Rokne extended branch and bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values. In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations. 
The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic. In recent years work has concentrated in particular on the estimation of preimages of parameterized functions and on robust control theory by the COPRIN working group of INRIA in Sophia Antipolis, France. Implementations There are many software packages that permit the development of numerical applications using interval arithmetic. These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly. Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal. The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. In 1976 there followed Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C-XSC C++ class library was supported on many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard. Another C++ class library, called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), was created in 1993 at the Hamburg University of Technology; it made the usual interval operations more user-friendly and emphasized the efficient use of hardware, portability, and independence from a particular representation of intervals. The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language. The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation. GAOL is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming. The Moore library is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++. The Julia programming language has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package. In addition, computer algebra systems, such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima and MuPAD, can handle intervals. The MATLAB extension INTLAB builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface. 
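At bottom, what all of these packages provide is machine interval arithmetic with directed ("outward") rounding. The following sketch shows the principle for addition in illustrative Python (assuming Python 3.9+ for math.nextafter; real libraries instead switch the processor's rounding mode, which is faster and gives tighter bounds):

    import math

    def iadd(x, y):
        """Interval addition with outward rounding: widen each bound by
        one ulp so the true real-number sum is certainly enclosed."""
        lo = math.nextafter(x[0] + y[0], -math.inf)
        hi = math.nextafter(x[1] + y[1], math.inf)
        return (lo, hi)

    # 1/3 is not exactly representable in binary, so enclose it first.
    third = (math.nextafter(1.0 / 3.0, -math.inf),
             math.nextafter(1.0 / 3.0, math.inf))
    total = (0.0, 0.0)
    for _ in range(3):
        total = iadd(total, third)
    print(total)   # a tiny interval guaranteed to contain exactly 1.0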
A library for the functional language OCaml was written in assembly language and C. IEEE 1788 standard A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. Two reference implementations are freely available. These have been developed by members of the standard's working group: the libieeep1788 library for C++ and the interval package for GNU Octave. A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations. Conferences and workshops Several international conferences or workshops take place every year around the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing). See also Affine arithmetic INTLAB (Interval Laboratory) Automatic differentiation Multigrid method Monte-Carlo simulation Interval finite element Fuzzy number Significant figures Karlsruhe Accurate Arithmetic (KAA) Unum References External links Interval arithmetic (Wolfram Mathworld) Validated Numerics for Pedestrians Interval Methods from Arnold Neumaier, University of Vienna SWIM (Summer Workshop on Interval Methods) International Conference on Parallel Processing and Applied Mathematics INTLAB, Institute for Reliable Computing, Hamburg University of Technology Ball arithmetic by Joris van der Hoeven kv - a C++ Library for Verified Numerical Computation Arb - a C library for arbitrary-precision ball arithmetic Arithmetic Computer arithmetic Numerical analysis Data types
Interval arithmetic
[ "Mathematics" ]
5,695
[ "Computational mathematics", "Computer arithmetic", "Arithmetic", "Mathematical relations", "Numerical analysis", "Approximations", "Number theory" ]
2,176,235
https://en.wikipedia.org/wiki/Orbison%20illusion
The Orbison illusion (or Orbison's illusion) is an optical illusion first described by American psychologist William Orbison (1912–1952) in 1939. The illusion consists of a two-dimensional figure, such as a circle or square, superimposed over a background of radial lines or concentric circles. The result is an optical illusion in which both the figure and the rectangle which contains it appear distorted; in particular, squares appear slightly bulged, circles appear elliptical, and the containing rectangle appears tilted. References External links Optical illusions
Orbison illusion
[ "Physics" ]
111
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,176,296
https://en.wikipedia.org/wiki/Wundt%20illusion
The Wundt illusion is an optical illusion that was first described by the German psychologist Wilhelm Wundt in the 19th century. The two red vertical lines are both straight, but they may look as if they are bowed inwards to some observers. The distortion is induced by the crooked lines on the background, as in the Orbison illusion. The Hering illusion produces a similar, but inverted effect. Vertical-horizontal illusion Another variant of the Wundt illusion is the horizontal–vertical illusion, introduced by Wundt in 1858. The two intersecting lines are equal in length although the vertical line appears to be much longer. The horizontal line needs to be extended up to 30% to match the perceptual length of the vertical line. This is not confined to simple line drawings, as it can also be seen in buildings, parking meters, and other things viewed in a natural setting. References External links Optical illusions
Wundt illusion
[ "Physics" ]
185
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,176,354
https://en.wikipedia.org/wiki/List%20of%20XML%20and%20HTML%20character%20entity%20references
In SGML, HTML and XML documents, the logical constructs known as character data and attribute values consist of sequences of characters, in which each character can manifest directly (representing itself), or can be represented by a series of characters called a character reference, of which there are two types: a numeric character reference and a character entity reference. This article lists the character entity references that are valid in HTML and XML documents. A character entity reference refers to the content of a named entity. An entity declaration is created in XML, SGML and HTML documents (before HTML5) by using the <!ENTITY name "value"> syntax in a Document type definition (DTD). Character reference overview In HTML and XML, a numeric character reference refers to a character by its Universal Character Set/Unicode code point, and uses the format: &#xhhhh; or &#nnnn; where the x must be lowercase in XML documents, hhhh is the code point in hexadecimal form, and nnnn is the code point in decimal form. The hhhh (or nnnn) may be any number of hexadecimal (or decimal) digits and may include leading zeros. The hhhh for hexadecimal digits may mix uppercase and lowercase letters, though uppercase is the usual style. However, the XML and HTML standards restrict the usable code points to a set of valid values, which is a subset of UCS/Unicode code point values, that excludes all code points assigned to non-characters or to surrogates, and most code points assigned to C0 and C1 controls (with the exception of line separators and tabulations treated as white spaces). In contrast, a character entity reference refers to a sequence of one or more characters by the name of an entity which has the desired characters as its replacement text. The entity must either be predefined (built into the markup language), or otherwise explicitly declared in a Document Type Definition (DTD). The format is the same as for any entity reference: &name; where name is the case-sensitive name of the entity. The semicolon is usually required in the character entity reference, unless marked otherwise in the table below. Standard public entity sets for characters XML XML specifies five predefined entities needed to support every printable ASCII character: &amp;, &lt;, &gt;, &apos;, and &quot;. The trailing semicolon is mandatory in XML (and XHTML) for these five entities (even if HTML or SGML allows omitting it for some of them, according to their DTD). ISO Entity Sets SGML supplied a comprehensive set of entity declarations for characters widely used in Western technical and reference publishing, for Latin, Greek and Cyrillic scripts. The American Mathematical Society also contributed entities for mathematical characters. HTML Entity Sets Early versions of HTML built in small subsets of these, relating to characters found in three Western 8-bit fonts. MathML Entity Sets The W3C developed a set of entity declarations for MathML characters. XML Entity Sets The W3C MathML Working Group took over maintenance of the ISO public entity sets, combined them with the MathML entities, and documents them in XML Entity Definitions for Characters. This set can support the requirements of XHTML and MathML and serve as an input to future versions of HTML. HTML5 HTML5 adopts the XML entities as named character references; however, it restates them without reference to their sources and does not group them into sets. The HTML5 specification additionally provides mappings from the names to Unicode character sequences using JSON. 
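As an illustration of these mappings, the Python standard library (used here merely as one convenient, widely available implementation; the specifications themselves prescribe no particular API) ships both a decoder for character references and a copy of the HTML5 name table:

    import html
    from html.entities import html5

    # html.unescape resolves named and numeric character references alike.
    print(html.unescape("&copy; &#169; &#xA9;"))   # '© © ©'

    # The html5 table mirrors the spec's JSON name->character mapping;
    # legacy names are present both with and without the semicolon.
    print(html5["copy;"], html5["copy"])           # '© ©'

    # Decoding is a single pass: a doubly escaped reference stays escaped.
    print(html.unescape("&amp;copy;"))             # '&copy;'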
Numerous other entity sets have been developed for special requirements, and for major and minority scripts. However, the advent of Unicode has largely superseded them. Formal public identifiers for HTML DTD entities subsets The full formal public identifier and system identifier for the DTD entities subset (where the character entity name is defined) is actually mapped from one of three defined named entities. Formal public identifiers for old ISO entities subsets The ISO entities subsets are old (documented) character subsets, which are given SGML character entity names in ISO 8879 and ISO 9573, and which were used in legacy encodings before the unification within ISO 10646. Each of these subsets has its own full formal public identifier. List of character entity references in HTML HTML5 defines many named entities, references to which act as mnemonic aliases for certain Unicode characters. The HTML5 specification does not allow users to define additional entities, as it no longer accepts any DTD to be referenced or extended inside HTML documents (this is still needed in XHTML, which is based on stricter XML parsing rules but allows referencing or defining a DTD in the document header, because XML does not predefine most HTML entities). In the below table, the "Standard" column indicates the first version of the HTML DTD that defines the character entity reference, and also marks the characters that are predefined in XML without needing any DTD. To use one of these character entity references in an HTML or XML document, enter an ampersand (&) followed by the entity name, and a semicolon (mandatory in XML, and strongly recommended in HTML for all entities, even if HTML allows omitting the semicolon from some entities indicated below), e.g., enter &copy; for the copyright symbol ©. There are no predefined character entities in HTML for characters or sequences of most scripts encoded in the UCS (except a common subset of whitespace, punctuation, mathematical or technical symbols, currency symbols, a few Hebrew symbols used in mathematical notations, and the most common letters in Latin, Greek or Cyrillic). Note also that not all bidirectional controls defined in UCS/Unicode are represented as standard character entities in HTML (not even in HTML5, which defines more general directional elements and attributes for that purpose). Notably, there are no predefined HTML character entities for controls that were added in the UCS/Unicode and formally defined in version 2 of the Unicode Bidi Algorithm. Most entities are predefined in XML and HTML to reference just one character in the UCS, but there are no predefined entities for isolated combining characters, variation selectors, or characters for private use assignments; however, the list includes some predefined entities for character sequences of two characters containing some of them. Since HTML 5.0 (and MathML 3.0, which shares the same set of entities), all entities are encoded in Unicode normalization forms C and KC (this was not the case with older versions of HTML and MathML, so older entities that were initially defined with characters for private use assignments, CJK compatibility forms, or in non-NFC forms were modified). 
However, all valid characters and sequences in the UCS, including all bidirectional controls or private-use assignments (but with the exception of non-whitespace C0 and C1 controls, non-characters, and surrogates) are also usable and valid in HTML, XML, XHTML and MathML, either in plain-text values of attributes or in text elements (by encoding them directly as plain text, or using numeric character references when needed). Notes Entities representing special characters in XHTML The XHTML DTDs explicitly declare 253 entities (including the 5 predefined entities of XML 1.0) whose expansion is a single character, which can therefore be informally referred to as "character entities". These (with the exception of the &apos; entity) have the same names and represent the same characters as the 252 character entities in HTML 4.0. Also, by virtue of being XML, XHTML documents may reference the predefined &apos; entity, which is not one of the 252 character entities in HTML 4.0. Additional entities of any size may be defined on a per-document basis. However, the usability of entity references in XHTML is affected by how the document is being processed: Legacy abbreviated character entities (without the final semicolon) inherited from HTML 2.0 (and still supported in HTML 5.0) are not supported in XML 1.0 and XHTML; the trailing semicolon must be present in all entity references used in XML and XHTML documents. If the XHTML document is read by a conforming HTML 4.0 processor, then only the 252 HTML 4.0 character entities may safely be used. The use of &apos; or custom entity references may not be supported and may produce unpredictable results (it is recommended to use the numerical character reference &#39; instead). If the document is read by an XML parser that does not or cannot read external entities, then only the five built-in XML character entities can safely be used, although other entities may be used if they are declared in the internal DTD subset. However, modern XML parsers recognize and implement a built-in cache for SGML references to DTDs used by all standard versions of HTML, XHTML, SVG and MathML, without needing to parse and process the external DTD via their URL and without needing to process entities defined in an internal DTD subset of the document. If the document is read by an XML parser that does read external entities and does not implement a built-in cache for well-known DTDs, then the five built-in XML character entities (and numeric character references) can safely be used. The other 248 HTML character entities can be used as long as the XHTML DTD is accessible to the parser at the time the document is read. Other entities may also be used if they are declared in the internal DTD subset and the XML processor can parse internal DTD subsets. HTML 5.0 parsers cannot process XHTML documents, and it is impossible to define a fully validating DTD for HTML5 documents encoded with the XHTML syntax (in particular, it is impossible to validate all attribute names, such as "data-*" attributes); likewise, it is still impossible to fully validate HTML5 documents represented in the XHTML syntax with W3C standard schemas for XML, such as XSD or Relax NG, and for now a custom validator specific to HTML 5.0 is required. Because of the special &apos; case mentioned above, only &quot;, &amp;, &lt;, and &gt; will work in all XHTML processing situations. 
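The practical consequence can be sketched in illustrative Python: escape text using only those four universally safe named references, writing the apostrophe numerically as &#39;. The helper name below is invented for the example, but xml.sax.saxutils.escape is a real standard-library function:

    from xml.sax.saxutils import escape

    def xhtml_safe(text):
        """Escape using only &amp; &lt; &gt; &quot; and the numeric &#39;,
        which work in every HTML 4 / XHTML / XML processing situation."""
        return escape(text, {"'": "&#39;", '"': "&quot;"})

    print(xhtml_safe('Fish & "chips" <cheap>, isn\'t it?'))
    # Fish &amp; &quot;chips&quot; &lt;cheap&gt;, isn&#39;t it?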
See also Character encodings in HTML Digraph and Trigraph, a similar concept to enter unavailable characters Escape character HTML decimal character rendering Percent-encoding, used in URLs SGML entity References Further reading Unicode Consortium: UnicodeData.txt. World Wide Web Consortium: XML 1.0 spec, HTML 2.0 spec, HTML 3.2 spec, HTML 4.0 spec, HTML 4.01 spec, HTML 5 spec, XHTML 1.0 spec, XML Entity Definitions for Characters. The normative reference to RFC 2070 (still found in DTDs defining the character entities for HTML or XHTML) is historic; this RFC (along with other RFCs related to different parts of the HTML specification) has been deprecated in favor of the newer informational RFC 2854, which defines the "text/html" MIME type and references directly the W3C specifications for the actual HTML content. Numerical Reference of Unicode code points at Wikibooks W3 HTML5 Character Reference Chart External links Character entity references in HTML 4 at the W3C Webpage for encoding and decoding special characters XML and HTML character entity references HTML XML Unicode
List of XML and HTML character entity references
[ "Technology" ]
2,473
[ "Computing-related lists", "Lists of computer languages" ]
2,176,490
https://en.wikipedia.org/wiki/Jastrow%20illusion
The Jastrow illusion is an optical illusion attributed to the Polish-American psychologist Joseph Jastrow. This optical illusion is known under different names: Ring-Segment illusion, Jastrow illusion, Wundt area illusion or Wundt-Jastrow illusion. The illusion also occurs in the real world. The two toy railway tracks pictured are identical, although the lower one appears to be larger. There are three competing theories on how this illusion occurs. This illusion is often included in magic kits; several versions are sold in magic shops, commonly under the name Boomerang Illusion. Origin The oldest reference to this illusion can be found in The World of Wonders, an 1873 book about curiosities of nature, science and art. The two arches are placed on top of each other. They are similar in size, but not the same. The inner radius of the upper arch is the same as the outer radius of the lower arch. The first psychologist to describe this illusion was the German Franz Müller-Lyer, in 1889. His article presents a collection of geometrical illusions of size, including what is now known as the Müller-Lyer illusion. His paper also includes the ring segments which we now know as the Jastrow illusion. Joseph Jastrow extensively researched optical illusions, the most prominent of them being the rabbit–duck illusion, an image that can be interpreted as either a rabbit or a duck. In 1892 he published a paper which introduced his version of what is now known as the Jastrow illusion. His version is different from the commonly used figure because the two arches taper to one end. On the other side of the Atlantic, German scientist Wilhelm Wundt was also pioneering in psychology research. He wrote one of the first books about geometric optical illusions in which he copied the design previously published by Müller-Lyer. Cause There are several competing explanations of why the brain perceives the difference in size between the ring segments, none of which has been accepted as definitive. One explanation relates to how the mind interprets the two-dimensional images on the retina as a three-dimensional world. Another explanation relates to the fact that the mind can only attend to a small field of vision, which is reconstructed by our consciousness. The most commonly used explanation is that the brain is confused by the difference in size between the large and the small radius. The short side makes the long side appear longer, and the long side makes the short side appear even shorter. Similarity to other optical illusions The Jastrow illusion has been compared with other optical illusions, such as the Fat Face illusion, the Leaning Tower illusion and the Ponzo illusion. Masaki Tomonaga, a researcher at Kyoto University, compared the Jastrow illusion with the so-called Fat Face illusion. He conducted experiments with people and chimpanzees to compare this illusion with the classical Jastrow illusion. Animals are known to observe many of the same optical illusions as humans do, but this was the first study to demonstrate that the Jastrow illusion is also experienced by chimpanzees. The Fat Face illusion happens when two identical images of the same face are aligned vertically: the face at the bottom appears fatter. The effect is much smaller than the Jastrow illusion, with a size difference of only four percent. The experiment showed that both humans and chimpanzees were fooled by the Jastrow illusion. As a comparison, none of the subjects picked the wrong rectangle. 
Human subjects showed a strong Fat Face illusion, but chimpanzees did not perceive the top face as being thinner. Perception research Japanese psychologist Shogu Imai experimented with different versions of the Wundt illusion in 1960 to find out which combination of measurements creates the strongest illusion. He varied the inner and outer radius, the opening angle of the segment, and the angle of the ends. He also looked at whether the distance between the two shapes, and whether they are arranged horizontally or vertically, influences the strength of the illusion. Imai showed different versions of the illusion to a group of people and asked them to rate the perceived difference in size. Imai found that the maximum reported difference was about ten percent. He also found that the inner radius should be 60% of the outer radius to achieve the maximum effect. The ideal opening angle was found to be 80 degrees. The cut angle is most effective at zero degrees, which occurs when the line extends through the centre of the circle segments. He also found that the illusion is strongest when the segments are horizontal and that the ideal placement is with one segment just above the other. Overlapping the segments or moving them too far apart destroys the illusion. Manfredo Massironi and his colleagues from the universities of Rome and Verona modified the Jastrow illusion to develop a diagnostic test for unilateral spatial neglect. People who suffer from neglect do not experience the illusion when the overlapping part of the segments is on the side where their perception is missing. When the segments are reversed, they perceive the illusion in the same way as people who do not suffer from neglect. Researchers have also examined the susceptibility of people with autism to a range of optical illusions. This research seems to indicate that people with autism do not experience visual size illusions. This finding is consistent with the idea that autism involves an excessive focus on details. These findings have since been contradicted: more recent research, which included the Jastrow illusion, placed them in doubt. The Jastrow illusion has been used to see whether young children are deceived by geometric optical illusions. Researchers used ring segments that were not equal in size so they could simulate both illusory and real size differences. They showed the two segments in three configurations. The shaded, smaller segment was placed on top to emphasise the difference in size. In the other positions, the smaller piece was placed below or above the larger piece to create the illusion that it is bigger. The children were asked to play a game called "Big and Little" and point out which segment was really bigger than the other. In a second version of the test the children were asked to point out which one looks bigger. The results show that children from the age of five are capable of distinguishing between real differences in size and an apparent difference. References External links Sam Haysom, This bizarre train track optical illusion will mess with your mind, Mashable (2 April 2016). Peter Prevos, The Jastrow Illusion in Magic (2016). Optical illusions
Jastrow illusion
[ "Physics" ]
1,313
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,176,938
https://en.wikipedia.org/wiki/Dander
Dander is material shed from the body of humans and other animals that have fur, hair, or feathers. The term is similar to dandruff, when an excess of flakes becomes visible. Skin flakes that come off the main body of an animal are dander, while the flakes of skin called dandruff come from the scalp and are composed of epithelial skin cells. The surface layer of mammalian skin is called the stratum corneum, which is shed as part of normal skin replacement. Dander is microscopic, and can be transported through the air in house dust, where it forms the diet of the dust mites. Through the air, dander can enter the mucous membranes in the nose and lungs, causing allergies in susceptible individuals, largely through the mechanism of allergy to proteins in the bodies of the dust mites that live on dander. Dander builds up in carpets, curtains, clothing, mattresses, bedding, and pillows. Environments with fewer textiles and upholstery will generally have less dander accumulation. More pet dander is sloughed off in older animals than in younger animals. Dander build-up can be a cause of allergies, such as allergic rhinitis, in humans. Dr. Paivi Salo, an allergy expert at the National Institutes of Health, states that "airborne allergies affect approximately 10-30% of adults and 40% of children." Damp dusting and vacuum cleaners with sealed bodies and fitted with HEPA filters reduce re-distribution of the dander dust, with associated dust mites, into the air. One reference work describes dander as a dialect synonym of dandruff, possibly from Yorkshire in England. See also Allergy to cats Allergy to dogs Dandruff Powder down References External links Pet Dander Animal physiology Mammal anatomy
Dander
[ "Biology" ]
384
[ "Animals", "Animal physiology" ]
2,177,013
https://en.wikipedia.org/wiki/Agouti-related%20peptide
Agouti-related protein (AgRP), also called agouti-related peptide, is a neuropeptide produced in the brain by the AgRP/NPY neuron. It is synthesized in neuropeptide Y (NPY)-containing cell bodies located in the ventromedial part of the arcuate nucleus in the hypothalamus. AgRP is co-expressed with NPY and acts to increase appetite and decrease metabolism and energy expenditure. It is one of the most potent and longest-lasting appetite stimulators. In humans, the agouti-related peptide is encoded by the AGRP gene. Structure AgRP is a paracrine signaling molecule made of 112 amino acids (the gene product of 132 amino acids is processed by removal of the N-terminal 20-residue signal peptide domain). It was independently identified by two teams in 1997 based on its sequence similarity with agouti signalling peptide (ASIP), a protein synthesized in the skin that controls coat colour. AgRP is approximately 25% identical to ASIP. The murine homologue of AgRP consists of 111 amino acids (precursor is 131 amino acids) and shares 81% amino acid identity with the human protein. Biochemical studies indicate that AgRP is very stable to thermal denaturation and acid degradation. Its secondary structure consists mainly of random coils and β-sheets that fold into an inhibitor cystine knot motif. AGRP maps to human chromosome 16q22 and Agrp to mouse chromosome 8D1-D2. Function Agouti-related protein is expressed primarily in the adrenal gland, subthalamic nucleus, and hypothalamus, with lower levels of expression in the testis, kidneys, and lungs. The appetite-stimulating effects of AgRP are inhibited by the hormone leptin and activated by the hormone ghrelin. Adipocytes secrete leptin in response to food intake. This hormone acts in the arcuate nucleus and inhibits the AgRP/NPY neuron from releasing orexigenic peptides. Ghrelin has receptors on NPY/AgRP neurons that stimulate the secretion of NPY and AgRP to increase appetite. AgRP is stored in intracellular secretory granules and is secreted via a regulated pathway. The transcriptional and secretory action of AgRP is regulated by inflammatory signals. Levels of AgRP are increased during periods of fasting. It has been found that AgRP stimulates the hypothalamic-pituitary-adrenocortical axis to release ACTH, cortisol and prolactin. It also enhances the ACTH response to IL-1-beta, suggesting it may play a role in the modulation of neuroendocrine response to inflammation. Conversely, AgRP-secreting neurons inhibit the release of TRH from the paraventricular nucleus (PVN), which may contribute to conservation of energy in starvation. This pathway is part of a feedback loop, since TRH-secreting neurons from PVN stimulate AgRP neurons. Mechanism AGRP has been demonstrated to be a competitive antagonist of melanocortin receptors, specifically MC3-R and MC4-R. The melanocortin receptors, MC3-R and MC4-R, are directly linked to metabolism and body weight control. These receptors are activated by the peptide hormone α-MSH (melanocyte-stimulating hormone) and antagonized by the agouti-related protein. Whereas α-MSH acts broadly on most members of the MCR family (with the exception of MC2-R), AGRP is highly specific for only MC3-R and MC4-R. AgRP also acts as an inverse agonist: it not only antagonizes the action of melanocortin agonists such as α-MSH but also further decreases the cAMP produced by the affected cells. The exact mechanism by which AgRP inhibits melanocortin-receptor signalling is not completely clear. 
It has been suggested that Agouti-related protein binds MSH receptors and acts as a competitive antagonist of ligand binding. Studies of Agouti protein in B16 melanoma cells supported this logic. The expression of AgRP in the adrenal gland is regulated by glucocorticoids. The protein blocks α-MSH-induced secretion of corticosterone. History Orthologs of AgRP, ASIP, MC1R, and MC4R have been found in mammalian, teleost fish, and avian genomes. This suggests that the agouti-melanocortin system evolved by gene duplication from individual ligand and receptor genes in the last 500 million years. Role in obesity AgRP induces obesity by chronic antagonism of the MC4-R. Overexpression of AgRP in transgenic mice (or intracerebroventricular injection) causes hyperphagia and obesity, whilst AgRP plasma levels have been found to be elevated in obese human males. Understanding the role AgRP plays in weight gain may assist in developing pharmaceutical models for treating obesity. AgRP mRNA levels have been found to be downregulated following an acute stressful event. Studies suggest that systems involved in the regulation of stress response and of energy balance are highly integrated. Loss or gain of AgRP function may result in inadequate adaptive behavioural responses to environmental events, such as stress, and have potential to contribute to the development of eating disorders. It has been shown that polymorphisms in the AgRP gene have been linked with anorexia nervosa as well as obesity. Some studies suggest that inadequate signalling of AgRP during stress may result in binge eating. Starvation-induced hypothalamic autophagy generates free fatty acids, which in turn regulate neuronal AgRP levels. Role in hunger circuitry According to Mark L. Andermann and Bradford B. Lowell: "...AgRP neurons and the wiring diagram within which they operate can be viewed as the physical embodiment of the intervening variable, hunger." Stimulation of neurons expressing AgRP can induce robust feeding behavior in mice, triggering increased food consumption, increased willingness to work for food, and increased investigation of food odors. Despite this, AgRP neurons are rapidly inhibited upon food presentation and the onset of eating. One mechanism which may account for this discrepancy is the fact that AgRP neurons signal with Neuropeptide Y in order to allow for sustained feeding behavior that outlasts the activation of the neurons. AgRP neurons are also sensitive to satiety and hunger hormonal signals. One is the appetite stimulant ghrelin, which makes AgRP neurons more excitable through interactions with specialized ghrelin receptors. Another is a satiety signal, leptin, which modulates AgRP activity through inwardly rectifying potassium channels, which alter the excitability of the neurons. Leptin can also decrease the ability of AgRP neurons to carry out other physiological functions, such as triggering long-term potentiation of adjacent neurons. Although AgRP neurons can drive many different phases of feeding behavior, separate AgRP neurons project to different areas of the brain, demonstrating a parallel organizational structure. This is evidenced by different projections of AgRP neurons to various areas of the brain driving different food-related behaviors; for example, certain projections will promote increased food consumption, but not increased food odor investigation. 
Human proteins containing this domain AGRP; ASIP See also Proopiomelanocortin Agouti (gene) References Further reading External links Peripheral membrane proteins Neuropeptides Obesity Peptides Melanocortin receptor antagonists
Agouti-related peptide
[ "Chemistry" ]
1,588
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
2,177,036
https://en.wikipedia.org/wiki/Perceptual%20psychology
Perceptual psychology is a subfield of cognitive psychology that concerns the conscious and unconscious innate aspects of the human cognitive system: perception. A pioneer of the field was James J. Gibson. One major area of study was affordances, i.e. the perceived utility of objects in, or features of, one's surroundings. According to Gibson, such features or objects were perceived as affordances and not as separate or distinct objects in themselves. This view was central to several other fields, such as software user interface and usability engineering, environmentalism in psychology, and ultimately political economy, where the perceptual view was used to explain the omission of key inputs or consequences of economic transactions, i.e. resources and wastes. Gerard Egan and Robert Bolton explored areas of interpersonal interactions based on the premise that people act in accordance with their perception of a given situation. While behaviour is obvious, a person's thoughts and feelings are masked. This gives rise to the idea that the most common problems between people are based on the assumption that we can guess what the other person is feeling and thinking. They also offered methods, within this scope, for effective communications. These include reflective listening, assertion skills, and conflict resolution. Perceptual psychology is often used in therapy to help a patient better their problem-solving skills. Nativism vs. empiricism Nativist and empiricist approaches to perceptual psychology have been researched and debated to find out which is the basis of the development of perception. Nativists believe humans are born with all the perceptual abilities needed. Nativism is the favoured theory on perception. Empiricists believe that humans are not born with perceptual abilities, but instead must learn them. See also Binding problem Psychophysics Physiological psychology Sociophysics Vision science References Cognitive biases Cognitive psychology
Perceptual psychology
[ "Biology" ]
380
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
2,177,071
https://en.wikipedia.org/wiki/B-tagging
b-tagging is a method of jet flavor tagging used in modern particle physics experiments. It is the identification (or "tagging") of jets originating from bottom quarks (or b quarks, hence the name). Importance b-tagging is important because: The physics of bottom quarks is quite interesting; in particular, it sheds light on CP violation. Some important high-mass particles (both recently discovered and hypothetical) decay into bottom quarks. Top quarks very nearly always do so, and the Higgs boson, whose mass has been observed to be about 125 GeV, is expected to decay into bottom quarks more often than into any other particle. Identifying bottom quarks helps to identify the decays of these particles. Methods The methods for b-tagging are based on the unique features of b-jets. These include: Hadrons containing bottom quarks have sufficient lifetime that they travel some distance before decaying. On the other hand, their lifetimes are not as long as those of light-quark hadrons, so they decay inside the detector rather than escape. The advent of precision silicon detectors within particle detectors has made it possible to identify particles that originate from a place different from where the bottom quark was formed (e.g. the beam–beam collision point in a particle accelerator), thus indicating the likely presence of a b-jet. The bottom quark is much more massive than anything it decays into. Thus its decay products tend to have higher transverse momentum (momentum perpendicular to the original direction of the bottom quark, and therefore of the b-jet). This causes b-jets to be wider, have higher multiplicities (numbers of constituent particles) and invariant masses, and also to contain low-energy leptons with momentum perpendicular to the jet. These two features can be measured, and jets that have them are more likely to be b-jets. Opposite-side algorithms have been used at LHCb to tag the flavor in pairs of b quarks, using the decay products of B-hadrons to infer the flavor of B-mesons. None of the methods of identifying b-jets are foolproof, and modern particle physics experiments must devote significant time to studying how often they successfully identify b-jets and how often they misidentify other jets. Monte Carlo simulations are used to develop and evaluate the performance of tagging algorithms. Experiments making precise measurements of B mesons (mesons containing b-quarks) also try to identify the particular initial B meson within the jet. This is done in order to observe the oscillation of one meson into another (neutral B meson oscillation), which allows the measurement of CP violation. See also B-Factory Neutral B meson oscillation References Experimental particle physics Hadrons B physics
B-tagging
[ "Physics" ]
585
[ "Matter", "Hadrons", "Experimental physics", "Particle physics", "Experimental particle physics", "Subatomic particles" ]
2,177,220
https://en.wikipedia.org/wiki/Patch%20Tuesday
Patch Tuesday (also known as Update Tuesday) is an unofficial term used to refer to the day on which Microsoft, Adobe, Oracle and others regularly release software patches for their software products. It is widely referred to in this way by the industry. Microsoft formalized Patch Tuesday in October 2003. Patch Tuesday is known within Microsoft also as the "B" release, to distinguish it from the "C" and "D" releases that occur in the third and fourth weeks of the month, respectively. Patch Tuesday occurs on the second Tuesday of each month. Critical security updates are occasionally released outside of the normal Patch Tuesday cycle; these are known as "Out-of-band" releases. As far as the integrated Windows Update (WU) function is concerned, Patch Tuesday begins at 10:00 a.m. Pacific Time. Vulnerability information is immediately available in the Security Update Guide. The updates show up in Download Center before they are added to WU, and the KB articles are unlocked later. Daily updates consist of malware database refreshes for Microsoft Defender and Microsoft Security Essentials; these updates are not part of the normal Patch Tuesday release cycle. History Starting with Windows 98, Microsoft included Windows Update, which once installed and executed would check for patches to Windows and its components, which Microsoft would release intermittently. With the release of Microsoft Update, this system also checks for updates for other Microsoft products, such as Microsoft Office, Visual Studio and SQL Server. Earlier versions of Windows Update suffered from two problems: Less experienced users often remained unaware of Windows Update and did not install it. Microsoft countered this issue in Windows ME with the Automatic Updates component, which displayed availability of updates, with the option of automatic installation. Customers with multiple copies of Windows, such as corporate users, not only had to update every Windows deployment in the company but also to uninstall patches issued by Microsoft that broke existing functionality. Microsoft introduced "Patch Tuesday" in October 2003 to reduce the cost of distributing patches after the Blaster worm. This system accumulates security patches over a month, and dispatches them all on the second Tuesday of each month, an event for which system administrators may prepare. The following day, informally known as "Exploit Wednesday", marks the time when exploits that take advantage of the newly announced vulnerabilities may appear in the wild on unpatched machines. Tuesday was chosen as the optimal day of the week to distribute software patches. This is done to maximize the amount of time available before the upcoming weekend to correct any issues that might arise with those patches, while leaving Monday free to address other unexpected issues that might have arisen over the preceding weekend. 
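Since the schedule reduces to finding the second Tuesday of a month, the rule is easy to compute; the following small Python sketch is purely illustrative (the helper name is invented here, and Microsoft publishes no such reference code):

    from datetime import date, timedelta

    def patch_tuesday(year, month):
        """Return the second Tuesday of the given month."""
        first = date(year, month, 1)
        # date.weekday(): Monday is 0, so Tuesday is 1.
        offset = (1 - first.weekday()) % 7    # days until the first Tuesday
        return first + timedelta(days=offset + 7)

    print(patch_tuesday(2003, 10))   # 2003-10-14, the first formal Patch Tuesday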
Security implications An obvious security implication is that security problems that have a solution are withheld from the public for up to a month. This policy is adequate when the vulnerability is not widely known or is extremely obscure, but that is not always the case. There have been cases where vulnerability information became public or actual worms were circulating prior to the next scheduled Patch Tuesday. In critical cases Microsoft issues corresponding patches as they become ready, alleviating the risk if updates are checked for and installed frequently. At the Ignite 2015 event, Microsoft revealed a change in distributing security patches: they release security updates to home PCs, tablets and phones as soon as they are ready, while enterprise customers stay on the monthly update cycle, which was reworked as Windows Update for Business. Exploit Wednesday Many exploitation events are seen shortly after the release of a patch; analysis of the patch helps exploit developers to immediately take advantage of the previously undisclosed vulnerability, which will remain in unpatched systems. Therefore, the term "Exploit Wednesday" was coined. Discontinued Windows versions Microsoft warned users that it discontinued support for Windows XP starting on April 8, 2014; users running Windows XP afterwards would be at risk of attacks. As security patches of newer Windows versions can reveal similar (or the same) vulnerabilities already present in older Windows versions, this can allow attacks on devices with unsupported Windows versions (cf. "zero-day attacks"). However, Microsoft stopped fixing such (and other) vulnerabilities in unsupported Windows versions, regardless of how widely known they became, leaving devices running these Windows versions vulnerable to attacks. Microsoft made a singular exception during the rapid spread of the WannaCry ransomware and released patches in May 2017 for the by then unsupported Windows XP, Windows 8, and Windows Server 2003 (in addition to then-supported Windows versions). For Windows Vista, "extended support" ended on April 11, 2017, leaving vulnerabilities discovered afterwards unfixed and creating the same situation for Vista as for XP before. For Windows 7 (including Service Pack 1), support ended January 14, 2020, and on January 10, 2023, for Windows 8.1, causing the same "unfixed vulnerabilities" issue for users of these operating systems. Support for Windows 8 already ended January 12, 2016 (with users having to install Windows 8.1 or Windows 10 to continue to get support), and support for Windows 7 without SP1 ended April 9, 2013 (with the ability to install SP1 to continue to get support until 2020, or having to install Windows 8.1 or Windows 10 to receive support after 2020). Windows 10 and 11 Starting with Windows 10, Microsoft began releasing feature updates of Windows twice per year. These releases brought new functionalities, and are governed by Microsoft's modern lifecycle policy, which specifies a support period of 18–36 months. This is in contrast to previous Windows versions, which received only infrequent updates via service packs, and whose support was governed by the fixed lifecycle policy. With the release of Windows 11, both Windows 10 and 11 started receiving annual feature updates in the second half of the year. Once a release's support period ends, devices must be updated to the latest feature update in order to receive updates from Microsoft. As such, for Home and Pro editions of Windows 10 and 11, the latest Windows version is downloaded and installed automatically when the device approaches the end of support date. In addition to the commonly used editions like Home and Pro, Microsoft offers specialized Long-Term Servicing Channel (LTSC) versions of Windows 10 with longer support timelines, governed by Microsoft's fixed lifecycle policy. For instance, Windows 10 Enterprise 2016 LTSB will receive extended support until October 13, 2026, and Windows 10 LTSC 2019 will receive extended support until January 9, 2029. 
Adoption by other companies SAP's "Security Patch Day", when the company advises users to install security updates, was chosen to coincide with Patch Tuesdays. Adobe Systems' update schedule for Flash Player since November 2012 also coincides with Patch Tuesday. One of the reasons for this is that Flash Player comes as part of Windows starting with Windows 8, and Flash Player updates for the built-in and the plugin-based versions both need to be published at the same time in order to prevent reverse-engineering threats. Oracle's quarterly updates coincide with Patch Tuesday. Bandwidth impact Windows Update uses the Background Intelligent Transfer Service (BITS) to download the updates, using idle network bandwidth. However, BITS uses the speed as reported by the network interface (NIC) to calculate bandwidth. This can lead to bandwidth calculation errors, for example when a fast network adapter (e.g. 10 Mbit/s) is connected to the network via a slow link (e.g. 56 kbit/s); according to Microsoft, "BITS will compete for the full bandwidth [of the NIC] ... BITS has no visibility of the network traffic beyond the client." Furthermore, the Windows Update servers of Microsoft do not honor TCP's slow-start congestion control strategy. As a result, other users on the same network may experience significantly slower connections from machines actively retrieving updates. This can be particularly noticeable in environments where many machines individually retrieve updates over a shared, bandwidth-constrained link such as those found in many multi-PC homes and small to medium-sized businesses. Bandwidth demands of patching large numbers of computers can be reduced significantly by deploying Windows Server Update Services (WSUS) to distribute the updates locally. In addition to updates being downloaded from Microsoft servers, Windows 10 devices can "share" updates in a peer-to-peer fashion with other Windows 10 devices on the local network, or even with Windows 10 devices on the internet. This can potentially distribute updates faster while reducing usage for networks with a metered connection. See also History of Microsoft Windows Full disclosure (computer security) References Further reading Example of report about vulnerability found in the wild with timing seemingly coordinated with "Patch Tuesday" Example of a quick patch response, not due to a security issue but for DRM-related reasons. External links Microsoft Patch Tuesday Countdown Microsoft Security Bulletin Computer security procedures Microsoft culture History of Microsoft Holidays and observances by scheduling (nth weekday of the month) Tuesday observances Software maintenance
Patch Tuesday
[ "Engineering" ]
1,825
[ "Software engineering", "Cybersecurity engineering", "Computer security procedures", "Software maintenance" ]
2,177,300
https://en.wikipedia.org/wiki/Viloxazine
Viloxazine, sold under the brand name Qelbree among others, is a selective norepinephrine reuptake inhibitor medication that is indicated in the treatment of attention deficit hyperactivity disorder (ADHD) in children and adults. It was marketed for almost 30 years as an antidepressant for the treatment of depression before being discontinued and subsequently repurposed as a treatment for ADHD. Viloxazine is taken orally. It was used as an antidepressant in an immediate-release form and is used in ADHD in an extended-release form, latterly with comparable effectiveness to atomoxetine and methylphenidate. Side effects of viloxazine include insomnia, headache, somnolence, fatigue, nausea, vomiting, decreased appetite, dry mouth, constipation, irritability, increased heart rate, and increased blood pressure. Rarely, the medication may cause suicidal thoughts and behaviors. It can also activate mania or hypomania in people with bipolar disorder. Viloxazine acts as a selective norepinephrine reuptake inhibitor (NRI). The immediate-release form has an elimination half-life of 2.5 hours while the half-life of the extended-release form is 7 hours. Viloxazine was first described by 1972 and was marketed as an antidepressant in Europe in 1974. It was not marketed in the United States at this time. The medication was discontinued in 2002 for commercial reasons. However, it was repurposed for the treatment of ADHD and was reintroduced, in the United States, in April 2021. Viloxazine is a non-stimulant medication; it has no known misuse liability and is not a controlled substance. Medical uses Attention deficit hyperactivity disorder Viloxazine is indicated to treat attention deficit hyperactivity disorder (ADHD) in children age 6 to 12 years, adolescents age 13 to 17 years, and adults. Analyses of clinical trial data suggest that viloxazine produces moderate reductions in symptoms; it is about as effective as atomoxetine and methylphenidate but with fewer side effects. Depression Viloxazine was previously marketed as an antidepressant for the treatment of major depressive disorder. It was considered to be effective in mild to moderate as well as severe depression with or without co-morbid symptoms. The typical dose range for depression was 100 to 400 mg per day in divided doses administered generally two to three times per day. Available forms Viloxazine is available for ADHD in the form of 100, 150, and 200 mg extended-release capsules. These capsules can be opened and sprinkled into food for easier administration. Side effects The most common side effects include drowsiness, headache, and loss of appetite. Psychiatric side effects occur in about 20% of cases; the most common of these is irritability (>5%). Other common side effects include nausea, vomiting, epigastric pain, insomnia, and increased libido. Incidence of some side effects, including headache and drowsiness, appears to be dose-dependent. In the treatment of depression, viloxazine is more tolerable than tricyclic antidepressants such as imipramine and amitriptyline. There were three cases of seizure worldwide, and most animal studies (and clinical trials that included epilepsy patients) indicated the presence of anticonvulsant properties, so viloxazine is not completely contraindicated in patients with epilepsy. Interactions Viloxazine increased plasma levels of phenytoin by an average of 37%. 
Pharmacokinetics

Absorption
The bioavailability of extended-release viloxazine relative to an immediate-release formulation is about 88%. Peak levels and overall exposure to extended-release viloxazine are dose-proportional over a range of 100 to 400 mg once daily. The time to peak levels is 5 hours (range 3 to 9 hours) after a single 200 mg dose. A high-fat meal modestly decreases levels of viloxazine and delays the time to peak by about 2 hours. Steady-state levels of viloxazine are reached after 2 days of once-daily administration, and no accumulation occurs. Levels of viloxazine are approximately 40 to 50% higher in children age 6 to 11 years than in children age 12 to 17 years.

Distribution
The plasma protein binding of viloxazine is 76 to 82% over a concentration range of 0.5 to 10 μg/mL.

Metabolism
The metabolism of viloxazine is primarily via the cytochrome P450 enzyme CYP2D6 and the UDP-glucuronosyltransferases UGT1A9 and UGT2B15. The major metabolite of viloxazine is 5-hydroxyviloxazine glucuronide. Viloxazine levels are slightly higher in CYP2D6 poor metabolizers relative to CYP2D6 extensive metabolizers.

Elimination
The elimination of viloxazine is mainly renal. Approximately 90% of a dose is excreted in urine within 24 hours, and less than 1% is recovered in feces. The elimination half-life of immediate-release viloxazine is 2 to 5 hours (2–3 hours in the most reliable studies), and the half-life of extended-release viloxazine is 7.02 ± 4.74 hours.
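A worked illustration of what these half-lives imply (a simplified sketch assuming one-compartment, first-order elimination; the 7-hour figure is taken from the Elimination section above, and the calculation itself is not from the source article): the fraction of a dose remaining a time t after administration is

\[
f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
f(24\ \mathrm{h}) = \left(\tfrac{1}{2}\right)^{24/7} \approx 0.09 .
\]

With the roughly 7-hour extended-release half-life, then, only about 9% of a dose remains in the body after 24 hours, broadly consistent with the observation above that approximately 90% of a dose is excreted in urine within that period (though the urinary figure counts metabolites as well as parent drug).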
Chemistry
Viloxazine is a racemic compound with two stereoisomers, the (S)-(–)-isomer being five times as pharmacologically active as the (R)-(+)-isomer.

History
Viloxazine was discovered by scientists at Imperial Chemical Industries when they recognized that some beta blockers inhibited serotonin reuptake in the brain at high doses. To improve the ability of their compounds to cross the blood–brain barrier, they replaced the ethanolamine side chain of the beta blockers with a morpholine ring, leading to the synthesis of viloxazine. It was first described in the scientific literature as early as 1972, and the medication was first marketed in 1974. Viloxazine was not approved for medical use by the FDA at that time. In 1984, the FDA granted the medication an orphan designation for the treatment of cataplexy and narcolepsy under the tentative brand name Catatrol; for unknown reasons, however, it was never approved or introduced for these uses in the United States. Viloxazine was withdrawn from markets worldwide in 2002 for commercial reasons unrelated to efficacy or safety.

As of 2015, Supernus Pharmaceuticals was developing extended-release formulations of viloxazine as a treatment for ADHD and major depressive disorder under the names SPN-809 and SPN-812. Viloxazine was approved for the treatment of ADHD in the United States in April 2021.

The benefit of viloxazine was evaluated in three clinical studies: two in children (ages 6 to 11 years) and one in adolescents (ages 12 to 17 years) with ADHD. In each study, pediatric participants were randomly assigned to receive one of two doses of viloxazine or placebo once daily for 6 to 8 weeks. Neither the participants, their parents or caregivers, the study sponsor, nor the study doctors knew which treatment a participant received during the study. The severity of ADHD symptoms, assessed using the Attention-Deficit Hyperactivity Disorder Rating Scale 5th Edition (ADHD-RS-5), was significantly greater at the last week of treatment in participants who received placebo than in those who received viloxazine. A fourth study provided information about the safety of viloxazine in adolescents 12 to 17 years of age with ADHD. The FDA approved viloxazine based on evidence from these trials, which together enrolled 1289 participants with ADHD at 59 sites in the United States.

Society and culture

Brand names
Viloxazine has been marketed under the brand names Emovit, Qelbree, Vicilan, Viloxazin, Viloxazina, Viloxazinum, Vivalan, and Vivarint.

Research
Viloxazine has undergone two randomized controlled trials for nocturnal enuresis (bedwetting) in children, both against imipramine. By 1990, it was seen as a less cardiotoxic alternative to imipramine and as especially effective in heavy sleepers.

In narcolepsy, viloxazine has been shown to suppress auxiliary symptoms such as cataplexy and abnormal sleep-onset REM without significantly improving daytime somnolence. In a cross-over trial of 56 participants, however, viloxazine significantly reduced excessive daytime sleepiness (EDS) and cataplexy.

Viloxazine has also been studied for the treatment of alcoholism, with some success.

Viloxazine did not demonstrate efficacy in a double-blind randomized controlled trial against amisulpride in the treatment of dysthymia.

References

External links

5-HT2B antagonists
5-HT2C agonists
Antidepressants
Attention deficit hyperactivity disorder management
Drugs developed by AstraZeneca
Morpholines
Norepinephrine reuptake inhibitors
Phenol ethers
Wakefulness-promoting agents
Withdrawn drugs
Viloxazine
[ "Chemistry" ]
2,384
[ "Drug safety", "Withdrawn drugs" ]
2,177,591
https://en.wikipedia.org/wiki/Aft-crossing%20trajectory
In 2005, a new trajectory that an air-launched rocket could take to put satellites into orbit was tested. Until this time, launch vehicles such as the Pegasus rocket, or rocket planes such as the X-1, X-15, or SpaceShipOne, were carried under an aircraft, pointing in the same direction as its fuselage, and had their engines ignited either just before being air-dropped or a few seconds afterward. They would then accelerate and climb in front of the carrier aircraft, crossing its flight path. This was considered dangerous because of the potential for a collision between the rocket and the carrier aircraft.

The aft-crossing trajectory is an alternate flight path for such a rocket. The rocket's rotation, induced by its deployment from the aircraft, is slowed by a small parachute attached to its tail, and the rocket is ignited once the carrier aircraft has passed it. Ignition occurs before the rocket is pointing fully vertical; it then pitches up toward vertical and accelerates, passing behind the carrier aircraft. The principal advantage of this method is its safety for the crew of the carrier aircraft.

See also
AirLaunch LLC
t/Space

References
Aviation Week & Space Technology, June 27, 2005, page 32.

Spaceflight
Aft-crossing trajectory
[ "Astronomy" ]
246
[ "Spaceflight", "Outer space" ]
2,178,110
https://en.wikipedia.org/wiki/List%20of%20astronomical%20societies
A list of notable groups devoted to promoting astronomy research and education.

International
International Astronomical Union (IAU)
International Meteor Organization
Network for Astronomy School Education
The Planetary Society

Africa
Astronomical Society of Southern Africa

Asia
China
Hong Kong Astronomical Society
India
Akash Mitra Mandal
AstronEra
Astronomical Society of India
Bangalore Astronomical Society (BAS)
Confederation of Indian Amateur Astronomers
IUCAA
Jyotirvidya Parisanstha
Khagol Mandal
Khagol Vishwa
Wonders of Universe
Turkey
SpaceTurk
Thailand
United Arab Emirates
Dubai Astronomy Group

Europe
European Astronomical Society
European Association for Astronomy Education
France
Société astronomique de France
Société Française d'Astronomie et d'Astrophysique (SF2A)
Germany
Astronomische Gesellschaft
Vereinigung der Sternfreunde
Greece
Hellenic Astronomical Society
Ireland
Irish Astronomical Society
Irish Federation of Astronomical Societies
Italy
Unione Astrofili Italiani
Norway
Norwegian Astronomical Society
CV-Helios Network
Poland
Polish Astronomical Society
Russia
Russian Astronomical Society (1891-1932)
Eurasian Astronomical Society (1990-)
Serbia
Astronomical Society Ruđer Bošković
United Kingdom
Airdrie Astronomical Association
Astronomical Society of Edinburgh
Astronomical Society of Glasgow
Astronomy Centre
British Astronomical Association
Crayford Manor House Astronomical Society
Federation of Astronomical Societies
Kielder Observatory Astronomical Society
Liverpool Astronomical Society
Manchester Astronomical Society
Mexborough & Swinton Astronomical Society
Northumberland Astronomical Society
Nottingham Astronomical Society
Royal Astronomical Society
Society for Popular Astronomy
Society for the History of Astronomy

North America
Canada
Canadian Astronomical Society
Royal Astronomical Society of Canada
Mexico
Nibiru Sociedad Astronomica
United States
Amateur Astronomers Association of Pittsburgh
American Association of Variable Star Observers
American Astronomical Society (AAS)
American Meteor Society
Association of Lunar and Planetary Observers
Astronomical League
Astronomical Society of the Pacific
Escambia Amateur Astronomers Association
Indiana Astronomical Society
Kaua‘i Educational Association for Science and Astronomy
Kopernik Astronomical Society
Louisville Astronomical Society
Milwaukee Astronomical Society
Mohawk Valley Astronomical Society
NASA Night Sky Network
SETI Institute
Shreveport-Bossier Astronomical Society
Southern Cross Astronomical Society

Oceania
Australia
Astronomical Society of Australia
Astronomical Society of New South Wales
Astronomical Society of South Australia
Astronomical Society of Victoria
Macarthur Astronomical Society
Sutherland Astronomical Society
New Zealand
Dunedin Astronomical Society
Royal Astronomical Society of New Zealand
Whakatane Astronomical Society

South America
Brazil
Sociedade Astronômica Brasileira

See also
Amateur astronomy organizations by name
Astronomy organizations by name
Lists of organizations

Societies
List of astronomical societies
[ "Astronomy" ]
466
[ "Astronomy societies", "Astronomy-related lists", "Astronomy organizations" ]