Dataset columns: id (int64, 580–79M) · url (string, 31–175 chars) · text (string, 9–245k chars) · source (string, 1–109 chars) · categories (string, 160 classes) · token_count (int64, 3–51.8k)
38,157,005
https://en.wikipedia.org/wiki/VNREDSat-1
VNREDSat-1 (short for Vietnam Natural Resources, Environment and Disaster Monitoring Satellite, also VNREDSat-1A) is the first optical Earth observation satellite of Vietnam; its primary mission is to monitor and study the effects of climate change, predict and take measures to prevent natural disasters, and optimise the management of Vietnam's natural resources. Satellite The VNREDSat-1 was built in Toulouse by EADS Astrium. During the project, 15 Vietnamese engineers were integrated into and trained by the Astrium team. The VNREDSat-1 system is based on Astrium's operational AstroSat100 satellite bus, also used for the SSOT programme developed with Chile and the ALSAT-2 satellite system developed with Algeria. The 120 kg satellite images at 2.5 m resolution in panchromatic mode and 10 m in multi-spectral mode (four bands) with a 17.5 km swath, from a Sun-synchronous orbit at 600–700 km. Launch The satellite was launched from ELV at the Guiana Space Centre by the Vega VV02 rocket at 02:06:31 UTC on 7 May 2013, together with the PROBA-V and ESTCube-1 satellites. References 2013 in Vietnam Spacecraft launched in 2013 Earth imaging satellites Satellites of Vietnam Spacecraft launched by Vega rockets
VNREDSat-1
Astronomy
274
18,527,330
https://en.wikipedia.org/wiki/Foldy%E2%80%93Wouthuysen%20transformation
The Foldy–Wouthuysen transformation was historically significant and was formulated by Leslie Lawrance Foldy and Siegfried Adolf Wouthuysen in 1949 to understand the nonrelativistic limit of the Dirac equation, the equation for spin-1/2 particles. A detailed general discussion of Foldy–Wouthuysen-type transformations in the particle interpretation of relativistic wave equations is in Acharya and Sudarshan (1960). Its utility in high-energy physics is now limited, since the primary applications are in the ultra-relativistic domain, where the Dirac field is treated as a quantised field. A canonical transform The FW transformation is a unitary transformation of the orthonormal basis in which both the Hamiltonian and the state are represented. The eigenvalues do not change under such a unitary transformation; that is, the physics does not change under such a unitary basis transformation. Therefore, such a unitary transformation can always be applied: in particular, a unitary basis transformation may be picked which puts the Hamiltonian in a more pleasant form, at the expense of a change in the state function, which then represents something else. See for example the Bogoliubov transformation, which is an orthogonal basis transform for the same purpose. The suggestion that the FW transform is applicable only to the state or only to the Hamiltonian is thus not correct. Foldy and Wouthuysen made use of a canonical transform that has now come to be known as the Foldy–Wouthuysen transformation. A brief account of the history of the transformation is to be found in the obituaries of Foldy and Wouthuysen and the biographical memoir of Foldy. Before their work, there was some difficulty in understanding and gathering all the interaction terms of a given order, such as those for a Dirac particle immersed in an external field. With their procedure, the physical interpretation of the terms was clear, and it became possible to apply their work in a systematic way to a number of problems that had previously defied solution. The Foldy–Wouthuysen transform was extended to the physically important cases of spin-0 and spin-1 particles, and even generalized to the case of arbitrary spins. Description The Foldy–Wouthuysen (FW) transformation is a unitary transformation on a fermion wave function of the form

  $\psi \to \psi' = U\psi$,   (1)

where the unitary operator is the 4 × 4 matrix

  $U = e^{\beta\,\alpha\cdot\hat{p}\,\theta} = \cos\theta + \beta\,\alpha\cdot\hat{p}\,\sin\theta$.   (2)

Above, $\hat{p} \equiv p/|p|$ is the unit vector oriented in the direction of the fermion momentum. The matrices above are related to the Dirac matrices by $\beta = \gamma^0$ and $\alpha_i = \gamma^0\gamma^i$, with $i = 1, 2, 3$. A straightforward series expansion applying the commutativity properties of the Dirac matrices (which give $(\beta\,\alpha\cdot\hat{p})^2 = -1$) demonstrates that (2) is true. The inverse is

  $U^{-1} = e^{-\beta\,\alpha\cdot\hat{p}\,\theta} = \cos\theta - \beta\,\alpha\cdot\hat{p}\,\sin\theta$,

so it is clear that $U^{-1}U = I$, where $I$ is the 4 × 4 identity matrix. Transforming the Dirac Hamiltonian for a free fermion This transformation is of particular interest when applied to the free-fermion Dirac Hamiltonian operator

  $H_0 = \alpha\cdot p + \beta m$

in biunitary fashion, in the form

  $H_0' = U H_0 U^{-1} = (\cos\theta + \beta\,\alpha\cdot\hat{p}\,\sin\theta)\,(\alpha\cdot p + \beta m)\,(\cos\theta - \beta\,\alpha\cdot\hat{p}\,\sin\theta)$.

Using the commutativity properties of the Dirac matrices, this can be massaged over into the double-angle expression

  $H_0' = (\alpha\cdot p + \beta m)\,(\cos 2\theta - \beta\,\alpha\cdot\hat{p}\,\sin 2\theta)$.

This factors out into

  $H_0' = \alpha\cdot p\left(\cos 2\theta - \tfrac{m}{|p|}\sin 2\theta\right) + \beta\left(m\cos 2\theta + |p|\sin 2\theta\right)$.   (3)

Choosing a particular representation: Newton–Wigner Clearly, the FW transformation is a continuous transformation; that is, one may employ any value for $\theta$ which one chooses. Choosing a particular value for $\theta$ amounts to choosing a particular transformed representation. One particularly important representation is that in which the transformed Hamiltonian operator $H_0'$ is diagonalized. A completely diagonal representation can be obtained by choosing $\theta$ such that the $\alpha\cdot p$ term in (3) vanishes.
This is arranged by choosing

  $\tan 2\theta = \frac{|p|}{m}$.   (4)

In the Dirac–Pauli representation, where $\beta$ is a diagonal matrix, $H_0'$ is then reduced to a diagonal matrix. By elementary trigonometry, (4) also implies that

  $\sin 2\theta = \frac{|p|}{\sqrt{m^2 + |p|^2}}, \qquad \cos 2\theta = \frac{m}{\sqrt{m^2 + |p|^2}}$,   (5)

so that using (5) in (3) and then simplifying now leads to

  $H_0' = \beta\sqrt{m^2 + |p|^2}$.   (6)

Prior to Foldy and Wouthuysen publishing their transformation, it was already known that (6) is the Hamiltonian in the Newton–Wigner (NW) representation (named after Theodore Duddell Newton and Eugene Wigner) of the Dirac equation. What (6) therefore tells us is that, by applying a FW transformation to the Dirac–Pauli representation of Dirac's equation and then selecting the continuous transformation parameter $\theta$ so as to diagonalize the Hamiltonian, one arrives at the NW representation of Dirac's equation, because NW itself already contains the Hamiltonian specified in (6). If one considers an on-shell mass (fermion or otherwise) given by $m^2 = p^\sigma p_\sigma$, and employs a Minkowski metric tensor for which $\mathrm{diag}(\eta) = (+1, -1, -1, -1)$, it should be apparent that the expression $\sqrt{m^2 + |p|^2}$ is equivalent to the energy component $E = p^0$ of the energy-momentum vector $p^\mu$, so that (6) is alternatively specified rather simply by $H_0' = \beta E$. Correspondence between the Dirac–Pauli and Newton–Wigner representations, for a fermion at rest Now consider a fermion at rest, which we may define in this context as a fermion for which $|p| = 0$. From (4) or (5), this means that $2\theta = 0$, so that $\theta = 0$ and, from (2), that the unitary operator is $U = I$. Therefore, any operator $O$ in the Dirac–Pauli representation upon which we perform a biunitary transformation will be given, for an at-rest fermion, by

  $O' = U O U^{-1} = O$.

Contrasting the original Dirac–Pauli Hamiltonian operator $H_0 = \alpha\cdot p + \beta m$ with the NW Hamiltonian (6), we do indeed find the "at rest" correspondence

  $H_0 = H_0' = \beta m$.

Transforming the velocity operator In the Dirac–Pauli representation Now, consider the velocity operator. To obtain this operator, we must commute the Hamiltonian operator with the canonical position operators $x_i$, i.e., we must calculate

  $\dot{x}_i = \frac{i}{\hbar}\left[H_0, x_i\right]$.

One good way to approach this calculation is to start by writing the scalar rest mass $m$ as

  $m = \beta H_0 - \beta\,\alpha\cdot p$

and then to mandate that the scalar rest mass commute with the $x_i$. Thus, we may write

  $0 = \left[m, x_i\right] = \beta\left[H_0, x_i\right] + i\hbar\,\beta\alpha_i$,

where we have made use of the Heisenberg canonical commutation relationship $\left[x_i, p_j\right] = i\hbar\,\delta_{ij}$ to reduce terms. Then, multiplying from the left by $\beta$ and rearranging terms, we arrive at

  $\dot{x}_i = \frac{i}{\hbar}\left[H_0, x_i\right] = \alpha_i$.

Because $\alpha_i$ does not commute with $H_0$, the above provides the basis for computing an inherent, non-zero acceleration operator, which specifies the oscillatory motion known as zitterbewegung. In the Newton–Wigner representation In the Newton–Wigner representation, we now wish to calculate

  $\dot{x}_i' = \frac{i}{\hbar}\left[H_0', x_i\right]$.

If we use the result $H_0' = \beta E$ obtained above, then this can be written instead as

  $\dot{x}_i' = \frac{i}{\hbar}\left[\beta E, x_i\right]$.

Using the above, we need simply to calculate $\left[E, x_i\right]$, then multiply by $\beta$. The canonical calculation proceeds similarly to the Dirac–Pauli calculation above, but because of the square-root expression in $E = \sqrt{m^2 + |p|^2}$, one additional step is required. First, to accommodate the square root, we will wish to require that the scalar square mass $m^2$ commute with the canonical coordinates $x_i$, which we write as

  $0 = \left[m^2, x_i\right] = \left[E^2, x_i\right] + 2i\hbar\,p_i$,

where we again use the Heisenberg canonical relationship $\left[x_i, p_j\right] = i\hbar\,\delta_{ij}$. Then, we need an expression for $\left[E, x_i\right]$ which will satisfy

  $\left[E^2, x_i\right] = E\left[E, x_i\right] + \left[E, x_i\right]E$.

It is straightforward to verify that

  $\left[E, x_i\right] = -i\hbar\,\frac{p_i}{E}$

will satisfy this relation when again employing the canonical relationship (note that $p_i$ commutes with $E$). Now, we simply return the factor $\beta$ via $H_0' = \beta E$, to arrive at

  $\dot{x}_i' = \frac{i}{\hbar}\left[H_0', x_i\right] = \beta\,\frac{p_i}{E}$.

This is understood to be the velocity operator in the Newton–Wigner representation. Because

  $\left[H_0', \beta\,\tfrac{p_i}{E}\right] = 0$,

it is commonly thought that the zitterbewegung motion arising out of $\dot{x}_i = \alpha_i$ vanishes when a fermion is transformed into the Newton–Wigner representation.
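As a numerical sanity check of equations (2)–(6), here is a short Python sketch (assuming NumPy is available; the explicit Dirac–Pauli matrices are the standard textbook ones, supplied here for illustration). It builds $U$ for a sample momentum and confirms that $U H_0 U^{-1} = \beta\sqrt{m^2 + |p|^2}$:

```python
import numpy as np

# Dirac-Pauli representation: beta = diag(1,1,-1,-1), alpha_i = [[0, sigma_i], [sigma_i, 0]]
s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
beta = np.diag([1, 1, -1, -1]).astype(complex)
alpha = [np.block([[np.zeros((2, 2)), si], [si, np.zeros((2, 2))]]).astype(complex) for si in s]

m = 1.0
p = np.array([0.3, -0.4, 1.2])          # sample momentum (natural units, c = hbar = 1)
pmag = np.linalg.norm(p)
phat = p / pmag

H0 = sum(alpha[i] * p[i] for i in range(3)) + beta * m   # free Dirac Hamiltonian
A = beta @ sum(alpha[i] * phat[i] for i in range(3))     # A = beta alpha.phat, with A^2 = -1
theta = 0.5 * np.arctan2(pmag, m)                        # tan(2 theta) = |p|/m, eq. (4)
U = np.cos(theta) * np.eye(4) + np.sin(theta) * A        # U = exp(A theta), eq. (2)

H_fw = U @ H0 @ np.linalg.inv(U)                         # biunitary transform of H0
H_nw = beta * np.sqrt(m**2 + pmag**2)                    # Newton-Wigner Hamiltonian, eq. (6)

print(np.allclose(H_fw, H_nw))   # True: the FW transform diagonalizes H0
```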
Other applications The powerful machinery of the Foldy–Wouthuysen transform, originally developed for the Dirac equation, has found applications in many situations, such as acoustics and optics. It has found applications in very diverse areas such as atomic systems, synchrotron radiation, and the derivation of the Bloch equation for polarized beams. The application of the Foldy–Wouthuysen transformation in acoustics is very natural, and comprehensive and mathematically rigorous accounts are available. In the traditional scheme, the purpose of expanding the optical Hamiltonian in a series using a suitable expansion parameter is to understand the propagation of the quasi-paraxial beam in terms of a series of approximations (paraxial plus nonparaxial). The situation is similar in the case of charged-particle optics. Recall that in relativistic quantum mechanics, too, one has a similar problem of understanding the relativistic wave equations as the nonrelativistic approximation plus relativistic correction terms in the quasi-relativistic regime. For the Dirac equation (which is first-order in time) this is done most conveniently using the Foldy–Wouthuysen transformation, leading to an iterative diagonalization technique. The main framework of the newly developed formalisms of optics (both light optics and charged-particle optics) is based on the transformation technique of Foldy–Wouthuysen theory, which casts the Dirac equation in a form displaying the different interaction terms between the Dirac particle and an applied electromagnetic field in a nonrelativistic and easily interpretable form. In the Foldy–Wouthuysen theory the Dirac equation is decoupled through a canonical transformation into two two-component equations: one reduces to the Pauli equation in the nonrelativistic limit, and the other describes the negative-energy states. It is possible to write a Dirac-like matrix representation of Maxwell's equations, and in such a matrix form the Foldy–Wouthuysen transform can be applied. There is a close algebraic analogy between the Helmholtz equation (governing scalar optics) and the Klein–Gordon equation, and between the matrix form of Maxwell's equations (governing vector optics) and the Dirac equation. So it is natural to use the powerful machinery of standard quantum mechanics (particularly the Foldy–Wouthuysen transform) in analyzing these systems. The suggestion to employ the Foldy–Wouthuysen transformation technique in the case of the Helmholtz equation was initially mentioned in the literature only as a remark; only in recent works was this idea exploited to analyze the quasiparaxial approximations for specific beam optical systems. The Foldy–Wouthuysen technique is ideally suited for the Lie algebraic approach to optics. Despite all these advantages, including its powerful and ambiguity-free expansion, the Foldy–Wouthuysen transformation is still little used in optics. The technique of the Foldy–Wouthuysen transformation results in what are known as the nontraditional prescriptions of Helmholtz optics and Maxwell optics, respectively. The nontraditional approaches give rise to very interesting wavelength-dependent modifications of the paraxial and aberration behaviour. The nontraditional formalism of Maxwell optics provides a unified framework of light beam optics and polarization. The nontraditional prescriptions of light optics are closely analogous to the quantum theory of charged-particle beam optics.
In optics, it has enabled the deeper connections in the wavelength-dependent regime between light optics and charged-particle optics to be seen (see Electron optics). See also Relativistic quantum mechanics Notes Fermions Dirac equation
Foldy–Wouthuysen transformation
Physics,Materials_science
2,205
72,252,877
https://en.wikipedia.org/wiki/Dodecahedral%20bipyramid
In 4-dimensional geometry, the dodecahedral bipyramid is the direct sum of a dodecahedron and a segment. Each face of the central dodecahedron, joined to each of the two apexes, forms a pentagonal pyramid, creating 24 pentagonal pyramidal cells, 72 faces (the 12 pentagons of the central dodecahedron plus 60 isosceles triangles), 70 edges, and 22 vertices. A dodecahedral bipyramid can be seen as two dodecahedral pyramids augmented together at their bases. It is the dual of an icosahedral prism. See also Tetrahedral bipyramid Cubic bipyramid Icosahedral bipyramid References External links Dodecahedral tegum 4-polytopes
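These counts can be checked against the Euler relation for convex 4-polytopes, N0 − N1 + N2 − N3 = 0; a minimal Python sketch (variable names are mine):

```python
# Element counts of the base dodecahedron
V, E, F = 20, 30, 12

# Bipyramid (direct sum with a segment): two apexes are added
N0 = V + 2            # 22 vertices
N1 = E + 2 * V        # 70 edges: old edges plus one edge from each vertex to each apex
N2 = F + 2 * E        # 72 faces: 12 pentagons plus 60 triangles (one per edge per apex)
N3 = 2 * F            # 24 pentagonal-pyramid cells (one per face per apex)

assert N0 - N1 + N2 - N3 == 0   # Euler relation for convex 4-polytopes
print(N0, N1, N2, N3)           # 22 70 72 24
```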
Dodecahedral bipyramid
Mathematics
137
23,429,132
https://en.wikipedia.org/wiki/C13H9N
{{DISPLAYTITLE:C13H9N}} The molecular formula C13H9N (molar mass: 179.22 g/mol, exact mass: 179.0735 u) may refer to: Acridine Phenanthridine
C13H9N
Chemistry
57
18,032,711
https://en.wikipedia.org/wiki/Geoff%20Tootill
Geoffrey ("Geoff") Colin Tootill (4 March 1922 – 26 October 2017) was an electronic engineer and computer scientist who worked in the Electrical Engineering Department at the University of Manchester with Freddie Williams and Tom Kilburn developing the Manchester Baby, "the world's first wholly electronic stored-program computer". Education Tootill attended King Edward's School, Birmingham on a Classics scholarship and in 1940 gained an entrance exhibition to study Mathematics at Christ's College, Cambridge. He was forced to do the course in two years (missing Part One of the Mathematics Tripos) as his studies were cut short by World War II. After the successful operation of the Manchester Baby computer, he was awarded an MSc by the Victoria University of Manchester for his thesis on "Universal High-Speed Digital Computers: A Small-Scale Experimental Machine". Career On leaving Cambridge in 1942, Tootill managed to get assigned to work on airborne radar at the Telecommunications Research Establishment (TRE) in Malvern. Here, he went out to airfields to troubleshoot problems with the operation of radar in night fighters, designed modifications and oversaw their implementation. He later said that this was the most responsible job that he had in his life. In 1947, he was recruited by Frederic Calland Williams to join another ex-TRE colleague, Tom Kilburn, at Manchester University developing the world's first wholly electronic stored-program computer. In the UK, three projects were then underway to develop a stored program computer (in Cambridge, the NPL and Manchester) and the main technical hurdle was the memory technology. In order to test the cathode-ray tube (CRT) memory designed by FC Williams when it was constructed, Kilburn and Tootill designed an elementary computer, known as the "Manchester Baby". The computer could store 32 instructions or numbers using a single CRT. On 21 June 1948, after months of patient work constructing and testing the Baby piece by piece, coping with the unreliable electronic components of the day, the machine finally ran a routine written by Kilburn (they didn't use the word "program" then) to find the highest proper factor of a number. In Tootill's words "And we saw the thing had done a computation". A day or two later, the Baby ran successfully for 52 minutes to find the highest proper factor of 218, which required c. 3.5m arithmetic operations. After the Baby's first operation in June 1948, Alan Turing moved to Manchester so he could use the Baby for a project that he was working on at the National Physical Laboratory, where they had also been working on developing a computer. Tootill instructed Alan Turing on use of the Manchester Baby and debugged a program Turing had written to run on the Baby. In 1949, Tootill joined Ferranti where he developed the logic design of the first commercial computer (which was based on the Baby). He stayed at Ferranti only briefly and later the same year, he joined the Royal Military College of Science at Shrivenham as a Senior Lecturer on a considerably higher salary, lecturing and leading lab studies on digital computing. In 1956, Tootill joined the Royal Aircraft Establishment (RAE), Farnborough, researching issues for air traffic control systems. Here he wrote, with Stuart Hollingdale, "Electronic Computers", Penguin 1965, which ran through eight printings and was translated into Spanish and Japanese. Tootill was also a founding member of the British Computer Society in 1956. 
In 1963, Tootill joined the newly formed European Space Research Organisation (ESRO, now the European Space Agency). He set up and directed the Control Centre of ESRO, with its ground stations. In 1969, he was assigned to a bureaucratic post in London, which he did not enjoy. In 1973, he joined the National Physical Laboratory at Teddington, where he developed communications standards for the European Informatics Network, an experimental computer network. Tootill retired in 1982, but remained active in computing. In 1997, drawing on his linguistics background (notably Latin, Greek, French and German), he designed a phonetic algorithm for encoding English names (to recognise that e.g. Deighton and Dayton, or Shore and Shaw, sound the same), which garnered over 2,000 corporate users as part of a data matching package developed by his son Steve; a generic example of this kind of phonetic encoding is sketched below. In 1998, the Computer Conservation Society (in a project led by Christopher P Burton) unveiled a replica of the Baby (now in the Museum of Science and Industry, Manchester) to commemorate the 50th anniversary of the running of the first electronically stored program, based in large part on Tootill's notes and recollections. A page from his June 1948 notebook details the code of the first-ever software program, written by Tom Kilburn. Personal life As a boy, Tootill was interested in electronics and built a radio set. He met Pamela Watson while in Malvern during World War II, where they were both members of the "Flying Rockets Concert Party". He and Pam were married in 1947 and had three sons, Peter, Colin and Stephen, and two grandchildren, Mia and Duncan. His first wife Pam died in 1979, and in 1981 Tootill married Joyce Turnbull, who died in 2020. Books References 1922 births 2017 deaths People from Chadderton People educated at King Edward's School, Birmingham Alumni of Christ's College, Cambridge English computer scientists English electrical engineers History of computing in the United Kingdom Members of the British Computer Society Scientists of the National Physical Laboratory (United Kingdom) People associated with the Victoria University of Manchester People associated with the Department of Computer Science, University of Manchester European Space Agency personnel Deaths from pneumonia in the United Kingdom
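Tootill's own algorithm is not described here, but the classic Soundex algorithm illustrates the general class of phonetic name encodings mentioned above; this sketch is standard Soundex, not Tootill's scheme:

```python
def soundex(name: str) -> str:
    """Classic Soundex: names that sound alike get the same 4-character code.

    Shown only to illustrate the general idea of phonetic encoding; Tootill's
    algorithm handled cases (e.g. Deighton/Dayton, Shore/Shaw) that plain
    Soundex does not merge.
    """
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}
    digit = {ch: d for letters, d in codes.items() for ch in letters}
    name = name.lower()
    encoded = name[0].upper()
    prev = digit.get(name[0], "")
    for ch in name[1:]:
        d = digit.get(ch, "")
        if d and d != prev:
            encoded += d
        if ch not in "hw":        # 'h' and 'w' do not reset the previous code
            prev = d
    return (encoded + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))   # R163 R163 -- merged
print(soundex("Shore"), soundex("Shaw"))      # S600 S000 -- not merged by Soundex
```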
Geoff Tootill
Technology
1,159
3,682,117
https://en.wikipedia.org/wiki/Embalming%20chemicals
Embalming chemicals are a variety of preservatives, sanitising and disinfectant agents, and additives used in modern embalming to temporarily prevent decomposition and restore a natural appearance for viewing a body after death. A mixture of these chemicals is known as embalming fluid and is used to preserve bodies of deceased persons for both funeral purposes and in medical research in anatomical laboratories. How long a body remains preserved depends on the passage of time, the expertise of the embalmer, and factors such as the intended duration of preservation and its purpose. Typically, embalming fluid contains a mixture of formaldehyde, glutaraldehyde, methanol, and other solvents. The formaldehyde content generally ranges from 5–37% and the methanol content may range from 9–56%. In the United States alone, about 20 million liters (roughly 5.3 million gallons) of embalming fluid are used every year. Method of operation Embalming fluid acts to fix (denature) cellular proteins, meaning that they cannot act as a nutrient source for bacteria; embalming fluid also kills the bacteria themselves. Formaldehyde or glutaraldehyde fixes tissue or cells by irreversibly connecting a primary amine group in a protein molecule with a nearby nitrogen in a protein or DNA molecule through a -CH2- linkage called a Schiff base. The end result also simulates, via colour changes, the appearance of blood flowing under the skin. Modern embalming is not done with a single fixative. Instead, various chemicals are used to create a mixture, called an arterial solution, which is uniquely generated for the needs of each case. For example, a body needing to be repatriated overseas needs a higher index (percentage of diluted preservative chemical; the dilution arithmetic is sketched after the list of ingredients below) than one simply for viewing (known in the United States and Canada as a funeral visitation) at a funeral home before cremation or burial. Process Embalming fluid is injected into the arterial system of the deceased, while a trocar is inserted into the abdomen to treat the body cavity. The organs in the chest cavity and the abdomen are then punctured and drained of gas and fluid contents. Many other bodily fluids may also be displaced and removed from the body using the arterial system and, in the case of cavity treatment, aspirated from the body and replaced with a specialty fluid known as cavity fluid. Chemicals and additives It is important to distinguish between an arterial chemical (or fluid), which is generally taken to be the product in its original composition, and an arterial solution, which is a diluted mixture of chemicals made to order for each body. Non-preservative chemicals in an arterial solution are generally called "accessory chemicals" or co-/pre-injectants, depending on their time of utilization. Potential ingredients in an arterial solution include: Preservative (Arterial) Chemical. These are commonly a percentage-based (normally 18–37%) mixture of formaldehyde, glutaraldehyde or in some cases phenol, which is then diluted to reach the final index of the arterial solution. Methanol is used to hold the formaldehyde in solution. Formalin refers specifically to 37% aqueous formaldehyde and is not commonly used in funeral embalming but rather in the preservation of anatomical specimens. Water Conditioner. These are designed to balance the "hardness" of water (the presence of other trace chemicals that change the water's pH or neutrality) and to help reduce the deceased's acidity, a by-product of decomposition, as formaldehyde works best in an alkaline environment.
Additionally, water conditioners may be used to help inactivate chemotherapy drugs and antibiotics, which may bind to and render ineffectual the preservative chemical. Cell Conditioner. These chemicals act to prepare cells for absorption of arterial fluid and help break up clots in the bloodstream. Dyes. Active dyes are used to restore the body's natural colouration and counterstain against conditions such as jaundice, as well as to indicate the distribution of arterial fluid. Inactive dyes are used by the manufacturer of the arterial fluid to give a pleasant colour to the fluid in the bottle but do nothing for the appearance of the embalmed body. Humectants. These are added to dehydrated and emaciated bodies to help restore tissue to a more natural and hydrated appearance. Anti-Edemic Chemicals. The opposite of humectants, these are designed to draw excessive fluid (edema) from a body. Additional Disinfectants. For certain cases, such as tissue gas, speciality chemicals such as Omega Decomp Factor, Triton-28, STOP or Dispray (Topical) can be arterially injected. Water. Most arterial solutions are a mix of some of the preceding chemicals with tepid water. Cases done without the addition of water are referred to as "waterless"; waterless embalming is more common in difficult cases or those requiring a very high degree of preservation, such as instances of an extended delay between death and final disposition. Cavity Fluid. This is generally a very high-index formaldehyde or glutaraldehyde solution injected undiluted directly via the trocar incision into the body cavities to treat the viscera. In cases of tissue gas, phenol-based products are often used instead.
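The "index" dilution mentioned above is plain C1·V1 = C2·V2 arithmetic; a minimal Python sketch (the function name and example figures are illustrative, not drawn from any embalming standard):

```python
def concentrate_needed(solution_volume_ml: float,
                       target_index_pct: float,
                       fluid_index_pct: float) -> float:
    """Volume of concentrated arterial fluid needed for a diluted solution.

    Uses the dilution relation C1*V1 = C2*V2: the preservative content of
    the concentrate drawn must equal that of the finished solution.
    """
    return solution_volume_ml * target_index_pct / fluid_index_pct

# Example: 4000 ml of a 2% solution from a 25-index (25%) arterial fluid
fluid_ml = concentrate_needed(4000, 2.0, 25.0)
water_ml = 4000 - fluid_ml
print(fluid_ml, water_ml)   # 320.0 ml of concentrate, 3680.0 ml of water
```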
History Prior to the advent of the modern range of embalming chemicals, a variety of alternative additives were used by embalmers, including Epsom salts for edema cases, but these are of limited effectiveness and are best regarded as "embalmer tricks", as the validity of their use has never been demonstrated by professional embalmers or mortuary science programs. During the American Civil War, the Union Army, wanting to transport dead soldiers from the battlefields back home for burial, consulted with Dr. Thomas Holmes, who developed a technique that involved draining a corpse's blood and embalming it with a fluid made with arsenic for preservation. Embalming chemicals are generally produced by specialist manufacturers. The oldest embalming fluid company was founded as the Hill Fluid Company in 1878 and incorporated by Dr. A.A. Bakker as the Champion Company in 1880. Champion was owned and operated by the Bakker family until the death of Dr. Bakker's granddaughter in the late 1970s; it still operates today, family-owned by the Giankopulous family, and is located in Springfield, Ohio. The Frigid Fluid Company was founded in 1892, followed by the Dodge Company in 1893, with other companies including Egyptian (now U.S. Chemical) as well as Kelco Supply Company (formerly L H Kellogg), Pierce Chemical Company (now owned by The Wilbert Company), Bondol Chemical Company, and Hydrol Chemical Company. There are many smaller and regional producers as well. Some funeral homes produce their own embalming fluids, although this practice has declined in recent decades as commercially available products have become better in quality and more readily available. Following the EU biocides legislation, some pressure was brought to reduce the use of formaldehyde, which the IARC classifies as a Group 1 carcinogen. There are alternatives to formaldehyde- and phenol-based fluids, but these are technically not preservatives but rather sanitising agents, and they are not widely accepted. The Champion Company has long been attentive to embalmer safety, creating and distributing lower-exposure fluids with less HCHO (formaldehyde); by the 1990s, Champion was the first to create and distribute HCHO-free fluids, which only Champion and the Dodge Company sell. Environmental effects Despite genuine concerns, formaldehyde is a naturally occurring substance, of which human beings produce approximately 1.5 oz a day as a normal part of a healthy metabolism. Formaldehyde also occurs naturally in many fruits and vegetables, such as bananas, apples, and carrots, and does not bioaccumulate in either plants or animals. Formaldehyde works by fixating the tissue of the deceased. This is the characteristic that also makes concentrated formaldehyde hazardous when not handled using appropriate personal protective equipment. The carbon atom in formaldehyde, CH2O, carries a slight positive charge due to the high electronegativity of the oxygen double-bonded to the carbon. The electropositive carbon will react with negatively charged molecules and other electron-rich species. As a result, the carbon in the formaldehyde molecule bonds with electron-rich nitrogen groups called amines found in plant and animal tissue. This leads to formaldehyde cross-linking, bonding proteins with other proteins and DNA, rendering them dysfunctional or no longer useful. This is the reason for the usage of formaldehyde as a preservative: it prevents cellular decay and renders the tissue unsuitable for use as a nutrient source for bacteria. Formaldehyde is carcinogenic in humans and animals at excessive levels because cross-linked DNA can keep cells from halting the replication process, and this unwarranted replication of cells can lead to cancer. Unicellular organisms found in the soil and groundwater are also quite sensitive to cross-linking, experiencing damage at concentrations of 0.3 mg to 22 mg per liter. Formaldehyde also affects aquatic invertebrates, with crustaceans being the most sensitive type; the range of concentrations damaging them is 0.4 mg to 20 mg per liter. Formaldehyde released from the cremation of embalmed bodies enters the atmosphere and can remain suspended for up to 250 hours. It is readily soluble in water, so it will bond with moisture in the atmosphere and rain down onto plants, animals, and water supplies below. As a result, formaldehyde content in precipitation can range from 110 μg to 1380 μg per liter. These concerns notwithstanding, according to the American Chemistry Council, formaldehyde, as a ubiquitous chemical produced by living beings, is eminently biodegradable by both sunlight in air and bacteria in soil and water. The growth of the environmental movement has caused some people to consider green burials, in which either no aldehyde-based chemicals are used in the embalming process, or there is no embalming process at all. Embalming fluid meeting specific criteria for such burials is commercially available, and although it is not as effective as aldehyde-based solutions, it is approved by the Green Burial Association of America.
The Champion Company has also created and distributed a fourth generation of fluids, called "Enigma", introduced in the early 2000s; all of Champion's Enigma products have been approved by the Green Burial Council. See also Glass House (British Columbia) – a building in British Columbia constructed with empty embalming fluid bottles References Further reading Abrams, J.L. (2008). Embalming. Frederick, L.G.; Strub, Clarence G. [1959] (1989). The Principles and Practice of Embalming, 5th ed., Dallas, TX: Professional Training Schools Inc & Robertine Frederick. Mayer, Robert G. (2000). Embalming: History, Theory and Practice, 3rd ed., McGraw-Hill/Appleton & Lange. External links Official Champion Company Website Official Kelco Supply Company Website Official Pierce Chemical Company Website Official Frigid Fluid Company Website Official Trinity Fluids, LLC website Official Dodge Company Website Official Aardbalm Company Website CGI Embalming using Aardbalm Chemical substances by use Death customs Anatomical preservation
Embalming chemicals
Chemistry
2,417
138,888
https://en.wikipedia.org/wiki/VESA%20Display%20Power%20Management%20Signaling
VESA Display Power Management Signaling (VESA DPMS) is a standard from the VESA consortium for power management of video monitors. Example usage includes turning the monitor off, or putting it into standby after a period of idle time, to save power. Some commercial displays also incorporate this technology. History VESA issued DPMS 1.0 in 1993, basing their work on the United States Environmental Protection Agency's (EPA) earlier Energy Star power management specifications. Subsequent revisions were included in later VESA BIOS Extensions. Design The standard defines how to signal on the H-sync and V-sync pins of a standard SVGA monitor to trigger the monitor's power-saving capabilities. DPMS defines four modes: normal (on), standby, suspend and off. When in the "off" state, some power may still be drawn in order to power indicator lights. The standard is:

Mode | H-sync | V-sync | Power saving
On | present | present | none
Standby | absent | present | minimal
Suspend | present | absent | substantial
Off | absent | absent | maximum

Reception By the late 1990s, most new monitors implemented at least one DPMS level. DPMS does not define implementation details of its various power levels; while in a CRT-based display the three steps could logically be mapped to three blocks to be shut down in order of increasing savings, thermal stress, and warm-up time (video amplifier, deflection, filaments), not all designs would be trivially adaptable to this model, and neither would the technologies that commercially succeeded CRT monitors. Partly for this reason, most major operating environments (such as Microsoft Windows and the macOS family) do not implement the 3-level DPMS specification either, offering only a single timer. Notable exceptions include X11 and the XFCE Power Manager. DPMS competed with the alternative Nutek power-saving system, in which the monitor recognizes a black picture produced by a screensaver and enters a power-saving mode after an appropriate delay. See also Display Data Channel (DDC) References External links VESA Display Power Management Signaling (DPMS) Standard (requires purchase of the specification) VESA Standards Listing Display Power Management Signaling
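On an X11 system, the DPMS levels can be exercised from user space with the xset utility; the following Python sketch assumes an X11 session with xset installed (an assumption about the reader's environment, not something the VESA standard specifies):

```python
import subprocess

def dpms_force(level: str) -> None:
    """Force the monitor into a DPMS level via X11's xset utility.

    Valid levels for 'xset dpms force' are: on, standby, suspend, off.
    """
    if level not in {"on", "standby", "suspend", "off"}:
        raise ValueError(f"unknown DPMS level: {level}")
    subprocess.run(["xset", "dpms", "force", level], check=True)

# Set idle timeouts (seconds) for standby, suspend, and off, then
# force standby immediately.
subprocess.run(["xset", "dpms", "600", "900", "1200"], check=True)
dpms_force("standby")
```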
VESA Display Power Management Signaling
Engineering
415
62,018,822
https://en.wikipedia.org/wiki/Betty%20Sullivan
Betty Julia Sullivan (31 May 1902 – 25 December 1999) was an American biochemist who worked from the 1920s to the 1940s at the Russell Miller Milling Company. In 1947, Sullivan began her executive career as research director and vice-president for Russell Miller, until the company became part of the Peavey Company in 1958. After the merger, Sullivan remained in her executive roles before leaving in 1967 to co-found an agribusiness consulting company, Experience Inc., where she became director in 1975 and from which she retired in 1992. During her career, Sullivan was the first woman to receive the Osborne Medal from the American Association of Cereal Chemists, in 1948. In 1954, Sullivan was awarded the Garvan–Olin Medal from the American Chemical Society. Early life and education Sullivan was born on May 31, 1902, in Minneapolis, Minnesota; she was of Irish ancestry. Sullivan attended the University of Minnesota for her Bachelor of Science, graduating in 1922. In the mid-1920s, Sullivan left the United States and completed a master's degree at the University of Paris in 1925, then conducted research at the Pasteur Institute in 1926. In 1935, Sullivan returned to the University of Minnesota for a Doctor of Philosophy in biochemistry with a minor in organic chemistry. Sullivan wrote her Bachelor of Science thesis on the chemical reactions of pinene and her PhD thesis on the lipids found in wheat. Career In 1922, Sullivan started her chemistry career as a lab assistant for the Russell Miller Milling Company. While at Russell Miller, Sullivan was promoted to head chemist in 1927 and research director in 1947. While researching the food chemistry of wheat and flour, Sullivan simultaneously held the position of vice-president. After Russell Miller became a part of the Peavey Company in 1958, Sullivan continued her research and executive positions with Peavey while working in food processing to create new products. When Sullivan left Peavey in 1967, she co-created an agribusiness consulting company called Experience Inc. During her time with Experience Inc., Sullivan held various positions, including president in 1969 and director in 1975, before her 1992 retirement. Awards and honors In 1948, Sullivan became the first woman to be awarded the Osborne Medal by the American Association of Cereal Chemists. Sullivan was also awarded the Garvan–Olin Medal in 1954 by the American Chemical Society. Death On 25 December 1999, Sullivan died in Bloomington, Minnesota. References 1902 births 1999 deaths Recipients of the Garvan–Olin Medal American women chemists Food chemists 20th-century American women 20th-century American chemists University of Paris alumni Chemists from Minnesota
Betty Sullivan
Chemistry
525
7,114,714
https://en.wikipedia.org/wiki/Isostere
Isosteres are molecules or ions with similar shape and often similar electronic properties. Many definitions are available, but the term is usually employed in the context of bioactivity and drug development. A biologically active compound containing an isostere is called a bioisostere. This is frequently used in drug design: the bioisostere will still be recognized and accepted by the body, but its functions there will be altered compared to the parent molecule. History and additional definitions The isostere concept was formulated by Irving Langmuir in 1919, and later modified by Grimm. Hans Erlenmeyer extended the concept to biological systems in 1932. Classical isosteres were originally defined as atoms, ions and molecules that have identical outer shells of electrons. This definition has since been broadened to include groups that produce compounds which can sometimes have similar biological activities. Some evidence for the validity of this notion was the observation that some pairs, such as benzene, thiophene, furan, and even pyridine, exhibit similarities in many physical and chemical properties. Non-classical isosteres do not obey the above classifications, but they still produce similar biological effects in vivo. Non-classical isosteres may be made up of similar atoms, but their structures do not follow an easily definable set of rules. References Theoretical chemistry Drug discovery
Isostere
Chemistry,Biology
277
22,646,791
https://en.wikipedia.org/wiki/Oregon%20Iron%20Company%20Furnace
The Oregon Iron Company Furnace, or Oswego Iron Furnace, is an iron furnace used by the Oregon Iron Company, in Lake Oswego, Oregon's George Rogers Park, in the United States. The structure was added to the National Register of Historic Places in 1974 and underwent a major renovation in 2010. The current furnace is the only structure that remains of the original iron company, and is the oldest industrial landmark in the state of Oregon. History Before 1862, the majority of Oregon's iron came from the East Coast of the United States. Due to the long and difficult journey, these imported iron products were sold for up to ten times what they originally cost. Aaron K. Olds, a blacksmith who had worked with the first iron on Lake Superior, created the first iron from Oswego ore in 1862 in his forge, although iron ore had been discovered there as early as 1841 by Robert Moore, founder of Linn City. According to his daughter Ellenette Olds Booth, Olds had started the forge with the intention of smelting the iron ore. Smelting is an important step that would have allowed Olds to separate the iron from the raw ore, but the ore proved refractory, and Olds was unable to smelt it. However, Olds and his business partner H.S. Jacobs eventually turned a significant profit from creating horseshoe nails and miner's picks, which were displayed in Jacobs' wagon shop window on Portland's Front Street. Seeing Olds' profits, a group of investors from Portland recognized the economic opportunity in forming an iron company based in Oregon. The project was, in part, spearheaded by Olds' competitor Henry D. Green, who had purchased the water rights to Sucker Creek and a few acres of land above Oswego Landing in 1862. Water rights were important to hold within the industry, as they provided the steam power necessary to fuel the engines of the furnaces and work the iron. The area that he purchased is present-day George Rogers Park, and would later be used as the site of the Oregon Iron Company. Henry, along with his brother John and H.C. Leonard, was an influential investor, as the three also owned the Portland Water Company at the time. As stakeholders in the water industry, they hoped to have a hand in the production of iron in the West, rather than pay high prices for iron water pipes imported from the East or from other competitors. After the foundation of the Oregon Iron Company in 1865, William S. Ladd was elected president of the company, H.C. Leonard became the vice-president and the first plant superintendent, and Henry D. Green became the secretary and company director. Twenty stockholders had invested a total of $500,000 in the company by February 1865, with some stockholders being from New York and San Francisco. Water disputes and Eastern competition suspended the company several times before operations finally shut down indefinitely in 1894. Construction and renovation George D. Wilbur from Connecticut supervised the construction of the furnaces, which began in 1866, including the one that remains standing today. Wilbur modeled all the stone furnaces after the furnaces of the Barnum Richardson Company in Lime Rock, Connecticut. The furnaces are 32 feet tall with gothic-style arches lining the walls, 34 feet square at the base, and 26 feet square at the roof. These furnaces were constructed from material from various places; for instance, basalt from the north side of Lake Oswego was used in the stonework.
Firebrick was imported from Great Britain and used in the shafts, chimneys and heat exchangers of the furnaces. In order to melt down and refine the iron ore into workable metal, the stone walls of the furnaces were constructed to withstand temperatures of up to 2,800 °F (about 1,540 °C). Ultimately, the cost of construction was $126,000. In July 2009, work on the deteriorating arches of the Oregon Iron Company Furnace began. However, the discovery of more deterioration than previously expected meant more materials were necessary and pushed completion back by three months. Ultimately, it took the crews nine months and $918,000 to complete the renovation, which finished in March 2010. See also National Register of Historic Places listings in Clackamas County, Oregon National Register of Historic Places listings in Oregon Blast furnace Oswego Lake References External links Ann Fulton, "Oregon Iron & Steel Company." The Oregon Encyclopedia. Retrieved 2021-06-01. "Iron Industry History." Lake Oswego Preservation Society. Retrieved 2021-06-01. Jewel Lansing, "William S. Ladd (1826-1893)." The Oregon Encyclopedia. Retrieved 2021-06-01. Wikipedia Student Program 1866 establishments in Oregon Buildings and structures in Lake Oswego, Oregon Industrial buildings and structures on the National Register of Historic Places in Oregon Industrial buildings completed in 1866 Industrial furnaces National Register of Historic Places in Clackamas County, Oregon
Oregon Iron Company Furnace
Chemistry
1,009
24,042,988
https://en.wikipedia.org/wiki/Incidental%20take%20permit
An incidental take permit is a permit issued under Section 10 of the United States Endangered Species Act (ESA) to private, non-federal entities undertaking otherwise lawful projects that might result in the take of an endangered or threatened species. Application for an incidental take permit is subject to certain requirements, including preparation by the permit applicant of a conservation plan. "Take" is defined by the ESA as to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, or collect any threatened or endangered species. Harm may include significant habitat modification where it actually kills or injures a listed species through impairment of essential behavior (e.g., nesting or reproduction). In the 1982 ESA amendments, Congress authorized the United States Fish and Wildlife Service (through the Secretary of the Interior) to issue permits for the "incidental take" of endangered and threatened wildlife species (Section 10(a)(1)(B) of the ESA). Thus, permit holders can proceed with an activity, such as construction or other economic development, that may result in the "incidental" taking of a listed species. The 1982 amendment requires that permit applicants design, implement, and secure funding for a Habitat Conservation Plan (HCP) that minimizes and mitigates harm to the impacted species during the proposed project. The HCP is a legally binding agreement between the Secretary of the Interior and the permit holder. Note: This article contains public domain text from US government federal agencies. References External links US Fish and Wildlife Endangered Species Program. Office of Protected Resources, National Marine Fisheries Service. Endangered species Environmental law in the United States Habitat
Incidental take permit
Biology
330
29,874,968
https://en.wikipedia.org/wiki/Wengania
Wengania is a fossil alga, preserved as a non-differentiated, non-mineralized thallus under a millimeter in diameter. References Fossil algae
Wengania
Biology
31
21,689,422
https://en.wikipedia.org/wiki/Edge%20dominating%20set
In graph theory, an edge dominating set for a graph G = (V, E) is a subset D ⊆ E such that every edge not in D is adjacent to at least one edge in D. An edge dominating set is also known as a line dominating set. Figures (a)–(d) are examples of edge dominating sets (thick red lines). A minimum edge dominating set is a smallest edge dominating set. Figures (a) and (b) are examples of minimum edge dominating sets (it can be checked that there is no edge dominating set of size 2 for this graph). Properties An edge dominating set for G is a dominating set for its line graph L(G), and vice versa. Any maximal matching is always an edge dominating set. Figures (b) and (d) are examples of maximal matchings. Furthermore, the size of a minimum edge dominating set equals the size of a minimum maximal matching. A minimum maximal matching is a minimum edge dominating set; Figure (b) is an example of a minimum maximal matching. A minimum edge dominating set is not necessarily a minimum maximal matching, as illustrated in Figure (a); however, given a minimum edge dominating set D, it is easy to find a minimum maximal matching with |D| edges. Algorithms and computational complexity Determining whether there is an edge dominating set of a given size for a given graph is an NP-complete problem (and therefore finding a minimum edge dominating set is an NP-hard problem). Yannakakis and Gavril showed that the problem is NP-complete even in the case of a bipartite graph with maximum degree 3, and also in the case of a planar graph with maximum degree 3. There is a simple polynomial-time approximation algorithm with approximation factor 2: find any maximal matching. A maximal matching is an edge dominating set; furthermore, a maximal matching M can be at worst 2 times as large as a smallest maximal matching, and a smallest maximal matching has the same size as the smallest edge dominating set. The edge-weighted version of the problem can also be approximated within factor 2, but the algorithm is considerably more complicated. Chlebík and Chlebíková showed that finding a better than (7/6)-approximation is NP-hard. Schmied and Viehmann proved that the problem is UGC-hard to approximate to within any constant better than 3/2. References Yannakakis, M.; Gavril, F. (1980), "Edge dominating sets in graphs". Minimum edge dominating set (optimisation version) is the problem GT3 in Appendix B (page 370), and minimum maximal matching (optimisation version) is the problem GT10 in Appendix B (page 374), of Ausiello et al., Complexity and Approximation. Edge dominating set (decision version) is discussed under the dominating set problem, which is the problem GT2 in Appendix A1.1 of Garey & Johnson, Computers and Intractability; minimum maximal matching (decision version) is the problem GT10 in Appendix A1.1. Richard Schmied & Claus Viehmann (2012), "Approximating edge dominating set in dense graphs", Theor. Comput. Sci. 414(1), pp. 92–99. External links Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, Gerhard Woeginger (2000), "A compendium of NP optimization problems": Minimum Edge Dominating Set, Minimum Maximal Matching. NP-complete problems Computational problems in graph theory
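The 2-approximation described above is simply "take any maximal matching"; a minimal Python sketch (the edge-list representation is my own choice):

```python
def maximal_matching(edges):
    """Greedy maximal matching: a 2-approximation for minimum edge dominating set.

    Every edge not in the matching shares an endpoint with some matched edge
    (otherwise it would have been added), so the matching is edge dominating.
    """
    matched_vertices = set()
    matching = []
    for u, v in edges:
        if u not in matched_vertices and v not in matched_vertices:
            matching.append((u, v))
            matched_vertices.update((u, v))
    return matching

# Example: a path on 5 vertices; {(0,1), (2,3)} dominates all four edges.
print(maximal_matching([(0, 1), (1, 2), (2, 3), (3, 4)]))
```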
Edge dominating set
Mathematics
694
75,619,418
https://en.wikipedia.org/wiki/Hydrocodone/guaifenesin
Hydrocodone/guaifenesin, sold under the brand name Obredon among others, is a fixed-dose combination medication used for the treatment of cough. It contains hydrocodone, as the bitartrate, an opioid agonist; and guaifenesin, an expectorant. It is taken by mouth. Hydrocodone/guaifenesin was approved for medical use in the United States in 2014. Adverse effects In the US, the label for hydrocodone/guaifenesin contains a black box warning about addiction, abuse, and misuse. References Combination drugs Expectorants
Hydrocodone/guaifenesin
Chemistry
137
11,512,414
https://en.wikipedia.org/wiki/Fomes%20meliae
Fomes meliae is a plant pathogen that causes wood rot on nectarine, peach, and Platanus sp. (sycamore). See also List of Platanus diseases List of peach and nectarine diseases References Fungi described in 1897 Fungal tree pathogens and diseases Stone fruit tree diseases Polyporaceae Taxa named by Lucien Marcus Underwood Fungus species
Fomes meliae
Biology
71
66,042,896
https://en.wikipedia.org/wiki/Union%20of%20Textiles%2C%20Chemicals%20and%20Paper
The Union of Textiles, Chemicals and Paper (GTCP) was a trade union representing workers in various industries in Switzerland. In 1903, various local unions of dyers, trimmers, weavers and embroiderers formed a loose federation. In 1908, this was reformed as the more centralised Swiss Textile Workers' Union. It affiliated to the Swiss Trade Union Federation in 1914, although this prompted most of the weavers and embroiderers, not yet working in factories, to leave and form an independent union, rejoining only in 1948. By 1919, the union had 23,991 members, but this fell to 7,626 in 1925 and remained low for the following decades. In 1926, the Union of Paper and Graphical Assistants was dissolved, the paper workers transferring to the Swiss Textile Workers' Union. In 1937, the union renamed itself as the Union of Textile and Factory Workers, reflecting its interest in organising workers not previously organised by any union. The bulk of these workers were in the chemical industry, and recruitment was hugely successful, membership reaching 38,648 in 1946, during a period in which the union was involved in several strikes. After 1947, the union avoided industrial action, and its membership steadily fell. In 1963, it renamed itself as the GTCP. By 1991, it had only 11,581 members, of whom 70% worked in the chemical industry, 20% in textiles, and 10% in paper. In 1993, it merged with the Union of Construction and Wood, to form the Union of Construction and Industry. References Chemical industry trade unions Textile and clothing trade unions Trade unions established in 1908 Trade unions disestablished in 1993 Trade unions in Switzerland
Union of Textiles, Chemicals and Paper
Chemistry
345
76,164,635
https://en.wikipedia.org/wiki/Blue%20light%20spectrum
The blue light spectrum, characterized by wavelengths between 400 and 500 nanometers, has a broad impact on human health, influencing numerous physiological processes in the human body. Although blue light is essential for regulating circadian rhythms, improving alertness, and supporting cognitive function, its widespread presence has raised concerns about its possible effects on general well-being. Prolonged exposure to blue light poses hazards to the well-being of the eye and may cause symptoms like dry eyes, fatigue, and blurred vision. As dependence on digital devices and artificial lighting increases, it is crucial to understand the pathways by which blue light affects biological processes. To reduce the hazards of blue light exposure, effective management strategies can be implemented, including limiting screen time before bed and using blue light filters. The blue light spectrum is an essential part of the visible spectrum, with wavelengths of about 400–480 nm. Blue light is generated primarily by light-emitting diode (LED) lighting and digital screens, and it has become prevalent in the world around us. LED lighting creates white light by combining blue light with other wavelengths, often with a yellow garnet phosphor. Digital screens, including computers, smartphones, and tablets, emit significant amounts of blue light, contributing to constant exposure throughout the day and night. Blue light has a significant impact on numerous physiological processes in human health, and its widespread use in modern technology raises concerns about the potential consequences of excessive exposure. Such exposure has been associated with disruptions in ocular health, sleep patterns, and well-being. Sources Natural Sunlight is the primary natural source of blue light, which is essential for regulating the circadian rhythm. Excessive exposure to sunlight without proper eye protection can lead to eye damage and cause vision issues. Artificial LED lighting, digital screens, and fluorescent bulbs are examples of common artificial blue light sources. LED lighting is widely used due to its durability and energy efficiency. It emits more blue light than traditional incandescent bulbs, potentially impacting the quality of sleep and eye health if used excessively at night. Blue light is emitted by digital screens such as computers, tablets, smartphones, and televisions, which can lead to extended exposure in modern lives. Digital screen overuse, especially before bed, can cause dry eyes, eye strain, and irregular sleep patterns. Fluorescent lighting emits blue light and is frequently used in public areas and workplaces. Long-term use of fluorescent light bulbs can cause eye strain, exhaustion, and circadian rhythm problems, especially in interior spaces with little natural light exposure. Mechanism The short wavelength and high energy of blue light make it highly effective at penetrating the human eye and inducing biological effects, as the quick calculation below illustrates.
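This "short wavelength, high energy" relationship is the Planck relation E = hc/λ; a quick Python illustration (the comparison wavelengths are chosen for this example):

```python
# Photon energy E = h*c/lambda: shorter wavelength means higher energy.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

for name, wavelength_nm in [("blue (400 nm)", 400),
                            ("blue (480 nm)", 480),
                            ("red (650 nm)", 650)]:
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name}: {energy_eV:.2f} eV")

# blue (400 nm): 3.10 eV -- roughly 60% more energetic than 650 nm red light
```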
Effects on cornea The cornea is located at the front of the eyeball and serves as the initial point where light enters the eye. Blue light exposure increases the production of reactive oxygen species (ROS) in corneal epithelial cells. This activates an ROS-mediated signalling pathway, triggering inflammation in human corneal epithelial cells. Oxidative damage and potential cell death contribute to inflammation in the eye and the development of dry eyes. Blue light also disrupts the balance of the tear film on the cornea: prolonged exposure leads to an increased rate of tear evaporation, resulting in dryness of the cornea and the development of dry eye syndrome. Effects on lens The lens lies behind the pupil, where light enters the eyeball, and is capable of filtering blue light, reducing the occurrence of retinal light damage. Blue light is absorbed by the structural proteins, enzymes, and protein metabolites found in the lens. This absorption creates yellow pigments in the lens's proteins, and the lens progressively darkens and turns yellow. By absorbing blue light, the lens prevents it from reaching the retina at the back of the eye, but it does so at the cost of lowered transparency. This reaction causes visual impairment and the development of cataracts, cloudy regions in the lens. Cumulative exposure to blue light also increases the production of ROS (free radicals) in the mitochondria of human lens epithelial cells (hLECs). Accumulation of oxidative damage by free radicals in the lens contributes to the development of cataracts. Effects on retina The retina, located at the back of the eye, is a receiver of light signals and plays a crucial part in the process of visual formation. Blue light can induce photochemical damage to the retina by passing through the lens and into the retina. Two primary types of cells contribute to vision formation within the retina: photoreceptors (rod and cone cells) and retinal pigment epithelium (RPE) cells. Photoreceptors detect light and convert it into neural signals, initiating the visual process. RPE cells are located below the photoreceptor layer and maintain the integrity and functionality of the retina. The primary cause of blue light's effects on the retina is the production of ROS, which leads to oxidative stress: an imbalance between the generation of harmful reactive free radicals and the body's ability to detoxify them. Retinal chromophores like lipofuscin and melanin absorb light energy, causing the generation of ROS and oxidative damage to retinal cells. The accumulation of oxidative stress from excessive exposure to blue light causes photochemical damage to the retina. Phototoxicity is caused by lipofuscin, which builds up inside RPE cells as a consequence of photoreceptor metabolism and is enhanced by exposure to blue light. This oxidative stress damages DNA integrity and interferes with protein homeostasis and mitochondrial activity within retinal cells, potentially contributing to cellular damage, retinal degeneration and eyesight impairment. Physiological effects The impact of blue light exposure on human health highlights the significance of reducing blue light exposure, particularly when using screens for prolonged periods of time, to protect ocular health and reduce the risk of vision-related issues. Sleep disturbance The circadian rhythm governs the sleep-wake cycle over a roughly 24-hour period and is regulated by the suprachiasmatic nucleus (SCN) in the brain. The SCN communicates with specialised cells called intrinsically photosensitive retinal ganglion cells (ipRGCs) to synchronise internal biological clocks with external light-dark cycles. When ipRGCs are activated by blue light, a signalling cascade is initiated, enabling the alignment of internal biological clocks with environmental light cues.
Exposure to blue light during daylight hours suppresses the secretion of melatonin, a hormone critical for circadian rhythm regulation. Melatonin is synthesised by the pineal gland, located in the middle of the brain, in response to darkness, signalling the body's transition to sleep. Exposure to blue light at night, however, disrupts the production and release of melatonin, leading to sleep disturbances. Melatonin is released into the blood circulation to reach target tissues in the central and peripheral regions. The amount of blue light received by ipRGCs regulates the circadian rhythm to control cycles of alertness and sleepiness: the more light stimulation, the fewer signals are transmitted through the SCN of the hypothalamus to the pineal gland to produce melatonin. Blue light exposure, particularly in the evening or at night, therefore suppresses the production and release of melatonin. When light stimulates and activates the SCN, the paraventricular nucleus (PVN) of the hypothalamus receives more signals from the neurotransmitter GABA, an inhibitory neurotransmitter that helps control neuronal activity. Both the PVN pathway and the pineal gland decrease in activity in response, suppressing the release of melatonin. This suppression disrupts the body's natural circadian rhythm and interferes with the body's ability to fall asleep and achieve a restful sleep state, potentially leading to sleep disorders such as insomnia. Ocular health Harmful effects on the eye after prolonged exposure to blue light, particularly from digital screens or fluorescent lamps, have been observed. Systematic reviews have highlighted the association between blue light exposure and digital eye strain. Digital screens emit significant amounts of blue light, which has a shorter wavelength and higher energy than other visible light and can cause symptoms such as eye fatigue, eye dryness, blurred vision, irritation, and headaches. Blue light exposure can lead to light-induced damage to the retina, a phenomenon known as photochemical damage. When the eye is exposed to excessive levels of blue light from sources such as digital screens, a series of photochemical reactions can be stimulated within the retina. These reactions produce ROS, inducing oxidative stress and damaging cellular components of the eye such as ipRGCs. Management The management of blue light exposure is crucial for preventing associated eye disorders and promoting overall well-being. By taking preventive measures to manage blue light exposure, people can promote healthier lifestyles, preserve eye and general health, and lessen the risk of related problems such as digital eye strain and sleep disturbances. Limit on screen time Limiting screen time is effective, especially before sleep. Research has shown that higher average screen time is correlated with eye fatigue and discomfort, and growing evidence suggests that insufficient sleep, in both quantity and quality, may negatively affect the physical and mental functioning of young people. Establishing a consistent bedtime routine that includes reducing electronic device usage before sleep can optimise the production of melatonin, enhancing sleep quality. Stopping the use of digital devices an hour before bedtime has been shown to increase the quality and length of sleep.
Filtering lens Employing blue light-blocking eyewear, such as glasses with specialised lenses, offers an additional means of protection against excessive blue light exposure, particularly for individuals with extended screen time. Studies have been conducted on blue light filtering eyeglasses, which use special blue light-blocking lenses to protect the eyes. All visible wavelengths can be transmitted through the spectacle lens, but portions of the blue-violet spectrum are selectively attenuated by coatings on the specially designed front and back surfaces of the lens. Blue-light filtering glasses may lessen the symptoms of digital eye strain and help prevent phototoxic retinal damage. Various blue light filter options are available on the current eyeglasses market at different price points. Digital Screen Use in the Workplace Generally, over the past five to ten years, digital screen use has increased substantially with the rise of smartphone, tablet, and computer usage. It increased dramatically during the COVID-19 pandemic, as at-home office setups became common for professional work. Since the pandemic, these remote working solutions have remained popular, and people now work remotely more than ever. This shift from primarily natural lighting during work and school days to a mixture that includes artificial blue light has led researchers to examine how much blue light exposure people receive, the health impacts it causes, and which preventative measures are effective in blocking blue light. Blue light exposure during daylight hours keeps our biological needs in balance and affects body and mind, helping regulate human behavior and the circadian rhythm; overexposure, however, can lead to harmful health effects. Workplace Blue Light Exposure Office workplaces around the globe have experienced major change since March 2020. Before the COVID-19 pandemic, a typical office worker completed daily tasks in an office setting, where meetings, conferences, and tasks could be handled in person, placing a limit on how much time workers spent on their computers or phones. As more office workplaces switch to remote working, every aspect of an employee's job must be completed using technology. Those who use electronic displays every day are exposed to greater amounts of blue light than those whose exposure comes mainly from ambient daylight. Computer Vision Syndrome Increased exposure to blue light via digital screens can harm ocular health by contributing to a condition known as Computer Vision Syndrome (CVS), or digital eye strain. CVS describes a group of vision problems associated with computer use; about 70% of computer users are affected. Symptoms of CVS include eyestrain, headaches, blurred vision, and dry eyes. CVS is identified via a comprehensive eye examination, using methods such as reviewing patient history to determine risk factors, visual acuity measurements, refraction examination, and evaluating eye focus. Preventative Measures The American Optometric Association (AOA) recommends adjusting how a computer is viewed to prevent and treat CVS. According to the American Optometric Association: "Optimally, the computer screen should be 15 to 20 degrees below eye level (about 4 or 5 inches) as measured from the center of the screen and 20-28 inches from the eyes."
Reference materials should be positioned so that the head does not need to move when looking back and forth between the document and the screen; ideally, they should sit above the keyboard and below the monitor. A document holder placed beside the monitor is a helpful tool for achieving this. The computer screen should also be positioned to avoid glare from overhead lighting and windows. Using curtains or blinds on nearby windows, desk lamps, and screen glare filters, and switching overhead light bulbs to lower-wattage ones, can help prevent the development of CVS. The AOA also recommends taking rest breaks when working on computers via the 20-20-20 method: every 20 minutes, take a 20-second break and look at something 20 feet away. Blinking frequently is also recommended to prevent the development of dry eyes, as blinking helps keep the surface of the eye moist. Related Research In 2021, a group of researchers (Hwang Y, Shin D, Eun J, Suh B, Lee J) conducted a research project studying computer-based interventions for CVS. The study explored the interface elements of such interventions in order to set design guidelines based on the pros and cons of each element. It found that technology-based solutions that induce eye resting reduce the prevalence of CVS in computer users, and that customizable interfaces enhance user engagement with the intervention system. According to the study, "Among the various interface elements that are being implemented in computer-based interventions for CVS, we found that the instruction page of the eye resting strategy, goal setting for eye resting, compliment feedback after completing eye resting, mid-size popup window, and symptom-like visual effects that provide an alarm for the eye resting time greatly affected user participation in the eye resting behavior." Future research based on these findings should aim to explore technologies such as facial-recognition and eye-tracking software to create more personalized CVS interventions. References Electromagnetic spectrum
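To make the 20-20-20 method described above concrete, here is a minimal reminder-timer sketch; the console-print interface and the three-cycle default are arbitrary implementation choices, not part of the AOA guidance.

```python
# A minimal sketch of a 20-20-20 reminder for a console session.
# The work interval and break length follow the AOA guidance quoted
# above; everything else is an arbitrary implementation choice.
import time

WORK_MINUTES = 20
BREAK_SECONDS = 20

def reminder_loop(cycles: int = 3) -> None:
    """Print a break prompt every 20 minutes for a few work cycles."""
    for _ in range(cycles):
        time.sleep(WORK_MINUTES * 60)          # work interval
        print("Look at something ~20 feet away for 20 seconds.")
        time.sleep(BREAK_SECONDS)              # rest interval
        print("Break over - back to work.")

if __name__ == "__main__":
    reminder_loop()
```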
Blue light spectrum
Physics
3,124
5,260,755
https://en.wikipedia.org/wiki/Hubble%3A%2015%20Years%20of%20Discovery
Hubble – 15 Years of Discovery is a book that formed part of the European Space Agency's 15th anniversary celebration activities for the 1990 launch of the NASA/ESA Hubble Space Telescope. Its main emphasis is on the exquisite Hubble images that have enabled astronomers to gain entirely new insights into the workings of a huge range of different astronomical objects. Hubble has provided the visual overview of the underlying astrophysical processes taking place in these objects, ranging from planets in the Solar System to galaxies in the young Universe. This book shows the close relationship between results of great scientific value and images of eye-catching beauty and artistic potential. The book, published by Springer, has 120 pages, measures 30 x 25 cm and is in full colour. It has been translated into Finnish, Portuguese and German. Some versions include a copy of the Hubble – 15 Years of Discovery documentary (distributed in 860,000 copies). The book is authored by Lars Lindberg Christensen and Bob Fosbury. It is illustrated by Martin Kornmesser. External links Hubble - 15 Years of Discovery The European Homepage for the NASA/ESA Hubble Space Telescope Hubble Heritage Project Space Telescope European Coordinating Facility ESO/ ST-ECF Science Archive European Space Agency Hubble Space Telescope
Hubble: 15 Years of Discovery
Astronomy
258
37,994,267
https://en.wikipedia.org/wiki/HD%20102839
HD 102839 is a class G6Ib (yellow supergiant) star in the constellation Musca. Its apparent magnitude is 4.98 and it is approximately 1,550 light years away from Earth based on parallax. References Musca G-type supergiants 102839 4538 057696 PD-69 01595
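As a quick consistency check on the quoted distance, the parallax relation d(pc) = 1/p(arcsec) can be applied; the 2.1-milliarcsecond parallax below is an assumed value chosen to reproduce the roughly 1,550 light-year figure, not a measurement quoted in this article.

```python
# Distance from parallax: d[pc] = 1 / p[arcsec]; 1 pc = 3.2616 ly.
# The 2.1 mas parallax here is an assumed illustrative value chosen
# to match the article's ~1,550 ly figure, not a quoted measurement.
PC_TO_LY = 3.2616

def parallax_to_ly(parallax_mas: float) -> float:
    parsecs = 1.0 / (parallax_mas / 1000.0)  # mas -> arcsec -> pc
    return parsecs * PC_TO_LY

print(f"{parallax_to_ly(2.1):.0f} ly")  # ~1553 ly
```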
HD 102839
Astronomy
75
12,634,546
https://en.wikipedia.org/wiki/Critical%20Reviews%20in%20Biomedical%20Engineering
Critical Reviews in Biomedical Engineering is a bimonthly peer-reviewed scientific journal published by Begell House covering biomedical engineering, bioengineering, clinical engineering, and related subjects. The editor-in-chief is Chenzhong Li. External links Biomedical engineering journals Bimonthly journals English-language journals Begell House academic journals
Critical Reviews in Biomedical Engineering
Engineering,Biology
68
18,436,662
https://en.wikipedia.org/wiki/Rational%20motion
In kinematics, the motion of a rigid body is defined as a continuous set of displacements. One-parameter motions can be defined as a continuous displacement of a moving object with respect to a fixed frame in Euclidean three-space (E3), where the displacement depends on one parameter, mostly identified as time. Rational motions are defined by rational functions (ratios of two polynomial functions) of time. They produce rational trajectories, and therefore they integrate well with the existing NURBS (Non-Uniform Rational B-Spline) based industry-standard CAD/CAM systems. They are readily amenable to the applications of existing computer-aided geometric design (CAGD) algorithms. By combining kinematics of rigid body motions with NURBS geometry of curves and surfaces, methods have been developed for computer-aided design of rational motions. These CAD methods for motion design find applications in animation in computer graphics (key frame interpolation), trajectory planning in robotics (taught-position interpolation), spatial navigation in virtual reality, computer-aided geometric design of motion via interactive interpolation, CNC tool path planning, and task specification in mechanism synthesis. Background There has been a great deal of research in applying the principles of computer-aided geometric design (CAGD) to the problem of computer-aided motion design. In recent years, it has been well established that rational Bézier and rational B-spline based curve representation schemes can be combined with dual quaternion representation of spatial displacements to obtain rational Bézier and B-spline motions. Ge and Ravani developed a new framework for geometric constructions of spatial motions by combining the concepts from kinematics and CAGD. Their work was built upon the seminal paper of Shoemake, in which he used the concept of a quaternion for rotation interpolation. A detailed list of references on this topic can be found in the cited literature. Rational Bézier and B-spline motions Let $\hat{q} = q + \varepsilon q^0$ denote a unit dual quaternion. A homogeneous dual quaternion may thus be written as a pair of quaternions, $q$ and $q^0$, where $\varepsilon$ is the dual unit. This is obtained by expanding $\hat{q}$ using dual number algebra (here, $\varepsilon^2 = 0$). In terms of dual quaternions and the homogeneous coordinates $P$ of a point of the object, the transformation equation is a quaternion sandwich product in which $q^*$ and $(q^0)^*$, the conjugates of $q$ and $q^0$, act on $P$; $\tilde{P}$ denotes the homogeneous coordinates of the point after the displacement. Given a set of unit dual quaternions $\hat{q}_i$ and dual weights $\hat{w}_i$, respectively, the following represents a rational Bézier curve in the space of dual quaternions: $\hat{q}(t) = \sum_{i=0}^{n} B_i^n(t)\,\hat{w}_i\hat{q}_i$, where $B_i^n(t)$ are the Bernstein polynomials. The Bézier dual quaternion curve given by the above equation defines a rational Bézier motion of degree $2n$. Similarly, a B-spline dual quaternion curve, which defines a NURBS motion of degree $2p$, is given by $\hat{q}(u) = \sum_{i=0}^{m} N_{i,p}(u)\,\hat{w}_i\hat{q}_i$, where $N_{i,p}(u)$ are the $p$th-degree B-spline basis functions. A representation of the rational Bézier motion and rational B-spline motion in Cartesian space can be obtained by substituting either of the two preceding expressions for $\hat{q}$ in the point-transform equation. In what follows, we deal with the case of rational Bézier motion. The trajectory of a point undergoing rational Bézier motion is given by $\tilde{P}(t) = [H(t)]\,P$, where $[H(t)]$ is the matrix representation of the rational Bézier motion of degree $2n$ in Cartesian space. The following matrices (also referred to as Bézier control matrices) define the affine control structure of the motion:
In the above equations, $\binom{n}{i}$ are the binomial coefficients and $\hat{w}_i$ are the weight ratios. In the above matrices, $(q_1, q_2, q_3, q_4)$ are the four components of the real part $q$ and $(q^0_1, q^0_2, q^0_3, q^0_4)$ are the four components of the dual part $q^0$ of the unit dual quaternion $\hat{q}$. Example See also Quaternion and Dual quaternion NURBS Computer animation Robotics Robot kinematics Computational geometry CNC machining Mechanism design References External links Computational Design Kinematics Lab Robotics and Spatial Systems Laboratory (RASSL) Robotics and Automation Laboratory Kinematics
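The Bernstein-weighted sum above is easy to prototype. Below is a minimal sketch that evaluates a rational Bézier curve of ordinary (real) quaternions, the rotation-only slice of the dual quaternion construction; the control quaternions and scalar weights are made-up illustrative values, and full dual quaternion arithmetic (the $\varepsilon q^0$ part) is omitted.

```python
# Minimal sketch: evaluate a rational Bezier curve of quaternions,
#   q(t) = sum_i B_i^n(t) * w_i * q_i,
# the rotation-only slice of the dual-quaternion construction above.
# Control quaternions and weights are made-up illustrative values.
from math import comb

def bernstein(n: int, i: int, t: float) -> float:
    """Bernstein polynomial B_i^n(t) = C(n,i) * t^i * (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_quaternion(ctrl, weights, t):
    """Weighted Bernstein combination, renormalized to unit length."""
    n = len(ctrl) - 1
    q = [0.0, 0.0, 0.0, 0.0]
    for i, (qi, wi) in enumerate(zip(ctrl, weights)):
        b = bernstein(n, i, t) * wi
        q = [a + b * c for a, c in zip(q, qi)]
    norm = sum(c * c for c in q) ** 0.5
    return [c / norm for c in q]

ctrl = [(1, 0, 0, 0), (0.7071, 0.7071, 0, 0), (0, 1, 0, 0)]  # degree 2
weights = [1.0, 1.0, 1.0]
print(bezier_quaternion(ctrl, weights, 0.5))
```

Renormalizing the weighted sum keeps the result on the unit quaternion sphere, mirroring the projection step implicit in the rational (weighted) formulation.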
Rational motion
Physics,Technology
821
41,175
https://en.wikipedia.org/wiki/Frame%20slip
In the reception of framed data, a frame slip is the loss of synchronization between a received frame and the receiver clock signal, causing a frame misalignment event, and resulting in the loss of the data contained in the received frame. A frame slip should not be confused with a dropped frame where synchronization is not lost, as in the case of buffer overflow, for example. References Synchronization Data transmission
Frame slip
Engineering
92
20,332,795
https://en.wikipedia.org/wiki/Copyright%20aspects%20of%20hyperlinking%20and%20framing
In copyright law, the legal status of hyperlinking (also termed "linking") and that of framing concern how courts address two different but related Web technologies. In large part, the legal issues concern use of these technologies to create or facilitate public access to proprietary media content — such as portions of commercial websites. When hyperlinking and framing have the effect of distributing, and creating routes for the distribution of, content (information) that does not come from the proprietors of the Web pages affected by these practices, the proprietors often seek the aid of courts to suppress the conduct, particularly when the effect of the conduct is to disrupt or circumvent the proprietors' mechanisms for receiving financial compensation. Linking In Perfect 10, Inc. v. Google, Inc., 508 F.3d 1146 (9th Cir. 2007), the Ninth Circuit held that when Google stored thumbnail versions of Perfect 10's magazine images on its server to communicate them to Google's users, Google prima facie violated Perfect 10's copyright. But the court also held that Google had a valid fair use defense. Id. at 1163-64. The U.S. Court of Appeals for the Ninth Circuit, in the Perfect 10 case, held that when Google provided links to images, Google did not violate the provisions of the copyright law prohibiting unauthorized reproduction and distribution of copies of a work: "Because Google's computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act." This expedient has been challenged as copyright infringement. See the Arriba Soft and Perfect 10 cases (below). In the Perfect 10 case, Perfect 10 argued that Google's image pages caused viewers to believe they were seeing the images on Google's website. The court brushed this argument aside: "While in-line linking and framing may cause some computer users to believe they are viewing a single Google Web page, the Copyright Act, unlike the Trademark Act, does not protect a copyright holder against acts that cause consumer confusion." One expedient is to use an ordinary (deep) hyperlink to the image at the remote server, so that users must click a link on the hosting page to jump to the image; the HTML code would simply point to https://www.supremecourt.gov/images/sectionbanner13.png. This has been protested because it bypasses everything at the other site but the image. Such protests have been largely ineffective. This argument on Kelly's behalf is articulated in the amicus curiae brief supporting Kelly by the American Society of Media Photographers: "[I]t was the actual display of the full-size images of Kelly's work stripped from the original context that was not fair use. Merely linking to Kelly's originating home page, on the other hand, without free-standing display of the full-size images, would not run afoul of the fair use limits established by the Panel. It is striking that nowhere in [the adversaries'] briefs do they explain why linking could not be constructed in this fashion."
The case involved GeenStijl and Sanoma; in 2011, photos were leaked from an upcoming issue of the Dutch version of Playboy (published by Sanoma) and hosted on a website known as FileFactory. GeenStijl covered the leak by displaying a thumbnail of one of the images and linking to the remainder of the unauthorized copies. The court ruled in favor of Sanoma, arguing that GeenStijl's authors knowingly reproduced and communicated a copyrighted work to the public without the consent of its author, and that GeenStijl had profited from the unauthorized publication. Belgium Copiepresse v. Google In September 2006, Copiepresse, a Belgian association of French-speaking newspaper editors, sued Google and obtained an injunctive order from the Belgian Court of First Instance that Google must stop deep linking to Belgian newspapers without paying royalties, or else pay a fine of €1 million daily. Many newspaper columns were critical of Copiepresse's stance. Denmark Danish Newspaper Publishers Association v. Newsbooster The Bailiff's Court of Copenhagen ruled in July 2002 against the Danish website Newsbooster, holding, in a suit brought by the Danish Newspaper Publishers Association (DNPA), that Newsbooster had violated Danish copyright law by deep linking to newspaper articles on Danish newspapers' Internet sites. Newsbooster's service allowed users to enter keywords to search for news stories, and then provided deep links to the stories. The DNPA said that this conduct was "tantamount to theft." The court ruled in favor of the DNPA, not because of the mere act of linking but because Newsbooster used the links to gain commercial advantage over the DNPA, which was unlawful under the Danish Marketing Act. The court enjoined Newsbooster's service. home A/S v. Ofir A-S The Maritime and Commercial Court in Copenhagen took a somewhat different view in 2005 in a suit that home A/S, a real estate chain, brought against Ofir A-S, an Internet portal (OFiR) that maintains an Internet search engine. home A/S maintains an Internet website with a searchable database of its current realty listings. Ofir copied some database information, which the court held to be unprotected under Danish law; Ofir's search engine also provided deep links to the advertisements for individual properties that home A/S listed, thus bypassing the home page and search engine of home A/S. The court held that the deep linking did not create infringement liability. The court found that search engines are desirable as well as necessary to the function of the Internet; that it is usual for search engines to provide deep links; and that businesses that offer their services on the Internet must expect that deep links will be provided to their websites. Ofir's site did not use banner advertising, and its search engine allowed users, if they so chose, to go to a home page rather than directly to the advertisement of an individual property. The opinion does not appear to distinguish or explain away the difference in result from that of the Newsbooster case. Germany Holtzbrinck v. Paperboy In July 2003 a German Federal Superior Court held that the Paperboy search engine could lawfully deep link to news stories. An appellate court then overturned the ruling, but the German Federal Supreme Court reversed in favor of Paperboy. "A sensible use of the immense wealth of information offered by the World Wide Web is practically impossible without drawing on the search engines and their hyperlink services (especially deep links)," the German court said.
Decision I-20 U 42/11 Dusseldorf Court of Appeal 8 October 2011 In Germany, making content available to the public on a website by embedding the content with inline links now appears to be copyright infringement. This applies even though a copy of the image is never taken and kept, and even though the image is never "physically" part of the website. The Düsseldorf appeal court overruled the lower Court of First Instance in this case. The Defendant had included links on his blog to two photographs that appeared on the Claimant's website. No prior permission had been sought or obtained. Scotland Shetland Times Ltd v Wills The first suit of prominence in the field was Shetland Times Ltd v Wills, Scottish Court of Session (Edinburgh, 24 October 1996). The Shetland Times challenged Wills's use of deep linking to pages of the newspaper on which selected articles of interest appeared. The objection was that Wills thereby bypassed the front and intervening pages, which carried advertising and other material in which the plaintiff, but not the defendant, had an interest. The Times obtained an interim interdict (the Scots term for a preliminary injunction) and the case then settled. United States Washington Post v. Total News In February 1997 the Washington Post, CNN, the Los Angeles Times, Dow Jones (Wall Street Journal), and Reuters sued Total News Inc. for framing their news stories on the Total News Web page. The case was settled in June 1997, on the basis that linking without framing would be used in the future. Ticketmaster v. Microsoft In April 1997 Ticketmaster sued Microsoft in Los Angeles federal district court for deep linking. Ticketmaster objected to Microsoft's bypassing the home and intermediate pages on Ticketmaster's site, claiming that Microsoft had "pilfered" its content and diluted its value. Microsoft's Answer raised a number of defenses explained in detail in its pleadings, including implied license, contributory negligence, and voluntary assumption of the risk. Microsoft also argued that Ticketmaster had breached an unwritten Internet code, under which any website operator has the right to link to anyone else's site. A number of articles in the trade press derided Ticketmaster's suit. The case was settled in February 1999, on confidential terms; Microsoft stopped the deep linking and instead used a link to Ticketmaster's home page. A later case, Ticketmaster Corp. v. Tickets.com, Inc. (2000), yielded a ruling in favour of deep linking. Kelly v. Arriba Soft The first important U.S. decision in this field was that of the Ninth Circuit in Kelly v. Arriba Soft Corp. Kelly complained, among other things, that Arriba's search engine used thumbnails to deep link to images on his Web page. The court found that Arriba's use was highly transformative, in that it made available to Internet users a functionality not previously available, and that was not otherwise readily provided — an improved way to search for images (by using visual cues instead of verbal cues). This factor, combined with the relatively slight economic harm to Kelly, tipped the fair use balance decisively in Arriba's favour. As in other cases, Kelly objected to linking because it caused users to bypass his home page and intervening pages. He was unable, however, to show substantial economic harm. Kelly argued largely that the part of the copyright statute violated was the public display right (§ 106(5)). He was aware of the difficulties under the reproduction and distribution provisions (17 U.S.C.
§§ 106(1) and (3)), which require proof that the accused infringer trafficked in copies of the protected work. The court focused on the fair use defense, however, under which it ruled in Arriba's favour. Perfect 10 v. Amazon In Perfect 10, Inc. v. Amazon.com, Inc., the Ninth Circuit again considered whether an image search engine's use of thumbnails was a fair use. Although the facts were somewhat closer than in the Arriba Soft case, the court nonetheless found the accused infringer's use fair because it was "highly transformative." The court explained: We conclude that the significantly transformative nature of Google's search engine, particularly in light of its public benefit, outweighs Google's superseding and commercial uses of the thumbnails in this case. … We are also mindful of the Supreme Court's direction that "the more transformative the new work, the less will be the significance of other factors, like commercialism, that may weigh against a finding of fair use." In addition, the court specifically addressed the copyright status of linking, in the first U.S. appellate decision to do so: Google does not … display a copy of full-size infringing photographic images for purposes of the Copyright Act when Google frames in-line linked images that appear on a user's computer screen. Because Google's computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act. In other words, Google does not have any "material objects … in which a work is fixed … and from which the work can be perceived, reproduced, or otherwise communicated" and thus cannot communicate a copy. Instead of communicating a copy of the image, Google provides HTML instructions that direct a user's browser to a website publisher's computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user's computer screen. The HTML merely gives the address of the image to the user's browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user's computer screen. Google may facilitate the user's access to infringing images. However, such assistance raised only contributory liability issues and does not constitute direct infringement of the copyright owner's display rights. … While in-line linking and framing may cause some computer users to believe they are viewing a single Google Web page, the Copyright Act, unlike the Trademark Act, does not protect a copyright holder against acts that cause consumer confusion. State of U.S. law after Arriba Soft and Perfect 10 The Arriba Soft case stood for the proposition that deep linking and actual reproduction in reduced-size copies (or preparation of reduced-size derivative works) were both excusable as fair use because the defendant's use of the work did not actually or potentially divert trade in the marketplace from the first work; and also it provided the public with a previously unavailable, very useful function of the kind that copyright law exists to promote (finding desired information on the Web). The Perfect 10 case involved similar considerations, but more of a balancing of interests was involved.
The conduct was excused because the value to the public of the otherwise unavailable, useful function outweighed the impact on Perfect 10 of Google's possibly superseding use. Moreover, in Perfect 10, the court laid down a far-reaching precedent in favour of linking and framing, which the court gave a complete pass under copyright. It concluded that "in-line linking and framing may cause some computer users to believe they are viewing a single Google Web page, [but] the Copyright Act . . . does not protect a copyright holder against acts that cause consumer confusion." A February 2018 summary judgement from the District Court for the Southern District of New York created a new challenge to the established cases. In Goldman v. Breitbart, Justin Goldman, a photographer, posted his on-the-street photograph of Tom Brady with Boston Celtics GM Danny Ainge and others to Snapchat; the image became popular over social media such as Twitter amid rumors that Brady was helping with the Celtics' recruitment. Several news organizations subsequently published stories embedding the tweets with Goldman's photograph. Goldman took legal action against nine of these news agencies, claiming they violated his copyright. Judge Katherine Forrest decided in favour of Goldman, holding that the news sites had violated his copyright and rejecting elements of the Perfect 10 ruling. Forrest reasoned that the news agencies had to take specific steps to embed the tweets with the photograph in their stories, wrote the stories to highlight them, and were not providing an automated service like Google's search engine. References Computer law Copyright law Hypertext
Copyright aspects of hyperlinking and framing
Technology
3,210
1,349,885
https://en.wikipedia.org/wiki/Ductile%20iron
Ductile iron, also known as ductile cast iron, nodular cast iron, spheroidal graphite iron, spheroidal graphite cast iron and SG iron, is a type of graphite-rich cast iron discovered in 1943 by Keith Millis. While most varieties of cast iron are weak in tension and brittle, ductile iron has much greater impact and fatigue resistance, due to its nodular graphite inclusions. Augustus F. Meehan was awarded a patent in January 1931 for inoculating iron with calcium silicide to produce ductile iron, subsequently licensed as Meehanite, which is still produced. In October 1949 Keith Dwight Millis, Albert Paul Gagnebin and Norman Boden Pilling, all working for INCO, received a patent on a cast ferrous alloy using magnesium for ductile iron production. Metallurgy Ductile iron is not a single material but part of a group of materials which can be produced with a wide range of properties through control of their microstructure. The common defining characteristic of this group of materials is the shape of the graphite. In ductile irons, graphite is in the form of nodules rather than flakes as in grey iron. Whereas sharp graphite flakes create stress concentration points within the metal matrix, rounded nodules inhibit the creation of cracks, thus providing the enhanced ductility that gives the alloy its name. Nodule formation is achieved by adding nodulizing elements, most commonly magnesium (magnesium boils at 1100 °C while iron melts at 1500 °C) and, less often now, cerium (usually in the form of mischmetal). Tellurium has also been used. Yttrium, often a component of mischmetal, has also been studied as a possible nodulizer. Austempered ductile iron (ADI; i.e., austenite tempered) was discovered in the 1950s but was commercialized and achieved success only some years later. In ADI, the metallurgical structure is manipulated through a sophisticated heat treating process. Composition Elements such as copper or tin may be added to increase tensile and yield strength while simultaneously reducing ductility. Improved corrosion resistance can be achieved by replacing 15–30% of the iron in the alloy with varying amounts of nickel, copper, or chromium. Other ductile iron compositions often have a small amount of sulfur as well. Silicon, a graphite-forming element, can be partially replaced by aluminum to provide better oxidation protection. Applications Much of the annual production of ductile iron is in the form of ductile iron pipe, used for water and sewer lines. It competes with polymeric materials such as PVC, HDPE, LDPE and polypropylene, which are all much lighter than steel or ductile iron; being softer and weaker, these require protection from physical damage. Ductile iron is specifically useful in many automotive components, where strength must surpass that of aluminum but more expensive steel is not necessarily required. Other major industrial applications include off-highway diesel trucks, class 8 trucks, agricultural tractors, and oil well pumps. In the wind power industry, ductile iron is used for hubs and structural parts like machine frames, as it is suitable for large and complex shapes and high (fatigue) loads. Ductile iron is used in many piano harps (the iron plates which anchor piano strings). Ductile iron is used for vises, where regular cast iron or steel was previously common. The properties of ductile iron make it a significant upgrade in strength and durability over cast iron without having to use steel, which is expensive and has poor castability.
See also Malleable iron Wrought iron References Bibliography External links Ductile Iron Society Ductile Iron Pipe Research Association Cast iron Ferrous alloys
Ductile iron
Chemistry
771
70,584,868
https://en.wikipedia.org/wiki/Tetradecadienyl%20acetates
Various tetradecadienyl acetate compounds serve as insect mating pheromones, especially among the Pyralidae. These include: Prionoxystus robiniae mating attractant Acossus centerensis mating attractant Borkhausenia schefferella mating attractant Conistra vaccinii mating attractant (abbr. Z9,E11-14:Ac) Spodoptera littoralis and S. litura mating attractant and mating inhibitor. A female pheromone, it lures males. Used by McVeigh and Bettany 1986 and Downham et al., 1995 over the course of three years in a 99:1 with . Although they achieved good mating disruption, this did not result in lower egg mass or population. The results of Campion et al., 1980 suggest that this may be due to the need for other, minor female volatiles. Martinez et al., 1993 studied hormonal control of its synthesis in S. littoralis, finding that the reduction step may be controlled by pheromone biosynthesis activating neuropeptide. Plodia interpunctella mating inhibitor (abbr. Z9,E12-14:Ac) In 2006 the United States Environmental Protection Agency granted an exemption to permit use without regard to the residue on resulting food. This is thought to be the first registration for indoor use in the United States of any sex pheromone to disrupt mating. Produced by species: Adoxophyes fasciata synergistic attractant Anagasta kuehniella mating attractant produced by both male and female Cadra cautella female-produced mating attractant and mating inhibitor (found by Kuwahara et al., 1971) C. figulilella female-produced mating attractant Elasmopalpus lignosellus mating disruptor Ephestia elutella mating attractant Plodia interpunctella (also by Kuwahara 1971) References Insect pheromones Insecticides Insect ecology Biological control agents of pest insects Insect reproduction
Tetradecadienyl acetates
Chemistry
432
32,998,405
https://en.wikipedia.org/wiki/Abell%20383
Abell 383 is a galaxy cluster in the Abell catalogue. See also List of Abell clusters References 383 Abell richness class 2 Active galaxies Galaxy clusters Eridanus (constellation)
Abell 383
Astronomy
42
17,421,384
https://en.wikipedia.org/wiki/Chresonym
In biodiversity informatics, a chresonym is the cited use of a taxon name, usually a species name, within a publication. The term is derived from the Greek χρῆσις chresis meaning "a use" and refers to published usage of a name. The technical meaning of the related term synonym concerns different names that refer to the same object or concept. As noted by Hobart and Rozella B. Smith, zoological systematists had been using "the term (synonymy) in another sense as well, namely in reference to all occurrences of any name or set of names (usually synonyms) in the literature." Such a "synonymy" could include multiple listings, one for each place the author found a name used, rather than a summarized list of different synonyms. The term "chresonym" was created to replace this second sense of the term "synonym." The concept of synonymy is furthermore different in the zoological and botanical codes of nomenclature. A name that correctly refers to a taxon is further termed an orthochresonym, while one that is applied incorrectly for a given taxon may be termed a heterochresonym. Orthochresonymy Species names consist of a genus part and a species part to create a binomial name. Species names often also include a reference to the original publication of the name by including the author and sometimes the year of publication of the name. As an example, the sperm whale, Physeter catodon, was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. Thus, the name may also be referenced as Physeter catodon Linnaeus 1758. That name was also used by Harmer in 1928 to refer to the species in the Proceedings of the Linnean Society of London and, of course, it has appeared in numerous other publications since then. Taxonomic catalogues, such as Catalog of Living Whales by Philip Hershkovitz, may reference this usage with a Genus+species+authorship convention that may appear to indicate a new species (a homonym) when in fact it is referencing a particular usage of a species name (a chresonym). Hershkovitz, for example, refers to Physeter catodon Harmer 1928, which can cause confusion, as this name+author combination really refers to the same name that Linnaeus first published in 1758. Heterochresonymy Nepenthes rafflesiana, a species of pitcher plant, was described by William Jack in 1835. The name Nepenthes rafflesiana as used by Hugh Low in 1848 is a heterochresonym. Cheek and Jebb (2001) explain the situation thus: Low, ... accidentally, or otherwise, had described what we know as N. rafflesiana as Nepenthes × hookeriana and vice versa in his book "Sarawak, its Inhabitants and Productions" (1848). Masters was the first author to note this in the Gardeners' Chronicle..., where he gives the first full description and illustration of Nepenthes × hookeriana. The description that Maxwell Tylden Masters provided in 1881 for the taxon that had previously been known to gardeners as Nepenthes hookeriana (an interchangeable form of the name for the hybrid Nepenthes × hookeriana) differs from Low's description. The International Code of Nomenclature for algae, fungi, and plants does not require that descriptions from so long ago include specification of a type specimen, and types can be chosen later to fit these old names. Since the descriptions differ, Low's and Masters' names have different types. Masters therefore created a later homonym, which, according to the rules of the code, is illegitimate.
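The distinction between a name and a chresonym can be expressed as a small data model. The sketch below is a hypothetical illustration built on the sperm-whale example above; the class names and fields are invented for this example and do not represent an established biodiversity-informatics schema.

```python
# Hypothetical data model distinguishing a taxon name from a chresonym
# (a particular published usage of that name). Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaxonName:
    binomial: str
    author: str
    year: int          # original publication

@dataclass(frozen=True)
class Chresonym:
    name: TaxonName    # the name being used
    used_by: str       # author of the citing publication
    used_in: int       # year of the citing publication

linnaeus_1758 = TaxonName("Physeter catodon", "Linnaeus", 1758)
harmer_usage = Chresonym(linnaeus_1758, "Harmer", 1928)

# The chresonym cites the same name; it does not create a new homonym.
assert harmer_usage.name == linnaeus_1758
```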
See also Biodiversity Synonym (taxonomy) Glossary of scientific naming References Biodiversity Taxonomy (biology)
Chresonym
Biology
785
52,991,019
https://en.wikipedia.org/wiki/SPT-100
SPT-100 is a Hall-effect ion thruster, part of the SPT family of thrusters; SPT stands for Stationary Plasma Thruster. It creates a stream of xenon ions accelerated by an electric field and confined by a magnetic field. The thruster is manufactured by the Russian company OKB Fakel and was first launched onboard the Gals-1 satellite in 1994. In 2003, Fakel debuted a second generation of the thruster, called SPT-100B, and in 2011 it presented further upgrades in SPT-100M prototypes. As of 2011, SPT-100 thrusters were used in 18 Russian and 14 foreign spacecraft, including IPSTAR-II, Telstar-8, and the Ekspress A and AM constellations. Specifications See also PPS-1350 SPT-140 References External links Stationary plasma thrusters (PDF) Ion engines Spacecraft propulsion Spacecraft components
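For a rough sense of scale, a singly charged xenon ion accelerated through a potential V reaches v = sqrt(2qV/m). The 300 V value below is an assumed, typical Hall-thruster discharge voltage, not a specification quoted in this article.

```python
# Back-of-envelope exhaust velocity for a singly charged xenon ion,
# v = sqrt(2*q*V/m). The 300 V discharge voltage is an assumed,
# typical Hall-thruster value, not a figure from the article.
E_CHARGE = 1.602e-19      # C
XE_MASS = 2.18e-25        # kg (one xenon atom, ~131.3 u)

def ion_velocity(volts: float) -> float:
    return (2 * E_CHARGE * volts / XE_MASS) ** 0.5

print(f"{ion_velocity(300) / 1000:.1f} km/s")  # ~21 km/s
```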
SPT-100
Physics,Chemistry
190
4,348,888
https://en.wikipedia.org/wiki/Modular%20rocket
A modular rocket is a kind of multistage rocket with components that can be interchanged for different missions. Several such rockets use concepts such as unified modules to minimize manufacturing and transportation expenses and to optimize the support infrastructure for flight preparations. The National Launch System study (1991-1992) looked at future launchers in a modular (cluster) fashion. The concept has existed since the creation of NASA. Examples Saturn C A government commission, the "Saturn Vehicle Evaluation Committee" (better known as the Silverstein Committee), assembled in 1959 to recommend specific directions that NASA could take with the existing Army rocket program (Jupiter, Redstone, Sergeant). NASA's Space Exploration Program Council (1959-1963) was tasked with developing the launch architecture for the new Saturn rocket series, called Saturn C. The Saturn C architecture consisted of five different stages (S-I, S-II, S-III, S-IV, and S-V/Centaur) that could be stacked vertically in specific combinations to meet various NASA payload and mission requirements. This work led to the development of the Saturn I, Saturn IB, and Saturn V rockets. Atlas V The Atlas V expendable launch system uses the liquid-fueled Common Core Booster as its first stage. In many configurations, a single CCB is used with strap-on solid rocket boosters. A proposed configuration for heavier loads strapped together three CCBs for the first stage. The Common Core Booster utilizes the Russian-made RD-180, burning RP-1 fuel with liquid oxygen to produce a thrust of 3.8 MN. The liquid propellant tanks use an isogrid design for strength, replacing previous Atlas tank designs which were pressure stabilized. The length of the common core booster is , and has a diameter of . Delta IV The Delta IV launcher family used the liquid-fueled Common Booster Core as the first stage of the various rocket configurations. In most configurations a single CBC is used, with or without strap-on SRBs; three CBCs together formed the first stage of the Heavy configuration. The CBC used the Rocketdyne RS-68 engine and burned liquid hydrogen with liquid oxygen producing a thrust of . Angara The Universal Rocket Module (URM) is the modular liquid-fueled first stage of the Angara expendable launch system. Depending on the configuration, the first stage can consist of 1, 3, 5 or 8 URMs. Each URM uses a Russian-made RD-191 engine burning RP-1 fuel with liquid oxygen, producing a thrust of 1.92 MN. Falcon Heavy The Falcon Heavy launch vehicle consists of a strengthened Falcon 9 Block 5 center core with two regular Falcon 9 Block 5 core stages, with aerodynamic nosecones mounted on top of both, acting as liquid-fuel strap-on boosters. Each core is powered by nine Merlin 1D engines burning rocket-grade kerosene fuel with liquid oxygen, producing almost of thrust, with all three cores together producing over 22 MN of thrust. An early design of the Falcon Heavy included a unique propellant crossfeed capability, in which fuel and oxidizer for most of the center core's engines would be fed from the two side cores until the side cores were nearly empty and ready for the first separation event. However, due to its extreme complexity, this feature was cancelled in 2015, leaving each of the three cores to burn its own fuel.
Later evaluations revealed that the propellant needed for each side booster to land (for reuse) is already close to the margins, so there is little advantage to crossfeed. Like the single-stick Falcon 9, each Falcon Heavy booster core is reusable. The Falcon Heavy Test Flight demonstrated the two side boosters landing simultaneously near their launch site, while the central booster attempted a landing on SpaceX's autonomous spaceport drone ship, which resulted in a hard landing near the ship. During the second mission all three boosters landed softly. A Falcon Heavy launch that succeeds in recovering all three core boosters has the same material expenditure as the Falcon 9, i.e. the upper stage and potentially the payload fairing. As such, the difference in cost between a Falcon 9 and a Falcon Heavy launch is limited mainly to the extra propellant and to refurbishing three booster cores instead of one. See also Evolved Expendable Launch Vehicle Liquid Rocket Booster History: UR-700 External links EELV: The Next Stage of Space Launch Angara page by Khrunichev Space Center (Russian) Angara page on RussianSpaceWeb References Rocketry Spacecraft components
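The arithmetic of modular first stages is straightforward: total thrust scales with the module count. The sketch below applies the quoted 1.92 MN per URM to the Angara configurations described above, ignoring sea-level versus vacuum differences.

```python
# First-stage thrust for the Angara configurations described above,
# assuming each URM delivers the quoted 1.92 MN and that thrust
# simply sums across modules (sea-level/vacuum differences ignored).
URM_THRUST_MN = 1.92

for modules in (1, 3, 5, 8):
    print(f"{modules} URM(s): {modules * URM_THRUST_MN:.2f} MN")
# 1 -> 1.92, 3 -> 5.76, 5 -> 9.60, 8 -> 15.36 MN
```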
Modular rocket
Engineering
931
17,869,553
https://en.wikipedia.org/wiki/Early%20Cambrian%20geochemical%20fluctuations
The start of the Cambrian period is marked by "fluctuations" in a number of geochemical records, including strontium, sulfur and carbon isotopic excursions. While these anomalies are difficult to interpret, a number of possibilities have been put forward. They probably represent changes on a global scale, and as such may help to constrain possible causes of the Cambrian explosion. The chemical signature may be related to continental break-up, the end of a "global glaciation", or a catastrophic drop in productivity caused by a mass extinction just before the beginning of the Cambrian. Isotopes Isotopes are different forms of elements; they have a different number of neutrons in the nucleus, meaning they have very similar chemical properties, but different mass. The weight difference means that some isotopes are discriminated against in chemical processes – for example, plants find it easier to incorporate the lighter 12C than heavy 13C. Other isotopes are only produced as a result of the radioactive decay of other elements, such as 87Sr, the daughter isotope of 87Rb. Rb, and therefore 87Sr, is common in the crust, so the abundance of 87Sr in a sample of sediment (relative to 86Sr) is related to the amount of sediment which originated in the crust, as opposed to from the oceans. The ratios of three major isotope pairs, 87Sr/86Sr, 34S/32S and 13C/12C, undergo dramatic fluctuations around the beginning of the Cambrian. Carbon isotopes Carbon has 2 stable isotopes, carbon-12 (12C) and carbon-13 (13C). The ratio between the two is denoted δ13C, and reflects a number of factors. Because organic matter preferentially takes up the lighter 12C, an increase in productivity increases the δ13C of the rest of the system, and vice versa. Some carbon reservoirs are very isotopically light: for instance, biogenic methane, produced by bacterial decomposition, has a δ13C of −60‰ – vast, when 1‰ is a large fluctuation! An injection of carbon from one of these reservoirs could therefore account for the early Cambrian drop in δ13C. Causes often suggested for changes in the ratio of 13C to 12C found in rocks include: A mass extinction. Chemistry is largely driven by electro-magnetic forces, and lighter isotopes such as 12C respond to these more quickly than heavier ones such as 13C. So living organisms generally contain a disproportionate amount of 12C. A mass extinction would increase the amount of 12C available to be included in rocks and therefore reduce the ratio of 13C to 12C. A methane "burp". In permafrosts and continental shelves methane produced by bacteria gets trapped in "cages" of water molecules, forming a mixture called a clathrate. This methane is very rich in 12C because it has been produced by organisms. Clathrates may dissociate (break up) suddenly if the temperature rises or the pressure on them drops. Such dissociations release the 12C-rich methane and thus reduce the ratio of 13C to 12C as this carbon is gradually incorporated into rocks (methane in the atmosphere breaks down into carbon dioxide and water; carbon dioxide reacts with minerals to form carbonate rocks). References Cambrian events Evolution of the biosphere Isotopes
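In standard notation, δ13C = (Rsample/Rstandard − 1) × 1000‰, where R is the 13C/12C ratio. The sketch below illustrates the arithmetic; the sample ratios are made-up values, and the VPDB reference ratio comes from general isotope-chemistry convention rather than from this article.

```python
# delta-13C in per mil relative to a standard ratio:
#   d13C = (R_sample / R_standard - 1) * 1000, with R = 13C/12C.
# Sample ratios below are made-up; the VPDB reference ratio is the
# conventional standard, not a value quoted in this article.
R_STANDARD = 0.011237  # VPDB 13C/12C

def delta13c(r_sample: float) -> float:
    return (r_sample / R_STANDARD - 1.0) * 1000.0

print(f"{delta13c(0.011260):+.2f} per mil")  # slightly 13C-enriched
print(f"{delta13c(0.010563):+.2f} per mil")  # ~-60, biogenic methane range
```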
Early Cambrian geochemical fluctuations
Physics,Chemistry,Biology
679
77,988,688
https://en.wikipedia.org/wiki/Markarian%201014
Markarian 1014, also known as PG 0157+001, is a quasar located in the constellation Cetus. It lies at a distance of 2.47 billion light years from Earth and is classified as a Seyfert galaxy as well as an ultraluminous infrared galaxy (ULIRG). Characteristics Markarian 1014 is an active-nucleus-dominated galaxy with a total far-infrared luminosity of 9.93 x 10^11 erg s^-1 cm^-2. Apart from being radio-quiet, it shows optical emission lines considered broad, measured with a full-width half maximum of Hβ > 4000 km s^-1. In addition to optical emission lines, Markarian 1014 shows emission features of Lyα, N V and O VI, as well as polycyclic aromatic hydrocarbons. Markarian 1014 is also one of the brightest quasars classified as a warm ULIRG. It is currently in a transitional phase from a typical ULIRG to an ultraviolet-excess quasar. It has an X-ray emission measured at a 2-10 keV luminosity of 10^43.80 erg s^-1 when exhibiting a molecular outflow. The mass of the black hole in the center of Markarian 1014 is estimated at (2.5 ± 0.6) x 10^8 solar masses, based on an MBH measurement carried out by the Seoul National University AGN Monitoring Project. According to imaging and spectra of its host galaxy, Markarian 1014 is described as spiral-like, but also has a bulge + disk morphology. It has a curved tidal tail extending 60 kiloparsecs towards the north-east, suggesting it has gone through a major merger with a disk galaxy. The tidal tail shows a lengthy low-surface-brightness extension, with a secondary tail that is faint but rotationally symmetric. Furthermore, the galaxy has twisted spiral isophotes within the central 4-kiloparsec radius, hinting that its spiral disk is undergoing a starburst or contains tidal debris caused by the merger. There is also carbon monoxide (CO) emission in the galaxy. Based on the relationship between its brightness and molecular hydrogen (H2) surface density, the gas mass is estimated at 4 x 10^10 solar masses. An 8.4-GHz VLA image shows Markarian 1014 has a triple structure along the east-west direction. On both sides of its central core, two lobes are found, separated by 1.1 arcsec. Another faint component is located at the optical nucleus position; its spectral index is -1.11 ± 0.02 between 5 and 45 GHz. Stellar population A B' - R' color map has been presented for Markarian 1014. According to spectroscopy of its regions with a steeper blue continuum spectrum, it has a young stellar population of stars aged between 180 and 290 million years. These stars are mainly found in a clump in the eastern region, along the north edge of the tidal tail, and both southwest and east of the nucleus. The galaxy also has other regions that appear redder in the B' - R' color map, suggesting much older stars, aged approximately 1 billion years, but with little contribution from the old underlying population. References 1014 Cetus 07551 Quasars Seyfert galaxies Galaxy mergers Luminous infrared galaxies 01572+0009
Markarian 1014
Astronomy
704
21,151,504
https://en.wikipedia.org/wiki/OGLE-TR-132b
OGLE-TR-132b is an extrasolar planet orbiting the star OGLE-TR-132. In 2003 the Optical Gravitational Lensing Experiment (OGLE) detected periodic dimming in the star's light curve, indicating a transiting, planetary-sized object. Since low-mass red dwarfs and brown dwarfs may mimic a planet, radial velocity measurements were necessary to calculate the mass of the body. In 2004 the object was proved to be a new transiting extrasolar planet. The planet has a mass 1.14 times that of Jupiter. Since the planet's inclination is known, this represents the true mass of the planet, rather than simply the minimum mass, as is the case when the inclination is unknown. It orbits the star OGLE-TR-132 in an extremely close orbit, even closer than the famous planets 51 Pegasi b and HD 209458 b, racing around the star every 1 day 16.6 hours. The radius of the planet is only 18% larger than Jupiter's, despite the heating effect of the star. Planets of its kind are sometimes called "super-hot Jupiters". See also OGLE-TR-113b OGLE-TR-10b OGLE-TR-111b OGLE-TR-56b OGLE2-TR-L9b References External links Exoplanets discovered in 2004 Giant planets Carina (constellation) Hot Jupiters Transiting exoplanets
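Kepler's third law shows just how tight this orbit is. In the sketch below, the stellar mass of about 1.3 solar masses is an assumed illustrative value, since the text above quotes only the 1 day 16.6 hour period.

```python
# Semi-major axis from Kepler's third law, a^3 = G*M*P^2 / (4*pi^2).
# The stellar mass (~1.3 solar masses) is an assumed value for
# illustration; the article gives only the 1 d 16.6 h period.
from math import pi

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m

period_s = (1 * 24 + 16.6) * 3600          # 1 day 16.6 hours
m_star = 1.3 * M_SUN
a = (G * m_star * period_s**2 / (4 * pi**2)) ** (1 / 3)
print(f"a = {a / AU:.3f} AU")              # ~0.03 AU, a very tight orbit
```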
OGLE-TR-132b
Astronomy
300
67,984,129
https://en.wikipedia.org/wiki/Praseodymium%28IV%29%20fluoride
Praseodymium(IV) fluoride (also praseodymium tetrafluoride) is a binary inorganic compound, a highly oxidised salt of praseodymium and fluoride with the chemical formula PrF4. Synthesis Praseodymium(IV) fluoride can be prepared by the action of krypton difluoride on praseodymium(IV) oxide: PrO2 + 2 KrF2 → PrF4 + 2 Kr + O2 Praseodymium(IV) fluoride can also be made by the dissolution of sodium hexafluoropraseodymate(IV) in liquid hydrogen fluoride: Na2PrF6 + 2 HF → PrF4 + 2 NaHF2 Properties Praseodymium(IV) fluoride forms light yellow crystals. The crystal structure is monoclinic and isomorphous with that of uranium tetrafluoride, UF4. It decomposes when heated: 2 PrF4 → 2 PrF3 + F2 Due to the high standard potential of the Pr3+/Pr4+ couple (+3.2 V), praseodymium(IV) fluoride decomposes in water, releasing oxygen, O2. See also Praseodymium(III) fluoride Uranium tetrafluoride References Fluorides Praseodymium compounds Inorganic compounds Lanthanide halides
Praseodymium(IV) fluoride
Chemistry
273
41,228,673
https://en.wikipedia.org/wiki/Internal%20combustion%20engine
An internal combustion engine (ICE or IC engine) is a heat engine in which the combustion of a fuel occurs with an oxidizer (usually air) in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine, the expansion of the high-temperature and high-pressure gases produced by combustion applies direct force to some component of the engine. The force is typically applied to pistons (piston engine), turbine blades (gas turbine), a rotor (Wankel engine), or a nozzle (jet engine). This force moves the component over a distance, transforming chemical energy into kinetic energy which is used to propel, move or power whatever the engine is attached to. The first commercially successful internal combustion engines were invented in the mid-19th century. The first modern internal combustion engine, the Otto engine, was designed in 1876 by the German engineer Nicolaus Otto. The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar two-stroke and four-stroke piston engines, along with variants such as the six-stroke piston engine and the Wankel rotary engine. A second class of internal combustion engines use continuous combustion: gas turbines, jet engines and most rocket engines, each of which is an internal combustion engine operating on the same principle as previously described. In contrast, in external combustion engines, such as steam or Stirling engines, energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids for external combustion engines include air, hot water, pressurized water or even boiler-heated liquid sodium. While there are many stationary applications, most ICEs are used in mobile applications and are the primary power supply for vehicles such as cars, aircraft and boats. ICEs are typically powered by hydrocarbon-based fuels like natural gas, gasoline, diesel fuel, or ethanol. Renewable fuels like biodiesel are used in compression ignition (CI) engines, and bioethanol or ETBE (ethyl tert-butyl ether) produced from bioethanol is used in spark ignition (SI) engines. As early as 1900 the inventor of the diesel engine, Rudolf Diesel, was using peanut oil to run his engines. Renewable fuels are commonly blended with fossil fuels. Hydrogen, which is rarely used, can be obtained from either fossil fuels or renewable energy. History Various scientists and engineers contributed to the development of internal combustion engines. In 1791, John Barber developed the gas turbine. In 1794 Thomas Mead patented a gas engine. Also in 1794, Robert Street patented an internal combustion engine, which was also the first to use liquid fuel, and built an engine around that time. In 1798, John Stevens built the first American internal combustion engine. In 1807, the French engineers Nicéphore Niépce (who went on to invent photography) and Claude Niépce ran a prototype internal combustion engine, the Pyréolophore, using controlled dust explosions; it was granted a patent by Napoleon Bonaparte. This engine powered a boat on the Saône river in France. In the same year, Swiss engineer François Isaac de Rivaz invented a hydrogen-based internal combustion engine ignited by an electric spark. In 1808, De Rivaz fitted his invention to a primitive working vehicle – "the world's first internal combustion powered automobile". In 1823, Samuel Brown patented the first internal combustion engine to be applied industrially.
In 1854, in the UK, the Italian inventors Eugenio Barsanti and Felice Matteucci obtained the certification "Obtaining Motive Power by the Explosion of Gases". In 1857 the Great Seal Patent Office granted them patent No. 1655 for the invention of an "Improved Apparatus for Obtaining Motive Power from Gases". Barsanti and Matteucci obtained other patents for the same invention in France, Belgium and Piedmont between 1857 and 1859. In 1860, the Belgian engineer Jean Joseph Etienne Lenoir produced a gas-fired internal combustion engine. In 1864, Nicolaus Otto patented the first atmospheric gas engine. In 1872, the American George Brayton invented the first commercial liquid-fueled internal combustion engine. In 1876, Nicolaus Otto, working with Gottlieb Daimler and Wilhelm Maybach, patented the compressed-charge, four-cycle engine. In 1879, Karl Benz patented a reliable two-stroke gasoline engine. Later, in 1886, Benz began the first commercial production of motor vehicles with an internal combustion engine, in which a three-wheeled, four-cycle engine and chassis formed a single unit. In 1892, Rudolf Diesel developed the first compressed-charge, compression ignition engine. In 1926, Robert Goddard launched the first liquid-fueled rocket. In 1939, the Heinkel He 178 became the world's first jet aircraft.

Etymology
At one time, the word engine (via Old French, from Latin ingenium, "ability") meant any piece of machinery, a sense that persists in expressions such as siege engine. A "motor" (from Latin motor, "mover") is any machine that produces mechanical power. Traditionally, electric motors are not referred to as "engines"; however, combustion engines are often referred to as "motors". (An electric engine refers to a locomotive operated by electricity.) In boating, an internal combustion engine that is installed in the hull is referred to as an engine, but the engines that sit on the transom are referred to as motors.

Applications
Reciprocating piston engines are by far the most common power source for land and water vehicles, including automobiles, motorcycles, ships and, to a lesser extent, locomotives (some are electrical but most use diesel engines). Rotary engines of the Wankel design are used in some automobiles, aircraft and motorcycles. These are collectively known as internal-combustion-engine vehicles (ICEV).

Where high power-to-weight ratios are required, internal combustion engines appear in the form of combustion turbines, or sometimes Wankel engines. Powered aircraft typically use an ICE which may be a reciprocating engine. Airplanes can instead use jet engines and helicopters can instead employ turboshafts, both of which are types of turbines. In addition to providing propulsion, aircraft may employ a separate ICE as an auxiliary power unit. Wankel engines are fitted to many unmanned aerial vehicles.

ICEs drive large electric generators that power electrical grids. They are found in the form of combustion turbines with a typical electrical output in the range of some 100 MW. Combined cycle power plants use the high temperature exhaust to boil water and superheat the steam to run a steam turbine. Thus, the efficiency is higher because more energy is extracted from the fuel than what could be extracted by the combustion engine alone. Combined cycle power plants achieve efficiencies in the range of 50–60%. On a smaller scale, stationary engines like gas engines or diesel generators are used for backup or for providing electrical power to areas not connected to an electric grid.
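As a rough illustration of the combined cycle figures above, the overall efficiency can be estimated by letting a steam cycle recover work from the gas turbine's rejected heat. The following sketch and its round stage efficiencies are illustrative assumptions, not values from the article:

def combined_cycle_efficiency(eta_gas: float, eta_steam: float) -> float:
    # Gas turbine output plus the steam cycle's recovery of the turbine's waste heat.
    return eta_gas + (1.0 - eta_gas) * eta_steam

# Assumed stage efficiencies: a 40% gas turbine topping a 33% steam cycle.
print(f"{combined_cycle_efficiency(0.40, 0.33):.0%}")  # ~60%, within the 50-60% range cited above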
Small engines (usually 2-stroke single-cylinder gasoline/petrol engines) are a common power source for lawnmowers, string trimmers, chainsaws, leaf blowers, pressure washers, radio-controlled cars, snowmobiles, jet skis, outboard motors, mopeds, and motorcycles.

Classification
There are several possible ways to classify internal combustion engines.

Reciprocating
By number of strokes:
Two-stroke engine
Clerk cycle
Day cycle
Four-stroke engine (Otto cycle)
Six-stroke engine
By type of ignition:
Compression-ignition engine
Spark-ignition engine (commonly found as gasoline engines)
By mechanical/thermodynamic cycle (these cycles are infrequently used but are commonly found in hybrid vehicles, along with other vehicles manufactured for fuel efficiency):
Atkinson cycle
Miller cycle

Rotary
Wankel engine
Pistonless rotary engine

Continuous combustion
Gas turbine engine
Turbojet, through a propelling nozzle
Turbofan, through a duct-fan
Turboprop, through an unducted propeller, usually with variable pitch
Turboshaft, a gas turbine optimized for producing mechanical torque instead of thrust
Ramjet, similar to a turbojet but using vehicle speed to compress (ram) the air instead of a compressor
Scramjet, a variant of the ramjet that uses supersonic combustion
Rocket engine

Reciprocating engines
Structure
The base of a reciprocating internal combustion engine is the engine block, which is typically made of cast iron (chosen for its good wear resistance and low cost) or aluminum. In the latter case, the cylinder liners are made of cast iron or steel, or a coating such as nikasil or alusil. The engine block contains the cylinders. In engines with more than one cylinder they are usually arranged either in 1 row (straight engine) or 2 rows (boxer engine or V engine); 3 or 4 rows are occasionally used (W engine) in contemporary engines, and other engine configurations are possible and have been used. Single-cylinder engines (or thumpers) are common for motorcycles and other small engines found in light machinery. Passages that contain cooling fluid are cast into the engine block around the outer side of the cylinders, whereas some heavy-duty engines instead use removable cylinder sleeves that can be replaced when worn. Water-cooled engines contain passages in the engine block where cooling fluid circulates (the water jacket). Some small engines are air-cooled, and instead of having a water jacket the cylinder block has fins protruding away from it to cool the engine by directly transferring heat to the air. The cylinder walls are usually finished by honing to obtain a cross hatch, which is able to retain more oil. Too rough a surface would quickly harm the engine through excessive wear on the piston.

The pistons are short cylindrical parts which seal one end of the cylinder from the high pressure of the compressed air and combustion products, and slide continuously within it while the engine is in operation. In smaller engines the pistons are made of aluminum, while in larger applications they are typically made of cast iron. In performance applications, pistons can also be titanium or forged steel for greater strength. The top surface of the piston is called its crown and is typically flat or concave. Some two-stroke engines use pistons with a deflector head. Pistons are open at the bottom and hollow except for an integral reinforcement structure (the piston web).
When an engine is working, the gas pressure in the combustion chamber exerts a force on the piston crown which is transferred through its web to a gudgeon pin. Each piston has rings fitted around its circumference that mostly prevent the gases from leaking into the crankcase and the oil from passing into the combustion chamber. A ventilation system drives the small amount of gas that escapes past the pistons during normal operation (the blow-by gases) out of the crankcase so that it does not accumulate, contaminate the oil, or cause corrosion. In two-stroke gasoline engines the crankcase is part of the air–fuel path, and due to the continuous flow through it they do not need a separate crankcase ventilation system.

The cylinder head is attached to the engine block by numerous bolts or studs. It has several functions. The cylinder head seals the cylinders on the side opposite to the pistons; it contains short ducts (the ports) for intake and exhaust and the associated intake valves that open to let the cylinder be filled with fresh air and exhaust valves that open to allow the combustion gases to escape. The valves are often poppet valves, but they can also be rotary valves or sleeve valves. However, 2-stroke crankcase scavenged engines connect the gas ports directly to the cylinder wall without poppet valves; the piston controls their opening and occlusion instead. The cylinder head also holds the spark plug in the case of spark ignition engines and the injector for engines that use direct injection. All CI (compression ignition) engines use fuel injection, usually direct injection, but some engines instead use indirect injection. SI (spark ignition) engines can use a carburetor or fuel injection as port injection or direct injection. Most SI engines have a single spark plug per cylinder, but some have two. A head gasket prevents the gas from leaking between the cylinder head and the engine block. The opening and closing of the valves is controlled by one or several camshafts and springs—or, in some engines, a desmodromic mechanism that uses no springs. The camshaft may press directly on the stem of the valve or may act upon a rocker arm, again either directly or through a pushrod.

The crankcase is sealed at the bottom with a sump that collects the falling oil during normal operation to be cycled again. The cavity created between the cylinder block and the sump houses a crankshaft that converts the reciprocating motion of the pistons to rotational motion. The crankshaft is held in place relative to the engine block by main bearings, which allow it to rotate. Bulkheads in the crankcase form a half of every main bearing; the other half is a detachable cap. In some cases a single main bearing deck is used rather than several smaller caps. A connecting rod is connected to offset sections of the crankshaft (the crankpins) at one end and to the piston at the other end through the gudgeon pin, and thus transfers the force and translates the reciprocating motion of the pistons to the circular motion of the crankshaft. The end of the connecting rod attached to the gudgeon pin is called its small end, and the other end, where it is connected to the crankshaft, the big end. The big end has a detachable half to allow assembly around the crankshaft. It is fastened to the connecting rod by removable bolts.

The cylinder head has an intake manifold and an exhaust manifold attached to the corresponding ports.
The intake manifold connects to the air filter directly, or to a carburetor when one is present, which is then connected to the air filter. It distributes the incoming air to the individual cylinders. The exhaust manifold is the first component in the exhaust system. It collects the exhaust gases from the cylinders and drives them to the following component in the path. The exhaust system of an ICE may also include a catalytic converter and muffler. The final section in the path of the exhaust gases is the tailpipe.

Four-stroke engines
The top dead center (TDC) of a piston is the position where it is nearest to the valves; bottom dead center (BDC) is the opposite position, where it is furthest from them. A stroke is the movement of a piston from TDC to BDC or vice versa, together with the associated process. While an engine is in operation, the crankshaft rotates continuously at a nearly constant speed. In a 4-stroke ICE, each piston experiences 2 strokes per crankshaft revolution in the following order. Starting the description at TDC, these are:

Intake, induction or suction: The intake valves are open as a result of the cam lobe pressing down on the valve stem. The piston moves downward, increasing the volume of the combustion chamber and allowing air to enter in the case of a CI engine, or an air-fuel mix in the case of SI engines that do not use direct injection. The air or air-fuel mixture is called the charge in either case.
Compression: In this stroke both valves are closed and the piston moves upward, reducing the combustion chamber volume, which reaches its minimum when the piston is at TDC. The piston performs work on the charge as it is being compressed; as a result, its pressure, temperature and density increase; an approximation to this behavior is provided by the ideal gas law. Just before the piston reaches TDC, ignition begins. In the case of an SI engine, the spark plug receives a high voltage pulse that generates the spark which gives it its name and ignites the charge. In the case of a CI engine, the fuel injector quickly injects fuel into the combustion chamber as a spray; the fuel ignites due to the high temperature.
Power or working stroke: The pressure of the combustion gases pushes the piston downward, generating more kinetic energy than is required to compress the charge. Complementary to the compression stroke, the combustion gases expand, and as a result their temperature, pressure and density decrease. When the piston is near BDC the exhaust valve opens. In the blowdown, the combustion gases expand irreversibly due to the leftover pressure—in excess of back pressure, the gauge pressure on the exhaust port.
Exhaust: The exhaust valve remains open while the piston moves upward expelling the combustion gases. For naturally aspirated engines a small part of the combustion gases may remain in the cylinder during normal operation because the piston does not close the combustion chamber completely; these gases mix with the next charge. At the end of this stroke, the exhaust valve closes, the intake valve opens, and the sequence repeats in the next cycle. The intake valve may open before the exhaust valve closes to allow better scavenging.

Two-stroke engines
The defining characteristic of this kind of engine is that each piston completes a cycle every crankshaft revolution. The 4 processes of intake, compression, power and exhaust take place in only 2 strokes, so it is not possible to dedicate a stroke exclusively to each of them.
Starting at TDC the cycle consists of:

Power: While the piston is descending, the combustion gases perform work on it, as in a 4-stroke engine. The same thermodynamics of the expansion apply.
Scavenging: Around 75° of crankshaft rotation before BDC the exhaust valve or port opens, and blowdown occurs. Shortly thereafter the intake valve or transfer port opens. The incoming charge displaces the remaining combustion gases to the exhaust system, and a part of the charge may enter the exhaust system as well. The piston reaches BDC and reverses direction. After the piston has traveled a short distance upwards into the cylinder the exhaust valve or port closes; shortly afterwards the intake valve or transfer port closes as well.
Compression: With both intake and exhaust closed, the piston continues moving upwards, compressing the charge and performing work on it. As in the case of a 4-stroke engine, ignition starts just before the piston reaches TDC and the same considerations on the thermodynamics of the compression of the charge apply.

While a 4-stroke engine uses the piston as a positive displacement pump to accomplish scavenging, taking 2 of the 4 strokes, a 2-stroke engine uses the last part of the power stroke and the first part of the compression stroke for combined intake and exhaust. The work required to displace the charge and exhaust gases comes from either the crankcase or a separate blower. For scavenging, the expulsion of burned gas and entry of fresh mix, two main approaches are described: loop scavenging and uniflow scavenging. SAE news published in the 2010s that loop scavenging is better under any circumstance than uniflow scavenging.

Crankcase scavenged
Some SI engines are crankcase scavenged and do not use poppet valves. Instead, the crankcase and the part of the cylinder below the piston are used as a pump. The intake port is connected to the crankcase through a reed valve or a rotary disk valve driven by the engine. For each cylinder, a transfer port connects at one end to the crankcase and at the other end to the cylinder wall. The exhaust port is connected directly to the cylinder wall. The transfer and exhaust ports are opened and closed by the piston. The reed valve opens when the crankcase pressure is slightly below intake pressure, letting the crankcase be filled with a new charge; this happens when the piston is moving upwards. When the piston is moving downwards the pressure in the crankcase increases and the reed valve closes promptly, then the charge in the crankcase is compressed. When the piston is moving downwards, it also uncovers the exhaust port and the transfer port, and the higher pressure of the charge in the crankcase makes it enter the cylinder through the transfer port, blowing out the exhaust gases. Lubrication is accomplished by adding two-stroke oil to the fuel in small ratios. Petroil refers to the mix of gasoline with the aforesaid oil. This kind of 2-stroke engine has a lower efficiency than comparable 4-stroke engines and releases more polluting exhaust gases, for the following reasons:

They use a total-loss oiling system: all the lubricating oil is eventually burned along with the fuel.
There are conflicting requirements for scavenging: on one side, enough fresh charge needs to be introduced in each cycle to displace almost all the combustion gases, but introducing too much of it means that a part of it gets into the exhaust.
On the other side, they must use the transfer port(s) as a carefully designed and placed nozzle so that a gas current is created that sweeps the whole cylinder before reaching the exhaust port, so as to expel the combustion gases while minimizing the amount of charge exhausted. 4-stroke engines have the benefit of forcibly expelling almost all of the combustion gases, because during exhaust the combustion chamber is reduced to its minimum volume. In crankcase scavenged 2-stroke engines, exhaust and intake are performed mostly simultaneously and with the combustion chamber at its maximum volume.

The main advantage of 2-stroke engines of this type is mechanical simplicity and a higher power-to-weight ratio than their 4-stroke counterparts. Despite having twice as many power strokes per cycle, less than twice the power of a comparable 4-stroke engine is attainable in practice. In the US, 2-stroke engines were banned for road vehicles due to their pollution. Off-road-only motorcycles are still often 2-stroke but are rarely road legal. However, many thousands of 2-stroke lawn maintenance engines are in use.

Blower scavenged
Using a separate blower avoids many of the shortcomings of crankcase scavenging, at the expense of increased complexity, which means a higher cost and an increase in maintenance requirements. An engine of this type uses ports or valves for intake and valves for exhaust, except for opposed piston engines, which may also use ports for exhaust. The blower is usually of the Roots type, but other types have been used too. This design is commonplace in CI engines, and has been occasionally used in SI engines.

CI engines that use a blower typically use uniflow scavenging. In this design the cylinder wall contains several intake ports placed uniformly spaced along the circumference just above the position that the piston crown reaches when at BDC. An exhaust valve, or several like those of 4-stroke engines, is used. The final part of the intake manifold is an air sleeve that feeds the intake ports. The intake ports are placed at a horizontal angle to the cylinder wall (i.e., they are in the plane of the piston crown) to give a swirl to the incoming charge and improve combustion. The largest reciprocating ICEs are low-speed CI engines of this type; they are used for marine propulsion (see marine diesel engine) or electric power generation and achieve the highest thermal efficiencies among internal combustion engines of any kind. Some diesel–electric locomotive engines operate on the 2-stroke cycle. The most powerful of them have a brake power of around 4.5 MW or 6,000 HP. The EMD SD90MAC class of locomotives is an example. The comparable class GE AC6000CW, whose prime mover has almost the same brake power, uses a 4-stroke engine.

An example of this type of engine is the Wärtsilä-Sulzer RTA96-C turbocharged 2-stroke diesel, used in large container ships. It is the most efficient and powerful reciprocating internal combustion engine in the world, with a thermal efficiency over 50%. For comparison, the most efficient small four-stroke engines are around 43% thermally efficient (SAE 900648); size is an advantage for efficiency due to the increase in the ratio of volume to surface area. See the external links for an in-cylinder combustion video in a 2-stroke, optically accessible motorcycle engine.

Historical design
Dugald Clerk developed the first two-cycle engine in 1879. It used a separate cylinder which functioned as a pump in order to transfer the fuel mixture to the cylinder.
In 1899 John Day simplified Clerk's design into the type of 2-cycle engine that is very widely used today. Day cycle engines are crankcase scavenged and port timed. The crankcase and the part of the cylinder below the exhaust port are used as a pump. The operation of the Day cycle engine begins when the crankshaft is turned so that the piston moves from BDC upward (toward the head), creating a vacuum in the crankcase/cylinder area. The carburetor then feeds the fuel mixture into the crankcase through a reed valve or a rotary disk valve (driven by the engine). Ducts are cast from the crankcase to the port in the cylinder to provide for intake, and another from the exhaust port to the exhaust pipe. The height of the port in relation to the length of the cylinder is called the "port timing".

On the first upstroke of the engine no fuel is inducted into the cylinder, as the crankcase is empty. On the downstroke, the piston compresses the fuel mix, which has lubricated the piston in the cylinder and the bearings because the fuel mix has oil added to it. As the piston moves downward it first uncovers the exhaust port, but on the first stroke there is no burnt fuel to exhaust. As the piston moves downward further, it uncovers the intake port, which has a duct that runs to the crankcase. Since the fuel mix in the crankcase is under pressure, the mix moves through the duct and into the cylinder. Because there is nothing in the cylinder to stop the fuel mix from passing directly out of the exhaust port before the piston rises far enough to close the port, early engines used a high-domed piston to slow down the flow of fuel. Later, the fuel was "resonated" back into the cylinder using an expansion chamber design. When the piston rises close to TDC, a spark ignites the fuel. As the piston is driven downward with power, it first uncovers the exhaust port, where the burned fuel is expelled under high pressure, and then the intake port, where the process is completed and keeps repeating.

Later engines used a type of porting devised by the Deutz company to improve performance. It was called the Schnürle reverse flow system. DKW licensed this design for all their motorcycles. Their DKW RT 125 was one of the first motor vehicles to achieve over 100 mpg as a result.

Ignition
Internal combustion engines require ignition of the mixture, either by spark ignition (SI) or compression ignition (CI). Before the invention of reliable electrical methods, hot tube and flame methods were used. Experimental engines with laser ignition have been built.

Spark ignition process
The spark-ignition engine was a refinement of the early engines which used hot tube ignition. When Bosch developed the magneto, it became the primary system for producing electricity to energize a spark plug. Many small engines still use magneto ignition. Small engines are started by hand cranking using a recoil starter or hand crank. Before Charles F. Kettering of Delco developed the automotive starter, all gasoline-engined automobiles used a hand crank. Larger engines typically power their starting motors and ignition systems using the electrical energy stored in a lead–acid battery. The battery's charged state is maintained by an automotive alternator or (previously) a generator which uses engine power to create electrical energy storage. The battery supplies electrical power for starting when the engine has a starting motor system, and supplies electrical power when the engine is off.
The battery also supplies electrical power during rare run conditions where the alternator cannot maintain more than 13.8 volts (for a common 12 V automotive electrical system). As alternator voltage falls below 13.8 volts, the lead–acid storage battery increasingly picks up electrical load. During virtually all running conditions, including normal idle conditions, the alternator supplies primary electrical power. Some systems disable alternator field (rotor) power during wide-open throttle conditions. Disabling the field reduces alternator pulley mechanical loading to nearly zero, maximizing crankshaft power. In this case, the battery supplies all primary electrical power.

Gasoline engines take in a mixture of air and gasoline and compress it by the movement of the piston from bottom dead center to top dead center, where the mixture is at maximum compression. The reduction in volume is described by the compression ratio: the cylinder volume at BDC (swept volume plus combustion chamber volume) divided by the volume at TDC (combustion chamber volume alone). Early engines had compression ratios of 6 to 1. As compression ratios were increased, the efficiency of the engine increased as well. With early induction and ignition systems the compression ratios had to be kept low. With advances in fuel technology and combustion management, high-performance engines can run reliably at a 12:1 ratio. With low octane fuel, a problem arose as the compression ratio increased: the fuel ignited prematurely due to the resulting rise in temperature. Charles Kettering developed a lead additive which allowed higher compression ratios; leaded gasoline was progressively abandoned for automotive use from the 1970s onward, partly due to lead poisoning concerns.

The fuel mixture is ignited at different points of the piston's travel in the cylinder. At low rpm, the spark is timed to occur close to the piston achieving top dead center. In order to produce more power, as rpm rises the spark is advanced so that it occurs earlier in the piston's movement: the spark occurs while the fuel is still being compressed, progressively more so as rpm rises. The necessary high voltage, typically 10,000 volts, is supplied by an induction coil or transformer. The induction coil is a fly-back system, using interruption of the electrical primary system current through some type of synchronized interrupter. The interrupter can be either contact points or a power transistor. The problem with this type of ignition is that as RPM increases the availability of electrical energy decreases. This is especially a problem because the amount of energy needed to ignite a denser fuel mixture is higher. The result was often a high-RPM misfire. Capacitor discharge ignition was developed to address this. It produces a rising voltage that is sent to the spark plug. CD system voltages can reach 60,000 volts. CD ignitions use step-up transformers. The step-up transformer uses energy stored in a capacitor to generate the electric spark. With either system, a mechanical or electrical control system provides a carefully timed high voltage to the proper cylinder. This spark, via the spark plug, ignites the air-fuel mixture in the engine's cylinders.

While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold weather starting problems under extreme conditions. For years, the solution was to park the car in heated areas. In some parts of the world, the oil was actually drained and heated overnight and returned to the engine for cold starts.
In the early 1950s, the gasoline gasifier unit was developed: on cold weather starts, raw gasoline was diverted to the unit, where part of the fuel was burned, causing the other part to become a hot vapor sent directly to the intake valve manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates.

Compression ignition process
For ignition, diesel, PPC and HCCI engines rely solely on the high temperature and pressure created by the engine in its compression process. The compression level that occurs is usually twice that of a gasoline engine or more. Diesel engines take in air only, and shortly before peak compression spray a small quantity of diesel fuel into the cylinder via a fuel injector; the fuel ignites instantly from the heat of compression. HCCI-type engines take in both air and fuel, but continue to rely on an unaided auto-combustion process, due to higher pressures and temperature. This is also why diesel and HCCI engines are more susceptible to cold-starting issues, although they run just as well in cold weather once started. Light duty diesel engines with indirect injection in automobiles and light trucks employ glowplugs (or other pre-heating: see Cummins ISB#6BT) that pre-heat the combustion chamber just before starting to reduce no-start conditions in cold weather. Most diesels also have a battery and charging system; nevertheless, this system is secondary and is added by manufacturers as a luxury for the ease of starting, turning fuel on and off (which can also be done via a switch or mechanical apparatus), and for running auxiliary electrical components and accessories. Most new engines rely on electrical and electronic engine control units (ECU) that also adjust the combustion process to increase efficiency and reduce emissions.

Lubrication
Surfaces in contact and relative motion to other surfaces require lubrication to reduce wear and noise and to increase efficiency by reducing the power wasted in overcoming friction, or to make the mechanism work at all. The lubricant used can also reduce excess heat and provide additional cooling to components. At the very least, an engine requires lubrication in the following parts:

Between pistons and cylinders
Small end bearings
Big end bearings
Main bearings
Valve gear (the following elements may not be present):
Tappets
Rocker arms
Pushrods
Timing chain or gears. Toothed belts do not require lubrication.

In 2-stroke crankcase scavenged engines, the interior of the crankcase, and therefore the crankshaft, connecting rods and the bottom of the pistons, is sprayed by the two-stroke oil in the air-fuel-oil mixture, which is then burned along with the fuel. The valve train may be contained in a compartment flooded with lubricant so that no oil pump is required.

In a splash lubrication system no oil pump is used. Instead, the crankshaft dips into the oil in the sump and, due to its high speed, splashes oil onto the connecting rods and the bottom of the pistons. The connecting rod big end caps may have an attached scoop to enhance this effect. The valve train may also be sealed in a flooded compartment, or open to the crankshaft in a way that it receives splashed oil and allows it to drain back to the sump. Splash lubrication is common for small 4-stroke engines.

In a forced (also called pressurized) lubrication system, lubrication is accomplished in a closed loop which carries motor oil to the surfaces serviced by the system and then returns the oil to a reservoir.
The auxiliary equipment of an engine is typically not serviced by this loop; for instance, an alternator may use ball bearings sealed with their own lubricant. The reservoir for the oil is usually the sump, and when this is the case, it is called a wet sump system. When there is a separate oil reservoir the crankcase still catches the falling oil, but it is continuously drained by a dedicated pump; this is called a dry sump system.

On its bottom, the sump contains an oil intake covered by a mesh filter which is connected to an oil pump and then to an oil filter outside the crankcase. From there the oil is diverted to the crankshaft main bearings and valve train. The crankcase contains at least one oil gallery (a conduit inside a crankcase wall) to which oil is introduced from the oil filter. The main bearings contain a groove through all or half of their circumference; the oil enters these grooves from channels connected to the oil gallery. The crankshaft has drillings that take oil from these grooves and deliver it to the big end bearings. All big end bearings are lubricated this way. A single main bearing may provide oil for 0, 1 or 2 big end bearings. A similar system may be used to lubricate the piston, its gudgeon pin and the small end of its connecting rod; in this system, the connecting rod big end has a groove around the crankshaft and a drilling connected to the groove which distributes oil from there to the bottom of the piston and from there to the cylinder. Other systems are also used to lubricate the cylinder and piston. The connecting rod may have a nozzle that throws an oil jet at the cylinder and the bottom of the piston. That nozzle moves relative to the cylinder it lubricates, but is always pointed towards it or the corresponding piston.

Typically, forced lubrication systems have a lubricant flow higher than what is required to lubricate satisfactorily, in order to assist with cooling. Specifically, the lubricant system helps to move heat from the hot engine parts to the cooling liquid (in water-cooled engines) or fins (in air-cooled engines), which then transfer it to the environment. The lubricant must be designed to be chemically stable and maintain suitable viscosities within the temperature range it encounters in the engine.

Cylinder configuration
Common cylinder configurations include the straight or inline configuration, the more compact V configuration, and the wider but smoother flat or boxer configuration. Aircraft engines can also adopt a radial configuration, which allows more effective cooling. More unusual configurations such as the H, U, X, and W have also been used.

Multiple-cylinder engines have their valve train and crankshaft configured so that pistons are at different parts of their cycle. It is desirable to have the pistons' cycles uniformly spaced (this is called even firing), especially in forced induction engines; this reduces torque pulsations and makes inline engines with more than 3 cylinders statically balanced in their primary forces. However, some engine configurations require odd firing to achieve better balance than what is possible with even firing. For instance, a 4-stroke I2 engine has better balance when the angle between the crankpins is 180°, because the pistons move in opposite directions and inertial forces partially cancel, but this gives an odd firing pattern where one cylinder fires 180° of crankshaft rotation after the other, then no cylinder fires for 540°. With an even firing pattern, the pistons would move in unison and the associated forces would add; the even-firing arithmetic is sketched below.
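The even-firing spacing mentioned above follows from simple arithmetic: in a four-stroke engine every cylinder fires once per 720° of crankshaft rotation. A minimal sketch, illustrative only and not from the article:

def even_firing_interval(cylinders: int) -> float:
    # Crank angle between successive power strokes under even firing, in degrees.
    return 720.0 / cylinders

for n in (2, 3, 4, 6, 8):
    print(f"{n} cylinders: a power stroke every {even_firing_interval(n):.0f} degrees")

# The 180-degree-crankpin I2 described above instead fires at 180 and then 540 degrees,
# an odd-firing pattern accepted in exchange for better primary balance.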
Multiple-crankshaft configurations do not necessarily need a cylinder head at all; they can instead have a piston at each end of the cylinder, called an opposed-piston design. Because fuel inlets and outlets are positioned at opposite ends of the cylinder, one can achieve uniflow scavenging, which, as in the four-stroke engine, is efficient over a wide range of engine speeds. Thermal efficiency is improved because of the lack of cylinder heads. This design was used in the Junkers Jumo 205 diesel aircraft engine, using two crankshafts at either end of a single bank of cylinders, and most remarkably in the Napier Deltic diesel engines. These used three crankshafts to serve three banks of double-ended cylinders arranged in an equilateral triangle with the crankshafts at the corners. It was also used in single-bank locomotive engines, and is still used in marine propulsion engines and marine auxiliary generators.

Diesel cycle
Most truck and automotive diesel engines use a cycle reminiscent of a four-stroke cycle, but with ignition caused by the temperature rise from compression rather than by a separate ignition system. This variation is called the diesel cycle. In the diesel cycle, diesel fuel is injected directly into the cylinder so that combustion occurs at constant pressure as the piston moves.

Otto cycle
The Otto cycle is the most common cycle for most cars' internal combustion engines that use gasoline as a fuel. It consists of the same major steps as described for the four-stroke engine: intake, compression, ignition, expansion and exhaust.

Five-stroke engine
In 1879, Nicolaus Otto manufactured and sold a double-expansion engine (the double- and triple-expansion principles had ample usage in steam engines), with two small cylinders at both sides of a low-pressure larger cylinder, where a second expansion of exhaust-stroke gas took place; the owner returned it, alleging poor performance. In 1906, the concept was incorporated in a car built by EHV (Eisenhuth Horseless Vehicle Company); and in the 21st century Ilmor designed and successfully tested a 5-stroke double-expansion internal combustion engine, with high power output and low SFC (specific fuel consumption).

Six-stroke engine
The six-stroke engine was invented in 1883. Four kinds of six-stroke engines use a regular piston in a regular cylinder (Griffin six-stroke, Bajulaz six-stroke, Velozeta six-stroke and Crower six-stroke), firing every three crankshaft revolutions. These systems capture the waste heat of the four-stroke Otto cycle with an injection of air or water. The Beare Head and "piston charger" engines operate as opposed-piston engines, with two pistons in a single cylinder, firing every two revolutions rather than every four like a four-stroke engine.

Other cycles
The first internal combustion engines did not compress the mixture. The first part of the piston downstroke drew in a fuel-air mixture, then the inlet valve closed and, in the remainder of the downstroke, the fuel-air mixture fired. The exhaust valve opened for the piston upstroke. These attempts at imitating the principle of a steam engine were very inefficient. There are a number of variations of these cycles, most notably the Atkinson and Miller cycles. Split-cycle engines separate the four strokes of intake, compression, combustion and exhaust into two separate but paired cylinders. The first cylinder is used for intake and compression.
The compressed air is then transferred through a crossover passage from the compression cylinder into the second cylinder, where combustion and exhaust occur. A split-cycle engine is really an air compressor on one side with a combustion chamber on the other.

Previous split-cycle engines have had two major problems—poor breathing (volumetric efficiency) and low thermal efficiency. However, new designs are being introduced that seek to address these problems. The Scuderi engine addresses the breathing problem by reducing the clearance between the piston and the cylinder head through various turbocharging techniques. The Scuderi design requires the use of outwardly opening valves that enable the piston to move very close to the cylinder head without interference from the valves. Scuderi addresses the low thermal efficiency by firing after top dead center (ATDC). Firing ATDC can be accomplished by using high-pressure air in the transfer passage to create sonic flow and high turbulence in the power cylinder.

Combustion turbines
Jet engine
Jet engines use a number of rows of fan blades to compress air, which then enters a combustor where it is mixed with fuel (typically JP fuel) and then ignited. The burning of the fuel raises the temperature of the air, which is then exhausted out of the engine, creating thrust. A modern turbofan engine can operate at as high as 48% efficiency. There are six sections to a turbofan engine:

Fan
Compressor
Combustor
Turbine
Mixer
Nozzle

Gas turbines
A gas turbine compresses air and uses it to turn a turbine. It is essentially a jet engine which directs its output to a shaft. There are three stages to a turbine: 1) air is drawn through a compressor where the temperature rises due to compression, 2) fuel is added in the combustor, and 3) hot air is exhausted through turbine blades which rotate a shaft connected to the compressor. A gas turbine is a rotary machine similar in principle to a steam turbine, and it consists of three main components: a compressor, a combustion chamber, and a turbine. The temperature of the air, after being compressed in the compressor, is increased by burning fuel in it. The heated air and the products of combustion expand in a turbine, producing work output. Roughly half to two thirds of that work is consumed driving the compressor; the rest is available as useful work output. Gas turbines are among the most efficient internal combustion engines. The General Electric 7HA and 9HA turbine combined cycle electrical plants are rated at over 61% efficiency.

Brayton cycle
A gas turbine is a rotary machine somewhat similar in principle to a steam turbine. It consists of three main components: compressor, combustion chamber, and turbine. The air is compressed by the compressor, where a temperature rise occurs. The temperature of the compressed air is further increased by combustion of injected fuel in the combustion chamber, which expands the air. This energy rotates the turbine, which powers the compressor via a mechanical coupling. The hot gases are then exhausted to provide thrust. Gas turbine cycle engines employ a continuous combustion system where compression, combustion, and expansion occur simultaneously at different places in the engine—giving continuous power. Notably, the combustion takes place at constant pressure, rather than at constant volume as in the Otto cycle.

Wankel engines
The Wankel engine (rotary engine) does not have piston strokes.
It operates with the same separation of phases as the four-stroke engine, with the phases taking place in separate locations in the engine. In thermodynamic terms it follows the Otto engine cycle, so it may be thought of as a "four-phase" engine. While it is true that three power strokes typically occur per rotor revolution, due to the 3:1 revolution ratio of the rotor to the eccentric shaft, only one power stroke per shaft revolution actually occurs. The drive (eccentric) shaft rotates once during every power stroke instead of twice as a crankshaft does in the Otto cycle, giving it a greater power-to-weight ratio than piston engines. This type of engine was most notably used in the Mazda RX-8, the earlier RX-7, and other vehicle models. The engine is also used in unmanned aerial vehicles, where the small size and weight and the high power-to-weight ratio are advantageous.

Forced induction
Forced induction is the process of delivering compressed air to the intake of an internal combustion engine. A forced induction engine uses a gas compressor to increase the pressure, temperature and density of the air. An engine without forced induction is considered a naturally aspirated engine. Forced induction is used in the automotive and aviation industry to increase engine power and efficiency. It particularly helps aviation engines, as they need to operate at high altitude. Forced induction is achieved by a supercharger, where the compressor is directly powered from the engine shaft, or, in the turbocharger, by a turbine powered by the engine exhaust.

Fuels and oxidizers
All internal combustion engines depend on combustion of a chemical fuel, typically with oxygen from the air (though it is possible to inject nitrous oxide to do more of the same thing and gain a power boost). The combustion process typically results in the production of a great quantity of thermal energy, as well as the production of steam and carbon dioxide and other chemicals at very high temperature; the temperature reached is determined by the chemical makeup of the fuel and oxidizers (see stoichiometry), as well as by the compression and other factors.

Fuels
The most common modern fuels are made up of hydrocarbons and are derived mostly from fossil fuels (petroleum). Fossil fuels include diesel fuel, gasoline and petroleum gas, and the rarer use of propane. Except for the fuel delivery components, most internal combustion engines that are designed for gasoline use can run on natural gas or liquefied petroleum gases without major modifications. Large diesels can run on air mixed with gaseous fuel, using a pilot injection of diesel fuel for ignition. Liquid and gaseous biofuels, such as ethanol and biodiesel (a form of diesel fuel that is produced from crops that yield triglycerides such as soybean oil), can also be used. Engines with appropriate modifications can also run on hydrogen gas, wood gas, or charcoal gas, as well as on so-called producer gas made from other convenient biomass. Experiments have also been conducted using powdered solid fuels, such as the magnesium injection cycle.

Presently, fuels used include:

Petroleum:
Petroleum spirit (North American term: gasoline, British term: petrol)
Diesel fuel
Autogas (liquified petroleum gas)
Propane
Compressed natural gas
Jet fuel (aviation fuel)
Residual fuel
Coal:
Gasoline can be made from carbon (coal) using the Fischer–Tropsch process
Diesel fuel can be made from carbon using the Fischer–Tropsch process
Biofuels and vegetable oils:
Peanut oil and other vegetable oils
Woodgas, from an onboard wood gasifier using solid wood as a fuel
Biofuels:
Biobutanol (replaces gasoline)
Biodiesel (replaces petrodiesel)
Dimethyl ether (replaces petrodiesel)
Bioethanol and biomethanol (wood alcohol) and other biofuels (see flexible-fuel vehicle)
Biogas
Hydrogen (mainly spacecraft rocket engines)

Even fluidized metal powders and explosives have seen some use. Engines that use gases for fuel are called gas engines and those that use liquid hydrocarbons are called oil engines; however, gasoline engines are also often colloquially referred to as "gas engines" ("petrol engines" outside North America). The main limitations on fuels are that the fuel must be easily transportable through the fuel system to the combustion chamber, and that it must release sufficient energy in the form of heat upon combustion to make practical use of the engine.

Diesel engines are generally heavier, noisier, and more powerful at lower speeds than gasoline engines. They are also more fuel-efficient in most circumstances and are used in heavy road vehicles, some automobiles (increasingly so for their greater fuel efficiency over gasoline engines), ships, railway locomotives, and light aircraft. Gasoline engines are used in most other road vehicles including most cars, motorcycles, and mopeds. In Europe, sophisticated diesel-engined cars have taken over about 45% of the market since the 1990s. There are also engines that run on hydrogen, methanol, ethanol, liquefied petroleum gas (LPG), biodiesel, paraffin and tractor vaporizing oil (TVO).

Hydrogen
Hydrogen could eventually replace conventional fossil fuels in traditional internal combustion engines. Alternatively, fuel cell technology may come to deliver its promise and the use of internal combustion engines could even be phased out. Although there are multiple ways of producing free hydrogen, those methods require converting combustible molecules into hydrogen or consuming electric energy. Unless that electricity is produced from a renewable source—and is not required for other purposes—hydrogen does not solve any energy crisis. In many situations the disadvantage of hydrogen, relative to carbon fuels, is its storage. Liquid hydrogen has extremely low density (14 times lower than water) and requires extensive insulation—whilst gaseous hydrogen requires heavy tankage. Even when liquefied, hydrogen has a higher specific energy, but its volumetric energy storage is still roughly five times lower than that of gasoline. However, the energy density of hydrogen is considerably higher than that of electric batteries, making it a serious contender as an energy carrier to replace fossil fuels. The 'Hydrogen on Demand' process (see direct borohydride fuel cell) creates hydrogen as needed, but has other issues, such as the high price of the sodium borohydride that is the raw material.

Oxidizers
Since air is plentiful at the surface of the earth, the oxidizer is typically atmospheric oxygen, which has the advantage of not being stored within the vehicle. This increases the power-to-weight and power-to-volume ratios. Other materials are used for special purposes, often to increase power output or to allow operation under water or in space.

Compressed air has been commonly used in torpedoes. Compressed oxygen, as well as some compressed air, was used in the Japanese Type 93 torpedo. Some submarines carry pure oxygen. Rockets very often use liquid oxygen. Nitromethane is added to some racing and model fuels to increase power and control combustion.
Nitrous oxide has been used—with extra gasoline—in tactical aircraft, and in specially equipped cars, to allow short bursts of added power from engines that otherwise run on gasoline and air. It is also used in the Burt Rutan rocket spacecraft. Hydrogen peroxide power was under development for German World War II submarines. It may have been used in some non-nuclear submarines, and was used on some rocket engines (notably the Black Arrow and the Messerschmitt Me 163 rocket fighter). Other chemicals such as chlorine or fluorine have been used experimentally, but have not been found practical.

Cooling
Cooling is required to remove excessive heat—high temperature can cause engine failure, usually from wear (due to high-temperature-induced failure of lubrication), cracking or warping. The two most common forms of engine cooling are air cooling and water cooling. Most modern automotive engines are both water and air cooled, as the water/liquid coolant is carried to air-cooled fins and/or fans, whereas larger engines may be solely water-cooled, as they are stationary and have a constant supply of water through water mains or fresh water, while most power tool engines and other small engines are air-cooled. Some engines (air or water cooled) also have an oil cooler. In some engines, especially for turbine engine blade cooling and liquid rocket engine cooling, fuel is used as a coolant, as it is simultaneously preheated before being injected into the combustion chamber.

Starting
Internal combustion engines must have their cycles started. In reciprocating engines this is accomplished by turning the crankshaft (or the eccentric shaft of a Wankel rotor), which induces the cycles of intake, compression, combustion, and exhaust. The first engines were started with a turn of their flywheels, while the first vehicle (the Daimler Reitwagen) was started with a hand crank. All ICE-engined automobiles were started with hand cranks until Charles Kettering developed the electric starter for automobiles. This method is now the most widely used, even among non-automobiles.

As diesel engines have become larger and their mechanisms heavier, air starters have come into use. This is due to the lack of torque in electric starters. Air starters work by pumping compressed air into the cylinders of an engine to start it turning.

Two-wheeled vehicles may have their engines started in one of four ways:
By pedaling, as on a bicycle
By pushing the vehicle and then engaging the clutch, known as "run-and-bump starting"
By kicking downward on a single pedal, known as "kick starting"
By an electric starter, as in cars

There are also starters in which a spring is compressed by a crank motion and then used to start an engine. Some small engines use a pull-rope mechanism called "recoil starting", as the rope rewinds itself after it has been pulled out to start the engine. This method is commonly used in push lawn mowers and other settings where only a small amount of torque is needed to turn an engine over. Turbine engines are frequently started by an electric motor or by compressed air.
Measures of engine performance
Engine types vary greatly in a number of different ways:
energy efficiency
fuel/propellant consumption (brake specific fuel consumption for shaft engines, thrust specific fuel consumption for jet engines)
power-to-weight ratio
thrust-to-weight ratio
torque curves (for shaft engines), thrust lapse (jet engines)
compression ratio for piston engines, overall pressure ratio for jet engines and gas turbines

Energy efficiency
Once ignited and burnt, the combustion products—hot gases—have more available thermal energy than the original compressed fuel-air mixture (which had higher chemical energy). This available energy is manifested as a higher temperature and pressure that can be converted into kinetic energy by the engine. In a reciprocating engine, the high-pressure gases inside the cylinders drive the engine's pistons. Once the available energy has been removed, the remaining hot gases are vented (often by opening a valve or exposing the exhaust outlet), and this allows the piston to return to its previous position (top dead center, or TDC). The piston can then proceed to the next phase of its cycle, which varies between engines. Any thermal energy that is not translated into work is normally considered a waste product and is removed from the engine either by an air or liquid cooling system.

Internal combustion engines are considered heat engines (since the release of chemical energy in combustion has the same effect as heat transfer into the engine), and as such their theoretical efficiency can be approximated by idealized thermodynamic cycles. The thermal efficiency of a theoretical cycle cannot exceed that of the Carnot cycle, whose efficiency is determined by the difference between the lower and upper operating temperatures of the engine. The upper operating temperature of an engine is limited by two main factors: the thermal operating limits of the materials, and the auto-ignition resistance of the fuel. All metals and alloys have a thermal operating limit, and there is significant research into ceramic materials that can be made with greater thermal stability and desirable structural properties. Higher thermal stability allows for a greater temperature difference between the lower (ambient) and upper operating temperatures, hence greater thermodynamic efficiency. Also, as the cylinder temperature rises, the fuel becomes more prone to auto-ignition. This occurs when the cylinder temperature nears the autoignition temperature of the charge. At this point, ignition can occur spontaneously before the spark plug fires, causing excessive cylinder pressures. Auto-ignition can be mitigated by using fuels with high auto-ignition resistance (octane rating); however, it still puts an upper bound on the allowable peak cylinder temperature.

The thermodynamic limits assume that the engine is operating under ideal conditions: a frictionless world, ideal gases, perfect insulators, and operation for infinite time. Real-world applications introduce complexities that reduce efficiency. For example, a real engine runs best at a specific load, termed its power band. The engine in a car cruising on a highway is usually operating significantly below its ideal load, because it is designed for the higher loads required for rapid acceleration. In addition, factors such as wind resistance reduce overall system efficiency. Vehicle fuel economy is measured in miles per gallon or in liters per 100 kilometers. The volume of hydrocarbon assumes a standard energy content.
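As a worked illustration of the Carnot bound discussed above (the temperatures are assumed round numbers, not figures from the article):

def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    # Upper bound on the efficiency of any heat engine operating between two temperatures.
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Assumed ~1800 K peak combustion gas temperature against ~300 K ambient:
print(f"Carnot limit: {carnot_efficiency(1800.0, 300.0):.0%}")  # ~83%

Real engines fall far short of this bound, for the practical reasons described next.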
Even when aided with turbochargers and stock efficiency aids, most engines retain an average efficiency of about 18–20%. However, the latest technologies in Formula One engines have seen a boost in thermal efficiency past 50%. There are many inventions aimed at increasing the efficiency of IC engines. In general, practical engines are always compromised by trade-offs between different properties such as efficiency, weight, power, heat, response, exhaust emissions, and noise. Sometimes economy also plays a role, not only in the cost of manufacturing the engine itself but also in the cost of manufacturing and distributing the fuel. Increasing the engine's efficiency brings better fuel economy, but only if the fuel cost per energy content is the same.

Measures of fuel efficiency and propellant efficiency
For stationary and shaft engines, including propeller engines, fuel consumption is measured by calculating the brake specific fuel consumption, which measures the mass flow rate of fuel consumption divided by the power produced. For internal combustion engines in the form of jet engines, the power output varies drastically with airspeed and a less variable measure is used: thrust specific fuel consumption (TSFC), the mass of propellant consumed per unit of impulse, expressed for example in pounds of propellant per pound-force-hour of thrust, or in grams of propellant per kilonewton-second of impulse. For rockets, TSFC can be used, but typically other equivalent measures are traditionally used, such as specific impulse and effective exhaust velocity.

Air and noise pollution
Air pollution
Internal combustion engines such as reciprocating internal combustion engines produce air pollution emissions, due to incomplete combustion of carbonaceous fuel. The main derivatives of the process are carbon dioxide (CO2), water and some soot—also called particulate matter (PM). The effects of inhaling particulate matter have been studied in humans and animals and include asthma, lung cancer, cardiovascular issues, and premature death. There are, however, some additional products of the combustion process that include oxides of nitrogen and sulfur and some uncombusted hydrocarbons, depending on the operating conditions and the fuel-air ratio.

Carbon dioxide emissions from internal combustion engines (particularly ones using fossil fuels such as gasoline and diesel) contribute to human-induced climate change. Increasing the engine's fuel efficiency can reduce, but not eliminate, the amount of CO2 emissions, as carbon-based fuel combustion inevitably produces CO2. Since removing CO2 from engine exhaust is impractical, there is increasing interest in alternatives: sustainable fuels such as biofuels and synfuels, and electric motors powered by batteries, are examples.

Not all of the fuel is completely consumed by the combustion process. A small amount of fuel is present after combustion, and some of it reacts to form oxygenates, such as formaldehyde or acetaldehyde, or hydrocarbons not originally present in the input fuel mixture. Incomplete combustion usually results from insufficient oxygen to achieve the perfect stoichiometric ratio. The flame is "quenched" by the relatively cool cylinder walls, leaving behind unreacted fuel that is expelled with the exhaust. When running at lower speeds, quenching is commonly observed in diesel (compression ignition) engines that run on natural gas. Quenching reduces efficiency and increases knocking, sometimes causing the engine to stall. Incomplete combustion also leads to the production of carbon monoxide (CO).
Further chemicals released are benzene and 1,3-butadiene, which are also hazardous air pollutants. Increasing the amount of air in the engine reduces emissions of incomplete combustion products, but also promotes reaction between oxygen and nitrogen in the air to produce nitrogen oxides (NOx). NOx is hazardous to both plant and animal health, and leads to the production of ozone (O3). Ozone is not emitted directly; rather, it is a secondary air pollutant, produced in the atmosphere by the reaction of NOx and volatile organic compounds in the presence of sunlight. Ground-level ozone is harmful to human health and the environment. Though the same chemical substance, ground-level ozone should not be confused with stratospheric ozone, or the ozone layer, which protects the earth from harmful ultraviolet rays. Carbon fuels containing sulfur produce sulfur monoxides (SO) and sulfur dioxide (SO2), contributing to acid rain. In the United States, nitrogen oxides, PM, carbon monoxide, sulfur dioxide, and ozone are regulated as criteria air pollutants under the Clean Air Act to levels where human health and welfare are protected. Other pollutants, such as benzene and 1,3-butadiene, are regulated as hazardous air pollutants whose emissions must be lowered as much as possible depending on technological and practical considerations. NOx, carbon monoxide and other pollutants are frequently controlled via exhaust gas recirculation, which returns some of the exhaust back into the engine intake. Catalytic converters are used to convert exhaust chemicals to CO2 (a greenhouse gas), H2O (water vapour, also a greenhouse gas) and N2 (nitrogen). Non-road engines The emission standards used by many countries have special requirements for non-road engines which are used by equipment and vehicles that are not operated on the public roadways. The standards are separate from those for road vehicles. Noise pollution Significant contributions to noise pollution are made by internal combustion engines. Automobile and truck traffic operating on highways and street systems produce noise, as do aircraft flights due to jet noise, particularly supersonic-capable aircraft. Rocket engines create the most intense noise. Idling Internal combustion engines continue to consume fuel and emit pollutants while idling. Idling is reduced by stop-start systems. Carbon dioxide formation The mass of carbon dioxide that is released when one litre of diesel fuel (or gasoline) is combusted can be estimated as follows. As a good approximation, the chemical formula of diesel is CnH2n. In reality, diesel is a mixture of different molecules. As carbon has a molar mass of 12 g/mol and hydrogen (atomic) has a molar mass of about 1 g/mol, the fraction by weight of carbon in diesel is roughly 12/14, or about 86%. The reaction of diesel combustion is given by: 2 CnH2n + 3n O2 → 2n CO2 + 2n H2O Carbon dioxide has a molar mass of 44 g/mol as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol). So 12 g of carbon yields 44 g of carbon dioxide. Diesel has a density of 0.838 kg per litre. Putting everything together, the mass of carbon dioxide that is produced by burning 1 litre of diesel can be calculated as: 0.838 kg × (12/14) × (44/12) ≈ 2.6 kg. The figure obtained with this estimation is close to the values found in the literature. 
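The estimate above, and the gasoline figure in the next paragraph, reduce to one line of arithmetic: fuel density × carbon mass fraction × 44/12. A minimal Python sketch using only the numbers given in the text:

M_C, M_H, M_CO2 = 12.0, 1.0, 44.0  # molar masses, g/mol

def co2_per_litre(density_kg_per_l, n_carbon, n_hydrogen):
    # (density) x (carbon mass fraction) x (44 g CO2 per 12 g C)
    carbon_fraction = n_carbon * M_C / (n_carbon * M_C + n_hydrogen * M_H)
    return density_kg_per_l * carbon_fraction * M_CO2 / M_C

# Diesel approximated as CnH2n (any n gives the same fraction, 12/14):
print(round(co2_per_litre(0.838, 1, 2), 2))   # -> 2.63 kg CO2 per litre
# Gasoline with a C:H atom ratio of about 6:14 and density 0.75 kg/L:
print(round(co2_per_litre(0.75, 6, 14), 2))   # -> 2.3 kg CO2 per litre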
For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the estimated value of carbon dioxide emission from burning 1 litre of gasoline is: 0.75 kg × (72/86) × (44/12) ≈ 2.3 kg. Parasitic loss The term parasitic loss is often applied to devices that take energy from the engine in order to enhance the engine's ability to create more energy or convert energy to motion. In the internal combustion engine, almost every mechanical component, including the drivetrain, causes parasitic loss and could thus be characterized as a parasitic load. Examples Bearings, oil pumps, piston rings, valve springs, flywheels, transmissions, driveshafts, and differentials all act as parasitic loads that rob the system of power. These parasitic loads can be divided into two categories: those inherent to the working of the engine and those drivetrain losses incurred in the systems that transfer power from the engine to the road (such as the transmission, driveshaft, differentials and axles). For example, the former category (engine parasitic loads) includes the oil pump used to lubricate the engine, which is a necessary parasite that consumes power from the engine (its host). Another example of an engine parasitic load is a supercharger, which derives its power from the engine and creates more power for the engine. The power that the supercharger consumes is parasitic loss and is usually expressed in kilowatts or horsepower. While the power that the supercharger consumes is small in comparison to the additional power it enables the engine to generate, it is still measurable and calculable. One of the desirable features of a turbocharger over a supercharger is the lower parasitic loss of the former. Drivetrain parasitic losses include both steady state and dynamic loads. Steady state loads occur at constant speeds and may originate in discrete components such as the torque converter, the transmission oil pump, and/or clutch drag, and in seal/bearing drag, churning of lubricant and gear windage/friction found throughout the system. Dynamic loads occur under acceleration and are caused by inertia of rotating components and/or increased friction. Measurement While rules of thumb such as a 15% power loss from drivetrain parasitic loads have been commonly repeated, the actual loss of energy due to parasitic loads varies between systems. It can be influenced by powertrain design, lubricant type and temperature, and many other factors. In automobiles, drivetrain loss can be quantified by measuring the difference between power measured by an engine dynamometer and a chassis dynamometer. However, this method is primarily useful for measuring steady state loads and may not accurately reflect losses due to dynamic loads. More advanced methods can be used in a laboratory setting, such as in-cylinder pressure measurements, measurement of flow rate and temperature at certain points, and testing of individual parts or sub-assemblies to determine friction and pumping losses. For example, in a dynamometer test by Hot Rod magazine, a Ford Mustang equipped with a modified 357ci small-block Ford V8 engine and an automatic transmission had a measured drivetrain power loss averaging 33%. In the same test, a Buick equipped with a modified 455ci V8 engine and a 4-speed manual transmission was measured to have an average drivetrain power loss of 21%. Laboratory testing of a heavy-duty diesel engine determined that 1.3% of the fuel energy input was lost to parasitic loads of engine accessories such as water and oil pumps. 
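The dynamometer comparison described above amounts to a simple ratio. In this sketch the engine and wheel power readings are hypothetical, chosen only to mirror the roughly 33% figure from the magazine test rather than taken from it:

def drivetrain_loss(engine_power_kw, wheel_power_kw):
    # Fraction of crankshaft output lost before reaching the wheels.
    return 1.0 - wheel_power_kw / engine_power_kw

print(f"{drivetrain_loss(300.0, 201.0):.0%}")  # -> 33%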
Reduction Automotive engineers and tuners commonly make design choices that reduce parasitic loads in order to improve efficiency and power output. These may involve the choice of major engine components or systems, such as the use of a dry sump lubrication system over a wet sump system. Alternatively, this can be effected through substitution of minor components available as aftermarket modifications, such as exchanging a directly engine-driven fan for one equipped with a fan clutch or an electric fan. Another modification to reduce parasitic loss, usually seen in track-only cars, is the replacement of an engine-driven water pump with an electric water pump. The reduction in parasitic loss from these changes may be due to reduced friction or many other variables that cause the design to be more efficient. See also References Bibliography Patents: Further reading External links Combustion video – in-cylinder combustion in an optically accessible, 2-stroke engine Animated Engines – explains a variety of types Intro to Car Engines – Cut-away images and a good overview of the internal combustion engine Walter E. Lay Auto Lab – Research at The University of Michigan YouTube – Animation of the components and built-up of a 4-cylinder engine YouTube – Animation of the internal moving parts of a 4-cylinder engine Next generation engine technologies retrieved May 9, 2009 How Car Engines Work Unusual Internal-Combustion Engines Aircraft Engine Historical Society (AEHS) – AEHS Home 19th-century inventions Air pollution Internal combustion Piston engines
Internal combustion engine
Physics,Chemistry,Technology,Engineering
14,216
18,877,738
https://en.wikipedia.org/wiki/Tire-derived%20fuel
Tire-derived fuel (TDF) is composed of shredded scrap tires. Tires may be mixed with coal or other fuels, such as wood or chemical wastes, to be burned in concrete kilns, power plants, or paper mills. An EPA test program concluded that, with the exception of zinc emissions, potential emissions from TDF are not expected to be very much different from other conventional fossil fuels, as long as combustion occurs in a well-designed, well-operated and well-maintained combustion device. In the United States in 2017, about 43% of scrap tires (1,736,340 tons or 106 million tires) were burnt as tire-derived fuel. Cement manufacturing was the largest user of TDF, at 46%; pulp and paper manufacturing used 29% and electric utilities used 25%. Another 25% of scrap tires were used to make ground rubber, 17% were disposed of in landfills and 16% had other uses. Theory Historically, no volume use for scrap tires other than burning has been able to keep up with the volume of waste generated yearly. Tires produce the same energy as petroleum and approximately 25% more energy than coal. Burning tires is lower on the hierarchy of reducing waste than recycling, but it is better than placing the tire waste in a landfill or dump, where there is a possibility of uncontrolled tire fires or the harboring of disease vectors such as mosquitoes. Tire-derived fuel is an interim solution to the scrap tire waste problem. Advances in tire recycling technology might one day provide a solution other than burning by reusing tire-derived material in high-volume applications. Characteristics Tire-derived fuel is usually consumed in the form of shredded or chipped material with most of the metal wire from the tire's steel belts removed. The analytical properties of this refined material are published in TDF Produced From Scrap Tires with 96+% Wire Removed. Tires are typically composed of about 1 to 1.5% zinc oxide, a well-known component used in the manufacture of tires that is also toxic to aquatic and plant life. The chlorine content in tires is due primarily to the chlorinated butyl rubber liner that slows the leak rate of air. The Rubber Manufacturers Association (RMA) is a useful source for compositional data and other information on tires. The use of TDF for heat production is controversial due to the possibility of toxin production. Reportedly, polychlorinated dibenzodioxins and furans are produced during the combustion process, and there is supportive evidence to suggest that this is true under some incineration conditions. Other toxins such as NOx, SOx and heavy metals are also produced, though it is not clear whether these levels are higher or lower than those from conventional coal- and oil-fired incinerators. Environmental impact While environmental controversy surrounding use of this fuel is wide and varied, the best-supported evidence of toxicity comes from the presence of dioxins and furans in the flue gases. Zinc has also been found to dissolve into storm water, from shredded rubber, at acutely toxic levels for aquatic life and plants. A study of the dioxin and furan content of stack gases at a variety of cement mills, paper mills, boilers, and power plants conducted in the 1990s shows a wide and inconsistent variation in dioxin and furan output when fueled partially by TDF as compared to the same facilities powered by only coal. Some facilities added as little as 4% TDF and experienced as much as a 4,140% increase in dioxin and furan emissions. 
Other facilities added as much as 30% TDF and experienced dioxin and furan emissions increases of only as much as 58%. Still other facilities used as much as 8% TDF and experienced a decrease of as much as 83% in dioxin and furan emissions. One facility conducted four tests, with two tests resulting in decreased emissions and two resulting in increased emissions. Another facility also conducted four tests and had widely varying increases in emissions. A 2004 study showed that large polycyclic aromatic emissions are generated from combustion of tire rubber: at a minimum, two orders of magnitude higher than from coal alone. The study concludes with, "atmospheric contamination dramatically increases when tire rubber is used as the fuel. Other different combustion variables compared to the ones used for coal combustion should be used to avoid atmospheric contamination by toxic, mutagenic, and carcinogenic pollutants, as well as hot-gas cleaning systems and COx capture systems." References Fuels Chemical engineering Tires
Tire-derived fuel
Chemistry,Engineering
924
33,989,553
https://en.wikipedia.org/wiki/Cyanophos
Cyanophos is a cholinesterase inhibitor used as an insecticide and avicide; for example, against rice stem borers and house flies. It is part of the chemical class of organophosphorus compounds, and is a yellow to reddish-yellow transparent liquid. Safety Cyanophos can enter the body via inhalation, ingestion, and contact with the skin and eyes. Symptoms of cyanophos poisoning resemble those of the chemical weapon sarin and include dyspnea, vomiting, diarrhea, abdominal pain, bronchorrhea, blurred vision, and opsoclonus. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Synonyms BAY 34727 Bayer 34727 Ciafos Cyanofos Cyanox Cyap ENT 25,675 O,O-dimethyl O-(4-cyanophenyl) phosphorothioate O,O-dimethyl O-(p-cyanophenyl) phosphorothioate O,O-dimethyl O-4-cyanophenyl phosphorothioate O,O-dimethyl O-4-cyanophenyl thiophosphate O,O-dimethyl-O-p-cyanophenyl phosphorothioate O-p-cyanophenyl O,O-dimethyl phosphorothioate Phosphorothioic acid O-(4-cyanophenyl) O,O-dimethyl ester Phosphorothioic acid, O,O-dimethyl ester, O-ester with p-hydroxybenzonitrile Phosphorothioic acid, O-p-cyanophenyl O,O-dimethyl ester S 4084 Sumitomo S 4084 References Acetylcholinesterase inhibitors Organophosphate insecticides Nitriles Organothiophosphate esters Methyl esters
Cyanophos
Chemistry
491
32,131,629
https://en.wikipedia.org/wiki/Partial%20cyclic%20order
In mathematics, a partial cyclic order is a ternary relation that generalizes a cyclic order in the same way that a partial order generalizes a linear order. Definition Over a given set, a partial cyclic order is a ternary relation, written [a, b, c], that is: cyclic, i.e. it is invariant under a cyclic permutation: if [a, b, c] then [b, c, a] asymmetric: if [a, b, c] then not [c, b, a] transitive: if [a, b, c] and [a, c, d] then [a, b, d] Constructions Direct sum Direct product Power Dedekind–MacNeille completion Extensions linear extension, Szpilrajn extension theorem standard example The relationship between partial and total cyclic orders is more complex than the relationship between partial and total linear orders. To begin with, not every partial cyclic order can be extended to a total cyclic order. An example is the following relation on the first thirteen letters of the alphabet: {acd, bde, cef, dfg, egh, fha, gac, hcb} ∪ {abi, cij, bjk, ikl, jlm, kma, lab, mbc}. This relation is a partial cyclic order, but it cannot be extended with either abc or cba; either attempt would result in a contradiction. The above was a relatively mild example. One can also construct partial cyclic orders with higher-order obstructions such that, for example, any 15 triples can be added but the 16th cannot. In fact, the problem of deciding whether a partial cyclic order can be extended to a total cyclic order is NP-complete, since 3SAT can be reduced to it. This is in stark contrast with the recognition problem for linear orders, which can be solved in linear time. Notes References Further reading Order theory Circles
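As an illustration of the definition above, the following Python sketch builds the standard total cyclic order on n points of a circle and checks the three axioms by brute force. The encoding of the relation as a set of triples is an implementation choice, not part of the definition.

from itertools import permutations

# [a, b, c] holds iff, travelling around the circle from a,
# b is met strictly before c.
n = 5
R = {(a, b, c)
     for a, b, c in permutations(range(n), 3)
     if (b - a) % n < (c - a) % n}

cyclic = all((b, c, a) in R for a, b, c in R)
asymmetric = all((c, b, a) not in R for a, b, c in R)
transitive = all((a, b, d) in R
                 for (a, b, c) in R
                 for (a2, c2, d) in R
                 if (a2, c2) == (a, c))
print(cyclic, asymmetric, transitive)  # -> True True True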
Partial cyclic order
Mathematics
321
15,002,414
https://en.wikipedia.org/wiki/Document-oriented%20database
A document-oriented database, or document store, is a computer program and data storage system designed for storing, retrieving and managing document-oriented information, also known as semi-structured data. Document-oriented databases are one of the main categories of NoSQL databases, and the popularity of the term "document-oriented database" has grown with the use of the term NoSQL itself. XML databases are a subclass of document-oriented databases that are optimized to work with XML documents. Graph databases are similar, but add another layer, the relationship, which allows them to link documents for rapid traversal. Document-oriented databases are inherently a subclass of the key-value store, another NoSQL database concept. The difference lies in the way the data is processed; in a key-value store, the data is considered to be inherently opaque to the database, whereas a document-oriented system relies on internal structure in the document in order to extract metadata that the database engine uses for further optimization. Although the difference is often negligible due to tools in the systems, conceptually the document-store is designed to offer a richer experience with modern programming techniques. Document databases contrast strongly with the traditional relational database (RDB). Relational databases generally store data in separate tables that are defined by the programmer, and a single object may be spread across several tables. Document databases store all information for a given object in a single instance in the database, and every stored object can be different from every other. This eliminates the need for object-relational mapping while loading data into the database. Documents The central concept of a document-oriented database is the notion of a document. While each document-oriented database implementation differs on the details of this definition, in general, they all assume documents encapsulate and encode data (or information) in some standard format or encoding. Encodings in use include XML, YAML, JSON, as well as binary forms like BSON. Documents in a document store are roughly equivalent to the programming concept of an object. They are not required to adhere to a standard schema, nor will they have all the same sections, slots, parts or keys. Generally, programs using objects have many different types of objects, and those objects often have many optional fields. Every object, even those of the same class, can look very different. Document stores are similar in that they allow different types of documents in a single store, allow the fields within them to be optional, and often allow them to be encoded using different encoding systems. For example, the following is a document, encoded in JSON:

{
  "firstName": "Bob",
  "lastName": "Smith",
  "address": {
    "type": "Home",
    "street1": "5 Oak St.",
    "city": "Boys",
    "state": "AR",
    "zip": "32225",
    "country": "US"
  },
  "hobby": "sailing",
  "phone": {
    "type": "Cell",
    "number": "(555)-123-4567"
  }
}

A second document might be encoded in XML as:

<contact>
  <firstname>Bob</firstname>
  <lastname>Smith</lastname>
  <phone type="Cell">(123) 555-0178</phone>
  <phone type="Work">(890) 555-0133</phone>
  <address>
    <type>Home</type>
    <street1>123 Back St.</street1>
    <city>Boys</city>
    <state>AR</state>
    <zip>32225</zip>
    <country>US</country>
  </address>
</contact>

These two documents share some structural elements with one another, but each also has unique elements. 
The structure and text and other data inside the document are usually referred to as the document's content and may be referenced via retrieval or editing methods (see below). Unlike a relational database, where every record contains the same fields and unused fields are simply left empty, there are no empty 'fields' in either document (record) in the above example. This approach allows new information to be added to some records without requiring that every other record in the database share the same structure. Document databases typically provide for additional metadata to be associated with and stored along with the document content. That metadata may be related to facilities the datastore provides for organizing documents, providing security, or other implementation-specific features. CRUD operations The core operations that a document-oriented database supports for documents are similar to other databases, and while the terminology is not perfectly standardized, most practitioners will recognize them as CRUD: Creation (or insertion) Retrieval (or query, search, read or find) Update (or edit) Deletion (or removal) Keys Documents are addressed in the database via a unique key that represents that document. This key is a simple identifier (or ID), typically a string, a URI, or a path. The key can be used to retrieve the document from the database. Typically the database retains an index on the key to speed up document retrieval, and in some cases the key is required to create or insert the document into the database. Retrieval Another defining characteristic of a document-oriented database is that, beyond the simple key-to-document lookup that can be used to retrieve a document, the database offers an API or query language that allows the user to retrieve documents based on content (or metadata). For example, you may want a query that retrieves all the documents with a certain field set to a certain value. The set of query APIs or query language features available, as well as the expected performance of the queries, varies significantly from one implementation to another. Likewise, the specific set of indexing options and configuration that are available vary greatly by implementation. It is here that the document store varies most from the key-value store. In theory, the values in a key-value store are opaque to the store; they are essentially black boxes. They may offer search systems similar to those of a document store, but may have less understanding about the organization of the content. Document stores use the metadata in the document to classify the content, allowing them, for instance, to understand that one series of digits is a phone number, and another is a postal code. This allows them to search on those types of data, for instance, all phone numbers containing 555, which would ignore the zip code 55555. Editing Document databases typically provide some mechanism for updating or editing the content (or metadata) of a document, either by allowing for replacement of the entire document, or individual structural pieces of the document. 
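A minimal in-memory sketch of the operations described above (unique keys, CRUD, and content-based retrieval), written in Python. It illustrates the concepts only and is not the API of any particular document database.

import uuid

class DocumentStore:
    def __init__(self):
        self._docs = {}                       # key -> document (a dict)

    def create(self, doc, key=None):
        key = key or str(uuid.uuid4())        # generate a unique ID if none given
        self._docs[key] = doc
        return key

    def read(self, key):
        return self._docs.get(key)            # simple key-to-document lookup

    def update(self, key, fields):
        self._docs[key].update(fields)        # edit individual pieces of a document

    def delete(self, key):
        del self._docs[key]

    def find(self, **criteria):
        # Content-based retrieval: return documents whose fields match.
        return [d for d in self._docs.values()
                if all(d.get(f) == v for f, v in criteria.items())]

store = DocumentStore()
k = store.create({"firstName": "Bob", "lastName": "Smith", "hobby": "sailing"})
store.create({"firstName": "Ann", "city": "Boys"})   # no shared schema required
store.update(k, {"hobby": "rowing"})
print(store.find(firstName="Bob"))                   # query on content, not just key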
Organization Document database implementations offer a variety of ways of organizing documents, including notions of Collections: groups of documents, where depending on implementation, a document may be enforced to live inside one collection, or may be allowed to live in multiple collections Tags and non-visible metadata: additional data outside the document content Directory hierarchies: groups of documents organized in a tree-like structure, typically based on path or URI Sometimes these organizational notions vary in how much they are logical vs. physical (e.g. on disk or in memory) representations. Relationship to other databases Relationship to key-value stores A document-oriented database is a specialized key-value store, which itself is another NoSQL database category. In a simple key-value store, the document content is opaque. A document-oriented database provides APIs or a query/update language that exposes the ability to query or update based on the internal structure in the document. This difference may be minor for users that do not need richer query, retrieval, or editing APIs that are typically provided by document databases. Modern key-value stores often include features for working with metadata, blurring the line between document stores and key-value stores. Relationship to search engines Some search engine (aka information retrieval) systems like Apache Solr and Elasticsearch provide enough of the core operations on documents to fit the definition of a document-oriented database. Relationship to relational databases In a relational database, data is first categorized into a number of predefined types, and tables are created to hold individual entries, or records, of each type. The tables define the data within each record's fields, meaning that every record in the table has the same overall form. The administrator also defines the relationships between the tables, and selects certain fields that they believe will be most commonly used for searching and defines indexes on them. A key concept in the relational design is that any data that may be repeated is normally placed in its own table, and if these instances are related to each other, a column is selected to group them together, the foreign key. This design is known as database normalization. For example, an address book application will generally need to store the contact name, an optional image, one or more phone numbers, one or more mailing addresses, and one or more email addresses. In a canonical relational database, tables would be created for each of these rows with predefined fields for each bit of data: the CONTACT table might include FIRST_NAME, LAST_NAME and IMAGE columns, while the PHONE_NUMBER table might include COUNTRY_CODE, AREA_CODE, PHONE_NUMBER and TYPE (home, work, etc.). The PHONE_NUMBER table also contains a foreign key column, "CONTACT_ID", which holds the unique ID number assigned to the contact when it was created. In order to recreate the original contact, the database engine uses the foreign keys to look for the related items across the group of tables and reconstruct the original data. In contrast, in a document-oriented database there may be no internal structure that maps directly onto the concept of a table, and the fields and relationships generally don't exist as predefined concepts. Instead, all of the data for an object is placed in a single document, and stored in the database as a single entry. 
In the address book example, the document would contain the contact's name, image, and any contact info, all in a single record. That entry is accessed through its key, which allows the database to retrieve and return the document to the application. No additional work is needed to retrieve the related data; all of this is returned in a single object. A key difference between the document-oriented and relational models is that the data formats are not predefined in the document case. In most cases, any sort of document can be stored in any database, and those documents can change in type and form at any time. If one wishes to add a COUNTRY_FLAG to a CONTACT, this field can be added to new documents as they are inserted; this will have no effect on the database or the existing documents already stored. To aid retrieval of information from the database, document-oriented systems generally allow the administrator to provide hints to the database to look for certain types of information. These work in a similar fashion to indexes in the relational case. Most also offer the ability to add additional metadata outside of the content of the document itself, for instance, tagging entries as being part of an address book, which allows the programmer to retrieve related types of information, like "all the address book entries". This provides functionality similar to a table, but separates the concept (categories of data) from its physical implementation (tables). In the classic normalized relational model, objects in the database are represented as separate rows of data with no inherent structure beyond that given to them as they are retrieved. This leads to problems when trying to translate programming objects to and from their associated database rows, a problem known as object-relational impedance mismatch. Document stores more closely, or in some cases directly, map programming objects into the store. These are often marketed using the term NoSQL. Implementations XML database implementations Most XML databases are document-oriented databases. See also Database theory Data hierarchy Data analysis Full-text search In-memory database Internet Message Access Protocol (IMAP) Machine-readable document Multi-model database NoSQL Object database Online database Real-time database Relational database Content management system Notes References Further reading Assaf Arkin. (2007, September 20). Read Consistency: Dumb Databases, Smart Services. External links DB-Engines Ranking of Document Stores by popularity, updated monthly Data management Database management systems Types of databases Data analysis Databases
Document-oriented database
Technology
2,577
3,410,264
https://en.wikipedia.org/wiki/Classification%20scheme%20%28information%20science%29
In information science and ontology, a classification scheme is an arrangement of classes or groups of classes. The activity of developing the schemes bears similarity to taxonomy, but with perhaps a more theoretical bent, as a single classification scheme can be applied over a wide semantic spectrum while taxonomies tend to be devoted to a single topic. In the abstract, the resulting structures are a crucial aspect of metadata, often represented as a hierarchical structure and accompanied by descriptive information of the classes or groups. Such a classification scheme is intended to be used for the classification of individual objects into the classes or groups, and the classes or groups are based on characteristics which the objects (members) have in common. The ISO/IEC 11179 metadata registry standard uses classification schemes as a way to classify administered items, such as data elements, in a metadata registry. Some quality criteria for classification schemes are: Whether different kinds are grouped together. In other words, whether it is a grouping system or a pure classification system. In case of grouping, a subset (subgroup) does not have (inherit) all the characteristics of the superset, which means that knowledge and requirements about the superset are not applicable to the members of the subset. Whether the classes have overlaps. Whether subordinates (may) have multiple superordinates. Some classification schemes allow a kind of thing to have more than one superordinate; others do not. Multiple supertypes for one subtype implies that the subordinate has the combined characteristics of all its superordinates. This is called multiple inheritance (of characteristics from multiple superordinates to their subordinates); see the illustrative sketch at the end of this article. Whether the criteria for belonging to a class or group are well defined. Whether the kinds of relations between the concepts are made explicit and well defined. Whether subtype-supertype relations are distinguished from composition relations (part-whole relations) and from object-role relations. In linguistics In linguistics, subordinate concepts are described as hyponyms of their respective superordinates; typically, a hyponym is 'a kind of' its superordinate. Benefits of using classification schemes Using one or more classification schemes for the classification of a collection of objects has many benefits. Some of these include: It allows a user to find an individual object quickly on the basis of its kind or group. It makes it easier to detect duplicate objects. It conveys the semantics (meaning) of an object from the definition of its kind, meaning that is not conveyed by the name of the individual object or its spelling. Knowledge and requirements about a kind of thing can be applied to other objects of that kind. Kinds of classification schemes The following are examples of different kinds of classification schemes. This list is in approximate order from informal to more formal: thesaurus – a collection of categorized concepts, denoted by words or phrases, that are related to each other by narrower term, wider term and related term relations. taxonomy – a formal list of concepts, denoted by controlled words or phrases, arranged from abstract to specific, related by subtype-supertype relations or by superset-subset relations. data model – an arrangement of concepts (entity types), denoted by words or phrases, that have various kinds of relationships. 
Typically, but not necessarily, representing requirements and capabilities for a specific scope (application area). network (mathematics) – an arrangement of objects in a random graph. ontology – an arrangement of concepts that are related by various well defined kinds of relations. The arrangement can be visualized in a directed acyclic graph. One example of a classification scheme for data elements is a representation term. See also ISO/IEC 11179 Faceted classification Metadata Ontology (computer science) Representation class Representation term Simple Knowledge Organisation System Semantic spectrum References External links OECD Glossary of Statistical Terms – Classification Schemes ISO/IEC 11179-2:2005 Metadata registries (MDR) – Part 2: Classification Nancy Lawler's presentation on Classification Schemes Metadata Metadata registry ISO/IEC 11179 Classification systems
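The multiple-inheritance criterion discussed above can be sketched in code: a subordinate class combines the characteristics of all of its superordinates. The Python classes below are invented purely for illustration.

class Vehicle:
    wheeled = True          # a characteristic of one superordinate

class Amphibious:
    floats = True           # a characteristic of another superordinate

class AmphibiousVehicle(Vehicle, Amphibious):
    # The subordinate inherits the combined characteristics.
    pass

duck_boat = AmphibiousVehicle()
print(duck_boat.wheeled, duck_boat.floats)  # -> True True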
Classification scheme (information science)
Technology
813
46,586,556
https://en.wikipedia.org/wiki/Gudkov%27s%20conjecture
In real algebraic geometry, Gudkov's conjecture, also called Gudkov's congruence (named after Dmitry Gudkov), was a conjecture, and is now a theorem, which states that an M-curve of even degree 2k obeys the congruence p − n ≡ k² (mod 8), where p is the number of positive ovals and n the number of negative ovals of the M-curve. (Here, the term M-curve stands for "maximal curve"; it means a smooth algebraic curve over the reals whose genus is c − 1, where c is the number of connected components of the curve; this is the maximal number of components allowed by Harnack's curve theorem.) The theorem was proved by the combined works of Vladimir Arnold and Vladimir Rokhlin. See also Hilbert's sixteenth problem Tropical geometry References Conjectures that have been proved Theorems in algebraic geometry Real algebraic geometry
Gudkov's conjecture
Mathematics
162
58,904,997
https://en.wikipedia.org/wiki/Luis%20M.%20Campos
Luis M. Campos is a Professor in the Department of Chemistry at Columbia University. Campos leads a research team focused on nanostructured materials, macromolecular systems, and single-molecule electronics. Early life and career Campos was born in Guadalajara, Mexico. He remained in his hometown until the age of 11, when he moved to Los Angeles, California. Campos attended California State University, Dominguez Hills, graduating with a B.Sc. in chemistry in 2001. After completing his undergraduate degree, Campos conducted research at King's College London on theoretical organic photochemistry. Campos attended the University of California, Los Angeles (UCLA) as a graduate student, where he worked under the supervision of Prof. Miguel García-Garibay and Prof. Kendall Houk. During his doctoral studies, Campos also performed research at the University of Minnesota with Prof. Donald G. Truhlar during the summer of 2003, and at the Johannes Kepler University Linz in Austria with Prof. Niyazi Serdar Sarıçiftçi in 2004 and 2005. Campos was awarded an NSF Predoctoral Fellowship, a Paul & Daisy Soros Fellowship, and the Saul & Silvia Winstein Award during his graduate studies. He received a Ph.D. from the Department of Chemistry and Biochemistry in 2006. Campos then conducted postdoctoral research from 2006 to 2010 at the University of California, Santa Barbara, where he worked with polymer chemist Prof. Craig Hawker on functionalization and cross-linking of polymers using the thiol-ene reaction. Campos started his independent academic career in 2011 as an Assistant Professor in the Columbia University Department of Chemistry. In 2016, he was promoted to Associate Professor. In 2023, he was promoted to Professor. Research The Campos research group explores molecular, macromolecular, and nanostructured materials from which advanced functional systems can be built, adjusting such materials through molecular design. Campos' main strategy is to understand structure in order to better produce materials that help advance biology, engineering, physics, and processing. Nanostructured materials This line of work deals with self-assembling block copolymers. Campos and colleagues developed copolymers that can self-assemble into different nanoparticles. The research aims to develop lightweight, energy-efficient devices from the polymers by understanding how to control the architecture of these block copolymers. Molecular and macromolecular systems Campos and colleagues also work on the development of chemistry for next-generation solar cell technologies. Specifically, they have made several important contributions to singlet fission materials that can create triplet pairs. By controlling the molecular structure of the organic molecules they synthesize, the physical properties of the molecules can be manipulated. Such materials are used to generate components required for organic photovoltaics. Single-molecule electronics Studies involving single-molecule transport demonstrate how particular designs guide the synthesis of macromolecular materials. This also allows chemists to adjust the functionality of a chemical. This research supports the development of molecular-scale transport technology. Awards and honors Campos has received recognition for his academic work, including several awards throughout his post-graduate career. Such awards include the 2016 ACS Arthur C. 
Cope Scholar Award, 2016 C&E News Talented 12, 2016 Camille Dreyfus Teacher-Scholar Award, 2015 ONR Young Investigator Award, 2015 Cottrell Scholar Award, and the 2014 NSF CAREER Award. He has served as an associate editor of the journal Chemical Science since 2018. References External links 21st-century American chemists Columbia University faculty California State University, Dominguez Hills alumni University of California, Los Angeles alumni Year of birth missing (living people) Living people People from Guadalajara, Jalisco Mexican chemists American organic chemists
Luis M. Campos
Chemistry
768
7,672,374
https://en.wikipedia.org/wiki/List%20of%20protein%20structure%20prediction%20software
This list of protein structure prediction software summarizes notable software tools used in protein structure prediction, including homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction. Software list Below is a list that separates programs according to the method used for structure prediction. Homology modeling Threading and fold recognition Ab initio structure prediction Secondary structure prediction A detailed list of programs can be found at List of protein secondary structure prediction programs See also List of protein secondary structure prediction programs Comparison of nucleic acid simulation software List of software for molecular mechanics modeling Molecular design software Protein design External links bio.tools, finding more tools References Lists of software Protein methods Protein structure Structural bioinformatics software Proteomics
List of protein structure prediction software
Chemistry,Technology,Biology
155
11,034
https://en.wikipedia.org/wiki/Fluid%20dynamics
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. Equations The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: p = ρRT/M, where p is pressure, ρ is density, and T is the absolute temperature, while R is the gas constant and M is the molar mass for a particular gas. A constitutive relation may also be useful. Conservation laws Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. 
The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow. Classifications Compressible versus incompressible flow All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density of a fluid parcel does not change as it moves in the flow field, that is, Dρ/Dt = 0, where D/Dt is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density. For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate. Newtonian versus non-Newtonian fluids All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions of inverse time, T⁻¹. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants. Inviscid versus viscous versus Stokes flow The dynamics of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects. The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number (Re ≪ 1) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow. 
In contrast, high Reynolds numbers (Re ≫ 1) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions. Steady versus unsteady flow A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field. Laminar versus turbulent flow Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. 
Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 20 m/s, is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling. Other approximations There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small. Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected. Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid. The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small. Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths. In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. They are useful in the study of atmospheric dynamics. Multidisciplinary types Flows according to Mach regimes While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of Mach 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately. Reactive versus non-reactive flows Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. 
In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) needs to be derived, where the production/depletion rates of the species are obtained by simultaneously solving the equations of chemical kinetics. Magnetohydrodynamics Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism. Relativistic fluid dynamics Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime. Fluctuating hydrodynamics This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux. Terminology The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics. Characteristic numbers Terminology in incompressible fluid dynamics The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field. Terminology in compressible fluid dynamics In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion. 
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy". See also List of publications in fluid dynamics List of fluid dynamicists References Further reading Originally published in 1879, the 6th extended edition appeared first in 1932. Originally published in 1938. Encyclopedia: Fluid dynamics Scholarpedia External links National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format) Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society List of Fluid Dynamics books Piping Aerodynamics Continuum mechanics
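The classifications above turn on dimensionless numbers that are straightforward to compute. In the Python sketch below, the fluid properties correspond roughly to air near sea level, the Ma < 0.3 incompressibility threshold is the rule of thumb stated earlier, and the velocity and length in the example are arbitrary illustrative values.

def reynolds(density, velocity, length, viscosity):
    # Re = rho * v * L / mu : ratio of inertial to viscous effects.
    return density * velocity * length / viscosity

def classify(velocity, length,
             density=1.225, viscosity=1.81e-5, speed_of_sound=340.0):
    re = reynolds(density, velocity, length, viscosity)
    ma = velocity / speed_of_sound
    compressibility = "compressible" if ma >= 0.3 else "incompressible (approx.)"
    regime = "creeping (Stokes)" if re < 1 else "inertia-dominated"
    return re, ma, compressibility, regime

# Air flow at 1.5 m/s over a 1 m body:
print(classify(velocity=1.5, length=1.0))
# -> (~1.0e5, ~0.004, 'incompressible (approx.)', 'inertia-dominated')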
Fluid dynamics
Physics,Chemistry,Engineering
3,710
237,617
https://en.wikipedia.org/wiki/Anatomical%20snuffbox
The anatomical snuff box or snuffbox or foveola radialis is a triangular deepening on the radial, dorsal aspect of the hand—at the level of the carpal bones, specifically, the scaphoid and trapezium bones forming the floor. The name originates from the use of this surface for placing and then sniffing powdered tobacco, or "snuff." It is sometimes referred to by its French name tabatière. Structure Boundaries The medial border (ulnar side) of the snuffbox is the tendon of the extensor pollicis longus. The lateral border (radial side) is a pair of parallel and intimate tendons, of the extensor pollicis brevis and the abductor pollicis longus. (Accordingly, the anatomical snuffbox is most visible, having a more pronounced concavity, during thumb extension.) The proximal border is formed by the styloid process of the radius. The distal border is formed by the approximate apex of the schematic snuffbox isosceles triangle. The floor of the snuffbox varies depending on the position of the wrist, but both the trapezium and primarily the scaphoid can be palpated. Neurovascular anatomy Deep to the tendons which form the borders of the anatomical snuffbox lies the radial artery, which passes through the anatomical snuffbox on its course from the normal radial pulse detecting area to the proximal space between the first and second metacarpals, where it contributes to the superficial and deep palmar arches. In the anatomical snuffbox, the radial artery is closely related (<2 mm) to the superficial branch of the radial nerve near the styloid process of the radius in 48% of cases, while in 24% the radial artery is openly related to the lateral cutaneous nerve of the forearm. The cephalic vein arises within the anatomical snuffbox, while the dorsal cutaneous branch of the radial nerve can be palpated by stroking along the extensor pollicis longus with the dorsal aspect of a fingernail. Clinical significance The radius and scaphoid articulate deep to the snuffbox to form the basis of the wrist joint. In the event of a fall onto an outstretched hand (FOOSH), this is the area through which the brunt of the force will focus. This results in these two bones being the most often fractured of the wrist. In a case where there is localized tenderness within the snuffbox, knowledge of wrist anatomy leads to the speedy conclusion that the fracture is likely to be of the scaphoid. This is understandable as the scaphoid is a small, oddly shaped bone whose purpose is to facilitate mobility rather than confer stability to the wrist joint. In the event of inordinate application of force over the wrist, this small scaphoid is likely to be the weak link. Scaphoid fracture is one of the most frequent causes of medico-legal issues. An anatomical anomaly of the scaphoid's vascular supply is the area to which blood is first delivered: blood enters the scaphoid distally. Consequently, in the event of a fracture, the proximal segment of the scaphoid will be deprived of its vascular supply and will—if action is not taken—undergo avascular necrosis within the sufferer's snuffbox. Due to the small size of the scaphoid and its shape, it is difficult to determine early on, with an X-ray, whether or not the scaphoid is indeed fractured. Further complications include carpal instability (ligament disruption) and fracture-dislocations. See also Anatomical terminology Anatomical terms of bone Cephalic vein References External links "Instant Anatomy" Hand Anatomy
Anatomical snuffbox
Biology
808
75,468,149
https://en.wikipedia.org/wiki/Xuan%20tu
Xuan tu or Hsuan thu () is a diagram given in the ancient Chinese astronomical and mathematical text Zhoubi Suanjing indicating a proof of the Pythagorean theorem. Zhoubi Suanjing is one of the oldest Chinese texts on mathematics. The exact date of composition of the book has not been determined. Some estimates of the date range as far back as 1100 BCE, while others estimate the date as late as 200 CE. However, from astronomical evidence available in the book it would appear that much of the material in the book is from the time of Confucius, that is, the 6th century BCE. Hsuan thu represents one of the earliest known proofs of the Pythagorean theorem and also one of the simplest. The text in Zhoubi Suanjing accompanying the diagram has been translated as follows: "The art of numbering proceeds from the circle and the square. The circle is derived from the square and the square from the rectangle (literally, the T-square or the carpenter's square). The rectangle originates from the fact that 9 × 9 = 81 (that is, the multiplication table or properties of numbers as such). Thus, let us cut a rectangle (diagonally) and make the width 3 (units) wide and the height 4 (units) long. The diagonal between the two corners will then be 5 (units) long. Now after drawing a square on the diagonal, circumscribe it by half-rectangles like that which has been left outside, so as to form a (square) plate. Thus the (four) outer half-rectangles of width 3, length 4 and diagonal 5, together make two rectangles (of area 24); then (when this is subtracted from the square plate of area 49) the remainder is of area 25. This (process) is called 'piling up the rectangles' (chi chu)." The hsuan thu diagram makes use of the 3, 4, 5 right triangle to demonstrate the Pythagorean theorem. However, the Chinese seem to have generalized its conclusion to all right triangles. The hsuan thu diagram, in its generalized form, can be found in the writings of the Indian mathematician Bhaskara II (c. 1114–1185). The description of this diagram appears in verse 129 of Bijaganita of Bhaskara II. There is a legend that Bhaskara's proof of the Pythagorean theorem consisted of only just one word, namely, "Behold!". However, using the notations of the diagram, with legs a and b and hypotenuse c, the theorem follows from the following equation: c^2 = (b − a)^2 + 4(ab/2) = a^2 − 2ab + b^2 + 2ab = a^2 + b^2. References Chinese mathematics Pythagorean theorem Euclidean plane geometry History of geometry Proof without words
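The "piling up the rectangles" bookkeeping can also be written out in modern notation. The following is a restatement of the translated passage above, with legs a = 3, b = 4 and hypotenuse c; the symbols are modern conventions, not from the original text.

```latex
% Hsuan thu area bookkeeping for the 3-4-5 triangle, in modern notation.
\begin{aligned}
(a+b)^2 &= c^2 + 4\cdot\tfrac{ab}{2}
   && \text{(plate = tilted square + four half-rectangles)} \\
7^2 = 49 &= c^2 + 24 && \text{(with } a = 3,\ b = 4\text{)} \\
c^2 &= 25, \qquad c = 5.
\end{aligned}
```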
Xuan tu
Mathematics
573
191,133
https://en.wikipedia.org/wiki/Local%20community
A local community has been defined as a group of interacting people living in a common location. The word is often used to refer to a group that is organized around common values and is attributed with social cohesion within a shared geographical location, generally in social units larger than a household. The word can also refer to the national community or global community. The word "community" is derived from the Old French communité, which is derived from the Latin communitas (cum, "with/together" and munus, "gift"), a broad term for fellowship or organized society. A sense of community refers to people's perception of interconnection and interdependence, shared responsibility, and common goals. Understanding a community entails having knowledge of community needs and resources, having respect for community members, and involving key community members in programs. Benefits of local community The author Robert Putnam refers to the value which comes from social networks as social capital in his book Bowling Alone: The Collapse and Revival of American Community. He writes that social capital "makes an enormous difference in our lives", that "a society characterized by generalized reciprocity is more efficient than a distrustful society", and that economic sociologists have shown that economic wealth is diminished where social capital is lacking. Putnam reports that the first use of the social capital theory was by L. J. Hanifan, a practical reformer during the Progressive Era in the United States of America, and his book quotes Hanifan's description of social capital. Employment Putnam reported that many studies have shown that the strongest predictor of job satisfaction is the presence of social connection in the workplace. He writes that "people with friends at work are happier at work" and that "social networks provide people with advice, a bonus, a promotion, and other strategic information, and letters of recommendation." Community engagement has been shown to counteract the most negative attributes of poverty, and a high amount of social capital has been shown to reduce crime. Local community and health "Social connectedness matters to our lives in the most profound way." – Robert Putnam. Robert Putnam reports, in the chapter Health and Happiness from his book Bowling Alone, that recent public research shows social connection impacts all areas of human health, including its psychological and physical aspects. Putnam says "...beyond a doubt that social connectedness is one of the most powerful determinates of our well being." In particular, it is face-to-face connections which have been shown to have greater impact than non-face-to-face relationships. Specific health benefits of strong social relationships include a decrease in the likelihood of seasonal viruses, heart attacks, strokes, cancer, depression, and premature death of all sorts. Online initiatives There are online initiatives to improve local communities, such as LOCAL (www.localchange.com). Community sustainability Sustainability in community programs is the capacity of programs (services designed to meet the needs of community members) to continuously respond to community issues. A sustained program maintains a focus consonant with its original goals and objectives, including the individuals, families, and communities it was originally intended to serve. Programs change regarding the breadth and depth of their programming. 
Some become aligned with other organizations and established institutions, whereas others maintain their independence. Understanding the community context in which programs serving the community function has an important influence on program sustainability and success. Local economy According to Washington state's Sustain South Sound organization, the top ten reasons to buy locally are: To strengthen the local economy: Studies have shown that buying from an independent, locally owned business significantly raises the number of times your money is used to make purchases from other local businesses, service providers and farms—continuing to strengthen the economic base of the community. Increase jobs: Small local businesses are the largest employer nationally in the United States of America. Encourage local prosperity: A growing body of economic research shows that in an increasingly homogenized world, entrepreneurs and skilled workers are more likely to invest and settle in communities that preserve their one-of-a-kind businesses and distinctive character. Reduce environmental impact: Locally owned businesses can make more local purchases requiring less transportation and generally set up shop in town or city centers as opposed to developing on the fringe. This means contributing less to greenhouse gas emissions, sprawl, congestion, habitat loss and pollution. Support community groups: Non-profit organizations receive an average of 250% more support from smaller business owners than they do from large businesses. Keep your community unique: Where we shop, where we eat and have fun—all of it makes our community home. Get better service: Local businesses often hire people with a better understanding of the products they are selling and take more time to get to know customers. Invest in community: Local businesses are owned by people who live in the community, are less likely to leave, and are more invested in the community's future. Put your taxes to good use: Local businesses in town centers require comparatively little infrastructure investment and make more efficient use of public services as compared to nationally owned stores entering the community. Buy what you want, not what someone wants you to buy: A marketplace of tens of thousands of small businesses is the best way to ensure innovation and low prices over the long term. A multitude of small businesses, each selecting products based not on a national sales plan but on their own interests and the needs of their local customers, guarantees a much broader range of product choices. Suggested reading A Guide to Community Visioning; Hands-On Information For Local Communities. Oregon Visions Project. See also Local history Local museum Local purchasing References Human geography Localism (politics) Sociological terminology Types of communities Urban studies and planning terminology
Local community
Environmental_science
1,158
17,032,752
https://en.wikipedia.org/wiki/Centre%20of%20Canada
There are several ways of determining the centre of Canada, giving different locations. Longitude The rural village of Taché, Manitoba, east of Winnipeg on the Trans-Canada Highway, has a sign at 96°48'35"W that proclaims it the longitudinal centre of Canada. The sign was upgraded with the opening of Centre of Canada Park in 2017. In effect, it marks the north-south line midway between the extreme points of Canada on the east and west, including islands (including Newfoundland since 1949). Latitude The latitudinal centre of Canada (including islands, but excluding Canada's claim to the North Pole) is a line at 62 degrees 24 minutes North. Intersection of latitude and longitude The intersection of these two lines is one definition of the centre point of Canada, as explained by the Atlas of Canada's website. The nearest inhabited places to this point are Baker Lake, Nunavut, well to the north, and Arviat to the east. A sign that proclaims the point as the geographic centre of Canada was added in 1959. Pole of inaccessibility The pole of inaccessibility of Canada (the point furthest from any coastline or land border) is near Jackfish River, Alberta, at 34-115-17-W4 (Latitude: 59°1′48″ N, Longitude: 112°49′12″ W). References External links The centre of controversy: Where is Canada's middle? – Maclean's Geographical centres Geography of Canada
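The longitudinal figure can be checked with simple arithmetic. The following sketch assumes the commonly cited extreme longitudes (Cape Spear at roughly 52°37′W and the 141°W Yukon–Alaska boundary), which this article does not itself state:

```python
# Sketch: midpoint of Canada's extreme longitudes (assumed values, in degrees/minutes West).
def to_degrees(deg, minutes):
    return deg + minutes / 60.0

east = to_degrees(52, 37)   # assumed easternmost point (Cape Spear, NL)
west = to_degrees(141, 0)   # assumed westernmost point (Yukon-Alaska boundary)

mid = (east + west) / 2.0   # longitude of the north-south line midway between them
d = int(mid)
m = (mid - d) * 60.0
print(f"longitudinal centre ~ {d}°{m:.1f}′W")  # ~ 96°48.5′W, close to the Taché sign
```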
Centre of Canada
Physics,Mathematics
310
67,065
https://en.wikipedia.org/wiki/Word-sense%20disambiguation
Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious. Since natural language reflects neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has faced a long-term challenge in developing the ability of computers to do natural language processing and machine learning. Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date. Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively. Variants Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: the "lexical sample" task (disambiguating the occurrences of a small sample of target words which were previously selected) and the "all words" task (disambiguation of all the words in a running text). The "all words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word. History WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation. Later, Bar-Hillel (1960) argued that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge. In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck. By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based. In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques. 
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best. Difficulties Differences between dictionaries One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones. Most researchers continue to work on fine-grained WSD. Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus and Wikipedia. More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD. Part-of-speech tagging In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently researchers have inclined to test these tasks separately (e.g. in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate). Both WSD and part-of-speech tagging involve disambiguating or tagging words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 96% accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages. Inter-judge variance Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult. While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand – given a list of senses and sentences, humans will not always agree on which sense a word takes in a given sentence. As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, so this again is why research on coarse-grained distinctions has been put to the test in recent WSD evaluation exercises. 
Sense inventory and algorithms' task-dependency A task-independent sense inventory is not a coherent concept: each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French banque – that is, 'financial bank' – or rive – that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant. Discreteness of senses Finally, the very notion of "word sense" is slippery and controversial. Most people can agree in distinctions at the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences. Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings. Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task – named lexical substitution – was proposed as a possible solution to the sense discreteness problem. The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness). Approaches and methods There are two main approaches to WSD – deep approaches and shallow approaches. Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format, outside very limited domains. Additionally, given the long tradition in computational linguistics of trying such approaches in terms of coded knowledge, it can in some cases be hard to distinguish between linguistic knowledge and world knowledge. The first attempt was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads", as an indicator of topics, and looked for repetitions in text using a set intersection algorithm. It was not very successful, but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s. Shallow approaches do not try to understand the text, but instead consider the surrounding words, applying rules such as "if bass occurs with words like sea or fishing nearby, it is probably the fish sense". 
These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge. There are four conventional approaches to WSD: Dictionary- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence. Semi-supervised or minimally supervised methods: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus. Supervised methods: These make use of sense-annotated corpora to train from. Unsupervised methods: These almost completely eschew external information and work directly from raw unannotated corpora. These methods are also known under the name of word sense discrimination. Almost all these approaches work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art. Dictionary- and knowledge-based methods The Lesk algorithm is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions, and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word. An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of spreading activation research of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods, or even to outperform them on specific domains. Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base. Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting. 
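The overlap idea behind the Lesk method can be sketched in a few lines. This is a toy illustration with an invented two-sense gloss inventory, not the full algorithm or any particular library's implementation:

```python
# Toy sketch of simplified Lesk: pick the sense whose gloss shares the most
# words with the context. The glosses below are invented for illustration.
SENSES = {
    "bass_fish":  "freshwater fish related to perch caught by fishing in sea or lake",
    "bass_music": "lowest part in music sung or played on an instrument with deep tone",
}

def simplified_lesk(context_words, senses=SENSES):
    context = set(context_words)
    def overlap(gloss):
        # Number of distinct words the gloss shares with the context.
        return len(context & set(gloss.split()))
    # Return the sense key whose gloss has the largest overlap with the context.
    return max(senses, key=lambda s: overlap(senses[s]))

print(simplified_lesk("I went fishing for some sea bass".lower().split()))
# -> bass_fish (its gloss shares 'fishing' and 'sea' with the context)
```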
The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it's not a musical instrument). Supervised methods Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Virtually every machine learning algorithm has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create. Semi-supervised methods Because of the lack of training data, many word sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm. It uses the 'one sense per collocation' and the 'one sense per discourse' properties of human languages for word sense disambiguation. From observation, words tend to exhibit only one sense in a given discourse and in a given collocation. The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached. Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains. Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system. Unsupervised methods Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context, a task referred to as word sense induction or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the senses induced must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. 
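As a concrete illustration of the supervised, bag-of-words framing described above, the following sketch trains a Naïve Bayes classifier for a single ambiguous target word. It assumes scikit-learn is available, and the tiny sense-tagged corpus is invented for illustration; a real system would train on a resource such as SemCor.

```python
# Sketch: supervised WSD for one target word ("bass") cast as text classification
# over its surrounding context. Training sentences are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_contexts = [
    "caught a huge bass while fishing on the lake",
    "grilled sea bass with lemon for dinner",
    "he plays bass guitar in a jazz band",
    "turn up the bass on the stereo amplifier",
]
train_senses = ["fish", "fish", "music", "music"]

# Bag-of-words features over the context window, then a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_contexts, train_senses)

print(model.predict(["the bass broke the fishing line"])[0])   # expected: fish
print(model.predict(["a deep bass line drives the band"])[0])  # expected: music
```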
Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists. It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck because such methods are not dependent on manual effort. Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental building blocks in several NLP systems. Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD. A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters. In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend and Most Suitable Sense Annotation (MSSA). AutoExtend presents a method that decouples an object input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map word (e.g. text) and non-word (e.g. synsets in WordNet) objects as nodes and the relationships between nodes as edges. The relations (edges) in AutoExtend can either express the addition or the similarity between its nodes. The former captures the intuition behind the offset calculus, while the latter defines the similarity between two nodes. In MSSA, an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's glosses (i.e., short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively. Other approaches Other approaches vary in their methods: domain-driven disambiguation; identification of dominant word senses; WSD using cross-lingual evidence; WSD in John Ball's language-independent NLU, combining Patom Theory and RRG (Role and Reference Grammar); and type inference in constraint-based grammars. Other languages Hindi: The lack of lexical resources in Hindi has hindered the performance of supervised models of WSD, while unsupervised models suffer due to extensive morphology. A possible solution to this problem is the design of a WSD model by means of parallel corpora. The creation of the Hindi WordNet has paved the way for several supervised methods which have been proven to produce a higher accuracy in disambiguating nouns. Local impediments and summary The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. 
Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises. One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora. External knowledge sources Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified as follows: Structured: Machine-readable dictionaries (MRDs) Ontologies Thesauri Unstructured: Collocation resources Other resources (such as word frequency lists, stoplists, domain labels, etc.) Corpora: raw corpora and sense-annotated corpora Evaluation Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. In order to test an algorithm, its developers must spend considerable time annotating all word occurrences, and comparing methods even on the same corpus is not valid if the sense inventories differ. In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor, SemEval (2007). The objective of the competition is to organize the different tasks, prepare and hand-annotate corpora for testing systems, and perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical-sample WSD for different languages, and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation to these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially for avoiding bad performance in the absence of training examples). Between 2007 and 2012, the WSD evaluation task choices grew, and the criterion for evaluating WSD changed drastically depending on the variant of the WSD evaluation task. 
Below is an enumeration of the variety of WSD tasks: Task design choices As technology evolves, the word sense disambiguation (WSD) tasks have grown in different flavors towards various research directions and for more languages: Classic monolingual WSD evaluation tasks use WordNet as the sense inventory and are largely based on supervised/semi-supervised classification with manually sense-annotated corpora: Classic English WSD uses the Princeton WordNet as its sense inventory, and the primary classification input is normally based on the SemCor corpus. Classical WSD for other languages uses their respective WordNets as sense inventories and sense-annotated corpora tagged in their respective languages. Researchers often also tap the SemCor corpus and bitexts aligned with English as the source language. Cross-lingual WSD evaluation tasks also focus on WSD across two or more languages simultaneously. Unlike the multilingual WSD tasks, instead of providing manually sense-annotated examples for each sense of a polysemous noun, the sense inventory is built up on the basis of parallel corpora, e.g. the Europarl corpus. Multilingual WSD evaluation tasks focus on WSD across two or more languages simultaneously, using their respective WordNets as sense inventories, or BabelNet as a multilingual sense inventory. This line of evaluation evolved from the Translation WSD evaluation tasks that took place in Senseval-2. A popular approach is to carry out monolingual WSD and then map the source language senses into the corresponding target word translations. The Word Sense Induction and Disambiguation task is a combined task evaluation where the sense inventory is first induced from a fixed training data set, consisting of polysemous words and the sentences that they occurred in, and then WSD is performed on a different testing data set. Software Babelfy, a unified state-of-the-art system for multilingual Word Sense Disambiguation and Entity Linking; BabelNet API, a Java API for knowledge-based multilingual Word Sense Disambiguation in 6 different languages using the BabelNet semantic network; WordNet::SenseRelate, a project that includes free, open source systems for word sense disambiguation and lexical sample sense disambiguation; UKB: Graph Base WSD, a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing Lexical Knowledge Base; pyWSD, Python implementations of Word Sense Disambiguation (WSD) technologies. See also Controlled natural language Entity linking Judicial interpretation Semantic unification Sentence boundary disambiguation Syntactic ambiguity References Works cited Further reading External links Computational Linguistics Special Issue on Word Sense Disambiguation (1998) Word Sense Disambiguation Tutorial by Rada Mihalcea and Ted Pedersen (2005). Natural language processing Computational linguistics Semantics Lexical semantics Ambiguity
Word-sense disambiguation
Technology
5,463
208,811
https://en.wikipedia.org/wiki/Menger%20sponge
In mathematics, the Menger sponge (also known as the Menger cube, Menger universal curve, Sierpinski cube, or Sierpinski sponge) is a fractal curve. It is a three-dimensional generalization of the one-dimensional Cantor set and two-dimensional Sierpinski carpet. It was first described by Karl Menger in 1926, in his studies of the concept of topological dimension. Construction The construction of a Menger sponge can be described as follows: Begin with a cube. Divide every face of the cube into nine squares in a similar manner to a Rubik's Cube. This sub-divides the cube into 27 smaller cubes. Remove the smaller cube in the middle of each face, and remove the smaller cube in the center of the larger cube, leaving 20 smaller cubes. This is a level-1 Menger sponge (resembling a void cube). Repeat steps two and three for each of the remaining smaller cubes, and continue to iterate ad infinitum. The second iteration gives a level-2 sponge, the third iteration gives a level-3 sponge, and so on. The Menger sponge itself is the limit of this process after an infinite number of iterations. Properties The nth stage of the Menger sponge, M_n, is made up of 20^n smaller cubes, each with a side length of (1/3)^n. The total volume of M_n is thus 20^n × (1/27)^n = (20/27)^n. The total surface area of M_n is given by the expression 2(20/9)^n + 4(8/9)^n. Therefore, the construction's volume approaches zero while its surface area increases without bound. Yet any chosen surface in the construction will be thoroughly punctured as the construction continues, so that the limit is neither a solid nor a surface; it has a topological dimension of 1 and is accordingly identified as a curve. Each face of the construction becomes a Sierpinski carpet, and the intersection of the sponge with any diagonal of the cube or any midline of the faces is a Cantor set. The cross-section of the sponge through its centroid and perpendicular to a space diagonal is a regular hexagon punctured with hexagrams arranged in six-fold symmetry. The number of these hexagrams, in descending size, is given by a recurrence relation. The sponge's Hausdorff dimension is log 20 / log 3 ≅ 2.727. The Lebesgue covering dimension of the Menger sponge is one, the same as any curve. Menger showed, in the 1926 construction, that the sponge is a universal curve, in that every curve is homeomorphic to a subset of the Menger sponge, where a curve means any compact metric space of Lebesgue covering dimension one; this includes trees and graphs with an arbitrary countable number of edges, vertices and closed loops, connected in arbitrary ways. Similarly, the Sierpinski carpet is a universal curve for all curves that can be drawn on the two-dimensional plane. The Menger sponge constructed in three dimensions extends this idea to graphs that are not planar and might be embedded in any number of dimensions. In 2024, Broden, Nazareth, and Voth proved that all knots can also be found within a Menger sponge. The Menger sponge is a closed set; since it is also bounded, the Heine–Borel theorem implies that it is compact. It has Lebesgue measure 0. Because it contains continuous paths, it is an uncountable set. Experiments also showed that cubes with a Menger sponge-like structure could dissipate shocks five times better for the same material than cubes without any pores. 
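These stage formulas are easy to check numerically. A short illustrative sketch using the expressions quoted above (note that at n = 0 it correctly reports the unit cube: 1 cube, volume 1, surface area 6):

```python
# Sketch: numeric check of the Menger sponge stage formulas quoted above.
import math

def stage(n):
    cubes = 20**n                           # number of sub-cubes at stage n
    volume = (20 / 27)**n                   # 20^n cubes, each of volume (1/3)^(3n)
    surface = 2 * (20/9)**n + 4 * (8/9)**n  # total surface area of M_n
    return cubes, volume, surface

for n in range(5):
    cubes, volume, surface = stage(n)
    print(f"n={n}: cubes={cubes}, volume={volume:.4f}, surface={surface:.2f}")

# Volume tends to 0 while surface area grows without bound, matching the text.
print("Hausdorff dimension:", math.log(20) / math.log(3))  # ~ 2.7268
```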
Formal definition Formally, a Menger sponge can be defined as follows (using set intersection): M = ⋂_{n≥0} M_n, where M_0 is the unit cube and each M_{n+1} is obtained from M_n by subdividing each of its cubes into 27 smaller cubes and removing the central cube and the six cubes at the centers of the faces, as in the construction above. MegaMenger MegaMenger was a project aiming to build the largest fractal model, pioneered by Matt Parker of Queen Mary University of London and Laura Taalman of James Madison University. Each small cube is made from six interlocking folded business cards, giving a total of 960,000 cards for a level-four sponge. The outer surfaces are then covered with paper or cardboard panels printed with a Sierpinski carpet design to be more aesthetically pleasing. In 2014, twenty level-three Menger sponges were constructed, which combined would form a distributed level-four Menger sponge. Similar fractals Jerusalem cube A Jerusalem cube is a fractal object first described by Eric Baird in 2011. It is created by recursively drilling Greek cross-shaped holes into a cube. The construction is similar to the Menger sponge but with two different-sized cubes. The name comes from the face of the cube resembling a Jerusalem cross pattern. The construction of the Jerusalem cube can be described as follows: Start with a cube. Cut a cross through each side of the cube, leaving eight cubes (of rank +1) at the corners of the original cube, as well as twelve smaller cubes (of rank +2) centered on the edges of the original cube between cubes of rank +1. Repeat the process on the cubes of ranks +1 and +2. Iterating an infinite number of times results in the Jerusalem cube. Since the edge length of a cube of rank N is equal to that of 2 cubes of rank N+1 and a cube of rank N+2, it follows that the scaling factor k must satisfy k^2 + 2k = 1, therefore k = √2 − 1, which means the fractal cannot be constructed using points on a rational lattice. Since a cube of rank N gets subdivided into 8 cubes of rank N+1 and 12 of rank N+2, the Hausdorff dimension d must therefore satisfy 8k^d + 12k^(2d) = 1. The exact solution is d = log((√7 − 2)/6) / log(√2 − 1), which is approximately 2.529. As with the Menger sponge, the faces of a Jerusalem cube are fractals with the same scaling factor. In this case, the Hausdorff dimension d must satisfy 4k^d + 4k^(2d) = 1. The exact solution is d = log((√2 − 1)/2) / log(√2 − 1), which is approximately 1.786. Others A Mosely snowflake is a cube-based fractal with corners recursively removed. A tetrix is a tetrahedron-based fractal made from four smaller copies, arranged in a tetrahedron. A Sierpinski–Menger snowflake is a cube-based fractal in which eight corner cubes and one central cube are kept each time at the lower and lower recursion steps. This peculiar three-dimensional fractal has the Hausdorff dimension of a natively two-dimensional object like the plane, i.e. log 9 / log 3 = 2. See also Apollonian gasket Cantor cube Koch snowflake Sierpiński tetrahedron Sierpiński triangle List of fractals by Hausdorff dimension References Further reading External links Menger sponge at Wolfram MathWorld The 'Business Card Menger Sponge' by Dr. Jeannine Mosely – an online exhibit about this giant origami fractal at the Institute For Figuring An interactive Menger sponge Interactive Java models Puzzle Hunt — Video explaining Zeno's paradoxes using Menger–Sierpinski sponge Menger sphere, rendered in SunFlow Post-It Menger Sponge – a level-3 Menger sponge being built from Post-its The Mystery of the Menger Sponge. Sliced diagonally to reveal stars Woolly Thoughts Level 2 Menger Sponge by two "Mathekniticians" Dickau, R.: Jerusalem Cube Further discussion. 
Miller, P.: Discussion of explicitly defined Menger sponges for stress testing in 3d display and rendering systems Iterated function system fractals Topological spaces Cubes Fractal curves Eponymous curves
Menger sponge
Mathematics
1,551
51,333,954
https://en.wikipedia.org/wiki/%C3%89cole%20Nationale%20Sup%C3%A9rieure%20des%20Mines%20de%20Rabat%20%28Mines%20Rabat%29
The École Nationale Supérieure des Mines de Rabat, abbreviated as ENSMR and also called Mines Rabat in French or Rabat School of Mines in English, is a leading Grande école engineering school in Morocco. The school's previous name was École Nationale de l'Industrie Minérale (ENIM; National School of the Mineral Industry). Based in Rabat, Mines Rabat is one of the oldest engineering schools in Morocco. Mines Rabat is a member of the Conférence des grandes écoles (CGE). The course for the engineering program lasts three years, and admission is mainly through the common national competition (CNC) after completing two or three years of preparatory classes. Grandes Écoles are institutions of higher education that are separate from, but parallel and connected to, the main framework of the Moroccan-French public university system. Similar to the Ivy League in the United States, Oxbridge in the UK, and the C9 League in China, Grandes Écoles are elite academic institutions that admit students through an extremely competitive process. Mines Rabat's alumni go on to occupy elite positions within government, administration, and corporate firms in Morocco. Despite its small size (fewer than 300 students are accepted each year, after a very selective exam), it is a crucial part of the infrastructure of the Moroccan industry. Within the limit of available places, candidates can also be admitted to the engineering cycle at several levels: associate, bachelor's, or university master's degree. The engineering cycle is 3 years for applicants holding an associate's or a bachelor's degree and 2 years for applicants holding a master's degree. The PhD and DEng cycles are 3 to 5 years for applicants holding an engineering degree or a master's degree. The school has similarities with the Mines ParisTech, Mines Saint-Étienne, and Mines Nancy schools in France, the Columbia School of Mines and Colorado School of Mines in the USA, and the Royal School of Mines in the UK. Admissions Admission to Mines Rabat in the normal cycle is made through a very selective entrance examination and requires at least two years of preparation after high school in preparatory classes. Admission includes a week of written examinations during the spring, followed sometimes by oral examinations over the summer. History The school was established in 1972, and now about 300 Moroccan students are admitted each year. Foreign students having followed a classe préparatoire curriculum (generally, African students) can also enter through the same competitive exam. Finally, some foreign students come for a single year from other top institutions in Africa. Rankings Mines Rabat is ranked among the top 5 Moroccan Grandes Écoles, though it doesn't appear in international rankings due to its very limited number of students (900 students per year for the class of 2022). Preparatory classes: The classic admission path into Grandes Écoles To enter the Diplôme d'Ingénieur curriculum of Grandes Écoles, students traditionally have to complete the first two years of their curriculum in the very intensive preparatory classes, most often in an institution outside the Grande École. 
University students pursuing an Associate of Science can take the university admission path examination: students admitted with an associate degree from a university need to pursue a 3-year engineering cycle at the school to obtain the Diplôme d'Ingénieur. University graduates with one of the following degrees can also apply to be admitted to the engineering cycle of the school. The Diplôme d'Ingénieur (combined Bachelor's/Master's degree in Engineering) Grandes Écoles of engineering usually offer several master's degree programs, the most important of which is the Diplôme d'Ingénieur (Engineer's Degree, equivalent to a combined BS/MS in engineering). Because of the strong selection of students and the very high quality of the curriculum, the Diplôme d'Ingénieur, which gives the right to bear the title of Ingénieur, is one of the most prestigious degrees in Morocco. The degree is protected by law and submitted to strict government supervision. It is more valued by companies than a university degree in terms of career opportunities and wages. At the end of these preparatory classes, the students take nationwide, extremely selective competitive exams for entrance into Grandes Écoles, where they complete their curriculum for three years. 1st year at Mines Rabat - equivalent to - senior year of BSc. 2nd year at Mines Rabat - equivalent to - 1st year of MSc. 3rd (final) year at Mines Rabat - equivalent to - 2nd year of MSc. Options and majors Mines Rabat has a total of 15 engineering options: Energy Engineering Operations Planning Protection of Soil and Basement Environment and Industrial Safety Computer Engineering Production Systems Electromechanical Industrial Maintenance Mechanical Engineering and Development Industrial Engineering Process Engineering Materials and Quality Control Hydro-Geotechnical Engineering Renewable energy Doctoral program (DEng/PhD) The school also has a doctoral program open to students with a master's degree or equivalent. Doctoral students generally work in the laboratories of the school; they may also work in external institutes or establishments. The Doctor of Engineering (DEng) program takes three to five years to complete. International agreements signed with: France: Central Group of Schools (École Centrale Paris, École Centrale de Lyon, École Centrale de Marseille, École Centrale de Casablanca ...) Groupe des écoles des mines (GEM) (Mines ParisTech, Mines Saint-Étienne (ENSM SE), Mines Nancy, École des mines d'Alès, ...) National Polytechnic Institute of Lorraine (INPL) École Nationale Supérieure de Mécanique et des Microtechniques (ENSMM) Aix Marseille University University of Technology of Compiègne (UTC) INSA Lyon Belgium: Faculté polytechnique de Mons Université catholique de Louvain (UCL) Switzerland: École Polytechnique Fédérale de Lausanne (EPFL) Canada: École Polytechnique de Montréal Laval University United States: Georgia Institute of Technology Tunisia: National Engineering School of Tunis (ENIT) See also Schools of mines, for a list of schools of mines internationally References External links CGE "Grandes Ecoles" organisation scheme vs. the classic university scheme CGE Institut Mines-Télécom Higher Education in France and the United States Ranking Web of Universities French-English translation for resume K12 Academics DEng vs. 
PhD - Doctor of Engineering List of CTI accredited programs Education in Morocco Schools of mines 1972 establishments in Morocco Universities and colleges established in 1972
École Nationale Supérieure des Mines de Rabat (Mines Rabat)
Engineering
1,397
910,545
https://en.wikipedia.org/wiki/Kunai
A kunai is a Japanese tool thought to be originally derived from the masonry trowel. The two widely recognized kinds are the short kunai (小苦無 shō-kunai) and the big kunai (大苦無 dai-kunai). Although a basic tool, the kunai, in the hands of a martial arts expert, could be used as a multi-functional weapon. The kunai is commonly associated with the ninja, who in folklore used them to climb walls. Design A kunai normally had a leaf-shaped wrought blade of varying length and a handle with a ring on the pommel for attaching a rope. The attached rope allowed the kunai's handle to be wrapped to function as a grip, or to be strapped to a stick as a makeshift spear; to be tied to the body for concealment; to be used as an anchor or piton, and sometimes to be used as the Chinese rope dart. Contrary to popular belief, kunai were not designed to be used primarily as throwing weapons. Instead, kunai were primarily tools and, when used as weapons, were stabbing and thrusting implements. Varieties of kunai include short, long, narrow-bladed, saw-toothed, and wide-bladed. In some cases, the kunai and the Nishikori, a wide-bladed saw with a dagger-type handle, are difficult to distinguish. Uses The kunai was originally used by peasants as a multi-purpose gardening tool and by workers of stone and masonry. The blade is made of soft iron and is left unsharpened because the edges are used to smash relatively soft materials such as plaster and wood, for digging holes, and for prying. Normally, only the tip is sharpened. Weapon Many ninja weapons were adapted from farming tools, not unlike those used by Shaolin monks in China. Since kunai were cheaply produced farming tools of a proper size and weight and could be easily sharpened, they were readily available to be converted into simple weapons. As a weapon, the kunai is larger and heavier than a shuriken, and with the grip it could also be used in hand-to-hand combat more readily than a shuriken. As with ninjutsu, the persistent exaggeration in ninja myths played a large role in creating the popular culture image of the kunai. In fictional depictions of ninjas, the kunai is commonly portrayed as a steel knife that is used for stabbing or particularly throwing, sometimes confused with the shuriken. Masonry The kunai was used in masonry to shape stonework. See also List of martial arts weapons Hori hori Shikoro blade Shuriken Tantō Throwing knife Trowel References Sources Further reading External links Gardening tools Japanese knives Japanese martial arts terminology Ninjutsu artefacts Mechanical hand tools Throwing weapons Weapons of Japan
Kunai
Physics
580
7,619,428
https://en.wikipedia.org/wiki/List%20of%20Canadian%20plants%20by%20family%20X%E2%80%93Z
Main page: List of Canadian plants by family Families: A | B | C | D | E | F | G | H | I J K | L | M | N | O | P Q | R | S | T | U V W | X Y Z Xyridaceae Xyris difformis — Carolina yellow-eyed-grass Xyris montana — northern yellow-eyed-grass Zannichelliaceae Zannichellia palustris — horned pondweed Zosteraceae Phyllospadix scouleri — Scouler's surf-grass Phyllospadix serrulatus — serrulate surf-grass Phyllospadix torreyi — Torrey's surf-grass Zostera marina — sea-wrack
List of Canadian plants by family X–Z
Biology
169
29,645,983
https://en.wikipedia.org/wiki/Thompson%20order%20formula
In mathematical finite group theory, the Thompson order formula, introduced by John Griggs Thompson, gives a formula for the order of a finite group in terms of the centralizers of involutions, extending earlier results. Statement If a finite group G has exactly two conjugacy classes of involutions with representatives t and z, then the Thompson order formula states $|G| = |C_G(z)|\,a(t) + |C_G(t)|\,a(z)$. Here a(x) is the number of pairs (u,v) with u conjugate to t, v conjugate to z, and x the involution in the subgroup generated by uv. The following more complicated version of the Thompson order formula holds for the case when G has more than two conjugacy classes of involution: $|G| = |C_G(t)|\,|C_G(z)| \sum_x \frac{a(x)}{|C_G(x)|}$, where t and z are non-conjugate involutions, the sum is over a set of representatives x for the conjugacy classes of involutions, and a(x) is the number of ordered pairs of involutions (u,v) such that u is conjugate to t, v is conjugate to z, and x is the involution in the subgroup generated by uv. Proof The Thompson order formula can be rewritten as $\frac{|G|}{|C_G(t)|}\cdot\frac{|G|}{|C_G(z)|} = \sum_x \frac{|G|}{|C_G(x)|}\,a(x)$, where as before the sum is over a set of representatives x for the classes of involutions. The left hand side is the number of pairs of involutions (u,v) with u conjugate to t and v conjugate to z. The right hand side counts these pairs in classes, depending on the class of the involution in the cyclic group generated by uv. The key point is that uv has even order (as if it had odd order then u and v would be conjugate) and so the group it generates contains a unique involution x. References Finite groups
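As a concrete check, the two-class version of the formula can be verified by brute force for a small group such as the symmetric group S4, which has exactly two classes of involutions (transpositions and double transpositions). The following Python sketch is our illustration, not part of the original article; the representatives and helper names are chosen freely.

from itertools import permutations

n = 4
G = list(permutations(range(n)))                 # S4 as tuples
e = tuple(range(n))

def mul(p, q):                                   # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def conj(g, x):                                  # g x g^-1
    gi = [0] * n
    for i, gv in enumerate(g):
        gi[gv] = i
    return mul(mul(g, x), tuple(gi))

t = (1, 0, 2, 3)                                 # transposition (0 1)
z = (1, 0, 3, 2)                                 # double transposition (0 1)(2 3)
cl_t = {conj(g, t) for g in G}                   # conjugacy class of t
cl_z = {conj(g, z) for g in G}                   # conjugacy class of z
Ct = [g for g in G if mul(g, t) == mul(t, g)]    # centralizer of t
Cz = [g for g in G if mul(g, z) == mul(z, g)]    # centralizer of z

def cyclic(x):                                   # the cyclic subgroup <x>
    S, y = {e}, x
    while y != e:
        S.add(y)
        y = mul(y, x)
    return S

def a(x):                                        # pairs (u,v): u ~ t, v ~ z, x in <uv>
    return sum(1 for u in cl_t for v in cl_z if x in cyclic(mul(u, v)))

# Thompson order formula, two-class case: |G| = |C(z)| a(t) + |C(t)| a(z)
assert len(G) == len(Cz) * a(t) + len(Ct) * a(z)  # 24 == 8*1 + 4*4

Running the script confirms the identity: a(t) = 1 and a(z) = 4 for these representatives, giving 8 + 16 = 24 = |S4|.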
Thompson order formula
Mathematics
376
17,727,869
https://en.wikipedia.org/wiki/ITools%20Resourceome
iTools is a distributed infrastructure for the management, discovery, comparison and integration of computational biology resources. iTools employs Biositemap technology to retrieve and serve metadata about diverse bioinformatics data services, tools, and web services. iTools is developed by the National Centers for Biomedical Computing as part of the NIH Road Map Initiative. See also Biositemaps References External links Interactive iTools Server Knowledge representation Bioinformatics
ITools Resourceome
Engineering,Biology
92
404,130
https://en.wikipedia.org/wiki/Piecewise%20function
In mathematics, a piecewise function (also called a piecewise-defined function, a hybrid function, or a function defined by cases) is a function whose domain is partitioned into several intervals ("subdomains") on which the function may be defined differently. Piecewise definition is actually a way of specifying the function, rather than a characteristic of the resulting function itself. Terms like piecewise linear, piecewise smooth, piecewise continuous, and others are very common. The meaning of a function being piecewise P, for a property P, is roughly that the domain of the function can be partitioned into pieces on which the property holds, but it is used slightly differently by different authors. Sometimes the term is used in a more global sense involving triangulations; see Piecewise linear manifold. Notation and interpretation Piecewise functions can be defined using the common functional notation, where the body of the function is an array of functions and associated subdomains. A semicolon or comma may follow the subfunction or subdomain columns. The "if" introducing each condition at the start of the right column is rarely omitted. The subdomains together must cover the whole domain; often it is also required that they are pairwise disjoint, i.e. form a partition of the domain. In order for the overall function to be called "piecewise", the subdomains are usually required to be intervals (some may be degenerate intervals, i.e. single points or unbounded intervals). For bounded intervals, the number of subdomains is required to be finite; for unbounded intervals it is often only required to be locally finite. For example, consider the piecewise definition of the absolute value function: $|x| = \begin{cases} -x & \text{if } x < 0 \\ x & \text{if } x \ge 0 \end{cases}$ For all values of x less than zero, the first sub-function (−x) is used, which negates the sign of the input value, making negative numbers positive. For all values of x greater than or equal to zero, the second sub-function (x) is used, which evaluates trivially to the input value itself. For example, |−3| = 3, |−0.1| = 0.1, |0| = 0, and |5| = 5. In order to evaluate a piecewise-defined function at a given input value, the appropriate subdomain needs to be chosen in order to select the correct sub-function—and produce the correct output value. Examples A step function or piecewise constant function, composed of constant sub-functions Piecewise linear function, composed of linear sub-functions Broken power law, a function composed of power-law sub-functions Spline, a function composed of polynomial sub-functions, often constrained to be smooth at the joints between pieces B-spline PDIFF and some other common Bump functions. These are infinitely differentiable, but analyticity holds only piecewise. Continuity and differentiability of piecewise-defined functions A piecewise-defined function is continuous on a given interval in its domain if the following conditions are met: its sub-functions are continuous on the corresponding intervals (subdomains), there is no discontinuity at an endpoint of any subdomain within that interval. The pictured function, for example, is piecewise-continuous throughout its subdomains, but is not continuous on the entire domain, as it contains a jump discontinuity at one point. The filled circle indicates that the value of the right sub-function is used in this position.
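As a minimal illustration of selecting the sub-function by subdomain, the absolute value definition above can be evaluated with NumPy's piecewise helper (the sample points here are arbitrary, chosen by us):

import numpy as np

# |x| defined piecewise: -x on the subdomain x < 0, x on x >= 0.
x = np.array([-3.0, -0.1, 0.0, 0.5, 5.0])
abs_x = np.piecewise(x, [x < 0, x >= 0], [lambda v: -v, lambda v: v])
print(abs_x)  # [3.  0.1 0.  0.5 5. ]

Each condition array selects the subdomain on which the corresponding lambda (sub-function) is applied.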
For a piecewise-defined function to be differentiable on a given interval in its domain, the following conditions have to be fulfilled in addition to those for continuity above: its sub-functions are differentiable on the corresponding open intervals, the one-sided derivatives exist at all intervals' endpoints, at the points where two subintervals touch, the corresponding one-sided derivatives of the two neighboring subintervals coincide. Some sources only examine the function definition, while others acknowledge the property if and only if the function admits a partition into a piecewise definition that meets the conditions. Applications In applied mathematical analysis, "piecewise-regular" functions have been found to be consistent with many models of the human visual system, where images are perceived at a first stage as consisting of smooth regions separated by edges (as in a cartoon); a cartoon-like function is a C2 function, smooth except for the existence of discontinuity curves. In particular, shearlets have been used as a representation system to provide sparse approximations of this model class in 2D and 3D. Piecewise-defined functions are also commonly used for interpolation, such as in nearest-neighbor interpolation. See also Piecewise linear continuation References Functions and mappings
Piecewise function
Mathematics
941
52,795,346
https://en.wikipedia.org/wiki/P4-t-Bu
P4-t-Bu is a readily accessible chemical from the group of neutral, peralkylated, sterically hindered polyaminophosphazenes, which are extremely strong bases but very weak nucleophiles, with the formula C22H63N13P4. "t-Bu" stands for tert-butyl, –C(CH3)3. "P4" stands for the fact that this molecule has 4 phosphorus atoms. P4-t-Bu can also be regarded as a tetrameric triaminoiminophosphorane of the basic structure (R2N)3P=NR. The homologous series of P1 to P7 polyaminophosphazenes of the general formula [(R¹₂N)₃P=N–]ₓ–(R¹₂N)₃₋ₓP=NR², with preferably methyl groups as R¹, a methyl group or tert-butyl group as R², and integer x between 0 and 6 (P4-t-Bu: R¹ = Me, R² = t-Bu and x = 3), has been developed by Reinhard Schwesinger; the resulting phosphazene bases are therefore also referred to as Schwesinger superbases. Preparation The convergent synthesis of P4-t-Bu starts from phosphorus pentachloride (1) and leads in one branch, via the non-isolated chlorotris(dimethylamino)phosphonium chloride (2) and the well-characterized aminotris(dimethylamino)phosphonium tetrafluoroborate (3), to the liquid iminotris(dimethylamino)phosphorane (4), and in the other branch, with phosphorus pentachloride and tert-butylammonium chloride, to tert-butylphosphorimide trichloride (5). The reaction of excess (4) with (5) yields the hydrochloride of the target product P4-t-Bu (6) in 93% yield, which is also converted into the tetrafluoroborate salt (7), from which the free base (8) can be obtained almost quantitatively with potassium methoxide/sodium amide or with potassium amide in liquid ammonia. The transfer of the hygroscopic and readily water-soluble hydrochlorides and the liquid free bases into the tetrafluoroborates, which are sparingly soluble in water, facilitates the handling of the substances considerably. The relatively uncomplicated convergent synthesis with easily accessible reactants and very good yields of the intermediates makes P4-t-Bu an interesting phosphazene superbase. Properties P4-t-Bu is one of the strongest neutral nitrogenous bases, with an extrapolated pKa value of 42.1 (for its conjugate acid) in acetonitrile, and is thus, compared to the strong base DBU with a pKa value of 24.3, about 18 orders of magnitude more basic. The compound is very soluble in non-polar solvents, such as hexane, toluene or tetrahydrofuran, and is usually commercially available as a 0.8 to 1 molar solution in hexane. Even in weakly acidic media, protonation produces the extremely delocalized and soft P4-t-Bu-H+ cation and causes, besides a very strong solubilization effect, an extreme acceleration of addition reactions even at temperatures below −78 °C. P4-t-Bu owes its extraordinarily high basicity with low nucleophilicity to its very high steric hindrance and the involvement of many donor groups in conjugation with the spatially demanding structure of the cation formed by protonation. P4-t-Bu is an extremely hygroscopic solid which is thermally stable up to 120 °C and chemically stable to (dry) oxygen and bases. Traces of water and protic impurities can be eliminated by addition of bromoethane. The base is both very hydrophilic and very lipophilic and can be recovered easily and almost completely from reaction mixtures by the formation of the sparingly soluble tetrafluoroborate salt. Because of its extremely weak Lewis basicity, the cation of P4-t-Bu suppresses typical side reactions of metal organyls (such as aldol condensations) as can be caused by lithium amides such as lithium diisopropylamide (LDA).
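As a quick arithmetic check of the basicity comparison above (our illustration, not from the original text): the difference in the pKa values of the two conjugate acids translates directly into a ratio of protonation equilibrium constants,

$$\Delta \mathrm{p}K_a = 42.1 - 24.3 = 17.8 \approx 18,$$

so the equilibrium constant for protonation of P4-t-Bu exceeds that of DBU by a factor of about $10^{17.8} \approx 10^{18}$, hence the statement that it is roughly 18 orders of magnitude more basic.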
Applications The neutral superbase P4-t-Bu is superior to ionic bases when those are sensitive to oxidation or side reactions (such as acylation), when they cause solubility problems, or when they promote Lewis acid-catalysed side reactions (such as aldol reactions, epoxide ring opening, etc.). The dehydrohalogenation of n-alkyl bromides yields the alkene; for example, the reaction of 1-bromooctane with P4-t-Bu yields 1-octene almost quantitatively (96%) under mild conditions, compared to the potassium tert-butoxide/18-crown-6 system with only 75% yield. Alkylations on weakly acidic methylene groups (e.g. in the case of carboxylic esters or nitriles) proceed with high yield and selectivity. For example, by the reaction of 8-phenylmenthyl phenylacetate with iodoethane in the presence of P4-t-Bu, only the monoethyl derivative in the Z configuration is obtained, in 95% yield. Succinonitrile reacts with iodoethane in the presence of P4-t-Bu in 98% yield to give the tetraethyl derivative without undergoing a Thorpe-Ziegler reaction to form a cyclic α-ketonitrile. Trifluoromethylation of ketones (such as benzophenone) is also possible at room temperature, in good yields of up to 84%, with the inert fluoroform (HFC-23) in the presence of P4-t-Bu and tris(trimethylsilyl)amine. Intramolecular cyclization of ortho-alkynylphenyl ethers leads in the presence of P4-t-Bu, under mild conditions and without metal catalysts, to substituted benzofurans. Due to its extreme basicity it was suggested early on that P4-t-Bu would be suited as an initiator for anionic polymerization. With the ethyl acetate/P4-t-Bu initiator system, poly(methyl methacrylate) (PMMA) with narrow polydispersity and molar masses up to 40,000 g·mol−1 could be obtained in THF. Anionic polymerization of ethylene oxide with the initiator system n-butyllithium/P4-t-Bu yields well-defined poly(ethylene oxide)s with low polydispersity. Cyclic siloxanes (such as hexamethylcyclotrisiloxane or decamethylcyclopentasiloxane) can also be polymerized with catalytic amounts of P4-t-Bu and water or silanols as initiators, under good molecular weight control, to thermally very stable polysiloxanes having decomposition temperatures of >450 °C. Because of its extreme basicity, P4-t-Bu eagerly absorbs water and carbon dioxide, both of which inhibit anionic polymerization. Heating to temperatures >100 °C removes CO2 and water and restores the anionic polymerization. The extreme hygroscopy of the phosphazene base P4-t-Bu, as a substance and in solutions, requires a great effort for storage and handling and prevents its broader use. References Phosphorus compounds Nitrogen(−III) compounds Non-nucleophilic bases Superbases Tert-butyl compounds
P4-t-Bu
Chemistry
1,720
60,322,595
https://en.wikipedia.org/wiki/NGC%204297
NGC 4297 is a lenticular galaxy located about 200 million light-years away in the constellation Virgo. It was discovered by astronomer William Herschel on April 13, 1784. It forms an interacting pair with NGC 4296. See also List of NGC objects (4001–5000) References External links 4297 +01-32-018 039940 Virgo (constellation) Astronomical objects discovered in 1784 Lenticular galaxies Interacting galaxies Discoveries by William Herschel
NGC 4297
Astronomy
99
585,102
https://en.wikipedia.org/wiki/Julius%20von%20Mayer
Julius Robert von Mayer (25 November 1814 – 20 March 1878) was a German physician, chemist, and physicist and one of the founders of thermodynamics. He is best known for enunciating in 1841 one of the original statements of the conservation of energy, or what is now known as one of the first versions of the first law of thermodynamics, namely that "energy can be neither created nor destroyed". In 1842, Mayer described the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. He also proposed that plants convert light into chemical energy. His achievements were overlooked and priority for the discovery in 1842 of the mechanical equivalent of heat was attributed to James Joule in the following year. Early life Mayer was born on 25 November 1814 in Heilbronn, Württemberg (Baden-Württemberg, modern-day Germany), the son of a pharmacist. He grew up in Heilbronn. After completing his Abitur, he studied medicine at the University of Tübingen, where he was a member of the Corps Guestphalia, a German Student Corps. During 1838 he attained his doctorate as well as passing the Staatsexamen. After a stay in Paris (1839/40) he left as a ship's physician on a Dutch three-mast sailing ship for a journey to Jakarta. Although he had hardly been interested in physical phenomena before this journey, his observation that storm-whipped waves are warmer than the calm sea started him thinking about the physical laws, in particular about the physical phenomenon of warmth and the question whether the directly developed heat alone (the heat of combustion), or the sum of the quantities of heat developed in direct and indirect ways, is to be accounted for in the burning process. After his return in February 1841 Mayer dedicated his efforts to solving this problem. In 1841 he settled in Heilbronn and married. Development of ideas Even as a young child, Mayer showed an intense interest in mechanical devices. As a young man he performed various physical and chemical experiments; one of his favorite hobbies was building electrical devices and air pumps. Clearly intelligent, Mayer entered Eberhard-Karls University in May 1832, where he studied medicine. In 1837, he and some of his friends were arrested for wearing the couleurs of a forbidden organization. The consequences of this arrest included a one-year expulsion from the college and a brief period of incarceration. This diversion sent Mayer traveling to Switzerland, France, and the Dutch East Indies. Through private tutoring from his friend Carl Baur, Mayer developed an additional interest in mathematics and engineering. In 1841, Mayer returned to Heilbronn to practice medicine, but physics became his new passion. In June 1841 he completed his first scientific paper, entitled "On the Quantitative and Qualitative Determination of Forces". It was largely ignored by other professionals in the area. Mayer then became interested in the area of heat and its motion. He presented a value in numerical terms for the mechanical equivalent of heat. He was also the first person to describe the vital chemical process now referred to as oxidation as the primary source of energy for any living creature. In 1848 he calculated that in the absence of a source of energy the Sun would cool down in only 5000 years, and he suggested that the impact of meteorites kept it hot.
Since he was not taken seriously at the time, his achievements were overlooked and credit was given to James Joule. Mayer almost committed suicide after he discovered this fact. He spent some time in mental institutions to recover from this and the loss of some of his children. Several of his papers were published due to the advanced nature of the physics and chemistry. He was awarded an honorary doctorate in 1859 by the philosophical faculty at the University of Tübingen. His overlooked work was revived in 1862 by fellow physicist John Tyndall in a lecture at the London Royal Institution. In July 1867 Mayer published "Die Mechanik der Wärme". This publication dealt with the mechanics of heat and its motion. On 5 November 1867 Mayer was awarded personal nobility by the Kingdom of Württemberg (von Mayer), which is the German equivalent of a British knighthood. Von Mayer died in Germany in 1878. After Sadi Carnot stated it for caloric, Mayer was the first person to state the law of the conservation of energy, one of the most fundamental tenets of modern-day physics. The law of the conservation of energy states that the total mechanical energy of a system remains constant in any isolated system of objects that interact with each other only by way of forces that are conservative. Mayer's first attempt at stating the conservation of energy was a paper he sent to Johann Christian Poggendorff's Annalen der Physik, in which he postulated a conservation of force (Erhaltungssatz der Kraft). However, owing to Mayer's lack of advanced training in physics, it contained some fundamental mistakes and was not published. Mayer continued to pursue the idea steadfastly and argued with the Tübingen physics professor Johann Gottlieb Nörremberg, who rejected his hypothesis. Nörremberg did, however, give Mayer a number of valuable suggestions on how the idea could be examined experimentally; for example, if kinetic energy transforms into heat energy, water should be warmed by vibration. Mayer not only performed this demonstration, but also determined the quantitative factor of the transformation, calculating the mechanical equivalent of heat. The result of his investigations was published in 1842 in the May edition of Justus von Liebig's Annalen der Chemie und Pharmacie. It was translated as Remarks on the Forces of Inorganic Nature. In his booklet Die organische Bewegung im Zusammenhang mit dem Stoffwechsel (The Organic Movement in Connection with the Metabolism, 1845) he specified the numerical value of the mechanical equivalent of heat: at first as 365 kgf·m/kcal, later as 425 kgf·m/kcal; the modern values are 4.184 kJ/kcal (426.6 kgf·m/kcal) for the thermochemical calorie and 4.1868 kJ/kcal (426.9 kgf·m/kcal) for the international steam table calorie. This relation implies that, although work and heat are different forms of energy, they can be transformed into one another. This law is now called the first law of thermodynamics, and led to the formulation of the general principle of conservation of energy, definitively stated by Hermann von Helmholtz in 1847. Mayer's relation Mayer derived a relation between the specific heat at constant pressure and the specific heat at constant volume for an ideal gas. The relation is $C_{P,m} - C_{V,m} = R$, where CP,m is the molar specific heat at constant pressure, CV,m is the molar specific heat at constant volume and R is the gas constant.
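A one-line derivation of Mayer's relation (added here for illustration) uses the ideal-gas law: for one mole, the enthalpy is H = U + pV = U + RT, and since the internal energy U of an ideal gas depends only on T,

$$C_{P,m} = \left(\frac{\partial H}{\partial T}\right)_p = \frac{dU}{dT} + R = C_{V,m} + R.$$

The unit conversion quoted above can be checked the same way: 4.184 kJ/kcal divided by g = 9.80665 N/kgf gives 426.6 kgf·m/kcal.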
Later life Mayer was aware of the importance of his discovery, but his inability to express himself scientifically led to degrading speculation and resistance from the scientific establishment. Contemporary physicists rejected his principle of conservation of energy, and even acclaimed physicists Hermann von Helmholtz and James Prescott Joule viewed his ideas with hostility. The former doubted Mayer's qualifications in physical questions, and a bitter dispute over priority developed with the latter. In 1848 two of his children died in rapid succession, and Mayer's mental health deteriorated. He attempted suicide on 18 May 1850 and was committed to a mental institution. After he was released, he was a broken man and only timidly re-entered public life in 1860. However, in the meantime, his scientific fame had grown and he received a late appreciation of his achievement, although perhaps at a stage where he was no longer able to enjoy it. He continued to work vigorously as a physician until his death. Honors 1840 Mayer received the Knight Cross of the Order of the Crown (Württemberg). 1869 Mayer received the prix Poncelet. The Robert-Mayer-Gymnasium and the Robert-Mayer-Volks- und Schulsternwarte in Heilbronn bear his name. In chemistry, he invented Mayer's reagent, which is used in detecting alkaloids. Works Ueber das Santonin: eine Inaugural-Dissertation, welche zur Erlangung der Doctorwürde in der Medicin & Chirurgie unter dem Praesidium von Wilhelm Rapp im July 1838 der öffentlichen Prüfung vorlegt Julius Robert Mayer. M. Müller, Heilbronn 1838. Digital edition by the University and State Library Düsseldorf References Further reading External links 1814 births 1878 deaths 19th-century German physicists Recipients of the Copley Medal People from Heilbronn Thermodynamicists
Julius von Mayer
Physics,Chemistry
1,815
412,984
https://en.wikipedia.org/wiki/Glide%20reflection
In geometry, a glide reflection or transflection is a geometric transformation that consists of a reflection across a hyperplane and a translation ("glide") in a direction parallel to that hyperplane, combined into a single transformation. Because the distances between points are not changed under glide reflection, it is a motion or isometry. When the context is the two-dimensional Euclidean plane, the hyperplane of reflection is a straight line called the glide line or glide axis. When the context is three-dimensional space, the hyperplane of reflection is a plane called the glide plane. The displacement vector of the translation is called the glide vector. When some geometrical object or configuration appears unchanged by a transformation, it is said to have symmetry, and the transformation is called a symmetry operation. Glide-reflection symmetry is seen in frieze groups (patterns which repeat in one dimension, often used in decorative borders), wallpaper groups (regular tessellations of the plane), and space groups (which describe e.g. crystal symmetries). Objects with glide-reflection symmetry are in general not symmetrical under reflection alone, but two applications of the same glide reflection result in a double translation, so objects with glide-reflection symmetry always also have a simple translational symmetry. When a reflection is composed with a translation in a direction perpendicular to the hyperplane of reflection, the composition of the two transformations is a reflection in a parallel hyperplane. However, when a reflection is composed with a translation in any other direction, the composition of the two transformations is a glide reflection, which can be uniquely described as a reflection in a parallel hyperplane composed with a translation in a direction parallel to the hyperplane. A single glide is represented as frieze group p11g. A glide reflection can be seen as a limiting rotoreflection, where the rotation becomes a translation. It can also be given a Schoenflies notation as S2∞, Coxeter notation as [∞+,2+], and orbifold notation as ∞×. Frieze groups In the Euclidean plane, reflections and glide reflections are the only two kinds of indirect (orientation-reversing) isometries. For example, there is an isometry consisting of the reflection on the x-axis, followed by translation of one unit parallel to it. In coordinates, it takes (x, y) to (x + 1, −y). This isometry maps the x-axis to itself; any other line which is parallel to the x-axis gets reflected in the x-axis, so this system of parallel lines is left invariant. The isometry group generated by just a glide reflection is an infinite cyclic group. Combining two equal glide reflections gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group. In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection, and hence the group generated by it. If that is all it contains, this type is frieze group p11g. A typical example pattern with this symmetry group in everyday life would be the track of footprints left in the sand by a person walking on a beach. Frieze group no. 6 (glide-reflections, translations and rotations) is generated by a glide reflection and a rotation about a point on the line of reflection. It is isomorphic to a semi-direct product of Z and C2.
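The coordinate form (x, y) to (x + 1, −y) is easy to check numerically. Below is a small illustrative sketch (ours, not from the article) using a homogeneous-coordinate matrix; it also confirms that applying the glide reflection twice yields a pure translation by twice the glide vector:

import numpy as np

# Glide reflection along the x-axis: reflect across y = 0, then
# translate by the glide vector (1, 0), in homogeneous coordinates.
G = np.array([[1,  0, 1],
              [0, -1, 0],
              [0,  0, 1]])

p = np.array([2.0, 3.0, 1.0])   # the point (2, 3)
print(G @ p)                    # [ 3. -3.  1.]  i.e. (x + 1, -y)

# Two applications give a pure translation by (2, 0):
print(G @ G)                    # [[1 0 2], [0 1 0], [0 0 1]]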
For any symmetry group containing some glide-reflection symmetry, the translation vector of any glide reflection is one half of an element of the translation group. If the translation vector of a glide reflection is itself an element of the translation group, then the corresponding glide-reflection symmetry reduces to a combination of reflection symmetry and translational symmetry. Wallpaper groups Glide-reflection symmetry with respect to two parallel lines with the same translation implies that there is also translational symmetry in the direction perpendicular to these lines, with a translation distance which is twice the distance between glide reflection lines. This corresponds to wallpaper group pg; with additional symmetry it occurs also in pmg, pgg and p4g. If there are also true reflection lines in the same direction then they are evenly spaced between the glide reflection lines. A glide reflection line parallel to a true reflection line already implies this situation. This corresponds to wallpaper group cm. The translational symmetry is given by oblique translation vectors from one point on a true reflection line to two points on the next, supporting a rhombus with the true reflection line as one of the diagonals. With additional symmetry it occurs also in cmm, p3m1, p31m, p4m and p6m. In the Euclidean plane 3 of the 17 wallpaper groups require glide reflection generators. p2gg has orthogonal glide reflections and 2-fold rotations. cm has parallel mirrors and glides, and pg has parallel glides. (Glide reflections are conventionally shown in diagrams as dashed lines.) Space groups Glide planes are noted in the Hermann–Mauguin notation by a, b or c, depending on which axis the glide is along. (The orientation of the plane is determined by the position of the symbol in the Hermann–Mauguin designation.) If the axis is not defined, then the glide plane may be noted by g. When the glide plane is parallel to the screen, these planes may be indicated by a bent arrow in which the arrowhead indicates the direction of the glide. When the glide plane is perpendicular to the screen, these planes can be represented either by dashed lines when the glide is parallel to the plane of the screen or dotted lines when the glide is perpendicular to the plane of the screen. Additionally, a centered lattice can cause a glide plane to exist in two directions at the same time. This type of glide plane may be indicated by a bent arrow with an arrowhead on both sides when the glide plane is parallel to the plane of the screen or a dashed and double-dotted line when the glide plane is perpendicular to the plane of the screen. There is also the n glide, which is a glide along half of a diagonal of a face, and the d glide, which is along a fourth of either a face or space diagonal of the unit cell. The latter is often called the diamond glide plane as it features in the diamond structure. The n glide plane may be indicated by a diagonal arrow when it is parallel to the plane of the screen or a dashed-dotted line when the glide plane is perpendicular to the plane of the screen. A d glide plane may be indicated by a diagonal half-arrow if the glide plane is parallel to the plane of the screen or a dashed-dotted line with arrows if the glide plane is perpendicular to the plane of the screen. If a d glide plane is present in a crystal system, then that crystal must have a centered lattice.
In today's version of Hermann–Mauguin notation, the symbol e is used in cases where there are two possible ways of designating the glide direction because both are true. For example, if a crystal has a base-centered Bravais lattice centered on the C face, then a glide of half a cell unit in the a direction gives the same result as a glide of half a cell unit in the b direction. The isometry group generated by just a glide reflection is an infinite cyclic group. Combining two equal glide plane operations gives a pure translation with a translation vector that is twice that of the glide reflection, so the even powers of the glide reflection form a translation group. In the case of glide-reflection symmetry, the symmetry group of an object contains a glide reflection and the group generated by it. For any symmetry group containing a glide reflection, the glide vector is one half of an element of the translation group. If the translation vector of a glide plane operation is itself an element of the translation group, then the corresponding glide plane symmetry reduces to a combination of reflection symmetry and translational symmetry. Examples and applications Glide symmetry can be observed in nature among certain fossils of the Ediacara biota, the machaeridians, and certain palaeoscolecid worms. It can also be seen in many extant groups of sea pens. In Conway's Game of Life, a commonly occurring pattern called the glider is so named because it repeats its configuration of cells, shifted by a glide reflection, after two steps of the automaton. After four steps and two glide reflections, the pattern returns to its original orientation, shifted diagonally by one unit. Continuing in this way, it moves across the array of the game; a small simulation demonstrating this is sketched below. See also Screw axis Lattice (group) Notes References External links Glide Reflection at cut-the-knot Euclidean symmetries Transformation (function) Crystallography
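The following sketch (an illustrative example added here, not part of the article) implements a minimal Game of Life step with NumPy and checks that after four generations the glider reappears translated one cell diagonally — the net effect of two glide reflections:

import numpy as np

def life_step(grid):
    # Sum the eight neighbors of each cell on a toroidal grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell is alive next step with exactly 3 neighbors,
    # or with 2 neighbors if it is currently alive.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

g = np.zeros((8, 8), dtype=np.uint8)
g[1:4, 1:4] = [[0, 1, 0],       # the glider
               [0, 0, 1],
               [1, 1, 1]]

g4 = g
for _ in range(4):
    g4 = life_step(g4)

# Four steps = two glide reflections = one diagonal translation.
assert np.array_equal(g4, np.roll(np.roll(g, 1, 0), 1, 1))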
Glide reflection
Physics,Chemistry,Materials_science,Mathematics,Engineering
1,781
31,212,987
https://en.wikipedia.org/wiki/Fine%20motor%20skill
Fine motor skill (or dexterity) is the coordination of small muscles in movement with the eyes, hands and fingers. The complex levels of manual dexterity that humans exhibit can be related to the nervous system. Fine motor skills aid in the growth of intelligence and develop continuously throughout the stages of human development. Types of motor skills Motor skills are movements and actions of the bone structures. Typically, they are categorised into two groups: gross motor skills and fine motor skills. Gross motor skills are involved in movement and coordination of the arms, legs, and other large body parts. They involve actions such as running, crawling and swimming. Fine motor skills are involved in smaller movements that occur in the wrists, hands, fingers, feet and toes. Specifically, single-joint movements are fine motor movements and require fine motor skills. They involve smaller actions such as picking up objects between the thumb and finger, writing carefully, and blinking. Developmental stages Through each developmental stage, motor skills gradually develop. They are first seen during infancy, toddlerhood, preschool and school age. "Basic" fine motor skills gradually develop and are typically mastered between the ages of 6 and 12 in children. Fine motor skills develop with age and practice. If deemed necessary, occupational therapy can help improve overall fine motor skills. Infancy Early fine motor skills are involuntary reflexes. The most notable involuntary reflex is the Darwinian reflex, a primitive reflex displayed in various newborn primate species. These involuntary muscle movements are temporary and often disappear after the first two months. After eight weeks, an infant will begin to voluntarily use its fingers to touch. However, infants have not learned to grab at this stage. Hand–eye coordination begins to develop at two to five months. Infants begin to reach for and grasp objects at this age. In 1952, Piaget found that even before infants are able to reach for and successfully grasp objects they see, they demonstrate competent hand-mouth coordination. A study was done by Philippe Rochat at Emory University in 1992 to test the relation between progress in the control of posture and the developmental transition from two-handed to one-handed engagement in reaching. It was found that the object reached for needed to be controlled. The precision of the reach is potentially maximized when the object is placed centrally. It was also found that posture needed to be controlled, because infants that were not able to sit on their own used bimanual reaches in all postural positions except sitting upright, where they would reach one-handed. As a result, their grasping phases will not have been maximized because of the decrease in body control. On the other hand, if the infant does not have body control, it would be hard for them to get hold of an object because their reach will be limited. When "non-sitting" infants reached bimanually while seated upright, they often ended up falling forward. Regardless of whether they can self-sit, infants can adjust their two-handed engagement in relation to the arrangement of the objects being reached for. Analysis of hand-to-hand distance during reaching indicates that in the prone and supine postures, non-sitting infants moved their hands simultaneously towards the midline of their bodies as they reached, which is not observed in stable sitting infants in any position.
Non-sitter infants, although showing strong tendencies toward bimanual reaching, tend to reach with one hand when seated. Sitter infants show a majority of differentiated reaches in all posture conditions. A study conducted by Esther Thelen on postural control during infancy used the dynamic systems approach to observe motor development. The findings suggest that early reaching is constrained by head and shoulder instability. The relationship between posture and reaching is close: head control and body stability are necessary for the emergence of grasping. The next developmental milestone is between seven and twelve months, when a series of fine motor skills begins to develop. These include increase in grip, enhancement of vision, pointing with the index finger, smoothly transferring objects from one hand to the other, as well as using the pincer grip (with the thumb and index fingers) to pick up tiny objects with precision. Several aspects of grasping change when the infant reaches seven months. The infant has a better chance of grasping successfully once able to sit up without falling over, and grasping itself matures: the infant starts to hold objects more securely as age increases. Toddlerhood By the time a child is one year old, their fine motor skills have developed enough to hold and look at objects. As children manipulate objects with purpose, they gain experience identifying objects based on their shape, size, and weight. This develops the child's fine motor skills, and their understanding of the world. A toddler will show hand dominance. Preschool Children typically attend preschool between the ages of 2 and 5. At this time, the child is capable of grasping objects using the static tripod grasp, which is the combined use of the index finger, thumb, and middle finger. A preschool child's motor skills are moderate, allowing the child to cut shapes out of paper, draw or trace over vertical lines with crayons, button their clothes, and pick up objects. A preferred hand dominates the majority of their activities. They also develop sensory awareness and interpret their environment by using their senses and moving accordingly. After the static tripod grasp, the next form is the dynamic tripod grasp. These are shown in a series in Schneck and Henderson's Grip Form chart. Based on the accuracy and form of the hold, the child will be ranked either from 1–10 or 1–5 according to how well they are able to complete the dynamic tripod grasp while writing properly. In conjunction with accuracy and precision, the child will be able to properly position a writing utensil in terms of implement diameter as well as form and grip strength. Proper handwriting and drawing fall deeper into a category of graphomotor skills. The National Centre of Teaching and Learning illustrates the abilities that preschool children should have improved through their fine motor skills in several domains. Children use their motor skills by sorting and manipulating geometric shapes, making patterns, and using measurement tools to build their math skills. By using writing tools and reading books, they build their language and literacy. Arts and crafts activities like cutting and gluing paper, finger painting, and dressing up develop their creativity. Parents can support this development by intervening when the child does not perform the fine motor activity correctly, making use of several senses in a learning activity, and offering activities that the child will be successful with.
Developmental disabilities may prevent a child from doing things that involve motor skills, such as drawing or building with blocks. Fine motor skills acquired during this stage aid in the later advancement and understanding of subjects such as science and reading. A study in the American Journal of Occupational Therapy, which included twenty-six preschool children who had received occupational therapy on a weekly basis, showed overall advancements in their fine motor skill area. The results showed a link between in-hand manipulation, hand–eye coordination, and grasping strength and the child's motor skills, self-care and social function. These children were shown to have better mobility and self-sustainment. School age Between the ages of five and seven, fine motor skills will have developed. As the child interacts with objects, the movements of the elbows and shoulders should be less apparent, as should the movements of wrist and fingers. From the ages of three to five years old, girls advance their fine motor skills more than boys. Girls develop physically at an earlier age than boys; this is what allows them to advance their motor skills at a faster rate during prepubescent ages. Boys advance in gross motor skills later on, at around age five and up. Girls are more advanced in balance and motor dexterity. Children should be able to make precise cuts with scissors, for example cutting out squares, and hold them in a more common and mature manner. The child's movements should become fluid as the arms and hands become more in sync with each other. The child should also be able to write more precisely on lines, and print letters and numbers with greater clarity. Common problems Fine motor skills can become impaired due to injury, illness, stroke, congenital deformities, cerebral palsy, or developmental disabilities. Problems with the brain, spinal cord, peripheral nerves, muscles, or joints can also have an effect on fine motor skills, and can decrease control. If an infant or child up to age five is not developing their fine motor skills, they will show signs of difficulty controlling their hands, fingers, and face. In young children, a delay in learning to sit or walk is an early sign that there will be issues with fine motor skills, and the child may also show signs of difficulty with tasks such as cutting with scissors, drawing lines, or folding clothes. If a child has difficulty with these, they might have poor hand–eye coordination and could need therapy to improve their skills. Assessment Fine motor skills can be assessed with standardized and non-standardized tests in children and adults. Fine-motor assessments can include force matching tasks. Humans exhibit a high degree of accuracy in force matching tasks where an individual is instructed to match a reference force applied to a finger with the same or a different finger. Humans show high accuracy during grip force matching tasks. These aspects of manual dexterity are apparent in the ability of humans to effectively use tools and perform hard manipulation tasks such as handling unstable objects. Another assessment is called the Peabody Developmental Motor Scales (PDMS). The PDMS is a test for children aged 0–7 that examines the child's ability to grasp a variety of objects, the development of hand–eye coordination, and the child's overall finger dexterity.
Similar to the PDMS, the visual–motor integration assessment (VMI-R) examines the visual–motor integration system and points out possible learning disabilities that are often related to delays in visual perception and fine motor skills, such as poor hand–eye coordination. Because advancements in mathematics and language skills are directly correlated with the development of the fine motor system, it is essential that children acquire the fine motor skills needed to interact with the environment at an early stage. Examples of tests include: Purdue Pegboard Test Box and Blocks Test Strength-dexterity test See also Hand–eye coordination Spatial awareness Depth perception Sleight of hand References External links Fine Motor Control - MedlinePlus (2011) Watch How You Hold That Crayon - The New York Times (2010) Aptitude Motor control Motor skills
Fine motor skill
Biology
2,127
21,728,703
https://en.wikipedia.org/wiki/MAGIChip
MAGIChips, also known as "microarrays of gel-immobilized compounds on a chip" or "three-dimensional DNA microarrays", are devices for molecular hybridization produced by immobilizing oligonucleotides, DNA, enzymes, antibodies, and other compounds on a photopolymerized micromatrix of polyacrylamide gel pads of 100x100x20 μm or smaller size. This technology is used for the analysis of nucleic acid hybridization, of the specific binding of DNA and low-molecular-weight compounds with proteins, and of protein-protein interactions. The gel pads increase the surface available for hybridization up to 50-fold compared with typical microarrays, which are printed on the flat surface of a glass slide that is usually treated with chemical compounds to which the probes adhere. A probe density of more than 10¹² molecules per gel pad can be achieved due to the 3D nature of the gel pads. The array is based on a glass surface that has small polyacrylamide gel units affixed to it. Each gel unit functions as an individual reaction cell, as it is surrounded by a hydrophobic glass surface that prevents mixing of the solution in the gel units. This lays a foundation for performing ligation, single base extension, PCR amplification of DNA, on-chip MALDI-TOF mass spectrometry and other reactions. Historical background MAGIChip technology was developed as a result of a collaboration between Dr. David Stahl at the University of Washington and Dr. Andrei Mirzabekov, formerly of Argonne National Laboratory. Andrei Mirzabekov initiated the development of DNA sequencing by hybridization with oligonucleotides, then a novel method, in 1988. This method was a foundation for the biotechnology that uses biological microchips to identify DNA structures rapidly, which is of great importance in the fight against a variety of diseases. A joint research project was announced in 1998 among Motorola Inc, Packard Instrument Company and the U.S. Department of Energy's Argonne National Laboratory. In 1999, the researchers at Argonne National Lab pushed the development of the microarray-type biochip technology they co-designed with the Engelhardt Institute to ward off a worldwide outbreak of tuberculosis. Motorola developed manufacturing processes to mass-produce biochips, and Packard developed and manufactured the analytical instruments to process and analyze the biochips. Argonne's contribution, in conjunction with the Engelhardt Institute of Molecular Biology (EIMB), was intellectual property in the form of 19 inventions related to biological microchips. But this collaboration between the EIMB in Moscow, Argonne National Laboratory in Illinois and the two US-based commercial partners collapsed in 2001 as a result of a dispute over contractual arrangements between the parties. As a result of this dispute, Dr. Andrei Mirzabekov resigned as director of Argonne's Biochip Technology Centre. Method Arrays of gel elements (pads) are created on the glass surface (micromatrix), followed by the application and chemical immobilization of different compounds (probes) onto these gel pads. The test sample is then added to this micromatrix containing immobilized probes in gel pads, and molecular recognition reactions are allowed to take place under specified conditions. The test sample is fluorescently labelled to allow monitoring of the molecular interactions. The analysis of molecular interaction patterns is done using specialized software. The array of gel elements on a glass slide is prepared by photopolymerization. The acrylamide solution to be polymerized is applied to the polymerization chamber.
The polymerization chamber consists of a quartz mask, two Teflon spacers, and a microscope glass slide, clamped together by two metal clamps. The inner side of the quartz mask has ultraviolet (UV)-transparent windows arranged in a specified spatial manner in a non-transparent chromium film. The assembled chamber containing the acrylamide gel is exposed to UV light to allow polymerization only in those positions of the chamber that are situated directly under the transparent windows. Oligonucleotides or DNA fragments need to be activated to contain chemically reactive groups to facilitate coupling with the activated gel elements. Probe activation depends on the chemistry of activation of the polyacrylamide gels: to immobilize in an aldehyde-containing gel, the probe should have a reactive amino group, and if the gels are activated by the introduction of amino groups, the probes should contain a free aldehyde group. Probes are usually prepared by the introduction of chemically active groups at a terminal position of the oligonucleotides during their synthesis. Probes for immobilization are transferred into the gel elements of the micromatrix by dispensing robots. The fibre-optic pin of the robots has a hydrophobic side surface and a hydrophilic tip, and operates at the dew-point temperature to prevent evaporation of the sample during transfer. The activated probes are chemically immobilized by coupling oligonucleotides bearing amino or aldehyde groups with gel supports containing aldehyde or amino groups, respectively. The target molecules are labelled with fluorescent dyes. Fluorescent detection enables monitoring of the process in real time with high spatial resolution. The criteria for the labelling procedure include the following: it should be simple, fast and inexpensive; it should be applicable to both RNA and DNA targets; it should be compatible with the fragmentation required to decrease secondary structure formation; it should allow incorporation of one label into one fragment, to ensure proper quantification of the hybridization intensity; and it should allow coupling of multiple dyes. On-chip amplification reactions On-chip amplification of the hybridization reaction serves as a very useful tool when the DNA or protein under study is present in relatively small proportion in the molecular population applied to the chip, e.g., when one is dealing with a single-copy gene or an mRNA of low abundance. In the single base extension method, a primer is hybridized to DNA and extended by a dideoxyribonucleoside triphosphate that matches the nucleotide at a polymorphic site. Performing this reaction at a temperature above the melting temperature of the duplex between the DNA and the immobilized probe allows rapid association/dissociation of the target DNA. Thus the same DNA molecule reacts with many individual primers, leading to amplification of the primers in each individual gel pad. This procedure was applied to the identification of beta-globin gene mutations in beta-thalassemia patients and to the detection of the anthrax toxin gene. The chips also provide a good platform for performing PCR directly on the chip (in individual gel pads), as it is easy to isolate each gel pad from its neighbour, unlike typical microarray chips, which face serious problems in doing the same task. For the analysis of hybridization results obtained with fluorescently labelled target molecules, fluorescence microscopes are employed.
The instrument is equipped with a temperature-controlled sample table to vary the temperature in the chip-containing reaction chamber during the course of the experiment. A cooled charge-coupled device (CCD) camera is used to record the light signals from the chip, which are then sent to a computer program for quantitative evaluation of the hybridization signals over the entire chip. Data generated by these experiments are stored in a database and analyzed by software that helps provide evaluation, in silico experimentation, and hardware and software quality control. Types Oligonucleotide chips Customized oligonucleotide biochips are designed to interrogate test samples of known nucleotide sequences: for example, known genes in cases when one is interested in their expression levels under certain conditions, or genes that are known to contain point mutations or to be polymorphic in a given population. The success of the microarray depends on the proper choice of probes in these cases. A set of potential hybridization probes that form perfect duplexes with the sequence is created for each DNA sequence. The potential probes that may create ambiguities in the interpretation of the hybridization pattern are excluded on the basis of AT vs GC content and the propensity to form hairpins and other types of stable secondary structures that may drastically affect the intensity of hybridization; a toy probe-screening sketch is given below. One of the successful applications of customized oligonucleotide chips is the detection of beta-thalassemia mutations in patients. For the diagnostics of beta-thalassemia mutations, a simple chip was designed that contained six probes corresponding to different beta-thalassemia genotypes and was hybridized with PCR-amplified DNA from healthy humans and patients. The hybridization results showed the expected significant differences in signal intensity between matched and mismatched duplexes, thus allowing reliable identification of both homozygous and heterozygous mutations. rRNA chips These chips have been developed for ribosomal RNA (rRNA) targets, commonly used for detecting bacteria. rRNA is very abundant in the cell, comprising about 80% of the RNA content of a typical eukaryotic cell. The rRNA is naturally pre-amplified by the bacteria and is present in several thousand copies per cell, making it a good target for microarrays. Single-nucleotide polymorphisms present in the bacterial rRNA sequence are used to differentiate bacteria at the genus, species and strain level. This is a unique feature of this microchip that does not require PCR-based amplification. The process for detecting bacteria is relatively simple. The bacteria are cultured, washed and pelleted. Lysozyme is used to lyse the pellets, destroying the cell walls and releasing the nucleic acid. Lysed bacteria are passed through a column preparation where nucleic acid from the cell is immobilized and other debris is washed out. All the processes after lysis - isolation, purification, fragmentation and labelling of target rRNAs - are stable chemical reactions. Fragments <500 bp easily hybridize to the gel matrix. The total amount of material eluted off the chip is determined by UV spectrophotometry. The process from sample preparation to identification of organisms based on automated algorithms takes place within 2 hours. cDNA cDNAs obtained from reverse transcription of mRNA populations extracted from cells under varying physiological and experimental conditions are used as immobilized probes. These arrays are widely used to study gene expression.
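The probe pre-screening described above can be caricatured in a few lines of code. The following sketch is purely illustrative: the Wallace rule Tm = 2(A+T) + 4(G+C) for short oligonucleotides stands in for the article's selection criteria, and the sequences and acceptance window are invented.

# Toy probe screen: keep candidates whose GC content and estimated
# melting temperature fall inside a (hypothetical) acceptance window.
def gc_fraction(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    # Wallace rule for short oligos: Tm = 2(A+T) + 4(G+C), in deg C.
    seq = seq.upper()
    return (2 * (seq.count("A") + seq.count("T"))
            + 4 * (seq.count("G") + seq.count("C")))

candidates = ["ATTATAGCAT", "GCGGCCGCAA", "ATGCATGCAT"]
kept = [p for p in candidates
        if 0.4 <= gc_fraction(p) <= 0.6 and 24 <= wallace_tm(p) <= 32]
print(kept)  # ['ATGCATGCAT']

A real design pipeline would additionally reject probes prone to hairpins and other stable secondary structures, as the text notes.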
A potential obstacle in using cDNAs is the difficulty of injecting and evenly distributing long molecules into the gel pads. This problem is resolved by developing polyacrylamide gels that have a larger average pore size. Another way to approach this problem is to randomly fragment the cDNA into relatively small pieces before immobilization. Proteins Protein chips can be prepared that contain different proteins immobilized as probes in a way that preserves their biological activity. A large-pore gel is used to prevent the diffusion of protein into the gel. There are two ways to immobilize proteins to the gel pads. The first is based on activation of the gel with glutaraldehyde. In the second procedure the gel is activated by partial substitution of amino groups with hydrazide groups. The reaction between hydrazide and aldehyde groups efficiently cross-links the protein to the gel. Protein microchips show the same high specificity in molecular recognition reactions as is seen in solution. Interactions between antigens and their specific antibodies can be studied on-chip in a variety of experimental conditions. Either the antigen or the antibody can be immobilized, and the reaction can be monitored by both direct and indirect methods. In the direct method, one uses target molecules labelled with a fluorescent dye; in the indirect method, the reaction is detected using a labelled molecule that specifically recognizes the target. These chips can be used to study the enzymatic activity of immobilized enzymes by coating the chip with a solution containing specific substrates. The reaction is monitored by detecting the formation of coloured or fluorescent precipitates. Other applications MAGIChips can be used to study single-nucleotide polymorphisms (SNPs). Because bacterial DNA is highly conserved over time, SNPs are useful to identify bacteria on the chip, and since SNPs are the most abundant variations in the human genome, they have become the primary markers in genetic studies for mapping and identifying susceptibility genes for complex diseases. MAGIChips can also be used to detect virulence factors, the toxins and proteins with which pathogens invade the organism. These toxins tend to have a small number of transcript copies and are produced under very specific conditions found in the host. Here the identification strategy focuses on single-copy DNA sequences, where MAGIChips are very effective. Protein biochips are particularly exciting because the proteins contained within a single cell can all be analyzed on one array platform. Protein biochips can be used to identify protein biomarkers for diagnosing diseases or a particular stage of a disease. They can also help to delineate the relationship between protein structure and function and to identify the function of a protein, or of different proteins, across the same or different cell types. Although MAGIChips need some modifications, the applications and techniques are quite standard. The chips can be used as a diagnostic tool in clinics by virtue of their rapid detection time, high throughput, result confidence, hierarchical identification and quantification. With them, the time from sample collection to reporting of results in clinical settings can be kept within 2 hours. The rapid turnaround time is an attractive attribute of point-of-care testing while the patient awaits the results.
The high-throughput nature of these devices allows thousands of microbial probes to be used for species-specific and even strain-specific identification at the same time on a single chip, reducing the amount of sample needed to conduct multiple tests. Potential threats posed by the use of bacteria, viruses and fungi as biological weapons against humans, agriculture and the environment warrant the development of technology for accurate and sensitive detection within a very short time; the MAGIChip is a prospective technology that has been used for discrimination of important viruses. Fungal probes have been introduced into rRNA chips for agricultural research in genetics, reproduction, diseases and even crop protection. Thousands of genes can be targeted simultaneously to look for genetic diversity or microbial infestation, whether natural or the result of intentional release. See also Biochip Protein Microarrays Antibody microarray Cellular Microarray Chemical Compound Microarray DNA Microarray MicroArray and Gene Expression Lab-on-a-Chip References Microarrays
MAGIChip
Chemistry,Materials_science,Biology
2,953
1,451,702
https://en.wikipedia.org/wiki/Seth%20Lloyd
Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology. His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction. Biography Lloyd was born on August 2, 1960. He graduated from Phillips Academy in 1978 and received a bachelor of arts degree from Harvard College in 1982. He earned a certificate of advanced study in mathematics and a master of philosophy degree from Cambridge University in 1983 and 1984, while on a Marshall Scholarship. Lloyd was awarded a doctorate by Rockefeller University in 1988 (advisor Heinz Pagels) after submitting a thesis on Black Holes, Demons, and the Loss of Coherence: How Complex Systems Get Information, and What They Do With It. From 1988 to 1991, Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Starting in 1988, Lloyd was an external faculty member at the Santa Fe Institute for more than 30 years. In his 2006 book, Programming the Universe, Lloyd contends that the universe itself is one big quantum computer producing what we see around us, and ourselves, as it runs a cosmic program. According to Lloyd, once we understand the laws of physics completely, we will be able to use small-scale quantum computing to understand the universe completely as well. Lloyd states that we could have the whole universe simulated in a computer in 600 years, provided that computational power increases according to Moore's Law. However, Lloyd shows that there are limits to rapid exponential growth in a finite universe, and that it is very unlikely that Moore's Law will be maintained indefinitely. Lloyd directs the Center for Extreme Quantum Information Theory (xQIT) at MIT. He has made influential contributions to a broad range of topics, mostly in the wider field of quantum information science. Among his most cited works are the first proposal for a digital quantum simulator, a general framework for quantum metrology, the first treatment of quantum computation with continuous variables, dynamical decoupling as a method of quantum error avoidance, quantum algorithms for equation solving and machine learning, and research on the possible relevance of quantum effects in biological phenomena, especially photosynthesis, an effect he has also collaborated in exploiting technologically. According to Clarivate, as of July 2023 he had a total of 199 peer-reviewed publications, which had been cited more than 22,600 times, giving an h-index of 61.
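A little arithmetic makes the 600-year estimate mentioned above concrete. The sketch below only illustrates the exponential-growth reasoning and is not a calculation taken from Lloyd's book; the two-year doubling period is an assumption, and the target of roughly 10**90 bits is Lloyd's published estimate (in "Computational capacity of the universe", 2002) of the number of bits the observable universe can register.

```python
import math

# Assumptions (not from Lloyd's text): a two-year doubling period for
# computational capacity, and ~10**90 bits as the information content
# of the observable universe.
doubling_period_years = 2.0
universe_bits = 1e90

years = 600
doublings = years / doubling_period_years   # 300 doublings in 600 years
growth_factor = 2 ** doublings              # ~2 x 10**90

print(f"{doublings:.0f} doublings -> growth factor ~1e{math.log10(growth_factor):.0f}")

# Equivalently: how long until capacity grows by a factor of 10**90?
years_needed = doubling_period_years * math.log2(universe_bits)
print(f"~{years_needed:.0f} years to gain a factor of 1e90")
```

Under these assumptions, 600 years gives 300 doublings, a capacity gain of about 2^300, or roughly 10^90, which is how the 600-year figure lines up with an estimate of the universe's information content of order 10^90 bits.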
Epstein affair During July 2019, reports surfaced that MIT and other institutions had accepted funding from convicted sex offender Jeffrey Epstein. In the ensuing scandal, the director of the MIT Media Lab, Joi Ito, resigned from MIT as a result of his association with Epstein. Lloyd's connections to Epstein also drew criticism: Lloyd had acknowledged receiving funding from Epstein in 19 of his papers. On August 22, 2019, Lloyd published a letter apologizing for accepting grants (totaling $225,000) from Epstein. Despite this, the controversy continued. In January 2020, at the request of the MIT Corporation, the law firm Goodwin Procter issued a report on all of MIT's interactions with Epstein. As a result of the report, on January 10, 2020, Lloyd was placed on paid administrative leave. Lloyd has vigorously denied that he misled MIT about the source of the funds he received from Epstein. This denial was validated by a subsequent MIT investigation, which concluded that Lloyd did not attempt to circumvent the MIT vetting process, nor try to conceal the name of the donor, and Lloyd was allowed to continue in his tenured faculty position at MIT. However, most but not all members of MIT's fact-finding committee concluded that Lloyd had violated MIT's conflict of interest policy by not revealing crucial publicly known information about Epstein's background to MIT, as a result of which Lloyd will be subject to a series of administrative actions for 5 years. Honors 2007 Fellow of the American Physical Society 2012 International Quantum Communication Award Works Lloyd, S., Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, Knopf, March 14, 2006, 240 p. Interview: Quantum Hanky Panky: A Conversation with Seth Lloyd (video), Edge Foundation, 2016 Interview: The Computational Universe: Seth Lloyd (video), Edge Foundation, 2002 Lecture: The Black Hole of Finance (video), Santa Fe Institute Movie: In 2022 Lloyd starred in the short film Steeplechase, directed by Andrey Kezzyn, which deals with closed timelike curves, a topic Lloyd has also addressed in his scientific work. See also Digital physics Nuclear magnetic resonance quantum computer Quantum Aspects of Life Simulated reality Notes External links Google Scholar page Personal web page "Crazy Molecules: Pulse Berlin Interview" Programming the Universe Radio Interview from This Week in Science September 26, 2006 Broadcast American mechanical engineers Complex systems scientists Harvard College alumni Rockefeller University alumni MIT School of Engineering faculty Living people 1960 births American people of Welsh descent Santa Fe Institute people New England Complex Systems Institute Quantum information scientists Quantum biology Marshall Scholars Fellows of the American Physical Society
Seth Lloyd
Physics,Biology
1,136
30,048,655
https://en.wikipedia.org/wiki/Proper%20right%20and%20proper%20left
Proper right and proper left are conceptual terms used to unambiguously convey relative direction when describing an image or other object. The "proper right" hand of a figure is the hand that would be regarded by that figure as its right hand. In a frontal representation, that appears on the left as the viewer sees it, creating the potential for ambiguity if the hand is just described as the "right hand". The terms are mainly used in discussing images of humans, whether in art history, medical contexts such as x-ray images, or elsewhere, but they can be used in describing any object that has an unambiguous front and back (for example furniture) or, when describing things that move or change position, with reference to the original position. However, a more restricted use may be preferred, and the internal instructions for cataloguing objects in the "Inventory of American Sculpture" at the Smithsonian American Art Museum say that "The terms 'proper right' and 'proper left' should be used when describing figures only". In heraldry, right and left are always used in the meaning of proper right and proper left, as for the imaginary bearer of a coat of arms; to avoid confusion, the Latin terms dexter and sinister are often used. The alternative is to use language that makes it clear that the viewer's perspective is being used. The swords in the illustrations might be described as: "to the left as the viewer sees it", "at the viewer's left", and so on. However, these formulations do not work for freestanding sculpture in the round, where the viewer might be at any position around the sculpture. A British 19th-century manual for military drill contrasts "proper left" with "present left" when discussing the orientation of formations performing intricate movements on a parade ground, "proper" meaning the orientation at the start of the drill. The terms are analogous to the nautical port and starboard, where "port" is to a watercraft as "proper left" is to a sculpture, and they are used for essentially the same reason. Their use obviates the need for potentially ambiguous language such as "my right", "your left", and so on, by expressing the direction in a manner that holds true regardless of the relative orientations of the object and observer. Another example is stage right and left in the theatre, which uses the actor's orientation, "stage right" equating to the audience's "house left". Examples of usage This is from the auction catalogue description of an African wood figure: There is extensive insect loss in the proper right leg, some at the proper right elbow, and at the fronts of both feet. There is a chip off the proper right breast, and the proper right leg was broken off and reglued. Describing an Indian sculpture: The figure standing on the yakṣī's proper left, however, is not a mirror image of the other male ... Notes Art history Orientation (geometry) Handedness Visual arts terminology
Proper right and proper left
Physics,Chemistry,Mathematics,Biology
624
10,473,782
https://en.wikipedia.org/wiki/Commission%20on%20Human%20Medicines
The Commission on Human Medicines (CHM) is a committee of the UK's Medicines and Healthcare products Regulatory Agency. It was formed in October 2005, and assumed the responsibilities of the Medicines Commission and the Committee on Safety of Medicines. The membership of this large and varied body is listed on a government website. The CHM's responsibilities include advising UK government ministers on matters relating to the regulation of human medicinal products, giving advice in relation to the safety, quality and efficacy of human medicinal products, and promoting the collection and investigation of information relating to adverse reactions to human medicines. Background to the establishment The Medicines and Healthcare products Regulatory Agency undertook a public consultation on proposals to amend the advisory body structure laid down in the Medicines Act 1968 in February 2005. Ministers agreed to a new structure with the establishment of the Commission, which amalgamated the responsibilities of the Medicines Commission and the Committee on Safety of Medicines. The commission was established under Section 2 of the Medicines Act 1968 (SI 2005 No. 1094). Expert Advisory Groups The work done by the CHM is parcelled out to Expert Advisory Groups (EAGs), which in effect constitute a subcommittee structure. The EAG chairs and members are also required to follow the NHS Code of Practice. Three statutory EAGs - namely Pharmacovigilance; Chemistry, Pharmacy and Standards; and Biologicals/Vaccines - are appointed by the NHS Appointments Commission because their members are also standing members of the commission. A list of all EAGs, as they were on 16 May 2011: Anti-infectives / HIV / Hepatology Biologicals / Vaccines Cardiovascular / Diabetes / Renal / Respiratory / Allergy Chemistry, Pharmacy and Standards Clinical Trials Dermatology / Rheumatology / Gastroenterology / Immunology Medicines for Women's Health Neurology / Pain Management / Psychiatry Oncology and Haematology Paediatric Medicines Patient Information Pharmacovigilance Terms of reference The duties of the Commission, which came into being on 30 October 2005, are set out in Section 3 of the Medicines Act 1968, as amended by the Medicines (Advisory Bodies) Regulations 2005, and include the following: to advise ministers on matters relating to human medicinal products, except those that fall under the remit of the Advisory Board on the Registration of Homoeopathic Products (ABRH) and the Herbal Medicines Advisory Committee (HMAC); to advise the Licensing Authority (LA) where the LA has a duty to consult the commission or where the LA chooses to consult the Commission, including giving advice in relation to the safety, quality and efficacy of human medicinal products; to consider representations made in relation to the commission's advice (either in writing or at a hearing) by an applicant or by a licence or marketing authorisation holder; to promote the collection and investigation of information relating to adverse reactions to human medicines (except for those products that fall within the remit of the ABRH or HMAC) for the purposes of enabling such advice to be given. Chairs The first Chairman of the Committee on Safety of Medicines was Sir Derrick Dunlop. Other chairmen are listed at Committee on Safety of Medicines. The chairs of the Commission on Human Medicines have been: 2005–2012, Professor Sir Gordon Duff; 2013–2020, Professor Stuart Ralston; and since 2021, Professor Sir Munir Pirmohamed. References External links Pharmacy organisations in the United Kingdom Organizations established in 2005 Clinical pharmacology 2005 establishments in the United Kingdom Non-departmental public bodies of the United Kingdom government
Commission on Human Medicines
Chemistry
698
20,634,608
https://en.wikipedia.org/wiki/NGC%203539
NGC 3539 is a lenticular galaxy in the constellation Ursa Major. It was discovered in April 1831 by John Herschel. It is a member of the galaxy cluster Abell 1185. References External links 3539 Lenticular galaxies Ursa Major 033799
NGC 3539
Astronomy
56
38,026,052
https://en.wikipedia.org/wiki/Cantabrian%20albarcas
A Cantabrian albarca is a rustic wooden shoe made in one piece, which has been used particularly by the peasants of Cantabria, Spain. In the neighbouring province of Asturias, madreñas are still widely used in rural areas. Cantabrian albarcas are similar to other European clogs, but have distinctive features in terms of the woodworking process and their use, including a characteristic set of three dowels on the bottom of the shoe. History The beginning of the use of this footwear in the northern regions of Spain (especially in Cantabria) is unknown, but it is already mentioned in a document from 1657, in which King Philip IV requested the Pope to create the Diocese of Santander. In the Cadastre of the Marquis of La Ensenada, in 1752, the profession of albarquero is recorded in several villages in the western part of Cantabria. Given the humid climate of the area, it is a very appropriate footwear to protect the feet from water and from dirt on the ground during certain tasks carried out in the stable, in the meadows and on the farmland. It is practical for walking on rough terrain, muddy ground and snow, because the 'tarugos', or lower heels, give elevation to the foot and lend agility to the gait. Today, this traditional handicraft survives in the hands of a few albarquero craftsmen, who make albarcas only to order, sometimes for use and sometimes as a typical souvenir of the Cantabrian region, both in natural size and in small format. The craft of albarca making is tending to disappear, with the last remaining craftsmen in very few villages today being replaced by machines, in which a profile reader runs along the surface of the albarca to be reproduced and transfers its reading, by means of a set of bars, to blades that cut away the excess wood and achieve a perfect duplicate. The machine-made albarcas that can be bought in shops today are imported to Cantabria from other places. Although the use of albarcas as footwear has almost died out, this has not prevented this typical footwear of the north from being considered a cultural, and therefore tourist, resource. The albarca thus plays a role in the Saja-Nansa ecomuseum, which tries to care for and maintain these customs, as well as to conserve and present to new generations this set of heritage elements, footwear of this kind having been typical of the region for centuries. The Castro Valnera Cultural Association brought together a hundred albarcas, belonging to private collections and representative of all the Cantabrian regions, in a unique exhibition that had a great impact, both in terms of the number of visitors and the interest shown, and was covered by both the regional and national press. In an act of collaboration, on the Children's Day of Cantabria (a Festival of Regional Tourist Interest) held in Santander, the pieces that formed part of the exhibition were displayed on the Magdalena peninsula, where they were admired by more than 35,000 people. As a result, the albarca is still present in various associations and festivals in Cantabria. In 2006, the Town Council of Cartes (Cantabria) organised the day of the albarca in Santiago de Cartes, on the occasion of the festivities of San Cipriano (a Festival of Regional Tourist Interest). See also Geta (footwear) List of shoe styles References Culture of Cantabria Cantabrian symbols Spanish clothing Spanish folklore Footwear Clogs (shoes) Folk footwear Footwear by country Safety clothing Shoes Sandals
Cantabrian albarcas
Mathematics
908
42,215,079
https://en.wikipedia.org/wiki/Nu%20Leonis
ν Leonis, Latinised as Nu Leonis, is a binary star system in the zodiac constellation of Leo. It is faintly visible to the naked eye with an apparent visual magnitude of 5.15; parallax measurements indicate it is around 500 light years away. At this distance, the visual extinction from interstellar dust is 0.33 magnitudes. It lies 0.05 degrees north of the ecliptic, so it can be occulted by the Moon or planets. This is a single-lined spectroscopic binary system with an orbital period of 137.3 days and an eccentricity of 0.7. The primary component is a B-type subgiant star with a stellar classification of B6 IV. It has about 3.37 times the mass of the Sun, 2.3 times the Sun's radius, and radiates 244 times the luminosity of the Sun from an outer atmosphere with an effective temperature of 9,552 K. The rotation rate is moderate, with a projected rotational velocity of 100 km/s. Little is known about the companion.
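As a quick check on the quoted distance, the sketch below converts between parallax and distance using the standard relation d[pc] = 1000 / p[mas]. The figures are derived from the article's roughly 500 light-year distance, not from a catalogue, so the parallax value shown is only the implied one.

```python
LY_PER_PC = 3.26156  # light years per parsec

def parallax_to_ly(p_mas):
    """Distance in light years from a parallax in milliarcseconds."""
    return (1000.0 / p_mas) * LY_PER_PC

def ly_to_parallax(d_ly):
    """Parallax in milliarcseconds implied by a distance in light years."""
    return 1000.0 / (d_ly / LY_PER_PC)

print(ly_to_parallax(500))    # ~6.5 mas is implied by a ~500 ly distance
print(parallax_to_ly(6.52))   # ~500 ly
```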
References B-type subgiants Spectroscopic binaries Leo (constellation) Leonis, Nu Leonis, 27 086360 048883 3937 Durchmusterung objects
Nu Leonis
Astronomy
264
15,418,662
https://en.wikipedia.org/wiki/DLEU2
Deleted in lymphocytic leukemia 2 (non-protein coding) is a long non-coding RNA that in humans is encoded by the DLEU2 gene. In humans it is located on chromosome 13q14. The DLEU2 gene was originally identified as a potential tumour suppressor gene and is often deleted in patients with B-cell chronic lymphocytic leukemia. See also Long non-coding RNA References Further reading Non-coding RNA
DLEU2
Chemistry
95
33,803,593
https://en.wikipedia.org/wiki/Institut%20f%C3%BCr%20Integrierte%20Produktion%20Hannover
Institut für Integrierte Produktion Hannover (IPH), which literally translates as "Hanover institute of integrated production", is a non-profit limited company providing research and development, consulting, and training in industrial engineering. History On January 1, 1988, three German engineering professors founded IPH as a spin-off company of Leibniz University Hannover. As the non-profit company dealt with computer-integrated manufacturing, it was originally called "CIM-Fabrik Hannover" (CIM factory of Hanover). The name was later changed to IPH – Institut für Integrierte Produktion Hannover. In 1991, the company and its 26 employees moved from the inner city to the "Wissenschaftspark" in Marienwerder, in the northwest of Hanover. To create room for an increasing number of employees, the company building was extended just eight years later. The death of Professor Eckart Doege, co-founder and managing partner of IPH, in 2004 marked the end of an era. Bernd-Arno Behrens, professor of forming at Leibniz University Hannover, was appointed as his successor. Co-founders Professor Hans Kurt Toenshoff and Professor Hans-Peter Wiendahl left the company in 2007 and 2008, respectively. They were succeeded by two professors of Leibniz University Hannover: Ludger Overmeyer, professor of automation engineering, and Peter Nyhuis, professor of production systems and logistics. The change of the management board led to a strategic transformation of research topics. In addition to logistics, production automation, and process technology, XXL goods were added to the IPH portfolio as another research topic. The research engineers apply the term "XXL goods" to products such as planes, ships and wind energy plants, but also to motor parts of utility vehicles and jet engines. The company's aim is to promote research dealing with the production of these large-scale goods. Currently, IPH is the only research institute exploring this theme from a scientific point of view. Organization The company is composed of three departments focusing on logistics, production automation, and process technology. Research dealing with XXL goods is done by all departments. IPH is run by three managing partners and a managing director. As a non-profit research company, it is funded by public research funding but also by the money earned through industry consulting. Management board Bernd-Arno Behrens (professor at Leibniz University Hannover) Peter Nyhuis (professor at Leibniz University Hannover) Ludger Overmeyer (professor at Leibniz University Hannover) Georg Ullmann Advisory board Joerg Seume (professor at Leibniz University Hannover) Friedrich-Wilhelm Bach (professor at Leibniz University Hannover) Sabine Johannsen (member of the management board of NBank GmbH) Volker Bartels (director of Sennheiser electronic GmbH & Co. KG) Andreas Jaeger (director of Jaeger Gummi und Kunststoff GmbH) Kai Brueggemann (plant and site manager of Airbus Operations GmbH, Bremen) Business activities IPH offers research and development, consulting, and training in production engineering. Customers range from local small and medium-sized businesses to multinational companies. Fields of activity include: Process technology Production automation Logistics XXL goods Process technology Since 2000, IPH has been part of the special research field dealing with flashless precision forging ("Sonderforschungsbereich 489").
Funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), this consortium conducts research on the process chain of high-performance parts forged without flash. According to Doege et al., the term "flashless precision forging" is commonly used in two ways: on the one hand, it describes forging without flash; on the other hand, it is used for forged parts with a tolerance of IT 7 to IT 9. Latest research findings indicate that it is possible to forge complicated parts, such as crankshafts, without flash. In order to forge crankshafts without flash, appropriate preforming processes are necessary. At IPH, the main focus is on multi-directional forging and cross wedge rolling. IPH engineers also conduct research on cross wedge rolling within the context of process chain design for warm forging. The influence of temperature on both the process and part quality is studied for cross wedge rolling and forge rolling. Wear of forging tools is also investigated; a recent approach to reducing wear is the coating of parts with layers of DLC (diamond-like carbon). Further research efforts include hydroforming (e.g. of titanium). In this context, materials and composite materials are studied. "Tailored hybrid tubes" are another research area about to be investigated; the term describes material that combines both steel and aluminum. A new way of forging developed by IPH is hybrid forging. This technique combines both forging and joining of massive parts and sheet metal in one single operation. Another development fostered by IPH is a module for automated stud welding with tip ignition that is integrated into sheet metal working tools. This development shortens the process chain significantly; as a result, the costs of extra positioning units and the time needed for positioning are cut. In the field of sheet metal forming, efforts are made to increase the effectiveness of sheet forming machines. The key performance indicator OEE (overall equipment effectiveness) is used to determine improvements in the elimination of perturbations during the forming process.
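OEE itself is a simple product of three ratios. The following sketch shows the textbook calculation (availability times performance times quality); the figures are invented for illustration, and the sketch is not IPH's own tooling.

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall equipment effectiveness = availability * performance * quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# A forming press planned for an 8-hour shift (480 min), with 45 min of
# stoppages, an ideal cycle time of 0.5 min per part, 800 parts made,
# 780 of them within tolerance.
print(f"OEE = {oee(480, 45, 0.5, 800, 780):.1%}")  # -> about 81%
```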
Production automation In the field of production automation, IPH focuses on artificial intelligence, distributed systems, and the use of wireless communication on production sites. Since the beginning of the new millennium, IPH has been researching the use of methods of artificial intelligence in industrial engineering. The company's main focus is on the performance-oriented and cost-effective design of interlinked assembly lines through the use of data mining. Recent research centers on the positioning of cooling channels in injection molding tools, the design of pre-forming geometries for forging processes, and autonomously controlled automated guided vehicles (AGVs). Distributed systems are also a subject of research at IPH. As a result of a project funded by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), an electric tool log was developed. In another research project, funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), intelligent diamond cutting discs were designed. In this particular case, the distributed system consisted of piezoelectric sensors detecting tool vibrations, a measuring system processing and enhancing signals, and a radio module passing signals to a measuring computer. IPH also concentrates its research efforts on wireless communication and the use of this technology on production sites. To fight the production of counterfeit medications, a new method of integrating RFID tags and antennas into drug packages was developed as part of the EZ pharm project (www.ez-pharm.de). Lately, Zigbee technology has been applied to an abrasive sheet during a cutoff grinding process. Recent research in the field of production automation focuses on optical communication. To enable automated guided vehicles to detect their position, a system based on visible light is being designed (www.isi-walk.de). Also, for use in intralogistics, a new method of identifying goods is being developed. Logistics With regard to logistics, IPH engineers do research on how to design and control production networks efficiently, both ecologically and economically. For example, the project "synchronization of logistic responsiveness in production networks" revealed that structural interactions within networks have a strong influence on dynamic behavior and thus on logistic performance. Furthermore, a scientific method aiming at the economic and organizational planning and assessment of transformability in supply chains is currently being developed as part of the research project "ISI-WALK" (www.isi-walk.de). The method is supposed to help companies to identify, for example, the right point in time to initiate transformation processes. In various research projects, IPH employees investigate in-plant production logistics. In the special research field dealing with flashless precision forging ("Sonderforschungsbereich 489"), a new batch size calculation method has been developed. It allows for the consideration of bulk forming tool endurance, and thus helps forging companies to avoid additional costs. An IPH development that has become part of business life is the logistics key performance indicator system developed in a research project called LogiBEST. The KPI system applies to procurement, production, and distribution. Based on this KPI system, the Association of German Engineers (Verein Deutscher Ingenieure, VDI) developed its guideline "VDI-Richtlinie 4400". References External links Official website of research efforts on XXL goods (German only) Official website of the special research field on flashless precision forging (German only) Research institutes in Lower Saxony Engineering organizations Buildings and structures in Hanover
Institut für Integrierte Produktion Hannover
Engineering
1,852
77,834,184
https://en.wikipedia.org/wiki/Vidofludimus
Vidofludimus is an investigational new drug that is being evaluated to treat Crohn's disease and ulcerative colitis. It is a dihydroorotate dehydrogenase inhibitor. References Anti-inflammatory agents Carboxamides Biphenyls Carboxylic acids Cyclopentanes Fluorobenzene derivatives 3-Methoxyphenyl compounds
Vidofludimus
Chemistry
85
758,919
https://en.wikipedia.org/wiki/Rescue%20of%20the%20Danish%20Jews
The Danish resistance movement, with the assistance of many Danish citizens, managed to evacuate 7,220 of Denmark's 7,800 Jews, plus 686 non-Jewish spouses, by sea to nearby neutral Sweden during the Second World War. The arrest and deportation of Danish Jews was ordered by the German leader Adolf Hitler, but the efforts to save them started earlier due to the plans being leaked on September 28, 1943, by German diplomat Georg Ferdinand Duckwitz. The rescue is considered one of the largest actions of collective resistance to aggression in the countries occupied by Germany during the Second World War. As a result of the rescue, and of the following Danish intercession on behalf of the 464 Danish Jews who were captured and deported to the Theresienstadt concentration camp in the Protectorate of Bohemia and Moravia, 99% of Denmark's Jewish population survived the Holocaust. Deportation order Denmark's German occupiers began planning the deportation to Nazi concentration camps of the 7,800 or so Jews in Denmark after it became apparent that the Nazis could not expect the Danish government to support this. German diplomat Georg Ferdinand Duckwitz unsuccessfully attempted to assure safe harbor for the Danish Jews in Sweden; the Swedish government told Duckwitz it would accept the Danish Jews only if approved by the Nazis, who ignored the request for approval. On September 28, 1943, Duckwitz leaked word of the plans for the operation against Denmark's Jews to Hans Hedtoft, chairman of the Danish Social Democratic Party. Hedtoft contacted the Danish Resistance Movement and the head of the Jewish community, C. B. Henriques, who in turn alerted the acting chief rabbi, Marcus Melchior. At the early morning services on September 29, the day prior to the Rosh Hashanah services, Jews were warned by Rabbi Melchior of the planned German action and urged to go into hiding immediately and to spread the word to all their Jewish friends and relatives. The German action to deport Danish Jews prompted the Danish state church and all political parties except the pro-Nazi National Socialist Workers' Party of Denmark (NSWPD) immediately to denounce the action and to pledge solidarity with their Jewish fellow citizens. For the first time they openly opposed the occupation. At once the Danish bishops issued a hyrdebrev—a pastoral letter to all citizens. The letter was distributed to all Danish ministers, to be read out in every church on the following Sunday. The early phases of the rescue were improvised. When Danish civil servants at several levels in different ministries learned of the German plan to round up all Danish Jews, they independently pursued various measures to find the Jews and hide them. Some simply contacted friends and asked them to go through telephone books and warn those with Jewish-sounding names to go into hiding. Most Jews hid for several days or weeks, uncertain of their fate. Sweden's readiness and logistics From October 1943 the boat Gerda III of the Danish Lighthouse and Buoy Service was used to ferry Jewish refugees from German-occupied Denmark to neutral Sweden. With a group of some ten refugees on board for each trip, the vessel set out for her official lighthouse duties before detouring to the Swedish coast.
The lighthouse manager's daughter Henny Sinding Sundø and Gerda III's crew (skipper Otto Andersen, John Hansen, Gerhardt Steffensen, and Einar Tønnesen) together ferried approximately 300 Jews to safety. Although most Danish Jews were in hiding, most would likely have been caught eventually if safe passage to Sweden had not been secured. Sweden had earlier been receiving Norwegian Jews with some sort of Swedish connection, but the actions to save the Norwegians were not entirely effective due to the lack of experience dealing with the German authorities. When martial law was introduced in Denmark on August 29, the Swedish Ministry for Foreign Affairs (UD) realized that the Danish Jews were in immediate danger. In a letter dated August 31, the Swedish ambassador in Copenhagen was given clearance by the Chief Legal Officer Gösta Engzell (who had represented Sweden at the 1938 Évian Conference, held to discuss Jewish refugees fleeing the Nazi regime) to issue Swedish passports to "rescue Danish Jews and bring them here." On October 2, the Swedish government announced in an official statement that Sweden was prepared to accept all Danish Jews in Sweden. It was a message parallel to an earlier unofficial statement made to the German authorities in Norway. The Jews were smuggled and transported out of Denmark over the Øresund strait from Zealand to Sweden—a passage of varying time depending on the specific route and the weather, but averaging under an hour on the choppy winter sea. Some were transported in large fishing boats of up to 20 tons, but others were carried to freedom in rowboats or kayaks. The ketch Albatros and the lighthouse tender Gerda III were two of the ships used to smuggle Jews to Sweden. Some refugees were smuggled inside freight rail cars on the regular ferries between Denmark and Sweden, this route being suited for the very young or old who were too weak to endure a rough sea passage. Danish Resistance Movement operatives had broken into empty freight cars sealed by the Germans after inspection, helped refugees onto the cars, and then resealed the cars with forged or stolen German seals to forestall further inspection. Fishermen charged on average 1,000 Danish kroner per person for the transport, but some charged up to 50,000 kroner. The average monthly wage at the time was less than 500 kroner, and half of the rescued Jews belonged to the working class. Prices were determined by the market principles of supply and demand, as well as by the fishermen's perception of the risk. The Danish Resistance Movement took an active role in organizing the rescue and providing financing, mostly from wealthy Danes who donated large sums of money to the endeavor. In all, the rescue is estimated to have cost around 20 million kroner, about half of which was paid by Jewish families and half by donations and collections. 2 October Swedish radio broadcast The Danish physicist Niels Bohr, whose mother was Jewish, made a determined stand for his fellow countrymen in a personal appeal to the Swedish king and government ministers. King Gustav V granted him an audience after a persuasive call from Greta Garbo, who knew Bohr. He was spirited off to Sweden, whose government arranged immediate transport for him to the United States to work on the then top-secret Manhattan Project. When Bohr arrived on Swedish soil, government representatives told him he had to board an aircraft immediately for the United States. Bohr refused.
He told the officials—and eventually the king—that until Sweden announced over its airwaves and through its press that its borders would be open to receive the Danish Jews, he was not going anywhere. Bohr wrote of these events himself. As related by the historian Richard Rhodes, on 30 September Bohr persuaded Gustav to make public Sweden's willingness to provide asylum, and on 2 October Swedish radio broadcast that Sweden was ready to receive the Jewish refugees. Rhodes and other historians interpret Bohr's actions in Sweden as being a necessary precursor without which mass rescue could not have occurred. According to Paul A. Levine, however—who does not mention the Bohr factor at all—the Swedish Ministry for Foreign Affairs acted on clear instructions given much earlier by Prime Minister Per Albin Hansson and Foreign Minister Christian Günther, following a policy already established in 1942. Rescues During the first days of the rescue action, Jews moved into the many fishing harbors on the Danish coast to await passage, but officers of the Gestapo became suspicious of activity around harbors (and on the night of October 6, about 80 Jews were caught hiding in the loft of the church at Gilleleje, their hiding place having been betrayed by a Danish girl who was in love with a German soldier). Subsequent rescues had to take place from isolated points along the coast. While waiting their turn, the Jews took refuge in the woods and in cottages away from the coast, out of sight of the Gestapo. Danish harbor police and civil police often cooperated with the rescue effort. During the early stages, the Gestapo was undermanned, and the German army and navy were called in to reinforce the Gestapo in its effort to prevent transportation taking place. The local Germans in command, for their own political calculations and through their own inactivity, may have actually facilitated the escape. Partial list of Danish rescuers and resisters While only a few Danes, mostly non-resistance members who happened to be personally known to the Jews they helped, made the Yad Vashem list, several hundred, if not a few thousand, ordinary Danes took part in the rescue efforts. They most often worked within small, spontaneously organized groups and "under cover". Known only by their fictitious names, they could generally not be identified by those they helped and thus did not meet the Yad Vashem criteria for the "Righteous Among the Nations" honor. Below is a partial list of some of the more significant rescuers, both within and outside the formal resistance movement, whose names have surfaced over the years (Werner, Emmy E., A Conspiracy of Decency, Westview Press, 2002): Fanny Arnskov of the Danish chapter of the Women's International League for Peace and Freedom Aage and Gerda Bertelsen, rescuers, Gerda was a leader of the Lyngby Group Ellen Christensen, resistance fighter, rescuer, nurse Knud Dyby Richard and Vibeke Ege Elsinore Sewing Club, which sprang up to covertly ferry Jews to safety Jørgen Gersfelt Gunnar Gregersen Hilbert Hansen Steffen Hansen Jørgen Jenk Ole Lippmann Ejler Haubirk Ole Helwig Leif B. Hendil, who established the Danish-Swedish Refugee Service Erik Husfeldt, leader in Frode Jakobsen's Ring, member of the Danish Freedom Council Signe (Mogensen) Jansen Robert Jensen Erling Kiær Elsebeth Kieler Jørgen Kieler Karl Henrik Køster Gurli Larsen Thormod Larsen Jens Lillelund of the Holger Danske resistance group Ebba Lund Steffen Lund Ellen W.
Nielsen Svend Otto Nielsen ("John") Robert Petersen Henry Rasmussen Paul Kristian Brandt Rehberg Børge Rønne Find Sandgren Ole Secher Svenn Seehusen Laust Sørensen Erik Stærmose Mogens Staffeldt Henny Sinding Sundø Henry Thomsen "Righteous among the nations" At its own initial insistence, the Danish resistance movement wished to be honored by Yad Vashem in Israel only as a collective effort among the "Righteous Among the Nations"; only a handful of its members are individually named for that honor. Instead, the rescue of the Jews of Denmark is represented at Yad Vashem by a tree planted in honor of the King and the Danish resistance movement—and by an authentic fishing boat from the Danish village of Gilleleje. Similarly, the US Holocaust Museum in Washington, D.C., has on permanent exhibit an authentic rescue boat used in several crossings in the rescue of some 1400 Jews. Georg Ferdinand Duckwitz, the German official who leaked word of the round-up, is also on the Yad Vashem list. For an alternative interpretation of Duckwitz's role in the rescue of the Danish Jews, see: Vilhjálmsson, Vilhjálmur Ö. "Ich weiss, was ich zu tun habe" RAMBAM 15:2006 Arrests and deportation to Theresienstadt In Copenhagen, the deportation order was carried out on the Jewish New Year, the night of October 1–2, when the Germans assumed all Jews would be gathered at home. The roundup was organized by the SS, who used two police battalions and about 50 Danish volunteer members of the Waffen SS chosen for their familiarity with Copenhagen and northern Zealand. The SS organized themselves in five-man teams, each with a Dane, a vehicle, and a list of addresses to check. Most teams found no one, but one team found four Jews at the fifth address checked. A bribe of 15,000 kroner was rejected, and the cash was destroyed. Of the 580 Danish Jews who failed to escape to Sweden, 464 were arrested. They were allowed to bring two blankets, food for three or four days, and a small suitcase. They were transported to the harbour, Langelinie, where a couple of large ships awaited them. One of the Danish Waffen-SS members believed the Jews were being sent to Danzig. The Danish Jews were sent ultimately to the Theresienstadt concentration camp in German-occupied Czechoslovakia. After these Jews' deportation, leading Danish civil servants persuaded the Germans to accept packages of food and medicine for the prisoners; furthermore, Denmark persuaded the Germans not to deport the Danish Jews to extermination camps. This was achieved by Danish political pressure, using the Danish Red Cross to frequently monitor the condition of the Danish Jews at Theresienstadt. A total of 51 Danish Jews—mostly elderly—died of disease at Theresienstadt. Rescue by Folke Bernadotte In April 1945, as the war drew to a close, 425 surviving Danish Jews (a few having been born in the camp) were among the several thousand Jews rescued by an operation led by Folke Bernadotte of the Swedish Red Cross, who organized the transporting of interned Norwegians, Danes and western European inmates from German concentration camps to hospitals in Sweden. Around 15,000 people were taken to safety in the White Buses of the Bernadotte expedition. Aftermath About 116 Danish Jews remained hidden in Denmark until the war's end, a few died of accidents or committed suicide, and a handful had special permission to stay. The casualties among Danish Jews during the Holocaust were among the lowest of the occupied countries of Europe.
Yad Vashem records only 102 Jews from Denmark who were murdered in the Shoah. The unsuccessful German deportation attempt and the actions to save the Jews were important steps in linking the resistance movement to broader anti-Nazi sentiments in Denmark. In many ways, October 1943 and the rescuing of the Jews marked a change in most people's perception of the war and the occupation, thereby giving a "subjective-psychological" foundation for the legend. A few days after the roundup, a small news item in the New York Daily News reported the story about the wearing of the Star of David. Later, the story gained its popularity in Leon Uris's novel Exodus and in its movie adaptation. The political theorist Hannah Arendt also mentions it during the discussion of Denmark in her book of reportage, Eichmann in Jerusalem. Explanations Different explanations have been advanced for the success of efforts to protect the Danish Jewish population, in the light of less success at similar operations elsewhere in Nazi-occupied Europe (Leni Yahil, The Rescue of Danish Jewry, Test of a Democracy, 1966): The German Reich plenipotentiary of Denmark, Werner Best, despite instigating the roundup via a telegram he sent to Hitler on September 8, 1943, did not act to enforce it. He was aware of the efforts by Duckwitz to have the roundup cancelled, and knew about the potential escape of the Jews to Sweden, but he turned a blind eye to it, as did the Wehrmacht (which was guarding the Danish coast), in order to preserve Germany's relationship with Denmark. According to the account of a survivor, Best himself had warned the survivor to escape. Logistically, the operation was relatively simple. Denmark's Jewish population was small, both in relative and absolute terms, and most of Denmark's Jews lived in or near Copenhagen, only a short sea voyage from neutral Sweden. Although hazardous, the boat ride was easier to conceal than a comparable land journey. Since the mid-19th century, a particular brand of romantic nationalism had evolved in Denmark. The traits of this nationalism included emphasis on the importance of "smallness", close-knit communities, and traditions—this nationalism being largely a response to Denmark's failure to assert itself as a great power and its losses in the Gunboat War and the Second War of Schleswig. Some historians, such as Leni Yahil (The Rescue of Danish Jewry: Test of a Democracy, 1969), believe that the Danish form of non-aggressive nationalism, influenced by Danish spiritual leader N. F. S. Grundtvig, encouraged the Danes to identify with the plight of the Jews, even though small-scale anti-Semitism had been present in Denmark long before the German invasion. Denmark's Jewish population had long been thoroughly integrated into Danish society, and some members of the small Jewish community had risen to prominence. Consequently, most Danes perceived the Nazis' action against Denmark's Jews as an affront to all Danes, and rallied to the protection of their country's citizens. The deportation of Jews in Denmark came one year after the deportations of Jews in Norway. That created an outrage in all of Scandinavia, alerted the Danish Jews, and pushed the Swedish government to declare that it would receive all Jews who managed to escape the Nazis.
Popular culture Films The Only Way, a 1970 film about the escape of Danish Jews to Sweden during World War II. A Day in October, a 1991 film about the escape of Danish Jews to Sweden during World War II. Miracle at Midnight, a 1998 American TV movie about the escape of Danish Jews to Sweden during World War II. The Danish Solution: The Rescue of the Jews in Denmark, a 2003 documentary about the escape of Danish Jews to Sweden during World War II. Across the Waters, a 2016 film based on the true story of Niels Børge Lund Ferdinandsen, who rescued Danish Jews during World War II. Books A Night of Watching (1967), a work of historical fiction by Elliot Arnold about the escape of Danish Jews to Sweden during World War II. Number the Stars (1989), a work of historical fiction by Lois Lowry about the escape of Danish Jews to Sweden during World War II. Harboring Hope: The True Story of How Henny Sinding Helped Denmark's Jews Escape the Nazis (2023), a work of historical fiction by Susan Hood (Harper Collins). See also Denmark in World War II Rescue of the Bulgarian Jews History of Jews Nazi war crimes Hedvig Delbo, Gestapo agent Notes References Bak, Sofie Lene: Nothing to Speak Of: Wartime Experiences of the Danish Jews. University of Chicago (Museum Tusculanum Press), 2010. Bertelsen, Aage. October '43. New York: Putnam, 1954. Buckser, Andrew. "Rescue and Cultural Context During the Holocaust: Grundtvigian Nationalism and the Rescue of the Danish Jews". Shofar 19(2), 2001. Gulmann, Søren & Karina Søby Madsen. The Elsinore Sewing Club. Forlaget fantastiske fortællinger, 2018. Herbert, Ulrich: Best. Biographische Studien über Radikalismus, Weltanschauung und Vernunft 1903–1989. Habilitationsschrift. Dietz, Bonn, 1996. Kieler, Jørgen. Hvorfor gjorde vi det [Why did we do it?]. Copenhagen, Denmark: Gyldendal, 1993. Levine, Paul A. (1996). From Indifference to Activism: Swedish Diplomacy and the Holocaust 1938–1944. Uppsala. Pundik, Herbert: Die Flucht der dänischen Juden 1943 nach Schweden. Husum, 1995. Vilhjálmsson, Vilhjálmur Örn (2006). "Ich weiss, was ich zu tun habe". Rambam 15:2006 (English abstract at the end of the article). Vilhjálmsson, Vilhjálmur Örn and Blüdnikow, Bent. "Rescue, Expulsion, and Collaboration: Denmark's Difficulties with Its World War II Past". Jewish Political Studies Review 18:3–4 (Fall 2006). External links United States Holocaust Memorial Museum: The Rescue of the Jews of Denmark "The Fate of the Danish Jews" Denmark and the Holocaust - an article in Yad Vashem resource center, by Carol Rittner Rescue, Expulsion, and Collaboration: Denmark's Difficulties with its World War II Past (by Vilhjálmur Örn Vilhjálmsson and Bent Blüdnikow) The tip-off from a Nazi that saved my grandparents 1943 in Denmark Danish resistance movement Danish Righteous Among the Nations Evacuations during World War II Jewish Danish history Jewish Swedish history Rescue of Jews during the Holocaust Sweden in World War II The Holocaust in Denmark The Holocaust and Sweden Opposition to antisemitism in Denmark
Rescue of the Danish Jews
Biology
4,288
49,391,653
https://en.wikipedia.org/wiki/Craterellus%20calicornucopioides
Craterellus calicornucopioides is an edible fungus in the family Cantharellaceae. Described by David Arora and Jonathan L. Frank in 2015, it is the North American counterpart of the similar European species Craterellus cornucopioides. Molecular phylogenetics has shown, however, that they are distinct species. C. calicornucopioides associates with, and fruits in the vicinity of, oaks, manzanita, madrone, and Vaccinium. See also Craterellus atrocinereus References External links Edible fungi Cantharellales Fungi of North America Fungi described in 2015 Fungus species Mycorrhizal associates of oaks Californian cuisine
Craterellus calicornucopioides
Biology
146
50,263
https://en.wikipedia.org/wiki/Domain%20of%20a%20function
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by dom(f) or dom f, where f is the function. In layman's terms, the domain of a function can generally be thought of as "what x can be". More precisely, given a function f: X → Y, the domain of f is X. In modern mathematical language, the domain is part of the definition of a function rather than a property of it. In the special case that X and Y are both sets of real numbers, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis. For a function f: X → Y, the set Y is called the codomain: the set to which all outputs must belong. The set of specific outputs the function assigns to elements of X is called its range or image, and it is a subset of Y. Any function can be restricted to a subset of its domain. The restriction of f to A, where A ⊆ X, is written as f|A. Natural domain If a real function f is given by a formula, it may be not defined for some values of the variable. In this case, it is a partial function, and the set of real numbers on which the formula can be evaluated to a real number is called the natural domain or domain of definition of f. In many contexts, a partial function is called simply a function, and its natural domain is called simply its domain. Examples The function f defined by f(x) = 1/x cannot be evaluated at 0. Therefore, the natural domain of f is the set of real numbers excluding 0, which can be denoted by R \ {0} or (−∞, 0) ∪ (0, ∞). The piecewise function f defined by f(x) = 1/x for x ≠ 0 and f(0) = 0 has as its natural domain the set of real numbers. The square root function f(x) = √x has as its natural domain the set of non-negative real numbers, which can be denoted by R≥0, the interval [0, ∞), or {x ∈ R : x ≥ 0}. The tangent function, denoted tan, has as its natural domain the set of all real numbers which are not of the form π/2 + kπ for some integer k, which can be written as {x ∈ R : x ≠ π/2 + kπ, k ∈ Z}. Other uses The term domain is also commonly used in a different sense in mathematical analysis: a domain is a non-empty connected open set in a topological space. In particular, in real and complex analysis, a domain is a non-empty connected open subset of the real coordinate space R^n or the complex coordinate space C^n. Sometimes such a domain is used as the domain of a function, although functions may be defined on more general sets. The two concepts are sometimes conflated as in, for example, the study of partial differential equations: in that case, a domain is the open connected subset of R^n where a problem is posed, making it both an analysis-style domain and also the domain of the unknown function(s) sought. Set theoretical notions For example, it is sometimes convenient in set theory to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a domain, although some authors still use it informally after introducing a function in the form f: X → Y.
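To make the notion of natural domain concrete, here is a small Python sketch, purely illustrative and not part of any standard library, that tests whether a formula can be evaluated to a real number at a given point, echoing the examples above.

```python
import math

def in_natural_domain(f, x):
    """True if the formula f can be evaluated to a real number at x."""
    try:
        y = f(x)
        return isinstance(y, float) and math.isfinite(y)
    except (ValueError, ZeroDivisionError):
        return False

reciprocal = lambda x: 1.0 / x   # natural domain: all reals except 0
square_root = math.sqrt          # natural domain: x >= 0

print(in_natural_domain(reciprocal, 0.0))    # False
print(in_natural_domain(square_root, -1.0))  # False
print(in_natural_domain(square_root, 2.0))   # True
```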
See also Argument of a function Attribute domain Bijection, injection and surjection Codomain Domain decomposition Effective domain Endofunction Image (mathematics) Lipschitz domain Naive set theory Range of a function Support (mathematics) Notes References Functions and mappings Basic concepts in set theory
Domain of a function
Mathematics
694
11,403,891
https://en.wikipedia.org/wiki/Tony%20Wright%20%28sleep%20deprivation%29
Tony Wright is an author and consciousness researcher from Penzance, Cornwall, England. He claims to hold the world record for sleep deprivation. Sleep deprivation record Wright claimed the world sleep deprivation record in May 2007 with 266 continuous hours of sleeplessness. He based his record-breaking attempt on the belief that Randy Gardner was officially recognized by the Guinness World Records as holding the deprivation record of 264 hours. However, the Guinness record was actually for 11½ days, or 276 hours, and was set by Toimi Soini in Hamina, Finland, from February 5 to the 15th, 1964, and Wright did not in fact break the Guinness record. However, Wright's friend Graham Gynn asserts that the Gardner record was the accepted record in the sleep research community. Regardless, Wright's record claim was not credited by the Guinness Book of Records, since after 1990 it no longer accepted records related to sleep deprivation due to the health risks. Theories Wright claimed that his deliberate insomnia was made possible in part by his biochemically complex diet of raw foods (carrot juice, bananas, avocados, pineapple and nuts). He also asserted that his motivation for breaking the world sleep deprivation record was neither fame nor fortune, but that his intention was to promote his radical theories of human neurological degeneration that were proposed in his self-published book. References External links "How man pushed sleepless limits". BBC, May 25, 2007 Tony Wright website BBC Video nation - "Sleepless in Penzance" A full synopsis of the book 'Left In The Dark'. . Retrieved on 2022-03-22. The full, free, edition of the book 'Left In The Dark'. . Retrieved on 2022-03-22. Conscious TV Interview www.brainwaving.com : An article entitled 'Consciousness and the Direction of Structure', by Tony Wright Audio interview in July 2009 with Bryan Crump on Radio New Zealand National (mp3) Der Spiegel Article - November 2009 (German) Der Spiegel Interview - November 2009 (German) Interview with Trevor Smith for disinfo.com - April 2012 Sleeplessness and sleep deprivation People from Penzance Year of birth missing (living people) Living people
Tony Wright (sleep deprivation)
Biology
457
2,545,866
https://en.wikipedia.org/wiki/Rolls-Royce%20Crecy
The Rolls-Royce Crecy was a British experimental two-stroke, 90-degree, V12, liquid-cooled aero-engine of 1,593.4 cu in (26.11 L) capacity, featuring sleeve valves and direct petrol injection. Initially intended for a high-speed "sprint" interceptor fighter, the Crecy was later seen as an economical high-altitude long-range powerplant. Developed between 1941 and 1946, it was among the most advanced two-stroke aero-engines ever built. The engine never reached flight trials and the project was cancelled in December 1945, overtaken by the progress of jet engine development. The engine was named after the Battle of Crécy, Rolls-Royce having chosen battles as the theme for naming its two-stroke aero engines. Rolls-Royce did not develop any other engines of this type. Design and development Origins Sir Henry Tizard, Chairman of the Aeronautical Research Committee (ARC), was a proponent of a high-powered "sprint" engine for fighter aircraft and had foreseen the need for such a powerplant as early as 1935 with the threat of German air power looming. It has been suggested that Tizard influenced his personal friend Harry Ricardo to develop what eventually became the Crecy. The idea was officially discussed for the first time at an engine sub-committee meeting in December 1935. Previous experience gained between 1927 and 1930 using two converted Rolls-Royce Kestrel engines through an Air Ministry contract had proven the worth of further research into a two-stroke sleeve-valved design. Both these engines had initially been converted to diesel sleeve-valved operation; a lower power output than the original design was noted, along with increased mechanical failures, although one converted Kestrel was subsequently used successfully by Captain George Eyston in a land-speed record car named Speed of the Wind. The second engine was further converted to petrol injection, which then gave a marked power increase over the standard Kestrel. Single-cylinder development began in 1937 under project engineer Harry Wood using a test unit designed by Ricardo. The Crecy was originally conceived as a compression ignition engine, and Rolls-Royce had previously converted a Kestrel engine to run on diesel. By the time they started development of the Crecy itself, in conjunction with the Ricardo company, the decision had been taken by the Air Ministry to revert to a more conventional spark-ignition layout, although still retaining fuel injection. Technical description The Crecy has been described as one of the most advanced two-stroke aero engines ever built. The first complete V12 engine was built in 1941, designed by a team led by Harry Wood with Eddie Gass as the Chief Designer. Bore was 5.1 in (129.5 mm), stroke 6.5 in (165.1 mm), compression ratio 7:1 and weight 1,900 lb (862 kg). The firing angle was 30 degrees BTDC, and 15 lbf/in² (100 kPa) supercharger boost was typical. In bench-testing it produced , but there were problems with vibration and the cooling of the pistons and sleeves. The thrust produced by the exceptionally loud two-stroke exhaust was estimated as being equivalent to a 30% increase in power at the propeller on top of the rated output of the engine. The power of the engine was interesting in its own right, but the additional exhaust thrust at high speed could have made it a useful stopgap between engines such as the Rolls-Royce Merlin and anticipated jet engines.
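As a quick sanity check on the quoted capacity, the swept volume follows directly from the bore, stroke, and cylinder count given above. A minimal Python computation (the dimensions are those from the text; the conversion factor is the standard 16.387064 cm³ per cubic inch):

```python
import math

BORE_IN = 5.1     # bore in inches, as quoted above
STROKE_IN = 6.5   # stroke in inches
CYLINDERS = 12    # 90-degree V12

# Swept volume = cylinders * (pi/4) * bore^2 * stroke
swept_cu_in = CYLINDERS * (math.pi / 4) * BORE_IN ** 2 * STROKE_IN
swept_litres = swept_cu_in * 16.387064 / 1000  # cubic inches to litres

print(f"{swept_cu_in:.1f} cu in")  # ~1593.4 cu in
print(f"{swept_litres:.2f} L")     # ~26.11 L
```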
Serial numbers were even, because Rolls-Royce practice was to have even numbers for engines rotating clockwise when viewed from the front. Sleeve valves The reciprocating sleeve valves were open-ended, rather than sealing in a junk head. The open end uncovered the exhaust ports high in the cylinder wall at the bottom of the sleeves' stroke, leaving the ports cut into the sleeve to handle the incoming charge only. The sleeves had a stroke of 30% of the piston travel at 1.950 in (49.5 mm) and operated 15 degrees in advance of the crankshaft. The Crecy sleeve valves were of similar construction but differed in their operation compared to the rotary sleeve valve design that was pioneered by Roy Fedden, and used successfully for the first time in an aircraft engine, the Bristol Perseus, in 1932. Supercharging and exhaust turbine Supercharging was used to force the charge into the cylinder, rather than crankcase compression, as on most two-stroke engines. This allowed the use of a conventional lubrication system, instead of the total-loss type found in many two-stroke engines. Stratified charge was used: the fuel was injected into a bulb-like extension of the combustion chamber where the twin spark plugs ignited the rich mixture. Operable air–fuel ratios from 15:1 to 23:1 were available to govern the power produced between maximum and 60%. The rich mixture maintained near the spark plugs reduced detonation, allowing higher compression ratios or supercharger boost. Supercharger throttling was used as well to achieve idling. The supercharger throttles were novel vortex types, varying the effective angle of attack of the impeller blades from 60 to 30 degrees. This reduced the power required to drive the supercharger when throttled, and hence fuel consumption at cruising power. Later testing involved the use of an exhaust turbine which was a half-scale version of that used in the Whittle W.1 turbojet, the first British jet engine to fly. Unlike a conventional turbocharger, the turbine was coupled to the engine's accessory driveshaft and acted as a power recovery device. It was thought that using the turbine would lower fuel consumption, allowing the engine to be used in larger transport aircraft. This was confirmed during testing, but failures due to severe overheating and drive shaft fractures were experienced. Test summary table The following table summarises the test running programme, hours run, and highlights some of the failures experienced. Cancellation The progress of jet engine development overtook that of the Crecy and replaced the need for this engine. As a result, work on the project ceased in December 1945, at which point only six complete examples had been built, although an additional eight V-twins were also built during the project. Crecy s/n 10 achieved on 21 December 1944, which, after adjustment for the inclusion of an exhaust turbine, would have equated to . Subsequent single-cylinder tests carried out on the Ricardo E65 engine achieved the equivalent of for the complete engine. By June 1945 a total of 1,060 hours had been run on the V12 engines, with a further 8,600 hours of testing on the V-twins. The fate of the six Crecy engines remains unknown. The Crecy proved a unique exercise and Rolls-Royce did not develop any other two-stroke aero engines, the whole concept of advanced piston engines at that time being overtaken by the advent of the practical jet engine.
Applications (projected) In the summer of 1941, Supermarine Spitfire Mk II P7674 was delivered to Hucknall and fitted with a Crecy mock-up to enable cowling drawings and system details to be designed. It was planned for the first production Spitfire Mk III to be delivered to Hucknall in early 1942 for fitting of an airworthy Crecy, but this never took place. A Royal Aircraft Establishment report (No. E.3932) of March 1942 estimated the performance of the Spitfire fitted with a Crecy engine and compared this to a Griffon 61-powered variant. The report stated that the Crecy's maximum power output would be too much for the Spitfire airframe, but that a derated version would have considerable performance gains over the Griffon-powered fighter. Studies on the de Havilland Mosquito likewise showed that installing the Crecy raised complex problems. In 1942 Rolls-Royce Hucknall received a North American P-51 Mustang for engine installation trials. This prompted a series of studies for a Crecy version, and the Mustang turned out to be a more suitable mount than the Spitfire. However, these studies were not taken further. As the possibility of a flight-worthy engine approached, on 28 March 1943 Hawker Henley L3385 was delivered to Hucknall for fitting with a Crecy. However, the engine never became available and the aircraft remained at Hucknall until it was scrapped on 11 September 1945. From 1943 a number of postwar transport projects were considered, taking advantage of the Crecy's unique characteristics for land, sea and air applications. None got further than the drawing-board. Specifications See also References Notes Citations Bibliography Nahum, A., Foster-Pegg, R.W., Birch, D. The Rolls-Royce Crecy, Rolls-Royce Heritage Trust. Derby, England. 1994 Gunston, Bill. World Encyclopedia of Aero Engines. Cambridge, England. Patrick Stephens Limited, 1989. Hiett, G.F., Robson, J.V.B. A High-Power Two-Cycle Sleeve-Valve Engine for Aircraft: A Description of the Development of the Two-Cycle Petrol-Injection Research Units Built and Tested in the Laboratory of Messrs Ricardo & Co. Ltd. Aircraft Engineering and Aerospace Technology, 1950, Volume 22, Issue 1, pp. 21–23. Lumsden, Alec. British Piston Engines and their Aircraft. Marlborough, Wiltshire: Airlife Publishing, 2003. Rubbra, A.A. Rolls-Royce Piston Aero Engines - a designer remembers: Historical Series no 16: Rolls-Royce Heritage Trust, 1990. Crecy Turbo-compound engines Two-stroke aircraft piston engines Sleeve valve engines 1940s aircraft piston engines V12 aircraft engines
Rolls-Royce Crecy
Technology
1,992
7,363,669
https://en.wikipedia.org/wiki/Fischer%20glycosidation
Fischer glycosidation (or Fischer glycosylation) refers to the formation of a glycoside by the reaction of an aldose or ketose with an alcohol in the presence of an acid catalyst. The reaction is named after the German chemist Emil Fischer, winner of the 1902 Nobel Prize in Chemistry, who developed this method between 1893 and 1895. Commonly, the reaction is performed using a solution or suspension of the carbohydrate in the alcohol as the solvent. The carbohydrate is usually completely unprotected. The Fischer glycosidation reaction is an equilibrium process and can lead to a mixture of ring-size isomers and anomers, plus, in some cases, small amounts of acyclic forms. With hexoses, short reaction times usually lead to furanose ring forms, and longer reaction times lead to pyranose forms. With long reaction times the most thermodynamically stable product will result, which, owing to the anomeric effect, is usually the alpha anomer. See also Fischer–Speier esterification - a more general reaction where an alcohol and carboxylic acid are coupled to form an ester Helferich method - a glycosidation carried out with phenol References Carbohydrate chemistry Glycosides Substitution reactions Organic reactions Name reactions Emil Fischer
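Schematically, the acid-catalysed equilibrium described above can be written as follows; this is a generic sketch with the alcohol abbreviated as ROH, not a notation taken from the source:

```latex
\mathrm{sugar{-}OH} + \mathrm{ROH}
  \;\underset{\text{acid catalyst}}{\rightleftharpoons}\;
  \mathrm{sugar{-}OR} + \mathrm{H_2O}
```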
Fischer glycosidation
Chemistry
292
685,032
https://en.wikipedia.org/wiki/Security-evaluated%20operating%20system
In computing, security-evaluated operating systems have achieved certification from an external security-auditing organization; the most popular evaluations are Common Criteria (CC) and FIPS 140-2. Oracle Solaris Trusted Solaris 8 was a security-focused version of the Solaris Unix operating system. Aimed primarily at the government computing sector, Trusted Solaris adds detailed auditing of all tasks, pluggable authentication, mandatory access control, additional physical authentication devices, and fine-grained access control (FGAC). Versions of Trusted Solaris through version 8 are Common Criteria certified. Trusted Solaris Version 8 received the EAL 4 certification level augmented by a number of protection profiles. BAE Systems' STOP BAE Systems' STOP version 6.0.E received an EAL4+ certification in April 2004 and the 6.1.E version received an EAL5+ certification in March 2005. STOP version 6.4 U4 received an EAL5+ certification in July 2008. Versions of STOP prior to STOP 6 have held B3 certifications under TCSEC. While STOP 6 is binary compatible with Linux, it does not derive from the Linux kernel. Red Hat Enterprise Linux Red Hat Enterprise Linux Version 7.1 achieved EAL4+ in October 2016. Red Hat Enterprise Linux Version 6.2 on the 32-bit x86 architecture achieved EAL4+ in December 2014. Red Hat Enterprise Linux Version 6.2 with KVM Virtualization for x86 Architectures achieved EAL4+ in October 2012. Red Hat Enterprise Linux 5 achieved EAL4+ in June 2007. Novell SUSE Linux Enterprise Server Novell's SUSE Linux Enterprise Server 15 was certified for IBM Z, Arm and x86-64 at CAPP/EAL4+ in August 2021. Novell's SUSE Linux Enterprise Server 9 running on an IBM eServer was certified at CAPP/EAL4+ in February 2005 (see news release at heise.de). Microsoft Windows The following versions of Microsoft Windows have received EAL 4 Augmented ALC_FLR.3 certification: Windows 2008 Server (64-bit), Enterprise (64-bit) and Datacenter, as well as Windows Vista Enterprise (both 32-bit and 64-bit), attained EAL 4 Augmented (colloquially referred to as EAL 4+) ALC_FLR.3 status in 2009. Windows 2000 Server, Advanced Server, and Professional, each with Service Pack 3 and Q326886 Hotfix, operating on the x86 platform were certified as CAPP/EAL 4 Augmented ALC_FLR.3 in October 2002. (This includes standard configurations as Domain Controller, Server in a Domain, Stand-alone Server, Workstation in a Domain, Stand-alone Workstation.) Windows XP Professional and Embedded editions, with Service Pack 2, and Windows Server 2003 Standard and Enterprise editions (32-bit and 64-bit), with Service Pack 1, were all certified in December 2005. Mac OS X Apple's Mac OS X and Mac OS X Server running 10.3.6, both with the Common Criteria Tools Package installed, were certified at CAPP/EAL3 in January 2005. Apple's Mac OS X and Mac OS X Server running version 10.4.6 have not been fully evaluated; however, the Common Criteria Tools package is available. GEMSOS The Gemini Multiprocessing Secure Operating System is a TCSEC A1 system that runs on x86 processor type COTS hardware. OpenVMS and SEVMS The SEVMS enhancement to VMS was a TCSEC B1/B3 system formerly of Digital Equipment Corporation (DEC). A standard OpenVMS installation is rated as TCSEC C2. Green Hills INTEGRITY-178B Green Hills Software's INTEGRITY-178B real-time operating system was certified at Common Criteria EAL6+ in September 2008, running on an embedded PowerPC processor on a Compact PCI card.
Unisys MCP The Unisys MCP operating system includes an implementation of the DoD Orange Book C2 specification, the controlled access protection sub-level of discretionary protection. MCP/AS obtained the C2 rating in August 1987. Unisys OS 2200 The Unisys OS 2200 operating system includes an implementation of the DoD Orange Book B1 (Labeled Security Protection) level specification. OS 2200 first obtained a successful B1 evaluation in September 1989. Unisys maintained that evaluation until 1994 through the National Computer Security Center Rating Maintenance Phase (RAMP) of the Trusted Product Evaluation Program. See also Comparison of operating systems Security-focused operating system Trusted operating system Notes External links The common criteria portal's products list has an "Operating Systems" category containing CC certification results References Operating system security Computer security procedures
Security-evaluated operating system
Engineering
980
10,684,424
https://en.wikipedia.org/wiki/Watertown%20Arsenal
The Watertown Arsenal was a major American arsenal located on the northern shore of the Charles River in Watertown, Massachusetts. The site is now registered on the ASCE's List of Historic Civil Engineering Landmarks and on the US National Register of Historic Places, and it is home to a park, restaurants, and mixed-use office space, and formerly served as the national headquarters for athenahealth. History The arsenal was established in 1816 by the United States Army for the receipt, storage, and issuance of ordnance. In this role, it replaced the earlier Charlestown Arsenal. The arsenal's earliest plan incorporated 12 buildings aligned along a north–south axis overlooking the river. Alexander Parris, later designer of Quincy Market, was architect. Buildings included a military store and arsenal, as well as shops and housing for officers and men. All were made of brick with slate roofs in the Federal style, and a high wall enclosed the compound. By 1819 all buildings were completed and occupied. The arsenal's site, duties, and buildings grew gradually until the American Civil War, enlarging beyond the original quadrangle. During the war it greatly expanded to produce field and coastal gun carriages, and the war's impetus led to the quick construction of a large machine shop and smith shop built as contemporary factories, as well as a number of smaller buildings. During the Civil War, a new commander's quarters was commissioned by then-Capt. Thomas J. Rodman, inventor of the Rodman gun. The lavish quarters would ultimately become one of the largest commander's quarters on any US military installation. This mansion is now on the National Register of Historic Places. The expense ($63,478.65) was considered wasteful and excessive and drew a stern rebuke from Congress, which then promoted Rodman to brigadier general and sent him to command Rock Island Arsenal on the frontier in Illinois, where he built an even larger commander's quarters. Activities and new construction at the Watertown Arsenal continued to gradually expand until the early 1890s. Activities changed decisively in 1892 when Congress authorized modernization to gun carriage manufacturing. At this point the arsenal became a manufacturing complex rather than a storage depot. A number of major buildings were constructed, which over time began to reflect typical industrial facilities rather than the earlier arsenal styles. In 1897 additional land was purchased, and a hospital was built. Scientific management, as designed by arsenal commander Charles Brewster Wheeler, was implemented between 1908 and 1915. The War Department considered it successful in saving money over the alternatives, but it was so hated by the workforce that Congress eventually overturned its use. During World War I the arsenal nearly tripled in size. Building #311 was then reported to be one of the largest steel-frame structures in the United States, sized to accommodate both very large gun carriages and the equipment used to construct them. Railroad tracks ran throughout the arsenal complex. World War II brought additional land with existing industrial buildings, as the arsenal produced steel artillery pieces. In 1959–1960, a research nuclear reactor (the Horace Hardy Lester Reactor) was constructed on site for materials research programs, and operated there until 1970.
In 1968 the Army ceased operations at the arsenal; part of the site was sold to the Watertown Redevelopment Authority, while the remainder was converted to the United States Army Materials and Mechanics Research Center, renamed the United States Materials Technology Laboratory in 1985. In 1995 all Army activity ceased and the remainder of the site was converted to civilian use. The arsenal site was formerly included on the US EPA's National Priorities List of highly contaminated sites, more widely known as Superfund. The site was removed from the NPL in 2006. See also List of Historic Civil Engineering Landmarks National Register of Historic Places listings in Middlesex County, Massachusetts List of military installations in Massachusetts References Bibliography First published in 1960 by Harvard University Press; republished in 1985 by Princeton University Press, with a new foreword by Merritt Roe Smith. Earls, Alan R. (2007). Watertown Arsenal (Images of America). Arcadia Publishing, Charleston, SC, United States. External links Official Website of the former Watertown Arsenal Commander's Quarters Historic American Engineering Record documentation, filed under Watertown, Middlesex County, MA: Industrial buildings and structures on the National Register of Historic Places in Massachusetts Historic districts on the National Register of Historic Places in Massachusetts Buildings and structures in Watertown, Massachusetts Armories on the National Register of Historic Places in Massachusetts Historic American Engineering Record in Massachusetts Historic Civil Engineering Landmarks Massachusetts in the American Civil War United States Army arsenals during World War II Superfund sites in Massachusetts National Register of Historic Places in Middlesex County, Massachusetts 1816 establishments in Massachusetts Installations of the United States Army in Massachusetts Former installations of the United States Army
Watertown Arsenal
Engineering
961
77,687,680
https://en.wikipedia.org/wiki/Ammonium%20hexafluoroferrate
Ammonium hexafluoroferrate is an inorganic chemical compound with the chemical formula (NH4)3[FeF6]. Synthesis Ammonium hexafluoroferrate can be obtained by reacting ferric fluoride trihydrate and ammonium fluoride in water. Physical properties Ammonium hexafluoroferrate is isomorphous with the analogous compounds of aluminum and trivalent titanium, vanadium, and chromium. It crystallizes in a cubic lattice. The compound's thermal decomposition products are ferrous fluoride and ferric fluoride. Chemical properties The compound reacts with xenon difluoride to produce , , Xe, and HF. Uses Ammonium hexafluoroferrate is used as a fire retardant. References Fluoro complexes Ferrates Ammonium compounds Fluorometallates Hexafluorides
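A balanced equation consistent with the synthesis described above would be the following; it is a reconstruction based on the stated reagents and the hexafluoroferrate(III) formula, not an equation taken verbatim from the source:

```latex
\mathrm{FeF_3 \cdot 3H_2O + 3\,NH_4F \longrightarrow (NH_4)_3[FeF_6] + 3\,H_2O}
```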
Ammonium hexafluoroferrate
Chemistry
187
17,709,159
https://en.wikipedia.org/wiki/Phlegmatizer
A phlegmatizer is a compound that minimizes the explosive tendency of another compound or material. The term is derived from the word phlegmatic, meaning 'not easily excited'. Many chemical compounds that are potentially explosive have useful non-explosive applications. One large family of phlegmatizers is the phthalate esters, which are used as solvents to minimize the explosive tendency of organic peroxides, such as dibenzoyl peroxide and MEKP, which are widely used initiators for polymerizations. References Liquid explosives Organic peroxide explosives
Phlegmatizer
Chemistry
118
1,350,362
https://en.wikipedia.org/wiki/Gas%20focusing
Gas focusing, also known as ionic focusing, is an effect in which a beam of charged particles travelling in an inert gas environment sometimes becomes narrower rather than being dispersed. This is ascribed to the generation of gas ions which diffuse outwards, neutralizing the space charge of the particle beam globally and producing an intense radial electric field which applies a radially inward force to the particles in the beam. See also Vacuum tube Teleforce References Sabchevski, S.P. and Mladenov, G.M. (1994). J. Phys. D: Appl. Phys. 27, 690–697. Mladenov, G. and Sabchevski, S. (2001). "Potential distribution and space-charge neutralization in intense electron beams – an overview". Vacuum 62(2–3), pp. 113–122. External links Ionic focusing Plasma technology and applications
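The mechanism can be made semi-quantitative with a standard space-charge result (a textbook sketch, not taken from the source): for a uniform beam of radius a and line charge density λ whose space charge is neutralized by trapped ions to a fractional degree f_e, the net radial field inside the beam is approximately

```latex
E_r(r) \approx \frac{\lambda\,(1 - f_e)}{2\pi\varepsilon_0}\,\frac{r}{a^{2}},
\qquad r \le a .
```

Under-neutralization (f_e < 1) leaves a net outward, defocusing field, while over-neutralization (f_e > 1) reverses its sign and pulls beam particles inward, which is the focusing effect described above.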
Gas focusing
Physics
166
1,906,854
https://en.wikipedia.org/wiki/R/K%20selection%20theory
In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of reduced individual parental investment of r-strategists, or on a reduced quantity of offspring with a corresponding increased parental investment of K-strategists, varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as "cheap" or "expensive", a comment on the expendable nature of the offspring and parental commitment made. The stability of the environment can predict if many expendable offspring are made or if fewer offspring of higher quality would lead to higher reproductive success. An unstable environment would encourage the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because they are more likely to survive to adulthood. The terminology of r/K selection was coined by the ecologists Robert MacArthur and E. O. Wilson in 1967 based on their work on island biogeography, although the concept of the evolution of life history strategies has a longer history (see e.g. plant strategies). The theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies. A life-history paradigm has replaced the r/K selection paradigm, but continues to incorporate its important themes as a subset of life history theory. Some scientists now prefer to use the terms fast versus slow life history as a replacement for, respectively, r versus K reproductive strategy. Overview In r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection. These terms, r and K, are drawn from standard ecological formulae, as illustrated in the simplified Verhulst model of population dynamics: dN/dt = rN(1 − N/K), where N is the population, r is the maximum growth rate, K is the carrying capacity of the local environment, and dN/dt (the derivative of the population size N with respect to time t) is the rate of change in population with time. Thus, the equation relates the growth rate of the population to the current population size, incorporating the effect of the two constant parameters r and K. (Note that when the population size N is greater than the carrying capacity K, then 1 − N/K is negative, which indicates a population decline or negative growth.) The choice of the letter K came from the German Kapazitätsgrenze (capacity limit), while r came from rate. r-selection r-selected species are those that emphasize high growth rates, typically exploit less-crowded ecological niches, and produce many offspring, each of which has a relatively low probability of surviving to adulthood (i.e., high r, low K). A typical r species is the dandelion (genus Taraxacum). In unstable or unpredictable environments, r-selection predominates due to the ability to reproduce rapidly. There is little advantage in adaptations that permit successful competition with other organisms, because the environment is likely to change again. Among the traits that are thought to characterize r-selection are high fecundity, small body size, early maturity onset, short generation time, and the ability to disperse offspring widely.
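A minimal numerical sketch of the Verhulst model above may help; the parameter values here are purely illustrative, and the simple Euler integration is a modelling choice, not part of the theory:

```python
def logistic_step(n, r, k, dt):
    """One Euler step of the Verhulst model dN/dt = r*N*(1 - N/K)."""
    return n + r * n * (1 - n / k) * dt

# Illustrative parameters: growth rate r, carrying capacity K.
n, r, k, dt = 10.0, 1.5, 1000.0, 0.01
for _ in range(2000):  # integrate over 20 time units
    n = logistic_step(n, r, k, dt)

print(round(n, 1))  # the population approaches the carrying capacity K = 1000.0
```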
Organisms whose life history is subject to r-selection are often referred to as r-strategists or r-selected. Organisms that exhibit r-selected traits can range from bacteria and diatoms, to insects and grasses, to various semelparous cephalopods, certain families of birds, such as dabbling ducks, and small mammals, particularly rodents. K-selection By contrast, K-selected species display traits associated with living at densities close to carrying capacity and typically are strong competitors in such crowded niches, investing more heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., low r, high K). In scientific literature, r-selected species are occasionally referred to as "opportunistic" whereas K-selected species are described as "equilibrium". In stable or predictable environments, K-selection predominates as the ability to compete successfully for limited resources is crucial, and populations of K-selected organisms typically are very constant in number and close to the maximum that the environment can bear (unlike r-selected populations, where population sizes can change much more rapidly). Traits that are thought to be characteristic of K-selection include large body size, long life expectancy, and the production of fewer offspring, which often require extensive parental care until they mature. Organisms whose life history is subject to K-selection are often referred to as K-strategists or K-selected. Organisms with K-selected traits include large organisms such as elephants, sharks, humans, and whales, but also smaller long-lived organisms such as Arctic terns, parrots, and eagles. Continuous spectrum Although some organisms are identified as primarily r- or K-strategists, the majority of organisms do not follow this pattern. For instance, trees have traits such as longevity and strong competitiveness that characterise them as K-strategists. In reproduction, however, trees typically produce thousands of offspring and disperse them widely, traits characteristic of r-strategists. Similarly, reptiles such as sea turtles display both r- and K-traits: although sea turtles are large organisms with long lifespans (provided they reach adulthood), they produce large numbers of unnurtured offspring. The r/K dichotomy can be re-expressed as a continuous spectrum using the economic concept of discounted future returns, with r-selection corresponding to large discount rates and K-selection corresponding to small discount rates. Ecological succession In areas of major ecological disruption or sterilisation (such as after a major volcanic eruption, as at Krakatoa or Mount St. Helens), r- and K-strategists play distinct roles in the ecological succession that regenerates the ecosystem. Because of their higher reproductive rates and ecological opportunism, primary colonisers typically are r-strategists and they are followed by a succession of increasingly competitive flora and fauna. The ability of an environment to increase energetic content, through photosynthetic capture of solar energy, increases with the increase in complex biodiversity as species proliferate to reach a peak possible with K strategies. Eventually a new equilibrium is approached (sometimes referred to as a climax community), with r-strategists gradually being replaced by K-strategists which are more competitive and better adapted to the emerging micro-environmental characteristics of the landscape.
Traditionally, biodiversity was considered maximized at this stage, with introductions of new species resulting in the replacement and local extinction of endemic species. However, the intermediate disturbance hypothesis posits that intermediate levels of disturbance in a landscape create patches at different levels of succession, promoting coexistence of colonizers and competitors at the regional scale. Application While usually applied at the level of species, r/K selection theory is also useful in studying the evolution of ecological and life history differences between subspecies, for instance the African honey bee, A. m. scutellata, and the Italian bee, A. m. ligustica. At the other end of the scale, it has also been used to study the evolutionary ecology of whole groups of organisms, such as bacteriophages. Other researchers have proposed that the evolution of human inflammatory responses is related to r/K selection. Some researchers, such as Lee Ellis, J. Philippe Rushton, and Aurelio José Figueredo, have attempted to apply r/K selection theory to various human behaviors, including crime, sexual promiscuity, fertility, IQ, and other traits related to life history theory. Rushton developed "differential K theory" to attempt to explain variations in behavior across human races. Differential K theory has been debunked as being devoid of empirical basis, and has also been described as a key example of scientific racism. Status Although r/K selection theory became widely used during the 1970s, it also began to attract more critical attention. In particular, a review in 1977 by the ecologist Stephen C. Stearns drew attention to gaps in the theory, and to ambiguities in the interpretation of empirical data for testing it. In 1981, a review of the r/K selection literature by Parry demonstrated that there was no agreement among researchers using the theory about the definition of r- and K-selection, which led him to question whether the assumption of a relation between reproductive expenditure and packaging of offspring was justified. A 1982 study by Templeton and Johnson showed that a population of Drosophila mercatorum under K-selection actually produced a higher frequency of traits typically associated with r-selection. Several other studies contradicting the predictions of r/K selection theory were also published between 1977 and 1994. When Stearns reviewed the status of the theory again in 1992, he noted that from 1977 to 1982 there was an average of 42 references to the theory per year in the BIOSIS literature search service, but from 1984 to 1989 the average dropped to 16 per year and continued to decline. He concluded that r/K theory was a once useful heuristic that no longer serves a purpose in life history theory. More recently, the panarchy theories of adaptive capacity and resilience promoted by C. S. Holling and Lance Gunderson have revived interest in the theory, and use it as a way of integrating social systems, economics, and ecology. Writing in 2002, Reznick and colleagues reviewed the controversy regarding r/K selection theory. Alternative approaches are now available both for studying life history evolution (e.g. the Leslie matrix for an age-structured population) and for density-dependent selection (e.g. the variable density lottery model).
See also Evolutionary game theory Life history theory Minimax/maximin strategy Ruderal species Semelparity and iteroparity Survivorship curve Trivers–Willard hypothesis References 1967 introductions Ecological theories Evolutionary biology concepts Mating systems Population ecology Race and intelligence controversy Selection
R/K selection theory
Biology
2,017
8,674
https://en.wikipedia.org/wiki/DECT
Digital Enhanced Cordless Telecommunications (DECT) is a cordless telephony standard maintained by ETSI. It originated in Europe, where it is the common standard, replacing earlier standards such as CT1 and CT2. From the DECT-2020 standard onwards, it also covers IoT communication. Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio-frequency regulations. This forced development of a variation of DECT called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America. DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to a traditional analog telephone line, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Ascom, Cisco, Grandstream, Snom, Spectralink, and RTX. DECT can also be used for purposes other than cordless phones, such as baby monitors, wireless microphones and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol are variants tailored for home security, automation, and the internet of things (IoT). The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP-conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT Forum. The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio. DECT-2020 New Radio, marketed as NR+ (New Radio plus), is a 5G data transmission protocol which meets ITU-R IMT-2020 requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices. Standards history The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992, when the first round of standards was published. These were the ETS 300-175 series in nine parts defining the air interface, and ETS 300-176 defining how the units should be type approved. A technical report, ETR-178, was also published to explain the standard. Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing. Named Digital European Cordless Telephone at its launch by CEPT in November 1987, it was soon renamed Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced.
DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT). DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such as baby monitors and wireless networks. The New Generation DECT (NG-DECT) standard was first published in 2007; it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570. The DECT Forum maintains the CAT-iq trademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527. The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011 and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable home automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, the DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi from microwave ovens, which all operate in the unlicensed 2.4 GHz ISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit. A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications such as wireless speakers, headphones, headsets, and microphones. LC3plus supports scalable 16-bit narrowband, wideband, super wideband, fullband, and 24-bit high-resolution fullband and ultra-band coding, with sample rates of 8, 16, 24, 32, 48 and 96 kHz and audio bandwidth of up to 48 kHz. The DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based on cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM) capable of transfer rates of up to 1.2 Gbit/s with QAM-1024 modulation. The updated standard supports multi-antenna MIMO and beamforming, FEC channel coding, and hybrid automatic repeat request. There are 17 radio channel frequencies in the range from 450 MHz up to 5,875 MHz, and channel bandwidths of 1,728, 3,456, or 6,912 kHz. Direct communication between end devices is possible with a mesh network topology.
In October 2021, DECT-2020 NR was approved for the IMT-2020 standard, for use in massive Machine Type Communications (mMTC) industry automation, Ultra-Reliable Low-Latency Communications (URLLC), and professional wireless audio applications with point-to-point or multicast communications; the proposal was fast-tracked by ITU-R following real-world evaluations. The new protocol will be marketed as NR+ (New Radio plus) by the DECT Forum. OFDMA and SC-FDMA modulations were also considered by the ETSI DECT committee. OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT Forum. Application The DECT standard originally envisaged three major areas of application: Domestic cordless telephony, using a single base station to connect one or more handsets to the public telecommunications network. Enterprise premises cordless PABXs and wireless LANs, using many base stations for coverage. Calls continue as users move between different coverage cells, through a mechanism called handover. Calls can be both within the system and to the public telecommunications network. Public access, using large numbers of base stations to provide high capacity building or urban area coverage as part of a public telecommunications network. Wireless microphone systems, for speech-optimized applications with automatic frequency and interference management. Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprise PABX market, albeit much smaller than the cordless home market, has been very successful as well, and all the major PABX vendors have advanced DECT access options available. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998 Telecom Italia launched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy. The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001. DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could extend to over . One example is the corDECT standard. The first data application for DECT was the Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates. Data applications such as electronic cash terminals, traffic lights, and remote door openers also exist, but have been eclipsed by Wi-Fi, 3G and 4G, which compete with DECT for both voice and data. Characteristics The DECT standard specifies a means for a portable phone or "Portable Part" to access a fixed telephone network via radio. A base station or "Fixed Part" is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as the public switched telephone network (telephone jack), an office PBX, ISDN, or VoIP over an Ethernet connection.
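The channel arithmetic spelled out in the sections below (a 1152 kbit/s GFSK bit rate, 24 slots per 10 ms frame, 480 bits per slot, and a 320-bit B-field per basic packet) is internally consistent, as this short Python check shows; all figures are those quoted later in the article:

```python
BIT_RATE = 1_152_000  # GFSK bit rate in bit/s (quoted below)
FRAME_S = 0.010       # frame duration: 10 ms
SLOTS = 24            # time slots per frame
B_FIELD_BITS = 320    # user data bits per basic (P32) packet

bits_per_frame = BIT_RATE * FRAME_S     # 11520 bits per frame
bits_per_slot = bits_per_frame / SLOTS  # 480 bits (packet plus guard space)
user_rate = B_FIELD_BITS / FRAME_S      # 32000 bit/s per duplex channel

print(int(bits_per_frame), int(bits_per_slot), int(user_rate))
# Expected output: 11520 480 32000
```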
Typical abilities of a domestic DECT Generic Access Profile (GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone jack. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating between each other, and sometimes as walkie-talkies, intercommunicating without a telephone line connection. DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a channel spacing of 1728 kHz. DECT operates as a multicarrier frequency-division multiple access (FDMA) and time-division multiple access (TDMA) system. This means that the radio spectrum is divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots in every 10 ms frame. DECT uses time-division duplex (TDD), which means that downlink and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, with each time slot able to occupy any available channel; thus 10 × 12 = 120 duplex channels are available, each carrying 32 kbit/s. DECT also provides frequency-hopping spread spectrum over the TDMA/TDD structure for ISM band applications. If frequency-hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each timeslot can be assigned to a different channel in order to exploit the advantages of frequency hopping and to avoid interference from other users in asynchronous fashion. DECT allows interference-free wireless operation to around outdoors. Indoor performance is reduced when interior spaces are constrained by walls. DECT performs with fidelity in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices. Technical properties ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the following technical properties: Audio codec: mandatory: 32 kbit/s G.726 ADPCM (narrow band), 64 kbit/s G.722 sub-band ADPCM (wideband) optional: 64 kbit/s G.711 μ-law/A-law PCM (narrow band), 32 kbit/s G.729.1 (wideband), 32 kbit/s MPEG-4 ER AAC-LD (wideband), 64 kbit/s MPEG-4 ER AAC-LD (super-wideband) Frequency: the DECT physical layer specifies RF carriers for the frequency ranges 1880 MHz to 1980 MHz and 2010 MHz to 2025 MHz, as well as the 902 MHz to 928 MHz and 2400 MHz to 2483.5 MHz ISM bands with frequency-hopping for the U.S. market. The most common spectrum allocation is 1880 MHz to 1900 MHz; outside Europe, 1900 MHz to 1920 MHz and 1910 MHz to 1930 MHz spectrum is available in several countries.
The 1880–1900 MHz band is used in Europe, as well as in South Africa, Asia, Hong Kong, Australia, and New Zealand; other national allocations apply in Korea, in Taiwan, in Japan (J-DECT), in China (until 2003), and in Brazil; 1910–1930 MHz is used in Latin America, and 1920–1930 MHz (DECT 6.0) in the United States and Canada. Carriers (1.728 MHz spacing): 10 channels in Europe and Latin America 8 channels in Taiwan 5 channels in the US, Brazil, Japan 3 channels in Korea Time slots: 2 × 12 (up and down stream) Channel allocation: dynamic Average transmission power: 10 mW (250 mW peak) in Europe & Japan, 4 mW (100 mW peak) in the US Physical layer The DECT physical layer uses FDMA/TDMA access with TDD. Gaussian frequency-shift keying (GFSK) modulation is used: the binary one is coded with a frequency increase of 288 kHz, and the binary zero with a frequency decrease of 288 kHz. With high-quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations with 4 and 6 bits per symbol can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s. DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly at the suggestion of the base station) can initiate either intracell handover, selecting another channel/transmitter on the same base, or intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30 s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects the channel with the minimum interference from the RSSI list. The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call, as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges. Data link control layer The DECT media access control layer controls the physical layer and provides connection-oriented, connectionless and broadcast services to the higher layers. The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD. They are based on HDLC. GFSK modulation uses a bit rate of 1152 kbit/s, with a frame of 10 ms (11,520 bits) which contains 24 time slots. Each slot contains 480 bits, some of which are used by the physical packet, with the rest serving as guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP). There are several combinations of slots and corresponding types of physical packets with GFSK modulation: Basic packet (P32) 420 or 424 bits "full slot", used for normal speech transmission. User data (B-field) contains 320 bits. Low-capacity packet (P00) 96 bits at the beginning of the time slot ("short slot"). This packet contains only the 64-bit header (A-field), used as a dummy bearer to broadcast base station identification when idle. Variable capacity packet (P00j) 100 + j or 104 + j bits, either two half-slots (0 ≤ j ≤ 136) or "long slot" (137 ≤ j ≤ 856).
User data (B-field) contains j bits. P64 (j = 640), P67 (j = 672) "long slot", used by NG-DECT/CAT-iq wideband voice and data. High-capacity packet (P80) 900 or 904 bits, "double slot". This packet uses two time slots and always begins in an even time slot. The B-field is increased to 800 bits. The 420/424 bits of a GFSK basic packet (P32) contain the following fields: 32 bits synchronization code (S-field): constant bit string AAAAE98AH for FP transmission, 55551675H for PP transmission 388 bits data (D-field), including 64 bits header (A-field): control traffic in logical channels C, M, N, P, and Q 320 bits user data (B-field): DECT payload, i.e. voice data 4 bits error-checking (X-field): CRC of the B-field 4 bits collision detection/channel quality (Z-field): optional, contains a copy of the X-field The resulting full data rate is 32 kbit/s, available in both directions. Network layer The DECT network layer always contains the following protocol entities: Call Control (CC) Mobility Management (MM) Optionally it may also contain others: Call Independent Supplementary Services (CISS) Connection Oriented Message Service (COMS) Connectionless Message Service (CLMS) All these communicate through a Link Control Entity (LCE). The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol. Many DECT-specific changes have been made. The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT. Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability. DECT GAP is an interoperability profile for DECT. The intent is that two different products from different manufacturers that both conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions. Security The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number. The standard also provides encryption services with the DECT Standard Cipher (DSC). The encryption is fairly weak, using a 35-bit initialization vector and encrypting the voice stream with 64-bit encryption.
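The challenge-response exchange just described can be sketched generically in Python. Note that this is an illustrative stand-in only: DSAA itself is not reproduced here, HMAC-SHA256 is used purely as a placeholder keyed function, and all names are invented.

```python
import hashlib
import hmac
import secrets

UAK = secrets.token_bytes(16)  # 128-bit shared key recorded at registration

def auth_response(key, rand1, rand2):
    """Placeholder for DSAA: derive a short response from key and nonces."""
    return hmac.new(key, rand1 + rand2, hashlib.sha256).digest()[:8]

# Base authenticates the handset: it sends two random numbers, and the
# handset answers with a response computed from the shared key.
rs, rand_f = secrets.token_bytes(8), secrets.token_bytes(8)
handset_res = auth_response(UAK, rs, rand_f)  # computed by the handset
# The base recomputes the expected value and compares in constant time.
assert hmac.compare_digest(handset_res, auth_response(UAK, rs, rand_f))
```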
While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was available from ETSI only under a non-disclosure agreement to the phones' manufacturers. The properties of the DECT protocol make it hard to intercept a frame, modify it and send it again later, as DECT frames are based on time-division multiplexing and need to be transmitted at a specific point in time. Unfortunately, very few DECT devices on the market implemented authentication and encryption procedures, and even when encryption was used by the phone, it was possible to mount a man-in-the-middle attack impersonating a DECT base station and revert to unencrypted mode, which allows calls to be listened to, recorded, and re-routed to a different destination. After an unverified report of a successful attack in 2002, members of the deDECTed.org project reverse-engineered the DECT Standard Cipher in 2008, and as of 2010 there has been a viable attack on it that can recover the key. In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and an improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite. The DECT Forum also launched the DECT Security certification program, which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication. Profiles Various access profiles have been defined in the DECT standard: Public Access Profile (PAP) (deprecated) Generic Access Profile (GAP) ETSI EN 300 444 Cordless Terminal Mobility (CTM) Access Profile (CAP) ETSI EN 300 824 Data access profiles DECT Packet Radio System (DPRS) ETSI EN 301 649 DECT Multimedia Access Profile (DMAP) DECT Evolution and Audio Solution (DA14495) Multimedia in the Local Loop Access Profile (MRAP) Open Data Access Profile (ODAP) Radio in the Local Loop (RLL) Access Profile (RAP) ETSI ETS 300 765 Interworking profiles (IWP) DECT/ISDN Interworking Profile (IIP) ETSI EN 300 434 DECT/GSM Interworking Profile (GIP) ETSI EN 301 242 DECT/UMTS Interworking Profile (UIP) ETSI TS 101 863 Additional specifications DECT 6.0 DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens and the DECT USA Working Group / Siemens ICM. In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference. Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor. Before the 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in the unlicensed 2.4 GHz and 900 MHz Region 2 ISM bands; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment. North American products may not be used in Europe, Pakistan, Sri Lanka, and Africa, as they cause and suffer from interference with the local cellular networks.
Use of such products is prohibited by European Telecommunications Authorities, the PTA, the Telecommunications Regulatory Commission of Sri Lanka and the Independent Communication Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and use is prohibited by the Federal Communications Commission and Innovation, Science and Economic Development Canada. DECT 8.0 HD is a marketing designation for North American DECT devices certified with the CAT-iq 2.0 "Multi Line" profile. NG-DECT/CAT-iq Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on the New Generation DECT (NG-DECT) series of standards from ETSI. NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through SIP and H.323 protocols. There are several CAT-iq profiles which define supported voice features: CAT-iq 1.0 "HD Voice" (ETSI TS 102 527-1): wideband audio, calling party line and name identification (CLIP/CNAP) CAT-iq 2.0 "Multi Line" (ETSI TS 102 527-3): multiple lines, line name, call waiting, call transfer, phonebook, call list, DTMF tones, headset, settings CAT-iq 2.1 "Green" (ETSI TS 102 527-5): 3-party conference, call intrusion, caller blocking (CLIR), answering machine control, SMS, power-management CAT-iq Data: light data services, software upgrade over the air (SUOTA) (ETSI TS 102 527-4) CAT-iq IOT: Smart Home connectivity (IoT) with DECT Ultra Low Energy (ETSI TS 102 939) CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. The CAT-iq 2.0/2.1 feature set is designed to support IP-DECT base stations found in office IP-PBXs and home gateways. DECT-2020 DECT-2020, also called NR+, is a new radio standard by ETSI for the DECT bands worldwide. The standard was designed to meet a subset of the ITU IMT-2020 5G requirements that are applicable to IoT and the Industrial Internet of Things. DECT-2020 is compliant with the IMT-2020 requirements for Ultra-Reliable Low-Latency Communications (URLLC) and massive Machine-Type Communication (mMTC). DECT-2020 NR has new capabilities compared to DECT and DECT Evolution: Better multipath operation (OFDM cyclic prefix) Better radio sensitivity (OFDM and turbo codes) Better resistance to radio interference (co-channel interference rejection) Better bandwidth utilization Mesh deployment The DECT-2020 standard has been designed to co-exist in the DECT radio band with existing DECT deployments. It uses the same time-division slot timing and frequency-division center frequencies, and uses pre-transmit scanning to minimize co-channel interference. DECT for data networks Other interoperability profiles exist in the DECT suite of standards; in particular, the DPRS (DECT Packet Radio Services) brings together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (extended outdoors by the use of directional antennae), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi.
The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and the German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT. However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. Another key weakness was the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance, and DECT's time as a technically competitive wireless data transport had passed. Health and safety DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies. In North America, the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe. The UK Health Protection Agency (HPA) claims that due to a mobile phone's adaptive power ability, a European DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. A European DECT cordless phone's radiation has an average output power of 10 mW, but it is emitted as 100 bursts per second of 250 mW, a strength comparable to some mobile phones. Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on the medical effects of mobile phones which acknowledges that the longer term effects (over several decades) require further research. See also GSM Interworking Profile (GIP) IP-DECT CT2 (DECT's predecessor in Europe) Net3 CorDECT WDECT Unlicensed Personal Communications Services Microcell Wireless local loop References Footnotes Standards ETSI EN 300 175 V2.9.1 (2022-03). Digital Enhanced Cordless Telecommunications (DECT) Common Interface (CI) ETSI TS 103 636 v1.5.1 (2024-03). DECT-2020 New Radio (NR) Digital Enhanced Cordless Telecommunications (DECT) Further reading Technical Report: Multicell Networks based on DECT and CAT-iq. Dosch & Amand Research External links DECT Forum at dect.org DECT information at ETSI DECTWeb.com Open source implementation of a DECT stack Broadband ETSI Local loop Mobile telecommunications standards Software-defined radio Wireless communication systems DECT
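The 10 mW average and 250 mW burst figures quoted above are consistent with DECT's TDMA structure. A back-of-the-envelope check, assuming the standard DECT frame of 10 ms divided into 24 slots (so a handset transmitting in one basic slot per frame emits 100 bursts per second with a duty cycle of roughly 1/24):

```python
# Rough consistency check of the HPA power figures quoted above,
# assuming one transmit burst per 10 ms DECT frame of 24 slots.
frame_s = 0.010          # DECT frame duration: 10 ms (100 frames/s)
slots_per_frame = 24     # standard DECT TDMA frame structure
burst_power_mw = 250.0   # peak burst power quoted in the text

burst_s = frame_s / slots_per_frame       # ~0.42 ms per burst
duty_cycle = burst_s / frame_s            # ~4.2% transmit duty cycle
average_mw = burst_power_mw * duty_cycle  # ~10.4 mW

print(f"duty cycle {duty_cycle:.1%}, average power {average_mw:.1f} mW")
```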
DECT
Technology,Engineering
6,775
75,765,840
https://en.wikipedia.org/wiki/Cothenius%20Medal
Cothenius Medal is a medal awarded by the German National Academy of Sciences Leopoldina (known as the Leopoldina) for outstanding scientific achievement during the life of the awardee. The medal was created to honour Christian Andreas Cothenius, who was the personal physician to Frederick the Great. In 1743, Cothenius became a fellow of the Leopoldina, and later president of the learned society, which had been created by Emperor Leopold I. When Cothenius died, he left a sum of money in his will to the society, on the condition that the interest on the money be used to award a gold medal every two years for answering a question in medicine whereby some new truth could be established. Up until 1864 the award came with a prize, but it was then converted into an award for the promotion of research over the whole period of a person's life. Each medal bears the Latin inscription "Praemium virtutis salutem mortalium provehentibus sancitum" (Created in recognition of the ability of those who promote the good of mortals). Cothenius Medal awardees, 1959–2023 Cothenius Medal awardees, 1864–1953 Cothenius Medal awardees, 1792–1861 References Cothenius Medal Awards established in 1792 Biology awards Chemistry awards Physics awards 1743 in science Geology awards
Cothenius Medal
Technology
269
18,996,438
https://en.wikipedia.org/wiki/TIE1
Tyrosine kinase with immunoglobulin-like and EGF-like domains 1, also known as TIE1, is an angiopoietin receptor which in humans is encoded by the TIE1 gene. Function TIE1 is a cell surface protein expressed predominantly in endothelial cells, although it has also been shown to be expressed in immature hematopoietic cells and platelets. TIE1 upregulates the cell adhesion molecules (CAMs) VCAM-1, E-selectin, and ICAM-1 through a p38-dependent mechanism. Attachment of monocyte-derived immune cells to endothelial cells is also enhanced by TIE1 expression. TIE1 has a proinflammatory effect and may play a role in endothelial inflammatory diseases such as atherosclerosis. See also References External links Tyrosine kinase receptors Proteins
TIE1
Chemistry
184
44,970,293
https://en.wikipedia.org/wiki/Kepler-442b
Kepler-442b (also known by its Kepler object of interest designation KOI-4742.01) is a confirmed near-Earth-sized exoplanet, likely rocky, orbiting within the habitable zone of the K-type main-sequence star Kepler-442, about 1,200 light-years from Earth in the constellation of Lyra. The planet orbits its host star with an orbital period of roughly 112.3 days. It has a mass of around 2.3 times and a radius of about 1.34 times that of Earth. It is one of the more promising candidates for potential habitability, as its parent star is at least 40% less massive than the Sun – thus, it can have a lifespan of about 30 billion years. The planet was discovered by NASA's Kepler spacecraft using the transit method, in which it measures the dimming effect that a planet causes as it crosses in front of its star. NASA announced the confirmation of the exoplanet on 6 January 2015. Physical characteristics Mass, radius, and temperature Kepler-442b is a super-Earth, an exoplanet with a mass and radius bigger than Earth's but smaller than the ice giants Uranus and Neptune. It has an equilibrium temperature of about 233 K (−40 °C). It has a radius of 1.34 Earth radii and an estimated mass of 2.36 Earth masses. According to Ethan Siegel, this puts the planet "right on the border" between likely being a rocky planet and a Mini-Neptune gas planet. The surface gravity on Kepler-442b would be about 30% stronger than Earth's, assuming a rocky composition similar to that of Earth. Host star The planet orbits a (K-type) star named Kepler-442. The star has a mass of 0.61 solar masses and a radius of 0.60 solar radii. It is cooler than the Sun and around 2.9 billion years old, with some uncertainty. In comparison, our Sun is 4.6 billion years old and has a temperature of 5,778 K. The star is somewhat metal-poor, with a metallicity (Fe/H) of −0.37, or 43% of the solar amount. Its luminosity is about 12% that of the Sun. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14.76. Therefore, it is too dim to be seen with the naked eye. Orbit Kepler-442b orbits its host star with an orbital period of 112 days. It has an orbital radius of about 0.41 AU, slightly larger than the distance of Mercury from the Sun (approximately 0.39 AU). It receives about 70% as much sunlight as Earth does from the Sun. Habitability The planet is in the habitable zone of its star, a region where liquid water could exist on the planet's surface. It is one of the most Earth-like planets yet found in size and temperature. It is just outside the range in which tidal forces from its host star would be enough to fully tidally lock it. As of July 2018, Kepler-442b was considered the most habitable non-tidally-locked exoplanet discovered. Stellar factors K-type main-sequence stars are smaller than the Sun and live longer, remaining on the main sequence for 18 to 34 billion years compared to the Sun's estimated lifespan of 10 billion years. Despite these properties, small M-type and K-type stars can nevertheless threaten life. Because of their high stellar activity at the beginning of their lives, they emit strong stellar winds. The duration of this period is inversely linked to the size of the star. Although the age of Kepler-442 is uncertain, the star has likely passed this stage, making Kepler-442b potentially more suitable for habitability.
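The "about 30% stronger" surface gravity figure follows directly from the quoted mass and radius, since surface gravity scales as M/R². A quick check using the article's values:

```python
# Surface gravity relative to Earth, g ∝ M / R², using the mass and
# radius quoted above (2.36 Earth masses, 1.34 Earth radii).
mass_earths = 2.36
radius_earths = 1.34

g_relative = mass_earths / radius_earths**2
print(f"{g_relative:.2f} g")  # ~1.31 g, i.e. about 30% stronger than Earth
```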
Tidal effects and further reviews Because Kepler-442b is closer to its star than Earth is to the Sun, the planet will probably rotate much more slowly than Earth; its day could be weeks or months long (see Tidal effects on rotation rate, axial tilt, and orbit). This is reflected in its orbital distance, just outside of the point where the tidal interactions from its star would be strong enough to tidally lock it. Kepler-442b's axial tilt (obliquity) is likely tiny, in which case it would not have tilt-induced seasons as Earth and Mars do. Its orbit is probably close to circular (eccentricity 0.04), so it will also lack eccentricity-induced seasonal changes like Mars. One review essay in 2015 concluded that Kepler-442b, Kepler-186f, and Kepler-62f were likely the best candidates for being potentially habitable planets. Also, according to an index developed in 2015, Kepler-442b is even more likely to be habitable than a hypothetical "Earth twin" with physical and orbital parameters matching those of Earth. Going by this index, Earth has a rating of 0.829, but Kepler-442b has a rating of 0.836. The actual habitability is uncertain because Kepler-442b's atmosphere and surface are unknown. The paper introducing the habitability index clarifies that a higher-than-Earth value "does not mean these planets are 'more habitable' than Earth". Discovery and follow-up studies In 2009, NASA's Kepler spacecraft was completing its initial test observations of stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular period. In this last test, Kepler observed 50,000 stars in the Kepler Input Catalog, including Kepler-442; the telescope sent the preliminary light curves to the Kepler science team for analysis, who chose prominent planetary companions from the bunch for follow-up at observatories. Observations for the potential exoplanet candidates took place between 13 May 2009 and 17 March 2012. After observing the respective transits, which for Kepler-442b occurred roughly every 112 days (its orbital period), the scientists eventually concluded that a planetary body was responsible for the periodic 112-day transits. The discovery, along with the unique planetary systems of the stars Kepler-438 and Kepler-440, was announced on 6 January 2015. Kepler-442b, located approximately 370 parsecs (1,200 light-years) away, presents a challenge for current telescopes, and even for the upcoming generation of planned ones, to ascertain its mass or the presence of an atmosphere, due to its considerable distance from Earth. The Kepler spacecraft concentrated on a limited portion of the sky, limiting its ability to gather comprehensive data. However, upcoming planet-hunting space telescopes like TESS and CHEOPS are poised to survey nearby stars across the entire celestial sphere, potentially shedding light on the properties of distant exoplanets like Kepler-442b. The James Webb Space Telescope and future large ground-based telescopes can then study nearby stars with planets to analyze atmospheres, determine masses, and infer compositions. Additionally, the Square Kilometre Array would significantly improve radio observations over the Arecibo Observatory and the Green Bank Telescope. See also List of potentially habitable exoplanets Life habitable zones Kepler-62f Kepler-186f Kepler-452b Kepler-440b Kepler-438b References External links NASA – Mission overview. NASA – Kepler Discoveries – Summary Table.
NASA – Kepler-442b at The NASA Exoplanet Archive. NASA – Kepler-442b at the Extrasolar Planets Encyclopaedia. Habitable Exoplanets Catalog at UPR-Arecibo. 442b Exoplanets discovered in 2015 Kepler-442 Transiting exoplanets Lyra Near-Earth-sized exoplanets in the habitable zone
Kepler-442b
Astronomy
1,603
344,922
https://en.wikipedia.org/wiki/Neuroevolution%20of%20augmenting%20topologies
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks (a neuroevolution technique) developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying"). Performance On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques and reinforcement learning methods, as of 2006. Algorithm Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure. This means that a trial-and-error process may be necessary in order to determine an appropriate topology. NEAT is an example of a topology and weight evolving artificial neural network (TWEANN), which attempts to simultaneously learn weight values and an appropriate topology for a neural network. To encode the network as a genotype for the GA, NEAT uses a direct encoding scheme, meaning every connection and neuron is explicitly represented. This is in contrast to indirect encoding schemes, which define rules that allow the network to be constructed without explicitly representing every connection and neuron, allowing for more compact representation. The NEAT approach begins with a perceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path, or by creating a new connection between (formerly unconnected) neurons. Competing conventions The competing conventions problem arises when there is more than one way of representing the same information in a genome. For example, if a genome contains neurons A, B and C and is represented by [A B C], and this genome is crossed with a functionally identical genome ordered [C B A], crossover will yield children that are missing information ([A B A] or [C B C]); in this example, a third of the information has been lost. NEAT solves this problem by tracking the history of genes through a global innovation number, which increases as new genes are added. When a new gene is added, the global innovation number is incremented and assigned to that gene; thus, the higher the number, the more recently the gene was added. If an identical mutation occurs in more than one genome in the same generation, both genes are given the same number; beyond that, however, the innovation number remains unchanged indefinitely. These innovation numbers allow NEAT to match up genes which can be crossed with each other, as illustrated in the sketch below. Implementation The original implementation by Ken Stanley is published under the GPL. It integrates with Guile, a GNU Scheme interpreter. This implementation of NEAT is considered the conventional basic starting point for implementations of the NEAT algorithm.
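As an illustration of the innovation-number idea, here is a minimal sketch (not Stanley's implementation; all names and the data layout are invented for the example) of how a global innovation counter is assigned to new connection genes and then used to align two genomes during crossover. Genes sharing an innovation number are "matching" and can be crossed; the remaining (disjoint or excess) genes are inherited from the fitter parent, assumed here to be parent_a.

```python
import random

_innovations = {}       # (in_node, out_node) -> innovation number
_next_innovation = 0

def innovation_number(in_node, out_node):
    """Assign a global innovation number, reusing the same number
    when the identical mutation occurs in more than one genome."""
    global _next_innovation
    key = (in_node, out_node)
    if key not in _innovations:
        _innovations[key] = _next_innovation
        _next_innovation += 1
    return _innovations[key]

def crossover(parent_a, parent_b):
    """Genomes are dicts mapping innovation number -> connection weight.
    Matching genes are chosen at random; the rest come from parent_a."""
    child = {}
    for innov, weight in parent_a.items():
        if innov in parent_b:                     # matching gene
            child[innov] = random.choice([weight, parent_b[innov]])
        else:                                     # disjoint/excess gene
            child[innov] = weight                 # from the fitter parent
    return child

g1 = {innovation_number(0, 2): 0.5, innovation_number(1, 2): -0.7}
g2 = {innovation_number(0, 2): 0.1, innovation_number(1, 3): 0.9}
print(crossover(g1, g2))  # genes aligned by history, not by position
```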
Extensions rtNEAT In 2003, Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the iteration of generations used by most genetic algorithms. The basic idea is to put the population under constant evaluation with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population, and if so, it is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network, and it is placed in the population to participate in the ongoing evaluations. The first application of rtNEAT was a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by some other player, to see how well their training regimens prepared their robots for battle. Phased pruning An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed the concern that unbounded automated growth would generate unnecessary structure. HyperNEAT HyperNEAT is specialized to evolve large-scale structures. It was originally based on CPPN theory and is an active field of research. cgNEAT Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences. The first video game to implement cgNEAT is Galactic Arms Race, a space-shooter game in which unique particle system weapons are evolved based on player usage statistics. Each particle system weapon in the game is controlled by an evolved CPPN, similarly to the evolution technique in the NEAT Particles interactive art program. odNEAT odNEAT is an online and decentralized version of NEAT designed for multi-robot systems. odNEAT is executed onboard the robots themselves during task execution to continuously optimize the parameters and the topology of the artificial neural network-based controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks. The online evolutionary process is implemented according to a physically distributed island model. Each robot optimizes an internal population of candidate solutions (intra-island variation), and two or more robots exchange candidate solutions when they meet (inter-island migration). In this way, each robot is potentially self-sufficient and the evolutionary process capitalizes on the exchange of controllers between multiple robots for faster synthesis of effective controllers.
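The rtNEAT replacement cycle described above can be sketched as a simple loop. This is only an illustration of the idea, not the NERO implementation: the lifetime, the "near the bottom" threshold, and the evaluate/breed stand-ins are all invented for the example.

```python
import random

def evaluate(net):
    """Stand-in for measuring task performance."""
    return net["fitness"] + random.uniform(-0.1, 0.1)

def breed(a, b):
    """Stand-in for NEAT crossover and mutation of two parents."""
    return {"fitness": (a["fitness"] + b["fitness"]) / 2, "age": 0}

def rtneat_step(population, lifetime=100):
    """One tick of real-time evolution: evaluate everyone, then replace
    expired low-fitness individuals with offspring of high-fitness parents."""
    for net in population:
        net["age"] += 1
        net["fitness"] = evaluate(net)    # population under constant evaluation
    ranked = sorted(population, key=lambda n: n["fitness"])
    for i, net in enumerate(population):
        if net["age"] >= lifetime:
            if net in ranked[: len(ranked) // 4]:            # near the bottom?
                population[i] = breed(ranked[-1], ranked[-2])  # two fit parents
            else:
                net["age"] = 0             # survives; restart its timer
```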
See also Evolutionary acquisition of neural topologies References Bibliography Implementations Stanley's original, mtNEAT and rtNEAT for C++ ECJ, JNEAT, NEAT 4J, ANJI for Java SharpNEAT for C# MultiNEAT and mtNEAT for C++ and Python neat-python for Python NeuralFit (not an exact implementation) for Python Encog for Java and C# peas for Python RubyNEAT for Ruby neatjs for Javascript Neataptic for Javascript (not an exact implementation) Neat-Ex for Elixir EvolutionNet for C++ goNEAT for Go (programming language) External links NEAT Homepage "Evolutionary Complexity Research Group at UCF" - Ken Stanley's current research group NERO: Neuro-Evolving Robotic Operatives - an example application of rtNEAT GAR: Galactic Arms Race - an example application of cgNEAT "PicBreeder.org" - Online, collaborative art generated by CPPNs evolved with NEAT. "EndlessForms.com" - A 3D version of Picbreeder, where you interactively evolve 3D objects that are encoded with CPPNs and evolved with NEAT. BEACON Blog: What is neuroevolution? MarI/O - Machine Learning for Video Games, a YouTube video demonstrating an implementation of NEAT learning to play Super Mario World "GekkoQuant.com" - A visual tutorial series on NEAT, including solving the classic pole balancing problem using NEAT in R "Artificial intelligence learns Mario level in just 34 attempts" NEAT explained via MarI/O program Evolutionary algorithms and artificial neuronal networks Evolutionary computation Genetic algorithms
Neuroevolution of augmenting topologies
Biology
1,552
75,479,357
https://en.wikipedia.org/wiki/Polyelectrolyte%20theory%20of%20the%20gene
The polyelectrolyte theory of the gene proposes that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. These charges maintain the uniform physical properties needed for Darwinian evolution, regardless of the information encoded in the genetic biopolymer. DNA is such a molecule. Regardless of its nucleic acid sequence, the negative charges on its backbone dominate the physical interactions of the molecule to such a degree that it maintains uniform physical properties such as its aqueous solubility and double-helix structure. The polyelectrolyte theory of the gene was proposed by Steven A. Benner and Daniel Hutter in 2002 and has largely remained a theoretical framework astrobiologists have used to think about how life may be detected beyond Earth. This idea was later linked by Benner to Erwin Schrödinger's view of the gene as an "aperiodic crystal" to make a robust, universally generalized concept of a genetic biopolymer—a biopolymer acting as a unit of inheritance in Darwinian evolution. Benner and others who built on his work have proposed methods for how to concentrate and identify genetic biopolymers on other planets and moons within the solar system using electrophoresis, which uses an electric field to concentrate charged compounds. Although few have tested the polyelectrolyte theory of the gene, in 2019, lab experiments challenged the universality of this idea. This work was able to create non-electrolyte polymers capable of limited Darwinian evolution, but only up to a length of 72 nucleotides. Physical structure of polyelectrolytes A polyelectrolyte is a polymer with repeating electrostatically charged units. In the context of the polyelectrolyte theory of the gene, this polyelectrolyte is a biopolymer—a polymer derived from a living system—with a repeated ionically charged unit, similar to the genetic biopolymer in modern biology, DNA. Although RNA does not act as a genetic biopolymer archive in modern biology—except in the case of some viruses such as coronavirus and HIV—the RNA World hypothesis suggests that RNA may have preceded DNA as life’s first genetic biopolymer. The nucleotide building blocks that make up DNA and RNA are connected by negatively charged phosphate groups. These phosphodiester linkages create the repeating negative charges on the molecule’s backbone that give DNA and RNA their polyelectrolyte nature. Polyelectrolytes in the context of genetic biopolymers To participate in Darwinian evolution, which can be described as "descent with modification", a unit of inheritance must be capable of imperfect replication to occasionally produce a new modified unit of inheritance, which must still be capable of being replicated. This imperfect replication leads to the variation on which Darwinian evolution can act. The polyelectrolyte theory of the gene attempts to understand modern biology’s unit of inheritance, DNA, at a generalizable level. In 2002, Steven A. Benner and Daniel Hutter identified the repeated charges in DNA's phosphodiester linkages as crucial to its function as a genetic biopolymer. They proposed with the polyelectrolyte theory of the gene that repeated ionic charges—positive or negative—are a general requirement for all water-dissolved genetic biopolymers to undergo Darwinian evolution anywhere in the cosmos. 
This concept works in tandem with the view of the gene as an "aperiodic crystal", as proposed by Erwin Schrödinger in his 1944 book "What Is Life?". An aperiodic crystal, as Schrödinger describes it, has a discrete set of molecular building blocks in a non-repeating arrangement. DNA is an aperiodic crystal composed of discrete nucleobases (A, T, C, and G), which are arranged based on the information they encode, not in any repeated format. While this idea of an "aperiodic crystal" was not initially linked to the polyelectrolyte theory of the gene, Benner, in later work, connected the two. Polyelectrolytes remain physically uniform regardless of the information encoded In biochemistry, the structure of a biomolecule dictates its function, and therefore changes in structure cause changes in function. To work as a unit of inheritance, the genetic biopolymer must maintain its shape, and therefore physical and chemical consistency, regardless of the information the structure encodes. DNA is such a molecule. No matter what the nucleic acid sequence is, DNA maintains a consistent double helix structure and, therefore, the consistent physical properties that allow it to remain dissolved in water and be replicated by cellular machinery. The polyelectrolyte theory of the gene reasons that DNA can maintain its shape regardless of mutations because the negative charges on the phosphate backbone dominate the physical interactions of the molecule to such a degree that changes in the nucleic acid sequence, the encoded information, do not affect the overall physical behavior of the molecule. For example, thymidine nucleotides (T) are very soluble in water while guanosine nucleotides (G) are much less soluble; however, an oligonucleotide—a short polynucleotide sequence—composed of only thymine and one composed of only guanine have the same overall structure and physical properties. If changes in the nucleic acid sequence, which encodes genetic information, changed the physical properties of DNA, these changes could break down the mechanism by which DNA replicates. This physical uniformity is very rare in nature. Consider another biopolymer, proteins. The nucleic acid sequence in DNA codes for the sequence of amino acids that make up proteins. A change to even a single amino acid in the primary sequence of a protein can completely change the physical properties of that protein. For example, the sickle-cell trait is caused by a single mutation of an adenine to a thymine in the hemoglobin gene, causing a switch from a glutamic acid to a valine. This changes the three-dimensional structure of hemoglobin and thus changes the physical properties of the protein, leading to the sickle-cell trait. Proteins are sensitive to changes in amino acid sequence because the 20 different amino acid side chains form bonds and partial bonds with each other. In addition, the protein backbone has a dipole moment—having partially positive and partially negative sides—which can further create interactions within the molecule. These side-chain and backbone interactions are sensitive to changes in the environment and amino acid sequence. It is unlikely that a protein could act as a genetic biomolecule because changes in amino acid sequence lead to changes in overall physical structure and properties. Another non-electrolyte biopolymer would suffer the same challenges as a protein when acting as a genetic biomolecule.
Changes in physical properties with changes in encoded information would mean that such a molecule would struggle to be replicated with certain sequences of encoded information, as those sequences would result in physical properties incompatible with replication. This problem means that the hypothetical protein gene would not be able to explore all possible genetic sequences, as certain sequences would cause the molecule to fail to be replicated based on the physical structure of its gene, not on the fitness of what the gene codes for. Benner and Hutter initially described this property of DNA as being "capable of surviving modifications in constitution without loss of properties essential for replication", summarized by the acronym COSMIC-LOPER. This acronym gives scientists a shorthand way of describing the complex idea of a genetic biopolymer having physical uniformity, regardless of encoded information, that allows it to be replicated. Although RNA is often described as a genetic biopolymer because of its theorized role as life's first unit of inheritance (RNA World), it is not entirely COSMIC-LOPER. RNA, especially sequences high in guanine (G), is capable of folding and performing enzyme-type chemistry. Folding in guanine-rich RNA sequences impairs the templating ability of RNA, and thus its ability to be replicated in an RNA-world scenario, for the same reason it would be difficult for a protein-based gene to replicate. Repeated ionic charges increase solubility in water The repeated negative charges increase the solubility of DNA and RNA in water. Because ionic groups interact very favorably with water, having them on the molecule's backbone increases the molecule's solubility. If the backbone of a hypothetical genetic biopolymer were linked together in a non-ionic fashion, the solubility of the whole molecule would decrease. Solubility is important because, in order to be replicated, DNA—or any other genetic biomolecule—must be soluble to interact with replicative machinery. Repeated ionic charges promote Watson–Crick base pairing specificity The repeated negative charges of the DNA backbone electrostatically repel each other, preventing interactions both within and between DNA strands. This repulsion promotes specific interactions along the Watson–Crick 'edge' of the nucleobases, promoting Watson–Crick base pairing specificity—A pairs with T and C pairs with G. Repeated ionic charges prevent folding The repeated negative charges on the backbone keep DNA and many RNA molecules from folding and allow them to act as templates. In water, molecules take on the conformation that is most energetically favorable, with the lowest Gibbs free energy. This configuration maximizes favorable interactions (hydrogen bonding, positive–negative charge interactions, van der Waals interactions) and minimizes unfavorable interactions (i.e., hydrophilic–hydrophobic interactions and like-charge interactions). In the case of double-stranded DNA and RNA, the most energetically favorable form is the linear double helix configuration, because it maximizes interactions between base pairs and between the negatively charged backbone and the surrounding water molecules while minimizing interactions between the negatively charged phosphodiester linkages of the backbone. If the double-stranded DNA or RNA molecule folded, it would exchange favorable water–backbone interactions for unfavorable backbone–backbone interactions.
A biopolymer without an ionically charged backbone, like proteins, would not produce unfavorable backbone–backbone interactions during folding and thus would readily fold and aggregate. This inherent tendency towards linearity improves DNA's ability to act as a template for replication, because folded and aggregated conformations are inaccessible to replication machinery. Lab experiments Lab experiments conducted with non-electrolyte analogs of DNA and RNA initially inspired Benner and Hutter to publish on the polyelectrolyte theory of the gene. During the late '80s and '90s, scientists developed synthetic DNA-like molecules to bind to and silence unwanted mRNA gene products as a way to treat disease. As part of this exploratory research, researchers developed a variety of non-electrolyte RNA and DNA analogs that would be able to cross the cell membrane, which DNA and RNA are incapable of doing because of their charged backbones. One of these analogs substituted a sulfone (SO₂) for the natural phosphodiester (PO₂⁻) linkage. While initial experiments showed the sulfone analog to have very similar properties to DNA as a dimer—two nucleotides linked together—when longer sulfone analogs were synthesized, they folded, lost Watson–Crick base pair specificity, and had dramatic changes in physical properties due to small changes in nucleic acid sequence. The reduction in the quality of the traits that make DNA a good genetic molecule was seen with all the nonionic linkers that had been tested as of 2002. The closest non-electrolyte analog to maintaining the qualities of DNA was the polyamide-linked nucleic acid analog (PNA), which replaced the phosphodiester linkage of DNA with an uncharged N-(2-aminoethyl)glycine linkage. Even Benner and Hutter questioned whether PNA might disprove their polyelectrolyte hypothesis; however, even though PNA maintained the qualities of DNA up to a length of 20 nucleotides, beyond that length the molecules started to lose Watson–Crick base pair specificity, aggregated, and became sensitive to changes in nucleic acid sequence. Lab experiments that challenge the polyelectrolyte theory of the gene In 2019, a group led by Philipp Holliger in Cambridge, England, developed non-electrolyte P-alkylphosphonate nucleic acid (phNA) DNA analogs that were able to undergo templated synthesis and directed evolution. The phNA analogs substituted the charged oxygen on DNA's phosphate backbone with an uncharged methyl or ethyl group. While other DNA analogs have been shown to undergo templated synthesis and directed evolution, this discovery was the first time a non-electrolyte DNA analog had been shown to have these properties and the first time the polyelectrolyte theory of the gene had been experimentally challenged. However, the template-directed synthesis of phNA was only performed up to a length of 72 nucleotides. This is around the length of the shortest naturally occurring genes, such as those for tRNA, but is several orders of magnitude shorter than the genome of the smallest free-living organism. The human genome, for reference, is 3.05×10⁹ base pairs long. As an "agnostic biosignature" Since its inception, the polyelectrolyte theory of the gene has been put in the context of searching for life in the universe. This theory, combined with Schrödinger's view of a gene as an aperiodic crystal, provides a so-called "agnostic biosignature", a sign of life that does not presuppose any particular biochemistry. In other words, a generalized view of life should hold anywhere in the universe.
Since the theorized genetic polyelectrolyte biomolecules could be charged either positively or negatively (DNA and RNA are negatively charged), they can be concentrated in water with an electric field using electrophoresis or electrodialysis. This hypothetical concentration device has been called an agnostic life-finding device. Similar to how electrophoresis works to separate DNA molecules, negatively charged molecules, like DNA or RNA, would be attracted to a positively charged anode, and positively charged genetic biomolecules would be attracted to a negatively charged cathode. Once the polyelectrolyte biomolecule has been concentrated, Benner suggests the molecules be tested for size and shape uniformity. In addition, the molecules should be tested for the use of a limited number of building blocks arranged in a non-repeating fashion, an aperiodic crystal structure. Benner has suggested that this could be done using matrix-assisted laser desorption ionization (MALDI) paired with an Orbitrap high-resolution mass spectrometer. Another suggested approach has been to use nanopore sequencing technology, although questions remain about whether the solar radiation experienced during transit and on-site would affect the functionality of the device. While space agencies have yet to use any of these proposed systems for life detection, they may be used in the future on Mars, Enceladus, and Europa. Despite the polyelectrolyte theory of the gene and the aperiodic crystal view of the gene being described as agnostic biosignatures, these theories are terra-centric, i.e. centered on Earth life. It is unknown what life on another world might be like; while it is often stated that life of any kind needs biomolecules and water, this may not be true. References Wikipedia Student Program Origin of life
Polyelectrolyte theory of the gene
Biology
3,280
2,013,320
https://en.wikipedia.org/wiki/Squround
A squround or scround is a container with a shape between a square and a round tub. It resembles an oval but is sometimes closer to a rectangle with rounded corners. These shapes allow the contents to be easily scooped out of the container. The name is a portmanteau of "square" and "round" (cartons), referring to a compromise between a square and a round carton. As an adjective, squround has been applied to other objects, such as watches or swimming pools. Usage within food packaging Ice cream squround containers The term applies mostly to ice cream packaging design, where the switch to a squround from paperboard bricks, cylindrical half-gallons and other containers is motivated by consumer preference, as well as cost effectiveness. These packages are more rectangular than square, but the side edges are rounded, while the top and bottom surfaces are completely flat. Squround packaging affords some of the consumer appeal of traditional cylindrical packaging, while also packing tightly like brick-shaped square cartons. The container is usually made of paperboard but can have thermoformed or injection molded plastic components. There is usually a separate lid made of paperboard, plastic, or both. It offers several advantages over other ice cream packages: It can be easily scooped out It packs more tightly than previous designs It allows more efficient use of retail shelf space and home freezer space It allows for better brand recognition (over the round half-gallon) since the flatter front is a more legible "billboard" for each flavor The lid can have tamper-evident features, usually in the form of a tab to break before the lid can be removed The lid has a tighter seal Although squrounds are available in traditional half-gallon sizes, there is a trend toward marketing non-traditional 56-ounce and, in recent years, smaller 48-ounce cartons. The downsizing in carton size has not seemed to negatively affect unit sales. Mayfield Dairy, which announced the switch to squround cartons in January 2003, told Food Engineering in April that it expected to sell the same number of 56 oz. units in 2003 as it sold 64 oz. cartons in 2002. Breyers, which in 2000 was an early adopter of the smaller package for its "Ice Cream Parlor" brands, uses the smaller package across all its ice cream flavors. In 2008, it changed to a smaller 48 oz container. Other squround containers Outside the ice cream sector, Nestlé has also produced squround containers for its Nescafé range of large instant coffee tins. It attributed the new design to making the containers "simple to hold, pour and store". In a separate interview, it also stated that the design would help reduce lost lids and make the tins easier to grasp. See also Squircle Fillet (mechanics) Chamfer References Yam, K.L., Encyclopedia of Packaging Technology, John Wiley & Sons, 2009. External links Food packaging Ice cream Geometric shapes
Squround
Mathematics
617
25,267,926
https://en.wikipedia.org/wiki/Drip%20chamber
A drip chamber, also known as a drip bulb, is a device used to allow gas (such as air) to rise out of a fluid so that it is not passed downstream. It is commonly employed in delivery systems for intravenous therapy and acts to prevent air embolism. The use of a drip chamber also allows an estimate of the rate at which fluid is administered. For a fluid of a given viscosity, drops from a hole of known size will be of nearly identical volume, so the number of drops in a minute can be counted to gauge the rate of flow. In this instance the rate of flow is usually controlled by a clamp on the infusion tubing, which affects the resistance to flow. However, other sources of resistance (such as whether the vein is kinked or compressed by the patient's position) cannot be so directly controlled, and a change in position may change the rate of flow, leading to inadvertently rapid or slow infusion. Where this might be problematic, an infusion pump can be used, which gives a more accurate measurement of flow rate. Drip chambers can be classified into macro-drip (about 10 to 20 gtts/ml) and micro-drip (about 60 gtts/ml) based on their drop factors. For a given drip chamber, the drop factor is the number of drops (as the fluid drips from the hole into the chamber) per ml of the IV fluid. The flow rate can be calculated from the drops counted in the drip chamber and its drop factor, as in the example below. The unit of flow rate is gtts/min, where gtts stands for guttae (a Latin plural noun meaning "drops"). References Drug delivery devices Medical devices
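A worked example of the drop-factor arithmetic (values are illustrative): with a 20 gtts/ml macro-drip chamber, counting 40 drops per minute corresponds to 40 / 20 = 2 ml/min, or 120 ml per hour.

```python
# Flow rate from an observed drip count and the chamber's drop factor.
def flow_rate_ml_per_hour(drops_per_min: float, drop_factor_gtts_per_ml: float) -> float:
    ml_per_min = drops_per_min / drop_factor_gtts_per_ml
    return ml_per_min * 60

print(flow_rate_ml_per_hour(40, 20))  # macro-drip: 120.0 ml/h
print(flow_rate_ml_per_hour(60, 60))  # micro-drip: 60.0 ml/h
```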
Drip chamber
Chemistry,Biology
343
2,897,014
https://en.wikipedia.org/wiki/William%20Radcliffe%20Birt
William Radcliffe Birt FRAS (15 July 1804 – 14 December 1881) was an English amateur astronomer in the 19th century. Birt was employed by John Herschel to carry out a great deal of meteorological research on atmospheric waves, from 1839 to 1843. Much of his work is held in the Scientist's Collection at the American Philosophical Society. Probably on Herschel's recommendation, Birt became involved with the Kew Observatory in the later 1840s under the directorship of Francis Ronalds. He analysed and published the latter's detailed atmospheric electricity and meteorological observations. They also worked together on a new design of kite for making meteorological recordings in the upper air. Birt was formally appointed in late 1849 as Ronalds' assistant, but their relationship soured shortly afterwards and Birt was requested by the Kew Committee to leave in mid-1850. In 1850 he published 'The Hurricane Guide: Being an attempt to Connect the Rotary Gale or Revolving Storm with Atmospheric Waves'. In 1853 he published a second book, 'Handbook on the Law of Storms: being a digest of the Principal Facts of Revolving Storms', a second edition of which appeared in 1879. Although Birt published some papers on general astronomy, he is best remembered for his work in selenography, particularly work published in the Astronomical Register. He was the president of the short-lived Selenographical Society. Birt was able to use the Hartwell Observatory for some of his work. For the last decade of his life he published a monthly guide to lunar observation. He also published, jointly with the Rev. W. J. B. Richards, an introductory series of articles on how to observe the Moon. The lunar crater Birt is named after him. References Further reading Obituary in MNRAS (1882), v. 42, p. 142–144. Obituary in The Astronomical Register (1882), v. 20, p. 12–13. External links Vladimir Jankovic, 'John Herschel's and William Radcliffe Birt's research on atmospheric waves' Scientists Collection 1804 births 1881 deaths Amateur astronomers English meteorologists 19th-century English astronomers Fellows of the Royal Astronomical Society
William Radcliffe Birt
Astronomy
442
31,789,393
https://en.wikipedia.org/wiki/Langlands%E2%80%93Shahidi%20method
In mathematics, the Langlands–Shahidi method provides the means to define automorphic L-functions in many cases that arise with connected reductive groups over a number field. This includes Rankin–Selberg products for cuspidal automorphic representations of general linear groups. The method develops the theory of the local coefficient, which links to the global theory via Eisenstein series. The resulting L-functions satisfy a number of analytic properties, including an important functional equation. The local coefficient The setting is in the generality of a connected quasi-split reductive group G, together with a Levi subgroup M, defined over a local field F. For example, if G = G_l is a classical group of rank l, its maximal Levi subgroups are of the form GL(m) × G_n, where G_n is a classical group of rank n and of the same type as G_l, with l = m + n. F. Shahidi develops the theory of the local coefficient for irreducible generic representations of M(F). The local coefficient is defined by means of the uniqueness property of Whittaker models paired with the theory of intertwining operators for representations obtained by parabolic induction from generic representations. The global intertwining operator appearing in the functional equation of Langlands' theory of Eisenstein series can be decomposed as a product of local intertwining operators. When M is a maximal Levi subgroup, local coefficients arise from Fourier coefficients of appropriately chosen Eisenstein series and satisfy a crude functional equation involving a product of partial L-functions. Local factors and functional equation An induction step refines the crude functional equation of a globally generic cuspidal automorphic representation π = ⊗ π_v to individual functional equations of partial L-functions and γ-factors: L_S(s, π, r_i) = ∏_{v ∈ S} γ_i(s, π_v, ψ_v) · L_S(1 − s, π̃, r_i), for 1 ≤ i ≤ m. The details are technical: s is a complex variable, S is a finite set of places (of the underlying global field) with π_v unramified for v outside of S, and r = ⊕ r_i is the adjoint action of M on the complex Lie algebra of a specific subgroup of the Langlands dual group of G. When G is the special linear group SL(2), and M = T is the maximal torus of diagonal matrices, then π is a Größencharakter and the corresponding γ-factors are the local factors of Tate's thesis. The γ-factors are uniquely characterized by their role in the functional equation and a list of local properties, including multiplicativity with respect to parabolic induction. They satisfy a relationship involving Artin L-functions and Artin root numbers when v gives an archimedean local field or when v is non-archimedean and π_v is a constituent of an unramified principal series representation of M(F). Local L-functions and root numbers ε are then defined at every place, including the archimedean ones, by means of the Langlands classification for p-adic groups. The functional equation takes the form L(s, π, r_i) = ε(s, π, r_i) L(1 − s, π̃, r_i), where L(s, π, r_i) and ε(s, π, r_i) are the completed global L-function and root number. Examples of automorphic L-functions L(s, π × τ), the Rankin–Selberg L-function of cuspidal automorphic representations π of GL(m) and τ of GL(n). L(s, τ × π), where τ is a cuspidal automorphic representation of GL(m) and π is a globally generic cuspidal automorphic representation of a classical group G. L(s, τ, r), with τ as before and r a symmetric square, an exterior square, or an Asai representation of the dual group of GL(n). A full list of Langlands–Shahidi L-functions depends on the quasi-split group G and maximal Levi subgroup M. More specifically, the decomposition of the adjoint action r = ⊕ r_i can be classified using Dynkin diagrams.
A first study of automorphic L-functions via the theory of Eisenstein series can be found in Langlands' Euler Products, under the assumption that the automorphic representations are everywhere unramified. What the Langlands–Shahidi method provides is the definition of L-functions and root numbers with no other condition on the representation of M than requiring the existence of a Whittaker model. Analytic properties of L-functions Global L-functions L(s, π, r) are said to be nice if they satisfy: they extend to entire functions of the complex variable s; they are bounded in vertical strips; and they satisfy the functional equation L(s, π, r) = ε(s, π, r) L(1 − s, π̃, r). Langlands–Shahidi L-functions satisfy the functional equation. Progress towards boundedness in vertical strips was made by S. S. Gelbart and F. Shahidi. And, after incorporating twists by highly ramified characters, Langlands–Shahidi L-functions do become entire. Another result is the non-vanishing of L-functions. For Rankin–Selberg products of general linear groups it states that L(1 + it, π × τ) is non-zero for every real number t. Applications to functoriality and to representation theory of p-adic groups Functoriality for the classical groups: A cuspidal globally generic automorphic representation of a classical group admits a Langlands functorial lift to an automorphic representation of GL(N), where N depends on the classical group. Then, the Ramanujan bounds of W. Luo, Z. Rudnick and P. Sarnak for GL(N) over number fields yield non-trivial bounds for the generalized Ramanujan conjecture of the classical groups. Symmetric powers for GL(2): Proofs of functoriality for the symmetric cube and for the symmetric fourth powers of cuspidal automorphic representations of GL(2) were made possible by the Langlands–Shahidi method. Progress towards higher symmetric powers leads to the best possible bounds towards the Ramanujan–Peterson conjecture of automorphic cusp forms of GL(2). Representations of p-adic groups: Applications involving Harish-Chandra μ-functions (from the Plancherel formula) and to complementary series of p-adic reductive groups are possible. For example, GL(n) appears as the Siegel Levi subgroup of a classical group G. If π is a smooth irreducible ramified supercuspidal representation of GL(n, F) over a field F of p-adic numbers, and I(0, π) is irreducible, then: I(s, π) is irreducible and in the complementary series for 0 < s < 1; I(1, π) is reducible and has a unique generic non-supercuspidal discrete series subrepresentation; I(s, π) is irreducible and never in the complementary series for s > 1. Here, I(s, π) is obtained by unitary parabolic induction from π ⊗ |det|^s if G = SO(2n), Sp(2n), or U(n+1, n); and from π ⊗ |det|^{s/2} if G = SO(2n+1) or U(n, n). References Automorphic forms Representation theory
Langlands–Shahidi method
Mathematics
1,392
50,957,870
https://en.wikipedia.org/wiki/SD-WAN
A Software-Defined Wide Area Network (SD-WAN) is a wide area network that uses software-defined networking technology, such as communicating over the Internet using overlay tunnels which are encrypted when destined for internal organization locations. If standard tunnel setup and configuration messages are supported by all of the network hardware vendors, SD-WAN simplifies the management and operation of a WAN by decoupling the networking hardware from its control mechanism. This concept is similar to how software-defined networking implements virtualization technology to improve data center management and operation. In practice, proprietary protocols are used to set up and manage an SD-WAN, meaning there is no decoupling of the hardware and its control mechanism. A key application of SD-WAN is to allow companies to build higher-performance WANs using lower-cost and commercially available Internet access, enabling businesses to partially or wholly replace more expensive private WAN connection technologies such as MPLS. When SD-WAN traffic is carried over the Internet, there are no end-to-end performance guarantees. Carrier MPLS VPN WAN services are not carried as Internet traffic, but rather over carefully controlled carrier capacity, and do come with an end-to-end performance guarantee. History WANs were very important for the development of networking in general and for a long time were one of the most important applications of networks, both for military and enterprise applications. The ability to communicate data over long distances was one of the main driving factors for the development of data communications, as it made it possible to overcome the distance limitations, as well as shortening the time necessary to exchange messages with other parties. Legacy WANs allowed communication over circuits connecting two or more endpoints. Earlier networking supported point-to-point communication over a slow-speed circuit, usually between two fixed locations. As networking progressed, WAN circuits became faster and more flexible. Innovations like circuit and packet switching (in the form of X.25, ATM and later Internet Protocol or Multiprotocol Label Switching) allowed communication to become more dynamic, supporting ever-growing networks. The need for strict control, security and quality of service (QoS) meant that multinational corporations were very conservative in leasing and operating their WANs. National regulations restricted the companies that could provide local service in each country, and complex arrangements were necessary to establish truly global networks. All that changed with the growth of the Internet, which permitted entities around the world to connect to each other. However, over the first years, the uncontrolled nature of the Internet was not considered adequate or safe for private corporate use. Independent of security concerns, connectivity to the Internet became a necessity to the point where every branch required Internet access. At first, due to security concerns, private communications were still done via WAN, and communication with other entities (including customers and partners) moved to the Internet. As the Internet grew in reach and maturity, companies started to evaluate how to leverage it for private corporate communications. During the early 2000s, application delivery over the WAN became an important topic of research and commercial innovation.
Over the next decade, increasing computing power made it possible to create software-based appliances that were able to analyze traffic and make informed decisions without delays, making it possible to create large-scale overlay networks over the public Internet that could replicate all the functionality of legacy WANs at a fraction of the cost. SD-WAN combines several networking aspects to create full-fledged private networks, with the ability to dynamically share network bandwidth across the connection points. Additional enhancements include central controllers, zero-touch provisioning, integrated analytics and on-demand circuit provisioning, with some network intelligence based in the cloud, allowing centralized policy management and security. Networking publications started using the term SD-WAN to describe this new networking trend as early as 2014. With the rapid shift to remote work as a result of lockdowns and stay-at-home orders during the COVID-19 pandemic, SD-WAN grew in popularity as a way of connecting remote workers. Overview WANs allow companies to extend their computer networks over large distances, connecting remote branch offices to data centers and to each other, and delivering applications and services required to perform business functions. Due to the physical constraints imposed by the propagation time over large distances, and the need to integrate multiple service providers to cover global geographies (often crossing nation boundaries), WANs face important operational challenges, including network congestion, packet delay variation, packet loss, and even service outages. Modern applications such as VoIP calling, videoconferencing, streaming media, and virtualized applications and desktops require low latency. Bandwidth requirements are also increasing, especially for applications featuring high-definition video. It can be expensive and difficult to expand WAN capability, with corresponding difficulties related to network management and troubleshooting. SD-WAN products are designed to address these network problems. By enhancing or even replacing traditional branch routers with virtualization appliances that can control application-level policies and offer a network overlay, less expensive consumer-grade Internet links can act more like a dedicated circuit. This simplifies the setup process for branch personnel. SD-WAN products can be physical appliances or software-only. Components The MEF Forum has defined an SD-WAN architecture consisting of an SD-WAN edge, SD-WAN gateway, SD-WAN controller and SD-WAN orchestrator. SD-WAN edge The SD-WAN edge is a physical or virtual network function that is placed at an organization's branch/regional/central office site, data center, and in public or private cloud platforms. The MEF Forum has published the first SD-WAN service standard, MEF 70, which defines the fundamental characteristics of an SD-WAN service plus service requirements and attributes. SD-WAN gateway SD-WAN gateways provide access to the SD-WAN service in order to shorten the distance to cloud-based services or the user, and reduce service interruptions. A distributed network of gateways may be included in an SD-WAN service by the vendor, or set up and maintained by the organization using the service. By sitting outside the headquarters in the cloud, the gateway also reduces headquarters traffic.
SD-WAN orchestrator The SD-WAN orchestrator is a cloud-hosted or on-premises web management tool that allows configuration, provisioning and other functions when operating an SD-WAN. It simplifies application traffic management by allowing central implementation of an organization's business policies. SD-WAN controller The SD-WAN controller functionality, which can be placed in the orchestrator or in an SD-WAN gateway, is used to make forwarding decisions for application flows. Application flows are IP packets that have been classified as belonging to a particular user application or grouping of applications. The grouping of application flows of a common type, e.g., conferencing applications, is referred to as an Application Flow Group in MEF 70. Per MEF 70, the SD-WAN Edge classifies incoming IP packets at the SD-WAN UNI (SD-WAN user network interface), determines, via OSI Layer 2 through Layer 7 classification, which application flow the IP packets belong to, and then applies the policy that either blocks the application flow or allows it to be forwarded, based on the availability of a route to the destination SD-WAN UNI on a remote SD-WAN Edge. This helps ensure that application performance meets service level agreements (SLAs); a simplified sketch of such policy-based selection appears below, following the deployment options. Required characteristics The Gartner research firm has defined an SD-WAN as having four required characteristics: the ability to support multiple connection types, such as MPLS, last-mile fiber-optic networks, or high-speed cellular networks (e.g., 4G LTE and 5G); the ability to perform dynamic path selection, for load-sharing and resiliency purposes; a simple interface that is easy to configure and manage; and the ability to support VPNs and third-party services such as WAN optimization controllers, firewalls and web gateways. Features Features of SD-WANs include resilience, quality of service (QoS), security, and performance, with flexible deployment options; simplified administration and troubleshooting; and online traffic engineering. Resilience A resilient SD-WAN reduces network downtime. To be resilient, the technology must feature real-time detection of outages and automatic switchover (failover) to working links. Quality of service SD-WAN technology supports quality of service through application-level awareness, giving bandwidth priority to the most critical applications. This may include dynamic path selection, sending an application on a faster link, or even splitting an application's traffic between two paths so that it is delivered faster. Security SD-WAN communication is usually secured using IPsec, a staple of WAN security. Application optimization SD-WANs can improve application delivery using caching, storing recently accessed information in memory to speed future access. Self-healing networks SD-WANs can incorporate artificial intelligence for IT operations (AIOps) for continuous troubleshooting and fixes to network issues. Deployment options Most SD-WAN products are available as pre-configured appliances placed at the network edge in data centers, branch offices and other remote locations. There are also virtual appliances that can work on existing network hardware, and the appliance can be deployed as a virtual appliance in the cloud in environments such as Amazon Web Services (AWS), as Unified Communications as a Service (UCaaS), or as Software as a Service (SaaS). 
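As referenced above, here is a hedged Python sketch, in the spirit of MEF 70's model but not prescribed by it, of a controller-style forwarding decision: classify a flow into an application flow group, then select a link whose measured quality satisfies that group's policy. The groups, thresholds, port-based classification shortcut, and link figures are all hypothetical.

```python
from typing import Optional

POLICIES = {            # group: (max latency ms, max loss fraction) - assumed
    "conferencing": (50.0, 0.01),
    "bulk":         (300.0, 0.05),
}

LINKS = {               # link: (measured latency ms, measured loss) - assumed
    "mpls":      (31.0, 0.000),
    "broadband": (22.0, 0.020),
    "lte":       (60.0, 0.005),
}

def classify(dst_port: int) -> str:
    # Real edges classify at OSI Layers 2 through 7; a port lookup stands in here.
    return "conferencing" if dst_port in (5060, 3478) else "bulk"

def select_link(group: str) -> Optional[str]:
    max_lat, max_loss = POLICIES[group]
    ok = [n for n, (lat, loss) in LINKS.items()
          if lat <= max_lat and loss <= max_loss]
    # None means no compliant path; a real policy might then block the
    # flow or forward it best-effort.
    return min(ok, key=lambda n: LINKS[n][0]) if ok else None

group = classify(dst_port=3478)              # e.g. STUN media traffic
print(group, "->", select_link(group))       # conferencing -> mpls
```

Broadband is excluded here despite its lower latency because its loss exceeds the conferencing threshold, which is exactly the kind of trade-off dynamic path selection is meant to make.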
These deployment options allow enterprises to benefit from SD-WAN services as they migrate application delivery from corporate servers to cloud-based services such as Salesforce.com and Google apps. Administration and troubleshooting As with network equipment in general, GUIs may be preferred to command-line interface (CLI) methods of configuration and control. Other beneficial administrative features include automatic path selection, the ability to centrally configure each end appliance by pushing configuration changes out, and even a true software-defined networking approach that lets all appliances and virtual appliances be configured centrally based on application needs rather than underlying hardware. Online traffic engineering With a global view of network status, a controller that manages an SD-WAN can perform careful and adaptive traffic engineering by assigning new transfer requests according to the current usage of resources (links). For example, this can be achieved by performing central calculation of transmission rates at the controller and rate-limiting at the senders (end-points) according to those rates (a worked example appears below). Secure access service edge (SASE) SD-WAN is a core component of secure access service edge (SASE) solutions, which incorporate network and security capabilities to more efficiently and securely connect distributed work environments (branch office, headquarters, home office, remote) to distributed applications located in data centers or cloud infrastructure, or delivered by SaaS services. With SASE, SD-WAN is combined with other network and security technologies, including cloud access security broker (CASB), secure web gateway, data loss prevention (DLP), zero trust network access (ZTNA), firewall, and other capabilities, to connect and protect users and applications. In December 2021, the Gartner research firm estimated that by 2025, 50% of SD-WAN purchases will be part of a single-vendor SASE offering. Complementary technology SD-WAN versus WAN optimization There are some similarities between SD-WAN and WAN optimization, the name given to the collection of techniques used to increase data-transfer efficiency across WANs. The goal of each is to accelerate application delivery between branch offices and data centers, but SD-WAN technology focuses additionally on cost savings and efficiency, specifically by allowing lower-cost network links to perform the work of more expensive leased lines, whereas WAN optimization focuses squarely on improving packet delivery. An SD-WAN that combines virtualization techniques with WAN-optimization traffic control allows network bandwidth to grow or shrink dynamically as needed. SD-WAN technology and WAN optimization can be used separately or together, and some SD-WAN vendors are adding WAN optimization features to their products. WAN edge routers A WAN edge router is a device that routes data packets between different WAN locations, giving an enterprise access to a carrier network. Also called a boundary router, it is unlike a core router, which only sends packets within a single network. SD-WANs can work as an overlay to simplify the management of existing WAN edge routers by lowering dependence on routing protocols. SD-WAN can also potentially be an alternative to WAN edge routers. SD-WAN versus hybrid WAN SD-WANs are similar to hybrid WANs, and sometimes the terms are used interchangeably, but they are not identical. A hybrid WAN consists of different connection types, and may have a software-defined network (SDN) component, but does not have to. 
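As the worked example promised under online traffic engineering, the following Python sketch performs the central rate calculation as max-min fair sharing of one bottleneck link via progressive filling; the controller would push the resulting per-sender rates to the edges as rate limits. The function name and the demand figures are assumptions, and real controllers solve a multi-link version of this problem.

```python
def max_min_fair(capacity: float, demands: dict) -> dict:
    """Progressive filling: satisfy small demands fully, split the rest."""
    alloc = {s: 0.0 for s in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 1e-9:
        share = cap / len(remaining)
        satisfied = {s: d for s, d in remaining.items() if d <= share}
        if not satisfied:
            # No demand fits under an equal share: split capacity evenly.
            for s in remaining:
                alloc[s] += share
            break
        for s, d in satisfied.items():
            alloc[s] += d       # grant the full (small) demand...
            cap -= d            # ...and redistribute leftover capacity
            del remaining[s]
    return alloc

# Three flows compete for a 100 Mbit/s underlay link.
print(max_min_fair(100.0, {"voip": 5.0, "video": 40.0, "backup": 200.0}))
# -> {'voip': 5.0, 'video': 40.0, 'backup': 55.0}
```

Here voip and video receive their full demands because they ask for less than a fair share, while the greedy backup flow is capped at the remainder.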
SD-WAN versus MPLS Cloud-based SD-WAN offers advanced features, such as enhanced security, seamless cloud integration, and support for mobile users, that result naturally from the use of cloud infrastructure. As a result, cloud-based SD-WAN can replace MPLS, enabling organizations to release resources once tied to WAN investments and to create new capabilities. Typical reasons to compare MPLS with SD-WAN include cases where IT teams must retain MPLS because of contract commitments and cases where an enterprise migrates from MPLS to an Internet-based SD-WAN. Testing and validation As there is no standard algorithm for SD-WAN controllers, device manufacturers each use their own proprietary algorithms for transmitting data. These algorithms determine which traffic to direct over which link and when to switch traffic from one link to another. Given the breadth of options available for both software and hardware SD-WAN control solutions, it is imperative that they be tested and validated under realistic conditions in a lab setting prior to deployment. Multiple solutions are available for testing purposes, ranging from purpose-built network emulation appliances, which apply specified impairments to the network under test in order to validate performance reliably, to software-based solutions. Marketplace The IT website Network World divides the SD-WAN vendor market into three groups: established networking vendors who are adding SD-WAN products to their offerings, WAN specialists who are starting to integrate SD-WAN functionality into their products, and startups focused specifically on the SD-WAN market. The global SD-WAN market stood at $3.25 billion in 2021, and the market was expected to grow 30% in 2022. According to a Datavagyanik SD-WAN market report, North America accounted for more than 77% of the market. Alternatively, a market overview by Nemertes Research groups SD-WAN vendors into categories based on their original technology space: "Pure-play SD-WAN providers", "WAN optimization vendors", "Link-aggregation vendors", and "General network vendors". While Network World's second category (startups focused specifically on the SD-WAN market) is generally equivalent to Nemertes' "Pure-play SD-WAN providers" category, Nemertes offers a more detailed view of the preexisting WAN and overall networking providers. Nemertes Research also describes the in-net side of the SD-WAN market, covering the go-to-market strategy of connectivity providers entering the SD-WAN market. These providers include "Network-as-a-service vendors", "Carriers or telcos", "Content delivery networks" and "Secure WAN providers". Open source MEF 70 standardizes SD-WAN service attributes and uses standard IPv4 and IPv6 routing protocols. SD-WAN services also use standard IPsec encryption protocols. Additional standardization for other SD-WAN functions and related security functionality not covered in MEF 70 is under development at the MEF Forum. Several open-source SD-WAN solutions and implementations are available. For example, the Linux Foundation has three projects that intersect with and help the SD-WAN market: ONAP, the OpenDaylight Project, and Tungsten Fabric (formerly Juniper Networks' OpenContrail). References Computing terminology Configuration management Data transmission Network architecture Telecommunications Wide area networks
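To illustrate the software-based validation mentioned under testing and validation, here is a toy Python impairment harness that randomizes delay and drops probes at a configured loss rate, so that a failover threshold can be exercised repeatably before deployment. All names, figures, and the threshold are illustrative assumptions; serious validation would use a purpose-built emulator or a kernel facility such as Linux netem.

```python
import random
from typing import Optional

random.seed(7)  # fixed seed: repeatable test runs

def impaired_rtt(base_ms: float, jitter_ms: float, loss: float) -> Optional[float]:
    """Simulated probe RTT, or None if the probe is 'lost'."""
    if random.random() < loss:
        return None
    return base_ms + random.uniform(0.0, jitter_ms)

def measured_loss(base_ms, jitter_ms, loss, probes=200):
    results = [impaired_rtt(base_ms, jitter_ms, loss) for _ in range(probes)]
    return sum(r is None for r in results) / probes

# Degrade the primary link heavily and check that a 5% failover threshold trips.
FAILOVER_THRESHOLD = 0.05
observed = measured_loss(base_ms=20.0, jitter_ms=5.0, loss=0.20)
print(f"observed loss {observed:.1%}; fail over: {observed > FAILOVER_THRESHOLD}")
```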
SD-WAN
Technology,Engineering
3,298
10,040,741
https://en.wikipedia.org/wiki/Punch%20%28tool%29
A punch is a tool used to indent or create a hole through a hard surface. Punches usually consist of a hard metal rod with a narrow tip at one end and a broad flat "butt" at the other. When used, the narrower end is pointed against a target surface and the broad end is struck with a hammer or mallet, causing the blunt force of the blow to be transmitted through the rod body and focused more sharply onto a small area. Typically, woodworkers use a ball-peen hammer to strike a punch. Use Punches are used to drive fasteners such as nails and dowels, to make holes, or to form an indentation or impression of the tip on a workpiece. Decorative punches may also be used to create a pattern or even form an image. Pin Metal pins and similar connectors are driven in or out of holes using a pin punch. For removal, a starter punch is first used to loosen the pin, and a pin punch then finishes the job. Center A center punch is used to mark a center point, usually the center of a hole to be drilled. A drill has the tendency to "wander" if it does not start in a recess; a center punch forms a dimple large enough to "guide" the tip of the drill. The tip of a center punch has an angle between 60 and 90 degrees. When drilling larger holes, where the drill bit is wider than the indentation produced by a center punch, the drilling of a pilot hole is usually needed. An automatic center punch operates without the need for a hammer. Prick punch A prick punch is similar to a center punch but is used for marking out. It has a sharper angled tip to produce a narrower and deeper indentation, which can then be enlarged with a center punch for drilling. The tip of a prick punch is typically 60 degrees, though the angle depends on the type of prick punch being used. It is also known as a dot punch. Transfer A transfer punch is a punch (usually in an index set) of a specific outer diameter that is non-tapered and extends the entire length of the punch (except for the tip). It is sized to fit snugly within the tolerances of an existing hole and, when struck, precisely transfers the center of that hole to another surface. It can be used, for example, to duplicate the hole pattern in a part, or to set precise locations for threaded holes (created by drilling and tapping) to bolt an object to a surface. Drift A drift "punch" is misleadingly named; it is not used as a punch in the traditional sense of the term. A drift punch, also called a drift pin or lineup punch, is used as an aid in aligning bolt or rivet holes prior to inserting a fastener. A drift punch is constructed as a tapered rod, with the hammer acting on the large end of the taper. The long end of a drift punch is placed into the semi-aligned bolt holes of two separate components and then driven into the hole. As it is driven in, the taper forces the two components into alignment, allowing easy insertion of the fastener. Unlike most punches, force is never (and should never be) applied to the tip, or end, of a drift pin. Roll pins Roll pin punches are used to drive roll pins. Standard pin punches should never be used on a roll pin. Because of the hollow, thin-walled construction of a roll pin, a standard pin punch will often collapse, mar or distort the end of the pin, or be driven into, and jammed inside, the hollow core of the roll pin. When choosing a roll pin punch, select one that is no larger than the compressed diameter of the pin. If a punch larger than the pin is used, the surrounding metal in which the pin is seated can be damaged. 
Also, a roll pin punch smaller than the compressed diameter of the pin should not be used; if it is, the punch may be driven through the hollow center of the roll pin. Roll pin punches are designed with a small projection in the center of the tip to support the circumference of the roll pin. The tips of roll pin punches are not flat and should never be used on regular solid pins; used on a solid pin, a roll pin punch will mar or mark the pin. If the end of a roll pin punch is damaged or deformed, it should be discarded, as it is virtually impossible to regrind the tip of a roll pin punch and properly shape the center projection. When using a roll pin punch, make sure the axis of the punch's shank is in line with the axis of the roll pin; do not cant the roll pin punch off to one side. When striking the roll pin punch, hit it directly on the top of its head; striking the head at an angle may bend the shank. Letter Also known as letter stamps or number stamps, letter punches are used to emboss the impression of a letter or number into a workpiece. They are most commonly made in reverse image, so that the stamped result is immediately readable, although they may also be made as a positive image. A positive image is essential in the case of die or mold making, as it ensures that the finished product will be readable, a die being a negative image. Hallmark Specially made stamps used to strike hallmarks for metal, maker, manufacturing date (also known as a date letter), city (or county), fineness, or assay office, to certify the content of noble metals such as platinum, gold, and silver. Tablet press These punches are part of a tablet press. Unlike most punches, tablet-press punches have a concave end in the shape of the desired tablet. A lower punch and an upper punch compress the powder between them. See also Cookie cutter Nailset Pointillé Punchcutting Punching Punching machine References Hand tools Metalworking hand tools Woodworking hand tools
Punch (tool)
Engineering
1,246
25,940,434
https://en.wikipedia.org/wiki/Epiblem
In botany, the epiblem is a tissue that replaces the epidermis in most roots and in the stems of submerged aquatic plants. It forms the outermost layer of the root, lying directly outside the cortex in the position normally occupied by the epidermis. References Plant roots Botany
Epiblem
Biology
52
1,655,002
https://en.wikipedia.org/wiki/Think%20Secret
Think Secret, founded in 1998, was a website that specialized in publishing reports and rumors about Apple Inc. The name of the site was a play on Apple's one-time advertising slogan, "Think Different". Think Secret's archives reached as far back as May 3, 1999. On December 20, 2007, it was announced that the site would eventually shut down as part of a legal settlement. The site officially shut down on February 14, 2008, and now shows the statement "The publication Think Secret is no longer in operation." to anyone who tries to access it. Predicted Mac Mini release In December 2004, Think Secret published rumors of a new Mac and a new piece of word-processing software. Apple files suit Apple subsequently sued Think Secret editor "Nick dePlume", claiming that the site's reports violated trade secret law. The rumors were confirmed on January 11, 2005, at Macworld in San Francisco, when Apple's CEO Steve Jobs introduced the Mac mini and the iWork productivity suite. Editor identified Prior to Think Secret's legal troubles, the identity of its editor was unknown outside the Mac journalism community, as he had always written under the pen name "Nick dePlume". It was later discovered by bloggers at Black Vortex that he was Nicholas Ciarelli. Suit settled, site to discontinue publishing The lawsuit with Apple was settled on December 20, 2007. The site said that, "as part of the confidential settlement, no sources were revealed and Think Secret will no longer be published." Think Secret was officially shut down on February 14, 2008, giving a Forbidden error upon entry to the site. See also Apple community References Internet properties established in 1998 Internet properties disestablished in 2007 Macintosh websites
Think Secret
Technology
354
58,022,154
https://en.wikipedia.org/wiki/Vildagliptin/metformin
Vildagliptin/metformin, sold under the brand name Eucreas among others, is a fixed-dose combination anti-diabetic medication for the treatment of type 2 diabetes. It was approved for use in the European Union in November 2007, and the approval was updated in 2008. It combines 50 mg vildagliptin with either 500, 850, or 1000 mg metformin. The most common side effects include nausea (feeling sick), vomiting, diarrhea, abdominal (tummy) pain, and loss of appetite. Medical uses Vildagliptin/metformin is indicated in the treatment of type 2 diabetes mellitus: it is indicated in the treatment of adults who are unable to achieve sufficient glycaemic control at their maximally tolerated dose of oral metformin alone or who are already treated with the combination of vildagliptin and metformin as separate tablets. it is indicated in combination with a sulphonylurea (i.e. triple combination therapy) as an adjunct to diet and exercise in patients inadequately controlled with metformin and a sulphonylurea. it is indicated in triple combination therapy with insulin as an adjunct to diet and exercise to improve glycaemic control in patients when insulin at a stable dose and metformin alone do not provide adequate glycaemic control. References External links Adamantanes Biguanides Carboxamides Combination diabetes drugs Dipeptidyl peptidase-4 inhibitors Drugs with unknown mechanisms of action Guanidines Nitriles Drugs developed by Novartis Pyrrolidines Tertiary alcohols
Vildagliptin/metformin
Chemistry
341
796,063
https://en.wikipedia.org/wiki/Sky%20City%201000
Sky City 1000 is a proposed skyscraper for the Tokyo metropolitan area. It was announced in 1989, at the height of the Japanese asset price bubble. The proposal consists of a building 1,000 metres tall and 400 metres wide at the base, with a total floor area of 8 square kilometres. The design, proposed in 1989 by the Takenaka Corporation, would have housed between 35,000 and 36,000 full-time residents, as well as 100,000 workers. It comprised 14 concave dish-shaped "Space Plateaus" stacked one upon the other. The interior of the plateaus would have contained greenspace, and the edges of the building would have contained apartments. The building would also have housed offices, commercial facilities, schools, theatres, and other modern amenities. Sky City was featured on Discovery Channel's Extreme Engineering in 2003. Land prices in Japan were the highest in the world at the time, but Kisho Kurokawa, one of Japan's most famous architects, said that even staggeringly ambitious buildings employing highly sophisticated engineering remain comparatively cheap, because companies pay 90 percent of the cost for the land and only 10 percent for the building. Tokyo's only fire helicopter was even used in simulation tests to gauge the danger if a fire were to break out in the building. To mitigate this risk, triple-decker high-speed elevators were proposed and prototyped in labs outside Tokyo. Although Sky City gained more serious attention than many of its alternatives, it was never carried out, like projects such as X-Seed 4000 and ultra-high-density, mixed-use concepts such as Paolo Soleri's Arcology and Le Corbusier's Ville Radieuse. If completed, Sky City 1000 would have been the tallest man-made structure in the world, surpassing the Burj Khalifa. See also Bionic Tower Madinat al-Hareer Sky Mile Tower Shimizu Mega-City Pyramid References Proposed skyscrapers in Japan Unbuilt buildings and structures in Japan Skyscrapers in Tokyo Unbuilt skyscrapers Proposed arcologies
Sky City 1000
Technology
413
10,239,308
https://en.wikipedia.org/wiki/Nightcap%20%28garment%29
A nightcap is a cloth cap worn with other nightwear such as pajamas, a onesie, a nightshirt, or a nightgown; it was historically worn in the cold climates of Northern Europe. Nightcaps are somewhat similar to knit caps worn for warmth outdoors. Design Women's nightcaps were usually a long piece of cloth wrapped around the head, or a triangular cloth tied under the chin. Men's nightcaps were traditionally pointed hats with a long top, sometimes with a pom-pom on the end. The long end could be used like a scarf to keep the back of the neck warm. History From the Middle Ages to the 20th century, nightcaps were worn in Northern Europe, including the British Isles and Scandinavia, especially during the cold winters before central heating became available. People commonly believed that cold air was harmful and unwholesome, so a nightcap offered protection, especially for those with a receding hairline or a sensitive head. In the Tyburn and Newgate days of British judicial hanging history, the hood used to cover the prisoner's face was a nightcap supplied by the prisoner, if he could afford it. Nightcaps were worn by many women in the Victorian era but were seen as old-fashioned by the Edwardian era. Some women still wore nightcaps, similar to mobcaps, to protect the elaborate curled hairstyles that were then fashionable. Edwardian men wore nightcaps as well. In the 1920s and 1930s, the boudoir cap became popular among some European women. Fiction Nightcaps are less commonly worn in modern times but are often featured in animation and other media as part of a character's nightwear. Nightcaps became associated with the fictional sleepers Ebenezer Scrooge and Wee Willie Winkie. The hat has become typical nightwear for a sleeper, especially in comical drawings or cartoons, as well as in children's stories, plays, and films; for example, in several Lupin III animations, Daisuke Jigen has worn one as a continuation of the "hat covering eyes" gag, and in The Science of Discworld, Rincewind has one with the word "Wizzard" stitched onto it. Related caps People with curly and Afro-textured hair often wear a form of nightcap to protect their hair while sleeping, typically a silk or satin wrap or bonnet. See also Smoking cap List of headgear References Caps Pointed hats Nightwear
Nightcap (garment)
Biology
511
18,753,256
https://en.wikipedia.org/wiki/Euler%27s%20theorem%20%28differential%20geometry%29
In the mathematical field of differential geometry, Euler's theorem is a result on the curvature of curves on a surface. The theorem establishes the existence of principal curvatures and associated principal directions, which give the directions in which the surface curves the most and the least. The theorem is named for Leonhard Euler, who proved it in 1760. More precisely, let M be a surface in three-dimensional Euclidean space, and p a point on M. A normal plane through p is a plane passing through the point p and containing the normal vector to M at p. Through each (unit) tangent vector to M at p there passes a normal plane PX, which cuts out a curve in M. That curve has a certain curvature κX when regarded as a curve inside PX. Provided not all κX are equal, there is some unit vector X1 for which k1 = κX1 is as large as possible, and another unit vector X2 for which k2 = κX2 is as small as possible. Euler's theorem asserts that X1 and X2 are perpendicular and that, moreover, if X is any unit vector making an angle θ with X1, then κX = k1 cos²θ + k2 sin²θ. The quantities k1 and k2 are called the principal curvatures, and X1 and X2 are the corresponding principal directions. This equation is sometimes called Euler's equation. See also Differential geometry of surfaces Dupin indicatrix References Full 1909 text (now out of copyright). Differential geometry of surfaces Theorems in differential geometry Leonhard Euler
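A compact way to see why Euler's formula holds is to diagonalize the shape operator; the following LaTeX sketch records the standard derivation, using the article's notation together with the standard facts that the shape operator S is symmetric and that its unit eigenvectors are the principal directions.

```latex
% The shape operator S has an orthonormal eigenbasis X_1, X_2 with
% eigenvalues k_1, k_2 (the principal curvatures). Write a unit tangent
% vector as X = \cos\theta\, X_1 + \sin\theta\, X_2. Then
\[
  \kappa_X = \langle S(X), X \rangle
           = \cos^2\theta\,\langle S(X_1), X_1\rangle
             + \sin^2\theta\,\langle S(X_2), X_2\rangle
           = k_1\cos^2\theta + k_2\sin^2\theta ,
\]
% where the cross terms vanish because
% \langle S(X_1), X_2\rangle = k_1\langle X_1, X_2\rangle = 0.
```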
Euler's theorem (differential geometry)
Mathematics
311