14,598,730
https://en.wikipedia.org/wiki/Toeplitz%20operator
In operator theory, a Toeplitz operator is the compression of a multiplication operator on the circle to the Hardy space. Details Let S^1 be the unit circle in the complex plane, with the standard Lebesgue measure, and L^2(S^1) be the Hilbert space of complex-valued square-integrable functions. A bounded measurable complex-valued function g on S^1 defines a multiplication operator M_g on L^2(S^1). Let P be the projection from L^2(S^1) onto the Hardy space H^2. The Toeplitz operator with symbol g is defined by T_g = P M_g |_{H^2}, where "|" means restriction. A bounded operator on H^2 is Toeplitz if and only if its matrix representation, in the basis {z^n, n ≥ 0}, has constant diagonals. Theorems Theorem: If g is continuous, then T_g is Fredholm if and only if 0 is not in the set g(S^1). If it is Fredholm, its index is minus the winding number of the curve traced out by g with respect to the origin. For a proof, see the references. The theorem is attributed to Mark Krein, Harold Widom, and Allen Devinatz, and can be thought of as an important special case of the Atiyah-Singer index theorem. Axler-Chang-Sarason Theorem: The operator T_f T_g − T_{fg} is compact if and only if H^∞[f̄] ∩ H^∞[g] ⊆ H^∞ + C(S^1). Here, H^∞ denotes the closed subalgebra of L^∞(S^1) of analytic functions (functions with vanishing negative Fourier coefficients), H^∞[f] is the closed subalgebra of L^∞(S^1) generated by f and H^∞, and C(S^1) is the space (as an algebraic set) of continuous functions on the circle. See the references. See also References Reprinted by Dover Publications, 1997. Operator theory Hardy spaces Linear operators
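The index theorem above lends itself to a quick numerical illustration. The following sketch is not part of the article; the symbol g(z) = z − a, the truncation size, and the helper names are choices made for this example. It builds the constant-diagonal matrix of a Toeplitz operator from the Fourier coefficients of its symbol and estimates the winding number of the symbol around the origin, whose negative gives the Fredholm index.

```python
# Hedged numerical sketch (not from the article): the constant-diagonal matrix
# of a Toeplitz operator and the index/winding-number relation for g(z) = z - a.
import numpy as np

def toeplitz_matrix(fourier_coeffs, n):
    """Truncated matrix T[j, k] = c[j - k] of the Toeplitz operator whose
    symbol has Fourier coefficients c[m] (dict: m -> coefficient)."""
    T = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            T[j, k] = fourier_coeffs.get(j - k, 0.0)
    return T

def winding_number(symbol, samples=4096):
    """Winding number of t -> symbol(exp(it)) around the origin, estimated
    from the accumulated change of the argument over one full turn."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    values = symbol(np.exp(1j * t))
    steps = np.diff(np.angle(values), append=np.angle(values[0]))
    steps = (steps + np.pi) % (2 * np.pi) - np.pi   # keep each step in (-pi, pi]
    return int(round(steps.sum() / (2 * np.pi)))

a = 0.5                                   # |a| < 1, so the symbol winds around 0
g = lambda z: z - a                       # symbol g(z) = z - a
coeffs = {1: 1.0, 0: -a}                  # its nonzero Fourier coefficients
print(toeplitz_matrix(coeffs, n=6).real)  # constant along each diagonal
print("winding number:", winding_number(g))  # 1, so the Fredholm index is -1
```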
Toeplitz operator
[ "Mathematics" ]
312
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Linear operators", "Mathematical relations" ]
14,598,828
https://en.wikipedia.org/wiki/Perceptual%20robotics
Perceptual robotics is an interdisciplinary science linking robotics and neuroscience. It investigates biologically motivated robot control strategies, concentrating on perceptual rather than cognitive processes, thereby siding with J. J. Gibson's view against the poverty of the stimulus theory. As a working definition, the following quote from Chapter 64 by H. Bülthoff, C. Wallraven and M. Giese in The Springer Handbook of Robotics, edited by Bruno Siciliano and Oussama Khatib, published by Springer in 2007, could be used: "In the following we will apply the term Perceptual Robotics to signify the design of robots based on principles that are derived from human perception on all three levels in the sense of Marr. This includes a realization in terms of specific neural circuits as well as the transfer of more abstract biologically-inspired strategies for the solution of relevant computational problems." See also David Marr (neuroscientist) (including a short description of the three levels of perception) PERCRO Perceptual Robotics Laboratory, Scuola Superiore Sant'Anna, Pisa, Italy Robotics
Perceptual robotics
[ "Engineering" ]
232
[ "Robotics", "Automation" ]
14,598,833
https://en.wikipedia.org/wiki/Los%20Angeles%20Airways%20Flight%20417
Los Angeles Airways Flight 417 was a Sikorsky S-61 helicopter that crashed on August 14, 1968 in the city of Compton, California. All eighteen passengers and three crew members were killed. The aircraft was destroyed by impact and fire. According to the National Transportation Safety Board the probable cause of the accident was fatigue failure. The accident happened when the (arbitrarily designated) yellow blade, one of five main rotor blades, separated from the spindle that attached the blade to the rotor head. Following failure, the helicopter was uncontrollable and it fell to the ground. The fatigue crack originated in an area of substandard hardness and inadequate shot peening. History Los Angeles Airways (LAA) Flight 417, piloted by Captain Kenneth Lee Waggoner, former USMC helicopter pilot, was a regularly scheduled passenger flight from Los Angeles International Airport to the Disneyland Heliport in Anaheim, California. The aircraft and crew had completed three round trips to various destinations in the Greater Los Angeles Metropolitan Area beginning at 0607 PDT and departed the ramp at Los Angeles for Flight 417 at 1026. The flight, operating under Visual Flight Rules was cleared by Air traffic control to take off and proceed eastbound at 10:28:15. At 10:29:30 the flight reported to Hawthorne Tower that it was departing Los Angeles eastbound along Imperial Highway at . At 10:32:55 Air Traffic Control advised, "L.A. four seventeen, seven miles east, radar service terminated". The flight acknowledged, "Four seventeen thank you". This was the last known radio contact with the flight. Statements were obtained from 91 witnesses. A consensus of their observations indicates that the helicopter was proceeding along a normal flightpath when a loud noise or unusual sound was heard. A main rotor blade was either observed to separate or was seen separated in the vicinity of the main rotor disc. As the helicopter fell in variously described gyrations, the tail cone either folded or separated. In order to establish an approximate altitude for the flight, several comparative flights were conducted in a similar helicopter. Most witnesses indicated the flights at to appeared to be most accurate. Wreckage The aircraft crashed in Lueders Park in Compton, a recreational park located in a residential area bordering Rosecrans Avenue. The entire fuselage, both engines, main rotor head assembly, four main rotor blades, and the pylon assembly were located in the main impact area. The fifth main rotor blade (yellow) including the sleeve and part of spindle, was located approximately north-west of the main wreckage site. Minor parts associated with this rotor blade were scattered over a three-block area northwest of the park. Examination of the yellow blade spindle (S/N AJ19) revealed a fatigue fracture in the shank of the spindle adjacent to the shoulder in the inboard end of the shank. Aircraft a Sikorsky S-61L helicopter, serial number 61031 was the prototype for the S-61L, and had accumulated 11,863.64 total flying hours prior to the day of the accident. It is estimated that approximately 3.17 hours were flown on August 14, 1968. The aircraft was equipped with two General Electric CT58-140-1 turboshaft engines. The aircraft was serviced with of JP-4 fuel and had a takeoff gross weight of , which was below the maximum allowable takeoff weight of . The computed centre of gravity at the time of the accident was from datum, which is forward of the main rotor hub centerline. 
The allowable limits are from for a gross weight of . The estimated gross weight at the time of the accident was . Findings In the course of the investigation by the National Transportation Safety Board (NTSB) they made the following findings: The aircraft gross weight and center of gravity were within limits. The crewmembers were qualified for the flight. The yellow main rotor blade separated in flight rendering the aircraft uncontrollable. Blade separation was due to fatigue failure of the spindle. The fatigue crack was a high-cycle, low-stress type which propagated over a long period of time. The crack initiated because of a combination of the following factors: Metal hardness below specifications associated with a banded microstructure. Improper shot peening of the base metal surface. Possible detrimental effect of residual tensile stress from the plating. Pitting that may have been present in the base metal surface. It is believed that the crack was present at the last Magnaglo inspection of the part, and it is not known why it was not detected. NTSB recommendation and FAA reaction Following the initial evidence of a metal fatigue type failure, the National Transportation Safety Board recommended on August 16, 1968 to the Federal Aviation Administration: On the same date the FAA issued Emergency Airworthiness Directive 68-19-07. The directive has since been amended twice and now requires the following action: See also List of accidents and incidents involving commercial aircraft Los Angeles Airways Flight 841 References External links Amendment 39-2450; Airworthiness Directive 68-19-07 Air Times: Collector's Guide to Airline Timetables. Shows promotional material listing Disneyland as a destination. NTSB Accident Brief on Flight 417 Los Angeles Airways Helicopter at Disneyland Airliner accidents and incidents in California Airliner accidents and incidents caused by mechanical failure Los Angeles Airways accidents and incidents 1968 in Los Angeles Disasters in Los Angeles Compton, California Aviation accidents and incidents in the United States in 1968 August 1968 events in the United States Accidents and incidents involving the Sikorsky S-61
Los Angeles Airways Flight 417
[ "Materials_science" ]
1,128
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
14,598,839
https://en.wikipedia.org/wiki/Arsenic%20pentachloride
Arsenic pentachloride is a chemical compound of arsenic and chlorine. This compound was first prepared in 1976 through the UV irradiation of arsenic trichloride, AsCl3, in liquid chlorine at −105 °C. AsCl5 decomposes at around −50 °C. The structure of the solid was finally determined in 2001. AsCl5 is similar to phosphorus pentachloride, PCl5 in having a trigonal bipyramidal structure where the equatorial bonds are shorter than the axial bonds (As-Cleq = 210.6 pm, 211.9 pm; As-Clax= 220.7 pm). The pentachlorides of the elements above and below arsenic in group 15, phosphorus pentachloride and antimony pentachloride are much more stable and the instability of AsCl5 appears anomalous. The cause is believed to be due to incomplete shielding of the nucleus in the 4p elements following the first transition series (i.e. gallium, germanium, arsenic, selenium, bromine, and krypton) which leads to stabilisation of their 4s electrons making them less available for bonding. This effect has been termed the d-block contraction and is similar to the f-block contraction normally termed the lanthanide contraction. References Arsenic(V) compounds Chlorides Arsenic halides Substances discovered in the 1970s
Arsenic pentachloride
[ "Chemistry" ]
297
[ "Chlorides", "Inorganic compounds", "Salts" ]
14,598,881
https://en.wikipedia.org/wiki/Floor%20sanding
Floor sanding is the process of removing the top surface of a wooden floor by sanding with abrasive materials. A variety of floor materials can be sanded, including timber, cork, particleboard, and sometimes parquet. Some floors are laid and designed for sanding. Many old floors are sanded after the previous coverings are removed and suitable wood is found hidden beneath. Floor sanding usually involves three stages: preparation, sanding, and coating with a protective sealant. Drum Sander Machines All modern sanding projects are completed with specialized sanding machines. Drum sander machines come in two versions: 110v and 220v floor sanders. 220v drum sanders are more powerful and remove more wood material than the 110v machines. Most homeowners who want to refinish their floors themselves use the 110v version, as it is more readily available at tool rental stores. Belt sanders are preferred for their continuous sandpaper belt design, which prevents sanding machine marks in floors. Feathering is an industry term for handling the machine in such a way as to avoid deep scratch marks at the start and finish of a pass. The belt sander was invented in 1969 by Eugen Laegler of Güglingen, Germany. 90% of the area can be reached with the belt/drum sander. The remaining 10%, such as edges, corners, areas under cabinets, and stairs, is sanded by an edge sanding machine. A rotary machine known as a multi-disc sander or buffer is then used for the final sanding steps. The buffers take abrasive discs, which rotate in the same plane as the floor itself. The stripping power depends on the weight of the machine, so buffers are useful for surface treatments like buffing, light sanding, or stripping old sealants. In belt sanders the abrasive material is fitted and secured tightly between a drum and a tension device. The belt moves vertically, along the grain of the floor surface, which ensures powerful stripping, a good finish, and a long-lasting abrasive. In drum sanders it is fitted just around the drum itself, which is less secure and carries a risk of leaving marks on a newly sanded surface. A buffing machine is also used in the final stages of wood floor refinishing. This is a rotary machine with attached fine abrasives which helps remove the differences between the vertical and horizontal rotations of the sanding drums and the disks of the edging machines. These fine abrasives also help to smooth the final finish by removing minor imperfections on the surface prior to and between re-coatings. Process Preparation is the first stage of the wood floor sanding process. All nails which protrude above the boards are punched down. Nails can severely damage the sanding machines being used. Staples or tacks used to fasten previous coverings (if any) are removed to reduce the possibility of damage. Some brands or types of adhesives which have been used to secure coverings may need to be removed. Some adhesives, oils, and varnishes will clog sandpaper and can even make sanding impossible. After the floor is prepared, the sanding begins. The first cut is done with coarse-grit sandpaper to remove old coatings and to make the floor flat. The best method when using a drum sander is to start out with a lower grit belt sandpaper. For oak, maple, and ash hardwoods, it is recommended to start with 40 grit and then, with each subsequent sanding pass, go up in sandpaper grit, e.g. 60, then 80, and finish with 100 grit. When wood floor planks are warped, cupped, or significantly uneven, multiple passes may be required. 
The differences in height between the boards are flattened uniformly. The large sanders are used across the grain of the timber. The most common paper used for the first cut is 40 grit. The areas which cannot be reached by the large sanders are sanded by an edger, at the same grit paper as the rest of the floor. If filling of holes or boards is desired this is the stage where this is usually done. 80 grit papers are usually used for the second cut. The belt sander is used inline with the grain of the timber in this cut. A finishing machine is then used to create the final finish. The grit paper used is of personal preference, however 100-150 grit papers are usually used. The sanded floor is coated with polyurethane, oils, or other sealants. If it is an oil-based sealant, then it is highly poisonous, having a high volatile organic compound content, so wearing a suitable respirator mask is recommended. Issues Sanding removes all patina, and can change the character of old floors. The result does not always suit the character of the building. Sanding old boards sometimes exposes worm eaten cores, effectively ruining the floor's appearance. This can reduce the sale price, or even cause the floor to require replacement. Sanding removes material, and timber floors have a limit to how much they can be sanded. Improper sanding, often caused by using an inferior sanding machine, can lead to 'chatter marks'. These occur when the sander has not been correctly positioned over the area to be sanded, the edge of the sander catches and creates a rippling effect over the wood or parquet floor. Often these marks can only be discerned after the stain or sealant has been applied. References Floors Wood products Woodworking
Floor sanding
[ "Engineering" ]
1,132
[ "Structural engineering", "Floors" ]
14,599,476
https://en.wikipedia.org/wiki/Casey%27s%20theorem
In mathematics, Casey's theorem, also known as the generalized Ptolemy's theorem, is a theorem in Euclidean geometry named after the Irish mathematician John Casey. Formulation of the theorem Let be a circle of radius . Let be (in that order) four non-intersecting circles that lie inside and tangent to it. Denote by the length of the exterior common bitangent of the circles . Then: Note that in the degenerate case, where all four circles reduce to points, this is exactly Ptolemy's theorem. Proof The following proof is attributable to Zacharias. Denote the radius of circle by and its tangency point with the circle by . We will use the notation for the centers of the circles. Note that from Pythagorean theorem, We will try to express this length in terms of the points . By the law of cosines in triangle , Since the circles tangent to each other: Let be a point on the circle . According to the law of sines in triangle : Therefore, and substituting these in the formula above: And finally, the length we seek is We can now evaluate the left hand side, with the help of the original Ptolemy's theorem applied to the inscribed quadrilateral : Further generalizations It can be seen that the four circles need not lie inside the big circle. In fact, they may be tangent to it from the outside as well. In that case, the following change should be made: If are both tangent from the same side of (both in or both out), is the length of the exterior common tangent. If are tangent from different sides of (one in and one out), is the length of the interior common tangent. The converse of Casey's theorem is also true. That is, if equality holds, the circles are tangent to a common circle. Applications Casey's theorem and its converse can be used to prove a variety of statements in Euclidean geometry. For example, the shortest known proof of Feuerbach's theorem uses the converse theorem. References External links Shailesh Shirali: "'On a generalized Ptolemy Theorem'". In: Crux Mathematicorum, Vol. 22, No. 2, pp. 49-53 Theorems about circles Euclidean geometry Articles containing proofs
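The displayed relation in the statement above did not survive extraction. Writing t_ij for the length of the exterior common bitangent of circles C_i and C_j, the usual formulation of Casey's theorem is the following; this is a reconstruction of the standard statement, not a verbatim restoration of the article's equation.

```latex
% Casey's theorem, standard formulation: C_1, C_2, C_3, C_4 lie inside and are
% tangent to a circle, in that cyclic order; t_{ij} is the length of the
% exterior common bitangent of C_i and C_j.
t_{12}\,t_{34} + t_{14}\,t_{23} = t_{13}\,t_{24}
% When each C_i degenerates to a point, t_{ij} becomes the distance between the
% points and the identity reduces to Ptolemy's theorem for a cyclic quadrilateral.
```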
Casey's theorem
[ "Mathematics" ]
469
[ "Articles containing proofs" ]
14,599,551
https://en.wikipedia.org/wiki/ISBEM
iSBEM is a free-of-charge proprietary software interface to the Simplified Building Energy Model (SBEM), which is designed to indicate compliance with UK building regulations Parts L2a and L2b in England and Section 6 in Scotland as regards carbon emissions from non-domestic buildings. The latest version at the time of writing is v5.6.a, which is available as a download from the EPBD NCM website. It has recently undergone additional programming to allow it to be used for producing EPCs for non-domestic buildings. Several commercial software tools exist, offering a more 'user-friendly' front-end to the SBEM calculation engine than iSBEM. The SBEM calculation engine has some limitations and is explicitly distributed as 'not a design tool'. For more complex buildings, accredited Dynamic Simulation Modelling tools should be used for Part L compliance and EPCs. References Building engineering
ISBEM
[ "Engineering" ]
184
[ "Building engineering", "Civil engineering", "Architecture" ]
14,599,790
https://en.wikipedia.org/wiki/Quantaloid
In mathematics, a quantaloid is a category enriched over the category Sup of complete lattices with supremum-preserving maps. In other words, for any objects a and b the Hom object between them is not just a set but a complete lattice, in such a way that composition of morphisms preserves all joins: The endomorphism lattice of any object in a quantaloid is a quantale, whence the name. References Category theory
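The displayed condition that composition preserves joins was lost in extraction. A standard way to write the requirement, given here as a reconstruction rather than the article's original display, is:

```latex
% In a quantaloid Q, every hom-object Q(a, b) is a complete lattice and
% composition preserves arbitrary suprema (joins) in each argument:
f \circ \Big(\bigvee_{i} g_i\Big) = \bigvee_{i} (f \circ g_i),
\qquad
\Big(\bigvee_{i} f_i\Big) \circ g = \bigvee_{i} (f_i \circ g).
```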
Quantaloid
[ "Mathematics" ]
97
[ "Functions and mappings", "Mathematical structures", "Category theory stubs", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
14,600,693
https://en.wikipedia.org/wiki/S3M
S3M (Scream Tracker 3 Module) is a module file format, the successor to the STM format used by the original Scream Tracker. Both formats are based on the original MOD format used on the Amiga computer. The S3M format has many differences compared to its predecessors. The format is a hybrid of digital playback and synthesized instruments. The official format specification document covers space for 16 digital channels and 14 synthesized ones with two unused slots out of a total of 32. Separate volume channel in pattern data. Supports more instruments than MOD or STM (99 instead of 31). Default panning of channels can be specified by the composer. Extra-fine pitch slides are added Instruments are not limited to a fixed sample rate for a given note. The format stores the instrument's sample rate at middle C. The period table used by S3M is smaller than the one used by the MOD format (only 12 entries, compared to between 36 and 60 for the MOD variations) and uses larger values in order to be able to compute the extra-fine pitch slides. The playback routines, however, use relatively straightforward formulas to get the final period values used in playback. The key formula for this takes into account the instrument's stored sample rate at middle C. One feature of the S3M format which is seldom used, is the format's support for FM instruments. These were designed to be played back on sound cards that included an OPL2 or compatible FM synthesis chip. More recently, with the necessary CPU power available, it is possible to perform the same synthesis in software. Two examples of such software are the Adplug plugin for the Windows audio player Winamp and the open-source audio module tracker OpenMPT as of version 1.28.01.00. Media player support S3M files released on the Demoscene's music scene in the 1990s were commonly played on PCs using dedicated mod/s3m players (such as DMP) or using the tracker software (like Scream Tracker). Some more-common/contemporary music players can play these files, although fidelity to original sound and results can vary according to the individual file. Software includes: foobar2000 VLC media player XMPlay AIMP Winamp JetAudio See also Module file References Audio compression Module file formats Digital audio
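To make the header layout discussed above concrete, here is a minimal reading sketch. The offsets follow the commonly circulated Scream Tracker 3 format documentation (a 28-byte song name, little-endian order/instrument/pattern counts at 0x20-0x25, and an "SCRM" signature at 0x2C); they are quoted from memory rather than from this article, so check them against the official specification before relying on them.

```python
# Hedged sketch of reading a few S3M header fields; offsets are assumptions
# based on the commonly circulated format documentation, not on this article.
import struct

def read_s3m_header(path):
    with open(path, "rb") as fh:
        data = fh.read(0x60)                     # enough for the fixed header
    if data[0x2C:0x30] != b"SCRM":
        raise ValueError("not an S3M module (missing SCRM signature)")
    title = data[0x00:0x1C].rstrip(b"\x00").decode("ascii", errors="replace")
    orders, instruments, patterns = struct.unpack_from("<3H", data, 0x20)
    return {
        "title": title,
        "orders": orders,
        "instruments": instruments,   # up to 99, sampled or AdLib/FM
        "patterns": patterns,
    }

# Example call (hypothetical file name):
# print(read_s3m_header("example.s3m"))
```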
S3M
[ "Engineering" ]
472
[ "Audio engineering", "Audio compression" ]
14,600,698
https://en.wikipedia.org/wiki/Dimitris%20Potiropoulos
Dimitris Potiropoulos is a Greek architect, Chairman & Founding Partner of the architectural practice Potiropoulos+Partners. He was born in Athens, Greece to his parents Rigas Potiropoulos and Aliki Potiropoulou, (the family Palaska). He studied Architecture at Technische Hochschule Darmstadt in Germany. During his studies he served as a faculty member at the Chair of Free Hand Drawing, and he was awarded with a special commendation for his project "Residential Proposal in the Historical Centre of Reutlingen". He specialized on Architectural Composition – Special issues of Building Design. Dimitris Potiropoulos worked initially as an architectural assistant and later collaborated as a fully qualified architect with various architectural practices, among which the practice of Konstantinos Kapsabelis, the practice of Prof. Helmut Striffler, and the practice A.N. Tombazis and Associates Architects. At the same time, he started his own practice and took part in architectural competitions. In 1989, along with his spouse Liana Nella-Potiropoulou, he established the firm "Potiropoulos + Partners" one of Greece's most accomplished architectural practices with international recognition. In 2019 their son Rigas Potiropoulos joined as a partner. The firm maintains offices in Athens GR & London UK. Awards & recognition received by the firm since its inception include the 1st prize for the Natural History Museum on Samos, Greece, the 3rd Prize for the Complex of the "Technical Chamber of Greece" in Maroussi, Attica, Greece, the 2nd prize for the New Acropolis Museum, Athens, Greece in collaboration with Studio Daniel Libeskind, the 1st prize for the Restoration of the Listed Building Complex of the Silk-mill "Ekmetzoglou" in Volos, Greece, a special commendation for the entry for the Grand Egyptian Museum in Cairo, Egypt, etc. The project "Kindergarten of German School of Athens" in Maroussi, Athens, Greece was nominated for the Mies van der Rohe Awards 2015. The firm has also gained international recognition and has won several awards such as: Architizer A+ Award, German Design Award, A Design Award, World Architecture Award, Big SEE Architecture Award, Iconic Award.THE FIRM has been nominated for Mies van der Rohe Awards 2015 – European Union Prize for Contemporary Architecture as well. The work of "Potiropoulos + Partners" includes well known buildings such as A. Trichas Residence in Philothei, Athens, Greece, the Olympic Airlines Airport Services Building Complex in Athens International Airport "Eleftherios Venizelos", Athens, Greece, the restoration and reuse of Listed Hotel «Grande Albergo delle Rose», Rhodes, Greece, the Olympic Tennis Centre in the Athens Olympic Sports Center (Ο.Α.Κ.Α.), Athens, Greece, the extension and renovation of Mercedes Benz Hellas Central Facilities in N. Kifissia, Athens, Greece, Flisvos Marina, Athens, Greece, the Cultural Centre Square in Gerakas, Athens, Greece, the Kindergarten of German School of Athens in Maroussi, Athens, Greece, «Evmareia» Touristic Complex in Brestova Zagorie, Croatia and PROJECT "X"– Research, Education, Conference and Sports Centre of ELPEN S.A. in Spata Business Area, Athens, Greece. In 2009 the monograph "Potiropoulos D+L Architects" was published by "Potamos Editions" and includes selected works of the practice of the period 1989-2009.  It is foreworded by Daniel Libeskind and Prof. Dimitris Philippides. 
In the publication titled Readings of Greek Post-war Architecture (Kaleidoskopio Editions, 2014), the author Panayiotis Tsakopoulos selected the work of Potiropoulos+Partners as one of the 18 most representative samples of architecture in post-war Greece. Dimitris Potiropoulos is a founding member of the "Hellenic Institute of Architecture". His lectures and publications concern issues of architectural theory as well as projects and studies emerged by his practice.  His work has been published both in Greek and international press and has been presented in exhibitions in Greece and abroad, such as: 1st Biennale for Young Greek Architects – Athens, Greece, 1996, 2nd Biennale for Young Greek Architects – Athens, Greece, 1998, Landscapes of Modernization; Greek Architecture 1960s and 1990s – Rotterdam, the Netherlands, 1999, International Architectural Exhibition – Belgrade, Serbia, 2000, Pan-Hellenic Exhibition of Architectural Project – Patras, Greece, 2000/2003/2006, The shape of Space; 40 Years Architectural Trends – Athens, Greece, 2008 among others. References External links KTIRIO Potiropoulos+Partners ARCHITIZER | Potiropoulos+Partners ARCHETYPE | Potiropoulos+Partners Athens Voice | Athens' "next day" architecture Lifo | 30 years Potiropoulos+Partners Andro | Potiropoulos+Partners Architect | The Helix Grad Review | Beachfront Villa e-Travel News | Active Materiality Huffington Post | New, emblematic buildings are being built in Attica, Greece Architect | International award for the Football Stadium of PAE Larissa Complex KTIRIO | Retail & office building in Athens, Greece Marie Claire | International Interior Design Award for "Shedia Home" Kataskeves Ktirion | Hotel and luxury villas tourist complex in Croatia Architectural design Architects from Athens Technische Universität Darmstadt alumni
Dimitris Potiropoulos
[ "Engineering" ]
1,118
[ "Design", "Architectural design", "Architecture" ]
14,601,018
https://en.wikipedia.org/wiki/Ogden%20hyperelastic%20model
The Ogden material model is a hyperelastic material model used to describe the non-linear stress–strain behaviour of complex materials such as rubbers, polymers, and biological tissue. The model was developed by Raymond Ogden in 1972. The Ogden model, like other hyperelastic material models, assumes that the material behaviour can be described by means of a strain energy density function, from which the stress–strain relationships can be derived. Ogden material model In the Ogden material model, the strain energy density is expressed in terms of the principal stretches , as: where , and are material constants. Under the assumption of incompressibility one can rewrite as In general the shear modulus results from With and by fitting the material parameters, the material behaviour of rubbers can be described very accurately. For particular values of material constants the Ogden model will reduce to either the Neo-Hookean solid (, ) or the Mooney-Rivlin material (, , , with the constraint condition ). Using the Ogden material model, the three principal values of the Cauchy stresses can now be computed as . Uniaxial tension We now consider an incompressible material under uniaxial tension, with the stretch ratio given as , where is the stretched length and is the original unstretched length. The pressure is determined from incompressibility and boundary condition , yielding: . Equi-biaxial tension Considering an incompressible material under eqi-biaxial tension, with . The pressure is determined from incompressibility, and boundary condition , gives: . Other hyperelastic models For rubber and biological materials, more sophisticated models are necessary. Such materials may exhibit a non-linear stress–strain behaviour at modest strains, or are elastic up to huge strains. These complex non-linear stress–strain behaviours need to be accommodated by specifically tailored strain-energy density functions. The simplest of these hyperelastic models, is the Neo-Hookean solid. where is the shear modulus, which can be determined by experiments. From experiments it is known that for rubbery materials under moderate straining up to 30–70%, the Neo-Hookean model usually fits the material behaviour with sufficient accuracy. To model rubber at high strains, the one-parametric Neo-Hookean model is replaced by more general models, such as the Mooney-Rivlin solid where the strain energy is a linear combination of two invariants The Mooney-Rivlin material was originally also developed for rubber, but is today often applied to model (incompressible) biological tissue. For modeling rubbery and biological materials at even higher strains, the more sophisticated Ogden material model has been developed. References F. Cirak: Lecture Notes for 5R14: Non-linear solid mechanics, University of Cambridge. R.W. Ogden: Non-Linear Elastic Deformations, Continuum mechanics Solid mechanics
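The article's formulas did not survive extraction, so the standard textbook form of the Ogden model is reproduced below as a sketch of the missing mathematics; the notation (mu_p, alpha_p, lambda_i) is conventional rather than a verbatim restoration of the article's own symbols.

```latex
% Ogden strain energy density in the principal stretches \lambda_1, \lambda_2, \lambda_3,
% with material constants \mu_p, \alpha_p and N terms:
W(\lambda_1, \lambda_2, \lambda_3)
  = \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}
    \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3 \right),
\qquad \lambda_1 \lambda_2 \lambda_3 = 1 \ \text{(incompressibility)}.

% Consistency with the small-strain shear modulus \mu:
2\mu = \sum_{p=1}^{N} \mu_p \alpha_p .

% Incompressible uniaxial tension with stretch \lambda
% (\lambda_1 = \lambda, \ \lambda_2 = \lambda_3 = \lambda^{-1/2}); principal Cauchy stress:
\sigma_{11} = \sum_{p=1}^{N} \mu_p \left( \lambda^{\alpha_p} - \lambda^{-\alpha_p/2} \right).

% Special cases: N = 1, \alpha_1 = 2 recovers the neo-Hookean solid;
% N = 2, \alpha_1 = 2, \alpha_2 = -2 recovers the Mooney-Rivlin material.
```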
Ogden hyperelastic model
[ "Physics" ]
595
[ "Solid mechanics", "Mechanics", "Classical mechanics", "Continuum mechanics" ]
14,601,271
https://en.wikipedia.org/wiki/Autoacceleration
In polymer chemistry, autoacceleration (gel effect) is a dangerous reaction behavior that can occur in free-radical polymerization systems. It is due to the localized increases in viscosity of the polymerizing system that slow termination reactions. The removal of reaction obstacles therefore causes a rapid increase in the overall rate of reaction, leading to possible reaction runaway and altering the characteristics of the polymers produced. It is also known as the Trommsdorff–Norrish effect after German chemist Johann Trommsdorff and British chemist Ronald G.W. Norrish. Background Autoacceleration of the overall rate of a free-radical polymerization system has been noted in many bulk polymerization systems. The polymerization of methyl methacrylate, for example, deviates strongly from classical mechanism behavior around 20% conversion; in this region the conversion and molecular mass of the polymer produced increases rapidly. This increase of polymerization is usually accompanied by a large rise in temperature if heat dissipation is not adequate. Without proper precautions, autoacceleration of polymerization systems could cause metallurgic failure of the reaction vessel or, worse, explosion. To avoid the occurrence of thermal runaway due to autoacceleration, suspension polymerization techniques are employed to make polymers such as polystyrene. The droplets dispersed in the water are small reaction vessels, but the heat capacity of the water lowers the temperature rise, thus moderating the reaction. Causes Norrish and Smith, Trommsdorff, and later, Schultz and Harborth, concluded that autoacceleration must be caused by a totally different polymerization mechanism. They rationalized through experiment that a decrease in the termination rate was the basis of the phenomenon. This decrease in termination rate, kt, is caused by the raised viscosity of the polymerization region when the concentration of previously formed polymer molecules increases. Before autoacceleration, chain termination by combination of two free-radical chains is a very rapid reaction that occurs at very high frequency (about one in 104 collisions). However, when the growing polymer molecules – with active free-radical ends – are surrounded in the highly viscous mixture consisting of a growing concentration of "dead" polymer, the rate of termination becomes limited by diffusion. The Brownian motion of the larger molecules in the polymer "soup" is restricted, therefore limiting the frequency of their effective (termination) collisions. Results With termination collisions restricted, the concentration of active polymerizing chains and simultaneously the consumption of monomer rises rapidly. Assuming abundant unreacted monomer, viscosity changes affect the macromolecules but do not prove high enough to prevent smaller molecules – such as the monomer – from moving relatively freely. Therefore, the propagation reaction of the free-radical polymerization process is relatively insensitive to changes in viscosity. 
This also implies that at the onset of autoacceleration the overall rate of reaction increases relative to the rate of un-autoaccelerated reaction given by the overall rate of reaction equation for free-radical polymerization: where is the rate of polymerization is the concentration of monomer is the concentration of initiator is the dissociation constant is the rate constant for propagation is the rate constant for chain transfer is the fraction of initiators which initiate chain growth Approximately, as the termination decreases by a factor of 4, the overall rate of reaction will double. The decrease of termination reactions also allows radical chains to add monomer for longer time periods, raising the mass-average molecular mass dramatically. However, the number-average molecular mass only increases slightly, leading to broadening of the molecular mass distribution (high dispersity, very polydispersed product). References Bibliography Dvornic, Petar R., and Jacovic S. Milhailo. "The Viscosity Effect on Autoacceleration of the Rate of Free Radical Polymerization". Wiley InterScience. 6 December 2007. Polymer chemistry Reaction mechanisms
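The rate expression referred to in the Results section above was lost in extraction. The standard steady-state rate law for free-radical polymerization, written here in conventional notation as a reconstruction rather than the article's own display, makes the square-root dependence on termination explicit.

```latex
% Steady-state rate of free-radical polymerization:
% R_p = rate of polymerization, [M] = monomer concentration, [I] = initiator
% concentration, f = initiator efficiency, k_d = initiator dissociation rate
% constant, k_p = propagation rate constant, k_t = termination rate constant.
R_p = k_p [\mathrm{M}] \left( \frac{f\, k_d\, [\mathrm{I}]}{k_t} \right)^{1/2}
% Since R_p \propto k_t^{-1/2}, a four-fold drop in the termination rate
% constant roughly doubles the overall rate, as stated in the text.
```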
Autoacceleration
[ "Chemistry", "Materials_science", "Engineering" ]
810
[ "Reaction mechanisms", "Materials science", "Polymer chemistry", "Physical organic chemistry", "Chemical kinetics" ]
14,601,332
https://en.wikipedia.org/wiki/Social%20value%20orientations
In social psychology, social value orientation (SVO) is a person's preference about how to allocate resources (e.g. money) between the self and another person. SVO corresponds to how much weight a person attaches to the welfare of others in relation to their own. Since people are assumed to vary in the weight they attach to other people's outcomes in relation to their own, SVO is an individual difference variable. The general concept underlying SVO has become widely studied in a variety of different scientific disciplines, such as economics, sociology, and biology, under a multitude of different names (e.g. social preferences, other-regarding preferences, welfare tradeoff ratios, social motives, etc.). Historical background The SVO construct has its history in the study of interdependent decision making, i.e. strategic interactions between two or more people. The advent of game theory in the 1940s provided a formal language for describing and analyzing situations of interdependence based on utility theory. As a simplifying assumption for analyzing strategic interactions, it was generally presumed that people only consider their own outcomes when making decisions in interdependent situations, rather than taking into account the interaction partners' outcomes as well. However, the study of human behavior in social dilemma situations, such as the prisoner's dilemma, revealed that some people do in fact appear to have concerns for others. In the prisoner's dilemma, participants are asked to take the role of two criminals. In this situation, they are to pretend that they are a pair of criminals being interrogated by detectives in separate rooms. Both participants are offered a deal and have two options: a participant may remain silent, or confess and implicate his or her partner. However, if both participants choose to remain silent, they will be set free. If both participants confess, they will receive a moderate sentence. Conversely, if one participant remains silent while the other confesses, the person who confesses will receive a minimal sentence while the person who remained silent (and was implicated by their partner) will receive a maximum sentence. Thus, participants have to decide whether to cooperate with or compete against their partner. When used in the lab, the dynamics of this situation are simulated as participants play for points or for money. Participants are given one of two choices, labeled option C or D. Option C is the cooperative choice: if both participants choose to cooperate, they will both earn points or money. On the other hand, option D is the competitive choice. If just one participant chooses option D, that participant will earn points or money while the other player will lose money. However, if both participants pick D, then both of them will lose money. In addition to displaying participants' social value orientations, this setup also displays the dynamics of a mixed-motive situation. From behavior in strategic situations it is not possible, though, to infer people's motives, i.e. the joint outcome they would choose if they alone could determine it. The reason is that behavior in a strategic situation is always a function of both people's preferences about joint outcomes and their beliefs about the intentions and behavior of their interaction partners. In an attempt to assess people's preferences over joint outcomes alone, disentangled from their beliefs about the other person's behavior, David M. Messick and Charles G. 
McClintock in 1968 devised what has become known as the decomposed game technique. Basically, any task where one decision maker can alone determine which one out of at least two own-other resource allocation options will be realized is a decomposed game (also often referred to as a dictator game, especially in economics, where it is often implemented as a constant-sum situation). By observing which own-other resource allocation a person chooses in a decomposed game, it is possible to infer that person's preferences over own-other resource allocations, i.e. social value orientation. Since there is no other person making a decision that affects the joint outcome, there is no interdependence, and therefore a potential effect of beliefs on behavior is ruled out. To give an example, consider two options, A and B. If you choose option A, you will receive $100, and another (unknown) person will receive $10. If you choose option B, you will receive $85, and the other (unknown) person will also receive $85. This is a decomposed game. If a person chooses option B, we can infer that this person does not consider only the outcome for the self when making a decision, but also takes into account the outcome for the other. Conceptualization When people seek to maximize their gains, they are said to be proself. But when people are also concerned with others' gains and losses, they are said to be prosocial. There are four categories within SVO. Individualistic and competitive SVOs are proself, while cooperative and altruistic SVOs are prosocial: Individualistic orientation: Members of this category are concerned only with their own outcomes. They make decisions based on what they think they will personally achieve, without concern for others' outcomes. They are focused only on their own outcomes and therefore do not get involved with other group members. They neither assist nor interfere. However, their actions may indirectly impact other members of the group, although such impact is not their goal. Competitive orientation: Competitors, much like individualists, strive to maximize their own outcomes, but in addition they seek to minimize others' outcomes. Disagreements and arguments are viewed as win-lose situations, and competitors find satisfaction in forcing their ideas upon others. A competitor believes that each person should get the most they can in each situation and play to win every time. Those with competitive SVOs are more likely to find themselves in conflicts. Competitors cause cooperators to react with criticism to their abrasive styles. However, competitors rarely modify their behavior in response to these complaints because they are relatively unconcerned with maintaining interpersonal relations. Cooperative orientation: Cooperators tend to maximize their own outcomes as well as others' outcomes. They prefer strategies that generate win-win situations. When dealing with other people, they believe that it is better if everyone comes out even in a situation. Altruistic orientation: Altruists are motivated to help others who are in need. Members of this category are low in self-interest. They willingly sacrifice their own outcomes in the hope of helping others achieve gain. However, in 1973 Griesinger and Livingston provided a geometric framework of SVO (the SVO ring) with which they could show that SVO is in principle not a categorical but a continuous construct that allows for an infinite number of social value orientations. 
The basic idea was to represent outcomes for the self (on the x-axis) and for the other (on the y-axis) on a Cartesian plane, and represent own-other payoff allocation options as coordinates on a circle centered at the origin of the plane. If a person chooses a particular own-other outcome allocation on the ring, that person's SVO can be represented by the angle of the line starting at the origin of the Cartesian plane and intersecting the coordinates of the respective chosen own-other outcome allocation. If, for instance, a person would choose the option on the circle that maximizes the own outcome, this would refer to an SVO angle of , indicating a perfectly individualistic SVO. An angle of would indicate a perfectly cooperative (maximizing joint outcomes) SVO, while an angle of would indicate a perfectly competitive (maximizing relative gain) SVO. This conceptualization indicates that SVO is a continuous construct, since there is an infinite number of possible SVOs, because angular degrees are continuous. This advancement in the conceptualization of the SVO construct also clarified that SVO as originally conceptualized can be represented in terms of a utility function of the following form , where is the outcome for the self, is the outcome for the other, and the parameters indicate the weight a person attaches to the own outcome () and the outcome for the other (). Measurement Several different measurement methods exist for assessing SVO. The basis for any of these measures is the decomposed game technique, i.e. a set of non-constant-sum dictator games. The most commonly used SVO measures are the following. Ring Measure The Ring measure was devised by Wim B. G. Liebrand in 1984 and is based on the geometric SVO framework proposed by Griesinger and Livingston in 1973. In the Ring measure, subjects are asked to choose between 24 pairs of options that allocate money to the subject and the "other". The 24 pairs of outcomes correspond to equally spaced adjacent own-other-payoff allocations on an SVO ring, i.e. a circle with a certain radius centered at the origin of the Cartesian plane. The vertical axis (y) measures the number of points or amount of money allocated to the other and the horizontal axis (x) measures the amount allocated to the self. Each pair of outcomes corresponds to two adjacent points on the circle. Adding up a subject's 24 choices yields a motivational vector with a certain length and angle. The length of the vector indicates the consistency of a subject's choice behavior, while the angle indicates that subject's SVO. Subjects are then categorized into one out of eight SVO categories according to their SVO angle, given a sufficiently consistent choice pattern. This measure allows for the detection of uncommon pathological SVOs, such as masochism, sadomasochism, or martyrdom, which would indicate that a subject attaches a negative weight () to the outcome for the self given the utility function described above. Triple-Dominance Measure The triple-dominance measure is directly based on the use of decomposed games as suggested by Messick and McClintock (1968). Concretely, the triple-dominance measure consists of nine items, each of which asks a subject to choose one out of three own-other-outcome allocations. The three options do have the same characteristics in each of the items. One option maximizes the outcome for the self, a second option maximizes the sum of the outcomes for the self and the other (joint outcome), and the third option maximizes the relative gain (i.e. 
the difference between the outcome for the self and the outcome for the other). If a subject chooses an option indicating a particular SVO in at least six out of the nine items, the subject is categorized accordingly. That is, a subject is categorized as cooperative/prosocial, individualistic, or competitive. Slider Measure The Slider measure assess SVO on a continuous scale, rather than categorizing subjects into nominal motivational groups. The instrument consists of 6 primary and 9 secondary items. In each item of the paper-based version of the Slider measure, a subject has to indicate her most preferred own-other outcome allocation out of nine options. From a subject's choices in the primary items, the SVO angle can be computed. There is also an online version of the Slider measure, where subjects can slide along a continuum of own-other payoff allocations in the items, allowing for a very precise assessment of a person's SVO. The secondary items can be used for differentiating between the motivations to maximize the joint outcome and to minimize the difference in outcomes (inequality aversion) among prosocial subjects. The SVO Slider Measure has been shown to be more reliable than previously used measures, and yields SVO scores on a continuous scale. Neuroscience and Social Value Orientation Some recent papers have explored whether Social Value Orientation is somehow reflected on human brain activity. The first functional magnetic resonance imaging study of Social Value Orientation revealed that response of the amygdala to economic inequity (i.e., absolute value of reward difference between self and the other) is correlated with the degree of prosocial orientation. A functional magnetic resonance imaging study found that responses of Medial Prefrontal Cortex - an area that is typically associated with social cognition- mirrored preferences over competitive, individualistic and cooperative allocations. Similar findings in this or neighboring areas (ventromedial and dorsomedial prefrontal cortex) have been reported elsewhere. Stylized facts SVO has been shown to be predictive of important behavioral variables, such as: fiscal behavior cooperative behavior in social dilemmas helping behavior donation behavior proenvironmental behavior negotiation behavior Furthermore, it has been shown that individualism is prevalent among very young children, and that the frequency of expressions of prosocial and competitive SVOs increases with age. Among adults, it has been shown repeatedly that prosocial SVOs are most frequently observed (up to 60 percent), followed by individualistic SVOs (about 30-40 percent), and competitive SVOs (about 5-10 percent). Evidence also suggests that SVO is first and foremost determined by socialization, and that genetic predisposition plays a minor role in SVO development. Broader perspectives The SVO construct is rooted in social psychology, but has also been studied in other disciplines, such as economics. However, the general concept underlying SVO is inherently interdisciplinary, and has been studied under different names in a variety of different scientific fields; it is the concept of distributive preferences. Originally, the SVO construct as conceptualized by the SVO ring framework did not include preferences such as inequality aversion, which is a distributive preference heavily studied in experimental economics. This particular motivation can also not be assessed with commonly used measures of SVO, except with the SVO Slider Measure. 
The original SVO concept can be extended, though, by representing peoples' distributive preferences in terms of utility functions, as is standard in economics. For instance, a representation of SVO that includes the expression of a motivation to minimize differences between outcomes could be formalized as follows. . Several utility functions as representations of peoples' concerns for the welfare of others have been devised and used (for a very prominent example, see Fehr & Schmidt, 1999) in economics. It is a challenge for future interdisciplinary research to combine the findings from different scientific disciplines and arrive at a unifying theory of SVO. Representing SVO in terms of a utility function and going beyond the construct's original conceptualization may facilitate the achievement of this ambitious goal. References See also Cooperation Clyde Kluckhohn and his Social Values Orientation Theory Experimental economics Experimental psychology Human behavior Motivation Rokeach Value Survey Social dilemma Social preferences Social psychology Value system Game theory Moral psychology Social psychology concepts
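As a concrete illustration of the ring and Slider logic described under Measurement, the sketch below computes an SVO angle from a set of own-other allocations and maps it onto the usual motivational categories. It is a hedged example rather than official scoring code: the centering at 50, the arctangent formula, and the category cut-offs (above 57.15 degrees altruistic, 22.45 to 57.15 prosocial, -12.04 to 22.45 individualistic, below -12.04 competitive) follow the published Slider-measure conventions as commonly cited and should be checked against the original instrument.

```python
# Hedged sketch of SVO Slider-style scoring (not the official scoring code).
# Allocations are (to_self, to_other) pairs; the angle is computed from the
# mean allocation relative to the instrument's midpoint of 50.
import math

CUTOFFS = [                 # commonly cited boundaries, assumed for this sketch
    (57.15, "altruistic"),
    (22.45, "prosocial"),
    (-12.04, "individualistic"),
]

def svo_angle(allocations):
    mean_self = sum(s for s, _ in allocations) / len(allocations)
    mean_other = sum(o for _, o in allocations) / len(allocations)
    return math.degrees(math.atan2(mean_other - 50.0, mean_self - 50.0))

def svo_category(angle):
    for lower_bound, label in CUTOFFS:
        if angle > lower_bound:
            return label
    return "competitive"

# Example: a chooser who keeps splitting outcomes nearly evenly leans prosocial.
choices = [(85, 85), (85, 76), (79, 81), (81, 83), (88, 79), (84, 84)]
angle = svo_angle(choices)
print(round(angle, 2), svo_category(angle))   # roughly 43 degrees -> prosocial
```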
Social value orientations
[ "Mathematics" ]
3,012
[ "Game theory" ]
14,601,492
https://en.wikipedia.org/wiki/Calcium%28I%29%20chloride
Calcium(I) chloride (CaCl) is a diatomic molecule observed in certain gases. A solid with the composition CaCl was reported in 1953; however, later efforts to reproduce this work failed. Molecules of CaCl have been observed in the atmospheres of carbon stars. References Calcium compounds Chlorides Alkaline earth metal halides
Calcium(I) chloride
[ "Chemistry" ]
71
[ "Chlorides", "Inorganic compounds", "Salts" ]
14,601,527
https://en.wikipedia.org/wiki/Simple%20cycle%20combustion%20turbine
A simple-cycle combustion turbine (SCCT) is a type of gas turbine typically used in the power generation, aviation (jet engine), and oil and gas industries (for electricity generation and mechanical drives). The simple-cycle combustion turbine follows the Brayton Cycle and differs from a combined cycle operation in that it has only one power cycle (i.e. no provision for waste heat recovery). Advantages There are several advantages of an SCCT. The primary advantage of a SCCT is the high power generated to weight (or size) ratio, when compared to alternatives. Another advantage is the ability for it to quickly reach full power, unlike other baseload power plants that may have a minimum time of being online once started. This "minimum up" is a common term in the power industry when referring to this requirement. Therefore, SCCTs are usually used as peaking power plants, which can operate from several hours per day to a couple of dozen hours per year, depending on the electricity demand and the generating capacity of the region. In areas with a shortage of baseload and load following power plant capacity, a gas turbine power plant may regularly operate during most hours of the day and even into the evening. A typical large simple-cycle gas turbine may produce 100 to 300 megawatts of power and have 35–40% thermal efficiency. The most efficient turbines have reached 46% efficiency. For power generation applications, the investment costs are cheaper than combined cycle combustion turbine plants (in 2003, the Energy Information Administration estimated that the cost of a combined cycle plant was US$500–550/kW, as opposed to the SCCT cost of US$389/kW), but at reduced efficiency. SCCTs require smaller capital investment than either coal or nuclear power plants and can be scaled to generate small or large amounts of power. Also, the actual construction process can take as little as several weeks to a few months, compared to years for base load power plants that require extensive plumbing and large waste heat dissipation systems. Disadvantages A simple cycle combustion turbine has a lower thermal efficiency than a combined cycle machine. Although they may be less expensive to build, simple cycle combustion turbines, due to their low efficiency, cost more to run than most other plants. This results in increased cost per kWh during peak electrical loads. Compared to combined cycle NG turbines, the lower efficiency increases the amount of NG fuel required to produce the same amount of electricity. References Gas turbines
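The efficiency figures above translate directly into fuel burned per kilowatt-hour, which is the arithmetic behind the disadvantages paragraph. The short sketch below works through it; the 37.5% value is taken from the article's 35-40% range, while the 55% combined-cycle efficiency and the $4/MMBtu gas price are illustrative assumptions, not figures from the article.

```python
# Hedged arithmetic sketch: thermal efficiency -> heat rate -> fuel cost per kWh.
# The combined-cycle efficiency and the gas price below are assumed values.
KWH_TO_BTU = 3412.0            # thermal energy equivalent of one kilowatt-hour

def heat_rate_btu_per_kwh(efficiency):
    """Fuel energy (Btu) burned per kWh of electricity at a given efficiency."""
    return KWH_TO_BTU / efficiency

def fuel_cost_per_kwh(efficiency, gas_price_per_mmbtu):
    return heat_rate_btu_per_kwh(efficiency) / 1_000_000 * gas_price_per_mmbtu

cases = [("simple cycle (37.5%)", 0.375), ("combined cycle (55%, assumed)", 0.55)]
for label, eff in cases:
    hr = heat_rate_btu_per_kwh(eff)
    cost = fuel_cost_per_kwh(eff, gas_price_per_mmbtu=4.0)
    print(f"{label}: heat rate ~{hr:,.0f} Btu/kWh, fuel ~${cost:.3f}/kWh")
# The simple-cycle unit burns roughly 45-50% more fuel per kWh, which is why it
# is dispatched mainly for peaking rather than baseload service.
```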
Simple cycle combustion turbine
[ "Technology" ]
502
[ "Engines", "Gas turbines" ]
14,601,729
https://en.wikipedia.org/wiki/Peanut%20agglutinin
Peanut agglutinin (PNA) is plant lectin protein derived from the fruits of Arachis hypogaea. Peanut agglutinin may also be referred to as Arachis hypogaea lectin. Lectins recognise and bind particular sugar sequences in carbohydrates; peanut agglutinin binds the carbohydrate sequence Gal-β(1-3)-GalNAc. The name "peanut agglutinin" originates from its ability to stick together (agglutinate) cells, such as neuraminidase-treated erythrocytes, which have glycoproteins or glycolipids on their surface which include the Gal-β(1-3)-GalNAc carbohydrate sequence. Structure The protein is 273 amino acids in length with the first 23 residues acting as a signal peptide which is subsequently cleaved. It has a Uniprot accession of P02872. There are over 20 structures of this protein in the PDB which reveal and all beta-sheet protein with a tetrameric quaternary structure. It is a member of the Lectin_legB PFAM family. Available Structures of peanut agglutinin Uses in cell biology and biochemistry Because peanut agglutinin specifically binds a particular carbohydrate sequence it finds use in a range of methods for cell biology and biochemistry. For example in PNA-affinity chromatography the binding specificity of peanut agglutinin is used to isolate glycosylated molecules which have the sugar sequence Gal-β(1-3)-GalNAc. Peanut agglutinin activity is inhibited by lactose and galactose which compete for the binding site. Other uses include: Potent anti-T cell activity. Distinguishing between human lymphocyte subsets. Identification of cone cell inner and outer segments and to a lesser extent rod cell inner segments in the mammalian retina. Tumour tissue determination for transitional mucosa malignancies. Identification of mammalian-infective metacyclic promastigote Leishmania major parasites from other life cycle forms also found in the sandfly host. Identification of the outer acrosome membrane in sperm, indicating acrosome integrity. See also List of histologic stains that aid in diagnosis of cutaneous conditions References Plant lectins Legume lectins Glycoproteins Peanut products
Peanut agglutinin
[ "Chemistry", "Biology" ]
513
[ "Biochemistry", "Biotechnology stubs", "Biochemistry stubs", "Glycoproteins", "Glycobiology" ]
14,602,178
https://en.wikipedia.org/wiki/Coherent%20topology
In topology, a coherent topology is a topology that is uniquely determined by a family of subspaces. Loosely speaking, a topological space is coherent with a family of subspaces if it is a topological union of those subspaces. It is also sometimes called the weak topology generated by the family of subspaces, a notion that is quite different from the notion of a weak topology generated by a set of maps. Definition Let be a topological space and let be a family of subsets of each with its induced subspace topology. (Typically will be a cover of .) Then is said to be coherent with (or determined by ) if the topology of is recovered as the one coming from the final topology coinduced by the inclusion maps By definition, this is the finest topology on (the underlying set of) for which the inclusion maps are continuous. is coherent with if either of the following two equivalent conditions holds: A subset is open in if and only if is open in for each A subset is closed in if and only if is closed in for each Given a topological space and any family of subspaces there is a unique topology on (the underlying set of) that is coherent with This topology will, in general, be finer than the given topology on Examples A topological space is coherent with every open cover of More generally, is coherent with any family of subsets whose interiors cover As examples of this, a weakly locally compact space is coherent with the family of its compact subspaces. And a locally connected space is coherent with the family of its connected subsets. A topological space is coherent with every locally finite closed cover of A discrete space is coherent with every family of subspaces (including the empty family). A topological space is coherent with a partition of if and only is homeomorphic to the disjoint union of the elements of the partition. Finitely generated spaces are those determined by the family of all finite subspaces. Compactly generated spaces (in the sense of Definition 1 in that article) are those determined by the family of all compact subspaces. A CW complex is coherent with its family of -skeletons Topological union Let be a family of (not necessarily disjoint) topological spaces such that the induced topologies agree on each intersection Assume further that is closed in for each Then the topological union is the set-theoretic union endowed with the final topology coinduced by the inclusion maps . The inclusion maps will then be topological embeddings and will be coherent with the subspaces Conversely, if is a topological space and is coherent with a family of subspaces that cover then is homeomorphic to the topological union of the family One can form the topological union of an arbitrary family of topological spaces as above, but if the topologies do not agree on the intersections then the inclusions will not necessarily be embeddings. One can also describe the topological union by means of the disjoint union. Specifically, if is a topological union of the family then is homeomorphic to the quotient of the disjoint union of the family by the equivalence relation for all ; that is, If the spaces are all disjoint then the topological union is just the disjoint union. Assume now that the set A is directed, in a way compatible with inclusion: whenever . Then there is a unique map from to which is in fact a homeomorphism. Here is the direct (inductive) limit (colimit) of in the category Top. 
Properties Let be coherent with a family of subspaces A function from to a topological space is continuous if and only if the restrictions are continuous for each This universal property characterizes coherent topologies in the sense that a space is coherent with if and only if this property holds for all spaces and all functions Let be determined by a cover Then If is a refinement of a cover then is determined by In particular, if is a subcover of is determined by If is a refinement of and each is determined by the family of all contained in then is determined by Let be an open or closed subspace of or more generally a locally closed subset of Then is determined by Let be a quotient map. Then is determined by Let be a surjective map and suppose is determined by For each let be the restriction of to Then If is continuous and each is a quotient map, then is a quotient map. is a closed map (resp. open map) if and only if each is closed (resp. open). Given a topological space and a family of subspaces there is a unique topology on that is coherent with The topology is finer than the original topology and strictly finer if was not coherent with But the topologies and induce the same subspace topology on each of the in the family And the topology is always coherent with As an example of this last construction, if is the collection of all compact subspaces of a topological space the resulting topology defines the k-ification of The spaces and have the same compact sets, with the same induced subspace topologies on them. And the k-ification is compactly generated. See also Notes References General topology
Coherent topology
[ "Mathematics" ]
1,063
[ "General topology", "Topology" ]
14,602,299
https://en.wikipedia.org/wiki/Liana%20Nella-Potiropoulou
Liana Nella-Potiropoulou is a Greek architect, Founding Partner of the architectural practice Potiropoulos+Partners. She was born in Athens, Greece, to her parents Konstantinos and Sofia Nella. She studied architecture at the National Technical University of Athens (NTUA). During her studies she received the «17th November» award. She earned her master's degree in architectural design and theory at the University of Pennsylvania, where she received the "Frank Miles" Theory of Architecture Award. During her studies Liana Nella-Potiropoulou worked for the practice of prof. Pavlos Mylonas, and later she collaborated with the "Hellenic Design Centre" and the practice A.N. Tombazis and Associates Architects. At the same time, she started her own practice and took part in architectural competitions. In 1989, along with her spouse Dimitris Potiropoulos, she established the firm Potiropoulos + Partners, one of Greece’s most accomplished architectural practices with international recognition. In 2019 their son Rigas Potiropoulos joined as a partner. The firm maintains offices in Athens GR & London UK. The work of Potiropoulos + Partners includes well-known buildings such as A. Trichas Residence in Philothei, Athens, Greece, the Olympic Airlines Airport Services Building Complex in Athens International Airport “Eleftherios Venizelos”, Athens, Greece, the restoration and reuse of the Listed Hotel «Grande Albergo delle Rose», Rhodes, Greece, the Olympic Tennis Centre in the Athens Olympic Sports Center (Ο.Α.Κ.Α.), Athens, Greece, the extension and renovation of Mercedes Benz Hellas Central Facilities in N. Kifissia, Athens, Greece, Flisvos Marina, Athens, Greece, the Cultural Centre Square in Gerakas, Athens, Greece, the Kindergarten of the German School of Athens in Maroussi, Athens, Greece, «Evmareia» Touristic Complex in Brestova Zagorie, Croatia and PROJECT "X" – Research, Education, Conference and Sports Centre of ELPEN S.A. in Spata Business Area, Athens, Greece. Awards and recognition received by the firm since its inception include the 1st prize for the Natural History Museum on Samos, Greece, the 3rd prize for the Complex of the "Technical Chamber of Greece" in Maroussi, Attica, Greece, the 2nd prize for the New Acropolis Museum, Athens, Greece in collaboration with Studio Daniel Libeskind, the 1st prize for the Restoration of the Listed Building Complex of the Silk-mill "Ekmetzoglou" in Volos, Greece, a special commendation for the entry for the Grand Egyptian Museum in Cairo, Egypt, etc. The project "Kindergarten of German School of Athens" in Maroussi, Athens, Greece was nominated for the Mies van der Rohe Awards 2015. The firm has also gained international recognition and has won several awards such as: Architizer A+ Award, German Design Award, A Design Award, World Architecture Award, Big SEE Architecture Award, Iconic Award. The firm has been nominated for the Mies van der Rohe Awards 2015 – European Union Prize for Contemporary Architecture as well. 
Liana Nella-Potiropoulou's work has been presented in exhibitions in Greece and abroad, such as: 1st Biennale for Young Greek Architects – Athens, Greece, 1996, 2nd Biennale for Young Greek Architects – Athens, Greece, 1998, Landscapes of Modernization; Greek Architecture 1960s and 1990s – Rotterdam, the Netherlands, 1999, International Architectural Exhibition – Belgrade, Serbia, 2000, Pan-Hellenic Exhibition of Architectural Project – Patras, Greece, 2000/2003/2006, The Scientific Work of Women Engineers – Athens, Greece, 2008, and The shape of Space; 40 Years Architectural Trends – Athens, Greece, 2008, among others. In 2009 the monograph "Potiropoulos D+L Architects" was published by "Potamos Editions"; it includes selected works of the practice from the period 1989–2009 and carries forewords by Daniel Libeskind and Prof. Dimitris Philippides. In the publication titled Readings of Greek Post-war Architecture (Kaleidoskopio Editions, 2014), the author Panayiotis Tsakopoulos selected the work of Potiropoulos+Partners as one of the 18 most representative samples of architecture in post-war Greece. Liana Nella-Potiropoulou taught Architectural Design at the Patras School of Architecture, Patras, Greece. Her lectures and publications concern projects and studies produced by her practice as well as issues of architectural theory. Her work has been published in both the Greek and the international press. References External links Potiropoulos+Partners KTIRIO | Potiropoulos+Partners ARCHITIZER | Potiropoulos+Partners ARCHETYPE | Potiropoulos+Partners Athens Voice | Athens' "next day" architecture Lifo | 30 years Potiropoulos+Partners Andro | Potiropoulos+Partners Architect | The Helix Grad Review | Beachfront Villa e-Travel News | Active Materiality Huffington Post | New, emblematic buildings are being built in Attica, Greece Architect | International award for the Football Stadium of PAE Larissa Complex KTIRIO | Retail & office building in Athens, Greece Marie Claire | International Interior Design Award for "Shedia Home" Kataskeves Ktirion | Hotel and luxury villas tourist complex in Croatia Architectural design Greek women architects Architects from Athens National Technical University of Athens alumni Academic staff of the University of Patras
Liana Nella-Potiropoulou
[ "Engineering" ]
1,130
[ "Design", "Architectural design", "Architecture" ]
14,602,439
https://en.wikipedia.org/wiki/Melanoblast
A melanoblast is a precursor cell of a melanocyte. These cells migrate from the trunk neural crest cells (in terms of axial level from neck to posterior end) dorsolaterally between the ectoderm and dorsal surface of the somites. See also Biological pigment List of human cell types derived from the germ layers References Pigments Biomolecules Pigmentation
Melanoblast
[ "Chemistry", "Biology" ]
83
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Pigmentation", "Molecular biology" ]
14,602,755
https://en.wikipedia.org/wiki/Schreyerite
Schreyerite (V2Ti3O9) is a vanadium titanium oxide mineral found at Lasamba Hill, Kwale district, Coast Province, Kenya. It is polymorphous with kyzylkumite. The mineral occurs as exsolution lamellae and particles in rutile, coexisting with kyanite, sillimanite, and tourmaline in a highly metamorphosed gneiss. It was named after the German mineralogist and petrologist Werner Schreyer, for his research on the mineralogy of rock-forming minerals and the petrology of metamorphic rocks, both in nature and by experiment. Introduction Investigation of deposits of green vanadium-bearing kornerupine revealed the presence of a new vanadium mineral through observations in reflected light. Schreyerite was first discovered in the Kwale district, Kenya. Polymorphous with kyzylkumite, it occurs in highly twinned unmixed grains in vanadium-bearing rutile that occurs as idiomorphic crystals in kornerupine-bearing quartz-biotite-sillimanite gneiss. It also occurs in a pyrite deposit at Sartra, Sweden, in a Pb-Zn ore deposit at Rampura Agucha, India, and recently in metamorphic rocks of the Ol’khon complex on the western shore of Lake Baikal, Russia. There, instead of the usual intergrowths with rutile, single crystals of schreyerite were found, associated with titanite. Optical and physical properties Schreyerite is a reddish-brown, opaque mineral with metallic luster. Its reflectivity is slightly lower than that of rutile, and as a result it appears mostly gray. Pleochroism is weak: yellowish brown to reddish brown. When immersed in oil, the contrasts between rutile and schreyerite become clearer, and the color becomes more intense. With crossed polarizers, moderate anisotropism becomes evident, so that the very fine lamellar twinning becomes distinct. It has a hardness of 7 and a calculated specific gravity of 4.46. References Medenbach, O. and K. Schmetzer (1978) Schreyerite, V2Ti3O9, a new mineral. American Mineralogist, Volume 63, pages 1182–1186, 1978. Bernhardt, H.-J., K. Schmetzer, and O. Medenbach (1983) Berdesinskiite, V2TiO5, a new mineral from Kenya and additional data for schreyerite, V2Ti3O9. Grey, I. E. and A. F. Reid (1972) Shear structure compounds (Cr,Fe)2Tin-2O2n-1 derived from the α-PbO2 structural type. J. Solid State Chem., 4, 186–194. Döbelin, N., L. Z. Reznitsky, E. V. Sklyarov, T. Armbruster, and O. Medenbach (2006) Schreyerite, V2Ti3O9: New occurrence and crystal structure. American Mineralogist, 91, 196–202. Oxide minerals Vanadium minerals Titanium minerals Monoclinic minerals Minerals in space group 15 Polymorphism (materials science)
Schreyerite
[ "Materials_science", "Engineering" ]
693
[ "Polymorphism (materials science)", "Materials science" ]
14,603,289
https://en.wikipedia.org/wiki/Fazia
FAZIA stands for the Four Pi A and Z Identification Array. It is a project that aims to build a new 4π detector for charged particles, operating in the domain of heavy-ion induced reactions around the Fermi energy. It brings together more than 10 nuclear physics institutions worldwide. The detector is planned to begin operating in 2013–2014, coinciding with the advent of new high-intensity particle accelerators for radioactive nuclear beams. A large research and development effort is currently being made, especially on digital electronics and pulse shape analysis, in order to improve the detection capabilities of such particle detectors in several respects, such as charge and mass identification, lower energy thresholds, and improved energy and angular resolution. References G. Poggi (INFN Firenze, Italy), Isospin effects: toward a new generation array, Proceedings of the XVth GANIL Colloque, Giens, June 2006 O. Lopez (LPC Caen, France), FAZIA for EURISOL: Physics cases, EURISOL Town Meeting, Task 10 (Physics and Instrumentation), CERN, November 2006 L. Bardelli (INFN Firenze, Italy), FAZIA for EURISOL: Instrumentation, EURISOL Town Meeting, Task 10 (Physics and Instrumentation), CERN, November 2006 G. Verde (GANIL, France), presentation for the SPIRAL2 meeting, GANIL, October 2006 External links FAZIA collaboration official website Physics organizations Nuclear physics
Fazia
[ "Physics" ]
306
[ "Nuclear physics" ]
14,603,357
https://en.wikipedia.org/wiki/Chromium%28III%29%20nitrate
Chromium(III) nitrate describes several inorganic compounds consisting of chromium, nitrate and varying amounts of water. The most common is the dark violet hygroscopic solid. An anhydrous green form is also known. Chromium(III) nitrate compounds are of limited commercial importance, finding some applications in the dyeing industry. They are commonly used in academic laboratories for the synthesis of chromium coordination complexes. Structure The relatively complicated formula – [Cr(H2O)6](NO3)3·3H2O – belies the simple structure of this material. The chromium centers are bound to six aquo ligands, and the remaining volume of the solid is occupied by three nitrate anions and three molecules of water of crystallization. Properties and preparation The anhydrous salt forms green crystals and is very soluble in water (in contrast to anhydrous chromium(III) chloride, which dissolves very slowly except under special conditions). At 100 °C it decomposes. The red-violet hydrate is highly soluble in water. Chromium nitrate is used in the production of alkali metal-free catalysts and in pickling. Chromium nitrate can be prepared by dissolving chromium oxide in nitric acid. References Chromium(III) compounds Nitrates
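The preparation sentence above names the reagents but not the equation. A hedged sketch of the balanced reaction, assuming the oxide in question is chromium(III) oxide and that the product crystallizes as the hexahydrate described in the Structure section:

\[ \mathrm{Cr_2O_3 + 6\,HNO_3 + 15\,H_2O \longrightarrow 2\,[Cr(H_2O)_6](NO_3)_3\cdot 3H_2O} \]

Each formula unit of the product accounts for six aquo ligands, three nitrate anions and three waters of crystallization, which is why fifteen additional waters are needed on the left for every mole of Cr2O3.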
Chromium(III) nitrate
[ "Chemistry" ]
278
[ "Oxidizing agents", "Nitrates", "Salts" ]
14,603,396
https://en.wikipedia.org/wiki/Maintenance%20resource%20management
Maintenance resource management (MRM) training is an aircraft maintenance variant on crew resource management (CRM). Although the term MRM was used for several years following CRM's introduction, the first governmental guidance for standardized MRM training and its team-based safety approach, appeared when the FAA (U.S.) issued Advisory Circular 120-72, Maintenance Resource Management Training in September, 2000. Overview Like CRM, MRM training emphasizes a team approach to human error reduction using principles that seek to improve communications, situational awareness, problem solving, decision making, and teamwork. Unlike traditional coercive and hierarchical top-down safety programs, MRM advocates a decentralized, human-centric approach to safety. MRM encourages work teams to communicate vital operational risk and safety information directly and informally, regardless of rank or position, thus permitting rapid response to prevent impending crises. Some variation of human factors training, whether called MRM or not, is now standard at many commercial airlines, aircraft manufacturers, and aviation-related organizations. Several commercial aviation firms, as well as international aviation safety agencies, began expanding CRM-style training into air traffic control, aircraft design, and aircraft maintenance in the 1990s. Specifically, the aircraft maintenance section of this training expansion gained traction as Maintenance Resource Management (MRM). In an effort to standardize the industry wide training of this team-based safety approach, the FAA (U.S.) issued Advisory Circular 120-72, Maintenance Resource Management Training in September, 2000, and more recently an MRM Results Evaluation Calculator. MRM in military aviation In 2002, the U.S. Coast Guard identified that maintenance error is involved in one of five Coast Guard aviation mishaps at an annual cost of $1 million. In an effort to reduce those maintenance error induced mishaps, the Coast Guard created a Human Factors in Maintenance (HFIM) program. Drawing on data from the Federal Aviation Administration, National Aeronautics and Space Administration, National Transportation Safety Board, and commercial airline sources, the Coast Guard finally implemented a U.S. Navy-developed variant of MRM. Following a study of aviation mishaps over the 10-year period 1992-2002, the U.S. Air Force determined that close to 18% of its aircraft mishaps were directly attributable to maintenance human error. Unlike the more immediate impact of air crew error, maintenance human errors often occur long before the flight where the problems are discovered. These "latent errors" included such mistakes as failure to follow published aircraft manuals, lack of assertive communication among maintenance technicians, poor supervision, and improper assembly practices. In summer 2005, the Air National Guard Aviation Safety Division made the MRM program available to the Air National Guard's 88 flying wings, spread across 54 U.S. states and territories. In 2006, the Defense Safety Oversight Council (DSOC) of the U.S. Department of Defense recognized the mishap prevention value of this maintenance safety program by partially funding a variant of ANG MRM for training throughout the U.S. Air Force. This ANG initiated, DoD-funded version of MRM became known as Air Force Maintenance Resource Management, AF-MRM, and is now widely used in the U.S. Air Force. 
See also Fatigue Human factors Foreign Object Damage Disruptive Solutions Process References External links Human Error Analysis of Naval Aviation Maintenance Evolution of CRM in pdf University of Texas Human Factors Research Project Crew Resource Management Current Regulatory Paper pdf Neil Krey's CRM Developers Forum Aircraft maintenance
Maintenance resource management
[ "Engineering" ]
728
[ "Aircraft maintenance", "Aerospace engineering" ]
14,603,715
https://en.wikipedia.org/wiki/Darboux%27s%20formula
In mathematical analysis, Darboux's formula is a formula introduced by Gaston Darboux for summing infinite series by using integrals or evaluating integrals using infinite series. It is a generalization to the complex plane of the Euler–Maclaurin summation formula, which is used for similar purposes and derived in a similar manner (by repeated integration by parts of a particular choice of integrand). Darboux's formula can also be used to derive the Taylor series from calculus. Statement If φ(t) is a polynomial of degree n and f an analytic function, then the formula expresses f(z) − f(a) in terms of the derivatives of f at a and z together with an integral remainder (a reconstruction of the display equation is given below). The formula can be proved by repeated integration by parts. Special cases Taking φ to be a Bernoulli polynomial in Darboux's formula gives the Euler–Maclaurin summation formula. Taking φ to be (t − 1)^n gives the formula for a Taylor series. References Whittaker, E. T. and Watson, G. N. "A Formula Due to Darboux." §7.1 in A Course in Modern Analysis, 4th ed. Cambridge, England: Cambridge University Press, p. 125, 1990. External links Darboux's formula at MathWorld Mathematical analysis Summability methods
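The display equation for the Statement section did not survive extraction. As a hedged reconstruction, the identity in the form given by Whittaker and Watson (§7.1), with φ a polynomial of degree n and f analytic on a neighbourhood of the segment from a to z, reads

\[ \varphi^{(n)}(1)\,\bigl(f(z)-f(a)\bigr) = \sum_{m=1}^{n} (-1)^{m+1} (z-a)^{m} \Bigl( \varphi^{(n-m)}(1)\, f^{(m)}(z) - \varphi^{(n-m)}(0)\, f^{(m)}(a) \Bigr) + (-1)^{n} (z-a)^{n+1} \int_{0}^{1} \varphi(t)\, f^{(n+1)}\bigl(a + t(z-a)\bigr)\, dt. \]

As a sanity check, putting φ(t) = (t − 1)^n makes every φ^(n−m)(1) with m ≥ 1 vanish and gives φ^(n)(1) = n!, so dividing through by n! recovers the Taylor expansion of f about a with an integral form of the remainder, as the Special cases section states.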
Darboux's formula
[ "Mathematics" ]
250
[ "Sequences and series", "Mathematical analysis", "Summability methods", "Mathematical structures" ]
14,604,221
https://en.wikipedia.org/wiki/Push%20present
A push present (also called a push gift or a baby bauble) is a present a partner or family gives to the mother to mark the occasion of her giving birth to their child. In practice the present may be given before or after the birth, or even in the delivery room. The giving of push presents has supposedly grown in the United States in recent years. However, it is at the discretion of the partner or father. Possible origins "The exact origin of push presents or baby baubles is hard to pinpoint. Some believe this tradition hails back to several hundred years ago, stemming from places such as the UK, India, and Egypt, symbolizing fertility, strength, and the preciousness of new life. Jewelry is thought to have been the most customary gift. These lovely gestures of appreciation were given to the mother to acknowledge and commemorate the effort that went into such a momentous occasion." "Whether old or new, the practice seems to have gained a renewed popularity in the United States over the last few decades and has evolved beyond just jewelry. These gifts have become a cherished way for partners, family members, and friends to express their love and gratitude to the expectant parent for their incredible journey and sacrifices". Until recently it was passed on largely by word of mouth or peer pressure among both mothers and fathers. Though “push present” is a recent term, a gift of jewelry to a new mother has been practiced throughout different cultures and time periods. For example, Napoleon gave the Napoleon Diamond Necklace to his wife Marie Louise upon the birth of their son in 1811. According to Linda Murray, the executive editor of BabyCenter, "It's an expectation of moms these days that they deserve something for bearing the burden for nine months, getting sick, ruining their body." Other sources trace the development of the present to the increased assertiveness of women, allowing them to ask for a present more directly, or the increased involvement of the men in pregnancy, making them more informed of the pain and difficulty of pregnancy and labor. Frequency A 2004 survey of over 30,000 respondents by BabyCenter found that 38% of new mothers received a push present, and 55% of pregnant mothers wanted one, though fewer thought it was actually expected. About 40% of both groups said the baby itself was already a present and did not wish an additional reward. A survey of Today viewers in 2015 found that 45% were opposed to the custom, 28% in support, and 26% did not know what “push present” referred to. The popularity of push presents has been attributed in part to media coverage of celebrities receiving them. Examples include a 10 carat diamond ring given to celebrity stylist Rachel Zoe by her husband Rodger after the 2011 birth of their son, a Bentley given to reality TV star Peggy Tanous of The Real Housewives of Orange County by her husband Micah after the 2007 birth of their daughter, and a diamond and sapphire necklace given to singer Mariah Carey by her husband Nick Cannon after the 2011 birth of their twins. Some couples would prefer increased help in chores or baby care, or save the money for the child's education. According to etiquette expert Pamela Holland, there are no set guidelines for push presents. "The standard is that there is no standard," she said. "It does make sense to have etiquette around wedding or baby shower gifts because you're inviting other people into it. But this is far too intimate to have a rule." 
In general it is the woman who lets her man know about push presents, not the other way around, although there can be peer pressure from friends to buy one on either the man or the woman. Analysis of conversations on parenting website the BabyCenter's online community over the last three years found that mentions of push presents had increased by 41 per cent in the past 18 months, compared to only a two per cent increase between 2011 and 2012. A poll of 1,200 BabyCenter mothers also revealed that more than a quarter (27%) were expecting, or had already received, a push present this year. Diamonds were the most popular gift in the form of an eternity ring with the prices spent ranging from $600 to $1,700. Tablet computers, charm bracelets and designer watches, and handbags were also popular gifts to celebrate a new arrival. See also desco da parto or birth tray Baby shower References Giving Childbirth Etiquette
Push present
[ "Biology" ]
908
[ "Etiquette", "Behavior", "Human behavior" ]
14,604,610
https://en.wikipedia.org/wiki/Mucin%202
Mucin 2, oligomeric mucus gel-forming, also known as MUC2, is a protein that in humans is encoded by the MUC2 gene. Function This gene encodes a member of the mucin protein family. The protein encoded by this gene, also called mucin 2, is secreted onto mucosal surfaces. Mucin 2 is particularly prominent in the gut, where it is secreted from goblet cells in the epithelial lining into the lumen of the large intestine. There, mucin 2, along with small amounts of related mucin proteins, polymerizes into a gel of which 80% by weight is oligosaccharide side-chains that are added as post-translational modifications to the mucin proteins. This gel provides an insoluble mucous barrier that serves to protect the intestinal epithelium. Genetics The mucin 2 protein features a central domain containing tandem repeats rich in threonine and proline, whose copy number varies between 50 and 115 in different individuals. Alternatively spliced transcript variants of this gene have been described, but their full-length nature is not known. References
Mucin 2
[ "Chemistry" ]
243
[ "Biochemistry stubs", "Protein stubs" ]
14,604,844
https://en.wikipedia.org/wiki/Warehouse%20control%20system
A warehouse control system (WCS) is a software application that directs the real-time activities within warehouses and distribution centers (DC). As the “traffic cop” for the warehouse/distribution center, the WCS is responsible for keeping everything running smoothly, maximizing the efficiency of the material handling subsystems and often the activities of the warehouse associates themselves. It provides a uniform interface to a broad range of material handling equipment such as AS/RS, carousels, conveyor systems, sorters, palletizers, etc. The primary functions of a WCS include: Interfacing to an upper level host system/warehouse management system (WMS) and exchanging information required to manage the daily operations of the distribution center. Allocating work to the various material handling sub-systems to balance system activity to complete the requested workload. Providing real-time directives to operators and material handling equipment controllers to accomplish the order fulfillment and product routing requirements. Dynamically assigning cartons to divert locations based on defined sortation algorithms or based on routing/order information received from the Host (if applicable). Generating result data files for reporting and/or upload by the Host system. Providing operational screens (graphical user interface) and functions to facilitate efficient control and management of the distribution warehouse. Collecting statistical data on the operational performance of the system to enable operations personnel to maintain the equipment at peak performance. Each major function is designed to work as part of an integrated process to effectively link the host systems with the lower level control system, while relieving the Host from the real-time requirements such as operator screens and lower level equipment control interfaces. Control hierarchy The typical warehouse/distribution center consists of a multi-tier control architecture in which each level in the control hierarchy has a defined role. The uppermost level of the control hierarchy is the warehouse management system (WMS), or host. This system handles the business aspects of the system such as receiving customer orders, allocating inventory, and generating shipping manifests (or bills of lading) and invoices based on order fulfillment information and shipping information received from the material handling control system (WCS). It typically interacts with the material handling system on a non-real-time basis. Coordinating the activities of the various material handling sub-systems is the role of the warehouse control system (WCS). The WCS directs the "real-time" data management and interface responsibilities of the material handling system as well as provides common user interface screens for monitoring, control, and diagnostics. As the focal point for managing the operational aspects of the material handling system, the WCS provides the critical link between the non-real-time host and the real-time MHE control system. It receives information from the upper level Host and coordinates the various real-time control devices (conveyors, print and apply applicators, etc.) to accomplish the daily workload. At each decision point, the WCS determines the most efficient routing of the product and transmits directives to the equipment controllers to achieve the desired result. At the lowest level, closest to the physical equipment, are the equipment controller(s). 
These controllers are typically some form of a programmable logic controller (PLC) or a dedicated, real-time PC control system. They interface to peripheral Input/Output (I/O) devices such as photo-eyes scanners, motors, etc. as well as data collection devices such as bar code scanners (barcode reader) and weigh scales and are responsible for the physical operation of the material handling equipment. The equipment controllers are also responsible for the physical handling of product and tracking it from point-to-point based on the direction from the upper level control systems. Typically a single controller is only concerned with the operations of a defined area or sub-system of the overall material handling system. Ultimately, the control hierarchy within the distribution center reflects the organizational structure of their human counterparts. The management staff (or the WMS) determines the workload to be accomplished for the day while the supervisory staff (WCS) oversees the real-time activities of the warehouse associates (equipment controllers) to complete the daily activities. Each warehouse associate is assigned a specific task based on their area of expertise (order selection – carousel/Pick To Light, transportation - conveyors, etc.). As each operator completes their individual assignment, the supervisor (WCS) assigns the next task based on the current workload. As orders are completed, the (WCS) supervisor reports back to (WMS) management the status of the orders along with any pertinent order information. Best practice Have one WCS in your distribution center Design a lean interface between WCS and WMS Define responsibilities between WCS and WMS (who owns the inventory?) See also Document automation for Supply Chain and Logistics Enterprise resource planning (ERP) Inventory management software Manufacturing resource planning Warehouse management system Warehouse execution system References Modern Materials Handling Magazine: "Simon & Schuster Implements Warehouse Control System" by Bob Trebilcock Modern Materials Handling Magazine: "Oriental Trading Company gets its warehouse under control" by Bob Trebilcock DC Velocity: "Who's In Charge" by James Aaron Cooke "A Top Tier WCS Increases Productivity in Omni-Channel Distribution" by Jen Maloney DM Review: "Warehouse Control Systems Expand" by Thomas R. Cutler "List of WCS's" added by Rudi Lueg Industrial computing
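The control-hierarchy passage above describes the WCS taking routing directives from the host where they exist and otherwise applying its own sortation algorithm at each divert point. The sketch below is a minimal, hypothetical Python illustration of that decision logic; the carton fields, the host_routing table, the lane names and the least-loaded tie-break rule are all invented for the example and are not drawn from any particular WCS product.

from dataclasses import dataclass

@dataclass
class Carton:
    barcode: str
    zone: str  # destination zone read from the shipping label

def assign_divert(carton, host_routing, lane_zones, lane_load, recirc="RECIRC"):
    """Pick a divert lane for one carton at a sorter decision point.

    host_routing: {barcode: lane} directives downloaded from the WMS/host.
    lane_zones:   {lane: zone} static sorter configuration.
    lane_load:    {lane: cartons currently queued}, used to balance work.
    """
    # 1. A host directive wins: the WMS owns the business decision when it has made one.
    if carton.barcode in host_routing:
        return host_routing[carton.barcode]
    # 2. Otherwise apply the WCS sortation rule: route by zone, least-loaded lane first.
    candidates = [lane for lane, zone in lane_zones.items() if zone == carton.zone]
    if candidates:
        lane = min(candidates, key=lambda l: lane_load.get(l, 0))
        lane_load[lane] = lane_load.get(lane, 0) + 1
        return lane
    # 3. No usable route: recirculate the carton for manual handling.
    return recirc

# Example: one carton pre-assigned by the host, one sorted by the WCS rule.
zones = {"LANE1": "EAST", "LANE2": "EAST", "LANE3": "WEST"}
load = {"LANE1": 3, "LANE2": 1}
print(assign_divert(Carton("C100", "EAST"), {"C100": "LANE3"}, zones, load))  # LANE3
print(assign_divert(Carton("C101", "EAST"), {}, zones, load))                 # LANE2

Checking the host directive before the local rule mirrors the hierarchy described above: the WCS relieves the host of real-time work but does not override a decision the host has already made.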
Warehouse control system
[ "Technology", "Engineering" ]
1,108
[ "Industrial computing", "Industrial engineering", "Automation" ]
14,604,898
https://en.wikipedia.org/wiki/OmpA-like%20transmembrane%20domain
The OmpA-like transmembrane domain is an evolutionarily conserved domain of bacterial outer membrane proteins. This domain consists of an eight-stranded beta barrel. OmpA is the predominant cell surface antigen in enterobacteria, found in about 100,000 copies per cell. The expression of OmpA is tightly regulated by a variety of mechanisms. One mechanism by which OmpA expression is regulated in Vibrio species is by an antisense non-coding RNA called VrrA. Structure The structure consists of an eight-stranded up-and-down beta-barrel. The strands are connected by four extracellular loops and three intracellular turns. Function Numerous OmpA-like membrane-spanning domains contribute to bacterial virulence by a variety of mechanisms, such as binding to host cells or to immune regulators such as Factor H. Notable examples include E. coli OmpA and Yersinia pestis Ail. Several of these proteins are vaccine candidates. E. coli OmpA was shown to make specific interactions with the human glycoprotein Ecgp on brain microvascular endothelial cells. Cronobacter sakazakii is a foodborne pathogen causing meningitis in neonates; it was shown to bind fibronectin via OmpA, and this played a significant role in invasion of the blood–brain barrier. The Y. pestis protein Ail binds to laminin and heparin, thereby allowing bacterial attachment to host cells. The Borrelia afzelii protein BAPKO_0422 is an OmpA-like transmembrane domain and binds to human Factor H. See also OmpA domain References Protein domains Outer membrane proteins
OmpA-like transmembrane domain
[ "Biology" ]
340
[ "Protein domains", "Protein classification" ]
14,604,961
https://en.wikipedia.org/wiki/Outer%20membrane%20phospholipase%20A1
Outer membrane phospholipase A1 (OMPLA) is an acyl hydrolase with a broad substrate specificity (EC 3.1.1.32) from the bacterial outer membrane. It has been proposed that Ser164 is the active-site residue of the protein (UniProt). This integral membrane phospholipase is found in many Gram-negative bacteria. The role of OMPLA has been most thoroughly studied in Escherichia coli, where it participates in the secretion of bacteriocins. Bacteriocin release is triggered by a lysis protein (bacteriocin release protein or BRP), followed by a phospholipase-dependent accumulation of lysophospholipids and free fatty acids in the outer membrane. The reaction products enhance the permeability of the outer membrane, which allows the semispecific secretion of bacteriocins. One speculative function of OMPLA is related to organic solvent tolerance in bacteria. Structurally, it consists of a 12-stranded antiparallel beta-barrel with a convex and a flat side. The active site residues are exposed on the exterior of the flat face of the beta-barrel. The activity of the enzyme is regulated by reversible dimerisation. Dimer interactions occur exclusively in the membrane-embedded parts of the flat side of the beta-barrel, with polar residues embedded in an apolar environment forming the key interactions. The active site His and Ser residues are located at the exterior of the beta-barrel, at the outer leaflet side of the membrane. This location indicates that under normal conditions the substrate and the active site are physically separated, since in E. coli phospholipids are exclusively located in the inner leaflet of the outer membrane. References Protein domains Outer membrane proteins
Outer membrane phospholipase A1
[ "Biology" ]
389
[ "Protein domains", "Protein classification" ]
14,605,106
https://en.wikipedia.org/wiki/Maltoporin
Maltoporins (or LamB porins) are bacterial outer membrane proteins of the porin family. Maltoporin forms a trimeric structure which facilitates the diffusion of maltodextrins across the outer membrane of Gram-negative bacteria. The membrane channel is formed by an antiparallel beta-barrel. Most pores used for diffusion contain only 16 antiparallel strands, but maltoporin has 18. The structure of maltoporin contains long loops and short turns. The long loops are in contact with the cell exterior and the turns are in contact with the periplasm. This channel is involved in sugar transport. The sugar initially binds to the first greasy residue with van der Waals forces. The sugar continues through the channel by guided diffusion of the sugar along the greasy residues which form a "slide". Maltoporin's original name was LamB because it is a bacteriophage lambda receptor. This channel is specific for maltosaccharides, whose affinity for the channel increases as the length of the chain increases. References Protein domains Outer membrane proteins
Maltoporin
[ "Biology" ]
225
[ "Protein domains", "Protein classification" ]
14,605,198
https://en.wikipedia.org/wiki/Outer%20membrane%20efflux%20protein
The outer membrane efflux protein is a protein family member that forms trimeric (three-piece) channels allowing the export of a variety of substrates in gram-negative bacteria. Each efflux protein is composed of two repeats. The trimeric channel is composed of a 12-stranded beta-barrel that spans the outer membrane, and a long tail helical barrel that spans the periplasm. Examples include the Escherichia coli TolC outer membrane protein, which is required for proper expression of outer membrane protein genes; the Rhizobium nodulation protein; and the Pseudomonas FusA protein, which is involved in resistance to fusaric acid. References Protein domains Protein families Outer membrane proteins
Outer membrane efflux protein
[ "Biology" ]
149
[ "Protein families", "Protein domains", "Protein classification" ]
14,605,319
https://en.wikipedia.org/wiki/Opacity%20porins
Opacity family porins are a family of porins from pathogenic Neisseria. These bacteria possess a repertoire of phase-variable opacity proteins that mediate various pathogen/host cell interactions. These proteins are related to OmpA-like transmembrane domain family. References Protein domains Protein families Outer membrane proteins
Opacity porins
[ "Biology" ]
68
[ "Protein families", "Protein domains", "Protein classification" ]
14,605,490
https://en.wikipedia.org/wiki/FadL%20outer%20membrane%20protein%20transport%20family
Outer membrane transport proteins (OMPP1/FadL/TodX) family includes several proteins that are involved in toluene catabolism and degradation of aromatic hydrocarbons. This family also includes protein FadL involved in translocation of long-chain fatty acids across the outer membrane. It is also a receptor for the bacteriophage T2. Notes References Protein families Outer membrane proteins
FadL outer membrane protein transport family
[ "Biology" ]
85
[ "Protein families", "Protein classification" ]
9,434,174
https://en.wikipedia.org/wiki/Magnesium%20benzoate
Magnesium benzoate is a chemical compound formed from magnesium and benzoic acid. It was once used to treat gout and arthritis. References Benzoates Magnesium compounds
Magnesium benzoate
[ "Chemistry" ]
35
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
9,434,265
https://en.wikipedia.org/wiki/MRC%201138-262
The Spiderweb Galaxy (PGC 2826829, MRC 1138-262) is an irregular galaxy located in the Hydra constellation, with a redshift of 2.156, corresponding to a distance of 10.6 billion light-years from the Milky Way. An image taken by the Hubble Space Telescope was released on 12 October 2006. The galaxy had been thoroughly studied through radio astronomy, but it was not until the Hubble Telescope took a mosaic of photographs from May 17 to May 22, 2005, that its true nature became known. This was documented for the first time on October 10, 2006, in The Astrophysical Journal Letters, volume 650, number 1. The photography was carried out using the Advanced Camera for Surveys by a team led by George K. Miley of the Netherlands' Leiden Observatory. General Information Radio-astronomical observations seem to indicate that this is a typical, massive elliptical galaxy, of the type that, with time, transforms into the center of a galactic cluster. However, observations in the band of ultraviolet light indicate that the galaxy possesses an irregular nucleus and a series of "knots" strongly emitting radiation in its interior. Observations in the spectrum of visible light indicate that, in reality, this is a galaxy being formed through the fusion of galaxy groups and clusters, but in a continuous structure, like a spiderweb, with a massive central nucleus and various smaller ones on the periphery. Given that the observations are from 2,000 million years after the Big Bang, this study is an important part of understanding galaxy formation and evolution. History In 1948, the Council for Scientific and Industrial Research (now the Commonwealth Scientific and Industrial Research Organisation), part of the University of Sydney, created the Radiophysics Division, led by Bernard Yarnton Mills, who, the next year, began developing new tools and techniques for radio astronomy. The result was the Mills Cross radio telescope, installed in Fleurs (now Badgerys Creek), which began operating in 1964. Between 1958 and 1961, astronomers Bernard Mills, Bruce Slee, and Eric Hill published the Catalogue of Radio Sources (later known as the MSH Catalogue), in which some 2,200 radio sources were introduced. This work was the first study at these wavelengths in the southern hemisphere. It classifies sources of radio emission into lists that correspond to the hour of right ascension, ordered by the minute. In the second part, titled A Catalogue of Radio Sources Between Declinations -20° and -50°, in the list at 11:36, there appears a never-before-observed object that has a spectral flux density of 28 × 10−26 W m−2 Hz−1 (equivalent to 28 fu, the unit of flux density used in astronomy). Between 1964 and 1968, the astronomers at the Parkes Observatory, in operation since 1961, compiled the Parkes Catalogue, with the intention of expanding the findings of the MSH Catalogue. The first part included 297 sources of radio emission with a declination between -60° and -90°, 51 of which were previously unknown. The Parkes Catalogue provides the first appearance of the code 1138–26, in which 1138 corresponds to the hour and minute of right ascension and -26 to the degrees of declination, the same features identified in the MSH Catalogue in list 11, number 27. This entry corresponds to the Spiderweb Galaxy. Measurements See also Galaxy merger Interacting galaxy References External links ESA, Flies in a spider’s web: galaxy caught in the making, 12 October 2006 SIMBAD, MRC_1138-262 Interacting galaxies 2826829 Hydra (constellation)
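The opening sentence ties the redshift z = 2.156 to a distance of 10.6 billion light-years, which is a light-travel (lookback) figure rather than a present-day comoving distance. It can be reproduced, under an assumed flat ΛCDM cosmology, with a short astropy sketch; the exact value moves between roughly 10.6 and 10.8 billion years depending on which parameter set is adopted.

from astropy.cosmology import Planck18
import astropy.units as u

z = 2.156  # redshift of the Spiderweb Galaxy quoted above

# Lookback time: how long the light has been travelling to reach us.
lookback = Planck18.lookback_time(z)
print(lookback.to(u.Gyr))          # roughly 10.7-10.8 Gyr with Planck18 parameters

# Equivalent check via the age of the universe now and at the epoch of emission.
print(Planck18.age(0) - Planck18.age(z))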
MRC 1138-262
[ "Astronomy" ]
736
[ "Hydra (constellation)", "Constellations" ]
9,434,311
https://en.wikipedia.org/wiki/Calcium%20chlorate
Calcium chlorate is the calcium salt of chloric acid, with the chemical formula Ca(ClO3)2. Like other chlorates, it is a strong oxidizer. Production Calcium chlorate is produced by passing chlorine gas through a hot suspension of calcium hydroxide in water, producing calcium hypochlorite, which disproportionates when heated with excess chlorine to give calcium chlorate and calcium chloride: 6 Ca(OH)2 + 6 Cl2 → Ca(ClO3)2 + 5 CaCl2 + 6 H2O This is also the first step of the Liebig process for the manufacture of potassium chlorate. In theory, electrolysis of hot calcium chloride solution will produce the chlorate salt, analogous to the process used for the manufacture of sodium chlorate. In practice, electrolysis is complicated by calcium hydroxide depositing on the cathode, preventing the flow of current. Reactions When concentrated solutions of calcium chlorate and potassium chloride are combined, potassium chlorate precipitates: Ca(ClO3)2 + 2 KCl → 2 KClO3 + CaCl2 This is the second step of the Liebig process for the manufacture of potassium chlorate. Solutions of calcium chlorate react with solutions of alkali carbonates to give a precipitate of calcium carbonate and the alkali chlorate in solution: Ca(ClO3)2 + Na2CO3 → 2 NaClO3 + CaCO3 On strong heating, calcium chlorate decomposes to give oxygen and calcium chloride: Ca(ClO3)2 → CaCl2 + 3 O2 Cold, dilute solutions of calcium chlorate and sulfuric acid react to give a precipitate of calcium sulfate and chloric acid in solution: Ca(ClO3)2 + H2SO4 → 2 HClO3 + CaSO4 Contact with strong sulfuric acid can result in explosions due to the instability of concentrated chloric acid. Contact with ammonium compounds can also cause violent decomposition due to the formation of unstable ammonium chlorate. Uses Calcium chlorate has been used as an herbicide, like sodium chlorate. Calcium chlorate is occasionally used in pyrotechnics, as an oxidizer and pink flame colorant. Its hygroscopic nature and incompatibility with other common pyrotechnic materials (such as sulfur) limit its utility in these applications. References Chlorates Calcium compounds Oxidizing agents
Calcium chlorate
[ "Chemistry" ]
542
[ "Chlorates", "Redox", "Oxidizing agents", "Salts" ]
9,434,341
https://en.wikipedia.org/wiki/Cobalt%28II%29%20nitrate
Cobalt nitrate is the inorganic compound with the formula Co(NO3)2·xH2O. It is a salt of cobalt(II). The most common form is the hexahydrate Co(NO3)2·6H2O, which is a red-brown deliquescent salt that is soluble in water and other polar solvents. Composition and structures As well as the anhydrous compound Co(NO3)2, several hydrates of cobalt(II) nitrate exist. These hydrates have the chemical formula Co(NO3)2·nH2O, where n = 0, 2, 4, 6. Anhydrous cobalt(II) nitrate adopts a three-dimensional polymeric network structure, with each cobalt(II) atom approximately octahedrally coordinated by six oxygen atoms, each from a different nitrate ion. Each nitrate ion coordinates to three cobalts. The dihydrate is a two-dimensional polymer, with nitrate bridges between Co(II) centres and hydrogen bonding holding the layers together. The tetrahydrate consists of discrete, octahedral [(H2O)4Co(NO3)2] molecules. The hexahydrate is better described as hexaaquacobalt(II) nitrate, [Co(OH2)6][NO3]2, as it consists of discrete [Co(OH2)6]2+ and [NO3]− ions. Above 55 °C, the hexahydrate converts to the trihydrate and at higher temperatures to the monohydrate. Uses and reactions It is commonly reduced to high-purity cobalt metal. It can be absorbed onto various catalyst supports for use in Fischer–Tropsch catalysis. It is used in the preparation of dyes and inks. Cobalt(II) nitrate is a common starting material for the preparation of coordination complexes such as cobaloximes, carbonatotetraamminecobalt(III), and others. Production The hexahydrate is prepared by treating metallic cobalt or one of its oxides, hydroxides, or carbonates with nitric acid: Co + 4 HNO3 + 4 H2O → Co(H2O)6(NO3)2 + 2 NO2 CoO + 2 HNO3 + 5 H2O → Co(H2O)6(NO3)2 CoCO3 + 2 HNO3 + 5 H2O → Co(H2O)6(NO3)2 + CO2 References Cobalt(II) compounds Nitrates Oxidizing agents
Cobalt(II) nitrate
[ "Chemistry" ]
547
[ "Nitrates", "Redox", "Oxidizing agents", "Salts" ]
9,434,427
https://en.wikipedia.org/wiki/F-box%20protein
F-box proteins are proteins containing at least one F-box domain. The first identified F-box protein is one of three components of the SCF complex, which mediates ubiquitination of proteins targeted for degradation by the 26S proteasome. Core components The F-box domain is a protein structural motif of about 50 amino acids that mediates protein–protein interactions. It has a consensus sequence and varies at only a few positions. It was first identified in cyclin F. The F-box motif of Skp2, consisting of three alpha-helices, interacts directly with the SCF protein Skp1. F-box domains commonly occur in proteins together with other protein–protein interaction motifs such as leucine-rich repeats and WD repeats, which are thought to mediate interactions with SCF substrates. Function F-box proteins have also been associated with cellular functions such as signal transduction and regulation of the cell cycle. In plants, many F-box proteins are represented in gene networks broadly regulated by microRNA-mediated gene silencing via RNA interference. F-box proteins are involved in many aspects of plant vegetative and reproductive growth and development. For example, the F-box protein FOA1 is involved in abscisic acid (ABA) signaling, affecting seed germination. ACRE189/ACIF1 regulates cell death and defense upon pathogen recognition in tobacco and tomato plants. In human cells, under high-iron conditions, two iron atoms stabilise the F-box protein FBXL5, and the complex then mediates the ubiquitination of IRP2. Regulation F-box protein levels can be regulated by different mechanisms. Regulation can occur via the protein degradation process and via association with the SCF complex. For example, in yeast, the F-box protein Met30 can be ubiquitinated in a cullin-dependent manner.[11] References Further reading External links Proteins Protein domains Protein structural motifs
F-box protein
[ "Chemistry", "Biology" ]
414
[ "Biomolecules by chemical classification", "Protein classification", "Protein structural motifs", "Protein domains", "Molecular biology", "Proteins" ]
9,434,526
https://en.wikipedia.org/wiki/The%20Sleeping%20Prince%20%28fairy%20tale%29
The Sleeping Prince is a Greek fairy tale collected by in Folktales of Greece. It is Aarne-Thompson 425G: False Bride takes the heroine's place as she tries to stay awake; recognition when heroine tells her story. This is also found as part of Nourie Hadig, and a literary variant forms part of the frame story of the Pentamerone. The tale type was also closely related to AaTh 437, "The Supplanted Bride (The Needle Prince)". However, the last major revision of the International Folktale Classification Index, written in 2004 by German folklorist Hans-Jörg Uther, subsumed tale type AaTh 437 as new type ATU 894, "". Synopsis A king had only his daughter, his wife having died, and had to go to war. The princess promised to stay with her nurse while he was gone. One day, an eagle came by and said she would have a dead man for a husband; it came again the next day. She told her nurse, and her nurse told her to tell the eagle to take her to him. The third day, it came, and she asked; it brought her to a palace, where a prince slept like the dead, and a paper said that whoever had pity on him must watch for three months, three weeks, three days, three hours, and three half-hours without sleeping, and then, when he sneezed, she must bless him and identify herself as the one who watched. He and the whole castle would wake, and he would marry the woman. She watched three months, three weeks, and three days. Then she heard someone offering to hire maids. She hired one for company. The maid persuaded her to sleep, the prince sneezed, and the maid claimed him. She told him to let the princess sleep and when she woke, set to tend the geese. (The fairy tale starts to refer to the prince as the king.) The king had to go to war. He asked the queen what she wanted, and she asked for a golden crown. He asked the goose-girl, and she asked for the millstone of patiences, the hangman's rope, and the butcher's knife, and if he did not bring them, his ship would go neither backward nor forward. He forgot them, and his ship would not move; an old man asked him if he had promised anything, so he bought them. He gave his wife the crown and the other things to the goose-girl. That evening, he went down to her room. She told her story to the things and asked them what she should do. The butcher's knife said to stab herself; the rope, to hang herself; the millstone, to have patience. She asked for the rope again and went to hang herself. The king broke in and saved her. He declared she was his wife and he would hang the other on the rope. She told him only to send her away. They went to her father for his blessing. Analysis Tale type Richard MacGillivray Dawkins described that the "essence" of the tale type involves the heroine being destined to marry "a dead man", which is not dead at all. The prince, in fact, is under a magical sleep in a room in a castle somewhere. The heroine finds him and stays by his side on a long vigil. The heroine hires a maid or slave to help her in the long vigil, but she replaces the heroine and takes credit for awakening the prince. At the end of the tale, the prince, now back to life, is asked by a broken heroine to bring her ("almost always") three objects: a knife, a rope to hang herself with and a stone of patience. 
Motifs The tale type may start with one of two opening episodes: a bird announces to the heroine that she will marry a dead man, and she decides to look for him; or the heroine is with her family in a field or in the forest, goes astray and ends up in the dead prince's tomb, where she begins her long vigil over his body. Variants Distribution Greek scholars Anna Angelopoulou and Aigle Broskou locate variants of type AaTh 425G in Greece, Turkey, Southern Italy, Sicily, Spain, North Africa (among the Berbers) and even in Poland. Israeli professor Dov Noy reported that tale type 894 was "very popular in Oriental literature", with variants found in India, Iran, Egypt and regionally in Europe (southern and eastern). As for type 437, Richard Dorson stated that it appears "sporadically in Europe", but it is "better known in India". Indian scholar A. K. Ramanujan states that the tale type is known in Europe as "The Needle Prince". In this regard, according to the Enzyklopädie des Märchens, type 437 is reported in Europe (South, Southeastern, Eastern and Northeast), in the Caucasus, the Middle East, North Africa, Central Asia and India. Europe Scholars Ibrahim Muhawi and Sharif Kanaana stated that "in European tradition" type AaTh 894 is found in association with the story of "The Sleeping Prince". Professor Jack V. Haney stated that type 437 is more common in Ukraine, but "uncommon" in Western Europe. Italy A Sicilian variant was collected by Laura Gonzenbach with the title Der böse Schulmeister und die wandernde Königstochter ("The Evil Schoolmaster and the Wandering Princess"). Greece According to scholars Anna Angélopoulos and Marianthi Kaplanoglou, the tale type AaTh 425G (now included in the general subtype ATU 425A after 2004) is the "most widely disseminated subtype in Greece, with 118 versions". In another Greek variant, The Knife of Slaughter, the Whet-stone of Patience and the Unmelting Candle, a girl is embroidering when a bird chirps that she is to marry a "lifeless man". One day, she enters a neighbouring house and sees the body of a prince holding a letter in his hand, asking for someone to hold a vigil for three nights, three days and three weeks. Nearing the end of the vigil, she takes in a gypsy as a companion, who takes the credit for the vigil. After the prince and the gypsy marry, the girl asks the prince to bring her the titular items: the Knife of Slaughter, the Whet-stone of Patience and the Unmelting Candle. Spain A Hispanist scholar located a Spanish tale he numbered as type *445B (a number not added to the revision of the international index at the time). In this story, the princess holds a vigil over a king who will only awake on St. John's Day. She buys a slave woman for company, who takes her place at the king's bed and passes herself off as his saviour. The despondent princess asks the prince to bring her two objects: a hard stone and the branch of bitterness. The king learns these are objects requested by people who are on the verge of taking their own lives. Scholars Wolfram Eberhard and Pertev Naili Boratav considered this story so close to the Turkish tales that they believed it to be a version that developed locally. Armenia According to Armenian scholarship, Armenia also registers similar tales about the heroine's confession to the object of patience. In Armenian tales, the object is called Sabri Xrcig or Doll of Patience, related to the cycle of stories called Le Prince endormi ("The Sleeping Prince"). 
The "Doll of Patience" (Armenian: Սաբրի խրծիկ; Sabri khrtsik) is a dowry gift, given to the newlywed bride and which acts as her confidante as she moves to an unknown household after marriage. Professor Susan Hoogasian-Villa collected two variants from Armenian tellers in Detroit. In the first, titled Saber Dashee, during a pilgrimage to Jerusalem, a girl loses her way from her family and enters an abandoned house. Inside, a man under a cursed sleep, on whom she has to bear ten years on a vigil. She gets replaced by a gypsy girl, who marries the prince after the vigil. The heroine asks for the Saber Dashee and pours out her story to it. In a second story, The Dead Bridegroom, the trees and the river predict that a girl will marry a dead man. The girl enters a palace that locks behind her, then sees a man in a cursed-like sleep. Hoogasian-Villa noted that it follows very closely the outline of the first variant. Albania In an Albanian tale published by Lucy Garnett with the title The Maiden who was Promised to the Sun, a queen prays to the Sun to give her one daughter, and the Sun agrees, with the condition that she relinquishes the girl to him when she is of age. It does happen and the girl is taken to the Sun. At the Sun's abode, there lives a Koutchedra (kulshedra) that hungers to devour the maiden. She escapes with the help of a stag and returns home (tale type ATU 898, "The Girl Promised to the Sun"). In the second part of the story, the girl enters a garden and opens a locked gate that closes itself behind her. She discovers the petrified body of a prince and she decides to release him from this curse, by holding a vigil for three days, three nights and three weeks without sleeping. Nearing the end of the trial, and feing tired, she hires a slave woman to continue the vigil in her place, when the girl with reassume her position by the prince's side. The slave woman ends up replacing the princess as the man's saviour and marries him. The girl laments her fate to the "Stone of Patience" and the prince overhears her story. Lithuania Lithuanian folklorist , in his analysis of Lithuanian folktales (published in 1936), listed one variant of type *446 (a type not indexed in the international classification, at the time), under the banner Miegas karalaitis ("The Sleeping Prince"). In the only recorded tale, the princess finds the coffin of the sleeping prince and a note to hold a vigil for three nights. Latvia According to the Latvian Folktale Catalogue, in type 437, Neīstā līgava ("The False Bride"), the heroine helps break the curse on the whole kingdom, until a girl comes and takes the credit for the deed. The true heroine asks the prince to bring her a stone or a doll, to which she tells her story. Asia Turkey According to Dov Noy, the Turkish Folktale Catalogue (Typen türkischer Volksmärchen, or TTV) by Wolfram Eberhard and Pertev Naili Boratav registered 38 variants in the country. In their joint work, the Turkish tales were grouped under type TTV 185, . In a Turkish variant collected by folklorist Ignác Kúnos with the title Stone-Patience and Knife-Patience, a poor woman's daughter stays at home when a bird chirps that "death" is her kismet ('fate', 'destiny'). The situation repeats itself, to the mother's concern. She decides to let her daughter walk a bit with the neighbour's daughters to put her mind at ease. 
While the girl is walking with the others, a huge wall rises out of the ground, isolating the poor woman's daughter from them; they return to the village to inform the old woman of the occurrence. As for the girl: she finds a door in the wall, opens it and is transported to a grand palace. The girl opens all the doors, finding rooms filled with treasures and gems, and behind the fortieth door lies a Bey on a bed, holding a note that says a damsel must stay by his side for 40 days to find her kismet. So she decides to follow the note. Time passes; the girl meets a black woman outside the palace and brings her in to help with her vigil. The Bey awakes, sees the black girl and thinks she is his saviour. At the end of the tale, the girl asks the Bey to bring her a stone-of-patience of a yellow colour and a knife-of-patience with a brown handle. She gets both items: she tells her woes to the stone, then reaches for the knife, but the Bey appears in the nick of time to stop her attempt. Iran According to a study by Russian scholar Vladimir Minorsky, the tale type appears in Iran as type 437, Sang-e Sabur, with varied starting episodes: either a voice predicts that the heroine's destiny lies with a dead man, or the heroine and her family are in a desert. Either way, the heroine enters a palace alone, the door locks her in, and she meets a prince lying on a slab, his body full of needles. She removes the needles for 40 days, but a Gypsy girl replaces her and marries the prince. At the end of the tale, the heroine tells her woes to a stone of patience and is overheard by the prince. Later, a German scholar reported 22 variants of tale type 894, Der Geduldstein, across Iranian sources. In the Iranian tale, the heroine's destiny is predicted to be an unhappy one; she drifts away until she reaches a garden and enters a palace, where a youth is lying as if dead, his body prickled with several pins; the heroine helps the youth for almost 40 days, until she tires herself and buys a slave woman to cover for her. This causes the youth, now awake, to mistake the slave woman for his true saviour and marry her, taking the heroine as their maidservant. At the end of the tale, the heroine asks the prince to bring a patience stone, which she tells her woes to. In a Persian tale collected by Emily Lorimer and David Lockhart Robertson Lorimer, from Kermani, The Story of the Marten-Stone, a king's daughter finds a castle with a sleeping prince inside, his body covered with needles. She begins a long and strenuous vigil, picking out each needle over the next 40 days and 40 nights. After her slave girl replaces her as the prince's saviour, she asks for a marten-stone to pour out her woes to. Uzbekistan In an Uzbek tale titled Der brennende Stein or "Горючий камень" ("The Burning Stone"), a girl named Rose Bloom is fetching flowers when she follows a trail deep into a mansion. Inside it lies the body of a man, all riddled with pins. The girl extracts each pin carefully, until she begins to get tired. She hires a servant girl from a passing caravan to continue the vigil over him. The man wakes up and mistakes the servant girl for Rose Bloom. At the end of the tale, Rose Bloom asks the prince to get her a burning stone: she plans to tell her sorrows to the stone until it bursts into a pyre, and intends to throw herself into it.
See also Pentamerone The Lord of Lorn and the False Steward The Goose Girl The Young Slave The Maiden with the Rose on her Forehead The Bay-Tree Maiden Sleeping Beauty The Dead Prince and the Talking Doll References Further reading Cardigos, Isabel (2007). "Em Busca Do Belo Adormecido No Mundo Dos Contos Tradicionais". In: Povos E Culturas, n. 11 (Janeiro), 11-31. https://doi.org/10.34632/povoseculturas.2007.8780. (In Portuguese) "L'épingle qui endort". In: Cosquin, Emmanuel. Les Contes indiens et l'occident: petites monographies folkloriques à propos de contes Maures. Paris: Édouard Champion. 1922. pp. 95–190. Dawkins, R. M. (1949). "The Story of Griselda". In: Folklore, 60:4, pp. 363–374. DOI: 10.1080/0015587X.1949.9717955 Goldberg, Christine. "The Knife of Death and the Stone of Patience". In: E.L.O.: Estudos de Literatura Oral. Spring 1995. pp. 103–117 Katrinaki, Emmanouela. Le cannibalisme dans le conte merveilleux grec. Questions d’interprétation et de typologie. Helsinki: Academia Scientiarum Fennica. 2008. Katrinaki, Emmanouela. "Le secret du maitre d'ecole. A propos du conte type ATU 894". In: Cahiers de litterature orale n. 57-58. 2005. pp. 139–164. Sleeping Prince Sleeping Prince Sleep in mythology and folklore ATU 400-459 ATU 850-999 False hero Fairy tales about princes
The Sleeping Prince (fairy tale)
[ "Biology" ]
3,602
[ "Behavior", "Sleep", "Sleep in mythology and folklore" ]
9,435,679
https://en.wikipedia.org/wiki/Linamarin
Linamarin is a cyanogenic glucoside found in the leaves and roots of plants such as cassava, lima beans, and flax. It is a glucoside of acetone cyanohydrin. Upon exposure to enzymes and gut flora in the human intestine, linamarin and its methylated relative lotaustralin can decompose to the toxic chemical hydrogen cyanide; hence food uses of plants that contain significant quantities of linamarin require extensive preparation and detoxification. Ingested and absorbed linamarin is rapidly excreted in the urine, and the glucoside itself does not appear to be acutely toxic. Consumption of cassava products with low levels of linamarin is widespread in the lowland tropics. Ingestion of food prepared from insufficiently processed cassava roots with high linamarin levels has been associated with dietary toxicity, particularly the upper motor neuron disease konzo, first described among African populations by Trolli and later studied through the research network initiated by Hans Rosling. However, the toxicity is believed to be induced by ingestion of acetone cyanohydrin, the breakdown product of linamarin. Dietary exposure to linamarin has also been reported as a risk factor in developing glucose intolerance and diabetes, although studies in experimental animals have been inconsistent in reproducing this effect and may indicate that the primary effect is in aggravating existing conditions rather than inducing diabetes on its own. The generation of cyanide from linamarin is usually enzymatic and occurs when linamarin is exposed to linamarase, an enzyme normally expressed in the cell walls of cassava plants. Because the resulting cyanide derivatives are volatile, processing methods that induce such exposure are common traditional means of cassava preparation; foodstuffs are usually made from cassava after extended blanching, boiling, or fermentation. Food products made from cassava plants include garri (toasted cassava tubers), porridge-like fufu, the dough agbelima, and cassava flour. Research efforts have developed a transgenic cassava plant that stably downregulates linamarin production via RNA interference. References Plant toxins Cyanogenic glycosides
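As a sketch of the enzymatic breakdown pathway described above, the overall stoichiometry can be written as a two-step scheme. This is my own rendering of the standard description (hydrolysis by the β-glucosidase linamarase, followed by decomposition of acetone cyanohydrin), not a set of equations quoted from the article:

```latex
% Step 1: linamarase-catalysed hydrolysis of linamarin
% Step 2: decomposition of acetone cyanohydrin to acetone and hydrogen cyanide
\begin{align*}
\underbrace{\mathrm{C_{10}H_{17}NO_6}}_{\text{linamarin}} + \mathrm{H_2O}
  &\xrightarrow{\text{linamarase}}
  \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{D-glucose}}
  + \underbrace{\mathrm{(CH_3)_2C(OH)CN}}_{\text{acetone cyanohydrin}} \\
\mathrm{(CH_3)_2C(OH)CN}
  &\longrightarrow
  \underbrace{\mathrm{(CH_3)_2CO}}_{\text{acetone}} + \mathrm{HCN}
\end{align*}
```

Traditional processing works because the hydrogen cyanide produced in the second step is volatile and is driven off during the extended blanching, boiling, or fermentation mentioned above.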
Linamarin
[ "Chemistry" ]
477
[ "Chemical ecology", "Plant toxins" ]
9,435,784
https://en.wikipedia.org/wiki/Comparative%20physiology
Comparative physiology is a subdiscipline of physiology that studies and exploits the diversity of functional characteristics of various kinds of organisms. It is closely related to evolutionary physiology and environmental physiology. Many universities offer undergraduate courses that cover comparative aspects of animal physiology. According to Clifford Ladd Prosser, "Comparative Physiology is not so much a defined discipline as a viewpoint, a philosophy." History Originally, as narrated in a recent history of the field, physiology focused primarily on human beings, in large part from a desire to improve medical practices. When physiologists first began comparing different species it was sometimes out of simple curiosity to understand how organisms work but also stemmed from a desire to discover basic physiological principles. This use of specific organisms convenient to study specific questions is known as the Krogh Principle. Methodology C. Ladd Prosser, a founder of modern comparative physiology, outlined a broad agenda for comparative physiology in his 1950 edited volume (see summary and discussion in Garland and Carter): 1. To describe how different kinds of animals meet their needs. This amounts to cataloging functional aspects of biological diversity, and has recently been criticized as "stamp collecting" with the suggestion that the field should move beyond that initial, exploratory phase. 2. The use of physiological information to reconstruct phylogenetic relationships of organisms. In principle physiological information could be used just as morphological information or DNA sequence is used to measure evolutionary divergence of organisms. In practice, this has rarely been done, for at least four reasons: physiology doesn't leave many fossil cues, it can't be measured on museum specimens, it is difficult to quantify as compared with morphology or DNA sequences, and physiology is more likely to be adaptive than DNA, and so subject to parallel and convergent evolution, which confuses phylogenetic reconstruction. 3. To elucidate how physiology mediates interactions between organisms and their environments. This is essentially physiological ecology or ecological physiology. 4. To identify "model systems" for studying particular physiological functions. Examples of this include using squid giant axons to understand general principles of nerve transmission, using rattlesnake tail shaker muscles for measurement of in vivo changes in metabolites (because the whole animal can be put in an NMR machine), and the use of ectothermic poikilotherms to study effects of temperature on physiology. 5. To use the "kind of animal" as an experimental variable. "While other branches of physiology use such variables as light, temperature, oxygen tension, and hormone balance, comparative physiology uses, in addition, species or animal type as a variable for each function." 25 years later, Prosser put things this way: "I like to think of it as that method in physiology which uses kind of organism as one experimental variable." Comparative physiologists often study organisms that live in "extreme" environments (e.g., deserts) because they expect to find especially clear examples of evolutionary adaptation. One example is the study of water balance in desert-inhabiting mammals, which have been found to exhibit kidney specializations. Similarly, comparative physiologists have been attracted to "unusual" organisms, such as very large or small ones. As an example, of the latter, hummingbirds have been studied. 
As another example, giraffe have been studied because of their long necks and the expectation that this would lead to specializations related to the regulation of blood pressure. More generally, ectothermic vertebrates have been studied to determine how blood acid-base balance and pH change as body temperature changes. Funding In the United States, research in comparative physiology is funded by both the National Institutes of Health and the National Science Foundation. Societies A number of scientific societies feature sections on comparative physiology, including: American Physiological Society Australian & New Zealand Society for Comparative Physiology & Biochemistry Canadian Society of Zoologists Japanese Society for Comparative Physiology and Biochemistry Society for Integrative and Comparative Biology Society for Experimental Biology Biographies Knut Schmidt-Nielsen (1915–2007) was a major figure in vertebrate comparative physiology, serving on the faculty at Duke University for many years and training a large number of students (obituary). He also authored several books, including an influential text, all known for their accessible writing style. Grover C. Stephens (1925–2003) was a well-known invertebrate comparative physiologist, serving on the faculty of the University of Minnesota until becoming the founding chairman of the Department of Organismic Biology at the University of California at Irvine in 1964. He was the mentor for numerous graduate students, many of whom have gone on to further build the field (obituary). He authored several books and in addition to being an accomplished biologist was also an accomplished pianist and philosopher. Some journals that publish articles in comparative animal physiology American Journal of Physiology - Regulatory, Integrative and Comparative Physiology Annual Review of Physiology Comparative Biochemistry and Physiology Ecological and Evolutionary Physiology (formerly Physiological and Biochemical Zoology) Integrative and Comparative Biology Journal of Comparative Physiology Journal of Experimental Biology See also August Krogh Claude Bernard Comparative anatomy Ecophysiology Evolutionary physiology Human physiology John Speakman Knut Schmidt-Nielsen Krogh Principle Lancelot Hogben Peter Hochachka Phylogenetic comparative methods Physiology Raymond B. Huey Theodore Garland Jr. References Further reading Anctil, M. 2022. Animal as machine - The quest to understand how animals work and adapt. McGill-Queen's University Press, Montreal & Kingston, London, Chicago. Barrington, E. J. W. 1975. Comparative physiology and the challenge of design. Journal of Experimental Zoology 194:271-286. Clark, A. J. 1927. Comparative physiology of the heart. Cambridge University Press, London. Dantzler, W. H., ed. 1997. Handbook of physiology. Section 13: comparative physiology. Vol. I. Oxford Univ. Press, New York. Dantzler, W. H., ed. 1997. Handbook of physiology. Section 13: comparative physiology. Vol. II. Oxford Univ. Press, New York. viii + 751-1824 pp. Feder, M. E., A. F. Bennett, W. W. Burggren, and R. B. Huey, eds. 1987. New directions in ecological physiology. Cambridge Univ. Press, New York. 364 pp. Garland, T. Jr., and P. A. Carter. 1994. Evolutionary physiology. Annual Review of Physiology 56:579-621. PDF Gordon, M. S., G. A. Bartholomew, A. D. Grinnell, C. B. Jorgensen, and F. N. White. 1982. Animal physiology: principles and adaptations. 4th ed. MacMillan, New York. 635 pages. Greenberg, M. J., P. W. Hochachka, and C. P. Mangum, eds. 1975. 
New directions in comparative physiology and biochemistry. Journal of Experimental Zoology 194:1-347. Hochachka, P. W., and G. N. Somero. 2002. Biochemical adaptation — mechanism and process in physiological evolution. Oxford University Press. 478 pp. Mangum, C. P., and P. W. Hochachka. 1998. New directions in comparative physiology and biochemistry: mechanisms, adaptations, and evolution. Physiological Zoology 71:471-484. Moyes, C. D., and P. M. Schulte. 2006. Principles of animal physiology. Pearson Benjamin Cummings, San Francisco. 734 pp. Prosser, C. L., ed. 1950. Comparative animal physiology. W. B. Saunders Co., Philadelphia. ix + 888 pp. Randall, D., W. Burggren, and K. French. 2002. Eckert animal physiology: mechanisms and adaptations. 5th ed. W. H. Freeman and Co., New York. 736 pp. + glossary, appendices, index. Schmidt-Nielsen, K. 1972. How animals work. Cambridge University Press, Cambridge. Schmidt-Nielsen, K. 1984. Scaling: why is animal size so important? Cambridge University Press, Cambridge. 241 pp. Schmidt-Nielsen, K. 1997. Animal physiology: adaptation and environment. 5th ed. Cambridge University Press, Cambridge. ix + 607 pp. Schmidt-Nielsen, K. 1998. The camel's nose: memoirs of a curious scientist. 352 pp. The Island Press. Review Somero, G. N. 2000. Unity in Diversity: A perspective on the methods, contributions, and future of comparative physiology. Annual Review of Physiology 62:927-937. Willmer, P., G. Stone, and I. Johnston. 2005. Environmental physiology of animals. Second edition. Blackwell Science, Oxford, U.K. xiii + 754 pp. Physiology Comparisons
Comparative physiology
[ "Biology" ]
1,823
[ "Physiology" ]
9,436,252
https://en.wikipedia.org/wiki/Serial%20binary%20adder
The serial binary adder or bit-serial adder is a digital circuit that performs binary addition bit by bit. The serial full adder has three single-bit inputs for the numbers to be added and the carry in. There are two single-bit outputs for the sum and carry out. The carry-in signal is the previously calculated carry-out signal. The addition is performed by adding each bit, lowest to highest, one per clock cycle. Serial binary addition Serial binary addition is done by a flip-flop and a full adder. The flip-flop takes the carry-out signal on each clock cycle and provides its value as the carry-in signal on the next clock cycle. After all of the bits of the input operands have arrived, all of the bits of the sum have come out of the sum output. Serial binary subtractor The serial binary subtractor operates the same as the serial binary adder, except the subtracted number is converted to its two's complement before being added. Alternatively, the number to be subtracted is converted to its ones' complement, by inverting its bits, and the carry flip-flop is initialized to a 1 instead of to 0 as in addition. The ones' complement plus the 1 is the two's complement. Example of operation Decimal: 5 + 9 = 14 (X = 5, Y = 9, Sum = 14); binary: 0101 + 1001 = 1110. The addition starts from the least significant bit (LSb) and proceeds one bit per clock cycle: cycle 1: X = 1, Y = 1, carry-in = 0, giving sum = 0 and carry-out = 1; cycle 2: X = 0, Y = 0, carry-in = 1, giving sum = 1 and carry-out = 0; cycle 3: X = 1, Y = 0, carry-in = 0, giving sum = 1 and carry-out = 0; cycle 4: X = 0, Y = 1, carry-in = 0, giving sum = 1 and carry-out = 0. Reading the sum bits from most to least significant gives the result 1110, or 14. See also Parallel binary adder References Further reading http://www.quinapalus.com/wires8.html http://www.asic-world.com/digital/arithmetic3.html External links Interactive Serial Adder, provides the visual logic of the Serial Adder circuit built with Teahlab's Simulator. Binary arithmetic Adders (electronics)
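To make the bit-serial behaviour concrete, here is a minimal Python sketch (not part of the article; function names such as serial_add and to_bits are invented for the example). A plain variable stands in for the carry flip-flop: each simulated clock cycle reads it as the carry-in and overwrites it with the new carry-out. The same loop also performs serial subtraction by inverting the subtrahend's bits and presetting the carry to 1, as described above:

```python
def full_adder(x, y, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    total = x + y + carry_in
    return total & 1, total >> 1

def serial_add(x_bits, y_bits, carry=0):
    """Add two operands presented LSb-first, one bit per 'clock cycle'.
    The variable `carry` models the carry flip-flop: each cycle's
    carry-out becomes the next cycle's carry-in."""
    sum_bits = []
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        sum_bits.append(s)
    return sum_bits, carry

def to_bits(value, width):
    """Integer -> list of bits, least significant bit first."""
    return [(value >> i) & 1 for i in range(width)]

def from_bits(bits):
    """List of bits (LSb first) -> integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# Addition example from the article: 5 + 9 = 14 (0101 + 1001 = 1110).
width = 4
sum_bits, _ = serial_add(to_bits(5, width), to_bits(9, width))
print(from_bits(sum_bits))  # 14

# Serial subtraction: invert the subtrahend's bits (ones' complement)
# and preset the carry flip-flop to 1, giving the two's complement.
diff_bits, _ = serial_add(to_bits(9, width),
                          [bit ^ 1 for bit in to_bits(5, width)],
                          carry=1)
print(from_bits(diff_bits))  # 4 (the final carry-out is discarded)
```

Running the sketch reproduces the article's addition example and a 4-bit subtraction (9 - 5 = 4), with the final carry-out discarded as in fixed-width hardware.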
Serial binary adder
[ "Mathematics" ]
388
[ "Arithmetic", "Binary arithmetic" ]
9,436,847
https://en.wikipedia.org/wiki/Ammonium%20iron%28II%29%20sulfate
Ammonium iron(II) sulfate, or Mohr's salt, is the inorganic compound with the formula (NH4)2Fe(SO4)2·6H2O. Containing two different cations, Fe2+ and NH4+, it is classified as a double salt of ferrous sulfate and ammonium sulfate. It is a common laboratory reagent because it is readily crystallized, and crystals resist oxidation by air. Like the other ferrous sulfate salts, ferrous ammonium sulfate dissolves in water to give the aquo complex [Fe(H2O)6]2+, which has octahedral molecular geometry. Its mineral form is mohrite. Structure This compound is a member of a group of double sulfates called Schönites or Tutton's salts. Tutton's salts form monoclinic crystals and have the formula M2N(SO4)2·6H2O (M = various monocations, N = a divalent cation). With regard to the bonding, the crystals consist of octahedral [Fe(H2O)6]2+ centers, which are hydrogen bonded to sulfate and ammonium. Mohr's salt is named after the German chemist Karl Friedrich Mohr, who made many important advances in the methodology of titration in the 19th century. Applications In analytical chemistry, this salt is the preferred source of ferrous ions as the solid has a long shelf life, being resistant to oxidation. This stability extends somewhat to solutions, reflecting the effect of pH on the ferrous–ferric redox couple: the oxidation occurs more readily at high pH. The ammonium ions make solutions of Mohr's salt slightly acidic, which slows this oxidation process. Sulfuric acid is commonly added to solutions to reduce oxidation to ferric iron. It is used in gel dosimetry to measure high doses of gamma rays. Preparation Mohr's salt forms upon evaporation of an equimolar mixture of aqueous ferrous sulfate and ammonium sulfate. Contaminants Common impurities include magnesium, nickel, manganese, lead, and zinc, many of which form isomorphous salts. References Ammonium compounds Iron(II) compounds Sulfates Double salts
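A hedged sketch of the preparation chemistry implied above, written as a single balanced crystallization equation (my own rendering of the described equimolar mixture, not an equation taken from the article):

```latex
% Crystallization of Mohr's salt from an equimolar aqueous mixture
\mathrm{FeSO_4\,(aq)} + \mathrm{(NH_4)_2SO_4\,(aq)} + 6\,\mathrm{H_2O}
  \longrightarrow \mathrm{(NH_4)_2Fe(SO_4)_2 \cdot 6\,H_2O\,(s)}
```

The 1:1 ratio of the two sulfates follows directly from the double-salt formula given above.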
Ammonium iron(II) sulfate
[ "Chemistry" ]
451
[ "Double salts", "Ammonium compounds", "Sulfates", "Salts" ]
9,437,083
https://en.wikipedia.org/wiki/Environmental%20flow
Environmental flows describe the quantity, timing, and quality of water flows required to sustain freshwater and estuarine ecosystems and the human livelihoods and well being that depend on these ecosystems. In the Indian context river flows required for cultural and spiritual needs assumes significance. Through implementation of environmental flows, water managers strive to achieve a flow regime, or pattern, that provides for human uses and maintains the essential processes required to support healthy river ecosystems. Environmental flows do not necessarily require restoring the natural, pristine flow patterns that would occur absent human development, use, and diversion but, instead, are intended to produce a broader set of values and benefits from rivers than from management focused strictly on water supply, energy, recreation, or flood control. Rivers are parts of integrated systems that include floodplains and riparian corridors. Collectively these systems provide a large suite of benefits. However, the world's rivers are increasingly being altered through the construction of dams, diversions, and levees. More than half of the world's large rivers are dammed, a figure that continues to increase. Almost 1,000 dams are planned or under construction in South America and 50 new dams are planned on China's Yangtze River alone. Dams and other river structures change the downstream flow patterns and consequently affect water quality, temperature, sediment movement and deposition, fish and wildlife, and the livelihoods of people who depend on healthy river ecosystems. Environmental flows seek to maintain these river functions while at the same time providing for traditional offstream benefits. Evolution of environmental flow concepts and recognition From the turn of the 20th century through the 1960s, water management in developed nations focused largely on maximizing flood protection, water supplies, and hydropower generation. During the 1970s, the ecological and economic effects of these projects prompted scientists to seek ways to modify dam operations to maintain certain fish species. The initial focus was on determining the minimum flow necessary to preserve an individual species, such as trout, in a river. Environmental flows evolved from this concept of "minimum flows" and, later, "instream flows", which emphasized the need to keep water within waterways. By the 1990s, scientists came to realize that the biological and social systems supported by rivers are too complicated to be summarized by a single minimum flow requirement. Since the 1990s, restoring and maintaining more comprehensive environmental flows has gained increasing support, as has the capability of scientists and engineers to define these flows to maintain the full spectrum of riverine species, processes and services. Furthermore, implementation has evolved from dam reoperation to an integration of all aspects of water management, including groundwater and surface water diversions and return flows, as well as land use and storm water management. The science to support regional-scale environmental flow determination and management has likewise advanced. In a global survey of water specialists undertaken in 2003 to gauge perceptions of environmental flow, 88% of the 272 respondents agreed that the concept is essential for sustainably managing water resources and meeting the long-term needs of people. 
In 2007, the Brisbane Declaration on Environmental Flows was endorsed by more than 750 practitioners from more than 50 countries. The declaration announced an official pledge to work together to protect and restore the world's rivers and lakes. By 2010, many countries throughout the world had adopted environmental flow policies, although their implementation remains a challenge. Examples One effort currently underway to restore environmental flows is the Sustainable Rivers Project, a collaboration between The Nature Conservancy (TNC) and U.S. Army Corps of Engineers (USACE), which is the largest water manager in the United States. Since 2002, TNC and the USACE have been working to define and implement environmental flows by altering the operations of USACE dams in 8 rivers across 12 states. Dam reoperation to release environmental flows, in combination with floodplain restoration, has in some instances increased the water available for hydropower production while reducing flood risk. Arizona's Bill Williams River, flowing downstream of Alamo Dam, is one of the rivers featured in the Sustainable Rivers Project. Having discussed modifying dam operations since the early 1990s, local stakeholders began to work with TNC and USACE in 2005 to identify specific strategies for improving the ecological health and biodiversity of the river basin downstream from the dam. Scientists compiled the best available information and worked together to define environmental flows for the Bill Williams River. While not all of the recommended environmental flow components could be implemented immediately, the USACE has changed its operations of Alamo Dam to incorporate more natural low flows and controlled floods. Ongoing monitoring is capturing resulting ecological responses such as rejuvenation of native willow-cottonwood forest, suppression of invasive and non-native tamarisk, restoration of more natural densities of beaver dams and associated lotic-lentic habitat, changes in aquatic insect populations, and enhanced groundwater recharge. USACE engineers continue to consult with scientists on a regular basis and use the monitoring results to further refine operations of the dam. Another case in which stakeholders developed environmental flow recommendations is Honduras' Patuca III Hydropower Project. The Patuca River, the second longest river in Central America, has supported fish populations, nourished crops, and enabled navigation for many indigenous communities, including the Tawahka, Pech, and Miskito Indians, for hundreds of years. To protect the ecological health of the largest undisturbed rainforest north of the Amazon and its inhabitants, TNC and Empresa Nacional de Energía Eléctrica (ENEE, the agency responsible for the project) agreed to study and determine flows necessary to sustain the health of human and natural communities along the river. Due to very limited available data, innovative approaches were developed for estimating flow needs based on experiences and observations of the local people who depend on this nearly pristine river reach. Methods, tools, and models More than 200 methods are used worldwide to prescribe river flows needed to maintain healthy rivers. However, very few of these are comprehensive and holistic, accounting for seasonal and inter-annual flow variation needed to support the whole range of ecosystem services that healthy rivers provide. 
Such comprehensive approaches include DRIFT (Downstream Response to Imposed Flow Transformation), BBM (Building Block Methodology), and the "Savannah Process" for site-specific environmental flow assessment, and ELOHA (Ecological Limits of Hydrologic Alteration) for regional-scale water resource planning and management. The "best" method (or, more likely, methods) for a given situation depends on the amount of resources and data available, the most important issues, and the level of certainty required. To facilitate environmental flow prescriptions, a number of computer models and tools have been developed by groups such as the USACE's Hydrologic Engineering Center to capture flow requirements defined in a workshop setting (e.g., HEC-RPT) or to evaluate the implications of environmental flow implementation (e.g., HEC-ResSim, HEC-RAS, and HEC-EFM). Additionally, a 2D model has been developed from a 3D turbulence model based on the Smagorinsky large eddy closure to model environmental large-scale flows more appropriately. This model is based on a slow manifold of the turbulent Smagorinsky large eddy closure instead of on conventional depth-averaged flow equations. Other tried and tested environmental flow assessment methods include DRIFT (King et al. 2003), which was used in the Kishenganga HPP dispute between Pakistan and India at the International Court of Arbitration. In India In India, the need for environmental flows has emerged from the hundreds of large dams being planned in the Himalayan rivers for hydropower generation. The cascades of dams planned across the Lohit and Dibang rivers in the Brahmaputra basin, the Alaknanda and Bhagirathi rivers in the Ganga basin and the Teesta in Sikkim, for example, would result in the rivers flowing more through tunnels and penstocks than through their natural channels. There have been some recommendations by various authorities (Courts, Tribunals, the Expert Appraisal Committee of the Ministry of Environment and Forests (India)) on releasing e-flows from dams. However, these recommendations have never been backed by clearly stated objectives explaining why particular e-flow releases are needed. See also Freshwater inflow Water scarcity References Rivers Aquatic ecology
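As a generic illustration of the kind of calculation underlying many flow-alteration assessments (this is not DRIFT, ELOHA, or any HEC tool; the function names and the tiny flow records are invented for the example), the sketch below compares monthly median flows before and after a hypothetical dam:

```python
from statistics import median

def monthly_medians(daily_flows):
    """daily_flows: list of (month, flow_m3s) pairs covering a record.
    Returns {month: median flow}, a building block of many flow metrics."""
    by_month = {}
    for month, flow in daily_flows:
        by_month.setdefault(month, []).append(flow)
    return {m: median(vals) for m, vals in sorted(by_month.items())}

def percent_alteration(pre, post):
    """Percent change in each month's median flow between a pre-dam and a
    post-dam record; strongly negative months flag where releases may need
    to be restored toward the natural regime."""
    return {m: 100.0 * (post[m] - pre[m]) / pre[m] for m in pre if m in post}

# Invented example records: (month, flow in m^3/s)
pre_dam = [(6, 120.0), (6, 150.0), (6, 90.0), (7, 60.0), (7, 80.0)]
post_dam = [(6, 40.0), (6, 55.0), (6, 35.0), (7, 50.0), (7, 45.0)]

print(percent_alteration(monthly_medians(pre_dam), monthly_medians(post_dam)))
# Roughly {6: -66.7, 7: -32.1}: June flows are strongly depleted.
```

Real assessments layer ecological response models and stakeholder objectives on top of such hydrologic summaries, which is what distinguishes holistic methods like DRIFT and ELOHA from simple minimum-flow rules.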
Environmental flow
[ "Biology" ]
1,673
[ "Aquatic ecology", "Ecosystems" ]
9,437,662
https://en.wikipedia.org/wiki/Air%20kiss
An air kiss, blown kiss, or thrown kiss is a ritual or social gesture whose meaning is basically the same as that of many forms of kissing. The air kiss is a pretence of kissing: the lips are pursed as if kissing, but without actually touching the other person's body. Sometimes, the air kiss includes touching cheek-to-cheek. Also, the gesture may be accompanied by the mwah sound. The onomatopoeic word mwah (a representation of the sound of a kiss) has entered Webster's dictionary. The Unicode code point U+1F618 provides the "face throwing a kiss" emoji (😘) on computer screens. Western culture A symbolic kiss is frequent in Western cultures. A kiss can be "blown" to another by kissing the fingertips and then blowing the fingertips, pointing them in the direction of the recipient. This is used to convey affection, usually when parting or when the partners are physically distant but can view each other. Blown kisses are also used when a person wishes to convey affection to a large crowd or audience. The term flying kiss is used in India to describe a blown kiss. North America In North America and most western countries influenced by North America, air kisses are sometimes associated with glamour models and other celebrities. It is a modified cheek kiss, involving kissing in the air near the cheek, with or without the lips touching the cheek. Southeast Asia In Indonesia and Malaysia, it is common to air-kiss an elder's hand as a traditional form of respectful greeting. Instead of pursing the lips, the younger person exhales softly through the nose onto the elder's hand before drawing it to his or her own forehead. In the Philippines, elder relatives traditionally kiss a younger relative's cheek in this same way, by exhaling gently through the nose when the younger relative's cheek is brought close. See also Air guitar Air quotes References External links "How to Air Kiss", a video Gestures Kissing Parting traditions
Air kiss
[ "Biology" ]
405
[ "Behavior", "Gestures", "Human behavior" ]
9,440,330
https://en.wikipedia.org/wiki/Gideon%20Gartner
Gideon Isaiah Gartner (March 13, 1935 – December 12, 2020) was an American businessman, investor, and philanthropist. He was often referred to as the father of the modern analyst industry. He is best known as the founder of Gartner, Inc. (formerly Gartner Group Inc.) a Stamford, Connecticut information technology (IT) research and advisory company. Early life and education Gideon Isaiah Gartner was born on March 13, 1935, in Tel Aviv, to Eastern European Jewish émigrés. His father Abraham was an engineer, while his mother Pnina (née Bedri) was a musician and teacher. In 1938 the family moved to the United States and settled in Brooklyn, where Abraham became a civil engineer for New York City. Gartner attended the Yeshiva of Flatbush and then Midwood High School, graduating in 1952. A gifted musician who excelled at both piano and the French horn, he was offered a musical scholarship to the University of Miami, but was discouraged from pursuing music as a career choice. Instead, Gartner chose to attend the Massachusetts Institute of Technology, graduating in 1956 with a Bachelor of Science in Mechanical Engineering.  He went on to earn his Masters in Business Administration from the MIT Sloan School of Management in 1960. Speaking about his courses at MIT, he is noted to have said that most classes bored him except computer science and programming courses. Career Early career Gartner began his career in operations research at System Development Corporation, a subcontractor to Philco Corporation. He worked in Paramus, New Jersey, for the U.S. military’s Strategic Air Command control system and then in Alexandria, Virginia, for the Defense Communications Agency. In 1961 Philco sold the first large-scale transistorized computer to Israel’s Ministry of Defense, and sent Gartner, with his background in Hebrew, to help program the computer and to market Philco to other Israel government agencies. While in Israel, Gartner was recruited by International Business Machines (IBM) to lead its Systems Engineering staff serving emerging European and Mideast markets. Gartner rose steadily at IBM, eventually moving to its White Plains, New York, offices to become Manager of Market Information in the Data Processing Division, and a leading figure in the company’s department of commercial analysis. In 1969, Gartner left IBM to form Computer Decisions, a magazine publishing company that documented the latest happenings in the computer industry which marked as his first independent entrepreneurial venture. The new publishing firm struggled in the recession of 1969-70 and was sold to a time-share company which subsequently failed. Wall Street In 1970, despite having no background in security analysis, Gartner was hired by EF Hutton to cover IBM and the burgeoning computer industry. He thrived at Hutton, and in 1972 joined Oppenheimer and Co. where he forged a national reputation as the top technology watcher on Wall Street. Gartner introduced a series of innovations to the practice of security analysis, some of which foreshadowed the research techniques and marketing theories that would distinguish his later firms. 
These included chart-and-audiotape programs to deliver his analysis to clients in a more rapid and digestible manner than was customary; “chunking” lengthy research into shorter pieces with bottom-line emphasis, bridging the gap between the massive generalizations at most brokerage houses and the heavy technological data put out by market research firms; and distributing to clients a relentless flow of ancillary content in the form of daily short-takes and advice that became known as “sunflowers.” Gartner also developed a reputation for going against the grain and accurately predicting computer trends and stock movements, with Stock Market Innovators, an industry trade paper, noting that “a remarkable number of his long-term forecasts have proven prophetic.” Some of these forecasts extended to developments that would come to pass after his time as an active analyst had ended, as when Gartner described in highly specific terms the capabilities, features, and challenges of what would become mass-produced personal computers. Gartner was voted the top individual technology analyst on Wall Street every year from 1972 through 1977 in Institutional Investor Magazine’s annual poll of major banks, funds and other institutional investing firms. In the 1977 joint exhibit sponsored by the Association for Computing Machinery (ACM), Goldman Sachs, and the Boston Computer Museum, documented in the book “Wizards and Their Wonders: Portraits in Computing”, Gartner was honored as one of the preeminent “Communicator” stars in the IT industry. He rose to VP and head of the Oppenheimer Technology Group, and eventually became a partner. Gartner, Inc. While working at Oppenheimer and Co. in the late 1970s, Gartner realized that his investor insights would be valuable to computer manufacturers and end users too. He joined David L. Stein, an industry veteran, to form Gartner Group in 1979. Amongst other things the company analyzed residual value of used computers. Analysts for the group were noted to have gone through a stringent selection process with an inquisition themed group interview. He is known to have emphasized that analysts write concise reports filled with provocative views rather than dissertations. In his words, "If what you’re writing about isn’t controversial, don't write about it." He would serve as the group's CEO and chairman through April 1991. At the time, other advisory firms usually sold to only computer hardware, software, and services vendors. Exceptions at that time included Dataquest (selling a service for investors), and Input Corp. (selling a service for users). Gartner sold to vendors, plus users (generally large enterprises and other organizations, such as government agencies), plus investors and consulting firms. Investors were among the first targets, as Gartner had just resigned as partner of Oppenheimer & Co., but at Oppenheimer he had also begun servicing a group of Chief Information Officers (CIOs) of large corporations, which became the base for Gartner's enterprise activities. As Gartner Group's coverage of IBM was deep (IBM was the primary industry vendor at the time), other vendors desired to be tied into Gartner’s research network. Thus, the market Gartner addressed was unusually broad, and each constituency provided insights and information which benefited others, arguably creating market advantage. Very soon the group became the preferred place a corporate client went when it had any question about the IT industry. 
Gartner's venture capital financing was unique among advisory firms at the time. Gartner Group was initially financed by Bessemer Venture Partners and E.M. Warburg Pincus, with Bank Paribas joining a year later. This led to the firm being the first in its field to raise public capital in a 1986 offering, supporting its growth. Raising capital from venture firms allowed Gartner to build a nationwide sales organization, the first of its kind in the industry. Gartner instituted a sales measurement and compensation scheme, based upon how IBM had measured its rental sales, but novel when applied to consulting/advisory firms: since Gartner Group sold annual renewable contracts and recorded "Contract Value", it based progress reporting and compensation (commission for sales personnel and bonus for analysts and managers), on the growth of appropriate Contract Value (CV) during a period of time; this was called Net Contract Value Increase (NCVI). Uniquely, compared with all prior consulting and advisory models, all variable compensation was based upon growth and not on revenue from renewals, an important factor in developing a strong growth culture. Having come from Wall Street, Gartner adopted the idea of employing senior industry people, who were in fact "peers" of their prospective clients. This was a departure from the current industry practice at the time, where analysts were relatively young and relatively inexperienced albeit bright. Instead of focusing primarily on market research, Gartner emphasized a basket of "values" including : G2 (competitive intelligence and analysis), quantitative methods for clients to analyze residual values and obsolescence metrics of IT hardware, saving money within the IT organization, and IT education within clients’ staffs. Gartner developed a disciplined "research process", which was documented in a Research Notebook, used in regular training programs at the firm. Process highlights called for analysts to "scan" all sources of input, be trained in recognizing "patterns", develop "new ideas" from these patterns, and "document" the results in brief one-page "research notes". General industry practice at the time was to publish relatively long reports. Gartner research "gimmicks" were introduced, such as the "stalking horse", a research collaborative tool whereby analysts were compelled to graphically present and defend their logic at research meetings. Thus, the "horse" became the company mascot. Intensive research meetings for all researchers were conducted at least weekly, and provided additional training and other benefits. Other innovations were introduced, in areas such as research-hiring interview methods, conferences including the breakthrough Symposium, inquiry systems to connect clients with internal analysts. All the above contributed to an unusually strong and acknowledged organizational culture. Gartner Group was ranked among the fastest growing private firms in the U.S. (by Inc. Magazine) until it went public in 1986, whereupon it was listed for several years among the best small companies in America (by Business Week, e.g. #9 overall, and #1 in profitability, in 1987). Gartner was sold to Saatchi & Saatchi in 1988, and Gartner signed a contract to remain as CEO until April 1991. In 1990, Gartner led a successful leveraged buyout of the firm financed by Information Partners, a private equity fund owned by Bain Capital and Dun & Bradstreet. 
Soundview Technologies Gartner, who had retained contact with his Wall Street clients, initiated a new financial service for Gartner Group via a new partnership with Dillon, Read & Co., which distributed its reports and personal services to Dillon, Read & Co. investment client organizations. Gartner Group severed the Dillon Read relationship and became an independent broker-dealer in 1984, named Gartner Securities Corp., and spun this business out to its shareholders just before its first public offering in 1986, providing its analysis, investment advice and banking services, to all institutional investors. Its name was changed in 1988 to SoundView Technology Group, when Gartner was acquired by Saatchi & Saatchi. Soundview was unique in that it combined accepted Wall Street research and distribution methods, with the intimate (albeit "arms length") relationship with Gartner analysts, and arguably became the leading technology research boutique on Wall Street. But it merged in 2000 with Wit Capital, and was eventually sold (early 2004) to Charles Schwab & Co., and thereafter completely absorbed into Schwab and UBS. GiGa Information Group Giga Information Group was founded by Gartner in 1995. He raised more than $15 million in several tranches to develop the company; he was Chairman and CEO until late 1999. In less than four years from first shipment (April 1, 1996, to December 1999), this innovative firm became the fastest growing technology advisory consulting company in history, generating a run rate from zero to over $65 million, with more than 1,200 enterprise clients. Three primary innovations were introduced to the advisory business through Giga: its offering of a single comprehensive IT Advisory service (compared with the typical multiple services of other advisory firms), an external cadre of experts to supplement the strategic nature of analysts on staff, and a set of web functions which stressed objectivity of analysis, and allowed on-line research by clients (called “The Knowledge Salon”). Giga went public in 1998, but its stock languished during and after the technology stock market meltdown of 2000. In February 2003 the company was sold to Forrester Research. Personal life Gartner lived in Aspen, Colorado, and Stamford, Connecticut, and was involved in business ventures, athletics, and classical music. His music background includes a lifetime of piano practice, as well as performing on the French Horn, having been a member of the London School Symphony Orchestra, and the Brooklyn Philharmonic (then called the Brooklyn Philharmonia). He was on the board of the Opera Orchestra of N.Y, was a trustee of the Music Associates of Aspen (the Aspen Music Festival and School), was a fellow of the Aspen Institute, and was on the National Councils or equivalent, of the Aspen Art Museum, and the Anderson Ranch. He served on the Library Committee of the M.I.T. Corporation and was a Sustaining Fellow Life Member of M.I.T. He was a past member of the board of the Society for Information Management where he was Special Appointee to the President. He was a Director's Circle member of the Charles Babbage Institute and a member of the board of directors of the IT History Society. In the 1977 joint exhibit sponsored by the ACM and Goldman Sachs in Washington, D.C., and the Boston Computer Museum, documented in the book “Wizards and Their Wonders: Portraits in Computing”, Gartner was honored as one of the 19 “Communicator” stars in the IT industry. 
Gartner's professional activities included speaking before major organizations worldwide. He addressed graduate student groups at Harvard Business School, M.I.T., Yale, University of Georgia, and Arizona State, among others. In 1985, Gartner taught a course at UCLA’s Graduate School of Management (GSM), which was formally rated by his students as their best course taken throughout GSM. Gartner also wrote extensively, for example in the AMA journal, in series of articles for Computer Decisions and Information Week magazines, and in the foreword and much of Chapter 7 of “The E-Marketplace…Strategies for success in B2B eCommerce”, by Warren Raisch (McGraw Hill, 2001). He was an active angel investor in early-stage companies, and was a member of New York Angels. Gartner died on December 12, 2020, at his home in New York, of complications from Alzheimer's disease. He was aged 85. Notes Sources Gartner Inc. annual reports Gideon I. Gartner, Oral history interview by Jeffrey R. Yost, 12 August 2005, Aspen, Colorado. Charles Babbage Institute, University of Minnesota, Minneapolis Securities Exchange Commission documents Gartner Group Records, 1981-2000. Charles Babbage Institute, University of Minnesota. Collection of 57 linear feet documents Gideon Gartner's business activity with his companies Gartner Group, Giga, and Soundview Technology Group. 1935 births 2020 deaths 20th-century American businesspeople 20th-century American engineers 21st-century American businesspeople 21st-century American engineers American chief executives of financial services companies American computer businesspeople American corporate directors American financial analysts American financial company founders American operations researchers American people of Palestinian-Jewish descent American philanthropists American systems scientists American technology company founders Angel investors American venture capitalists Businesspeople from Tel Aviv Computer systems engineers Gartner people Harvard Business School people Jewish American scientists Jewish engineers Jews from Mandatory Palestine Emigrants from Mandatory Palestine to the United States MIT Sloan School of Management alumni MIT School of Engineering alumni
Gideon Gartner
[ "Technology" ]
3,170
[ "Computer systems engineers", "Computer systems" ]
9,440,365
https://en.wikipedia.org/wiki/Lieben%20Prize
The Ignaz Lieben Prize, named after the Austrian banker Ignaz L. Lieben, is an annual Austrian award made by the Austrian Academy of Sciences to young scientists working in the fields of molecular biology, chemistry, or physics. History The Ignaz Lieben Prize has been called the Austrian Nobel Prize. It is similar in intent to, but somewhat older than, the Nobel Prize. The Austrian merchant Ignaz L. Lieben, whose family supported many philanthropic activities, had stipulated in his testament that 6,000 florins should be used “for the common good”. In 1863 this money was given to the Austrian Imperial Academy of Sciences, and the Ignaz L. Lieben Prize was instituted. Every three years, the sum of 900 florins was to be given to an Austrian scientist in the field of chemistry, physics, or physiology. This sum corresponded to roughly 40 per cent of the annual income of a university professor. From 1900 on, the prize was offered on a yearly basis. The endowment was twice increased by the Lieben family. When the endowment had lost its value due to inflation after World War I, the family transferred the necessary sum yearly to the Austrian Academy of Sciences. But since the family was persecuted by the National Socialists, the prize was discontinued after the German Anschluss of Austria in 1938. Richard Lieben (1842–1919), the younger son of Ignaz Lieben, financed the Richard Lieben Prize in Mathematics, which was awarded every three years from 1912 to 1921, and one final time in 1928, before being discontinued. In 2004 the Lieben prize was reinstated, with support from Isabel Bader and Alfred Bader (who was able to flee from Austria to Great Britain at the age of fourteen in 1938). Today, the award amounts to US$36,000, and it is offered yearly to young scientists who work in Austria, Bosnia-Herzegovina, Croatia, the Czech Republic, Hungary, Slovakia or Slovenia (i.e., in one of the countries that were part of the Austro-Hungarian Empire a hundred years ago), and who work in the fields of molecular biology, chemistry, or physics.
Laureates Source (1865–1937; 2004–2007): Ignaz Lieben Gesellschaft: 2022 Dennis Kurzbach 2021 2020 Norbert Werner 2019 Gašper Tkačik 2018 Nuno Maulide 2017 Iva Tolić 2016 Illés Farkas 2015 Francesca Ferlaino 2014 2013 Barbara Kraus 2012 2011 Mihály Kovács 2010 Robert Kralovics 2009 Frank Verstraete 2008 Csaba Pal 2007 Markus Aspelmeyer 2006 Andrius Baltuska 2005 Ronald Micura 2004 Zoltan Nusser Not awarded 1938–2003 1937 Marietta Blau and Hertha Wambacher 1936 Franz Lippay and Richard Rössler 1935 Armin Dadieu 1934 Eduard Haschek 1933 Ferdinand Scheminzky 1932 Georg Koller 1931 Karl Höfler 1930 Wolf Johannes Müller 1929 Karl Przibram 1927 Otto Porsch and Gustav Klein 1926 Adolf Franke 1925 Lise Meitner 1924 Otto Loewi and Ernst Peter Pick 1923 Otto von Fürth 1922 Karl Wilhelm Friedrich Kohlrausch 1921 Karl von Frisch 1920 Ernst Späth 1919 Victor Franz Hess 1918 Eugen Steinach 1917 Wilhelm Schlenk 1916 Friedrich Adolf Paneth 1915 Wilhelm Trendelenburg 1914 Fritz Pregl 1913 Stefan Meyer 1912 Oswald Richter 1911 Friedrich Emich 1910 Felix Ehrenhaft 1909 Eugen Steinach 1908 Paul Friedlaender 1907 Hans Benndorf 1906 Arnold Durig 1905 Rudolf Wegscheider and Hans Leopold Meyer 1904 Franz Schwab 1903 Josef Schaffer 1902 Josef Herzig 1901 Josef Liznar 1900 Theodor Beer and Oskar Zoth 1898 Konrad Natterer 1895 Josef Maria Eder and Eduard Valenta 1892 Guido Goldschmiedt 1889 Sigmund Ritter Exner von Ewarten 1886 Zdenko Hans Skraup 1883 Victor Ritter Ebner von Rofenstein 1880 Hugo Weidel 1877 Sigmund Ritter Exner von Ewarten 1874 Eduard Linnemann 1871 Leander Ditscheiner 1868 Eduard Linnemann and Karl von Than 1865 Josef Stefan Richard Lieben Prize 1912 Josip Plemelj 1915 Gustav Herglotz 1918 Wilhelm Gross 1921 Hans Hahn and Johann Radon 1928 Karl Menger See also List of biology awards List of chemistry awards List of physics awards References External links Austrian science and technology awards Biology awards Chemistry awards Physics awards 1863 establishments in the Austrian Empire Awards established in 1863
Lieben Prize
[ "Technology" ]
906
[ "Science and technology awards", "Chemistry awards", "Biology awards", "Physics awards" ]
9,441,061
https://en.wikipedia.org/wiki/GISAID
GISAID (), the Global Initiative on Sharing All Influenza Data, previously the Global Initiative on Sharing Avian Influenza Data, is a global science initiative established in 2008 to provide access to genomic data of influenza viruses. The database was expanded to include the coronavirus responsible for the COVID-19 pandemic, as well as other pathogens. The database has been described as "the world's largest repository of COVID-19 sequences". GISAID facilitates genomic epidemiology and real-time surveillance to monitor the emergence of new COVID-19 viral strains across the planet. Since its establishment as an alternative to sharing avian influenza data via conventional public-domain archives, GISAID has facilitated the exchange of outbreak genome data during the H1N1 pandemic in 2009, the H7N9 epidemic in 2013, the COVID-19 pandemic and the 2022–2023 mpox outbreak. History Origin Since 1952, influenza strains had been collected by National Influenza Centers (NICs) and distributed through the WHO's Global Influenza Surveillance and Response System (GISRS). Countries provided samples to the WHO but the data was then shared with them for free with pharmaceutical companies who could patent vaccines produced from the samples. Beginning in January 2006, Italian researcher Ilaria Capua refused to upload her data to a closed database and called for genomic data on H5N1 avian influenza to be in the public domain. At a conference of the OIE/FAO Network of Expertise on Animal Influenza, Capua persuaded participants to agree to each sequence and release data on 20 strains of influenza. Some scientists had concerns about sharing their data in case others published scientific papers using the data before them, but Capua dismissed this telling Science "What is more important? Another paper for Ilaria Capua's team or addressing a major health threat? Let's get our priorities straight." Peter Bogner, a German in his 40s based in the US and who previously had no experience in public health, read an article about Capua's call and helped to found and fund GISAID. Bogner met Nancy Cox, who was then leading the US Centers for Disease Control's influenza division at a conference, and Cox went on to chair GISAID's Scientific Advisory Council. The acronym GISAID was coined in a correspondence letter published in the journal Nature in August 2006, putting forward an initial aspiration of creating a consortium for a new Global Initiative on Sharing Avian Influenza Data (later, "All" would replace "Avian"), whereby its members would release data in publicly available databases up to six months after analysis and validation. Initially the organisation collaborated with the Australian non-profit organization Cambia and the Creative Commons project Science Commons. Although no essential ground rules for sharing were established, the correspondence letter was signed by over 70 leading scientists, including seven Nobel laureates, because access to the most current genetic data for the highly pathogenic H5N1 zoonotic virus was often restricted, in part due to the hesitancy of World Health Organization member states to share their virus genomes and put ownership rights at risk. Towards the end of 2006, Indonesia announced it would not share samples of avian flu with the WHO which led to a global health crisis due to an ongoing epidemic. By October 2006, Indonesia had agreed to share their data with GISAID, which their health minister considered to have a "fair and transparent" mechanism for sharing data. 
It was one of the first countries to do so. In February 2007, GISAID and the Swiss Institute of Bioinformatics (SIB) announced a cooperation agreement, with the SIB building and administering the EpiFlu database on behalf of GISAID. Ultimately, GISAID was launched in May 2008 in Geneva on the occasion of the 61st World Health Assembly, as a registration-based database rather than a consortium. 2009 onwards In 2009 SIB disconnected the database from the GISAID portal over a contract dispute, resulting in litigation. In April 2010 the Federal Republic of Germany announced during the 7th International Ministerial Conference on Avian and Pandemic Influenza in Hanoi, Vietnam, that GISAID had entered into a cooperation agreement with the German government, making Germany the long-term host of the GISAID platform. Under the agreement, Germany's Federal Ministry of Food, Agriculture and Consumer Protection was to ensure the sustainability of the initiative by providing technical hosting facilities, and the Federal Institute for Animal Health, the Friedrich Loeffler Institute, was to ensure the plausibility and curation of scientific data in GISAID. By 2021, the ministry was no longer involved with either database hosting nor curation. In 2013 GISAID dissolved a nonprofit organisation based in Washington DC and the organisation began to be operated by a German association called Freunde von GISAID (Friends of GISAID). Some of the earliest SARS-CoV-2 genetic sequences were released by the Chinese Center for Disease Control and Prevention and shared through GISAID in mid January 2020. Since 2020, millions of SARS-CoV-2 genome sequences have been uploaded to the GISAID database. In 2022, GISAID added Mpox virus and Respiratory syncytial virus (RSV) to the list of pathogens supported by its database. Indonesia's Ministry of Health announced in November 2023 the establishment of GISAID Academy in Bali, to focus on bioinformatics education, advance pathogen genomic surveillance, and increased regional response capacity. The GISAID model of incentivizing and recognizing those who deposit data has been recommended as a model for future initiatives; Because of this work, the entity has been described as "a critical shield for humankind". Database for SARS-CoV-2 genomes GISAID maintains what has been described as "the world's largest repository of COVID-19 sequences", and "by far the world's largest database of SARS-CoV-2 sequences". By mid-April 2021, GISAID's SARS-CoV-2 database reached over 1,200,000 submissions, a testament to the hard work of researchers in over 170 different countries. Only three months later, the number of uploaded SARS-CoV-2 sequences had doubled again, to over 2.4 million. By late 2021, the database contained over 5 million genome sequences; as of December 2021, over 6 million sequences had been submitted; by April 2022, there were 10 million sequences accumulated; and in January 2023 the number had reached 14.4 million. In January 2020, the SARS-CoV-2 genetic sequence data was shared through GISAID. Throughout the first year of the COVID-19 pandemic, most of the SARS-CoV-2 whole-genome sequences that were generated and shared globally were submitted through GISAID. When the SARS-CoV-2 Omicron variant was detected in South Africa, by quickly uploading the sequence to GISAID, the National Institute for Communicable Diseases there was able to learn that Botswana and Hong Kong had also reported cases possessing the same gene sequence. 
In March 2023, GISAID temporarily suspended database access for some scientists, removing raw data relevant to investigations of the origins of SARS-CoV-2. GISAID stated that they do not delete records from their database, but data may become temporarily invisible during updates or corrections. Availability of the data was restored, with an additional restriction that any analysis based thereon would not be shared with the public. Governance The board of Friends of GISAID consists of Peter Bogner and two German lawyers who are not involved in the day-to-day operations of the organisation. Scientific advice to the organization is provided by its Scientific Advisory Council, including directors of leading public health laboratories, such as WHO Collaborating Centres for Influenza. In 2023, GISAID's lack of transparency was criticized by some GISAID funders, including the European Commission and the Rockefeller Foundation, with long-term funding being denied from International Federation of Pharmaceutical Manufacturers and Associations (IFPMA). In June 2023, it was reported in Vanity Fair that Bogner had said that "GISAID will soon launch an independent compliance board 'responsible for addressing a wide range of governance matters'". The Telegraph similarly reported that GISAID's in-house counsel was developing new governance processes intended to be transparent and allow for the resolution of scientific disputes without the involvement of Bogner. Access and intellectual property The creation of the GISAID database was motivated in part by concerns raised by researchers from developing countries, with Scientific American noting in 2009 that "a previous data-sharing system run by WHO forced them to give up intellectual property rights to their virus samples when they sent them to WHO. The virus samples would then be used by private pharmaceutical companies to make vaccines that are awarded patents and sold at a profit at prices many poor nations cannot afford". In a 2022 piece in The Lancet, it was further noted that scientists in North America and Europe sought unrestricted access, with "scientists from Africa requiring sufficient protections for those who generate and share data as per the GISAID terms and conditions". Unlike public-domain databases such as GenBank and EMBL, users of GISAID must have their identity confirmed and agree to a Database Access Agreement that governs the way GISAID data can be used. These Terms of Use are "weighted in favour of the data provider and gives them enduring control over the genetic data they upload". They prevent users from sharing any data with other users who have not agreed to them, and require that users of the data must credit the data generators in published work, and also make a reasonable attempt to collaborate with data generators and involve them in research and analysis that uses their data. A difficulty that GISAID's Data Access Agreement attempts to address is that many researchers fear sharing of influenza sequence data could facilitate its misappropriation through intellectual property claims by the vaccine industry and others, hindering access to vaccines and other items in developing countries, either through high costs or by preventing technology transfer. 
While most public interest experts agree with GISAID that influenza sequence data should be made public, and this is the subject of agreement by many researchers, some provide the information only after filing patent claims while others have said that access to it should be only on the condition that no patents or other intellectual property claims are filed, as was controversial with the Human Genome Project. GISAID's Data Access Agreement addresses this directly to promote sharing data. GISAID's procedures additionally suggest that those who access the EpiFlu database consult the countries of origin of genetic sequences and the researchers who discovered the sequences. As a result, the GISAID license has been important in rapid pandemic preparedness. However, these restrictions evidence common criticisms to an open data model. GISAID describes itself as "open access", which is naturally replicated by the media and in journal publications. This description indeed aligns with the original announcement of the consortium, which also mentioned depositing the data to the databases participating in the INSDC. As of March 2023, this is not the case, as "GISAID does not offer a mechanism to release data to any other database". A few academic papers have compared GISAID's licensing model to unrestricted, open databases, highlighting the differences while other researchers have signed an open letter calling for the use of any of the INSDC's unrestricted databases. In 2017, GISAID's editorial board stated that "re3data.org and DataCite, the world's leading provider of digital object identifiers (DOI) for research data, affirmed the designation of access to GISAID's database and data as Open Access". However, after several researchers had their accounts suspended in March 2023 as reported by the journal Science and other news outlets, its open access status was revoked by the Registry of Research Data Repositories (re3data), which now classifies it as a "restricted access repository". In 2020 the World Health Organization chief scientist Soumya Swaminathan called the initiative "a game changer", while the co-director of the European Bioinformatics Institute (EBI) Rolf Apweiler has argued that because it does not allow sequences to be reshared publicly, it hampers efforts to understand the coronavirus and the rapid rise of new variants. GISAID's restrictions on access have led to conflict with "labs and institutions whose priorities are academic rather than driven by the immediate priorities of public health protection". In January 2021, GISAID's restricted access led a group of scientists to write an open letter asking for SARS-CoV-2 sequences to be deposited in open databases, which was replicated in the journals Nature and Science. Furthermore, the article from Science points out that the lack of transparency in access to the database also prevents many scientists from even criticising the platform. A paper from 2017 describing the success of GISAID mentions that revoking researchers' credentials was rare, but it did happen. The same publication described a "perceived merit in GISAID's formula for balancing the need for control and openness". In April 2023, Science and The Economist reported these issues continue as well as the lack of transparency of its governance. 
An investigation by The Telegraph into claims made by Science noted the incentives of various potential competitors in the field, for whom GISAID is an obstacle to consolidation of control over the field, and also noted that GISAID's position inevitably places it at the center of disputes between groups of scientists, which will tend to result in the losing side blaming GISAID for that outcome. See also References Further reading External links Avian influenza Influenza Mpox COVID-19 pandemic Genome databases Influenza A virus subtype H5N1 Organisations based in Munich Public health organizations International scientific organizations Bioinformatics Virology Non-profit organisations based in Germany
GISAID
[ "Engineering", "Biology" ]
2,906
[ "Bioinformatics", "Biological engineering" ]
9,441,268
https://en.wikipedia.org/wiki/Computational%20magnetohydrodynamics
Computational magnetohydrodynamics (CMHD) is a rapidly developing branch of magnetohydrodynamics that uses numerical methods and algorithms to solve and analyze problems that involve electrically conducting fluids. Most of the methods used in CMHD are borrowed from the well-established techniques employed in Computational fluid dynamics. The complexity mainly arises due to the presence of a magnetic field and its coupling with the fluid. One of the important issues is to numerically maintain the ∇·B = 0 (conservation of magnetic flux) condition, which follows from Maxwell's equations, to avoid the presence of unrealistic effects, namely magnetic monopoles, in the solutions. Open-source MHD software Pencil Code: Compressible resistive MHD, intrinsically divergence free, embedded particles module, finite-difference explicit scheme, high-order derivatives, Fortran95 and C, parallelized up to hundreds of thousands of cores. Source code is available. RAMSES RAMSES is an open source program to model astrophysical systems, featuring self-gravitating, magnetised, compressible, radiative fluid flows. It is based on the Adaptive Mesh Refinement (AMR) technique on a fully threaded graded octree. RAMSES is written in Fortran 90 and makes intensive use of the Message Passing Interface (MPI) library. Source code is available. RamsesGPU RamsesGPU is an MHD program written in C++, based on the original RAMSES but only for regular grids (no AMR). The code has been designed to run on large clusters of GPUs (NVIDIA graphics processors), so parallelization relies on MPI for distributed memory processing, as well as the CUDA programming language for efficient usage of GPU resources. Static gravity fields are supported. Different finite volume methods are implemented. Source code is available. Athena Athena is a grid-based program for astrophysical magnetohydrodynamics (MHD). It was developed primarily for studies of the interstellar medium, star formation, and accretion flows. Source code is available. EOF-Library EOF-Library is software that couples the Elmer FEM and OpenFOAM simulation packages. It enables efficient internal field interpolation and communication between the finite element and the finite volume frameworks. Potential applications are MHD, convective cooling of electrical devices, industrial plasma physics and microwave heating of liquids. Closed-source MHD software USim MACH2 STAR-CCM+ See also Magnetohydrodynamic turbulence Magnetic flow meter Plasma modeling References Brio, M., Wu, C. C. (1988), "An upwind differencing scheme for the equations of ideal magnetohydrodynamics", Journal of Computational Physics, 75, 400–422. Henri-Marie Damevin and Klaus A. Hoffmann (2002), "Development of a Runge-Kutta Scheme with TVD for Magnetogasdynamics", Journal of Spacecraft and Rockets, 34, No. 4, 624–632. Robert W. MacCormack (1999), "An upwind conservation form method for ideal magnetohydrodynamics equations", AIAA-99-3609. Robert W. MacCormack (2001), "A conservation form method for magneto-fluid dynamics", AIAA-2001-0195. Further reading Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag. External links NCBI Magnetohydrodynamics, Computational Magnetohydrodynamics, Computational Computational fields of study
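The solenoidal constraint lends itself to a simple numerical check. The following editor-added sketch in Python/NumPy is an illustration only and is unrelated to the specific schemes used by the packages listed above: it builds a magnetic field from a vector potential on a periodic grid, so that the discrete central-difference operators commute and the computed divergence vanishes to round-off, which is the behaviour an "intrinsically divergence free" scheme is designed to preserve.

import numpy as np

def ddx(f, d, axis):
    # Periodic second-order central difference along the given axis.
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * d)

n, L = 64, 1.0
d = L / n
x = np.arange(n) * d
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Build B as the curl of a vector potential A = (0, 0, Az): analytically div B = 0.
Az = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
Bx = ddx(Az, d, axis=1)        # Bx =  dAz/dy
By = -ddx(Az, d, axis=0)       # By = -dAz/dx
Bz = np.zeros_like(Az)

# Because the discrete periodic difference operators commute, the numerical
# divergence is zero to machine precision; a scheme without such a construction
# can instead accumulate a finite div B error (numerical magnetic monopoles).
div_B = ddx(Bx, d, axis=0) + ddx(By, d, axis=1) + ddx(Bz, d, axis=2)
print("max |div B| =", float(np.abs(div_B).max()))

In a production MHD code the same idea appears in constrained-transport or vector-potential formulations; the sketch above only demonstrates the diagnostic one would monitor.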
Computational magnetohydrodynamics
[ "Physics", "Chemistry", "Technology" ]
743
[ "Computational electromagnetics", "Computational fields of study", "Computational fluid dynamics", "Computational physics", "Computing and society", "Fluid dynamics" ]
9,442,947
https://en.wikipedia.org/wiki/Carleman%27s%20condition
In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure μ satisfies Carleman's condition, there is no other measure having the same moments as μ. The condition was discovered by Torsten Carleman in 1922. Hamburger moment problem For the Hamburger moment problem (the moment problem on the whole real line), the theorem states the following: Let μ be a measure on ℝ such that all the moments m_n = ∫ x^n dμ(x), n = 0, 1, 2, ..., are finite. If ∑_{n=1}^{∞} m_{2n}^{−1/(2n)} = +∞, then the moment problem for (m_n) is determinate; that is, μ is the only measure on ℝ with (m_n) as its sequence of moments. Stieltjes moment problem For the Stieltjes moment problem, the sufficient condition for determinacy is ∑_{n=1}^{∞} m_n^{−1/(2n)} = +∞. Generalized Carleman's condition Nasiraee et al. showed that, despite previous assumptions, when the integrand is an arbitrary function, Carleman's condition is not sufficient, as demonstrated by a counter-example. In fact, the example violates the bijection, i.e. determinacy, property in the probability sum theorem. When the integrand is an arbitrary function, they further establish a sufficient condition for the determinacy of the moment problem, referred to as the generalized Carleman's condition. Notes References Chapter 3.3, Durrett, Richard. Probability: Theory and Examples. 5th ed. Cambridge Series in Statistical and Probabilistic Mathematics 49. Cambridge; New York, NY: Cambridge University Press, 2019. Mathematical analysis Moment (mathematics) Probability theory Theorems in approximation theory
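A worked illustration (added here for concreteness; it is not part of the original statement): for the standard normal distribution the even moments are m_{2n} = (2n − 1)!!, and by Stirling's formula m_{2n}^{1/(2n)} grows like √(2n/e). Hence m_{2n}^{−1/(2n)} behaves like √(e/(2n)), so the series ∑ m_{2n}^{−1/(2n)} diverges like ∑ 1/√n, Carleman's condition holds, and the normal distribution is determined by its moments. By contrast, the log-normal distribution is the classical example of a measure that is not determined by its moments; its moments m_n = e^{n²/2} grow so fast that m_{2n}^{−1/(2n)} = e^{−n} sums to a finite value, so the (merely sufficient) condition gives no conclusion there.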
Carleman's condition
[ "Physics", "Mathematics" ]
326
[ "Theorems in mathematical analysis", "Mathematical analysis", "Moments (mathematics)", "Physical quantities", "Theorems in approximation theory", "Moment (physics)" ]
9,443,168
https://en.wikipedia.org/wiki/Marmorino
Marmorino Veneziano is a type of plaster or stucco. It is based on calcium oxide and used for interior and exterior wall decorations. Marmorino plaster can be finished via multiple techniques for a variety of matte, satin, and glossy final effects. It was used as far back as Roman times, but was made popular once more during the Renaissance 500 years ago in Venice. Marmorino is made from crushed marble and lime putty, which can be tinted to give a wide range of colours. This can then be applied to make many textures, from polished marble to natural stone effects. Widely used in Italy, it has also found particular appeal in North America and is now used worldwide. Because of the hours of workmanship involved, its pricing places it in the high-end market. However, many examples can be seen in public buildings, bars, restaurants, etc. Its waterproofing and antibacterial qualities as well as visual effects have also made it very desirable for luxury bathrooms, honeymoon bedrooms and other wet areas. Not confined to interior use, it can be seen on the exterior of many buildings to great effect. History Marmorino is well known as a classic Venetian plaster; however, its origins are much older, dating to ancient Roman times. We can see evidence of it today in the villas of Pompeii and in various ancient Roman structures. In addition, it was also written about in Vitruvius's De architectura, a Roman treatise on architecture from the 1st century BC. Marmorino was rediscovered centuries later after the discovery of Vitruvius's ancient treatise in the 15th century. This 'new' plaster conformed well to the aesthetic requirements dictated by the classical ideal that had recently become fashionable in the Venetian lagoon area during the 15th century. The first record of work being done with marmorino is a building contract with the nuns of Santa Chiara of Murano in 1473. In this document, it is written that before the marmorino could be applied, the wall had to be prepared with a mortar made of lime and "coccio pesto" (ground terra cotta). This "coccio pesto" was recovered from brick tailings or recycled from old roof tiles. At this point, to better understand the popularity of marmorino in Venetian life, two facts need to be considered. The first is that in a city that extends over water, the transport of sand for making plaster and the disposal of tailings was, and still is, a huge problem. So the use of marmorino was successful not only because the substrate was prepared using terra cotta scraps, but also because the finish, marmorino itself, was made with leftover stone and marble, which were in great abundance at that time. These ground discards were mixed with lime to create marmorino. Moreover, marmorino and substrates made of "coccio pesto" resisted the ambient dampness of the lagoon better than almost any other plaster. The first is extremely breathable by virtue of the kind of lime used (a lime which sets only on exposure to air, after losing excess water), and the second contains terracotta, which when added to lime makes the mixture hydraulic, that is, effective even in very damp conditions (because it contains silica and aluminium, the bases of modern cement and hydraulic lime preparations). The second consideration is that, in an era dominated by the return of a classical Greco-Roman style, an aesthetically pleasing result could be achieved while transmitting less weight to the foundations than the usual practice of covering facades with slabs of stone. 
Usually, marmorino was white to imitate Istrian stone, which was most often used in Venetian construction, but was occasionally decorated with frescoes to imitate the marble, which Venetian merchants brought home from their voyages to the Orient. (In this period of the Republic of Venice, merchants felt obliged to return home bearing precious, exotic marble as a tribute to the beauty of their own city.) Marmorino maintained its prestige for centuries until the end of the 1800s when interest in it faded and it was considered only an economical solution to the use of marble. Only at the end of the 1970s, thanks in part to architect Carlo Scarpa's use of marmorino, did this finishing technique return to the interest of the best modern architects. For about 10 years, industries were also interested in marmorino which was only produced by artisans. Today, however, ready-to-use marmorino can be found, often with glue added to allow it to be applied on non-traditional surfaces such as drywall or wood panelling. See also Scagliola Stucco References Giovanni Polistena, History of Marmorino, Stucco Italiano, 2012. External links Why Lime? History & Benefits Wall & Furniture Films Building materials Craft materials Wallcoverings Plastering
Marmorino
[ "Physics", "Chemistry", "Engineering" ]
1,003
[ "Building engineering", "Coatings", "Architecture", "Construction", "Materials", "Plastering", "Matter", "Building materials" ]
9,443,870
https://en.wikipedia.org/wiki/Syracuse%20dish
A Syracuse dish or Syracuse watch glass is a shallow, circular, flat-bottomed dish of thick glass. Usually, it is 67 mm in outer diameter and 52 mm in inner diameter. Background Nathan Cobb, one of the pioneers of nematology in the United States, was the first to suggest using the Syracuse dish for counting nematodes, in 1918. Uses It is used as laboratory equipment in biology for either storage or culturing. References Laboratory glassware Microbiology equipment
Syracuse dish
[ "Biology" ]
97
[ "Microbiology equipment" ]
9,444,754
https://en.wikipedia.org/wiki/Biodiversity%20of%20Assam
The biodiversity of Assam, a state in North-East India, makes it a biological hotspot with many rare and endemic plant and animal species. The greatest success in recent years has been the conservation of the Indian rhinoceros at the Kaziranga National Park, but a rapid increase in human population in Assam threatens many plants and animals and their habitats. The rhinoceros, tiger, deer or chital / futukihorina (Axis axis), swamp deer or dolhorina (Cervus duvauceli duvauceli), clouded leopard (Neofelis nebulosa), hoolock gibbon, pygmy hog or nol-gahori (Porcula salvania), hispid hare, golden langur (Trachypithecus geei), golden cat, giant civet, binturong, hog badger, porcupine, and civet are found in Assam. Moreover, there are abundant numbers of Gangetic dolphins, mongooses, giant squirrels and pythons. The largest population of wild water buffalo is in Assam. The major birds in Assam include the blue-throated barbet or hetuluka (Megalaima asiatica), white-winged wood duck or deuhnah (Asarcornis scultulata), Pallas's fish eagle or kuruwa (Haliaeetus leucoryphus), great pied hornbill or rajdhonesh (Buceros bicornis homrai), Himalayan golden-backed three-toed wood-pecker or barhoituka (Dinopium shorii shorii), and migratory pelican. Assam is also known for orchids and for valuable plants and forest products. See also List of national parks of Assam List of wildlife sanctuaries of Assam Brahmaputra Valley semi-evergreen forests Physical geography of Assam Rhino poaching in Assam Notes References Biodiversity of Assam: Status Strategy & Action Plan for Conservation, eds A K Bhagabati, M C Kalita, S Baruah, Eastern Book House, New Delhi (2006) External links Assam Biota of Assam Tourism in Assam
Biodiversity of Assam
[ "Biology" ]
455
[ "Biodiversity" ]
9,445,193
https://en.wikipedia.org/wiki/Attack%20of%20the%20Alligators%21
"Attack of the Alligators!" is an episode of Thunderbirds, a British Supermarionation television series created by Gerry and Sylvia Anderson and filmed by their production company AP Films (APF) for ITC Entertainment. Written by Alan Pattillo and directed by David Lane, it was first broadcast on 10 March 1966 on ATV Midlands as the 23rd episode of Series One. It is the 24th episode in the official running order. Set in the 2060s, the series follows the exploits of International Rescue, an organisation that uses technologically advanced rescue vehicles to save human life. The main characters are ex-astronaut Jeff Tracy, founder of International Rescue, and his five adult sons, who pilot the organisation's main vehicles: the Thunderbird machines. The plot of "Attack of the Alligators!" sees a group of alligators grow to enormous size after their swamp is contaminated by a new food additive. When the reptiles lay siege to a house, International Rescue is called in to save the trapped occupants. Combining science-fiction and haunted house themes, with a plot deliberately written to be "nightmarish", "Attack of the Alligators!" was filmed at APF Studios in Slough in late 1965. It was the first APF production to use live animals, the re-sized alligators being played by juvenile crocodiles. Filming of the episode was controversial as the crew resorted to using electric shocks to coax movement out of the animals. Concern for the crocodiles' welfare prompted an investigation by the Royal Society for the Prevention of Cruelty to Animals (RSPCA), which ultimately took no action against APF. "Attack of the Alligators!" remains a favourite with Thunderbirds fans and commentators and is generally regarded as one of the series' best episodes. Along with "The Cham-Cham", the next episode to enter production, it went over-budget, causing the final instalment of Thunderbirds Series One ("Security Hazard") to be re-written as a clip show to lower costs. In 1976, "Attack of the Alligators!" inspired an episode of The New Avengers titled "Gnaws", written by ex-Thunderbirds writer Dennis Spooner. Plot A businessman, Blackmer, visits the reclusive Dr Orchard, a scientist who lives in a dilapidated house on the Ambro River. From the local plant Sidonicus americanus, Orchard has developed a food additive called "theramine" that increases the size of animals. Enlargement of animal stock presents a simple solution to world famine as well as other economic advantages. Blackmer's boatman, Culp, has been eavesdropping on the meeting. When a storm forces Blackmer to stay at the house overnight, Culp decides to steal the theramine and sell it to the highest bidder. Waiting until the house's other occupants are asleep, he breaks into Orchard's laboratory and pours some theramine into a vial. The rest of the supply is accidentally knocked into a sink and drains into the Ambro River. When Blackmer and Culp leave the next morning, their boat is attacked by an alligator, now enormous due to the theramine contamination. Orchard's assistant, Hector McGill, manages to rescue Blackmer but Culp is nowhere to be found. The house is quickly surrounded by three giant alligators that repeatedly hurl themselves at the building with Orchard, Blackmer, McGill and the housekeeper, Mrs Files, trapped inside. At Mrs Files' suggestion, McGill transmits a distress call to International Rescue. 
This is picked up by John Tracy on the Thunderbird 5 space station and relayed to Tracy Island, where Jeff immediately dispatches his other four sons to the danger zone in Thunderbirds 1 and 2. Arriving in Thunderbird 1 and transferring to a hover-jet, Scott fires the hover-jet's missile gun to disperse the alligators and accesses the house via the laboratory window. The room eventually caves in, forcing Scott and the others to retreat to the lounge. There, they are confronted by Culp, who holds them at gunpoint. Virgil, Alan and Gordon arrive in Thunderbird 2. Alan and Gordon man tranquiliser guns and subdue two of the alligators. When the third returns to the house, Alan exits Thunderbird 2 on another hover-jet to lure it away. He hits a tree and falls off the hover-jet, but is saved by Gordon, who tranquilises the alligator before it reaches Alan. Threatening to empty the entire theramine vial into the Ambro unless he is given safe passage upriver, Culp sets off in Blackmer's boat. At the same time, Gordon launches Thunderbird 4. A fourth, much larger alligator appears and attacks the boat, killing Culp. Virgil disposes of the creature with a missile fired from Thunderbird 2. Later, Gordon finds the theramine vial intact on the riverbed. After his sons return to Tracy Island, Jeff announces that theramine will be subject to international security restrictions. Tin-Tin has been away on a shopping trip and has bought Alan a present for his birthday – a pygmy alligator. Regular voice cast Ray Barrett as John Tracy Peter Dyneley as Jeff Tracy Christine Finn as Tin-Tin Kyrano David Graham as Gordon Tracy and Brains David Holliday as Virgil Tracy Shane Rimmer as Scott Tracy Matt Zimmerman as Alan Tracy Production The episode was partly inspired by H. G. Wells' 1904 novel The Food of the Gods and How It Came to Earth and its theme of animal size change. Another influence was the 1927 film The Cat and the Canary and its 1939 re-make, both of which feature stalkers and a haunted house premise. In an interview, Gerry Anderson described housekeeper Mrs Files as a "Mrs Danvers-type character". Writer Alan Pattillo, who according to special effects supervisor Derek Meddings "had tried to come up with the most nightmarish rescue situation he could", had wanted to direct the episode as well. In the end, however, it was directed by David Lane. The opening scene features an insert shot of a stormy sky that later introduced the opening titles of The Prisoner. "Attack of the Alligators!" was filmed in October and November 1965. The production overran its one-month schedule, forcing the crew to work extra hours, and sometimes long into the night, to finish the filming. Special effects assistant Ian Wingrove remembered that the episode's complex technical aspects had the crew "[working] day and night ... through a weekend". According to Lane, at one stage the shoot ran for 48 hours straight, with two editors processing the footage in shifts. He added: "I think Derek [Meddings] went three days, non-stop, just shooting." The alligators in the episode were portrayed not by actual alligators, as Gerry Anderson had originally intended, but juvenile crocodiles. These were acquired from a private zoo in the north of England to double as the enlarged alligators on the episode's scale model sets and water tanks. The crocodiles that appear in the episode were long; a larger specimen, measuring , was not used as it proved too aggressive to be taken out of its box. 
The crew kept the water tanks heated to a suitably warm temperature and used electric shocks to coax movement out of the crocodiles. The animals were unpredictable and difficult to control, either basking in the heat of the studio lights or disappearing into the tanks for hours at a time. To make them more visible to the cameras, the crew attached them to guiding rods and co-ordinated their movements. The use of live animals in both puppet and model shots required an unusually high level of collaboration between the puppet and effects crews. Effects director Brian Johnson and several other crew members refused to take part on animal welfare grounds. Camera operator Alan Perry did not remember any of the crocodiles being mistreated; series supervising director Desmond Saunders, however, claimed that more than one specimen died of pneumonia after being left in an unheated tank overnight. Director David Elliott, though filming a different episode at the time, recalled that another dislocated one of its limbs after receiving an electric shock. Puppet operator Christine Glanville admitted that the filming could not have been pleasant for the crocodiles because the tanks contained "all sorts of dirty paint water, oil and soapy water to make it look swampy." Saunders commented: "It was scandalous. It was one of the great episodes. Nevertheless there was a price to be paid for it." Animal cruelty concerns prompted an anonymous telephone call to the RSPCA, which dispatched an inspector to the studios. After a brief investigation, no action was taken against APF. This coincided with a decision to increase the voltage of the electric shocks to induce greater movement from the crocodiles. According to Gerry Anderson, when the inspector arrived, "Meddings explained that his team were laying the crocodiles down and they weren't doing anything. They were just lying there. The RSPCA man said, well, they would, because of the warmth of the lamps. So Derek said, 'We've been giving them a touch with an electrode just to make them move.' The guy asked what voltage they were using and Derek said it was about 20 volts, and the guy said, 'Oh, they've got terribly thick skins, you know. If you want them to move, you'll have to pump it up to 60.'" The inspector later joined the production to work alongside the crocodiles' handler. Filming with the crocodiles was often hazardous. During a promotional photoshoot featuring Lady Penelope (who does not appear in the episode), one of the animals attacked the puppet and destroyed one of its legs. While filming a scene, Meddings was pulling one of the crocodiles towards him on a rope when the animal slid out of its harness. Meddings wrote of the incident: "My crew never saw me move as fast as I did to get out of the tank when I pulled the rope and realised the creature was free." Of the largest crocodile, which was kept at the back of the stage when not being used, Wingrove recalled: "You would forget that it was there, then one day someone shouted 'Look out!' and we turned round to see this big crocodile walking across the stage – which cleared of people very quickly!" Both this episode and "The Cham-Cham", the next to enter production, overspent their budgets. This led the writers to re-work the final episode of Thunderbirds Series One ("Security Hazard") as a clip show to reduce costs. Broadcast and reception Originally transmitted on 10 March 1966, "Attack of the Alligators!" had its first UK-wide network broadcast on 20 March 1992 on BBC2. 
During that channel's 2000-2001 Thunderbirds re-run, the episode became the eleventh to be repeated when it replaced "Brink of Disaster", which along with "The Perils of Penelope" had been postponed until the end of the run due to similarities between the story and real-world events (both episodes feature dangerous situations involving trains and 2000 had seen several major railway accidents, most notably the Hatfield rail crash). Critical response "Attack of the Alligators!" is a popular episode of Thunderbirds and is widely regarded as one of the series' best. It was well received by Sylvia Anderson, who described it as her favourite episode. Lew Grade, head of distributor ITC, expressed great satisfaction with the filming during a visit to APF Studios in 1965. Stephen La Rivière considers the story one of the most unusual of the series, while Peter Webber of DVD Monthly magazine calls the episode "just insane". In 2004, "Attack of the Alligators!" was re-issued on DVD in North America as part of A&E Video's The Best of Thunderbirds: The Favorite Episodes. Reviewing the release for the website DVD Verdict, David Gutierrez awarded "Attack of the Alligators!" a perfect score of 100, declaring it the best episode in the collection and praising its production values: "It's like a beautifully directed short film". He elaborated: "'Attack of the Alligators!' serves as a terrific example of how strong Thunderbirds can look. It's not Howdy Doody sporting a jetpack – it's an hour-long programme that feels like a motion picture." Susanna Lazarus of Radio Times suggests that the episode is memorable specifically for its crocodile footage. The techniques used to produce the footage have caused the episode to be described as "controversial" by some sources. Mark Pickavance of the website Den of Geek criticises the footage from a visual standpoint, arguing that the use of scale sets with young crocodiles, "shot in super close-up to make them seem huge", does not produce a convincing illusion of giant alligators. Author Dave Thompson compares the giant reptiles to Swamp Thing, a superorganism featured in the DC Comics Universe. In 1976, Thunderbirds writer Dennis Spooner adapted the premise of "Attack of the Alligators!" while writing "Gnaws", an episode of The New Avengers featuring an enlarged rat. References Works cited External links 1966 British television episodes Fiction about size change Thunderbirds (TV series) episodes Works about crocodilians
Attack of the Alligators!
[ "Physics", "Mathematics" ]
2,756
[ "Fiction about size change", "Quantity", "Physical quantities", "Size" ]
9,445,837
https://en.wikipedia.org/wiki/Irrational%20rotation
In the mathematical theory of dynamical systems, an irrational rotation is a map T_θ : [0, 1) → [0, 1), T_θ(x) = x + θ mod 1, where θ is an irrational number. Under the identification of a circle with ℝ/ℤ, or with the interval [0, 1] with the boundary points glued together, this map becomes a rotation of a circle by a proportion θ of a full revolution (i.e., an angle of 2πθ radians). Since θ is irrational, the rotation has infinite order in the circle group and the map T_θ has no periodic orbits. Alternatively, we can use multiplicative notation for an irrational rotation by introducing the map T_θ : S^1 → S^1, T_θ(z) = z·e^{2πiθ}. The relationship between the additive and multiplicative notations is the group isomorphism φ(x) = e^{2πix} from [0, 1) with addition mod 1 to the circle group S^1. It can be shown that φ is an isometry. There is a strong distinction in circle rotations that depends on whether θ is rational or irrational. Rational rotations are less interesting examples of dynamical systems because if θ = a/b with a and b coprime integers, then T_θ^b(x) = x for all x. It can also be shown that T_θ^i(x) ≠ x when 1 ≤ i < b. Significance Irrational rotations form a fundamental example in the theory of dynamical systems. According to the Denjoy theorem, every orientation-preserving C^2-diffeomorphism of the circle with an irrational rotation number θ is topologically conjugate to T_θ. An irrational rotation is a measure-preserving ergodic transformation, but it is not mixing. The Poincaré map for the dynamical system associated with the Kronecker foliation on a torus with angle θ is the irrational rotation by θ. C*-algebras associated with irrational rotations, known as irrational rotation algebras, have been extensively studied. Properties If θ is irrational, then the orbit of any element of [0, 1) under the rotation T_θ is dense in [0, 1). Therefore, irrational rotations are topologically transitive. Irrational (and rational) rotations are not topologically mixing. Irrational rotations are uniquely ergodic, with the Lebesgue measure serving as the unique invariant probability measure. Suppose a ∈ [0, 1). Since T_θ is ergodic, lim_{N→∞} (1/N)·#{ 0 ≤ k < N : T_θ^k(x) ∈ [0, a) } = a for almost every x (and, by unique ergodicity, in fact for every x). Generalizations Circle rotations are examples of group translations. For a general orientation preserving homeomorphism f of S^1 to itself we call a homeomorphism F : ℝ → ℝ a lift of f if π ∘ F = f ∘ π, where π(x) = x mod 1. The circle rotation can be thought of as a subdivision of a circle into two parts, which are then exchanged with each other. A subdivision into more than two parts, which are then permuted with one another, is called an interval exchange transformation. Rigid rotations of compact groups effectively behave like circle rotations; the invariant measure is the Haar measure. Applications Skew Products over Rotations of the Circle: In 1969 William A. Veech constructed examples of minimal and not uniquely ergodic dynamical systems as follows: "Take two copies of the unit circle and mark off a segment J of length 2πα in the counterclockwise direction on each one with endpoint at 0. Now take θ irrational and consider the following dynamical system. Start with a point p, say, in the first circle. Rotate counterclockwise by 2πθ until the first time the orbit lands in J; then switch to the corresponding point in the second circle, rotate by 2πθ until the first time the point lands in J; switch back to the first circle and so forth. Veech showed that if θ is irrational, then there exists α for which this system is minimal and the Lebesgue measure is not uniquely ergodic." See also Bernoulli map Modular arithmetic Siegel disc Toeplitz algebra Phase locking (circle map) Weyl sequence References Further reading C. E. Silva, Invitation to ergodic theory, Student Mathematical Library, vol 42, American Mathematical Society, 2008 Dynamical systems Irrational numbers Rotation
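A minimal numerical sketch (added by the editor, not drawn from the cited sources) illustrates the density and equidistribution properties stated above; the golden-mean rotation number below is an arbitrary choice of irrational θ.

import numpy as np

theta = (np.sqrt(5) - 1) / 2          # an irrational rotation number (illustrative choice)
N = 100_000
x = 0.0
orbit = np.empty(N)
for k in range(N):
    x = (x + theta) % 1.0             # T_theta(x) = x + theta mod 1
    orbit[k] = x

# The fraction of the orbit falling in [0, a) approaches a, for every a.
for a in (0.1, 0.25, 0.5):
    print(a, float(np.mean(orbit < a)))

# A rational rotation is periodic instead: theta = 2/5 visits only 5 distinct points.
print(len({round(k * 2 / 5 % 1.0, 12) for k in range(100)}))

The printed frequencies converge to a as N grows, in line with unique ergodicity, while the rational case collapses to a finite periodic orbit.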
Irrational rotation
[ "Physics", "Mathematics" ]
706
[ "Physical phenomena", "Irrational numbers", "Mathematical objects", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Numbers", "Dynamical systems" ]
9,446,798
https://en.wikipedia.org/wiki/Abhyankar%27s%20lemma
In mathematics, Abhyankar's lemma (named after Shreeram Shankar Abhyankar) allows one to kill tame ramification by taking an extension of a base field. More precisely, Abhyankar's lemma states that if A, B, C are local fields such that A and B are finite extensions of C, with ramification indices a and b, and B is tamely ramified over C and b divides a, then the compositum AB is an unramified extension of A. See also Finite extensions of local fields References . Theorem 3, page 504. . , p. 279. . Theorems in algebraic geometry Lemmas in algebra Algebraic number theory Theorems in abstract algebra
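A concrete illustration may help; the following example is added by the editor and is not drawn from the cited references. Take C = Q_p (the p-adic numbers), B = Q_p(p^{1/b}) with p not dividing b, so that B/C is tamely ramified with ramification index b, and A = Q_p(p^{1/a}) with b dividing a, so that A/C has ramification index a. Writing π for the chosen a-th root of p in A, the element π^{a/b} is a b-th root of p lying in A, so the b-th root ρ of p generating B satisfies ρ = ζ·π^{a/b} for some b-th root of unity ζ. Hence AB = A(ρ) = A(ζ), and since p does not divide b, adjoining ζ gives an unramified extension of A, exactly as the lemma predicts.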
Abhyankar's lemma
[ "Mathematics" ]
153
[ "Theorems in algebraic geometry", "Theorems in abstract algebra", "Theorems in algebra", "Lemmas in algebra", "Algebraic number theory", "Theorems in geometry", "Lemmas", "Number theory" ]
9,446,878
https://en.wikipedia.org/wiki/Arlequin%20%28software%29
Arlequin is a free population genetics software package with an integrated GUI for data analysis. It performs several types of tests and calculations, including the fixation index (Fst, one of the "F-statistics"), genetic distance, Hardy–Weinberg equilibrium, linkage disequilibrium, analysis of molecular variance, mismatch distribution, and pairwise difference tests. The software is designed to handle different kinds of molecular, non-molecular, and/or frequency-type data. About Arlequin is a software package that integrates basic and advanced methods for population genetics and data analysis. Version 3.5.2.2 is available only on Microsoft Windows, as a zip archive and installation executables. Mac OS X and Linux have only the older 3.5.2 version, restricted to 64-bit environments, and offer only a command-line interface in the form of the "arlecore" and "arlsumstat" programs, together with example files. In 2019, new R functions were integrated into the Arlequin software; they are bundled with the software in the zip files for the Windows, Mac and Linux versions. References External links Official site About Free bioinformatics software Science software for Linux Science software for macOS Science software for Windows Population genetics
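To give a flavour of one of the calculations listed above, here is a short editor-added Python sketch of a chi-square test of Hardy–Weinberg equilibrium for a single biallelic locus. It is not Arlequin's implementation or interface, and the genotype counts are invented for the example.

from scipy.stats import chi2

def hwe_chi_square(n_AA, n_Aa, n_aa):
    # Simple chi-square test of Hardy-Weinberg equilibrium; illustration only,
    # not the procedure used by Arlequin.
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)            # 3 classes - 1 - 1 estimated parameter
    return stat, p_value

print(hwe_chi_square(50, 21, 29))            # hypothetical genotype counts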
Arlequin (software)
[ "Chemistry", "Biology" ]
272
[ "Biochemistry stubs", "Biotechnology stubs", "Bioinformatics", "Bioinformatics stubs" ]
9,446,968
https://en.wikipedia.org/wiki/Volatile%20%28astrogeology%29
Volatiles are the group of chemical elements and chemical compounds that can be readily vaporized. In contrast with volatiles, elements and compounds that are not readily vaporized are known as refractory substances. On planet Earth, the term 'volatiles' often refers to the volatile components of magma. In astrogeology volatiles are investigated in the crust or atmosphere of a planet or moon. Volatiles include nitrogen, carbon dioxide, ammonia, hydrogen, methane, sulfur dioxide, water and others. Planetary science Planetary scientists often classify volatiles with exceptionally low melting points, such as hydrogen and helium, as gases, whereas those volatiles with melting points above about 100 K (–173 °C, –280 °F) are referred to as ices. The terms "gas" and "ice" in this context can apply to compounds that may be solids, liquids or gases. Thus, Jupiter and Saturn are gas giants, and Uranus and Neptune are ice giants, even though the vast majority of the "gas" and "ice" in their interiors is a hot, highly dense fluid that gets denser as the center of the planet is approached, and in the case of Neptune, may reach temperatures of 5,100 °C. Inside Jupiter's orbit, cometary activity is driven by the sublimation of water ice. Supervolatiles such as CO and CO2 have generated cometary activity as far out as . Igneous petrology In igneous petrology the term more specifically refers to the volatile components of magma (mostly water vapor and carbon dioxide) that affect the appearance and explosivity of volcanoes. Volatiles in a magma with a high viscosity, generally felsic with a higher silica (SiO2) content, tend to produce explosive eruptions. Volatiles in a magma with a low viscosity, generally mafic with a lower silica content, tend to vent in effusive eruptions and can give rise to lava fountains. Volatiles in magma Some volcanic eruptions are explosive because of the mixing between water and magma reaching the surface, which releases energy suddenly. However, in some cases, the eruption is caused by volatiles dissolved in the magma itself. Approaching the surface, pressure decreases and the volatiles come out of solution, creating bubbles that circulate in the liquid. The bubbles become connected together, forming a network. This promotes fragmentation into small drops or spray, or the coagulation of clots in the gas. Generally, 95–99% of magma is liquid rock. However, the small percentage of gas present represents a very large volume when it expands on reaching atmospheric pressure. Gas is thus important in a volcano system because it generates explosive eruptions. Magma in the mantle and lower crust has a high volatile content. Water and carbon dioxide are not the only volatiles that volcanoes release; other volatiles include hydrogen sulfide and sulfur dioxide. Sulfur dioxide is common in basaltic and rhyolitic rocks. Volcanoes also release a large amount of hydrogen chloride and hydrogen fluoride as volatiles. Solubility of volatiles There are three main factors that affect the dispersion of volatiles in magma: confining pressure, composition of magma, and temperature of magma. Pressure and composition are the most important parameters. To understand how the magma behaves rising to the surface, the role of solubility within the magma must be known. An empirical law has been used for different magma-volatile combinations. 
For instance, for water in magma the equation is n=0.1078 P, where n is the amount of dissolved gas as a weight percentage (wt%) and P is the pressure in megapascals (MPa) acting on the magma. The coefficient changes with composition and volatile species: for example, for water in rhyolite n = 0.4111 P, and for carbon dioxide n = 0.0023 P. These simple equations work if there is only one volatile in a magma. However, in reality, the situation is not so simple because there are often multiple volatiles in a magma, and there is a complex chemical interaction between the different volatiles. Simplifying, the solubility of water in rhyolite and basalt is a function of pressure and depth below the surface in the absence of other volatiles. Both basalt and rhyolite lose water with decreasing pressure as the magma rises to the surface. The solubility of water is higher in rhyolite than in basaltic magma. Knowledge of the solubility allows the determination of the maximum amount of water that might be dissolved in relation to pressure. If the magma contains less water than the maximum possible amount, it is undersaturated in water. Usually, insufficient water and carbon dioxide exist in the deep crust and mantle, so magma is often undersaturated in these conditions. Magma becomes saturated when it reaches the maximum amount of water that can be dissolved in it. If the magma continues to rise towards the surface while holding more water than the maximum amount that can remain dissolved, it becomes supersaturated. The excess water can then be released as bubbles or water vapor. This happens because pressure decreases and ascent velocity increases during the process, and the system also has to balance the decrease of solubility against the decrease of pressure. By comparison, the solubility of carbon dioxide in magma is considerably lower than that of water, and carbon dioxide tends to exsolve at greater depth. In this case water and carbon dioxide are considered independent. What affects the behavior of the magmatic system is the depth at which carbon dioxide and water are released. The low solubility of carbon dioxide means that it starts to release bubbles before reaching the magma chamber. The magma is at this point already supersaturated. The magma enriched in carbon dioxide bubbles rises up to the roof of the chamber, and carbon dioxide tends to leak through cracks into the overlying caldera. Basically, during an eruption the magma loses more carbon dioxide than water, which is already supersaturated in the chamber. Overall, water is the main volatile during an eruption. Nucleation of bubbles Bubble nucleation happens when a volatile becomes saturated. The bubbles are composed of molecules that tend to aggregate spontaneously in a process called homogeneous nucleation. The surface tension acts on the bubbles, shrinking their surface and forcing the molecules back into the liquid. Nucleation is favoured where the available surface is irregular, since there the volatile molecules can ease the effect of surface tension. Nucleation can also occur thanks to the presence of solid crystals, which are stored in the magma chamber. They are perfect potential nucleation sites for bubbles. If there are no nucleation sites in the magma, bubble formation may begin very late and the magma becomes significantly supersaturated. The balance between supersaturation pressure and bubble radius is expressed by the equation ∆P=2σ/r, where ∆P is the supersaturation pressure (of the order of 100 MPa) and σ is the surface tension. If the nucleation starts later, when the magma is very supersaturated, the distance between bubbles becomes smaller. 
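To make the two relations quoted above concrete, the following short sketch (added by the editor, not part of the source) evaluates them numerically exactly as written. The surface tension value is an assumed placeholder, not a figure from the article, and the linear solubility law is only a rough single-volatile fit.

def dissolved_wt_percent(pressure_mpa, coefficient):
    # Empirical single-volatile law quoted above: n (wt%) = coefficient * P (MPa).
    return coefficient * pressure_mpa

for p in (30, 10, 1):
    print(p, "MPa:",
          round(dissolved_wt_percent(p, 0.1078), 2), "wt% H2O,",
          round(dissolved_wt_percent(p, 0.0023), 3), "wt% CO2")

def critical_bubble_radius(delta_p_pa, sigma_n_per_m):
    # Rearranged from dP = 2*sigma/r: the bubble radius balancing a supersaturation pressure.
    return 2.0 * sigma_n_per_m / delta_p_pa

# With dP = 100 MPa (the figure quoted above) and an assumed sigma of 0.1 N/m:
print("critical bubble radius ~", critical_bubble_radius(100e6, 0.1), "m")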
Essentially if the magma rises rapidly to the surface, the system will be more out of equilibrium and supersaturated. When the magma rises there is competition between adding new molecules to the existing ones and creating new ones. The distance between molecules characterizes the efficiency of volatiles to aggregate to the new or existing site. Crystals inside magma can determine how bubbles grow and nucleate. See also Cryovolcano Ice References External links Glossary of planetary astronomy terms Volatiles of Costa Rican volcanoes. Volatile Planetary Science Research Discoveries Astrobiology Ice Origins Petrology Planetary geology Prebiotic chemistry Volcanology
Volatile (astrogeology)
[ "Chemistry", "Astronomy", "Biology" ]
1,555
[ "Origin of life", "Speculative evolution", "Prebiotic chemistry", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
9,447,566
https://en.wikipedia.org/wiki/Rational%20consequence%20relation
In logic, a rational consequence relation is a non-monotonic consequence relation satisfying certain properties listed below. A rational consequence relation is a logical framework that refines traditional deductive reasoning to better model real-world scenarios. It incorporates rules like reflexivity, left logical equivalence, right-hand weakening, cautious monotony, disjunction on the left-hand side, logical and on the right-hand side, and rational monotony. These rules enable the relation to handle everyday situations more effectively by allowing for non-monotonic reasoning, where conclusions can be drawn based on usual rather than absolute implications. This approach is particularly useful in cases where adding more information can change the outcome, providing a more nuanced understanding than monotone consequence relations. Properties A rational consequence relation |~ satisfies: REF Reflexivity: θ |~ θ, and the so-called Gabbay–Makinson rules: LLE Left logical equivalence: if ⊨ θ ↔ φ and θ |~ ψ, then φ |~ ψ; RWE Right-hand weakening: if θ |~ φ and φ ⊨ ψ, then θ |~ ψ; CMO Cautious monotonicity: if θ |~ φ and θ |~ ψ, then θ ∧ φ |~ ψ; DIS Logical or (i.e. disjunction) on left hand side: if θ |~ ψ and φ |~ ψ, then θ ∨ φ |~ ψ; AND Logical and on right hand side: if θ |~ φ and θ |~ ψ, then θ |~ φ ∧ ψ; RMO Rational monotonicity: if θ |~ ψ and it is not the case that θ |~ ¬φ, then θ ∧ φ |~ ψ. Uses The rational consequence relation is non-monotonic, and the relation is intended to carry the meaning theta usually implies phi or phi usually follows from theta. In this sense it is more useful for modeling some everyday situations than a monotone consequence relation because the latter relation models facts in a more strict boolean fashion: something either follows under all circumstances or it does not. Example: cake The statement "If a cake contains sugar then it tastes good" implies under a monotone consequence relation the statement "If a cake contains sugar and soap then it tastes good." Clearly this doesn't match our own understanding of cakes. By asserting "If a cake contains sugar then it usually tastes good" a rational consequence relation allows for a more realistic model of the real world, and certainly it does not automatically follow that "If a cake contains sugar and soap then it usually tastes good." Note that if we also have the information "If a cake contains sugar then it usually contains butter" then we may legally conclude (under CMO) that "If a cake contains sugar and butter then it usually tastes good.". Equally, in the absence of a statement such as "If a cake contains sugar then usually it contains no soap", we may legally conclude from RMO that "If the cake contains sugar and soap then it usually tastes good." If this latter conclusion seems ridiculous to you then it is likely that you are subconsciously asserting your own preconceived knowledge about cakes when evaluating the validity of the statement. That is, from your experience you know that cakes that contain soap are likely to taste bad so you add to the system your own knowledge such as "Cakes that contain sugar do not usually contain soap.", even though this knowledge is absent from it. If the conclusion seems silly to you then you might consider replacing the word soap with the word eggs to see if it changes your feelings. Example: drugs Consider the sentences: Young people are usually happy Drug abusers are usually not happy Drug abusers are usually young We may consider it reasonable to conclude: Young drug abusers are usually not happy This would not be a valid conclusion under a monotonic deduction system (omitting of course the word 'usually'), since the third sentence would contradict the first two. 
In contrast the conclusion follows immediately using the Gabbay–Makinson rules: applying the rule CMO to the last two sentences yields the result. Consequences The following consequences follow from the above rules: MP Modus ponens: if θ |~ φ and θ |~ φ → ψ, then θ |~ ψ. MP is proved via the rules AND and RWE. CON Conditionalisation: if θ ∧ φ |~ ψ, then θ |~ φ → ψ. CC Cautious cut: if θ |~ φ and θ ∧ φ |~ ψ, then θ |~ ψ. The notion of cautious cut simply encapsulates the operation of conditionalisation, followed by MP. It may seem redundant in this sense, but it is often used in proofs so it is useful to have a name for it to act as a shortcut. SCL Supraclassity: if θ ⊨ φ, then θ |~ φ. SCL is proved trivially via REF and RWE. Rational consequence relations via atom preferences Let L be a finite propositional language with variables p_1, ..., p_n. An atom is a formula of the form ±p_1 ∧ ±p_2 ∧ ... ∧ ±p_n (where ±p_i stands for either p_i or its negation ¬p_i). Notice that there is a unique valuation which makes any given atom true (and conversely each valuation satisfies precisely one atom). Thus an atom can be used to represent a preference about what we believe ought to be true. Let At^L be the set of all atoms in L. For θ ∈ SL (the set of sentences of L), define S_θ = { α ∈ At^L : α ⊨ θ }. Let s = s_1, s_2, ..., s_m be a sequence of subsets of At^L. For θ, φ in SL, let the relation |~_s be such that θ |~_s φ if one of the following holds: S_θ ∩ s_i = ∅ for each 1 ≤ i ≤ m; S_θ ∩ s_i ≠ ∅ for some 1 ≤ i ≤ m and, for the least such i, S_θ ∩ s_i ⊆ S_φ. Then the relation |~_s is a rational consequence relation. This may easily be verified by checking directly that it satisfies the GM-conditions. The idea behind the sequence of atom sets is that the earlier sets account for the most likely situations such as "young people are usually law abiding" whereas the later sets account for the less likely situations such as "young joyriders are usually not law abiding". Notes By the definition of the relation |~_s, the relation is unchanged if we replace s_2 with s_2 \ s_1, s_3 with s_3 \ (s_1 ∪ s_2), ... and s_m with s_m \ (s_1 ∪ ... ∪ s_{m−1}). In this way we make the s_i disjoint. Conversely, it makes no difference to the rational consequence relation if we add to subsequent sets atoms from any of the preceding sets. The representation theorem It can be proven that any rational consequence relation on a finite language is representable via a sequence of atom preferences above. That is, for any such rational consequence relation |~ there is a sequence s of subsets of At^L such that the associated rational consequence relation |~_s is the same relation: |~ = |~_s. Notes By the above property of |~_s, the representation of a rational consequence relation need not be unique: if the s_i are not disjoint then they can be made so without changing the rational consequence relation, and conversely if they are disjoint then each subsequent set can contain any of the atoms of the previous sets without changing the rational consequence relation. References Logical consequence Binary relations Non-classical logic
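The atom-preference construction described above is small enough to implement directly. The following editor-added Python sketch is an illustration only: the particular preference sequence is invented so that the premises of the drugs example hold, and the final line checks that the non-monotonic conclusion follows.

from itertools import product

VARS = ("young", "drugs", "happy")
# Each atom corresponds to exactly one valuation of the three variables.
ATOMS = [dict(zip(VARS, bits)) for bits in product((False, True), repeat=len(VARS))]

def S(sentence):
    # S_theta: the set of atoms (valuations) satisfying the sentence.
    return frozenset(i for i, v in enumerate(ATOMS) if sentence(v))

def entails(s_seq, theta, phi):
    # theta |~ phi under the atom-preference sequence s_seq (sets of atom indices).
    s_theta, s_phi = S(theta), S(phi)
    for s_i in s_seq:
        hit = s_theta & s_i
        if hit:                      # least i with S_theta meeting s_i
            return hit <= s_phi
    return True                      # S_theta misses every s_i in the sequence

# A made-up preference sequence: the "most normal" worlds first, then the
# less likely drug-abuser worlds.
s = [S(lambda v: not v["drugs"] and v["happy"]),
     S(lambda v: v["drugs"] and v["young"] and not v["happy"])]

print(entails(s, lambda v: v["young"], lambda v: v["happy"]))                     # True
print(entails(s, lambda v: v["drugs"], lambda v: not v["happy"]))                 # True
print(entails(s, lambda v: v["drugs"], lambda v: v["young"]))                     # True
print(entails(s, lambda v: v["young"] and v["drugs"], lambda v: not v["happy"]))  # True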
Rational consequence relation
[ "Mathematics" ]
1,226
[ "Mathematical relations", "Binary relations" ]
9,447,823
https://en.wikipedia.org/wiki/Cosmonova
Cosmonova is an IMAX Dome cinema and planetarium located in an annex of the Swedish Museum of Natural History in Stockholm, Sweden. Cosmonova premiered over three nights starting on 13 October 1992, with the first public showing on 16 October. It was the first ever dedicated IMAX installation in Sweden (and third in the Nordic countries after Tietomaa Science Centre in Oulu, Finland and Tycho Brahe Planetarium in Copenhagen, Denmark) and is also the largest planetarium in Sweden. References External links Planetaria IMAX venues Cinemas in Sweden
Cosmonova
[ "Astronomy" ]
114
[ "Astronomy education", "Astronomy organizations", "Planetaria" ]
7,259,561
https://en.wikipedia.org/wiki/Tylopilus%20felleus
Tylopilus felleus, commonly known as the bitter bolete or the bitter tylopilus, is a fungus of the bolete family. Its distribution includes east Asia, Europe and eastern North America, extending south into Mexico and Central America. A mycorrhizal species, it grows in deciduous and coniferous woodland, often fruiting under beech and oak. Its fruit bodies have convex to flat caps that are some shade of brown, buff or tan and typically measure up to in diameter. The pore surface is initially white before turning pinkish with age. Like most boletes it lacks a ring and it may be distinguished from Boletus edulis and other similar species by its unusual pink pores and the prominent dark-brown net-like pattern on its stalk. French mycologist Pierre Bulliard described this species as Boletus felleus in 1788 before it was transferred into the new genus Tylopilus. It is the type species of Tylopilus and the only member of the genus found in Europe. Tylopilus felleus has been the subject of research into bioactive compounds that have been tested for antitumour and antibiotic properties. Although not poisonous it is generally considered inedible owing to its overwhelming bitterness. Taxonomy The species was first described in the scientific literature as le bolet chicotin (Boletus felleus) by French mycologist Pierre Bulliard in 1788. As the large genus Boletus was carved up into smaller genera, Petter Karsten transferred it in 1881 to Tylopilus, a genus diagnosed by its pink spores and adnate tubes. Tylopilus felleus is the type species of Tylopilus and the only member of the genus found in Europe. Synonyms include Boletus alutarius, described by Elias Magnus Fries in 1815 and later by Friedrich Wilhelm Gottlieb Rostkovius in 1844, and Paul Christoph Hennings's subsequent transfer of Fries's taxon into Tylopilus, T. alutarius. Lucien Quélet placed the taxon in Dictyopus in 1886 and then Rhodoporus in 1888, but neither of these genera are recognised today, the former having been merged into Boletus and the latter into Tylopilus. Genetic analysis published in 2013 shows that T. felleus and many (but not all) other members of Tylopilus form a Tylopilus clade within a larger group informally called anaxoboletus in the Boletineae. Other clades in the group include the porcini and Strobilomyces clades as well as three other groups composed of members of various genera including Xerocomus, Xerocomellus and Boletus badius and relatives. A variety described from the Great Lakes region, var. uliginosus, was recognised by Alexander H. Smith and Harry D. Thiers in 1971 on the basis of its microscopic features, a distinction supported by Professor C.B. Wolfe of Pennsylvania State University. However Index Fungorum does not consider this an independent taxon. Similarly, Boletus felleus var. minor, published originally by William Chambers Coker and A.H. Beers in 1943 (later transferred to Tylopilus by Albert Pilát and Aurel Dermek in 1974), has been folded into synonymy with T. felleus. Charles Horton Peck described Boletus felleus var. obesus in 1889, but no record of a type specimen exists. Although some records exist of T. felleus in Australia, their spores are of consistently smaller dimensions and this taxon has been classified as a separate species, T. brevisporus. Tylopilus felleus derives its genus name from the Greek tylos "bump" and pilos "hat" and its specific name from the Latin fel meaning "bile", referring to its bitter taste, similar to bile. 
The mushroom is commonly known as the "bitter bolete" or the "bitter tylopilus". Description The cap of this species grows up to 15 cm (6 in) in diameter, though some North American specimens reach 30 cm (12 in) across. Grey-yellow to pale- or walnut-brown, it is slightly downy at first and later becomes smooth with a matte lustre. It is initially convex before flattening out with maturity. The cap skin does not peel away from the flesh. The pores underneath are white at first and become pinkish with maturity. They are adnate to the stalk and bulge downwards as the mushroom ages. The pores bruise carmine or brownish, often developing rusty-brown spots with age, and number about one or two per millimetre. The tubes are long relative to the size of the cap, measuring deep in the middle part of the cap. The stalk is initially bulbous before stretching and thinning in the upper part; the lower part of the stalk remains swollen, sometimes shrinking at the base where it attaches to the substrate. It measures —rarely to —tall, and wide, and can bulge out to across at the base. It is lighter in colour than the cap, and covered with a coarse brown network of markings, which have been likened to fishnet stockings in appearance. Described as "very appetising" in appearance, the flesh is white or creamy, and pink beneath the cap cuticle; the flesh can also develop pinkish tones where it has been cut. It has a slight smell, which has been described as pleasant, as well as faintly unpleasant. The flesh is softer than that of other boletes, and tends to become more spongy as the mushroom matures. Insects rarely infest this species. The colour of the spore print is brownish, with pink, reddish, or rosy tints. Spores are somewhat fuse-shaped, smooth, and measure 11–17 by 3–5 μm. The basidia (spore-bearing cells) are club-shaped, four-spored, and measure 18–25.6 by 7.0–10.2 μm. Cystidia on the walls of the tubes (pleurocystidia) are fuse-shaped with a central swelling, thin-walled, and have granular contents. They possess sharp to tapered tips, and have overall dimensions of 36–44 by 8.0–11.0 μm. On the pore edges, the cheilocystidia are similar in shape to the pleurocystidia, measuring 24.8–44.0 by 7.3–11.0 μm. The hymenium of Smith and Thiers's variety uliginosus, when mounted in Melzer's reagent, shows reddish globules of pigment measuring 2–8 μm that appear in the hyphae and throughout the hymenium, and a large (8–12 μm) globule in the pleurocystidia. Several chemical tests have been documented that can help confirm the identify of this species. On the cap flesh, application of formaldehyde turns the tissue pinkish, iron salts result in a colour change to greyish-green, aniline causes a lavender to reddish-brown colour, and phenol a purplish pink to reddish brown. On the cap cuticle, nitric acid causes an orange-salmon colour, sulphuric acid creates orange-red, ammonia usually makes brown, and a potassium hydroxide solution usually makes orange. Similar species Italian cook and author Antonio Carluccio reports this is one of the most common fungi brought to him to identify, having been mistaken for an edible species. Young specimens can be confused with many edible boletes, though as the pores become more pink the species becomes easier to identify. Some guidebooks advocate tasting the flesh, the smallest piece of which will be very bitter. The dark-on-light reticulation in the stalk is distinctive and is the opposite colouration to that on the stalk of the prized Boletus edulis. T. 
felleus is found in the same habitat as B. badius, though the latter's yellow tubes and blue-bruising flesh easily distinguish these very dissimilar species. B. subtomentosus may have a similar-coloured cap but its yellow pores and slender stalk aid identification. Tylopilus rubrobrunneus, found in hardwood forests of eastern North America, is similar in appearance to T. felleus but has a purplish to purple-brown cap. It is also inedible owing to its bitter taste. Another North American species, T. variobrunneus, has a cap that is reddish-brown to chestnut-brown, with olive tones in youth. It has shorter spores than T. felleus, typically measuring 9–13 by 3–4.5  μm. In the field it can be distinguished from the latter species by its mild to slightly bitter taste. T. rhoadsiae, found in the southeastern United States, has a lighter-coloured cap which is smaller, up to in diameter. The edible T. indecisus and T. ferrugineus can be confused with T. felleus but have less reticulated stalks. The dimensions of the spores of the Australian species T. brevisporus range from 9.2 to 10.5 by 3.5 to 3.9 μm. T. neofelleus, limited in distribution to deciduous forests of China, New Guinea, Japan and Taiwan, can be distinguished from T. felleus macroscopically by its vinaceous-brown cap and pinkish-brown to vinaceous stalk and microscopically by its smaller spores (measuring 11–14 by 4–5 μm) and longer pleurocystidia (49–107 by 14–24 μm). Ecology, distribution and habitat Like all Tylopilus species, T. felleus is mycorrhizal. It is found in deciduous and coniferous woodland, often under beech and oak in well-drained acid soils, which can be sandy, gravelly or peaty. If encountered on calcareous (chalky) soil, it will be in moist areas that have become waterlogged and have ample leaf litter. Fruit bodies grow singly or in small groups, and occasionally in small clusters with two or three joined at the base of the stem. Fruit bodies have also been growing in the cavities of old trees, on old conifer stumps, or on buried rotten wood. The fungus obtains most of its nitrogen requirements from amino acids derived from the breakdown of proteins, although a lesser amount is obtained from the amino sugar glucosamine (a breakdown product of chitin, a major component of fungal cell walls). The mycorrhizal plant partner benefits from the fungus's ability to use these forms of nitrogen, which are often abundant in the forest floor. Fruit bodies appear over summer and autumn, anytime from June to October or even November, in many of the northern temperate zones. Large numbers may appear in some years and none in others, generally proportional to the amount of rainfall. Variety uliginosus, known from Michigan, grows among lichens and mosses under pines. In North America it is known from eastern Canada, south to Florida and west to Minnesota in the United States and into Mexico and Central America. Its European distribution is widespread; it is relatively common in many regions but rare or almost absent in others. In Asia it has been recorded from the vicinity of Dashkin in the Astore District of northern Pakistan and as far east as China, where it has been recorded from Hebei, Jiangsu, Fujian, Guangdong and Sichuan provinces, and Korea. The strong taste of the fruit body may have some role in insects avoiding it. The small fly species Megaselia pygmaeoides feeds on and infests the fruit bodies of T. felleus in North America though it seems to prefer other boletes in Europe. 
Fruit bodies can be parasitized by the mould Sepedonium ampullosporum. Infection results in necrosis of the mushroom tissue and a yellow colour caused by the formation of large amounts of pigmented aleurioconidia (single-celled conidia produced by extrusion from the conidiophores). The bacterium Paenibacillus tylopili has been isolated from the mycorrhizosphere of T. felleus; this is the region around its subterranean hyphae where nutrients released from the fungus affect the activity of the microbial population in the soil. The bacterium excretes enzymes that allow it to break down the biomolecule chitin. Fruit bodies of T. felleus have a high capacity to accumulate radioactive caesium (137Cs) from contaminated soil, a characteristic attributed to the deep soil penetration achieved by the mycelium. In contrast the species has a limited capacity to accumulate the radioactive isotope 210Po. Edibility As its common name suggests, it is extremely bitter, though not toxic as such. This bitterness is worsened by cooking. One specimen can foul the taste of a whole meal prepared with mushrooms. Despite this it is sold in markets (tianguis) in Mexico. A local recipe from France, Romania and East Germany calls for stewing it in skimmed milk, after which it can be eaten or powdered and used for flavouring. The mushroom is not bitter for those who lack genetic sensitivity to bitter taste, a trait endowed by the gene TAS2R38 (taste receptor 2 member 38). The compound responsible for the bitter taste has not been identified. Research The mycelium of Tylopilus felleus can be grown in axenic culture, on agar containing growth medium. The fungus can form fruit bodies if the temperature is suitable and the light conditions simulate a 12-hour day. The mushrooms are usually deformed, often lacking stalks so that the cap grows on the surface direct and the caps are usually in diameter. There are few Boletaceae species known to fruit in culture since ectomycorrhizal fungi tend to not fruit when separated from their host plant. Compounds from T. felleus have been extracted and researched for potential medical uses. Tylopilan is a beta-glucan that was isolated from the fruit bodies in 1988 and shown in laboratory tests to have cytotoxic properties and to stimulate non-specific immunological response. In particular it enhances phagocytosis, the process by which macrophages and granulocytes engulf and digest foreign bacteria. In experiments on mice with tumour cells it appeared to have antitumour effects when administered in combination with a preparation of Cutibacterium acnes in a 1994 Polish study. Researchers in 2004 reported that extracts of the fruit body inhibit the enzyme pancreatic lipase; it was the second most inhibitory of 100 mushrooms they tested. A compound present in the mushroom, N-γ-glutamyl boletine, has mild antibacterial activity. See also Boletus rubripes – the red-stemmed bitter bolete List of North American boletes References felleus Fungi described in 1788 Fungi of Asia Fungi of Central America Fungi of Europe Fungi of North America Inedible fungi Fungus species Taxa named by Jean Baptiste François Pierre Bulliard
Tylopilus felleus
[ "Biology" ]
3,166
[ "Fungi", "Fungus species" ]
7,259,581
https://en.wikipedia.org/wiki/Saturn%20IB-A
The Saturn IB-A was a proposed Saturn I family variant but was never built. It was to be a three-stage rocket virtually identical in layout to the Saturn IB-CE, with upgraded H-1 engines and a stretched S-IVB stage. External links astronautix.com Apollo program Saturn IB
Saturn IB-A
[ "Astronomy" ]
64
[ "Rocketry stubs", "Astronomy stubs" ]
7,259,606
https://en.wikipedia.org/wiki/Niwaki
Niwaki is the Japanese word for "garden trees". Niwaki is also a descriptive word for highly "sculpted" trees. Most varieties of plants used in Japanese gardens are called niwaki. These trees help to create the structure of the garden. Japanese gardens are not about using a large range of plants; rather, the objective is creating atmosphere or ambiance. The technique of niwaki is more about what to do with a tree than the tree itself. While Western gardeners enjoy experimenting with a wide range of different plants, Japanese gardeners achieve variety through training and shaping a relatively limited set of plants. Trees play a key role in the gardens and landscapes of Japan as well as being of important spiritual and cultural significance to its people. Fittingly, Japanese gardeners have fine-tuned a distinctive set of pruning techniques meant to coax out the essential characters of niwaki. Niwaki are often cultivated to achieve some very striking effects: trees are made to look older than they really are with broad trunks and gnarled branches; trees are made to imitate wind-swept or lightning-struck trees in the wild; Cryptomeria japonica specimens are often pruned to resemble free-growing trees. Some designers are using zoke (miscellaneous plants) as well as the niwaki to create a more "natural" mood in the landscape. Most traditional garden designers still rely primarily on the rarefied niwaki palette. The principles of niwaki may be applied to garden trees all over the world and are not restricted to Japanese gardens. Plant types The plants used most commonly in Japanese gardens today include: Japanese black pine (Pinus thunbergii) Japanese cedar (Cryptomeria japonica) Camellias including sasanqua (Camellia sasanqua) Many other flowering varieties (Camellia japonica cvs) Japanese evergreen oaks (Quercus glauca, Quercus myrsinifolia) Gardenia (Gardenia jasminoides) Sweet osmanthus (Osmanthus fragrans) Japanese maple (Acer palmatum) Japanese apricot (Prunus mume and others) Yoshino flowering cherry (Prunus × yedoensis 'Yoshino') Japanese aucuba (Aucuba japonica) Japanese andromeda (Pieris japonica) Winter daphne (Daphne odora) Japanese enkianthus (Enkianthus perulatus) Satsuki azalea (Rhododendron cvs.) References Japanese style of gardening Landscape architecture
Niwaki
[ "Engineering" ]
525
[ "Landscape architecture", "Architecture" ]
7,259,690
https://en.wikipedia.org/wiki/Saturn%20IB-B
The proposed Saturn IB-B was to be essentially an uprated Saturn IB using the new MS-IVB-2 upper stage, using the HG-3 engine, developed from the S-IVB and an uprated S-IB-A first stage. External links astronautix.com Saturn IB
Saturn IB-B
[ "Astronomy" ]
64
[ "Rocketry stubs", "Astronomy stubs" ]
7,260,624
https://en.wikipedia.org/wiki/Michael%20Denton
Michael John Denton (born 25 August 1943) is a British biochemist who is a proponent of intelligent design and a Senior Fellow at the Discovery Institute's Center for Science and Culture. He holds a PhD degree in biochemistry. Denton's book, Evolution: A Theory in Crisis, inspired intelligent design proponents Phillip Johnson and Michael Behe. Biography Denton gained a medical degree from Bristol University in 1969 and a PhD in biochemistry from King's College London in 1974. He was a senior research fellow in the Biochemistry Department at the University of Otago, Dunedin, New Zealand from 1990 to 2005. He later became a scientific researcher in the field of genetic eye diseases. He has spoken worldwide on genetics, evolution and the anthropic argument for design. Denton's current interests include defending the "anti-Darwinian evolutionary position" and the design hypothesis formulated in his book Nature’s Destiny. Denton described himself as an agnostic. He is currently a senior fellow at the Discovery Institute's Center for Science and Culture. Books Evolution: A Theory in Crisis In 1985 Denton wrote the book Evolution: A Theory in Crisis, presenting a systematic critique of neo-Darwinism ranging from paleontology, fossils, homology, molecular biology, genetics and biochemistry, and argued that evidence of design exists in nature. Some book reviews criticized his arguments. He describes himself as an evolutionist and he has rejected biblical creationism. The book influenced Phillip E. Johnson, the father of intelligent design, Michael Behe, a proponent of irreducible complexity, and George Gilder, co-founder of the Discovery Institute, the hub of the intelligent design movement. Since writing the book Denton has changed many of his views on evolution; however, he still believes that the existence of life is a matter of design. Nature's Destiny Denton still accepts design and embraces a non-Darwinian evolutionary theory. He denies that randomness accounts for the biology of organisms; he has proposed an evolutionary theory which is a "directed evolution" in his book Nature's Destiny (1998). Life, according to Denton, did not exist until the initial conditions of the universe were fine-tuned (see Fine-tuned universe). Denton was influenced by Lawrence Joseph Henderson (1878-1942), Paul Davies and John D. Barrow who argued for an anthropic principle in the cosmos (Denton 1998, v, Denton 2005). His second book Nature's Destiny (1998) is his biological contribution to the anthropic principle debate, dominated by physicists. He argues for a law-like evolutionary unfolding of life. Publications Evolution: A Theory in Crisis. Adler & Adler, 1985. Nature's Destiny: How the Laws of Biology Reveal Purpose in the Universe, New York: Free Press, 1998. Evolution: Still a Theory in Crisis. Seattle, Washington: Discovery Institute, 2016. Paperback: References External links Are We Spiritual Machines? Ray Kurzweil Vs. the Critics of Strong A.I. with an essay by Michael Denton Denton's Genetic-Medicine Work at University of Sindh 1943 births Living people Alumni of King's College London Intelligent design advocates Non-Darwinian evolution Pseudoscientific biologists
Michael Denton
[ "Biology" ]
651
[ "Non-Darwinian evolution", "Biology theories" ]
7,260,875
https://en.wikipedia.org/wiki/Edison%20screw
Edison screw (ES) is a standard lightbulb socket for electric light bulbs. It was developed by Thomas Edison (1847–1931), patented in 1881, and was licensed in 1909 under General Electric's Mazda trademark. The bulbs have right-hand threaded metal bases (caps) which screw into matching threaded sockets (lamp holders). For bulbs powered by AC current, the thread is generally connected to neutral and the contact on the bottom tip of the base is connected to the "live" phase. In North America and continental Europe, Edison screws displaced other socket types for general lighting. In the early days of electrification, Edison screws were the only standard connector, and appliances other than light bulbs were connected to AC power via lamp sockets. Today Edison screw sockets comply with international standards. Their types are designated as "Exx", such as "E26", where "xx" indicates the diameter of the socket in millimeters. History In the United States, early manufacturers of incandescent lamps used several different and incompatible bases in the 1880s and 1890s. In designing his screw, Edison copied the lid of a kerosene can in his workshop, even sawing it off to make a prototype in 1880. Another company, the Thomson-Houston Electric Company, used a threaded stud at the bottom of the socket and a flat contact ring. The Sawyer-Man or Westinghouse base used a spring clip acting on grooves in the bulb base and a contact stud at the bottom of the lamp. Most smaller competitors had to produce lamps for all three types, and some used their own designs as well. Other lamp bases include the bayonet mount and wedge base. All three major designs were patented. Edison himself filed his applications in 1881 and 1890. In response to Edison's patent, Reginald Fessenden invented the bi-pin connector for the 1893 World's Fair. After some design tweaks Edison settled upon a screw 1 inch in diameter with 7 threads per inch of length, which much later became E26. Screw shells produced as early as 1888 had a lighter taper than the modern ones. In 1892, Edison General Electric Company merged with Thomson-Houston to found General Electric, which gradually adopted the Edison screw and made it prevalent. By about 1908, the Edison base was most common in the U.S. with the others falling out of use. Proposals to introduce one or several international standards for Edison screws began in 1918, when France suggested to the International Electrotechnical Commission (IEC) to take up the issue of sockets and holders. All IEC attempts to reach consensus by 1925 failed, but lamp makers continued the work in an independent committee and developed two standards—one for Europe, another for Americas—which were endorsed by the IEC in 1930 and 1931 respectively. It was in this period when E-designations of screws first originated in Germany (where seven DIN VDE standards were enacted in 1924—1925) and then adopted by IEC. Types Specifications for all lamp mount types are defined in the following American National Standards Institute (ANSI) and International Electrotechnical Commission (IEC) publications: Lamp Caps – ANSI C81.61 and IEC 60061-1 Lamp Holders – ANSI C81.62 and IEC 60061-2 Gauges (to ensure interchangeability) – ANSI C81.63 and IEC 60061-3 Guidelines for Electrical Lamp Bases, Lampholders and Gauges – ANSI C81.64 and IEC 60061-4 Generally, the two standards are harmonized, although several types of screw mount are still defined in only one standard. 
In the designation "Exx", "E" stands for "Edison" and "xx" indicates the diameter in millimeters as measured across the peaks of the thread on the base (male), e.g., E12 has a diameter of 12 mm. This is distinct from the glass envelope (bulb) diameter, which in the U.S. is given in eighths of an inch, e.g., A19, MR16, T12. There are four commonly used thread size groups for mains supply lamps: Candelabra: E12 North America, E11 in Europe Intermediate: E17 North America, E14 (Small ES, SES) in Europe Medium or standard: E26 (MES) in North America, E27 (ES) in Europe Mogul: E39 North America, E40 (Goliath ES) in Europe. The E26 and E27 are usually interchangeable, as are the E39 and E40, although less so; although there is only a 1 mm difference in thread outside diameter, there is a small difference in pitch; an E40 cap will often fit in an E39 holder but not the other way around. E11 and E12 are not interchangeable. Other semi-standard screw thread sizes are available for certain specific applications. The large E39 "Mogul" and E40 "Goliath" base are used on street lights, and high-wattage lamps (such as a 100 W / 200 W / 300 W 3-way) and many high-intensity discharge lamps. In areas following the U.S. National Electrical Code, general-use lamps over 300 W cannot use an E26 base and must instead use the E39 base. Medium Edison screw (MES) bulbs for 12 V are also produced for recreational vehicles. Large outdoor Christmas lights use Intermediate base, as do some desk lamps and many microwave ovens. Previously, emergency exit signs also tended to use the intermediate base, but U.S. and Canadian rules now require long-life and energy-efficient LED lamps, which can be purchased inside a conventional Edison base bulb as a retrofit. A medium screw base should not carry more than 25 amperes current; this may limit the practical rating of low voltage lamps. E29 "Admedium" bases are used for special applications, for example UV spotlight lamps in magnetic crack detection machines. In countries that use 220–240 Volt AC domestic power, standard-size E27 and small E14 are the most common screw-mount sizes and are prevalent throughout continental Europe and China. In 120 Volt North America, 100 Volt Japan and 110 Volt Taiwan, the standard size for general-purpose lamps is E26. E12 is typically used for candelabra fixtures. E14 or E17 are also sometimes used, especially in small table lamps and novelty lighting, and occasionally the lights on newer ceiling fans. 'Christmas lights' use several base sizes: E17 for C9 bulbs, E12 for C7 bulbs, E10 for decades-old series-wired C6 bulb sets in the U.S., and an entirely different wedge base for T1¾ mini-lights. For a short time early on, these mini lights were manufactured using E5 screw bases. A tiny E5 or E5.5 size is only used for extra-low voltages, such as in interior illumination for model buildings, and model vehicles such as model trains. These are often called "pea bulbs" if they are globe-shaped, but they commonly look like sub-miniature Christmas bulbs, or large "grain-of-wheat" bulbs. E10 bulbs are common on battery-powered flashlights, as are bayonet mounts (although those are usually held in with a circular flange located where the base meets the glass envelope of the bulb). The E11 base is sometimes used for 50/75/100 Watt halogen lamps in North America, where it is called the "mini-can", and tighter threads are used to keep them out of E12 base nightlights and other places where they could start a fire. 
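The "Exx" designation scheme and the common size groups above can be summarised in a few lines of code. The sketch below is a reader's aid only: the size-group table simply restates the pairings given in this article, and is not an excerpt from the ANSI or IEC standards.

```python
# Toy summary of the Edison screw ("Exx") naming convention described above:
# the number after "E" is the nominal thread (major) diameter in millimetres.
COMMON_SIZE_GROUPS = {
    "Candelabra":      {"North America": "E12", "Europe": "E11"},
    "Intermediate":    {"North America": "E17", "Europe": "E14"},
    "Medium/standard": {"North America": "E26", "Europe": "E27"},
    "Mogul/Goliath":   {"North America": "E39", "Europe": "E40"},
}

def thread_diameter_mm(designation: str) -> float:
    """Read the diameter encoded in a designation such as 'E26' (-> 26.0 mm)."""
    if not designation.upper().startswith("E"):
        raise ValueError("Edison screw designations start with 'E', e.g. 'E26'")
    return float(designation[1:])

for group, regions in COMMON_SIZE_GROUPS.items():
    row = ", ".join(f"{region}: {code} ({thread_diameter_mm(code):.0f} mm)"
                    for region, code in regions.items())
    print(f"{group:15s} {row}")
```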
There are also adapters between screw sizes, and for adapting to or from bayonet caps. A socket extender makes the bulb stick out further, such as to accommodate a compact fluorescent lamp that is too large to fit in a recessed lighting fixture. Most Edison screws have right-hand threads (lamp is turned clockwise to tighten), but left-hand threaded screws are sometimes used, usually for a non-standard voltage or wattage bulb. This prevents the use of an incorrect bulb, which could cause damage. Public locations such as railway trains and the New York City Subway have used light bulbs with left-hand threads to discourage theft of the bulbs for use in regular light fixtures. Fittings Three-way lamps have a d suffix to indicate double contacts, usually E26d or E27d, or rarely E39d. The second contact is used for the lower-wattage filament of the two inside the lamp. This extra contact is a ring located around the main contact. Unlike bayonet sockets, three-way and regular lamps are interchangeable, although the low filament or low setting does not work if mismatched. The small Edison screw has nine threads per inch, or a pitch of per thread. The medium Edison screw has seven threads per inch, or a pitch of per thread. In the U.S., the Energy Independence and Security Act of 2007 requirement for greater energy efficiency only applies to the medium Edison screw, all other being considered "specialty" lamps. Diazed fuses DII uses the same E27 thread as standard 230 V lamps, but have a longer body and cannot be screwed into a lamp holder (socket). A lamp base is too short to contact the bottom terminal of a fuse holder. However it is possible (but not useful) to screw a DII fuse holder without a fuse in an E27 lamp holder. Other uses The Edison screw socket was used as an outlet (such as for toasters) when mains electricity was still mainly used for lighting, and before wall outlets became common. In North America, fuses were used in buildings wired before 1960. These Edison base fuses would screw into a fuse socket similar to Edison-base incandescent lamps. Some adapters for wall outlets use an Edison screw, allowing a light socket to become an ungrounded electrical outlet (such as to install Christmas lights temporarily via a porch light), or to make a pull-chain switch with two outlets, or to split it for two lamps. Another adapter can make a wall outlet into a lamp holder (lamp socket). Various other accessories have been made, including a smoke detector that recharges over a few hours and lasts for a few days or weeks thereafter, and still allows the attached lamp to operate normally. There have also been electronics that stick onto the end of the screw base and allow the attached lamp to flash, for example, to attract the attention of arriving guests or emergency vehicles; others function as a dimmer or timer, or dim gradually in a child's bedroom in the evening. Some vacuum tubes, such as certain rectifiers, use an Edison screw base. See also A-series light bulb Multifaceted reflector Screw thread diameters GU24 lamp fitting Notes References External links Edison screw thread (in English) Electrical power connectors Mechanical standards Standards of the United States Types of lamp fr:Support des lampes électriques#Culots à vis
Edison screw
[ "Engineering" ]
2,295
[ "Mechanical standards", "Mechanical engineering" ]
7,260,876
https://en.wikipedia.org/wiki/Ehrling%27s%20lemma
In mathematics, Ehrling's lemma, also known as Lions' lemma, is a result concerning Banach spaces. It is often used in functional analysis to demonstrate the equivalence of certain norms on Sobolev spaces. It was named after Gunnar Ehrling. Statement of the lemma Let (X, ||⋅||X), (Y, ||⋅||Y) and (Z, ||⋅||Z) be three Banach spaces. Assume that: X is compactly embedded in Y: i.e. X ⊆ Y and every ||⋅||X-bounded sequence in X has a subsequence that is ||⋅||Y-convergent; and Y is continuously embedded in Z: i.e. Y ⊆ Z and there is a constant k so that ||y||Z ≤ k||y||Y for every y ∈ Y. Then, for every ε > 0, there exists a constant C(ε) such that, for all x ∈ X, ||x||Y ≤ ε ||x||X + C(ε) ||x||Z. Corollary (equivalent norms for Sobolev spaces) Let Ω ⊂ Rn be open and bounded, and let k ∈ N. Suppose that the Sobolev space Hk(Ω) is compactly embedded in Hk−1(Ω). Then the following two norms on Hk(Ω) are equivalent: the full Sobolev norm ( Σ_{|α|≤k} ||D^α u||²_{L²(Ω)} )^{1/2} and the reduced norm ( ||u||²_{L²(Ω)} + Σ_{|α|=k} ||D^α u||²_{L²(Ω)} )^{1/2}, which retains only the function itself and its highest-order weak derivatives. For the subspace of Hk(Ω) consisting of those Sobolev functions with zero trace (those that are "zero on the boundary" of Ω), the L2 norm of u can be left out to yield another equivalent norm. References Notes Bibliography Banach spaces Sobolev spaces Lemmas in analysis
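A sketch of how the lemma yields the norm equivalence in the corollary may be useful. The argument below is a standard one, reconstructed here for illustration rather than quoted from a particular source; the compact embedding Hk(Ω) ⊂⊂ Hk−1(Ω) plays the role of X ⊂⊂ Y.

```latex
% Sketch: equivalence of the two norms on H^k(\Omega) via Ehrling's lemma.
% Take X = H^k(\Omega), Y = H^{k-1}(\Omega), Z = L^2(\Omega); then X is
% compactly embedded in Y and Y is continuously embedded in Z, so the lemma
% with \varepsilon = 1/2 gives a constant C such that
\[
  \|u\|_{H^{k-1}(\Omega)}
    \le \tfrac{1}{2}\,\|u\|_{H^{k}(\Omega)} + C\,\|u\|_{L^{2}(\Omega)} .
\]
% Squaring (using (a+b)^2 \le 2a^2 + 2b^2), inserting the result into
% \|u\|_{H^k}^2 = \|u\|_{H^{k-1}}^2 + \sum_{|\alpha|=k} \|D^\alpha u\|_{L^2}^2,
% and absorbing the resulting \tfrac12\|u\|_{H^k}^2 term into the left-hand
% side yields
\[
  \|u\|_{H^{k}(\Omega)}^{2}
    \le C' \Bigl( \|u\|_{L^{2}(\Omega)}^{2}
           + \sum_{|\alpha|=k} \|D^{\alpha}u\|_{L^{2}(\Omega)}^{2} \Bigr).
\]
% The reverse inequality is immediate, because the reduced norm omits
% nonnegative terms from the full norm; hence the two norms are equivalent.
```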
Ehrling's lemma
[ "Mathematics" ]
344
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical analysis stubs", "Lemmas in mathematical analysis", "Lemmas" ]
7,262,157
https://en.wikipedia.org/wiki/Sintz%20Gas%20Engine%20Company
The Sintz Gas Engine Company was formed in about 1885 by Clark Sintz and others in Springfield, Ohio. It was a pioneering marine engine manufacturing business that expanded into other fields. After its sale in 1902 to the Michigan Yacht and Power Company, Sintz ceased to exist in 1903 as an entity. Background Clark Sintz had been undertaking pioneering engine work both on his own and with John F Endter. John Foos held the patent. In 1885 the company demonstrated a small 2-cycle engine in a small boat. The engine was based on a Dugald Clerk design. Clerk was a Scottish engineer who had patented the engine in the 1870s. Foos formed his own company, Foos Gas Engine Company, in 1889 using his own improved version of Clark Sintz's engine. In 1894 Elwood Haynes used a Sintz engine in his first car, as did Milton Reeves in 1896. In 1894 Sintz sold his interest in the company and, together with his son, Claude formed the Wolverine Motor Works. Wolverine Motor Works The Wolverine Motor Works initially was formed to make motor cars but instead began making marine engines for pleasure boats and in 1901 moved its marine engine manufacturing to Holland, Michigan. That same year Sintz sold the business to Charles Snyder. Sintz had been engaged by Snyder to design a small gauge railway for his banana plantation in Panama. Claude Sintz went on to make marine engines under his name from 1904 to 1907 and then founded The Sintz-Wallin Company of Grand Rapids. His early engines were two strokes with the brand name Leader. In 1913 Sintz-Wallin merged with the Midland Tractor Company and formed the Leader Gas Engine Company. In 1915 the Leader's moved to Quincy, Illinois, where they consolidated along with Dayton Foundry and Machine Company and Hayton Pump Company into Dayton-Dick Company. Dayton-Dick became Dayton-Dowd in 1919 and ceased making tractors in 1924. The pump manufacturing business continued until 1945 when it was acquired by the Peerless Pump Company. Peerless is now owned by Grundfos. Cars From 1899 to 1903 the Sintz company produced cars of numerous styles. It also produced rail cars and light trams. All were powered by an own-make two-stroke engine. Michigan Yacht and Power Company In about 1890 O J Mulford, W A Pungs, and a Mr Seymour formed the Michigan Yacht and Power Company in Detroit. They made small power boats and were distributors of the Sintz marine engines. In 1901 or 1902, Michigan Yacht and Power Company purchased the Sintz company and moved it to Detroit. In late 1903 Sintz ceased to exist as an entity. The new company was named the Pungs-Finch Auto and Gas Engine Company in 1904. Pungs bought out his partner O. J. Mulford, who departed and established the Gray Marine Motor Company in 1905. Gray Marine Motor Company renamed again in 1911 as Gray Motor Company, reformed in 1924 as Gray Marine Motor Company, and eventually acquired by Continental in 1944. References David Burgess Wise, The New Illustrated Encyclopedia of Automobiles. Defunct motor vehicle manufacturers of the United States Defunct manufacturing companies based in Michigan Defunct manufacturing companies based in Ohio Motor vehicle manufacturers based in Michigan Springfield, Ohio Marine engine manufacturers Engine manufacturers of the United States Vehicle manufacturing companies established in 1899 Vehicle manufacturing companies disestablished in 1903 Automotive pioneers Automotive engineers Vintage vehicles Cars introduced in 1899 1890s cars 1900s cars
Sintz Gas Engine Company
[ "Engineering" ]
683
[ "Automotive engineering", "Automotive engineers" ]
7,262,872
https://en.wikipedia.org/wiki/London%20equations
The London equations, developed by brothers Fritz and Heinz London in 1935, are constitutive relations for a superconductor relating its superconducting current to electromagnetic fields in and around it. Whereas Ohm's law is the simplest constitutive relation for an ordinary conductor, the London equations are the simplest meaningful description of superconducting phenomena, and form the genesis of almost any modern introductory text on the subject. A major triumph of the equations is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. Description There are two London equations when expressed in terms of measurable fields: ∂js/∂t = (ns e²/m) E and ∇ × js = −(ns e²/m) B. Here js is the (superconducting) current density, E and B are respectively the electric and magnetic fields within the superconductor, e is the elementary charge (equal in magnitude to the charge of an electron or proton), m is the electron mass, and ns is a phenomenological constant loosely associated with a number density of superconducting carriers. The two equations can be combined into a single "London equation" in terms of a specific vector potential A which has been gauge fixed to the "London gauge", giving: js = −(ns e²/m) A. In the London gauge, the vector potential obeys the following requirements, ensuring that it can be interpreted as a current density: ∇ · A = 0; A = 0 in the superconductor bulk; and A · n̂ = 0, where n̂ is the normal vector at the surface of the superconductor. The first requirement, also known as the Coulomb gauge condition, leads to the constant superconducting electron density expected from the continuity equation. The second requirement is consistent with the fact that supercurrent flows near the surface. The third requirement ensures no accumulation of superconducting electrons on the surface. These requirements do away with all gauge freedom and uniquely determine the vector potential. One can also write the London equation in terms of an arbitrary gauge by simply replacing A with A + ∇χ, where χ is a scalar function and ∇χ is the change in gauge which shifts the arbitrary gauge to the London gauge. The vector potential expression holds for magnetic fields that vary slowly in space. London penetration depth If the second of London's equations is manipulated by applying Ampère's law, ∇ × B = μ0 js, then it can be turned into the Helmholtz equation for the magnetic field: ∇²B = B/λ², where the constant λ = √(m/(μ0 ns e²)) is the characteristic length scale over which external magnetic fields are exponentially suppressed: it is called the London penetration depth; typical values are from 50 to 500 nm. For example, consider a superconductor within free space where the magnetic field outside the superconductor is a constant value pointed parallel to the superconducting boundary plane in the z direction. If x leads perpendicular to the boundary then the solution inside the superconductor may be shown to be B(x) = B0 e^(−x/λ), where B0 is the field at the boundary. From here the physical meaning of the London penetration depth can perhaps most easily be discerned. Rationale Original arguments While it is important to note that the above equations cannot be formally derived, the Londons did follow a certain intuitive logic in the formulation of their theory. Substances across a stunningly wide range of composition behave roughly according to Ohm's law, which states that current is proportional to electric field. However, such a linear relationship is impossible in a superconductor for, almost by definition, the electrons in a superconductor flow with no resistance whatsoever.
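(As an aside before the Londons' original argument continues: the penetration depth formula just given is easy to evaluate numerically. The short sketch below is purely illustrative; the carrier density is an assumed, typical order-of-magnitude value rather than a figure taken from this article.)

```python
import math

# Physical constants (SI units)
MU_0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def london_penetration_depth(n_s: float) -> float:
    """lambda = sqrt(m / (mu_0 * n_s * e^2)) for a carrier density n_s in m^-3."""
    return math.sqrt(M_ELECTRON / (MU_0 * n_s * E_CHARGE**2))

def field_fraction(depth_m: float, lam: float) -> float:
    """B(x)/B0 = exp(-x/lambda): exponential suppression inside the superconductor."""
    return math.exp(-depth_m / lam)

n_s = 1e28   # assumed typical carrier density, m^-3 (illustrative value only)
lam = london_penetration_depth(n_s)
print(f"penetration depth ~ {lam * 1e9:.0f} nm")                    # a few tens of nm
print(f"field remaining five depths in: {field_fraction(5 * lam, lam):.3f} of B0")
```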
To this end, the London brothers imagined electrons as if they were free electrons under the influence of a uniform external electric field. According to the Lorentz force law, m dv/dt = −e E, these electrons should encounter a uniform force, and thus they should in fact accelerate uniformly. Assume that the electrons in the superconductor are now driven by an electric field; then, according to the definition of current density, js = −ns e v, we should have ∂js/∂t = −ns e (dv/dt) = (ns e²/m) E. This is the first London equation. To obtain the second equation, take the curl of the first London equation and apply Faraday's law, ∇ × E = −∂B/∂t, to obtain ∂/∂t (∇ × js + (ns e²/m) B) = 0. As it currently stands, this equation permits both constant and exponentially decaying solutions. The Londons recognized from the Meissner effect that constant nonzero solutions were nonphysical, and thus postulated that not only was the time derivative of the above expression equal to zero, but also that the expression in the parentheses must be identically zero: ∇ × js + (ns e²/m) B = 0. This results in the second London equation and, up to a gauge transformation which is fixed by choosing the "London gauge", js = −(ns e²/m) A, since the magnetic field is defined through B = ∇ × A. Additionally, according to Ampère's law, ∇ × B = μ0 js, one may derive that: ∇ × (∇ × B) = μ0 ∇ × js = −(μ0 ns e²/m) B. On the other hand, since ∇ · B = 0, we have ∇ × (∇ × B) = −∇²B, which leads to the spatial distribution of the magnetic field obeying: ∇²B = B/λ², with penetration depth λ = √(m/(μ0 ns e²)). In one dimension, such a Helmholtz equation has the solution form B(x) = B0 e^(−x/λ). Inside the superconductor (x > 0), the magnetic field exponentially decays, which well explains the Meissner effect. With the magnetic field distribution, we can use Ampère's law again to see that the supercurrent also flows near the surface of the superconductor, as expected from the requirement for interpreting js as a physical current. While the above rationale holds for a superconductor, one may also argue in the same way for a perfect conductor. However, one important fact that distinguishes the superconductor from a perfect conductor is that a perfect conductor does not exhibit the Meissner effect when it is cooled in a nonzero magnetic field. In fact, the postulation does not hold for a perfect conductor. Instead, the time derivative must be kept and cannot be simply removed. This results in the fact that the time derivative of the field (instead of the field) obeys: ∇²(∂B/∂t) = (1/λ²) ∂B/∂t. For x ≫ λ, deep inside a perfect conductor, we have ∂B/∂t = 0 rather than B = 0 as in the superconductor. Consequently, whether the magnetic flux inside a perfect conductor will vanish depends on the initial condition (whether it's zero-field cooled or not). Canonical momentum arguments It is also possible to justify the London equations by other means. Current density is defined according to the equation js = −ns e v. Taking this expression from a classical description to a quantum mechanical one, we must replace the values js and v by the expectation values of their operators. The velocity operator, v = (1/m)(p + e A), is defined by dividing the gauge-invariant, kinematic momentum operator by the particle mass m. Note we are using −e as the electron charge. We may then make this replacement in the equation above. However, an important assumption from the microscopic theory of superconductivity is that the superconducting state of a system is the ground state, and according to a theorem of Bloch's, in such a state the canonical momentum p is zero. This leaves js = −(ns e/m)⟨p + e A⟩ = −(ns e²/m) A, which is the London equation according to the second formulation above. References Superconductivity Equations
London equations
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
1,335
[ "Physical quantities", "Superconductivity", "Mathematical objects", "Materials science", "Equations", "Condensed matter physics", "Electrical resistance and conductance" ]
7,263,885
https://en.wikipedia.org/wiki/Flavor%20Flav
William Jonathan Drayton Jr. (born March 16, 1959), known by his stage name Flavor Flav ( ), is an American hip hop artist. Known for his catchphrase "Yeah, boyeeeeee!" when performing, he is a founding member, alongside Chuck D, of Public Enemy, a rap group that has earned six Grammy Award nominations, and has been inducted into the Rock and Roll Hall of Fame. After spending several years out of the limelight, Flav starred in multiple VH1 reality series, including The Surreal Life, Strange Love, and Flavor of Love. Early life and education Drayton was born in Roosevelt, New York, and grew up in nearby Freeport, two communities within the Town of Hempstead. Drayton is the cousin of former Penn State basketball player Shep Garner, and of Brooklyn MC Timbo King of Royal Fam. He is also a cousin of rappers Ol' Dirty Bastard, RZA, and GZA of the Wu-Tang Clan. He began teaching himself piano at the age of five, and mastered piano, drums and guitar at an early age, while also singing in the youth choir at his church. According to Chuck D, Drayton is proficient in fifteen instruments. By the time he dropped out of Freeport High School in the 11th grade, he had been in and out of jail for robbery and burglary. Drayton attended culinary school in 1978. Later, he attended Adelphi University on Long Island, where he met Carlton Ridenhour (who later became known as Chuck D). They first collaborated on Chuck D's hip-hop college radio show, then began rapping together. Drayton's stage name Flavor Flav was originally his graffiti tag. Career Music Flavor Flav (often referred to as "Flav") came to prominence as a founding member and hype man of the rap group Public Enemy, which he co-founded in 1985 with Chuck D. A year later, the group released "Public Enemy #1", which brought them to the attention of Def Jam Records executive Rick Rubin. Rubin initially did not understand Flav's role in the act and wanted to sign Chuck D as a solo act; however, Chuck D insisted that Flav be signed with them and the two were signed to Def Jam. The group's first album Yo! Bum Rush the Show was released in 1987. Flav served as the comic foil to Chuck D's serious, politically charged style. The group gained much wider fame with their following release, 1988's It Takes a Nation of Millions to Hold Us Back, which went double platinum. By the time the political single "Fight the Power" was released in 1989, the group had become mainstream superstars. Along with Chuck D, the showman of the group and its promotional voice, Flav stood out among the members of Public Enemy as he often got the fans excited, appearing on stage and in public wearing big hats and glasses, and a large clock dangling from his neck. The first released track on which Flav rapped solo was "Life of a Nigerian" on Goat Ju JU, although the first hit on which he rapped solo would not come until the 1990 single "911 Is a Joke". During Public Enemy's first years of existence, Flav experienced tensions with group-mate Professor Griff, who never liked Flav's flamboyant stance in what Griff felt should be a serious, politically-challenging group. In 1999, Flavor Flav recorded with DJ Tomekk and Grandmaster Flash the single "1, 2, 3, ... Rhymes Galore". The single stayed for 17 weeks in the top ten of the German charts. In 2006, Flav put out his first solo album, titled Hollywood. It was released during the second season of the reality TV dating show Flavor of Love. 
On March 1, 2020, Public Enemy released a statement saying that the group would be "moving forward without Flavor Flav," following a disagreement over the group's decision to endorse Bernie Sanders and perform at his Los Angeles rally. Flavor Flav denounced the firing, maintaining that he was Chuck D's partner in Public Enemy and could therefore not be fired from it. On April 1, 2020, Chuck D announced that the firing was a hoax. Flavor Flav stated shortly thereafter that he was not a part of the hoax and disapproved of the stunt. Television After a hiatus from the music scene, Flavor Flav was invited to participate on VH1 reality show The Surreal Life. During this show, he developed a relationship with actress and singer Brigitte Nielsen. Following the conclusion of The Surreal Life, VH1 gave Flav and Nielsen a show titled Strange Love, which detailed their globetrotting adventure in love. At the end of Strange Love, Nielsen decided to return to her fiancé, Mattia Dessi. Flavor of Love, which aired for three seasons, is a reality show where Flavor Flav looks for love. The show's success led to spin-offs titled I Love New York and I Love Money. During the third season reunion of Flavor of Love, Flav proposed to Liz, the mother of his youngest son, Karma. The Comedy Central roast of Flavor Flav aired on August 12, 2007. Guests appearing at the roast included: Snoop Dogg, Brigitte Nielsen, Jimmy Kimmel, Carrot Top, Lisa Lampanelli, Ice-T, Jeff Ross, Katt Williams, Patton Oswalt, Greg Giraldo, and Sommore. Flav played Calvester Hill on the MyNetworkTV comedy series Under One Roof, starring alongside Kelly Perine. Restaurant owner In 2011, Flav partnered with Nick Cimino to open Flav's Fried Chicken in Cimino's hometown of Clinton, Iowa. The two had met through Cimino's brother Peter, who runs Mama Cimino's in Las Vegas and Castle Rock Bar and Pizzeria in Kingman, Arizona. After enjoying Flav's homemade fried chicken, Peter Cimino began selling chicken wings using Flav's recipe. The founders hoped to start a national restaurant franchise. A mix of squabbling owners, bounced checks, and bad business decisions led to Flavor Flav's Chicken shutting down barely four months after it opened. Flavor Flav was not involved in the restaurant's day-to-day operations; Nick Cimino simply paid for Flavor Flav's license. Flavor Flav's House of Flavor in Las Vegas opened on Flav's birthday in March 2012. Flavor Flav teamed up with Gino Harmon and Salvatore Bitonti to start a national franchise known as Flavor Flav's Chicken & Ribs, which opened December 21, 2012 in Sterling Heights, Michigan. The business was not affiliated with the previous two ventures Flavor Flav has had in the restaurant business. Flavor Flav's Chicken & Ribs was a casual dining experience with a quick serve attitude. Flavor Flav's Chicken & Ribs closed in July 2013 after being evicted by its landlord for failure to pay rent. Other TV and media appearances In 2002, Flav appeared in Taking Back Sunday's music video for their song You're So Last Summer. Flav has appeared as a playable fighter in the 2004 fighting game; Def Jam: Fight for NY. In May 2005, Flav took part in the UK reality TV show The Farm on Channel 5. Also in 2005, Flavor Flav made a guest appearance in the MTV2 surreal black comedy show Wonder Showzen as himself, in the debut episode "Birth". 
On June 14, 2006, Flav's participation, with WEVR-MRC, in the Lisa Tolliver Show celebration of National Safety Month, earned kudos from Surgeon General of the United States Richard Carmona. On November 18, 2009, Flav became a downloadable character in the PlayStation Network's video game Pain. Flav stars in Deon Taylor's horror anthology Nite Tales and Dark Christmas. On May 10, 2010, Flav guest hosted the wrestling show WWE Raw. On August 14, 2011, Flav appeared as a host at the twelfth annual Gathering of the Juggalos. On January 10, 2012, Flav appeared with his longtime fiancée Liz on ABC's Celebrity Wife Swap. His fiancée traded places with Suzette, the wife of Twisted Sister front-man Dee Snider. On February 5, 2012, Flav appeared along with Elton John in a Pepsi Co. ad during Super Bowl XLVI. On February 11, 2012, Flav appeared as an honorary member of the UNLV Rebellion during the UNLV Runnin' Rebels victory over San Diego State, 65-63. From June to September 2012, Flav co-starred and rapped in the web series Dr. Fubalous. Flav has also appeared in YooHoo & Friends as Father Time, and the animated feature Hitpig!, both under the direction of animator David Feiss. Sports Flavor Flav sang a widely noted performance of the Star Spangled Banner at the Milwaukee Bucks/Atlanta Hawks game in 2023. His performance was confusing to some, but Flavor Flav described the performance as a bucket list item, and a tribute to military veterans. He frequently engages in philanthropy to help struggling Olympic athletes. Flav helped financially support athletes of the United States women's national water polo team amidst their preparations for the 2024 Summer Olympics in Paris, signing a five-year sponsorship deal as the team's official hype man. He travelled to France to personally watch the team play. Flavor Flav made a custom bronze clock necklace for US Gymnast Jordan Chiles when her Bronze medal was controversially rescinded at the Paris 2024 games. He also helped raise money for the family of Paralympic sprinter Nick Mayhugh to travel to Paris to see their relative play, and paid rent for the Olympic Discus Thrower Veronica Fraley during the Olympic games. Flavor Flav is a long time fan of women's sports. Flavor Flav has been a notable guest at WNBA games, including performing a call and response tribute to the late Fatman Scoop at a New York Liberty game in September 2024. Personal life and legal issues Flavor Flav had his first three children with Karen Ross, three more with Angie Parker, a son with Elizabeth Trujillo, and another child with Kate Gammell. In 1991, Flav pleaded guilty to assaulting his then-girlfriend Karen Ross and served 30 days in jail, lost custody of his children, and fell deeper into addiction. In 1993, Flav was charged with attempted murder and imprisoned for 90 days for shooting at his neighbor. Later that year, Flav was charged with domestic violence, and cocaine and marijuana charges. His family performed an intervention, and he checked into the Betty Ford Center for an addiction to crack cocaine. After Flav's father died of complications from diabetes in 1997, Flav decided to re-enter rehabilitation, this time at the Long Island Center for Recovery. At one point, he broke both arms in a motorcycle crash. Flavor Flav dated Beverly Johnson, and by 2000, he lived in a small apartment in the Bronx with her and her two children from a previous marriage, while making money scalping baseball tickets. 
In 2002, Flav spent nine weeks in Rikers Island jail for driving with a suspended license, numerous parking tickets, and tardiness for probation appointments. Following his release from jail, Flav broke up with Johnson and moved in with his mother on Long Island. Chuck D became concerned about his friend's well-being and, toward the end of 2003, suggested Flav move to Los Angeles. Flav moved into his friend Princess' apartment, and within months met Cris Abrego and Mark Cronin, the creators and executive producers of the reality television series The Surreal Life. The pair sought him out as soon as they heard Flav had moved to Los Angeles. Seeing that he had remained free from his previous addictions, they wanted to cast him. Initially Flav refused, feeling the show was for celebrities past their prime. He was eventually convinced to join by previous participant MC Hammer. On May 2, 2011, Flav was arrested on four outstanding misdemeanor warrants for various driving offenses. Police said Flav had two outstanding arrest warrants for driving without a license, one for driving without insurance, and one related to a parking citation. Flav has since been released. In June 2011, Flav said on the Australian radio show The Kyle and Jackie O Show that when his drug problem was at its worst, he would spend up to US$2,600 a day on crack cocaine. As of October 2012, Elizabeth Trujillo, the mother of Flav's son, Karma, lived with Flav in Las Vegas and had been his fiancée for eight years. On October 17, 2012, Flav was jailed in Las Vegas on felony charges stemming from a domestic argument with Trujillo and his threats to attack her teenage son, Gibran, with a knife. Flav's mother, Anna Drayton, died on December 31, 2013. On January 9, 2014, Flav was pulled over on Long Island's Meadowbrook Parkway for driving in a zone, and was additionally charged with possession of marijuana and unlicensed operation of a vehicle. Authorities discovered Flav had 16 suspensions on his license. He was en route to his mother's funeral. Flav was arrested near Las Vegas on May 21, 2015. The charges included speeding and driving under the influence. On July 21, 2019, Flav had his youngest son Jordan with Kate Gammell. Idiosyncrasies When asked about the significance of his trademark clock necklaces, Flav responded: "The reason why I wear this clock is because, you know, time is the most important element, and when we stop, time keeps going." Flavor Flav has a penchant for speaking about himself in the third person. Discography Solo albums Hollywood (2006) Guest appearances Barshem, "Where's My $5 At?" Bumpy Knuckles & DJ Premier, "Shake the Room", Kolexxxion, 2012 De La Soul, "Come on Down", The Grind Date, 2004 DJ Tomekk, "1, 2, 3, ... Rhymes Galore" (w/ Grandmaster Flash, MC Rene & Afrob), Return of Hip Hop, 1999 George Clinton, "Paint The White House Black" (w/ Chuck D, Ice Cube, MC Breed, Kam, Yo-Yo & Dr. Dre), Hey, Man, Smell My Finger, 1993;"Tweakin'" (w/ Chuck D), The Cinderella Theory, 1989 Heavy D and The Boyz, "You Can't See What I Can See", B-Side to "Don't Curse", 1992 Ice Cube, "I'm Only Out for One Thang", AmeriKKKa's Most Wanted, 1990 Living Colour, "Funny Vibe" (w/ Chuck D), Vivid, 1988 Material, "Burnin" (w/ DXT), Intonarumori, 1999 Nigo, "From New York to Tokyo", 2001 Nikki D, "Lettin' Off Steam (Club Mix)", Daddy's Little Girl, 1991 Prince Akeem, "Only We Can Do This", Coming Down Like Babylon, 1991 Stop the Violence Movement, "Self Destruction", 1989 Wu-Tang Clan, "Soul Power", Iron Flag, 2001 Eric B. 
& Rakim, "I Ain't No Joke" (Music Video Cameo) Xzibit, "What U See Is What U Get" (Music Video Cameo) P. Diddy, "P.E. 2000" (Music Video Cameo) Taking Back Sunday, "You're So Last Summer" (Music Video Cameo) Will Smith, "So Fresh" (w/ Biz Markie & Slick Rick) (Music Video Cameo) Micayla de Ette, "Write a Song", 2019 Notes References External links Public Enemy Artist profile at MTV 1959 births Living people Adelphi University alumni African-American male rappers African-American television producers Television producers from New York (state) Hardcore hip-hop artists American people convicted of burglary East Coast hip-hop musicians American people convicted of assault Underground rappers American people convicted of robbery Illeists People from Freeport, New York People from Roosevelt, New York Public Enemy (band) members Rappers from New York (state) 21st-century American rappers Clocks Hype men
Flavor Flav
[ "Physics", "Technology", "Engineering" ]
3,414
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
7,264,522
https://en.wikipedia.org/wiki/Substrate%20mapping
Substrate mapping (or wafer mapping) is a process in which the performance of semiconductor devices on a substrate is represented by a map showing the performance as a colour-coded grid. The map is a convenient representation of the variation in performance across the substrate, since the distribution of those variations may be a clue as to their cause. The concept also includes the package of data generated by modern wafer testing equipment which can be transmitted to equipment used for subsequent 'back-end' manufacturing operations. History The initial process supported by substrate maps was inkless binning. Each tested die is assigned a bin value, depending on the result of the test. For example, a passing die might be assigned bin value 1 (a good bin), while a die with an open circuit is assigned bin 10 and a die with a short circuit bin 11. In the very early days of wafer test, the dies were put in different bins or buckets, depending on the test results. Physical binning may no longer be used, but the analogy is still good. The next step in the process was to mark the failing dies with ink, so that during assembly only uninked dies were used for die attachment and final assembly. The inking step may be skipped if the assembly equipment is able to access the information in the maps generated by the test equipment. A wafer map is a substrate map that applies to an entire wafer, while substrate maps in general also cover other carriers used in the semiconductor process, including frames, trays and strips. E142 As with many other steps in the semiconductor process, standards are available for this one. The newest and most promising standard is the E142 standard, provided by the SEMI organization. The standard was approved by ballot for release in 2005. It supports many possible substrate maps, including the ones named above. While the old standards could only support standard bin maps, representing bin information, this standard also supports transfer maps, which, for example, make it possible to trace dies on strips back to the wafer locations they came from. External links SEMI organization: organization which is working on semiconductor process standards. Semiconductor device fabrication
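The bin-map idea described above can be pictured with a small sketch. Everything below is invented for illustration: the grid, yield calculation and rendering are a reader's aid that follows the example bin numbers in the text, not the E142 file format.

```python
# A toy wafer/bin map: each cell holds the bin code assigned to one tested die.
# Bin numbers follow the example in the text: 1 = pass, 10 = open, 11 = short.
# None marks grid positions where no die exists (the edge of the wafer).
BIN_PASS, BIN_OPEN, BIN_SHORT = 1, 10, 11

wafer_map = [
    [None, 1,  1,  None],
    [1,    11, 1,  1   ],
    [1,    1,  10, 1   ],
    [None, 1,  1,  None],
]

def yield_fraction(bin_map):
    """Fraction of tested dies that landed in the good bin."""
    tested = [b for row in bin_map for b in row if b is not None]
    return sum(b == BIN_PASS for b in tested) / len(tested)

def render(bin_map):
    """Print the map much as inking once marked it: '.' = pass, 'X' = fail."""
    for row in bin_map:
        print("".join(" " if b is None else ("." if b == BIN_PASS else "X")
                      for b in row))

render(wafer_map)
print(f"yield: {yield_fraction(wafer_map):.0%}")   # 10 of 12 tested dies pass -> 83%
```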
Substrate mapping
[ "Materials_science" ]
434
[ "Semiconductor device fabrication", "Microtechnology" ]
7,265,847
https://en.wikipedia.org/wiki/Chemotaxis%20assay
Chemotaxis assays are experimental tools for evaluation of chemotactic ability of prokaryotic or eukaryotic cells. A wide variety of techniques have been developed. Some techniques are qualitative - allowing an investigator to approximately determine a cell's chemotactic affinity for an analyte - while others are quantitative, allowing a precise measurement of this affinity. Quality control In general, the most important requisite is to calibrate the incubation time of the assay both to the model cell and the ligand to be evaluated. Too short incubation time results in no cells in the sample, while too long time perturbs the concentration gradients and measures more chemokinetic than chemotactic responses. The most commonly used techniques are grouped into two main groups: Agar-plate techniques This way of evaluation deals with agar-agar or gelatine containing semi-solid layers made prior to the experiment. Small wells are cut into the layer and filled with cells and the test substance. Cells can migrate towards the chemical gradient in the semi solid layer or under the layer as well. Some variations of the technique deal also with wells and parallel channels connected by a cut at the start of the experiment (PP-technique). Radial arrangement of PP-technique (3 or more channels) provides the possibility to compare chemotactic activity of different cell populations or study preference between ligands. Counting of cells: positive responder cells could be counted from the front of migrating cells, after staining or in native conditions in light microscope. Two-chamber techniques Boyden chamber Chambers isolated by filters are proper tools for accurate determination of chemotactic behavior. The pioneer type of these chambers was constructed by Boyden. The motile cells are placed into the upper chamber, while fluid containing the test substance is filled into the lower one. The size of the motile cells to be investigated determines the pore size of the filter; it is essential to choose a diameter which allows active transmigration. For modelling in vivo conditions, several protocols prefer coverage of filter with molecules of extracellular matrix (collagen, elastin etc.) Efficiency of the measurements was increased by development of multiwell chambers (e.g. NeuroProbe), where 24, 96, 384 samples are evaluated in parallel. Advantage of this variant is that several parallels are assayed in identical conditions. Bridge chambers In another setting, the chambers are connected side by side horizontally (Zigmond chamber) or as concentric rings on a slide (Dunn chamber) Concentration gradient develops on a narrow connecting bridge between the chambers and the number of migrating cells is also counted on the surface of the bridge by light microscope. In some cases the bridge between the two chambers is filled with agar and cells have to "glide" in this semisolid layer. Capillary techniques Some capillary techniques provide also a chamber like arrangement, however, there is no filter between the cells and the test substance. Quantitative results are gained by the multiwell type of this probe using 4-8-12-channel pipettes. Accuracy of the pipette and increased number of the parallel running samples is the great advantage of this test. Counting of cells: positive responder cells are count from the lower chamber (long incubation time) or from the filter (short incubation time). For detection of cells general staining techniques (e.g. trypan blue) or special probes (e.g. 
mt-dehydrogenase detection with the MTT assay) are used. Labelled cells (e.g. with fluorochromes) are also used; in some assays, cells become labelled as they transmigrate through the filter. Other techniques Besides the two families of techniques described above, a wide range of protocols has been developed to measure chemotactic activity. Some are only qualitative, like aggregation tests, in which small pieces of agar or filter are placed onto a slide and the accumulation of cells around them is measured. In another, semiquantitative technique, cells are overlaid on the test substance and changes in the opalescence of the originally cell-free compartment are recorded during the incubation time. The third frequently used qualitative technique is the T-maze and its adaptations for microplates. In the original version, a container drilled into a peg is filled with cells. The peg is then twisted so that the cells come into contact with two other containers filled with different substances. The incubation is stopped by resetting the peg, and the cells in each container are counted. Lately, microfluidic devices have also been used increasingly to test for chemotaxis quantitatively and precisely. References External links Chemotaxis Cell Migration Gateway Cytometric chemotaxis and cell migration assay Free tool based on ImageJ to analyse chemotactical processes Chemotaxis Image Analysis Tool Molecular biology Laboratory techniques Perception Physiology Signal transduction
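For quantitative two-chamber assays, the counted cells are often summarised as a chemotaxis (migration) index: the ratio of cells migrating toward the test substance to cells migrating toward a control. The sketch below only illustrates that arithmetic; the triplicate counts and condition names are hypothetical, not output from any particular instrument or protocol.

```python
# Minimal sketch: summarising a Boyden-chamber chemotaxis assay.
# The cell counts and condition names below are hypothetical.

def chemotaxis_index(cells_toward_ligand, cells_toward_control):
    """Ratio of cells migrating toward the test substance vs. the control.

    Values well above 1 suggest a directed (chemotactic) response;
    values near 1 suggest random migration (chemokinesis)."""
    if cells_toward_control == 0:
        raise ValueError("control well contained no migrated cells")
    return cells_toward_ligand / cells_toward_control

# Triplicate counts from the lower chambers after incubation (hypothetical).
ligand_counts = [412, 388, 401]      # wells facing a putative chemoattractant
control_counts = [105, 98, 112]      # wells facing medium only

mean_ligand = sum(ligand_counts) / len(ligand_counts)
mean_control = sum(control_counts) / len(control_counts)

print(f"mean migrated (ligand):  {mean_ligand:.1f}")
print(f"mean migrated (control): {mean_control:.1f}")
print(f"chemotaxis index:        {chemotaxis_index(mean_ligand, mean_control):.2f}")
```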
Chemotaxis assay
[ "Chemistry", "Biology" ]
1,021
[ "Physiology", "Signal transduction", "nan", "Molecular biology", "Biochemistry", "Neurochemistry" ]
7,266,141
https://en.wikipedia.org/wiki/Strongly%20positive%20bilinear%20form
A bilinear form a(•,•), whose arguments are elements of a normed vector space V, is a strongly positive bilinear form if and only if there exists a constant c > 0 such that a(v, v) ≥ c ‖v‖² for all v in V, where ‖·‖ is the norm on V. References AMS 108 p.120 Functional analysis
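As a small worked example (not drawn from the cited reference), the Euclidean inner product on R^n is strongly positive with constant c = 1:

```latex
% Worked example: the Euclidean inner product a(u,v) = u \cdot v on V = \mathbb{R}^n
% is a strongly positive bilinear form with constant c = 1, since
a(v, v) \;=\; v \cdot v \;=\; \sum_{i=1}^{n} v_i^2 \;=\; \|v\|_2^2 \;\ge\; 1 \cdot \|v\|_2^2
\qquad \text{for all } v \in \mathbb{R}^n .
```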
Strongly positive bilinear form
[ "Mathematics" ]
65
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
7,266,349
https://en.wikipedia.org/wiki/Torricellian%20chamber
In cave diving, a Torricellian chamber is a cave chamber with an airspace above the water at less than atmospheric pressure. This forms when the water level drops and there is no way for more air to get into the chamber. Surfacing in such chambers could pose an increased risk of decompression sickness to divers, comparable to flying after diving or to ascending to an altitude whose pressure equals the pressure at the chamber's surface. Also, in a Torricellian chamber the diver's depth gauge is unlikely to give an accurate reading of pressure, as most depth gauges are not calibrated to measure depths less than zero. Dive computers generally indicate that the diver has surfaced when the pressure drops to less than about 1 msw. The chambers are named after Evangelista Torricelli, inventor of the barometer. References External links (scroll down to alphabetical order) Cave diving Underwater diving physics
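As a rough illustration of why the trapped airspace sits below atmospheric pressure, the sketch below treats the chamber as a simple static water barometer: the airspace pressure drops by the weight of the water column held up above the nearest surface that is open to the atmosphere. The density value, column heights, and the static model itself are illustrative assumptions only, not diving guidance.

```python
# Minimal sketch: static "water barometer" model of a Torricellian chamber.
# All numbers are illustrative assumptions.

RHO = 1000.0       # fresh-water density, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2
P_ATM = 101_325.0  # atmospheric pressure at the outside surface, Pa

def chamber_airspace_pressure(height_above_free_surface_m):
    """Pressure of the trapped airspace when the chamber's water surface stands
    the given number of metres above the nearest water surface that is open to
    the atmosphere. Cannot fall below a vacuum."""
    return max(P_ATM - RHO * G * height_above_free_surface_m, 0.0)

for height in (0.0, 1.0, 3.0, 5.0):
    p = chamber_airspace_pressure(height)
    print(f"held-up column {height:4.1f} m -> airspace pressure {p/1000:7.1f} kPa "
          f"({p/P_ATM:4.2f} atm)")
```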
Torricellian chamber
[ "Physics" ]
181
[ "Applied and interdisciplinary physics", "Underwater diving physics" ]
7,266,679
https://en.wikipedia.org/wiki/Bedford%20Level%20experiment
The Bedford Level experiment was a series of observations carried out along a length of the Old Bedford River on the Bedford Level of the Cambridgeshire Fens in the United Kingdom during the 19th and early 20th centuries to deny the curvature of the Earth through measurement. Samuel Birley Rowbotham, who conducted the first observations starting in 1838, claimed that he had proven the Earth to be flat. However, in 1870, after adjusting Rowbotham's method to allow for the effects of atmospheric refraction, Alfred Russel Wallace found a curvature consistent with a spherical Earth. The Bedford Level At the point chosen for all the experiments, the river is a slow-flowing drainage canal running in an uninterrupted straight line for a stretch to the north-east of the village of Welney. This makes it an ideal location to directly measure the curvature of the Earth, as Rowbotham wrote in Zetetic Astronomy: Experiments The first experiment at this site was conducted by Rowbotham in the summer of 1838. He waded into the river and used a telescope held above the water to watch a boat, with a flag on its mast above the water, row slowly away from him. He reported that the vessel remained constantly in his view for the full to Welney Bridge, whereas, had the water surface been curved with the accepted circumference of a spherical Earth, the top of the mast should have been about below his line of sight. He published this observation using the pseudonym Parallax in 1849 and subsequently expanded it into a book, Earth Not a Globe published in 1865. Rowbotham repeated his experiments several times over the years, but his claims received little attention until, in 1870, a supporter by the name of John Hampden offered a wager that he could show, by repeating Rowbotham's experiment, that the Earth was flat. The naturalist and qualified surveyor Alfred Russel Wallace accepted the wager. Wallace, by virtue of his surveyor's training and knowledge of physics, avoided the errors of the preceding experiments and won the bet. The crucial steps were: To set a sight line above the water, and thereby reduce the effects of atmospheric refraction. To add a pole in the middle of the length of canal that could be used to see the "bump" caused by the curvature of the Earth between the two end points. Despite Hampden initially refusing to accept the demonstration, Wallace was awarded the bet by the referee, John Henry Walsh, editor of The Field sports magazine. Hampden subsequently published a pamphlet alleging that Wallace had cheated, and sued for his money. Several protracted court cases ensued, with the result that Hampden was imprisoned for threatening to kill Wallace and for libel. The same court ruled that the wager had been invalid because Hampden retracted the bet and required that Wallace return the money to Hampden. Wallace, who had been unaware of Rowbotham's earlier experiments, was criticized by his peers for "his 'injudicious' involvement in a bet to 'decide' the most fundamental and established of scientific facts". In 1901, Henry Yule Oldham, a reader in geography at King's College, Cambridge, reproduced Wallace's results using three poles fixed at equal height above water level. When viewed through a theodolite, the middle pole was found to be about higher than the poles at each end. This version of the experiment was taught in schools in England until photographs of the Earth from space became available, and it remains in the syllabus for the Indian Certificate of Secondary Education for 2023. 
On 11 May 1904 Lady Elizabeth Anne Blount, who was later influential in the formation of the Flat Earth Society, hired a commercial photographer to use a telephoto-lens camera to take a picture from Welney of a large white sheet she had placed, the bottom edge near the surface of the river, at Rowbotham's original position away. The photographer, Edgar Clifton from Dallmeyer's studio, mounted his camera above the water at Welney and was surprised to be able to obtain a picture of the target, which he believed should have been invisible to him, given the low mounting point of the camera. Lady Blount published the pictures far and wide. These controversies became a regular feature in the English Mechanic magazine in 1904–05, which published Blount's photo and reported two experiments in 1905 that showed the opposite results. One of these, by Clement Stretton conducted on the Ashby Canal, mounted a theodolite on the canal bank aligned with the cabin roof of a boat. When the boat had moved one mile distant, the instrument showed a dip from the sight-line of about eight inches. Refraction Atmospheric refraction can produce the results noted by Rowbotham and Blount. Because the density of air in the Earth's atmosphere decreases with height above the Earth's surface, all light rays travelling nearly horizontally bend downward, so that the line of sight is a curve. This phenomenon is routinely accounted for in levelling and celestial navigation. If the measurement is close enough to the surface, this downward curve may match the mean curvature of the Earth's surface. In this case, the two effects of assumed curvature and refraction could cancel each other out, and the Earth will then appear flat in optical experiments. This would have been aided, on each occasion, by a temperature inversion in the atmosphere with temperature increasing with altitude above the canal, similar to the phenomenon of the superior image mirage. Temperature inversions like this are common. An increase in air temperature or lapse rate of 0.11 Celsius degrees per metre of altitude would create an illusion of a flat canal, and all optical measurements made near ground level would be consistent with a completely flat surface. If the lapse rate were higher than this (temperature increasing with height at a greater rate), all optical observations would be consistent with a concave surface, a "bowl-shaped Earth". Under average conditions, optical measurements are consistent with a spherical Earth approximately 15% less curved than in reality. Repetition of the atmospheric conditions required for each of the many observations is not unlikely, and warm days over still water can produce favourable conditions. Similar experiments conducted elsewhere On 25 July 1896, Ulysses Grant Morrow, a newspaper editor, conducted a similar experiment on the Old Illinois Drainage Canal, Summit, Illinois. Unlike Rowbotham, he was seeking to demonstrate that the surface of the Earth was curved: when he too found that his target marker, above water level and distant, was clearly visible, he concluded that the Earth's surface was concavely curved, in line with the expectations of his sponsors, the Koreshan Unity society. The findings were dismissed by critics as the result of atmospheric refraction. See also History of geodesy Notes References History of geography History of Earth science Earth sciences Physics experiments Flat Earth Geodesy History of measurement
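The expected obscuration over such a sight line can be estimated with the standard surveying approximation h ≈ d²/(2R), reduced by a refraction coefficient k. The sketch below uses k = 0.13, a commonly quoted daytime surveying default, not a value measured in the experiments described above; the distances are round figures chosen for illustration.

```python
# Minimal sketch: drop of a spherical surface below a horizontal sight line,
# with and without a simple allowance for atmospheric refraction.
# k = 0.13 is a common surveying default, not a value from these experiments.

R_EARTH = 6_371_000.0   # mean Earth radius, m
MILE = 1609.344         # metres per statute mile

def curvature_drop(distance_m, k=0.0):
    """Approximate drop (m) of the surface below a horizontal sight line after
    `distance_m`, reduced by refraction coefficient k (k = 0 means none)."""
    return (1.0 - k) * distance_m ** 2 / (2.0 * R_EARTH)

for miles in (1, 3, 6):
    d = miles * MILE
    geometric = curvature_drop(d)           # purely geometric drop
    refracted = curvature_drop(d, k=0.13)   # with a standard refraction allowance
    print(f"{miles} mile(s): geometric drop {geometric:5.2f} m, "
          f"with refraction ~{refracted:5.2f} m")
```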
Bedford Level experiment
[ "Physics", "Mathematics" ]
1,415
[ "Applied mathematics", "Geodesy", "Experimental physics", "Physics experiments" ]
7,266,999
https://en.wikipedia.org/wiki/Janus%20laser
The Janus laser was a (then considered high-power) two-beam infrared neodymium-doped silica glass laser built at Lawrence Livermore National Laboratory in 1974 for the study of inertial confinement fusion. Janus was built using about 100 pounds of Nd:glass laser material. Initially, Janus was only capable of producing laser pulses of about 10 joules of energy at a power of 0.5 TW. It was the first laser at Livermore to generate thermonuclear fusion via the irradiation of DT-gas-filled glass ("exploding pusher") targets. See also List of laser articles List of laser types Further reading External links https://web.archive.org/web/20041109063036/http://www.llnl.gov/50science/lasers.html Inertial confinement fusion research lasers
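The quoted energy and peak-power figures imply a pulse duration through t = E/P. The sketch below just performs that arithmetic, assuming an idealised rectangular pulse; it is not a statement about the actual pulse shapes used on Janus.

```python
# Minimal sketch: relating pulse energy, peak power and duration
# for the figures quoted above (10 J at 0.5 TW), assuming a rectangular pulse.

energy_j = 10.0     # pulse energy, joules
power_w = 0.5e12    # peak power, watts (0.5 TW)

duration_s = energy_j / power_w
print(f"implied pulse duration: {duration_s * 1e12:.0f} ps")  # -> 20 ps
```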
Janus laser
[ "Physics" ]
184
[ "Plasma physics stubs", "Plasma physics" ]
7,268,083
https://en.wikipedia.org/wiki/Common%20Lisp%20the%20Language
Common Lisp the Language is a reference book by Guy L. Steele about a set of technical standards and programming languages named Common Lisp. History Before standardizing The first edition (Digital Press, 1984; ; 465 pages) was written by Guy L. Steele Jr., Scott E. Fahlman, Richard P. Gabriel, David A. Moon, and Daniel L. Weinreb. It served as the basis for the Common Lisp technical standard by the American National Standards Institute (ANSI), and is thus termed ANSI Common Lisp. During standardizing The second edition (Digital Press, 1990; ; 1029 pages) was written by Guy L. Steele Jr. It reflected the then-current status of the standardizing process and documented important new features such as Common Lisp Object System (CLOS), the loop macro, and conditions. It also has a chapter on series and generators. After standardizing The ANSI Common Lisp standard was published in 1994 and differs from the language dialects described in Common Lisp the Language (1984) and Common Lisp the Language, Second Edition (1990). Substantive additions and deletions were made between the time of the Second Edition and the final version of ANSI Common Lisp. Also, series and generators were discussed in appendix matter of the Second Edition but were not a part of any working draft nor the final version of ANSI Common Lisp. Although ANSI Common Lisp and the language dialects described by the two editions of Common Lisp the Language differ, the ANSI Common Lisp specification indirectly acknowledges the practical importance of Common Lisp the Language (first and second edition) by explicitly suggesting the reserved words (keywords) :cltl1 and :cltl2 for potential inclusion on the *features* list, allowing conditionals to be added to code that must interoperate between ANSI Common Lisp and those other dialects. See also Common Lisp HyperSpec (hypertext version of the ANSI Common Lisp standard) External links Common Lisp the Language, 2nd Edition Online HTML version. (Also provides several downloadable formats, including LaTeX sources.) Mirror sites (in case of standard site being offline) Mirror provided by lisp.se Mirror provided by supelec.fr Common Lisp publications 1984 non-fiction books 1990 non-fiction books Books by Guy L. Steele Jr.
Common Lisp the Language
[ "Technology" ]
488
[ "Computing stubs", "Computer book stubs" ]
7,268,502
https://en.wikipedia.org/wiki/ISC3
The ISC3 (Industrial Source Complex) model is a popular steady-state Gaussian plume model which can be used to assess pollutant concentrations from a wide variety of sources associated with an industrial complex. This model can account for the following: Point, area, line, and volume sources Settling and dry deposition of particles Downwash Separation of point sources Limited terrain adjustment ISC3 operates in both long-term and short-term modes. The screening version of ISC3 is SCREEN3. Very recently, the status of ISC3 as a Preferred/Recommended Model of the US Environmental Protection Agency has been withdrawn, but it can still be used as an alternative to the Preferred/Recommended models in regulatory applications with case-by-case justification to the reviewing authority. Input data The ISC3 short-term version requires two sets of data: source data and hourly averaged meteorological data. Source data Dimensions of the source Emission discharge rate Release height of the emission source Meteorological data Ambient temperature, K Wind direction Wind speed, m/s Atmospheric stability classes (A through F, entered as 1 through 6) Urban and rural mixing height, m See also Bibliography of atmospheric dispersion modeling Atmospheric dispersion modeling List of atmospheric dispersion models Further reading For those who are unfamiliar with air pollution dispersion modelling and would like to learn more about the subject, it is suggested that either one of the following books be read: www.crcpress.com www.air-dispersion.com External links ISC3 User's Guide, Volume I ISC3 User's Guide, Volume II Meteorological Monitoring Guidance for Regulatory Modeling Applications Atmospheric dispersion modeling
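ISC3 is built on the basic steady-state Gaussian plume equation, a minimal sketch of which is shown below. The dispersion parameters sigma_y and sigma_z and the source figures are hypothetical placeholders supplied directly as assumptions (a real model derives the dispersion parameters from stability class and downwind distance), and none of ISC3's refinements such as deposition, downwash, or terrain adjustment are represented.

```python
import math

# Minimal sketch of the steady-state Gaussian plume equation that models such
# as ISC3 build on. Dispersion parameters and source figures are hypothetical.

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    q        emission rate, g/s
    u        wind speed at release height, m/s
    sigma_y  horizontal dispersion parameter, m
    sigma_z  vertical dispersion parameter, m
    y        crosswind distance from the plume centreline, m
    z        receptor height above ground, m
    h        effective release height, m
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground-reflection term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative (hypothetical) inputs: 50 g/s release, 4 m/s wind, 30 m stack.
c = plume_concentration(q=50.0, u=4.0, sigma_y=60.0, sigma_z=30.0,
                        y=0.0, z=1.5, h=30.0)
print(f"centreline concentration near ground level: {c * 1e6:.0f} µg/m^3")
```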
ISC3
[ "Chemistry", "Engineering", "Environmental_science" ]
336
[ "Atmospheric dispersion modeling", "Environmental modelling", "Environmental engineering" ]
7,269,589
https://en.wikipedia.org/wiki/Thailand%20Institute%20of%20Nuclear%20Technology
The Thailand Institute of Nuclear Technology (TINT) (สถาบันเทคโนโลยีนิวเคลียร์แห่งชาติ) is a public organization in Bangkok, Thailand. Overview The institute is an entity established in December 2006 for national nuclear research and development. It is aimed to serve as the research body, cooperating with the Office of Atoms for Peace (OAP) who serves as the nuclear regulatory body of the country. TINT operates under Ministry of Higher Education, Science, Research and Innovation (MoST) and works closely with OAP and the International Atomic Energy Agency (IAEA). Research programs: Medical and Public Health Agricultural Material and Industrial Environmental Advanced Technology Nuclear operations: Safety Nuclear Engineering Reactor Operation External links สถาบันเทคโนโลยีนิวเคลียร์แห่งชาติ website (Thai) Thailand Institute of Nuclear Technology (TINT) website References Thailand Institute of Nuclear Technology Nuclear technology in Thailand Nuclear research institutes Public organizations of Thailand Scientific organizations based in Thailand Government agencies established in 2006 2006 establishments in Thailand Organizations based in Bangkok Research institutes established in 2006
Thailand Institute of Nuclear Technology
[ "Engineering" ]
188
[ "Nuclear research institutes", "Nuclear organizations" ]
7,269,645
https://en.wikipedia.org/wiki/Top-of-mind%20awareness
Top-of-mind awareness (TOMA) is a measure of how aware a consumer is of a brand. It is part of consumer behaviour, and is a key aspect of marketing research and marketing communications. Definitions of top-of-mind awareness In marketing, "top-of-mind awareness" refers to a brand or specific product being first in customers' minds when thinking of a particular industry or category. Top-of-mind awareness is defined in Marketing Metrics: "The first brand that comes to mind when a customer is asked an unprompted question about a category. The percentage of customers for whom a given brand is top of mind can be measured." TOMA has also been defined as "the percent of respondents who, without prompting, name a specific brand or product first when asked to list all the advertisements they recall seeing in a general product category over the past 30 days." At the market level, top-of-mind awareness is more often defined as the "most remembered" or "most recalled" brand names. Top-of-mind awareness: uses and applications Top-of-mind awareness is a special form of brand awareness. Top-of-mind awareness is generally measured by asking consumers open-ended questions about the brand that first comes to mind in a particular category, like a fast-food restaurant (McDonald's). Market researchers can then convert these responses into percentages to determine which brand leads in top-of-mind awareness. Companies attempt to build and increase brand awareness using such digital marketing strategies as search engine optimization (SEO), search engine marketing (SEM), social media marketing (SMM), content marketing, and more. In a survey of nearly 200 senior marketing managers, 50% responded that they found the "top-of-mind" metric very useful. See also Mind share Promotion Brand awareness Consumer Behaviour References Further reading Consumer behaviour Market research
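A minimal sketch of how unprompted first-mention survey responses are turned into top-of-mind percentages follows; the brand names and responses are hypothetical.

```python
from collections import Counter

# Minimal sketch: converting unprompted "first brand that comes to mind"
# survey answers into top-of-mind awareness percentages.
# The brand names and responses below are hypothetical.

responses = ["Brand A", "Brand B", "Brand A", "Brand C", "Brand A",
             "Brand B", "Brand A", "Brand C", "Brand B", "Brand A"]

counts = Counter(responses)
total = len(responses)
for brand, n in counts.most_common():
    print(f"{brand}: {n / total:.0%} top-of-mind awareness")
```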
Top-of-mind awareness
[ "Biology" ]
397
[ "Behavior", "Consumer behaviour", "Human behavior" ]
51,873
https://en.wikipedia.org/wiki/Paper%20clip
A paper clip (or paperclip) is a tool used to hold sheets of paper together, usually made of steel wire bent to a looped shape (though some are covered in plastic). Most paper clips are variations of the Gem type introduced in the 1890s or earlier, characterized by the one and a half loops made by the wire. Common to paper clips proper is their utilization of torsion and elasticity in the wire, and friction between wire and paper. When a moderate number of sheets are inserted between the two "tongues" of the clip, the tongues will be forced apart and cause torsion in the bend of the wire to grip the sheets together. They are usually used to bind papers together for productivity and portability. The paper clip's widespread use in various settings, from offices to educational institutions, underscores its functional design and adaptability. While primarily designed for binding papers, its versatility has led to a range of applications, both practical and creative. Shape and composition Paper clips usually have an oblong shape with straight sides, but may also be triangular or circular, or have more elaborate shapes. The most common material is steel or some other metal, but molded plastic is also used. Some other kinds of paper clips use a two-piece clamping system. Recent innovations include multi-colored plastic-coated paper clips and spring-fastened binder clips. Regular metal paper clips weigh about a gram. History According to the Early Office Museum, the first patent for a bent wire paper clip was awarded in the United States to Samuel B. Fay in 1867. This clip was originally intended primarily for attaching tickets to fabric, although the patent recognized that it could be used to attach papers together. Fay received U.S. patent 64,088 on April 23, 1867. Although functional and practical, Fay's design along with the 50 other designs patented prior to 1899 are not considered reminiscent of the modern paperclip design known today. Another notable paper clip design was also patented in the United States by Erlman J. Wright on July 24, 1877, patent #193,389. This clip was advertised at that time for use in fastening together loose leaves of papers, documents, periodicals, newspapers etc. The most common type of wire paper clip still in use, the Gem paper clip, was never patented, but it was most likely in production in Britain in the early 1870s by "The Gem Manufacturing Company", according to the American expert on technological innovations, Professor Henry J. Petroski. He refers to an 1883 article about "Gem Paper-Fasteners", praising them for being "better than ordinary pins" for "binding together papers on the same subject, a bundle of letters, or pages of a manuscript". Since the 1883 article had no illustration of this early "Gem", it may have been different from modern paper clips of that name. The earliest illustration of its current form is in an 1893 advertisement for the "Gem Paper Clip". In 1904 Cushman & Denison registered a trademark for the "Gem" name in connection with paper clips. The announcement stated that it had been used since March 1, 1892, which may have been the time of its introduction in the United States. Paper clips are still sometimes called "Gem clips", and in Swedish the word for any paper clip is "gem" (but pronounced similar to English game). 
Definite proof that the modern type of paper clip was well known in 1899 at the latest, is the patent granted to William Middlebrook of Waterbury, Connecticut on April 27 of that year for a "Machine for making wire paper clips." The drawing clearly shows that the product is a perfect clip of the Gem type. The fact that Middlebrook did not mention it by name, suggests that it was already well known at the time. Since then countless variations on the same theme have been patented. Some have pointed instead of rounded ends, some have the end of one loop bent slightly to make it easier to insert sheets of paper, and some have wires with undulations or barbs to get a better grip. In addition, purely aesthetic variants have been patented, clips with triangular, star, or round shapes. But the original Gem type has for more than a hundred years proved to be the most practical, and consequently by far the most popular. Its qualities—ease of use, gripping without tearing, and storing without tangling—have been difficult to improve upon. In the United States, National Paperclip Day is celebrated on May 29. The Gem-type paperclip has become a symbol of inventive design, as confirmed below – although falsely – by its celebration as a Norwegian invention in 1899. More convincing is its appropriation as logo of the Year of Design () in Barcelona 2003, depicted on posters, T-shirts and other merchandise. Unsupported claims It has been claimed that the paper clip was invented by English intellectual Herbert Spencer (1820–1903). Spencer registered a "binding-pin" on 2 September 1846, which was made and sold by Adolphus Ackermann for over a year, advertised as "for holding loose manuscripts, sermons, weekly papers, and all unstitched publications". Spencer's design, approximately unfolded, looked more like a modern cotter pin than a modern paper clip. Norwegian claim Norwegian Johan Vaaler (1866–1910) has been identified as the inventor of the paper clip. He was granted patents in Germany and in the United States (1901) for a paper clip of similar design, but less functional and practical. Because it was more complicated to insert into the paper, Vaaler probably did not know that a better product was already on the market, although not yet in Norway. His version was never manufactured and never marketed because the superior Gem was already available. Long after Vaaler's death, his countrymen created a national myth based on the false assumption that the paper clip was invented by an unrecognized Norwegian genius. Norwegian dictionaries since the 1950s have mentioned Vaaler as the inventor of the paper clip, and that myth later found its way into international dictionaries and much of the international literature on paper clips. Vaaler probably succeeded in having his design patented abroad, despite the previous existence of more useful paper clips, because patent authorities at that time were quite liberal and rewarded any marginal modification of existing inventions. Johan Vaaler began working for Alfred J. Bryns Patentkontor in Kristiania in 1892 and was later promoted to office manager, a position he held until his death. As the employee of a patent office, he could easily have obtained a patent in Norway. His reasons for applying abroad are not known; it is possible that he wanted to secure the commercial rights internationally. Also, he may have been aware that a Norwegian manufacturer would find it difficult to introduce a new invention abroad, starting from the small home market. 
Vaaler's patents expired quietly, while the "Gem" was used worldwide, including his own country. The failure of his design was its impracticality. Without the two full loops of the fully developed paper clip, it was difficult to insert sheets of paper into his clip. One could manipulate the end of the inner wire so that it could receive the sheet, but the outer wire was a dead end because it could not exploit the torsion principle. The clip would instead stand out like a keel, perpendicular to the sheet of paper. The impracticality of Vaaler's design may easily be demonstrated by cutting off the last outer loop and one long side from a regular Gem clip. National symbol The originator of the Norwegian paper clip myth was an engineer of the Norwegian national patent agency who visited Germany in the 1920s to register Norwegian patents in that country. He came across Vaaler's patent but failed to detect that it was not the same as the then-common Gem-type clip. In the report of the first fifty years of the patent agency, he wrote an article in which he proclaimed Vaaler to be the inventor of the common paper clip. This piece of information found its way into some Norwegian encyclopedias after World War II. Events of that war contributed greatly to the mythical status of the paper clip. Patriots wore them in their lapels as a symbol of resistance to the German occupiers and local Nazi authorities when other signs of resistance, such as flag pins or pins showing the cipher of the exiled King Haakon VII of Norway, were forbidden. Those wearing them did not yet see them as national symbols, as the myth of their Norwegian origin was not commonly known at the time. The clips were meant to denote solidarity and unity ("we are bound together"). The wearing of paper clips was soon prohibited, and people wearing them could risk severe punishment. The leading Norwegian encyclopedia mentioned the role of the paper clip as a symbol of resistance in a supplementary volume in 1952 but did not yet proclaim it a Norwegian invention. That information was added in later editions. According to the 1974 edition, the idea of using the paper clip to denote resistance originated in France. A clip worn on a lapel or front pocket could be seen as "deux gaules" (two posts or poles) and be interpreted as a reference to the leader of the French Resistance, General Charles de Gaulle. The post-war years saw a widespread consolidation of the paper clip as a national symbol. Authors of books and articles on the history of Norwegian technology eagerly seized it to make a thin story more substantial. They chose to overlook the fact that Vaaler's clip was not the same as the fully developed Gem-type clip. In 1989, a giant paper clip, almost high, was erected on the campus of a commercial college near Oslo in honor of Vaaler, ninety years after his invention was patented. But this monument shows a Gem-type clip, not the one patented by Vaaler. The celebration of the alleged Norwegian origin of the paper clip culminated in 1999, one hundred years after Vaaler submitted his application for a German patent. A commemorative stamp was issued that year, the first in a series to draw attention to Norwegian inventiveness. The background shows a facsimile of the German "Patentschrift". However, the figure in the foreground is not the paper clip depicted on that document, but the much better known "Gem". 
In 2005, the national biographical encyclopedia of Norway (Norsk biografisk leksikon) published the biography of Johan Vaaler, stating he was the inventor of the paper clip. Other uses Wire is versatile in its nature. Thus a paper clip is a useful accessory in many kinds of mechanical work, including computer work: the metal wire can be unfolded with a little force. Several devices call for a very thin rod to push a recessed button which the user might only rarely need. This is seen on most CD-ROM drives as an "emergency eject" should the power fail; also on early floppy disk drives (including the early Macintosh). Various smartphones require the use of a long, thin object such as a paper clip to eject the SIM card and some Palm PDAs advise the use of a paper clip to reset the device. The trackball can be removed from early Logitech pointing devices using a paper clip as the key to the bezel. A paper clip bent into a "U" can be used to start an ATX PSU without connecting it to a motherboard, by connecting the green[what?] to a black[what?] on the motherboard header. One or more paper clips can make a loopback device for a RS-232 interface (or indeed many interfaces). A paper clip could be installed in a Commodore 1541 disk drive as a flexible head-stop. The steel wire from a paperclip can be used in dentistry to form a dental post. Pipe smokers, including Cannabis smokers use straightened out paper clips to unclog their pipe or bong bowl. Another creative use of paper clips is in "paperclip art", where enthusiasts bend and twist paper clips into intricate designs and figures, ranging from simple shapes to detailed sculptures. This form of art showcases the flexibility and adaptability of the paper clip beyond its traditional use. Additionally, paper clips can serve as temporary bookmarks in books or documents. Their slim profile and easy placement make them useful for marking a specific page or section without causing damage or adding bulk. Paper clips can be bent into a crude but sometimes effective lock picking device. Some types of handcuffs can be unfastened using paper clips. There are two approaches. The first one is to unfold the clip in a line and then twist the end in a right angle, trying to imitate a key and using it to lift the lock fixator. The second approach, which is more feasible but needs some practice, is to use the semi-unfolded clip kink for lifting when the clip is inserted through the hole where the handcuffs are closed. A paper clip image is the standard image for an attachment in an email client. Trade In 1994, the United States imposed anti-dumping tariffs against China on paper clips. Other fastening devices Binder clip Brass fastener Bulldog clip Staple Treasury tag See also Clippy – an anthropomorphic paper clip assistant in Microsoft Office Universal Paperclips - a game based on a thought experiment where the user plays the role of an AI programmed to produce paperclips Operation Paperclip Paper Clips Project – project where a small town American school wished to understand the grand scale of 6,000,000 Jews murdered during the Holocaust by collecting 6,000,000 (and more) physical objects, deciding to collect paperclips because of their small size and easy availability Notes Further reading External links History of the Paper Clip Patents —Paper clip—E. P. Bugge American inventions Fasteners Office equipment Products introduced in 1867 Stationery
Paper clip
[ "Engineering" ]
2,850
[ "Construction", "Fasteners" ]
51,926
https://en.wikipedia.org/wiki/Secotioid
Secotioid fungi produce an intermediate fruiting body form that is between the mushroom-like hymenomycetes and the closed bag-shaped gasteromycetes, where an evolutionary process of gasteromycetation has started but not run to completion. Secotioid fungi may or may not have opening caps, but in any case they often lack the vertical geotropic orientation of the hymenophore needed to allow the spores to be dispersed by wind, and the basidiospores are not forcibly discharged or otherwise prevented from being dispersed (e.g. gills completely inclosed and never exposed as in the secotioid form of Lentinus tigrinus)—note—some mycologists do not consider a species to be secotioid unless it has lost ballistospory. Explanation of secotioid development and gasteromycetation Historically agarics and boletes (which bear their spores on a hymenium of gills or tubes respectively) were classified quite separately from the gasteroid fungi, such as puff-balls and truffles, of which the spores are formed in a large mass enclosed in an outer skin. However, in spite of this apparently very great difference in form, recent mycological research, both at microscopic and molecular level has shown that sometimes species of open mushrooms are much more closely related to particular species of gasteroid fungi than they are to each other. Fungi which do not open up to let their spores be dispersed in the air, but which show a clear morphological relation to agarics or boletes, constitute an intermediate form and are called secotioid. The word is derived from the name of the genus Secotium, which was defined in 1840 by Kunze for a South African example, S. gueinzii, which is the type species. In the following years numerous secotioid species were added to this genus, including ones which according to modern taxonomy belong to other genera or families. On a microscopic scale, secotioid fungi do not expel their spores forcibly from the basidium; their spores are "statismospores". Like gasteroid fungi, secotioid species rely on animals such as rodents or insects to distribute their spores. It can at times be disadvantageous for a mushroom to open up and free its spores in the usual way. If this development is aborted, a secotioid form arises, perhaps to be followed eventually by an evolutionary progression to a fully gasteroid form. This type of progression is called gasteromycetation and seems to have happened several times independently starting from various genera of "normal" mushrooms. This means that the secotioid and also the gasteroid fungi are polyphyletic. According to the paper by Thiers, in certain climates and certain seasons, it may be an advantage to remain closed, because moisture can be conserved in that way. For example, the gasteroid genus Hymenogaster has been shown to be closely related to agaric genera such as Hebeloma, which were formerly placed in family Cortinariaceae or Strophariaceae. This is found by DNA analysis and also indicated on a microscopic scale by the resemblance of the spores and basidia. According to a current classification system, Hebeloma now belongs to family Hymenogastraceae, and is considered more narrowly related to the closed Hymenogaster fungi than, for instance, to the ordinary mushrooms in genus Cortinarius. A similar case is the well-known "Deceiver" mushroom Laccaria laccata which is now classified in the Hydnangiaceae, Hydnangium being a gastroid genus. 
It has been found that a change in a single locus of a gene of the gilled mushroom Lentinus tigrinus causes it to have a closed fruiting body. This suggests that the emergence of a secotioid species may not require many mutations. There is a spectrum of secotioid species ranging from the open form to the closed form in the following respects: there may be an evident stipe, or there may be only a remnant consisting of a column of non-fertile tissue, if there is a stipe the edge of the cap may separate from it (partially opening), or may not, there may be recognizable gills (though oriented in all directions and very convoluted), or the fertile interior may be uniform like the gleba of gasteroid fungi, and the spore-bearing tissue may be above ground (epigeous), or underground (hypogeous), or partly buried. The adjective "sequestrate" is sometimes used as a general term to mean "either secotioid or gasteroid". Examples Cortinarius is a very widespread genus of agarics, but also contains some secotioid species, such as C. leucocephalus, C. coneae and C. cartilagineus. Pholiota nubigena is a secotioid species found early in the year at high altitude in the western United States. It was originally assigned to Secotium and later to a more specific secotioid genus Nivatogastrium, but in fact it is closely allied to Pholiota squarrosa and it has now been moved to genus Pholiota itself, although the latter consists primarily of agarics. Gastroboletus is a secotioid bolete genus where the fruiting bodies may or may not open, but in any case the tubes are not aligned vertically as in a true bolete. Secotioid mushrooms of the genus Endoptychum (such as E. agaricoides, E. arizonicum) have been now moved to genus Chlorophyllum, closely related to Macrolepiota. Agaricus deserticola is a secotioid species of Agaricus (the genus of common cultivated mushrooms) which at one time was placed in the genus Secotium. Similarly, Agaricus inapertus was formerly known as Endoptychum depressum until molecular analysis revealed it to be closely aligned with Agaricus. References Mycology Fungal morphology and anatomy Mushroom types
Secotioid
[ "Biology" ]
1,306
[ "Fungi", "Mycology", "Mushroom types" ]
51,932
https://en.wikipedia.org/wiki/Kinetic%20energy%20penetrator
A kinetic energy penetrator (KEP), also known as long-rod penetrator (LRP), is a type of ammunition designed to penetrate vehicle armour using a flechette-like, high-sectional density projectile. Like a bullet or kinetic energy weapon, this type of ammunition does not contain explosive payloads and uses purely kinetic energy to penetrate the target. Modern KEP munitions are typically of the armour-piercing fin-stabilized discarding sabot (APFSDS) type. History Early cannons fired kinetic energy ammunition, initially consisting of heavy balls of worked stone and later of dense metals. From the beginning, combining high muzzle energy with projectile weight and hardness have been the foremost factors in the design of such weapons. Similarly, the foremost purpose of such weapons has generally been to defeat protective shells of armored vehicles or other defensive structures, whether it is stone walls, sailship timbers, or modern tank armour. Kinetic energy ammunition, in its various forms, has consistently been the choice for those weapons due to the highly focused terminal ballistics. The development of the modern KE penetrator combines two aspects of artillery design, high muzzle velocity and concentrated force. High muzzle velocity is achieved by using a projectile with a low mass and large base area in the gun barrel. Firing a small-diameter projectile wrapped in a lightweight outer shell, called a sabot, raises the muzzle velocity. Once the shell clears the barrel, the sabot is no longer needed and falls off in pieces. This leaves the projectile traveling at high velocity with a smaller cross-sectional area and reduced aerodynamic drag during the flight to the target (see external ballistics and terminal ballistics). Germany developed modern sabots under the name "treibspiegel" ("thrust mirror") to give extra altitude to its anti-aircraft guns during the Second World War. Before this, primitive wooden sabots had been used for centuries in the form of a wooden plug attached to or breech loaded before cannonballs in the barrel, placed between the propellant charge and the projectile. The name "sabot" (pronounced in English usage) is the French word for clog (a wooden shoe traditionally worn in some European countries). Concentration of force into a smaller area was initially attained by replacing the single metal (usually steel) shot with a composite shot using two metals, a heavy core (based on tungsten) inside a lighter metal outer shell. These designs were known as armour-piercing composite rigid (APCR) by the British, high-velocity armor-piercing (HVAP) by the US, and hartkern (hard core) by the Germans. On impact, the core had a much more concentrated effect than plain metal shot of the same weight and size. The air resistance and other effects were the same as for the shell of identical size. High-velocity armor-piercing (HVAP) rounds were primarily used by tank destroyers in the US Army and were relatively uncommon as the tungsten core was expensive and prioritized for other applications. Between 1941 and 1943, the British combined the two techniques in the armour-piercing discarding sabot (APDS) round. The sabot replaced the outer metal shell of the APCR. While in the gun, the shot had a large base area to get maximum acceleration from the propelling charge but once outside, the sabot fell away to reveal a heavy shot with a small cross-sectional area. 
APDS rounds served as the primary kinetic energy weapon of most tanks during the early-Cold War period, though they suffered the primary drawback of inaccuracy. This was resolved with the introduction of the armour-piercing fin-stabilized discarding sabot (APFSDS) round during the 1970s, which added stabilising fins to the penetrator, greatly increasing accuracy. Design The principle of the kinetic energy penetrator is that it uses its kinetic energy, which is a function of its mass and velocity, to force its way through armor. If the armor is defeated, the heat and spalling (particle spray) generated by the penetrator going through the armor, and the pressure wave that develops, ideally destroys the target. The modern kinetic energy weapon maximizes the stress (kinetic energy divided by impact area) delivered to the target by: maximizing the mass – that is, using the densest metals practical, which is one of the reasons depleted uranium or tungsten carbide is often used – and muzzle velocity of the projectile, as kinetic energy scales with the mass m and the square of the velocity v of the projectile minimizing the width, since if the projectile does not tumble, it will hit the target face first. As most modern projectiles have circular cross-sectional areas, their impact area will scale with the square of the radius r (the impact area being ) The penetrator length plays a large role in determining the ultimate depth of penetration. Generally, a penetrator is incapable of penetrating deeper than its own length, as the sheer stress of impact and perforation ablates it. This has led to the current designs which resemble a long metal arrow. For monobloc penetrators made of a single material, a perforation formula devised by Wili Odermatt and W. Lanz can calculate the penetration depth of an APFSDS round. In 1982, an analytical investigation drawing from concepts of gas dynamics and experiments on target penetration led to the conclusion on the efficiency of impactors that penetration is deeper using unconventional three-dimensional shapes. See also Compact Kinetic Energy Missile Earthquake bomb Flechette Hellfire R9X Impact depth Kinetic bombardment MGM-166 LOSAT Röchling shell Notes References Anti-tank rounds Projectiles Ammunition Collision
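To illustrate why designers maximise mass and velocity while minimising the impact area, the sketch below compares the kinetic energy per unit impact area of a full-calibre shot and a sub-calibre long rod of the same mass and velocity. The masses, velocities, and diameters are hypothetical round numbers, not data for any real projectile.

```python
import math

# Minimal sketch of the quantities discussed above: kinetic energy scales with
# m * v^2, and the delivered stress scales with kinetic energy divided by the
# impact cross-section (pi * r^2). All projectile figures are hypothetical.

def impact_stats(mass_kg, velocity_ms, diameter_m):
    ke = 0.5 * mass_kg * velocity_ms ** 2       # kinetic energy, J
    area = math.pi * (diameter_m / 2.0) ** 2    # impact cross-section, m^2
    return ke, ke / area                        # energy, and energy per unit area

# Same mass and velocity, different diameters: full-calibre shot vs. long rod.
for label, d in (("120 mm full-calibre shot", 0.120), ("25 mm long rod", 0.025)):
    ke, ke_per_area = impact_stats(mass_kg=5.0, velocity_ms=1600.0, diameter_m=d)
    print(f"{label}: KE = {ke / 1e6:.1f} MJ, KE/area = {ke_per_area / 1e9:.1f} GJ/m^2")
```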
Kinetic energy penetrator
[ "Physics" ]
1,165
[ "Collision", "Mechanics" ]
51,937
https://en.wikipedia.org/wiki/Hymenophore
A hymenophore refers to the hymenium-bearing structure of a fungal fruiting body. Hymenophores can be smooth surfaces, lamellae, folds, tubes, or teeth. The term was coined by Robert Hooke in 1665. References Mycology Fungal morphology and anatomy
Hymenophore
[ "Biology" ]
63
[ "Mycology" ]
51,940
https://en.wikipedia.org/wiki/Waveplate
A waveplate or retarder is an optical device that alters the polarization state of a light wave travelling through it. Two common types of waveplates are the half-wave plate, which rotates the polarization direction of linearly polarized light, and the quarter-wave plate, which converts between different elliptical polarizations (such as the special case of converting from linearly polarized light to circularly polarized light and vice versa.) Waveplates are constructed out of a birefringent material (such as quartz or mica, or even plastic), for which the index of refraction is different for light linearly polarized along one or the other of two certain perpendicular crystal axes. The behavior of a waveplate (that is, whether it is a half-wave plate, a quarter-wave plate, etc.) depends on the thickness of the crystal, the wavelength of light, and the variation of the index of refraction. By appropriate choice of the relationship between these parameters, it is possible to introduce a controlled phase shift between the two polarization components of a light wave, thereby altering its polarization. With an engineered combination of two birefringent materials, an achromatic waveplate can be manufactured such that the spectral response of its phase retardance can be nearly flat. A common use of waveplates—particularly the sensitive-tint (full-wave) and quarter-wave plates—is in optical mineralogy. Addition of plates between the polarizers of a petrographic microscope makes the optical identification of minerals in thin sections of rocks easier, in particular by allowing deduction of the shape and orientation of the optical indicatrices within the visible crystal sections. This alignment can allow discrimination between minerals which otherwise appear very similar in plane polarized and cross polarized light. Principles of operation A waveplate works by shifting the phase between two perpendicular polarization components of the light wave. A typical waveplate is simply a birefringent crystal with a carefully chosen orientation and thickness. The crystal is cut into a plate, with the orientation of the cut chosen so that the optic axis of the crystal is parallel to the surfaces of the plate. This results in two axes in the plane of the cut: the ordinary axis, with index of refraction no, and the extraordinary axis, with index of refraction ne. The ordinary axis is perpendicular to the optic axis. The extraordinary axis is parallel to the optic axis. For a light wave normally incident upon the plate, the polarization component along the ordinary axis travels through the crystal with a speed vo = c/no, while the polarization component along the extraordinary axis travels with a speed ve = c/ne. This leads to a phase difference between the two components as they exit the crystal. When ne < no, as in calcite, the extraordinary axis is called the fast axis and the ordinary axis is called the slow axis. For ne > no the situation is reversed. Depending on the thickness of the crystal, light with polarization components along both axes will emerge in a different polarization state. The waveplate is characterized by the amount of relative phase, Γ, that it imparts on the two components, which is related to the birefringence Δn and the thickness L of the crystal by the formula where λ0 is the vacuum wavelength of the light. 
Waveplates in general, as well as polarizers, can be described using the Jones matrix formalism, which uses a vector to represent the polarization state of light and a matrix to represent the linear transformation of a waveplate or polarizer. Although the birefringence Δn may vary slightly due to dispersion, this is negligible compared to the variation in phase difference according to the wavelength of the light due to the fixed path difference (λ0 in the denominator in the above equation). Waveplates are thus manufactured to work for a particular range of wavelengths. The phase variation can be minimized by stacking two waveplates that differ by a tiny amount in thickness back-to-back, with the slow axis of one along the fast axis of the other. With this configuration, the relative phase imparted can be, for the case of a quarter-wave plate, one-fourth a wavelength rather than three-fourths or one-fourth plus an integer. This is called a zero-order waveplate. For a single waveplate changing the wavelength of the light introduces a linear error in the phase. Tilt of the waveplate enters via a factor of 1/cos θ (where θ is the angle of tilt) into the path length and thus only quadratically into the phase. For the extraordinary polarization the tilt also changes the refractive index to the ordinary via a factor of cos θ, so combined with the path length, the phase shift for the extraordinary light due to tilt is zero. A polarization-independent phase shift of zero order needs a plate with thickness of one wavelength. For calcite the refractive index changes in the first decimal place, so that a true zero order plate is ten times as thick as one wavelength. For quartz and magnesium fluoride the refractive index changes in the second decimal place and true zero order plates are common for wavelengths above 1 μm. Plate types Half-wave plate For a half-wave plate, the relationship between L, Δn, and λ0 is chosen so that the phase shift between polarization components is Γ = π. Now suppose a linearly polarized wave with polarization vector is incident on the crystal. Let θ denote the angle between and , where is the vector along the waveplate's fast axis. Let z denote the propagation axis of the wave. The electric field of the incident wave is where lies along the waveplate's slow axis. The effect of the half-wave plate is to introduce a phase shift term eiΓ = eiπ = −1 between the f and s components of the wave, so that upon exiting the crystal the wave is now given by If denotes the polarization vector of the wave exiting the waveplate, then this expression shows that the angle between and is −θ. Evidently, the effect of the half-wave plate is to mirror the wave's polarization vector through the plane formed by the vectors and . For linearly polarized light, this is equivalent to saying that the effect of the half-wave plate is to rotate the polarization vector through an angle 2θ; however, for elliptically polarized light the half-wave plate also has the effect of inverting the light's handedness. Quarter-wave plate For a quarter-wave plate, the relationship between L, Δn, and λ0 is chosen so that the phase shift between polarization components is Γ = π/2. Now suppose a linearly polarized wave is incident on the crystal. This wave can be written as where the f and s axes are the quarter-wave plate's fast and slow axes, respectively, the wave propagates along the z axis, and Ef and Es are real. 
The effect of the quarter-wave plate is to introduce a phase shift term eiΓ =eiπ/2 = i between the f and s components of the wave, so that upon exiting the crystal the wave is now given by The wave is now elliptically polarized. If the axis of polarization of the incident wave is chosen so that it makes a 45° with the fast and slow axes of the waveplate, then Ef = Es ≡ E, and the resulting wave upon exiting the waveplate is and the wave is circularly polarized. If the axis of polarization of the incident wave is chosen so that it makes a 0° with the fast or slow axes of the waveplate, then the polarization will not change, so remains linear. If the angle is in between 0° and 45° the resulting wave has an elliptical polarization. A circulating polarization can be visualized as the sum of two linear polarizations with a phase difference of 90°. The output depends on the polarization of the input. Suppose polarization axes x and y parallel with the slow and fast axis of the waveplate: The polarization of the incoming photon (or beam) can be resolved as two polarizations on the x and y axis. If the input polarization is parallel to the fast or slow axis, then there is no polarization of the other axis, so the output polarization is the same as the input (only the phase more or less delayed). If the input polarization is 45° to the fast and slow axis, the polarization on those axes are equal. But the phase of the output of the slow axis will be delayed 90° with the output of the fast axis. If not the amplitude but both sine values are displayed, then x and y combined will describe a circle. With other angles than 0° or 45° the values in fast and slow axis will differ and their resultant output will describe an ellipse. Full-wave, or sensitive-tint plate A full-wave plate introduces a phase difference of exactly one wavelength between the two polarization directions, for one wavelength of light. In optical mineralogy, it is common to use a full-wave plate designed for green light (a wavelength near 540 nm). Linearly polarized white light which passes through the plate becomes elliptically polarized, except for that green light wavelength, which will remain linear. If a linear polarizer oriented perpendicular to the original polarization is added, this green wavelength is fully extinguished but elements of the other colors remain. This means that under these conditions the plate will appear an intense shade of red-violet, sometimes known as "sensitive tint". This gives rise to this plate's alternative names, the sensitive-tint plate or (less commonly) red-tint plate. These plates are widely used in mineralogy to aid in identification of minerals in thin sections of rocks. Multiple-order vs. zero-order waveplates A multiple-order waveplate is made from a single birefringent crystal that produces an integer multiple of the rated retardance (for example, a multiple-order half-wave plate may have an absolute retardance of 37λ/2). By contrast, a zero-order waveplate produces exactly the specified retardance. This can be accomplished by combining two multiple-order wave plates such that the difference in their retardances yields the net (true) retardance of the waveplate. Zero-order waveplates are less sensitive to temperature and wavelength shifts, but are more expensive than multiple-order ones. Stacking a series of different-order waveplates with polarization filters between them yields a Lyot filter. 
Either the filters can be rotated, or the waveplates can be replaced with liquid crystal layers, to obtain a widely tunable pass band in optical transmission spectrum. Use in mineralogy and optical petrology The sensitive-tint (full-wave) and quarter-wave plates are widely used in the field of optical mineralogy. Addition of plates between the polarizers of a petrographic microscope makes easier the optical identification of minerals in thin sections of rocks, in particular by allowing deduction of the shape and orientation of the optical indicatrices within the visible crystal sections. In practical terms, the plate is inserted between the perpendicular polarizers at an angle of 45 degrees. This allows two different procedures to be carried out to investigate the mineral under the crosshairs of the microscope. Firstly, in ordinary cross polarized light, the plate can be used to distinguish the orientation of the optical indicatrix relative to crystal elongation – that is, whether the mineral is "length slow" or "length fast" – based on whether the visible interference colors increase or decrease by one order when the plate is added. Secondly, a slightly more complex procedure allows for a tint plate to be used in conjunction with interference figure techniques to allow measurement of the optic angle of the mineral. The optic angle (often notated as "2V") can both be diagnostic of mineral type, as well as in some cases revealing information about the variation of chemical composition within a single mineral type. See also Crystal optics Fresnel rhomb Photoelastic modulator Polarization rotator Q-plate Spatial light modulator Zone plate References External links Waveplates RP photonics Encyclopedia of Laser Physics and Technology Polarizers and Waveplates Animation Optical mineralogy Polarization (waves) Optical components
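The Jones-matrix description mentioned above lends itself to a short numerical sketch. In that formalism a retarder with fast axis along x is, up to an overall phase, diag(1, exp(iΓ)), where Γ = 2πΔnL/λ0 is the retardance, and rotating it by θ gives the general case. The plate angles and the input polarization below are illustrative choices, not a model of any particular device.

```python
import numpy as np

# Minimal sketch of the Jones-matrix description of waveplates.
# gamma is the retardance (pi for a half-wave plate, pi/2 for a quarter-wave
# plate); theta is the fast-axis angle. Input states are illustrative.

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(gamma, theta):
    """Jones matrix of a retarder with phase retardance gamma whose fast axis
    makes angle theta with the x axis."""
    j = np.diag([1.0, np.exp(1j * gamma)])
    r = rotation(theta)
    return r @ j @ r.T

horizontal = np.array([1.0, 0.0])   # linear polarization along x

# Half-wave plate (gamma = pi) at 22.5 degrees rotates the polarization by 45 degrees.
hwp = waveplate(np.pi, np.radians(22.5))
print("after half-wave plate:   ", np.round(hwp @ horizontal, 3))

# Quarter-wave plate (gamma = pi/2) at 45 degrees produces circular polarization:
# equal |x| and |y| amplitudes with a 90-degree phase difference.
qwp = waveplate(np.pi / 2, np.radians(45))
print("after quarter-wave plate:", np.round(qwp @ horizontal, 3))
```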
Waveplate
[ "Physics", "Materials_science", "Technology", "Engineering" ]
2,589
[ "Glass engineering and science", "Optical components", "Astrophysics", "Polarization (waves)", "Components" ]
51,944
https://en.wikipedia.org/wiki/Dichroism
In optics, a dichroic material is either one which causes visible light to be split up into distinct beams of different wavelengths (colours) (not to be confused with dispersion), or one in which light rays having different polarizations are absorbed by different amounts. In beam splitters The original meaning of dichroic, from the Greek dikhroos, two-coloured, refers to any optical device which can split a beam of light into two beams with differing wavelengths. Such devices include mirrors and filters, usually treated with optical coatings, which are designed to reflect light over a certain range of wavelengths and transmit light which is outside that range. An example is the dichroic prism, used in some camcorders, which uses several coatings to split light into red, green and blue components for recording on separate CCD arrays; however, it is now more common to have a Bayer filter to filter individual pixels on a single CCD array. This kind of dichroic device does not usually depend on the polarization of the light. The term dichromatic is also used in this sense. With polarized light The second meaning of dichroic refers to the property of a material, in which light in different polarization states traveling through it experiences a different absorption coefficient; this is also known as diattenuation. When the polarization states in question are right and left-handed circular polarization, it is then known as circular dichroism (CD). Most materials exhibiting CD are chiral, although non-chiral materials showing CD have recently been observed. Since the left- and right-handed circular polarizations represent two spin angular momentum (SAM) states, in this case for a photon, this dichroism can also be thought of as spin angular momentum dichroism and could be modelled using quantum mechanics. In some crystals, such as tourmaline, the strength of the dichroic effect varies strongly with the wavelength of the light, making them appear to have different colours when viewed with light having differing polarizations. This is more generally referred to as pleochroism, and the technique can be used in mineralogy to identify minerals. In some materials, such as herapathite (iodoquinine sulfate) or Polaroid sheets, the effect is not strongly dependent on wavelength. In liquid crystals Dichroism, in the second meaning above, occurs in liquid crystals due to either the optical anisotropy of the molecular structure or the presence of impurities or the presence of dichroic dyes. The latter is also called a guest–host effect. See also Birefringence Dichromatism Lycurgus Cup Pleochroism References Polarization (waves)
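A rough way to see diattenuation quantitatively is to let the two principal axes of a dichroic sheet absorb the corresponding polarization components by different amounts. The sketch below does this with a Beer–Lambert factor per component; the absorption coefficients and thickness are made-up illustrative numbers, not data for any real material.

import numpy as np

def dichroic_transmission(theta, alpha_pass=0.1, alpha_block=4.0, thickness=1.0):
    # Transmitted fraction of intensity for linear polarization at angle theta
    # to the weakly absorbing axis. alpha_pass and alpha_block are illustrative
    # absorption coefficients for the two axes (assumed values, not material data).
    t_x = np.cos(theta) ** 2 * np.exp(-alpha_pass * thickness)
    t_y = np.sin(theta) ** 2 * np.exp(-alpha_block * thickness)
    return t_x + t_y

for deg in (0, 45, 90):
    print(deg, dichroic_transmission(np.radians(deg)))
# Transmission falls from ~0.90 at 0 degrees to ~0.02 at 90 degrees, so the sheet
# acts as a polarizer purely because the two polarization components are absorbed
# by different amounts.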
Dichroism
[ "Physics" ]
573
[ "Polarization (waves)", "Astrophysics" ]
51,955
https://en.wikipedia.org/wiki/Distribution%20%28mathematics%29
Distributions, also known as Schwartz distributions or generalized functions, are objects that generalize the classical notion of functions in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative. Distributions are widely used in the theory of partial differential equations, where it may be easier to establish the existence of distributional solutions (weak solutions) than classical solutions, or where appropriate classical solutions may not exist. Distributions are also important in physics and engineering where many problems naturally lead to differential equations whose solutions or initial conditions are singular, such as the Dirac delta function. A function is normally thought of as on the in the function domain by "sending" a point in the domain to the point Instead of acting on points, distribution theory reinterprets functions such as as acting on in a certain way. In applications to physics and engineering, are usually infinitely differentiable complex-valued (or real-valued) functions with compact support that are defined on some given non-empty open subset . (Bump functions are examples of test functions.) The set of all such test functions forms a vector space that is denoted by or Most commonly encountered functions, including all continuous maps if using can be canonically reinterpreted as acting via "integration against a test function." Explicitly, this means that such a function "acts on" a test function by "sending" it to the number which is often denoted by This new action of defines a scalar-valued map whose domain is the space of test functions This functional turns out to have the two defining properties of what is known as a : it is linear, and it is also continuous when is given a certain topology called . The action (the integration ) of this distribution on a test function can be interpreted as a weighted average of the distribution on the support of the test function, even if the values of the distribution at a single point are not well-defined. Distributions like that arise from functions in this way are prototypical examples of distributions, but there exist many distributions that cannot be defined by integration against any function. Examples of the latter include the Dirac delta function and distributions defined to act by integration of test functions against certain measures on Nonetheless, it is still always possible to reduce any arbitrary distribution down to a simpler of related distributions that do arise via such actions of integration. More generally, a is by definition a linear functional on that is continuous when is given a topology called the . This leads to space of (all) distributions on , usually denoted by (note the prime), which by definition is the space of all distributions on (that is, it is the continuous dual space of ); it is these distributions that are the main focus of this article. Definitions of the appropriate topologies on spaces of test functions and distributions are given in the article on spaces of test functions and distributions. This article is primarily concerned with the definition of distributions, together with their properties and some important examples. History The practical use of distributions can be traced back to the use of Green's functions in the 1830s to solve ordinary differential equations, but was not formalized until much later. 
According to , generalized functions originated in the work of on second-order hyperbolic partial differential equations, and the ideas were developed in somewhat extended form by Laurent Schwartz in the late 1940s. According to his autobiography, Schwartz introduced the term "distribution" by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on. comments that although the ideas in the transformative book by were not entirely new, it was Schwartz's broad attack and conviction that distributions would be useful almost everywhere in analysis that made the difference. A detailed history of the theory of distributions was given by . Notation The following notation will be used throughout this article: is a fixed positive integer and is a fixed non-empty open subset of Euclidean space denotes the natural numbers. will denote a non-negative integer or If is a function then will denote its domain and the of denoted by is defined to be the closure of the set in For two functions the following notation defines a canonical pairing: A of size is an element in (given that is fixed, if the size of multi-indices is omitted then the size should be assumed to be ). The of a multi-index is defined as and denoted by Multi-indices are particularly useful when dealing with functions of several variables, in particular, we introduce the following notations for a given multi-index : We also introduce a partial order of all multi-indices by if and only if for all When we define their multi-index binomial coefficient as: Definitions of test functions and distributions In this section, some basic notions and definitions needed to define real-valued distributions on are introduced. Further discussion of the topologies on the spaces of test functions and distributions is given in the article on spaces of test functions and distributions. For all and any compact subsets and of , we have: Distributions on are continuous linear functionals on when this vector space is endowed with a particular topology called the . The following proposition states two necessary and sufficient conditions for the continuity of a linear function on that are often straightforward to verify. Proposition: A linear functional on is continuous, and therefore a , if and only if any of the following equivalent conditions is satisfied: For every compact subset there exist constants and (dependent on ) such that for all with support contained in , For every compact subset and every sequence in whose supports are contained in , if converges uniformly to zero on for every multi-index , then Topology on Ck(U) We now introduce the seminorms that will define the topology on Different authors sometimes use different families of seminorms so we list the most common families below. However, the resulting topology is the same no matter which family is used. All of the functions above are non-negative -valued seminorms on As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology. Each of the following sets of seminorms generate the same locally convex vector topology on (so for example, the topology generated by the seminorms in is equal to the topology generated by those in ). With this topology, becomes a locally convex Fréchet space that is normable. 
Every element of is a continuous seminorm on Under this topology, a net in converges to if and only if for every multi-index with and every compact the net of partial derivatives converges uniformly to on For any any (von Neumann) bounded subset of is a relatively compact subset of In particular, a subset of is bounded if and only if it is bounded in for all The space is a Montel space if and only if A subset of is open in this topology if and only if there exists such that is open when is endowed with the subspace topology induced on it by Topology on Ck(K) As before, fix Recall that if is any compact subset of then If is finite then is a Banach space with a topology that can be defined by the norm And when then is even a Hilbert space. Trivial extensions and independence of Ck(K)'s topology from U Suppose is an open subset of and is a compact subset. By definition, elements of are functions with domain (in symbols, ), so the space and its topology depend on to make this dependence on the open set clear, temporarily denote by Importantly, changing the set to a different open subset (with ) will change the set from to so that elements of will be functions with domain instead of Despite depending on the open set (), the standard notation for makes no mention of it. This is justified because, as this subsection will now explain, the space is canonically identified as a subspace of (both algebraically and topologically). It is enough to explain how to canonically identify with when one of and is a subset of the other. The reason is that if and are arbitrary open subsets of containing then the open set also contains so that each of and is canonically identified with and now by transitivity, is thus identified with So assume are open subsets of containing Given its is the function defined by: This trivial extension belongs to (because has compact support) and it will be denoted by (that is, ). The assignment thus induces a map that sends a function in to its trivial extension on This map is a linear injection and for every compact subset (where is also a compact subset of since ), If is restricted to then the following induced linear map is a homeomorphism (linear homeomorphisms are called ): and thus the next map is a topological embedding: Using the injection the vector space is canonically identified with its image in Because through this identification, can also be considered as a subset of Thus the topology on is independent of the open subset of that contains which justifies the practice of writing instead of Canonical LF topology Recall that denotes all functions in that have compact support in where note that is the union of all as ranges over all compact subsets of Moreover, for each is a dense subset of The special case when gives us the space of test functions. The canonical LF-topology is metrizable and importantly, it is than the subspace topology that induces on However, the canonical LF-topology does make into a complete reflexive nuclear Montel bornological barrelled Mackey space; the same is true of its strong dual space (that is, the space of all distributions with its usual topology). The canonical LF-topology can be defined in various ways. Distributions As discussed earlier, continuous linear functionals on a are known as distributions on Other equivalent definitions are described below. 
There is a canonical duality pairing between a distribution on and a test function which is denoted using angle brackets by One interprets this notation as the distribution acting on the test function to give a scalar, or symmetrically as the test function acting on the distribution Characterizations of distributions Proposition. If is a linear functional on then the following are equivalent: is a distribution; is continuous; is continuous at the origin; is uniformly continuous; is a bounded operator; is sequentially continuous; explicitly, for every sequence in that converges in to some is sequentially continuous at the origin; in other words, maps null sequences to null sequences; explicitly, for every sequence in that converges in to the origin (such a sequence is called a ), a is by definition any sequence that converges to the origin; maps null sequences to bounded subsets; explicitly, for every sequence in that converges in to the origin, the sequence is bounded; maps Mackey convergent null sequences to bounded subsets; explicitly, for every Mackey convergent null sequence in the sequence is bounded; a sequence is said to be if there exists a divergent sequence of positive real numbers such that the sequence is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin (in the usual sense); The kernel of is a closed subspace of The graph of is closed; There exists a continuous seminorm on such that There exists a constant and a finite subset (where is any collection of continuous seminorms that defines the canonical LF topology on ) such that For every compact subset there exist constants and such that for all For every compact subset there exist constants and such that for all with support contained in For any compact subset and any sequence in if converges uniformly to zero for all multi-indices then Topology on the space of distributions and its relation to the weak-* topology The set of all distributions on is the continuous dual space of which when endowed with the strong dual topology is denoted by Importantly, unless indicated otherwise, the topology on is the strong dual topology; if the topology is instead the weak-* topology then this will be indicated. Neither topology is metrizable although unlike the weak-* topology, the strong dual topology makes into a complete nuclear space, to name just a few of its desirable properties. Neither nor its strong dual is a sequential space and so neither of their topologies can be fully described by sequences (in other words, defining only what sequences converge in these spaces is enough to fully/correctly define their topologies). However, a in converges in the strong dual topology if and only if it converges in the weak-* topology (this leads many authors to use pointwise convergence to the convergence of a sequence of distributions; this is fine for sequences but this is guaranteed to extend to the convergence of nets of distributions because a net may converge pointwise but fail to converge in the strong dual topology). More information about the topology that is endowed with can be found in the article on spaces of test functions and distributions and the articles on polar topologies and dual systems. A map from into another locally convex topological vector space (such as any normed space) is continuous if and only if it is sequentially continuous at the origin. 
However, this is no longer guaranteed if the map is not linear or for maps valued in more general topological spaces (for example, that are not also locally convex topological vector spaces). The same is true of maps from (more generally, this is true of maps from any locally convex bornological space). Localization of distributions There is no way to define the value of a distribution in at a particular point of . However, as is the case with functions, distributions on restrict to give distributions on open subsets of . Furthermore, distributions are in the sense that a distribution on all of can be assembled from a distribution on an open cover of satisfying some compatibility conditions on the overlaps. Such a structure is known as a sheaf. Extensions and restrictions to an open subset Let be open subsets of Every function can be from its domain to a function on by setting it equal to on the complement This extension is a smooth compactly supported function called the and it will be denoted by This assignment defines the operator which is a continuous injective linear map. It is used to canonically identify as a vector subspace of (although as a topological subspace). Its transpose (explained here) is called the and as the name suggests, the image of a distribution under this map is a distribution on called the restriction of to The defining condition of the restriction is: If then the (continuous injective linear) trivial extension map is a topological embedding (in other words, if this linear injection was used to identify as a subset of then 's topology would strictly finer than the subspace topology that induces on it; importantly, it would be a topological subspace since that requires equality of topologies) and its range is also dense in its codomain Consequently if then the restriction mapping is neither injective nor surjective. A distribution is said to be if it belongs to the range of the transpose of and it is called if it is extendable to Unless the restriction to is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of . For instance, if and then the distribution is in but admits no extension to Gluing and distributions that vanish in a set Let be an open subset of . is said to if for all such that we have vanishes in if and only if the restriction of to is equal to 0, or equivalently, if and only if lies in the kernel of the restriction map Support of a distribution This last corollary implies that for every distribution on , there exists a unique largest subset of such that vanishes in (and does not vanish in any open subset of that is not contained in ); the complement in of this unique largest open subset is called . Thus If is a locally integrable function on and if is its associated distribution, then the support of is the smallest closed subset of in the complement of which is almost everywhere equal to 0. If is continuous, then the support of is equal to the closure of the set of points in at which does not vanish. The support of the distribution associated with the Dirac measure at a point is the set If the support of a test function does not intersect the support of a distribution then A distribution is 0 if and only if its support is empty. 
If is identically 1 on some open set containing the support of a distribution then If the support of a distribution is compact then it has finite order and there is a constant and a non-negative integer such that: If has compact support, then it has a unique extension to a continuous linear functional on ; this function can be defined by where is any function that is identically 1 on an open set containing the support of . If and then and Thus, distributions with support in a given subset form a vector subspace of Furthermore, if is a differential operator in , then for all distributions on and all we have and Distributions with compact support Support in a point set and Dirac measures For any let denote the distribution induced by the Dirac measure at For any and distribution the support of is contained in if and only if is a finite linear combination of derivatives of the Dirac measure at If in addition the order of is then there exist constants such that: Said differently, if has support at a single point then is in fact a finite linear combination of distributional derivatives of the function at . That is, there exists an integer and complex constants such that where is the translation operator. Distribution with compact support Distributions of finite order with support in an open subset Global structure of distributions The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of (or the Schwartz space for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary. Distributions as sheaves Decomposition of distributions as sums of derivatives of continuous functions By combining the above results, one may express any distribution on as the sum of a series of distributions with compact support, where each of these distributions can in turn be written as a finite sum of distributional derivatives of continuous functions on . In other words, for arbitrary we can write: where are finite sets of multi-indices and the functions are continuous. Note that the infinite sum above is well-defined as a distribution. The value of for a given can be computed using the finitely many that intersect the support of Operations on distributions Many operations which are defined on smooth functions with compact support can also be defined for distributions. In general, if is a linear map that is continuous with respect to the weak topology, then it is not always possible to extend to a map by classic extension theorems of topology or linear functional analysis. The “distributional” extension of the above linear continuous operator A is possible if and only if A admits a Schwartz adjoint, that is another linear continuous operator B of the same type such that , for every pair of test functions. In that condition, B is unique and the extension A’ is the transpose of the Schwartz adjoint B. 
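As a concrete illustration of the pairing used throughout this section, the sketch below pairs a fixed test function with ever narrower normalized bump functions. The resulting numbers tend to φ(0), the action of the Dirac measure at the origin, whose support is the single point {0}; the particular test function, grid and bump widths are arbitrary choices made only for the illustration.

import numpy as np

x = np.linspace(-2.0, 2.0, 800001)
dx = x[1] - x[0]

def pair(f, phi):
    # <T_f, phi> = integral of f * phi over the grid (trapezoidal rule).
    g = f * phi
    return (np.sum(g) - 0.5 * (g[0] + g[-1])) * dx

def bump(t):
    # A smooth, compactly supported test function, nonzero only on (-1, 1).
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

phi = bump(x)

for eps in (0.5, 0.1, 0.02):
    mollifier = bump(x / eps)
    mollifier /= pair(np.ones_like(x), mollifier)   # normalize to unit total integral
    print(eps, pair(mollifier, phi))
# The pairings approach phi(0) = exp(-1) ~ 0.3679: in the limit they realize the
# action of the Dirac measure at 0, which is not given by integration against
# any locally integrable function.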
Preliminaries: Transpose of a linear operator Operations on distributions and spaces of distributions are often defined using the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well-known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general, the transpose of a continuous linear map is the linear map or equivalently, it is the unique map satisfying for all and all (the prime symbol in does not denote a derivative of any kind; it merely indicates that is an element of the continuous dual space ). Since is continuous, the transpose is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details). In the context of distributions, the characterization of the transpose can be refined slightly. Let be a continuous linear map. Then by definition, the transpose of is the unique linear operator that satisfies: Since is dense in (here, actually refers to the set of distributions ) it is sufficient that the defining equality hold for all distributions of the form where Explicitly, this means that a continuous linear map is equal to if and only if the condition below holds: where the right-hand side equals Differential operators Differentiation of distributions Let be the partial derivative operator To extend we compute its transpose: Therefore Thus, the partial derivative of with respect to the coordinate is defined by the formula With this definition, every distribution is infinitely differentiable, and the derivative in the direction is a linear operator on More generally, if is an arbitrary multi-index, then the partial derivative of the distribution is defined by Differentiation of distributions is a continuous operator on this is an important and desirable property that is not shared by most other notions of differentiation. If is a distribution in then where is the derivative of and is a translation by thus the derivative of may be viewed as a limit of quotients. Differential operators acting on smooth functions A linear differential operator in with smooth coefficients acts on the space of smooth functions on Given such an operator we would like to define a continuous linear map, that extends the action of on to distributions on In other words, we would like to define such that the following diagram commutes: where the vertical maps are given by assigning its canonical distribution which is defined by: With this notation, the diagram commuting is equivalent to: To find the transpose of the continuous induced map defined by is considered in the lemma below. This leads to the following definition of the differential operator on called which will be denoted by to avoid confusion with the transpose map, that is defined by As discussed above, for any the transpose may be calculated by: For the last line we used integration by parts combined with the fact that and therefore all the functions have compact support. 
Continuing the calculation above, for all The Lemma combined with the fact that the formal transpose of the formal transpose is the original differential operator, that is, enables us to arrive at the correct definition: the formal transpose induces the (continuous) canonical linear operator defined by We claim that the transpose of this map, can be taken as To see this, for every compute its action on a distribution of the form with : We call the continuous linear operator the . Its action on an arbitrary distribution is defined via: If converges to then for every multi-index converges to Multiplication of distributions by smooth functions A differential operator of order 0 is just multiplication by a smooth function. And conversely, if is a smooth function then is a differential operator of order 0, whose formal transpose is itself (that is, ). The induced differential operator maps a distribution to a distribution denoted by We have thus defined the multiplication of a distribution by a smooth function. We now give an alternative presentation of the multiplication of a distribution on by a smooth function The product is defined by This definition coincides with the transpose definition since if is the operator of multiplication by the function (that is, ), then so that Under multiplication by smooth functions, is a module over the ring With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, some unusual identities also arise. For example, if is the Dirac delta distribution on then and if is the derivative of the delta distribution, then The bilinear multiplication map given by is continuous; it is however, hypocontinuous. Example. The product of any distribution with the function that is identically on is equal to Example. Suppose is a sequence of test functions on that converges to the constant function For any distribution on the sequence converges to If converges to and converges to then converges to Problem of multiplying distributions It is easy to define the product of a distribution with a smooth function, or more generally the product of two distributions whose singular supports are disjoint. With more effort, it is possible to define a well-behaved product of several distributions provided their wave front sets at each point are compatible. A limitation of the theory of distributions (and hyperfunctions) is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as has been proved by Laurent Schwartz in the 1950s. For example, if is the distribution obtained by the Cauchy principal value If is the Dirac delta distribution then but, so the product of a distribution by a smooth function (which is always well-defined) cannot be extended to an associative product on the space of distributions. Thus, nonlinear problems cannot be posed in general and thus are not solved within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences. Here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) . This does not solve the problem in other situations. Many other interesting theories are non-linear, like for example the Navier–Stokes equations of fluid dynamics. 
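Before turning to the various attempts, described next, to get around this impossibility result, it may help to check numerically the differentiation rule introduced above, ⟨∂T, φ⟩ = −⟨T, ∂φ⟩, in the classic case where T is the locally integrable Heaviside step function: its distributional derivative acts as φ ↦ φ(0), i.e. it is the Dirac measure at the origin. The grid and test function below are illustrative choices only.

import numpy as np

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]

def integrate(g):
    # Trapezoidal rule on the fixed grid.
    return (np.sum(g) - 0.5 * (g[0] + g[-1])) * dx

# Smooth, compactly supported test function and its derivative
# (finite differences are accurate enough for this check).
phi = np.zeros_like(x)
inside = np.abs(x) < 1
phi[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
dphi = np.gradient(phi, dx)

heaviside = (x >= 0).astype(float)

lhs = -integrate(heaviside * dphi)   # <H', phi> defined as -<H, phi'>
print(lhs)                           # ~0.3679
print(phi[len(x) // 2])              # phi(0) = exp(-1) ~ 0.3679, as expected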
Several not entirely satisfactory theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today. Inspired by Lyons' rough path theory, Martin Hairer proposed a consistent way of multiplying distributions with certain structures (regularity structures), available in many examples from stochastic analysis, notably stochastic partial differential equations. See also Gubinelli–Imkeller–Perkowski (2015) for a related development based on Bony's paraproduct from Fourier analysis. Composition with a smooth function Let be a distribution on Let be an open set in and If is a submersion then it is possible to define This is , and is also called , sometimes written The pullback is often denoted although this notation should not be confused with the use of '*' to denote the adjoint of a linear mapping. The condition that be a submersion is equivalent to the requirement that the Jacobian derivative of is a surjective linear map for every A necessary (but not sufficient) condition for extending to distributions is that be an open mapping. The Inverse function theorem ensures that a submersion satisfies this condition. If is a submersion, then is defined on distributions by finding the transpose map. The uniqueness of this extension is guaranteed since is a continuous linear operator on Existence, however, requires using the change of variables formula, the inverse function theorem (locally), and a partition of unity argument. In the special case when is a diffeomorphism from an open subset of onto an open subset of change of variables under the integral gives: In this particular case, then, is defined by the transpose formula: Convolution Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions. Recall that if and are functions on then we denote by defined at to be the integral provided that the integral exists. If are such that then for any functions and we have and If and are continuous functions on at least one of which has compact support, then and if then the value of on do depend on the values of outside of the Minkowski sum Importantly, if has compact support then for any the convolution map is continuous when considered as the map or as the map Translation and symmetry Given the translation operator sends to defined by This can be extended by the transpose to distributions in the following way: given a distribution is the distribution defined by Given define the function by Given a distribution let be the distribution defined by The operator is called . Convolution of a test function with a distribution Convolution with defines a linear map: which is continuous with respect to the canonical LF space topology on Convolution of with a distribution can be defined by taking the transpose of relative to the duality pairing of with the space of distributions. If then by Fubini's theorem Extending by continuity, the convolution of with a distribution is defined by An alternative way to define the convolution of a test function and a distribution is to use the translation operator The convolution of the compactly supported function and the distribution is then the function defined for each by It can be shown that the convolution of a smooth, compactly supported function and a distribution is a smooth function. If the distribution has compact support, and if is a polynomial (resp. 
an exponential function, an analytic function, the restriction of an entire analytic function on to the restriction of an entire function of exponential type in to ), then the same is true of If the distribution has compact support as well, then is a compactly supported function, and the Titchmarsh convolution theorem implies that: where denotes the convex hull and denotes the support. Convolution of a smooth function with a distribution Let and and assume that at least one of and has compact support. The of and denoted by or by is the smooth function: satisfying for all : Let be the map . If is a distribution, then is continuous as a map . If also has compact support, then is also continuous as the map and continuous as the map If is a continuous linear map such that for all and all then there exists a distribution such that for all Example. Let be the Heaviside function on For any Let be the Dirac measure at 0 and let be its derivative as a distribution. Then and Importantly, the associative law fails to hold: Convolution of distributions It is also possible to define the convolution of two distributions and on provided one of them has compact support. Informally, to define where has compact support, the idea is to extend the definition of the convolution to a linear operation on distributions so that the associativity formula continues to hold for all test functions It is also possible to provide a more explicit characterization of the convolution of distributions. Suppose that and are distributions and that has compact support. Then the linear maps are continuous. The transposes of these maps: are consequently continuous and it can also be shown that This common value is called and it is a distribution that is denoted by or It satisfies If and are two distributions, at least one of which has compact support, then for any If is a distribution in and if is a Dirac measure then ; thus is the identity element of the convolution operation. Moreover, if is a function then where now the associativity of convolution implies that for all functions and Suppose that it is that has compact support. For consider the function It can be readily shown that this defines a smooth function of which moreover has compact support. The convolution of and is defined by This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense: for every multi-index The convolution of a finite number of distributions, all of which (except possibly one) have compact support, is associative. This definition of convolution remains valid under less restrictive assumptions about and The convolution of distributions with compact support induces a continuous bilinear map defined by where denotes the space of distributions with compact support. However, the convolution map as a function is continuous although it is separately continuous. The convolution maps and given by both to be continuous. Each of these non-continuous maps is, however, separately continuous and hypocontinuous. Convolution versus multiplication In general, regularity is required for multiplication products, and locality is required for convolution products. It is expressed in the following extension of the Convolution Theorem which guarantees the existence of both convolution and multiplication products. 
Let be a rapidly decreasing tempered distribution or, equivalently, be an ordinary (slowly growing, smooth) function within the space of tempered distributions and let be the normalized (unitary, ordinary frequency) Fourier transform. Then, according to , hold within the space of tempered distributions. In particular, these equations become the Poisson Summation Formula if is the Dirac Comb. The space of all rapidly decreasing tempered distributions is also called the space of and the space of all ordinary functions within the space of tempered distributions is also called the space of More generally, and A particular case is the Paley-Wiener-Schwartz Theorem which states that and This is because and In other words, compactly supported tempered distributions belong to the space of and Paley-Wiener functions better known as bandlimited functions, belong to the space of For example, let be the Dirac comb and be the Dirac delta;then is the function that is constantly one and both equations yield the Dirac-comb identity. Another example is to let be the Dirac comb and be the rectangular function; then is the sinc function and both equations yield the Classical Sampling Theorem for suitable functions. More generally, if is the Dirac comb and is a smooth window function (Schwartz function), for example, the Gaussian, then is another smooth window function (Schwartz function). They are known as mollifiers, especially in partial differential equations theory, or as regularizers in physics because they allow turning generalized functions into regular functions. Tensor products of distributions Let and be open sets. Assume all vector spaces to be over the field where or For define for every and every the following functions: Given and define the following functions: where and These definitions associate every and with the (respective) continuous linear map: Moreover, if either (resp. ) has compact support then it also induces a continuous linear map of (resp. denoted by or is the distribution in defined by: Spaces of distributions For all and all every one of the following canonical injections is continuous and has an image (also called the range) that is a dense subset of its codomain: where the topologies on () are defined as direct limits of the spaces in a manner analogous to how the topologies on were defined (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in its codomain. Suppose that is one of the spaces (for ) or (for ) or (for ). Because the canonical injection is a continuous injection whose image is dense in the codomain, this map's transpose is a continuous injection. This injective transpose map thus allows the continuous dual space of to be identified with a certain vector subspace of the space of all distributions (specifically, it is identified with the image of this transpose map). This transpose map is continuous but it is necessarily a topological embedding. A linear subspace of carrying a locally convex topology that is finer than the subspace topology induced on it by is called . Almost all of the spaces of distributions mentioned in this article arise in this way (for example, tempered distribution, restrictions, distributions of order some integer, distributions induced by a positive Radon measure, distributions induced by an -function, etc.) 
and any representation theorem about the continuous dual space of may, through the transpose be transferred directly to elements of the space Radon measures The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Note that the continuous dual space can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals and integral with respect to a Radon measure; that is, if then there exists a Radon measure on such that for all and if is a Radon measure on then the linear functional on defined by sending to is continuous. Through the injection every Radon measure becomes a distribution on . If is a locally integrable function on then the distribution is a Radon measure; so Radon measures form a large and important space of distributions. The following is the theorem of the structure of distributions of Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally functions on : Positive Radon measures A linear function on a space of functions is called if whenever a function that belongs to the domain of is non-negative (that is, is real-valued and ) then One may show that every positive linear functional on is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure. Locally integrable functions as distributions One particularly important class of Radon measures are those that are induced locally integrable functions. The function is called if it is Lebesgue integrable over every compact subset of . This is a large class of functions that includes all continuous functions and all Lp space functions. The topology on is defined in such a fashion that any locally integrable function yields a continuous linear functional on – that is, an element of – denoted here by whose value on the test function is given by the Lebesgue integral: Conventionally, one abuses notation by identifying with provided no confusion can arise, and thus the pairing between and is often written If and are two locally integrable functions, then the associated distributions and are equal to the same element of if and only if and are equal almost everywhere (see, for instance, ). Similarly, every Radon measure on defines an element of whose value on the test function is As above, it is conventional to abuse notation and write the pairing between a Radon measure and a test function as Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure. Test functions as distributions The test functions are themselves locally integrable, and so define distributions. The space of test functions is sequentially dense in with respect to the strong topology on This means that for any there is a sequence of test functions, that converges to (in its strong dual topology) when considered as a sequence of distributions. Or equivalently, Distributions with compact support The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose map is also a continuous injection. Thus the image of the transpose, denoted by forms a space of distributions. The elements of can be identified as the space of distributions with compact support. 
Explicitly, if is a distribution on then the following are equivalent, The support of is compact. The restriction of to when that space is equipped with the subspace topology inherited from (a coarser topology than the canonical LF topology), is continuous. There is a compact subset of such that for every test function whose support is completely outside of , we have Compactly supported distributions define continuous linear functionals on the space ; recall that the topology on is defined such that a sequence of test functions converges to 0 if and only if all derivatives of converge uniformly to 0 on every compact subset of . Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from to Distributions of finite order Let The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Consequently, the image of denoted by forms a space of distributions. The elements of are The distributions of order which are also called are exactly the distributions that are Radon measures (described above). For a is a distribution of order that is not a distribution of order . A distribution is said to be of if there is some integer such that it is a distribution of order and the set of distributions of finite order is denoted by Note that if then so that is a vector subspace of , and furthermore, if and only if Structure of distributions of finite order Every distribution with compact support in is a distribution of finite order. Indeed, every distribution in is a distribution of finite order, in the following sense: If is an open and relatively compact subset of and if is the restriction mapping from to , then the image of under is contained in The following is the theorem of the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures: Example. (Distributions of infinite order) Let and for every test function let Then is a distribution of infinite order on . Moreover, can not be extended to a distribution on ; that is, there exists no distribution on such that the restriction of to is equal to Tempered distributions and Fourier transform Defined below are the , which form a subspace of the space of distributions on This is a proper subspace: while every tempered distribution is a distribution and an element of the converse is not true. Tempered distributions are useful if one studies the Fourier transform since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in Schwartz space The Schwartz space is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus is in the Schwartz space provided that any derivative of multiplied with any power of converges to 0 as These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices and define Then is in the Schwartz space if all the values satisfy The family of seminorms defines a locally convex topology on the Schwartz space. For the seminorms are, in fact, norms on the Schwartz space. 
One can also use the following family of seminorms to define the topology: Otherwise, one can define a norm on via The Schwartz space is a Fréchet space (that is, a complete metrizable locally convex space). Because the Fourier transform changes into multiplication by and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function. A sequence in converges to 0 in if and only if the functions converge to 0 uniformly in the whole of which implies that such a sequence must converge to zero in is dense in The subset of all analytic Schwartz functions is dense in as well. The Schwartz space is nuclear, and the tensor product of two maps induces a canonical surjective TVS-isomorphisms where represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product). Tempered distributions The inclusion map is a continuous injection whose image is dense in its codomain, so the transpose is also a continuous injection. Thus, the image of the transpose map, denoted by forms a space of distributions. The space is called the space of . It is the continuous dual space of the Schwartz space. Equivalently, a distribution is a tempered distribution if and only if The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of Lp space for are tempered distributions. The can also be characterized as , meaning that each derivative of grows at most as fast as some polynomial. This characterization is dual to the behaviour of the derivatives of a function in the Schwartz space, where each derivative of decays faster than every inverse power of An example of a rapidly falling function is for any positive Fourier transform To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform is a TVS-automorphism of the Schwartz space, and the is defined to be its transpose which (abusing notation) will again be denoted by So the Fourier transform of the tempered distribution is defined by for every Schwartz function is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that and also with convolution: if is a tempered distribution and is a smooth function on is again a tempered distribution and is the convolution of and In particular, the Fourier transform of the constant function equal to 1 is the distribution. 
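The statement that the Fourier transform maps the Schwartz space onto itself can be illustrated with the Gaussian e^(−πx²), which in the unitary, ordinary-frequency convention referred to elsewhere in this article is its own Fourier transform. The quadrature grid and sample frequencies below are illustrative choices; this is a numerical check, not a proof.

import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)          # a Schwartz function: smooth and rapidly decreasing

def fourier(xi):
    # F f(xi) = integral of f(x) exp(-2*pi*i*x*xi) dx, by the trapezoidal rule.
    g = f * np.exp(-2j * np.pi * x * xi)
    return (np.sum(g) - 0.5 * (g[0] + g[-1])) * dx

for xi in (0.0, 0.5, 1.0, 2.0):
    print(xi, fourier(xi).real, np.exp(-np.pi * xi**2))
# The two columns agree: exp(-pi x^2) is (numerically) its own Fourier transform,
# so it stays inside the Schwartz space, consistent with the theory above.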
Expressing tempered distributions as sums of derivatives If is a tempered distribution, then there exists a constant and positive integers and such that for all Schwartz functions This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function and a multi-index such that Restriction of distributions to compact sets If then for any compact set there exists a continuous function compactly supported in (possibly on a larger set than itself) and a multi-index such that on Using holomorphic functions as test functions The success of the theory led to an investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example, Feynman integrals. See also Differential equations related Generalizations of distributions Notes References Bibliography . . . . . . . . . . . . Further reading M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals) V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. . . . . . Articles containing proofs Functional analysis Generalizations of the derivative Generalized functions Smooth functions Schwartz distributions Differential equations Linear functionals
Distribution (mathematics)
[ "Mathematics" ]
9,546
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Differential equations", "Equations", "Mathematical relations", "Articles containing proofs" ]
52,015
https://en.wikipedia.org/wiki/Proper%20motion
Proper motion is the astrometric measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars. The components for proper motion in the equatorial coordinate system (of a given epoch, often J2000.0) are given in the direction of right ascension (μα) and of declination (μδ). Their combined value is computed as the total proper motion (μ). It has dimensions of angle per time, typically arcseconds per year or milliarcseconds per year. Knowledge of the proper motion, distance, and radial velocity allows calculations of an object's motion from the Solar System's frame of reference and its motion from the galactic frame of reference – that is, motion in respect to the Sun, and by coordinate transformation, that in respect to the Milky Way. Introduction Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time. As examples, both Ursa Major in the northern sky and Crux in the southern sky look nearly the same now as they did hundreds of years ago. However, precise long-term observations show that such constellations change shape, albeit very slowly, and that each star has an independent motion. This motion is caused by the movement of the stars relative to the Sun and Solar System. The Sun travels in a nearly circular orbit (the solar circle) about the center of the galaxy at a speed of about 220 km/s at a radius of from Sagittarius A*, which can be taken as the rate of rotation of the Milky Way itself at this radius. Any proper motion is a two-dimensional vector (as it excludes the component in the direction of the line of sight) and it bears two quantities or characteristics: its position angle and its magnitude. The first is the direction of the proper motion on the celestial sphere (with 0 degrees meaning the motion is north, 90 degrees meaning the motion is east (left on most sky maps and space telescope images), and so on), and the second is its magnitude, typically expressed in arcseconds per year (symbols: arcsec/yr, as/yr, ″/yr, ″ yr−1) or milliarcseconds per year (symbols: mas/yr, mas yr−1). Proper motion may alternatively be defined by the angular changes per year in the star's right ascension (μα) and declination (μδ) with respect to a constant epoch. The components of proper motion by convention are arrived at as follows. Suppose an object moves from coordinates (α1, δ1) to coordinates (α2, δ2) in a time Δt. The proper motions are given by: μα = (α2 − α1)/Δt and μδ = (δ2 − δ1)/Δt. The magnitude of the proper motion μ is given by the Pythagorean theorem: μ² = μδ² + μα²·cos²δ, technically abbreviated μ² = μδ² + μα*², where μα* = μα cos δ and δ is the declination. The factor cos²δ accounts for the widening of the lines (hours) of right ascension away from the poles, cos δ being zero for a hypothetical object fixed at a celestial pole in declination. Thus, the coefficient is introduced to negate the misleadingly greater east or west velocity (angular change in α) in hours of right ascension the further the object is towards the imaginary infinite poles, above and below the Earth's axis of rotation, in the sky. The change μα, which must be multiplied by cos δ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μδ the "proper motion in declination". If the proper motion in right ascension has been converted by cos δ, the result is designated μα*.
For example, the proper motion results in right ascension in the Hipparcos Catalogue (HIP) have already been converted. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions. The position angle θ is related to these components by: μδ = μ cos θ and μα* = μ sin θ. Motions in equatorial coordinates can be converted to motions in galactic coordinates. Examples For most stars seen in the sky, the observed proper motions are small and unremarkable. Such stars are often either faint or significantly distant, have changes of below 0.01″ per year, and do not appear to move appreciably over many millennia. A few do have significant motions, and are usually called high-proper motion stars. Motions can also be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions, exhibit so-called shared or common proper motion (or cpm.), suggesting they may be gravitationally attached or share similar motion in space. Barnard's Star has the largest proper motion of all stars, moving at 10.3″ yr−1. Large proper motion usually strongly indicates an object is close to the Sun. This is so for Barnard's Star, about 6 light-years away. After the Sun and the Alpha Centauri system, it is the nearest known star. Being a red dwarf with an apparent magnitude of 9.54, it is too faint to see without a telescope or powerful binoculars. Of the stars visible to the naked eye (conservatively limiting unaided visual magnitude to 6.0), 61 Cygni A (magnitude V=5.20) has the highest proper motion at 5.281″ yr−1, discounting Groombridge 1830 (magnitude V=6.42), proper motion: 7.058″ yr−1. A proper motion of 1 arcsec per year at a distance of 1 light-year corresponds to a relative transverse speed of 1.45 km/s. Barnard's Star's transverse speed is 90 km/s and its radial velocity is 111 km/s (the two components being perpendicular, i.e. at a right, 90° angle), which gives a true or "space" motion of 142 km/s. True or absolute motion is more difficult to measure than the proper motion, because the true transverse velocity involves the product of the proper motion and the distance. As shown by this formula, true velocity measurements depend on distance measurements, which are difficult in general. In 1992 Rho Aquilae became the first star to have its Bayer designation invalidated by moving to a neighbouring constellation – it is now in Delphinus. Usefulness in astronomy Stars with large proper motions tend to be nearby; most stars are far enough away that their proper motions are very small, on the order of a few thousandths of an arcsecond per year. It is possible to construct nearly complete samples of high proper motion stars by comparing photographic sky survey images taken many years apart. The Palomar Sky Survey is one source of such images. In the past, searches for high proper motion objects were undertaken using blink comparators to examine the images by eye. More modern techniques, such as image differencing, can scan digitized images, or comparisons can be made to star catalogs obtained by satellites. As the selection biases of these surveys are well understood and quantifiable, studies have been used to infer the approximate numbers of unseen stars, and to reveal and confirm more of them by further study, regardless of brightness. Studies of this kind show most of the nearest stars are intrinsically faint and angularly small, such as red dwarfs.
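The relations just described — the cos δ correction, the Pythagorean combination of the two components, the position angle, and the link between angular motion, distance and transverse speed — can be collected in a short sketch. The Barnard's Star figures used below are the approximate values quoted in this article, rounded loosely for illustration; the physical constants are standard.

import numpy as np

ARCSEC_IN_RAD = np.pi / (180.0 * 3600.0)
LIGHT_YEAR_KM = 9.4607e12
YEAR_S = 3.1557e7                          # Julian year in seconds

def total_proper_motion(mu_alpha, mu_delta, delta):
    # Total proper motion ("/yr) and position angle (degrees, north through east)
    # from the raw RA and Dec components ("/yr) and the declination (radians).
    mu_alpha_star = mu_alpha * np.cos(delta)     # correct for converging RA lines
    mu = np.hypot(mu_alpha_star, mu_delta)
    theta = np.degrees(np.arctan2(mu_alpha_star, mu_delta)) % 360.0
    return mu, theta

def transverse_speed(mu_arcsec_per_yr, distance_ly):
    # Small-angle approximation: v_t = mu (in rad/yr) times distance, in km/s.
    return mu_arcsec_per_yr * ARCSEC_IN_RAD * distance_ly * LIGHT_YEAR_KM / YEAR_S

print(transverse_speed(1.0, 1.0))          # ~1.45 km/s, the figure quoted above

# Rough Barnard's Star values: declination ~ +4.7 deg, mu_alpha* ~ -0.80 "/yr,
# mu_delta ~ +10.33 "/yr, distance ~ 6 light-years (approximate, for illustration).
delta = np.radians(4.7)
mu, theta = total_proper_motion(-0.80 / np.cos(delta), 10.33, delta)
print(mu, theta)                           # ~10.36 "/yr at position angle ~356 deg
print(transverse_speed(mu, 6.0))           # ~90 km/s transverse speed
print(np.hypot(90.0, 111.0))               # with the radial velocity: ~142.9 km/s space motion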
Measurement of the proper motions of a large sample of stars in a distant stellar system, like a globular cluster, can be used to compute the cluster's total mass via the Leonard-Merritt mass estimator. Coupled with measurements of the stars' radial velocities, proper motions can be used to compute the distance to the cluster. Stellar proper motions have been used to infer the presence of a super-massive black hole at the center of the Milky Way. This black hole, now confirmed to exist, is called Sgr A* and has a mass of about 4.3 × 10⁶ solar masses. Proper motions of the galaxies in the Local Group are discussed in detail in Röser. In 2005, the first measurement was made of the proper motion of the Triangulum Galaxy M33, the third largest and only ordinary spiral galaxy in the Local Group, located 0.860 ± 0.028 Mpc beyond the Milky Way. The motion of the Andromeda Galaxy was measured in 2012, and an Andromeda–Milky Way collision is predicted in about 4.5 billion years. Proper motion of the NGC 4258 (M106) galaxy in the M106 group of galaxies was used in 1999 to find an accurate distance to this object. Measurements were made of the radial motion of objects in that galaxy moving directly toward and away from Earth, and assuming this same motion to apply to objects with only a proper motion, the observed proper motion predicts a distance to the galaxy of about 7.2 Mpc. History Proper motion was suspected by early astronomers (according to Macrobius, c. AD 400) but a proof was not provided until 1718 by Edmund Halley, who noticed that Sirius, Arcturus and Aldebaran were over half a degree away from the positions charted by the ancient Greek astronomer Hipparchus roughly 1850 years earlier. The sense of "proper" used here is a somewhat dated one in English (though neither historic nor obsolete when used as a postpositive, as in "the city proper"), meaning "belonging to" or "own". "Improper motion" would refer to apparent motion that has nothing to do with an object's inherent course, such as that due to Earth's axial precession and to minor deviations such as nutation, well within the 26,000-year precessional cycle. Stars with high proper motion See also Astronomical coordinate systems Galaxy rotation curve Leonard–Merritt mass estimator Milky Way Peculiar velocity Radial velocity Relative velocity Solar apex Space velocity (astronomy) Stellar kinematics Very-long-baseline interferometry References External links Hipparcos: High Proper Motion Stars Edmond Halley: Discovery of proper motions Astrometry Stellar astronomy Motion (physics) Concepts in astronomy
Proper motion
[ "Physics", "Astronomy" ]
2,084
[ "Physical phenomena", "Concepts in astronomy", "Astrometry", "Motion (physics)", "Space", "Mechanics", "Spacetime", "Astronomical sub-disciplines", "Stellar astronomy" ]
52,021
https://en.wikipedia.org/wiki/W.%20T.%20Tutte
William Thomas Tutte (; 14 May 1917 – 2 May 2002) was an English and Canadian code breaker and mathematician. During the Second World War, he made a brilliant and fundamental advance in cryptanalysis of the Lorenz cipher, a major Nazi German cipher system which was used for top-secret communications within the Wehrmacht High Command. The high-level, strategic nature of the intelligence obtained from Tutte's crucial breakthrough, in the bulk decrypting of Lorenz-enciphered messages specifically, contributed greatly, and perhaps even decisively, to the defeat of Nazi Germany. He also had a number of significant mathematical accomplishments, including foundation work in the fields of graph theory and matroid theory. Tutte's research in the field of graph theory proved to be of remarkable importance. At a time when graph theory was still a primitive subject, Tutte commenced the study of matroids and developed them into a theory by expanding from the work that Hassler Whitney had first developed around the mid-1930s. Even though Tutte's contributions to graph theory have been influential to modern graph theory and many of his theorems have been used to keep making advances in the field, most of his terminology was not in agreement with their conventional usage and thus his terminology is not used by graph theorists today. "Tutte advanced graph theory from a subject with one text (D. Kőnig's) toward its present extremely active state." Early life and education Tutte was born in Newmarket in Suffolk. He was the younger son of William John Tutte (1873–1944), an estate gardener, and Annie (née Newell; 1881–1956), a housekeeper. Both parents worked at Fitzroy House stables where Tutte was born. The family spent some time in Buckinghamshire, County Durham and Yorkshire before returning to Newmarket, where Tutte attended Cheveley Church of England primary school in the nearby village of Cheveley. In 1927, when he was ten, Tutte won a scholarship to the Cambridge and County High School for Boys. He took up his place there in 1928. In 1935 he won a scholarship to study natural sciences at Trinity College, Cambridge, where he specialized in chemistry and graduated with first-class honours in 1938. He continued with physical chemistry as a graduate student, but transferred to mathematics at the end of 1940. As a student, he (along with three of his friends) became one of the first to solve the problem of squaring the square, and the first to solve the problem without a squared subrectangle. Together the four created the pseudonym Blanche Descartes, under which Tutte published occasionally for years. Second World War Soon after the outbreak of the Second World War, Tutte's tutor, Patrick Duff, suggested him for war work at the Government Code and Cypher School at Bletchley Park (BP). He was interviewed and sent on a training course in London before going to Bletchley Park, where he joined the Research Section. At first, he worked on the Hagelin cipher that was being used by the Italian Navy. This was a rotor cipher machine that was available commercially, so the mechanics of enciphering was known, and decrypting messages only required working out how the machine was set up. In the summer of 1941, Tutte was transferred to work on a project called Fish. Intelligence information had revealed that the Germans called the wireless teleprinter transmission systems "Sägefisch" ('sawfish'). This led the British to use the code Fish for the German teleprinter cipher system. 
The nickname Tunny (tunafish) was used for the first non-Morse link, and it was subsequently used for the Lorenz SZ machines and the traffic that they enciphered. Telegraphy used the 5-bit International Telegraphy Alphabet No. 2 (ITA2). Nothing was known about the mechanism of enciphering other than that messages were preceded by a 12-letter indicator, which implied a 12-wheel rotor cipher machine. The first step, therefore, had to be to diagnose the machine by establishing the logical structure and hence the functioning of the machine. Tutte played a pivotal role in achieving this, and it was not until shortly before the Allied victory in Europe in 1945 that Bletchley Park acquired a Tunny Lorenz cipher machine. Tutte's breakthroughs led eventually to bulk decrypting of Tunny-enciphered messages between the German High Command (OKW) in Berlin and their army commands throughout occupied Europe and contributed, perhaps decisively, to the defeat of Germany. Diagnosing the cipher machine On 31 August 1941, two versions of the same message were sent using identical keys, which constituted a "depth". This allowed John Tiltman, Bletchley Park's veteran and remarkably gifted cryptanalyst, to deduce that it was a Vernam cipher which uses the Exclusive Or (XOR) function (symbolised by "⊕"), and to extract the two messages and hence obtain the obscuring key. After a fruitless period during which Research Section cryptanalysts tried to work out how the Tunny machine worked, this and some other keys were handed to Tutte, who was asked to "see what you can make of these". At his training course, Tutte had been taught the Kasiski examination technique of writing out a key on squared paper, starting a new row after a defined number of characters that was suspected of being the frequency of repetition of the key. If this number was correct, the columns of the matrix would show more repetitions of sequences of characters than chance alone. Tutte knew that the Tunny indicators used 25 letters (excluding J) for 11 of the positions, but only 23 letters for the other. He therefore tried Kasiski's technique on the first impulse of the key characters, using a repetition of 25 × 23 = 575. He did not observe a large number of column repetitions with this period, but he did observe the phenomenon on a diagonal. He therefore tried again with 574, which showed up repeats in the columns. Recognising that the prime factors of this number are 2, 7 and 41, he tried again with a period of 41 and "got a rectangle of dots and crosses that was replete with repetitions". It was clear, however, that the first impulse of the key was more complicated than that produced by a single wheel of 41 key impulses. Tutte called this component of the key χ1 (chi1). He figured that there was another component, which was XOR-ed with this, that did not always change with each new character, and that this was the product of a wheel that he called ψ1 (psi1). The same applied for each of the five impulses. So for a single character, the whole key K consisted of two components: K = χ ⊕ ψ. At Bletchley Park, mark impulses were signified by x and space impulses by •. For example, the letter "H" would be coded as ••x•x. Tutte's derivation of the chi and psi components was made possible by the fact that dots were more likely than not to be followed by dots, and crosses more likely than not to be followed by crosses. This was a product of a weakness in the German key setting, which they later eliminated.
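The property Tiltman exploited – that XOR-ing two ciphertexts sent "in depth" cancels the key and leaves only the XOR of the two plaintexts, and that recovering one plaintext then yields the obscuring key – can be sketched on 5-bit ITA2-style values as follows. The bit patterns and message contents here are invented for illustration and are not the actual 1941 traffic.

```python
def xor_stream(a, b):
    """Character-wise XOR (Vernam combination) of two equal-length streams of
    5-bit values, each an integer in the range 0..31."""
    return [x ^ y for x, y in zip(a, b)]

# Hypothetical 5-bit plaintexts and a shared key stream (a "depth"):
p1  = [0b00101, 0b10000, 0b01100, 0b00011]
p2  = [0b00101, 0b10000, 0b00001, 0b11000]
key = [0b10110, 0b01011, 0b11100, 0b00111]

c1 = xor_stream(p1, key)      # enciphering: C = P XOR K
c2 = xor_stream(p2, key)

# XOR of the two ciphertexts cancels the key entirely:
assert xor_stream(c1, c2) == xor_stream(p1, p2)

# Recovering either plaintext from the depth also recovers the obscuring key:
recovered_key = xor_stream(c1, p1)
assert recovered_key == key
```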
Once Tutte had made this breakthrough, the rest of the Research Section joined in to study the other impulses, and it was established that the five chi wheels all advanced with each new character and that the five psi wheels all moved together under the control of two mu or "motor" wheels. Over the following two months, Tutte and other members of the Research Section worked out the complete logical structure of the machine, with its set of wheels bearing cams that could either be in a position (raised) that added x to the stream of key characters, or in the alternative position that added in •. Diagnosing the functioning of the Tunny machine in this way was a truly remarkable cryptanalytical achievement which, in the citation for Tutte's induction as an Officer of the Order of Canada, was described as "one of the greatest intellectual feats of World War II". Tutte's statistical method To decrypt a Tunny message required knowledge not only of the logical functioning of the machine, but also the start positions of each rotor for the particular message. The search was on for a process that would manipulate the ciphertext or key to produce a frequency distribution of characters that departed from the uniformity that the enciphering process aimed to achieve. While on secondment to the Research Section in July 1942, Alan Turing worked out that the XOR combination of the values of successive characters in a stream of ciphertext and key emphasised any departures from a uniform distribution. The resultant stream (symbolised by the Greek letter "delta" Δ) was called the difference because XOR is the same as modulo 2 subtraction. The reason that this provided a way into Tunny was that although the frequency distribution of characters in the ciphertext could not be distinguished from a random stream, the same was not true for a version of the ciphertext from which the chi element of the key had been removed. This was the case because where the plaintext contained a repeated character and the psi wheels did not move on, the differenced psi character (Δψ) would be the null character ('/' at Bletchley Park). When XOR-ed with any character, this character has no effect. Repeated characters in the plaintext were more frequent both because of the characteristics of German (EE, TT, LL and SS are relatively common), and because telegraphists frequently repeated the figure-shift and letter-shift characters. To quote the General Report on Tunny: Turingery introduced the principle that the key differenced at one, now called ΔΚ, could yield information unobtainable from ordinary key. This Δ principle was to be the fundamental basis of nearly all statistical methods of wheel-breaking and setting. Tutte exploited this amplification of non-uniformity in the differenced values and by November 1942 had produced a way of discovering wheel starting points of the Tunny machine which became known as the "Statistical Method". The essence of this method was to find the initial settings of the chi component of the key by exhaustively trying all positions of its combination with the ciphertext, and looking for evidence of the non-uniformity that reflected the characteristics of the original plaintext. Because any repeated characters in the plaintext would always generate • in the differenced plaintext, and similarly the differenced psi (Δψ) would generate • whenever the psi wheels did not move on, and about half of the time when they did (some 70% overall), the correct chi starting positions showed themselves as an excess of dots in the differenced ciphertext once the trial chi had been stripped out.
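A loose, single-impulse sketch of the Δ (differencing) idea and the dot-count statistic described above follows. It is only an illustration under simplifying assumptions: the bit streams, wheel pattern and block sizes are invented, and the real method combined the first two impulses of the de-chi'd ciphertext rather than working on one impulse in isolation.

```python
def delta(stream):
    """Difference a stream of bits: delta[i] = s[i] XOR s[i+1].
    A run of repeated characters shows up as 0 ('dot') in the differenced stream."""
    return [a ^ b for a, b in zip(stream, stream[1:])]

def dot_fraction(cipher_bits, chi_bits, start):
    """Score one trial chi-wheel start position: XOR the differenced ciphertext
    impulse with the differenced chi stream (shifted by `start`) and return the
    fraction of dots (zeros). The correct setting should exceed one half."""
    n = len(chi_bits)
    shifted = [chi_bits[(start + i) % n] for i in range(len(cipher_bits))]
    combined = [c ^ k for c, k in zip(delta(cipher_bits), delta(shifted))]
    return sum(1 for b in combined if b == 0) / len(combined)

# Exhaustively trying every start position and keeping the best score.
# cipher_bits and chi_bits are placeholders standing in for one impulse of the
# intercepted message and the recovered cam pattern of one 41-cam chi wheel.
cipher_bits = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0] * 40
chi_bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1,
            1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
best_start = max(range(len(chi_bits)), key=lambda s: dot_fraction(cipher_bits, chi_bits, s))
```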
As well as applying differencing to the full 5-bit characters of the ITA2 code, Tutte applied it to the individual impulses (bits). The current chi wheel cam settings needed to have been established to allow the relevant sequence of characters of the chi wheels to be generated. It was totally impracticable to generate the 22 million characters from all five of the chi wheels, so it was initially limited to 41 × 31 = 1271 from the first two. After explaining his findings to Max Newman, Newman was given the job of developing an automated approach to comparing ciphertext and key to look for departures from randomness. The first machine was dubbed Heath Robinson, but the much faster Colossus computer, developed by Tommy Flowers and using algorithms written by Tutte and his colleagues, soon took over for breaking codes. Doctorate and career In late 1945, Tutte resumed his studies at Cambridge, now as a graduate student in mathematics. He published some work begun earlier, one a now famous paper that characterises which graphs have a perfect matching, and another that constructs a non-Hamiltonian graph. Tutte completed a doctorate in mathematics from Cambridge in 1948 under the supervision of Shaun Wylie, who had also worked at Bletchley Park on Tunny. His thesis An Algebraic Theory of Graphs was considered ground breaking and was about the subject later known as matroid theory. The same year, invited by Harold Scott MacDonald Coxeter, he accepted a position at the University of Toronto. In 1962, he moved to the University of Waterloo in Waterloo, Ontario, where he stayed for the rest of his academic career. He officially retired in 1985, but remained active as an emeritus professor. Tutte was instrumental in helping to found the Department of Combinatorics and Optimization at the University of Waterloo. His mathematical career concentrated on combinatorics, especially graph theory, which he is credited as having helped create in its modern form, and matroid theory, to which he made profound contributions; one colleague described him as "the leading mathematician in combinatorics for three decades". He was editor in chief of the Journal of Combinatorial Theory until retiring from Waterloo in 1985. He also served on the editorial boards of several other mathematical research journals. Research contributions Tutte's work in graph theory includes the structure of cycle spaces and cut spaces, the size of maximum matchings and existence of k-factors in graphs, and Hamiltonian and non-Hamiltonian graphs. He disproved Tait's conjecture, on the Hamiltonicity of polyhedral graphs, by using the construction known as Tutte's fragment. The eventual proof of the four colour theorem made use of his earlier work. The graph polynomial he called the "dichromate" has become famous and influential under the name of the Tutte polynomial and serves as the prototype of combinatorial invariants that are universal for all invariants that satisfy a specified reduction law. The first major advances in matroid theory were made by Tutte in his 1948 Cambridge PhD thesis which formed the basis of an important sequence of papers published over the next two decades. Tutte's work in graph theory and matroid theory has been profoundly influential on the development of both the content and direction of these two fields. In matroid theory, he discovered the highly sophisticated homotopy theorem and founded the studies of chain groups and regular matroids, about which he proved deep results. 
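As an illustration of the "dichromate" mentioned above, the Tutte polynomial of a connected multigraph can be evaluated with the standard deletion-contraction recurrence. This is a didactic, exponential-time sketch (using sympy for the symbolic algebra), not Tutte's own formulation or an efficient algorithm.

```python
from sympy import symbols, expand

x, y = symbols("x y")

def tutte(edges):
    """Tutte polynomial T(G; x, y) of a connected multigraph, given as a list of
    undirected edges (u, v), via the deletion-contraction recurrence."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                        # loop:   T(G) = y * T(G - e)
        return expand(y * tutte(rest))
    if is_bridge(rest, u, v):         # bridge: T(G) = x * T(G / e)
        return expand(x * tutte(contract(rest, u, v)))
    # ordinary edge:                   T(G) = T(G - e) + T(G / e)
    return expand(tutte(rest) + tutte(contract(rest, u, v)))

def contract(edges, u, v):
    """Contract the edge (u, v) by relabelling every occurrence of v as u."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def is_bridge(remaining, u, v):
    """True if u and v are disconnected once the edge joining them is removed."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for a, b in remaining:
            if a == w and b not in seen:
                seen.add(b); stack.append(b)
            elif b == w and a not in seen:
                seen.add(a); stack.append(a)
    return v not in seen

# Example: the triangle K3 has Tutte polynomial x**2 + x + y.
print(tutte([(1, 2), (2, 3), (1, 3)]))
```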
In addition, Tutte developed an algorithm for determining whether a given binary matroid is a graphic matroid. The algorithm makes use of the fact that a planar graph is simply a graph whose circuit-matroid, the dual of its bond-matroid, is graphic. Tutte wrote a paper entitled How to Draw a Graph in which he proved that any face in a 3-connected graph is enclosed by a peripheral cycle. Using this fact, Tutte developed an alternative proof to show that every Kuratowski graph is non-planar by showing that K5 and K3,3 each have three distinct peripheral cycles with a common edge. In addition to using peripheral cycles to prove that the Kuratowski graphs are non-planar, Tutte proved that every simple 3-connected graph can be drawn with all its faces convex, and devised an algorithm which constructs the plane drawing by solving a linear system. The resulting drawing is known as the Tutte embedding. Tutte's algorithm makes use of the barycentric mappings of the peripheral circuits of a simple 3-connected graph. The findings published in this paper have proved to be of much significance because the algorithms that Tutte developed have become popular planar graph drawing methods. One of the reasons for which Tutte's embedding is popular is that the necessary computations that are carried out by his algorithms are simple and guarantee a one-to-one correspondence of a graph and its embedding onto the Euclidean plane, which is of importance when parameterising a three-dimensional mesh to the plane in geometric modelling. "Tutte's theorem is the basis for solutions to other computer graphics problems, such as morphing." Tutte was mainly responsible for developing the theory of enumeration of planar graphs, which has close links with chromatic and dichromatic polynomials. This work involved some highly innovative techniques of his own invention, requiring considerable manipulative dexterity in handling power series (whose coefficients count appropriate kinds of graphs) and the functions arising as their sums, as well as geometrical dexterity in extracting these power series from the graph-theoretic situation. Tutte summarised his work in the Selected Papers of W.T. Tutte, 1979, and in Graph Theory as I have known it, 1998. Positions, honours and awards Tutte's work in World War II and subsequently in combinatorics brought him various positions, honours and awards: 1958, Fellow of the Royal Society of Canada (FRSC); 1971, Jeffery–Williams Prize by the Canadian Mathematical Society; 1975, Henry Marshall Tory Medal by the Royal Society of Canada; 1977, A conference on Graph Theory and Related Topics was held at the University of Waterloo in his honour on the occasion of his sixtieth birthday; 1982, Isaak-Walton-Killam Award by the Canada Council; 1987, Fellow of the Royal Society (FRS); 1990–1996, First President of the Institute of Combinatorics and its Applications; 1998, Appointed honorary director of the Centre for Applied Cryptographic Research at the University of Waterloo; 2001, Officer of the Order of Canada (OC); 2001, CRM-Fields-PIMS prize. 2016, Waterloo Region Hall of Fame 2017, Waterloo "William Tutte Way" road naming Tutte served as Librarian for the Royal Astronomical Society of Canada in 1959–1960, and asteroid 14989 Tutte (1997 UB7) was named after him. 
Because of Tutte's work at Bletchley Park, Canada's Communications Security Establishment named an internal organisation aimed at promoting research into cryptology, the Tutte Institute for Mathematics and Computing (TIMC), in his honour in 2011. In September 2014, Tutte was celebrated in his hometown of Newmarket, England, with the unveiling of a sculpture, after a local newspaper started a campaign to honour his memory. Bletchley Park in Milton Keynes celebrated Tutte's work with an exhibition Bill Tutte: Mathematician + Codebreaker from May 2017 to 2019, preceded on 14 May 2017 by lectures about his life and work during the Bill Tutte Centenary Symposium. Personal life and death In addition to the career benefits of working at the new University of Waterloo, the more rural setting of Waterloo County appealed to Bill and his wife Dorothea. They bought a house in the nearby village of West Montrose, Ontario where they enjoyed hiking, spending time in their garden on the Grand River and allowing others to enjoy the beautiful scenery of their property. They also had an extensive knowledge of all the birds in their garden. Dorothea, an avid potter, was also a keen hiker and Bill organised hiking trips. Even near the end of his life Bill still was an avid walker. After his wife died in 1994, he moved back to Newmarket (Suffolk), but then returned to Waterloo in 2000, where he died two years later. He is buried in West Montrose United Cemetery. Select publications Books . Also Volume I: Volume II: Reprinted by Cambridge University Press 2001, Reprinted 2012, Articles See also List of University of Waterloo people Systolic geometry Notes References Sources Appendix 5 in in Updated and extended version of Action This Day: From Breaking of the Enigma Code to the Birth of the Modern Computer Bantam Press 2001 That version is a facsimile copy, but there is a transcript of much of this document in '.pdf' format at: , and a web transcript of Part 1 at: in Transcript of a lecture given by Prof. Tutte at the University of Waterloo Appendix 4 in External links Professor William T. Tutte William Tutte, 84, Mathematician and Code-breaker, Dies – Obituary from The New York Times William Tutte: Unsung mathematical mastermind – Obituary from The Guardian CRM-Fields-PIMS Prize – 2001 – William T. Tutte "60 Years in the Nets" – a lecture (audio recording) given at the Fields Institute on 25 October 2001 to mark the receipt of the 2001 CRM-Fields Prize Tutte's disproof of Tait's conjecture "Bletchley's forgotten heroes", Ian Douglas, The Daily Telegraph, 25 December 2012 . . The Tutte Institute for Research in Mathematics and Computer Science 1917 births 2002 deaths People from Newmarket, Suffolk Alumni of Trinity College, Cambridge Bletchley Park people British cryptographers Cipher-machine cryptographers 20th-century English mathematicians Graph theorists Graph drawing people History of computing in the United Kingdom Academic staff of the University of Toronto Academic staff of the University of Waterloo Officers of the Order of Canada British expatriate academics in Canada Fellows of the Royal Society Fellows of the Royal Society of Canada Foreign Office personnel of World War II
W. T. Tutte
[ "Mathematics", "Technology" ]
4,290
[ "Graph theory", "History of computing in the United Kingdom", "Mathematical relations", "Graph theorists", "History of computing" ]
52,033
https://en.wikipedia.org/wiki/Mathematical%20optimization
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. Optimization problems Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems. An optimization problem can be represented in the following way: Given: a function f : A → ℝ from some set A to the real numbers. Sought: an element x0 ∈ A such that f(x0) ≤ f(x) for all x ∈ A ("minimization") or such that f(x0) ≥ f(x) for all x ∈ A ("maximization"). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid: f(x0) ≥ f(x) ⇔ −f(x0) ≤ −f(x), it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it is necessary to continuously evaluate the quality of a data model by using a cost function, where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. A local minimum x* is defined as an element for which there exists some δ > 0 such that f(x*) ≤ f(x) holds for all x ∈ A within a distance δ of x*; that is to say, on some region around x* all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly.
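A small numerical sketch of the conventions just described: maximizing a function is handled by minimizing its negation, and shrinking the feasible set can move the optimum to the boundary. The objective functions, starting points and bounds below are arbitrary illustrations, solved with SciPy's general-purpose minimize routine.

```python
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + 1.0          # illustrative objective; minimum value 1 at x = 2

# Unconstrained minimization over the reals.
res_min = minimize(f, x0=[10.0])               # res_min.x is close to [2.0], res_min.fun close to 1.0

# Maximization via the negated objective: max g = -min(-g).
g = lambda x: -(x[0] ** 2)                     # concave; maximum value 0 at x = 0
res_max = minimize(lambda x: -g(x), x0=[3.0])  # res_max.x close to [0.0]; the maximum is -res_max.fun

# A feasible set smaller than the whole space can move the optimum to its boundary:
# with the constraint x <= 0 the point x = 2 is infeasible, and the minimizer becomes x = 0.
res_box = minimize(f, x0=[-5.0], bounds=[(None, 0.0)])
print(res_min.x, res_max.x, res_box.x)
```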
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. Notation Optimization problems are often expressed with special notation. Here are some examples: Minimum and maximum value of a function Consider the following notation: min over x ∈ ℝ of (x² + 1). This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max over x ∈ ℝ of 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". Optimal input arguments Consider the following notation: arg min over x ∈ (−∞, −1] of (x² + 1), or equivalently arg min of (x² + 1) subject to x ∈ (−∞, −1]. This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set. Similarly, arg max over x ∈ [−5, 5], y ∈ ℝ of x·cos y, or equivalently arg max of x·cos y subject to x ∈ [−5, 5], y ∈ ℝ, represents the pair (or pairs) (x, y) that maximizes (or maximize) the value of the objective function x·cos y, with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2kπ) and (−5, (2k + 1)π), where k ranges over all integers. Operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. History Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: Richard Bellman Dimitri Bertsekas Michel Bierlaire Stephen P. Boyd Roger Fletcher Martin Grötschel Ronald A.
Howard Fritz John Narendra Karmarkar William Karush Leonid Khachiyan Bernard Koopman Harold Kuhn László Lovász David Luenberger Arkadi Nemirovski Yurii Nesterov Lev Pontryagin R. Tyrrell Rockafellar Naum Z. Shor Albert Tucker Major subfields Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as generalization of linear or convex quadratic programming. Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded. Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs. Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming. Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone. Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program. Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming. Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming. Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem. Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it. Stochastic programming studies the case in which some of the constraints or parameters depend on random variables. Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set. Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one. Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process. Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions. Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems. 
Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning). Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints. Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling. Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model. In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area. Optimal control theory is a generalization of the calculus of variations which introduces control policies. Dynamic programming is the approach to solve the stochastic optimization problem with stochastic, randomness, and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation. Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities. Multi-objective optimization Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering. Multi-modal or global optimization Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. 
Classical optimization techniques, due to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing. Classification of critical points and extrema Feasibility problem The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. Existence The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. Necessary conditions for optimality One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. Sufficient conditions for optimality While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. Sensitivity and continuity of optima The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.
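The first- and second-order conditions above can be checked symbolically for smooth functions. The two-variable function below is an arbitrary illustration (not taken from the text): solve the zero-gradient condition for stationary points, then classify each by the definiteness of the Hessian.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**3 - 3*x + y**2                               # illustrative objective

grad = [sp.diff(f, v) for v in (x, y)]
stationary = sp.solve(grad, (x, y), dict=True)      # first-order condition: grad f = 0
H = sp.hessian(f, (x, y))

for point in stationary:
    Hp = H.subs(point)
    eigs = list(Hp.eigenvals())
    if all(e > 0 for e in eigs):
        kind = "local minimum"                      # positive definite Hessian
    elif all(e < 0 for e in eigs):
        kind = "local maximum"                      # negative definite Hessian
    else:
        kind = "saddle or degenerate point"
    print(point, kind)
# For this example, (1, 0) is a local minimum and (-1, 0) is a saddle point.
```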
Calculus of optimization For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those that arise when minimizing the loss function of a neural network. Techniques such as positive-negative momentum estimation are intended to help the iterates escape local minima and approach the objective function's global minimum. Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods. Global convergence More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. Computational optimization techniques To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge). Optimization algorithms Simplex algorithm of George Dantzig, designed for linear programming Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming Variants of the simplex algorithm that are especially suited for network optimization Combinatorial algorithms Quantum optimization algorithms Iterative methods The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables.
The derivatives provide detailed information for such optimizers, but are even harder to calculate, e.g. approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Methods that evaluate Hessians (or approximate Hessians, using finite differences): Newton's method Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems. Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians. Methods that evaluate gradients, or approximate gradients in some way (or even subgradients): Coordinate descent methods: Algorithms which update a single coordinate in each iteration Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.) Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems. Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods. Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods). Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods. Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems). Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000). Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation. Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used. Interpolation methods Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below. Mirror descent Heuristics Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. 
A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: Differential evolution Dynamic relaxation Evolutionary algorithms Genetic algorithms Hill climbing with random restart Memetic algorithm Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients) Particle swarm optimization Simulated annealing Stochastic tunneling Tabu search Applications Mechanics Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is the engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems. This approach may be applied in cosmology and astrophysics. Economics and finance Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments. 
Electrical engineering Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis. Civil engineering Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization. Operations research Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods. Control engineering Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. Geophysics Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. Molecular modeling Nonlinear optimization methods are widely used in conformational analysis. Computational systems biology Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways. Machine learning Solvers See also Brachistochrone curve Curve fitting Deterministic global optimization Goal programming Important publications in optimization Least squares Mathematical Optimization Society (formerly Mathematical Programming Society) Mathematical optimization algorithms Mathematical optimization software Process optimization Simulation-based optimization Test functions for optimization Vehicle routing problem Notes Further reading G.L. Nemhauser, A.H.G. 
Rinnooy Kan and M.J. Todd (eds.): Optimization, Elsevier, (1989). Stanislav Walukiewicz:Integer Programming, Springer,ISBN 978-9048140688, (1990). R. Fletcher: Practical Methods of Optimization, 2nd Ed.,Wiley, (2000). Panos M. Pardalos:Approximation and Complexity in Numerical Optimization: Continuous and Discrete Problems, Springer,ISBN 978-1-44194829-8, (2000). Xiaoqi Yang, K. L. Teo, Lou Caccetta (Eds.):Optimization Methods and Applications,Springer, ISBN 978-0-79236866-3, (2001). Panos M. Pardalos, and Mauricio G. C. Resende(Eds.):Handbook of Applied Optimization、Oxford Univ Pr on Demand, ISBN 978-0-19512594-8, (2002). Wil Michiels, Emile Aarts, and Jan Korst: Theoretical Aspects of Local Search, Springer, ISBN 978-3-64207148-5, (2006). Der-San Chen, Robert G. Batson,and Yu Dang: Applied Integer Programming: Modeling and Solution,Wiley,ISBN 978-0-47037306-4, (2010). Mykel J. Kochenderfer and Tim A. Wheeler: Algorithms for Optimization, The MIT Press, ISBN 978-0-26203942-0, (2019). Vladislav Bukshtynov: Optimization: Success in Practice, CRC Press (Taylor & Francis), ISBN 978-1-03222947-8, (2023) . Rosario Toscano: Solving Optimization Problems with the Heuristic Kalman Algorithm: New Stochastic Methods, Springer, ISBN 978-3-031-52458-5 (2024). Immanuel M. Bomze, Tibor Csendes, Reiner Horst and Panos M. Pardalos: Developments in Global Optimization, Kluwer Academic, ISBN 978-1-4419-4768-0 (2010). External links Links to optimization source codes Operations research Optimization
Mathematical optimization
[ "Mathematics" ]
6,307
[ "Mathematical optimization", "Applied mathematics", "Mathematical analysis", "Operations research" ]
52,035
https://en.wikipedia.org/wiki/Motion%20compensation
Motion compensation in computing is an algorithmic technique used to predict a frame in a video given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved. Motion compensation is one of the two key video compression techniques used in video coding standards, along with the discrete cosine transform (DCT). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT hybrid coding, known as block motion compensation (BMC) or motion-compensated DCT (MC DCT). Functionality Motion compensation exploits the fact that, often, for many frames of a movie, the only difference between one frame and another is the result of either the camera moving or an object in the frame moving. In reference to a video file, this means much of the information that represents one frame will be the same as the information used in the next frame. Using motion compensation, a video stream will contain some full (reference) frames; then the only information stored for the frames in between would be the information needed to transform the previous frame into the next frame. Illustrated example The following is a simplistic illustrated explanation of how motion compensation works. Two successive frames were captured from the movie Elephants Dream. As can be seen from the images, the bottom (motion-compensated) difference between two frames contains significantly less detail than the prior images, and thus compresses much better than the rest. Thus the information required to encode the motion-compensated frame is much smaller than that needed for the plain difference frame. It is also possible to encode the information using the plain difference image, at the cost of lower compression efficiency but with much lower coding complexity, since motion-compensated coding (together with motion estimation and motion compensation) occupies more than 90% of encoding complexity. MPEG In MPEG, images are predicted from previous frames (P frames) or bidirectionally from previous and future frames (B frames). B frames are more complex because the image sequence must be transmitted and stored out of order so that the future frame is available to generate the B frames. After predicting frames using motion compensation, the coder finds the residual, which is then compressed and transmitted. Global motion compensation In global motion compensation, the motion model basically reflects camera motions such as: Dolly – moving the camera forward or backward Track – moving the camera left or right Boom – moving the camera up or down Pan – rotating the camera around its Y axis, moving the view left or right Tilt – rotating the camera around its X axis, moving the view up or down Roll – rotating the camera around the view axis It works best for still scenes without moving objects. There are several advantages of global motion compensation: It models the dominant motion usually found in video sequences with just a few parameters. The share in bit-rate of these parameters is negligible. It does not partition the frames.
This avoids artifacts at partition borders. A straight line (in the time direction) of pixels with equal spatial positions in the frame corresponds to a continuously moving point in the real scene. Other MC schemes introduce discontinuities in the time direction. MPEG-4 ASP supports global motion compensation with three reference points, although some implementations can only make use of one. A single reference point only allows for translational motion which for its relatively large performance cost provides little advantage over block based motion compensation. Moving objects within a frame are not sufficiently represented by global motion compensation. Thus, local motion estimation is also needed. Motion-compensated DCT Block motion compensation Block motion compensation (BMC), also known as motion-compensated discrete cosine transform (MC DCT), is the most widely used motion compensation technique. In BMC, the frames are partitioned in blocks of pixels (e.g. macro-blocks of 16×16 pixels in MPEG). Each block is predicted from a block of equal size in the reference frame. The blocks are not transformed in any way apart from being shifted to the position of the predicted block. This shift is represented by a motion vector. To exploit the redundancy between neighboring block vectors, (e.g. for a single moving object covered by multiple blocks) it is common to encode only the difference between the current and previous motion vector in the bit-stream. The result of this differentiating process is mathematically equivalent to a global motion compensation capable of panning. Further down the encoding pipeline, an entropy coder will take advantage of the resulting statistical distribution of the motion vectors around the zero vector to reduce the output size. It is possible to shift a block by a non-integer number of pixels, which is called sub-pixel precision. The in-between pixels are generated by interpolating neighboring pixels. Commonly, half-pixel or quarter pixel precision (Qpel, used by H.264 and MPEG-4/ASP) is used. The computational expense of sub-pixel precision is much higher due to the extra processing required for interpolation and on the encoder side, a much greater number of potential source blocks to be evaluated. The main disadvantage of block motion compensation is that it introduces discontinuities at the block borders (blocking artifacts). These artifacts appear in the form of sharp horizontal and vertical edges which are easily spotted by the human eye and produce false edges and ringing effects (large coefficients in high frequency sub-bands) due to quantization of coefficients of the Fourier-related transform used for transform coding of the residual frames Block motion compensation divides up the current frame into non-overlapping blocks, and the motion compensation vector tells where those blocks come from (a common misconception is that the previous frame is divided up into non-overlapping blocks, and the motion compensation vectors tell where those blocks move to). The source blocks typically overlap in the source frame. Some video compression algorithms assemble the current frame out of pieces of several different previously transmitted frames. Frames can also be predicted from future frames. The future frames then need to be encoded before the predicted frames and thus, the encoding order does not necessarily match the real frame order. Such frames are usually predicted from two directions, i.e. 
from the I- or P-frames that immediately precede or follow the predicted frame. These bidirectionally predicted frames are called B-frames. A coding scheme could, for instance, be IBBPBBPBBPBB. Further, the use of triangular tiles has also been proposed for motion compensation. Under this scheme, the frame is tiled with triangles, and the next frame is generated by performing an affine transformation on these triangles. Only the affine transformations are recorded/transmitted. This is capable of dealing with zooming, rotation, translation etc. Variable block-size motion compensation Variable block-size motion compensation (VBSMC) is the use of BMC with the ability for the encoder to dynamically select the size of the blocks. When coding video, the use of larger blocks can reduce the number of bits needed to represent the motion vectors, while the use of smaller blocks can result in a smaller amount of prediction residual information to encode. Other areas of work have examined the use of variable-shape feature metrics, beyond block boundaries, from which interframe vectors can be calculated. Older designs such as H.261 and MPEG-1 video typically use a fixed block size, while newer ones such as H.263, MPEG-4 Part 2, H.264/MPEG-4 AVC, and VC-1 give the encoder the ability to dynamically choose what block size will be used to represent the motion. Overlapped block motion compensation Overlapped block motion compensation (OBMC) is a good solution to these problems because it not only increases prediction accuracy but also avoids blocking artifacts. When using OBMC, blocks are typically twice as big in each dimension and overlap quadrant-wise with all 8 neighbouring blocks. Thus, each pixel belongs to 4 blocks. In such a scheme, there are 4 predictions for each pixel which are summed up to a weighted mean. For this purpose, blocks are associated with a window function that has the property that the sum of 4 overlapped windows is equal to 1 everywhere. Studies of methods for reducing the complexity of OBMC have shown that the contribution to the window function is smallest for the diagonally-adjacent block. Reducing the weight for this contribution to zero and increasing the other weights by an equal amount leads to a substantial reduction in complexity without a large penalty in quality. In such a scheme, each pixel then belongs to 3 blocks rather than 4, and rather than using 8 neighboring blocks, only 4 are used for each block to be compensated. Such a scheme is found in the H.263 Annex F Advanced Prediction mode. Quarter Pixel (QPel) and Half Pixel motion compensation In motion compensation, quarter or half samples are actually interpolated sub-samples caused by fractional motion vectors. Based on the vectors and full-samples, the sub-samples can be calculated by using bicubic or bilinear 2-D filtering. See subclause 8.4.2.2 "Fractional sample interpolation process" of the H.264 standard. 3D image coding techniques Motion compensation is utilized in stereoscopic video coding. In video, time is often considered as the third dimension. Still-image coding techniques can be expanded to an extra dimension. JPEG 2000 uses wavelets, and these can also be used to encode motion without gaps between blocks in an adaptive way. Fractional pixel affine transformations lead to bleeding between adjacent pixels. If no higher internal resolution is used, the delta images mostly fight against the image smearing out. 
The delta image can also be encoded as wavelets, so that the borders of the adaptive blocks match. 2D+Delta Encoding techniques utilize H.264 and MPEG-2 compatible coding and can use motion compensation to compress between stereoscopic images. History A precursor to the concept of motion compensation dates back to 1929, when R.D. Kell in Britain proposed the concept of transmitting only the portions of an analog video scene that changed from frame-to-frame. In 1959, the concept of inter-frame motion compensation was proposed by NHK researchers Y. Taki, M. Hatori and S. Tanaka, who proposed predictive inter-frame video coding in the temporal dimension. Motion-compensated DCT Practical motion-compensated video compression emerged with the development of motion-compensated DCT (MC DCT) coding, also called block motion compensation (BMC) or DCT motion compensation. This is a hybrid coding algorithm, which combines two key data compression techniques: discrete cosine transform (DCT) coding in the spatial dimension, and predictive motion compensation in the temporal dimension. DCT coding is a lossy block compression transform coding technique that was first proposed by Nasir Ahmed, who initially intended it for image compression, in 1972. In 1974, Ali Habibi at the University of Southern California introduced hybrid coding, which combines predictive coding with transform coding. However, his algorithm was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame motion-compensated hybrid coding. For the spatial transform coding, they experimented with the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for both, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25-bit per pixel for a videotelephone scene with image quality comparable to an intra-frame coder requiring 2-bit per pixel. In 1977, Wen-Hsiung Chen developed a fast DCT algorithm with C.H. Smith and S.C. Fralick. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression, also called block motion compensation. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. The first digital video coding standard was H.120, developed by the CCITT (now ITU-T) in 1984. H.120 used motion-compensated DPCM coding, which was inefficient for video coding, and H.120 was thus impractical due to low performance. The H.261 standard was developed in 1988 based on motion-compensated DCT compression, and it was the first practical video coding standard. Since then, motion-compensated DCT compression has been adopted by all the major video coding standards (including the H.26x and MPEG formats) that followed. 
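To make the block motion compensation described above concrete, the following is a minimal sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion. The block size of 16, the ±8-pixel search window, and the assumption of two equal-sized grayscale frames held in NumPy arrays are illustrative choices, not values taken from any particular codec.

```python
import numpy as np

def motion_compensate(ref, cur, block=16, search=8):
    """Exhaustive block matching: for each block of the current frame, find the
    best-matching block in the reference frame (minimum sum of absolute
    differences) and build the motion-compensated prediction."""
    h, w = cur.shape
    pred = np.zeros_like(cur)
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_dy, best_dx = None, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the reference frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(target - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_dy, best_dx = sad, dy, dx
            vectors[(by, bx)] = (best_dy, best_dx)
            pred[by:by + block, bx:bx + block] = ref[by + best_dy:by + best_dy + block,
                                                     bx + best_dx:bx + best_dx + block]
    # Residual: what remains to be coded after motion compensation.
    residual = cur.astype(np.int32) - pred.astype(np.int32)
    return vectors, pred, residual
```

A real encoder would then transform (for example with the DCT), quantize, and entropy-code the residual along with the differentially coded motion vectors, and would normally use a much faster search strategy than this exhaustive scan.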
See also Motion estimation Image stabilization Inter frame HDTV blur Television standards conversion VidFIRE X-Video Motion Compensation Applications video compression change of framerate for playback of 24 frames per second movies on 60 Hz LCDs or 100 Hz interlaced cathode-ray tubes References External links Temporal Rate Conversion - article giving an overview of motion compensation techniques. A New FFT Architecture and Chip Design for Motion Compensation based on Phase Correlation DCT and DFT coefficients are related by simple factors DCT better than DFT also for video John Wiseman, An Introduction to MPEG Video Compression DCT and motion compensation Compatibility between DCT, motion compensation and other methods Film and video technology H.26x Video compression Motion in computer vision Data compression
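As an illustration of the sub-pixel precision discussed above, the sketch below fetches a reference block at a half-pel displacement using simple bilinear averaging, which is one of the two interpolation options the article mentions. The function name, the half-pel vector convention, and the assumption that the displaced block plus a one-sample margin lies inside the frame are all assumptions made here for the example; standards such as H.264 actually specify longer interpolation filters for half-pel positions.

```python
import numpy as np

def fetch_block_halfpel(ref, y2, x2, block=16):
    """Fetch a block from the reference frame at half-pel position (y2/2, x2/2),
    where y2 and x2 are displacements expressed in half-pel units.  In-between
    samples are produced by bilinear averaging of neighbouring full-pel samples.
    Assumes the displaced block plus one extra row/column lies inside the frame."""
    ref = ref.astype(np.float64)
    y0, x0 = y2 // 2, x2 // 2          # integer part of the displacement
    fy, fx = y2 % 2, x2 % 2            # 1 if a half-pel offset is needed
    # Slice one extra row/column so the averaged neighbours are available.
    a = ref[y0:y0 + block + 1, x0:x0 + block + 1]
    if fx:                             # horizontal half-pel: average left/right neighbours
        a = 0.5 * (a[:, :-1] + a[:, 1:])
    else:
        a = a[:, :block]
    if fy:                             # vertical half-pel: average top/bottom neighbours
        a = 0.5 * (a[:-1, :] + a[1:, :])
    else:
        a = a[:block, :]
    return a[:block, :block]
```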
Motion compensation
[ "Physics" ]
2,866
[ "Physical phenomena", "Motion (physics)", "Motion in computer vision" ]
52,070
https://en.wikipedia.org/wiki/Arch
An arch is a curved vertical structure spanning an open space underneath it. Arches may support the load above them, or they may perform a purely decorative role. As a decorative element, the arch dates back to the 4th millennium BC, but structural load-bearing arches became popular only after their adoption by the Ancient Romans in the 4th century BC. Arch-like structures can be horizontal, like an arch dam that withstands the horizontal hydrostatic pressure load. Arches are normally used as supports for many types of vaults, with the barrel vault in particular being a continuous arch. Extensive use of arches and vaults characterizes an arcuated construction, as opposed to the trabeated system, where, like in the architectures of ancient Greece, China, and Japan (as well as the modern steel-framed technique), posts and beams dominate. An arch has several advantages over the lintel, especially in masonry construction: with the same amount of material it can have a larger span, carry more weight, and be made from smaller and thus more manageable pieces. Its role in construction was diminished in the middle of the 19th century with the introduction of wrought iron (and later steel): the high tensile strength of these new materials made long lintels possible. Basic concepts Terminology A true arch is a load-bearing arc with elements held together by compression. In much of the world, the introduction of the true arch was a result of European influence. The term false arch has a few meanings. It is usually used to designate an arch that has no structural purpose, like a proscenium arch in theaters used to frame the performance for the spectators, but is also applied to corbelled and triangular arches that are not based on compression. A typical true masonry arch consists of the following elements: Keystone, the top block in an arch. The portion of the arch around the keystone (including the keystone itself), with no precisely defined boundary, is called a crown Voussoir (a wedge-like construction block). A rowlock arch is formed by multiple concentric layers of voussoirs. Extrados (an external surface of the arch) Impost, the block at the base of the arch (the voussoir immediately above the impost is a springer). The tops of imposts define the springing level. A portion of the arch between the springing level and the crown (centered around the 45° angle) is called a haunch. If the arch resides on top of a column, the impost is formed by an abacus or its thicker version, dosseret. Intrados (an underside of the arch, also known as a soffit) Rise (height of the arc, distance from the springing level to the crown) Clear span Abutment The roughly triangular-shaped portion of the wall between the extrados and the horizontal division above is called a spandrel. A (left or right) half-segment of an arch is called an arc, and the overall line of an arch is an arcature (this term is also used for an arcade). The archivolt is the exposed (front-facing) part of the arch, sometimes decorated (occasionally also used to designate the intrados). If the sides of voussoir blocks are not straight, but include angles and curves for interlocking, the arch is called "joggled". Arch action A true arch, due to its rise, resolves the vertical loads into horizontal and vertical reactions at the ends, a so-called arch action. The vertical load produces a positive bending moment in the arch, while the inward-directed horizontal reaction from the spandrel/abutment provides a counterbalancing negative moment. 
As a result, the bending moment in any segment of the arch is much smaller than in a beam with the equivalent load and span. The diagram on the right shows the difference between a loaded arch and a beam. Elements of the arch are mostly subject to compression (A), while in the beam a bending moment is present, with compression at the top and tension at the bottom (B). In the past, when arches were made of masonry pieces, the horizontal forces at the ends of an arch (the so-called thrust) caused the need for heavy abutments (cf. Roman triumphal arch). The other way to counteract the forces, and thus allow thinner supports, was to use counter-arches, as in an arcade arrangement, where the horizontal thrust of each arch is counterbalanced by its neighbors, and only the end arches need to be buttressed. With new construction materials (steel, concrete, engineered wood), not only did the arches themselves get lighter, but the horizontal thrust could be further relieved by a tie connecting the ends of an arch. Funicular shapes When evaluated from the perspective of the amount of material required to support a given load, the best solid structures are compression-only; with flexible materials, the same is true for tension-only designs. There is a fundamental symmetry in nature between solid compression-only and flexible tension-only arrangements, noticed by Robert Hooke in 1676: "As hangs the flexible line, so but inverted will stand the rigid arch", thus the study (and terminology) of arch shapes is inextricably linked to the study of hanging chains; the corresponding curves or polygons are called funicular. Just like the shape of a hanging chain will vary depending on the weights attached to it, the shape of an ideal (compression-only) arch will depend on the distribution of the load. While building masonry arches in the not very tall buildings of the past, a practical assumption was that the stones can withstand a virtually unlimited amount of pressure (up to 100 N per mm2), while the tensile strength was very low, even with the mortar added between the stones, and can be effectively assumed to be zero. Under these assumptions the calculations for the arch design are greatly simplified: either a reduced-scale model can be built and tested, or a funicular curve (pressure polygon) can be calculated or modeled, and as long as this curve stays within the confines of the voussoirs, the construction will be stable (a so-called "safe theorem"). Classifications There are multiple ways to classify an arch: by the geometrical shape of its intrados (for example, semicircular, triangular, etc.); for the arches with rounded intrados, by the number of circle segments forming the arch (for example, round arch is single-centred, pointed arch is two-centred); by the material used (stone, brick, concrete, steel) and construction approach. For example, the wedge-shaped voussoirs of a brick arch can be made by cutting the regular bricks ("axed brick" arch) or manufactured in the wedge shape ("gauged brick" arch); structurally, by the number of hinges (movable joints) between solid components. For example, voussoirs in a stone arch should not move, so these arches usually have no hinges (are "fixed"). Permitting some movement in a large structure allows stresses to be alleviated (caused, for example, by thermal expansion), so many bridge spans have been built with three hinges (one at each support and one at the crown) since the mid-19th century. Arrangements A sequence of arches can be grouped together forming an arcade. 
Romans perfected this form, as shown, for example, by arched structures of Pont du Gard. In the interior of hall churches, arcades of separating arches were used to separate the nave of a church from the side aisle, or two adjacent side aisles. Two-tiered arches, with two arches superimposed, were sometimes used in Islamic architecture, mostly for decorative purposes. An opening of the arch can be filled, creating a blind arch. Blind arches are frequently decorative, and were extensively used in Early Christian, Romanesque, and Islamic architecture. Alternatively, the opening can be filled with smaller arches, producing a containing arch, common in Gothic and Romanesque architecture. Multiple arches can be superimposed with an offset, creating an interlaced series of usually (with some exceptions) blind and decorative arches. Most likely of Islamic origin, the interlaced arcades were popular in Romanesque and Gothic architecture. Rear-arch (also rere-arch) is the one that frames the internal side of an opening in the external wall. Structural Structurally, relieving arches (often blind or containing) can be used to take off load from some portions of the building (for example, to allow use of thinner exterior walls with larger window openings, or, as in the Roman Pantheon, to redirect the weight of the upper structures to particular strong points). Transverse arches, introduced in Carolingian architecture, are placed across the nave to compartmentalize (together with longitudinal separating arches) the internal space into bays and support vaults. A diaphragm arch similarly goes in the transverse direction, but carries a section of wall on top. It is used to support or divide sections of the high roof. Strainer arches were built as an afterthought to prevent two adjacent supports from imploding due to miscalculation. Frequently they were made very decorative, with one of the best examples provided by the Wells Cathedral. Strainer arches can be "inverted" (upside-down) while remaining structural. When used across railway cuttings to prevent collapse of the walls, strainer arches may be referred to as flying arches. A counter-arch is built adjacent to another arch to oppose its horizontal action or help to stabilize it, for example, when constructing a flying buttress. Shapes The large variety of arch shapes (left) can mostly be classified into three broad categories: rounded, pointed, and parabolic. Rounded "Round" semicircular arches were commonly used for ancient arches that were constructed of heavy masonry, and were relied heavily on by the Roman builders since the 4th century BC. It is considered to be the most common arch form, characteristic for Roman, Romanesque, and Renaissance architecture. A segmental arch, with a rounded shape that is less than a semicircle, is very old (the versions were cut in the rock in Ancient Egypt 2100 BC at Beni Hasan). Since then it was occasionally used in Greek temples, utilized in Roman residential construction, Islamic architecture, and got popular as window pediments during the Renaissance. A basket-handle arch (also known as depressed arch, three-centred arch, basket arch) consists of segments of three circles with origins at three different centers (sometimes uses five or seven segments, so can also be five-centred, etc.). Was used in late Gothic and Baroque architecture. 
A horseshoe arch (also known as keyhole arch) has a rounded shape that includes more than a semicircle; it is associated with Islamic architecture and was known in areas of Europe with Islamic influence (Spain, Southern France, Italy). Occasionally used in Gothic architecture, it briefly enjoyed popularity as an entrance door treatment in interwar England. Pointed A pointed arch consists of two ("two-centred arch") or more circle segments culminating in a point at the top. It originated in Islamic architecture, arrived in Europe in the second half of the 11th century (Cluny Abbey) and later became prominent in Gothic architecture. The advantages of a pointed arch over a semicircular one are a flexible ratio of span to rise and a lower horizontal reaction at the base. This innovation allowed for taller and more closely spaced openings, which are typical of Gothic architecture. The equilateral arch is the most common form of the pointed arch, with the centers of the two circles forming the intrados coinciding with the springing points of the opposite segment. Together with the apex point, they form an equilateral triangle, thus the name. If the centers of the circles are farther apart, the arch becomes a narrower and sharper lancet arch that appeared in France in the Early Gothic architecture (Saint-Denis Abbey) and became prominent in England in the late 12th and early 13th centuries (Salisbury Cathedral). If the centers are closer to one another, the result is a wider blunt arch. The intrados of the cusped arch (also known as multifoil arch, polyfoil arch, polylobed arch, and scalloped arch) includes several independent circle segments in a scalloped arrangement. These primarily decorative arches are common in Islamic architecture and Northern European Late Gothic, and can be found in Romanesque architecture. A similar trefoil arch includes only three segments and sometimes has a rounded, not pointed, top. Common in Islamic architecture and Romanesque buildings influenced by it, it later became popular in the decorative motifs of the Late Gothic designs of Northern Europe. Each arc of an ogee arch consists of at least two circle segments (for a total of at least four), with the center of an upper circle being outside the extrados. After its European appearance in the 13th century on the facade of the St Mark's Basilica, the arch became a fixture of the English Decorated style, French Flamboyant, Venetian, and other Late Gothic styles. The ogee arch is also known as a reversed curve arch, occasionally also called an inverted arch. The top of an ogee arch sometimes projects beyond the wall, forming the so-called nodding ogee popular in 14th century England (pulpitum in Southwell Minster). Each arc of a four-centred arch is made of two circle segments with distinct centers; usually the radius used closer to the springing point is smaller, with a more pronounced curvature. Common in Islamic architecture (Persian arch), and, with the upper portion flattened almost to straight lines (Tudor arch), in the English Perpendicular Gothic. A keel arch is a variant of the four-centred arch with haunches almost straight, resembling a section view of a capsized ship. Popular in Islamic architecture, it can also be found in Europe, occasionally with a small ogee element at the top, so it is sometimes considered to be a variation of an ogee arch. Curtain arch (also known as inflexed arch, and, like the keel arch, usually decorative) uses two (or more) drooping curves that join at the apex. 
Utilized as a dressing for windows and doors primarily in Saxony in the Late Gothic and early Renaissance buildings (late 15th to early 16th century), associated with . When the intrados has multiple concave segments, the arch is also called a draped arch or tented arch. A similar arch that uses a mixture of curved and straight segments or exhibits sharp turns between segments is a mixed-line arch (or mixtilinear arch). In Moorish architecture the mixed-line arch evolved into an ornate lambrequin arch, also known as muqarnas arch. Parabolic The popularity of the arches using segments of a circle is due to simplicity of layout and construction, not their structural properties. Consequently, the architects historically used a variety of other curves in their designs: elliptical curves, hyperbolic cosine curves (including catenary), and parabolic curves. There are two reasons behind the selection of these curves: they are still relatively easy to trace with common tools prior to construction; depending on a situation, they can have superior structural properties and/or appearance. The hyperbolic curve is not easy to trace, but there are known cases of its use. The non-circumferential curves look similar, and match at shallow profiles, so a catenary is often misclassified as a parabola (per Galileo, "the [hanging] chain fits its parabola almost perfectly"). González et al. provide an example of Palau Güell, where researchers do not agree on classification of the arches or claim the prominence of parabolic arches, while the measurements show that just two of the 23 arches designed by Gaudi are actually parabolic. Three parabolic-looking curves in particular are of significance to the arch design: parabola itself, catenary, and weighted catenary. The arches naturally use the inverted (upside-down) versions of these curves. A parabola represents an ideal (all-compression) shape when the load is equally distributed along the span, while the weight of the arch itself is negligible. A catenary is the best solution for the case where an arch with uniform thickness carries just its own weight with no external load. The practical designs for bridges are somewhere in between, and thus use the curves that represent a compromise that combines both the catenary and the funicular curve for particular non-uniform distribution of load. The practical free-standing arches are stronger and thus heavier at the bottom, so a weighted catenary curve is utilized for them. The same curve also fits well an application where a bridge consists of an arch with a roadway of packed dirt above it, as the dead load increases with a distance from the center. Other Unlike regular arches, the flat arch (also known as jack arch, lintel arch, straight arch, plate-bande) is not curved. Instead, the arch is flat in profile and can be used under the same circumstances as lintel. However, lintels are subject to bending stress, while the flat arches are true arches, composed of irregular voussoir shapes (the keystone is the only one of the symmetric wedge shape), and that efficiently uses the compressive strength of the masonry in the same manner as a curved arch and thus requires a mass of masonry on both sides to absorb the considerable lateral thrust. Used in the Roman architecture to imitate the Greek lintels, Islamic architecture, European medieval and Renaissance architecture. The flat arch is still being used as a decorative pattern, primarily at the top of window openings. 
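The Parabolic section above notes that catenary and parabolic arches look similar and are frequently confused, especially at shallow profiles. The following is a minimal, self-contained sketch that tabulates both curves for an arch of the same span and rise, solving for the catenary parameter by bisection. The span of 20, rise of 5, and sample count are arbitrary illustrative assumptions, not values taken from the article.

```python
import math

def catenary_vs_parabola(span=20.0, rise=5.0, samples=9):
    """Compare a parabolic arch with an inverted catenary of the same span and
    rise.  x runs from 0 to span; y is the height of the arch axis above the
    springing line, and both curves reach the full rise at x = span/2."""
    # Parabolic funicular for a uniformly distributed load: y = 4*f*x*(L - x)/L^2
    def parabola(x):
        return 4.0 * rise * x * (span - x) / span ** 2

    # Inverted catenary through the same three points.  Its parameter a
    # satisfies a*(cosh(L/(2a)) - 1) = rise, and the left side decreases as a
    # grows, so a simple bisection finds it.
    def rise_for(a):
        return a * (math.cosh(span / (2.0 * a)) - 1.0)

    lo, hi = span / 1000.0, 1e6 * span   # bracket chosen to avoid cosh overflow
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rise_for(mid) > rise:
            lo = mid                      # catenary too deep: need a larger parameter
        else:
            hi = mid
    a = 0.5 * (lo + hi)

    def catenary(x):
        return rise - a * (math.cosh((x - span / 2.0) / a) - 1.0)

    for i in range(samples):
        x = span * i / (samples - 1)
        print(f"x={x:6.2f}  parabola={parabola(x):6.3f}  catenary={catenary(x):6.3f}")

catenary_vs_parabola()
```

For shallow rise-to-span ratios the two columns agree closely, which is why, as noted above, catenary arches are often misclassified as parabolas and vice versa.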
False arches The corbel (also corbelled) arch, made of two corbels meeting in the middle of the span, is a true arch in the sense of being able to carry a load, but it is false in a structural sense, as its components are subject to bending stress. The typical profile is not curved, but has a triangular shape. Invented prior to the semicircular arch, the corbel arch was already used in Egyptian and Mycenaean architecture in the 3rd and 2nd millennia BC. Like a corbel arch, the triangular arch is not a true arch in a structural sense. Its intrados is formed by two slabs leaning against each other. Brick builders would call triangular any arch with straight inclined sides. The design was common in Anglo-Saxon England until the late 11th century (St Mary Goslany). Mayan corbel arches are sometimes called triangular due to their shape. Variations A few transformations can be applied to arch shapes. If one impost is much higher than another, the arch (frequently pointed) is known as a ramping arch, raking arch, or rampant arch (from ). Originally used to support inclined structures, like stairs, in the 13th-14th centuries such arches appeared as parts of flying buttresses used to counteract the thrust of Gothic ribbed vaults. A central part of an arch can be raised on short vertical supports, creating a trefoil-like shouldered arch. The raised central part can vary all the way from a flat arch to ogee. The shouldered arches were used to decorate openings in Europe from medieval times to Late Gothic architecture, became common in Iranian architecture from the 14th century, and were later adopted in Ottoman Turkey. In a stilted arch (also surmounted), the springing line is located above the imposts (on "stilts"). Known to Islamic architects by the 8th century, the technique was utilized to vertically align the apexes of arches of different dimensions in Romanesque and Gothic architecture. Stilting was useful for semicircular arches, where the rise is fixed at of the span, but was applied to pointed arches, too. The skew arch (also known as an oblique arch) is used when the arch needs to form an oblique angle in the horizontal plane with respect to the (parallel) springings, for example, when a bridge crosses the river at an angle other than 90°. A splayed arch is used for the case of unequal spans on the sides of the arch (when, for example, an interior opening in the wall is larger than the exterior one); the intrados of a round splayed arch is not cylindrical, but has a conical shape. A wide arch with its rise less than of the span (and thus the geometric circle of at least one segment is below the springing line) is called a surbased arch (sometimes also a depressed arch). A drop arch is either a basket handle arch or a blunt arch. Hinged arches Practical arch bridges are built either as a fixed arch, a two-hinged arch, or a three-hinged arch. The fixed arch is most often used in reinforced concrete bridges and tunnels, which have short spans. Because it is subject to additional internal stress from thermal expansion and contraction, this kind of arch is statically indeterminate (the internal state is impossible to determine based on the external forces alone). The two-hinged arch is most often used to bridge long spans. This kind of arch has pinned connections at its base. Unlike that of the fixed arch, the pinned base can rotate, thus allowing the structure to move freely and compensate for the thermal expansion and contraction that changes in outdoor temperature cause. 
However, this can result in additional stresses, and therefore the two-hinged arch is also statically indeterminate, although not as much as the fixed arch. The three-hinged arch is not only hinged at its base, like the two-hinged arch, yet also at its apex. The additional apical connection allows the three-hinged arch to move in two opposite directions and compensate for any expansion and contraction. This kind of arch is thus not subject to additional stress from thermal change. Unlike the other two kinds of arch, the three-hinged arch is therefore statically determinate. It is most often used for spans of medial length, such as those of roofs of large buildings. Another advantage of the three-hinged arch is that the reaction of the pinned bases is more predictable than the one for the fixed arch, allowing shallow, bearing-type foundations in spans of medial length. In the three-hinged arch "thermal expansion and contraction of the arch will cause vertical movements at the peak pin joint but will have no appreciable effect on the bases," which further simplifies foundational design. History The arch became popular in the Roman times and mostly spread alongside the European influence, although it was known and occasionally used much earlier. Many ancient architectures avoided the use of arches, including the Viking and Hindu ones. Bronze Age: ancient Near East True arches, as opposed to corbel arches, were known by a number of civilizations in the ancient Near East including the Levant, but their use was infrequent and mostly confined to underground structures, such as drains where the problem of lateral thrust is greatly diminished. An example of the latter would be the Nippur arch, built before 3800 BC, and dated by H. V. Hilprecht (1859–1925) to even before 4000 BC. Rare exceptions are an arched mudbrick home doorway dated to from Tell Taya in Iraq and two Bronze Age arched Canaanite city gates, one at Ashkelon (dated to ), and one at Tel Dan (dated to ), both in modern-day Israel. An Elamite tomb dated 1500 BC from Haft Teppe contains a parabolic vault which is considered one of the earliest evidences of arches in Iran. The use of true arches in Egypt also originated in the 4th millennium BC (underground barrel vaults at the Dendera cemetery). Standing arches were known since at least the Third Dynasty, but very few examples survived, since the arches were mostly used in non-durable secular buildings and made of mud brick voussoirs that were not wedge-shaped, but simply held in place by mortar, and thus susceptible to a collapse (the oldest arch still standing is at Ramesseum). Sacred buildings exhibited either lintel design or corbelled arches. Arches were mostly missing in Egypt temples even after the Roman conquest, even though Egyptians thought of the arch as a spiritual shape and used it in the rock-cut tombs and portable shrines. Auguste Mariette suggested that this choice was based on a relative fragility of a vault: "what would remain of the tombs and temples of Egyptians today, if they had preferred the vault?" Mycenaean architecture utilized only the corbel arches in their beehive tombs with triangular openings. Mycenaeans had also built probably the oldest still standing stone-arch bridge in the world, Arkadiko Bridge, in Greece. As evidenced by their imitations of the parabolic arches, Hittites most likely were exposed to the Egyptian designs, but used the corbelled technique to build them. 
Classical Persia and Greece The Assyrians, also apparently under the Egyptian influence, adopted the true arch (with a slightly pointed profile) early in the 8th century. In ancient Persia, the Achaemenid Empire (550 BC–330 BC) built small barrel vaults (essentially a series of arches built together to form a hall) known as iwan, which became massive, monumental structures during the later Parthian Empire (247 BC–AD 224). This architectural tradition was continued by the Sasanian Empire (224–651), which built the Taq Kasra at Ctesiphon in the 6th century AD, the largest free-standing vault until modern times. An early European example of a voussoir arch appears in the 4th century BC Greek Rhodes Footbridge. Proto-true arches can also be found under the stairs of the temple of Apollo at Didyma and the stadium at Olympia. . Ancient Rome The ancient Romans learned the semicircular arch from the Etruscans (both cultures apparently adopted the design in the 4th century BC), refined it and were the first builders in Europe to tap its full potential for above ground buildings: The Romans were the first builders in Europe, perhaps the first in the world, to fully appreciate the advantages of the arch, the vault and the dome. Throughout the Roman Empire, from Syria to Scotland, engineers erected arch structures. The first use of arches was for civic structures, like drains and city gates. Later the arches were utilized for major civic buildings bridges and aqueducts, with the outstanding 1st century AD examples provided by the Colosseum, Pont Du Gard, and the aqueduct of Segovia. The introduction of the ceremonial triumphal arch dates back to Roman Republic, although the best examples are from the imperial times (Arch of Augustus at Susa, Arch of Titus). Romans initially avoided using the arch in the religious buildings and, in Rome, arched temples were quite rare until the recognition of Christianity in 313 AD (with the exceptions provided by the Pantheon and the "temple of Minerva Medica"). Away from the capital, arched temples were more common (, temple of Jupiter at Sbeitla, Severan temple at Djemila). Arrival of Christianity prompted creation of the new type of temple, a Christian basilica, that made a thorough break with the pagan tradition with arches as one of the main elements of the design, along with the exposed brick walls (Santa Sabina in Rome, Sant'Apollinare in Classe). For a long period, from the late 5th century to the 20th century, arcades were a standard staple for the Western Christian architecture. Vaults began to be used for roofing large interior spaces such as halls and temples, a function that was also assumed by domed structures from the 1st century BC onwards. The segmental arch was first built by the Romans who realized that an arch in a bridge did not have to be a semicircle, such as in Alconétar Bridge or Ponte San Lorenzo. The utilitarian and mass residential (insulae) buildings, as found in Ostia Antica and Pompeii, mostly used low segmental arches made of bricks and architraves made of wood, while the concrete lintel arches can be found in villas and palaces. Ancient China Ancient architecture of China (and Japan) used mostly timber-framed construction and trabeated system. Arches were little-used, although there are few arch bridges known from literature and one artistic depiction in stone-carved relief. 
Since the only surviving artefacts of architecture from the Han dynasty (202 BC – 220 AD) are rammed earth defensive walls and towers, ceramic roof tiles from no longer existent wooden buildings, stone gate towers, and underground brick tombs, the known vaults, domes, and archways were built with the support of the earth and were not free-standing. China's oldest surviving stone arch bridge is the Anji Bridge. Still in use, it was built between 595 CE and 605 CE during the Sui dynasty. Islamic Islamic architects adopted the Roman arches, but had quickly shown their resourcefulness: by the 8th century the simple semicircular arch was almost entirely replaced with fancier shapes, few fine examples of the former in the Umayyad architecture notwithstanding (cf. the Great Mosque of Damascus, 706–715 CE). The first pointed arches appear already at the end of the 7th century AD (Al-Aqsa Mosque, Palace of Ukhaidhir, cisterns at the White Mosque of Ramle). Their variations spread fast and wide: Mosque of Ibn Tulun in Cairo (876-879 AD), Nizamiyya Madrasa at Khar Gerd (now Iran, 11th century), Kongo Mosque in Diani Beach (Kenya, 16th century). Islamic architecture brought to life a large amount of arch forms: the round horseshoe arch that became a characteristic trait of the Islamic buildings, the keel arch, the cusped arch, and the mixed-line arch (where the curved "ogee swell" is interspersed with abrupt bends). The Great Mosque of Cordoba, that can be considered a catalogue of Islamic arches, contains also the arches with almost straight sides, trefoil, interlaced, and joggled. Mosque of Ibn Tulun adds four-centred and stilted version of the pointed arch. It is quite likely that the appearance of the pointed arch, an essential element of the Gothic style, in Europe (Monte Cassino, 1066–1071 AD, and the Cluny Abbey five years later) and the ogee arch in Venice ( 1250) is a result of the Islamic influence, possibly through Sicily. Saoud also credits to Islamic architects the spread of the transverse arch. Mixed-line arch became popular in the Mudéjar style and subsequently spread around the Spanish-speaking world. Western Europe The collapse of the Western Roman Empire left the church as the only client of major construction; with all pre-Romanesque architectural styles borrowing from Roman construction with its semicircular arch. Due to the decline in the construction quality, the walls were thicker, and the arches thus heavier, than their Roman prototypes. Eventually the architects started to use the depth of the arches for decoration, turning the deep opening into an arch order (or rebated arch, a sequence of progressively smaller concentric arches, each inset with a rebate). Romanesque style started experiments with the pointed arch late in the 11th century (Cluny Abbey). In few decades, the practice spread (Durham Cathedral, Basilica of Saint-Denis). Early Gothic utilized the flexibility of the pointed arch by grouping together arches of different spans but with the same height. While the arches used in the mediaeval Europe were borrowed from the Roman and Islamic architecture, the use of pointed arch to form the rib vault was novel and became the defining characteristic of Gothic construction. At about 1400 AD, the city-states of Italy, where the pointed arch had never gotten much traction, initiated the revival of the Roman style with its round arches, Renaissance. By the 16th century the new style spread across Europe and, through the influence of empires, to the rest of the world. 
The arch became a dominant architectural form until the introduction of new construction materials, like steel and concrete. India The history of the arch in India is very long (some arches were apparently found in excavations of Kosambi, 2nd millennium BC). However, the continuous history begins with rock-cut arches in the Lomas Rishi cave (3rd century BC). A vaulted roof of an early Harappan burial chamber has been noted at Rakhigarhi. S. R. Rao reports a vaulted roof of a small chamber in a house from Lothal. Barrel vaults were also used in the Late Harappan Cemetery H culture (dated 1900 BC-1300 BC), where they formed the roof of a metalworking furnace; the discovery was made by Vats in 1940 during excavation at Harappa. The use of arches until the Islamic conquest of India in the 12th century AD was sporadic, with ogee arches and barrel vaults in rock-cut temples (Karla Caves, from the 1st century BC) and decorative pointed gavaksha arches. By the 5th century AD voussoir vaults were used structurally in brick construction. Surviving examples include the temple at Bhitargaon (5th century AD) and Mahabodhi temple (7th century AD); the latter has both pointed and semicircular arches. This Gupta-era arch and vault system was later used extensively in Burmese Buddhist temples in Pyu and Bagan in the 11th and 12th centuries. With the arrival of Islamic and other Western Asian influences, arches became prominent in Indian architecture, although post and lintel construction was still preferred. A variety of pointed and lobed arches was characteristic of Indo-Islamic architecture, with the monumental example of the Buland Darwaza, which has a pointed arch decorated with small cusped arches. Pre-Columbian America Mayan architecture utilized corbel arches. Other Mesoamerican cultures used only flat roofs with no arches whatsoever, although some researchers have suggested that both Maya and Aztec architects understood the concept of a true arch. Revival of the trabeated system The 19th-century introduction of wrought iron (and later steel) into construction changed the role of the arch. Due to the high tensile strength of the new materials, relatively long lintels became possible, as was demonstrated by the tubular Britannia Bridge (Robert Stephenson, 1846-1850). A fervent proponent of the trabeated system, Alexander "Greek" Thomson, whose preference for lintels was originally based on aesthetic criteria, observed that the spans of this bridge are longer than those of any arch ever built, thus "the simple, unsophisticated stone lintel contains in its structure all the scientific appliances [...] used in the great tubular bridge. [...] Stonehenge is more scientifically constructed than York Minster." Use of arches in bridge construction continued (the Britannia Bridge was rebuilt in 1972 as a truss arch bridge), yet steel frames and reinforced concrete frames mostly replaced arches as the load-bearing elements in buildings. Construction As a pure compression form, the utility of the arch is due to many building materials, including stone and unreinforced concrete, being strong under compression, but brittle when tensile stress is applied to them. Masonry The voussoirs can be wedge-shaped or have the form of a rectangular cuboid; in the latter case the wedge-like shape is provided by the mortar. An arch is held in place by the weight of all of its members, making construction problematic. 
One answer is to build a frame (historically, of wood) which exactly follows the form of the underside of the arch. This is known as a centre or centring. Voussoirs are laid on it until the arch is complete and self-supporting. For an arch higher than head height, scaffolding would be required, so it could be combined with the arch support. Arches may fall when the frame is removed if design or construction has been faulty. Old arches sometimes need reinforcement due to decay of the keystones, forming what is known as a bald arch. Reinforced concrete In reinforced concrete construction, the principle of the arch is used so as to benefit from the concrete's strength in resisting compressive stress. Where any other form of stress arises, such as tensile or torsional stress, it has to be resisted by carefully placed reinforcement rods or fibres. Architectural styles The type of arches (or their absence) is one of the most prominent characteristics of an architectural style. For example, when Heinrich Hübsch, in the 19th century, tried to classify the architectural style, his "primary elements" were roof and supports, with the top-level basic types: trabeated (no arches) and arcuated (arch-based). His next division for the arcuated styles was based on the use of round and pointed arch shapes. Cultural references The steady horizontal push of an arch against the abutments gave rise to a saying "the arch never sleeps", attributed to many sources, from Hindu to Arab. This adage stresses that the arch carries "a seed of death" for itself and the structure containing it, a statement that can be made upon observation of the Roman ruins. The plot of The Nebuly Coat by J. Meade Falkner, inspired by the collapse of a tower at Chichester Cathedral, plays with the idea while dealing with the slow disintegration of a church building. Saoud explains the proverb by the chain-like self-balancing of the horizontal and vertical forces in the arch and its "universal adaptability". See also Buttress Dome Flying arch Flying buttress Order moulding Suspension bridge References Sources External links Physics of Stone Arches by Nova: a model to build an arch without it collapsing InteractiveTHRUST: interactive applets, tutorials Paper about the three-hinged arch of the Galerie des Machines of 1889 Written by Javier Estévez Cimadevila & Isaac López César. Bridge components
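As a worked illustration of the arch action and hinged-arch behaviour described above, the following gives the support reactions of a three-hinged parabolic arch under a uniformly distributed load. The symbols w (load per unit span), L (span), and f (rise) are introduced here only for the example and do not appear in the article; this is a standard textbook result, not a derivation specific to any structure mentioned above.

```latex
% Three-hinged parabolic arch, span L, rise f, uniform load w per unit span.
% Vertical reactions follow from symmetry:
V_A = V_B = \frac{wL}{2}
% The horizontal thrust follows from the moment balance of one half-arch
% about the crown hinge, where the bending moment must vanish:
H f = V_A \frac{L}{2} - w \frac{L}{2}\cdot\frac{L}{4}
    = \frac{wL^2}{4} - \frac{wL^2}{8}
\quad\Longrightarrow\quad
H = \frac{wL^2}{8f}
% The corresponding funicular (moment-free) axis is the parabola
y(x) = \frac{4 f\, x\,(L - x)}{L^2}
```

The 1/f dependence of the thrust H is the quantitative form of the earlier observations that a taller arch produces a lower horizontal reaction at its base, while shallow arches need heavy abutments, counter-arches, or ties.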
Arch
[ "Technology" ]
7,813
[ "Bridge components", "Components" ]
52,081
https://en.wikipedia.org/wiki/Antihydrogen
Antihydrogen () is the antimatter counterpart of hydrogen. Whereas the common hydrogen atom is composed of an electron and proton, the antihydrogen atom is made up of a positron and antiproton. Scientists hope that studying antihydrogen may shed light on the question of why there is more matter than antimatter in the observable universe, known as the baryon asymmetry problem. Antihydrogen is produced artificially in particle accelerators. Experimental history Accelerators first detected hot antihydrogen in the 1990s. ATHENA studied cold antihydrogen in 2002. It was first trapped by the Antihydrogen Laser Physics Apparatus (ALPHA) team at CERN in 2010, who then measured its structure and other important properties. ALPHA, AEgIS, and GBAR plan to further cool and study antihydrogen atoms. 1s–2s transition measurement In 2016, the ALPHA experiment measured the atomic electron transition between the two lowest energy levels of antihydrogen, 1s–2s. The results, which are identical to those of hydrogen within the experimental resolution, support the idea of matter–antimatter symmetry and CPT symmetry. In the presence of a magnetic field the 1s–2s transition splits into two hyperfine transitions with slightly different frequencies. The team calculated the transition frequencies for normal hydrogen under the magnetic field in the confinement volume as: fdd = fcc = A single-photon transition between s states is prohibited by quantum selection rules, so to elevate ground state positrons to the 2s level, the confinement space was illuminated by a laser tuned to half the calculated transition frequencies, stimulating the allowed two-photon absorption. Antihydrogen atoms excited to the 2s state can then evolve in one of several ways: They can emit two photons and return directly to the ground state as they were They can absorb another photon, which ionizes the atom They can emit a single photon and return to the ground state via the 2p state—in this case the positron spin can flip or remain the same. Both the ionization and spin-flip outcomes cause the atom to escape confinement. The team calculated that, assuming antihydrogen behaves like normal hydrogen, roughly half the antihydrogen atoms would be lost during the resonant frequency exposure, as compared to the no-laser case. With the laser source tuned 200 kHz below half the transition frequencies, the calculated loss was essentially the same as for the no-laser case. The ALPHA team made batches of antihydrogen, held them for 600 seconds and then tapered down the confinement field over 1.5 seconds while counting how many antihydrogen atoms were annihilated. They did this under three different experimental conditions: Resonance: exposing the confined antihydrogen atoms to a laser source tuned to exactly half the transition frequency for 300 seconds for each of the two transitions, Off-resonance: exposing the confined antihydrogen atoms to a laser source tuned 200 kilohertz below the two resonance frequencies for 300 seconds each, No-laser: confining the antihydrogen atoms without any laser illumination. The two controls, off-resonance and no-laser, were needed to ensure that the laser illumination itself was not causing annihilations, perhaps by liberating normal atoms from the confinement vessel surface that could then combine with the antihydrogen. The team conducted 11 runs of the three cases and found no significant difference between the off-resonance and no-laser runs, but a 58% drop in the number of events detected after the resonance runs. 
They were also able to count annihilation events during the runs and found a higher level during the resonance runs, again with no significant difference between the off-resonance and no laser runs. The results were in good agreement with predictions based on normal hydrogen and can be "interpreted as a test of CPT symmetry at a precision of 200 ppt." Characteristics The CPT theorem of particle physics predicts antihydrogen atoms have many of the characteristics regular hydrogen has; i.e. the same mass, magnetic moment, and atomic state transition frequencies (see atomic spectroscopy). For example, excited antihydrogen atoms are expected to glow the same color as regular hydrogen. Antihydrogen atoms should be attracted to other matter or antimatter gravitationally with a force of the same magnitude that ordinary hydrogen atoms experience. This would not be true if antimatter has negative gravitational mass, which is considered highly unlikely, though not yet empirically disproven (see gravitational interaction of antimatter). Recent theoretical framework for negative mass and repulsive gravity (antigravity) between matter and antimatter has been developed, and the theory is compatible with CPT theorem. When antihydrogen comes into contact with ordinary matter, its constituents quickly annihilate. The positron annihilates with an electron to produce gamma rays. The antiproton, on the other hand, is made up of antiquarks that combine with quarks in either neutrons or protons, resulting in high-energy pions, that quickly decay into muons, neutrinos, positrons, and electrons. If antihydrogen atoms were suspended in a perfect vacuum, they should survive indefinitely. As an anti-element, it is expected to have exactly the same properties as hydrogen. For example, antihydrogen would be a gas under standard conditions and combine with antioxygen to form antiwater, 2. Production The first antihydrogen was produced in 1995 by a team led by Walter Oelert at CERN using a method first proposed by Charles Munger Jr, Stanley Brodsky and Ivan Schmidt Andrade. In the LEAR, antiprotons from an accelerator were shot at xenon clusters, producing electron-positron pairs. Antiprotons can capture positrons with probability about , so this method is not suited for substantial production, as calculated. Fermilab measured a somewhat different cross section, in agreement with predictions of quantum electrodynamics. Both resulted in highly energetic, or hot, anti-atoms, unsuitable for detailed study. Subsequently, CERN built the Antiproton Decelerator (AD) to support efforts towards low-energy antihydrogen, for tests of fundamental symmetries. The AD supplies several CERN groups. CERN expects their facilities will be capable of producing 10 million antiprotons per minute. Low-energy antihydrogen Experiments by the ATRAP and ATHENA collaborations at CERN, brought together positrons and antiprotons in Penning traps, resulting in synthesis at a typical rate of 100 antihydrogen atoms per second. Antihydrogen was first produced by ATHENA in 2002, and then by ATRAP and by 2004, millions of antihydrogen atoms were made. The atoms synthesized had a relatively high temperature (a few thousand kelvins), and would hit the walls of the experimental apparatus as a consequence and annihilate. Most precision tests require long observation times. ALPHA, a successor of the ATHENA collaboration, was formed to stably trap antihydrogen. 
While electrically neutral, its spin magnetic moments interact with an inhomogeneous magnetic field; some atoms will be attracted to a magnetic minimum, created by a combination of mirror and multipole fields. In November 2010, the ALPHA collaboration announced that they had trapped 38 antihydrogen atoms for a sixth of a second, the first confinement of neutral antimatter. In June 2011, they trapped 309 antihydrogen atoms, up to 3 simultaneously, for up to 1,000 seconds. They then studied antihydrogen's hyperfine structure, gravity effects, and charge. ALPHA will continue measurements along with experiments ATRAP, AEgIS and GBAR. In 2018, AEgIS produced a novel pulsed source of antihydrogen atoms with a production time spread of merely 250 nanoseconds. The pulsed source is generated by the charge exchange reaction between Rydberg positronium atoms -- produced via the injection of a pulsed positron beam into a nanochanneled Si target, and excited by laser pulses -- and antiprotons, trapped, cooled and manipulated in electromagnetic traps. The pulsed production enables the control of the antihydrogen temperature, the formation of an antihydrogen beam, and in the next phase a precision measurement of the gravitational behaviour using an atomic interferometer, the so-called moiré deflectometer. Larger antimatter atoms Larger antimatter atoms such as antideuterium (), antitritium (), and antihelium () are much more difficult to produce. Antideuterium, antihelium-3 () and antihelium-4 () nuclei have been produced with such high velocities that synthesis of their corresponding atoms poses several technical hurdles. See also Gravitational interaction of antimatter References External links Antimatter Hydrogen Hydrogen physics Gases
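To put rough numbers on the 1s–2s spectroscopy described above, the sketch below estimates the transition frequency from the simple Bohr/Rydberg formula and the resulting wavelength of the two-photon driving laser. It deliberately ignores the QED, hyperfine, and Zeeman corrections that the actual measurement resolves, so the output is only an order-of-magnitude check, not the measured value; the constants are standard textbook approximations.

```python
# Approximate physical constants (SI units).
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
Ry_eV = 13.605693       # Rydberg energy, eV
eV = 1.602176634e-19    # joules per eV

# 1s -> 2s energy gap from the Bohr model: E = Ry * (1/1**2 - 1/2**2) ~ 10.2 eV
E = Ry_eV * (1.0 - 0.25) * eV
f_1s2s = E / h                      # one-photon transition frequency, ~2.47e15 Hz

# The measurement drives the transition with two photons, so the laser runs at
# half that frequency, i.e. at roughly twice the single-photon wavelength.
f_laser = f_1s2s / 2.0
wavelength_nm = c / f_laser * 1e9   # ~243 nm, deep ultraviolet

print(f"1s-2s frequency ~ {f_1s2s:.3e} Hz")
print(f"two-photon laser wavelength ~ {wavelength_nm:.1f} nm")
```

The hyperfine-split fcc and fdd frequencies quoted in the measurement section are small corrections on top of this ~2.47 PHz scale, which is why the experiment is sensitive enough to serve as a test of CPT symmetry at the 200 ppt level.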
Antihydrogen
[ "Physics", "Chemistry" ]
1,854
[ "Antimatter", "Matter", "Phases of matter", "Statistical mechanics", "Gases" ]
52,082
https://en.wikipedia.org/wiki/Cable%20transport
Cable transport is a broad class of transport modes that have cables. They transport passengers and goods, often in vehicles called cable cars. The cable may be driven or passive, and items may be moved by pulling, sliding, sailing, or by drives within the object being moved on cableways. The use of pulleys and balancing of loads moving up and down are common elements of cable transport. They are often used in mountainous areas where cable haulage can overcome large differences in elevation. Common modes of cable transport Aerial transport Forms of cable transport in which one or more cables are strung between supports of various forms and cars are suspended from these cables. Aerial tramway Chairlift Funitel Gondola lift Ski lift Zip line Cable railways Forms of cable transport where cars on rails are hauled by cables. The rails are usually steeply inclined and usually at ground level. Cable car Funicular Other Other forms of cable-hauled transport. Cable ferry Surface lift Elevator History Rope-drawn transport dates back to 250 BC as evidenced by illustrations of aerial ropeway transportation systems in South China. Early aerial tramways The first recorded mechanical ropeway was by Venetian Fausto Veranzio who designed a bi-cable passenger ropeway in 1616. The industry generally considers Dutchman Adam Wybe to have built the first operational system in 1644. The technology, which was further developed by the people living in the Alpine regions of Europe, progressed and expanded with the advent of wire rope and electric drive. The first use of wire rope for aerial tramways is disputed. American inventor Peter Cooper is one early claimant, constructing an aerial tramway using wire rope in Baltimore 1832, to move landfill materials. Though there is only partial evidence for the claimed 1832 tramway, Cooper was involved in many of such tramways built in the 1850s, and in 1853 he built a two-mile-long tramway to transport iron ore to his blast furnaces at Ringwood, New Jersey. World War I motivated extensive use of military tramways for warfare between Italy and Austria. During the industrial revolution, new forms of cable-hauled transportation systems were created including the use of steel cable to allow for greater load support and larger systems. Aerial tramways were first used for commercial passenger haulage in the 1900s. The first cable railways The earliest form of cable railway was the gravity incline, which in its simplest form consists of two parallel tracks laid on a steep gradient, with a single rope wound around a winding drum and connecting the trains of wagons on the tracks. Loaded wagons at the top of the incline are lowered down, their weight hauling empty wagons from the bottom. The winding drum has a brake to control the rate of travel of the wagons. The first use of a gravity incline isn't recorded, but the Llandegai Tramway at Bangor in North Wales was opened in 1798, and is one of the earliest examples using iron rails. The first cable-hauled street railway was the London and Blackwall Railway, built in 1840, which used fibre to grip the haulage rope. This caused a series of technical and safety issues, which led to the adoption of steam locomotives by 1848. The first Funicular railway was opened in Lyon in 1862. The Westside and Yonkers Patent Railway Company developed a cable-hauled elevated railway. This 3½ mile long line was proposed in 1866 and opened in 1868. It operated as a cable railway until 1871 when it was converted to use steam locomotives. 
The next development of the cable car came in California. Andrew Hallidie, a Scottish emigre, gave San Francisco the first effective and commercially successful route, using steel cables, opening the Clay Street Hill Railroad on August 2, 1873. Hallidie was a manufacturer of steel cables. The system featured a human-operated grip, which was able to start and stop the car safely. The rope that was used allowed the multiple, independent cars to run on one line, and soon Hallidie's concept was extended to multiple lines in San Francisco. The first cable railway outside the United Kingdom and the United States was the Roslyn Tramway, which opened in 1881, in Dunedin, New Zealand. America remained the country that made the greatest use of cable railways; by 1890 more than 500 miles of cable-hauled track had been laid, carrying over 1,000,000 passengers per year. However, in 1890, electric tramways exceeded the cable hauled tramways in mileage, efficiency and speed. Early ski lifts The first surface lift was built in 1908 by German Robert Winterhalder in Schollach/Eisenbach, Hochschwarzwald and started operations February 14, 1908. A steam-powered toboggan tow, in length, was built in Truckee, California, in 1910. The first skier-specific tow in North America was apparently installed in 1933 by Alec Foster at Shawbridge in the Laurentians outside Montreal, Quebec. The modern J-bar and T-bar mechanism was invented in 1934 by the Swiss engineer Ernst Constam, with the first lift installed in Davos, Switzerland. The first chairlift was developed by James Curran in 1936. The co-owner of the Union Pacific Railroad, William Averell Harriman owned America's first ski resort, Sun Valley, Idaho. He asked his design office to tackle the problem of lifting skiers to the top of the resort. Curran, a Union Pacific bridge designer, adapted a cable hoist he had designed for loading bananas in Honduras to create the first ski lift. More recent developments More recent developments are being classified under the type of track that their design is based upon. After the success of this operation, several other projects were initiated in New Zealand and Chicago. The social climate around pollution is allowing for a shift from cars back to the utilization of cable transport due to their advantages. However, for many years they were a niche form of transportation used primarily in difficult-to-operate conditions for cars (such as on ski slopes as lifts). Now that cable transport projects are on the increase, the social effects are beginning to become more significant. In 2018 the highest 3S cablecar has been inaugurated in Zermatt, Switzerland after more than two years of construction. This cablecar is also called the "Matterhorn Glacier ride" and it allows passengers to reach the top of the Klein Matterhorn mountain (3883m) Social effects Comparison with other transport types When compared to trains and cars, the volume of people to transport over time and the start-up cost of the project must be a consideration. In areas with extensive road networks, personal vehicles offer greater flexibility and range. Remote places like mountainous regions and ski slopes may be difficult to link with roads, making cable transport project a much easier approach. A cable transport project system may also need fewer invasive changes to the local environment. The use of Cable Transport is not limited to such rural locations as skiing resorts; it can be used in urban development areas. 
Their uses in urban areas include funicular railways, gondola lifts, and aerial tramways. Safety According to a study by the technical inspection association TÜV SÜD, for every 100 million hours of travel, there are on average 25 deaths due to car accidents, 16 due to plane accidents and only two due to cable car accidents, most of which are due to passenger behaviour. Accidents A cable car accident in Cavalese, Italy, on 9 March 1976 is considered the worst aerial lift accident in history. The car fell from its supporting cable and plunged 200 meters down a mountainside, sliding across a grassy meadow before coming to a halt. The tragedy caused the death of 43 people, and four lift officials were jailed on charges regarding the accident. On April 15, 1978, a cable car at Squaw Valley Ski Resort in California came off one of its cables, dropping 75 feet (23 m) and bouncing violently back up. It collided with a cable, which sheared through the car. Four people were killed and 31 injured. The Singapore cable car crash of 29 January 1983 occurred when a drilling rig passed beneath the cable car system linking the Singapore mainland with Sentosa island. The derrick of the drilling rig aboard the ship MV Eniwetok struck the cables, causing two of the gondolas to fall into the sea below. There were 7 fatalities. On February 3, 1998, twenty people died in Cavalese, Italy, when a United States Marine Corps EA-6B Prowler aircraft, flying too low in violation of regulations, cut a cable supporting a gondola of an aerial tramway. Those killed, 19 passengers and one operator, were eight Germans, five Belgians, three Italians, two Poles, one Austrian, and one Dutch national. The United States refused to have the four Marines tried under Italian law and later court-martialed two of them in the United States on lesser charges. The Kaprun disaster was a fire that occurred in an ascending train in the tunnel of the Gletscherbahn Kaprun 2 funicular in Kaprun, Austria, on 11 November 2000. The disaster claimed the lives of 155 people, leaving 12 survivors (10 Germans and two Austrians) from the burning train. It is one of the worst cable car accidents in history. A cable car derailed and crashed to the ground in the Nevis Range, near Fort William, Scotland, on 13 July 2006, seriously injuring all five passengers. Another car on the same rail also slid back down the rails when the crash happened. Following the incident, 50 people were left stranded at the station while staff and rescue workers helped the passengers of the crashed car. On Wednesday 25 July 2012, passengers of the London cable car were stuck 90 meters in the air when a power failure caused the gondola to stop over the River Thames. The fault happened at 11:45 am and lasted for about 30 minutes. No passengers were injured, but this was the first problem to hit London's new cable car link. References Further reading External links Melbourne's cable trams on YouTube San Francisco's Cable Cars & Motor Cars; 1900-1940s – with 1906 Earthquake on YouTube Transport by mode Sustainable transport
Cable transport
[ "Physics" ]
2,050
[ "Sustainable transport", "Transport", "Transport by mode", "Physical systems" ]
52,085
https://en.wikipedia.org/wiki/Protein%20folding
Protein folding is the physical process by which a protein, after synthesis by a ribosome as a linear chain of amino acids, changes from an unstable random coil into a more ordered three-dimensional structure. This structure permits the protein to become biologically functional. The folding of many proteins begins even during the translation of the polypeptide chain. The amino acids interact with each other to produce a well-defined three-dimensional structure, known as the protein's native state. This structure is determined by the amino-acid sequence or primary structure. The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded, indicating that protein dynamics are important. Failure to fold into a native structure generally produces inactive proteins, but in some instances, misfolded proteins have modified or toxic functionality. Several neurodegenerative and other diseases are believed to result from the accumulation of amyloid fibrils formed by misfolded proteins, the infectious varieties of which are known as prions. Many allergies are caused by the incorrect folding of some proteins because the immune system does not produce the antibodies for certain protein structures. Denaturation of proteins is a process of transition from a folded to an unfolded state. It happens in cooking, burns, proteinopathies, and other contexts. Residual structure present, if any, in the supposedly unfolded state may form a folding initiation site and guide the subsequent folding reactions. The duration of the folding process varies dramatically depending on the protein of interest. When studied outside the cell, the slowest folding proteins require many minutes or hours to fold, primarily due to proline isomerization, and must pass through a number of intermediate states, like checkpoints, before the process is complete. On the other hand, very small single-domain proteins with lengths of up to a hundred amino acids typically fold in a single step. Time scales of milliseconds are the norm, and the fastest known protein folding reactions are complete within a few microseconds. The folding time scale of a protein depends on its size, contact order, and circuit topology. Understanding and simulating the protein folding process has been an important challenge for computational biology since the late 1960s. Process of protein folding Primary structure The primary structure of a protein, its linear amino-acid sequence, determines its native conformation. The specific amino acid residues and their position in the polypeptide chain are the determining factors for which portions of the protein fold closely together and form its three-dimensional conformation. The amino acid composition is not as important as the sequence. The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state. This is not to say that nearly identical amino acid sequences always fold similarly. Conformations differ based on environmental factors as well; similar proteins fold differently based on where they are found. Secondary structure Formation of a secondary structure is the first step in the folding process that a protein takes to assume its native structure. 
Characteristic of secondary structure are the structures known as alpha helices and beta sheets that fold rapidly because they are stabilized by intramolecular hydrogen bonds, as was first characterized by Linus Pauling. Formation of intramolecular hydrogen bonds provides another important contribution to protein stability. α-helices are formed by hydrogen bonding of the backbone to form a spiral shape. The β pleated sheet is a structure that forms with the backbone bending over itself to form the hydrogen bonds. The hydrogen bonds are between the amide hydrogen and carbonyl oxygen of the peptide bond. There are anti-parallel β pleated sheets and parallel β pleated sheets; the hydrogen bonds of the anti-parallel β sheet are more stable because they form at the ideal 180-degree angle, whereas parallel sheets form slanted hydrogen bonds. Tertiary structure The α-helices and β-sheets are commonly amphipathic, meaning they have a hydrophilic and a hydrophobic portion. This property helps in forming the tertiary structure of a protein, in which folding occurs so that the hydrophilic sides face the aqueous environment surrounding the protein and the hydrophobic sides face the hydrophobic core of the protein. Secondary structure hierarchically gives way to tertiary structure formation. Once the protein's tertiary structure is formed and stabilized by the hydrophobic interactions, there may also be covalent bonding in the form of disulfide bridges formed between two cysteine residues. These non-covalent and covalent contacts take a specific topological arrangement in the native structure of a protein. Tertiary structure of a protein involves a single polypeptide chain; however, additional interactions of folded polypeptide chains give rise to quaternary structure formation. Quaternary structure Tertiary structure may give way to the formation of quaternary structure in some proteins, which usually involves the "assembly" or "coassembly" of subunits that have already folded; in other words, multiple polypeptide chains could interact to form a fully functional quaternary protein. Driving forces of protein folding Folding is a spontaneous process that is mainly guided by hydrophobic interactions, the formation of intramolecular hydrogen bonds, and van der Waals forces, and it is opposed by conformational entropy. The folding time scale of an isolated protein depends on its size, contact order, and circuit topology. Inside cells, the process of folding often begins co-translationally, so that the N-terminus of the protein begins to fold while the C-terminal portion of the protein is still being synthesized by the ribosome; however, a protein molecule may fold spontaneously during or after biosynthesis. While these macromolecules may be regarded as "folding themselves", the process also depends on the solvent (water or lipid bilayer), the concentration of salts, the pH, the temperature, the possible presence of cofactors and of molecular chaperones. Proteins are limited in their folding by the restricted bending angles or conformations that are possible. These allowable angles of protein folding are described with a two-dimensional plot known as the Ramachandran plot, depicted with psi and phi angles of allowable rotation. Hydrophobic effect Protein folding must be thermodynamically favorable within a cell in order for it to be a spontaneous reaction. 
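The criterion invoked in the next few sentences is the standard Gibbs relation; as a compact point of reference, it can be written as follows (the notation is the conventional one and is not defined anywhere in this article):

\Delta G_{\mathrm{fold}} = \Delta H - T\,\Delta S, \qquad \text{folding is spontaneous} \iff \Delta G_{\mathrm{fold}} < 0

A fold can therefore be driven by a favorable (negative) enthalpy change, by a favorable (positive) entropy change of the whole system including the solvent, or by both, which is exactly where the hydrophobic effect discussed below enters.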
Since protein folding is a spontaneous reaction, it must be accompanied by a negative change in Gibbs free energy. Gibbs free energy in protein folding is directly related to enthalpy and entropy. For a negative ΔG to arise and for protein folding to be thermodynamically favorable, either the enthalpy term, the entropy term, or both must be favorable. Minimizing the number of hydrophobic side-chains exposed to water is an important driving force behind the folding process. The hydrophobic effect is the phenomenon in which the hydrophobic chains of a protein collapse into the core of the protein (away from the hydrophilic environment). In an aqueous environment, the water molecules tend to aggregate around the hydrophobic regions or side chains of the protein, creating water shells of ordered water molecules. An ordering of water molecules around a hydrophobic region increases order in a system and therefore contributes a negative change in entropy (less entropy in the system). The water molecules are fixed in these water cages, which drives the hydrophobic collapse, or the inward folding of the hydrophobic groups. The hydrophobic collapse introduces entropy back to the system via the breaking of the water cages, which frees the ordered water molecules. The multitude of hydrophobic groups interacting within the core of the globular folded protein contributes a significant amount to protein stability after folding, because of the vastly accumulated van der Waals forces (specifically London dispersion forces). The hydrophobic effect acts as a thermodynamic driving force only in the presence of an aqueous medium and an amphiphilic molecule containing a large hydrophobic region. The strength of hydrogen bonds depends on their environment; thus, H-bonds enveloped in a hydrophobic core contribute more than H-bonds exposed to the aqueous environment to the stability of the native state. In proteins with globular folds, hydrophobic amino acids tend to be interspersed along the primary sequence, rather than randomly distributed or clustered together. However, proteins that have recently been born de novo, which tend to be intrinsically disordered, show the opposite pattern of hydrophobic amino acid clustering along the primary sequence. Chaperones Molecular chaperones are a class of proteins that aid in the correct folding of other proteins in vivo. Chaperones exist in all cellular compartments and interact with the polypeptide chain in order to allow the native three-dimensional conformation of the protein to form; however, chaperones themselves are not included in the final structure of the protein they are assisting. Chaperones may assist in folding even when the nascent polypeptide is being synthesized by the ribosome. Molecular chaperones operate by binding to stabilize an otherwise unstable structure of a protein in its folding pathway, but chaperones do not contain the necessary information to know the correct native structure of the protein they are aiding; rather, chaperones work by preventing incorrect folding conformations. In this way, chaperones do not actually increase the rate of individual steps involved in the folding pathway toward the native structure; instead, they work by reducing possible unwanted aggregations of the polypeptide chain that might otherwise slow down the search for the proper intermediate, and they provide a more efficient pathway for the polypeptide chain to assume the correct conformations. 
Chaperones are not to be confused with folding catalyst proteins, which catalyze chemical reactions responsible for slow steps in folding pathways. Examples of folding catalysts are protein disulfide isomerases and peptidyl-prolyl isomerases that may be involved in formation of disulfide bonds or interconversion between cis and trans stereoisomers of peptide group. Chaperones are shown to be critical in the process of protein folding in vivo because they provide the protein with the aid needed to assume its proper alignments and conformations efficiently enough to become "biologically relevant". This means that the polypeptide chain could theoretically fold into its native structure without the aid of chaperones, as demonstrated by protein folding experiments conducted in vitro; however, this process proves to be too inefficient or too slow to exist in biological systems; therefore, chaperones are necessary for protein folding in vivo. Along with its role in aiding native structure formation, chaperones are shown to be involved in various roles such as protein transport, degradation, and even allow denatured proteins exposed to certain external denaturant factors an opportunity to refold into their correct native structures. A fully denatured protein lacks both tertiary and secondary structure, and exists as a so-called random coil. Under certain conditions some proteins can refold; however, in many cases, denaturation is irreversible. Cells sometimes protect their proteins against the denaturing influence of heat with enzymes known as heat shock proteins (a type of chaperone), which assist other proteins both in folding and in remaining folded. Heat shock proteins have been found in all species examined, from bacteria to humans, suggesting that they evolved very early and have an important function. Some proteins never fold in cells at all except with the assistance of chaperones which either isolate individual proteins so that their folding is not interrupted by interactions with other proteins or help to unfold misfolded proteins, allowing them to refold into the correct native structure. This function is crucial to prevent the risk of precipitation into insoluble amorphous aggregates. The external factors involved in protein denaturation or disruption of the native state include temperature, external fields (electric, magnetic), molecular crowding, and even the limitation of space (i.e. confinement), which can have a big influence on the folding of proteins. High concentrations of solutes, extremes of pH, mechanical forces, and the presence of chemical denaturants can contribute to protein denaturation, as well. These individual factors are categorized together as stresses. Chaperones are shown to exist in increasing concentrations during times of cellular stress and help the proper folding of emerging proteins as well as denatured or misfolded ones. Under some conditions proteins will not fold into their biochemically functional forms. Temperatures above or below the range that cells tend to live in will cause thermally unstable proteins to unfold or denature (this is why boiling makes an egg white turn opaque). Protein thermal stability is far from constant, however; for example, hyperthermophilic bacteria have been found that grow at temperatures as high as 122 °C, which of course requires that their full complement of vital proteins and protein assemblies be stable at that temperature or above. The bacterium E. 
coli is the host for bacteriophage T4, and the phage encoded gp31 protein () appears to be structurally and functionally homologous to E. coli chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virus particles during infection. Like GroES, gp31 forms a stable complex with GroEL chaperonin that is absolutely necessary for the folding and assembly in vivo of the bacteriophage T4 major capsid protein gp23. Fold switching Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB (Protein Data Bank) proteins switch folds. Protein misfolding and neurodegenerative disease A protein is considered to be misfolded if it cannot achieve its normal native state. This can be due to mutations in the amino acid sequence or a disruption of the normal folding process by external factors. The misfolded protein typically contains β-sheets that are organized in a supramolecular arrangement known as a cross-β structure. These β-sheet-rich assemblies are very stable, very insoluble, and generally resistant to proteolysis. The structural stability of these fibrillar assemblies is caused by extensive interactions between the protein monomers, formed by backbone hydrogen bonds between their β-strands. The misfolding of proteins can trigger the further misfolding and accumulation of other proteins into aggregates or oligomers. The increased levels of aggregated proteins in the cell leads to formation of amyloid-like structures which can cause degenerative disorders and cell death. The amyloids are fibrillary structures that contain intermolecular hydrogen bonds which are highly insoluble and made from converted protein aggregates. Therefore, the proteasome pathway may not be efficient enough to degrade the misfolded proteins prior to aggregation. Misfolded proteins can interact with one another and form structured aggregates and gain toxicity through intermolecular interactions. Aggregated proteins are associated with prion-related illnesses such as Creutzfeldt–Jakob disease, bovine spongiform encephalopathy (mad cow disease), amyloid-related illnesses such as Alzheimer's disease and familial amyloid cardiomyopathy or polyneuropathy, as well as intracellular aggregation diseases such as Huntington's and Parkinson's disease. These age onset degenerative diseases are associated with the aggregation of misfolded proteins into insoluble, extracellular aggregates and/or intracellular inclusions including cross-β amyloid fibrils. It is not completely clear whether the aggregates are the cause or merely a reflection of the loss of protein homeostasis, the balance between synthesis, folding, aggregation and protein turnover. Recently the European Medicines Agency approved the use of Tafamidis or Vyndaqel (a kinetic stabilizer of tetrameric transthyretin) for the treatment of transthyretin amyloid diseases. This suggests that the process of amyloid fibril formation (and not the fibrils themselves) causes the degeneration of post-mitotic tissue in human amyloid diseases. Misfolding and excessive degradation instead of folding and function leads to a number of proteopathy diseases such as antitrypsin-associated emphysema, cystic fibrosis and the lysosomal storage diseases, where loss of function is the origin of the disorder. 
While protein replacement therapy has historically been used to correct the latter disorders, an emerging approach is to use pharmaceutical chaperones to fold mutated proteins to render them functional. Experimental techniques for studying protein folding While inferences about protein folding can be made through mutation studies, typically, experimental techniques for studying protein folding rely on the gradual unfolding or folding of proteins and observing conformational changes using standard non-crystallographic techniques. X-ray crystallography X-ray crystallography is one of the more efficient and important methods for attempting to decipher the three dimensional configuration of a folded protein. To be able to conduct X-ray crystallography, the protein under investigation must be located inside a crystal lattice. To place a protein inside a crystal lattice, one must have a suitable solvent for crystallization, obtain a pure protein at supersaturated levels in solution, and precipitate the crystals in solution. Once a protein is crystallized, X-ray beams can be concentrated through the crystal lattice which would diffract the beams or shoot them outwards in various directions. These exiting beams are correlated to the specific three-dimensional configuration of the protein enclosed within. The X-rays specifically interact with the electron clouds surrounding the individual atoms within the protein crystal lattice and produce a discernible diffraction pattern. Only by relating the electron density clouds with the amplitude of the X-rays can this pattern be read and lead to assumptions of the phases or phase angles involved that complicate this method. Without the relation established through a mathematical basis known as Fourier transform, the "phase problem" would render predicting the diffraction patterns very difficult. Emerging methods like multiple isomorphous replacement use the presence of a heavy metal ion to diffract the X-rays into a more predictable manner, reducing the number of variables involved and resolving the phase problem. Fluorescence spectroscopy Fluorescence spectroscopy is a highly sensitive method for studying the folding state of proteins. Three amino acids, phenylalanine (Phe), tyrosine (Tyr) and tryptophan (Trp), have intrinsic fluorescence properties, but only Tyr and Trp are used experimentally because their quantum yields are high enough to give good fluorescence signals. Both Trp and Tyr are excited by a wavelength of 280 nm, whereas only Trp is excited by a wavelength of 295 nm. Because of their aromatic character, Trp and Tyr residues are often found fully or partially buried in the hydrophobic core of proteins, at the interface between two protein domains, or at the interface between subunits of oligomeric proteins. In this apolar environment, they have high quantum yields and therefore high fluorescence intensities. Upon disruption of the protein's tertiary or quaternary structure, these side chains become more exposed to the hydrophilic environment of the solvent, and their quantum yields decrease, leading to low fluorescence intensities. For Trp residues, the wavelength of their maximal fluorescence emission also depend on their environment. Fluorescence spectroscopy can be used to characterize the equilibrium unfolding of proteins by measuring the variation in the intensity of fluorescence emission or in the wavelength of maximal emission as functions of a denaturant value. 
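For the simplest case of a two-state equilibrium between a native state N and an unfolded state U, the model commonly fitted to such measurements can be sketched as follows; the symbols and the linear dependence of the unfolding free energy on denaturant concentration are standard conventions rather than equations quoted in this article:

K([\mathrm{D}]) = \frac{[\mathrm{U}]}{[\mathrm{N}]} = \exp\!\left(-\frac{\Delta G_{\mathrm{unf}}([\mathrm{D}])}{RT}\right), \qquad \Delta G_{\mathrm{unf}}([\mathrm{D}]) = \Delta G_{\mathrm{H_2O}} - m\,[\mathrm{D}]

S_{\mathrm{obs}}([\mathrm{D}]) = \frac{S_{\mathrm{N}} + S_{\mathrm{U}}\,K([\mathrm{D}])}{1 + K([\mathrm{D}])}

Here [D] is the denaturant concentration, m is the denaturant dependence, and S_N and S_U are the signals of the folded and unfolded states; fitting the observed signal S_obs to such a profile yields the stability ΔG_H2O and the m value.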
The denaturant can be a chemical molecule (urea, guanidinium hydrochloride), temperature, pH, pressure, etc. The equilibrium between the different but discrete protein states, i.e. native state, intermediate states, unfolded state, depends on the denaturant value; therefore, the global fluorescence signal of their equilibrium mixture also depends on this value. One thus obtains a profile relating the global protein signal to the denaturant value. The profile of equilibrium unfolding may enable one to detect and identify intermediates of unfolding. General equations have been developed by Hugues Bedouelle to obtain the thermodynamic parameters that characterize the unfolding equilibria for homomeric or heteromeric proteins, up to trimers and potentially tetramers, from such profiles. Fluorescence spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics, generate a chevron plot, and derive a Phi value analysis. Circular dichroism Circular dichroism is one of the most general and basic tools to study protein folding. Circular dichroism spectroscopy measures the absorption of circularly polarized light. In proteins, structures such as alpha helices and beta sheets are chiral, and thus absorb such light. The absorption of this light acts as a marker of the degree of foldedness of the protein ensemble. This technique has been used to measure equilibrium unfolding of the protein by measuring the change in this absorption as a function of denaturant concentration or temperature. A denaturant melt measures the free energy of unfolding as well as the protein's m value, or denaturant dependence. A temperature melt measures the denaturation temperature (Tm) of the protein. As with fluorescence spectroscopy, circular-dichroism spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics and to generate chevron plots. Vibrational circular dichroism of proteins The more recent developments of vibrational circular dichroism (VCD) techniques for proteins, currently involving Fourier transform (FT) instruments, provide powerful means for determining protein conformations in solution even for very large protein molecules. Such VCD studies of proteins can be combined with X-ray diffraction data for protein crystals, FT-IR data for protein solutions in heavy water (D2O), or quantum computations. Protein nuclear magnetic resonance spectroscopy Protein nuclear magnetic resonance (NMR) is able to collect protein structural data by inducing a magnetic field through samples of concentrated protein. In NMR, depending on the chemical environment, certain nuclei will absorb specific radio-frequencies. Because protein structural changes operate on a time scale from ns to ms, NMR is especially equipped to study intermediate structures in timescales of ps to s. Some of the main techniques for studying protein structure and non-folding protein structural changes include COSY, TOCSY, HSQC, time relaxation (T1 & T2), and NOE. NOE is especially useful because magnetization transfers can be observed between spatially proximal hydrogens. Different NMR experiments have varying degrees of timescale sensitivity that are appropriate for different protein structural changes. NOE can pick up bond vibrations or side-chain rotations; however, it is not well suited to picking up protein folding itself, because folding occurs on a larger timescale. 
Because protein folding takes place at rates of about 50 to 3,000 s^−1, CPMG relaxation dispersion and chemical exchange saturation transfer have become some of the primary techniques for NMR analysis of folding. In addition, both techniques are used to uncover excited intermediate states in the protein folding landscape. To do this, CPMG relaxation dispersion takes advantage of the spin echo phenomenon. This technique exposes the target nuclei to a 90° pulse followed by one or more 180° pulses. As the nuclei refocus, a broad distribution indicates that the target nuclei are involved in an intermediate excited state. By examining relaxation dispersion plots, information can be collected on the thermodynamics and kinetics of exchange between the excited and ground states. Saturation transfer measures changes in signal from the ground state as excited states become perturbed. It uses weak radio frequency irradiation to saturate the excited state of particular nuclei, which transfers the saturation to the ground state. This signal is amplified by decreasing the magnetization (and the signal) of the ground state. The main limitations of NMR are that its resolution decreases for proteins larger than 25 kDa and that it is not as detailed as X-ray crystallography. Additionally, protein NMR analysis is quite difficult and can propose multiple solutions from the same NMR spectrum. In a study focused on the folding of SOD1, a protein involved in amyotrophic lateral sclerosis, excited intermediates were studied with relaxation dispersion and saturation transfer. SOD1 had previously been tied to many disease-causing mutants that were assumed to be involved in protein aggregation; however, the mechanism was still unknown. Using relaxation dispersion and saturation transfer experiments, many excited intermediate states were found to misfold in the SOD1 mutants. Dual-polarization interferometry Dual polarisation interferometry is a surface-based technique for measuring the optical properties of molecular layers. When used to characterize protein folding, it measures the conformation by determining the overall size of a monolayer of the protein and its density in real time at sub-Angstrom resolution, although real-time measurement of the kinetics of protein folding is limited to processes that occur slower than ~10 Hz. Similar to circular dichroism, the stimulus for folding can be a denaturant or temperature. Studies of folding with high time resolution The study of protein folding has been greatly advanced in recent years by the development of fast, time-resolved techniques. Experimenters rapidly trigger the folding of a sample of unfolded protein and observe the resulting dynamics. Fast techniques in use include neutron scattering, ultrafast mixing of solutions, photochemical methods, and laser temperature jump spectroscopy. Among the many scientists who have contributed to the development of these techniques are Jeremy Cook, Heinrich Roder, Terry Oas, Harry Gray, Martin Gruebele, Brian Dyer, William Eaton, Sheena Radford, Chris Dobson, Alan Fersht, Bengt Nölting and Lars Konermann. Proteolysis Proteolysis is routinely used to probe the fraction unfolded under a wide range of solution conditions (e.g. fast parallel proteolysis, FASTpp). Single-molecule force spectroscopy Single molecule techniques such as optical tweezers and AFM have been used to understand protein folding mechanisms of isolated proteins as well as proteins with chaperones. 
Optical tweezers have been used to stretch single protein molecules from their C- and N-termini and unfold them to allow study of the subsequent refolding. The technique allows one to measure folding rates at the single-molecule level; for example, optical tweezers have recently been applied to study folding and unfolding of proteins involved in blood coagulation. von Willebrand factor (vWF) is a protein with an essential role in the blood clot formation process. Using single-molecule optical tweezers measurements, it was discovered that calcium-bound vWF acts as a shear force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium. Recently, it was also shown that the simple src SH3 domain accesses multiple unfolding pathways under force. Biotin painting Biotin painting enables condition-specific cellular snapshots of (un)folded proteins. Biotin 'painting' shows a bias towards predicted intrinsically disordered proteins. Computational studies of protein folding Computational studies of protein folding include three main aspects related to the prediction of protein stability, kinetics, and structure. A 2013 review summarizes the available computational methods for protein folding. Levinthal's paradox In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 3^300 or 10^143 was made in one of his papers. Levinthal's paradox is a thought experiment based on the observation that if a protein were folded by sequential sampling of all possible conformations, it would take an astronomical amount of time to do so, even if the conformations were sampled at a rapid rate (on the nanosecond or picosecond scale). Based upon the observation that proteins fold much faster than this, Levinthal then proposed that a random conformational search does not occur, and the protein must, therefore, fold through a series of meta-stable intermediate states. Energy landscape of protein folding The configuration space of a protein during folding can be visualized as an energy landscape. According to Joseph Bryngelson and Peter Wolynes, proteins follow the principle of minimal frustration, meaning that naturally evolved proteins have optimized their folding energy landscapes, and that nature has chosen amino acid sequences so that the folded state of the protein is sufficiently stable. In addition, the acquisition of the folded state had to become a sufficiently fast process. Even though nature has reduced the level of frustration in proteins, some degree of it remains, as can be observed in the presence of local minima in the energy landscape of proteins. A consequence of these evolutionarily selected sequences is that proteins are generally thought to have globally "funneled energy landscapes" (a term coined by José Onuchic) that are largely directed toward the native state. This "folding funnel" landscape allows the protein to fold to the native state through any of a large number of pathways and intermediates, rather than being restricted to a single mechanism. The theory is supported by both computational simulations of model proteins and experimental studies, and it has been used to improve methods for protein structure prediction and design. The description of protein folding by the leveling free-energy landscape is also consistent with the second law of thermodynamics. 
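As a quick aside, Levinthal's numbers quoted a few paragraphs above can be reproduced with a few lines of arithmetic; the per-residue conformation count and the sampling rate used here are the usual illustrative assumptions behind the paradox, not figures taken from this article:

    import math

    residues = 300          # chain length assumed in the classic estimate
    conf_per_residue = 3    # assumed number of backbone conformations per residue
    rate = 1e13             # assumed sampling rate in conformations per second (~0.1 ps each)

    log10_conformations = residues * math.log10(conf_per_residue)   # log10(3**300)
    log10_seconds = log10_conformations - math.log10(rate)          # exhaustive search time

    print(f"3^300 is about 10^{log10_conformations:.0f} conformations")       # ~10^143
    print(f"an exhaustive search would need about 10^{log10_seconds:.0f} s")  # ~10^130 s

Even at this absurdly optimistic sampling rate, an exhaustive search would take on the order of 10^130 seconds, vastly longer than the age of the universe, which is the point of the paradox.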
Physically, thinking of landscapes in terms of visualizable potential or total energy surfaces simply with maxima, saddle points, minima, and funnels, rather like geographic landscapes, is perhaps a little misleading. The relevant description is really a high-dimensional phase space in which manifolds might take a variety of more complicated topological forms. The unfolded polypeptide chain begins at the top of the funnel where it may assume the largest number of unfolded variations and is in its highest energy state. Energy landscapes such as these indicate that there are a large number of initial possibilities, but only a single native state is possible; however, it does not reveal the numerous folding pathways that are possible. A different molecule of the same exact protein may be able to follow marginally different folding pathways, seeking different lower energy intermediates, as long as the same native structure is reached. Different pathways may have different frequencies of utilization depending on the thermodynamic favorability of each pathway. This means that if one pathway is found to be more thermodynamically favorable than another, it is likely to be used more frequently in the pursuit of the native structure. As the protein begins to fold and assume its various conformations, it always seeks a more thermodynamically favorable structure than before and thus continues through the energy funnel. Formation of secondary structures is a strong indication of increased stability within the protein, and only one combination of secondary structures assumed by the polypeptide backbone will have the lowest energy and therefore be present in the native state of the protein. Among the first structures to form once the polypeptide begins to fold are alpha helices and beta turns, where alpha helices can form in as little as 100 nanoseconds and beta turns in 1 microsecond. There exists a saddle point in the energy funnel landscape where the transition state for a particular protein is found. The transition state in the energy funnel diagram is the conformation that must be assumed by every molecule of that protein if the protein wishes to finally assume the native structure. No protein may assume the native structure without first passing through the transition state. The transition state can be referred to as a variant or premature form of the native state rather than just another intermediary step. The folding of the transition state is shown to be rate-determining, and even though it exists in a higher energy state than the native fold, it greatly resembles the native structure. Within the transition state, there exists a nucleus around which the protein is able to fold, formed by a process referred to as "nucleation condensation" where the structure begins to collapse onto the nucleus. Modeling of protein folding De novo or ab initio techniques for computational protein structure prediction can be used for simulating various aspects of protein folding. Molecular dynamics (MD) was used in simulations of protein folding and dynamics in silico. First equilibrium folding simulations were done using implicit solvent model and umbrella sampling. Because of computational cost, ab initio MD folding simulations with explicit water are limited to peptides and small proteins. MD simulations of larger proteins remain restricted to dynamics of the experimental structure or its high-temperature unfolding. 
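Before the discussion of long-timescale methods continues, the funnel picture above can be made concrete with a deliberately tiny toy model: a one-dimensional "folding coordinate" with a globally funneled but locally bumpy energy function, explored by Metropolis Monte Carlo. This is only an illustration of the concept (the energy function, temperature, and step size are invented for the sketch); it is not a protein model and not one of the simulation methods named in this article:

    import math, random

    def energy(x):
        # Globally funneled landscape: a broad basin centred on the "native" state at x = 0,
        # decorated with small bumps that act as local minima (residual frustration).
        return 0.5 * x * x + 0.3 * math.cos(6.0 * x)

    def metropolis(steps=20000, temperature=1.0, step_size=0.2, seed=0):
        rng = random.Random(seed)
        x = 5.0  # start far from the native state, in a high-energy "unfolded" region
        for _ in range(steps):
            trial = x + rng.uniform(-step_size, step_size)
            delta = energy(trial) - energy(x)
            # Accept downhill moves always; accept uphill moves with Boltzmann probability.
            if delta <= 0 or rng.random() < math.exp(-delta / temperature):
                x = trial
        return x

    x_final = metropolis()
    print(f"final coordinate {x_final:+.2f}, energy {energy(x_final):.2f}")
    # Despite the local bumps, the walker ends up in the low-energy region around x = 0,
    # which is the sense in which a funneled landscape guides the search toward the native state.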
Long-time folding processes (beyond about 1 millisecond), like folding of larger proteins (>150 residues) can be accessed using coarse-grained models. Several large-scale computational projects, such as Rosetta@home, Folding@home and Foldit, target protein folding. Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom ASICs and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton as of 2011 was a 2.936 millisecond simulation of NTL9 at 355 K. Such simulations are currently able to unfold and refold small proteins (<150 amino acids residues) in equilibrium and predict how mutations affect folding kinetics and stability. In 2020 a team of researchers that used AlphaFold, an artificial intelligence (AI) protein structure prediction program developed by DeepMind placed first in CASP, a long-standing structure prediction contest. The team achieved a level of accuracy much higher than any other group. It scored above 90% for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree of similarity between the structure predicted by a computational program, and the empirical structure determined experimentally in a lab. A score of 100 is considered a complete match, within the distance cutoff used for calculating GDT. AlphaFold's protein structure prediction results at CASP were described as "transformational" and "astounding". Some researchers noted that the accuracy is not high enough for a third of its predictions, and that it does not reveal the physical mechanism of protein folding for the protein folding problem to be considered solved. Nevertheless, it is considered a significant achievement in computational biology and great progress towards a decades-old grand challenge of biology, predicting the structure of proteins. See also Anfinsen's dogma Chevron plot Denaturation midpoint Downhill folding Folding (chemistry) Phi value analysis Potential energy of protein Protein dynamics Protein misfolding cyclic amplification Protein structure prediction software Proteopathy Time-resolved mass spectrometry References External links Human Proteome Folding Project Biochemical reactions Protein structure
Protein folding
[ "Chemistry", "Biology" ]
7,285
[ "Biochemistry", "Protein structure", "Structural biology", "Biochemical reactions" ]
52,119
https://en.wikipedia.org/wiki/Embedding
In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup. When some object is said to be embedded in another object , the embedding is given by some injective and structure-preserving map . The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which and are instances. In the terminology of category theory, a structure-preserving map is called a morphism. The fact that a map is an embedding is often indicated by the use of a "hooked arrow" (); thus: (On the other hand, this notation is sometimes reserved for inclusion maps.) Given and , several different embeddings of in may be possible. In many cases of interest there is a standard (or "canonical") embedding, like those of the natural numbers in the integers, the integers in the rational numbers, the rational numbers in the real numbers, and the real numbers in the complex numbers. In such cases it is common to identify the domain with its image contained in , so that . Topology and geometry General topology In general topology, an embedding is a homeomorphism onto its image. More explicitly, an injective continuous map between topological spaces and is a topological embedding if yields a homeomorphism between and (where carries the subspace topology inherited from ). Intuitively then, the embedding lets us treat as a subspace of . Every embedding is injective and continuous. Every map that is injective, continuous and either open or closed is an embedding; however there are also embeddings that are neither open nor closed. The latter happens if the image is neither an open set nor a closed set in . For a given space , the existence of an embedding is a topological invariant of . This allows two spaces to be distinguished if one is able to be embedded in a space while the other is not. Related definitions If the domain of a function is a topological space then the function is said to be if there exists some neighborhood of this point such that the restriction is injective. It is called if it is locally injective around every point of its domain. Similarly, a is a function for which every point in its domain has some neighborhood to which its restriction is a (topological, resp. smooth) embedding. Every injective function is locally injective but not conversely. Local diffeomorphisms, local homeomorphisms, and smooth immersions are all locally injective functions that are not necessarily injective. The inverse function theorem gives a sufficient condition for a continuously differentiable function to be (among other things) locally injective. Every fiber of a locally injective function is necessarily a discrete subspace of its domain Differential topology In differential topology: Let and be smooth manifolds and be a smooth map. Then is called an immersion if its derivative is everywhere injective. An embedding, or a smooth embedding, is defined to be an immersion that is an embedding in the topological sense mentioned above (i.e. homeomorphism onto its image). In other words, the domain of an embedding is diffeomorphic to its image, and in particular the image of an embedding must be a submanifold. An immersion is precisely a local embedding, i.e. for any point there is a neighborhood such that is an embedding. When the domain manifold is compact, the notion of a smooth embedding is equivalent to that of an injective immersion. An important case is . 
The interest here is in how large must be for an embedding, in terms of the dimension of . The Whitney embedding theorem states that is enough, and is the best possible linear bound. For example, the real projective space of dimension , where is a power of two, requires for an embedding. However, this does not apply to immersions; for instance, can be immersed in as is explicitly shown by Boy's surface—which has self-intersections. The Roman surface fails to be an immersion as it contains cross-caps. An embedding is proper if it behaves well with respect to boundaries: one requires the map to be such that , and is transverse to in any point of . The first condition is equivalent to having and . The second condition, roughly speaking, says that is not tangent to the boundary of . Riemannian and pseudo-Riemannian geometry In Riemannian geometry and pseudo-Riemannian geometry: Let and be Riemannian manifolds or more generally pseudo-Riemannian manifolds. An isometric embedding is a smooth embedding that preserves the (pseudo-)metric in the sense that is equal to the pullback of by , i.e. . Explicitly, for any two tangent vectors we have Analogously, isometric immersion is an immersion between (pseudo)-Riemannian manifolds that preserves the (pseudo)-Riemannian metrics. Equivalently, in Riemannian geometry, an isometric embedding (immersion) is a smooth embedding (immersion) that preserves length of curves (cf. Nash embedding theorem). Algebra In general, for an algebraic category , an embedding between two -algebraic structures and is a -morphism that is injective. Field theory In field theory, an embedding of a field in a field is a ring homomorphism . The kernel of is an ideal of , which cannot be the whole field , because of the condition . Furthermore, any field has as ideals only the zero ideal and the whole field itself (because if there is any non-zero field element in an ideal, it is invertible, showing the ideal is the whole field). Therefore, the kernel is , so any embedding of fields is a monomorphism. Hence, is isomorphic to the subfield of . This justifies the name embedding for an arbitrary homomorphism of fields. Universal algebra and model theory If is a signature and are -structures (also called -algebras in universal algebra or models in model theory), then a map is a -embedding exactly if all of the following hold: is injective, for every -ary function symbol and we have , for every -ary relation symbol and we have iff Here is a model theoretical notation equivalent to . In model theory there is also a stronger notion of elementary embedding. Order theory and domain theory In order theory, an embedding of partially ordered sets is a function between partially ordered sets and such that Injectivity of follows quickly from this definition. In domain theory, an additional requirement is that is directed. Metric spaces A mapping of metric spaces is called an embedding (with distortion ) if for every and some constant . Normed spaces An important special case is that of normed spaces; in this case it is natural to consider linear embeddings. One of the basic questions that can be asked about a finite-dimensional normed space is, what is the maximal dimension such that the Hilbert space can be linearly embedded into with constant distortion? The answer is given by Dvoretzky's theorem. Category theory In category theory, there is no satisfactory and generally accepted definition of embeddings that is applicable in all categories. 
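Several displayed formulas in the preceding sections did not survive extraction. Before the category-theoretic discussion continues, three of them are restated here in their standard textbook form; the letters used (f, X, Y, M, N, C, and so on) are my own choices, since the article's original notation is missing:

An order embedding f between partially ordered sets X and Y:
x \le_X y \iff f(x) \le_Y f(y)

An isometric embedding f : (M, g) \to (N, h) of (pseudo-)Riemannian manifolds, i.e. the pullback condition g = f^{*}h written out:
g_p(u, v) = h_{f(p)}\big(\mathrm{d}f_p(u), \mathrm{d}f_p(v)\big) \quad \text{for all } p \in M,\ u, v \in T_pM

A metric embedding f : X \to Y with distortion C \ge 1, in one common bi-Lipschitz convention:
\frac{1}{C}\, d_X(x, y) \le d_Y\big(f(x), f(y)\big) \le C\, d_X(x, y) \quad \text{for all } x, y \in X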
One would expect that all isomorphisms and all compositions of embeddings are embeddings, and that all embeddings are monomorphisms. Other typical requirements are: any extremal monomorphism is an embedding and embeddings are stable under pullbacks. Ideally the class of all embedded subobjects of a given object, up to isomorphism, should also be small, and thus an ordered set. In this case, the category is said to be well powered with respect to the class of embeddings. This allows defining new local structures in the category (such as a closure operator). In a concrete category, an embedding is a morphism that is an injective function from the underlying set of to the underlying set of and is also an initial morphism in the following sense: If is a function from the underlying set of an object to the underlying set of , and if its composition with is a morphism , then itself is a morphism. A factorization system for a category also gives rise to a notion of embedding. If is a factorization system, then the morphisms in may be regarded as the embeddings, especially when the category is well powered with respect to . Concrete theories often have a factorization system in which consists of the embeddings in the previous sense. This is the case of the majority of the examples given in this article. As usual in category theory, there is a dual concept, known as quotient. All the preceding properties can be dualized. An embedding can also refer to an embedding functor. See also Ambient space Closed immersion Cover Dimensionality reduction Flat (geometry) Immersion Johnson–Lindenstrauss lemma Submanifold Subspace Universal space Notes References . . External links Embedding of manifolds on the Manifold Atlas Abstract algebra Category theory General topology Differential topology Functions and mappings Maps of manifolds Model theory Order theory
Embedding
[ "Mathematics" ]
1,955
[ "General topology", "Functions and mappings", "Mathematical structures", "Mathematical analysis", "Mathematical logic", "Mathematical objects", "Fields of abstract algebra", "Topology", "Mathematical relations", "Category theory", "Model theory", "Differential topology", "Abstract algebra", ...
52,124
https://en.wikipedia.org/wiki/VHS
VHS (Video Home System) is a standard for consumer-level analog video recording on tape cassettes, introduced in 1976 by the Victor Company of Japan (JVC). It was the dominant home video format throughout the tape media period in the 1980s and 1990s. Magnetic tape video recording was adopted by the television industry in the 1950s in the form of the first commercialized video tape recorders (VTRs), but the devices were expensive and used only in professional environments. In the 1970s, videotape technology became affordable for home use, and widespread adoption of videocassette recorders (VCRs) began; VHS became the most popular media format for VCRs as it would win the "format war" against Betamax (backed by Sony) and a number of other competing tape standards. The cassettes themselves use a 0.5-inch magnetic tape between two spools and typically offer a capacity of at least two hours. The popularity of VHS was intertwined with the rise of the video rental market, when films were released on pre-recorded videotapes for home viewing. Newer improved tape formats such as S-VHS were later developed, as well as the earliest optical disc format, LaserDisc; the lack of global adoption of these formats increased VHS's lifetime, which eventually peaked and started to decline in the late 1990s after the introduction of DVD, a digital optical disc format. VHS rentals were surpassed by DVD in the United States in 2003, which eventually became the preferred low-end method of movie distribution. For home recording purposes, VHS and VCRs were surpassed by (typically hard disk–based) digital video recorders (DVR) in the 2000s. History Before VHS In 1956, after several attempts by other companies, the first commercially successful VTR, the Ampex VRX-1000, was introduced by Ampex Corporation. At a price of US$50,000 in 1956 and US$300 for a 90-minute reel of tape, it was intended only for the professional market. Kenjiro Takayanagi, a television broadcasting pioneer then working for JVC as its vice president, saw the need for his company to produce VTRs for the Japanese market at a more affordable price. In 1959, JVC developed a two-head video tape recorder and, by 1960, a color version for professional broadcasting. In 1964, JVC released the DV220, which would be the company's standard VTR until the mid-1970s. In 1969, JVC collaborated with Sony Corporation and Matsushita Electric (Matsushita was the majority stockholder of JVC until 2011) to build a video recording standard for the Japanese consumer. The effort produced the U-matic format in 1971, which was the first cassette format to become a unified standard for different companies. It was preceded by the reel-to-reel half-inch EIAJ format. The U-matic format was successful in businesses and some broadcast television applications, such as electronic news-gathering, and was produced by all three companies until the late 1980s, but because of cost and limited recording time, very few of the machines were sold for home use. Therefore, soon after the U-Matic release, all three companies started working on new consumer-grade video recording formats of their own. Sony started working on Betamax, Matsushita started working on VX, and JVC released the CR-6060 in 1975, based on the U-matic format. VHS development In 1971, JVC engineers Yuma Shiraishi and Shizuo Takano put together a team to develop a VTR for consumers.
By the end of 1971, they created an internal diagram, "VHS Development Matrix", which established twelve objectives for JVC's new VTR: The system must be compatible with any ordinary television set. Picture quality must be similar to a normal air broadcast. The tape must have at least a two-hour recording capacity. Tapes must be interchangeable between machines. The overall system should be versatile, meaning it can be scaled and expanded, such as connecting a video camera, or dubbing between two recorders. Recorders should be affordable, easy to operate, and have low maintenance costs. Recorders must be capable of being produced in high volume, their parts must be interchangeable, and they must be easy to service. In early 1972, the commercial video recording industry in Japan took a financial hit. JVC cut its budgets and restructured its video division, shelving the VHS project. However, despite the lack of funding, Takano and Shiraishi continued to work on the project in secret. By 1973, the two engineers had produced a functional prototype. Competition with Betamax In 1974, the Japanese Ministry of International Trade and Industry (MITI), desiring to avoid consumer confusion, attempted to force the Japanese video industry to standardize on just one home video recording format. Later, Sony had a functional prototype of the Betamax format, and was very close to releasing a finished product. With this prototype, Sony persuaded the MITI to adopt Betamax as the standard, and allow it to license the technology to other companies. JVC believed that an open standard, with the format shared among competitors without licensing the technology, was better for the consumer. To prevent the MITI from adopting Betamax, JVC worked to convince other companies, in particular Matsushita (Japan's largest electronics manufacturer at the time, marketing its products under the National brand in most territories and the Panasonic brand in North America, and JVC's majority stockholder), to accept VHS, and thereby work against Sony and the MITI. Matsushita agreed, primarily out of concern that Sony might become the leader in the field if its proprietary Betamax format was the only one allowed to be manufactured. Matsushita also regarded Betamax's one-hour recording time limit as a disadvantage. Matsushita's backing of JVC persuaded Hitachi, Mitsubishi, and Sharp to back the VHS standard as well. Sony's release of its Betamax unit to the Japanese market in 1975 placed further pressure on the MITI to side with the company. However, the collaboration of JVC and its partners was much stronger, which eventually led the MITI to drop its push for an industry standard. JVC released the first VHS machines in Japan in late 1976, and in the United States in mid-1977. Sony's Betamax competed with VHS throughout the late 1970s and into the 1980s (see Videotape format war). Betamax's major advantages were its smaller cassette size, theoretical higher video quality, and earlier availability, but its shorter recording time proved to be a major shortcoming. Originally, Beta I machines using the NTSC television standard were able to record one hour of programming at their standard tape speed of 1.5 inches per second (ips). The first VHS machines could record for two hours, due to both a slightly slower tape speed (1.31 ips) and significantly longer tape. Betamax's smaller cassette limited the size of the reel of tape, and could not compete with VHS's two-hour capability by extending the tape length. 
Instead, Sony had to slow the tape down to 0.787 ips (Beta II) in order to achieve two hours of recording in the same cassette size. Sony eventually created a Beta III speed of 0.524 ips, which allowed NTSC Betamax to break the two-hour limit, but by then VHS had already won the format battle. Additionally, VHS had a "far less complex tape transport mechanism" than Betamax, and VHS machines were faster at rewinding and fast-forwarding than their Sony counterparts. VHS eventually won the war, gaining 60% of the North American market by 1980. Initial releases of VHS-based devices The first VCR to use VHS was the Victor HR-3300, which was introduced by the president of JVC in Japan on September 9, 1976. JVC started selling the HR-3300 in Akihabara, Tokyo, Japan, on October 31, 1976. Region-specific versions of the JVC HR-3300 were also distributed later on, such as the HR-3300U in the United States, and the HR-3300EK in the United Kingdom. The United States received its first VHS-based VCR, the RCA VBT200, on August 23, 1977. The RCA unit was designed by Matsushita and was the first VHS-based VCR manufactured by a company other than JVC. It was also capable of recording four hours in LP (long play) mode. The UK received its first VHS-based VCR, the Victor HR-3300EK, in 1978. Quasar and General Electric followed up with VHS-based VCRs – all designed by Matsushita. By 1999, Matsushita alone produced just over half of all Japanese VCRs. TV/VCR combos, combining a TV set with a VHS mechanism, were also once available for purchase. Combo units containing both a VHS mechanism and a DVD player were introduced in the late 1990s, and at least one combo unit, the Panasonic DMP-BD70V, included a Blu-ray player. Technical details VHS has been standardized in IEC 60774–1. Cassette and tape design The VHS cassette is a 187 mm wide, 103 mm deep, and 25 mm thick (about 7.4 × 4.1 × 1 inches) plastic shell held together with five Phillips-head screws. The flip-up cover, which allows players and recorders to access the tape, has a latch on the right side, with a push-in toggle to release it. The cassette has an anti-despooling mechanism, consisting of several plastic parts between the spools, near the front of the cassette. The spool latches are released by a push-in lever within a 6.35 mm (1/4 inch) hole at the bottom of the cassette, 19 mm (3/4 inch) in from the edge label. The tapes are made, pre-recorded, and inserted into the cassettes in cleanrooms, to ensure quality and to keep dust from getting embedded in the tape and interfering with recording (both of which could cause signal dropouts). There is a clear tape leader at both ends of the tape to provide an optical auto-stop for the VCR transport mechanism. In the VCR, a light source is inserted into the cassette through the circular hole in the center of the underside, and two photodiodes are on the left and right sides of where the tape exits the cassette. When the clear tape reaches one of these, enough light will pass through the tape to the photodiode to trigger the stop function; some VCRs automatically rewind the tape when the trailing end is detected. Early VCRs used an incandescent bulb as the light source: when the bulb failed, the VCR would act as if a tape were present when the machine was empty, or would detect the blown bulb and completely stop functioning. Later designs use an infrared LED, which has a much longer life.
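The optical auto-stop arrangement just described is, in effect, a small piece of control logic. The sketch below restates it in Python; all of the names and return strings are illustrative rather than taken from any real VCR firmware, and which photodiode corresponds to which end of the tape is an assumption.

```python
def tape_transport_state(lamp_ok: bool,
                         light_at_takeup_side: bool,
                         light_at_supply_side: bool,
                         auto_rewind: bool = True) -> str:
    """Sketch of the clear-leader / photodiode auto-stop logic described above."""
    if not lamp_ok:
        # Early decks with a burnt-out incandescent bulb either stopped working
        # entirely or behaved as if a tape were always present.
        return "stop: lamp fault"
    if light_at_takeup_side:
        # The trailing clear leader lets light through to one photodiode.
        return "rewind" if auto_rewind else "stop: end of tape"
    if light_at_supply_side:
        # The leading clear leader lets light through to the other photodiode.
        return "stop: start of tape"
    return "continue"

# Example: trailing leader detected on a deck that rewinds automatically.
print(tape_transport_state(lamp_ok=True,
                           light_at_takeup_side=True,
                           light_at_supply_side=False))  # -> rewind
```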
The recording medium is a Mylar magnetic tape, 12.7 mm (1/2 inch) wide, coated with metal oxide, and wound on two spools. The tape speed for "Standard Play" mode (see below) is 3.335 cm/s (1.313 ips) for NTSC, 2.339 cm/s (0.921 ips) for PAL—or just over 2.0 and 1.4 metres (6 ft 6.7 in and 4 ft 7.2 in) per minute respectively. The tape length for a T-120 VHS cassette is 247.5 metres (812 ft). Tape loading technique As with almost all cassette-based videotape systems, VHS machines pull the tape out of the cassette shell and wrap it around the inclined head drum, which rotates at 1,800 rpm in NTSC machines and at 1,500 rpm for PAL, one complete rotation of the head corresponding to one video frame. VHS uses an "M-loading" system, also known as M-lacing, where the tape is drawn out by two threading posts and wrapped around more than 180 degrees of the head drum (and also other tape transport components) in a shape roughly approximating the letter M. The heads in the rotating drum get their signal wirelessly using a rotary transformer. Recording capacity A VHS cassette holds a maximum of about 430 m (1,410 ft) of tape at the lowest acceptable tape thickness, giving a maximum playing time of about four hours in a T-240/DF480 for NTSC and five hours in an E-300 for PAL at "standard play" (SP) quality. More frequently, however, VHS tapes are thicker than the required minimum to avoid complications such as jams or tears in the tape. Other speeds include "long play" (LP), "extended play" (EP) or "super long play" (SLP) (standard on NTSC; rarely found on PAL machines). For NTSC, LP and EP/SLP double and triple the recording time accordingly, but these speed reductions cause a reduction in horizontal resolution – from the normal equivalent of 250 vertical lines in SP, to the equivalent of 230 in LP and even less in EP/SLP. Due to the nature of recording diagonally from a spinning drum, the actual write speed of the video heads does not get slower when the tape speed is reduced. Instead, the video tracks become narrower and are packed closer together. This results in noisier playback that can be more difficult to track correctly: The effect of subtle misalignment is magnified by the narrower tracks. The heads for linear audio are not on the spinning drum, so for them, the tape speed from one reel to the other is the same as the speed of the heads across the tape. This speed is quite slow: for SP it is about two-thirds that of an audio cassette, and for EP it is slower than the slowest microcassette speed. This is widely considered inadequate for anything but basic voice playback, and was a major liability for VHS-C camcorders that encouraged the use of the EP speed. Color depth deteriorates significantly at lower speeds in PAL: often, a color image on a PAL tape recorded at low speed is displayed only in monochrome, or with intermittent color, when playback is paused. Tape lengths VHS cassettes for NTSC and PAL/SECAM systems are physically identical, although the signals recorded on the tape are incompatible. The tape speeds are different too, so the playing time for any given cassette will vary between the systems. To avoid confusion, manufacturers indicate the playing time in minutes that can be expected for the market the tape is sold in: E-XXX indicates playing time in minutes for PAL or SECAM. T-XXX indicates playing time in minutes for NTSC or PAL-M.
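A quick sanity check on the figures quoted above reproduces the nominal capacities: 247.5 m of tape moving at the NTSC SP speed of 3.335 cm/s lasts roughly two hours, and the LP and EP modes double and triple that, as stated. The snippet below is purely illustrative arithmetic.

```python
# Figures from the text: T-120 tape length and NTSC "SP" linear tape speed.
TAPE_LENGTH_M = 247.5        # metres of tape in a T-120 cassette
SP_SPEED_M_PER_S = 0.03335   # 3.335 cm/s

sp_minutes = TAPE_LENGTH_M / SP_SPEED_M_PER_S / 60
print(f"SP: {sp_minutes:.0f} min")      # ~124 min, the nominal two hours
print(f"LP: {2 * sp_minutes:.0f} min")  # long play doubles the time
print(f"EP: {3 * sp_minutes:.0f} min")  # extended play triples it
```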
To calculate the playing time for a T-XXX tape in a PAL machine, this formula is used: PAL/SECAM recording time = T-XXX in minutes × 1.426 To calculate the playing time for an E-XXX tape in an NTSC machine, this formula is used: NTSC recording time = E-XXX in minutes × 0.701 Since the recording/playback time for PAL/SECAM is roughly 40 percent longer than the recording/playback time for NTSC, some tape manufacturers label their cassettes with both T-XXX and E-XXX marks, like T60/E90, T90/E120 and T120/E180. SP is standard play, LP is long play (half speed, equal to recording time in D-VHS "HS" mode), EP/SLP is extended/super long play (one-third speed), which was primarily released into the NTSC market. Copy protection As VHS was designed to facilitate recording from various sources, including television broadcasts or other VCR units, content producers quickly found that home users were able to use the devices to copy videos from one tape to another. Despite generation loss in quality when a tape was copied, this practice was regarded as a widespread problem, which members of the Motion Picture Association of America (MPAA) claimed caused them great financial losses. In response, several companies developed technologies to protect copyrighted VHS tapes from casual duplication by home users. The most popular method was Analog Protection System, better known simply as Macrovision, produced by a company of the same name. According to Macrovision: The technology is applied to over 550 million videocassettes annually and is used by every MPAA movie studio on some or all of their videocassette releases. Over 220 commercial duplication facilities around the world are equipped to supply Macrovision videocassette copy protection to rights owners...The study found that over 30% of VCR households admit to having unauthorized copies, and that the total annual revenue loss due to copying is estimated at $370,000,000 annually. The system was first used in copyrighted movies beginning with the 1984 film The Cotton Club. Macrovision copy protection saw refinement throughout its years, but has always worked by essentially introducing deliberate errors into a protected VHS tape's output video stream. These errors in the output video stream are ignored by most televisions, but will interfere with re-recording of programming by a second VCR. The first version of Macrovision introduces high signal levels during the vertical blanking interval, which occurs between the video fields. These high levels confuse the automatic gain control circuit in most VHS VCRs, leading to varying brightness levels in an output video, but are ignored by the TV as they are out of the frame-display period. "Level II" Macrovision uses a process called "colorstriping", which inverts the analog signal's colorburst period and causes off-color bands to appear in the picture. Level III protection added additional colorstriping techniques to further degrade the image. These protection methods worked well to defeat analog-to-analog copying by VCRs of the time. Consumer products capable of digital video recording are mandated by law to include features which detect Macrovision encoding of input analog streams, and disrupt copying of the video. Both intentional and false-positive detection of Macrovision protection has frustrated archivists who wish to copy now-fragile VHS tapes to a digital format for preservation.
As of the 2020s, modern software decoding ignores Macrovision, as software is not limited to the fixed standards that Macrovision was intended to disrupt in hardware-based systems. Recording process The recording process in VHS consists of the following steps, in this order: The tape is pulled from the supply reel by a capstan and pinch roller, similar to those used in audio tape recorders. The tape passes across the erase head, which wipes any existing recording from the tape. The tape is wrapped around the head drum, using a little more than 180 degrees of the drum. One of the heads on the spinning drum records one field of video onto the tape, in one diagonally oriented track. The tape passes across the audio and control head, which records the control track and the linear audio tracks. The tape is wound onto the take-up reel due to torque applied to the reel by the machine. Erase head The erase head is fed by a high-level, high-frequency AC signal that overwrites any previous recording on the tape. Without this step, the new recording cannot be guaranteed to completely replace any old recording that might have been on the tape. Video recording The tape path then carries the tape around the spinning video-head drum, wrapping it around a little more than 180 degrees (called the omega transport system) in a helical fashion, assisted by the slanted tape guides. The head rotates constantly at 1798.2 rpm in NTSC machines, exactly 1500 in PAL, each complete rotation corresponding to one frame of video. Two tape heads are mounted on the cylindrical surface of the drum, 180 degrees apart from each other, so that the two heads "take turns" in recording. The rotation of the inclined head drum, combined with the relatively slow movement of the tape, results in each head recording a track oriented at a diagonal with respect to the length of the tape, with the heads moving across the tape at speeds higher than what would otherwise be possible. This is referred to as helical scan recording. Although the linear tape speed is only on the order of an inch per second, the heads on the drum move across the tape at a writing speed of 4.86 or 6.096 meters per second. To maximize the use of the tape, the video tracks are recorded very close together. To reduce crosstalk between adjacent tracks on playback, an azimuth recording method is used: The gaps of the two heads are not aligned exactly with the track path. Instead, one head is angled at plus six degrees from the track, and the other at minus six degrees. This results, during playback, in destructive interference of the signal from the tracks on either side of the one being played. Each of the diagonal-angled tracks is a complete TV picture field, lasting 1/60 of a second (1/50 on PAL) on the display. One tape head records an entire picture field. The adjacent track, recorded by the second tape head, is another 1/60 (or 1/50) of a second TV picture field, and so on. Thus one complete head rotation records an entire NTSC or PAL frame of two fields. The original VHS specification had only two video heads. When the EP recording speed was introduced, the thickness of these heads was reduced to accommodate the narrower tracks. However, this subtly reduced the quality of the SP speed, and dramatically lowered the quality of freeze frame and high speed search. Later models implemented both wide and narrow heads, and could use all four during pause and shuttle modes to further improve quality, although machines later combined both pairs into one.
In machines supporting VHS HiFi (described later), yet another pair of heads was added to handle the VHS HiFi signal. Camcorders using the miniaturized drum required twice as many heads to complete any given task. This almost always meant four heads on the miniaturized drum with performance similar to a two-head VCR with a full-sized drum. No attempt was made to record Hi-Fi audio with such devices, as this would require an additional four heads to work. W-VHS decks could have up to 12 heads in the head drum, of which 11 were active including a flying erase head for erasing individual video fields, and one was a dummy used for balancing the head drum. The high tape-to-head speed created by the rotating head results in a far higher bandwidth than could be practically achieved with a stationary head. VHS machines record up to 3 MHz of baseband video bandwidth and 300 kHz of baseband chroma bandwidth. The luminance (black and white) portion of the video is frequency modulated and combined with a down-converted "color under" chroma (color) signal that is encoded using quadrature amplitude modulation. Including side bands, the signal on a VHS tape can use up to 10 MHz of RF bandwidth. VHS horizontal resolution is 240 TVL, or about 320 lines across a scan line. The vertical resolution (number of scan lines) is the same as the respective analog TV standard (625 for PAL or 525 for NTSC; somewhat fewer scan lines are actually visible due to overscan and the VBI). In modern-day digital terminology, NTSC VHS resolution is roughly equivalent to 333×480 pixels for luma and 40×480 pixels for chroma. 333×480=159,840 pixels or 0.16 MP (1/6 of a megapixel). PAL VHS resolution is roughly 333×576 pixels for luma and 40×576 pixels for chroma (although, when decoded, PAL and SECAM halve the vertical color resolution). JVC countered 1985's SuperBeta with VHS HQ, or High Quality. The frequency modulation of the VHS luminance signal is limited to 3 megahertz, which makes higher resolutions technically impossible even with the highest-quality recording heads and tape materials, but an HQ-branded deck includes luminance noise reduction, chroma noise reduction, white clip extension, and improved sharpness circuitry. The effect was to increase the apparent horizontal resolution of a VHS recording from 240 to 250 analog lines (equivalent to 333 pixels from left-to-right, in digital terminology). The major VHS OEMs resisted HQ due to cost concerns, eventually resulting in JVC reducing the requirements for the HQ brand to white clip extension plus one other improvement. In 1987, JVC introduced a new format called Super VHS (often known as S-VHS) which extended the bandwidth to over 5 megahertz, yielding 420 analog horizontal lines (560 pixels left-to-right). Most Super VHS recorders can play back standard VHS tapes, but not vice versa. S-VHS was designed for higher resolution, but failed to gain popularity outside Japan because of the high costs of the machines and tapes. Because of the limited user base, Super VHS was never picked up to any significant degree by manufacturers of pre-recorded tapes, although it was used extensively in the low-end professional market for filming and editing. Audio recording After leaving the head drum, the tape passes over the stationary audio and control head. This records a control track at the bottom edge of the tape, and one or two linear audio tracks along the top edge.
Original linear audio system In the original VHS specification, audio was recorded as baseband in a single linear track, at the upper edge of the tape, similar to how an audio compact cassette operates. The recorded frequency range was dependent on the linear tape speed. For the VHS SP mode, which already uses a lower tape speed than the compact cassette, this resulted in a mediocre frequency response of roughly 100 Hz to 10 kHz for NTSC, frequency response for PAL VHS with its lower standard tape speed was somewhat worse of about 80 Hz to 8 kHz. The signal-to-noise ratio (SNR) was an acceptable 42 dB for NTSC and 41 dB for PAL. Both parameters degraded significantly with VHS's longer play modes, with EP/NTSC frequency response peaking at 4 kHz. S-VHS tapes can give better audio (and video) quality, because the tapes are designed to have almost twice the bandwidth of VHS at the same speed. Sound cannot be recorded on a VHS tape without recording a video signal because the video signal is used to generate the control track pulses which effectively regulate the tape speed on playback. Even in the audio dubbing mode, a valid video recording (control track signal) must be present on the tape for audio to be correctly recorded. If there is no video signal to the VCR input during recording, most later VCRs will record black video and generate a control track while the sound is being recorded. Some early VCRs record audio without a control track signal; this is of little use, because the absence of a signal from the control track means that the linear tape speed is irregular during playback. More sophisticated VCRs offer stereo audio recording and playback. Linear stereo fits two independent channels in the same space as the original mono audiotrack. While this approach preserves acceptable backward compatibility with monoaural audio heads, the splitting of the audio track degrades the audio's signal-to-noise ratio, causing objectionable tape hiss at normal listening volume. To counteract the hiss, linear stereo VHS VCRs use Dolby B noise reduction for recording and playback. This dynamically boosts the high frequencies of the audio program on the recorded medium, improving its signal strength relative to the tape's background noise floor, then attenuates the high frequencies during playback. Dolby-encoded program material exhibits a high-frequency emphasis when played on non-Hi-Fi VCRs that are not equipped with the matching Dolby Noise Reduction decoder, although this may actually improve the sound quality of non-Hi-Fi VCRs, especially at the slower recording speeds. High-end consumer recorders take advantage of the linear nature of the audio track, as the audio track could be erased and recorded without disturbing the video portion of the recorded signal. Hence, "audio dubbing" and "video dubbing", where either the audio or video is re-recorded on tape (without disturbing the other), were supported features on prosumer linear video editing-decks. Without dubbing capability, an audio or video edit could not be done in-place on master cassette, and requires the editing output be captured to another tape, incurring generational loss. Studio film releases began to emerge with linear stereo audiotracks in 1982. From that point, nearly every home video release by Hollywood featured a Dolby-encoded linear stereo audiotrack. However, linear stereo was never popular with equipment makers or consumers. 
Tracking adjustment and index marking Another linear control track at the tape's lower edge holds pulses that mark the beginning of every frame of video; these are used to fine-tune the tape speed during playback, so that the high speed rotating heads remained exactly on their helical tracks rather than somewhere between two adjacent tracks (known as "tracking"). Since good tracking depends on precise distances between the rotating drum and the fixed control/audio head reading the linear tracks, which usually varies by a couple of micrometers between machines due to manufacturing tolerances, most VCRs offer tracking adjustment, either manual or automatic, to correct such mismatches. The control track is also used to hold index marks, which were normally written at the beginning of each recording session, and can be found using the VCR's index search function: this will fast-wind forward or backward to the nth specified index mark, and resume playback from there. At times, higher-end VCRs provided functions for the user to manually add and remove these marks. By the late 1990s, some high-end VCRs offered more sophisticated indexing. For example, Panasonic's Tape Library system assigned an ID number to each cassette, and logged recording information (channel, date, time and optional program title entered by the user) both on the cassette and in the VCR's memory for up to 900 recordings (600 with titles). Hi-Fi audio system Around 1984, JVC added Hi-Fi audio to VHS (model HR-D725U, in response to Betamax's introduction of Beta Hi-Fi.) Both VHS Hi-Fi and Betamax Hi-Fi delivered flat full-range frequency response (20 Hz to 20 kHz), excellent 70 dB signal-to-noise ratio (in consumer space, second only to the compact disc), dynamic range of 90 dB, and professional audio-grade channel separation (more than 70 dB). VHS Hi-Fi audio is achieved by using audio frequency modulation (AFM), modulating the two stereo channels (L, R) on two different frequency-modulated carriers and embedding the combined modulated audio signal pair into the video signal. To avoid crosstalk and interference from the primary video carrier, VHS's implementation of AFM relied on a form of magnetic recording called depth multiplexing. The modulated audio carrier pair was placed in the hitherto-unused frequency range between the luminance and the color carrier (below 1.6 MHz), and recorded first. Subsequently, the video head erases and re-records the video signal (combined luminance and color signal) over the same tape surface, but the video signal's higher center frequency results in a shallower magnetization of the tape, allowing both the video and residual AFM audio signal to coexist on tape. (PAL versions of Beta Hi-Fi use this same technique). During playback, VHS Hi-Fi recovers the depth-recorded AFM signal by subtracting the audio head's signal (which contains the AFM signal contaminated by a weak image of the video signal) from the video head's signal (which contains only the video signal), then demodulates the left and right audio channels from their respective frequency carriers. The result of the complex process was audio of high fidelity, which was uniformly solid across all tape-speeds (EP, LP or SP.) Since JVC had gone through the complexity of ensuring Hi-Fi's backward compatibility with non-Hi-Fi VCRs, virtually all studio home video releases produced after this time contained Hi-Fi audio tracks, in addition to the linear audio track. 
Under normal circumstances, all Hi-Fi VHS VCRs will record Hi-Fi and linear audio simultaneously to ensure compatibility with VCRs without Hi-Fi playback, though only early high-end Hi-Fi machines provided linear stereo compatibility. The sound quality of Hi-Fi VHS stereo is comparable to some extent to the quality of CD audio, particularly when recordings were made on high-end or professional VHS machines that have a manual audio recording level control. This high quality compared to other consumer audio recording formats such as compact cassette attracted the attention of amateur and hobbyist recording artists. Home recording enthusiasts occasionally recorded high quality stereo mixdowns and master recordings from multitrack audio tape onto consumer-level Hi-Fi VCRs. However, because the VHS Hi-Fi recording process is intertwined with the VCR's video-recording function, advanced editing functions such as audio-only or video-only dubbing are impossible. A short-lived alternative to the HiFi feature for recording mixdowns of hobbyist audio-only projects was a PCM adaptor so that high-bandwidth digital video could use a grid of black-and-white dots on an analog video carrier to give pro-grade digital sounds though DAT tapes made this obsolete. Some VHS decks also had a "simulcast" switch, allowing users to record an external audio input along with off-air pictures. Some televised concerts offered a stereo simulcast soundtrack on FM radio and as such, events like Live Aid were recorded by thousands of people with a full stereo soundtrack despite the fact that stereo TV broadcasts were some years off (especially in regions that adopted NICAM). Other examples of this included network television shows such as Friday Night Videos and MTV for its first few years in existence. Likewise, some countries, most notably South Africa, provided alternate language audio tracks for TV programming through an FM radio simulcast. The considerable complexity and additional hardware limited VHS Hi-Fi to high-end decks for many years. While linear stereo all but disappeared from home VHS decks, it was not until the 1990s that Hi-Fi became a more common feature on VHS decks. Even then, most customers were unaware of its significance and merely enjoyed the better audio performance of the newer decks. VHS Hi-Fi audio has been standardized in IEC 60774-2. Issues with Hi-Fi audio Due to the path followed by the video and Hi-Fi audio heads being striped and discontinuous—unlike that of the linear audio track—head-switching is required to provide a continuous audio signal. While the video signal can easily hide the head-switching point in the invisible vertical retrace section of the signal, so that the exact switching point is not very important, the same is obviously not possible with a continuous audio signal that has no inaudible sections. Hi-Fi audio is thus dependent on a much more exact alignment of the head switching point than is required for non-HiFi VHS machines. Misalignments may lead to imperfect joining of the signal, resulting in low-pitched buzzing. The problem is known as "head chatter", and tends to increase as the audio heads wear down. Another issue that made VHS Hi-Fi imperfect for music is the inaccurate reproduction of levels (softer and louder) which are not re-created as the original source. Variations Super-VHS / ADAT / SVHS-ET Several improved versions of VHS exist, most notably Super-VHS (S-VHS), an analog video standard with improved video bandwidth. 
S-VHS improved the horizontal luminance resolution to 400 lines (versus 250 for VHS/Beta and 500 for DVD). The audio system (both linear and AFM) is the same. S-VHS made little impact on the home market, but gained dominance in the camcorder market due to its superior picture quality. The ADAT format provides the ability to record multitrack digital audio using S-VHS media. JVC also developed SVHS-ET technology for its Super-VHS camcorders and VCRs, which simply allows them to record Super VHS signals onto lower-priced VHS tapes, albeit with a slight blurring of the image. Nearly all later JVC Super-VHS camcorders and VCRs have SVHS-ET ability. VHS-C / Super VHS-C Another variant is VHS-Compact (VHS-C), originally developed for portable VCRs in 1982, but ultimately finding success in palm-sized camcorders. The longest tape available for NTSC holds 60 minutes in SP mode and 180 minutes in EP mode. Since VHS-C tapes are based on the same magnetic tape as full-size tapes, they can be played back in standard VHS players using a mechanical adapter, without the need of any kind of signal conversion. The magnetic tape on VHS-C cassettes is wound on one main spool and uses a gear wheel to advance the tape. The adapter is mechanical, although early examples were motorized, with a battery. It has an internal hub to engage with the VCR mechanism in the location of a normal full-size tape hub, driving the gearing on the VHS-C cassette. Also, when a VHS-C cassette is inserted into the adapter, a small swing-arm pulls the tape out of the miniature cassette to span the standard tape path distance between the guide rollers of a full-size tape. This allows the tape from the miniature cassette to use the same loading mechanism as that from the standard cassette. Super VHS-C or S-VHS Compact was developed by JVC in 1987. S-VHS provided an improved luminance and chrominance quality, yet S-VHS recorders were compatible with VHS tapes. Sony was unable to shrink its Betamax form any further, so instead developed Video8/Hi8 which was in direct competition with the VHS-C/S-VHS-C format throughout the 1980s, 1990s, and 2000s. Ultimately neither format "won" and both have been superseded by digital high definition equipment. W-VHS / Digital-VHS (high-definition) Wide-VHS (W-VHS) allowed recording of MUSE Hi-Vision analog high definition television, which was broadcast in Japan from 1989 until 2007. The other improved standard, called Digital-VHS (D-VHS), records digital high definition video onto a VHS form factor tape. D-VHS can record up to 4 hours of ATSC digital television in 720p or 1080i formats using the fastest record mode (equivalent to VHS-SP), and up to 49 hours of lower-definition video at slower speeds. D9 There is also a JVC-designed component digital professional production format known as Digital-S, or officially under the name D9, that uses a VHS form factor tape and essentially the same mechanical tape handling techniques as an S-VHS recorder. This format is the least expensive format to support a Sel-Sync pre-read for video editing. This format competed with Sony's Digital Betacam in the professional and broadcast market, although in that area Sony's Betacam family ruled supreme, in contrast to the outcome of the VHS/Betamax domestic format war. It has now been superseded by high definition formats. V-Lite In the late 1990s, there was a disposable promotional variation of the VHS format called V-Lite. 
It was a cassette constructed largely with polystyrene, with only the rotating components like the tape reels being of hard plastic with glued casings without standard features like a protective cover for the exposed tape. Its purpose was to be as lightweight as possible for minimized mass delivery costs for the purpose of a media company's promotional campaign and intended for only a few viewings with a runtime of typically 2 to 3 minutes. One such production so promoted was the A&E Network's 2000 adaptation of The Great Gatsby. The format arose concurrently and then rendered obsolete, with the rise of the DVD video format which eventually supplanted VHS, being lighter and less expensive still to mass-distribute, while video streaming would later supplant the use of physical media for video promotion. Accessories Shortly after the introduction of the VHS format, VHS tape rewinders were developed. These devices served the sole purpose of rewinding VHS tapes. Proponents of the rewinders argued that the use of the rewind function on the standard VHS player would lead to wear and tear of the transport mechanism. The rewinder would rewind the tapes smoothly and also normally do so at a faster rate than the standard rewind function on VHS players. However, some rewinder brands did have some frequent abrupt stops, which occasionally led to tape damage. Some devices were marketed which allowed a personal computer to use a VHS recorder as a data backup device. The most notable of these was ArVid, widely used in Russia and CIS states. Similar systems were manufactured in the United States by Corvus and Alpha Microsystems, and in the UK by Backer from Danmere Ltd. The Backer system could store up to 4 GB of data with a transfer rate of 9 MB per minute. Signal standards VHS can record and play back all varieties of analog television signals in existence at the time VHS was devised. However, a machine must be designed to record a given standard. Typically, a VHS machine can only handle signals using the same standard as the country it was sold in. This is because some parameters of analog broadcast TV are not applicable to VHS recordings, the number of VHS tape recording format variations is smaller than the number of broadcast TV signal variations—for example, analog TVs and VHS machines (except multistandard devices) are not interchangeable between the UK and Germany, but VHS tapes are. The following tape recording formats exist in conventional VHS (listed in the form of standard/lines/frames): SECAM/625/25 (SECAM, French variety) MESECAM/625/25 (most other SECAM countries, notably the former Soviet Union and Middle East) NTSC/525/30 (Most parts of Americas, Japan, South Korea) PAL/525/30 (i.e., PAL-M, Brazil) PAL/625/25 (most of Western Europe, Australia, New Zealand, many parts of Asia such as China and India, some parts of South America such as Argentina, Uruguay and the Falklands, and Africa) PAL/625/25 VCRs allow playback of SECAM (and MESECAM) tapes with a monochrome picture, and vice versa, as the line standard is the same. Since the 1990s, dual and multi-standard VHS machines, able to handle a variety of VHS-supported video standards, became more common. For example, VHS machines sold in Australia and Europe could typically handle PAL, MESECAM for record and playback, and NTSC for playback only on suitable TVs. 
Dedicated multi-standard machines can usually handle all standards listed, and some high-end models could convert the content of a tape from one standard to another on the fly during playback by using a built-in standards converter. S-VHS is only implemented as such in PAL/625/25 and NTSC/525/30; S-VHS machines sold in SECAM markets record internally in PAL, and convert between PAL and SECAM during recording and playback. S-VHS machines for the Brazilian market record in NTSC and convert between it and PAL-M. A small number of VHS decks are able to decode closed captions on video cassettes before sending the full signal to the set with the captions. A smaller number still are able, additionally, to record subtitles transmitted with world standard teletext signals (on pre-digital services), simultaneously with the associated program. S-VHS has a sufficient resolution to record teletext signals with relatively few errors, although for some years now it has been possible to recover teletext pages and even complete "page carousels" from regular VHS recordings using non-real-time computer processing. Uses in marketing VHS was popular for long-form content, such as feature films or documentaries, as well as short-play content, such as music videos, in-store videos, teaching videos, distribution of lectures and talks, and demonstrations. VHS instruction tapes were sometimes included with various products and services, including exercise equipment, kitchen appliances, and computer software. Comparison to Betamax VHS was the winner of a protracted and somewhat bitter format war during the late 1970s and early 1980s against Sony's Betamax format as well as other formats of the time. Betamax was widely perceived at the time as the better format, as the cassette was smaller in size, and Betamax offered slightly better video quality than VHS – it had lower video noise, less luma-chroma crosstalk, and was marketed as providing pictures superior to those of VHS. However, the sticking point for both consumers and potential licensing partners of Betamax was the total recording time. To overcome the recording limitation, Beta II speed (two-hour mode, NTSC regions only) was released in order to compete with VHS's two-hour SP mode, thereby reducing Betamax's horizontal resolution to 240 lines (vs 250 lines). In turn, the extension of VHS to VHS HQ produced 250 lines (vs 240 lines), so that overall a typical Betamax/VHS user could expect virtually identical resolution. (Very high-end Betamax machines still supported recording in the Beta I mode and some in an even higher resolution Beta Is (Beta I Super HiBand) mode, but at a maximum single-cassette run time of 1:40 [with an L-830 cassette].) Because Betamax was released more than a year before VHS, it held an early lead in the format war. However, by 1981, United States' Betamax sales had dipped to only 25-percent of all sales. There was debate between experts over the cause of Betamax's loss. Some, including Sony's founder Akio Morita, say that it was due to Sony's licensing strategy with other manufacturers, which consistently kept the overall cost for a unit higher than a VHS unit, and that JVC allowed other manufacturers to produce VHS units license-free, thereby keeping costs lower. Others say that VHS had better marketing, since the much larger electronics companies at the time (Matsushita, for example) supported VHS. Sony would make its first VHS players/recorders in 1988, although it continued to produce Betamax machines concurrently until 2002. 
Decline VHS was widely used in television-equipped American and European living rooms for more than twenty years from its introduction in the late 1970s. The home television recording market, also known as the VHS market, as well as the camcorder market, has since transitioned to digital recording on solid-state memory cards. The introduction of the DVD format to American consumers in March 1997 triggered the market share decline of VHS. DVD rentals surpassed those on the VHS format in the United States for the first time in June 2003. The Hill said that David Cronenberg's movie A History of Violence, sold on VHS in 2006, was "widely believed to be the last instance of a major motion picture to be released in that format". By December 2008, the Los Angeles Times reported on "the final truckload of VHS tapes" being shipped from a warehouse in Palm Harbor, Florida, citing Ryan J. Kugler's Distribution Video Audio Inc. as "the last major supplier". Though 94.5 million Americans still owned VHS format VCRs in 2005, market share continued to drop. In the mid-2000s, several retail chains in the United States and Europe announced they would stop selling VHS equipment. In the U.S., no major brick-and-mortar retailers stock VHS home-video releases, focusing only on DVD and Blu-ray media. Sony Pictures Home Entertainment along with other companies ceased production of VHS in late 2010 in South Korea. The last known company in the world to manufacture VHS equipment was Funai of Japan, who produced video cassette recorders under the Sanyo brand in China and North America. Funai ceased production of VHS equipment (VCR/DVD combos) in July 2016, citing falling sales and a shortage of components. Modern use Despite the decline in both VHS players and programming on VHS machines, they are still owned in some households worldwide. Those who still use or hold on to VHS do so for a number of reasons, including nostalgic value, ease of use in recording, keeping personal videos or home movies, watching content currently exclusive to VHS, and collecting. Some expatriate communities in the United States also obtain video content from their native countries in VHS format. Although VHS has been discontinued in the United States, VHS recorders and blank tapes were still sold at stores in other developed countries prior to digital television transitions. As an acknowledgement of the continued use of VHS, Panasonic announced the world's first dual deck VHS-Blu-ray player in 2009. The last standalone JVC VHS-only unit was produced October 28, 2008. JVC, and other manufacturers, continued to make combination DVD+VHS units even after the decline of VHS. Countries like South Korea released films on VHS until December 2010, with Inception being the last Hollywood film to be released on VHS in the country. A market for pre-recorded VHS tapes has continued, and some online retailers such as Amazon still sell new and used pre-recorded VHS cassettes of movies and television programs. None of the major Hollywood studios generally issues releases on VHS. The last major studio film to be released in the format in the United States and Canada, other than as part of special marketing promotions, was A History of Violence in 2006. In October 2008, Distribution Video Audio Inc., the last major American supplier of pre-recorded VHS tapes, shipped its final truckload of tapes to stores in America. However, there have been a few exceptions. 
For example, The House of the Devil was released on VHS in 2010 as an Amazon-exclusive deal, in keeping with the film's intent to mimic 1980s horror films. The first Paranormal Activity film, produced in 2007, had a VHS release in the Netherlands in 2010. The horror film V/H/S/2 was released as a combo in North America that included a VHS tape in addition to a Blu-ray and a DVD copy on September 24, 2013. In 2019, Paramount Pictures produced limited quantities of the 2018 film Bumblebee to give away as promotional contest prizes. In 2021, professional wrestling promotion Impact Wrestling released a limited run of VHS tapes containing that year's Slammiversary, which quickly sold out. The company later announced future VHS runs of pay-per-view events. The VHS medium has a cult following. For instance, in February 2021, it was reported that VHS was once again doing well as an underground market. In January 2023, it was reported that VHS tapes were once again becoming valuable collectors items. VHS collecting would make a comeback in the 2020s. The 2024 horror film, Alien: Romulus, will have a limited release on VHS, marking the first major Hollywood film to receive an official VHS release since 2006. Successors VCD The Video CD (VCD) was created in 1993, becoming an alternative medium for video, in a CD-sized disc. Though occasionally showing compression artifacts and color banding that are common discrepancies in digital media, the durability and longevity of a VCD depends on the production quality of the disc, and its handling. The data stored digitally on a VCD theoretically does not degrade (in the analog sense like tape). In the disc player, there is no physical contact made with either the data or label sides. When handled properly, a VCD will last a long time. Since a VCD can hold only 74 minutes of video, a movie exceeding that mark has to be divided into two or more discs. DVD The DVD-Video format was introduced first on November 1, 1996, in Japan; to the United States on March 26, 1997 (test marketed); and mid-to-late 1998 in Europe and Australia. While the DVD was highly successful in the pre-recorded retail market, it failed to displace VHS for in home recording of video content (e.g. broadcast or cable television). A number of factors hindered the commercial success of the DVD in this regard, including: A reputation for being temperamental and unreliable, as well as the risk of scratches and hairline cracks. Incompatibilities in playing discs recorded on a different manufacturer's machines to that of the original recording machine. Compression artifacts: MPEG-2 video compression can result in visible artifacts such as macroblocking, mosquito noise and ringing which become accentuated in extended recording modes (more than three hours on a DVD-5 disc). Standard VHS will not suffer from any of these problems, all of which are characteristic of certain digital video compression systems (see Discrete cosine transform) but VHS will result in reduced luminance and chroma resolution, which makes the picture look horizontally blurred (resolution decreases further with LP and EP recording modes). VHS also adds considerable noise to both the luminance and chroma channels. High-capacity digital recording technologies High-capacity digital recording systems are also gaining in popularity with home users. 
These types of systems come in several form factors: Hard disk–based set-top boxes Hard disk/optical disc combination set-top boxes Personal computer–based media center Portable media players with TV-out capability Hard disk-based systems include TiVo as well as other digital video recorder (DVR) offerings. These types of systems provide users with a no-maintenance solution for capturing video content. Customers of subscriber-based TV generally receive electronic program guides, enabling one-touch setup of a recording schedule. Hard disk–based systems allow for many hours of recording without user-maintenance. For example, a 120 GB system recording at an extended recording rate (XP) of 10 Mbit/s MPEG-2 can record over 25 hours of video content. Legacy Often considered an important medium of film history, the influence of VHS on art and cinema was highlighted in a retrospective staged at the Museum of Arts and Design in 2013. In 2015, the Yale University Library collected nearly 3,000 horror and exploitation movies on VHS tapes, distributed from 1978 to 1985, calling them "the cultural id of an era." The documentary film Rewind This! (2013), directed by Josh Johnson, tracks the impact of VHS on film industry through various filmmakers and collectors. The last Blockbuster franchise is still renting out VHS tapes, and is based in Bend, Oregon, a town home to under 100,000 people as of 2020. The VHS aesthetic is also a central component of the analog horror genre, which is largely known for imitating recordings of late 20th century TV broadcasts. See also Analog video Tape head cleaner Analog video on discs: Capacitance Electronic Disc (CED) Video High Density (VHD) LaserDisc Notes References External links HowStuffWorks: How VCRs Work The 'Total Rewind' VCR museum – A covering the history of VHS and other vintage formats. VHSCollector.com: Analog Video Cassette Archive – A growing archive of commercially released video cassettes from their dawn to the present, and a guide to collecting. Audiovisual introductions in 1976 Products introduced in 1976 Japanese inventions Composite video formats Panasonic Videotape Digital media Home video Videocassette formats
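As an aside on the disk-capacity example in the digital recording section above, the quoted figure of "over 25 hours" from a 120 GB drive at 10 Mbit/s follows from straightforward unit conversion; the use of decimal gigabytes below is an assumption.

```python
# 120 GB drive recording a constant 10 Mbit/s MPEG-2 stream (figures from the text).
capacity_bits = 120e9 * 8    # decimal gigabytes assumed
bitrate_bps = 10e6
hours = capacity_bits / bitrate_bps / 3600
print(f"{hours:.1f} hours")  # ~26.7 hours, consistent with "over 25 hours"
```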
VHS
[ "Technology" ]
11,530
[ "Multimedia", "Digital media" ]
52,136
https://en.wikipedia.org/wiki/Citrus
Citrus is a genus of flowering trees and shrubs in the family Rutaceae. Plants in the genus produce citrus fruits, including important crops such as oranges, mandarins, lemons, grapefruits, pomelos, and limes. Citrus is native to South Asia, East Asia, Southeast Asia, Melanesia, and Australia. Indigenous people in these areas have used and domesticated various species since ancient times. Its cultivation first spread into Micronesia and Polynesia through the Austronesian expansion (–1500 BCE). Later, it was spread to the Middle East and the Mediterranean via the incense trade route, and from Europe to the Americas. Renowned for their highly fragrant aromas and complex flavor, citrus are among the most popular fruits in cultivation. With a propensity to hybridize between species, making their taxonomy complicated, there are numerous varieties encompassing a wide range of appearance and fruit flavors. Evolution Evolutionary history The large citrus fruit of today evolved originally from small, edible berries over millions of years. Citrus species began to diverge from a common ancestor about 15 million years ago, at about the same time that Severinia (such as the Chinese box orange) diverged from the same ancestor. About 7 million years ago, the ancestors of Citrus split into the main genus, Citrus, and the Poncirus group (such as the trifoliate orange), which some taxonomies consider a separate genus and others include in Citrus. Poncirus is closely enough related that it can still be hybridized with all other citrus and used as rootstock. These estimates are made using genetic mapping of plant chloroplasts. A DNA study published in Nature in 2018 concludes that the genus Citrus evolved in the foothills of the Himalayas, in the area of Assam (India), western Yunnan (China), and northern Myanmar. The three ancestral species in the genus Citrus associated with modern Citrus cultivars are the mandarin orange, pomelo, and citron. Almost all of the common commercially important citrus fruits (sweet oranges, lemons, grapefruit, limes, and so on) are hybrids between these three species, their main progenies, and other wild Citrus species within the last few thousand years. Citrus plants are native to subtropical and tropical regions of Asia, Island Southeast Asia, Near Oceania, and northeastern and central Australia. Domestication of citrus species involved much hybridization and introgression, leaving much uncertainty about when and where domestication first happened. A genomic, phylogenic, and biogeographical analysis by Wu et al. (2018) has shown that the center of origin of the genus Citrus is likely the southeast foothills of the Himalayas, in a region stretching from eastern Assam, northern Myanmar, to western Yunnan. It diverged from a common ancestor with Poncirus trifoliata. A change in climate conditions during the Late Miocene (11.63 to 5.33 mya) resulted in a sudden speciation event. The species resulting from this event include the citrons (Citrus medica) of South Asia; the pomelos (C. maxima) of Mainland Southeast Asia; the mandarins (C. reticulata), kumquats (C. japonica), mangshanyegan (C. mangshanensis), and ichang papedas (C. cavaleriei) of southeastern China; the kaffir limes (C. hystrix) of Island Southeast Asia; and the biasong and samuyao (C. micrantha) of the Philippines. This was followed by the spread of citrus species into Taiwan and Japan in the Early Pliocene (5.33 to 3.6 mya), resulting in the tachibana orange (C.
tachibana); and beyond the Wallace Line into Papua New Guinea and Australia during the Early Pleistocene (2.5 million to 800,000 years ago), where further speciation events gave rise to the Australian limes. Fossil record A fossil leaf from the Pliocene of Valdarno, Italy, is described as †Citrus meletensis. In China, fossil leaf specimens of †Citrus linczangensis have been collected from late Miocene coal-bearing strata of the Bangmai Formation in Yunnan province. C. linczangensis resembles C. meletensis in having an intramarginal vein, an entire margin, and an articulated and distinctly winged petiole. Taxonomy Many cultivated Citrus species are natural or artificial hybrids of a small number of core ancestral species, including the citron, pomelo, and mandarin. Natural and cultivated citrus hybrids include commercially important fruit such as oranges, grapefruit, lemons, limes, and some tangerines. The multiple hybridisations have made the taxonomy of Citrus complex. Apart from these core species, Australian limes and the recently discovered mangshanyegan are grown. Kumquats and Clymenia spp. are now generally considered to belong within the genus Citrus. The false oranges, Oxanthera from New Caledonia, have been transferred to the Citrus genus on phylogenetic evidence. A recent taxonomy reincorporates the trifoliate orange (Poncirus) into an enlarged Citrus, but recognizes that many botanists still follow Swingle in splitting it off. History The earliest introductions of citrus species by human migrations were during the Austronesian expansion, by around 1500 BCE, when Citrus hystrix, Citrus macroptera, and Citrus maxima were among the canoe plants carried by Austronesian voyagers eastwards into Micronesia and Polynesia. The citron (Citrus medica) was also introduced early into the Mediterranean basin from India and Southeast Asia. It was introduced via two ancient trade routes: an overland route through Persia, the Levant and the Mediterranean islands; and a maritime route through the Arabian Peninsula and Ptolemaic Egypt into North Africa. Although the exact date of the original introduction is unknown due to the sparseness of archaeobotanical remains, the earliest evidence consists of seeds recovered from the Hala Sultan Tekke site of Cyprus, dated to around 1200 BCE. Other archaeobotanical evidence includes pollen from Carthage dating back to the 4th century BCE, and carbonized seeds from Pompeii dated to around the 3rd to 2nd century BCE. The earliest complete description of the citron was written by Theophrastus. Lemons, pomelos, and sour oranges were introduced to the Mediterranean by Arab traders around the 10th century CE. Sweet oranges were brought to Europe by the Genoese and Portuguese from Asia during the 15th to 16th century. Mandarins were not introduced until the 19th century. Oranges were introduced to Florida by Spanish colonists. In cooler parts of Europe, citrus fruit was grown in orangeries starting in the 17th century; many were as much status symbols as functional agricultural structures. Etymology The generic name Citrus originates from Latin, where it denoted either the citron (C. medica) or a conifer tree (Thuja). The Latin word is related to the ancient Greek word for the cedar of Lebanon, perhaps from a perceived similarity of the smell of citrus leaves and fruit with that of cedar. Description Tree Citrus plants are large shrubs or small to moderate-sized trees, with spiny shoots and alternately arranged evergreen leaves with an entire margin.
The flowers are solitary or in small corymbs, each flower with five (rarely four) white petals and numerous stamens; they are often very strongly scented, due to the presence of essential oil glands. Fruit The fruit is a hesperidium, a specialised berry with multiple carpels, globose to elongated, with a leathery rind or "peel" called a pericarp. The outermost layer of the pericarp is an "exocarp" called the flavedo, commonly referred to as the zest. The middle layer of the pericarp is the mesocarp, which in citrus fruits consists of the white, spongy albedo or pith. The innermost layer of the pericarp is the endocarp. This surrounds a variable number of carpels, shaped as radial segments. The seeds, if present, develop inside the carpels. The space inside each segment is a locule filled with juice vesicles, or pulp. From the endocarp, string-like "hairs" extend into the locules, which provide nourishment to the fruit as it develops. The genus is commercially important, with cultivars of many species grown for their fruit. Some cultivars have been developed to be easy to peel and seedless, meaning they are parthenocarpic. The fragrance of citrus fruits is conferred by flavonoids and limonoids in the rind. The flavonoids include various flavanones and flavones. The carpels are juicy; they contain a high quantity of citric acid, which, together with other organic acids including ascorbic acid (vitamin C), gives them their characteristic sharp taste. Citrus fruits are diverse in size and shape, as well as in color and flavor, reflecting their biochemistry; for instance, grapefruit is made bitter-tasting by a flavanone, naringin. Cultivation Most commercial citrus cultivation uses trees produced by grafting the desired fruiting cultivars onto rootstocks selected for disease resistance and hardiness. The trees are not generally frost hardy. They thrive in a consistently sunny, humid environment with fertile soil and adequate water. The colour of citrus fruits only develops in climates with a (diurnal) cool winter. In tropical regions with no winter at all, citrus fruits remain green until maturity, hence the tropical "green oranges". The terms 'ripe' and 'mature' are widely used synonymously, but they mean different things. A mature fruit is one that has completed its growth phase. Ripening is the sequence of changes within the fruit from maturity to the beginning of decay. These changes involve the conversion of starches to sugars, a decrease in acids, softening, and a change in the fruit's colour. Citrus fruits are non-climacteric: respiration slowly declines, and the production and release of ethylene is gradual. Production According to the UN Food and Agriculture Organization, world production of all citrus fruits in 2016 was 124 million tonnes, with about half of this production as oranges. At the equivalent of US$15.2 billion in 2018, citrus trade makes up nearly half of the world fruit trade, which was US$32.1 billion that year. According to the United Nations Conference on Trade and Development, citrus production grew during the early 21st century mainly through increases in cultivation area, improvements in transportation and packaging, rising incomes, and consumer preference for healthy foods. In 2019–20, world production of oranges was estimated to be 47.5 million tonnes, led by Brazil, Mexico, the European Union, and China as the largest producers.
Pests and diseases Among the diseases of citrus plantations are citrus black spot (a fungus), citrus canker (a bacterium), citrus greening (a bacterium, spread by an insect pest), and sweet orange scab (a fungus, Elsinoë australis). Citrus plants are liable to infestation by ectoparasites which act as vectors of plant diseases: for example, aphids transmit the damaging citrus tristeza virus, while the aphid-like Asian citrus psyllid can carry the bacterium which causes the serious citrus greening disease. This threatens production in Florida, California, and worldwide. Citrus groves are attacked by parasitic nematodes, including the citrus nematode (Tylenchulus semipenetrans) and sheath nematodes (Hemicycliophora spp.). Deficiency diseases Citrus plants can develop the deficiency condition chlorosis, characterized by yellowing leaves. The condition is often caused by an excessively high pH (alkaline soil), which prevents the plant from absorbing nutrients such as iron, magnesium, and zinc needed to produce chlorophyll. Effects on humans Some Citrus species contain significant amounts of furanocoumarins. In humans, some of these act as strong photosensitizers when applied topically to the skin, while others interact with medications when taken orally, as in the grapefruit juice effect. Due to the photosensitizing effects of certain furanocoumarins, some Citrus species cause phytophotodermatitis, a potentially severe skin inflammation resulting from contact with a light-sensitizing botanical agent followed by exposure to ultraviolet light. In Citrus species, the primary photosensitizing agent appears to be bergapten, a linear furanocoumarin derived from psoralen. This claim has been confirmed for lime and bergamot. In particular, bergamot essential oil has a higher concentration of bergapten (3–3.6 g/kg) than any other Citrus-based essential oil. A systematic review indicates that citrus fruit consumption is associated with a 10% reduction of risk for developing breast cancer. Uses Culinary Many citrus fruits, such as oranges, tangerines, grapefruits, and clementines, are generally eaten fresh. They are typically peeled and can be easily split into segments. Grapefruit is more commonly halved and eaten out of the skin with a spoon. Lemonade is a popular beverage prepared by diluting lemon juice and adding sugar. Lemon juice is mixed in salad dressings and squeezed over fruit salad to stop it from turning brown: its acidity suppresses oxidation by polyphenol oxidase enzymes. A variety of flavours can be derived from different parts and treatments of citrus fruits. The colourful outer skin of some citrus fruits, known as zest, is used as a flavouring in cooking. The whole of the bitter orange (and sometimes other citrus fruits), including the peel with its essential oils, is cooked with sugar to make marmalade. As ornamental plants By the 17th century, orangeries were added to great houses in Europe, both to enable the fruit to be grown locally and for prestige, as seen in the Versailles Orangerie. Some modern hobbyists grow dwarf citrus in containers or greenhouses in areas where the weather is too cold to grow it outdoors; Citrofortunella hybrids have good cold resistance. In art and culture Lemons appear in paintings, pop art, and novels. A wall painting in the tomb of Nakht in 15th century BC Egypt depicts a woman in a festival, holding a lemon. In the 17th century, Giovanna Garzoni painted a Still Life with Bowl of Citrons, the fruits still attached to leafy flowering twigs, with a wasp on one of the fruits.
The impressionist Édouard Manet depicted a lemon on a pewter plate. In modern art, Arshile Gorky painted Still Life with Lemons in the 1930s. Citrus fruits "were the clear status symbols of the nobility in the ancient Mediterranean", according to the paleoethnobotanist Dafna Langgut. In Louisa May Alcott's 1868 novel Little Women, the character Amy March states that "It's nothing but limes now, for everyone is sucking them in their desks in schooltime, and trading them off for pencils, bead rings, paper dolls, or something else… If one girl likes another, she gives her a lime; if she’s mad with her, she eats one before her face, and doesn’t offer even a suck." See also Japanese citrus References External links Effects of pollination on Citrus plants Pollination of Citrus by Honey Bees Citrus Research and Education Center of IFAS (largest citrus research center in the world) Citrus Variety Collection by the University of California Citrus (Mark Rieger, Professor of Horticulture, University of Georgia) Fundecitrus – Fund for Citrus Plant Protection is an organization of Brazilian citrus producers and processors. Citrus – taxonomy and fruit anatomy at GeoChemBio Citrus, 2015, University of Valencia Cocktail garnishes Garden plants Citrus fruits Lists of plants Ornamental trees Aurantioideae genera Taxa named by Carl Linnaeus
Citrus
[ "Biology" ]
3,327
[ "Lists of biota", "Lists of plants", "Plants" ]
52,137
https://en.wikipedia.org/wiki/GSI%20Helmholtz%20Centre%20for%20Heavy%20Ion%20Research
The GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) is a federally and state co-funded heavy-ion research center in Darmstadt, Germany. It was founded in 1969 as the Society for Heavy Ion Research (Gesellschaft für Schwerionenforschung), abbreviated GSI, to conduct research on and with heavy-ion accelerators. It is the only major user research center in the State of Hesse. The laboratory performs basic and applied research in physics and related natural science disciplines. Main fields of study include plasma physics, atomic physics, nuclear structure and reactions research, biophysics and medical research. The lab is a member of the Helmholtz Association of German Research Centres. Shareholders are the German Federal Government (90%) and the states of Hesse, Thuringia and Rhineland-Palatinate. Upon joining the Helmholtz Association, the facility was given its current name on 7 October 2008 in order to raise its national and international profile. The GSI Helmholtz Centre for Heavy Ion Research has strategic partnerships with the Technische Universität Darmstadt, Goethe University Frankfurt, Johannes Gutenberg University Mainz and the Frankfurt Institute for Advanced Studies. Primary research The chief tool is the heavy-ion accelerator facility, consisting of: UNILAC, the Universal Linear Accelerator (energies of 2 – 11.4 MeV per nucleon); SIS 18 (Schwer-Ionen-Synchrotron), the heavy-ion synchrotron (0.010 – 2 GeV/u); ESR, the experimental storage ring (0.005 – 0.5 GeV/u); and the FRS fragment separator. The UNILAC was commissioned in 1975; the SIS 18 and the ESR were added in 1990, raising the attainable ion velocities from about 10% of the speed of light to 90%. Elements discovered at GSI: bohrium (1981), meitnerium (1982), hassium (1984), darmstadtium (1994), roentgenium (1994), and copernicium (1996). Elements confirmed at GSI: nihonium (2012), flerovium (2009), moscovium (2012), livermorium (2010), and tennessine (2012). Technological developments Another important technology developed at GSI is the use of heavy-ion beams for cancer treatment (from 1997). Instead of using X-ray radiation, carbon ions are used to irradiate the patient. The technique allows tumors close to vital organs to be treated, which is not possible with X-rays. This is because the Bragg peak of carbon ions is much sharper than the dose peak of X-ray photons. A facility based on this technology, called the Heidelberger Ionenstrahl-Therapiezentrum (HIT) and built at the University of Heidelberg Medical Center, began treating patients in November 2009. Facilities other than UNILAC and SIS-18 Two high-energy lasers, the nhelix (Nanosecond High Energy Laser for heavy Ion eXperiments) and the Phelix (Petawatt High Energy Laser for heavy Ion eXperiments). A Large Area Neutron Detector (LAND). A FRagment Separator (FRS) – The GSI Fragment Separator or FRS is a facility built in 1990. It produces and separates different beams of (usually) radioactive ions. The process starts with a stable beam, accelerated by the UNILAC and then the SIS, impinging on a production target. From this, many fragments are produced. The secondary beam is produced by magnetic selection of the ions. An Experimental Storage Ring (ESR) in which large numbers of highly charged radioactive ions can be stored for extended periods of time with energies of 0.005 – 0.5 GeV/u. This facility provides the means to make precise measurements of their decay modes. One mysterious phenomenon discovered in these measurements is known as the GSI anomaly.
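As a rough check on the velocity figures quoted above, the speed reached at a given kinetic energy per nucleon follows from the relativistic relation between kinetic energy and the Lorentz factor. A minimal sketch, assuming the standard atomic-mass-unit rest energy of about 931.5 MeV and using the 11.4 MeV/u and 2 GeV/u endpoints quoted for the accelerators:

```python
import math

AMU_REST_ENERGY_MEV = 931.494  # rest energy of one atomic mass unit (standard value, MeV)

def beta_from_kinetic_energy(t_mev_per_u: float) -> float:
    """Return v/c for an ion with kinetic energy t_mev_per_u, given in MeV per nucleon."""
    gamma = 1.0 + t_mev_per_u / AMU_REST_ENERGY_MEV  # Lorentz factor from T = (gamma - 1) * m * c^2
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Endpoint energies quoted for the GSI machines (MeV per nucleon)
for label, t_mev in [("UNILAC top energy, 11.4 MeV/u", 11.4),
                     ("SIS 18 top energy, 2 GeV/u", 2000.0)]:
    print(f"{label}: v/c ~ {beta_from_kinetic_energy(t_mev):.2f}")
```

With these inputs the top UNILAC energy corresponds to roughly 0.15 c and the top SIS 18 energy to roughly 0.95 c, of the same order as the approximate 10% and 90% figures given above.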
Future evolution In the years to come, GSI will evolve into an international facility named FAIR, the Facility for Antiproton and Ion Research, comprising a new synchrotron with a magnetic rigidity of 100 T⋅m, a Super-FRS, and several new storage rings, among them one that can be used for antimatter research. The major part of the facility will be commissioned in 2022; full operation is planned for 2025. The creation of FAIR was co-signed on 7 November 2007 by 10 countries: Finland, France, Germany, India, Romania, Russia, Slovenia, Sweden, the United Kingdom, and Poland. Representatives included Annette Schavan, the German federal minister of science, and Roland Koch, the prime minister of the state of Hesse. See also GANIL Riken JINR CERN FRIB NSCL ISIS neutron source RAON References External links HGF GSI FAIR Buildings and structures in Darmstadt Nuclear research institutes Research institutes in Germany Research institutes established in 1969 1969 establishments in West Germany Organisations based in Hesse
GSI Helmholtz Centre for Heavy Ion Research
[ "Engineering" ]
1,010
[ "Nuclear research institutes", "Nuclear organizations" ]
52,141
https://en.wikipedia.org/wiki/Inversion%20%28meteorology%29
In meteorology, an inversion (or temperature inversion) is a phenomenon in which a layer of warmer air overlies cooler air. Normally, air temperature gradually decreases as altitude increases, but this relationship is reversed in an inversion. An inversion traps air pollution, such as smog, near the ground. An inversion can also suppress convection by acting as a "cap". If this cap is broken for any of several reasons, convection of any moisture present can then erupt into violent thunderstorms. Temperature inversion can cause freezing rain in cold climates. Normal atmospheric conditions Usually, within the lower atmosphere (the troposphere), the air near the surface of the Earth is warmer than the air above it, largely because the atmosphere is heated from below as solar radiation warms the Earth's surface, which in turn then warms the layer of the atmosphere directly above it, e.g., by thermals (convective heat transfer). Air temperature also decreases with an increase in altitude because higher air is at lower pressure, and lower pressure results in a lower temperature, following the ideal gas law and the adiabatic lapse rate. Description Under the right conditions, the normal vertical temperature gradient is inverted so that the air is colder near the surface of the Earth. This can occur when, for example, a warmer, less-dense air mass moves over a cooler, denser air mass. This type of inversion occurs in the vicinity of warm fronts, and also in areas of oceanic upwelling such as along the California coast in the United States. With sufficient humidity in the cooler layer, fog is typically present below the inversion cap. An inversion is also produced whenever radiation from the surface of the earth exceeds the amount of radiation received from the sun, which commonly occurs at night, or during the winter when the sun is very low in the sky. This effect is virtually confined to land regions as the ocean retains heat far longer. In the polar regions during winter, inversions are nearly always present over land. A warmer air mass moving over a cooler one can "shut off" any convection which may be present in the cooler air mass: this is known as a capping inversion. However, if this cap is broken, either by extreme convection overcoming the cap or by the lifting effect of a front or a mountain range, the sudden release of bottled-up convective energy—like the bursting of a balloon—can result in severe thunderstorms. Such capping inversions typically precede the development of tornadoes in the Midwestern United States. In this instance, the "cooler" layer is quite warm but is still denser and usually cooler than the lower part of the inversion layer capping it. Subsidence inversion An inversion can develop aloft as a result of air gradually sinking over a wide area and being warmed by adiabatic compression, usually associated with subtropical high-pressure areas. A stable marine layer may then develop over the ocean as a result. As this layer moves over progressively warmer waters, however, turbulence within the marine layer can gradually lift the inversion layer to higher altitudes, and eventually even pierce it, producing thunderstorms, and under the right circumstances, tropical cyclones. The smog and dust that accumulate under the inversion quickly taint the sky reddish, an effect easily seen on sunny days. Atmospheric consequences Temperature inversions stop atmospheric convection (which is normally present) from happening in the affected area and can lead to high concentrations of atmospheric pollutants.
Cities especially suffer from the effects of temperature inversions because they both produce more atmospheric pollutants and have higher thermal masses than rural areas, resulting in more frequent inversions with higher concentrations of pollutants. The effects are even more pronounced when a city is surrounded by hills or mountains since they form an additional barrier to air circulation. During a severe inversion, trapped air pollutants form a brownish haze that can cause respiratory problems. The Great Smog of 1952 in London, England, is one of the most serious examples of such an inversion. It was blamed for an estimated 10,000 to 12,000 deaths. Sometimes the inversion layer is at a high enough altitude that cumulus clouds can condense but can only spread out under the inversion layer. This decreases the amount of sunlight reaching the ground and prevents new thermals from forming. As the clouds disperse, sunny weather replaces cloudiness in a cycle that can occur more than once a day. Wave propagation Light As the temperature of air increases, the index of refraction of air decreases, a side effect of hotter air being less dense. Normally this results in distant objects being shortened vertically, an effect that is easy to see at sunset when the sun is visible as an oval. In an inversion, the normal pattern is reversed, and distant objects are instead stretched out or appear to be above the horizon, leading to the phenomenon known as a Fata Morgana or mirage. Inversions can magnify the so-called "green flash"—a phenomenon occurring at sunrise or sunset, usually visible for a few seconds, in which the sun's green light is isolated due to dispersion. Shorter wavelengths are refracted most, with the blue component of sunlight "completely scattered out by Rayleigh scattering", making green the first or last light from the upper rim of the solar disc to be seen. Radio waves Very high frequency radio waves can be refracted by inversions, making it possible to hear FM radio or watch VHF low-band television broadcasts from long distances on foggy nights. The signal, which would normally be refracted up and away into space, is instead refracted down towards the earth by the temperature-inversion boundary layer. This phenomenon is called tropospheric ducting. Along coastlines during autumn and spring, reduced propagation losses allow multiple distant stations to be received simultaneously, so reception of many FM radio stations is plagued by severe signal degradation. At higher frequencies, such as microwaves, such refraction causes multipath propagation and fading. Sound When an inversion layer is present, if a sound or explosion occurs at ground level, the sound wave is refracted by the temperature gradient (which affects sound speed) and returns to the ground. The sound, therefore, travels much better than normal. This is noticeable around airports, where the sound of aircraft taking off and landing can often be heard at greater distances around dawn than at other times of day, and in "inversion thunder", which is significantly louder and travels farther than thunder produced under normal conditions. Shock waves The shock wave from an explosion can be reflected by an inversion layer in much the same way as it bounces off the ground in an air-burst and can cause additional damage as a result. This phenomenon killed two people in the Soviet RDS-37 nuclear test when a building collapsed.
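The refraction of sound described above can be made quantitative with the ideal-gas expression for the speed of sound; a minimal sketch, assuming dry air with heat-capacity ratio γ ≈ 1.4 and specific gas constant R ≈ 287 J kg⁻¹ K⁻¹:

```latex
c = \sqrt{\gamma R T} \approx 20.05\,\sqrt{T}\ \mathrm{m\,s^{-1}},
\qquad
\frac{c(283\ \mathrm{K})}{c(273\ \mathrm{K})} \approx 1.018
```

A layer only 10 K warmer than the air at the surface therefore carries sound about 2% faster; because wavefronts bend toward the region of lower sound speed, sound generated near the cold ground under an inversion is bent back downward rather than escaping upward, which is why it carries so much farther.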
See also Aerosol Particulates Index of meteorology articles References External links 'Fire inversions' lock smoke in valleys Atmospheric thermodynamics Radio frequency propagation
Inversion (meteorology)
[ "Physics" ]
1,415
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]