id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
9,387,033 | https://en.wikipedia.org/wiki/Compound%20management | Compound management in the field of drug discovery refers to the systematic collection, storage, retrieval, and quality control of small molecule chemical compounds used in high-throughput screening and other research activities to identify hits that can be developed into candidate drugs.
Drug discovery depends on methods by which many different chemicals are assayed for their activity. These chemicals are stored as physical quantities in a chemical library or libraries which are often assembled from both outside vendors and internal chemical synthesis efforts. These chemical libraries are used in high-throughput screening in the drug discovery hit to lead process.
The chemical libraries in larger pharmaceutical companies are a critical part of the discovery process. These chemicals are stored under environmentally controlled conditions in small or large containers, each labeled with a code that links back to a database. Each chemical in the storage bank must be monitored for shelf life, quantity, purity, banked location, and other parameters. In some companies, the collections can also include biological compounds, such as purified proteins or nucleic acids. The management of these chemical libraries, including renewal of outdated chemicals, the databases containing the information, the robotics often involved in fetching chemicals, and quality control of the storage environment, is called Compound Management or Compound Control. Compound Management is often a significant expense, as well as a career for one or more individuals who manage a chemical library at a research site.
There are many books and journal articles devoted entirely or in part to compound management. It has become a critical technological component of high-throughput screening and chemical genomics. Compound management poses great challenges, which are being surmounted by concerted efforts in the public and private domains. In 2008, authors at the National Institutes of Health's Chemical Genomics Center released a paper showing the necessity of a highly automated, reliable and parallel compound management platform in order to serve over 200,000 different compounds.
In short, Compound Management requires inventory control of small molecules and biologics needed for assays and experiments, especially in high-throughput screening. It utilizes knowledge of chemistry, robotics, biology, and database management. The manager must also be acutely aware of safety standards in the handling and storing of radioactive, volatile, flammable and unstable compounds. Often, in large pharmaceutical companies, the chemical and biological compounds contained in compound libraries can number in the millions, making compound management and compound control important contributors to research and drug discovery.
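Since the passage above describes compound management as, at its core, inventory control backed by a database, a minimal sketch of such a record-keeping layer may help make the idea concrete. The schema, field names, and thresholds below are hypothetical illustrations, not any vendor's actual system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Compound:
    """One banked sample in a compound-management database (hypothetical schema)."""
    compound_id: str      # barcode linking the physical vial to the database
    location: str         # freezer / rack / plate / well coordinates
    quantity_mg: float    # remaining amount
    purity_pct: float     # last measured purity
    expiry: date          # end of validated shelf life

def needs_attention(c: Compound, today: date,
                    min_quantity_mg: float = 1.0,
                    min_purity_pct: float = 90.0) -> bool:
    # Flag samples that are expired, depleted, or degraded, mirroring the
    # shelf-life / quantity / purity monitoring described above.
    return (c.expiry <= today
            or c.quantity_mg < min_quantity_mg
            or c.purity_pct < min_purity_pct)

bank = [
    Compound("CMP-000001", "FRZ1/R3/P12/A05", 12.5, 98.2, date(2026, 1, 1)),
    Compound("CMP-000002", "FRZ1/R3/P12/A06", 0.4, 99.0, date(2027, 6, 1)),
]
flagged = [c.compound_id for c in bank if needs_attention(c, date(2025, 1, 1))]
print(flagged)  # ['CMP-000002'] -- depleted below the 1 mg threshold
```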
Outsourcing
Because of the significant expenses and infrastructure required for accurate compound management (space requirements, robotics, IT support, analytical support, etc.) many companies choose to outsource this function to a company that specializes in this arena. It is important to work with a company that has significant experience in compound management due to the complexity of tracking not only inventory data, but also compound location, storage conditions, and compound integrity. This experience is also of paramount importance in knowing how to deal appropriately with the wide array of materials handled, including solids, liquids, volatile materials, sticky solids, oils, and gums, as well as hazardous, flammable, hygroscopic and toxic compounds.
Customers can specify not only the quantity of material but also the exact vial and cap or plate for their specific application. The service provides enormous savings from a time perspective, as researchers do not spend their valuable time weighing hundreds of compounds or getting them into the correct format for their assay. It also dramatically reduces disposal costs, since the exact amount of material required can be ordered rather than needing to order e.g. 100 g of material when only 0.1 g is needed for the experiment.
The high throughput analytical chemistry component of the company allows rapid validation that compounds are the correct material at the desired purity. While controlled storage conditions minimize degradation, customers may use this service to validate that the material they originally sent to the outsourcing partner was correct and pure. Subsequently, the service allows re-evaluation of compounds that may have decomposed during long-term storage. The purification services complement the analytical services by allowing cost-effective, environmentally friendly recovery of partially degraded reactive intermediates and HTS compounds at a fraction of the cost of synthesizing or purchasing these materials.
Conferences
There are several conferences related to compound management. The best known is Compound Management & Integrity although many chemistry and pharmaceutical conferences include talks or specific sections on the topic.
References
Drug discovery | Compound management | [
"Chemistry",
"Biology"
] | 880 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
9,387,395 | https://en.wikipedia.org/wiki/Stacked%20Volumetric%20Optical%20Disc | The Stacked Volumetric Optical Disc (or SVOD) is an optical disc format developed by Hitachi Maxell, which uses an array of wafer-thin optical discs to allow data storage.
Each "layer" (a thin polycarbonate disc) holds around 9.4 GB of information, and the wafers are stacked in layers of 20, 25, 100, or more, giving a substantially larger overall data capacity; for example, 100× cartridges could hold 940 GB using the system as announced.
Hitachi Maxell announced the creation of the SVOD standard in 2006, intending to launch it the next year. Aimed primarily at commercial users, the target price was ¥40,000 for a cartridge of 100 thin discs, with the potential to expand into the home user market. When they announced the system, Hitachi Maxell publicly recognized the possibility that the system could be eventually modified for use with a blue-violet laser, similar to Blu-ray discs, which could have expanded the capacity of the system to 3-5 TB. It is possible that they in fact developed this "second generation" SVOD for use with standard Blu-ray lasers, with each thin disc having a storage capacity of 25 GB, or a 100-disc cartridge having a storage of 5 TB. Hitachi Maxell developed systems both for burning to the media using standard DVD optical heads, and pre-recording to the media using a special heat imprint technique they called "nanoimprinting." Though nanoimprinting initially required 6 minutes per disc for pressing, they had improved it to 8 seconds, and intended to achieve a comparable throughput to standard DVD pressing. The primary application of the SVOD system seemed to be business data archival, replacing digital tape archives.
In 2007, Japanese broadcaster NHK announced a similar system, based on Blu-ray discs, of stacked optical storage media specifically designed to rotate at high speeds, up to 15,000 RPM.
SVOD was anticipated to be a likely candidate, along with Holographic Versatile Discs (HVDs), for a next-generation optical disc standard. However, as of 2021, little has been done with the format.
References
External links
Hitachi Maxell develops wafer-thin storage disc details and interview from IDG News Service (dead link, archived) (4 October 2006)
Maxell details in Japanese language (dead link, archived) (19 April 2006)
Vaporware
Rotating disc computer storage media
Audio storage
Video storage
120 mm discs
DVD
Optical discs | Stacked Volumetric Optical Disc | [
"Technology"
] | 512 | [
"Computer industry",
"Vaporware"
] |
9,387,442 | https://en.wikipedia.org/wiki/Radafaxine | Radafaxine (developmental code GW-353,162; also known as (2S,3S)-hydroxybupropion or (S,S)-hydroxybupropion) is a norepinephrine–dopamine reuptake inhibitor (NDRI) which was under development by GlaxoSmithKline in the 2000s for a variety of different indications but was never marketed. These uses included treatment of restless legs syndrome, major depressive disorder, bipolar disorder, neuropathic pain, fibromyalgia, and obesity. Regulatory filing was planned for 2007, but development was discontinued in 2006 due to "poor test results".
Pharmacology
Pharmacodynamics
Radafaxine is described as a norepinephrine–dopamine reuptake inhibitor (NDRI). In contrast to bupropion, it appears to have a higher potency for inhibition of norepinephrine reuptake than for dopamine reuptake. Radafaxine has about 70% of the efficacy of bupropion in blocking dopamine reuptake, and 392% of its efficacy in blocking norepinephrine reuptake, making it fairly selective for inhibiting the reuptake of norepinephrine over dopamine. This, according to GlaxoSmithKline, may account for the increased effect of radafaxine on pain and fatigue. At least one study suggests that radafaxine has a low abuse potential similar to that of bupropion.
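Taking the two percentages at face value as potencies relative to bupropion, the implied preference for norepinephrine over dopamine reuptake inhibition can be expressed as a simple ratio (a back-of-the-envelope reading, not a figure from the source):

$$\frac{392\%}{70\%} \approx 5.6,$$

i.e. roughly a five- to six-fold shift toward norepinephrine relative to bupropion's own balance, which is what makes radafaxine "fairly selective" in the sense described above.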
Chemistry
Radafaxine is a potent metabolite of bupropion, the compound in GlaxoSmithKline's Wellbutrin. More specifically, hydroxybupropion is a major metabolite of bupropion that is further metabolized via an intramolecular cyclization to give radafaxine as the (2S,3S) isomer, as well as the corresponding (2R,3R) isomer, which is less pharmacologically active as a monoamine reuptake inhibitor than radafaxine. Manifaxine (GW-320,659) was developed as an analogue of radafaxine and has been studied for the treatment of ADHD and obesity.
See also
(2R,3R)-Hydroxybupropion
3-Chlorophenmetrazine
Manifaxine
References
External links
Abandoned drugs
Alcohols
Antidepressants
Beta-Hydroxyamphetamines
3-Chlorophenyl compounds
Phenylmorpholines
Nicotinic antagonists
Norepinephrine–dopamine reuptake inhibitors | Radafaxine | [
"Chemistry"
] | 586 | [
"Drug safety",
"Abandoned drugs"
] |
9,387,758 | https://en.wikipedia.org/wiki/Champalimaud%20Foundation | The Champalimaud Foundation MHM () is a private biomedical research foundation. It was created according to the will of the late entrepreneur António de Sommer Champalimaud, in 2004. The complete name of the foundation honors the mother and father of the founder and is Fundação Anna de Sommer Champalimaud e Dr. Carlos Montez Champalimaud. It is located in Belém, Lisbon, Portugal.
Overview
The mission of the Foundation is "to develop programmes of advanced biomedical research and provide clinical care of excellence, with a focus on translating pioneering scientific discoveries into solutions which can improve the quality of life of individuals around the world."
The foundation undertakes research in the fields of neuroscience and oncology at the modernistic Champalimaud Centre for the Unknown in Belém, opened in 2011. Research into visual impairment is undertaken via an outreach program.
The Champalimaud Clinical Center (CCC) is a modern scientific, medical and technological institution providing specialized clinical treatment for oncology. The Center develops advanced disease-research programs and seeks to customize therapies to achieve more effective control and treatment of disease. It was designed by Indian architect Charles Correa.
Management
The management of the Foundation consists of a Board of Directors, a General Council, a Scientific Committee, an Ethics Committee and the Vision Award Jury. The acting President is Leonor Beleza, appointed by António Champalimaud in his will.
António Champalimaud Vision Award
The award was established in 2007 to recognise contributions to vision research. In even-numbered years it is awarded for contributions to overall vision research, and in odd-numbered years for contributions to the alleviation of visual problems, primarily in developing countries.
Recipients
Source: Champalimaud Foundation
2023: St John of Jerusalem Eye Hospital Group
2022: Gerrit Melles and Claes Dohlman, for opening new paths to treat those affected by corneal disease
2019: Instituto da Visão – IPEPO, the Altino Ventura Foundation and the UNICAMP Ophthalmology Service, longstanding organisations that have worked to prevent blindness and visual impairment by providing eye services to underserved populations in Brazil
2018: Jean Bennett, Albert Maguire, Robin Ali, James Bainbridge, Samuel Jacobson, William W. Hauswirth and Michael Redmond, for the first successful gene therapy to cure an inherited human disease
2017: Sightsavers and CBM (Christian Blind Mission), organisations with long and distinguished histories of supporting blindness prevention, alleviation and rehabilitation programmes in developing countries
2016: Christine Holt, Carol Mason, John Flanagan and Carla Shatz, for ground-breaking work that has illuminated our understanding of the way in which our eyes send signals to the appropriate areas of the brain
2015: Kilimanjaro Centre for Community Ophthalmology (KCCO), Seva Foundation and Seva Canada
2014: Napoleone Ferrara, Joan W. Miller, Evangelos S. Gragoudas, Patricia D'Amore, Anthony P. Adamis, George L. King and Lloyd Paul Aiello, for the development of anti-angiogenic therapy for retinal disease
2013: Nepal Netra Jyoty Sangh, Eastern Regional Eye Care Programme, Lumbini Eye Institute and Tilganga Institute of Ophthalmology
2012: David Williams, for the application of adaptive optics (AO) to the eye; and James Fujimoto, David Huang, Carmen A. Puliafito, Joel S. Schuman & Eric Swanson, for the development of optical coherence tomography (OCT)
2011: African Programme for Onchocerciasis Control
2010: J. Anthony Movshon and William Newsome
2009: Helen Keller International
2008: Jeremy Nathans and King-Wai Yau
2007: Aravind Eye Care System
Honours and Awards
Honorary Member of the Order of Merit, Portugal (4 September 2019)
See also
Healthcare in Portugal
Emergency medical services in Portugal
List of medicine awards
References
External links
Biomedical research foundations
Foundations based in Portugal
Organisations based in Lisbon
Medical and health organisations based in Portugal | Champalimaud Foundation | [
"Engineering",
"Biology"
] | 839 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
9,387,775 | https://en.wikipedia.org/wiki/K%C3%A1rm%C3%A1n%E2%80%93Howarth%20equation | In isotropic turbulence the Kármán–Howarth equation (after Theodore von Kármán and Leslie Howarth 1938), which is derived from the Navier–Stokes equations, is used to describe the evolution of non-dimensional longitudinal autocorrelation.
Mathematical description
Consider a two-point velocity correlation tensor for homogeneous turbulence

$$R_{ij}(\mathbf{r},t) = \overline{u_i(\mathbf{x},t)\, u_j(\mathbf{x}+\mathbf{r},t)}.$$

For isotropic turbulence, this correlation tensor can be expressed in terms of two scalar functions, using the invariant theory of the full rotation group, as first derived by Howard P. Robertson in 1940:

$$R_{ij}(\mathbf{r},t) = u'^2 \left[ \big(f(r,t)-g(r,t)\big)\,\frac{r_i r_j}{r^2} + g(r,t)\,\delta_{ij} \right],$$

where $u'$ is the root mean square turbulent velocity and $u_1, u_2, u_3$ are the turbulent velocities in the three directions. Here, $f(r,t)$ is the longitudinal correlation and $g(r,t)$ is the lateral correlation of velocity at two different points. From the continuity equation, we have

$$g(r,t) = f(r,t) + \frac{r}{2}\frac{\partial f(r,t)}{\partial r}.$$

Thus $f(r,t)$ uniquely determines the two-point correlation function. Theodore von Kármán and Leslie Howarth derived the evolution equation for $f(r,t)$ from the Navier–Stokes equations as

$$\frac{\partial}{\partial t}\big(u'^2 f\big) - \frac{u'^3}{r^4}\frac{\partial}{\partial r}\big(r^4 h\big) = \frac{2\nu u'^2}{r^4}\frac{\partial}{\partial r}\left(r^4 \frac{\partial f}{\partial r}\right),$$

where $h(r,t)$ uniquely determines the triple correlation tensor

$$S_{ijk}(\mathbf{r},t) = \overline{u_i(\mathbf{x},t)\, u_j(\mathbf{x},t)\, u_k(\mathbf{x}+\mathbf{r},t)}.$$
Loitsianskii's invariant
L.G. Loitsianskii derived an integral invariant for the decay of the turbulence by taking the fourth moment of the Kármán–Howarth equation in 1939, i.e.,

$$\frac{\partial}{\partial t}\left(u'^2 \int_0^\infty r^4 f\, dr\right) = \left[\, u'^3 r^4 h + 2\nu u'^2 r^4 \frac{\partial f}{\partial r}\,\right]_{0}^{\infty}.$$

If $f$ decays faster than $r^{-3}$ as $r \to \infty$, and if in this limit we also assume that $r^4 h$ vanishes, we have the quantity

$$\Lambda = u'^2 \int_0^\infty r^4 f\, dr,$$

which is invariant. Lev Landau and Evgeny Lifshitz showed that this invariant is equivalent to conservation of angular momentum. However, Ian Proudman and W.H. Reid showed that this invariant does not always hold, since $r^4 h$ is not in general zero, at least in the initial period of the decay. In 1967, Philip Saffman showed that this integral depends on the initial conditions and that the integral can diverge under certain conditions.
Decay of turbulence
For viscosity-dominated flows, during the decay of turbulence, the Kármán–Howarth equation reduces to a heat equation once the triple correlation tensor is neglected, i.e.,

$$\frac{\partial \big(u'^2 f\big)}{\partial t} = \frac{2\nu u'^2}{r^4}\frac{\partial}{\partial r}\left(r^4 \frac{\partial f}{\partial r}\right).$$

With suitable boundary conditions, the solution to the above equation is given by

$$f(r,t) = e^{-r^2/8\nu t},$$

so that

$$u'^2 \propto t^{-5/2}.$$
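As a consistency check (not part of the original derivation), one can substitute $f = e^{-r^2/8\nu t}$ together with $u'^2 = C\,t^{-5/2}$ into the reduced equation. Differentiating the left-hand side,

$$\frac{\partial}{\partial t}\big(u'^2 f\big) = C\,t^{-7/2}\, e^{-r^2/8\nu t}\left(-\frac{5}{2} + \frac{r^2}{8\nu t}\right),$$

while on the right-hand side, using $\partial f/\partial r = -\tfrac{r}{4\nu t} f$,

$$\frac{2\nu u'^2}{r^4}\frac{\partial}{\partial r}\left(r^4 \frac{\partial f}{\partial r}\right) = u'^2\, e^{-r^2/8\nu t}\left(-\frac{5}{2t} + \frac{r^2}{8\nu t^2}\right) = C\,t^{-7/2}\, e^{-r^2/8\nu t}\left(-\frac{5}{2} + \frac{r^2}{8\nu t}\right).$$

The two sides agree, confirming that the Gaussian profile together with $u'^2 \propto t^{-5/2}$ solves the viscous decay equation.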
See also
Kármán–Howarth–Monin equation (Andrei Monin's anisotropic generalization of the Kármán–Howarth relation)
Batchelor–Chandrasekhar equation (homogeneous axisymmetric turbulence)
Corrsin equation (Kármán–Howarth relation for scalar transport equation)
Chandrasekhar invariant (density fluctuation invariant in isotropic homogeneous turbulence)
References
Equations of fluid dynamics
Fluid dynamics
Turbulence | Kármán–Howarth equation | [
"Physics",
"Chemistry",
"Engineering"
] | 491 | [
"Equations of fluid dynamics",
"Turbulence",
"Equations of physics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
9,387,798 | https://en.wikipedia.org/wiki/Lorenz%20beam | The Lorenz beam was a blind-landing radio navigation system developed by C. Lorenz AG in Berlin for bad weather landing. The first experimental system had been installed in 1932 at Berlin-Tempelhof Central Airport and was demonstrated at the International Air Service Conference in January, 1933. Further improvements of the system were accepted during the meetings in November. 1933 and September 1934. By 1937 in addition to German airports the Lorenz System was employed in Europe, e.g. London, Paris, Milan, Stockholm, Warsaw, Vienna and Zürich, as well as internationally in Japan and Russia, with additional systems in preparation in Australia, South America and South Africa. The Lorenz company referred to it simply as the Ultrakurzwellen-Landefunkfeuer, German for "ultra-short-wave landing radio beacon", or LFF. In the UK it was known as Standard Beam Approach (SBA).
Further work led to the addition of a glide path to the Lorenz beam, for which a patent was awarded in 1937.
Prior to the start of the Second World War the Germans deployed the system at many Luftwaffe airfields in and outside Germany and equipped most of their bombers with the radio equipment needed to use it. It was also adapted into versions with much narrower and longer-range beams that were used to guide the bombers on missions over Britain, under the names Knickebein and X-Gerät.
Beam navigation provides a single line in space, making it useful for landing or en-route navigation, but not as a general-purpose navigation system that allows the receiver to determine its location. This led to a rotating version of the same system for air navigation, known as Elektra, which allowed the determination of a "fix" through timing. Further development produced a system that worked over very long distances, hundreds or thousands of kilometres, known as Sonne (or often Elektra-Sonnen), which allowed aircraft and U-boats to take fixes far into the Atlantic. The British captured Sonne receivers and maps and started to use the system for their own navigation under the name Consol.
The system began to be replaced soon after the war by modern instrument landing systems, which provide horizontal positioning like LFF as well as vertical positioning and distance markers. Some LFF systems remained in use, with the longest-lived, at RAF Ternhill, not going out of service until 1960.
Description
The blind approach navigation system was developed starting in 1932 by Dr. Ernst Kramar of the Lorenz company. It was adopted by Deutsche Lufthansa in 1934 and sold around the world. The Lorenz company was founded in 1880 by Carl Lorenz and is now part of ITT.
Lorenz used a single radio transmitter at 33.3 MHz (a wavelength of about 9 m) and three vertically polarized antennas placed in a line parallel to the end of the runway. The center antenna was always fed with the RF signal, while the other two were alternately short-circuited by a mechanical rotary switch turned by a simple motor. This resulted in a "kidney"-shaped broadcast pattern centered on one of the two "side" antennas, depending on which antenna had been short-circuited. The keying of the contacts on the switch was set so that one antenna was shorted for 1/8 of the time, considered a "dot", and the other for 7/8 of the time, considered a "dash" (as opposed to the durations of dits, dahs and pauses defined in Morse code, where e.g. a dash is three times the duration of a dot). The signal could be detected for some distance off the end of the runway, as much as 30 km. By using two overlapping lobes of signal, Lorenz obtained a sharper effective beam than could be created by an aerial array alone.
A pilot approaching the runway would tune his radio to the broadcast frequency and listen for the signal. If he heard a series of dots, he knew he was off the runway centerline to the left (the dot-sector) and had to turn to the right to line up with the runway. If he was to the right, he would hear a series of dashes instead (the dash-sector), and turned left. The key to the operation of the system was an area in the middle where the two signals overlapped. The dots of the one signal "filled in" the dashes of the other, resulting in a steady tone known as the equi-signal. By adjusting his path until he heard the equi-signal, the pilot could align his aircraft with the runway for landing.
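The equi-signal principle lends itself to a simple illustration. The sketch below is a toy model, not based on any Lorenz schematic: the cosine lobe patterns, 10-degree offsets, and receiver threshold are all illustrative. Only the 1/8 dot and 7/8 dash duty cycles come from the description above. It shows how a receiver on the centerline, hearing both lobes equally, registers an unbroken tone:

```python
import math

CYCLE = 8  # keying cycle split into 8 slots: one "dot" slot plus seven "dash" slots

def audible(offset_deg: float, slot: int) -> bool:
    """True if the receiver hears the tone during this keying slot.

    offset_deg is the bearing off the runway centerline. The two
    alternately keyed lobes are modeled as cosine patterns peaking
    10 degrees to either side (illustrative values only)."""
    dot_gain = max(0.0, math.cos(math.radians(offset_deg + 10)))   # left lobe
    dash_gain = max(0.0, math.cos(math.radians(offset_deg - 10)))  # right lobe
    gain = dot_gain if slot % CYCLE == 0 else dash_gain            # 1/8 dot, 7/8 dash keying
    return gain >= 0.9                                             # receiver threshold (arbitrary)

for offset in (-20, 0, 20):  # left of, on, and right of the centerline
    pattern = "".join("#" if audible(offset, s) else "." for s in range(16))
    print(f"{offset:+4d} deg: {pattern}")
# -20 deg: #.......#.......   <- dots: pilot must turn right
#  +0 deg: ################   <- equi-signal: steady tone on the centerline
# +20 deg: .#######.#######   <- dashes: pilot must turn left
```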
Two small marker beacons were also used: one 300 m off the end of the runway, the HEZ (Haupteinflugzeichen, main approach signal), and another 3 km away, the VEZ (Voreinflugzeichen, advance approach signal); both were broadcast on 38 MHz and modulated at 1700 and 700 Hz, respectively. These signals were broadcast directly upward, and would be heard briefly as the aircraft flew over them. To approach the runway, the pilot would fly to a published altitude and then use the main directional signals to line up with the runway and start flying toward it. When he flew over the VEZ, he would start descending on a standard glide slope, continuing to land or abort at the HEZ, depending on whether or not he could see the runway.
Lorenz could fly a plane down a straight line with relatively high accuracy, enough so that the aircraft could then find the runway visually in all but the worst conditions. However, it required fairly constant monitoring of the radio by the pilot, who would often also be tasked with talking to the local control tower. In order to ease the workload, Lorenz later introduced a cockpit indicator that could listen to the signals and display the direction to the runway centerline as an arrow telling the pilot which direction to turn. The indicator also included two neon lamps to indicate when the aircraft crossed over each of the marker beacons. Later derivatives of the system had signals of equal length in the pattern left-right-silence, to operate a visual indicator in the cabin.
The Lorenz system was similar to the Diamond-Dunmore system, developed by the US Bureau of Standards in the early 1930s.
Use for blind bombing
In the Second World War the Lorenz beam principle was used by the German Luftwaffe as the basis of a number of blind bombing aids, notably Knickebein ('crooked leg') and the X-Gerät ('X-Apparatus'), in their bombing offensive against English cities during the winter of 1940/41. Knickebein was very similar to LFF, modifying it only slightly to be more highly directional and work over much longer distance. Using the same frequencies allowed their bombers to use the already-installed LFF receivers, although a second receiver was needed in order to pinpoint a single location.
The X-Gerät involved cross-beams of the same characteristics but on different frequencies, which would both enable the pilot to calculate his speed (from the time between crossing the Fore Cross Signal and crossing the Main Cross Signal), and indicate when he should drop his payload. The calculation was performed by a mechanical computer. Lorenz modified this system to create the Viktoria/Hawaii lateral guidance system for the V-2 rocket.
Allied jamming effort
When the British discovered the existence of the 'Knickebein' system, they rapidly jammed it, however, the 'X-Gerät' was not successfully jammed for quite some time. A later innovation by the Germans was the 'Baedeker' or 'Taub' modification, which used supersonic modulation. This was so quickly jammed that the Germans practically gave up on the use of beam-bombing systems, with the exception of the 'FuGe 25A', which operated for a short time towards the end of Operation Steinbock, known as the "Baby Blitz".
A further operational drawback of the system was that bombers had to follow a fixed course between the beam transmitter station and the target; once the beam had been detected, defensive measures were made more effective by knowledge of the course.
Sonne/Consol
'Sonne' (Eng. 'Sun') was a derivation of Lorenz used by the Luftwaffe for long-range navigation out over the Atlantic, using transmitters in Occupied Europe and in neutral Spain. After its existence had been discovered by the British, it was, under the direction of R. V. Jones, allowed to continue in use un-jammed, because it was felt that it was actually more useful to RAF Coastal Command than it was to the Germans. In British use the German system was named 'Consol', and it remained un-jammed for the duration of the war.
Sonne/Consol after World War II
The long range version developed by the Germans during the war was used by many countries for civilian purposes after the war, mostly under its English name Consol. Transmitters were installed in the US, the UK and the USSR.
Technical considerations
The reason the Lorenz beam principle was necessary, with its overlapping beams, was because the sharpness of a beam increases approximately logarithmically with the length of the aerial array with which it is generated. A law of diminishing returns operates, such that to attain the sharpness achieved by the Lorenz system with a single beam (approximately 1 mile wide over a range of two hundred miles), an array of prohibitive size would be required.
See also
Battle of the Beams
Radio navigation
Instrument landing system
SCR-277
References
External links
https://web.archive.org/web/20110621075111/http://www.sonne-consol.es/
World War II German electronics
Radio navigation
Avionics | Lorenz beam | [
"Technology"
] | 1,977 | [
"Avionics",
"Aircraft instruments"
] |
9,387,972 | https://en.wikipedia.org/wiki/PH-sensitive%20polymers | pH sensitive or pH responsive polymers are materials which will respond to the changes in the pH of the surrounding medium by varying their dimensions. Materials may swell, collapse, or change depending on the pH of their environment. This behavior is exhibited due to the presence of certain functional groups in the polymer chain. pH-sensitive materials can be either acidic or basic, responding to either basic or acidic pH values. These polymers can be designed with many different architectures for different applications. Key uses of pH sensitive polymers are controlled drug delivery systems, biomimetics, micromechanical systems, separation processes, and surface functionalization.
Types
pH sensitive polymers can be broken into two categories: those with acidic groups (such as -COOH and -SO3H) and those with basic groups (such as -NH2). The mechanism of response is the same for both; only the stimulus varies. The general form of the polymer is a backbone with functional "pendant groups" that hang off of it. When these functional groups become ionized at certain pH values, they acquire a charge (+/-). Repulsions between like charges then cause the polymer to change shape.
Polyacids
Polyacids, also known as anionic polymers, are polymers that have acidic groups. Examples of acidic functional groups include carboxylic acids (-COOH), sulfonic acids (-SO3H), phosphonic acids, and boronic acids. Polyacids accept protons at low pH values. At higher pH values, they deprotonate and become negatively charged. The negative charges create a repulsion that causes the polymer to swell. This swelling behavior is observed when the pH is greater than the pKa of the polymer. Examples include polymethyl methacrylate polymers (Pharmacologyonline 1 (2011) 152–164) and cellulose acetate phthalate.
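The pKa threshold behavior follows from standard acid-base equilibrium. As a sketch of the underlying relation (general chemistry, not specific to any one polymer; it idealizes each pendant group as an independent weak acid), the Henderson–Hasselbalch equation gives the fraction $\alpha$ of deprotonated, charged acidic groups at a given pH:

$$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \quad\Longrightarrow\quad \alpha = \frac{[\mathrm{A}^-]}{[\mathrm{HA}]+[\mathrm{A}^-]} = \frac{1}{1+10^{\,\mathrm{p}K_a-\mathrm{pH}}}.$$

At $\mathrm{pH} = \mathrm{p}K_a$ half of the groups are charged, and one pH unit above the pKa roughly 90% are, which is why the swelling described above switches on as the pH rises past the pKa.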
Polybases
Polybases are the basic equivalent of polyacids and are also known as cationic polymers. They accept protons at low pH like polyacids do, but they then become positively charged. In contrast, at higher pH values they are neutral. Swelling behavior is seen when the pH is less than the pKa of the polymer.
Natural polymers
Although many sources discuss synthetic pH sensitive polymers, natural polymers can also display pH-responsive behavior. Examples include chitosan, hyaluronic acid, alginic acid and dextran. Chitosan, a frequently used example, is cationic. Since DNA is negatively charged, DNA can be attached to chitosan as a way to deliver genes to cells. Alginic acid, on the other hand, is anionic. It is often evaluated as a calcium salt for drug delivery applications (International Journal of Biological Macromolecules 75 (2015) 409–417). Natural polymers have appeal because they display good biocompatibility, which makes them useful for biomedical applications. However, a disadvantage of natural polymers is that researchers have more control over the structure of synthetic polymers and so can design those polymers for specific applications.
Multi-stimuli polymers
Polymers can be designed to respond to more than one external stimulus, such as pH and temperature. Often, these polymers are structured as a copolymer where each polymer displays one type of response.
Structure
pH sensitive polymers have been created with linear block copolymer, star, branched, dendrimer, brush, and comb architectures. Polymers of different architectures will self-assemble into different structures. This self-assembly can occur due to the nature of the polymer and the solvent, or due to a change in pH. pH changes can also cause the larger structure to swell or deswell. For example, block copolymers often form micelles, as will star polymers and branched polymers. However, star and branched polymers can form rod or worm-shaped micelles rather than the typical spheres. Brush polymers are usually used for modifying surfaces since their structure doesn’t allow them to form a larger structure like a micelle.
Response to change in pH
Often, the response to different pH values is swelling or deswelling. For example, polyacids release protons to become negatively charged at high pH. Since polymer chains are often in close proximity to other parts of the same chain or to other chains, like-charged parts of the polymer repel each other. This repulsion leads to a swelling of the polymer.
Polymers can also form micelles (spheres) in response to a change in pH. This behavior can occur with linear block copolymers. If the different blocks of the copolymer have different properties, they can form micelles with one type of block on the inside and one type on the outside. For example, in water the hydrophobic blocks of a copolymer could end up on the inside of a micelle, with hydrophilic blocks on the outside. Additionally, a change in pH could cause micelles to swap their inner and outer molecules depending on the properties of the polymers involved.
Responses other than simply swelling and deswelling with a change in pH are possible as well. Researchers have created polymers that undergo a sol-gel transition (from a solution to a gel) with a change in pH, but which also change from being a stiff gel to a soft gel for certain pH values.
Synthesis
pH sensitive polymers can be synthesized using several common polymerization methods. Functional groups may need to be protected so that they do not react depending on the type of polymerization. The masking can be removed after polymerization so that they regain their pH-sensitive functionality. Living polymerization is often used for making pH sensitive polymers because molecular weight distribution of the final polymers can be controlled. Examples include group transfer polymerization (GTP), atom transfer radical polymerization (ATRP), and reversible addition-fragmentation chain transfer (RAFT). Graft copolymers are a popular type to synthesize because their structure is a backbone with branches. The composition of the branches can be changed to achieve different properties. Hydrogels can be produced using emulsion polymerization.
Characterization
Contact angle
Several methods can be used to measure the contact angle of a water drop on the surface of a polymer. The contact angle value is used to quantify wettability or hydrophobicity of the polymer.
Degree of swelling
Determined by massing polymers before and after swelling, this indicates how much the polymer swelled upon a change in pH:

$$\text{Degree of swelling} = \frac{m_{\text{swollen}} - m_{\text{deswelled}}}{m_{\text{deswelled}}} \times 100\%$$
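For instance (hypothetical numbers), a hydrogel massing 0.50 g in its deswelled state and 2.00 g after equilibration at a swelling pH has a degree of swelling of

$$\frac{2.00 - 0.50}{0.50} \times 100\% = 300\%.$$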
pH critical point
The pH at which a significant structural change in how the molecules are arranged is observed. This structural change does not involve breaking bonds, but rather a change in conformation. For example, a swelling/deswelling transition would constitute a reversible conformational change. The value of the pH critical point can be determined by examining swelling percentage as a function of pH. Researchers aim to design molecules that transition at a pH that matters for the given application.
Surface changes
Confocal microscopy, scanning electron microscopy, Raman spectroscopy, and atomic force microscopy are all used to determine how the surface of a polymer changes in response to pH.
Applications
Purification and separation
pH sensitive polymers have been considered for use in membranes. A change in pH could change the ability of the polymer to let ions through, allowing it to act as a filter.
Surface modification
pH sensitive polymers have been used to modify the surfaces of materials. For example, they can be used to change the wettability of a surface.
Biomedical use
pH sensitive polymers have been used for drug delivery. For example, they can be used to release insulin in specific quantities.
References
Smart materials | PH-sensitive polymers | [
"Materials_science",
"Engineering"
] | 1,582 | [
"Smart materials",
"Materials science"
] |
9,388,131 | https://en.wikipedia.org/wiki/Type%20Ib%20and%20Ic%20supernovae | Type Ib and Type Ic supernovae are categories of supernovae that are caused by the stellar core collapse of massive stars. These stars have shed or been stripped of their outer envelope of hydrogen, and, when compared to the spectrum of Type Ia supernovae, they lack the absorption line of silicon. Compared to Type Ib, Type Ic supernovae are hypothesized to have lost more of their initial envelope, including most of their helium. The two types are usually referred to as stripped core-collapse supernovae.
Spectra
When a supernova is observed, it can be categorized in the Minkowski–Zwicky supernova classification scheme based upon the absorption lines that appear in its spectrum. A supernova is first categorized as either a Type I or Type II, then subcategorized based on more specific traits. Supernovae belonging to the general category Type I lack hydrogen lines in their spectra; in contrast to Type II supernovae which do display lines of hydrogen. The Type I category is subdivided into Type Ia, Type Ib and Type Ic.
Type Ib/Ic supernovae are distinguished from Type Ia by the lack of an absorption line of singly ionized silicon at a wavelength of 635.5 nanometres. As Type Ib and Ic supernovae age, they also display lines from elements such as oxygen, calcium and magnesium. In contrast, Type Ia spectra become dominated by lines of iron. Type Ic supernovae are distinguished from Type Ib in that the former also lack lines of helium at 587.6 nm.
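The classification logic described above is essentially a small decision tree. The sketch below illustrates the scheme as summarized here, not a production astronomy tool; the boolean inputs stand in for actual spectral-line detection, which in practice requires fitting the observed spectrum:

```python
def classify_supernova(has_hydrogen: bool,
                       has_si_ii_6355: bool,
                       has_he_i_5876: bool) -> str:
    """Classify a supernova from the spectral features described above.

    has_hydrogen   -- hydrogen lines present in the spectrum
    has_si_ii_6355 -- singly ionized silicon absorption at 635.5 nm
    has_he_i_5876  -- neutral helium line at 587.6 nm
    """
    if has_hydrogen:
        return "Type II"   # hydrogen lines present
    if has_si_ii_6355:
        return "Type Ia"   # Si II absorption distinguishes Ia
    if has_he_i_5876:
        return "Type Ib"   # helium retained, hydrogen stripped
    return "Type Ic"       # both hydrogen and helium stripped

print(classify_supernova(False, False, True))   # Type Ib
print(classify_supernova(False, False, False))  # Type Ic
```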
Formation
Prior to becoming a supernova, an evolved massive star is organized like an onion, with layers of different elements undergoing fusion. The outermost layer consists of hydrogen, followed by helium, carbon, oxygen, and so forth. Thus when the outer envelope of hydrogen is shed, this exposes the next layer that consists primarily of helium (mixed with other elements). This can occur when a very hot, massive star reaches a point in its evolution when significant mass loss is occurring from its stellar wind. Highly massive stars (with 25 or more times the mass of the Sun) can lose up to 10−5 solar masses each year, the equivalent of one solar mass every 100,000 years.
Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via winds or mass transfer to a companion. The progenitors of Types Ib and Ic have lost most of their outer envelopes due to strong stellar winds or else from interaction with a close binary companion. Rapid mass loss can occur in the case of a Wolf–Rayet star, and these massive objects show a spectrum that is lacking in hydrogen. Type Ib progenitors have ejected most of the hydrogen in their outer atmospheres, while Type Ic progenitors have lost both their hydrogen and helium shells; in other words, Type Ic have lost more of their envelope (i.e., much of the helium layer) than the progenitors of Type Ib. In other respects, however, the underlying mechanism behind Type Ib and Ic supernovae is similar to that of a Type II supernova, thus placing Types Ib and Ic between Type Ia and Type II. Because of their similarity, Type Ib and Ic supernovae are sometimes collectively called Type Ibc supernovae.
There is some evidence that a small fraction of the Type Ic supernovae may be the progenitors of gamma ray bursts (GRBs); in particular, type Ic supernovae that have broad spectral lines corresponding to high-velocity outflows are thought to be strongly associated with GRBs. However, it is also hypothesized that any hydrogen-stripped Type Ib or Ic supernova could be a GRB, dependent upon the geometry of the explosion. In any case, astronomers believe that most Type Ib, and probably Type Ic as well, result from core collapse in stripped, massive stars, rather than from the thermonuclear runaway of white dwarfs.
As they are formed from rare, very massive stars, the rate of Type Ib and Ic supernova occurrence is much lower than the corresponding rate for Type II supernovae. They normally occur in regions of new star formation, and are extremely rare in elliptical galaxies. Because they share a similar operating mechanism, Type Ibc and the various Type II supernovae are collectively called core-collapse supernovae. In particular, Type Ibc may be referred to as stripped core-collapse supernovae.
Light curves
The light curves (a plot of luminosity versus time) of Type Ib supernovae vary in form, but in some cases can be nearly identical to those of Type Ia supernovae. However, Type Ib light curves may peak at lower luminosity and may be redder. In the infrared portion of the spectrum, the light curve of a Type Ib supernova is similar to a Type II-L light curve. Type Ib supernovae usually have slower decline rates for the spectral curves than Ic.
Type Ia supernovae light curves are useful for measuring distances on a cosmological scale. That is, they serve as standard candles. However, due to the similarity of the spectra of Type Ib and Ic supernovae, the latter can form a source of contamination of supernova surveys and must be carefully removed from the observed samples before making distance estimates.
See also
Type Ia supernova
Type II supernova
References
External links
List of all known Type Ib and Ic supernovae at The Open Supernova Catalog .
Type 1b and 1c Supernova | Type Ib and Ic supernovae | [
"Chemistry",
"Astronomy"
] | 1,150 | [
"Supernovae",
"Astronomical events",
"Explosions"
] |
2,190,732 | https://en.wikipedia.org/wiki/Tree%20%28descriptive%20set%20theory%29 | In descriptive set theory, a tree on a set is a collection of finite sequences of elements of such that every prefix of a sequence in the collection also belongs to the collection.
Definitions
Trees
The collection of all finite sequences of elements of a set $X$ is denoted $X^{<\omega}$.
With this notation, a tree is a nonempty subset $T$ of $X^{<\omega}$, such that if
$\langle x_0, x_1, \ldots, x_{n-1} \rangle$ is a sequence of length $n$ in $T$, and if $0 \le m < n$,
then the shortened sequence $\langle x_0, x_1, \ldots, x_{m-1} \rangle$ also belongs to $T$. In particular, choosing $m = 0$ shows that the empty sequence belongs to every tree.
Branches and bodies
A branch through a tree $T$ is an infinite sequence of elements of $X$, each of whose finite prefixes belongs to $T$. The set of all branches through $T$ is denoted $[T]$ and called the body of the tree $T$.
A tree that has no branches is called wellfounded; a tree with at least one branch is illfounded. By Kőnig's lemma, a tree on a finite set with an infinite number of sequences must necessarily be illfounded.
Terminal nodes
A finite sequence that belongs to a tree $T$ is called a terminal node if it is not a prefix of a longer sequence in $T$. Equivalently, $\langle x_0, x_1, \ldots, x_{n-1} \rangle \in T$ is terminal if there is no element $x$ of $X$ such that $\langle x_0, x_1, \ldots, x_{n-1}, x \rangle \in T$. A tree that does not have any terminal nodes is called pruned.
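A finite fragment of such a tree can be represented directly as a set of tuples. The sketch below is a toy illustration (descriptive set theory is concerned with infinite trees, which a finite data structure can only approximate); it checks the prefix-closure property and finds terminal nodes:

```python
def is_tree(T: set[tuple]) -> bool:
    """Check that T is nonempty and prefix-closed (the tree property above)."""
    return bool(T) and all(s[:m] in T for s in T for m in range(len(s)))

def terminal_nodes(T: set[tuple]) -> set[tuple]:
    """Nodes that are not a proper prefix of any longer sequence in T."""
    return {s for s in T
            if not any(t[:len(s)] == s and len(t) > len(s) for t in T)}

# A small tree on X = {0, 1}: all prefixes of (0, 1, 1), plus the node (1,).
T = {(), (0,), (1,), (0, 1), (0, 1, 1)}
assert is_tree(T)
print(terminal_nodes(T))  # e.g. {(1,), (0, 1, 1)} -- so T is not pruned
```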
Relation to other types of trees
In graph theory, a rooted tree is a directed graph in which every vertex except for a special root vertex has exactly one outgoing edge, and in which the path formed by following these edges from any vertex eventually leads to the root vertex.
If $T$ is a tree in the descriptive set theory sense, then it corresponds to a graph with one vertex for each sequence in $T$, and an outgoing edge from each nonempty sequence that connects it to the shorter sequence formed by removing its last element. This graph is a tree in the graph-theoretic sense. The root of the tree is the empty sequence.
In order theory, a different notion of a tree is used: an order-theoretic tree is a partially ordered set with one minimal element in which each element has a well-ordered set of predecessors.
Every tree in descriptive set theory is also an order-theoretic tree, using a partial ordering in which two sequences $s$ and $t$ are ordered by $s < t$ if and only if $s$ is a proper prefix of $t$. The empty sequence is the unique minimal element, and each element has a finite and well-ordered set of predecessors (the set of all of its prefixes).
An order-theoretic tree may be represented by an isomorphic tree of sequences if and only if each of its elements has finite height (that is, a finite set of predecessors).
Topology
The set of infinite sequences over $X$ (denoted as $X^{\omega}$) may be given the product topology, treating $X$ as a discrete space.
In this topology, every closed subset $C$ of $X^{\omega}$ is of the form $[T]$ for some pruned tree $T$.
Namely, let $T$ consist of the set of finite prefixes of the infinite sequences in $C$. Conversely, the body $[T]$ of every tree $T$ forms a closed set in this topology.
Frequently trees on Cartesian products $X \times Y$ are considered. In this case, by convention, we consider only the subset $T$ of the product space, $(X \times Y)^{<\omega}$, containing only sequences whose even elements come from $X$ and odd elements come from $Y$ (e.g., $\langle x_0, y_0, x_1, y_1 \rangle$). Elements in this subspace are identified in the natural way with a subset of the product of two spaces of sequences, $X^{<\omega} \times Y^{<\omega}$ (the subset for which the length of the first sequence is equal to or 1 more than the length of the second sequence).
In this way we may identify $[T]$ with a subset of $X^{\omega} \times Y^{\omega}$ for $T$ a tree over the product space. We may then form the projection of $[T]$,

$$p[T] = \{ \vec{x} \in X^{\omega} \mid \exists \vec{y}\, (\vec{x}, \vec{y}) \in [T] \}.$$
See also
Laver tree, a type of tree used in set theory as part of a notion of forcing
References
Descriptive set theory
Trees (set theory)
Determinacy | Tree (descriptive set theory) | [
"Mathematics"
] | 754 | [
"Game theory",
"Determinacy"
] |
2,190,765 | https://en.wikipedia.org/wiki/Internal%20set | In mathematical logic, in particular in model theory and nonstandard analysis, an internal set is a set that is a member of a model.
The concept of internal sets is a tool in formulating the transfer principle, which concerns the logical relation between the properties of the real numbers R, and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical justification for their use. Roughly speaking, the idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets (note that the term "language" is used in a loose sense in the above).
Edward Nelson's internal set theory is an axiomatic approach to nonstandard analysis (see also Palmgren at constructive nonstandard analysis). Conventional infinitary accounts of nonstandard analysis also use the concept of internal sets.
Internal sets in the ultrapower construction
Relative to the ultrapower construction of the hyperreal numbers as equivalence classes of sequences of reals, an internal subset $[A_n]$ of *R is one defined by a sequence $\langle A_n \rangle$ of real sets, where a hyperreal $[x_n]$ is said to belong to the set $[A_n]$ if and only if the set of indices $n$ such that $x_n \in A_n$ is a member of the ultrafilter used in the construction of *R.
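As a concrete illustration (an added example, not from the original article), take the sequence of real intervals $A_n = (0, \tfrac{1}{n})$. The internal set $[A_n]$ then contains, for instance, the infinitesimal $\epsilon = [\,\tfrac{1}{2n}\,]$, since $0 < \tfrac{1}{2n} < \tfrac{1}{n}$ for every index $n$, so the relevant index set is all of $\mathbb{N}$ and in particular belongs to the ultrafilter. No standard positive real $r$ belongs to $[A_n]$, because the set $\{ n : r < \tfrac{1}{n} \}$ is finite, and the free ultrafilter used in the construction contains no finite sets.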
More generally, an internal entity is a member of the natural extension of a real entity. Thus, every element of *R is internal; a subset of *R is internal if and only if it is a member of the natural extension of the power set of R; etc.
Internal subsets of the reals
Every internal subset of *R that is a subset of (the embedded copy of) R is necessarily finite (see Theorem 3.9.1 Goldblatt, 1998). In other words, every internal infinite subset of the hyperreals necessarily contains nonstandard elements.
See also
Standard part function
Superstructure (mathematics)
References
Goldblatt, Robert. Lectures on the hyperreals. An introduction to nonstandard analysis. Graduate Texts in Mathematics, 188. Springer-Verlag, New York, 1998.
Nonstandard analysis | Internal set | [
"Mathematics"
] | 498 | [
"Mathematical objects",
"Infinity",
"Nonstandard analysis",
"Mathematics of infinitesimals",
"Model theory"
] |
2,190,885 | https://en.wikipedia.org/wiki/Tribendimidine | Tribendimidine is a broad-spectrum anthelmintic agent developed in China, at the National Institute of Parasitic Diseases in Shanghai. It is a derivative of amidantel.
In clinical trials, it was highly effective in treating ankylostomiasis, ascariasis and enterobiasis. It is also effective against clonorchiasis. However, animal studies suggest it is ineffective in treating Schistosoma mansoni or Fasciola hepatica disease. The drug has also performed well in trials against opisthorchiasis, curing about 70% of cases.
Tribendimidine is manufactured by Shandong Xinhua Pharmaceutical Company Limited in Zibo, Shandong, China. It was approved by the China Food and Drug Administration in 2007.
References
Anthelmintics
Amidines
Imines
Nicotinic agonists | Tribendimidine | [
"Chemistry"
] | 177 | [
"Bases (chemistry)",
"Amidines",
"Functional groups"
] |
2,190,913 | https://en.wikipedia.org/wiki/Prenatal%20development | Prenatal development () involves the development of the embryo and of the fetus during a viviparous animal's gestation. Prenatal development starts with fertilization, in the germinal stage of embryonic development, and continues in fetal development until birth.
In human pregnancy, prenatal development is also called antenatal development. The development of the human embryo follows fertilization, and continues as fetal development. By the end of the tenth week of gestational age, the embryo has acquired its basic form and is referred to as a fetus. The next period is that of fetal development where many organs become fully developed. This fetal period is described both topically (by organ) and chronologically (by time) with major occurrences being listed by gestational age.
The very early stages of embryonic development are the same in all mammals, but later stages of development, and the length of gestation varies.
Terminology
In the human:
Different terms are used to describe prenatal development, meaning development before birth. A term with the same meaning is "antepartum" (from Latin ante "before" and parere "to give birth"). However, "antepartum" is sometimes used to denote the period between the 24th/26th week of gestational age and birth, for example in antepartum hemorrhage.
The perinatal period (from Greek peri, "about, around" and Latin nasci "to be born") is "around the time of birth". In developed countries and at facilities where expert neonatal care is available, it is considered from 22 completed weeks (usually about 154 days) of gestation (the time when birth weight is normally 500 g) to 7 completed days after birth. In many of the developing countries the starting point of this period is considered 28 completed weeks of gestation (or weight more than 1000 g).
Fertilization
Fertilization marks the first germinal stage of embryonic development. When semen is released into the vagina, the spermatozoa travel through the cervix, along the body of the uterus, and into one of the fallopian tubes where fertilization usually takes place in the ampulla. A great many sperm cells are released with the possibility of just one managing to adhere to and enter the thick protective layer surrounding the egg cell (ovum). The first sperm cell to successfully penetrate the egg cell donates its genetic material (DNA) to combine with the DNA of the egg cell resulting in a new one-celled zygote. The term "conception" refers variably to either fertilization or to formation of the conceptus after its implantation in the uterus, and this terminology is controversial.
The zygote will develop into a male if the egg is fertilized by a sperm that carries a Y chromosome, or a female if the sperm carries an X chromosome. The Y chromosome contains a gene, SRY, which will switch on androgen production at a later stage leading to the development of a male body type. In contrast, the mitochondrial DNA of the zygote comes entirely from the egg cell.
Development of the embryo
Following fertilization, the embryonic stage of development continues until the end of the 10th week (gestational age) (8th week fertilization age). The first two weeks from fertilization is also referred to as the germinal stage or preembryonic stage.
The zygote spends the next few days traveling down the fallopian tube dividing several times to form a ball of cells called a morula. Further cellular division is accompanied by the formation of a small cavity between the cells. This stage is called a blastocyst. Up to this point there is no growth in the overall size of the embryo, as it is confined within a glycoprotein shell, known as the zona pellucida. Instead, each division produces successively smaller cells.
The blastocyst reaches the uterus at roughly the fifth day after fertilization. The blastocyst hatches from the zona pellucida allowing the blastocyst's outer cell layer of trophoblasts to come into contact with, and adhere to, the endometrial cells of the uterus. The trophoblasts will eventually give rise to extra-embryonic structures, such as the placenta and the membranes. The embryo becomes embedded in the endometrium in a process called implantation. In most successful pregnancies, the embryo implants 8 to 10 days after ovulation. The embryo, the extra-embryonic membranes, and the placenta are collectively referred to as a conceptus, or the "products of conception".
Rapid growth occurs and the embryo's main features begin to take form. This process is called differentiation, which produces the varied cell types (such as blood cells, kidney cells, and nerve cells). A spontaneous abortion, or miscarriage, in the first trimester of pregnancy is usually due to major genetic mistakes or abnormalities in the developing embryo. During this critical period the developing embryo is also susceptible to toxic exposures, such as:
Alcohol, certain drugs, and other toxins that cause birth defects, such as fetal alcohol syndrome
Infection (such as rubella or cytomegalovirus)
Radiation from x-rays or radiation therapy
Nutritional deficiencies such as lack of folate which contributes to spina bifida
Nutrition
The embryo passes through 3 phases of acquisition of nutrition from the mother:
Absorption phase: Zygote is nourished by cellular cytoplasm and secretions in fallopian tubes and uterine cavity.
Histoplasmic transfer: After nidation and before establishment of uteroplacental circulation, embryonic nutrition is derived from decidual cells and maternal blood pools that open up as a result of eroding activity of trophoblasts.
Hematotrophic phase: After third week of gestation, substances are transported passively via intervillous space.
Development of the fetus
The first ten weeks of gestational age is the period of embryogenesis and together with the first three weeks of prenatal development make up the first trimester of pregnancy.
From the 10th week of gestation (8th week of development), the developing embryo is called a fetus. All major structures are formed by this time, but they continue to grow and develop. Because the precursors of the organs are now formed, the fetus is not as sensitive to damage from environmental exposure as the embryo was. Instead, toxic exposure often causes physiological abnormalities or minor congenital malformation.
Development of organ systems
Development continues throughout the life of the fetus and through into life after birth. Significant changes occur to many systems in the period after birth as they adapt to life outside the uterus.
Fetal blood
Hematopoiesis first takes place in the yolk sac. The function is transferred to the liver by the 10th week of gestation and to the spleen and bone marrow beyond that. The total blood volume is about 125 ml/kg of fetal body weight near term.
Red blood cells
Megaloblastic red blood cells are produced early in development, which become normoblastic near term. Life span of prenatal RBCs is 80 days. Rh antigen appears at about 40 days of gestation.
White blood cells
The fetus starts producing leukocytes at 2 months gestational age, mainly from the thymus and the spleen. Lymphocytes derived from the thymus are called T lymphocytes (T cells), whereas those derived from bone marrow are called B lymphocytes (B cells). Both of these populations of lymphocytes have short-lived and long-lived groups. Short-lived T cells usually reside in thymus, bone marrow and spleen; whereas long-lived T cells reside in the blood stream. Plasma cells are derived from B cells and their life in fetal blood is 0.5 to 2 days.
Glands
The thyroid is the first gland to develop in the embryo at the 4th week of gestation. Insulin secretion in the fetus starts around the 12th week of gestation.
Cognitive development
Electrical brain activity is first detected at the end of week 5 of gestation. Synapses do not begin to form until week 17. Neural connections between the sensory cortex and thalamus develop as early as 24 weeks' gestational age, but the first evidence of their function does not occur until around 30 weeks, when minimal consciousness, dreaming, and the ability to feel pain emerge.
Initial knowledge of the effects of prenatal experience on later neuropsychological development originates from the Dutch Famine Study, which researched the cognitive development of individuals born after the Dutch famine of 1944–45. The first studies focused on the consequences of the famine to cognitive development, including the prevalence of intellectual disability. Such studies predate David Barker's hypothesis about the association between the prenatal environment and the development of chronic conditions later in life. The initial studies found no association between malnourishment and cognitive development, but later studies found associations between malnourishment and increased risk for schizophrenia, antisocial disorders, and affective disorders.
There is evidence that the acquisition of language begins in the prenatal stage. After 26 weeks of gestation, the peripheral auditory system is already fully formed. Also, most low-frequency sounds (less than 300 Hz) can reach the fetal inner ear in the womb of mammals. Those low-frequency sounds include pitch, rhythm, and phonetic information related to language. Studies have indicated that fetuses react to and recognize differences between sounds. Such ideas are further reinforced by the fact that newborns present a preference for their mother's voice, present behavioral recognition of stories only heard during gestation, and (in monolingual mothers) present preference for their native language. A more recent study with EEG demonstrated different brain activation in newborns hearing their native language compared to when they were presented with a different language, further supporting the idea that language learning starts while in gestation.
Growth rate
The growth rate of a fetus is linear up to 37 weeks of gestation, after which it plateaus. The growth rate of an embryo and infant can be reflected as the weight per gestational age, and is often given as the weight put in relation to what would be expected by the gestational age. A baby born within the normal range of weight for that gestational age is known as appropriate for gestational age (AGA). An abnormally slow growth rate results in the infant being small for gestational age, while an abnormally large growth rate results in the infant being large for gestational age. A slow growth rate and preterm birth are the two factors that can cause a low birth weight. Low birth weight (below 2000 grams) can slightly increase the likelihood of schizophrenia.
The growth rate can be roughly correlated with the fundal height of the uterus which can be estimated by abdominal palpation. More exact measurements can be performed with obstetric ultrasonography.
Factors influencing development
Intrauterine growth restriction is one of the causes of low birth weight associated with over half of neonatal deaths.
Poverty
Poverty has been linked to poor prenatal care and has been an influence on prenatal development. Women in poverty are more likely to have children at a younger age, which results in low birth weight. Many of these expecting mothers have little education and are therefore less aware of the risks of smoking, drinking alcohol, and drug use, as well as of other factors that influence the growth rate of a fetus.
Mother's age
The term advanced maternal age describes women who are over 35 during pregnancy. Women who give birth over the age of 35 are more likely to experience complications ranging from preterm birth and delivery by Caesarean section to an increased risk of giving birth to a child with chromosomal abnormalities such as Down syndrome. The chances of stillbirth and miscarriage also increase with maternal age, as do the chances of the mother suffering from gestational diabetes or high blood pressure during pregnancy. Some sources suggest that health problems are also associated with teenage pregnancy. These may include high blood pressure, low birth weight and premature birth. Some studies note that adolescent pregnancy is often associated with poverty, low education, and inadequate family support. Stigma and social context tend to create and exacerbate some of the challenges of adolescent pregnancy.
Drug use
An estimated 5 percent of fetuses in the United States are exposed to illicit drug use during pregnancy. Maternal drug use occurs when drugs ingested by the pregnant woman are metabolized in the placenta and then transmitted to the fetus. Recent research shows a correlation between fine motor skills and prenatal risk factors, such as the use of psychoactive substances and signs of abortion during pregnancy, as well as perinatal risk factors such as gestation time, duration of delivery and birth weight, and postnatal risk factors such as constant falls.
Cannabis
Cannabis use during pregnancy carries a greater risk of birth defects, low birth weight, and a higher rate of infant death or stillbirth. It is also associated with extreme irritability, crying, and an increased risk of SIDS after the child is born.
Marijuana will slow the fetal growth rate and can result in premature delivery. It can also lead to low birth weight, a shortened gestational period and complications in delivery. In one study, cannabis use during pregnancy was unrelated to the risk of perinatal death or the need for special care, but the babies of women who used cannabis at least once per week before and throughout pregnancy were 216 g lighter than those of non-users, had significantly shorter birth lengths, and had smaller head circumferences.
Opioids
Opioids, including heroin, can cause interrupted fetal development, stillbirths, and numerous birth defects. Heroin can also result in premature delivery, a higher risk of miscarriage, abnormalities of the face and head size, and gastrointestinal abnormalities in the fetus. There is an increased risk for SIDS, dysfunction in the central nervous system, and neurological dysfunctions including tremors, sleep problems, and seizures. The fetus is also put at great risk for low birth weight and respiratory problems.
Cocaine
Cocaine use results in a smaller brain, which results in learning disabilities for the fetus. Cocaine puts the fetus at a higher risk of being stillborn or premature. Cocaine use also results in low birth weight, damage to the central nervous system, and motor dysfunction. The vasoconstrictive effects of cocaine lead to a decrease in placental blood flow to the fetus, resulting in fetal hypoxia (oxygen deficiency) and decreased fetal nutrition; these vasoconstrictive effects on the placenta have been linked to a number of complications and malformations evident in the newborn.
Methamphetamine
Prenatal methamphetamine exposure has been shown to negatively impact brain development and behavioral functioning. A 2019 study further investigated the neurocognitive and neurodevelopmental effects of prenatal methamphetamine exposure. This study had two groups: one containing children who were prenatally exposed to methamphetamine but no other illicit drugs, and one containing children who met diagnostic criteria for ADHD but were not prenatally exposed to any illicit substance. Both groups of children completed intelligence measures to compute an IQ. Study results showed that the prenatally exposed children performed lower on the intelligence measures than their non-exposed peers with ADHD. The study results also suggest that prenatal exposure to methamphetamine may negatively impact processing speed as children develop.
Alcohol
Maternal alcohol use leads to disruptions of the fetus' brain development, interferes with the fetus' cell development and organization, and affects the maturation of the central nervous system. Even small amounts of alcohol use can cause lower height, weight and head size at birth and higher aggressiveness and lower intelligence during childhood. Fetal alcohol spectrum disorder is a developmental disorder that is a consequence of heavy alcohol intake by the mother during pregnancy. Children with FASD have a variety of distinctive facial features, heart problems, and cognitive problems such as developmental disabilities, attention difficulties, and memory deficits.
Tobacco use
Tobacco smoking during pregnancy exposes the fetus to nicotine, tar, and carbon monoxide. Nicotine results in less blood flow to the fetus because it constricts the blood vessels. Carbon monoxide reduces the oxygen flow to the fetus. The reduction of blood and oxygen flow may result in miscarriage, stillbirth, low birth weight, and premature births. Exposure to secondhand smoke leads to higher risks of low birth weight and childhood cancer.
Infections
If a mother is infected with a disease, the placenta cannot always filter out the pathogens. Viruses such as rubella, chicken pox, mumps, herpes, and human immunodeficiency virus (HIV) are associated with an increased risk of miscarriage, low birth weight, prematurity, physical malformations, and intellectual disabilities. HIV can lead to acquired immune deficiency syndrome (AIDS). Untreated HIV carries a 10 to 20 per cent risk of being passed on to the fetus. Bacterial or parasitic diseases may also be passed on to the fetus, and include chlamydia, syphilis, tuberculosis, malaria, and commonly toxoplasmosis. Toxoplasmosis can be acquired through eating infected undercooked meat or contaminated food, and by drinking contaminated water. The risk of fetal infection is lowest during early pregnancy, and highest during the third trimester. However, in early pregnancy the outcome is worse, and can be fatal.
Maternal nutrition
Adequate nutrition is needed for a healthy fetus. Mothers who gain less than 20 pounds during pregnancy are at increased risk for having a preterm or low birth weight infant. Iron and iodine are especially important during prenatal development. Mothers who are deficient in iron are at risk for having a preterm or low birth weight infant. Iodine deficiencies increase the risk of miscarriage, stillbirth, and fetal brain abnormalities. Adequate prenatal care improves outcomes for the newborn.
Low birth weight
Low birth weight increases an infant's risk of long-term growth, cognitive, and language deficits. It is also associated with a shortened gestational period and prenatal complications.
Stress
Stress during pregnancy can have an impact on the development of the embryo. Reilly (2017) states that stress can come from many forms of life events, such as community, family, financial issues, and natural causes. While a woman is pregnant, stress from outside sources can take a toll on growth in the womb and may affect the child's learning and relationships when born. For instance, the child may have behavioral problems and might be antisocial. The stress that the mother experiences affects the fetus and the fetus' growth, which can include the fetus' nervous system (Reilly, 2017). Stress can also lead to low birth weight. Even after avoiding other risk factors such as alcohol and drugs, stress can have its impacts whether families know it or not. Many women who deal with maternal stress do not seek treatment.
Similar to stress, Reilly (2017) stated that recent studies have found that pregnant women who show depressive symptoms are not as attached and bonded to their child while it is in the womb.
Environmental toxins
Exposure to environmental toxins in pregnancy leads to higher rates of miscarriage, sterility, and birth defects. Toxins include fetal exposure to lead, mercury, and ethanol, or to hazardous environments. Prenatal exposure to mercury may lead to physical deformation, difficulty in chewing and swallowing, and poor motor coordination. Exposure to high levels of lead prenatally is related to prematurity, low birth weight, brain damage, and a variety of physical defects. Exposure to persistent air pollution from traffic and smog may lead to reduced infant head size, low birth weight, increased infant death rates, and impaired lung and immune system development.
See also
Prenatal memory
Prenatal and perinatal psychology
Fetal pig
Timeline of human prenatal development
Transplacental carcinogenesis
References
Further reading
"Prenatal Development Prenatal Environmental Influences Mother, Birth, Fetus, and Pregnancy." Social Issues Reference. Version Child Development Vol. 6. N.p., n.d. Web. 19 Nov. 2012.
Niedziocha, Laura. "The Effects of Drugs and Alcohol on Fetal Development." LIVESTRONG.COM, 4 Sept. 2011. Web. 19 Nov. 2012.
Brady, Joanne P., Marc Posner, and Cynthia Lang. "Risk and Reality: The Implications of Prenatal Exposure to Alcohol and Other Drugs." ASPE. N.p., n.d. Web. 19 Nov. 2012.
External links
Chart of human fetal development, U.S. National Library of Medicine (NLM)
U.K. Human Fertilisation and Embryology Authority (HFEA), regulatory agency overseeing the use of gametes and embryos in fertility treatment and research
"Child Safety tips: 10 Expert Tips for Keeping Your Kids Safe",
Embryology
Fertility
Midwifery | Prenatal development | [
"Biology"
] | 4,464 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
2,191,091 | https://en.wikipedia.org/wiki/Post-perovskite | Post-perovskite (pPv) is a high-pressure phase of magnesium silicate (MgSiO3). It is composed of the prime oxide constituents of the Earth's rocky mantle (MgO and SiO2), and its pressure and temperature for stability imply that it is likely to occur in portions of the lowermost few hundred km of Earth's mantle.
The post-perovskite phase has implications for the D′′ layer, which influences the convective mixing in the mantle responsible for plate tectonics.
Post-perovskite has the same crystal structure as the synthetic solid compound CaIrO3, and is often referred to as the "CaIrO3-type phase of MgSiO3" in the literature. The crystal system of post-perovskite is orthorhombic, its space group is Cmcm, and its structure is a stacked SiO6-octahedral sheet along the b axis. The name "post-perovskite" derives from silicate perovskite, the stable phase of MgSiO3 throughout most of Earth's mantle, which has the perovskite structure. The prefix "post-" refers to the fact that it occurs after perovskite-structured MgSiO3 as pressure increases (and, historically, in the progression of high pressure mineral physics). At upper mantle pressures, nearest Earth's surface, MgSiO3 persists as the silicate mineral enstatite, a pyroxene rock-forming mineral found in igneous and metamorphic rocks of the crust.
History
The CaIrO3-type phase of MgSiO3 phase was discovered in 2004 using the laser-heated diamond anvil cell (LHDAC) technique by a group at the Tokyo Institute of Technology and, independently, by researchers from the Swiss Federal Institute of Technology (ETH Zurich) and Japan Agency for Marine-Earth Science and Technology who used a combination of quantum-mechanical simulations and LHDAC experiments. The TIT group's paper appeared in the journal Science. The ETH/JAM-EST collaborative paper and TIT group's second paper appeared two months later in the journal Nature. This simultaneous discovery was preceded by S. Ono's experimental discovery of a similar phase, possessing exactly the same structure, in Fe2O3.
Importance in Earth's mantle
The post-perovskite phase is stable above 120 GPa at 2500 K, and exhibits a positive Clapeyron slope such that the transformation pressure increases with temperature. Because these conditions correspond to a depth of about 2600 km and the D" seismic discontinuity occurs at similar depths, the perovskite to post-perovskite phase change is considered to be the origin of such seismic discontinuities in this region. Post-perovskite also holds great promise for mapping experimentally determined information regarding the temperatures and pressures of its transformation into direct information regarding temperature variations in the D" layer, once the seismic discontinuities attributed to this transformation have been sufficiently mapped out. Such information can be used, for example, to:
1) better constrain the amount of heat leaving Earth's core
2) determine whether or not subducted slabs of oceanic lithosphere reach the base of the mantle
3) help delineate the degree of chemical heterogeneity in the lower mantle
4) find out whether or not the lowermost mantle is unstable to convective instabilities that result in upwelling hot thermal plumes of rock which rise up and possibly trace out volcanic hot spot tracks at Earth's surface.
For these reasons the finding of the MgSiO3-post-perovskite phase transition is considered by many geophysicists to be the most important discovery in deep Earth science in several decades, and was only made possible by the concerted efforts of mineral physics scientists around the world as they sought to increase the range and quality of LHDAC experiments and as ab initio calculations attained predictive power.
Physical properties
The sheet structure of post-perovskite makes the compressibility of the b axis higher than that of the a or c axis. This anisotropy may yield the morphology of a platy crystal habit parallel to the (010) plane; the seismic anisotropy observed in the D" region might qualitatively (but not quantitatively) be explained by this characteristic. Theory predicted the (110) slip associated with particularly favorable stacking faults, which was confirmed by later experiments. Some theorists predicted other slip systems, which await experimental confirmation.
In 2005 and 2006 Ono and Oganov published two papers predicting that post-perovskite should have high electrical conductivity, perhaps two orders of magnitude higher than perovskite's conductivity. In 2008 Hirose's group published an experimental report confirming this prediction. A highly conductive post-perovskite layer provides an explanation for the observed decadal variations of the length of day.
Chemical properties
Another potentially important effect that needs to be better characterized for the post-perovskite phase transition is the influence of other chemical components that are known to be present to some degree in Earth's lowermost mantle. The phase transition pressure (characterized by a two-phase loop in this system), was initially thought to decrease as the FeO content increases, but some recent experiments suggest the opposite. However, it is possible that the effect of Fe2O3 is more relevant as most of iron in post-perovskite is likely to be trivalent (ferric). Such components as Al2O3 or the more oxidized Fe2O3 also affect the phase transition pressure, and might have strong mutual interactions with one another. The influence of variable chemistry present in the Earth's lowermost mantle upon the post-perovskite phase transition raises the issue of both thermal and chemical modulation of its possible appearance (along with any associated discontinuities) in the D" layer.
Summary
Experimental and theoretical work on the perovskite/post-perovskite phase transition continues, while many important features of this phase transition remain ill-constrained. For example, the Clapeyron slope (characterized by the Clausius–Clapeyron relation) describing the increase in the pressure of the phase transition with increasing temperature is known to be relatively high in comparison to other solid-solid phase transitions in the Earth's mantle; however, the experimentally determined value varies from about 5 MPa/K to as high as 13 MPa/K. Ab initio calculations give a tighter range, between 7.5 MPa/K and 9.6 MPa/K, and are probably the most reliable estimates available today. The difference between experimental estimates arises primarily because different materials were used as pressure standards in diamond anvil cell experiments. A well-characterized equation of state for the pressure standard, when combined with high energy synchrotron generated X-ray diffraction patterns of the pressure standard (which is mixed in with the experimental sample material), yields information on the pressure-temperature conditions of the experiment. However, as these extreme pressures and temperatures have not been sufficiently explored in experiments, the equations of state for many popular pressure standards are not yet well characterized and often yield different results. Another source of uncertainty in LHDAC experiments is the measurement of temperature from a sample's thermal radiation, which is required to obtain the pressure from the equation of state of the pressure standard. In laser-heated experiments at such high pressures (over 1 million atmospheres), the samples are necessarily small and numerous approximations (e.g., gray body) are required to obtain estimates of the temperature.
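To give a feel for these numbers, here is a minimal back-of-the-envelope sketch (not drawn from any of the studies discussed above): a lateral temperature anomaly ΔT shifts the transition pressure by γΔT, where γ is the Clapeyron slope, so the phase boundary depth shifts by roughly γΔT/(ρg). The density and gravity used below are assumed round values for the lowermost mantle.

```python
def boundary_deflection_km(clapeyron_MPa_per_K, dT_K, rho=5500.0, g=10.7):
    """Depth shift of the post-perovskite boundary for a lateral
    temperature anomaly dT_K: dz = gamma * dT / (rho * g).

    rho (kg/m^3) and g (m/s^2) are assumed lowermost-mantle values.
    """
    dP = clapeyron_MPa_per_K * 1e6 * dT_K  # pressure shift, Pa
    return dP / (rho * g) / 1e3            # metres -> kilometres

# With an 8 MPa/K slope, a +500 K anomaly deepens the transition ~68 km.
print(boundary_deflection_km(8.0, 500.0))
```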
See also
Ferropericlase
References
External links
A synthesis on the discovery of post-perovskite and its geological implications (in French)
Petrology
Silicate minerals
High pressure science
Perovskites | Post-perovskite | [
"Physics"
] | 1,631 | [
"High pressure science",
"Applied and interdisciplinary physics"
] |
2,191,118 | https://en.wikipedia.org/wiki/NGC%201365 | NGC 1365, also known as the Fornax Propeller Galaxy or the Great Barred Spiral Galaxy, is a double-barred spiral galaxy about 56 million light-years away in the constellation Fornax. It was discovered on 2 September 1826 by Scottish astronomer James Dunlop.
Characteristics
NGC 1365 is a large barred spiral galaxy in the Fornax cluster. Within the larger long bar stretching across the center of the galaxy appears to be a smaller bar that comprises the core, with an apparent size of about 50 × 40. This second bar is more prominent in infrared images of the central region of the galaxy, and likely arises from a combination of dynamical instabilities of stellar orbits in the region, along with gravity, density waves, and the overall rotation of the disc. The inner bar structure likely rotates as a whole more rapidly than the larger long bar, creating the diagonal shape seen in images.
The spiral arms extend in a wide curve north and south from the ends of the east–west bar and form an almost ring like Z-shaped halo. Astronomers think NGC 1365's prominent bar plays a crucial role in the galaxy's evolution, drawing gas and dust into a star-forming maelstrom and ultimately feeding material into the central black hole.
NGC 1365, including its two outer spiral arms, spreads over around 300,000 light-years. Different parts of the galaxy take different times to make a full rotation around the core of the galaxy, with the outer parts of the bar completing one circuit in about 350 million years. NGC 1365 and other galaxies of its type have come to more prominence in recent years with new observations indicating that the Milky Way could also be a barred spiral galaxy. Such galaxies are quite common — two thirds of spiral galaxies are barred according to recent estimates, and studying others can help astronomers understand our own galactic home.
Supernovae
Four supernovae have been observed in NGC 1365:
SN 1957C (type unknown, mag. 16.5) was discovered by H. S. Gates in October 1957.
SN 1983V (type Ic, mag. 13.5) was discovered by Robert Evans on 25 November 1983, and independently discovered by P. O. Lindblad and P. Grosbol on 27 November 1983.
SN 2001du (type II, mag. 14) was discovered by Robert Evans on 24 August 2001.
SN 2012fr (type Ia, mag. 14.7) was discovered by Alain Klotz on 27 October 2012.
Supermassive black hole
The central supermassive black hole in the active nucleus, which has a mass of about 2 million solar masses or half the mass of the Milky Way's central black hole Sagittarius A*, rotates at close to the speed of light. These observations, announced in February 2013, were made using the X-ray telescope satellite NuSTAR.
See also
NGC 1300
NGC 1097
References
External links
An Elegant Galaxy in an Unusual Light — ESO press release 22 September 2010
The Great Barred Spiral Galaxy
Fine Details in a Barred Galaxy — ESO press release 27 February 1999
Starry Bulges Yield Secrets to Galaxy Growth — Hubble Space Telescope press release October 6, 1999 01:00 PM (EDT)
Starry bulges yield secrets to galaxy growth Lars Lindberg Christensen, ESA/Hubble news release 6 October 1999
Supermassive Black Hole Spins Super-Fast (Release No.: 2013-07 : February 27, 2013)
NASA's NuSTAR Helps Solve Riddle of Black Hole Spin NuSTAR Feb 27, 2013
EXTREME X-RAY VARIABILITY AND ABSORPTION IN NGC 1365 XMM-Newton
NGC 1365 at Constellation Guide
Barred spiral galaxies
Seyfert galaxies
Luminous infrared galaxies
Fornax Cluster
1365
13179
Fornax
Articles containing video clips
Discoveries by James Dunlop
Astronomical objects discovered in 1826 | NGC 1365 | [
"Astronomy"
] | 784 | [
"Fornax",
"Constellations"
] |
2,191,185 | https://en.wikipedia.org/wiki/Neutron%20cross%20section | In nuclear physics, the concept of a neutron cross section is used to express the likelihood of interaction between an incident neutron and a target nucleus. The neutron cross section σ can be defined as the area in cm2 for which the number of neutron-nuclei reactions taking place is equal to the product of the number of incident neutrons that would pass through the area and the number of target nuclei. In conjunction with the neutron flux, it enables the calculation of the reaction rate, for example to derive the thermal power of a nuclear power plant. The standard unit for measuring the cross section is the barn, which is equal to 10−28 m2 or 10−24 cm2. The larger the neutron cross section, the more likely a neutron will react with the nucleus.
An isotope (or nuclide) can be classified according to its neutron cross section and how it reacts to an incident neutron. Nuclides that tend to absorb a neutron and either decay or keep the neutron in its nucleus are neutron absorbers and will have a capture cross section for that reaction. Isotopes that undergo fission are fissionable fuels and have a corresponding fission cross section. The remaining isotopes will simply scatter the neutron, and have a scatter cross section. Some isotopes, like uranium-238, have nonzero cross sections of all three.
Isotopes which have a large scatter cross section and a low mass are good neutron moderators (see chart below). Nuclides which have a large absorption cross section are neutron poisons if they are neither fissile nor undergo decay. A poison that is purposely inserted into a nuclear reactor for controlling its reactivity in the long term and improve its shutdown margin is called a burnable poison.
Parameters of interest
The neutron cross section, and therefore the probability of a neutron-nucleus interaction, depends on:
the target type (hydrogen, uranium...),
the type of nuclear reaction (scattering, fission...),
the incident particle energy, also called speed or temperature (thermal, fast...),
and, to a lesser extent, of:
its relative angle between the incident neutron and the target nuclide,
the target nuclide temperature.
Target type dependence
The neutron cross section is defined for a given type of target particle. For example, the capture cross section of deuterium 2H is much smaller than that of common hydrogen 1H. This is the reason why some reactors use heavy water (in which most of the hydrogen is deuterium) instead of ordinary light water as moderator: fewer neutrons are lost by capture inside the medium, hence enabling the use of natural uranium instead of enriched uranium. This is the principle of a CANDU reactor.
Type of reaction dependence
The likelihood of interaction between an incident neutron and a target nuclide, independent of the type of reaction, is expressed with the help of the total cross section σT. However, it may be useful to know if the incoming particle bounces off the target (and therefore continues travelling after the interaction) or disappears after the reaction. For that reason, the scattering and absorption cross sections σS and σA are defined, and the total cross section is simply the sum of the two partial cross sections: σT = σS + σA.
Absorption cross section
If the neutron is absorbed when approaching the nuclide, the atomic nucleus moves up on the table of isotopes by one position. For instance, 235U becomes 236*U with the * indicating the nucleus is highly energized. This energy has to be released and the release can take place through any of several mechanisms.
The simplest way for the release to occur is for the neutron to be ejected by the nucleus. If the neutron is emitted immediately, it acts the same as in other scattering events.
The nucleus may emit gamma radiation.
The nucleus may β− decay, where a neutron is converted into a proton, an electron and an electron-type antineutrino (the antiparticle of the neutrino).
About 81% of the 236*U nuclei are so energized that they undergo fission, releasing the energy as kinetic motion of the fission fragments, also emitting between one and five free neutrons.
Nuclei that undergo fission as their predominant decay method after neutron capture include 233U, 235U, 237U, 239Pu, 241Pu.
Nuclei that predominantly absorb neutrons and then emit beta particle radiation lead to these isotopes, e.g., 232Th absorbs a neutron and becomes 233*Th, which beta decays to become 233Pa, which in turn beta decays to become 233U.
Isotopes that undergo beta decay transmute from one element to another element. Those that undergo gamma or X-ray emission do not cause a change in element or isotope.
Scattering cross-section
The scattering cross-section can be further subdivided into coherent scattering and incoherent scattering, which is caused by the spin dependence of the scattering cross-section and, for a natural sample, the presence of different isotopes of the same element in the sample.
Because neutrons interact with the nuclear potential, the scattering cross-section varies for different isotopes of the element in question. A very prominent example is hydrogen and its isotope deuterium. The total cross-section for hydrogen is over 10 times that of deuterium, mostly due to the large incoherent scattering length of hydrogen. Some metals are rather transparent to neutrons, aluminum and zirconium being the two best examples of this.
Incident particle energy dependence
For a given target and reaction, the cross section is strongly dependent on the neutron speed. In the extreme case, the cross section can be, at low energies, either zero (the energy for which the cross section becomes significant is called threshold energy) or much larger than at high energies.
Therefore, a cross section should be defined either at a given energy or should be averaged in an energy range (or group).
As an example, the fission cross section of uranium-235 is low at high neutron energies but becomes higher at low energies. Such physical constraints explain why most operational nuclear reactors use a neutron moderator to reduce the energy of the neutron and thus increase the probability of fission, which is essential to produce energy and sustain the chain reaction.
A simple estimation of energy dependence of any kind of cross section is provided by the Ramsauer model, which is based on the idea that the effective size of a neutron is proportional to the breadth of the probability density function of where the neutron is likely to be, which itself is proportional to the neutron's thermal de Broglie wavelength.
Taking λ as the effective radius of the neutron, we can estimate the area of the circle in which neutrons hit the nuclei of effective radius R as σ = π(R + λ)².
While the assumptions of this model are naive, it explains at least qualitatively the typical measured energy dependence of the neutron absorption cross section. For neutrons of wavelength much larger than the typical radius of atomic nuclei (1–10 fm, E = 10–1000 keV), R can be neglected. For these low-energy neutrons (such as thermal neutrons) the cross section is inversely proportional to neutron velocity.
This explains the advantage of using a neutron moderator in fission nuclear reactors. On the other hand, for very high energy neutrons (over 1 MeV), λ can be neglected, and the neutron cross section σ ≈ πR² is approximately constant, determined just by the cross section of atomic nuclei.
However, this simple model does not take into account so-called neutron resonances, which strongly modify the neutron cross section in the energy range of 1 eV–10 keV, nor the threshold energy of some nuclear reactions.
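As a rough illustration, here is a minimal Python sketch of the Ramsauer estimate above, assuming σ = π(R + λ)² with λ the reduced de Broglie wavelength; the 7 fm effective radius is an arbitrary example value, and, as just noted, the model ignores resonances entirely.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_N = 1.674927498e-27    # neutron mass, kg
EV = 1.602176634e-19     # joules per eV

def ramsauer_cross_section(E_eV, R_fm=7.0):
    """Rough Ramsauer-model estimate sigma = pi*(R + lambda)^2 in barns.

    E_eV: neutron kinetic energy in eV; R_fm: effective nuclear radius
    in femtometres (illustrative value, not taken from evaluated data).
    """
    E = E_eV * EV
    lam = HBAR / math.sqrt(2.0 * M_N * E)  # reduced de Broglie wavelength, m
    R = R_fm * 1e-15
    sigma_m2 = math.pi * (R + lam) ** 2
    return sigma_m2 / 1e-28                # 1 barn = 1e-28 m^2

# Low energies: sigma grows as E falls; above ~1 MeV it tends to pi*R^2.
for E in (0.0253, 1e3, 1e6):
    print(f"E = {E:10.4g} eV  ->  sigma ~ {ramsauer_cross_section(E):10.1f} b")
```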
Target temperature dependence
Cross sections are usually measured at 20 °C. To account for the dependence with temperature of the medium (viz. the target), the following formula is used: σ = σ0 √(T0/T)
where σ is the cross section at temperature T, and σ0 the cross section at temperature T0 (T and T0 in kelvins).
The energy is defined at the most likely energy and velocity of the neutron. The neutron population consists of a Maxwellian distribution, and hence the mean energy and velocity will be higher. Consequently, a Maxwellian correction term √π also has to be included when calculating the cross-section.
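A minimal helper sketching this 1/v rescaling; the roughly 585 barn thermal fission cross section of uranium-235 at 20 °C is used purely as an illustrative input:

```python
import math

def rescale_cross_section(sigma0_barn, T_kelvin, T0_kelvin=293.15):
    """Rescale a 1/v-region cross section: sigma = sigma0 * sqrt(T0 / T).

    The Maxwellian spectrum-averaging correction mentioned in the text
    would multiply this value when an effective thermal cross section
    over the full neutron population is needed.
    """
    return sigma0_barn * math.sqrt(T0_kelvin / T_kelvin)

# U-235 thermal fission, ~585 b at 20 C, rescaled to a 600 K medium:
print(rescale_cross_section(585.0, 600.0))  # ~409 b
```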
Doppler broadening
The Doppler broadening of neutron resonances is a very important phenomenon and improves nuclear reactor stability. The prompt temperature coefficient of most thermal reactors is negative, owing to the nuclear Doppler effect. Nuclei are located in atoms which are themselves in continual motion owing to their thermal energy (temperature). As a result of these thermal motions, neutrons impinging on a target appear to the nuclei in the target to have a continuous spread in energy. This, in turn, has an effect on the observed shape of resonance. The resonance becomes shorter and wider than when the nuclei are at rest.
Although the shape of resonances changes with temperature, the total area under the resonance remains essentially constant. But this does not imply constant neutron absorption. Despite the constant area under the resonance, the resonance integral, which determines the absorption, increases with increasing target temperature. This, of course, decreases the multiplication factor k (negative reactivity is inserted).
Link to reaction rate and interpretation
Imagine a spherical target (shown as the dashed grey and red circle in the figure) and a beam of particles (in blue) "flying" at speed v (vector in blue) in the direction of the target. We want to know how many particles impact it during time interval dt. To achieve it, the particles have to be in the green cylinder in the figure (volume V). The base of the cylinder is the geometrical cross section of the target perpendicular to the beam (surface σ in red) and its height the length travelled by the particles during dt (length v dt): V = σ v dt.
Noting n the number of particles per unit volume, there are n V particles in the volume V, which will, per definition of V, undergo a reaction. Noting r the reaction rate onto one target, it gives: r dt = n V = n σ v dt, hence r = n v σ.
It follows directly from the definition of the neutron flux Φ = n v: r = Φ σ.
Assuming that there is not one but N targets per unit volume, the reaction rate R per unit volume is: R = N r = Φ σ N.
Knowing that the typical nuclear radius r is of the order of 10−12 cm, the expected nuclear cross section is of the order of π r2 or roughly 10−24 cm2 (thus justifying the definition of the barn). However, if measured experimentally ( σ = R / (Φ N) ), the experimental cross sections vary enormously. As an example, for slow neutrons absorbed by the (n, γ) reaction the cross section in some cases (xenon-135) is as much as 2,650,000 barns, while the cross sections for transmutations by gamma-ray absorption are in the neighborhood of 0.001 barn.
The so-called nuclear cross section is consequently a purely conceptual quantity representing how big the nucleus should be to be consistent with this simple mechanical model.
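As a numeric sketch of R = Φ σ N with the barn converted to cm2; the flux value and the xenon-135 number density below are made-up illustrative inputs, while the 2.65 million barn capture cross section is the figure quoted above:

```python
BARN_CM2 = 1e-24  # cm^2 per barn

def reaction_rate(phi, N, sigma_barn):
    """Reactions per cm^3 per second: R = phi * sigma * N.

    phi: neutron flux in n/(cm^2*s); N: target nuclei per cm^3;
    sigma_barn: microscopic cross section in barns.
    """
    return phi * sigma_barn * BARN_CM2 * N

# Illustrative: a 1e13 n/(cm^2*s) thermal flux on Xe-135 at an assumed
# trace density of 1e15 nuclei/cm^3.
print(reaction_rate(1e13, 1e15, 2.65e6))  # ~2.65e10 captures/(cm^3*s)
```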
Continuous versus average cross section
Cross sections depend strongly on the incoming particle speed. In the case of a beam with multiple particle speeds, the reaction rate R is integrated over the whole range of energy: R = ∫ σ(E) Φ(E) N dE
Where σ(E) is the continuous cross section, Φ(E) the differential flux and N the target atom density.
In order to obtain a formulation equivalent to the monoenergetic case, an average cross section is defined: σ̄ = (∫ σ(E) Φ(E) dE) / Φ
Where Φ = ∫ Φ(E) dE is the integral flux.
Using the definition of the integral flux Φ and the average cross section σ̄, the same formulation as before is found: R = σ̄ Φ N.
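A numerical sketch of this flux-weighted average, using a toy 1/v cross section and an entirely made-up differential flux shape:

```python
import numpy as np

E = np.logspace(-3, 6, 500)                 # energy grid, eV
sigma = 585.0 * np.sqrt(0.0253 / E)         # toy 1/v cross section, barns
phi_E = np.exp(-(np.log10(E) + 1.6) ** 2)   # made-up differential flux shape

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# sigma_avg = (integral of sigma*phi dE) / (integral of phi dE)
sigma_avg = trapz(sigma * phi_E, E) / trapz(phi_E, E)
print(f"flux-averaged cross section ~ {sigma_avg:.2f} b")
```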
Microscopic versus macroscopic cross section
Up to now, the cross section referred to in this article corresponds to the microscopic cross section σ. However, it is possible to define the macroscopic cross section Σ which corresponds to the total "equivalent area" of all target particles per unit volume: Σ = N σ
where N is the atomic density of the target.
Therefore, since the cross section can be expressed in cm2 and the density in cm−3, the macroscopic cross section is usually expressed in cm−1. Using the equation derived above, the reaction rate R can be derived using only the neutron flux Φ and the macroscopic cross section Σ: R = Σ Φ.
Mean free path
The mean free path λ of a random particle is the average length between two interactions. The total length L that non perturbed particles travel during a time interval dt in a volume dV is simply the product of the length l covered by each particle during this time with the number of particles N in this volume: L = l N.
Noting v the speed of the particles and n the number of particles per unit volume: l = v dt and N = n dV.
It follows: L = n v dt dV.
Using the definition of the neutron flux Φ = n v,
it follows: L = Φ dt dV.
This average length L is however valid only for unperturbed particles. To account for the interactions, L is divided by the total number of reactions occurring in the volume during dt to obtain the average length between each collision λ: λ = L / (R dt dV).
From R = Σ Φ:
It follows: λ = Φ dt dV / (Σ Φ dt dV) = 1 / Σ
where λ is the mean free path and Σ is the macroscopic cross section.
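Combining the last two results, a short sketch computing Σ = N σ and λ = 1/Σ; the graphite density and the roughly 4.9 barn total cross section assumed for carbon are example figures only:

```python
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def macroscopic_cross_section(sigma_barn, density_g_cm3, molar_mass_g_mol):
    """Sigma = N * sigma, with N the atomic density in nuclei/cm^3."""
    N = density_g_cm3 * N_A / molar_mass_g_mol
    return N * sigma_barn * 1e-24  # cm^-1

def mean_free_path(sigma_barn, density_g_cm3, molar_mass_g_mol):
    """lambda = 1 / Sigma, in cm."""
    return 1.0 / macroscopic_cross_section(sigma_barn, density_g_cm3,
                                           molar_mass_g_mol)

# Thermal neutrons in graphite (density ~1.6 g/cm^3, sigma_total ~4.9 b
# for carbon, assumed example figures): lambda ~ 2.5 cm.
print(mean_free_path(4.9, 1.6, 12.011))
```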
Within stars
Because 8Li and 12Be form natural stopping points on the table of isotopes for hydrogen fusion, it is believed that all of the higher elements are formed in very hot stars where higher orders of fusion predominate. A star like the Sun produces energy by the fusion of simple 1H into 4He through a series of reactions. It is believed that when the inner core exhausts its 1H fuel, the Sun will contract, slightly increasing its core temperature until 4He can fuse and become the main fuel supply. Pure 4He fusion leads to 8Be, which decays back to 2 4He; therefore the 4He must fuse with isotopes either more or less massive than itself to result in an energy producing reaction. When 4He fuses with 2H or 3H, it forms stable isotopes 6Li and 7Li respectively. The higher order isotopes between 8Li and 12C are synthesized by similar reactions between hydrogen, helium, and lithium isotopes.
Typical cross sections
Some cross sections that are of importance in a nuclear reactor are given in the following table.
The thermal cross-section is averaged using a Maxwellian spectrum.
The fast cross section is averaged using the uranium-235 fission spectrum.
The cross sections were taken from the JEFF-3.1.1 library using JANIS software.
* negligible, less than 0.1% of the total cross section and below the Bragg scattering cutoff
External links
XSPlot an online nuclear cross section plotter
Neutron scattering lengths and cross-sections
Periodic Table of Elements: Sorted by Cross Section (Thermal Neutron Capture)
References
cross section
Nuclear physics | Neutron cross section | [
"Physics"
] | 2,970 | [
"Nuclear physics"
] |
2,191,229 | https://en.wikipedia.org/wiki/Phenomenology%20%28architecture%29 | Architectural phenomenology is the discursive and realist attempt to understand and embody the philosophical insights of phenomenology within the discipline of architecture. The phenomenology of architecture is the philosophical study of architecture employing the methods of phenomenology. David Seamon defines it as "the descriptive and interpretive explication of architectural experiences, situations, and meanings as constituted by qualities and features of both the built environment and human life".
Architectural phenomenology emphasizes human experience, background, intention and historical reflection, interpretation, and poetic and ethical considerations in contrast to the anti-historicism of postwar modernism and the pastiche of postmodernism. Much like phenomenology itself, architectural phenomenology is better understood as an orientation toward thinking and making rather than a specific aesthetic or movement. Interest in phenomenology within architectural circles began in the 1950s, reached a wide audience in the late 1970s and 1980s, and continues today.
Historical development
Origins
Edmund Husserl is credited with founding Phenomenology, as a philosophical approach to understanding experience, in the early 20th century. The emergence of Phenomenology occurred during a period of extensive transformation referred to as Modernism. During this time, Western society was experiencing rapid technological advances and social change. Concurrently, as the theory and practice of architecture adapted to these changes, Modern architecture emerged. Consistent with the broad context of Modernism, which was characterized by the rejection of tradition, both phenomenology and modern architecture were focused on how humans experience their environments. While Phenomenology was focused on how humans can know things and spaces, modern architecture was concerned with how to create the places of human experience aligned to the modernist ethos of the time.
Early approaches to fitting architecture within the phenomenological framework started in the 1940s.
Husserl's method of eidetic reduction allows the mind to abstract the raw and transitory sensory data - colors and shapes in the case of architecture - into the "essences", the timeless architectural forms. This suggestion of a seamless synthesis of past experience, present senses, and premonitions suited the school of Romantic architecture very well by providing absolute judgements on rights and wrongs without rationalizing and devolving into science.
Early Architectural Studies (1950s-1960s)
Architects first started seriously studying phenomenology at Princeton University in the 1950s under the influence of Jean Labatut. In the 1950s, architect Charles W. Moore conducted some of the first phenomenological studies of architecture during his doctoral studies under Labatut, drawing heavily on the philosopher Gaston Bachelard, which were published in 1958 as Water and Architecture. In Europe, Milanese architect Ernesto Nathan Rogers advanced architectural phenomenology during the 1950s and early 1960s through his influential editorship of the Italian design magazine Casabella Continuità. He collaborated with philosopher Enzo Paci and influenced a generation of young architects including Vittorio Gregotti and Aldo Rossi.
Environment-behavioral studies (1970s-early 1980s)
An interdisciplinary field of environment-behavior studies (EBS) was introduced into multiple American, Canadian, and British architecture programs in the 1970s. Also called "architectural psychology," "behavioral geography," "environmental psychology," or "human factors in design," the discipline is associated with Christopher Alexander, Kevin A. Lynch, and Oscar Newman. While most of the EBS research was positivist, the thought behind it influenced the phenomenologically inclined "humanistic geographers" (Edward Relph, Yi-Fu Tuan).
The Essex School (1970s-1980s)
In the 1970s, the School of Comparative Studies at the University of Essex, under the direction of Dalibor Vesely and Joseph Rykwert, was a breeding ground for a generation of architectural phenomenologists, including David Leatherbarrow, Alberto Pérez-Gómez, and Daniel Libeskind. In the 1980s, Vesely and his colleague Peter Carl continued to develop architectural phenomenology in their research and teaching at the Department of Architecture at the University of Cambridge. As architectural phenomenology became established in academia, professors expanded its considerations through theory seminars beyond Gaston Bachelard and Martin Heidegger, to include Edmund Husserl, Maurice Merleau-Ponty, Hans-Georg Gadamer, Hannah Arendt and theorists whose modes of thinking bordered on phenomenology, including Gilles Deleuze, Henri Bergson, Paul Virilio, Charles Taylor, Hubert Dreyfus and Edward S. Casey. George Baird called the Essex School "the most significant recent mode of phenomenology in current architectural theory" and credits Vesely for architectural phenomenology's historical reliance on Heidegger instead of Merleau-Ponty, who was championed by Rykwert, Moore, and Labatut. During the 1980s, Kenneth Frampton became an influence in architectural phenomenology.
In 1979, Norwegian architect, theorist and historian Christian Norberg-Schulz's book Genius Loci: Towards a Phenomenology of Architecture became an important reference for those interested in the topic in the 1980s for its readily accessible explanations of how such an approach could be translated into design. The book was markedly influenced by Martin Heidegger's hermeneutic ontology. Norberg-Schulz spawned a wide following, including his successor at the Oslo School of Architecture, Thomas Thiis-Evensen.
Contemporary Architectural Phenomenology (2010-present)
Recent scholarly activity in architectural phenomenology draws on contemporary phenomenology and philosophy of mind authors Gallagher and Zahavi. Some examples include a 2018 issue of Log with the theme "Disorienting phenomenology" as well as Jorge Otero-Pailos' Architecture's Historical Turn, Sara Ahmed's Queer Phenomenology, Dylan Trigg's The Thing, Alexander Weheliye's Habeas Viscus, and Joseph Bedford's dissertation Creativity's Shadow: Dalibor Vesely, Phenomenology and Architectural Education (1968–1989). With the expansion of virtual reality into architectural experience, there is new attention to Phenomenology. Heather Renee Barker's Designing Post-Virtual Architectures, Wicked Tactics and World Building addresses the phenomenological method and the life-world within this context. Contemporary scholarship has become more skeptical of Heidegger's influence. Phenomenology has been gradually displaced in architectural theory by other "cutting-edge" ideas since 1980, but remains associated with the works of Alvar Aalto, Tadao Ando, Steven Holl, Louis Kahn, Aldo van Eyck, and Peter Zumthor.
Themes
Dwelling
As a phenomenological perspective on being in society and dwelling within a social world took focus, expanded interest in the urban and social experience became central to the thinking of social philosophers like Alfred Schutz. The phenomenon of dwelling, as explicated in Heidegger's essay "Building Dwelling Thinking" (originally published in 1954 as "Bauen Wohnen Denken"), became an important theme in architectural phenomenology. Heidegger links dwelling to the "gathering of the fourfold," namely the regions of being entailed by the phenomena of "the saving of earth, the reception of sky (heavens), the initiation of mortals into their death, and the awaiting/remembering of divinities." The essence of dwelling is not architectural, per se, in the same manner that the essence of technology for him is not technological per se (see Nader El-Bizri, "Being at Home Among Things: Heidegger's Reflections on Dwelling", Environment, Space, Place 3 (2011): 47–71; "On Dwelling: Heideggerian Allusions to Architectural Phenomenology", Studia UBB Philosophia 60 (2015): 5–30; and "Phenomenology of Place and Space in our Epoch: Thinking along Heideggerian Pathways", in The Phenomenology of Real and Virtual Places, ed. E. Champion, London: Routledge, 2018, pp. 123–143).
Environmental embodiment
Environmental embodiment is the perceptual awareness of a person ("lived body") in its interaction with the lifeworld (for example, a person "sees the springiness of steel" or "hear[s] the hardness and unevenness of cobbles in the rattle of a carriage"). According to Juhani Pallasmaa, buildings of the 21st century are too focused on being visually striking and need more "multivalent sensuousness". Thomas Thiis-Evensen suggests emphasizing the relationship between indoors and outdoors, established by floor, walls, and roof, through "existential expressions" of motion, weight, and substance. "Motion" corresponds to the perception of dynamics, visual inertia (does an element appear to expand? be stable?); "weight" to the element looking heavy (or light); "substance" is the appearance of the material (does the surface look like it will be cold to touch? soft?).
Body routine is a set of gestures, behaviors, and actions used to achieve a particular task. Phenomenologists believe that proper architectural design can combine the bodily routines of multiple bodies into a converged "place ballet", as when interpersonal exchanges occur in an office lounge or shopping mall. Space-syntax theory declares that particular spatial arrangements of walkways (for example, corridors or sidewalks) can result either in encounters or in a feeling of isolation.
Bodies and places are interdependent (they "interanimate" each other). This is visible, for example, in a concept of shop/house suggested by Howard Davis, a building type that, in different historical forms, contains both the residence and a place of business.
Notable architects
Notable architects and scholars of architecture associated with architectural phenomenology include:
George Baird
Nader El-Bizri
Kenneth Frampton
Marco Frascari
Vittorio Gregotti
Steven Holl
David Leatherbarrow
Daniel Libeskind
Charles W. Moore
Christian Norberg-Schulz
Mohsen Mostafavi
Juhani Pallasmaa
Alberto Pérez-Gómez
Steen Eiler Rasmussen
Ernesto Nathan Rogers
Joseph Rykwert
Dalibor Vesely
Peter Zumthor
See also
Architectural theory
Atmosphere (architecture and spatial design)
Critical Regionalism
Khôra
References
Bibliography
Major Works
Gaston Bachelard, 1969 [1957]. The Poetics of Space, trans. Maria Jolas. Boston: Beacon Press.
Kent Bloomer and Charles Moore, 1977. Body, Memory and Architecture. New Haven: Yale University Press.
Kenneth Frampton, 1974. "On Reading Heidegger." Oppositions 4 (October 1974), unpaginated.
Karsten Harries, 1980. "The Dream of the Complete Building." Perspecta: The Yale Journal of Architecture 17: 36-43.
Karsten Harries, 1982. "Building and the Terror of Time." Perspecta: The Yale Journal of Architecture 19: 59-69.
Karsten Harries, 1997. The Ethical Function of Architecture. Cambridge, Massachusetts: MIT Press.
Martin Heidegger, 1971 [1927]. Poetry, Language, Thought, trans. Albert Hofstadter. New York: Harper & Row.
Martin Heidegger, 1973. "Art and Space", trans. Charles Siebert. Man and World, 1973, Fall 6: 3–8.
Steven Holl, Juhani Pallasmaa, and Alberto Pérez-Gómez, 1994. Questions of Perception: Phenomenology of Architecture. A&U Special Issue, July 1994.
Christian Norberg-Schulz, 1971. Existence, Space and Architecture. New York: Praeger.
Christian Norberg-Schulz, 1976. "The Phenomenon of Space." Architectural Association Quarterly 8, no. 4: 3-10.
Christian Norberg-Schulz, 1980. Genius Loci: Towards a Phenomenology of Architecture. New York: Rizzoli.
Christian Norberg-Schulz, 1983. "Heidegger's Thinking on Architecture." Perspecta: The Yale Architectural Journal 20: 61-68.
Christian Norberg-Schulz, 1985 [1984]. The Concept of Dwelling: On the Way to Figurative Architecture. New York: Electa/Rizzoli.
Juhani Pallasmaa, 1986. "The Geometry of Feeling: A Look at the Phenomenology of Architecture." Skala: Nordic Journal of Architecture and Art 4: 22-25.
Juhani Pallasmaa, 1996. The Eyes of the Skin: Architecture and the Senses. New York: Wiley.
Steen Eiler Rasmussen, 1959 [1957]. Experiencing Architecture. Cambridge, Massachusetts: MIT Press.
Fred Rush, 2009. On Architecture. London & New York: Routledge.
M. Reza Shirazi, 2014. Towards an Articulated Phenomenological Interpretation of Architecture: Phenomenal Phenomenology. London: Routledge.
Thomas Thiis-Evensen, 1987. Archetypes in Architecture. Oxford: Oxford University Press.
Dalibor Vesely, 1988. "On the Relevance of Phenomenology." Pratt Journal of Architecture 2: 59-62.
Pierre von Meiss, 1990 [1986]. Elements of Architecture: From Form to Place. London, E & FN Spon.
Further Reading
Dennis Pohl, 2018, "Heidegger's Architects," in: Environmental & Architectural Phenomenology, Vol. 29, No. 1:19–20.
Nader El-Bizri, 2011. "Being at Home Among Things: Heidegger's Reflections on Dwelling." Environment, Space, Place, Vol. 3:47–71.
Nader El-Bizri, 2015. "On Dwelling: Heideggerian Allusions to Architectural Phenomenology." Studia UBB Philosophia 60: 5–30.
Benoît Jacquet & Vincent Giraud, eds., 2012. From the Things Themselves: Architecture and Phenomenology. Kyoto and Paris: Kyoto University Press and Ecole française d'Extrême-Orient.
Maurice Merleau-Ponty, 1962 [1945]. The Phenomenology of Perception, trans. Colin Smith. New York: Humanities Press.
Mohsen Mostafavi and David Leatherbarrow, 1993. On Weathering: The Life of Buildings in Time. Cambridge, Massachusetts: MIT Press.
Kate Nesbitt, ed., 1996. Theorizing a New Agenda for Architecture: An Anthology of Architectural Theory 1965-1995. New York: Princeton Architectural Press.
Christian Norberg-Schulz, 1965. Intentions in Architecture. Cambridge, Massachusetts: MIT Press.
Christian Norberg-Schulz, 1988. Architecture: Meaning and Place. New York: Rizzoli.
Alberto Pérez-Gómez, 1983. Architecture and the Crisis of Modern Science. Cambridge, Massachusetts: MIT Press.
Steen Eiler Rasmussen, 1959. Experiencing Architecture. Cambridge, Massachusetts: MIT Press.
David Seamon & Robert Mugerauer eds.,1985. Dwelling, Place & Environment: Towards a Phenomenology of Person and World. Dordrecht, Netherlands: Martinus Nijhoff.
Adam Sharr, 2007. Heidegger for Architects. London and New York: Routledge.
Dalibor Vesely, 2004. Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production. Cambridge, Massachusetts: MIT Press.
Sources
Architectural theory
Deconstructivism
Phenomenological methodology | Phenomenology (architecture) | [
"Engineering"
] | 3,273 | [
"Postmodern architecture",
"Architectural theory",
"Architecture"
] |
2,191,347 | https://en.wikipedia.org/wiki/Mobile%20content | Mobile content is any type of web hypertext and information content and electronic media which is viewed or used on mobile phones, like text, sound, ringtones, graphics, flash, discount offers, mobile games, movies, and GPS navigation. As mobile phone use has grown since the mid-1990s, the usage and significance of the mobile devices in everyday technological life has grown accordingly. Owners of mobile phones can now use their devices to make photo snapshots for upload, twits, mobile calendar appointments, and mostly send and receive text messages (SMSes or instant messages), listen to music, watch videos, take mobile pictures and make videos, use websites to redeem coupons for purchases, view and edit office documents, get driving instructions on mobile maps and so on. The use of mobile content in various areas has grown accordingly.
Camera phones may not only present but also produce media, for example photographs and videos with a few million pixels, and can act as pocket video cameras.
Mobile content can also refer to text and multimedia that is online on websites and hosted on mobile facilitated servers, which may either be standard desktop Internet pages, mobile webpages or specific mobile pages.
Transmission
Mobile text and image content via SMS is one of the main communication technologies on mobile phones, and is used to send mobile users and consumers messages, especially simple content such as ringtones and wallpapers. Because SMS is the main messaging (non-Internet) technology used by young people, it is still the most effective way to communicate and for providers to reach this target market. SMS is also ubiquitous, sometimes reaching a wider audience than any other technology available in the mobile space (MMS, Bluetooth, mobile e-mail or WAP). Above all, SMS is extremely easy to use, which makes applying it to new uses increasingly straightforward.
Although SMS is a technology with a long history going back to the first cellular phones, and may be replaced in some uses by the likes of the Multimedia Messaging Service (MMS) or WAP, it frequently gains new powers. One example is the introduction of applications whereby mobile tickets are sent to consumers via SMS: the message contains a WAP-push with a link to a page where a barcode is placed. This clearly substitutes for MMS, which has limited user reach and still has some applicability and interoperability problems.
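As an illustration of the ticket-by-SMS flow just described, here is a hedged sketch in Python; the gateway URL, parameter names, and "wap_push" message type are all hypothetical, since every commercial SMS gateway defines its own HTTP API:

```python
import requests

# Hypothetical gateway endpoint and parameter names: treat this purely
# as a sketch of the WAP-push ticket flow described above.
GATEWAY_URL = "https://sms-gateway.example.com/send"

def send_mobile_ticket(api_key, msisdn, barcode_url):
    """Send a WAP-push SMS pointing the handset at a barcode page."""
    resp = requests.post(GATEWAY_URL, data={
        "api_key": api_key,            # gateway credential (hypothetical)
        "to": msisdn,                  # recipient phone number
        "type": "wap_push",            # hypothetical message-type flag
        "title": "Your mobile ticket",
        "url": barcode_url,            # page hosting the ticket barcode
    }, timeout=10)
    resp.raise_for_status()            # surface gateway-side errors
    return resp.json()
```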
It is important to keep enhancing phone users' and consumers' confidence in using SMS for mobile content applications. This means that when a user orders a new wallpaper or ringtone, it has to work as expected, quickly and reliably. It is therefore important to choose the right SMS gateway or provider so as to ensure quality of service along the whole path of the content SMS until it reaches the consumer's mobile.
Modern phones come with Bluetooth and Near field communication. This allows video to be sent from phone to phone over Bluetooth, which has the advantages that there is no data charge.
Content types
Apps
Mobile application development, also known as mobile apps, has become a significant mobile content market since the release of the first iPhone from Apple in 2007. Prior to the release of Apple's phone product, the market for mobile applications (outside of games) had been quite limited. The bundling of the iPhone with an app store, as well as the iPhone's unique design and user interface, helped bring a large surge in mobile application use. It also enabled additional competition from other players. For example, Google's Android platform for mobile content has further increased the amount of app content available to mobile phone subscribers.
Some examples of mobile apps would be applications to manage travel schedules, buy movie tickets, preview video content, manage RSS news feeds, read digital version of popular newspapers, identify music, look at star constellations, view Wikipedia, and much more. Many television networks have their own app to promote and present their content. iTyphoon is an example of a mobile application used to provide information about typhoons in the Philippines.
Games
Mobile games are applications that allow people to play a game on a mobile handset. The main categories of mobile games include Puzzle/Strategy, Retro/Arcade, Action/Adventure, Card/Casino, Trivia/Word, Sports/Racing, given in approximate order of their popularity.
Several studies have shown that the majority of mobile games are bought and played by women. Sixty-five percent of mobile game revenue is driven by female wireless subscribers, who are the biggest driver of revenue for the Puzzle/Strategy category, comprising 72 percent of its revenue, while men make up 28 percent. Women dominate revenue generation for all mobile game categories, with the exception of Action/Adventure mobile games, in which men drive 60 percent of the revenue. It is also said that teens are three times as likely as those over twenty to play cell phone games.
Images
Mobile images are used as wallpaper on a mobile phone, and are also available as screensavers. On some handsets, images can also be set to display when a particular person calls the user. Sites like adg.ms allow users to download free content; however, service operators such as Telus Mobility block downloads from non-Telus websites.
Music
Mobile music is any audio file played on a mobile phone, normally formatted as an AAC (Advanced Audio Coding) or MP3 file. Ringtones have come in several different forms. Monophonic ringtones were the earliest, playing one tone at a time. They were improved upon by polyphonic ringtones, which play several tones at the same time so that a more convincing melody can be created. The next step was to play clips of actual songs, dubbed Realtones. These are preferred by record labels, as this evolution of the ringtone has allowed them to gain a cut of the lucrative ringtone market: Realtones generate royalties for record labels (the master recording owners) as well as publishers (the writers), whereas when monophonic or polyphonic ringtones are sold, only publishing or "mechanical" royalties are incurred, as no master recording has been exploited.
Some companies promote covertones, which are ringtones recorded by cover bands to sound like a famous song. More recently, ringback tones have become available, which are played to the person calling the owner of the ringback tone. Voicetones are ringtones that play someone talking or shouting rather than music, and there are also various ringtones of natural and everyday sounds. Realtones are the most popular form of ringtone: they captured 76.4% of the US ringtone market in the second quarter of 2006, followed by monophonic and polyphonic ringtones at 12% and ringback tones at 11.5% – but monophonic and polyphonic ringtones are falling in popularity while ringback tones are growing, a trend common around the globe. A recent innovation is the singtone, whereby "the user's voice is recorded singing to a popular music track and then 'tuned-up' automatically to sound good. This can then be downloaded as a ringtone or sent to another user's mobile phone", according to the director of Synchro Arts, the developers.
Beyond ringtones, there are full-track downloads, which are entire songs encoded to play on a mobile phone. These can be purchased and downloaded over the mobile network, but data charges can make this prohibitive. The other way to get a song onto a mobile phone is by "side loading" it, which normally involves downloading the song onto a computer and then transferring it to the mobile phone via Bluetooth, infrared, or a cable connection. It is possible to use a full track as a ringtone. In recent years, websites have sprung up that allow users to upload audio files and customize them into ringtones using specialized applications, including Myxer, MobilesRingtones, Bongotones, Ringtoneslab and Zedge.
Mobile music is becoming an integral part of the music industry as a whole. In 2005, the International Federation of the Phonographic Industry (IFPI) said it expected mobile music to generate more revenue than online music before the end of that year. In the first half of 2005, the digital music market grew enough to offset the fall in the traditional music market – and that is without including the sale of ringtones, which still made up the majority of mobile music sales around the globe.
Video
Mobile video comes in several forms including 3GPP, MPEG-4, RTSP, and Flash Lite.
Mobishows and cellsodes
Mobishow and cellsode are terms describing a broadcast-quality programme or series that has been produced, directed, edited and encoded for the mobile phone. Mobishows and cellsodes can range from short video clips, such as betting advice or the latest celebrity gossip, to half-hour drama serials. Examples include The Ashes and Mr Paparazzi Show, which were both created for mobile viewing.
Streaming
Radio
Mobile streaming radio is an application that streams on-demand audio channels or live radio stations to the mobile phone. In the U.S., mSpot was the first company to develop and commercialize streaming radio, which went live in March 2005 on Sprint. Today, all major carriers offer some sort of streaming radio service featuring programmed stations based on popular genres as well as live stations that include both music and talk.
TV
Mobile video also comes in the form of streaming TV over the mobile network, which must be a 2.5G or 3G network. This mimics a television station in that users cannot choose what to watch but must watch whatever is on the channel at the time.
There is also mobile broadcast TV, which operates like a traditional television station and broadcasts the content over a different spectrum. This frees up the mobile network to handle calls and other data usage, and because of the "one-to-many" nature of mobile broadcast TV the video quality is a lot better than that streamed over the mobile networks, which is a "one-to-one" system.
The problem is that broadcast technologies do not have a natural uplink, so for users to interact with the TV stream the service has to be closely integrated with the carrier's mobile network. The main technologies for broadcast TV are DVB-H, Digital Multimedia Broadcasting (DMB), and MediaFLO.
Live video
Live video can also be streamed and shared from a cell phone through applications like Qik and InstaLively. The uploaded video can be shared with friends through emails or social networking sites. Most live video streaming applications work over the cell network or through Wi-Fi; they also require users to have a data plan from their cell phone carrier.
International trends
Since the late 1990s, mobile content has become an increasingly important market worldwide. South Korea leads the world in mobile content and 3G mobile networks, followed by Japan; Europeans are also heavy users of their mobile phones and have been obtaining custom mobile content for their devices for years. In fact, mobile phone use has begun to exceed PC use in some countries. In the United States and Canada, mobile phone use and the accompanying use of mobile content have been slower to gain traction because of political issues and the absence of open networks in America.
On current trends, mobile phone content will play an increasing role in the lives of millions across the globe in the years ahead, as users will depend on their mobile phones to keep in touch not only with their friends but with world news, sports scores, the latest movies and music, and more.
Mobile content is usually downloaded through WAP sites, but new methods are on the rise. In Italy, 800,000 people are registered users of Passa Parola, an application that allows users to browse a large database of mobile content and download it directly to their handsets. This tool can also be used to recommend content to others, or to send content as a gift.
An increasing number of people are also beginning to use applications like Qik to upload and share videos from their cell phones on the internet. Mobile phone software like Qik allows users to share their videos with friends through email, SMS, and social networking sites like Twitter and Facebook.
A 2016 Pew Research report, "The Modern News Consumer", said 70 percent of those ages 18–29 preferred getting news from mobile devices rather than desktops, while the number was 53 percent for persons 30 to 49.
References
Further reading
External links | Mobile content | [
"Technology"
] | 2,565 | [
"Mobile content"
] |
2,191,350 | https://en.wikipedia.org/wiki/Head%20bobble | The head bobble, head wobble, or Indian head shake refers to a common gesture found in South Asian cultures, most notably in India. The motion usually consists of a side-to-side tilting of the head in arcs along the coronal plane. A form of nonverbal communication, it may mean yes, good, maybe, okay, or I understand, depending on the context.
Usage
In India, a head bobble can have a variety of different meanings. Most frequently it means yes, or is used to indicate understanding. The meaning of the head bobble depends on the context of the conversation or encounter. It can serve as an alternative to thank you or as a polite introduction, or it can represent acknowledgement.
Head bobbles can also be used in an intentionally vague manner. An unenthusiastic head bobble can be a polite way of declining something without saying no directly.
The gesture is common throughout India. However, it is used more frequently in South India.
See also
Head shake
Nod
References
External links
East vs West: "The myths that mystify", TED talk (ends with a reference to "the Indian head shake")
Do you wobble?
Human communication
Head gestures
South Asia | Head bobble | [
"Biology"
] | 246 | [
"Human communication",
"Behavior",
"Human behavior"
] |
2,191,355 | https://en.wikipedia.org/wiki/Shrug | A shrug is a gesture or posture performed by raising both shoulders. In certain countries, it is a representation of an individual either being indifferent about something or not knowing an answer to a question.
Shrugging
The shoulder-raising action may be accompanied by rotating the palms upwards, pulling closed lips downwards, raising the eyebrows or tilting the head to one side. A shrug is an emblem, meaning that it belongs to the gestural vocabulary of only certain cultures and may be used in place of words. In many countries, such as the United States, Sweden and Morocco, a shrug represents hesitation or lack of knowledge; however, in other countries, such as Japan and China, shrugging is uncommon and is not used to show hesitation. People from the Philippines, Iran and Iraq may interpret a shrug as a somewhat impolite sign of confidence.
Gallic shrug
The Gallic shrug, "generally a nuanced gesture with myriad meanings", is performed by sticking out the lower lip, raising the eyebrows and shoulders simultaneously, and voicing a nonchalant bof.
Emoji
The shrug gesture is a Unicode emoji, included as 🤷 (U+1F937, "person shrugging").
The shrug emoticon, better known as the shruggie, made from Unicode characters, is typed as ¯\_(ツ)_/¯, where "ツ" is the character tsu from Japanese katakana.
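Since the emoticon is simply a sequence of Unicode code points, it can be reproduced programmatically. A minimal sketch in Python:

```python
# Build the "shruggie" from its Unicode code points.
shruggie = "\u00AF\\_(\u30C4)_/\u00AF"  # U+00AF MACRON, U+30C4 KATAKANA LETTER TU
print(shruggie)                         # prints: ¯\_(ツ)_/¯
```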
See also
Indifference (emotion)
Meh
References
Gestures
Human communication | Shrug | [
"Biology"
] | 278 | [
"Human communication",
"Behavior",
"Gestures",
"Human behavior"
] |
2,191,357 | https://en.wikipedia.org/wiki/Chinese%20numerology | Some numbers are believed to be auspicious or lucky (吉利, ) or inauspicious or unlucky (不吉, ) based on the Chinese word that the number sounds similar to. The numbers 6 and 8 are widely considered to be lucky, while 4 is considered unlucky. These traditions are not unique to Chinese culture; other cultures with a history of using Chinese characters have similar beliefs stemming from these concepts.
Zero
The number 0 (零, ) is the beginning of all things and is generally considered a good number, because it sounds like 良 (pinyin: liáng), which means 'good'.
One
The number 1 (一, ) is neither auspicious nor inauspicious. It is a number given to winners to indicate first place. But it can also symbolize loneliness or being single. For example, November 11 is Singles' Day in China, as the date has four '1's, which stand for singles.
Two
The number 2 (二, cardinal, or 兩, used with units, ) is most often considered a good number in Chinese culture. In Cantonese, 2 (二 or 兩, ) is homophonous with the characters for "easy" (易, ) and "bright" (亮, ), respectively. There is a Chinese saying: "good things come in pairs". It is common to repeat characters in product brand names; for example, the character 喜 () can be repeated to form the character 囍 ().
24 () in Cantonese sounds like "easy die" (易死, ).
28 () in Cantonese sounds like "easy prosper" (易發, ).
Three
The number 3 (三, ) sounds like 生 (), which means "to live" or "life", so it is considered a good number. It is also significant since there are three important stages in a person's life (birth, marriage, and death). On the other hand, the number 3 (三, ) also sounds like 散 (), which means "to split", "to separate", "to part ways" or "to break up", so it can be a bad number as well.
Four
While not traditionally considered an unlucky number, 4 has in recent times gained an association with bad luck because of its pronunciation, predominantly among Cantonese speakers.
The belief that the number 4 is unlucky originated in China, where many have avoided the number. The interpretation of 4 as unlucky is, however, a relatively recent development, considering there are many examples, sayings and elements in Chinese history in which the number 4 was considered auspicious instead.
The number 4 (四, ) is sometimes considered an unlucky number, particularly in Cantonese, because its pronunciation in that dialect is nearly homophonous with the word "death" (死, ).
Thus, some buildings in East Asia omit floors and room numbers containing 4, similar to the Western practice of some buildings not having a 13th floor because 13 is considered unlucky.
Where East Asian and Western cultures blend, such as in Hong Kong, it is possible in some buildings for the thirteenth floor, along with all floors containing a 4, to be omitted. Thus a building whose top floor is numbered 100 would in fact have just eighty-one floors. Similarly, in Vietnamese, the number 4 (四) is called tứ in Sino-Vietnamese, which sounds like tử (死), "death", in Vietnamese.
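The relabelling scheme is mechanical enough to express in a few lines. The following is a minimal sketch in Python, assuming the convention described above (13 and every label containing the digit 4 are skipped); the 80 numbered floors become 81 storeys if a separate ground floor is counted, as is common in Hong Kong:

```python
# Which floor labels survive when 13 and any label containing
# the digit 4 are omitted (illustrative convention, not a regulation).
def allowed_labels(top_label: int) -> list[int]:
    return [n for n in range(1, top_label + 1)
            if n != 13 and "4" not in str(n)]

labels = allowed_labels(100)
print(len(labels))   # 80 numbered floors (81 storeys with a ground floor)
print(labels[:12])   # [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 15]
```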
The number 4 can also symbolise luck, prosperity and happiness in Chinese culture. In the musical scale, 4 is pronounced Fa, which sounds like 发 (fortune) in Mandarin. In this case, some Chinese people regard 4 as a propitious and lucky number. There is also an old Chinese idiom, 四季发财 ("to be wealthy all year").
In traditional Chinese history and among other Chinese dialect groups such as the Teochew people, the number 4 is considered a very lucky and auspicious number. For one, it is an even number, and even numbers are traditionally preferred over odd numbers. Many historical and philosophical Chinese concepts also come in groups of 4.
Another common explanation is that the number 4 in Teochew sounds like or rhymes with the word "happiness" or "joy" (喜 Teochew: hi2).
Finally, another plausible explanation is that in the Teochew dialect, the number 4 (Teochew: si) is similarly pronounced to the word "silk" (絲 Teochew: si1) or "Emperor's seal" (璽 Teochew: si2), a symbol of royalty, power and prosperity.
In Teochew culture, it is acceptable and considered lucky to give "red packets" of money (紅包 Teochew: ang5 bao1) in monetary groups of 4 (e.g. $4, $40, $44, $440 etc...) during Chinese New Year and other festivities like weddings. Stacks of 4 mandarin oranges (Citrus reticulata) are often presented on grand or formal Teochew occasions, the most common stack configuration with 3 mandarin oranges below and 1 on top.
The house numbers 4 and 44, while shunned by the Cantonese, are often chosen by Teochews for their particularly auspicious connotations. Heng Pang Kiat JP (also known as Hing Pang Kiat) (c. 1856–1930), a prominent Teochew businessman and property developer in Singapore, specially picked the house number 44 on Emerald Hill, even though he had a choice of house numbers from 38 to 52 from his property developments there.
There is an exception for the Cantonese with the house number 54, which is considered very lucky as it sounds like 唔死 (m̀ séi) meaning "Will not die and shall live forever". The number 9 is considered the highest number representing great success in Chinese numerology, thus the number 54 can also be interpreted as 5 + 4 = 9, to mean great success.
The transmission of this superstition could also be linked to religion. Buddhism played a significant role in the spread of Chinese characters and culture across the region. In Japan, the idea that the number 4 was once considered auspicious is documented in the Kojiki, emphasizing its connection to good fortune. However, as Chinese influence grew, and the pronunciation became closer to "shi," it began to be associated with death. In Korea, Buddhism's influence was more prominent when the religion was first introduced, and in Vietnam, the Sino-Vietnamese pronunciations might have contributed to this superstition. Buddhism provided the platform for discussing death, giving rise to this cultural foundation.
Five
The number 5 (五, ) sounds like "me" in Mandarin (吾, ) and Cantonese (唔, ). It is considered a lucky number; thus, the number is used in the measurements and naming of Xi Jinping's presidential car, the Hongqi L5.
53 () sounds like "my life" in Mandarin (吾生, ) and "not birth" in Cantonese (唔生, ).
54 () sounds like "my death" in Mandarin (吾死, ) and "not die" in Cantonese (唔死, ).
58 () sounds like "me prosper" in Mandarin (吾發, ) and "no prosperity" in Cantonese (唔發 or 沒發).
Five is also associated with the five elements (Water, Fire, Earth, Wood, and Metal) in Chinese philosophy, and in turn was historically associated with the Emperor of China. For example, the Tiananmen gate, being the main thoroughfare to the Forbidden City, has five arches.
Six
The number 6 (六, ) in Mandarin sounds like "slick" or "smooth" (溜, ). In Cantonese, 6 () sounds like "good fortune" or "happiness" (祿, 樂 ).
Therefore 6 is considered a good number for business.
Seven
The number 7 (七, ) sounds like "even" in Mandarin (齊, ), so it is considered a good number for relationships. It also sounds like "arise" (起, ) and "life essence" (氣, ) in Mandarin.
Seven can also be considered an unlucky number, since the seventh month of the lunar calendar is the "ghost month". It also sounds like "to deceive" (欺, ) in Mandarin.
In Cantonese, 7 () sounds like 𨳍 (), which is a vulgar way of saying "penis".
Eight
The number 8 (八, ) sounds like 發 (), which means "to prosper" or "to make a fortune".
There is also a visual resemblance between 88 and 囍 (), a popular decorative design composed of two stylized characters 喜 ().
The number 8 is viewed as such an auspicious number that even being assigned a number with several eights is considered very lucky.
Steve Wozniak held the United States telephone number +1 (408) 888-8888 for many years in Silicon Valley. Several businesses in Silicon Valley have multiple "8" characters in their names, particularly near the cluster of wealthy Chinese expatriates in and around Cupertino, California, the original home of Apple Computer, to which Steve Jobs was invited back to run the company after NeXT.
In 2014, the Australian Department of Home Affairs renamed their previous Business Skills (provisional) visas, subclasses 160–165, to 188 and 888 Subclasses, both of which include eights.
In 2003, the phone number "+86 28 8888 8888" was sold to Sichuan Airlines for CN¥2.33 million (approximately US$280,000).
The opening ceremony of the 2008 Summer Olympics in Beijing began on 8/8/08 at 8 minutes and 8 seconds past 8 pm local time (UTC+08).
China, Taiwan, Hong Kong, Macau, Malaysia, the Philippines and Singapore use the time zone UTC+08:00.
The Asian American mass media company 88rising (known primarily for being the record label of artists such as Joji and Rich Brian) adopted the name in 2016, and has referenced its symbolism in the titles of several events, including the 2018 US tour 88rising Double Happiness.
The Petronas Twin Towers in Malaysia each have 88 floors.
Buick offers a minivan for the Chinese market under the GL8 name, a model name not used in any other market.
The Air Canada route from Shanghai to Toronto is Flight AC88, and the route from Hong Kong to Vancouver is Flight AC8.
The KLM route from Hong Kong to Amsterdam is Flight KL888.
The Etihad Airways route from Abu Dhabi to Beijing then onwards to Nagoya is Flight EY888.
The United Airlines route from Beijing to San Francisco is Flight UA888, the route from Beijing to Newark is Flight UA88, and the route from Chengdu to San Francisco is Flight UA8.
The Air Astana route from Beijing to Almaty is Flight KC888.
The British Airways route from Chengdu to London is Flight BA88.
The Cathay Pacific route from Hong Kong to Vancouver is Flight CX888.
Singapore Airlines reserves flight numbers beginning with the number 8 for flights to Mainland China, Hong Kong (except SQ1/2 to and from San Francisco via Hong Kong) and Taiwan (e.g. a typical flight between Singapore and Hong Kong would be numbered SQ856/861).
SriLankan Airlines reserves flight numbers beginning with the number 8 for flights to Mainland China and Hong Kong.
The Turkish Airlines route from Istanbul to Beijing is TK88.
The US Treasury has sold 70,000 dollar bills with serial numbers that contain 4 eights.
Boeing delivered the 8,888th 737 to come off the production line to Xiamen Airlines. The airplane, a Next-Generation 737–800, features a special livery commemorating the airplane's significance.
In Singapore, a breeder of rare dragon fish (Asian arowana, which are "lucky fish" and, being a rare species, are required to be microchipped) makes sure to use numbers with plenty of eights in the microchip tag numbers, and appears to reserve numbers especially rich in eights and sixes (e.g., 702088880006688) for particularly valuable specimens.
As part of grand opening promotions, a Commerce Bank branch in New York's Chinatown raffled off safety deposit box No. 888.
An "auspicious" numbering system was adopted by the developers of 39 Conduit Road Hong Kong, where the top floor was "88" – Chinese for double fortune. It is already common in Hong Kong for ~4th floors not to exist; there is no requirement by the Buildings Department for numbering other than that it being "made in a logical order." A total of 43 intermediate floor numbers are omitted from 39 Conduit Road: those missing include 14, 24, 34, 54, 64, all floors between 40 and 49; the floor number which follows 68 is 88.
Similar to the common Western practice of using "9" in price points, it is common to see "8" used in its place to achieve the same psychological effect; for example, menu prices like $58 and $88 are frequently seen.
Nine
The number 9 (九, ) was historically associated with the Emperor of China, and the number was frequently used in matters relating to the Emperor. Before the establishment of the imperial examinations, officials were organized in the nine-rank system; the nine bestowments were rewards the Emperor made to officials of extraordinary capacity and loyalty, while the nine familial exterminations was one of the harshest punishments the Emperor could order. The Emperor's robes often had nine dragons, and Chinese mythology held that the dragon has nine children.
Also, the number 9 sounds like "long lasting" (久, ), so it is often used in weddings.
In Cantonese, the number 9 is also a vulgar way of saying penis (𨳊, ), similar to 7 as well, with 9 referring to an erect penis instead.
Combinations
48: Any three-digit number that ends in 48 sounds like "wealthy for X lifetimes"; for example, 748 () sounds like 七世發 (), meaning "wealthy for seven lifetimes".
167 () in Cantonese sounds like "一碌𨳍" (), which is a vulgar way of saying "a dick".
168 () sounds like "一路发" () meaning "fortune all the way".
250 () is usually used to insult someone the speaker considers extremely foolish. Alternative readings such as 兩百五 (liǎng bǎi wǔ) and 二百五十 (èr bǎi wǔ shí) do not carry this meaning.
448 () sounds like "死先發" () meaning "wealthy on death".
514 () in Mandarin sounds like 我要死 (), meaning "I want to die".
518 () in Mandarin sounds like "我要发" () which means "I am going to prosper".
520 () in Mandarin sounds similar to 我愛你 (), meaning "I love you".
548 () in Cantonese sounds like "唔洗發"() meaning "no need to be wealthy".
748 () in Mandarin sounds like 去死吧 (), meaning "go die" (that is, "go to hell").
1314 () sounds like "一生一世" () meaning "forever" and is often used romantically.
5354 () in Cantonese sounds like "唔生唔死" () meaning "not alive not dead", referring to being in a miserable state like one is almost dead.
7414 in Mandarin sounds like a phrase meaning "go and die".
7456 () in Mandarin sounds like "气死我了" () meaning "to make me angry" or "to piss me off".
9413 () sounds like 九死一生 (), literally "nine deaths, one life", meaning a 90% chance of being dead and only a 10% chance of being alive; in other words, to have survived a narrow escape.
5201314 () in Mandarin sounds like 我愛你一生一世 (), meaning "I love you forever".
See also
Bagua
Chinese mathematics
Chinese number gestures
Chinese numerals
Color in Chinese culture
Culture of China
King Wen sequence
Numerology
Homophonic puns in Mandarin Chinese
Faux pas derived from Chinese pronunciation
References
External links
Numbers game in China
Craving lucky numbers in daily life
Number four not so deadly for Chinese
Lucky numbers and role in Chinese practice of gift giving between business partners
Learning Chinese number with gestures
Chinese culture
Language games
Numerology
Homonymy in Chinese | Chinese numerology | [
"Mathematics"
] | 3,425 | [
"Numerology",
"Mathematical objects",
"Numbers"
] |
2,191,496 | https://en.wikipedia.org/wiki/Joseph%20Weber | Joseph Weber (May 17, 1919 – September 30, 2000) was an American physicist. He gave the earliest public lecture on the principles behind the laser and the maser and developed the first gravitational wave detectors, known as Weber bars.
Early life
Joseph Weber was born in Paterson, New Jersey, on 17 May 1919, the last of four children born to Yiddish-speaking Jewish immigrant parents. His name was "Yonah" until he entered grammar school. He had no birth certificate, and his father had taken the last name of "Weber" to match an available passport in order to emigrate to the US. Thus, Joe Weber had little proof of either his family name or his given name, which gave him some trouble in obtaining a passport at the height of the Red Scare.
Early education
Weber attended Paterson public schools (and the Paterson Talmud Torah), graduating at sixteen from the "Mechanic Arts Course" of Paterson Eastside High School in June 1935. He began his undergraduate education at Cooper Union, but to save his family the expense of his room and board he won admittance to the United States Naval Academy through a competitive exam. He graduated from the Academy in 1940.
Naval career
He served aboard US Navy ships during World War II, rising to the rank of lieutenant commander. Weber was the Officer of the Deck on the USS Lexington when the ship received word of the attack on Pearl Harbor. In the Battle of the Coral Sea his carrier sank the Japanese aircraft carrier Shōhō and was in turn mortally damaged on May 8, 1942. Weber often regaled his students with the story of how the Lexington glowed incandescent as she slipped beneath the waves.
Later, he commanded the sub-chaser SC-690, first in the Caribbean, and later in the Mediterranean Sea. In that role, he took part in the invasion of Sicily at Gela Beach, in July 1943.
He studied electronics at the Naval Postgraduate School in 1943-45, and from 1945 to 1948, he headed electronic countermeasures design for the Navy's Bureau of Ships, in Washington, DC. He resigned from the navy as a lieutenant commander in 1948 to become a professor of engineering.
Early post-naval career; development of the MASER
In 1948, he joined the engineering faculty of the University of Maryland, College Park. A condition of his appointment was that he should quickly attain a PhD. Thus, he did his PhD studies, on microwave spectroscopy, at night, while already a faculty member. He completed his PhD, with a thesis entitled Microwave Technique in Chemical Kinetics, from The Catholic University of America in 1951. Building on his naval expertise in tube microwave engineering, he worked out the idea of coherent microwave emissions. He submitted a paper in 1951 for the June 1952 Electron Tube Research Conference held in Ottawa, which was the earliest public lecture on the principles behind the laser and the maser. After this presentation, RCA asked Weber to give a seminar on this idea, and Charles Hard Townes asked him for a copy of the paper. Townes was working along similar lines, as were Nikolay Basov and Aleksandr Prokhorov. Although Weber was jointly nominated for the Nobel Prize in Physics in 1962 and 1963 for his contributions to the development of the laser, it was Townes, Basov, and Prokhorov, who received the 1964 Nobel Prize, "for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser–laser principle."
Work on gravitational wave detection
His interest in general relativity led Weber to use a 1955–1956 sabbatical, funded by a Guggenheim Fellowship, to study gravitational radiation with John Archibald Wheeler at the Institute for Advanced Study in Princeton, NJ and the Lorentz Institute for Theoretical Physics at the University of Leiden in the Netherlands. At the time, the existence of gravitational waves was not widely accepted. After he began publishing papers on the detection of gravitational waves, he moved from the engineering department to the physics department at Maryland.
He developed the first gravitational wave detectors (Weber bars) in the 1960s, and began publishing papers with evidence that he had detected these waves. In 1972, he sent a gravitational wave detection apparatus to the Moon (the "Lunar Surface Gravimeter," part of the Apollo Lunar Surface Experiments Package) on the Apollo 17 lunar mission.
Claims of gravitational wave detection discredited
In the 1970s, the results of these gravitational wave experiments were largely discredited, although Weber continued to argue that he had detected gravitational waves. To test Weber's results, IBM physicist Richard Garwin built a detector similar to Weber's; in six months, it detected only one pulse, which was most likely noise. David Douglass, another physicist, discovered an error in Weber's computer program that, he claimed, produced the daily gravitational wave signals Weber reported: because of the error, a signal seemed to appear out of noise. Garwin aggressively confronted Weber with this information at the Fifth Cambridge Conference on Relativity at MIT in June 1974, and a series of letters was then exchanged in Physics Today. Garwin asserted that Weber's model was "insane, because the universe would convert all of its energy into gravitational radiation in 50 million years or so, if one were really detecting what Joe Weber was detecting." "Weber," Garwin declared, "is just such a character that he has not said, 'No, I never did see a gravity wave.' And the National Science Foundation, unfortunately, which funded that work, is not man enough to clean the record, which they should." In 1972, Heinz Billing and colleagues at the Max Planck Institute for Physics built a detector similar to Weber's in an attempt to verify his claim, but found no signals.
Weber himself continued to maintain his gravitational wave detection equipment until his death.
Discovery of gravitational waves by LIGO
On February 11, 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams held a press conference to announce that they had directly detected gravitational waves from a pair of merging black holes on Rosh Hashanah 2015 (Weber's yahrzeit), using the Advanced LIGO detectors.
During the announcement, Weber was credited by numerous speakers as the founder of the field, including by Kip Thorne, who co-founded LIGO and also devoted much of his career to the search for gravitational waves. Later, Thorne told the Washington Post, "He really is the founding father of this field."
Weber's second wife, astronomer Virginia Trimble, was seated in the front row of the audience during the LIGO press conference. In an interview with Science afterwards, Trimble was asked if Weber really saw gravitational waves, to which she replied: "I don't know. But I think if there had been two technologies going forward they would have pushed each other, as collaborators not as competitors, and it might have led to an observation sooner."
Work on neutrino detection
In the course of defending his work on gravitational wave detection, Weber began related work on neutrino detection. Assuming infinite crystal stiffness, Weber calculated that it could be possible to detect neutrinos using sapphire crystals, and published experimental results on neutrino scattering with these crystals. Weber also patented the idea of using vibrating crystals to generate neutrinos. His experimental results contradicted previous and subsequent findings from other experiments, but Weber's neutrino theories continue to be tested.
Legacy
Although his attempts to find gravitational waves with bar detectors are considered to have failed, Weber is widely regarded as the father of gravitational wave detection efforts, including LIGO, MiniGrail, and several HFGW research programs around the world. His notebooks contained ideas for laser interferometers; later such a detector was first constructed by his former student Robert Forward at Hughes Research Laboratories.
The Joseph Weber Award for Astronomical Instrumentation was named in his honor.
Personal life
His first marriage, to his high school classmate Anita Straus, ended with her death in 1971. His second marriage was to astronomer Virginia Trimble. He had four sons (from his first marriage) and six grandchildren.
Joseph Weber died on 30 September 2000 in Pittsburgh, Pennsylvania, during treatment for lymphoma that had been diagnosed about three years earlier.
References
External links
(Obituary)
(Obituary)
(Obituary)
(Profile)
(Obituary)
UNITED STATES NAVAL ACADEMY; SIXTIETH GRADUATION ANNIVERSARY OF THE CLASS OF 1940: Class Individual Biographies: JOSEPH WEBER
Joseph Weber: An Officer and a Gentleman (in German)
Physics World obituary
1919 births
2000 deaths
20th-century American physicists
Jewish American physicists
United States Naval Academy alumni
Naval Postgraduate School alumni
Catholic University of America alumni
Fellows of the American Physical Society
University of Maryland, College Park faculty
Eastside High School (Paterson, New Jersey) alumni
Scientists from Paterson, New Jersey
Gravitational-wave astronomy
Laser researchers
Deaths from lymphoma in the United States
20th-century American Jews
Military personnel from New Jersey | Joseph Weber | [
"Physics",
"Astronomy"
] | 1,825 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
2,191,513 | https://en.wikipedia.org/wiki/Chinese%20number%20gestures | Chinese number gestures are a method to signify the natural numbers one through ten using one hand. This method may have been developed to bridge the many varieties of Chinese; for example, the numbers 4 (sì) and 10 (shí) are hard to distinguish in some dialects. Some suggest that it was also used by business people during bargaining (e.g., to convey a bid by feeling the hand gesture in a sleeve) when they wished for more privacy in a public place. These gestures are fully integrated into Chinese Sign Language.
Methods
While the five digits on one hand can easily express the numbers one through five, six through ten have special signs that can be used in commerce or day-to-day communication. The gestures are rough representations of the Chinese numeral characters they represent. The system varies in practice, especially for the representation of "7" to "10". Two of the systems are listed below:
Six (六)
The little finger and thumb are extended (the extended thumb indicating one set of 5); the other fingers are closed, sometimes with the palm facing the signer.
Seven (七)
Northern China: The fingertips are all touching, pointed upwards, or just the fingertips of the thumb and first two fingers (the most common method); another method is similar to the eight (described below) except that the little finger is also extended.
Northern China: The index finger and middle finger point outward, with the thumb extended upwards (the extended thumb indicating one set of 5), sometimes with the palm facing the observer.
Coastal southern China: The index finger points down with the thumb extended, mimicking the shape of a "7".
Eight (八)
Northern China: The thumb and index finger make an "L" and the other fingers are closed, with the palm facing the observer.
Northern China: The index finger and middle finger point down and with the fingertips optionally touching a horizontal surface, making the Chinese number 8 ("八").
Coastal southern China: The thumb, index finger, and middle finger are extended.
Nine (九)
Mainland China: The index finger makes a hook and the other fingers are closed, sometimes with the palm facing the signer.
Taiwan: Four of the five digits of the hand are extended, the exception being the little finger.
Hong Kong: Both methods are used.
Ten (十)
The fist is closed with the palm facing the signer, or the middle finger crosses an extended index finger, facing the observer. Some Chinese distinguish between zero and ten by having the thumb closed or open, respectively.
The arms are raised and the index fingers of both hands are crossed in a "十" (making the Chinese number ten) with the palms facing in opposite directions, optionally with the hands placed in front of the signer's face.
Use of the signs corresponds to the use of numbers in the Chinese language. For instance, the sign for five just as easily means fifty. A two followed by a six, using a single hand only, could mean 260 or 2600 etc. besides twenty-six. These signs also commonly refer to days of the week, starting from Monday, as well as months of the year, whose names in Chinese are enumerations.
In different regions signs for numbers vary significantly. One may interpret the "8" sign as a "7". The "index finger-hook" symbol for 9, also means "death" in other contexts.
The numbers zero through five are simpler:
Zero (〇)
Northern China: The fist is closed. This may be interpreted as 10 depending on the situation, though some Chinese distinguish between zero and ten by having the thumb closed or open, respectively.
Coastal southern China: The thumb and index finger make a circle, with the other three fingers closed.
One (一)
The index finger is extended.
Two (二)
The index and middle fingers are extended.
Three (三)
The thumb and index finger are closed and the other three fingers are extended.
The thumb holds the little finger down in the palm and the middle three fingers are extended.
Four (四)
The thumb is held in the palm and the four fingers are extended.
Five (五)
All five digits are extended.
Only the thumb is extended (either upwards or outwards) with the palm facing the signer.
Counting with fingers differs from expressing a specific number with a single gesture. When counting, the palm can face either the signer or the audience, depending on the purpose. Before counting, all fingers are closed; counting starts by extending the thumb as the first, then the index finger as the second, until all fingers are extended at the fifth; counting then continues by folding the fingers in the same sequence, from thumb to little finger, for the sixth through the tenth. The same method is repeated for counting larger numbers. One can also start counting with all fingers extended. Some believe that for formal scenarios, such as giving a speech or presentation, counting with the palm facing the audience and starting with all fingers extended is more polite, since the gesture of folding the fingers represents bowing.
When playing drinking finger games (划拳, 猜拳), a slightly different set of finger gestures for numbers is used. One such set is:
Zero (〇)
The fist is closed.
One (一)
The thumb is extended with all other fingers folded toward the palm.
Two (二)
The thumb and index finger make an "L", other fingers closed.
Three (三)
The last two fingers are closed and the rest (the thumb and the first two fingers) are extended, or
With the index finger and thumb closed, the last three fingers are extended.
Four (四)
The thumb is held in palm with the four fingers extended.
Five (五)
All five digits are extended.
Gallery
From 1 to 5
From 6 to 10 in North China
From 6 to 10 in coastal South China
From 6 to 10 in Taiwan
The digit 0
The gesture of the digit 0 is used for showing numbers like 20, 30, 40, etc., where the left hand shows the tens digit and the right hand shows the digit 0.
See also
Chinese numerals
Finger-counting
Finger binary
Hand signaling (open outcry)
Nonverbal communication
Numbers in Chinese culture
Sign language
References
References
Finger-counting
Chinese language
Chinese culture
Numerals | Chinese number gestures | [
"Mathematics"
] | 1,307 | [
"Numeral systems",
"Numerals",
"Finger-counting"
] |
2,191,566 | https://en.wikipedia.org/wiki/American%20Society%20for%20Artificial%20Internal%20Organs | American Society for Artificial Internal Organs (ASAIO) is an organization of individuals and groups that are interested in artificial internal organs and their development.
It supports research into artificial internal organs and holds an annual meeting, which attracts industry, researchers and government officials. ASAIO's most heavily represented areas are nephrology, cardiopulmonary devices (artificial hearts, heart-lung machines) and biomaterials. It publishes a peer-reviewed publication, the ASAIO Journal, 10 times a year.
References
External links
American Society for Artificial Internal Organs home page
ASAIO Journal - home
Implants (medicine)
Medical associations based in the United States
Prosthetics
Medical and health organizations based in Florida | American Society for Artificial Internal Organs | [
"Engineering",
"Biology"
] | 145 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
2,191,629 | https://en.wikipedia.org/wiki/Atomix%20%28video%20game%29 | Atomix is a puzzle video game developed by Günter Krämer (as "Softtouch") and published by Thalion Software, released for the Amiga and other personal computers in late 1990. The object of the game is to assemble molecules from compound atoms by moving the atoms on a two-dimensional playfield.
Atomix was received positively; reviewers noted the game's addictiveness and enjoyable gameplay, though criticized its repetitiveness.
Gameplay
Atomix takes place on a playfield consisting of a number of walls, with the atoms scattered throughout. The player is tasked with assembling a molecule from the atoms. The atoms must be arranged to exactly match the molecule displayed on the left side of the screen. The player can choose an atom and move it in any of the four cardinal directions. A moved atom keeps sliding in one direction until it hits a wall or another atom. Solving the puzzles requires strategic planning in moving the atoms, and on later levels with little free space, even finding room for the completed molecule can be a problem. Once the molecule is assembled, the player is given a score; the faster the puzzle was completed, the higher the score.
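The sliding rule is compact enough to express directly in code. The following is a minimal sketch in Python, assuming a character grid where '#' is a wall, '.' a free cell, and any other character an atom:

```python
# Minimal sketch of Atomix-style movement: an atom slides in one
# direction until the next cell is a wall ('#') or another atom.
DIRS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def slide(grid, row, col, direction):
    dr, dc = DIRS[direction]
    while grid[row + dr][col + dc] == ".":  # keep going while the next cell is free
        row, col = row + dr, col + dc
    return row, col

field = ["#####",
         "#.a.#",
         "#...#",
         "#####"]
grid = [list(line) for line in field]
print(slide(grid, 1, 2, "left"))  # (1, 1): the atom stops beside the left wall
```

A solver then searches over sequences of such moves, which is what makes the generalized puzzle PSPACE-complete, as noted below.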
Each puzzle must be completed within a time limit. A portion of the player's score can be spent to restart a failed puzzle. The entire game consists of 30 puzzles of increasing difficulty. In addition, after every five puzzles, there is a bonus level where the player must move laboratory flasks filled with various amounts of liquid to arrange them from empty to full.
The game also offers a two-player mode, where two players work on the same puzzle; they take turns which last up to thirty seconds.
Development
Amiga Format reviewed a pre-release version in its May 1990 issue; it was an almost complete version of the game, although it lacked sound.
Initially the game was released for the Amiga, Atari ST and IBM PC; as of May 1990, a C64 version was not yet planned, though one was released a few months later. A ZX Spectrum version was also planned, to be distributed by U.S. Gold, but it was never released.
The game was published for the Enterprise 128 in 2006; this version was written by Zoltán Povázsay of Hungary.
A clone for the Atari Jaguar called Atomic was released in 2006, written by Sébastien Briais (also known as Seb of the Removers). A second version, called Atomic Reloaded, was released in 2009.
Reception
Atomix received warm reactions from reviewers. They stated that it was highly enjoyable and addictive despite its high difficulty level. Reviewers also pointed out the possible educational application of the game.
However, certain reviewers criticized the game for its repetitiveness and stated that it lacked replayability. Some reviewers also wrote about the game's unoriginality, noting similarities to earlier games, Xor and Leonardo.
Graphics were generally considered adequate, though not spectacular; Zzap!64 called them "a bit dull and repetitive" and "simplistic, but slick and effective", while CU Amiga remarked that despite their simplicity, they "create a nice, tidy display". The soundtrack was found enjoyable, though the Commodore Format reviewer considered it annoyingly repetitive.
Atomix has been the subject of scientific research in computational complexity theory. Like Sokoban, when generalized to puzzles of arbitrary sizes, the problem of determining whether an Atomix puzzle has a solution is PSPACE-complete. Some heuristic approaches have been considered.
Legacy
Several open source clones of Atomix exist: Atomiks, GNOME Atomix, KAtomic and WAtomic.
References
Notes
Bibliography
1990 video games
Amiga games
Atari ST games
Cancelled ZX Spectrum games
Commodore 64 games
DOS games
Multiplayer and single-player video games
PSPACE-complete problems
Puzzle video games
Thalion Software games
Video games developed in Germany | Atomix (video game) | [
"Mathematics"
] | 777 | [
"PSPACE-complete problems",
"Mathematical problems",
"Computational problems"
] |
2,191,814 | https://en.wikipedia.org/wiki/ACCOLC | ACCOLC (Access Overload Control) was a procedure in the United Kingdom for restricting mobile telephone usage in the event of emergencies. It is similar to the GTPS (Government Telephone Preference Scheme) for landlines.
This scheme allowed the mobile telephone networks to restrict access in a specific area to registered numbers only and is normally invoked by the Police Incident Commander (although it can be invoked by the Cabinet Office). The emergency services are responsible for registering their key numbers in advance.
ACCOLC was replaced by MTPAS (Mobile Telecommunication Privileged Access Scheme) in 2009.
Purpose
The purpose of ACCOLC was to restrict non-essential access to cellular phone networks during emergencies. This actively prevented unnecessary usage from congesting the cell networks, allowing emergency services personnel priority for communications. It also served to control information flow in and out of a declared emergency area.
Mobile networks can become overwhelmed by a high concentration of calls that often occur immediately after a major incident. Reliable access to the mobile networks, even during times when an exceptionally large number of calls are being made, is achieved by installing a special SIM (subscriber identity module) card in the telephone handset. Special SIMs are only available to entitled users within the emergency services community, and not to members of the public.
Verizon Wireless in the United States has also implemented ACCOLC on its wireless networks, though the modalities of use may differ from those in Britain, especially with regard to ACCOLC being activated permanently on the network.
Implementation
In an emergency situation, the mobile network operator can apply ACCOLC to specific mobile cell sites (those covering the area of the required restriction). Most systems allow the operator to permit or restrict specific access class levels from gaining access to the cell sites.
The customer's SIM card is provisioned with an access class level between 0 and 15. Most SIM cards are coded with a random access class between 0 and 9. Special-case mobile customers – e.g. emergency services, government officials and civil defence – are issued SIM cards with a high access class value (between 10 and 15).
When the mobile operator needs to implement an ACCOLC restriction, it updates the configuration on the specified cell sites. The allowed access class values are a field transmitted in the broadcast channel of the cell; under normal conditions, access class levels 0–15 are allowed. The SIM card compares the allowed levels being broadcast to its own access class level, and if its class is not among them, the mobile device cannot access that cell for any services.
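The check described above reduces to a set-membership test. The following is a minimal sketch in Python; modelling the broadcast classes as a set, and the variable names, are illustrative choices, not taken from the GSM specification:

```python
# Illustrative access-class check: a device may use a cell only if its
# SIM's access class is among the classes the cell currently broadcasts.
def can_access(sim_access_class: int, broadcast_allowed: set[int]) -> bool:
    return sim_access_class in broadcast_allowed

normal = set(range(16))          # normal conditions: classes 0-15 allowed
restricted = set(range(10, 16))  # ACCOLC invoked: privileged classes only

print(can_access(7, normal))      # True  - ordinary subscriber, normal day
print(can_access(7, restricted))  # False - ordinary subscriber, emergency
print(can_access(12, restricted)) # True  - emergency-services SIM
```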
Use
ACCOLC was deployed at the Hillsborough disaster.
See also
Civil Contingencies Secretariat
Government Telephone Preference Scheme
US Nationwide Wireless Priority Service
References
External links
London Assembly 7 July Review Committee Volume 4 Follow up report Including report on ACCOLC being invoked around Aldgate on 7 July by the City of London Police without reference to GOLD control.
Introduction to Resilient Communications (Cabinet Office).
Emergency management in the United Kingdom
History of telecommunications in the United Kingdom
Mobile telecommunications standards | ACCOLC | [
"Technology"
] | 635 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
2,191,840 | https://en.wikipedia.org/wiki/Chattering%20teeth%20%28toy%29 | Chattering teeth, sometimes called chattery teeth, are a wind-up toy invented by Eddy Goldfarb mimicking the bodily function of the same name. Originally named "Yakity Yak Talking Teeth", Goldfarb and Marvin Glass sold it to novelty company H. Fishlove & Co. who released it in 1949. Chattering teeth are a pair of mechanized teeth that, after being wound up at the back, clatter together. Reproductions are today sold at novelty stores. While some chattering teeth are equipped with walking feet, many models are not.
Goldfarb's original design was awarded from the U.S. Patent Office. H. Fishlove & Co., now a division of Fun, Inc. — a manufacturer of magic trick and novelty items — still produces chattering teeth based on Goldfarb's specifications.
References
External links
Pressman Toy – Inventor Profile: Eddy Goldfarb (Web archive as of February 19, 2012)
1940s toys
Novelty items
Mechanical toys | Chattering teeth (toy) | [
"Physics",
"Technology"
] | 208 | [
"Physical systems",
"Machines",
"Mechanical toys"
] |
2,191,884 | https://en.wikipedia.org/wiki/CAVNET | CAVNET was a secure military forum which became operational in April 2004. A part of SIPRNet, it allows fast access to knowledge acquired on the ground in combat.
It was used in the Iraq War, and helped US military forces counter the insurgents' adaptive tactics by providing data laterally and on a broader scale than traditional reports.
The data shared between patrols on "The Net" (as it is sometimes called by soldiers) has already played a crucial role in dismantling grenade traps hidden behind posters of Moqtada al-Sadr, which US soldiers often rip down.
References
Wide area networks
History of cryptography
United States government secrecy | CAVNET | [
"Technology"
] | 134 | [
"Computing stubs",
"Computer network stubs"
] |
2,191,914 | https://en.wikipedia.org/wiki/RAYDAC | The RAYDAC (for Raytheon Digital Automatic Computer) was a one-of-a-kind computer built by Raytheon. It was started in 1949 and finished in 1953. It was installed at the Naval Air Missile Test Center at Point Mugu, California.
The RAYDAC used 5,200 vacuum tubes and 18,000 crystal diodes. It had 1,152 words of memory (36 bits per word), using delay-line memory, with an access time of up to 305 microseconds. Its addition time was 38 microseconds, multiplication time was 240 microseconds, and division time was 375 microseconds. (These times exclude the memory-access time.)
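As a rough back-of-the-envelope illustration (not a figure from the source), the quoted times imply a peak rate of

\[
\frac{1}{38\ \mu\text{s}} \approx 26{,}000\ \text{additions per second}, \qquad
\frac{1}{38\ \mu\text{s} + 305\ \mu\text{s}} \approx 2{,}900\ \text{additions per second},
\]

where the second figure shows how a worst-case delay-line access dominates the effective instruction time.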
See also
List of vacuum-tube computers
External links
Erwin Tomash photo of General Front View From Right Side of RAYDAC Test Control Board (image)
Erwin Tomash drawing of RADAC Computer Control Room Showing Main Computer and Operator's Console (image)
References
One-of-a-kind computers
Vacuum tube computers
36-bit computers
Raytheon Company products | RAYDAC | [
"Technology"
] | 224 | [
"Computing stubs"
] |
2,191,918 | https://en.wikipedia.org/wiki/Dihydrogen%20bond | In chemistry, a dihydrogen bond is a kind of hydrogen bond, an interaction between a metal hydride bond and an OH or NH group or other proton donor. With a van der Waals radius of 1.2 Å, hydrogen atoms do not usually approach other hydrogen atoms closer than 2.4 Å. Close approaches near 1.8 Å, are, however, characteristic of dihydrogen bonding.
Boron hydrides
An early example of this phenomenon is credited to Brown and Heseltine. They observed intense absorptions in the IR bands at 3300 and 3210 cm−1 for a solution of (CH3)2NHBH3. The higher-energy band is assigned to a normal N−H vibration, whereas the lower-energy band is assigned to the same bond interacting with the B−H. Upon dilution of the solution, the 3300 cm−1 band increased in intensity and the 3210 cm−1 band decreased, indicative of intermolecular association.
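For orientation (a standard unit conversion, not a value from the source), an IR wavenumber \(\tilde{\nu}\) corresponds to a photon energy \(E = hc\tilde{\nu}\), with \(1\ \text{cm}^{-1} \approx 1.24 \times 10^{-4}\ \text{eV}\):

\[
E(3300\ \text{cm}^{-1}) \approx 0.41\ \text{eV}, \qquad
E(3210\ \text{cm}^{-1}) \approx 0.40\ \text{eV},
\]

consistent with describing the 3300 cm−1 band as the higher-energy absorption.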
Interest in dihydrogen bonding was reignited upon the crystallographic characterization of the molecule H3NBH3. In this molecule, like the one studied by Brown and Heseltine, the hydrogen atoms on nitrogen have a partial positive charge, denoted Hδ+, and the hydrogen atoms on boron have a partial negative charge, often denoted Hδ−. In other words, the amine is a protic acid and the borane end is hydridic. The resulting B−H...H−N attractions stabilize the molecule as a solid. In contrast, the related substance ethane, H3CCH3, is a gas with a boiling point 285 °C lower. Because two hydrogen centers are involved, the interaction is termed a dihydrogen bond. Formation of a dihydrogen bond is assumed to precede formation of H2 from the reaction of a hydride and a protic acid. A very short dihydrogen bond is observed in NaBH4·2H2O, with H−H contacts of 1.79, 1.86, and 1.94 Å.
Coordination chemistry
Protonation of transition metal hydride complexes is generally thought to occur via dihydrogen bonding. This kind of H−H interaction is distinct from the H−H bonding interaction in transition metal complexes having dihydrogen bound to a metal.
In neutral compounds
So-called hydrogen–hydrogen bond interactions have been proposed to occur between two neutral non-bonding hydrogen atoms from atoms in molecules theory, while similar interactions have been shown to exist experimentally. Many of these types of dihydrogen bonds have been identified in molecular aggregates.
Notes
Chemical bonding
Hydrogen physics | Dihydrogen bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 552 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
2,191,926 | https://en.wikipedia.org/wiki/Neroli | Neroli oil is an essential oil produced from the blossom of the bitter orange tree (Citrus aurantium subsp. amara or Bigaradia). Its scent is sweet, honeyed and somewhat metallic with green and spicy facets. Orange blossom is also extracted from the same blossom and both extracts are extensively used in perfumery. Orange blossom can be described as smelling sweeter, warmer and more floral than neroli. The difference between how neroli and orange blossom smell and why they are referred to with different names, is a result of the process of extraction that is used to obtain the oil from the blooms. Neroli is extracted by steam distillation and orange blossom is extracted via a process of enfleurage (rarely used nowadays due to prohibitive costs) or solvent extraction.
Production
The blossoms are gathered, usually by hand, in late April to early May. The oil is extracted by steam distillation. Tunisia, Morocco, and Egypt are the leading producers of neroli.
History
By the end of the 17th century, Anne Marie Orsini, duchess of Bracciano and princess of Nerola, Italy, introduced the essence of bitter orange tree as a fashionable fragrance by using it to perfume her gloves and her bath. Since then, the term "neroli" has been used to describe this essence. Neroli has a refreshing and distinctive, spicy aroma with sweet and flowery notes.
Use
Neroli is one of the most widely used floral oils in perfumery. Like many raw materials, neroli can cause sensitisation due to a high content of aromatic terpenes; e.g., linalool, limonene, farnesol, geraniol and citral. It blends well with any citrus oil, various floral absolutes, and most of the synthetic components available on the market.
It also has a limited use in flavorings. Neroli oil is reportedly one of the ingredients in the closely guarded secret recipe for the Coca-Cola soft drink. It is a flavoring ingredient of open source cola recipes, although some variants consider it as optional, owing to the high cost.
See also
Citrus × aurantium
Nerol
Orange flower water
Orange oil
Petitgrain oil
References
External links
Entry in the British Pharmaceutical Codex from 1911
Aromatherapy
Citrus production
Essential oils | Neroli | [
"Chemistry"
] | 468 | [
"Essential oils",
"Natural products"
] |
2,192,043 | https://en.wikipedia.org/wiki/Kupala%20Night | Kupala Night (also Kupala's Night or just Kupala) is one of the major folk holidays in some of the Slavic countries, coinciding with the Christian feast of the Nativity of St. John the Baptist and the East Slavic feast of Saint John's Eve. In folk tradition, it was revered as the day of the summer solstice and was originally celebrated on the shortest night of the year, on 21–22 or 23–24 June (Czech Republic, Poland, Slovakia, Bulgaria (where it is called Enyovden), and modern Ukraine (since 2023)), and, according to the Julian calendar, on the night between 6 and 7 July (Belarus, Russia, and parts of Ukraine). The name of the holiday is ultimately derived from the East Slavic word kǫpati "to bathe".
A number of activities and rituals are associated with Kupala Night, such as gathering herbs and flowers and decorating people, animals, and houses with them; entering water, bathing, or dousing with water and setting garlands afloat; lighting fires, dancing, singing, and jumping over the fire; and hunting witches and scaring them away. It was also believed that on this day the sun plays and other wonders of nature happen. The celebrations are held near water or on hills, and it is chiefly young men and women who take part in these folk traditions. The rituals and symbolism of the holiday may point to its pre-Christian origins.
Names
Polish dialects have retained loans from East Slavic languages:
Podlachia and Lublin: kupała, kąpała, kąpałeczka
Podlachia, Lublin, Sieradz, Kalisz: kupalonecka, kopernacka, kopernocka, kupalnocka
In Old Czech (15th century), the word kupadlo is attested, meaning "a multicolored thread with which gifts were tied, given on the occasion of Saint John's Eve; a gift given to boys by girls on the occasion of Saint John's Eve". In Slovakia, the folk term kupadla denotes "Saint John's Eve".
History and etymology
According to many researchers, Kupala Night is a Christianized Proto-Slavic or East Slavic celebration of the summer solstice. According to Nikolay Gal'kovskiy, "Kupala Night combined two elements: pagan and Christian." The view that the holiday has pre-Christian origins is criticized by historian Vladimir Petrukhin and ethnographer Aleksandr Strakhov. According to Andrzej Kempinski, "The apparent ambivalence (male-female, fire-wood, light-dark) seems to testify to the ancient origins of the holiday alleviating the contradictions of a dual society." According to Holobuts’ky and Karadobri, one of the arguments for the antiquity of the holiday is the production of fire by friction.
The name appears as early as the Old East Slavic language stage. Izmail Sreznevsky, in his Materials for the Dictionary of the Old East Slavic Language, gives entries for "Saint John's Eve" (attested in the Hypatian Codex under the year 1262), "baptist" (no example), and "St. John's Day". Epigraph No. 78 in the Cathedral of Holy Wisdom in Veliky Novgorod, dated to the late 11th or early 12th century, contains an inscription with the name. According to ethnographer Vera Sokolova, Kupala is a later name that appeared among the Eastern Slavs when the holiday coincided with the day of John the Baptist.
According to Max Vasmer, the name (Ivan) Kupala/Kupalo is a variant of the name (John the) Baptist and calques its ancient Greek equivalent. The Greek word for "baptist" derives from the verb meaning "to immerse; to wash; to bathe; to baptize, consecrate, immerse in the baptismal font", which in Old East Slavic was originally rendered by the word for "to bathe", later displaced by "to baptise". The Proto-Slavic form of the verb is reconstructed as *kǫpati "to dip in water, to bathe".
According to Mel’nychuk, the word Kupalo itself may come from Proto-Slavic *kǫpadlo ( OCz. kupadlo, SCr. kùpalo, LSrb., USrb. kupadło "bathing place"), which is composed of the discussed verb *kǫpati and the suffix *-dlo. The name of the holiday is related to the fact that the first ceremonial bath was taken during Kupala Night, and the connection to John the Baptist is secondary.
Deity Kupala
From the 17th century, sources suggest that the holiday is dedicated to the deity Kupala, whom the Slavs supposedly worshipped. However, modern researchers deny the existence of such a deity.
Rituals and beliefs
On this day, June 24, it was customary to pray to John the Baptist for headaches and for children.
Kupala Night is filled with rituals related to water, fire and herbs. Most Kupala rituals take place at night. Bathing before sunset was considered mandatory: in the north, Russians were more likely to bathe in banyas, and in the south in rivers and lakes. Closer to sunset, on high ground or near rivers, bonfires were lit. Sometimes, fires were lit in the traditional way – by rubbing wood against wood. In some places in Belarus and Volyn Polissia, this archaic way of lighting the holiday fire survived into the early 20th century.
According to Vera Sokolova, among the Eastern Slavs, the holiday has been preserved in its most "archaic" form by the Belarusians. In the center of the Kupala bonfire, Belarusians would place a pole on top of which a wheel was attached. Sometimes a horse's skull, called , was placed on top of the wheel and thrown into the fire, where it would burn, after which the youth would play, sing and dance around the fire. In Belarus, old, unwanted items were collected from backyards throughout the village and taken to a place chosen for the celebration (a glade, a high riverbank), where they were then burned. Ukrainians also preserved the main archaic elements, but changed their symbolic meanings in the 19th century. Russians either forgot the main elements of the Kupala ceremony or transferred them to other holidays (Trinity Day, Peter Day).
The celebration of Kupala Night is mentioned in the Hustyn Chronicle (17th century):
This Kupala... is commemorated on the eve of the Nativity of John the Baptist... in the following manner: In the evening, ordinary children of both sexes gather and make wreaths of poisonous herbs or roots, and those covered with their clothes set fire, and then they put a green branch, and holding their hands they dance around the fire, singing their songs... Then they leap over the fire...
On Kupala Night, "bride and groom" were chosen and wedding ceremonies were conducted: they jumped over the fire holding hands, exchanged wreaths (symbol of maidenhood), looked for the fern flower and bathed in the morning dew. On this day, "village roads were plowed so that 'matchmakers would come sooner', or a furrow was plowed to a boy's house so that he would get engaged faster."
In some parts of Ukrainian and Belarusian tradition, it was only after Kupala that vesnianky were no longer sung. Eastern and Western Slavs were forbidden to eat cherries before that day. Eastern Slavs believed that women should not eat berries before St. John's Day, or their young children would die.
The custom of public condemnation and ridicule on Kupala Night (also George's Day in Spring and Trinity Day) is well known. Criticism and condemnation are usually directed at residents of one's own or a neighboring village who have violated social and moral norms over the past year. This social condemnation can be heard in Ukrainian and Belarusian songs, which contain themes of quarrels between girls and boys or residents of neighboring villages. Condemnation and ridicule are expressed in public and serve as a regulator of social relations.
According to Hutsul beliefs, after Kupala come days when thunder and lightning are common. These are days when thunderous spirits walk around, sending lightning bolts to the earth. "And then between the dark sky and the tops of the mountains, fire trees grow, connecting heaven and earth. And so it will be until Elijah's day, the old Thunderous feast," after which, they say, "thunder will stop pounding."
Alexander Veselovsky points out the similarity between the Slavic customs of Kupala Night and the Greek customs of Elijah's day (Elijah the Thunderer).
Ritual dishes
The consecration of the first fruits ripening at this time may have coincided with the Kupala Night holiday.
In some Russian villages, "votive porridge" was brewed: on St. Juliana's day (June 22), girls would gather to talk and, while singing, pound barley in a mortar. On the morning of St. Agrippina's day (June 23), barley was used to cook votive porridge. During the day, this porridge was given to the poor, and in the evening, sprinkled with butter, it was eaten by everyone.
Among Belarusians, delicacies brought from home were eaten both in separate groups and at potluck and consisted of vareniki, cheese, tvarog, flour porridge (), sweet dough (babka) with ground hemp seeds, onion, garlic, bread acid (cold borscht), and eggs in lard. In Belarus in the 19th century, vodka was drunk during the holiday, and wine was drunk in Podlachia and the Carpathians. Songs have preserved mention of the ancient drinks of the night:
Will accept you, Kupal’nochka, as a guest,
With treating you with green vine,
With watering you with wheat beer,
With feeding you with quark.
Water
The obligatory custom on this day was mass bathing. It was believed that on this day all evil spirits would leave the rivers, so it was safe to swim until Elijah's day. In addition, the water of Kupala Night was endowed with revitalizing and magical properties.
In places where people were not allowed to bathe in rivers (because of rusalky), they bathed in "sacred springs". In the Russian North, on the day before Kupala Night, on St. Agrippina's Day, baths were heated in which people washed and steamed themselves, steaming with the herbs collected that day. Water drawn from springs on St. John's Day was said to have miraculous and magical powers.
On this holiday, according to a common sign, water can "make friends" with fire. The symbol of this union was a bonfire lit along the banks of rivers. Wreaths were often used for divination on Kupala Night: if they floated on the water, it meant good luck and long life or marriage.
A 16th-century Russian scribe attempted to explain the name () and the healing power of St. John's Day by referring to the Old Testament legend of Tobias. As he writes, it was on this day that Tobias bathed in the Tigris, where, on the advice of the archangel Raphael, he discovered a fish whose entrails cured his father of blindness.
Bonfire
The main feature of the Kupala Night is the cleansing bonfires. The youths would bring down a huge amount of brushwood from all over the village and set up a tall pyramid, with a pole in the middle, on which was placed a wheel, a barrel of tar, a horse or cow skull (Polesia), etc. According to Tatyana Agapkina and Lyudmila Vinogradova, the symbol of a tall pole with a wheel attached to it generally correlated with the universal image of the world tree.
Bonfires were lit late in the evening and usually burned until morning. In various traditions, there is evidence of the requirement to light the Kupala bonfire with "need-fire", produced by friction; in some places, the fire was carried into the house to light the hearth. All the women of the village had to approach the fire, since any who did not were suspected of witchcraft. A khorovod was danced around the bonfire, with singing of Kupala songs and jumping over the flames: whoever jumps most successfully and highest will be the happiest. The girls leap over the fire to "purify themselves and protect themselves from disease, spoilage, spells," and so that "rusalky will not attack and come during the year." A girl who did not jump over the fire was called a witch (Eastern Slavs, Poland); she was doused with water and scourged with nettles because she had not been "cleansed" by the Kupala fire. In the Kiev Governorate, a girl who had lost her virginity before marriage could not jump over the bonfire during Kupala Night, as doing so would desecrate it.
In Ukraine and Belarus, girls and boys held hands and jumped over the fire in pairs. It was believed that if their hands stayed together while jumping, it would be a clear sign of their future marriage; the same if sparks flew behind them. In the Gomel Governorate, boys used to cradle girls in their arms over the Kupala bonfire to protect them from spells. Young people and children jumped over bonfires, organized noisy games: they played gorelki.
In addition to bonfires, in some places on Kupala Night, wheels and barrels of tar were set on fire, which were then rolled down the mountains or carried on poles, which is clearly related to the symbolism of the solstice.
In Belarus, as well as among the Galician Poles and Carpathian Slovaks, Kupala bonfires were called Sobótki, after the West Slavic sobota, a "day of rest".
Kupala songs
Many folklorists believe that the content of Kupala songs is poorly related to the rituals and mythological meaning of the holiday. The multi-genre song texts include many lyrical songs with love and family themes, humorous chants between boys and girls, khorovod dance songs and games, ballads, etc. As Kupala songs, these are identified by specific melodies and a specific calendar period. In other periods, it was not customary to sing such songs.
Wreath
The wreath was a mandatory attribute of the amusements. It was made before the holiday from wild herbs and flowers. The ritual use of the Kupala wreath is also related to the magical understanding of its shape, which brings it closer to other round and perforated objects (ring, hoop, loaf, etc.). The customs of milking or sipping milk through the wreath, reaching and pulling something through the wreath, looking, pouring, drinking, washing through it are based on these attributes of the wreath.
It was believed that each plant gave the wreath special properties, and the way it was made — twisting and weaving — also added symbolism. Wreaths were often made of periwinkle, basil, geranium, ferns, roses, blackberries, oak and birch branches, etc.
During the festival, the wreath was usually destroyed: thrown into water, burned in a bonfire, thrown on a tree or the roof of a house, carried to a cemetery, etc. Sometimes the wreath was preserved and used for healing, protecting fields from hailstorms and vegetable gardens from "worms".
In Polesia, at the dawn of St. John's Day, peasants would choose the prettiest girl from among themselves, strip her naked and wrap her from head to toe in wreaths of flowers, then go to the forest, where the "dzevko-kupalo" (girl-kupalo – as the chosen girl was called) would distribute the previously prepared wreaths to her girlfriends. She would blindfold herself, and the girls would walk around her in a merry dance. The garland that someone received was used to foretell future fate: a fresh garland meant a rich and happy marriage, a dry garland meant poverty and an unhappy marriage: "she will not have happiness, she will live in misery."
Kupala tree
Depending on the region, a young birch, willow, maple, spruce, or the cut top of an apple tree was chosen for the Kupala. The girls would decorate it with wreaths, field flowers, fruits, ribbons and sometimes candles; then take it outside the village, stick it in the ground in a clearing and dance, walk and sing around it. Later, the boys would join in the fun, pretending to steal the Kupala tree or ornaments from it, knocking it over or setting it on fire, while the girls protected it. At the end, everyone together was supposed to drown the Kupala tree in the river or burn it in a bonfire.
Before the ritual, the tree could not be cut down, but simply located in a convenient place for the khorovod and dressed. In the Zhytomyr region, in one village, a dry pine tree, growing outside the village near the river, was chosen for this; it was called . The celebrants threw the burnt tree trunk into the water, and then ran away so that "the witch (didn't) catch up with them."
Medicinal and magical herbs
A characteristic sign of Kupala Night are the many customs and legends associated with the plant world. Green was used as a universal amulet: it was believed to protect from diseases and epidemics, evil eye and spoilage; from sorcerers and witches, unclean powers, "walking" dead people; from natural lightning, hurricane, fire; from snakes and predatory animals, insect pests, worms. At the same time, the contact with fresh greens was conceived as a magical means providing fertility and successful breeding of cattle, poultry, yield of cereals and vegetable crops.
It was believed that on this day it was best to collect medicinal herbs, as the plants receive great power from the sun and the earth. Some herbs were harvested at night, others during the day before lunch, and others in the morning dew. While collecting medicinal herbs, a special prayer (zagovory) was recited.
According to Belarusian beliefs, Kupala herbs are most healing if they are collected by the "old and young," i.e. old people and children – as the most pure (no sex life, no menstruation, etc.).
The fern and the so-called Ivan-da-marya flower (e.g., Melampyrum nemorosum; literally: John and Mary) were associated with special Kupala legends. The names of these plants appear in Kupala songs.
The Slavs believed that only once a year, on St. John's Day, a fern blooms. This mythical flower, which does not exist in nature, is supposed to give those who pick it and keep it with them miraculous powers. According to beliefs, the bearer of the flower becomes clairvoyant, can understand the language of animals, see all treasures, no matter how deep they are in the ground, and enter treasuries unhindered by holding the flower to locks and bolts (they must crumble before it), wield unclean spirits, wield earth and water, become invisible and take any form.
One of the main symbols of St. John's Day was the Ivan-da-marya flower, which symbolized the magical combination of fire and water. Kupala songs link the origin of this flower to twins – a brother and sister – who got into a forbidden love affair and because of this turned into a flower. The story of incestuous twins finds numerous parallels in Indo-European mythologies.
Some plant names are related to the name Kupala, e.g. Czech kupadlo "Bromus", "Cuscuta trifolii", kupalnice "Ranunculus", Polish kupalnik "Arnica", Ukrainian "Taraxacum officinale", "Tussilago", Russian "Ranunculus acris".
Protection from evil spirits
It was believed that on the Kupala Night all evil spirits awaken to life and harm people; that one should beware of "the mischief of demons – domovoy, vodyanoy, leshy, rusalky".
In order to prevent witches from "taking away" milk from cows, Russians stuck consecrated willow branches into pastures, and in Ukraine the owner drove aspen stakes into the yard. In Polesia, nettles, torn men's pants or a mirror were hung on the stable gate for the same purpose. In Belarus, aspen twigs and stakes were used to protect not only cattle, but also crops, "so that witches would not take the spores." To ward off evil spirits, it was customary to drive sharp and prickly objects into tables, windows, doors, etc. Among the Eastern Slavs, when a witch entered the house, a knife was driven into the table from below to prevent her from leaving. Southern Slavs believed that sticking a knife or hawthorn branch into the door would protect them from vampires or nightmares. On Kupala night, Eastern Slavs would drive scythes, pitchforks, knives and branches of certain trees into the windows and doors of houses and barns, protecting their space from evil spirits.
It was believed that in order to protect oneself from witch attacks, one should put nettles on the threshold and window sills. Ukrainian girls collected wormwood because they believed witches and rusalky feared it.
In Podolia, on St. John's Day, hemp flowers ("porridge") were collected and scattered in front of the entrances to houses and barns to bar the way for witches. Horses were locked up to prevent witches from stealing them and riding them to Bald Mountain, from which no horse returns alive. Belarusians believed that during Kupala Night domoviks would ride horses and torture them.
In Ukraine and Belarus, magical powers were attributed to firebrands from the Kupala bonfire. In western Polesia, young people would pull the sails from the fire, run with them as if they were torches, wave them over their heads, and then throw them into the fields "to protect the crops from evil powers."
In Polesia, a woman who did not come to the bonfire was called a witch by the youth, cursed and teased. In order to identify and neutralize the witch, the road along which cattle are usually herded was blocked with thread, plowed with a plow or harrow, sprinkled with seeds or ants and poured with ant stock, believing that the witch's cow would not be able to overcome the obstacle.
According to Slavic beliefs, the root of Lythrum salicaria dug up on St. John's Day was able to ward off sorcerers and witches; it could be used to drive demons out of the possessed and possessors.
Youth games
The games usually had a love-and-marriage theme: tag, celovki, and ball games (myachevukha, v baryshi, and others).
Ritual pranks
On the night of Kupala, as well as on one of the nights during the winter Christmas holidays, among Eastern Slavs, youngsters often engaged in ritual mischief and pranks: they stole firewood, carts, gates and hoisted them onto roofs, propped up house doors, covered windows, etc. Pranks on Kupala night are a South Russian and Polesian tradition.
Sun
It is a well-known belief that on St. John's Eve the sun at sunrise shimmers with different colors, or reflects, flashes, stops, etc. The most common way of referring to this phenomenon is that the sun plays or jumps; in some traditions it also bathes, dances, walks, trembles, is merry, spins, bows, changes, blooms or beautifies itself (Russia), or crows (Polesia).
In some parts of Bulgaria, it is believed that at dawn on St. John's Day, three suns appear in the sky, of which only the central one is "ours" and the others are its brothers – shining at other times and over other lands.
The Serbs' epithet for John the Baptist reflected the belief that on this day the sun stops three times in the sky or plays. They explained the behavior of the sun on John's day by reference to the Gospel verses on the birth of John the Baptist: "When Elizabeth heard Mary's greeting, the child in her womb moved, and the Holy Spirit filled Elizabeth."
Church on folk rituals
In medieval Russia, the rituals and games of the day were considered demonic and were banned by church authorities. Thus, the message of the hegumen of the Yelizarov Convent (1505) to the Pskov governor and authorities condemned the "pagan" games of Pskov residents on the night of the Nativity of John the Baptist:
For when the feast day of the Nativity of Forerunner itself arrives, then on this holy night nearly the entire city runs riot and in the villages they are possessed by drums and flutes and by the strings of the guitars and by every type of unsuitable satanic music, with the clapping of hands and dances, and with the women and the maidens and with the movements of the heads and with the terrible cry from their mouths: all of those songs are devilish and obscene, and curving their backs and leaping and jumping up and down with their legs; and right there do men and youths suffer great temptation, right there do they leer lasciviously in the face of the insolence of the women and the maidens, and there even occurs depravation for married women and perversion for the maidens.
– Epistle of Pamphilus of Yelizarov Monastery
Stoglav (a collection of decisions of the Stoglav Synod of 1551) also condemns the revelry of Kupala Night, which it traces to "Hellenistic" paganism:
And furthermore many of the children of Orthodox Christians, out of simple ignorance, engage in Hellenic devilish practices, a variety of games and clapping of hands in the cities and in the villages against the festivities of the Nativity of the Great John Prodome; and on the night of that same feast day and for the whole day until night-time, men and women and children in the houses and spread throughout the streets make a ruckus in the water with all types of games and much revelry and with satanic singing and dancing and gusli and in many other unseemly manners and ways, and even in a state of drunkenness.
– Stoglav, chapter 92
Contemporary representatives of the Russian Orthodox Church continue to oppose some of the customs associated with this holiday. At the same time, responding to a question about the "intermingling" of Christian and pagan holidays, a hieromonk expressed the opinion:
The perennial persistence among the people of some of the customs of the Kupala Night does not indicate a double faith, but rather an incompleteness of faith. After all, how many people who have never participated in these pagan entertainments are prone to superstition and mythological ideas. The ground for this is our fallen nature, corrupted by sin.
In 2013, at the request of the ROC, the celebrations of Kupala Night and Neptune's Day were banned in the Rossoshansky District of the Voronezh Oblast.
References
Bibliography
Slavic Antiquities
Dictionaries
Observances in Russia
Russian folklore
Saint John's Day
Observances in Poland
Folk calendar of the East Slavs
Belarusian traditions
Russian traditions
Ukrainian traditions
Observances in Ukraine
Slavic holidays
Days celebrating love
Summer events in Ukraine
Summer events in Poland
Summer solstice | Kupala Night | [
"Astronomy"
] | 5,849 | [
"Time in astronomy",
"Summer solstice"
] |
2,192,069 | https://en.wikipedia.org/wiki/Voodoo%20doll | The term voodoo doll commonly refers to an effigy that is typically used for the insertion of pins. Such practices are found in various forms in the magical traditions of many cultures around the world.
Despite its name, the voodoo doll is not prominent in the African diaspora religions of Haitian Vodou or Louisiana Voodoo. Members of the High Priesthood of Louisiana Voodoo have denounced the use of voodoo dolls as irrelevant to the religion.
Depictions in culture
20th-century link with Voodoo
The association of the voodoo doll and the religion of Voodoo was established through the presentation of the latter in Western popular culture during the first half of the 20th century, as part of the broader negative depictions of Black and Afro-Caribbean religious practices in the United States. In John Houston Craige's 1933 book Black Bagdad: The Arabian Nights Adventures of a Marine Captain in Haiti, a Haitian prisoner is described sticking pins into an effigy to induce illness. In film, representations of Haitian Vodou in works such as Victor Halperin's 1932 White Zombie and Jacques Tourneur's 1943 I Walked with a Zombie also involve the use of the dolls. Voodoo dolls are also featured in one episode of The Woody Woodpecker Show (1961), as well as in the British musical Lisztomania (1975) and the films Creepshow (1982), Indiana Jones and the Temple of Doom (1984), The Witches of Eastwick (1987), Child's Play (1988) and Scooby-Doo on Zombie Island (1998).
By the early 21st century, the image of the voodoo doll had become particularly pervasive. It had become a novelty item available for purchase, with examples being provided in vending machines in British shopping centres, and an article on "How to Make a Voodoo Doll" being included on WikiHow. Voodoo dolls were also featured in the 2009 animated Disney movie The Princess and the Frog, as well as the 2011 live-action Disney movie Pirates of the Caribbean: On Stranger Tides.
In 2020, Louisiana Voodoo High Priest Robi Gilmore stated, "It blows my mind that people still believe [Voodoo dolls are relevant to Voodoo religion]. Hollywood really did us a number. We do not stab pins in dolls to hurt people; we don't take your hair and make a doll, and worship the devil with it, and ask the devil to give us black magic to get our revenge on you. It is not done, it won't be done, and it never will exist for us."
See also
Clay-body
Haitian Vodou
Haunted doll
Hopi Kachina figure
Poppet
Shikigami
Totem
Ushabti
Ushi no toki mairi
References
Footnotes
Sources
American witchcraft
Cunning folk
Dolls
English folklore
European witchcraft
Magic items
Doll | Voodoo doll | [
"Physics"
] | 559 | [
"Magic items",
"Physical objects",
"Matter"
] |
2,192,280 | https://en.wikipedia.org/wiki/Guillemet | Guillemets are a pair of punctuation marks in the form of sideways double chevrons, « and », used as quotation marks in a number of languages. In some of these languages, "single" guillemets, ‹ and ›, are used for a quotation inside another quotation. Guillemets are not conventionally used in English.
Terminology
Guillemets may also be called angle, Latin, Castilian, Spanish, or French quotes/quotation marks.
Guillemet is a diminutive of the French name Guillaume, apparently after the French printer and punchcutter Guillaume Le Bé (1525–1598), though he did not invent the symbols: they first appear in a 1527 book printed by Josse Bade.
In Adobe software, its file format specifications, and in all fonts derived from these that contain the characters, the glyph names are incorrectly spelled guillemotleft and guillemotright (a malapropism: a guillemot is actually a species of seabird). Adobe has acknowledged the error. Likewise, X11 mistakenly uses guillemotleft and guillemotright to name keys producing the characters.
Shape
Guillemets are smaller than less-than and greater-than signs, which in turn are smaller than angle brackets.
Uses
As quotation marks
Guillemets are used pointing outwards («like this») to indicate speech in these languages and regions:
Albanian
Arabic
Armenian
Azerbaijani (mostly in the Cyrillic script)
Belarusian
Breton
Bulgarian (rarely used; „...“ is official)
Catalan
Chinese (《 and 》 are used to indicate a book or album title)
Esperanto (usage varies)
Estonian (marked usage; „...“ prevails)
Franco-Provençal
French (spaced out by thin spaces « like this », except no spaces in Switzerland)
Galician
Greek
Italian
Khmer
North Korean (in South Korea, “...” is used)
Kurdish
Latvian (stūrainās pēdiņas)
Norwegian
Persian
Portuguese (used mostly in European Portuguese, due to its presence in typical computer keyboards; considered obsolete in Brazilian Portuguese)
Romanian; only to indicate a quotation within a quotation
Russian, and some languages of the former Soviet Union using Cyrillic script („...“ is also used for nested quotes and in hand-written text.)
Spanish (uncommon in daily usage, but commonly used in publishing)
Swiss languages
Turkish (dated usage; almost entirely replaced with “...” by late 20th century)
Uyghur
Ukrainian
Uzbek (mostly in the Cyrillic script)
Vietnamese (used previously; now “...” is official)
Guillemets are used pointing inwards (»like this«) to indicate speech in these languages:
Croatian (preferred by typographers, alternate pair „...“ is in common use)
Czech (traditional but declining usage; „...“ prevails)
Danish (“...” is also used)
Esperanto (very uncommon)
German (guillemets are preferred for books, while „...“ is preferred in newspapers and handwriting; see above for usage in Swiss German)
Hungarian (only used „inside a section »as a secondary quote« marked by the usual quotes” like this)
Polish (used to indicate a quote inside a quote as defined by dictionaries; more common usage in practice. See also: Polish orthography)
Serbian (marked usage; „...“ prevails)
Slovak (traditional but declining usage; „...“ prevails)
Slovene („...“ and “...” also used)
Swedish (this style, and »...» are considered typographically fancy; ”...” is the common form of quotation)
Guillemets are used pointing right (»like this») to indicate speech in these languages:
Finnish (”...” is the common and correct form)
Swedish (this style, and »...« are considered typographically fancy; ”...” is the common form of quotation)
Ditto mark
In Quebec, the right-hand guillemet, », is used as a ditto mark.
UML
Guillemets are used in Unified Modeling Language to indicate a stereotype of a standard element.
Mail merge
Microsoft Word uses guillemets when creating mail merges, marking each mail merge "field" (such as a name or address placeholder) with the chevrons. On the final printout, the guillemet-marked tags are replaced by each instance of the corresponding data item intended for that field by the user.
Encoding
Double guillemets are present in many 8-bit extended ASCII character sets. They were at 0xAE and 0xAF (174 and 175) in CP437 on the IBM PC, and 0xC7 and 0xC8 in Mac OS Roman, and placed in several of ISO 8859 code pages (namely: -1, -7, -8, -9, -13, -15, -16) at 0xAB and 0xBB (171 and 187).
Microsoft added the single guillemets to CP1252 and similar sets used in Windows at 0x8B and 0x9B (139 and 155) (where the ISO standard placed C1 control codes).
The ISO 8859 locations of the double guillemets (U+00AB « and U+00BB ») were inherited by Unicode, which added the single guillemets at new locations: U+2039 (‹) and U+203A (›).
Despite their names, the characters are mirrored when used in right-to-left contexts.
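A quick way to verify the code points and legacy byte values listed above is to round-trip the characters through Python's standard codecs; this snippet is an editorial illustration, not part of the article:

```python
# Round-trip the guillemets through the legacy encodings named above.
double = "\u00ab\u00bb"  # « » - at 0xAB/0xBB in the ISO 8859 pages and Unicode
single = "\u2039\u203a"  # ‹ › - at 0x8B/0x9B in Windows CP1252

print(double.encode("latin-1"))            # b'\xab\xbb'
print((double + single).encode("cp1252"))  # b'\xab\xbb\x8b\x9b'
print(double.encode("cp437"))              # b'\xae\xaf' - the IBM PC positions
```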
Keyboard entry
The double guillemets are standard keys on French Canadian QWERTY keyboards and some others.
See also
A related pair of symbols, the 'angle brackets' ⟨ and ⟩ (single chevrons), is used for another purpose, in mathematics and computing.
Chevron
Computer keyboard
Quotation mark
References
External links
Punctuation
Typographical symbols | Guillemet | [
"Mathematics"
] | 1,223 | [
"Symbols",
"Typographical symbols"
] |
2,192,336 | https://en.wikipedia.org/wiki/Hardening%20%28metallurgy%29 | Hardening is a metallurgical metalworking process used to increase the hardness of a metal. The hardness of a metal is directly proportional to the uniaxial yield stress at the location of the imposed strain. A harder metal will have a higher resistance to plastic deformation than a less hard metal.
Processes
The five hardening processes are:
The Hall–Petch method, or grain boundary strengthening, is to obtain small grains. Smaller grains increase the likelihood of dislocations running into grain boundaries, which are very strong dislocation barriers, after shorter distances. In general, smaller grain size will make the material harder; the quantitative form of this dependence, the Hall–Petch relation, is sketched after this list. When the grain size approaches sub-micron sizes, however, some materials may become softer. This is simply the effect of another deformation mechanism becoming easier, namely grain boundary sliding. At that point, all dislocation-related hardening mechanisms become irrelevant.
In work hardening (also referred to as strain hardening) the material is strained past its yield point, e.g. by cold working. Ductile metal becomes harder and stronger as it is physically deformed. The plastic straining generates new dislocations; as the dislocation density increases, further dislocation movement becomes more difficult because the dislocations hinder one another, and the material's hardness increases.
In solid solution strengthening, a soluble alloying element is added to the material to be strengthened, and together they form a "solid solution". A solid solution can be thought of as a "normal" liquid solution, e.g. salt in water, except that it is solid. Depending on the size of the dissolved alloying element's ion compared to that of the matrix metal, it is dissolved either substitutionally (a large alloying element substituting for an atom in the crystal) or interstitially (a small alloying element taking a place between atoms in the crystal lattice). In both cases, the size difference of the foreign elements makes them act like sand grains in sandpaper, resisting dislocations that try to slip by, resulting in higher material strength. In solution hardening, the alloying element does not precipitate from solution.
Precipitation hardening (also called age hardening) is a process where a second phase that begins in solid solution with the matrix metal is precipitated out of solution with the metal as it is quenched, leaving particles of that phase distributed throughout to cause resistance to slip dislocations. This is achieved by first heating the metal to a temperature where the elements forming the particles are soluble, then quenching it, trapping them in a solid solution. Had it been a liquid solution, the elements would form precipitates, just as supersaturated saltwater would precipitate small salt crystals, but atom diffusion in a solid is very slow at room temperature. A second heat treatment at a suitable temperature is then required to age the material. The elevated temperature allows the dissolved elements to diffuse much faster and form the desired precipitated particles. Quenching is required because the material would otherwise begin to precipitate during the slow cooling; that kind of precipitation results in a few large particles rather than the generally desired profusion of small precipitates. Precipitation hardening is one of the most commonly used techniques for the hardening of metal alloys.
Martensitic transformation, more commonly known as quenching and tempering, is a hardening mechanism specific for steel. The steel must be heated to a temperature where the iron phase changes from ferrite into austenite, i.e. changes crystal structure from BCC (body-centered cubic) to FCC (face-centered cubic). In austenitic form, steel can dissolve a lot more carbon. Once the carbon has been dissolved, the material is then quenched. It is important to quench with a high cooling rate so that the carbon does not have time to form precipitates of carbides. When the temperature is low enough, the steel tries to return to the low temperature crystal structure BCC. This change is very quick since it does not rely on diffusion and is called a martensitic transformation. Because of the extreme supersaturation of solid solution carbon, the crystal lattice becomes BCT (body-centered tetragonal) instead. This phase is called martensite, and is extremely hard due to a combined effect of the distorted crystal structure and the extreme solid solution strengthening, both mechanisms of which resist slip dislocation.
All hardening mechanisms introduce crystal lattice defects that act as barriers to dislocation slip.
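The grain-size dependence of the first mechanism above (grain boundary strengthening) is conventionally written as the Hall–Petch relation. The equation below is the standard textbook form, not quoted in this article; σ0 (friction stress) and ky (strengthening coefficient) are material constants:

```latex
% Hall–Petch relation: yield stress rises as the grain diameter d shrinks.
% \sigma_0 is the friction (lattice resistance) stress and k_y the
% strengthening coefficient; both are material-specific constants.
\sigma_y = \sigma_0 + k_y \, d^{-1/2}
```

The softening sometimes observed at sub-micron grain sizes, mentioned above, corresponds to the breakdown of this relation once grain boundary sliding becomes the easier deformation mechanism.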
Applications
Material hardening is required for many applications:
Machine cutting tools (drill bits, taps, lathe tools) need to be much harder than the material they are operating on in order to be effective.
Knife blades – a high hardness blade keeps a sharp edge.
Bearings – necessary to have a very hard surface that will withstand continued stresses.
Armor plating – high strength is extremely important both for bullet-proof plates and for heavy-duty containers for mining and construction.
Anti-fatigue – martensitic case hardening can drastically improve the service life of mechanical components with repeated loading/unloading, such as axles and cogs.
References
Metal heat treatments | Hardening (metallurgy) | [
"Chemistry"
] | 1,070 | [
"Metallurgical processes",
"Metal heat treatments"
] |
2,192,366 | https://en.wikipedia.org/wiki/Rectification%20%28geometry%29 | In Euclidean geometry, rectification, also known as critical truncation or complete-truncation, is the process of truncating a polytope by marking the midpoints of all its edges, and cutting off its vertices at those points. The resulting polytope will be bounded by vertex figure facets and the rectified facets of the original polytope.
A rectification operator is sometimes denoted by the letter r with a Schläfli symbol. For example, r{4,3} is the rectified cube, also called a cuboctahedron. And a rectified cuboctahedron, rr{4,3}, is a rhombicuboctahedron.
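As a small illustration of the operation (an editorial sketch, not from the article), the 12 edge midpoints of a cube are exactly the vertices of the cuboctahedron r{4,3}:

```python
# Editorial sketch: rectify a cube by taking the midpoints of its edges.
from itertools import combinations

cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

# Two cube vertices share an edge iff they differ in exactly one coordinate.
edges = [(a, b) for a, b in combinations(cube, 2)
         if sum(u != v for u, v in zip(a, b)) == 1]

midpoints = {tuple((u + v) / 2 for u, v in zip(a, b)) for a, b in edges}
print(len(midpoints))  # 12 cuboctahedron vertices of the form (0, ±1, ±1)
```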
Conway polyhedron notation uses a (for ambo) as this operator. In graph theory this operation creates a medial graph.
The rectification of any regular self-dual polyhedron or tiling will result in another regular polyhedron or tiling with a tiling order of 4; for example, the tetrahedron becomes an octahedron. As a special case, a square tiling will turn into another square tiling under the rectification operation.
Example of rectification as a final truncation to an edge
Rectification is the final point of a truncation process. For example, on a cube this sequence shows four steps of a continuum of truncations between the regular and rectified form:
Higher degree rectifications
Higher degree rectification can be performed on higher-dimensional regular polytopes. The highest degree of rectification creates the dual polytope. A rectification truncates edges to points. A birectification truncates faces to points. A trirectification truncates cells to points, and so on.
Example of birectification as a final truncation to a face
This sequence shows a birectified cube as the final sequence from a cube to the dual where the original faces are truncated down to a single point:
In polygons
The dual of a polygon is the same as its rectified form. New vertices are placed at the center of the edges of the original polygon.
In polyhedra and plane tilings
Each platonic solid and its dual have the same rectified polyhedron. (This is not true of polytopes in higher dimensions.)
The rectified polyhedron turns out to be expressible as the intersection of the original platonic solid with an appropriately scaled concentric version of its dual. For this reason, its name is a combination of the names of the original and the dual:
The tetrahedron is its own dual, and its rectification is the tetratetrahedron, better known as the octahedron.
The octahedron and the cube are each other's dual, and their rectification is the cuboctahedron.
The icosahedron and the dodecahedron are duals, and their rectification is the icosidodecahedron.
Examples
In nonregular polyhedra
If a polyhedron is not regular, the edge midpoints surrounding a vertex may not be coplanar. However, a form of rectification is still possible in this case: every polyhedron has a polyhedral graph as its 1-skeleton, and from that graph one may form the medial graph by placing a vertex at each edge midpoint of the original graph, and connecting two of these new vertices by an edge whenever they belong to consecutive edges along a common face. The resulting medial graph remains polyhedral, so by Steinitz's theorem it can be represented as a polyhedron.
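The construction just described is easy to state in code. The sketch below is an editorial illustration (the face-list input format is an assumption, not a standard library); it builds the medial graph of a cube, recovering the cuboctahedron's 24 edges:

```python
# Editorial sketch of the medial-graph construction: one new vertex per
# edge of the original graph; two new vertices are joined when their
# edges are consecutive around a common face.

def medial_graph(faces):
    medial_edges = set()
    for face in faces:
        n = len(face)
        # Edges of this face as sorted vertex pairs, in cyclic order.
        cyc = [tuple(sorted((face[i], face[(i + 1) % n]))) for i in range(n)]
        for i in range(n):
            e1, e2 = cyc[i], cyc[(i + 1) % n]  # consecutive around the face
            medial_edges.add(tuple(sorted((e1, e2))))
    return medial_edges

# The six faces of a cube, vertices numbered 0-7 (bottom 0-3, top 4-7).
cube_faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
              (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
print(len(medial_graph(cube_faces)))  # 24 - the cuboctahedron's edge count
```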
The Conway polyhedron notation equivalent to rectification is ambo, represented by a. Applying it twice, aa (rectifying a rectification), is Conway's expand operation, e, which is the same as Johnson's cantellation operation, t0,2, generated from regular polyhedra and tilings.
In 4-polytopes and 3D honeycomb tessellations
Each Convex regular 4-polytope has a rectified form as a uniform 4-polytope.
A regular 4-polytope {p,q,r} has cells {p,q}. Its rectification will have two cell types, a rectified {p,q} polyhedron left from the original cells and {q,r} polyhedron as new cells formed by each truncated vertex.
A rectified {p,q,r} is not the same as a rectified {r,q,p}, however. A further truncation, called bitruncation, is symmetric between a 4-polytope and its dual. See Uniform 4-polytope#Geometric derivations.
Examples
Degrees of rectification
A first rectification truncates edges down to points. If a polytope is regular, this form is represented by an extended Schläfli symbol notation t1{p,q,...} or r{p,q,...}.
A second rectification, or birectification, truncates faces down to points. If regular it has notation t2{p,q,...} or 2r{p,q,...}. For polyhedra, a birectification creates a dual polyhedron.
Higher degree rectifications can be constructed for higher dimensional polytopes. In general an n-rectification truncates n-faces to points.
If an n-polytope is (n-1)-rectified, its facets are reduced to points and the polytope becomes its dual.
Notations and facets
There are different equivalent notations for each degree of rectification. These tables show the names by dimension and the two types of facets for each.
Regular polygons
Facets are edges, represented as {}.
Regular polyhedra and tilings
Facets are regular polygons.
Regular Uniform 4-polytopes and honeycombs
Facets are regular or rectified polyhedra.
Regular 5-polytopes and 4-space honeycombs
Facets are regular or rectified 4-polytopes.
See also
Dual polytope
Quasiregular polyhedron
List of regular polytopes
Truncation (geometry)
Conway polyhedron notation
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 145–154 Chapter 8: Truncation)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
External links
Polytopes | Rectification (geometry) | [
"Physics"
] | 1,443 | [
"Tessellation",
"Truncated tilings",
"Symmetry"
] |
2,192,496 | https://en.wikipedia.org/wiki/Fungiculture | Fungiculture is the cultivation of fungi such as mushrooms. Cultivating fungi can yield foods (which include mostly mushrooms), medicine, construction materials and other products. A mushroom farm is involved in the business of growing fungi.
The word is also commonly used to refer to the practice of cultivation of fungi by animals such as leafcutter ants, termites, ambrosia beetles, and marsh periwinkles.
Overview
As fungi, mushrooms require different conditions than plants for optimal growth. Plants develop through photosynthesis, a process that converts atmospheric carbon dioxide into carbohydrates, especially cellulose. While sunlight provides an energy source for plants, mushrooms derive all of their energy and growth materials from their growth medium, through biochemical decomposition processes. This does not mean that light is an irrelevant requirement, since some fungi use light as a signal for fruiting. However, all the materials for growth must already be present in the growth medium. Mushrooms grow well at relative humidity levels of around 95–100%, and substrate moisture levels of 50 to 75%.
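The humidity and moisture ranges just quoted lend themselves to a simple sensor check; this sketch is an editorial illustration using only the thresholds in the paragraph above:

```python
# Editorial sketch: check grow-room readings against the ranges quoted
# above (relative humidity ~95-100%, substrate moisture 50-75%).

def growth_conditions_ok(rel_humidity_pct: float, substrate_moisture_pct: float) -> bool:
    """True if both readings fall inside the cited ranges."""
    return 95 <= rel_humidity_pct <= 100 and 50 <= substrate_moisture_pct <= 75

print(growth_conditions_ok(97, 65))  # True
print(growth_conditions_ok(85, 65))  # False - air too dry for good growth
```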
Instead of seeds, mushrooms reproduce through spores. Spores can be contaminated with airborne microorganisms, which will interfere with mushroom growth and prevent a healthy crop.
Mycelium, or actively growing mushroom culture, is placed on a substrate—usually sterilized grains such as rye or millet—and induced to grow into those grains. This is called inoculation. Inoculated grains (or plugs) are referred to as spawn. Spores are another inoculation option, but are less developed than established mycelium. Since they are also contaminated easily, they are only manipulated in laboratory conditions with a laminar flow cabinet.
Techniques
All mushroom growing techniques require the correct combination of humidity, temperature, substrate (growth medium) and inoculum (spawn or starter culture). Wild harvests, outdoor log inoculation and indoor trays all provide these elements.
Outdoor logs
Mushrooms can be grown on logs placed outdoors in stacks or piles, as has been done for hundreds of years. Sterilization is not performed as part of this method. Since production may be unpredictable and seasonal, less than 5% of commercially sold mushrooms are produced this way. Here, tree logs are inoculated with spawn, then allowed to grow as they would in wild conditions. Fruiting, or pinning, is triggered by seasonal changes, or by briefly soaking the logs in cool water. Shiitake and oyster mushrooms have traditionally been produced using the outdoor log technique, although controlled techniques such as indoor tray growing or artificial logs made of compressed substrate have been substituted.
Shiitake mushrooms that are grown under a forested canopy are considered non-timber forest products. In the Northeastern United States, shiitake mushrooms can be cultivated on a variety of hardwood logs including oak, American beech, sugar maple and hophornbeam. Softwood should not be used to cultivate shiitake mushrooms because the resin of softwoods will oftentimes inhibit the growth of the shiitake mushroom making it impractical as a growing substrate.
To produce shiitake mushrooms, hardwood logs about 1 metre (3 feet) long are inoculated with the mycelium of the shiitake fungus. Inoculation is completed by drilling holes in the logs, filling the holes with cultured shiitake mycelium or inoculum, and then sealing the filled holes with hot wax. After inoculation, the logs are placed under the closed canopy of a coniferous stand and are left to incubate for 12 to 15 months. Once incubation is complete, the logs are soaked in water for 24 hours. Seven to ten days after soaking, shiitake mushrooms will begin to fruit and can be harvested once fully ripe.
Indoor trays
Indoor mushroom cultivation for the purpose of producing a commercial crop was first developed in caves in France. The caves provided a stable environment (temperature, humidity) all year round. The technology for a controlled growth medium and fungal spawn was brought to the UK in the late 1800s in caves created by quarrying near areas such as Bath, Somerset. Growing indoors allows the ability to control light, temperature and humidity while excluding contaminants and pests. This enables consistent production, regulated by spawning cycles. By the mid-twentieth century this was typically accomplished in windowless, purpose-built buildings, for large-scale commercial production.
Indoor tray growing is the most common commercial technique, followed by containerized growing. The tray technique provides the advantages of scalability and easier harvesting.
There is a series of stages in the farming of the most widely used commercial mushroom species, Agaricus bisporus: composting, fertilizing, spawning, casing, pinning, and cropping.
Six phases of mushroom cultivation
Complete sterilization is not required or performed during composting. In most cases, a pasteurization step is included to allow some beneficial microorganisms to remain in the growth substrate.
Specific time spans and temperatures required during stages 3–6 will vary respective to species and variety. Substrate composition and the geometry of growth substrate will also affect the ideal times and temperatures.
Pinning is the trickiest part for a mushroom grower, since a combination of carbon dioxide (CO2) concentration, temperature, light, and humidity triggers mushrooms towards fruiting. Up until the point when rhizomorphs or mushroom "pins" appear, the mycelium is an amorphous mass spread throughout the growth substrate, unrecognizable as a mushroom.
Carbon dioxide concentration becomes elevated during the vegetative growth phase, when mycelium is sealed in a gas-resistant plastic barrier or bag which traps gases produced by the growing mycelium. To induce pinning, this barrier is opened or ruptured. CO2 concentration then decreases from about 0.08% to 0.04%, the ambient atmospheric level.
Indoor oyster mushroom farming
Oyster mushroom farming is rapidly expanding around many parts of the world. Oyster mushrooms are grown in substrate that comprises sterilized wheat, paddy straw and even used coffee grounds, and they do not require much space compared to other crops. The per unit production and profit extracted is comparatively higher than other crops. Oyster mushrooms can also be grown indoors from kits, most commonly in the form of a box containing growing medium with spores.
Substrates
Mushroom production converts the raw natural ingredients into mushroom tissue, most notably the carbohydrate chitin.
An ideal substrate will contain enough nitrogen and carbohydrate for rapid mushroom growth. Common bulk substrates include several of the following ingredients:
Wood chips or sawdust
Mulched straw (usually wheat, but also rice and other straws)
Straw-bedded horse or poultry manure
Corncobs
Waste or recycled paper
Coffee pulp or grounds
Nut and seed hulls
Cottonseed hulls
Cocoa bean hulls
Cottonseed meal
Soybean meal
Brewer's grain
Ammonium nitrate
Urea
Mushrooms metabolize complex carbohydrates in their substrate into glucose, which is then transported through the mycelium as needed for growth and energy. While it is used as a main energy source, its concentration in the growth medium should not exceed 2%. For ideal fruiting, closer to 1% is ideal.
Pests and diseases
Parasitic insects, bacteria and other fungi all pose risks to indoor production. Sciarid or phorid flies may lay eggs in the growth medium, which hatch into maggots and damage developing mushrooms during all growth stages. Bacterial blotch caused by Pseudomonas bacteria or patches of Trichoderma green mold also pose a risk during the fruiting stage. Pesticides and sanitizing agents are available to use against these infestations. Biological controls for sciarid and phorid flies have also been proposed.
Trichoderma green mold can affect mushroom production, for example in the mid-1990s in Pennsylvania leading to significant crop losses. The contaminating fungus originated from poor hygiene by workers and poorly prepared growth substrates.
Mites in the genus Histiostoma have been found in mushroom farms. Histiostoma gracilipes feeds on mushrooms directly, while H. heinemanni is suspected to spread diseases.
Commercially cultivated fungi
Agaricus bisporus, also known as champignon and the button mushroom. This species also includes the portobello and crimini mushrooms.
Auricularia cornea and Auricularia heimuer (Tree ear fungus), two closely related species of jelly fungi that are commonly used in Chinese cuisine.
Clitocybe nuda, or blewit, is cultivated in Europe.
Flammulina velutipes, the "winter mushroom", also known as enokitake in Japan
Fusarium venenatum – the source for mycoprotein which is used in Quorn, a meat analogue.
Hypsizygus tessulatus (also Hypsizygus marmoreus), called shimeji in Japanese, it is a common variety of mushroom available in most markets in Japan. Known as "Beech mushroom" in Europe.
Lentinus edodes, also known as shiitake, oak mushroom. Lentinus edodes is largely produced in Japan, China and South Korea. Lentinus edodes accounts for 10% of world production of cultivated mushrooms. Common in Japan, China, Australia and North America.
Phallus indusiatus – (bamboo mushroom), traditionally collected from the wild, it has been cultivated in China since the late 1970s.
Pleurotus species are the second most important mushrooms in production in the world, accounting for 25% of total world production. Pleurotus mushrooms are cultivated worldwide; China is the major producer. Several species can be grown on carbonaceous matter such as straw or newspaper. In the wild they are usually found growing on wood.
Pleurotus citrinopileatus (golden oyster mushroom)
Pleurotus cornucopiae (branched oyster mushroom)
Pleurotus eryngii (king trumpet mushroom)
Pleurotus ostreatus (oyster mushroom)
Rhizopus oligosporus – the fungal starter culture used in the production of tempeh. In tempeh the mycelia of R. oligosporus are consumed.
Sparassis crispa – recent developments have led to this being cultivated in California. It is cultivated on large scale in Korea and Japan.
Tremella fuciformis (Snow fungus), another type of jelly fungus that is commonly used in Chinese cuisine.
Tuber species, (the truffle), Truffles belong to the ascomycete grouping of fungi. The truffle fruitbodies develop underground in mycorrhizal association with certain trees e.g. oak, poplar, beech, and hazel. Being difficult to find, trained pigs or dogs are often used to sniff them out for easy harvesting.
Tuber aestivum (Summer or St. Jean truffle)
Tuber magnatum (Piemont white truffle)
Tuber melanosporum (Périgord truffle)
T. melanosporum × T. magnatum (Khanaqa truffle)
Terfezia sp. (desert truffle)
Ustilago maydis (corn smut), a fungal pathogen of the maize plant. Also called the Mexican truffle, although not a true truffle.
Volvariella volvacea (the "paddy straw mushroom"). Volvariella mushrooms account for 16% of total production of cultivated mushrooms in the world.
Production regions in North America
Pennsylvania is the top-producing mushroom state in the United States, and celebrates September as "Mushroom Month".
The borough of Kennett Square is a historical and present leader in mushroom production. Pennsylvania currently leads production of Agaricus-type mushrooms, followed by California, Florida and Michigan.
Other mushroom-producing states:
East: Connecticut, Delaware, Florida, Maryland, New York, Pennsylvania, Tennessee, Maine, and Vermont
Central: Illinois, Oklahoma, Texas, and Wisconsin
West: California, Colorado, Montana, Oregon, Utah and Washington
The lower Fraser Valley of British Columbia, which includes Vancouver, has a significant number of producers (about 60 as of 1998).
Production in Europe
Oyster mushroom cultivation has recently taken off in Europe. Many entrepreneurs now find it a profitable business that can be started with a small investment. Italy, with 785,000 tonnes, and the Netherlands, with 307,000 tonnes, are among the top ten mushroom-producing countries in the world. The world's biggest producer of mushroom spawn is also situated in France.
According to the study Production and Marketing of Mushrooms: Global and National Scenario, Poland, the Netherlands, Belgium and Lithuania are the major mushroom-exporting countries in Europe, while the UK, Germany, France and Russia are the major importers.
Education and training
Oyster mushroom cultivation is a sustainable business in which a variety of natural resources can be used as substrate, and the number of people interested in the field is rapidly increasing. The possibility of creating a viable business in urban environments by using spent coffee grounds appeals to many entrepreneurs.
Since mushroom cultivation is not a subject taught at school, most urban farmers learn it by doing. Mastering cultivation takes considerable time, and the learning period is costly in missed revenue. For this reason, a number of companies in Europe that specialize in mushroom cultivation offer training for entrepreneurs and organize events to build community and share knowledge. They also demonstrate the potential positive impact of this business on the environment.
Courses about mushroom cultivation can be attended in many countries around Europe. Education is available on growing mushrooms on coffee grounds, with more advanced training covering larger-scale farming, spawn production, lab work and growing facilities.
Events are organised at different intervals. The Mushroom Learning Network gathers once a year in Europe; the International Society for Mushroom Science gathers once every five years somewhere in the world.
References
Agriculture
Symbiosis
Sustainable agriculture
Biotechnology
Food industry
Microbiology | Fungiculture | [
"Chemistry",
"Biology"
] | 2,873 | [
"Behavior",
"Symbiosis",
"Biological interactions",
"Microbiology",
"Biotechnology",
"nan",
"Microscopy"
] |
2,192,510 | https://en.wikipedia.org/wiki/Lherzolite | Lherzolite is a type of ultramafic igneous rock. It is a coarse-grained rock consisting of 40 to 90% olivine along with significant orthopyroxene and lesser amounts of calcic chromium-rich clinopyroxene. Minor minerals include chromium and aluminium spinels and garnets. Plagioclase can occur in lherzolites and other peridotites that crystallize at relatively shallow depths (20 – 30 km). At greater depth plagioclase is unstable and is replaced by spinel. At approximately 90 km depth, pyrope garnet becomes the stable aluminous phase. Garnet lherzolite is a major constituent of the Earth's upper mantle (extending to ~300 km depth). Lherzolite is known from the lower ultramafic part of ophiolite complexes (although harzburgite is more common in this setting), from alpine-type peridotite massifs, from fracture zones adjacent to mid-oceanic ridges, and as xenoliths in kimberlite pipes and alkali basalts. Partial melting of spinel lherzolite is one of the primary sources of basaltic magma.
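The depth dependence of the stable aluminous phase described above can be condensed into a small lookup. A minimal sketch, using only the approximate transition depths quoted in the text (~30 km and ~90 km); real phase boundaries vary with temperature and bulk composition, so treat this purely as an illustration.

```python
def stable_aluminous_phase(depth_km: float) -> str:
    """Approximate stable aluminous phase in peridotite versus depth.

    Uses the rough transition depths from the text (~30 km for
    plagioclase-out, ~90 km for the spinel-to-garnet change); real
    boundaries depend on pressure, temperature and composition.
    """
    if depth_km < 30:
        return "plagioclase"
    if depth_km < 90:
        return "spinel"
    return "garnet"  # garnet lherzolite dominates the deeper upper mantle


for d in (20, 60, 150):
    print(f"{d} km -> {stable_aluminous_phase(d)} lherzolite")
```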
The name is derived from its type locality, the Lherz Massif (an alpine peridotite complex, also known as orogenic lherzolite complex), at Étang de Lers, near Massat in the French Pyrenees; Étang de Lherz is the archaic spelling of this location.
The Lherz massif also contains harzburgite and dunite, as well as layers of spinel pyroxenite, garnet pyroxenite, and hornblendite. The layers represent partial melts extracted from the host peridotite during decompression in the mantle long before emplacement into the crust.
The Lherz massif is unique because it has been emplaced into Paleozoic carbonates (limestones and dolomites), which form mixed breccias of limestone-lherzolite around the margins of the massif.
The Moon's lower mantle may be composed of lherzolite.
References
Blatt, Harvey and Robert J. Tracy, 1996, Petrology: Igneous, Sedimentary and Metamorphic, 2nd ed., Freeman.
Ultramafic rocks | Lherzolite | [
"Chemistry"
] | 496 | [
"Ultramafic rocks",
"Igneous rocks by composition"
] |
2,192,517 | https://en.wikipedia.org/wiki/Garret | A garret is a habitable attic, a living space at the top of a house or larger residential building, traditionally small with sloping ceilings. In the days before elevators this was the least prestigious position in a building, at the very top of the stairs.
Etymology
The word entered Middle English through Old French with a military connotation of watchtower, garrison or billet: a place for guards or soldiers to be quartered in a house. Like garrison, it comes from an Old French word of ultimately Germanic origin meaning "to provide" or "defend".
History
In the later 19th century, garrets became one of the defining features of Second Empire architecture in Paris, France, where large buildings were stratified socially between different floors. As the number of stairs to climb increased, the social status decreased. Garrets were often internal elements of the mansard roof, with skylights or dormer windows.
A "bow garret" is a two-story "outhouse" situated at the back of a typical terraced house often used in Lancashire for the hat industry in pre-mechanised days. "Bowing" was the name given to the technique of cleaning up animal (e.g. rabbit) fur in the early stages of preparation for turning it into hats. What is now believed to be the last bow garret in existence (in Denton, Greater Manchester) is now a listed building in order to preserve this historical relic.
References
External links
Old Maid in the Garret (song)
Military terminology
Rooms
"Engineering"
] | 317 | [
"Rooms",
"Architecture"
] |
2,192,622 | https://en.wikipedia.org/wiki/Heliometer | A heliometer (from Greek ἥλιος hḗlios "sun" and μέτρον métron "measure") is an instrument originally designed for measuring the variation of the Sun's diameter at different seasons of the year, but applied now to the modern form of the instrument which is capable of much wider use.
Description
The basic concept is to introduce a split element into a telescope's optical path so as to produce a double image. If one element is moved using a screw micrometer, precise angle measurements can be made. The simplest arrangement is to split the object lens in half, with one half fixed and the other attached to the micrometer screw and slid along the cut diameter. To measure the diameter of the Sun, for example, the micrometer is first adjusted so that the two images of the solar disk coincide (the "zero" position where the split elements form essentially a single element). The micrometer is then adjusted so that diametrically opposite sides of the two images of the solar disk just touch each other. The difference in the two micrometer readings so obtained is the (angular) diameter of the Sun. Similarly, a precise measurement of the apparent separation between two nearby stars, A and B, is made by first superimposing the two images of the stars and then adjusting the double image so that star A in one image coincides with star B in the other. The difference in the two micrometer readings so obtained is the apparent separation or angular distance between the two stars.
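The double-image measurement reduces to simple arithmetic on the screw readings. The sketch below assumes the micrometer has already been calibrated to a known scale value in arcseconds per screw division; the function name and all numbers are illustrative, not taken from any particular instrument.

```python
def angular_measure_arcsec(zero_reading: float, contact_reading: float,
                           arcsec_per_division: float) -> float:
    """Angular diameter or separation from two micrometer readings.

    zero_reading: screw position where the two images coincide.
    contact_reading: position where opposite limbs (or star A of one
    image and star B of the other) just touch or coincide.
    arcsec_per_division: instrument calibration constant.
    """
    return abs(contact_reading - zero_reading) * arcsec_per_division


# Illustrative solar measurement: ~1921 arcsec, i.e. about 32 arcmin
print(angular_measure_arcsec(0.0, 384.2, 5.0))
```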
History
The Syrian Arab astronomer Mu'ayyad al-Din al-Urdi, in his book, described a device called "the instrument with the two holes," which he used to measure and observe the apparent diameters of the Sun and the Moon.
The first application of the divided object-glass and the employment of double images in astronomical measures is due to Servington Savery of Shilstone in 1743. Pierre Bouguer, in 1748, originated the true conception of measurement by double image without the auxiliary aid of a filar micrometer, that is by changing the distance between two object-glasses of equal focus.
John Dollond, in 1754, combined Savery's idea of the divided object-glass with Bouguer's method of measurement, resulting in the construction of the first really practical heliometers. As far as we can ascertain, Joseph von Fraunhofer, some time not long before 1820, constructed the first heliometer with an achromatic divided object-glass, i.e. the first heliometer of the modern type. The first successful measurements of stellar parallax (to determine the distance to a star) were made by Friedrich Wilhelm Bessel in 1838 for the star 61 Cygni using a Fraunhofer heliometer. This was a Fraunhofer heliometer at Königsberg Observatory built by Joseph von Fraunhofer's firm, though Fraunhofer himself did not live to see it delivered to Bessel. Although the heliometer was difficult to use, it had certain advantages for Bessel including a wider field of view compared to other great refractors of the period, and overcame atmospheric turbulence in measurements compared to a filar micrometer.
Notes
Further reading
Willach, Rolf. "The Heliometer: Instrument for Gauging Distances in Space." Journal of the Antique Telescope Society, number 26, pp. 5–16 (2004).
External links
Photos from the largest heliometer in the world (Kuffner-Observatory, Vienna)
Telescope types
Measuring instruments
Sun | Heliometer | [
"Technology",
"Engineering"
] | 729 | [
"Measuring instruments"
] |
2,192,763 | https://en.wikipedia.org/wiki/Channel%20spacing | Channel spacing, also known as bandwidth, is a term used in radio frequency planning. It describes the frequency difference between adjacent allocations in a frequency plan. Channels for mediumwave radio stations, for example, are allocated in internationally agreed steps of 9 or 10 kHz: 10 kHz in ITU Region 2 (the Americas), and 9 kHz elsewhere in the world.
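A frequency plan with fixed channel spacing is just an arithmetic progression. A minimal sketch: only the 9/10 kHz steps come from the text, while the band edges (530–1700 kHz for Region 2, 531–1602 kHz elsewhere) are the commonly used mediumwave limits and should be treated as illustrative assumptions.

```python
def channel_plan_khz(start_khz: int, end_khz: int, spacing_khz: int) -> list[int]:
    """Centre frequencies for a band plan with fixed channel spacing."""
    return list(range(start_khz, end_khz + 1, spacing_khz))


region2 = channel_plan_khz(530, 1700, 10)  # ITU Region 2: 10 kHz steps
others = channel_plan_khz(531, 1602, 9)    # elsewhere: 9 kHz steps
print(len(region2), region2[:3])  # 118 channels, [530, 540, 550]
print(len(others), others[:3])    # 120 channels, [531, 540, 549]
```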
References
Broadcast engineering | Channel spacing | [
"Engineering"
] | 78 | [
"Broadcast engineering",
"Electronic engineering"
] |
2,193,092 | https://en.wikipedia.org/wiki/Pine%20oil | Pine oil is an essential oil obtained from a variety of species of pine, particularly Pinus sylvestris. Typically, parts of the trees that are not used for lumber (stumps, etc.) are ground and subjected to steam distillation. As of 1995, synthetic pine oil was the "biggest single turpentine derivative." Synthetic pine oils accounted for 90% of sales as of 2000.
Composition
Pine oil is a higher boiling fraction from turpentine. Both synthetic and natural pine oil consist mainly of α-terpineol, a C10 alcohol (b.p. 214–217 °C). Other components include dipentene and pinene. The detailed composition of natural pine oil depends on many factors, such as the species of the host plant. Synthetic pine oil is obtained by treating pinene with water in the presence of a catalytic amount of sulfuric acid. This treatment results in hydration of the alkene and rearrangement of the pinene skeleton, yielding terpineols.
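In summary form, the acid-catalysed hydration just described can be written as the following simplified overall equation (the skeletal rearrangement steps and minor co-products are omitted):

```latex
\mathrm{C_{10}H_{16}}\ (\text{pinene}) + \mathrm{H_2O}
  \;\xrightarrow{\ \mathrm{H_2SO_4}\ \text{(cat.)}\ }\;
  \mathrm{C_{10}H_{18}O}\ (\alpha\text{-terpineol})
```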
Uses
Industrially, pine oil was once used in froth flotation for the separation of mineral from ores. For example, in copper extraction, pine oil is used to condition copper sulfide ores for froth flotation.
It is also used as a lubricant in small and expensive clockwork instruments.
In alternative medicine it is used in aromatherapy and as a scent in bath oils.
Properties as a disinfectant
Pine oil is used as a cleaning product, disinfectant, sanitizer, microbicide (or microbistat), virucide or insecticide. It is an effective herbicide where its action is to modify the waxy cuticle of plants, resulting in desiccation. Pine oil is a disinfectant that is mildly antiseptic. It is effective against Brevibacterium ammoniagenes, the fungi Candida albicans, Enterobacter aerogenes, Escherichia coli, Gram-negative enteric bacteria, household germs, Gram-negative household germs such as those causing salmonellosis, herpes simplex types 1 and 2, influenza type A, influenza virus type A/Brazil, influenza virus type A2/Japan, intestinal bacteria, Klebsiella pneumoniae, odor-causing bacteria, mold, mildew, Pseudomonas aeruginosa, Salmonella choleraesuis, Salmonella typhi, Salmonella typhosa, Serratia marcescens, Shigella sonnei, Staphylococcus aureus, Streptococcus faecalis, Streptococcus pyogenes, and Trichophyton mentagrophytes.
Safety
With respect to the quality of indoor air, attention is directed to the effects of ambient ozone on pine oil components. Large doses may cause central nervous system depression.
See also
List of cleaning products
Dettol antiseptic liquid
Pine-Sol, a cleaning product that originally contained pine oil, though it switched to a different active ingredient in 2013 due to the declining availability of pine oil
References
Further reading
Aromatherapy
Disinfectants
Essential oils | Pine oil | [
"Chemistry"
] | 649 | [
"Essential oils",
"Natural products"
] |
2,193,281 | https://en.wikipedia.org/wiki/Taq%20polymerase | Taq polymerase is a thermostable DNA polymerase I named after the thermophilic eubacterial microorganism Thermus aquaticus, from which it was originally isolated by Chinese scientist Alice Chien et al. in 1976. Its name is often abbreviated to Taq or Taq pol. It is frequently used in the polymerase chain reaction (PCR), a method for greatly amplifying the quantity of short segments of DNA.
T. aquaticus is a bacterium that lives in hot springs and hydrothermal vents, and Taq polymerase was identified as an enzyme able to withstand the protein-denaturing conditions (high temperature) required during PCR. Therefore, it replaced the DNA polymerase from E. coli originally used in PCR.
Enzymatic properties
Taq's optimum temperature for activity is 75–80 °C, with a half-life of greater than 2 hours at 92.5 °C, 40 minutes at 95 °C and 9 minutes at 97.5 °C, and the enzyme can replicate a 1000 base pair strand of DNA in less than 10 seconds at 72 °C. At 75–80 °C, Taq reaches its optimal polymerization rate of about 150 nucleotides per second per enzyme molecule, and any deviations from the optimal temperature range inhibit the extension rate of the enzyme. A single Taq synthesizes about 60 nucleotides per second at 70 °C, 24 nucleotides/sec at 55 °C, 1.5 nucleotides/sec at 37 °C, and 0.25 nucleotides/sec at 22 °C. At temperatures above 90 °C, Taq demonstrates very little or no activity at all, but the enzyme itself does not denature and remains intact. The presence of certain ions in the reaction vessel also affects the specific activity of the enzyme. Small amounts of potassium chloride (KCl) and magnesium ion (Mg2+) promote Taq's enzymatic activity. Taq polymerase is maximally activated at 50 mM KCl, while the optimal Mg2+ concentration is determined by the concentration of nucleoside triphosphates (dNTPs). High concentrations of KCl and Mg2+ inhibit Taq's activity. The common metal ion chelator EDTA directly binds to Taq in the absence of these metal ions.
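The quoted incorporation rates translate directly into rough extension-time estimates. A minimal sketch using only the per-temperature figures above; it ignores enzyme binding/dissociation kinetics, which is one reason practical PCR protocols allot much longer extension steps than this idealized calculation suggests.

```python
# Nucleotide incorporation rates per enzyme molecule (nt/s), from the
# temperature figures quoted above.
TAQ_RATE_NT_PER_S = {70: 60.0, 55: 24.0, 37: 1.5, 22: 0.25}

def extension_time_s(strand_len_nt: int, temp_c: int) -> float:
    """Idealized single-enzyme time to copy one strand of given length."""
    return strand_len_nt / TAQ_RATE_NT_PER_S[temp_c]

# A 1000-nt strand at 70 °C: ~16.7 s (vs ~6.7 s at the ~150 nt/s optimum)
print(round(extension_time_s(1000, 70), 1))
```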
One of Taq's drawbacks is its lack of 3' to 5' exonuclease proofreading activity, resulting in relatively low replication fidelity. Originally its error rate was measured at about 1 in 9,000 nucleotides. Some thermostable DNA polymerases have been isolated from other thermophilic bacteria and archaea, such as Pfu DNA polymerase, possessing a proofreading activity, and are being used instead of (or in combination with) Taq for high-fidelity amplification. Fidelity can vary widely between Taqs, which has profound effects in downstream sequencing applications.

Taq makes DNA products that have A (adenine) overhangs at their 3' ends. This may be useful in TA cloning, whereby a cloning vector (such as a plasmid) that has a T (thymine) 3' overhang is used, which complements with the A overhang of the PCR product, thus enabling ligation of the PCR product into the plasmid vector.
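Because errors accumulate as lineages are copied and recopied, the error rate quoted above compounds with cycle number. The sketch below uses the common linear approximation that a surviving product lineage is replicated roughly once per cycle; the function and the example numbers are illustrative, not a rigorous mutation model.

```python
def expected_mutations(error_rate: float, amplicon_nt: int, cycles: int) -> float:
    """Expected mutations carried by a final PCR product molecule.

    Linear approximation: errors accumulate once per replication, and a
    surviving lineage is replicated roughly once per cycle.
    """
    return error_rate * amplicon_nt * cycles


# Taq's classically measured ~1 error per 9,000 nt, a 500 bp amplicon,
# 30 cycles -> about 1.7 expected mutations per product molecule.
print(round(expected_mutations(1 / 9000, 500, 30), 2))
```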
In PCR
In the early 1980s, Kary Mullis was working at Cetus Corporation on the application of synthetic DNAs to biotechnology. He was familiar with the use of DNA oligonucleotides as probes for binding to target DNA strands, as well as their use as primers for DNA sequencing and cDNA synthesis. In 1983, he began using two primers, one to hybridize to each strand of a target DNA, and adding DNA polymerase to the reaction. This led to exponential DNA replication, greatly amplifying discrete segments of DNA between the primers.
However, after each round of replication the mixture needs to be heated above 90 °C to denature the newly formed DNA, allowing the strands to separate and act as templates in the next round of amplification. This heating step also inactivates the DNA polymerase that was in use before the discovery of Taq polymerase, the Klenow fragment (sourced from E. coli). Taq polymerase is well-suited for this application because it is able to withstand the temperature of 95 °C which is required for DNA strand separation without denaturing.
Use of the thermostable Taq enables running the PCR at high temperature (~60 °C and above), which facilitates high specificity of the primers and reduces the production of nonspecific products, such as primer dimer. Also, use of a thermostable polymerase eliminates the need to add new enzyme to each round of thermocycling. A single closed tube in a relatively simple machine can be used to carry out the entire process. Thus, the use of Taq polymerase was the key idea that made PCR applicable to a large variety of molecular biology problems concerning DNA analysis.
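The exponential character of the reaction is easy to make concrete. A minimal sketch: efficiency 1.0 is the ideal per-cycle doubling described above, and the efficiency parameter is our addition to show how real reactions, which run somewhat below ideal, fall short.

```python
def pcr_copies(initial_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Template copies after a number of thermocycles.

    efficiency = 1.0 is ideal per-cycle doubling; real reactions run
    somewhat below that.
    """
    return initial_copies * (1.0 + efficiency) ** cycles


print(f"{pcr_copies(1, 30):.3g}")       # ideal: ~1.07e+09 copies
print(f"{pcr_copies(1, 30, 0.9):.3g}")  # 90% efficiency: ~2.3e+08
```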
Patent issues
Hoffmann-La Roche eventually bought the PCR and Taq patents from Cetus for $330 million, from which it may have received up to $2 billion in royalties. In 1989, Science magazine named Taq polymerase its first "Molecule of the Year". Kary Mullis received the Nobel Prize in Chemistry in 1993, the only one awarded for research performed at a biotechnology company. By the early 1990s, the PCR technique with Taq polymerase was being used in many areas, including basic molecular biology research, clinical testing, and forensics. It also began to find a pressing application in the direct detection of HIV in AIDS.
In December 1999, U.S. District Judge Vaughn Walker ruled that the 1990 patent involving Taq polymerase was issued, in part, on misleading information and false claims by scientists with Cetus Corporation. The ruling supported a challenge by Promega Corporation against Hoffman-La Roche, which purchased the Taq patents in 1991. Judge Walker cited previous discoveries by other laboratories, including the laboratory of John Trela at the University of Cincinnati department of biological sciences, as the basis for the ruling.
Domain structure
Taq Pol A has an overall structure similar to that of E. coli PolA. The middle 3'–5' exonuclease domain responsible for proofreading has been dramatically changed and is not functional. It has a functional 5'-3' exonuclease domain at the amino terminal, described below. The remaining two domains act in coordination, via coupled domain motion.
Exonuclease domain
Taq polymerase exonuclease is a domain found in the amino-terminal of Taq DNA polymerase I (thermostable). It assumes a ribonuclease H-like motif. The domain confers 5'-3' exonuclease activity to the polymerase.
Unlike the corresponding domain in E. coli, which degrades primers and must be removed by digestion for PCR use, this domain does not degrade the primer. Its activity is exploited in the TaqMan probe: as the daughter strands are formed, probes complementary to the template come in contact with the polymerase and are cleaved into fluorescent pieces.
Binding with DNA
Taq polymerase is bound at its polymerase active-site cleft to the blunt end of duplex DNA. As the Taq polymerase is in contact with the bound DNA, its side chains form hydrogen bonds with the purines and pyrimidines of the DNA. The same region of Taq polymerase that binds DNA also binds the exonuclease, and these bound structures interact with the enzyme in different ways.
Mutants
A site-directed mutagenesis experiment that improves the vestigial 3'-5' exonuclease activity by a factor of 2 has been reported, but it was not reported whether doing so decreases the error rate. Following a similar line of thought, chimeric proteins have been made by cherry-picking domains from E. coli, Taq, and T. neapolitana polymerase I. Swapping the vestigial domain for a functional one from E. coli created a protein with proof-reading ability but a lower optimal temperature and low thermostability.
Versions of the polymerase without the 5'-3' exonuclease domain have been produced, among which Klentaq and the Stoffel fragment are best known. The complete lack of exonuclease activity makes these variants suitable for primers that exhibit secondary structure, as well as for copying circular molecules. Other variations include using Klentaq with a high-fidelity polymerase, a Thermosequenase that recognizes substrates like T7 DNA polymerase does, mutants with higher tolerances to inhibitors, or "domain-tagged" versions that have an extra helix-hairpin-helix motif around the catalytic site to hold the DNA more tightly despite adverse conditions.
Significance in disease detection
Because of the improvements Taq polymerase brought to PCR DNA replication (higher specificity, fewer nonspecific products, and simpler processes and equipment), it has been instrumental in efforts to detect diseases. "The use of Polymerase Chain Reaction (PCR) in infectious disease diagnosis, has resulted in an ability to diagnose early and treat appropriately diseases due to fastidious pathogens, determine the antimicrobial susceptibility of slow growing organisms, and ascertain the quantum of infection." The implementation of Taq polymerase has saved countless lives. It has served an essential role in the detection of many of the world's worst diseases, including tuberculosis, streptococcal pharyngitis, atypical pneumonia, AIDS, measles, hepatitis, and ulcerative urogenital infections. PCR, the method used to recreate copies of specific DNA samples, makes disease detection possible by targeting a specific DNA sequence of a targeted pathogen from a patient's sample and amplifying trace amounts of the indicative sequences by copying them up to billions of times. Although this is the most accurate method of disease detection, especially for HIV, it is not performed as often as alternative, inferior tests because of the relatively high cost, labor, and time required.
The reliance upon Taq polymerase as a catalyst for the PCR replication process was highlighted during the COVID-19 pandemic of 2020. Shortages of the necessary enzyme impaired the ability of countries worldwide to produce test kits for the virus. Without Taq polymerase, the disease detection process is much slower and more tedious.
Despite the advantages of using Taq polymerase in PCR disease detection, the enzyme is not without its shortcomings. Retroviral diseases (HIV, HTLV-I, and HTLV-II) often include mutations from guanine to adenine in their genome. Mutations such as these are what allow PCR tests to detect the diseases, but Taq polymerase's relatively low fidelity means the same G-to-A mutation can occur during amplification and possibly yield a false positive test result.
See also
References
DNA replication
Polymerase chain reaction | Taq polymerase | [
"Chemistry",
"Biology"
] | 2,349 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"DNA replication",
"Molecular genetics"
] |
2,193,362 | https://en.wikipedia.org/wiki/Chromophore | A chromophore is a molecule which absorbs light at a particular wavelength and reflects color as a result. Chromophores are commonly referred to as colored molecules for this reason. The word is derived from Greek χρῶμα (chrōma) "color" and -φόρος (phoros) "carrier of". Many molecules in nature are chromophores, including chlorophyll, the molecule responsible for the green colors of leaves.
The color that is seen by our eyes is that of the light not absorbed by the reflecting object within a certain wavelength spectrum of visible light. The chromophore indicates a region in the molecule where the energy difference between two separate molecular orbitals falls within the range of the visible spectrum (or in informal contexts, the spectrum under scrutiny). Visible light that hits the chromophore can thus be absorbed by exciting an electron from its ground state into an excited state. In biological molecules that serve to capture or detect light energy, the chromophore is the moiety that causes a conformational change in the molecule when hit by light.
Conjugated pi-bond system chromophores
Just as two adjacent p-orbitals in a molecule form a pi-bond, three or more adjacent p-orbitals in a molecule can form a conjugated pi-system. In a conjugated pi-system, electrons are able to capture certain photons as the electrons resonate along a certain distance of p-orbitals, similar to how a radio antenna detects photons along its length. Typically, the more conjugated (longer) the pi-system is, the longer the wavelength of photon that can be captured. In other words, with every added adjacent double bond in a molecule diagram, we can predict the system will be progressively more likely to appear yellow to our eyes, as it becomes less likely to absorb yellow light and more likely to absorb red light. ("Conjugated systems of fewer than eight conjugated double bonds absorb only in the ultraviolet region and are colorless to the human eye"; "compounds that are blue or green typically do not rely on conjugated double bonds alone.")
In the conjugated chromophores, the electrons jump between energy levels that are extended pi orbitals, created by electron clouds like those in aromatic systems. Common examples include retinal (used in the eye to detect light), various food colorings, fabric dyes (azo compounds), pH indicators, lycopene, β-carotene, and anthocyanins. Various factors in a chromophore's structure go into determining at what wavelength region in a spectrum the chromophore will absorb. Lengthening or extending a conjugated system with more unsaturated (multiple) bonds in a molecule will tend to shift absorption to longer wavelengths. Woodward–Fieser rules can be used to approximate ultraviolet-visible maximum absorption wavelength in organic compounds with conjugated pi-bond systems.
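For conjugated dienes, the Woodward–Fieser rules reduce to a base value plus fixed increments. The sketch below encodes only a handful of the classic increments (in nm); full tables carry more substituent types, and exact base values vary slightly between textbooks, so treat this as an approximate illustration rather than a complete implementation of the rules.

```python
# Classic Woodward-Fieser increments for conjugated dienes, in nm.
BASE_HETEROANNULAR = 214  # transoid / heteroannular (or acyclic) diene
BASE_HOMOANNULAR = 253    # cisoid / homoannular diene

def woodward_fieser_lambda_max(homoannular: bool,
                               extra_conjugated_double_bonds: int = 0,
                               alkyl_or_ring_substituents: int = 0,
                               exocyclic_double_bonds: int = 0) -> int:
    """Predicted UV absorption maximum (nm) for a conjugated diene."""
    base = BASE_HOMOANNULAR if homoannular else BASE_HETEROANNULAR
    return (base
            + 30 * extra_conjugated_double_bonds  # each extension of conjugation
            + 5 * alkyl_or_ring_substituents      # each alkyl group / ring residue
            + 5 * exocyclic_double_bonds)         # each exocyclic C=C

# Heteroannular diene with 3 ring-residue substituents and one
# exocyclic double bond: 214 + 15 + 5 = 234 nm
print(woodward_fieser_lambda_max(False, 0, 3, 1))
```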
Some of these are metal complex chromophores, which contain a metal in a coordination complex with ligands. Examples are chlorophyll, which is used by plants for photosynthesis and hemoglobin, the oxygen transporter in the blood of vertebrate animals. In these two examples, a metal is complexed at the center of a tetrapyrrole macrocycle ring: the metal being iron in the heme group (iron in a porphyrin ring) of hemoglobin, or magnesium complexed in a chlorin-type ring in the case of chlorophyll. The highly conjugated pi-bonding system of the macrocycle ring absorbs visible light. The nature of the central metal can also influence the absorption spectrum of the metal-macrocycle complex or properties such as excited state lifetime. The tetrapyrrole moiety in organic compounds which is not macrocyclic but still has a conjugated pi-bond system still acts as a chromophore. Examples of such compounds include bilirubin and urobilin, which exhibit a yellow color.
Auxochrome
An auxochrome is a functional group of atoms attached to the chromophore which modifies the ability of the chromophore to absorb light, altering the wavelength or intensity of the absorption.
Halochromism
Halochromism occurs when a substance changes color as the pH changes. This is a property of pH indicators, whose molecular structure changes upon certain changes in the surrounding pH. This change in structure affects a chromophore in the pH indicator molecule. For example, phenolphthalein is a pH indicator whose structure changes as pH changes, as described below.
In a pH range of about 0-8, the molecule has three aromatic rings all bonded to a tetrahedral sp3 hybridized carbon atom in the middle which does not make the π-bonding in the aromatic rings conjugate. Because of their limited extent, the aromatic rings only absorb light in the ultraviolet region, and so the compound appears colorless in the 0-8 pH range. However, as the pH increases beyond 8.2, that central carbon becomes part of a double bond becoming sp2 hybridized and leaving a p orbital to overlap with the π-bonding in the rings. This makes the three rings conjugate together to form an extended chromophore absorbing longer wavelength visible light to show a fuchsia color. At pH ranges outside 0-12, other molecular structure changes result in other color changes; see Phenolphthalein details.
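The ranges above can be condensed into a simple lookup. A minimal sketch using the approximate boundaries from the text; the transition points are fuzzy in practice, and the colors given for the strongly acidic and strongly basic forms are the usual textbook ones rather than values stated in this article.

```python
def phenolphthalein_color(ph: float) -> str:
    """Approximate phenolphthalein color at a given pH."""
    if ph < 0:
        return "orange (strongly acidic, protonated form)"
    if ph <= 8.2:
        return "colorless (sp3 central carbon, rings not conjugated)"
    if ph <= 12:
        return "fuchsia (sp2 central carbon, extended chromophore)"
    return "colorless (strongly basic carbinol form)"


for ph in (1, 7, 10, 13):
    print(ph, "->", phenolphthalein_color(ph))
```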
Common chromophore absorption wavelengths
See also
Biological pigment
Chromatophore
Fluorophore
Litmus
Pharmacophore
Photophore, glandular organ
Pigment
Spectroscopy
Visual phototransduction
Woodward's rules
References
External links
Causes of Color: physical mechanisms by which color is generated.
High Speed Nano-Sized Electronics May be Possible with Chromophores - Azonano.com
Chemical compounds
Color | Chromophore | [
"Physics",
"Chemistry"
] | 1,229 | [
"Chemical compounds",
"Molecules",
"Matter"
] |
2,193,564 | https://en.wikipedia.org/wiki/Agon | Agon (Greek ἀγών) is a term for a conflict, struggle or contest, and for a Greek deity. This could be a contest in athletics, in chariot or horse racing, or in music or literature at a public festival in ancient Greece. Agon is the word-forming element in 'agony', explaining the concept of agon(y) in tragedy by its fundamental characters, the protagonist and antagonist.
Athletics
In one sense, agon meant a contest or a competition in athletics, for example, the Olympic Games (Ὀλυμπιακοὶ Ἀγῶνες). Agon was also a mythological personification of the contests listed above. This god was represented in a statue at Olympia with halteres (dumbbells; ἁλτῆρες) in his hands. The statue was dedicated by Micythus of Rhegium.
Religion
According to Pausanias, Agon was recognized in the Greek world as a deity, whose statue appeared at Olympia, presumably in connection with the Olympic Games, which operated as both religious festival in honor of Zeus and athletic competition. Agon is, perhaps, more of a spirit than a god in Greek mythology, but was understood to be related to both Zelos (rivalry) and Nike (victory). More generally, Agon referred to any competitive event that was held in connection with religious festivals, including athletics, music, or dramatic performances.
Agon also appears as a concept in the New Testament and is defined in that context by Strong's Concordance as "agón: a gathering, contest, struggle; as an (athletic) contest; hence, a struggle (in the soul)".
Theater
In Ancient Greek drama, particularly Old Comedy (fifth century B.C.), agon refers to a contest or debate between two characters, the protagonist and the antagonist, in the highly structured Classical tragedies and dramas. The agon could also develop between an actor and the chorus, or between two actors with half of the chorus supporting each. Through the argument of opposing principles, the agon in these performances resembled the dialectic dialogues of Plato. The meaning of the term has escaped the circumscriptions of its classical origins to signify, more generally, the conflict on which a literary work turns.
Dance
In 1948, Lincoln Kirstein proposed the idea of a ballet that would later become known as Agon. After ten years of work before Agon's premiere, it became the final ballet in a series of collaborations between choreographer George Balanchine and composer Igor Stravinsky. Balanchine referred to this ballet as "the most perfect work" to come out of the collaboration between Stravinsky and himself.
Literature
Harold Bloom in The Western Canon uses the term agon to refer to the attempt by a writer to resolve an intellectual conflict between his ideas and the ideas of an influential predecessor in which "the larger swallows the smaller", such as in chapter 18, Joyce's agon with Shakespeare.
In Man, Play, and Games (1961), Roger Caillois uses the term agon to describe competitive games in which the players have equal chances but the winner succeeds because of "a single quality (speed, endurance, strength, memory, skill, ingenuity, etc.), exercised, within defined limits and without outside assistance."
Sociopolitical theory
In sociopolitical theory, agon can refer to the idea that the clash of opposing forces necessarily results in growth and progress. The concept, known as agonism, has been proposed most explicitly by a number of scholars, including William E. Connolly, Bonnie Honig, and Claudio Colaguori, but is also implicitly present in the work of scholars such as Theodor Adorno, and Michel Foucault (see also agonistic democracy).
Derivatives
Words derived from agon include agony, agonism, antagonism, and protagonist.
See also
Man, Play and Games (Roger Caillois)
Notes
Further reading
Árnason, Jóhann Páll. Agon, Logos, Polis: The Greek Achievement and Its Aftermath. Stuttgart: Franz Steiner Verlag, 2001
Barker, Elton T. Entering the Agon: Dissent and Authority in Homer, Historiography, and Tragedy. Oxford: Oxford University Press, 2009
Lloyd, Michael A. The agon in Euripides. Oxford: Clarendon Press, 1992
Pfitzner, Victor C. Paul and the Agon Motif: Traditional Athletic Imagery in the Pauline Literature. Leiden: Brill, 1967
Greek gods
Personifications in Greek mythology
Ancient Greek theatre
Play (activity)
New Testament Greek words and phrases | Agon | [
"Biology"
] | 948 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
2,193,654 | https://en.wikipedia.org/wiki/S.Pellegrino | S.Pellegrino () is an Italian natural mineral water brand, owned by the company Sanpellegrino S.p.A, part of Swiss company Nestlé since 1997. The principal production plant is located in San Pellegrino Terme in the Province of Bergamo, Lombardy, Italy. Products are exported to most countries in Europe, the Americas, Oceania and the Middle East, as well as in Asia in Japan, Taiwan, Singapore and Hong Kong.
Corporate organisation
Sanpellegrino S.p.A. was founded during 1899, and is based in Milan, Italy.
On 20 April 1970, the company changed its name from Società Anonima Delle Terme di S.Pellegrino to Sanpellegrino S.p.A.
In 1997, Sanpellegrino S.p.A. was bought by Perrier Vittel SA, a division of Nestlé which also owns the Perrier and Vittel bottled water brands.
Paolo Luni, who joined the company as a consultant, then became General Manager and eventually CEO, left the company in 1999 after having inaugurated the Sanpellegrino Centennial celebrations, which took place in Teatro La Scala in Milan.
Production
The Sanpellegrino Company has ten production sites in Italy including its headquarters, and more than 1,850 people work for the company. It also manages other brands such as Vera, Levissima and Acqua Panna. The revenue for 2016 was €895 million, about €96 million less than the previous year. More than 30,000 bottles of water are produced every hour in the San Pellegrino plant. The bottles are then sorted to be exported to major countries around the world.
In 2005, five hundred million bottles were sold globally. In 2017, that number had increased to one billion bottles.
Varieties
Sparkling Natural Mineral Water
Sparkling Real Fruit Juice Drink: Orange, Lemon, Lemon & Mint, Blood Orange, Grapefruit, Clementine, Pomegranate & Orange, and Prickly Pear & Orange
Essenza Flavoured Sparkling Mineral Water: Lemon & Lemon Zest, Blood Orange & Black Raspberry, Dark Morello Cherry & Pomegranate, Peach & Orange Melon, Pink Grapefruit and Citrus Blend, and Tangerine & Wild Strawberry
Mineral water production
S.Pellegrino mineral water is produced in San Pellegrino Terme. The water may originate from a layer of rock below the surface, where it is mineralized from contact with limestone and volcanic rocks. The springs are located at the foot of a dolomite mountain wall which favours the formation and replenishment of a mineral water basin. The water then seeps to considerable depths and flows underground to a distant aquifer.
The carbonation is then added during the production process, as the spring itself is not naturally carbonated. The soft drinks do not use the same mineral water; rather, they are produced using filtered local water to give a consistent flavor.
History
S.Pellegrino mineral water has been produced for over 620 years. In 1395, the town borders of San Pellegrino were drawn, marking the start of its water industry. Leonardo da Vinci is said to have visited the town in 1509 to sample and examine the town's miraculous water, later writing a treatise on water.
Analysis shows that the water is strikingly similar to the samples taken in 1782, the first year such analysis took place. In fact, doctors from Northern Italy in the 13th century used to suggest that their patients go to the Val Brembana spring for treatment. Over the years, its therapeutic properties attracted many visitors, and, at the beginning of 1900, San Pellegrino Terme became a mineral spa holiday resort with a casino, thermal baths and a hotel.
In 1794, a treatise mentioned S.Pellegrino water as a treatment method for kidney stone disease.
In 1839, S.Pellegrino water was recommended for people affected with kidney diseases and urinary tract infection.
In 1760, Pellegrino Foppoli built a bathhouse where visitors had to pay a fee to use the indoor facilities.
In 1803, Foppoli's descendants sold the bathhouse to Giovanni Pesenti who wanted to construct a larger building.
The town council feared that this project would prevent visitors from free use of the spring. For this reason, they filed a complaint with the prefect, which led Ester Pesenti and Lorenzo Palazzolo to sign an agreement in 1831. They decided that the 24-unit spring would be divided in two: 17 units were given to Pesenti and Palazzolo and 7 units to the San Pellegrino Terme town council.
In 1834, the flood of the Brembo, the river that crosses San Pellegrino Terme, caused serious damage in the valley.
Since the restoration required huge expenses, in 1837, the town leased Pesenti and Palazzolo its share of the water for 12 years. In 1841, Ester Pesenti requested an authorization to continue to expand the bathhouse.
One year later, another flood hit the valley and San Pellegrino Terme sold three-quarters of its shares to Pesenti. Since the water had always been connected to the territory, they agreed to give the remaining quarter of the shares to the residents of the town who still can use an external tap free of charge. The construction work finished in 1846.
When Queen Margherita visited the town in 1905, many articles appeared in the Giornale di San Pellegrino, reporting that the bottled mineral water was sold in the main Italian cities, in many cities around Europe, as well as in Cairo, Tangiers, Shanghai, Calcutta, Sydney, Brazil, Peru, and the United States. At that time, one case of 50 bottles cost 26 Italian lire, while a case of 24 bottles cost 14 Italian lire. At the beginning of the 20th century, carbon dioxide was added to S.Pellegrino to prevent the development of bacteria, especially during long overseas travels. It is still taken from sources in Tuscany and sent to San Pellegrino Terme.
The spa facilities were renovated, and in 1928, they were equipped with more modern tools for various diagnostic needs, such as the radioscopic and radiograph room and the microscopic and chemical analysis laboratory. In addition, Granelli reorganized the bottling plant with new equipment, which raised production capacity to 120,000 bottles a day. At the beginning production was done by hand; it then became gradually mechanized and was managed by an all-female staff. The first machinery was introduced in 1930 and, since that moment, the amount produced has been increasing. Subsequently, the company began a packaging process for shipping to the recipient countries.
In 1961, Sanpellegrino S.p.A. started to produce bottled mineral water and other beverages in the new San Pellegrino Terme factory. In 1932, the Aranciata orangeade variant was introduced. Containing S.Pellegrino as its primary ingredient, the soda added concentrated orange juice. Today, Sanpellegrino S.p.A. also produces various other flavors of carbonated beverages: Limonata (lemonade), Sanbittèr (bitters), Pompelmo (grapefruit), Aranciata Rossa (blood orange), and Chinò (chinotto).
In 1968, S.Pellegrino appeared on the front cover of the British Sunday newspaper The Observer.
During the Italian occupation of Ethiopia, production was curtailed in its entirety for the Italian military's water needs. During this time the company advocated for the policy changes Mussolini's government had been implementing. This increased revenue dramatically for several years, even after the occupation had faltered. Over the years, the bottling lines increased the production levels needed to satisfy the needs of a market which was becoming more and more sophisticated, and in 2012 a high-speed PET bottling line was installed.
The company built a new plant some kilometers beyond the previous one as the water production continued to grow. In the early 1970s, it was decided to no longer use mineral water in the production of soft drinks, and to substitute it with spring water which was treated with particular equipment.
In May 2014, Sanpellegrino S.p.A. released two new flavors of their Sparkling Fruit Beverages. The new flavors were Melograno e Arancia (Pomegranate and Orange) and Clementina (Clementine). They were announced through an installation at Eataly's La Scuola Grande in New York where large cans of the new soda flavors were constructed out of flowers. In Italy, S.Pellegrino is available in 1.5 L bottles for about one euro, the same for their Aranciata in most stores. Competitive orange drinks can cost even less. If artificial sweeteners are used, the price is about half that of the sugared varieties.
Bottle design
The bottles' packaging has maintained the original references to its territory and its first productions. The products on the market can be divided into two categories: glass and PET.
The shape of the glass bottles has remained the same since its origin in 1899. The model is called Vichy because at that time San Pellegrino Terme was known as "the Italian Vichy", and it is characterized by the elongated shape of the bottle. The red star was a symbol of high quality products exported from Italy between the 1800s and the 1900s. On the neck of the bottle there is a representation of the Casino, above the date of foundation of the brand and the company. The label features an elaborate white and blue watermark which recalls the Belle Epoque style.
The PET line has the same shape as the glass bottles. Production started at the end of the 1990s with the aim of maintaining the same perlage and effervescence as the glass line. At the beginning, only the 50 centilitre size was produced, but since 2006, 33, 75 and 100 centilitre bottles have been added to the original.
Different versions of the label were created for collaborations, partnerships and international events.
In 2010, 2011 and 2013 the project "S.Pellegrino Meets Italian Talents" was meant to create collaborations with Italians known on an international level as a symbol of Italy. These collaborations include Missoni, Bulgari and a tribute to Luciano Pavarotti.
Accomplishments
2009: 110th anniversary of the foundation of the Società Anonima delle Terme di San Pellegrino. A limited edition silver label was created for the occasion.
2009–2012: special editions of transparent S.Pellegrino water bottle and white Acqua Panna bottle were created for The World's 50 Best Restaurants.
In popular culture
S.Pellegrino can be seen for the first time in 1949 in the film The Emperor of Capri, directed by Luigi Comencini; since then it has appeared in the following movies and TV series.
Films
La Dolce Vita (1960), Federico Fellini
From Russia with Love (1963), Terence Young
La Grande Bouffe (1973), Marco Ferreri
Mean Streets (1973), Martin Scorsese
Sabrina (1995), Harrison Ford, Julia Ormond
Big Night (1996), Campbell Scott, Stanley Tucci
Hollywood Ending (2002), Woody Allen
Changing Lanes (2002), Roger Michell
Ocean's Twelve (2004), Steven Soderbergh
Meet the Fockers (2004), Jay Roach
Don't Move (2004), Sergio Castellitto
The Devil Wears Prada (2006), David Frankel
Sex and the City (2008), Michael Patrick King
The Great Beauty (2013), Paolo Sorrentino
The Square (2017), Ruben Östlund
House of Gucci (2021), Ridley Scott
TV series
The Bold and the Beautiful: Brooke Logan, Ridge Forrester, Stephanie Forrester often drink S.Pellegrino at home.
House: Gregory House often drinks S.Pellegrino during the meetings with Eric Foreman.
Girlfriends: S.Pellegrino bottles are often seen in the restaurants and home gatherings that the four women have together for girl time. The series stars Tracee Ellis Ross.
Gossip Girl: Chuck Bass uses a bottle of S.Pellegrino to make a green smoothie in season one.
Sex and the City: The four girls often have a bottle of S.Pellegrino when they are out together. Moreover, Carrie Bradshaw often drinks S.Pellegrino while writing articles during the night.
The Sopranos: Bottles of S.Pellegrino are often seen in many of the restaurant scenes across the entire run of the series. They can often be seen on the Sopranos' family dinner table as well.
The Good Wife: Bottles of S.Pellegrino are seen briefly in the episode "Whack-A-Mole" (season 5, episode 9), in which Lockhart/Gardner discuss 8 out of 12 continuing cases and getting back at Alicia Florrick.
Inventing Anna
Criticism
In 2007, the German consumer television program Markt reported that S.Pellegrino contains uranium. Nestlé was informed about this and responded that uranium was common in both bottled and tap water and that the 0.0070 mg/L found in their product was below the 0.03 mg/L threshold established by various governments and food health organizations.
S.Pellegrino is not suitable for infants under 12 weeks of age, because their gastrointestinal tract and urinary system is immature and cannot withstand highly mineralized water.
See also
Apollinaris
Badoit
Borjomi
Evian
Farris
Gerolsteiner Brunnen
Mattoni
Topo Chico
Panna
Perrier
Ramlösa
Selters
Spa
References
External links
Aquadiv
Nestlé's description of S.Pellegrino
Sanpellegrino Chinò website
Fine Dining Lovers by S.Pellegrino & Acquapanna Company Webmagazine
Nestlé brands
Mineral water
Carbonated water
Bottled water brands
Soft drinks
Drink companies of Italy
Altagamma members
1899 establishments in Italy
Certified B Corporations in the Food & Beverage Industry | S.Pellegrino | [
"Chemistry"
] | 2,921 | [
"Mineral water"
] |
2,193,804 | https://en.wikipedia.org/wiki/Invention%20of%20the%20telephone | The invention of the telephone was the culmination of work done by more than one individual, and led to an array of lawsuits relating to the patent claims of several individuals and numerous companies. Notable people included in this were Antonio Meucci, Philipp Reis, Elisha Gray and Alexander Graham Bell.
Early development
The concept of the telephone dates back to the string telephone or lover's telephone that has been known for centuries, comprising two diaphragms connected by a taut string or wire. Sound waves are carried as mechanical vibrations along the string or wire from one diaphragm to the other. The classic example is the tin can telephone, a children's toy made by connecting the two ends of a string to the bottoms of two metal cans, paper cups or similar items. The essential idea of this toy was that a diaphragm can collect voice sounds for reproduction at a distance. One precursor to the development of the electromagnetic telephone originated in 1833 when Carl Friedrich Gauss and Wilhelm Eduard Weber invented an electromagnetic device for the transmission of telegraphic signals at the University of Göttingen, in Lower Saxony, helping to create the fundamental basis for the technology that was later used in similar telecommunication devices. Gauss's and Weber's invention is purported to be the world's first electromagnetic telegraph.
Charles Grafton Page
In 1836-8, American Charles Grafton Page passed an electric current through a coil of wire placed between the poles of a horseshoe magnet. He observed that connecting and disconnecting the current caused a ringing sound in the magnet. He called this effect "galvanic music". A similar phenomenon was reported by British inventor Edward Davy in 1837-8 and it was also pointed out by William Chappell that Michael Faraday had likely heard similar sounds during his own experiments with make and break circuits in the early 1830s.
Innocenzo Manzetti
Innocenzo Manzetti considered the idea of a telephone as early as 1844, and may have made one in 1864, as an enhancement to an automaton built by him in 1849.
Charles Bourseul
Charles Bourseul was a French telegraph engineer who proposed (but did not build) the first design of a "make-and-break" telephone in 1854. That is about the same time that Meucci later claimed to have created his first attempt at the telephone in Italy.
Bourseul explained: "Suppose that a man speaks near a movable disc sufficiently flexible to lose none of the vibrations of the voice; that this disc alternately makes and breaks the currents from a battery: you may have at a distance another disc which will simultaneously execute the same vibrations.... It is certain that, in a more or less distant future, a speech will be transmitted by electricity. I have made experiments in this direction; they are delicate and demand time and patience, but the approximations obtained promise a favorable result".
Antonio Meucci
An early communicating device was invented around 1854 by Antonio Meucci, who called it a telettrofono. In 1871 Meucci filed a patent caveat at the US Patent Office. His caveat describes his invention, but does not mention a diaphragm, electromagnet, conversion of sound into electrical waves, conversion of electrical waves into sound, or other essential features of an electromagnetic telephone.
The first American demonstration of Meucci's invention took place in Staten Island, New York in 1854. In 1861, a description of it was reportedly published in an Italian-language New York newspaper, although no known copy of that newspaper issue or article has survived to the present day. Meucci claimed to have invented a paired electromagnetic transmitter and receiver, where the motion of a diaphragm modulated a signal in a coil by moving an electromagnet, although this was not mentioned in his 1871 U.S. patent caveat. A further discrepancy observed was that the device described in the 1871 caveat employed only a single conduction wire, with the telephone's transmitter-receivers being insulated from a 'ground return' path.
Meucci studied the principles of electromagnetic voice transmission for many years and was able to realise his dream of transmitting his voice through wires in 1856. He installed a telephone-like device within his house in order to communicate with his wife who was ill at the time. Some of Meucci's notes purportedly written in 1857 describe the basic principle of electromagnetic voice transmission or, in other words, the telephone.
In the 1880s Meucci was credited with the early invention of inductive loading of telephone wires to increase long-distance signals. Serious burns from an accident, a lack of English, and poor business abilities resulted in Meucci's failing to develop his inventions commercially in America. Meucci demonstrated some sort of instrument in 1849 in Havana, Cuba, however, this may have been a variant of a string telephone that used wire. Meucci has been further credited with the invention of an anti-sidetone circuit. However, examination showed that his solution to sidetone was to maintain two separate telephone circuits and thus use twice as many transmission wires. The anti-sidetone circuit later introduced by Bell Telephone instead canceled sidetone through a feedback process.
An American District Telegraph (ADT) laboratory reportedly lost some of Meucci's working models, his wife reportedly disposed of others and Meucci, who sometimes lived on public assistance, chose not to renew his 1871 teletrofono patent caveat after 1874.
A resolution was passed by the United States House of Representatives in 2002 that said Meucci did pioneering work on the development of the telephone. The resolution said that "if Meucci had been able to pay the $10 fee to maintain the caveat after 1874, no patent could have been issued to Bell".
The Meucci resolution by the US Congress was promptly followed by a Canada legislative motion by Canada's 37th Parliament, declaring Alexander Graham Bell as the inventor of the telephone. Others in Canada disagreed with the Congressional resolution, some of whom provided criticisms of both its accuracy and intent.
Chronology of Meucci's invention
A retired director general of the Telecom Italia central telecommunications research institute (CSELT), Basilio Catania, and the Italian Society of Electrotechnics, "Federazione Italiana di Elettrotecnica", have devoted a Museum to Antonio Meucci, constructing a chronology of his invention of the telephone and tracing the history of the two legal trials involving Meucci and Alexander Graham Bell.
They claim that Meucci was the actual inventor of the telephone, and base their argument on reconstructed evidence. What follows, if not otherwise stated, is a summary of their historic reconstruction.
In 1834 Meucci constructed a kind of acoustic telephone as a way to communicate between the stage and control room at the theatre "Teatro della Pergola" in Florence. This telephone is constructed on the model of pipe-telephones on ships and is still working.
In 1848 Meucci developed a popular method of using electric shocks to treat rheumatism. He used to give his patients two conductors linked to 60 Bunsen batteries and ending with a cork. He also kept two conductors linked to the same Bunsen batteries. He used to sit in his laboratory, while the Bunsen batteries were placed in a second room and his patients in a third room. In 1849, while providing a treatment to a patient with a 114 V electrical discharge, Meucci heard in his laboratory his patient's scream through the piece of copper wire between them, via the conductors he was keeping near his ear. His intuition was that the "tongue" of copper wire was vibrating just like the leaf of an electroscope, which meant that there was an electrostatic effect. In order to continue the experiment without hurting his patient, Meucci covered the copper wire with a piece of paper. Through this device he heard inarticulate human voice. He called this device "telegrafo parlante" (lit. "talking telegraph").
On the basis of this prototype, Meucci worked on more than 30 kinds of sound transmitting devices inspired by the telegraph model as did other pioneers of the telephone, such as Charles Bourseul, Philipp Reis, Innocenzo Manzetti and others. Meucci later claimed that he did not think about transmitting voice by using the principle of the telegraph "make-and-break" method, but he looked for a "continuous" solution that did not interrupt the electric current.
Meucci later claimed that he constructed the first electromagnetic telephone, made of an electromagnet with a nucleus in the shape of a horseshoe, and a diaphragm of animal skin, stiffened with potassium dichromate, holding a metal disk fixed in the middle. The instrument was housed in a cylindrical carton box. He said he constructed this as a way to connect his second-floor bedroom to his basement laboratory, and thus communicate with his wife who was an invalid.
Meucci separated the two directions of transmission in order to eliminate the so-called "local effect", adopting what we would today call a 4-wire circuit. He constructed a simple calling system with a telegraphic manipulator which short-circuited the instrument of the calling person, producing in the instrument of the called person a succession of impulses (clicks) much more intense than those of normal conversation. As he was aware that his device required a wider bandwidth than a telegraph, he found means to mitigate the so-called "skin effect" through surface treatment of the conductor or by changing the material (copper instead of iron). He successfully used an insulated copper plait, thus anticipating the litz wire later used by Nikola Tesla in RF coils.
Meucci later claimed that in 1864 he realized his "best device", using an iron diaphragm of optimized thickness tightly clamped along its rim. The instrument was housed in a shaving-soap box whose cover clamped the diaphragm.
Meucci later claimed that in August 1870 he obtained transmission of articulate human voice over a distance of a mile, using as a conductor a copper plait insulated with cotton. He called this device the "teletrofono". Drawings and notes by Antonio Meucci dated September 27, 1870, show coils of wire on long-distance telephone lines. A painting made by Nestore Corradi in 1858 mentions the sentence "Electric current from the inductor pipe".
The above information was published in the Scientific American Supplement No. 520 of December 19, 1885, and was based on reconstructions produced in 1885, for which there was no contemporary pre-1875 evidence. Meucci's 1871 caveat did not mention any of the telephone features later credited to him by his lawyer and published in that Supplement; this omission was a major reason the "Bell v. Globe and Meucci" patent infringement case was decided against Globe and Meucci.
Johann Philipp Reis
The Reis telephone was developed from 1857 onwards. The transmitter was reportedly difficult to operate, since the relative position of the needle and the contact was critical to the device's operation. The device can be called a "telephone", since it did transmit voice sounds electrically over distance, but it was hardly a commercially practical telephone in the modern sense.
In 1947, the Reis device was tested by the British company Standard Telephones and Cables (STC). The results confirmed that it could transmit and receive speech with good quality (fidelity), though at relatively low intensity.
Reis's invention was presented in a lecture before the Physical Society of Frankfurt on 26 October 1861, and in a description he wrote for the society's Jahresbericht a month or two later. It created a good deal of scientific excitement in Germany; models of it were sent abroad to London, Dublin, Tiflis, and other places. It became a subject for popular lectures and an article for scientific cabinets.
Thomas Edison tested the Reis equipment and found that "single words, uttered as in reading, speaking and the like, were perceptible indistinctly, notwithstanding here also the inflections of the voice, the modulations of interrogation, wonder, command, etc., attained distinct expression." He used Reis's work for the successful development of the carbon microphone. Edison acknowledged his debt to Reis thus:
The first inventor of a telephone was Phillip Reis of Germany only musical not articulating. The first person to publicly exhibit a telephone for transmission of articulate speech was A. G. Bell. The first practical commercial telephone for transmission of articulate speech was invented by myself. Telephones used throughout the world are mine and Bell's. Mine is used for transmitting. Bell's is used for receiving.
Cyrille Duquet
In 1878, Cyrille Duquet invented the handset, which combined the transmitter and the receiver in a single handheld unit.
Electro-magnetic transmitters and receivers
Elisha Gray
Elisha Gray, of Highland Park, Illinois, also devised a tone telegraph of this kind about the same time as La Cour. In Gray's tone telegraph, several vibrating steel reeds tuned to different frequencies interrupted the current, which at the other end of the line passed through electromagnets and vibrated matching tuned steel reeds near the electromagnet poles. Gray's "harmonic telegraph", with vibrating reeds, was used by the Western Union Telegraph Company. Since more than one set of vibration frequencies – that is to say, more than one musical tone – can be sent over the same wire simultaneously, the harmonic telegraph can be utilized as a 'multiplex' or many-ply telegraph, conveying several messages through the same wire at the same time. Each message can either be read by an operator by the sound, or from different tones read by different operators, or a permanent record can be made by the marks drawn on a ribbon of traveling paper by a Morse recorder. On July 27, 1875, Gray was granted U.S. patent 166,096 for "Electric Telegraph for Transmitting Musical Tones" (the harmonic telegraph).
On February 14, 1876, at the US Patent Office, Gray's lawyer filed a patent caveat for a telephone on the very same day that Bell's lawyer filed Bell's patent application for a telephone. The water transmitter described in Gray's caveat was strikingly similar to the experimental telephone transmitter tested by Bell on March 10, 1876, a fact which raised questions about whether Bell (who knew of Gray) was inspired by Gray's design or vice versa. Although Bell did not use Gray's water transmitter in later telephones, evidence suggests that Bell's lawyers may have obtained an unfair advantage over Gray.
Alexander Graham Bell
Alexander Graham Bell had pioneered a system called visible speech, developed by his father, to teach deaf children. In 1872 Bell founded a school in Boston, Massachusetts, to train teachers of the deaf. The school subsequently became part of Boston University, where Bell was appointed professor of vocal physiology in 1873.
As Professor of Vocal Physiology at Boston University, Bell was engaged in training teachers in the art of instructing the deaf how to speak and experimented with the Leon Scott phonautograph in recording the vibrations of speech. This apparatus consists essentially of a thin membrane vibrated by the voice and carrying a light-weight stylus, which traces an undulatory line on a plate of smoked glass. The line is a graphic representation of the vibrations of the membrane and the waves of sound in the air.
This background prepared Bell for work with spoken sound waves and electricity. He began his experiments in 1873–1874 with a harmonic telegraph, following the examples of Bourseul, Reis, and Gray. Bell's designs employed various on-off-on-off make-break current-interrupters driven by vibrating steel reeds which sent interrupted current to a distant receiver electro-magnet that caused a second steel reed or tuning fork to vibrate.
During a June 2, 1875, experiment by Bell and his assistant Thomas Watson, a receiver reed failed to respond to the intermittent current supplied by an electric battery. Bell told Watson, who was at the other end of the line, to pluck the reed, thinking it had stuck to the pole of the magnet. Watson complied, and to his astonishment Bell heard a reed at his end of the line vibrate and emit the same timbre as a plucked reed, although there were no interrupted on-off-on-off currents from a transmitter to make it vibrate. A few more experiments soon showed that his receiver reed had been set in vibration by the magneto-electric currents induced in the line by the motion of the distant receiver reed in the neighborhood of its magnet. The battery current was not causing the vibration but was needed only to supply the magnetic field in which the reeds vibrated. Moreover, when Bell heard the rich overtones of the plucked reed, it occurred to him that since the circuit was never broken, all the complex vibrations of speech might be converted into undulating (modulated) currents, which in turn would reproduce the complex timbre, amplitude, and frequencies of speech at a distance.
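The distinction can be illustrated numerically (a modern sketch, not period apparatus): an interrupted make-break current reduces the signal to two levels, discarding the amplitude detail that carries timbre, while an undulatory current preserves the whole waveform.

```python
import numpy as np

fs = 8000                                  # samples per second
t = np.arange(0, 0.02, 1 / fs)             # 20 ms of signal
# A "voice" built from a fundamental plus an overtone; the overtone
# mixture is what gives a sound its characteristic timbre.
voice = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)

# Make-break telegraph transmission: the circuit is only ever on or off,
# so every amplitude detail between "on" and "off" is discarded.
make_break = (voice > 0).astype(float)

# Undulatory (modulated) current: the line current stays continuously
# proportional to the diaphragm displacement, preserving the waveform.
undulatory = 0.01 * voice                  # arbitrary current scale

print("levels in make-break signal:", len(np.unique(make_break)))
print("levels in undulatory signal:", len(np.unique(undulatory.round(6))))
```

The two-level signal can convey pitch but not overtone structure; the continuous one carries both, which is the insight Bell drew from the plucked-reed accident.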
After Bell and Watson discovered on June 2, 1875, that movements of the reed alone in a magnetic field could reproduce the frequencies and timbre of spoken sound waves, Bell reasoned by analogy with the mechanical phonautograph that a skin diaphragm would reproduce sounds like the human ear when connected to a steel or iron reed or hinged armature. On July 1, 1875, he instructed Watson to build a receiver consisting of a stretched diaphragm or drum of goldbeater's skin with an armature of magnetized iron attached to its middle, and free to vibrate in front of the pole of an electromagnet in circuit with the line. A second membrane-device was built for use as a transmitter. This was the "gallows" phone. A few days later they were tried together, one at each end of the line, which ran from a room in the inventor's house, located at 5 Exeter Place in Boston, to the cellar underneath. Bell, in the work room, held one instrument in his hands, while Watson in the cellar listened at the other. Bell spoke into his instrument, "Do you understand what I say?" and Watson answered "Yes". However, the voice sounds were not distinct and the armature tended to stick to the electromagnet pole and tear the membrane.
On 10 March 1876, a test between two rooms in a single building above the Palace Theatre, at 109 Court Street not far from Scollay Square in Boston, showed that the telephone worked, though so far only at short range.
In 1876, Bell became the first to obtain a patent for an "apparatus for transmitting vocal or other sounds telegraphically", after experimenting with many primitive sound transmitters and receivers. Because of illness and other commitments, Bell made little or no telephone improvements or experiments for eight months after his U.S. patent 174,465 was published. But within a year the first telephone exchange was built in Connecticut, and the Bell Telephone Company was created in 1877, with Bell owning a third of the shares, quickly making him a wealthy man. Organ builder Ernest Skinner reported in his autobiography that Bell offered Boston-area organ builder Hutchings a 50% interest in the company, but Hutchings declined.
In 1880, Bell was awarded the French Volta Prize for his invention and with the money, founded the Volta Laboratory in Washington, where he continued experiments in communication, in medical research, and in techniques for teaching speech to the deaf, working with Helen Keller among others. In 1885 he acquired land in Nova Scotia and established a summer home there where he continued experiments, particularly in the field of aviation.
Bell himself said that the telephone was invented in Canada but made in the United States.
Bell's success
The first successful bi-directional transmission of clear speech by Bell and Watson was made on March 10, 1876, when Bell spoke into the device, "Mr. Watson, come here, I want to see you." and Watson complied with the request. Bell tested Gray's liquid transmitter design in this experiment, but only after Bell's patent was granted and only as a proof of concept scientific experiment to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically transmitted. Because a liquid transmitter was not practical for commercial products, Bell focused on improving the electromagnetic telephone after March 1876 and never used Gray's liquid transmitter in public demonstrations or commercial use.
Bell's telephone transmitter (microphone) consisted of a double electromagnet, in front of which a membrane, stretched on a ring, carried an oblong piece of soft iron cemented to its middle. A funnel-shaped mouthpiece directed the voice sounds upon the membrane, and as it vibrated, the soft iron "armature" induced corresponding currents in the coils of the electromagnet. These currents, after traversing the wire, passed through the receiver which consisted of an electromagnet in a tubular metal can having one end partially closed by a thin circular disc of soft iron. When the undulatory current passed through the coil of this electromagnet, the disc vibrated, thereby creating sound waves in the air.
This primitive telephone was rapidly improved. The double electromagnet was replaced by a single permanently magnetized bar magnet having a small coil or bobbin of fine wire surrounding one pole, in front of which a thin disc of iron was fixed in a circular mouthpiece. The disc served as a combined diaphragm and armature. On speaking into the mouthpiece, the iron diaphragm vibrated with the voice in the magnetic field of the bar-magnet pole, and thereby caused undulatory currents in the coil. These currents, after traveling through the wire to the distant receiver, were received in an identical apparatus. This design was patented by Bell on January 30, 1877. The sounds were weak and could only be heard when the ear was close to the earphone/mouthpiece, but they were distinct.
In the third of his tests in Southern Ontario, on August 10, 1876, Bell made a call via the telegraph line from the family homestead in Brantford, Ontario, to his assistant located in Paris, Ontario, some 13 kilometers away. This test was claimed by many sources as the world's first long-distance call. The final test certainly proved that the telephone could work over long distances.
Public demonstrations
Early public demonstrations of Bell's telephone
Bell exhibited a working telephone at the Centennial Exhibition in Philadelphia in June 1876, where it attracted the attention of Brazilian emperor Pedro II plus the physicist and engineer Sir William Thomson (who would later be ennobled as the 1st Baron Kelvin). In August 1876 at a meeting of the British Association for the Advancement of Science, Thomson revealed the telephone to the European public. In describing his visit to the Philadelphia Exhibition, Thomson said, "I heard [through the telephone] passages taken at random from the New York newspapers: 'S.S. Cox Has Arrived' (I failed to make out the S.S. Cox); 'The City of New York', 'Senator Morton', 'The Senate Has Resolved To Print A Thousand Extra Copies', 'The Americans In London Have Resolved To Celebrate The Coming Fourth Of July!' All this my own ears heard spoken to me with unmistakable distinctness by the then circular disc armature of just such another little electro-magnet as this I hold in my hand."
Three great tests of the telephone
Only a few months after receiving U.S. Patent No. 174,465 at the beginning of March 1876, Bell conducted three important tests of his new telephone technology after returning to his parents' home at Melville House (now the Bell Homestead National Historic Site) for the summer.
On March 10, 1876, Bell had used "the instrument" in Boston to call Thomas Watson who was in another room but out of earshot. He said, "Mr. Watson, come here – I want to see you" and Watson soon appeared at his side.
In the first test call at a longer distance in Southern Ontario, on August 3, 1876, Alexander Graham's uncle, Professor David Charles Bell, spoke to him from the Brantford telegraph office, reciting lines from Shakespeare's Hamlet ("To be or not to be...."). The young inventor, positioned at the A. Wallis Ellis store in the neighboring community of Mount Pleasant, received his uncle's voice and may possibly have recorded it as a phonautogram, a tracing of the sound waves drawn by a pen-like recording device onto smoked glass or other media.
The next day on August 4 another call was made between Brantford's telegraph office and Melville House, where a large dinner party exchanged "....speech, recitations, songs and instrumental music". To bring telephone signals to Melville House, Alexander Graham audaciously "bought up" and "cleaned up" the complete supply of stovepipe wire in Brantford. With the help of two of his parents' neighbours, he tacked the stovepipe wire some 400 metres (a quarter mile) along the top of fence posts from his parents' home to a junction point on the telegraph line to the neighbouring community of Mount Pleasant, which joined it to the Dominion Telegraph office in Brantford, Ontario.
The third and most important test was the world's first true long-distance telephone call, placed between Brantford and Paris, Ontario on August 10, 1876. For that long-distance call Alexander Graham Bell set up a telephone using telegraph lines at Robert White's Boot and Shoe Store at 90 Grand River Street North in Paris via its Dominion Telegraph Co. office on Colborne Street. The normal telegraph line between Paris and Brantford was not quite 13 km (8 miles) long, but the connection was extended a further 93 km (58 miles) to Toronto to allow the use of a battery in its telegraph office. Granted, this was a one-way long-distance call. The first two-way (reciprocal) conversation over a line occurred between Cambridge and Boston (roughly 2.5 miles) on October 9, 1876. During that conversation, Bell was on Kilby Street in Boston and Watson was at the offices of the Walworth Manufacturing Company.
Scientific American described the three test calls in their September 9, 1876, article, "The Human Voice Transmitted by Telegraph". Historian Thomas Costain referred to the calls as "the three great tests of the telephone". One Bell Homestead reviewer wrote of them, "No one involved in these early calls could possibly have understood the future impact of these communication firsts".
Later public demonstrations
A later telephone design was publicly exhibited on May 4, 1877, at a lecture given by Professor Bell in the Boston Music Hall. According to a report quoted by John Munro in Heroes of the Telegraph:
Going to the small telephone box with its slender wire attachments, Mr. Bell coolly asked, as though addressing someone in an adjoining room, "Mr. Watson, are you ready!" Mr. Watson, five miles away in Somerville, promptly answered in the affirmative, and soon was heard a voice singing "America". [...] Going to another instrument, connected by wire with Providence, forty-three miles distant, Mr. Bell listened a moment, and said, "Signor Brignolli, who is assisting at a concert in Providence Music Hall, will now sing for us." In a moment the cadence of the tenor's voice rose and fell, the sound being faint, sometimes lost, and then again audible. Later, a cornet solo played in Somerville was very distinctly heard. Still later, a three-part song came over the wire from Somerville, and Mr. Bell told his audience "I will switch off the song from one part of the room to another so that all can hear." At a subsequent lecture in Salem, Massachusetts, communication was established with Boston, eighteen miles distant, and Mr. Watson at the latter place sang "Auld Lang Syne", the National Anthem, and "Hail Columbia", while the audience at Salem joined in the chorus.
On January 14, 1878, at Osborne House, on the Isle of Wight, Bell demonstrated the device to Queen Victoria, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK. The queen considered the process to be "quite extraordinary" although the sound was "quite faint". She later asked to buy the equipment that was used, but Bell offered to make a model specifically for her.
Summary of Bell's achievements
Bell did for the telephone what Henry Ford did for the automobile. Although not the first to experiment with telephonic devices, Bell and the companies founded in his name were the first to develop commercially practical telephones around which a successful business could be built and grow. Bell adopted carbon transmitters similar to Edison's and adapted telephone exchanges and switching plug boards developed for telegraphy. Watson and other Bell engineers invented numerous other improvements to telephony. Bell succeeded where others had failed in assembling a commercially viable telephone system. It can be argued that Bell invented the telephone industry. Bell's first intelligible voice transmission over an electric wire was named an IEEE Milestone.
Variable resistance transmitters
Water microphone – Elisha Gray
Elisha Gray recognized the lack of fidelity of the make-break transmitter of Reis and Bourseul and reasoned, by analogy with the lovers' telegraph, that if the current could be made to more closely model the movements of the diaphragm, rather than simply opening and closing the circuit, greater fidelity might be achieved. Gray filed a patent caveat with the US Patent Office on February 14, 1876, for a liquid microphone. The device used a metal needle or rod that was placed – just barely – into a liquid conductor, such as a water/acid mixture. In response to the diaphragm's vibrations, the needle dipped more or less into the liquid, varying the electrical resistance and thus the current passing through the device and on to the receiver. Gray did not convert his caveat into a patent application until after the caveat had expired, and hence left the field open to Bell.
When Gray applied for a patent for the variable resistance telephone transmitter, the Patent Office determined "while Gray was undoubtedly the first to conceive of and disclose the (variable resistance) invention, as in his caveat of 14 February 1876, his failure to take any action amounting to completion until others had demonstrated the utility of the invention deprives him of the right to have it considered."
Carbon microphone – Thomas Edison, Edward Hughes, Emile Berliner
The carbon microphone was independently developed around 1878 by David Edward Hughes in England and Emile Berliner and Thomas Edison in the US. Although Edison was awarded the first patent in mid-1877, Hughes had demonstrated his working device in front of many witnesses some years earlier, and most historians credit him with its invention.
Thomas Alva Edison took the next step in improving the telephone with his 1878 invention of the carbon-grain "transmitter" (microphone), which provided a strong voice signal on the transmitting circuit and made long-distance calls practical. Edison discovered that carbon grains, squeezed between two metal plates, had a variable electrical resistance related to the pressure. The grains could thus vary their resistance as the plates moved in response to sound waves, and reproduce sound with good fidelity without the weak signals associated with electromagnetic transmitters.
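The principle can be sketched with Ohm's law (the component values below are invented for illustration and are not Edison's): sound pressure compresses the carbon grains, lowering their resistance, so the battery-driven line current becomes a copy of the voice waveform.

```python
import numpy as np

V = 6.0        # battery voltage, volts (illustrative)
R0 = 100.0     # resting resistance of the carbon button, ohms (illustrative)
k = 40.0       # ohms of resistance drop per unit of sound pressure (illustrative)

t = np.linspace(0.0, 0.01, 1000)           # 10 ms of signal
p = np.sin(2 * np.pi * 440 * t)            # voice modeled as a 440 Hz tone

R = R0 - k * p                             # compression packs the grains: R falls
I = V / R                                  # Ohm's law: line current tracks the voice

# Because the battery supplies the power, the current swing can be made far
# larger than the tiny induced currents of an electromagnetic transmitter --
# the "strong voice signal" that made long lines practical.
print(f"mean current {I.mean():.4f} A, swing {I.max() - I.min():.4f} A")
```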
The carbon microphone was further improved by Emile Berliner, Francis Blake, David E. Hughes, Henry Hunnings, and Anthony White. The carbon microphone remained standard in telephony until the 1980s, and is still being produced.
Improvements to the early telephone
Additional inventions such as the call bell, central telephone exchange, common battery, ring tone, amplification, trunk lines, and wireless phones – at first cordless and then fully mobile – made the telephone the useful and widespread apparatus it is now.
Telephone exchanges
The telephone exchange was an idea of the Hungarian engineer Tivadar Puskás (1844–1893) in 1876, while he was working for Thomas Edison on a telegraph exchange. Puskás was working on his idea for an electrical telegraph exchange when Alexander Graham Bell received the first patent for the telephone. This caused Puskás to take a fresh look at his own work and he refocused on perfecting a design for a telephone exchange. He then got in touch with the U.S. inventor Thomas Edison who liked the design. According to Edison, "Tivadar Puskas was the first person to suggest the idea of a telephone exchange".
Controversies
Bell has been widely recognized as the "inventor" of the telephone outside of Italy, where Meucci was championed as its inventor, and outside of Germany, where Reis was recognized as the "inventor". In the United States, there are numerous reflections of Bell as a North American icon for inventing the telephone, and the matter was for a long time non-controversial. In June 2002, however, the United States House of Representatives passed a symbolic bill recognizing the contributions of Antonio Meucci "in the invention of the telephone" (not "for the invention of the telephone"), throwing the matter into some controversy. Ten days later the Canadian parliament countered with a symbolic motion attributing the invention of the telephone to Bell.
Champions of Meucci, Manzetti, and Gray have each offered fairly precise tales of a contrivance whereby Bell actively stole the invention of the telephone from their favored inventor. The 2002 congressional resolution inaccurately stated that Bell worked in a laboratory in which Meucci's materials had been stored, and claimed that Bell must thus have had access to those materials. Manzetti claimed that Bell visited him and examined his device in 1865. In 1886, Zenas Wilber, a patent examiner, publicly alleged that Bell had paid him one hundred dollars for allowing Bell to look at Gray's confidential patent filing.
One of the valuable claims in Bell's 1876 patent was claim 4, a method of producing variable electric current in a circuit by varying the resistance in the circuit. That feature was not shown in any of Bell's patent drawings, but was shown in Elisha Gray's drawings in his caveat filed the same day, February 14, 1876. A description of the variable resistance feature, consisting of seven sentences, was inserted into Bell's application. That it was inserted is not disputed; when it was inserted is the controversial issue. Bell testified that he wrote the sentences containing the variable resistance feature before January 18, 1876, "almost at the last moment" before sending his draft application to his lawyers. A book by Evenson argues that the seven sentences and claim 4 were inserted, without Bell's knowledge, just before Bell's application was hand-carried to the Patent Office by one of Bell's lawyers on February 14, 1876.
Contrary to the popular story, Gray's caveat was taken to the US Patent Office a few hours before Bell's application. Gray's caveat was taken to the Patent Office in the morning of February 14, 1876, shortly after the Patent Office opened and remained near the bottom of the in-basket until that afternoon. Bell's application was filed shortly before noon on February 14 by Bell's lawyer who requested that the filing fee be entered immediately onto the cash receipts blotter and Bell's application was taken to the Examiner immediately. Late in the afternoon, Gray's caveat was entered on the cash blotter and was not taken to the Examiner until the following day. The fact that Bell's filing fee was recorded earlier than Gray's led to the myth that Bell had arrived at the Patent Office earlier. Bell was in Boston on February 14 and did not know this happened until later. Gray later abandoned his caveat and did not contest Bell's priority. That opened the door to Bell being granted US patent 174465 for the telephone on March 7, 1876.
Memorial to the invention
In 1906 the citizens of the City of Brantford, Ontario, Canada and its surrounding area formed the Bell Memorial Association to commemorate the invention of the telephone by Alexander Graham Bell in July 1874 at his parents' home, Melville House, near Brantford. Walter Allward's design was the unanimous choice from among 10 submitted models, winning the competition. The memorial was originally to be completed by 1912 but Allward did not finish it until five years later. The Governor General of Canada, Victor Cavendish, 9th Duke of Devonshire, ceremoniously unveiled the memorial on October 24, 1917.
Allward designed the monument to symbolize the telephone's ability to overcome distance. A series of steps leads to the main section, where the floating allegorical figure of Inspiration appears above a reclining male figure representing Man, discovering his power to transmit sound through space, and also pointing to three floating figures, the messengers of Knowledge, Joy, and Sorrow, positioned at the other end of the tableau. Additionally, two female figures mounted on granite pedestals, representing Humanity, are positioned to the left and right of the memorial, one sending and the other receiving a message.
The Bell Telephone Memorial's grandeur has been described as the finest example of Allward's early work, propelling the sculptor to fame. The memorial itself has been used as a central fixture for many civic events and remains an important part of Brantford's history, helping the city style itself as 'The Telephone City'.
See also
History of the telephone
The Telephone Cases, U.S. patent dispute and infringement court cases
Timeline of the telephone
References
Further reading
Baker, Burton H. The Gray Matter: The Forgotten Story of the Telephone, St. Joseph, MI, 2000.
Bell, Alexander Graham. Speech by Alexander Graham Bell, November 2, 1911: Historical address delivered by Alexander Graham Bell, November 2, 1911, at the first meeting of the Telephone Pioneers' Association, Beinn Bhreagh Recorder, November 1911, pp. 15–19.
Bethune, Brian. Did Bell Steal the Idea for the Phone? (Book Review), Maclean's Magazine, February 4, 2008
Bourseul, Charles. Transmission électrique de la parole, L'Illustration (Paris), August 26, 1854
Bruce, Robert V. Bell: Alexander Bell and the Conquest of Solitude, Cornell University Press, 1990.
Coe, Lewis. The Telephone and Its Several Inventors: A History, McFarland, North Carolina, 1995.
Evenson, A. Edward. The Telephone Patent Conspiracy of 1876: The Elisha Gray – Alexander Bell Controversy, McFarland, North Carolina, 2000.
Gray, Charlotte. Reluctant Genius: The Passionate Life and Inventive Mind of Alexander Graham Bell, HarperCollins, Toronto, 2006.
Josephson, Matthew. Edison: A Biography, Wiley, 1992.
Shulman, Seth. Telephone Gambit: Chasing Alexander Graham Bell's Secret, W.W. Norton & Co., 1st ed., 2007.
Thompson, Sylvanus P. Philipp Reis, Inventor of the Telephone, London: E. & F. N. Spon, 1883.
External links
American Treasures of the Library of Congress, Alexander Graham Bell – Lab notebook I, pp. 40–41 (image 22)
Scientific American Supplement No. 520, December 19, 1885
Telephone Patents
Patents
US 250126 Speaking Telephone by Francis Blake (November 29, 1881)
US 474230 Speaking Telegraph (graphite transmitter) by Thomas Edison (Western Union) May 3, 1892
US 485311 Telephone (solid back carbon transmitter) by Anthony C. White (Bell engineer) November 1, 1892
US 687499 Telephone Transmitter (carbon granules "candlestick" microphone) by W.W. Dean (Kellogg Co.) November 26, 1901
US 815176 Automatic Telephone Connector Switch (for rotary dial phones) by A E Keith and C J Erickson March 13, 1906
Discovery and invention controversies
History of electronic engineering
Telephone
History of the telephone
Technological controversies
Technology in society | Invention of the telephone | [
"Engineering"
] | 8,240 | [
"Electronic engineering",
"History of electronic engineering"
] |
2,193,888 | https://en.wikipedia.org/wiki/Force-free%20magnetic%20field | In plasma physics, a force-free magnetic field is a magnetic field in which the Lorentz force is equal to zero and the magnetic pressure greatly exceeds the plasma pressure such that non-magnetic forces can be neglected. For a force-free field, the electric current density is either zero or parallel to the magnetic field.
Definition
When a magnetic field is approximated as force-free, all non-magnetic forces are neglected and the Lorentz force vanishes. For non-magnetic forces to be neglected, it is assumed that the ratio of the plasma pressure to the magnetic pressure—the plasma β—is much less than one, i.e., $\beta = p/(B^2/2\mu_0) \ll 1$. With this assumption, magnetic pressure dominates over plasma pressure such that the latter can be ignored. It is also assumed that the magnetic pressure dominates over other non-magnetic forces, such as gravity, so that these forces can similarly be ignored.
In SI units, the Lorentz force condition for a static magnetic field can be expressed as

$$\mathbf{J} \times \mathbf{B} = \mathbf{0},$$

where $\mathbf{J} = \frac{1}{\mu_0}\nabla\times\mathbf{B}$ is the current density and $\mu_0$ is the vacuum permeability. Alternatively, this can be written as

$$(\nabla\times\mathbf{B})\times\mathbf{B} = \mathbf{0}.$$

These conditions are fulfilled when the current vanishes or is parallel to the magnetic field.
Zero current density
If the current density is identically zero, then the magnetic field is the gradient of a magnetic scalar potential $\phi$:

$$\mathbf{B} = -\nabla\phi.$$

The substitution of this into $\nabla\cdot\mathbf{B} = 0$ results in Laplace's equation,

$$\nabla^2\phi = 0,$$

which can often be readily solved, depending on the precise boundary conditions. In this case, the field is referred to as a potential field or vacuum magnetic field.
Nonzero current density
If the current density is not zero, then it must be parallel to the magnetic field, i.e.,

$$\mu_0\mathbf{J} = \alpha\mathbf{B},$$

where $\alpha$ is a scalar function known as the force-free parameter or force-free function. This implies that

$$\nabla\times\mathbf{B} = \alpha\mathbf{B}.$$

Taking the divergence of both sides and using $\nabla\cdot\mathbf{B} = 0$ gives $\mathbf{B}\cdot\nabla\alpha = 0$, so the force-free parameter can be a function of position but must be constant along field lines.
Linear force-free field
When the force-free parameter $\alpha$ is constant everywhere, the field is called a linear force-free field (LFFF). A constant $\alpha$ allows for the derivation of a vector Helmholtz equation,

$$\nabla^2\mathbf{B} + \alpha^2\mathbf{B} = \mathbf{0},$$

obtained by taking the curl of the equation $\nabla\times\mathbf{B} = \alpha\mathbf{B}$ above.
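A classic closed-form example of a linear force-free field is a Beltrami field such as the Arnold–Beltrami–Childress (ABC) field. The following sketch (assuming the SymPy library is available; the field and $\alpha = 1$ are chosen purely for illustration) verifies symbolically that the A = B = C = 1 case satisfies $\nabla\times\mathbf{B} = \alpha\mathbf{B}$ and $\nabla\cdot\mathbf{B} = 0$:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# ABC (Arnold-Beltrami-Childress) field with A = B = C = 1
Bx = sp.sin(z) + sp.cos(y)
By = sp.sin(x) + sp.cos(z)
Bz = sp.sin(y) + sp.cos(x)

curl = (sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y))

alpha = 1
# Each component of curl(B) - alpha*B simplifies to zero ...
print([sp.simplify(c - alpha * b) for c, b in zip(curl, (Bx, By, Bz))])
# ... and the field is divergence-free, as any magnetic field must be.
print(sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)))
```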
Nonlinear force-free field
When the force-free parameter depends on position, the field is called a nonlinear force-free field (NLFFF). In this case, the equations do not possess a general solution, and usually must be solved numerically.
Physical examples
In the Sun's upper chromosphere and lower corona, the plasma β can locally be of order 0.01 or lower, allowing the magnetic field to be approximated as force-free.
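As a rough order-of-magnitude check (the numbers below are illustrative textbook values for an active-region corona, not measurements from any particular study), take a particle density $n \approx 10^{15}\,\mathrm{m^{-3}}$, temperature $T \approx 2\times10^{6}\,\mathrm{K}$, and field strength $B \approx 10^{-2}\,\mathrm{T}$, with total pressure $p \approx 2nk_BT$ for electrons and ions:

$$\beta = \frac{2nk_BT}{B^2/2\mu_0} \approx \frac{2\,(10^{15})\,(1.38\times10^{-23})\,(2\times10^{6})\,\mathrm{Pa}}{(10^{-2})^2/(2\times4\pi\times10^{-7})\,\mathrm{Pa}} \approx \frac{5.5\times10^{-2}}{40} \approx 1.4\times10^{-3},$$

which is well below unity, consistent with treating the coronal field as force-free.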
See also
Woltjer's theorem
Chandrasekhar–Kendall function
Magnetic helicity
References
Further reading
Plasma theory and modeling | Force-free magnetic field | [
"Physics"
] | 553 | [
"Plasma theory and modeling",
"Plasma physics"
] |
2,193,890 | https://en.wikipedia.org/wiki/Thistle | Thistle is the common name of a group of flowering plants characterized by leaves with sharp spikes on the margins, mostly in the family Asteraceae. Prickles can also occur all over the planton the stem and on the flat parts of the leaves. These prickles protect the plant from herbivores. Typically, an involucre with a clasping shape similar to a cup or urn subtends each of a thistle's flower heads. The typically feathery pappus of a ripe thistle flower is known as thistle-down.
The spininess varies considerably by species. For example, Cirsium heterophyllum has very soft spines while Cirsium spinosissimum is the opposite. Typically, species adapted to dry environments are more spiny.
The term thistle is sometimes taken to mean precisely those plants in the tribe Cardueae (synonym: Cynareae), especially the genera Carduus, Cirsium, and Onopordum. However, plants outside this tribe are sometimes also called thistles.
Biennial thistles are particularly noteworthy for their high wildlife value, producing copious floral resources for pollinators, nourishing seeds for birds like the goldfinch, foliage for butterfly larvae, and down for the lining of birds' nests.
A thistle is the floral emblem of Scotland and Lorraine, as well as the emblem of the Encyclopædia Britannica.
Taxonomy
Genera in the Asteraceae with the word thistle often used in their common names include:
Arctium – burdock
Carduus – musk thistle and others
Carlina – carline thistle
Carthamus – distaff thistle
Centaurea – star thistle
Cicerbita – sow thistle
Cirsium – melancholy thistle, creeping thistle, spear thistle, and others
Cnicus – blessed thistle
Cynara – artichoke, cardoon
Echinops – globe thistle
Galactites - milk thistle
Notobasis – Syrian thistle
Onopordum – cotton thistle, also known as Scots thistle
Scolymus – golden thistle or oyster thistle
Silybum – milk or St. Mary's thistle
Sonchus – sow thistle
Plants in families other than Asteraceae which are sometimes called thistle include:
Kali – Russian thistle, Tartar thistle, or tumbleweed, plants formerly classified in the genus Salsola (family Chenopodiaceae)
Argemone mexicana – flowering thistle, purple prickly poppy (family Papaveraceae)
Eryngium – certain species include the word thistle, such as beethistle, E. articulatum (family Apiaceae)
Dipsacus fullonum – German names include Hausdistel, Kardendisteln, Roddistel, Sprotdistel and Weberdistel (family Caprifoliaceae)
Ecology
Thistle flowers are the favourite nectar sources of the pearl-bordered fritillary, small pearl-bordered fritillary, high brown fritillary, and dark green fritillary butterflies.
Thistles and thistle-seed feeders provide important sustenance for goldfinches and the flowers are strongly favoured by many butterflies besides fritillaries such as the monarch, skippers, and the various types of tiger swallowtail. Hummingbirds will feed on the flowers of the biennial species, which feature large flowers, as compared with the perennial creeping thistle.
Some thistles, for example Cirsium vulgare, native to Eurasia, have been widely introduced outside their native range. Control measures include Trichosirocalus weevils. A problem with this approach, at least in North America, is that the introduced weevils may affect native thistles at least as much as the desired targets. Another approach towards controlling thistle growth is using thistle tortoise beetles as a biological control agent; through feeding on thistle plants, thistle tortoise beetles skeletonize the leaves and damage the plant.
Thistles are important nectar sources for pollinators. Some ecological organizations, such as the Xerces Society, have attempted to raise awareness of their benefits, to counteract the general labeling of thistles as weeds in agriculture and home gardening. The monarch butterfly (Danaus plexippus), for instance, was highlighted as traditionally relying upon taller, large-flowered thistle species such as the tall thistle, Cirsium altissimum, for its migration. Although such organizations focus on the benefits of native thistles, certain non-native thistles, such as Cirsium vulgare in North America, may provide similar benefits to wildlife.
Some prairie and wildflower seed production companies supply bulk seed for native North American thistle species for wildlife habitat restoration, although availability tends to be low. Thistles are particularly valued by bumblebees for their high nectar production. Cirsium vulgare ranked in the top ten for nectar production in a UK plants survey conducted by the AgriLand project, which is supported by the UK Insect Pollinators Initiative. Bull thistle was also a top producer of nectar sugar in another British study, ranking third with a production per floral unit of 2,323 ± 418 μg.
Uses
Pliny and medieval writers thought the thistle could return hair to bald heads, and in the early modern period it was believed to be a remedy for headaches, plague, canker sores, vertigo, and jaundice.
Cuisine
In the Beira region of Portugal, thistle flowers are used as rennet in cheese making. Serra da Estrela is not only the name of a mountain range there; it is also the name of one of the most appreciated Portuguese cheeses, made from sheep's milk.
Economic significance
Thistles, even if one restricts the term to members of the Asteraceae, are too varied a group for generalisation. Many are troublesome weeds, including some invasive species of Cirsium, Carduus, Silybum and Onopordum. Typical adverse effects are competition with crops and interference with grazing in pastures, where dense growths of spiny vegetation suppress forage plants and repel grazing animals. Some species, although not intensely poisonous, affect the health of animals that ingest them.
The genus Cynara includes the commercially important species of artichoke. Some species regarded as major weeds are sources of vegetable rennet used in commercial cheese making. Similarly, some species of Silybum that occur as weeds are cultivated for seeds that yield vegetable oil and pharmaceutical compounds such as Silibinin.
Other thistles that nominally are weeds are important honey plants, both as bee fodder in general, and as sources of luxury monofloral honey products.
Medicine
Milk thistle extract, known as silymarin, has been used to treat liver and gallbladder problems. While its efficacy has not been confirmed by the U.S. Department of Health and Human Services, milk thistle has shown beneficial results in earlier studies of people with hepatitis C virus (HCV) infection. It may also lower blood sugar levels in type 2 diabetes. As a dietary supplement, milk thistle is recommended for hepatitis, cirrhosis, jaundice, diabetes, and indigestion.
Culture
Symbolism
Scottish thistle
The thistle has been the national emblem of Scotland since the reign of King Alexander III (1249–1286).
According to legend, an invading Norse army was attempting to sneak up at night upon a Scottish army's encampment. One barefoot Norseman stepped on a thistle and cried out in pain, thus alerting Scots to the presence of the invaders. Possibly, this happened in 1263 during the Battle of Largs, which marked the beginning of the departure of King Haakon IV (Haakon the Elder) of Norway who, having control of the Northern Isles and Hebrides, had harried the coast of the Kingdom of Scotland for some years.
The thistle was used on silver coins first issued by King James III in 1474 as a Scottish symbol and national emblem. In 1536, the bawbee, a sixpence in the pound Scots, was issued for the first time under King James V; it showed a crowned thistle. Thistles continued to appear regularly on Scottish and later British coinage until 2008, when a 5p coin design showing "The Badge of Scotland, a thistle royally crowned" ceased to be minted, though it remains in circulation. The Most Ancient and Most Noble Order of the Thistle, the highest and oldest chivalric order of Scotland, has thistles on its insignia and a chapel in St Giles's Kirk, Edinburgh, dubbed the Thistle Chapel. The thistle is the main charge of the regimental badge of the Scots Guards, the oldest regiment in the British Army.
Both the Order of the Thistle and the Scots Guards use the motto Nemo me impune lacessit, the motto of the House of Stuart, referring to the thistle's prickly nature. Pound coins with this motto and a thistle were minted in 1984, 1989, and 2014. The combination of thistle and motto first appeared on the bawbee issued by King Charles II. In 1826, the grant of arms to the new National Bank of Scotland stipulated that the shield be surrounded by thistles, and "thistle" is used as the name of several Scottish football clubs. Since 1960, a stylised thistle, also representing the Scottish Saltire, has been the logo of the Scottish National Party. The thistle is also seen as the logo for Scottish Rugby. Many businesses in Scotland choose this symbol to represent their organization.
Since 2013, a different stylised thistle, crowned with the Scottish crown, has been the emblem of Police Scotland, and had long featured in the arms of seven of the eight pre-2013 Scottish police services and constabularies, the sole exception being the Northern Constabulary. As part of the arms of the University of Edinburgh, the thistle appears together with a saltire on one of the escutcheons of the Mercat Cross in Edinburgh. The coat of arms and crest of Nova Scotia ("New Scotland"), briefly Scotland's colony, have since the 17th century featured thistles.
Following his ascent to the English throne, King James VI of Scotland & I of England used a badge consisting of a Tudor rose "dimidiated" with a Scottish thistle and surmounted by a royal crown.
As the floral emblem of Scotland it appears in the Royal Arms of the United Kingdom thereafter, and was included in the heraldry of various British institutions, such as the Badge of the Supreme Court of the United Kingdom alongside the Tudor rose, Northern Irish flax, and Welsh leek. This floral combination appears on the present issues of the one pound coin. Beside the Tudor rose and Irish shamrock the thistle appears on the badge of the Yeomen of the Guard and the arms of the Canada Company. Issues of the historical florin showed the same flora, later including a leek.
The thistle is also used to symbolise connection with Scotland overseas. For example, in Canada, it is one of the four floral emblems on the flag of Montreal; in the US, Carnegie Mellon University features the thistle in its crest in honour of the Scottish heritage of its founder, Andrew Carnegie, and Annapolis, Maryland features the thistle in its flag and seal. The thistle is also the emblem of the Encyclopædia Britannica (which originated in Edinburgh, Scotland) and Jardine Matheson Holdings Limited (as the company was founded by two Scots).
Which species of thistle is referred to in the original legend is disputed. Popular modern usage favours cotton thistle (Onopordum acanthium), perhaps because of its more imposing appearance, though it is not native and unlikely to have occurred in Scotland in mediaeval times. The spear thistle (Cirsium vulgare), an abundant native species in Scotland, is a more likely candidate. Other species, including dwarf thistle (Cirsium acaule), musk thistle (Carduus nutans), and melancholy thistle (Cirsium heterophyllum) have also been suggested.
Thistle of Lorraine
The thistle, and more precisely Onopordum acanthium, is one of the symbols of Lorraine, together with its coat of arms which displays three avalerions, and the Cross of Lorraine.
Lorraine is a region located in northeastern France, along the border with Luxembourg and Germany. Before the French Revolution, a large part of the region formed the Duchy of Lorraine. In the Middle Ages, the thistle was an emblem of the Virgin Mary because its white sap brought to mind milk falling from the breast of the Mother of God. It was later adopted as a personal symbol by René of Anjou, together with the Cross of Lorraine, then known as the Cross of Anjou. His book Livre du cuer d'amours espris suggests that the Duke chose the thistle as his emblem not only because it was a Christian symbol, but also because he associated it with physical love.
The thistle and the cross were used again by his grandson, René II, Duke of Lorraine, who introduced them in the region. The two symbols became hugely popular among the local people during the Battle of Nancy in 1477, in which the army of Lorraine defeated Burgundy. The Duke's motto was "Qui s'y frotte s'y pique", meaning "who touches it, pricks oneself", similar in spirit to the Scottish motto "Nemo me impune lacessit". Nowadays the thistle is still the official symbol of the city of Nancy, as well as the emblem of the AS Nancy football team and the Lorraine Regional Natural Park.
Place names
Carduus is the Latin term for a thistle (hence cardoon, chardon in French), and Cardonnacum is a Late Latin word for a place with thistles. This is believed to be the origin of name of the Burgundy village of Chardonnay, Saône-et-Loire, which in turn is thought to be the home of the famous Chardonnay grape variety.
References
External links
Asteraceae
National symbols of Scotland
Plant common names
Heraldic charges | Thistle | [
"Biology"
] | 2,884 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
2,193,975 | https://en.wikipedia.org/wiki/EtherChannel | EtherChannel is a port link aggregation technology or port-channel architecture used primarily on Cisco switches. It allows grouping of several physical Ethernet links to create one logical Ethernet link for the purpose of providing fault-tolerance and high-speed links between switches, routers and servers. An EtherChannel can be created from between two and eight active Fast, Gigabit or 10-Gigabit Ethernet ports, with an additional one to eight inactive (failover) ports which become active as the other active ports fail. EtherChannel is primarily used in the backbone network, but can also be used to connect end user machines.
EtherChannel technology was invented by Kalpana in the early 1990s. Kalpana was acquired by Cisco Systems in 1994. In 2000, the IEEE passed 802.3ad, which is an open standard version of EtherChannel.
Benefits
Using an EtherChannel has numerous advantages, and probably the most desirable aspect is the bandwidth. Using the maximum of 8 active ports, a total bandwidth of 800 Mbit/s, 8 Gbit/s, or 80 Gbit/s is possible, depending on port speed. These aggregate speeds assume a mixture of traffic flows, since they do not apply to any single flow. EtherChannel can be used with Ethernet running on twisted-pair wiring, single-mode, and multimode fiber.
Because EtherChannel takes advantage of existing wiring it makes it very scalable. It can be used at all levels of the network to create higher bandwidth links as the traffic needs of the network increase. All Cisco switches have the ability to support EtherChannel.
When an EtherChannel is configured all adapters that are part of the channel share the same Layer 2 (MAC) address. This makes the EtherChannel transparent to network applications and users because they only see the one logical connection; they have no knowledge of the individual links.
EtherChannel aggregates the traffic across all the available active ports in the channel. The port for a given frame is selected using a Cisco-proprietary hash algorithm, based on source or destination MAC addresses, IP addresses, or TCP and UDP port numbers. The hash function yields a number between 0 and 7, and these eight hash buckets are distributed among the 2 to 8 physical ports as follows (buckets per port):

8 ports: 1-1-1-1-1-1-1-1
7 ports: 2-1-1-1-1-1-1
6 ports: 2-2-1-1-1-1
5 ports: 2-2-2-1-1
4 ports: 2-2-2-2
3 ports: 3-3-2
2 ports: 4-4

Assuming a truly random hash, configurations of 2, 4, or 8 ports lead to fair load-balancing, whereas the other configurations lead to unfair load-balancing.
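A minimal sketch of the idea follows (the real hash is Cisco-proprietary; this stand-in merely illustrates the bucket-to-port mapping):

```python
def flow_bucket(src_mac: str, dst_mac: str) -> int:
    """Hash a flow's addresses into one of 8 buckets (3 bits).
    Illustrative stand-in: XOR of the two addresses' low bits."""
    s = int(src_mac.replace(":", ""), 16)
    d = int(dst_mac.replace(":", ""), 16)
    return (s ^ d) & 0b111          # 0..7

def select_port(bucket: int, num_ports: int) -> int:
    """Map the 8 buckets onto num_ports physical links (2..8)."""
    return bucket % num_ports

# With 3 links, buckets 0..7 land on ports 0,1,2,0,1,2,0,1:
# ports 0 and 1 each carry 3 buckets while port 2 carries only 2,
# which is why bundles that don't divide 8 balance unevenly.
for b in range(8):
    print(b, "->", select_port(b, 3))
```

Because the chosen port is a pure function of the flow's addresses, all frames of one conversation stay on one link and arrive in order; the trade-off is that a single flow can never exceed the speed of one physical link.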
Fault-tolerance is another key aspect of EtherChannel. Should a link fail, the EtherChannel technology will automatically redistribute traffic across the remaining links. This automatic recovery takes less than one second and is transparent to network applications and the end user. This makes it very resilient and desirable for mission-critical applications.
Spanning tree protocol (STP) can be used with an EtherChannel. STP treats all the links as a single one and BPDUs are only sent down one of the links. Without the use of an EtherChannel, STP would effectively shutdown any redundant links between switches until one connection goes down. This is where an EtherChannel is most desirable, it allows use of all available links between two devices.
EtherChannels can be also configured as VLAN trunks. If any single link of an EtherChannel is configured as a VLAN trunk, the entire EtherChannel will act as a VLAN trunk. Cisco ISL, VTP and IEEE 802.1Q are compatible with EtherChannel.
Limitations
A limitation of EtherChannel is that all the physical ports in the aggregation group must reside on the same switch, except in the case of a switch stack, where they can reside on different switches in the stack. Avaya's SMLT protocol removes this limitation by allowing the physical ports to be split between two switches in a triangle configuration, or between four or more switches in a mesh configuration. Cisco's Virtual Switching System (VSS) allows the creation of a Multichassis EtherChannel (MEC), similar to SMLT, with ports aggregated towards the different physical chassis that form a single virtual switch entity. Extreme Networks offers similar functionality via M-LAG (Multi-chassis Link Aggregation). Cisco Nexus series switches allow the creation of a virtual PortChannel (vPC) between a remote device and two individual Nexus switches. The two Cisco Nexus switches involved in a vPC differ from stacking or VSS technology in that stacking and VSS create a single data and control plane across the multiple switches, whereas vPC creates a single data plane across the two Nexus switches while keeping the two control planes separate.
Components
EtherChannel is made up of the following key elements:
Ethernet links — EtherChannel works over links defined by the IEEE 802.3 standard, including all sub-standards. All links in a single EtherChannel must be the same speed.
Compatible hardware — the entire line of Cisco Catalyst switches as well as Cisco IOS software-based routers support EtherChannel. Configuring an EtherChannel between a switch and a computer requires support built into the operating system; FreeBSD, for example, supports EtherChannel via LACP. Multiple EtherChannels per device are supported; the number depends on the type of equipment. Catalyst 6500 and 6000 switches support a maximum of 64 EtherChannels.
Configuration — an EtherChannel must be configured using the Cisco IOS on switches and router, and using specific drivers when connecting a server. There are two main ways an EtherChannel can be set up. The first is by manually issuing a command on each port of the device that is part of the EtherChannel. This must be done for the corresponding ports on both sides of the EtherChannel. The second way is by using Cisco's Port Aggregation Protocol (PAgP) or the IETF's LACP for the automated aggregation of Ethernet ports.
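As an illustration of these two approaches, a minimal Cisco IOS sketch might look like the following (the interface names and channel-group number are arbitrary examples; the mode keyword chooses between a static bundle, PAgP, and LACP):

```
Switch(config)# interface range GigabitEthernet0/1 - 2
Switch(config-if-range)# channel-group 1 mode active
! "mode active" negotiates via LACP; "mode desirable" uses PAgP,
! and "mode on" forces a static bundle with no negotiation protocol.
Switch(config-if-range)# exit
Switch(config)# interface Port-channel 1
Switch(config-if)# switchport mode trunk
```

The same channel-group number must be applied to all member ports on a switch, and, as noted above, the corresponding ports on the device at the other end of the channel must be configured to match.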
EtherChannel vs. 802.3ad
EtherChannel and IEEE 802.3ad standards are very similar and accomplish the same goal. There are a few differences between the two, other than the fact that EtherChannel is Cisco proprietary and 802.3ad is an open standard, listed below:
Both technologies are capable of automatically configuring the logical link. EtherChannel supports both LACP and Cisco's PAgP, whereas 802.3ad uses LACP.
LACP allows for up to 8 active and 8 standby links, whereas PAgP only allows for 8 active links.
See also
Distributed Split Multi-Link Trunking
Link aggregation
Link Aggregation Control Protocol
Multi-link trunking
R-SMLT
References
Network architecture
Cisco Systems
Ethernet | EtherChannel | [
"Engineering"
] | 1,314 | [
"Network architecture",
"Computer networks engineering"
] |
2,193,987 | https://en.wikipedia.org/wiki/Decomposition%20%28computer%20science%29 | Decomposition in computer science, also known as factoring, is breaking a complex problem or system into parts that are easier to conceive, understand, program, and maintain.
Overview
Different types of decomposition are defined in computer sciences:
In structured programming, algorithmic decomposition breaks a process down into well-defined steps.
Structured analysis breaks down a software system from the system context level to system functions and data entities, as described by Tom DeMarco in Structured Analysis and System Specification (New York: Yourdon, 1978).
Object-oriented decomposition breaks a large system down into progressively smaller classes or objects that are responsible for part of the problem domain.
According to Booch, algorithmic decomposition is a necessary part of object-oriented analysis and design, but object-oriented systems start with and emphasize decomposition into objects.
More generally, functional decomposition in computer science is a technique for mastering the complexity of the function of a model. A functional model of a system is thereby replaced by a series of functional models of subsystems.
Decomposition topics
Decomposition paradigm
A decomposition paradigm in computer programming is a strategy for organizing a program as a number of parts, and usually implies a specific way to organize a program text. Typically the aim of using a decomposition paradigm is to optimize some metric related to program complexity, for example a program's modularity or its maintainability.
Most decomposition paradigms suggest breaking down a program into parts to minimize the static dependencies between those parts, and to maximize each part's cohesiveness. Popular decomposition paradigms include the procedural, modules, abstract data type, and object oriented paradigms.
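As a small illustration of these ideas (the function names and task are invented for the example), here is a procedural decomposition in which each part is cohesive and the static dependencies between parts are limited to explicit inputs and outputs:

```python
def parse_record(line: str) -> dict:
    """Cohesive: the input format is known only here."""
    name, amount = line.split(",")
    return {"name": name.strip(), "amount": float(amount)}

def apply_discount(record: dict, rate: float) -> dict:
    """Cohesive: a pure business rule, testable in isolation."""
    return {**record, "amount": record["amount"] * (1 - rate)}

def format_report(records: list) -> str:
    """Cohesive: the output format is known only here."""
    return "\n".join(f"{r['name']}: {r['amount']:.2f}" for r in records)

def run(lines: list) -> str:
    # Minimal static dependencies: the top level merely composes the parts.
    return format_report([apply_discount(parse_record(l), 0.10) for l in lines])

print(run(["alice, 100", "bob, 80"]))
```

Each part can be understood, tested, and maintained on its own, which is the kind of metric most decomposition paradigms aim to optimize.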
Though the concept of decomposition paradigm is entirely distinct from that of model of computation, they are often confused. For example, the functional model of computation is often confused with procedural decomposition, and the actor model of computation is often confused with object oriented decomposition.
Decomposition diagram
A decomposition diagram shows a complex process, organization, data subject area, or other type of object broken down into lower-level, more detailed components. For example, decomposition diagrams may represent organizational structure or functional decomposition into processes. Decomposition diagrams provide a logical hierarchical decomposition of a system.
See also
Code refactoring
Component-based software engineering
Dynamization
Duplicate code
Event partitioning
How to Solve It
Integrated Enterprise Modeling
Personal information management
Readability
Subroutine
References
External links
Object Oriented Analysis and Design
On the Criteria To Be Used in Decomposing Systems into Modules
Software design
Decomposition methods | Decomposition (computer science) | [
"Engineering"
] | 514 | [
"Decomposition methods",
"Design",
"Software design",
"Industrial engineering"
] |
2,194,138 | https://en.wikipedia.org/wiki/The%20Elliott%20Wave%20Theorist | The Elliott Wave Theorist is a monthly newsletter published by Elliott Wave International. The first issue of the Theorist was published in April 1976 and has been continuously in print on a subscription basis since May 1979. The publication includes Elliott wave analysis of the financial markets and cultural trends, plus commentary on topics that include technical analysis, behavioral finance, physics, pattern recognition, and socionomics. Robert Prechter is the publication's editor and main contributor.
History
The Theorist began as Robert Prechter's vehicle for Elliott wave market opinions when he worked as a technical analyst at Merrill Lynch. The publication gathered a following, and Prechter continued to offer it via subscriptions after he left Merrill in 1979.
In the early 1980s, the Theorist issued an aggressively bullish stock market forecast; its prominence grew, and the number of subscribers eventually reached some 20,000. That number declined in the 1990s (as did subscription levels among financial publishers generally), though the Theorist remains frequently cited on financial websites, in blogs, newsgroups, books, scholarly papers, and by major media.
The newsletter has earned several awards, including Hard Money Digest's "Newsletter Award of Excellence," and Timer Digest's "Timer of the Year." The Theorist has also included commentary for which contributors were criticized, including the forecast of a long-term bear market in the U.S. stock market.
Distinction and controversy
The Theorist has featured several topics of distinction and controversy. Prechter's August 1985 Theorist essay "Pop Culture and the Stock Market" preceded a shorter version of the September 1985 cover story essay in Barron's, "Elvis, Frankenstein and Andy Warhol." Following Benoit Mandelbrot's 1999 Scientific American article "A Fractal Walk Down Wall Street," the Theorist ran detailed criticism of that article, saying that Mandelbrot took credit for ideas that "originated with Ralph Nelson Elliott, who put them forth more comprehensively and more accurately with respect to real-world markets in his 1938 book The Wave Principle." In recent years the Theorist has been credited with popularizing market indicators such as the “skyscraper indicator,” and been a forum for ideas and research regarding socionomics from Prechter and others, such as the 2006 essay, “Social Mood and Automobile Styling,” which received wide media coverage.
Notes
Newsletters
Technical analysis
Behavioral finance
Market trends
Publications established in 1976 | The Elliott Wave Theorist | [
"Biology"
] | 488 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
2,194,260 | https://en.wikipedia.org/wiki/Parafoil | A parafoil is a nonrigid (textile) airfoil with an aerodynamic cell structure which is inflated by the wind. Ram-air inflation forces the parafoil into a classic wing cross-section. Parafoils are most commonly constructed out of ripstop nylon.
The device was developed in 1964 by Domina Jalbert (1904–1991). Jalbert had a history of designing kites and was involved in the development of hybrid balloon-kite aerial platforms for carrying scientific instruments. He envisaged the parafoil would be used to suspend an aerial platform or for the recovery of space equipment. A patent was granted in 1966.
Deployment shock prevented the parafoil's immediate acceptance as a parachute. It was not until the addition of a drag canopy on the riser lines (known as a "slider") which slowed their spread that the parafoil became a suitable parachute. Compared to a simple round canopy, a parafoil parachute has greater steerability, will glide further and allows greater control of the rate of descent; the parachute format is mechanically a glider of the free-flight kite type and such aspects spawned paraglider use.
The air flow into the parafoil is coming more from below than the flight path might suggest, so the frontmost ropes tow against the airflow. When gliding, the angle of attack is lowered and the airflow meets the parafoil head on. This makes it difficult to achieve an optimum gliding angle without the parafoil deflating.
In 2019 Jalbert was awarded posthumously the Fédération Aéronautique Internationale (FAI) Gold Parachuting Medal for inventing the parafoil.
Parafoils see wide use in a variety of windsports such as kite flying, powered parachutes, paragliding, kitesurfing, speed flying, wingsuit flying and skydiving. The world's largest kite is a parafoil variant.
Today, SpaceX uses steerable parafoils to recover the fairings of its Falcon 9 rocket.
Patents
Multi-cell wing type aerial device, filed October 1964, issued November 1966
See also
Foil kite
References
Aerospace engineering
Parachuting
Aircraft wing design
Parafoils
Kites | Parafoil | [
"Engineering"
] | 440 | [
"Aerospace engineering"
] |
2,194,517 | https://en.wikipedia.org/wiki/Hyoscine%20butylbromide | Hyoscine butylbromide, also known as scopolamine butylbromide and sold under the brand name Buscopan among others, is an anticholinergic medication used to treat abdominal pain, esophageal spasms, bladder spasms, biliary colic, and renal colic. It is also used to improve excessive respiratory secretions at the end of life. Hyoscine butylbromide can be taken by mouth, by injection into a muscle, or into a vein.
Side effects may include sleepiness, vision changes, dry mouth, rapid heart rate, triggering of glaucoma, and severe allergies. Sleepiness is uncommon. It is unclear if it is safe in pregnancy. It appears safe in breastfeeding. Greater care is recommended in those with heart problems. It is an anticholinergic agent, which does not have much effect on the brain.
Hyoscine butylbromide was patented in 1950, and approved for medical use in 1951. It is on the World Health Organization's List of Essential Medicines. It is not available for human use in the United States, and a similar compound, methscopolamine, may be used instead. It is manufactured from hyoscine (also known as scopolamine), which occurs naturally in a variety of plants in the nightshade family, Solanaceae, including deadly nightshade (Atropa belladonna).
It is available in the United States only for the medical treatment of horses.
Medical uses
Hyoscine butylbromide is effective in treating crampy abdominal pain.
Hyoscine butylbromide is effective in reducing the duration of the first stage of labour, and it is not associated with any obvious adverse outcomes in mother or neonate.
It is also used during abdominal, pelvic MRI, virtual colonoscopy, and double barium contrasted studies to improve the quality of pictures. Hyoscine butylbromide can reduce the peristaltic movement of the intestines and mucosal foldings, thus reducing the movement artifact of the images.
Side effects
Since little of the medication crosses the blood-brain barrier, this drug has less effect on the brain and therefore causes a reduced occurrence of the centrally-mediated effects (such as delusions, somnolence and inhibition of motor functions) which reduce the usefulness of some other anticholinergic drugs.
Hyoscine butylbromide is still capable of affecting the chemoreceptor trigger zone, due to the lack of a well-developed blood-brain barrier in the medulla oblongata, which increases the antiemetic effect it produces via local action on the smooth muscle of the gastrointestinal tract.
Other side effects include accommodation reflex disturbances, tachycardia, dry mouth, nausea, urinary retention, reduced blood pressure and dyshidrosis. Other reported reactions are dizziness, flushing, dyspnoea, skin reactions, and immune system disorders, including hypersensitivity reactions and potentially fatal anaphylactic shock. Caution should be taken for those with untreated glaucoma, heart failure, or benign prostatic hyperplasia with urinary retention, as hyoscine may exacerbate these conditions.
Pharmacology
Hyoscine butylbromide reduces smooth muscle contraction and the production of respiratory secretions. These are normally stimulated by the parasympathetic nervous system, via the neurotransmitter acetylcholine. As an antimuscarinic, hyoscine butylbromide binds to muscarinic acetylcholine receptors, blocking their effect.
It is a quaternary ammonium compound and a semisynthetic derivative of hyoscine hydrobromide (scopolamine). The attachment of the butylbromide moiety effectively prevents the movement of this drug across the blood–brain barrier, minimising undesirable central nervous system side effects associated with scopolamine/hyoscine.
Abuse
Hyoscine butylbromide is not centrally active and has a low incidence of abuse. In 2015, it was reported that prisoners at Wandsworth Prison and other UK prisons were smoking prescribed hyoscine butylbromide, releasing the potent hallucinogen scopolamine. There have also been reports of abuse in Mashhad Central Prison in Iran.
References
Bromides
Carboxylate esters
Chemical substances for emergency medicine
Epoxides
Heterocyclic compounds with 3 rings
M2 receptor antagonists
M3 receptor antagonists
Peripherally selective drugs
Quaternary ammonium compounds
World Health Organization essential medicines | Hyoscine butylbromide | [
"Chemistry"
] | 992 | [
"Bromides",
"Salts",
"Chemicals in medicine",
"Chemical substances for emergency medicine"
] |
2,194,578 | https://en.wikipedia.org/wiki/Bence%20Jones%20protein | Bence Jones protein is a monoclonal globulin protein or immunoglobulin light chain found in the urine, with a molecular weight of 22–24 kDa. Detection of Bence Jones protein may be suggestive of multiple myeloma, or Waldenström's macroglobulinemia.
Bence Jones proteins are particularly diagnostic of multiple myeloma in the context of target organ manifestations such as kidney failure, lytic (or "punched out") bone lesions, anemia, or large numbers of plasma cells in the bone marrow. Bence Jones proteins are present in 2/3 of multiple myeloma cases.
The proteins are immunoglobulin light chains (paraproteins) and are produced by neoplastic plasma cells. They can be kappa (most of the time) or lambda. The light chains can be immunoglobulin fragments or single homogeneous immunoglobulins. They are found in urine as a result of decreased kidney filtration capabilities due to kidney failure, sometimes induced by hypercalcemia from the calcium released as the bones are destroyed, dehydration due to polyuria, amyloidosis or from the light chains themselves. The light chains were historically detected by heating a urine specimen (which causes the protein to precipitate) and nowadays by electrophoresis of concentrated urine. More recently, serum free light chain assays have been utilised in a number of published studies which have indicated superiority over the urine tests, particularly for patients producing low levels of monoclonal free light chains, as seen in nonsecretory multiple myeloma and amyloid light chain amyloidosis (AL amyloidosis).
History
The Bence Jones protein was described by the English physician Henry Bence Jones in 1847 and published in 1848.
References
Blood tests
Urine tests
Hematology | Bence Jones protein | [
"Chemistry"
] | 386 | [
"Blood tests",
"Chemical pathology"
] |
2,194,640 | https://en.wikipedia.org/wiki/Falling%20on%20a%20grenade | Falling on a grenade is the deliberate act of using one's body to cover a live time-fused hand grenade, absorbing the explosion and fragmentation in an effort to save the lives of others nearby. Since this is almost universally fatal, it is considered an especially conspicuous and selfless act of individual sacrifice in wartime. In United States military history, more citations for the Medal of Honor, the country's highest military decoration, have been awarded for falling on grenades to save comrades than any other single act.
Such an act can be survivable: In World War I British soldier John Carmichael was awarded the Victoria Cross for saving his men by putting his steel helmet over a grenade and then standing on the helmet to reduce the blast damage. Carmichael survived although it was several years before he recovered sufficiently to be discharged from the hospital.
In World War II, U.S. Marine Jack Lucas, in the Battle of Iwo Jima, leapt onto an enemy grenade, jamming it into the volcanic ash and soft sand with his rifle and covering it with his body, while reaching out and pulling a second grenade beneath him. One grenade exploded, severely injuring him, and the other failed to detonate. Lucas lived, but spent the rest of his life with over 200 pieces of shrapnel in his body. In 2008 near Sangin in Afghanistan, British Royal Marine Matthew Croucher used his rucksack to pin a tripwire grenade to the floor. His body armor absorbed the majority of the blast.
On November 21, 2010, in Marjah, Helmand Province, Afghanistan, in support of Operation Enduring Freedom, U.S. Marine Lance Corporal Kyle Carpenter threw himself upon a grenade to save a fellow Marine in his sandbagged position, sustaining injuries to his face and right arm and losing his right eye. He survived these wounds. As these rare instances suggest, the odds of survival are extremely slim, although with modern medicine they are greatly increased compared with falling on a grenade in the 20th century.
See also
Altruistic suicide
Max Cleland
Nathan Elbaz
Human shield
Roi Klein
William McFadzean
Michael A. Monsoor
John Robert Osborn
References
Grenades
Suicide by explosive material
Altruism | Falling on a grenade | [
"Biology"
] | 446 | [
"Behavior",
"Altruism"
] |
2,194,681 | https://en.wikipedia.org/wiki/Vernon%20Coleman | Vernon Edward Coleman (born 1946) is an English conspiracy theorist and writer, who writes on topics related to human health, politics and animal welfare. He was formerly a general practitioner (GP) and newspaper columnist. Coleman's medical claims have been widely discredited and described as pseudoscientific conspiracy theories.
Early life
Coleman was born in 1946, the only child of an electrical engineer. He was raised in Walsall, Staffordshire, in the West Midlands of England, where he attended Queen Mary's Grammar School and a medical school in Birmingham.
Career
Coleman qualified as a physician in 1970 and worked as a GP. In 1981, the Department of Health and Social Security (DHSS) fined him for refusing to write the diagnoses on sick notes, which he considered a breach of patient confidentiality.
After publishing his first book, The Medicine Men, in 1976, which accused the National Health Service of being controlled by pharmaceutical companies, Coleman left the NHS.
Coleman has since written under multiple pen names; in the late 1970s, he published three novels about life as a GP under the name Edward Vernon.
In 1987 Coleman appeared on the Central Weekend Programme as a sceptic against jogging for fitness.
An anti-vivisectionist, Coleman provided a supplementary memorandum for the House of Lords on the topic of vivisection in 1993.
In 1994 Coleman was ordered to pay damages for threatening the scientist Colin Blakemore, who had been targeted by anti-vivisection activists after an animal rights group calling itself 'The Justice Department' sent a letter bomb to Blakemore's home, with another exploding and injuring three people. Blakemore was later granted a temporary injunction by a High Court judge after Coleman said he would publish a pamphlet with Blakemore's home address and telephone number to encourage the public to 'get in touch with you to discuss your work'. Coleman was ordered not to publish anything that might jeopardize Colin Blakemore's safety and to give solicitors the names of anyone to whom he might already have given the information.
In 1995, Coleman published the book How to Stop Your Doctor Killing You, which the Advertising Standards Authority later subjected to an advertisement ban.
Coleman went on to work as a newspaper columnist for a number of publications, including The Sun and The Sunday People, where he was an agony uncle until he resigned in 2003.
He relinquished his medical licence in March 2016 and is no longer registered or licensed to practice as a GP.
Coleman was reported to have been made an honorary professor by the International Open University based in Sri Lanka.
Writing and media appearances
Coleman's self-published books and blog have been reported as a major source of misinformation regarding the COVID-19 pandemic, cancer, HIV/AIDS, vaccines and human health.
A 1989 editorial in the British Medical Journal criticised Coleman's comments made for The Sun as the 'Sun Doctor' on leprosy as a 'particularly distasteful piece of tabloid journalism...[containing] a catalogue of selected facts and misinterpretations' following the announcement that Diana, Princess of Wales, was to shake hands with a person with leprosy. The incident was later covered on Channel 4's Hard News, with Coleman declining to defend his statement without a fee covering travel costs.
Coleman's 1993 novel Mrs Caldicot's Cabbage War was turned into a film in 2002 with the same name.
Whilst working for The Sunday People, Coleman wrote that if children diagnosed with autism were "stuck up to their necks in a vat full of warm sewage for 10 hours they would soon learn some manners" and that diagnoses of hyperactivity and autism were "misused by middle-class, aspirational parents to excuse the behaviour of their obnoxious children" (Casebook column, Sunday People, June 25, 1995). Following the article, autism charities received phone calls from distressed parents. The Chairman of the East Anglian Autistic Support Trust, Owen Spencer-Thomas, whose elder son has severe autism, condemned Coleman's remarks as "irresponsible, medically unsound and deeply hurtful" to families that had a child with autism. Spencer-Thomas challenged Coleman to spend 24 hours caring for his son in the presence of fully trained carers who understood the effects of autism. Coleman declined and refused to withdraw his remarks, leading to an investigation by the Press Complaints Commission. During his time at the paper, Coleman was again censured by the Press Complaints Commission for making misleading medical claims.
Coleman became a self-published author in 2004 after Alice's Diary, a book about his cat, was turned down by traditional publishers.
AIDS denial
Writing for The Sun newspaper in 1989, Coleman denied that AIDS was a significant risk to the heterosexual community. He later claimed AIDS is a hoax, writing, "it is now my considered view that the disease we know as AIDS probably doesn't exist and has never existed". Such claims have been rejected by the medical community.
On 17 November 1989, The Sun published an article under the headline "Straight sex cannot give you AIDS—official", claiming "the killer disease AIDS can only be caught by homosexuals, bisexuals, junkies or anyone who has received a tainted blood transfusion". The following day, Coleman supported The Sun's claims with an article under the headline "AIDS—The hoax of the century", similarly claiming AIDS was not a significant risk to heterosexuals, that medical companies, doctors and condom manufacturers were conspiring to scare the public and had vested interests in profiteering from public service announcements, and that moral campaigners were attempting to frighten young people into celibacy to establish traditional family values. Coleman also claimed gay activists were "worried that once it was widely known that AIDS was not a major threat to heterosexuals, then funds for AIDS research would fall".
Journalist David Randall argued in The Universal Journalist that the story was one of the worst cases of journalistic malpractice in recent history.
Anti-vaccination and conspiracy theories
Coleman has claimed that COVID-19 is a hoax, that vaccines are dangerous, and that face masks cause cancer. All these claims have been debunked by more senior medical professionals. Coleman has also claimed the coronavirus pandemic has links to the Agenda 21 conspiracy theory and the Great Reset theory, which both suggest a cabal of elite figures is attempting to depopulate the global community. No evidence has been found to support these claims.
In 2019, Coleman wrote a book entitled Anyone Who Tells You Vaccines Are Safe And Effective Is Lying which booksellers were criticised for selling.
Coleman later claimed "no one can possibly know if the COVID-19 vaccine is safe and effective because the trial is still underway; thousands of people who had the vaccine have died or been seriously injured by it; legally, all those people giving vaccinations are war criminals". These claims were debunked by Health Feedback, a member of the World Health Organization-led project Vaccine Safety Net. Coleman later claimed "COVID-19 vaccines are dangerous" and that "bodies of vaccinated people are laboratories making lethal viruses". Both claims were similarly debunked as inaccurate, misleading and unsupported by the Poynter Institute due to a lack of evidence from the legitimate medical community. Coleman has also claimed in a viral video that "the jabbed will be lucky to last five years", which was again proven to be false due to a lack of evidence. In a similar widely circulated social media post, Coleman claimed "more children will be seriously injured or killed by the vaccination than the COVID-19 infection itself", which was again found to be false, as there is no evidence that children suffer more from COVID-19 vaccines than from COVID-19.
At an anti-lockdown protest in London on 24 July 2021, Coleman claimed that the wearing of face masks caused cancer, dementia, hypoxia, hypercapnia, and bacterial pneumonia due to oxygen deficiency. These claims were similarly debunked by the medical community due to a lack of peer-reviewed evidence. Coleman later claimed that the wearing of face masks caused mucormycosis, despite no link being found between mask wearing and mucormycosis. All evidence suggests that wearing masks is safe and an effective way of protecting individuals from COVID-19.
In November 2021, Coleman made the false claim that "this [vaccination] jab was an experiment certain to kill and injure" which was debunked due to its lack of evidence and a reliance upon a discredited research report authored by Steven Gundry.
Despite being debunked, Coleman's conspiracy theories have been used to push COVID-19 denial, pseudoscience and anti-mask propaganda. Police officers urged residents in Prestwich, Greater Manchester to dismiss anti-vaccination leaflets in May 2021 which had been distributed in the area and credited to Coleman. In a statement, the local authority "requested the public to dismiss the message being sent out and is encouraging all relevant age groups to take up the offer of a vaccine". The same leaflets were also distributed in Luton, Bedfordshire with Luton Council warning that the leaflets contained "dangerous misinformation". Similar leaflets have been distributed across Scotland and condemned by Shirley-Anne Somerville of the Scottish Parliament. The Catholic Church has also urged parishioners to "read the Vatican document on vaccination morality" after Coleman's anti-vaccination videos and quotations were circulated in 2021 by a Franciscan priest in Gosport, Hampshire. In an investigation, the Diocese of Portsmouth announced "The Catholic Diocese of Portsmouth is very disappointed that one of the Family of Mary Immaculate and St Francis in Gosport has publicly expressed a personal view about the Covid vaccination programme that is contrary to the official position of the Catholic Church and the Diocese. We would encourage all our parishioners to benefit from the protection afforded by the vaccine."
Coleman has also claimed the National Health Service "kills more people than it saves" referencing a flawed study by The BMJ to support this claim. He has also falsely claimed the NHS reduced "screening tests" to lower carbon emissions. Although there were a reduced number of cancer screenings due to a lack of resources during the COVID-19 pandemic, no evidence was found to support Coleman's claim that screenings were being limited in effort to combat global warming.
Coleman denies climate change and claims global warming is a “malicious, dangerous myth”.
Advertising Standards Authority rulings
In 2005, the UK's Advertising Standards Authority (ASA) banned an advertisement for a book published by Coleman entitled How to Stop Your Doctor Killing You, which claimed doctors were "the person most likely to kill you". The ASA upheld complaints that the advert was misleading, offensive and denigrated the medical profession. The ASA found Coleman's claims were lacking evidence, "irresponsible" and "likely to discourage vulnerable people from seeking essential medical treatment". In response to the ruling, Coleman called for the ASA to be banned and later made a complaint to the Office of Fair Trading, claiming "the ASA's action(s) are in breach of Article 10 of the Human Rights Act". The Office of Fair Trading did not pursue Coleman's complaint.
In 2007, the ASA again found Coleman had made misleading claims in an advertisement promoting a supposed link between eating meat and contracting cancer. Coleman failed to respond to the ASA's enquiries and was subsequently found to have again breached the organisation's code of conduct, with the ASA deeming Coleman's advert was again lacking evidence and likely to cause undue fear and distress. Coleman was instructed not to further run the advertisement and informed to respond to future ASA investigations.
Personal life
Coleman is married. He is a vegan and supports animal rights. He has stated that he cross-dresses, and has written several articles about men who cross-dress.
Notes
External links
1946 births
Living people
20th-century English medical doctors
20th-century English writers
British anti-vaccination activists
British conspiracy theorists
English anti-vivisectionists
COVID-19 conspiracy theorists
English conspiracy theorists
HIV/AIDS denialists
Health-related conspiracy theories
People educated at Queen Mary's Grammar School
People from Walsall | Vernon Coleman | [
"Technology"
] | 2,526 | [
"Health-related conspiracy theories",
"Science and technology-related conspiracy theories"
] |
2,194,893 | https://en.wikipedia.org/wiki/Uranium%E2%80%93uranium%20dating | Uranium–uranium dating is a radiometric dating technique which compares two isotopes of uranium (U) in a sample: uranium-234 (234U) and uranium-238 (238U). It is one of several radiometric dating techniques exploiting the uranium radioactive decay series, in which 238U undergoes 14 alpha and beta decay events on the way to the stable isotope 206Pb. Other dating techniques using this decay series include uranium–thorium dating and uranium–lead dating.
Uranium series
238U, with a half-life of about 4.5 billion years, decays through emission of an alpha particle to thorium-234 (234Th), which is comparatively unstable with a half-life of just 24 days. 234Th then decays through beta particle emission to protactinium-234 (234Pa). This decays with a half-life of 6.7 hours, again through emission of a beta particle, to 234U. This isotope has a half-life of about 245,000 years. The next decay product, thorium-230 (230Th), has a half-life of about 75,000 years and is used in the uranium–thorium technique. Although analytically simpler, in practice 234U/238U dating requires knowledge of the 234U/238U ratio at the time the material under study was formed, and is generally used only for samples older than the ca. 450,000-year upper limit of the 230Th/238U technique. For those materials (principally marine carbonates) for which these conditions apply, it remains a superior technique.
Unlike other radiometric dating techniques, those using the uranium decay series (except for those using the stable final isotopes 206Pb and 207Pb) compare the ratios of two radioactive unstable isotopes. This complicates calculations as both the parent and daughter isotopes decay over time into other isotopes.
In theory, the 234U/238U technique can be useful in dating samples between about 10,000 and 2 million years Before Present (BP), or up to about eight times the half-life of 234U. As such, it provides a useful bridge in radiometric dating techniques between the ranges of 230Th/238U (accurate up to ca. 450,000 years) and U–Pb dating (accurate up to the age of the solar system, but problematic on samples younger than about 2 million years).
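Where the initial ratio is known, the age follows from simple exponential decay of the excess 234U activity over secular equilibrium. The sketch below (plain Python; the half-life is the rounded value quoted above, and the initial and measured activity ratios are hypothetical example values) illustrates the calculation:

```python
import math

HALF_LIFE_U234_YR = 245_000          # approximate half-life of 234U in years
LAMBDA_U234 = math.log(2) / HALF_LIFE_U234_YR

def uu_age(measured_ratio: float, initial_ratio: float) -> float:
    """Estimate an age in years from 234U/238U activity ratios.

    The excess of the activity ratio over secular equilibrium (ratio = 1)
    decays exponentially: r(t) - 1 = (r0 - 1) * exp(-lambda * t).
    """
    excess_now = measured_ratio - 1.0
    excess_init = initial_ratio - 1.0
    if excess_now <= 0 or excess_init <= 0 or excess_now >= excess_init:
        raise ValueError("ratios incompatible with simple closed-system decay")
    return -math.log(excess_now / excess_init) / LAMBDA_U234

# Hypothetical example: an assumed initial ratio of 1.15, measured 1.05,
# gives an age of roughly 390,000 years, inside the technique's useful range.
print(f"{uu_age(1.05, 1.15):,.0f} years")
```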
See also
Carbon dating
Chronological dating
References
Radiometric dating
Uranium | Uranium–uranium dating | [
"Chemistry"
] | 505 | [
"Radiometric dating",
"Radioactivity"
] |
2,194,897 | https://en.wikipedia.org/wiki/Indicia%20%28philately%29 | In philately, indicia are markings on a mail piece (as opposed to an adhesive stamp) showing that postage has been prepaid by the sender. Indicia is the plural of the Latin word indicium, meaning a distinguishing or identifying mark or sign. The term imprinted stamp is used more or less interchangeably, but some indicia are not imprinted stamps. One example is the handstamp.
Forms of indicia
Indicia can take a number of forms, including printed designs or handstamps, placed where a stamp would normally be, that indicate the pre-payment of postage. Imprinted stamps on postal stationery are indicia.
The term also refers to a meter stamp impression or the part thereof that indicates the value or postal rate.
See also
Indicia (publishing)
Information Based Indicia
Postmark
References
External links
Carnegie Mellon University Computer Science technical reports CMU-CS-96-113: Cryptographic Postage Indicia
Postal systems
Philatelic terminology
"Technology"
] | 211 | [
"Transport systems",
"Postal systems"
] |
2,194,910 | https://en.wikipedia.org/wiki/4-Aminopyridine | 4-Aminopyridine (4-AP) is an organic compound with the chemical formula C5H6N2. It is one of the three isomeric aminopyridines. It is used as a research tool in characterizing subtypes of the potassium channel. It has also been used as a drug, to manage some of the symptoms of multiple sclerosis, and is indicated for symptomatic improvement of walking in adults with several variations of the disease. After Phase III clinical trials, the U.S. Food and Drug Administration (FDA) approved the compound on January 22, 2010. Fampridine is also marketed as Ampyra (pronounced "am-PEER-ah," according to the maker's website) in the United States by Acorda Therapeutics and as Fampyra in the European Union, Canada, and Australia. In Canada, the medication has been approved for use by Health Canada since February 10, 2012.
Applications
In the laboratory, 4-AP is a useful pharmacological tool in studying various potassium conductances in physiology and biophysics. It is a relatively selective blocker of members of Kv1 (Shaker, KCNA) family of voltage-activated K+ channels. However, 4-AP has been shown to potentiate voltage-gated Ca2+ channel currents independent of effects on voltage-activated K+ channels.
Convulsant activity
4-Aminopyridine is a potent convulsant and is used to generate seizures in animal models for the evaluation of antiseizure agents.
Vertebrate pesticide
4-Aminopyridine is also used under the trade name Avitrol as 0.5% or 1% in bird control bait. It causes convulsions and, infrequently, death, depending on dosage. The manufacturer says the proper dose should cause epileptic-like convulsions which cause the poisoned birds to emit distress calls resulting in the flock leaving the site; if the dose was sub-lethal, the birds will recover after 4 or more hours without long-term ill effect. The amount of bait should be limited so that relatively few birds are poisoned, causing the remainder of the flock to be frightened away with a minimum of mortality. A lethal dose will usually cause death within an hour. The use of 4-aminopyridine in bird control has been criticized by the Humane Society of the United States.
Medical use
Fampridine has been used clinically in Lambert–Eaton myasthenic syndrome and multiple sclerosis. It acts by blocking voltage-gated potassium channels, prolonging action potentials and thereby increasing neurotransmitter release at the neuromuscular junction.
The drug has been shown to reverse saxitoxin and tetrodotoxin toxicity in tissue and animal experiments.
In calcium entry blocker overdose in humans, 4-aminopyridine can increase the cytosolic Ca2+ concentration very efficiently, independent of the calcium channels.
Multiple sclerosis
Fampridine has been shown to improve visual function and motor skills and relieve fatigue in patients with multiple sclerosis (MS). However, the effect of the drug is strongly established for walking capacity only. Common side effects include dizziness, nervousness and nausea, and the incidence of adverse effects was shown to be less than 5% in all studies.
4-AP works as a potassium channel blocker. Strong potassium currents decrease action potential duration and amplitude, which increases the probability of conduction failure − a well documented characteristic of demyelinated axons. Potassium channel blockade has the effect of increasing axonal action potential propagation and improving the probability of synaptic vesicle release. A study has shown that 4-AP is a potent calcium channel activator and can improve synaptic and neuromuscular function by directly acting on the calcium channel beta subunit.
MS patients treated with 4-AP exhibited a response rate of 29.5% to 80%. A long-term study (32 months) indicated that 80-90% of patients who initially responded to 4-AP exhibited long-term benefits. Although improving symptoms, 4-AP does not inhibit progression of MS. Another study, conducted in Brazil, showed that treatment based on fampridine was considered efficient in 70% of the patients.
Spinal cord injury
Spinal cord injury patients have also seen improvement with 4-AP therapy. These improvements include sensory, motor and pulmonary function, with a decrease in spasticity and pain.
Tetrodotoxin poisoning
Clinical studies have shown that 4-AP is capable of reversing the effects of tetrodotoxin poisoning in animals; however, its effectiveness as an antidote in humans has not yet been determined.
Overdose
Case reports have shown that overdoses with 4-AP can lead to paresthesias, seizures, and atrial fibrillation.
Contraindications
4-aminopyridine is excreted by the kidneys. 4-AP should not be given to people with significant kidney disease (e.g., acute kidney injury or advanced chronic kidney disease) due to the higher risk of seizures with increased circulating levels of 4-AP.
Branding
The drug was originally intended, by Acorda Therapeutics, to have the brand name Amaya, however the name was changed to Ampyra to avoid potential confusion with other marketed pharmaceuticals.
Four of Acorda's patents pertaining to Ampyra were invalidated in 2017 by the United States District Court for the District of Delaware and a fifth patent expired in 2018. Since then, generic alternatives have been developed for the U.S. market.
The drug is marketed by Biogen Idec in Canada as Fampyra and as Dalstep in India by Sun Pharma.
Research
Parkinson's disease
Dalfampridine completed Phase II clinical trials for Parkinson's disease in July 2014.
See also
4-Dimethylaminopyridine, a popular laboratory reagent, is prepared directly from pyridine rather than by methylating this compound.
Pyridine
4-Pyridylnicotinamide, useful as a ligand in coordination chemistry, is prepared by the reaction of this compound with nicotinoyl chloride.
References
Potassium channel blockers
Orphan drugs
Avicides
4-Aminopyridines
4-Pyridyl compounds | 4-Aminopyridine | [
"Chemistry",
"Biology"
] | 1,304 | [
"Highly-toxic chemical substances",
"Harmful chemical substances",
"Biocides",
"Avicides"
] |
2,195,020 | https://en.wikipedia.org/wiki/Resultant | In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.
The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among others, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.
The resultant of n homogeneous polynomials in n variables (also called multivariate resultant, or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of elimination theory.
Notation
The resultant of two univariate polynomials $A$ and $B$ is commonly denoted $\operatorname{Res}(A,B)$ or $\operatorname{res}(A,B).$
In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: $\operatorname{Res}_x(A,B)$ or $\operatorname{res}_x(A,B).$
The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree $d$ may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as $\operatorname{Res}_{d,e}(A,B)$ or $\operatorname{res}_x^{d,e}(A,B).$
Definition
The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let
$$A = a_d x^d + a_{d-1} x^{d-1} + \cdots + a_1 x + a_0$$
and
$$B = b_e x^e + b_{e-1} x^{e-1} + \cdots + b_1 x + b_0$$
be nonzero polynomials of degrees $d$ and $e$ respectively. Let us denote by $\mathcal P_i$ the vector space (or free module if the coefficients belong to a commutative ring) of dimension $i$ whose elements are the polynomials of degree strictly less than $i$. The map
$$\varphi : \mathcal P_e \times \mathcal P_d \to \mathcal P_{d+e}$$
such that
$$\varphi(P, Q) = AP + BQ$$
is a linear map between two spaces of the same dimension. Over the basis of the powers of $x$ (listed in descending order), this map is represented by a square matrix of dimension $d + e$, which is called the Sylvester matrix of $A$ and $B$ (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).
The resultant of $A$ and $B$ is thus the determinant
$$\operatorname{Res}(A,B) = \begin{vmatrix}
a_d & 0 & \cdots & 0 & b_e & 0 & \cdots & 0 \\
a_{d-1} & a_d & \cdots & 0 & b_{e-1} & b_e & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
a_0 & a_1 & \cdots & a_d & b_0 & b_1 & \cdots & b_e \\
0 & a_0 & \ddots & \vdots & 0 & b_0 & \ddots & \vdots \\
\vdots & \vdots & \ddots & a_1 & \vdots & \vdots & \ddots & b_1 \\
0 & 0 & \cdots & a_0 & 0 & 0 & \cdots & b_0
\end{vmatrix},$$
which has $e$ columns of $a_i$ and $d$ columns of $b_j$ (the fact that the first column of $a$'s and the first column of $b$'s have the same length, that is $d = e$, is here only for simplifying the display of the determinant).
For instance, taking $d = 3$ and $e = 2$ we get
$$\operatorname{Res}(A,B) = \begin{vmatrix}
a_3 & 0 & b_2 & 0 & 0 \\
a_2 & a_3 & b_1 & b_2 & 0 \\
a_1 & a_2 & b_0 & b_1 & b_2 \\
a_0 & a_1 & 0 & b_0 & b_1 \\
0 & a_0 & 0 & 0 & b_0
\end{vmatrix}.$$
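The definition can be checked on small examples. The sketch below (Python with sympy; the two polynomials are arbitrary examples chosen for this illustration) builds the Sylvester matrix column by column as displayed above and compares its determinant with sympy's built-in resultant:

```python
from sympy import symbols, zeros, Poly, resultant

x = symbols("x")
A = Poly(2*x**3 - 3*x + 1, x)    # d = 3
B = Poly(5*x**2 + x - 2, x)      # e = 2

d, e = A.degree(), B.degree()
a = A.all_coeffs()               # [a_d, ..., a_0]
b = B.all_coeffs()               # [b_e, ..., b_0]

# e shifted columns of A's coefficients, then d shifted columns of B's.
M = zeros(d + e, d + e)
for j in range(e):
    for i, c in enumerate(a):
        M[i + j, j] = c
for j in range(d):
    for i, c in enumerate(b):
        M[i + j, e + j] = c

print(M.det())                                  # -92
print(resultant(A.as_expr(), B.as_expr(), x))   # -92, computed directly
```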
If the coefficients of the polynomials belong to an integral domain, then
$$\operatorname{Res}(A,B) = a_d^e\, b_e^d \prod_{i=1}^{d} \prod_{j=1}^{e} (\lambda_i - \mu_j),$$
where $\lambda_1, \dots, \lambda_d$ and $\mu_1, \dots, \mu_e$ are respectively the roots, counted with their multiplicities, of $A$ and $B$ in any algebraically closed field containing the integral domain.
This is a straightforward consequence of the characterizing properties of the resultant that appear below. In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers.
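The product formula can likewise be verified numerically; this sketch (numpy, floating-point arithmetic, the same example polynomials as above) evaluates $a_d^e b_e^d \prod (\lambda_i - \mu_j)$ from approximate roots:

```python
import numpy as np

a = [2.0, 0.0, -3.0, 1.0]     # A = 2x^3 - 3x + 1, highest degree first
b = [5.0, 1.0, -2.0]          # B = 5x^2 + x - 2

lam, mu = np.roots(a), np.roots(b)
d, e = len(a) - 1, len(b) - 1

# Res(A, B) = a_d^e * b_e^d * product over all pairs of (lambda_i - mu_j).
res = a[0]**e * b[0]**d * np.prod([li - mj for li in lam for mj in mu])
print(res.real)               # approximately -92; imaginary part is rounding noise
```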
Properties
In this section and its subsections, $A$ and $B$ are two polynomials in $x$ of respective degrees $d$ and $e$, and their resultant is denoted $\operatorname{Res}(A,B).$
Characterizing properties
The following properties hold for the resultant of two polynomials with coefficients in
a commutative ring $R$. If $R$ is a field or more generally an integral domain, the resultant is the unique function of the coefficients of two polynomials that satisfies these properties.
If $R$ is a subring of another ring $S$, then $\operatorname{Res}_R(A,B) = \operatorname{Res}_S(A,B).$ That is, $A$ and $B$ have the same resultant when considered as polynomials over $R$ or $S$.
If $d = 0$ (that is, if $A$ is a nonzero constant $a_0$) then $\operatorname{Res}(A,B) = a_0^e.$ Similarly, if $e = 0$, then $\operatorname{Res}(A,B) = b_0^d.$
Zeros
The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
There exist a polynomial $P$ of degree less than $e$ and a polynomial $Q$ of degree less than $d$ such that $\operatorname{Res}(A,B) = AP + BQ.$ This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials.
Invariance by ring homomorphisms
Let $A$ and $B$ be two polynomials of respective degrees $d$ and $e$ with coefficients in a commutative ring $R$, and $\varphi \colon R \to S$ a ring homomorphism of $R$ into another commutative ring $S$. Applying $\varphi$ to the coefficients of a polynomial extends $\varphi$ to a homomorphism of polynomial rings $R[x] \to S[x]$, which is also denoted $\varphi.$ With this notation, we have:
If $\varphi$ preserves the degrees of $A$ and $B$ (that is if $\deg(\varphi(A)) = d$ and $\deg(\varphi(B)) = e$), then
$$\varphi(\operatorname{Res}(A,B)) = \operatorname{Res}(\varphi(A), \varphi(B)).$$
If $\deg(\varphi(A)) < d$ and $\deg(\varphi(B)) < e,$ then $\varphi(\operatorname{Res}(A,B)) = 0.$
If $\deg(\varphi(A)) = d$ and $\deg(\varphi(B)) < e,$ and the leading coefficient of $A$ is $a_d,$ then
$$\varphi(\operatorname{Res}(A,B)) = \varphi(a_d)^{\,e - \deg(\varphi(B))}\operatorname{Res}(\varphi(A), \varphi(B)).$$
If $\deg(\varphi(A)) < d$ and $\deg(\varphi(B)) = e,$ and the leading coefficient of $B$ is $b_e,$ then
$$\varphi(\operatorname{Res}(A,B)) = (-1)^{e(d - \deg(\varphi(A)))}\,\varphi(b_e)^{\,d - \deg(\varphi(A))}\operatorname{Res}(\varphi(A), \varphi(B)).$$
These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When $R$ is a polynomial ring in other indeterminates, and $S$ is the ring obtained by specializing to numerical values some or all indeterminates of $R$, these properties may be restated as: if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition.
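A sketch of the modular strategy with sympy (the primes and the polynomials are chosen arbitrarily for illustration, and the primes are assumed large enough that their product exceeds twice the resultant's absolute value, so the signed reconstruction is exact):

```python
from sympy import symbols, zeros, Poly, resultant
from sympy.ntheory.modular import crt

x = symbols("x")
A = Poly(3*x**3 + 2*x - 7, x)
B = Poly(4*x**2 - x + 5, x)

def sylvester(a, b):
    """Sylvester matrix from coefficient lists, highest degree first."""
    d, e = len(a) - 1, len(b) - 1
    M = zeros(d + e, d + e)
    for j in range(e):
        for i, c in enumerate(a):
            M[i + j, j] = c
    for j in range(d):
        for i, c in enumerate(b):
            M[i + j, e + j] = c
    return M

a, b = A.all_coeffs(), B.all_coeffs()
primes = [10007, 10009, 10037]   # the leading coefficients survive reduction mod p
residues = [int(sylvester([c % p for c in a], [c % p for c in b]).det()) % p
            for p in primes]

value, _ = crt(primes, residues, symmetric=True)   # signed reconstruction
print(value)
print(resultant(A.as_expr(), B.as_expr(), x))      # direct computation, same value
```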
Invariance under change of variable
If $A_1(x) = x^d A(1/x)$ and $B_1(x) = x^e B(1/x)$ are the reciprocal polynomials of $A$ and $B$, respectively, then
$$\operatorname{Res}(A_1, B_1) = (-1)^{de} \operatorname{Res}(A, B).$$
This means that the property of the resultant being zero is invariant under linear and projective changes of the variable.
Invariance under change of polynomials
If $a$ and $b$ are nonzero constants (that is they are independent of the indeterminate $x$), and $A$ and $B$ are as above, then
$$\operatorname{Res}(aA, bB) = a^e b^d \operatorname{Res}(A, B).$$
If $A$ and $B$ are as above, and $C$ is another polynomial such that the degree of $A - CB$ is $\delta$, then
$$\operatorname{Res}(A, B) = (-1)^{e(d-\delta)}\, b_e^{\,d-\delta} \operatorname{Res}(A - CB, B).$$
It is only when $A$ and $CB$ have the same degree that $\delta$ cannot be deduced from the degrees of the given polynomials. Combined with the swap identity $\operatorname{Res}(B, A) = (-1)^{de} \operatorname{Res}(A, B)$, this allows the resultant to be computed along a Euclidean remainder sequence: taking for $A - CB$ the remainder $R$ of the Euclidean division of $A$ by $B$ gives
$$\operatorname{Res}(A, B) = (-1)^{de}\, b_e^{\,d-\delta} \operatorname{Res}(B, R).$$
These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm, which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves $O(de)$ arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires $O((d+e)^3)$ arithmetic operations.
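The Euclidean approach can be sketched in a few lines of plain Python over the rationals (a sketch, not the fraction-free subresultant algorithm described above; it relies on the identity $\operatorname{Res}(A,B) = (-1)^{de}\, b_e^{d-\delta} \operatorname{Res}(B,R)$ stated in the previous subsection, with $R$ the remainder of dividing $A$ by $B$ and $\delta = \deg R$):

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of dividing A by B; coefficient lists, highest degree first."""
    r = [Fraction(c) for c in a]
    db, lb = len(b) - 1, Fraction(b[0])
    while len(r) - 1 >= db:
        q = r[0] / lb                      # leading quotient coefficient
        for i in range(db + 1):
            r[i] -= q * Fraction(b[i])
        r.pop(0)                           # leading coefficient is now zero
    while len(r) > 1 and r[0] == 0:        # normalize: strip leading zeros
        r.pop(0)
    return r

def res(a, b):
    """Resultant via Res(A,B) = (-1)^(d*e) * lc(B)^(d - delta) * Res(B, R)."""
    d, e = len(a) - 1, len(b) - 1
    if e == 0:                             # Res(A, constant) = b0^d
        return Fraction(b[0]) ** d
    r = poly_rem(a, b)
    if not any(r):                         # common factor: resultant is zero
        return Fraction(0)
    delta = len(r) - 1
    return (-1) ** (d * e) * Fraction(b[0]) ** (d - delta) * res(b, r)

# Same example polynomials as above; agrees with the Sylvester determinant.
print(res([2, 0, -3, 1], [5, 1, -2]))      # -92
```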
Generic properties
In this section, we consider two polynomials
$$A = a_d x^d + a_{d-1} x^{d-1} + \cdots + a_0$$
and
$$B = b_e x^e + b_{e-1} x^{e-1} + \cdots + b_0,$$
whose coefficients $a_0, \dots, a_d, b_0, \dots, b_e$ are distinct indeterminates. Let
$$R = \mathbb{Z}[a_0, \dots, a_d, b_0, \dots, b_e]$$
be the polynomial ring over the integers defined by these indeterminates.
The resultant $\operatorname{Res}(A,B)$ is often called the generic resultant for the degrees $d$ and $e$. It has the following properties.
$\operatorname{Res}(A,B)$ is an absolutely irreducible polynomial.
If $I$ is the ideal of $R[x]$ generated by $A$ and $B$, then $I \cap R$ is the principal ideal generated by $\operatorname{Res}(A,B).$
Homogeneity
The generic resultant for the degrees $d$ and $e$ is homogeneous in various ways. More precisely:
It is homogeneous of degree $e$ in $a_0, \dots, a_d.$
It is homogeneous of degree $d$ in $b_0, \dots, b_e.$
It is homogeneous of degree $d + e$ in all the variables $a_i$ and $b_j.$
If $a_i$ and $b_i$ are given the weight $i$ (that is, the weight of each coefficient is its degree as elementary symmetric polynomial), then it is quasi-homogeneous of total weight $de.$
If $P$ and $Q$ are homogeneous multivariate polynomials of respective degrees $d$ and $e$, then their resultant in degrees $d$ and $e$ with respect to an indeterminate $x$, denoted $\operatorname{Res}_x^{d,e}(P,Q)$ in § Notation, is homogeneous of degree $de$ in the other indeterminates.
Elimination property
Let $I = \langle A, B \rangle$ be the ideal generated by two polynomials $A$ and $B$ in a polynomial ring $R[x],$ where $R = k[y_1, \dots, y_n]$ is itself a polynomial ring over a field. If at least one of $A$ and $B$ is monic in $x$, then:
The ideals $I \cap R$ and $\langle \operatorname{Res}_x(A,B) \rangle$ define the same algebraic set. That is, a tuple of elements of an algebraically closed field is a common zero of the elements of $I \cap R$ if and only if it is a zero of $\operatorname{Res}_x(A,B).$
The ideal $I \cap R$ has the same radical as the principal ideal $\langle \operatorname{Res}_x(A,B) \rangle.$ That is, each element of $I \cap R$ has a power that is a multiple of $\operatorname{Res}_x(A,B).$
All irreducible factors of $\operatorname{Res}_x(A,B)$ divide every element of $I \cap R.$
The first assertion is a basic property of the resultant. The other assertions are immediate corollaries of the second one, which can be proved as follows.
As at least one of $A$ and $B$ is monic in $x$, a tuple $(\beta_1, \dots, \beta_n)$ is a zero of $\operatorname{Res}_x(A,B)$ if and only if there exists $\beta_{n+1}$ such that $(\beta_1, \dots, \beta_n, \beta_{n+1})$ is a common zero of $A$ and $B$. Such a common zero is also a zero of all elements of $I \cap R.$ Conversely, if $(\beta_1, \dots, \beta_n)$ is a common zero of the elements of $I \cap R,$ it is a zero of the resultant, and there exists $\beta_{n+1}$ such that $(\beta_1, \dots, \beta_n, \beta_{n+1})$ is a common zero of $A$ and $B$. So $I \cap R$ and $\langle \operatorname{Res}_x(A,B) \rangle$ have exactly the same zeros.
Computation
Theoretically, the resultant could be computed by using the formula expressing it as a product of differences of roots. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient.
As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs $O((d+e)^3)$ arithmetic operations. As algorithms are known with a better complexity (see below), this method is not used in practice.
It follows from § Invariance under change of polynomials that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees $d$ and $e$ may be done in $O(de)$ arithmetic operations in the field of coefficients.
However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order, making the algorithm inefficient.
The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem.
The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input ($\log(s(d+e)),$ where $s$ is an upper bound of the number of digits of the input polynomials).
Application to polynomial systems
Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems.
Case of two equations in two unknowns
Consider the system of two polynomial equations
$$P(x, y) = 0, \qquad Q(x, y) = 0,$$
where $P$ and $Q$ are polynomials of respective total degrees $d$ and $e$. Then $R = \operatorname{Res}_y^{d,e}(P, Q)$ is a polynomial in $x$, which is generically of degree $de$ (by properties of § Homogeneity). A value $\alpha$ of $x$ is a root of $R$ if and only if either there exist $\beta$ in an algebraically closed field containing the coefficients, such that $P(\alpha, \beta) = Q(\alpha, \beta) = 0$, or $\deg(P(\alpha, y)) < d$ and $\deg(Q(\alpha, y)) < e$ (in this case, one says that $P$ and $Q$ have a common root at infinity for $x = \alpha$).
Therefore, solutions to the system are obtained by computing the roots of $R$, and for each root $\alpha,$ computing the common root(s) of $P(\alpha, y)$ and $Q(\alpha, y).$
Bézout's theorem results from the value $de$ of the degree of the resultant, the product of the degrees of $P$ and $Q$. In fact, after a linear change of variables, one may suppose that, for each root $\alpha$ of the resultant, there is exactly one value of $y$ such that $(\alpha, y)$ is a common zero of $P$ and $Q$. This shows that the number of common zeros is at most the degree of the resultant, that is at most the product of the degrees of $P$ and $Q$. With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees.
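A concrete sketch with sympy (the circle-and-hyperbola system is an invented example): eliminate $y$ with a resultant, solve the resulting univariate polynomial in $x$, and back-substitute:

```python
from sympy import symbols, resultant, solve, factor

x, y = symbols("x y")
P = x**2 + y**2 - 5          # a circle
Q = x*y - 2                  # a hyperbola

R = resultant(P, Q, y)       # eliminates y; generically of degree d*e
print(factor(R))             # (x - 2)*(x - 1)*(x + 1)*(x + 2)

solutions = []
for xr in solve(R, x):
    for yr in solve(Q.subs(x, xr), y):
        if P.subs({x: xr, y: yr}) == 0:
            solutions.append((xr, yr))
print(solutions)             # [(-2, -1), (-1, -2), (1, 2), (2, 1)]
```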
General case
At first glance, it seems that resultants may be applied to a general polynomial system of equations
$$P_1(x_1, \dots, x_n) = 0, \ \dots, \ P_k(x_1, \dots, x_n) = 0$$
by computing the resultants of every pair $(P_i, P_j)$ with respect to $x_n$ for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove.
A method, introduced at the end of the 19th century, works as follows: introduce $k - 1$ new indeterminates $U_2, \dots, U_k$ and compute
$$\operatorname{Res}_{x_n}(P_1, \ U_2 P_2 + \cdots + U_k P_k).$$
This is a polynomial in $U_2, \dots, U_k$ whose coefficients are polynomials in $x_1, \dots, x_{n-1},$ which have the property that $(\alpha_1, \dots, \alpha_{n-1})$ is a common zero of these polynomial coefficients, if and only if the univariate polynomials $P_i(\alpha_1, \dots, \alpha_{n-1}, x_n)$ have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials.
To get a correct algorithm two complements have to be added to the method. Firstly, at each step, a linear change of variable may be needed in order that the degrees of the polynomials in the last variable are the same as their total degree. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split in two components: one where the common factor is zero, and the other which is obtained by factoring out this common factor before continuing.
This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical.
Other applications
Number theory
The discriminant of a polynomial $P$, which is a fundamental tool in number theory, is $\frac{(-1)^{d(d-1)/2}}{a_d}\operatorname{Res}_x(P, P'),$ where $a_d$ is the leading coefficient of $P$ and $d$ its degree.
If $\alpha$ and $\beta$ are algebraic numbers such that $P(\alpha) = Q(\beta) = 0$, then $\alpha + \beta$ is a root of the resultant $\operatorname{Res}_x(P(x), Q(z - x)),$ and $\alpha\beta$ is a root of $\operatorname{Res}_x(P(x), x^e Q(z/x)),$ where $e$ is the degree of $Q$. Combined with the fact that $1/\beta$ is a root of $x^e Q(1/x)$, this shows that the set of algebraic numbers is a field.
Let $K = k(\alpha)$ be an algebraic field extension generated by an element $\alpha$ which has $P$ as minimal polynomial. Every element $\beta$ of $K$ may be written as $\beta = Q(\alpha),$ where $Q$ is a polynomial. Then $\beta$ is a root of $\operatorname{Res}_x(P(x), z - Q(x)),$ and this resultant is a power of the minimal polynomial of $\beta.$
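These constructions are easy to try with sympy (a sketch; the algebraic numbers $\sqrt 2$ and $\sqrt 3$ are chosen so the output is checkable by hand):

```python
from sympy import symbols, resultant, factor, expand

x, z = symbols("x z")
P = x**2 - 2                 # minimal polynomial of sqrt(2)
Q = x**2 - 3                 # minimal polynomial of sqrt(3)

# sqrt(2) + sqrt(3) is a root of Res_x(P(x), Q(z - x)):
print(factor(resultant(P, Q.subs(x, z - x), x)))              # z**4 - 10*z**2 + 1

# sqrt(2)*sqrt(3) is a root of Res_x(P(x), x**2 * Q(z/x)):
print(factor(resultant(P, expand(x**2 * Q.subs(x, z/x)), x))) # (z**2 - 6)**2
```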
Algebraic geometry
Given two plane algebraic curves defined as the zeros of the polynomials $P(x, y)$ and $Q(x, y)$, the resultant allows the computation of their intersection. More precisely, the roots of $\operatorname{Res}_y(P, Q)$ are the $x$-coordinates of the intersection points and of the common vertical asymptotes, and the roots of $\operatorname{Res}_x(P, Q)$ are the $y$-coordinates of the intersection points and of the common horizontal asymptotes.
A rational plane curve may be defined by a parametric equation
$$x = \frac{P(t)}{R(t)}, \qquad y = \frac{Q(t)}{R(t)},$$
where $P$, $Q$ and $R$ are polynomials. An implicit equation of the curve is given by
$$\operatorname{Res}_t(x R - P, \ y R - Q) = 0.$$
The degree of this curve is the highest degree of $P$, $Q$ and $R$, which is equal to the total degree of the resultant.
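For example (a sympy sketch using the standard rational parametrization of the unit circle), eliminating $t$ recovers an implicit equation:

```python
from sympy import symbols, resultant, factor

t, x, y = symbols("t x y")

# Rational parametrization of the unit circle:
#   x = (1 - t**2)/(1 + t**2),   y = 2*t/(1 + t**2)
P, Q, R = 1 - t**2, 2*t, 1 + t**2

implicit = resultant(x*R - P, y*R - Q, t)
print(factor(implicit))      # 4*(x**2 + y**2 - 1), i.e. the unit circle
```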
Symbolic integration
In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiprimitives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form
$$\frac{B}{A},$$
where $A$ is a square-free polynomial and $B$ is a polynomial of lower degree than $A$. The antiderivative of such a function involves necessarily logarithms, and generally algebraic numbers (the roots of $A$). In fact, the antiderivative is
$$\int \frac{B}{A}\,dx = \sum_{\alpha \,:\, A(\alpha) = 0} \frac{B(\alpha)}{A'(\alpha)} \log(x - \alpha),$$
where the sum runs over all complex roots of $A$.
The number of algebraic numbers involved by this expression is generally equal to the degree of $A$, but it occurs frequently that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression, where the number of algebraic numbers is minimal, without any computation with algebraic numbers.
Let
$$\operatorname{Res}_x(A, \ B - t A') = \prod_{i} Q_i(t)^i$$
be the square-free factorization of the resultant which appears on the right. Trager proved that the antiderivative is
$$\int \frac{B}{A}\,dx = \sum_i \sum_{t \,:\, Q_i(t) = 0} t \log(S_i(t, x)),$$
where the internal sums run over the roots of the $Q_i$ (if $Q_i$ is constant, the sum is zero, as being the empty sum), and $S_i(t, x)$ is a polynomial of degree $i$ in $x$. The Lazard–Rioboo contribution is the proof that $S_i(t, x)$ is the subresultant of degree $i$ of $A$ and $B - t A'.$ It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence.
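A small demonstration of the resultant underlying this method (a sketch with sympy; the integrand $1/(x^2 - 2)$ is a hypothetical example): the roots of the Rothstein–Trager resultant $\operatorname{Res}_x(A, B - tA')$ are exactly the coefficients of the logarithms in the antiderivative:

```python
from sympy import symbols, resultant, solve, integrate, diff

x, t = symbols("x t")
A = x**2 - 2                 # square-free denominator
B = 1                        # numerator of lower degree than A

rt = resultant(A, B - t*diff(A, x), x)
print(rt)                    # 1 - 8*t**2 (up to sign)
print(solve(rt, t))          # [-sqrt(2)/4, sqrt(2)/4]: the logarithm coefficients
print(integrate(B/A, x))     # log terms with coefficients +-sqrt(2)/4
```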
Computer algebra
All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. In fact most computer algebra systems include an efficient implementation of the computation of resultants.
Homogeneous resultant
The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials $P(x, y)$ and $Q(x, y)$ of respective total degrees $p$ and $q$, their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map
$$(A, B) \mapsto AP + BQ,$$
where $A$ runs over the bivariate homogeneous polynomials of degree $q - 1$, and $B$ runs over the homogeneous polynomials of degree $p - 1$. In other words, the homogeneous resultant of $P$ and $Q$ is the resultant of $P(x, 1)$ and $Q(x, 1)$ when they are considered as polynomials of degree $p$ and $q$ (their degree in $x$ may be lower than their total degree):
$$\operatorname{Res}(P(x,y), Q(x,y)) = \operatorname{res}_{p,q}(P(x,1), Q(x,1)).$$
(The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation.)
The homogeneous resultant has essentially the same properties as the usual resultant, with essentially two differences: instead of polynomial roots, one considers zeros in the projective line, and the degree of a polynomial may not change under a ring homomorphism.
That is:
The resultant of two homogeneous polynomials over an integral domain is zero if and only if they have a non-zero common zero over an algebraically closed field containing the coefficients.
If $P$ and $Q$ are two bivariate homogeneous polynomials with coefficients in a commutative ring $R$, and $\varphi$ is a ring homomorphism of $R$ into another commutative ring $S$, then, extending $\varphi$ to polynomials over $R$, one has
$$\varphi(\operatorname{Res}(P, Q)) = \operatorname{Res}(\varphi(P), \varphi(Q)).$$
The property of a homogeneous resultant to be zero is invariant under any projective change of variables.
Any property of the usual resultant may similarly be extended to the homogeneous resultant, and the resulting property is either very similar or simpler than the corresponding property of the usual resultant.
Macaulay's resultant
Macaulay's resultant, named after Francis Sowerby Macaulay, also called the multivariate resultant, or the multipolynomial resultant, is a generalization of the homogeneous resultant to $n$ homogeneous polynomials in $n$ indeterminates. Macaulay's resultant is a polynomial in the coefficients of these $n$ homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the $n$ hypersurfaces defined by the polynomials have a common zero in the $(n-1)$-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).
Like the homogeneous resultant, Macaulay's resultant may be defined with determinants, and thus behaves well under ring homomorphisms. However, it cannot be defined by a single determinant. It follows that it is easier to define it first on generic polynomials.
Resultant of generic homogeneous polynomials
A homogeneous polynomial of degree $d$ in $n$ variables may have up to
$$\binom{n + d - 1}{d}$$
coefficients; it is said to be generic if these coefficients are distinct indeterminates.
Let be generic homogeneous polynomials in indeterminates, of respective degrees Together, they involve
indeterminate coefficients.
Let be the polynomial ring over the integers, in all these
indeterminate coefficients. The polynomials thus belong to , and their resultant (still to be defined) belongs to .
The Macaulay degree is the integer which is fundamental in Macaulay's theory. For defining the resultant, one considers the Macaulay matrix, which is the matrix over the monomial basis of the -linear map
in which each runs over the homogeneous polynomials of degree and the codomain is the -module of the homogeneous polynomials of degree .
If , the Macaulay matrix is the Sylvester matrix, and is a square matrix, but this is no longer true for . Thus, instead of considering the determinant, one considers all the maximal minors, that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the -ideal generated by these maximal minors is a principal ideal, which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign. The generic Macaulay resultant is the greatest common divisor which becomes , when, for each , zero is substituted for all coefficients of except the coefficient of , for which one is substituted.
Properties of the generic Macaulay resultant
The generic Macaulay resultant is an irreducible polynomial.
It is homogeneous of degree in the coefficients of , where is the Bézout bound.
The product of the resultant with any monomial of degree in belongs to the ideal of generated by
Resultant of polynomials over a field
From now on, we consider that the homogeneous polynomials of degrees have their coefficients in a field , that is, that they belong to . Their resultant is defined as the element of obtained by replacing in the generic resultant the indeterminate coefficients by the actual coefficients of the
The main property of the resultant is that it is zero if and only if have a nonzero common zero in an algebraically closed extension of .
The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of Projective Nullstellensatz: If the resultant is nonzero, then
where is the Macaulay degree, and is the maximal homogeneous ideal. This implies that have no other common zero than the unique common zero, , of
Computability
As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors, there are algorithms for computing resultants in a finite number of steps.
However, the generic resultant is a polynomial of very high degree (exponential in ) depending on a huge number of indeterminates. It follows that, except for very small and very small degrees of the input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, even if it were computable, the result could not be stored on available memory devices, even for rather small values of and of the degrees of the input polynomials.
Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field.
In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is lower than the number of its rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity , where is the maximum degree of the input polynomials.
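As a hedged sketch of this test in the simplest case — for two polynomials, as noted above, the Macaulay matrix is the Sylvester matrix — the following code builds that matrix and checks whether its rank is deficient. The helper name build_sylvester and the example polynomials are illustrative assumptions, and NumPy's SVD-based matrix_rank stands in for the Gaussian-elimination rank test described in the text:

```python
# Sketch: detect a common root of two univariate polynomials by a rank
# test on their Sylvester matrix (the two-polynomial Macaulay matrix).
import numpy as np

def build_sylvester(f, g):
    # Coefficients are given highest degree first, as in numpy.roots.
    m, n = len(f) - 1, len(g) - 1          # degrees of f and g
    s = np.zeros((m + n, m + n))
    for i in range(n):                     # n shifted copies of f's coefficients
        s[i, i:i + m + 1] = f
    for i in range(m):                     # m shifted copies of g's coefficients
        s[n + i, i:i + n + 1] = g
    return s

f = [1.0, 0.0, -1.0]   # x^2 - 1, roots 1 and -1
g = [1.0, -1.0]        # x - 1, shares the root 1

s = build_sylvester(f, g)
# A rank smaller than the number of rows means the resultant vanishes,
# i.e. the two polynomials have a common root.
print(np.linalg.matrix_rank(s) < s.shape[0])  # True
```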
Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of which, together with the coordinates of the point, are a zero of the input polynomials. In other words, the resultant is the result of the "elimination" of from the input polynomials.
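To make this elimination concrete, here is a hedged SymPy sketch (the two bivariate polynomials are arbitrary choices): taking the resultant with respect to y eliminates y and leaves a polynomial in x whose roots are the x-coordinates of the common zeros.

```python
# Sketch: eliminating a variable from two bivariate polynomials
# by taking a resultant with respect to it.
from sympy import symbols, resultant, solve

x, y = symbols('x y')
f = x**2 + y**2 - 1   # unit circle (arbitrary example)
g = x - y             # the line y = x

r = resultant(f, g, y)  # y has been eliminated
print(r)                # 2*x**2 - 1
print(solve(r, x))      # [-sqrt(2)/2, sqrt(2)/2]: the x-coordinates
                        # of the two intersection points
```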
U-resultant
Macaulay's resultant provides a method, called "U-resultant" by Macaulay, for solving systems of polynomial equations.
Given homogeneous polynomials of degrees in indeterminates over a field , their U-resultant is the resultant of the polynomials where
is the generic linear form whose coefficients are new indeterminates . The notation or for these generic coefficients is traditional, and is the origin of the term U-resultant.
The U-resultant is a homogeneous polynomial in . It is zero if and only if the common zeros of form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of ). If the U-resultant is not zero, its degree is the Bézout bound
The U-resultant factorizes over an algebraically closed extension of into a product of linear forms. If is such a linear factor, then are the homogeneous coordinates of a common zero of . Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the at this zero. In other words, the U-resultant provides a completely explicit version of Bézout's theorem.
Extension to more polynomials and computation
The U-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be , where is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from , and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation.
Let be homogeneous polynomials in of degrees over a field . Without loss of generality, one may suppose that . Setting for , the Macaulay bound is
Let be new indeterminates and define . In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in , of the linear map
where, for each , runs over the linear space consisting of zero and the homogeneous polynomials of degree .
Reducing the Macaulay matrix by a variant of Gaussian elimination, one obtains a square matrix of linear forms in The determinant of this matrix is the U-resultant. As with the original U-resultant, it is zero if and only if have infinitely many common projective zeros (that is if the projective algebraic set defined by has infinitely many points over an algebraic closure of ). Again as with the original U-resultant, when this U-resultant is not zero, it factorizes into linear factors over any algebraically closed extension of . The coefficients of these linear factors are the homogeneous coordinates of the common zeros of and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor.
The number of rows of the Macaulay matrix is less than , where is the usual mathematical constant and is the arithmetic mean of the degrees of the . It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time . Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem). This computation may be practically viable when , and are not large.
See also
Elimination theory
Subresultant
Nonlinear algebra
References
External links
Polynomials
Determinants
Computer algebra | Resultant | [
"Mathematics",
"Technology"
] | 5,599 | [
"Polynomials",
"Computational mathematics",
"Computer algebra",
"Computer science",
"Algebra"
] |
2,195,037 | https://en.wikipedia.org/wiki/Rational%20zeta%20series | In mathematics, a rational zeta series is the representation of an arbitrary real number in terms of a series consisting of rational numbers and the Riemann zeta function or the Hurwitz zeta function. Specifically, given a real number x, the rational zeta series for x is given by
where each qn is a rational number, the value m is held fixed, and ζ(s, m) is the Hurwitz zeta function. It is not hard to show that any real number x can be expanded in this way.
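As a numerical illustration (an addition, not part of the original article), two classical rational zeta series can be checked with the arbitrary-precision mpmath library: the sum of ζ(n) − 1 over n ≥ 2 equals 1, and the sum of (ζ(n) − 1)/n over n ≥ 2 equals 1 − γ, where γ is the Euler–Mascheroni constant.

```python
# Hedged numerical check of two classical rational zeta series with mpmath.
from mpmath import mp, nsum, zeta, euler, inf

mp.dps = 30  # work with 30 decimal digits

# sum_{n >= 2} (zeta(n) - 1) = 1
print(nsum(lambda n: zeta(n) - 1, [2, inf]))

# sum_{n >= 2} (zeta(n) - 1)/n = 1 - gamma
print(nsum(lambda n: (zeta(n) - 1) / n, [2, inf]))
print(1 - euler)  # agrees with the previous line
```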
Elementary series
For integer m>1, one has
For m=2, several interesting numbers have a simple expression as rational zeta series:
and
where γ is the Euler–Mascheroni constant. The series
follows by summing the Gauss–Kuzmin distribution. There are also series for π:
and
being notable because of its fast convergence. This last series follows from the general identity
which in turn follows from the generating function for the Bernoulli numbers
Adamchik and Srivastava give a similar series
Polygamma-related series
A number of additional relationships can be derived from the Taylor series for the polygamma function at z = 1, which is
.
The above converges for |z| < 1. A special case is
which holds for |t| < 2. Here, ψ is the digamma function and ψ^(m) is the polygamma function. Many series involving the binomial coefficient may be derived:
where ν is a complex number. The above follows from the series expansion for the Hurwitz zeta
taken at y = −1. Similar series may be obtained by simple algebra:
and
and
and
For integer n ≥ 0, the series
can be written as the finite sum
The above follows from the simple recursion relation S_n + S_{n+1} = ζ(n + 2). Next, the series
may be written as
for integer n ≥ 1. The above follows from the identity T_n + T_{n+1} = S_n. This process may be applied recursively to obtain finite series for general expressions of the form
for positive integers m.
Half-integer power series
Similar series may be obtained by exploring the Hurwitz zeta function at half-integer values. Thus, for example, one has
Expressions in the form of p-series
Adamchik and Srivastava give
and
where are the Bernoulli numbers and are the Stirling numbers of the second kind.
Other series
Other constants that have notable rational zeta series are:
Khinchin's constant
Apéry's constant
References
Zeta and L-functions
Real numbers | Rational zeta series | [
"Mathematics"
] | 531 | [
"Real numbers",
"Mathematical objects",
"Numbers"
] |
2,195,142 | https://en.wikipedia.org/wiki/Wallflower%20%28people%29 | A wallflower is someone with an introverted or shy personality type (or, in more extreme cases, social anxiety) who will attend parties and social gatherings, but will usually distance themselves from the crowd and actively avoid being in the limelight. They tend to be sociable around friends but not strangers, although the presence of friends can make interacting with strangers feel less daunting. The name itself derives from the eponymous plant's unusual growth pattern: against a wall as a stake, or in cracks and gaps in stone walls. "Wallflowers" might literally stand against a wall and simply observe others at a social gathering, rather than mingle.
Connection to sociology
Structural function theory
Structural functionalism is a sociological theory that sees society as a number of complex parts that form a stable and functional whole. This leads to a strong and coherent family unit made of smaller parts, with the functioning family unit then going on to form the smaller parts of a wider community, society and so on.
Social conflict theory
Social conflict theory in sociology claims that society is in a state of perpetual conflict due to competition for limited resources. It holds that social order is maintained by domination and power, rather than consensus and conformity. According to conflict theory, those with wealth and power try to hold on to it by any means possible, chiefly by suppressing the poor and powerless.
Symbolic interaction theory
The most relevant sociological theory that the 'wallflower' relates to, symbolic interaction, describes specific gestures or social norms that are symbolic in meaning. The theory consists of three core principles: meaning, language and thought. These core principles lead to conclusions about the creation of a person’s self and socialization into a larger community.
Because the 'wallflower' will usually exhibit a lack of interaction with others, this behavior becomes symbolic of their thoughts and feelings towards others. The most specific example is body language: people who are shy often make little or no eye contact with others. A man, woman, or child may be seen avoiding eye contact with others while out in public or even in private. For some, this behavior may become consistent over time and turn into a habitual action.
In social gatherings or parties, a 'wallflower' typically remains on the periphery of the group, avoiding the center of activity. Shy individuals may prefer to stay near familiar people or keep their distance from those they do not know well. Even in the presence of friends, they often avoid situations that might draw attention to themselves or place them at the center of focus.
Social anxiety
Social anxiety is the extreme fear of being scrutinized and judged by others in social or performance situations. Social anxiety disorder can wreak havoc on the lives of those who suffer from it. Symptoms may be so extreme that they disrupt daily life. People with this disorder, also called social phobia, may have few or no social or romantic relationships, making them feel powerless, alone, or even ashamed.
Although they recognize that the fear is excessive and unreasonable, people with social anxiety disorder feel powerless against their anxiety. They are terrified they will humiliate or embarrass themselves. The anxiety can interfere significantly with daily routines, occupational performance, or social life, making it difficult to complete school, interview and get a job, and have friendships and romantic relationships.
Being a wallflower can be considered a less-intense form of social anxiety. A person with social anxiety may feel a sense of hesitation in large crowds, and may even have a sense of panic if forced to become the center of attention. This fear may cause them to do something as minor as stand away from the center of a party, but it may also cause a major or minor anxiety attack.
People with social anxiety disorder do not believe that their anxiety is related to a mental or physical illness. This type of anxiety occurs in most social situations, especially when the person feels on display or is the center of attention. Once a person avoids almost all social and public interactions, it can be said that the person has an extreme case of social anxiety disorder, more commonly called Avoidant Personality Disorder. People with social anxiety disorder have an elevated rate of relationship difficulties and substance abuse.
Panic and anxiety attacks
Anxiety attacks are a combination of physical and mental symptoms that are intense and overwhelming. The anxiety is, however, more than just regular nervousness. Symptoms of anxiety attacks and panic attacks mimic serious medical issues, such as:
Heart attacks and heart failure.
Brain tumors.
Multiple sclerosis.
Despite their intensity, anxiety attacks are generally not life-threatening.
In popular culture
In the novel The Perks of Being a Wallflower by Stephen Chbosky, as well as in the film adaptation of the same title, the main character Charlie often finds himself alone in school or at parties. He also suffers from anxiety and depression.
In the song "Here" by Alessia Cara, the artist describes wanting to enjoy herself at home and not attend any parties with her friends.
Bob Dylan sings about a wallflower in the song "Wallflower" from 1971.
Jakob Dylan, son of Bob Dylan, founded the popular band the Wallflowers in 1989.
In the song "Wallflower" by In Flames, from the album Battles, they describe life from the perspective of a wallflower.
In the My Little Pony spin-off Equestria Girls, the 2018 special "Forgotten Friendship" features the character Wallflower Blush who is an extreme introvert, says her only friends are the plants in her garden and who sings the song "Invisible" about nobody ever taking notice of her existence.
In the song "Wallflower" from the album Wallflowers by Ukrainian metal band Jinjer, the vocalist describes herself being a wallflower.
In the song "Stall Me (Bonus Track)" from the deluxe version of the album Vices & Virtues by rock band Panic! at the Disco, the lyrics mention a 'wallflower garden', meaning a group of people who are collectively distancing themselves from the rest of the party.
In the song "WALLFLOWER" by K-Pop girl group TWICE, the lyrics are about seducing a wallflower.
See also
Hermit
Loner
Recluse
References
Human communication
Behaviorism
Personality typologies | Wallflower (people) | [
"Biology"
] | 1,260 | [
"Human communication",
"Behavior",
"Human behavior",
"Behaviorism"
] |
2,195,185 | https://en.wikipedia.org/wiki/ATM%20serine/threonine%20kinase | ATM serine/threonine kinase or Ataxia-telangiectasia mutated, symbol ATM, is a serine/threonine protein kinase that is recruited and activated by DNA double-strand breaks (canonical pathway), oxidative stress, topoisomerase cleavage complexes, splicing intermediates, R-loops and in some cases by single-strand DNA breaks. It phosphorylates several key proteins that initiate activation of the DNA damage checkpoint, leading to cell cycle arrest, DNA repair or apoptosis. Several of these targets, including p53, CHK2, BRCA1, NBS1 and H2AX are tumor suppressors.
In 1995, the gene was discovered by Yosef Shiloh who named its product ATM since he found that its mutations are responsible for the disorder ataxia–telangiectasia. In 1998, the Shiloh and Kastan laboratories independently showed that ATM is a protein kinase whose activity is enhanced by DNA damage.
Throughout the cell cycle DNA is monitored for damage. Damage results from errors during replication, by-products of metabolism, general toxic drugs or ionizing radiation. The cell cycle has different DNA damage checkpoints, which inhibit the next or maintain the current cell cycle step. There are two main checkpoints, the G1/S and the G2/M, during the cell cycle, which preserve correct progression. ATM plays a role in cell cycle delay after DNA damage, especially after double-strand breaks (DSBs). ATM is recruited to sites of double strand breaks by DSB sensor proteins, such as the MRN complex. After being recruited, it phosphorylates NBS1, along with other DSB repair proteins. These modified mediator proteins then amplify the DNA damage signal and transduce the signals to downstream effectors such as CHK2 and p53.
Structure
The ATM gene codes for a 350 kDa protein consisting of 3056 amino acids. ATM belongs to the superfamily of phosphatidylinositol 3-kinase-related kinases (PIKKs). The PIKK superfamily comprises six Ser/Thr-protein kinases that show a sequence similarity to phosphatidylinositol 3-kinases (PI3Ks). This protein kinase family includes ATR (ATM- and RAD3-related), DNA-PKcs (DNA-dependent protein kinase catalytic subunit) and mTOR (mammalian target of rapamycin). Characteristic of ATM are five domains. These are, from N-terminus to C-terminus, the HEAT repeat domain, the FRAP-ATM-TRRAP (FAT) domain, the kinase domain (KD), the PIKK-regulatory domain (PRD) and the FAT-C-terminal (FATC) domain. The HEAT repeats directly bind to the C-terminus of NBS1. The FAT domain interacts with ATM's kinase domain to stabilize the C-terminus region of ATM itself. The KD domain harbors the kinase activity, while the PRD and the FATC domain regulate it. The structure of ATM has been solved in several publications using cryo-EM. In the inactive form, the protein forms a homodimer. In the canonical pathway, ATM is activated by the MRN complex and autophosphorylation, forming active monomers capable of phosphorylating several hundred downstream targets. In the non-canonical pathway, e.g. through stimulation by oxidative stress, the dimer can be activated by the formation of disulfide bonds. The entire N-terminal domain, together with the FAT domain, adopts an α-helical structure, which was initially predicted by sequence analysis. This α-helical structure forms a tertiary structure with a curved, tubular shape, present for example in the Huntingtin protein, which also contains HEAT repeats. FATC is the C-terminal domain with a length of about 30 amino acids. It is highly conserved and consists of an α-helix.
Function
A complex of the three proteins MRE11, RAD50 and NBS1 (XRS2 in yeast), called the MRN complex in humans, recruits ATM to double strand breaks (DSBs) and holds the two ends together. ATM directly interacts with the NBS1 subunit and phosphorylates the histone variant H2AX on Ser139. This phosphorylation generates binding sites for adaptor proteins with a BRCT domain. These adaptor proteins then recruit different factors including the effector protein kinase CHK2 and the tumor suppressor p53. The ATM-mediated DNA damage response consists of a rapid and a delayed response. The effector kinase CHK2 is phosphorylated and thereby activated by ATM. Activated CHK2 phosphorylates phosphatase CDC25A, which is degraded thereupon and can no longer dephosphorylate CDK1-cyclin B, resulting in cell-cycle arrest. If the DSB can not be repaired during this rapid response, ATM additionally phosphorylates MDM2 and p53 at Ser15. p53 is also phosphorylated by the effector kinase CHK2. These phosphorylation events lead to stabilization and activation of p53 and subsequent transcription of numerous p53 target genes including CDK inhibitor p21 which lead to long-term cell-cycle arrest or even apoptosis.
The protein kinase ATM may also be involved in mitochondrial homeostasis, as a regulator of mitochondrial autophagy (mitophagy) whereby old, dysfunctional mitochondria are removed. Increased ATM activity also occurs in viral infection where ATM is activated early during dengue virus infection as part of autophagy induction and ER stress response.
Regulation
A functional MRN complex is required for ATM activation after DSBs. The complex functions upstream of ATM in mammalian cells and induces conformational changes that facilitate an increase in the affinity of ATM towards its substrates, such as CHK2 and p53.
Inactive ATM is present in the cells without DSBs as dimers or multimers. Upon DNA damage, ATM autophosphorylates on residue Ser1981. This phosphorylation provokes dissociation of ATM dimers, which is followed by the release of active ATM monomers. Further autophosphorylation (of residues Ser367 and Ser1893) is required for normal activity of the ATM kinase. Activation of ATM by the MRN complex is preceded by at least two steps, i.e. recruitment of ATM to DSB ends by the mediator of DNA damage checkpoint protein 1 (MDC1) which binds to MRE11, and the subsequent stimulation of kinase activity with the NBS1 C-terminus.
The three domains FAT, PRD and FATC are all involved in regulating the activity of the KD kinase domain. The FAT domain interacts with ATM's KD domain to stabilize the C-terminus region of ATM itself. The FATC domain is critical for kinase activity and highly sensitive to mutagenesis. It mediates protein-protein interaction for example with the histone acetyltransferase TIP60 (HIV-1 Tat interacting protein 60 kDa), which acetylates ATM on residue Lys3016. The acetylation occurs in the C-terminal half of the PRD domain and is required for ATM kinase activation and for its conversion into monomers. While deletion of the entire PRD domain abolishes the kinase activity of ATM, specific small deletions show no effect.
Germline mutations and cancer risk
People who carry a heterozygous ATM mutation have increased risk of mainly pancreatic cancer, prostate cancer, stomach cancer and invasive ductal carcinoma of the breast. Homozygous ATM mutation confers the disease ataxia–telangiectasia (AT), a rare human disease characterized by cerebellar degeneration, extreme cellular sensitivity to radiation and a predisposition to cancer. All AT patients contain mutations in the ATM gene. Most other AT-like disorders are defective in genes encoding the MRN protein complex. One feature of the ATM protein is its rapid increase in kinase activity immediately following double-strand break formation. The phenotypic manifestation of AT is due to the broad range of substrates for the ATM kinase, involving DNA repair, apoptosis, G1/S, intra-S checkpoint and G2/M checkpoints, gene regulation, translation initiation, and telomere maintenance. Therefore, a defect in ATM has severe consequences in repairing certain types of damage to DNA, and cancer may result from improper repair. AT patients have an increased risk for breast cancer that has been ascribed to ATM's interaction and phosphorylation of BRCA1 and its associated proteins following DNA damage.
Somatic mutations in sporadic cancers
Mutations in the ATM gene are found at relatively low frequencies in sporadic cancers. According to COSMIC, the Catalogue Of Somatic Mutations In Cancer, the frequencies with which heterozygous mutations in ATM are found in common cancers include 0.7% in 713 ovarian cancers, 0.9% in central nervous system cancers, 1.9% in 1,120 breast cancers, 2.1% in 847 kidney cancers, 4.6% in colon cancers, 7.2% among 1,040 lung cancers and 11.1% in 1790 hematopoietic and lymphoid tissue cancers. Certain kinds of leukemias and lymphomas, including mantle cell lymphoma, T-ALL, atypical B cell chronic lymphocytic leukemia, and T-PLL are also associated with ATM defects. A comprehensive literature search on ATM deficiency in pancreatic cancer, that captured 5,234 patients, estimated that the total prevalence of germline or somatic ATM mutations in pancreatic cancer was 6.4%. ATM mutations may serve as predictive biomarkers of response for certain therapies, since preclinical studies have found that ATM deficiency can sensitise some cancer types to ATR inhibition.
Frequent epigenetic deficiencies of ATM in cancers
ATM is one of the DNA repair genes frequently hypermethylated in its promoter region in various cancers (see table of such genes in Cancer epigenetics). The promoter methylation of ATM causes reduced protein or mRNA expression of ATM.
More than 73% of brain tumors were found to be methylated in the ATM gene promoter and there was strong inverse correlation between ATM promoter methylation and its protein expression (p < 0.001).
The ATM gene promoter was observed to be hypermethylated in 53% of small (impalpable) breast cancers and was hypermethylated in 78% of stage II or greater breast cancers with a highly significant correlation (P = 0.0006) between reduced ATM mRNA abundance and aberrant methylation of the ATM gene promoter.
In non-small cell lung cancer (NSCLC), the ATM promoter methylation status of paired tumors and surrounding histologically uninvolved lung tissue was found to be 69% and 59%, respectively. However, in more advanced NSCLC the frequency of ATM promoter methylation was lower at 22%. The finding of ATM promoter methylation in surrounding histologically uninvolved lung tissue suggests that ATM deficiency may be present early in a field defect leading to progression to NSCLC.
In squamous cell carcinoma of the head and neck, 42% of tumors displayed ATM promoter methylation.
DNA damage appears to be the primary underlying cause of cancer, and deficiencies in DNA repair likely underlie many forms of cancer. If DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage may increase mutational errors during DNA replication due to error-prone translesion synthesis. Excess DNA damage may also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations may give rise to cancer. The frequent epigenetic deficiency of ATM in a number of cancers likely contributed to the progression of those cancers.
Meiosis
ATM functions during meiotic prophase. The wild-type ATM gene is expressed at a four-fold increased level in human testes compared to somatic cells (such as skin fibroblasts). In both mice and humans, ATM deficiency results in female and male infertility. Deficient ATM expression causes severe meiotic disruption during prophase I. In addition, impaired ATM-mediated DNA DSB repair has been identified as a likely cause of aging of mouse and human oocytes. Expression of the ATM gene, as well as other key DSB repair genes, declines with age in mouse and human oocytes and this decline is paralleled by an increase of DSBs in primordial follicles. These findings indicate that ATM-mediated homologous recombinational repair is a crucial function of meiosis.
Inhibitors
Several ATM kinase inhibitors are currently known, some of which are already in clinical trials. One of the first discovered ATM inhibitors is caffeine, with an IC50 of 0.2 mM and only a low selectivity within the PIKK family. Wortmannin is an irreversible inhibitor of ATM with no selectivity over other related PIKK and PI3K kinases. The most important group of inhibitors are compounds based on the 3-methyl-1,3-dihydro-2H-imidazo[4,5-c]quinolin-2-one scaffold. The first important representative is the inhibitor Dactolisib (NVP-BEZ235), which was first published by Novartis as a selective mTOR/PI3K inhibitor. It was later shown to also inhibit other PIKK kinases such as ATM, DNA-PK and ATR. Various optimisation efforts by AstraZeneca (AZD0156, AZD1390), Merck (M4076) and Dimitrov et al. have led to highly active ATM inhibitors with greater potency.
Interactions
Ataxia telangiectasia mutated has been shown to interact with:
Abl gene,
BRCA1,
Bloom syndrome protein,
DNA-PKcs,
FANCD2,
MRE11A,
Nibrin,
P53,
RAD17,
RAD51,
RBBP8,
RHEB,
RRM2B,
SMC1A
TERF1, and
TP53BP1.
Tefu
The Tefu protein of Drosophila melanogaster is a structural and functional homolog of the human ATM protein. Tefu, like ATM, is required for DNA repair and normal levels of meiotic recombination in oocytes.
See also
Ataxia telangiectasia
Ataxia telangiectasia and Rad3 related
References
Further reading
External links
https://web.archive.org/web/20060107000211/http://www.hprd.org/protein/06347
Drosophila telomere fusion - The Interactive Fly
GeneReviews/NCBI/NIH/UW entry on Ataxia telangiectasia
OMIM entries on Ataxia telangiectasia
Proteins
EC 2.7.11 | ATM serine/threonine kinase | [
"Chemistry"
] | 3,196 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
2,195,217 | https://en.wikipedia.org/wiki/Gopher | Pocket gophers, commonly referred to simply as gophers, are burrowing rodents of the family Geomyidae. The roughly 41 species are all endemic to North and Central America. They are commonly known for their extensive tunneling activities and their ability to destroy farms and gardens.
The name "pocket gopher" on its own may refer to any of a number of genera within the family Geomyidae. These are the "true" gophers, but several ground squirrels in the distantly related family Sciuridae are often called "gophers", as well. The origin of the word "gopher" is uncertain; the French gaufre, meaning waffle, has been suggested, on account of the gopher tunnels resembling the honeycomb-like pattern of holes in a waffle; another suggestion is that the word is of Muskogean origin.
Description
Pocket gophers weigh around , and are about in body length, with a tail long. A few species reach weights approaching . Within any particular gopher species, the males are larger than the females, and can be nearly double their weight.
Average lifespans are one to three years. The maximum lifespan for the pocket gopher is about five years. Some gophers, such as those in the genus Geomys, have lifespans that have been documented as up to seven years in the wild.
Most gophers have brown fur that often closely matches the color of the soil in which they live. Their most characteristic features are their large cheek pouches, from which the word "pocket" in their name derives. These pouches are fur-lined, can be turned inside out, and extend from the side of the mouth well back onto the shoulders. Gophers have small eyes and a short, hairy tail, which they use to feel around tunnels when they walk backwards.
Pocket gophers have often been found to carry external parasites including, most commonly, lice, but also ticks, fleas, and mites. Common predators of the gopher include weasels, snakes, and hawks.
Behavior
All pocket gophers create a network of tunnel systems that provide protection and a means of collecting food. They are larder hoarders, and their cheek pouches are used for transporting food back to their burrows. Gophers can collect large hoards. Unlike ground squirrels, gophers do not live in large communities and seldom find themselves above ground. Tunnel entrances can be identified by small piles of loose soil covering the opening. Burrows are in many areas where the soil is softer and easily tunneled. Gophers often visit vegetable gardens, lawns, or farms, as they like moist soil (see Soil biomantle). This has led to their frequent treatment as pests.
Gophers eat plant roots, shrubs, and other vegetables such as carrots, lettuce, radishes, and any other vegetables with juice. Some species are considered agricultural pests. The resulting destruction of plant life then leaves the area a stretch of denuded soil. At the same time, the soil disturbance created by turning it over can lead to the early establishment of ecological succession in communities of r-selected and other ruderal plant species. The stashing and subsequent decomposition of plant material in the gophers' larder can produce deep fertilization of the soil.
Pocket gophers are solitary outside of the breeding season, aggressively maintaining territories that vary in size depending on the resources available. Males and females may share some burrows and nesting chambers if their territories border each other, but in general, each pocket gopher inhabits its own individual tunnel system. Although they attempt to flee when threatened, they may attack other animals, including cats and humans, and can inflict serious bites with their long, sharp teeth.
Depending on the species and local conditions, pocket gophers may have a specific annual breeding season, or may breed repeatedly through the year. Each litter typically consists of two to five young, although this may be much higher in some species. The young are born blind and helpless and are weaned when around 40 days old.
Control
Geomys and Thomomys species are classed as "prohibited new organisms" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing them from being imported into the country.
Classification
Much debate exists among taxonomists about which races of pocket gophers should be recognized as full species, and the following list cannot be regarded as definitive.
Family Geomyidae
Genus Cratogeomys; some authors treat this genus as a subgenus of Pappogeomys.
Yellow-faced pocket gopher (Cratogeomys castanops)
Oriental Basin pocket gopher (C. fulvescens)
Smoky pocket gopher (C. fumosus)
Goldman's pocket gopher (C. goldmani)
Merriam's pocket gopher (C. merriami)
Perote pocket gopher (C. perotensis)
Volcan de Toluca pocket gopher (C. planiceps)
Genus Geomys – eastern pocket gophers; principally live in the southwestern United States, east of the Sierra Nevada mountains
Desert pocket gopher (Geomys arenarius)
Attwater's pocket gopher (G. attwateri)
Baird's pocket gopher (G. breviceps)
Plains pocket gopher (G. bursarius)
Hall's pocket gopher (G. jugossicularis)
Knox Jones's pocket gopher (G. knoxjonesi)
Sand Hills pocket gopher (G. lutescens)
Texas pocket gopher (G. personatus)
Southeastern pocket gopher (G. pinetis)
Strecker's pocket gopher (G. streckeri)
Central Texas pocket gopher (G. texensis)
Tropical pocket gopher (G. tropicalis)
Genus Heterogeomys – giant pocket gophers or taltuzas; live in Mexico, Central America, and Colombia; some authors treat this genus as a subgenus of Orthogeomys.
Chiriqui pocket gopher (Heterogeomys cavator)
Cherrie's pocket gopher (H. cherriei)
Darien pocket gopher (H. dariensis)
Variable pocket gopher (H. heterodus)
Hispid pocket gopher (H. hispidus)
Big pocket gopher (H. lanius)
Underwood's pocket gopher (H. underwoodi)
Genus Orthogeomys; live in Guatemala, Honduras, and Mexico;
Giant pocket gopher (O. grandis)
Genus Pappogeomys; live in Mexico
Buller's pocket gopher (P. bulleri)
Genus Thomomys – western pocket gophers; widely distributed in North America, extending into the northwestern US, Canada, and the southeastern US.
Black-and-Brown pocket gopher (T. atrovarius)
Botta's pocket gopher (T. bottae)
Camas pocket gopher (T. bulbivorus)
Wyoming pocket gopher (T. clusius)
Idaho pocket gopher (T. idahoensis)
Mazama pocket gopher (T. mazama)
Mountain pocket gopher (T. monticola)
Nayar pocket gopher (T. nayarensis)
Sierra Madre Occidental pocket gopher (T. sheldoni)
Northern pocket gopher (T. talpoides)
Townsend's pocket gopher (T. townsendii)
Southern pocket gopher (T. umbrinus)
Genus Zygogeomys
Michoacan pocket gopher (Zygogeomys trichopus)
Some sources also list a genus Hypogeomys, with one species, but this genus name is normally used for the Malagasy giant rat, which belongs to the family Nesomyidae.
In popular culture
Minnesota is nicknamed the "Gopher State", and the University of Minnesota's athletics teams are collectively known as the Golden Gophers, led by mascot Goldy Gopher. The Golden Gopher, however, refers to the Thirteen-lined ground squirrel, which is not a member of the Geomyidae family.
Gainer the Gopher is the mascot of the Saskatchewan Roughriders in the Canadian Football League.
Gopher is a recurring character in Disney's Winnie the Pooh franchise.
A gopher puppet is featured prominently in the film Caddyshack and the sequel.
The mascot of the Go programming language is the Go Gopher.
Gordon the Gopher is an English puppet gopher that appeared on Children's BBC between 1985 and 1987.
Mac and Tosh, from the Looney Tunes franchise, are a pair of extremely well-mannered gophers.
See also
Mole
Naked mole rat
References
External links
Article on the Animal Diversity Web site
Rodents of Central America
Rodents of Canada
Rodents of Mexico
Rodents of the United States
Agricultural pests
Fauna of the Western United States
Fauna of the California chaparral and woodlands
Rodents by common name
Extant Eocene first appearances
Taxa named by Charles Lucien Bonaparte | Gopher | [
"Biology"
] | 1,876 | [
"Pests (organism)",
"Agricultural pests"
] |
2,195,233 | https://en.wikipedia.org/wiki/Patern%C3%B2%E2%80%93B%C3%BCchi%20reaction | The Paternò–Büchi reaction, named after Emanuele Paternò and George Büchi, who established its basic utility and form, is a photochemical reaction, specifically a 2+2 photocycloaddition, which forms four-membered oxetane rings from the reaction of a photochemically excited carbonyl compound with an alkene.
With substrates benzaldehyde and 2-methyl-2-butene the reaction product is a mixture of structural isomers:
Another substrate set is benzaldehyde and furan or heteroaromatic ketones and fluorinated alkenes.
The alternative strategy for the above reaction is called the Transposed Paternò−Büchi reaction.
See also
Aza Paternò−Büchi reaction - the aza-equivalent of the Paternò–Büchi reaction
Enone–alkene cycloadditions - photochemical reaction of an enone with an alkene to give a cyclobutene ring unit
References
Photochemistry
Organic reactions
Name reactions
Oxygen heterocycle forming reactions
Coupling reactions | Paternò–Büchi reaction | [
"Chemistry"
] | 221 | [
"Coupling reactions",
"Organic reactions",
"Name reactions",
"nan",
"Ring forming reactions"
] |
2,195,359 | https://en.wikipedia.org/wiki/Fire%20protection%20engineering | Fire protection engineering is the application of science and engineering principles to protect people, property, and their environments from the harmful and destructive effects of fire and smoke. It encompasses engineering that focuses on fire detection, suppression and mitigation, as well as fire safety engineering, which focuses on human behavior and on maintaining a tenable environment for evacuation from a fire. In the United States, 'fire protection engineering' is often used to include 'fire safety engineering'.
The discipline of fire engineering includes, but is not exclusive to:
Fire detection – fire alarm systems and brigade call systems
Active fire protection – fire suppression systems
Passive fire protection – fire and smoke barriers, space separation
Smoke control and management
Escape facilities – emergency exits, fire lifts, etc.
Building design, layout, and space planning
Fire prevention programs
Fire dynamics and fire modeling
Human behavior during fire events
Risk analysis, including economic factors
Wildfire management
Fire protection engineers identify risks and design safeguards that aid in preventing, controlling, and mitigating the effects of fires. Fire engineers assist architects, building owners and developers in evaluating buildings' life safety and property protection goals. Fire engineers are also employed as fire investigators, including such very large-scale cases as the analysis of the collapse of the World Trade Center. NASA uses fire engineers in its space program to help improve safety. Fire engineers are also employed to provide 3rd party review for performance based fire engineering solutions submitted in support of local building regulation applications.
History
Fire engineering's roots date back to ancient Rome, when the Emperor Nero ordered the city to be rebuilt utilizing passive fire protection methods, such as space separation and non-combustible building materials, after a catastrophic fire. The discipline of fire engineering emerged in the early 20th century as a distinct discipline, separate from civil, mechanical and chemical engineering, in response to new fire problems posed by the Industrial Revolution. Fire protection engineers of this era concerned themselves with devising methods to protect large factories, particularly spinning mills and other manufacturing properties. Another motivation to organize the discipline, define practices and conduct research to support innovations was in response to the catastrophic conflagrations and mass urban fires that swept many major cities during the latter half of the 19th century (see city or area fires). The insurance industry also helped promote advancements in the fire engineering profession and the development of fire protection systems and equipment.
In 1903 the first degree program in fire protection engineering was initiated as the Armour Institute of Technology (later becoming part of the Illinois Institute of Technology).
As the 20th century emerged, several catastrophic fires resulted in changes to building codes to better protect people and property from fire. It was only in the latter half of the 20th century that fire protection engineering emerged as a unique engineering profession. The primary reason for this emergence was the development of the "body of knowledge" specific to the profession that occurred after 1950. Other factors contributing to the growth of the profession include the start of the Institution of Fire Engineers in 1918 in the UK and the Society of Fire Protection Engineers in 1950 in the US, the emergence of independent fire protection consulting engineers, and the promulgation of engineering standards for fire protection.
Education
Fire engineers, like their counterparts in other engineering and scientific disciplines, undertake a formal course of education and continuing professional development to acquire and maintain their competence. This education typically includes foundation studies in mathematics, physics, chemistry, and technical writing. Professional engineering studies focus students on acquiring proficiency in material science, statics, dynamics, thermodynamics, fluid dynamics, heat transfer, engineering economics, ethics, systems in engineering, reliability, and environmental psychology. Studies in combustion, probabilistic risk assessment or risk management, the design of fire suppression systems, fire alarm systems, building fire safety, and the application and interpretation of model building codes, and the measurement and simulation of fire phenomena complete most curricula.
New Zealand was one of the first countries in the world to introduce performance-based assessment methods into its building codes with regard to fire safety. This occurred with the introduction of its 1991 Building Act. Professor Andy Buchanan, of the University of Canterbury, established New Zealand's first postgraduate course in fire safety engineering in 1995, at the time the only such course in the country. Applicants to the course require a minimum qualification of a bachelor's degree in engineering or a bachelor's degree in a limited list of science courses. Notable alumni of the University of Canterbury include Sir Ernest Rutherford, Robert (Bob) Park, Roy Kerr, Michael P. Collins, and John Britten. A master's degree in fire engineering from the University of Canterbury is recognized under the Washington Accord.
In the United States, the University of Maryland (UMD) offers the ABET-accredited B.S. degree program in Fire Protection Engineering, as well as graduate degrees and a distance M.Eng. program. Worcester Polytechnic Institute (WPI) offers an M.S. and a Ph.D. in Fire Protection Engineering as well as online graduate programs in this discipline (M.S. and a Graduate Certificate). Cal Poly offers an M.S. in Fire Protection Engineering. Oklahoma State University offers an ABET-accredited B.S. in Fire Protection and Safety Engineering Technology (established in 1937), Eastern Kentucky University also offers an ABET-accredited B.S. in Fire Protection and Safety Engineering Technology, the Case School of Engineering at Case Western Reserve University offers a master's degree track in Fire Science and Engineering, the University of New Haven offers a B.S. in Fire Protection Engineering, and the University of Cincinnati offers an associate degree in Fire Science and a bachelor's degree in Fire and Safety Engineering Technology as distance learning options, the only university in the U.S. and Canada to hold this distinction. Other institutions, such as the University of Kansas, Illinois Institute of Technology, University of California, Berkeley, University of California, San Diego, Eastern Kentucky University, and the University of Texas at Austin, have offered or currently offer courses in Fire Protection Engineering or technology.
Canada has fire engineering programs at York University and the University of Waterloo.
The final design and hydraulic calculations of fire sprinkler systems are commonly performed by design technicians, who are often educated in-house at contracting firms throughout North America, with the objective of preparing designers for certification through testing by associations such as NICET (National Institute for Certification in Engineering Technologies). NICET certification is commonly used as proof of competency for securing a license to design and install fire protection systems.
In Europe, the University of Edinburgh offers a degree in Fire Engineering and had its first fire research group in the 1970s. These activities are now conducted at the new BRE Centre for Fire Safety Engineering. The University of Leeds uniquely offers an MSc award in Fire and Explosion Engineering.
Other European Universities active in fire engineering are:
Bergische Universität Wuppertal
Ghent University
Imperial College London
Letterkenny Institute of Technology
Linnaeus University
Luleå University of Technology
London South Bank University
Lund University
Norwegian University of Science and Technology
Otto-von-Guericke-Universität Magdeburg
Stord/Haugesund University College
University of Applied Sciences Cologne
University of Cantabria
University of Central Lancashire
University of Greenwich
University of Manchester
University of Poitiers
University of Sheffield
University of Ulster
University of Wales (Newport)
University of Warwick
Glasgow Caledonian University
Vilnius Gediminas Technical University
The University of Ulster introduced its first fire safety programmes in 1975, followed by the first MSc programme in fire safety engineering in the United Kingdom, introduced in 1990. In 2005 this MSc programme celebrated 25 years of unbroken service to higher fire safety engineering education. In 2004 the Institute for Fire Safety Engineering and Technology at the University of Ulster (FireSERT) occupied its new fire safety engineering laboratories, which were funded by a £6 million infrastructure award. The new facilities are state-of-the-art fire safety engineering laboratories, including a large-scale burn hall and a 10-megawatt calorimeter.
In Australia, Victoria University in Melbourne offers postgraduate courses in Building Fire Safety and Risk Engineering as does the University of Western Sydney. The Centre for Environmental Safety and Risk Engineering (CESARE) is a research unit under Victoria University and has facilities for research and testing of fire behaviour. The Charles Darwin University and the University of Queensland have active programs.
Asian universities active in fire engineering include: Hong Kong Polytechnic University, Tokyo University of Science, Toyohashi University of Technology, and the University of Science and Technology of China.
Professional registration
Suitably qualified and experienced fire protection engineers may qualify for registration as a professional engineer. The recognition of fire protection engineering as a separate discipline varies from state to state in the United States. NCEES recognizes Fire Protection Engineering as a separate discipline and offers a PE exam subject. This test was last updated for the October 2012 exam and includes the following major topics (percentages indicate approximate weight of topic):
Fire Protection Analysis (20%)
Fire Protection Management (5%)
Fire Dynamics (12.5%)
Active and Passive Systems (50%)
Egress and Occupant Movement (12.5%)
Few countries outside the United States regulate the professional practice of fire protection engineering as a discipline, although they may restrict the use of the title 'engineer' in association with its practice.
The titles 'fire engineer' and 'fire safety engineer' tend to be preferred outside the United States, especially in the United Kingdom and Commonwealth countries influenced by the British fire service.
The Institution of Fire Engineers is one international organization that qualifies many aspects of the training and qualifications of fire engineers and has the power to offer chartered status.
See also
Architecture
Architectural engineering
Building services engineering
Fire test
Institution of Fire Engineers
Listing and approval use and compliance
Product certification
References
External links
Society of Fire Protection Engineers website
Indian Institute of Fire Engineering - MSBTE recognized Fire Engineering Institute
Fire protection | Fire protection engineering | [
"Engineering"
] | 1,970 | [
"Building engineering",
"Fire protection"
] |
13,376,506 | https://en.wikipedia.org/wiki/AlphaWindows | AlphaWindows was a proposed industry standard from the Display Industry Association (an industry consortium in California) in the early 1990s that would allow a single CRT screen to implement multiple windows, each of which was to behave as a distinct computer terminal. Individual vendors offered products based on it from 1992 through the end of the 1990s.
These products were targeted at a low-end market.
The initial concept relied on custom (but low-cost) terminals that would support mouse interaction, (text) windowing, and colored text. With that, plus special host software, the vendors proposed to support semi-graphical applications "transparently".
Organization
The Display Industry Association was located at the same street address in Palo Alto, CA as Cumulus Technology, a manufacturer of displays since 1986. Cumulus was heavily involved in the development of the AlphaWindows standard. The members of the association in 1993 were:
Terminal vendors
AT&T / NCR / ADDS (partnership)
Cumulus
DEC
Link / Wyse (partnership)
Microvitec
Siemens / Nixdorf (partnership)
TeleVideo
Software vendors
Cumulus
JSB
Nutec
SSSI
Only Cumulus was proposing both to develop the terminals and the host software. However, Cumulus did not survive: it went bankrupt.
Software
JSB Software Technologies produced MultiView Mascot. As noted in Unix Review:
The product is now owned by FutureSoft.
SSSI (Structured Software Solutions, Inc.) produced the FacetTerm session multiplexer.
References
See also
X terminal
Twin
Text user interface | AlphaWindows | [
"Technology",
"Engineering"
] | 317 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering"
] |
13,376,878 | https://en.wikipedia.org/wiki/Decoy%20cells | Decoy cells are virally infected epithelial cells that can be found in the urine. Decoy cells owe their name to their strong resemblance to cancer cells, and may as such confuse the diagnosis of either viral infection or urothelial malignancy. During the 1950s, Andrew Ricci, a cytotechnologist working with the renowned cytopathologist Dr. Leopold G. Koss, observed cells that mimicked cancer cells but were not malignant in a group of persons working in certain industries; he referred to them as “decoy cells”, by analogy with the “decoy ducks” used in hunting wild ducks.
Epidemiology and presentation
Decoy cells are mostly prevalent in immunocompromised individuals, such as transplant recipients who are treated with immunosuppressive medication so that their immune system does not reject the transplanted organ. Several viruses mediate the emergence of decoy cells, among them cytomegalovirus and polyomavirus. Decoy cells are virus-infected urothelial cells with a distinct morphology of enlarged nuclei and intranuclear inclusions. In renal transplant recipients, such cells may be found in up to 40 percent of cases. Decoy cells are clinically relevant since they may be used as a prognostic marker for clinical conditions such as polyomavirus BK-induced nephropathy in renal transplant recipients, and haemorrhagic cystitis in haematopoietic stem cell transplant recipients.
Diagnosis
Decoy cells can be seen in a urine sample through Papanicolaou staining or phase-contrast microscopy. By Papanicolaou stain, most decoy cells have an enlarged nucleus that bears a basophilic inclusion which is surrounded by chromatin that confers a ground-glass or gelatinous appearance. Sometimes the nuclear inclusion has a vesicular aspect, the chromatin may be clumped, and it may be surrounded by a halo. When decoy cells derive from the urothelium, the heavily enlarged and altered nuclei as well as the irregular shape of the cell body can mimic the changes observed in neoplastic cells.
By phase-contrast microscopy, decoy cells show the same abnormalities described for stained specimens, namely enlargement of the nucleus with a ground-glass or vesicular appearance, altered chromatin, enlarged nucleoli, the presence of a halo, and at times also cytoplasmic vacuoles. These features make decoy cells different from tubular cells and transitional cells found in all other conditions. The only exception is represented by cells infected by cytomegalovirus, which frequently show a ‘bird's eye’ appearance.
As such, decoy cells may strongly resemble malignant cancer cells, from which they also derive their name: they can be mistaken for cancer cells, or, the other way around, cancer cells can be mistaken for decoy cells.
Signs and symptoms
Decoy cells themselves do not cause any disease, and they may be found in the urine of healthy individuals. In immunodeficient individuals, such as transplant recipients or severely immunocompromised HIV-infected individuals, viruses in general more often reactivate owing to a lack of immunologic surveillance. As such, in these individuals, decoy cells are also seen more frequently.
The viruses that induce the emergence of decoy cells may cause disease, but again mainly in immunocompromised individuals. Cytomegalovirus may be the cause of retinitis, respiratory symptoms, or enteritis. Polyomaviruses may cause progressive multifocal leukoencephalopathy (JC virus) and polyomavirus-associated nephropathy, ureteral stenosis, and hemorrhagic cystitis (BK virus). The latter condition mainly occurs in hematopoietic stem cell transplant recipients.
Several publications have tried to use decoy cells as a prognostic marker for polyomavirus-associated diseases such as polyomavirus BK-associated nephropathy (BKVAN), a condition occurring only in immunocompromised individuals and especially in renal transplant recipients. BKVAN is a condition wherein overt replication of polyomavirus BK causes an interstitial inflammation in a kidney.
Treatment
Decoy cells alone do not need to be treated since they do not necessarily indicate pathology. However, in the context of overt viral replication against the background of immunodeficiency, the viruses that cause the emergence of decoy cells must be treated. For polyomavirus BK, only the restoration of immunologic function and the subsequent reconstitution of cells with antiviral activity, such as natural killer cells and cytotoxic T cells, has proven to be effective. Restoration of the immune system can be achieved via different paths according to the different patient groups. For example, in severely immunocompromised HIV patients, previously called AIDS patients, immunologic function can be restored by treatment with highly active anti-retroviral therapy. In kidney transplant recipients who are treated with immunosuppressive agents, immunologic function can be restored by tapering the immunosuppressive regimen. Other agents that have been proposed to target polyomavirus BK, such as cidofovir, fluoroquinolones, leflunomide, and statins, are far from established, and the published results on their effectiveness are conflicting. Also, some of these agents may cause severe long-lasting side effects.
References
Further reading
Koss LG. On decoy cells. Acta Cytol. 2005 May-Jun;49(3):233-4.
Rosenthal DL, Wojcik EM, Kurtycz DFI. The Paris System for Reporting Urinary Cytology. 2015.
Assis PG, Carvalho MDGDC. Human polyomavirus infection: Cytological and molecular diagnosis. Rev Assoc Med Bras (1992). 2017 Nov;63(11):943-945.
Epithelial cells
Histopathology
Virology | Decoy cells | [
"Chemistry"
] | 1,282 | [
"Histopathology",
"Microscopy"
] |
13,377,745 | https://en.wikipedia.org/wiki/Dimetra | DIMETRA IP is the brand name under which Motorola markets its implementation of the TETRA digital radio communications standard. When Motorola split into Motorola Solutions and Motorola Mobility in 2011, Motorola Solutions retained Dimetra and other public safety brands and products while Motorola Mobility retained smartphones and other consumer products. Both companies continue to share the "Batwings" logo.
Overview
TETRA is a scalable radio network technology used by emergency services, other government agencies and the private sector. Dimetra IP is built around Motorola's IP (Internet Protocol) core technology, while TETRA is an open standard maintained by the European Telecommunications Standards Institute. The technology is similar in nature to GSM but operates in a more secure and resilient manner due to capabilities built into the standard.
Motorola's range of TETRA products including Network Infrastructure, Mobile Radios and Services carry the Dimetra brand.
Major Dimetra installations by country
Indonesia (PT. Chevron Pacific Indonesia)
United Kingdom (Airwave)
Netherlands (C2000), decommissioned since 2020
Portugal (Emergency Network: Police, Medical Emergency Response Units) (Nationwide)
Taiwan (Taiwan High Speed Rail)
Hong Kong (Police)
Pakistan Ministry of Interior (Police)
India (Delhi Metro)
Denmark (Nationwide 99.5% coverage)
China (Shanghai Police)
Haiti (MINUSTAH)
Ireland (Garda, Ambulance, Prison, Naval, Customs, Revenue and Public Works) (Nationwide)
Greece (Police)
Jersey (case study)
Israel (Israel Mountain Rose)
Sri Lanka (NAV)
Poland (Police in Warsaw, Szczecin, Lodz, Kraków)
Australia (QGC / Shell / Woodside)
Australia (Zeon Digital)
Countrywide Dimetra installation in various stages of deployment (2007)
Austria
Norway
Lithuania
Australia
Maldives
Countrywide Dimetra installation in various stages of deployment (2010)
Saudi Arabia
Libya
References
Motorola TETRA
Tetra Association
External links
Tetra in Norway
Tetra in Poland
Tetra in Portugal
Mobile telecommunications standards
Mobile radio telephone systems | Dimetra | [
"Technology"
] | 404 | [
"Mobile telecommunications",
"Mobile radio telephone systems",
"Mobile telecommunications standards"
] |
13,377,974 | https://en.wikipedia.org/wiki/Ecological%20trap | Ecological traps are scenarios in which rapid environmental change leads organisms to prefer to settle in poor-quality habitats.
The concept stems from the idea that organisms that are actively selecting habitat must rely on environmental cues to help them identify high-quality habitat. If either the habitat quality or the cue changes so that one does not reliably indicate the other, organisms may be lured into poor-quality habitat.
Overview
Ecological traps are thought to occur when the attractiveness of a habitat increases disproportionately in relation to its value for survival and reproduction. The result is preference for falsely attractive habitat and a general avoidance of high-quality but less-attractive habitats. For example, indigo buntings typically nest in shrubby habitat or broken forest transitions between closed canopy forest and open field. Human activity can create 'sharper', more abrupt forest edges, and buntings prefer to nest along these edges. However, these artificial sharp forest edges also concentrate the movement of predators that prey on their nests. In this way, buntings prefer to nest in highly altered habitats where their nest success is lowest.
While the demographic consequences of this type of maladaptive habitat selection behavior have been explored in the context of source–sink dynamics, ecological traps are an inherently behavioral phenomenon of individuals. Despite being a behavioural mechanism, ecological traps can have far-reaching population consequences for species with large dispersal capabilities, such as the grizzly bear (Ursus arctos). The ecological trap concept was introduced in 1972 by Dwernychuk and Boag, and the many studies that followed suggested that this trap phenomenon may be widespread because of anthropogenic habitat change.
As a corollary, novel environments may represent fitness opportunities that are unrecognized by native species if high-quality habitats lack the appropriate cues to encourage settlement; these are known as perceptual traps. Theoretical and empirical studies have shown that errors made in judging habitat quality can lead to population declines or extinction. Such mismatches are not limited to habitat selection, but may occur in any behavioral context (e.g. predator avoidance, mate selection, navigation, foraging site selection, etc.). Ecological traps are thus a subset of the broader phenomena of evolutionary traps.
As ecological trap theory developed, researchers have recognized that traps may operate on a variety of spatial and temporal scales which might also hinder their detection. For example, because a bird must select habitat on several scales (a habitat patch, an individual territory within that patch, as well as a nest site within the territory), traps may operate on any one of these scales. Similarly, traps may operate on a temporal scale so that an altered environment may appear to cause a trap in one stage of an organism's life, yet have positive effects on later life stages. As a result, there has been a great deal of uncertainty as to how common traps may be, despite widespread acceptance as a theoretical possibility. However, given the accelerated rate of ecological change driven by human land-use change, global warming, exotic species invasions, and changes in ecological communities resulting from species loss, ecological traps may be an increasing and highly underappreciated threat to biodiversity.
A 2006 review of the literature on ecological traps provides guidelines for demonstrating the existence of an ecological trap. A study must show a preference for one habitat over another (or equal preference) and that individuals selecting the preferred habitat (or equally preferred habitat) have lower fitness (i.e., experience lower survival or reproductive success). Since the publication of that paper which found only a few well-documented examples of ecological traps, interest in ecological and evolutionary traps has grown very rapidly and new empirical examples are being published at an accelerating rate. There are now roughly 30 examples of ecological traps affecting a broad diversity of taxa including birds, mammals, arthropods, fish and reptiles.
Because ecological and evolutionary traps are still very poorly understood phenomena, many questions about their proximate and ultimate causes as well as their ecological consequences remain unanswered. Are traps simply an inevitable consequence of the inability of evolution to anticipate novelty or react quickly to rapid environmental change? How common are traps? Do ecological traps necessarily lead to population declines or extinctions or is it possible that they may persist indefinitely? Under what ecological and evolutionary conditions should this occur? Are organisms with certain characteristics predisposed to being "trapped"? Is rapid environmental change necessary to trigger traps? Can global warming, pollution or exotic invasive species create traps? Embracing genetic and phylogenetic approaches may provide more robust answers to the above questions as well as providing deeper insight into the proximate and ultimate basis for maladaptation in general. Because ecological and evolutionary traps are predicted to act in concert with other sources of population decline, traps are an important research priority for conservation scientists. Given the rapid current rate of global environmental change, traps may be far more common than is realized, and it will be important to examine the proximate and ultimate causes of traps if management is to prevent or eliminate traps in the future.
Polarized light pollution
Polarized light pollution is perhaps the most compelling and well-documented cue triggering ecological traps. Orientation to polarized sources of light is the most important mechanism that guides at least 300 species of dragonflies, mayflies, caddisflies, tabanid flies, diving beetles, water bugs, and other aquatic insects in their search for the water bodies they require for suitable feeding/breeding habitat and oviposition sites (Schwind 1991; Horváth and Kriska 2008). Because of their strong linear polarization signature, artificial polarizing surfaces (e.g., asphalt, gravestones, cars, plastic sheeting, oil pools, windows) are commonly mistaken for bodies of water (Horváth and Zeil 1996; Kriska et al. 1998, 2006a, 2007, 2008; Horváth et al. 2007, 2008). Light reflected by these surfaces is often more highly polarized than that of light reflected by water, and artificial polarizers can be even more attractive to polarotactic aquatic insects than a water body (Horváth and Zeil 1996; Horváth et al. 1998; Kriska et al. 1998) and appear as exaggerated water surfaces acting as supernormal optical stimuli. Consequently, dragonflies, mayflies, caddisflies, and other water-seeking species actually prefer to mate, settle, swarm, and oviposit upon these surfaces than on available water bodies.
See also
Evolutionary mismatch
Perceptual trap
Source–sink dynamics
Artificialization
Notes
References
Further reading
Caswell, H. 2001. Matrix population models: Construction, analysis, and interpretation. 2nd edition. Sinauer. Sunderland, Mass., USA.
Williams, B. K., J. D. Nichols, and M. J. Conroy. 2001. Analysis and management of animal populations. Academic Press. San Diego, USA.
Ecology terminology
Conservation biology
Ecological niche
Environmental terminology
Landscape ecology | Ecological trap | [
"Biology"
] | 1,417 | [
"Ecology terminology",
"Conservation biology"
] |
13,378,267 | https://en.wikipedia.org/wiki/European%20Journal%20of%20Combinatorics | The European Journal of Combinatorics is an international peer-reviewed scientific journal that specializes in combinatorics. The journal primarily publishes papers dealing with mathematical structures within combinatorics and/or establishing direct links between combinatorics and the theories of computing. The journal includes full-length research papers, short notes, and research problems on several topics.
The journal was founded in 1980 by Michel Deza, Michel Las Vergnas and Pierre Rosenstiehl.
The current editor-in-chief is Patrice Ossona de Mendez and the vice editor-in-chief is Marthe Bonamy.
Abstracting and indexing
The journal is abstracted and indexed in
MathSciNet,
Science Citation Index Expanded,
Scopus, and
ZbMATH Open.
The impact factor for the European Journal of Combinatorics in 2023 was 1.0.
External links
European Journal of Combinatorics
References
Combinatorics journals
Elsevier academic journals
Academic journals established in 1980 | European Journal of Combinatorics | [
"Mathematics"
] | 197 | [
"Combinatorics journals",
"Combinatorics"
] |
13,379,267 | https://en.wikipedia.org/wiki/Sebenza | The Sebenza is a folding pocket knife manufactured by Chris Reeve Knives of Boise, Idaho. It is constructed with a stainless steel blade and titanium handle. The handle functions as the lock mechanism, similar in concept to the Walker linerlock but differing in that the handle itself forms the lock bar which holds the blade open. This mechanism was invented by Chris Reeve and is called the Reeve Integral Lock (R.I.L). It is also commonly referred to as the Framelock, and is one of the most widely implemented locking systems in the folding knife industry, where lock strength and reliability are product requirements. The name Sebenza is derived from the Zulu word for "work", a tribute to Mr. Reeve's South African origins.
Design and history
There are currently two size models of the Sebenza 31, small and large. The Small 31 has a 2.99" (76.17mm) blade and the Large 31 has a 3.61" (91.69mm) blade.
First introduced in 1990, the current basic model has a sand-blasted titanium handle and a stonewashed finish CPM MagnaCut blade.
There are numerous options for the embellishment of the Sebenza's titanium handles, such as computer-generated graphics, custom (unique) graphics, or inlays such as exotic wood, micarta, or mammoth ivory.
Originally the Chris Reeve Sebenza was available with a blade of ATS-34 steel.
In 1996, the blade material was changed to BG-42 blade steel, and later in 2001, the Sebenza blade material transitioned to CPM S30V steel. CPM S30V was developed by Crucible Steel with the collaboration of Chris Reeve. Damascus steel blades are also available as an option on the Sebenza. Since 2012, all Chris Reeve knives have transitioned to CPM S35VN steel.
A feature of the Sebenza that is highly praised by users is the ease of maintenance, as CRK actually encourages the customer to disassemble and maintain the knife by including a hex wrench, as well as small tube of fluorinated grease (to lubricate the pivot) and a tube of Loctite (for screws) in the box. Another feature of the Sebenza is the use of a bushing system around the blade's pivot that keeps the blade at a constant tight fit which is always centered.
This bushing allows the user to tighten the pivot screw completely without having to manually adjust the pivot tension.
As of May 2008, the two production models—the Regular and Classic Sebenza models—were discontinued and replaced by the 'Sebenza 21' (named so as to commemorate the 21st year of the Sebenza's production). The Sebenza 21 is based upon the previous Classic's design, and differs from the Classic only in small details.
At the 2012 Blade Show the 'Sebenza 25' (named so as to commemorate the 25th year of the Sebenza's production) was introduced. Significant changes include a more sculpted handle, the introduction of a ceramic ball lockup/detent system and the use of 'Large Hollow Grind Technology" on the blade grind. The 21 model continues in production as well. The 'Sebenza 25' was discontinued in mid-2016 and replaced by the Inkosi which shares many similarities with the 25 but with additional refinements.
In 2019, Chris Reeve Knives released the Sebenza 31, replacing the 21. This model featured minor improvements again, similar to the 25. Most notably the ceramic lockbar interface and a new inlay design. Additionally, a hole has been removed from the handles, and the pocket clip is slightly offset so that it no longer rests on the lockbar.
Awards
1987: Knifemaker's Guild of Southern Africa -- "Best Folding Knife" (Sebenza predecessor)
1993: Knifemakers' Guild -- "Most Innovative Folder at the Show"
2005: Blade Show -- "Collector Knife of the Year" (21st Anniversary Sebenza)
2006: Grays Sporting Journal -- “Gray's Best” Award
Knives Illustrated magazine named the Sebenza first among the industry's top five tactical folders of all time.
The author, Abe Elias, describes a tactical folder as "a knife used by people who need a dependable piece of solidly built equipment, a folder that gives you – in all cases – confidence".
His article goes on to say that "At the top of the list is the Sebenza by Chris Reeve."
References
External links
Description on Chris Reeve Knives homepage
Pocket knives
Mechanical hand tools
Camping equipment
Goods manufactured in the United States | Sebenza | [
"Physics"
] | 964 | [
"Mechanics",
"Mechanical hand tools"
] |
13,379,568 | https://en.wikipedia.org/wiki/Cube-connected%20cycles | In graph theory, the cube-connected cycles is an undirected cubic graph, formed by replacing each vertex of a hypercube graph by a cycle. It was introduced by Preparata and Vuillemin for use as a network topology in parallel computing.
Definition
The cube-connected cycles of order n (denoted CCCn) can be defined as a graph formed from a set of n · 2^n nodes, indexed by pairs of numbers (x, y) where 0 ≤ x < 2^n and 0 ≤ y < n. Each such node is connected to three neighbors: (x, (y + 1) mod n), (x, (y − 1) mod n), and (x ⊕ 2^y, y), where "⊕" denotes the bitwise exclusive or operation on binary numbers.
This graph can also be interpreted as the result of replacing each vertex of an n-dimensional hypercube graph by an n-vertex cycle. The hypercube graph vertices are indexed by the numbers x, and the positions within each cycle by the numbers y.
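To make the neighbor rules concrete, here is a minimal Python sketch (ours, not from the source; the function name cube_connected_cycles is an illustrative choice) that builds CCCn as an adjacency map directly from the definition above:

```python
# Build CCC_n as an adjacency map: vertices are pairs (x, y)
# with 0 <= x < 2^n and 0 <= y < n, each with three neighbors.
def cube_connected_cycles(n):
    graph = {}
    for x in range(2 ** n):
        for y in range(n):
            graph[(x, y)] = [
                (x, (y + 1) % n),    # next position on the same cycle
                (x, (y - 1) % n),    # previous position on the same cycle
                (x ^ (1 << y), y),   # hypercube edge: flip bit y of x
            ]
    return graph

ccc3 = cube_connected_cycles(3)
assert len(ccc3) == 3 * 2 ** 3                        # n * 2^n vertices
assert all(len(nbrs) == 3 for nbrs in ccc3.values())  # the graph is cubic
```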
Properties
The cube-connected cycles of order n is the Cayley graph of a
group that acts on binary words of length n by rotation and flipping bits of the word. The generators used to form this Cayley graph from the group are the group elements that act by rotating the word one position left, rotating it one position right, or flipping its first bit. Because it is a Cayley graph, it is vertex-transitive: there is a symmetry of the graph mapping any vertex to any other vertex.
The diameter of the cube-connected cycles of order n is 2n + ⌊n/2⌋ − 2 for any n ≥ 4; the farthest point from (x, y) is (2^n − x − 1, (y + n/2) mod n). Sýkora and Vrťo showed that the crossing number of CCCn is ((1/20) + o(1)) 4^n.
According to the Lovász conjecture, the cube-connected cycle graph should always contain a Hamiltonian cycle, and this is now known to be true. More generally, although these graphs are not pancyclic, they contain cycles of all but a bounded number of possible even lengths, and when n is odd they also contain many of the possible odd lengths of cycles.
Parallel processing application
Cube-connected cycles were investigated by Preparata and Vuillemin, who applied these graphs as the interconnection pattern of a network connecting the processors in a parallel computer. In this application, cube-connected cycles have the connectivity advantages of hypercubes while only requiring three connections per processor. Preparata and Vuillemin showed that a planar layout based on this network has optimal area × time2 complexity for many parallel processing tasks.
Notes
References
Network topology
Parametric families of graphs
Regular graphs | Cube-connected cycles | [
"Mathematics"
] | 541 | [
"Network topology",
"Topology"
] |
13,379,588 | https://en.wikipedia.org/wiki/Gas%20electron%20multiplier | A gas electron multiplier (GEM) is a type of gaseous ionization detector used in nuclear and particle physics and radiation detection.
All gaseous ionization detectors are able to collect the electrons released by ionizing radiation, guiding them to a region with a large electric field, and thereby initiating an electron avalanche. The avalanche is able to produce enough electrons to create a current or charge large enough to be detected by electronics. In most ionization detectors, the large field comes from a thin wire with a positive high-voltage potential; this same thin wire collects the electrons from the avalanche and guides them towards the readout electronics. GEMs create the large electric field in small holes in a thin polymer sheet; the avalanche occurs inside of these holes. The resulting electrons are ejected from the sheet, and a separate system must be used to collect the electrons and guide them towards the readout.
GEMs are one of the class of micropattern gaseous detectors; this class includes micromegas and other technologies.
History
GEMs were invented in 1997 in the Gas Detector Development Group at CERN by physicist Fabio Sauli.
Operation
Typical GEMs are constructed of 50–70 micrometre thick Kapton foil clad in copper on both sides. A photolithography and acid etching process makes 30–50 micrometre diameter holes through both copper layers; a second etching process extends these holes all the way through the Kapton. The small holes can be made very regular and dimensionally stable. For operation, a voltage of 150–400 V is placed across the two copper layers, making large electric fields in the holes. Under these conditions, in the presence of appropriate gases, a single electron entering any hole will create an avalanche containing 100–1000 electrons; this is the "gain" of the GEM. Since the electrons exit the back of the GEM, a second GEM placed after the first one will provide an additional stage of amplification. Many experiments use double- or triple-GEM stacks to achieve gains of one million or more.
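As a rough arithmetic sketch of how stacked gains combine (our illustration, not a quoted specification; the transfer-efficiency parameter is an assumption), the effective gain of a GEM stack is approximately the product of the per-foil gains:

```python
# Effective gain of a GEM stack: roughly the product of per-foil gains,
# reduced by the electron transfer efficiency between successive foils.
def stack_gain(per_foil_gain, n_foils, transfer_efficiency=1.0):
    return per_foil_gain ** n_foils * transfer_efficiency ** (n_foils - 1)

print(stack_gain(100, 3))         # 1,000,000: triple-GEM, gain 100 per foil
print(stack_gain(1000, 2, 0.5))   # 500,000: double-GEM with 50% transfer
```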
Operation of wire chambers typically involved only one voltage setting: the voltage on the wire provided both the drift field and the amplification field. A GEM-based detector requires several independent voltage settings: a drift voltage to guide electrons from the ionization point to the GEM, an amplification voltage, and an extraction/transfer voltage to guide electrons from the GEM exit to the readout plane. A detector with a large drift region can be operated as a time projection chamber; a detector with a smaller drift region operates as a simple proportional counter.
A GEM chamber can be read out by simple conductive strips laid across a flat plane; the readout plane, like the GEM itself, can be fabricated with ordinary lithography techniques on ordinary circuit board materials. Since the readout strips are not involved in the amplification process, they can be made in any shape; 2-D strips and grids, hexagonal pads, radial/azimuthal segments, and other readout geometries are possible.
Uses
GEMs have been used in many types of particle physics experiments. One notable early user was the COMPASS experiment at CERN. GEM-based gas detectors have been proposed for components of the International Linear Collider, the STAR experiment and PHENIX experiment at the Relativistic Heavy Ion Collider, and others. The advantages of GEMs, compared to multiwire proportional chambers, include: ease of manufacturing, since large-area GEMs can in principle be mass-produced, while wire chambers require labor-intensive and error-prone assembly; flexible geometry, both for the GEM and the readout pads; and suppression of positive ions, which was a source of field distortions in time-projection chambers operated at high rates. A number of manufacturing difficulties plagued early GEMs, including non-uniformity and short circuits, but these have to a large extent been resolved.
References
Particle detectors
Experimental particle physics
CERN | Gas electron multiplier | [
"Physics",
"Technology",
"Engineering"
] | 801 | [
"Particle detectors",
"Measuring instruments",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
13,379,636 | https://en.wikipedia.org/wiki/Ritiometan | Ritiometan is an antibacterial used in nasal sprays. It is also used in an aerosol preparation for the treatment of infections of the nose and throat. It is marketed in France under the trade name Nécyrane.
References
Antimicrobials
Thioethers | Ritiometan | [
"Biology"
] | 62 | [
"Biocides",
"Antimicrobials"
] |
13,379,696 | https://en.wikipedia.org/wiki/Tuaminoheptane | Tuaminoheptane (, ; brand names Heptin, Heptadrine, Tuamine; also known as tuamine and 2-aminoheptane) is a sympathomimetic agent and vasoconstrictor which was formerly used as a nasal decongestant. It is still used in France as a nasal decongestant but its use is not recommended by the health authorities due to the lack of evidence of its effectiveness. It has also been used as a stimulant.
Tuaminoheptane has been found to act as a reuptake inhibitor and releasing agent of norepinephrine, which may underlie its decongestant and stimulant effects. It is an alkylamine. The chemical structure of the drug differs from that of other norepinephrine releasing agents, such as the phenethylamines, which, in contrast to tuaminoheptane, have an aromatic ring in their structure. Tuaminoheptane is also a skin irritant and can cause contact dermatitis via inhibition of volume-regulated anion channels, which limits its usefulness as a decongestant.
Tuaminoheptane is on the 2011 list of prohibited substances published by the World Anti-Doping Agency.
See also
1,3-Dimethylbutylamine
Heptaminol
Iproheptine
Isometheptene
Methylhexanamine
Octodrine
Oenethyl
References
External links
The World Anti-Doping Code. The 2011 Prohibited List. International Standard
Abandoned drugs
Alkylamines
Decongestants
Ion channel blockers
Norepinephrine releasing agents
Stimulants | Tuaminoheptane | [
"Chemistry"
] | 355 | [
"Drug safety",
"Abandoned drugs"
] |
13,379,834 | https://en.wikipedia.org/wiki/Unoprostone | Unoprostone (INN) is a prostaglandin analogue. Its isopropyl ester, unoprostone isopropyl, was marketed under the trade name Rescula for the management of open-angle glaucoma and ocular hypertension.
It was approved by the Food and Drug Administration in 2000.
In 2009, Sucampo Pharmaceuticals acquired the rights to the drug in the U.S. and Canada.
In 2015, the drug was discontinued in the U.S.
References
Prostaglandins
Ophthalmology drugs | Unoprostone | [
"Chemistry"
] | 116 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
13,379,855 | https://en.wikipedia.org/wiki/Paraoxon | Paraoxon is a parasympathomimetic drug which acts as a cholinesterase inhibitor. It is an organophosphate oxon, and the active metabolite of the insecticide parathion. It is also used as an ophthalmological drug against glaucoma. Paraoxon is one of the most potent acetylcholinesterase-inhibiting insecticides available, around 70% as potent as the nerve agent sarin, and so is now rarely used as an insecticide due to the risk of poisoning to humans and other animals. Paraoxon has been used by scientists to study acute and chronic effects of organophosphate intoxication. It is easily absorbed through skin, and was allegedly used as an assassination weapon by the apartheid-era South African chemical weapons program Project Coast.
See also
Armine (chemical)
References
Acetylcholinesterase inhibitors
Ethyl esters
4-Nitrophenyl compounds
Ophthalmology drugs
Organophosphates
Phenol esters | Paraoxon | [
"Chemistry"
] | 216 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
13,380,321 | https://en.wikipedia.org/wiki/Reversible%20diffusion | In mathematics, a reversible diffusion is a specific example of a reversible stochastic process. Reversible diffusions have an elegant characterization due to the Russian mathematician Andrey Nikolaevich Kolmogorov.
Kolmogorov's characterization of reversible diffusions
Let B denote a d-dimensional standard Brownian motion; let b : Rd → Rd be a Lipschitz continuous vector field. Let X : [0, +∞) × Ω → Rd be an Itō diffusion defined on a probability space (Ω, Σ, P) and solving the Itō stochastic differential equation dXt = b(Xt) dt + dBt,
with square-integrable initial condition, i.e. X0 ∈ L2(Ω, Σ, P; Rd). Then the following are equivalent:
The process X is reversible with stationary distribution μ on Rd.
There exists a scalar potential Φ : Rd → R such that b = −∇Φ, μ has Radon–Nikodym derivative dμ/dx (x) = exp(−2Φ(x)), and ∫Rd exp(−2Φ(x)) dx = 1.
(Of course, the condition that b be the negative of the gradient of Φ only determines Φ up to an additive constant; this constant may be chosen so that exp(−2Φ(·)) is a probability density function with integral 1.)
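As a numerical illustration (a minimal one-dimensional sketch under assumed choices: Φ(x) = x²/2, so b = −∇Φ(x) = −x; the step size and run length are arbitrary), an Euler–Maruyama simulation of the SDE should equilibrate to the density proportional to exp(−2Φ(x)) = exp(−x²), a centered normal law with variance 1/2:

```python
import math
import random

# Euler-Maruyama simulation of dX_t = b(X_t) dt + dB_t in one dimension.
def euler_maruyama(b, x0, dt, steps):
    x, path = x0, []
    for _ in range(steps):
        x += b(x) * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

# Gradient drift b = -grad(Phi) with Phi(x) = x^2 / 2.
path = euler_maruyama(lambda x: -x, x0=0.0, dt=0.01, steps=200_000)
tail = path[len(path) // 2:]                  # discard burn-in
var = sum(v * v for v in tail) / len(tail)
print(f"empirical variance ~ {var:.3f}; stationary value is 0.5")
```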
References
(See theorem 1.4)
Stochastic differential equations
Probability theorems | Reversible diffusion | [
"Mathematics"
] | 269 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
13,382,566 | https://en.wikipedia.org/wiki/Dezocine | Dezocine, sold under the brand name Dalgan, is an atypical opioid analgesic which is used in the treatment of pain. It is used by intravenous infusion and intramuscular injection.
Dezocine is an opioid receptor modulator, acting as a partial agonist of the μ- and κ-opioid receptors. It is a biased agonist of the μ-opioid receptor. The drug has a similar profile of effects to related opioids acting at the μ-opioid receptor, including analgesia and euphoria. Unlike other opioids acting at the κ-opioid receptor however, dezocine does not produce side effects such as dysphoria or hallucinations at any therapeutically used dose.
Dezocine was first synthesized in 1970. It was introduced for medical use in the United States in 1986 but was not marketed in other countries. Dezocine was discontinued in the United States in 2011 with no official reason given. However, it has become one of the most widely used analgesics in China. In light of the opioid epidemic, dezocine has seen a resurgence in use and interest.
Medical uses
Dezocine is generally administered intravenously (as Dalgan) to relieve post-operative pain in patients. It can also be administered in intramuscular doses, and is given once rather than continuously. It is often administered in post-operative laparoscopy patients as an alternative to fentanyl. Dezocine has potent analgesic effects, with pain-relieving ability comparable to or greater than that of morphine, codeine, and pethidine (meperidine). It is a more effective analgesic than pentazocine, but causes relatively more respiratory depression than pentazocine. Dezocine is a useful drug for the treatment of pain, but side effects such as dizziness limit its clinical application, and it can produce opioid withdrawal syndrome in patients already dependent on other opioids. Because of its high efficacy, dezocine is often administered at a base dose of 0.1 mg/kg. Respiratory depression, a side effect of dezocine, reaches a ceiling at 0.3 to 0.4 mg/kg.
Side effects
Side effects at lower doses include mild gastrointestinal discomfort and dizziness. Because dezocine has mixed agonist/antagonist effects at the opioid receptors, it has a lower dependence potential than purely agonistic opioids. It can therefore be prescribed in small doses over an extended period of time without causing patients to develop and sustain an addiction. Its efficacy as an analgesic is dose-dependent; however, it displays a ceiling effect in induced respiratory depression at 0.3 to 0.4 mg/kg.
Pharmacology
Pharmacodynamics
Dezocine acts as an opioid receptor modulator. It is specifically a mixed agonist–antagonist or partial agonist of the μ- and κ-opioid receptors. It is a biased agonist of the μ-opioid receptor and activates G protein signaling but not the β-arrestin pathway. This may account for some of dezocine's unique and atypical pharmacological properties. The binding affinity of dezocine varies depending on the opioid receptor, with the drug having the highest affinity for the μ-opioid receptor, intermediate affinity for the κ-opioid receptor, and the lowest affinity for the δ-opioid receptor. In addition to its opioid activity, dezocine has been found to act as a serotonin–norepinephrine reuptake inhibitor (SNRI), with pIC50 values of 5.86 for the serotonin transporter (SERT) and 5.68 for the norepinephrine transporter (NET). These actions theoretically might contribute to its analgesic efficacy.
Dezocine is five times as potent as pethidine and one-fifth as potent as butorphanol as an analgesic. Due to its partial agonist nature at the μ-opioid receptor, dezocine has significantly reduced side effects relative to opioid analgesics acting as full agonists of the receptor such as morphine. Moreover, dezocine is not a controlled substance and there are no reports of addiction related to its use, indicating that, unlike virtually all other clinically employed μ-opioid receptor agonists (including weak partial agonists like buprenorphine), and for reasons that are not fully clear, it is apparently non-addictive. This unique benefit makes long-term low-dose treatment of chronic pain and/or opioid dependence with dezocine more feasible than with most other opioids. Despite having a stronger respiratory depressant effect than morphine, dezocine shows a ceiling effect on its respiratory depressive action so above a certain dose this effect does not get any more severe.
Pharmacokinetics
Dezocine has a bioavailability of 97% by intramuscular injection. It has a mean t1/2α of fewer than two minutes, and its biological half-life is 2.2 hours.
Chemistry
Dezocine has a structure similar to the benzomorphan group of opioids. Dezocine is unusual among opioids as it is one of the only primary amines known to be active as an opioid (along with bisnortilidine, an active metabolite of tilidine).
Synthesis
Dezocine [(−)-13β-amino-5,6,7,8,9,10,11,12-octahydro-5α-methyl-5,11-methanobenzocyclodecen-3-ol, hydrobromide] is a pale white crystalline powder. It has no apparent odor. The salt is soluble at 20 mg/ml, and a 2% solution has a pH of 4.6.
The synthesis of dezocine begins with the condensation of 1-methyl-7-methoxy-2-tetralone with 1,5-dibromopentane through use of NaH or potassium tert-butoxide. This yields 1-(5-bromopentyl)-1-methyl-7-methoxy-2-tetralone, which is then cyclized with NaH to produce 5-methyl-3-methoxy-5,6,7,8,9,10,11,12-octahydro-5,11-methanobenzocyclodecen-13-one. The product is then treated with hydroxylamine hydrochloride, to yield an oxime. A reduction reaction in hydrogen gas produces an isomeric mixture, from which the final product is crystallized and cleaved with HBr.
History
Dezocine was patented by American Home Products Corp. in 1978. Clinical trials ran from 1979 to 1985, before its approval by the U.S. Food and Drug Administration (FDA) in 1986. As of 2011, dezocine's usage is discontinued in the United States, but it is still widely used in some other countries such as China.
Society and culture
Generic names
Dezocine is the generic name of the drug and its INN and USAN.
Brand names
The major brand name of dezocine is Dalgan.
Availability
In 2000, dezocine was listed as being marketed only in the United States. It has since been marketed in China. Dezocine was discontinued in the United States in 2011.
Legal status
As of 2011, dezocine is not used in the United States or Canada. It is not commercially available in either of these countries, nor is it offered as a prescribed analgesic for postoperative care. In China however, it is commonly used after surgery.
Research
Depression
Dezocine shows antidepressant-like effects in animals. Its antidepressant-like effects in animals appear to be dependent on activation of serotonin 5-HT1A receptors and inhibition of κ-opioid receptors (KORs) but not on activation of the μ-opioid receptor. A clinical trial found that dezocine added to sufentanil for postoperative analgesia significantly reduced depressive symptoms in people undergoing colorectal cancer surgery relative to sufentanil alone. There is a case report of a single incidental dose of dezocine resulting in rapid and sustained improvement in depression, anhedonia, and motivational deficits in a woman with treatment-resistant depression. On the basis of the preceding findings, there is interest in dezocine as a potential antidepressant in the treatment of depression, for instance in people with opioid use disorder.
References
Biased ligands
Kappa-opioid receptor antagonists
Opioids
Serotonin–norepinephrine reuptake inhibitors | Dezocine | [
"Chemistry"
] | 1,875 | [
"Functional groups",
"Signal transduction",
"Biased ligands",
"Amines",
"Bases (chemistry)"
] |
13,384,253 | https://en.wikipedia.org/wiki/HAZMAT%20Class%201%20Explosives | Hazmat Class 1 covers explosive materials: any substance or article, including a device, which is designed to function by explosion or which, by chemical reaction within itself, is able to function in a similar manner even if not designed to function by explosion.
Class 1 consists of six 'divisions', each describing the potential hazard posed by the explosive. The division number is the number after the decimal point on a placard.
The classification has an additional layer of categorization, known as 'compatibility groups', which breaks explosives in the same division into one of 13 groups, identified by a letter, which is used to separate incompatible explosives from each other. This letter also appears on the placard, following the number.
The movement of class 1 materials is tightly regulated, especially for divisions 1.1 and 1.2, which represent some of the most dangerous explosives, with the greatest potential for destruction and loss of life. Regulations in the United States require drivers to have and follow a pre-prepared route, and not to park the vehicle within 300 feet (91 m) of bridges, tunnels, a fire, or crowded places. The vehicle must be attended by its driver at all times while it is parked. Drivers are also required to carry the following paperwork and keep it in an accessible, easy-to-locate place: written emergency instructions, a written route plan, and a copy of Federal Motor Carrier Safety Regulations, Part 397 - Transport of Hazardous Materials; driving and parking rules. Some tunnels and bridges severely restrict or completely forbid vehicles carrying Class 1 cargoes.
Divisions
Placards
Compatibility table
Transportation segregation table
Compatibility group table
See also
Dangerous goods
Explosive
Notes
References
Hazardous materials | HAZMAT Class 1 Explosives | [
"Physics",
"Chemistry",
"Technology"
] | 330 | [
"Materials",
"Hazardous materials",
"Matter"
] |
13,384,260 | https://en.wikipedia.org/wiki/Red%20Star%20Yeast | Red Star Yeast Company, LLC is a joint-venture of Lesaffre and Archer Daniels Midland.
Red Star operates two plants in the United States—a plant in Headland, Alabama, and a plant built in 2006 in Cedar Rapids, Iowa. Lesaffre Yeast Corporation (prior to the joint venture) used to operate plants in Milwaukee, Wisconsin; Baltimore, Maryland; and Oakland, California, but those facilities have been closed since 2006. Their corporate office is located in Milwaukee.
Red Star Yeast and Products was the former division of Sensient Technologies (formerly Universal Foods), which distributed the Red Star brand. Red Star Yeast was then sold to French-based Lesaffre Group in 2001. In 2004, Lesaffre and Archer Daniels Midland Company (ADM) created the joint venture that the company operates under today.
All Red Star Yeast products are certified kosher, though not for Passover.
External links
Food additives
Leavening agents
Yeasts
Privately held companies based in Wisconsin
Companies based in Milwaukee
Archer Daniels Midland
Joint ventures | Red Star Yeast | [
"Biology"
] | 212 | [
"Yeasts",
"Fungi"
] |
13,384,414 | https://en.wikipedia.org/wiki/Bessel%27s%20correction | In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.
Formulation
In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the mean of the squares of deviations of sample values from the sample mean (i.e., using a multiplicative factor 1/n). In this case, the sample variance is a biased estimator of the population variance.
Multiplying the uncorrected sample variance by the factor n/(n − 1) gives an unbiased estimator of the population variance. In some literature, the above factor is called Bessel's correction.
One can understand Bessel's correction as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown): (x1 − x̄, …, xn − x̄), where x̄ is the sample mean. While there are n independent observations in the sample, there are only n − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see the Source of bias section below.
Generally Bessel's correction is an approach to reduce the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates like skew and kurtosis, but in these the inaccuracies are often significantly larger. To fully remove such bias it is necessary to do a more complex multi-parameter estimation. For instance a correct correction for the standard deviation depends on the kurtosis (normalized central 4th moment), but this again has a finite sample bias and it depends on the standard deviation, i.e., both estimations have to be merged.
Caveats
There are three caveats to consider regarding Bessel's correction:
It does not yield an unbiased estimator of standard deviation.
The corrected estimator often has a higher mean squared error (MSE) than the uncorrected estimator. Furthermore, there is no population distribution for which it has the minimum MSE because a different scale factor can always be chosen to minimize MSE.
It is only necessary when the population mean is unknown (and estimated as the sample mean). In practice, this generally happens.
Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a biased estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using n − 1.5 in the formula: the bias decays quadratically (rather than linearly, as in the uncorrected form and Bessel's corrected form).
Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (this varies with excess kurtosis). MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by n + 1 (instead of n − 1 or n).
Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating both population mean and population variance from a given sample, using the sample mean to estimate the population mean. In that case there are n degrees of freedom in a sample of n points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining n − 1 degrees of freedom (the residuals) go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have n degrees of freedom (because the mean is not being estimated – the deviations are not residuals but errors) and Bessel's correction is not applicable.
Source of bias
Most simply, to understand the bias that needs correcting, think of an extreme case. Suppose the population is (0, 0, 0, 1, 2, 9), which has a population mean of 2 and a population variance of 62/6 ≈ 10.33. A sample of n = 1 is drawn, and it turns out to be, say, 0. The best estimate of the population mean is then that single sampled value, 0. But what if we use the formula sn² = (1/n) Σ (xi − x̄)² to estimate the variance? The estimate of the variance would be zero – and the estimate would be zero for any population and any sample of n = 1. The problem is that in estimating the sample mean, the process has already made our estimate of the mean close to the value we sampled—identical, for n = 1. In the case of n = 1, the variance just cannot be estimated, because there is no variability in the sample.
But consider n = 2. Suppose the sample were (0, 2). Then x̄ = 1 and sn² = [(0 − 1)² + (2 − 1)²] / 2 = 1, but with Bessel's correction, s² = [(0 − 1)² + (2 − 1)²] / 1 = 2, which is an unbiased estimate (if all possible samples of n = 2 are taken without replacement and this method is used, the average estimate will be 12.4, the same as the sample variance with Bessel's correction.)
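The 12.4 average can be verified directly; here is a small Python check (ours, not part of the original text) that enumerates every size-2 sample drawn without replacement from the population:

```python
from itertools import combinations
from statistics import variance  # statistics.variance divides by n - 1

population = [0, 0, 0, 1, 2, 9]
estimates = [variance(pair) for pair in combinations(population, 2)]
print(sum(estimates) / len(estimates))  # 12.4
```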
To see this in more detail, consider the following example. Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on this small sample chosen randomly from the population: 2051, 2053, 2055, 2050, 2051.
One may compute the sample average: x̄ = (2051 + 2053 + 2055 + 2050 + 2051) / 5 = 2052.
This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. That is the average of the squares of the deviations from 2050. If we knew that the population average is 2050, we could proceed as follows: [(2051 − 2050)² + (2053 − 2050)² + (2055 − 2050)² + (2050 − 2050)² + (2051 − 2050)²] / 5 = (1 + 9 + 25 + 0 + 1) / 5 = 7.2.
But our estimate of the population average is the sample average, 2052. The actual average, 2050, is unknown. So the sample average, 2052, must be used: [(2051 − 2052)² + (2053 − 2052)² + (2055 − 2052)² + (2050 − 2052)² + (2051 − 2052)²] / 5 = (1 + 1 + 9 + 4 + 1) / 5 = 3.2.
The variance is now smaller, and it (almost) always is. The only exception occurs when the sample average and the population average are the same. To understand why, consider that variance measures distance from a point, and within a given sample, the average is precisely that point which minimises the distances. A variance calculation using any other average value must produce a larger result.
To see this algebraically, we use the simple identity (a + b)² = a² + 2ab + b².
With a = xi − x̄ representing the deviation of an individual sample from the sample mean, and b = x̄ − μ representing the deviation of the sample mean from the population mean. Note that we've simply decomposed the actual deviation of an individual sample from the (unknown) population mean into two components: the deviation of the single sample from the sample mean, which we can compute, and the additional deviation of the sample mean from the population mean, which we can not. Now, we apply this identity to the squares of deviations from the population mean: (xi − μ)² = (a + b)² = a² + 2ab + b².
Now apply this identity to all five observations, with a = xi − 2052 and b = 2052 − 2050 = 2, and observe certain patterns:
(2051 − 2050)² = 1 = (−1)² + 2(−1)(2) + 2² = 1 − 4 + 4
(2053 − 2050)² = 9 = (1)² + 2(1)(2) + 2² = 1 + 4 + 4
(2055 − 2050)² = 25 = (3)² + 2(3)(2) + 2² = 9 + 12 + 4
(2050 − 2050)² = 0 = (−2)² + 2(−2)(2) + 2² = 4 − 8 + 4
(2051 − 2050)² = 1 = (−1)² + 2(−1)(2) + 2² = 1 − 4 + 4
Reading each decomposition as a row with columns a², 2ab and b²:
The sum of the entries in the middle column must be zero, because the sum of the deviations a across all 5 rows is itself zero: the 5 individual samples, when added, have the same sum as 5 times their sample mean (2052), so the difference of the two sums is zero. The factor 2 and the term b in the middle column are the same for every row, so they do not affect this conclusion. The following statements explain the meaning of the remaining columns:
The sum of the entries in the first column (a²) is the sum of the squares of the distances from the samples to the sample mean;
The sum of the entries in the last column (b²) is the sum of the squared distances between the measured sample mean and the correct population mean;
Every single row now consists of a pair a² (biased, because the sample mean is used) and b² (correction of the bias, because it takes the difference between the "real" population mean and the inaccurate sample mean into account). Therefore, the sum of all entries of the first and last columns represents the correct variance, meaning that the sum of the squared distances between the samples and the population mean is now used;
The sum of the a²-column and the b²-column must be bigger than the sum of the entries of the a²-column alone, since all the entries of the b²-column are positive (except when the population mean is the same as the sample mean, in which case all of the numbers in the last column will be 0).
Therefore:
The sum of squares of the distance from samples to the population mean will always be bigger than the sum of squares of the distance to the sample mean, except when the sample mean happens to be the same as the population mean, in which case the two are equal.
That is why the sum of squares of the deviations from the sample mean is too small to give an unbiased estimate of the population variance when the average of those squares is found. The smaller the sample size, the larger is the difference between the sample variance and the population variance.
Terminology
This correction is so common that the term "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (unbiased sample variation, less biased sample standard deviation), using n − 1. However caution is needed: some calculators and software packages may provide for both or only the more unusual formulation. This article uses the following symbols and definitions:
μ is the population mean
x̄ is the sample mean
σ² is the population variance
sn² is the biased sample variance (i.e., without Bessel's correction)
s² is the unbiased sample variance (i.e., with Bessel's correction)
The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators:
sn is the uncorrected sample standard deviation (i.e., without Bessel's correction)
s is the corrected sample standard deviation (i.e., with Bessel's correction), which is less biased, but still biased
Formula
The sample mean is given by x̄ = (x1 + x2 + ⋯ + xn) / n.
The biased sample variance is then written: sn² = (1/n) Σi (xi − x̄)²,
and the unbiased sample variance is written: s² = (1/(n − 1)) Σi (xi − x̄)².
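A direct transcription of the two formulas in Python (a sketch of ours; the sample data reuses the worked example above):

```python
def sample_mean(xs):
    return sum(xs) / len(xs)

def biased_variance(xs):
    # divides by n: no Bessel's correction
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_variance(xs):
    # divides by n - 1: Bessel's correction
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2051, 2053, 2055, 2050, 2051]
print(sample_mean(data))        # 2052.0
print(biased_variance(data))    # 3.2
print(unbiased_variance(data))  # 4.0
```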
Proof
Suppose thus that x1, …, xn are independent and identically distributed random variables with expectation μ and variance σ².
Knowing the values of the xi at an outcome of the underlying sample space, we would like to get a good estimate for the variance σ², which is unknown. To this end, we construct a mathematical formula containing the xi such that the expectation of this formula is precisely σ². This means that on average, this formula should produce the right answer.
The educated, but naive, way of guessing such a formula would be
S = (1/n) Σi (xi − x̄)², where x̄ = (1/n) Σi xi;
this would be the variance if we had a discrete random variable taking the value xi with probability 1/n each. But let us calculate the expected value of this expression. Expanding the square gives
E[S] = (1/n) Σi E[xi²] − E[x̄²];
here we have (by independence, symmetric cancellation and identical distributions)
E[xi²] = σ² + μ² and E[x̄²] = Var(x̄) + μ² = σ²/n + μ²,
and therefore
E[S] = (σ² + μ²) − (σ²/n + μ²) = ((n − 1)/n) σ².
In contrast, the quantity we are trying to estimate is
Var(xi) = σ².
Therefore, our initial guess was wrong by a factor of
n/(n − 1),
and this is precisely Bessel's correction.
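A quick Monte Carlo sanity check of this factor (a sketch of ours, with arbitrary parameters n = 5 and σ = 3): the naive estimator should average to about ((n − 1)/n)σ² = 7.2 rather than σ² = 9.

```python
import random

n, sigma, trials = 5, 3.0, 200_000
total = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    m = sum(xs) / n
    total += sum((x - m) ** 2 for x in xs) / n  # the naive estimator
print(total / trials)  # ~ 7.2 = (4 / 5) * 9, not sigma^2 = 9
```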
See also
Cochran's theorem
Bias of an estimator
Standard deviation
Unbiased estimation of standard deviation
Jensen's inequality
Notes
External links
Animated experiment demonstrating the correction, at Khan Academy
Statistical deviation and dispersion
Estimation methods
Articles containing proofs | Bessel's correction | [
"Mathematics"
] | 2,490 | [
"Articles containing proofs"
] |
13,384,613 | https://en.wikipedia.org/wiki/Ultrasound%20research%20interface | An ultrasound research interface (URI) is a software tool loaded onto a diagnostic clinical ultrasound device which provides functionality beyond typical clinical modes of operation.
A normal clinical ultrasound user only has access to the ultrasound data in its final processed form, typically a B-Mode image, in DICOM format. For reasons of device usability they also have limited access to the processing parameters that can be modified.
A URI allows a researcher to achieve different results by either acquiring the image at various intervals through the processing chain, or changing the processing parameters.
Typical B-mode receive processing chain
A typical digital ultrasound processing chain for B-Mode imaging may look as follows (a code sketch of the envelope-detection and log-compression steps appears after the list):
Multiple analog signals are acquired from the ultrasound transducer (the transmitter/receiver applied to the patient)
Analog signals may pass through one or more analog notch filters and a variable-gain amplifier (VCA)
Multiple analog-to-digital converters convert the analog radio frequency (RF) signal to a digital RF signal sampled at a predetermined rate (typical ranges are from 20 MHz to 160 MHz) and at a predetermined number of bits (typical ranges are from 10 bits to 16 bits)
Beamforming is applied to individual RF signals by applying time delays and summations as a function of time and transformed into a single RF signal
The RF signal is run through one or more digital FIR or IIR filters to extract the most interesting parts of the signal given the clinical operation
The filtered RF signal runs through an envelope detector and is log compressed into a grayscale format
Multiple signals processed in this way are lined up together and interpolated and rasterized into a readable image.
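A minimal numerical sketch of steps 5–6 above (envelope detection and log compression; the synthetic RF pulse, sampling rate, and 60 dB display range are our illustrative assumptions, and the FIR/IIR filtering stage is omitted):

```python
import numpy as np
from scipy.signal import hilbert

fs = 40e6                                  # 40 MHz sample rate
t = np.arange(0, 20e-6, 1 / fs)            # one 20-microsecond RF line
# Synthetic beamformed RF: a 5 MHz pulse under a Gaussian envelope.
rf = np.exp(-((t - 10e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

envelope = np.abs(hilbert(rf))             # envelope detection
db = 20 * np.log10(envelope / envelope.max() + 1e-12)
gray = np.clip((db + 60) / 60, 0.0, 1.0) * 255  # map a 60 dB range to 0-255
```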
Data access
A URI may provide data access at many different stages of the processing chain, these include:
Pre-beamformed digital RF data from individual channels
Beamformed RF data
Envelope detected data
Interpolated image data
Where many diagnostic ultrasound devices have Doppler imaging modes for measuring blood flow, the URI may also provide access to Doppler related signal data, which can include:
Demodulated (I/Q) data
FFT spectral data
Autocorrelated velocity color Doppler data
Tools
A URI may include many different tools for enabling the researcher to make better use of the device and the data captured, some of these tools include:
Custom MATLAB programs for reading and processing signal and image data
Software Development Kits (SDKs) for communicating with the URI, signal processing and other specialized modes of operation available on the URI
References
Ultrasound
Medical ultrasonography
Medical physics | Ultrasound research interface | [
"Physics"
] | 520 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
13,384,808 | https://en.wikipedia.org/wiki/HMGA2 | High-mobility group AT-hook 2, also known as HMGA2, is a protein that, in humans, is encoded by the HMGA2 gene.
Function
This gene encodes a protein that belongs to the non-histone chromosomal high-mobility group (HMG) protein family. HMG proteins function as architectural factors and are essential components of the enhanceosome. This protein contains structural DNA-binding domains and may act as a transcriptional regulating factor. Identification of the deletion, amplification, and rearrangement of this gene that are associated with lipomas suggests a role in adipogenesis and mesenchymal differentiation. A gene knock-out study of the mouse counterpart demonstrated that this gene is involved in diet-induced obesity. Alternate transcriptional splice variants, encoding different isoforms, have been characterized.
The expression of HMGA2 in adult tissues is commonly associated with both malignant and benign tumor formation, as well as certain characteristic cancer-promoting mutations. Homologous proteins with highly conserved sequences are found in other mammalian species, including lab mice (Mus musculus).
HMGA2 contains three basic DNA-binding domains (AT-hooks) that cause the protein to bind to adenine-thymine (AT)-rich regions of nuclear DNA. HMGA2 does not directly promote or inhibit the transcription of any genes, but alters the structure of DNA and promotes the assembly of protein complexes that do regulate the transcription of genes. With few exceptions, HMGA2 is expressed in humans only during early development, and is reduced to undetectable or nearly undetectable levels of transcription in adult tissues. The microRNA let-7 is largely responsible for this time-dependent regulation of HMGA2. The apparent function of HMGA2 in proliferation and differentiation of cells during development is supported by the observation that mice with mutant HMGA2 genes are unusually small (the pygmy or mini-mouse phenotype), and genome-wide association studies linking HMGA2-associated SNPs to variation in human height.
Regulation by let-7
Let-7 inhibits production of specific proteins by complementary binding to their mRNA transcripts. The HMGA2 mature mRNA transcript contains seven regions complementary or nearly complementary to let-7 in its 3' untranslated region (UTR). Let-7 expression is very low during early human development, which coincides with the greatest transcription of HMGA2. The time-dependent drop in HMGA2 expression is caused by a rise in let-7 expression.
Clinical significance
Relationship with cancer
Heightened expression of HMGA2 is found in a variety of human cancers, but the precise mechanism by which HMGA2 contributes to the formation of cancer is unknown. The same mutations that lead to pituitary adenomas in mice can be found in similar cancers in humans. Its presence is associated with poor prognosis for the patient, but also with sensitization of the cancer cells to certain forms of cancer therapy. Specifically, HMGA2-high cancers display an abnormally strong response to double strand breaks in DNA caused by radiation therapy and some forms of chemotherapy. Artificial addition of HMGA2 to some forms of cancer unresponsive to DNA damage causes them to respond to the treatment instead, although the mechanism by which this phenomenon occurs is also not understood. However, the expression of HMGA2 is also associated with increased rates of metastasis in breast cancer, and both metastasis and recurrence of squamous cell carcinoma. These properties are responsible for patients' poor prognoses. As with HMGA2's effects on the response to radiation and chemotherapy, the mechanism by which HMGA2 exerts these effects is unknown.
A very common finding in HMGA2-high cancers is the under-expression of let-7. This is not unexpected, given let-7's natural role in the regulation of HMGA2. However, many cancers are found with normal levels of let-7 that are also HMGA2 high. Many of these cancers express the normal HMGA2 protein, but the mature mRNA transcript is truncated, missing a portion of the 3'UTR that contains the critical let-7 complementary regions. Without these, let-7 is unable to bind to HMGA2 mRNA, and, thus, is unable to repress it. The truncated mRNAs may arise from a chromosomal translocation that results in loss of a portion of the HMGA2 gene.
ERCC1
Overexpressed HMGA2 may play a role in the frequent repression of ERCC1 in cancers. The let-7a miRNA normally represses the HMGA2 gene, and in normal adult tissues, almost no HMGA2 protein is present. (See also Let-7 microRNA precursor.) Reduction or absence of let-7a miRNA allows high expression of the HMGA2 protein. As shown by Borrmann et al., HMGA2 targets and modifies the chromatin architecture at the ERCC1 gene, reducing its expression. These authors noted that repression of ERCC1 (by HMGA2) can reduce DNA repair, leading to increased genome instability.
ERCC1 protein expression is reduced or absent in 84% to 100% of human colorectal cancers. ERCC1 protein expression was also reduced in a diet-related mouse model of colon cancer. As indicated in the ERCC1 article, however, two other epigenetic mechanisms of repression of ERCC1 also may have a role in reducing expression of ERCC1 (promoter DNA methylation and microRNA repression).
Chromatin immunoprecipitation
Genome-wide analysis of HMGA2 target genes was performed by chromatin immunoprecipitation in a gastric cell line with overexpressed HMGA2, and 1,366 genes were identified as potential targets. The pathways identified as associated with malignant neoplasia progression were the adherens junction pathway, MAPK signaling pathway, Wnt signaling pathway, p53 signaling pathway, VEGF signaling pathway, Notch signaling pathway, and TGF beta signaling pathway.
Non-homologous end joining DNA repair
Overexpression of HMGA2 delayed the release of DNA-PKcs (needed for non-homologous end joining DNA repair) from double strand break sites. Overexpression of HMGA2 alone was sufficient to induce chromosomal aberrations, a hallmark of deficiency in NHEJ-mediated DNA repair. These properties implicate HMGA2 in the promotion of genome instability and tumorigenesis.
Base excision repair pathway
HMGA2 protein can cleave DNA containing apurinic/apyrimidinic (AP) sites (it is an AP lyase). In addition, this protein also possesses the related 5'-deoxyribose phosphate (dRP) lyase activity. An interaction between human AP endonuclease 1 and HMGA2 in cancer cells has been demonstrated, indicating that HMGA2 can be incorporated into the cellular base excision repair (BER) machinery. Increased expression of HMGA2 increased BER, and allowed cells with increased HMGA2 to be resistant to hydroxyurea, a chemotherapeutic agent for solid tumors.
Interactions
HMGA2 has been shown to interact with PIAS3 and NFKB1.
The transport of HMGA2 to the nucleus is mediated by an interaction between its second AT-hook and importin-α2.
See also
HMGA
References
Further reading
External links
Ellensburg 13-year-old grapples with life at 7 feet 3 inches tall:
Transcription factors | HMGA2 | [
"Chemistry",
"Biology"
] | 1,595 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
13,386,184 | https://en.wikipedia.org/wiki/Choke%20exchange | A choke exchange is a telephone exchange designed to handle many simultaneous call attempts to telephone numbers of that exchange. Choke exchanges are typically used to service telephone numbers of talk radio caller and contest lines of radio stations and event ticket vendors.
Motivation
A central office might only have physical plant resources to handle ca. 8% of allocated telephone numbers, based on historical call traffic averages. A choke exchange has trunk facilities to other exchanges designed so that high call volume is handled through the choke connection rather than overwhelming the rest of the local telephone network. Other local exchanges have a limited number of direct trunks (junctions) to the choke exchange, which may serve only a handful of customers, such as a radio station contest line, that experience many simultaneous calls. Instead of calls being overflowed to main or tandem routes shared with other traffic, unsuccessful callers receive a reorder tone from their local or tandem exchange. If the calls were overflowed to the tandem route, the caller would receive a busy tone from the exchange serving the radio station, and the sudden peak would disrupt calls between other customers.
With common-channel signaling (CCS), e.g., Signalling System No. 7, separate choke exchanges may not be required for these customers.
Examples
Examples of choke exchanges in North America have included:
One of the early choke lines (exchanges) was instituted due to a widely advertised contest by a local radio station in the Miami/Fort Lauderdale area. WHYI-FM advertised their "Last Contest", with an automobile as the top prize. Since the advertising lasted over a month, there was a very large volume of calls when the station announced for people to call in; so many, in fact, that the local exchanges ran out of dial tones. This was a major problem because, at the time, a caller with no dial tone could not dial at all. After the contest ended, the area's emergency services filed complaints, which were heeded. Shortly afterward the 305-550 exchange came into being; the first number on it was 305-550-9100 (for the Y100 radio station). The problems caused by the "Last Contest" may thus have prompted the creation of choke exchanges.
See also
Routing in the PSTN
References
Telephone exchange equipment
Telephone numbers
Telephone numbers in Canada
Telephone numbers in the United States | Choke exchange | [
"Mathematics"
] | 476 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
13,387,024 | https://en.wikipedia.org/wiki/Self-consolidating%20concrete | Self-consolidating concrete or self-compacting concrete (SCC) is a concrete mix which has a low yield stress, high deformability, good segregation resistance (prevents separation of particles in the mix), and moderate viscosity (necessary to ensure uniform suspension of solid particles during transportation, placement (without external compaction), and thereafter until the concrete sets).
In everyday terms, when poured, SCC is an extremely fluid mix with the following distinctive practical features – it flows very easily within and around the formwork, can flow through obstructions and around corners ("passing ability"), is close to self-levelling (although not actually self-levelling), does not require vibration or tamping after pouring, and follows the shape and surface texture of a mold (or form) very closely once set. As a result, pouring SCC is also much less labor-intensive compared to standard concrete mixes. Once poured, SCC is usually similar to standard concrete in terms of its setting time, curing time (strength gain), and final strength. SCC does not use a high proportion of water to become fluid – in fact SCC may contain less water than standard concretes. Instead, SCC gains its fluid properties from an unusually high proportion of fine aggregate, such as sand (typically 50%), combined with superplasticizers (additives that ensure particles disperse and do not settle in the fluid mix) and viscosity-enhancing admixtures (VEA).
Ordinarily, concrete is a dense, viscous material when mixed, and when used in construction, requires the use of vibration or other techniques (known as compaction) to remove air bubbles (cavitation), and honeycomb-like holes, especially at the surfaces, where air has been trapped during pouring. This kind of air content (unlike that in aerated concrete) is not desired and weakens the concrete if left. However it is laborious and takes time to remove by vibration, and improper or inadequate vibration can lead to undetected problems later. Additionally some complex forms cannot easily be vibrated. Self-consolidating concrete is designed to avoid this problem, and not require compaction, therefore reducing labor, time, and a possible source of technical and quality control issues.
SCC was conceptualized in 1986 by Prof. Okamura at Kochi University, Japan, at a time when skilled labor was in limited supply, causing difficulties in concrete-related industries. The first generation of SCC used in North America was characterized by the use of relatively high content of binder as well as high dosages of chemicals admixtures, usually superplasticizer to enhance flowability and stability. Such high-performance concrete had been used mostly in repair applications and for casting concrete in restricted areas. The first generation of SCC was therefore characterized and specified for specialized applications.
SCC can be used for casting heavily reinforced sections, in places where there is no access for vibrators for compaction, and in complex formwork shapes which may otherwise be impossible to cast, giving a far superior surface than conventional concrete. The relatively high cost of the materials used in such concrete continues to hinder its widespread use in various segments of the construction industry, including commercial construction; however, in the precast industry the productivity benefits outweigh the material cost, so SCC works out to be economical there. The incorporation of powder, including supplementary cementitious materials and filler, can increase the volume of the paste, hence enhancing deformability, and can also increase the cohesiveness of the paste and stability of the concrete. The reduction in cement content and the increase in packing density of materials finer than 80 μm, such as fly ash, can reduce the water-cement ratio and the high-range water reducer (HRWR) demand. The reduction in free water can reduce the concentration of viscosity-enhancing admixture (VEA) necessary to ensure proper stability during casting and thereafter until the onset of hardening. It has been demonstrated that a total fine aggregate content ("fines", usually sand) of about 50% of total aggregate is appropriate in an SCC mix.
There are many studies on different types of SCC which address its fresh properties, strength, durability and microstructural properties. Types of self-consolidating concrete include low-fines SCC (LF-SCC) and semi-flowable SCC (SF-SCC) etc.
SCC can be produced using various industrial wastes as cement-replacing materials, and such mixes can be used for pavement construction [2–6].
Overview
SCC is measured using the flow table test (slump-flow test) rather than the usual concrete slump test, as it is too fluid to keep its shape when the cone is removed. A typical SCC mix will have a slump-flow of around 500–700 mm.
SCC is weakened, not strengthened, by vibration. As vibration is not needed for compacting the mix, all that it achieves is to separate and segregate it.
See also
Concrete slump test
Flow table test
References
2. Low-fines self-consolidating concrete using rice husk ash for road pavement: an environment-friendly and sustainable approach. https://doi.org/10.1016/j.conbuildmat.2022.130036
3. Kannur, B., Chore, H.S. Utilization of sugarcane bagasse ash as cement-replacing materials for concrete pavement: an overview. Innov. Infrastruct. Solut. 6, 184 (2021). https://doi.org/10.1007/s41062-021-00539-4
4. Kannur, B., Chore, H.S. Strength and durability study of low-fines self-consolidating concrete as a pavement material using fly ash and bagasse ash. https://doi.org/10.1080/19648189.2022.2140207
5. Kannur, B., Chore, H.S. Semi-flowable self-consolidating concrete using industrial wastes for construction of rigid pavements in India: an overview. https://doi.org/10.1016/j.jtte.2023.01.001
6. Kannur, B., Chore, H.S. Assessing semiflowable self-consolidating concrete with sugarcane bagasse ash for application in rigid pavement. Journal of Materials in Civil Engineering 35(10), 04023358 (2023). https://doi.org/10.1061/JMCEE7.MTENG-16355
External links
Proportioning of self-compacting concrete – the UCL method – paper summarizing common mixes, uses, choices of additives, properties, and extensive information on SCCs.
Working With SCC Needn’t Be Hit or Miss – precast concrete makers' experience is SCC / what to do and not do.
Concrete | Self-consolidating concrete | [
"Engineering"
] | 1,497 | [
"Structural engineering",
"Concrete"
] |
13,388,542 | https://en.wikipedia.org/wiki/List%20of%20WLAN%20channels | Wireless LAN (WLAN) channels are frequently accessed using IEEE 802.11 protocols. The 802.11 standard provides several radio frequency bands for use in Wi-Fi communications, each divided into a multitude of channels numbered at 5 MHz spacing (except in the 45/60 GHz band, where they are 0.54/1.08/2.16 GHz apart) between the centre frequency of the channel. The standards allow for channels to be bonded together into wider channels for faster throughput.
860/900 MHz (802.11ah)
802.11ah operates in sub-gigahertz unlicensed bands. Each world region supports different sub-bands, and channel numbering depends on the starting frequency of the sub-band a channel belongs to. There is therefore no global channel numbering plan, and channel numbers are incompatible between world regions (and even between sub-bands of the same world region).
The following sub-bands are defined in the 802.11ah specifications:
2.4 GHz (802.11b/g/n/ax/be)
14 channels are designated in the 2.4 GHz range, spaced 5 MHz apart from each other except for a 12 MHz space before channel 14. The abbreviation F0 designates each channel's fundamental frequency.
Interference happens when two networks try to operate in the same band, or when their bands overlap. The two modulation methods used have different characteristics of band usage and therefore occupy different widths:
The DSSS method used by legacy 802.11 and 802.11b (and the 11b-compatible rates of 802.11g) uses 22 MHz of bandwidth, a consequence of the 11 MHz chip rate used by the coding system. No guard band is prescribed; the channel definitions provide 3 MHz of separation between channels 1, 6, and 11.
The OFDM method used by 802.11a/g/n occupies a bandwidth of 16.25 MHz. The nameplate bandwidth is set to 20 MHz, rounding up to a multiple of the channel spacing and providing some guard band for the signal to attenuate along the edges of the band. This guard band mainly accommodates older devices whose chipsets tend to occupy the full channel; most modern Wi-Fi routers do not.
While overlapping frequencies can be configured at a location and will usually work, they can cause interference resulting in slowdowns, sometimes severe, particularly in heavy use. Certain subsets of frequencies can be used simultaneously at any one location without interference (see diagrams for typical allocations). The appropriate spacing is determined both by the basic bandwidth occupation (described above), which depends on the protocol, and by the attenuation of interfering signals over distance. In the worst case, using every fourth or fifth channel, leaving three or four channels clear between used channels, causes minimal interference, and narrower spacing can still be used at greater distances. The "interference" is usually not actual bit errors, but the wireless transmitters making space for each other; interference resulting in bit errors is rare. The standard requires a transmitter to yield when it decodes another at a level of 3 dB above the noise floor, or when the non-decoded noise level is higher than a threshold P_th which, for Wi-Fi 5 and earlier, is between −76 and −80 dBm.
As shown in the diagram, bonding two 20 MHz channels to form a 40 MHz channel is permitted in the 2.4 GHz bands. These are generally referred to by the centres of the primary 20 MHz channel and the adjacent secondary 20 MHz channel (e.g. 1+5, 9+13, 13–9, 5–1). The primary 20 MHz channel is used for signalling and backwards compatibility, the secondary is only used when sending data at full speed.
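As a concrete restatement of the 2.4 GHz numbering described above, here is a minimal Python sketch mapping channel numbers to centre frequencies (channels 1-13 sit 5 MHz apart starting at 2412 MHz, with channel 14 a further 12 MHz up):

```python
def center_freq_24ghz_mhz(channel: int) -> int:
    """Centre frequency in MHz of a 2.4 GHz Wi-Fi channel (1-14)."""
    if channel == 14:
        return 2484                      # 12 MHz above channel 13 (2472 MHz)
    if 1 <= channel <= 13:
        return 2412 + 5 * (channel - 1)  # 5 MHz spacing from 2412 MHz
    raise ValueError("the 2.4 GHz band defines channels 1-14 only")
```

For example, the three classically non-overlapping DSSS channels 1, 6, and 11 come out at 2412, 2437, and 2462 MHz, 25 MHz apart.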
3.65 GHz (802.11y)
Except where noted, all information taken from Annex J of IEEE 802.11y-2008
This range is documented as only being allowed as a licensed band in the United States. However, although this was not part of the original specification, under newer frequency allocations from the FCC it falls within the Citizens Broadband Radio Service (CBRS) band. This allows for unlicensed use under Tier 3 GAA rules, provided that the user does not cause harmful interference to Incumbent Access users or Priority Access Licensees, accepts all interference from these users, and follows all the technical requirements in CFR 47 Part 96 Subpart E.
A 40 MHz band is available from 3655 to 3695 MHz. It may be divided into eight 5 MHz channels, four 10 MHz channels, or two 20 MHz channels.
The division into 5 MHz channels consumes all eight possible channel numbers, and so (unlike other bands) it is not possible to infer the width of a channel from its number. Instead each wider channel shares its channel number with the 5 MHz channel just above its mid frequency:
channel 132 can be either 3660-3665 or 3655-3665;
channel 133 can be either 3665-3670 or 3655-3675;
and so on.
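The sharing rule can be written out mechanically. The following Python sketch is derived only from the pattern in the examples above (it is an illustration, not text from the standard); it returns the frequency range occupied by a given channel number and width:

```python
def range_365ghz_mhz(number: int, width_mhz: int):
    """Frequency range (low, high) in MHz of an 802.11y channel.
    Channels 131-138 are the eight 5 MHz slots covering 3655-3695 MHz;
    a 10 or 20 MHz channel takes the number of the 5 MHz slot just
    above its mid frequency."""
    if not 131 <= number <= 138:
        raise ValueError("802.11y defines channel numbers 131-138")
    slot_low = 3655 + 5 * (number - 131)  # lower edge of the 5 MHz slot
    if width_mhz == 5:
        return (slot_low, slot_low + 5)
    # For wider channels, the slot's lower edge is the channel's mid frequency.
    low = slot_low - width_mhz // 2
    return (low, low + width_mhz)
```

So range_365ghz_mhz(132, 10) gives (3655, 3665), matching the first example above.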
4.9–5.0 GHz (802.11j) WLAN
In Japan since 2002, 80 MHz of spectrum from 4910 to 4990 MHz has been available for both indoor and outdoor use, once registered.
Until 2017, an additional 60 MHz of spectrum from 5030 to 5090 MHz was available for registered use, however it has since been re-purposed and can no longer be used.
50 MHz of spectrum from 4940 to 4990 MHz (WLAN channels 20–26) is in use by public safety entities in the United States. Within this spectrum, two non-overlapping channels are allocated, each 20 MHz wide. The most commonly used channels are 22 and 24.
5 GHz (802.11a/h/n/ac/ax/be)
Country-specific information
United States
DFS and TPC
In 2007, the FCC (United States) began requiring that devices operating in the bands of 5.250–5.350 GHz and 5.470–5.725 GHz must employ dynamic frequency selection (DFS) and transmit power control (TPC) capabilities. This is to avoid interference with weather-radar and military applications. In 2010, the FCC further clarified the use of channels in the 5.470–5.725 GHz band to avoid interference with TDWR, a type of weather radar system. In FCC parlance, these restrictions are now referred to collectively as the Old Rules. On 10 June 2015, the FCC approved a new ruleset for 5 GHz device operation (called the New Rules), which adds 160 and 80 MHz channel identifiers and re-enables previously prohibited DFS channels, in Publication Number 905462. This FCC publication phases out manufacturers' ability to have devices approved or modified under the Old Rules; the New Rules apply in all circumstances.
United Kingdom
The UK's Ofcom regulations for unlicensed use of the 5 GHz band are similar to Europe's, except that DFS is not required for the frequency range 5.725–5.850 GHz and the SRD maximum mean e.i.r.p. is 200 mW instead of 25 mW.
Additionally, 5.925–6.425 GHz is also available for unlicensed use, as long as it is used indoors as a short-range device (SRD) at 250 mW.
Germany
Germany requires DFS and TPC capabilities on 5.250–5.350 GHz and 5.470–5.725 GHz as well; in addition, the frequency range 5.150–5.350 GHz is allowed only for indoor use, leaving only 5.470–5.725 GHz for outdoor and indoor use.
Since this is the German implementation of EU Rule 2005/513/EC, similar regulations must be expected throughout the European Union.
European standard EN 301 893 covers 5.15–5.725 GHz operation, and v2.1.1 has been adopted.
The 6 GHz band can now also be used.
Austria
Austria adopted Decision 2005/513/EC directly into national law. The same restrictions as in Germany apply, only 5.470–5.725 GHz is allowed to be used outdoors and indoors.
Japan
Japan's use of 10 and 20 MHz-wide 5 GHz wireless channels is codified by Association of Radio Industries and Businesses (ARIB) document STD-T71, Broadband Mobile Access Communication System (CSMA). Additional rule specifications relating to 40, 80, and 160 MHz channel allocation has been taken on by Japan's Ministry of Internal Affairs and Communications (MIC).
Brazil
In Brazil, the use of TPC is required in the 5.150–5.350 GHz and 5.470–5.725 GHz bands, but devices without TPC are allowed with a 3 dB reduction in power. DFS is required in the 5.250–5.350 GHz and 5.470–5.725 GHz bands, and optional in the 5.150–5.250 GHz band.
Australia
Some of the Australian channels require DFS to be utilised (a significant change from the 2000 regulations, which allowed lower-power operation without DFS). As per AS/NZS 4268 B1 and B2, transmitters designed to operate in any part of the 5250–5350 MHz and 5470–5725 MHz bands shall implement DFS in accordance with sections 4.7 and 5.3.8 and Annex D of ETSI EN 301 893 or alternatively in accordance with FCC paragraph 15.407(h)(2). Also as per AS/NZS 4268 B3 and B4, transmitters designed to operate in any part of the 5250–5350 MHz and 5470–5725 MHz bands shall implement TPC in accordance with sections 4.4 and 5.3.4 of ETSI EN 301 893 or alternatively in accordance with FCC paragraph 15.407(h)(1).
New Zealand
New Zealand's regulations differ from Australia's.
Philippines
In the Philippines, the National Telecommunications Commission (NTC) allows the use of 5150 MHz to 5350 MHz and 5470 MHz to 5850 MHz frequency bands indoors with an effective radiated power (ERP) not exceeding 250 mW. Indoor Wireless Data Network (WDN) equipment and devices shall not use external antenna. All outdoor equipment/radio station whether for private WDN or public WDN shall be covered by appropriate permits and licenses required under existing rules and regulations.
Singapore
Singapore's regulations require DFS and TPC to be used in the 5.250–5.350 GHz band to transmit more than 100 mW equivalent isotropically radiated power (EIRP), but no more than 200 mW; require DFS capability on 5.250–5.350 GHz at or below 100 mW EIRP; and require DFS and TPC capabilities on 5.470–5.725 GHz at or below 1000 mW EIRP. Operating 5.725–5.850 GHz above 1000 mW and at or below 4000 mW EIRP shall be approved on an exceptional basis.
South Korea
In South Korea, the Ministry of Science and ICT publishes the relevant public notice, 신고하지 아니하고 개설할 수 있는 무선국용 무선설비의 기술기준 ("Technical standard for radio equipment for radio stations that can be opened without reporting"). Since 2018 the notice has allowed 160 MHz channel bandwidth.
China
China's MIIT expanded the allowed channels to add U-NII-1 (5150–5250 MHz) and U-NII-2 (5250–5350 MHz, DFS/TPC), similar to European standard EN 301 893 V1.7.1.
China's MIIT later expanded the allowed channels to add U-NII-3 (5725–5850 MHz).
Indonesia
Indonesia allows use of the band with maximum EIRP of () and maximum bandwidth of , and the band with the same maximum EIRP and maximum bandwidth of for indoor use. Outdoors, use of the band with maximum EIRP of () is allowed, with a maximum bandwidth of .
India
In exercise of the powers conferred by sections 4 and 7 of the Indian Telegraph Act, 1885 (13 of 1885) and sections 4 and 10 of the Indian Wireless Telegraphy Act, 1933 (17 of 1933) and in supersession of notification under G.S.R. 46(E), dated 28 January 2005 and notification under G.S.R. 36(E), dated 10 January 2007 and notification under G.S.R. 38(E), dated 19 January 2007, the Central Government made the rules, called the Use of Wireless Access System including Radio Local Area Network in 5 GHz band (Exemption from Licensing Requirement) Rules, 2018. The rules include criteria like 26 dB bandwidth of the modulated signal measured relative to the maximum level of the modulated carrier, the maximum power within the specified measurement bandwidth, within the device operating band; measurements in the 5725–5875 MHz band are made over a bandwidth of 500 kHz; measurements in the 5150–5250 MHz, 5250–5350 MHz, and 5470–5725 MHz bands are made over a bandwidth of 1 MHz or 26 dB emission bandwidth of the device. No licence shall be required under indoor and outdoor environment to establish, maintain, work, possess or deal in any wireless equipment for the purpose of low power wireless access systems. Transmitters operating in 5725–5875 MHz, all emissions within the frequency range from the band edge to 10 MHz above or below the band edge shall not exceed an EIRP of ; for frequencies 10 MHz or greater above or below the band edge, emission shall not exceed an EIRP of .
5.9 GHz (802.11p)
The 802.11p amendment, published on 15 July 2010, specifies WLAN in the licensed band of 5.9 GHz (5.850–5.925 GHz).
6 GHz (802.11ax and 802.11be)
The Wi-Fi Alliance has introduced the term Wi‑Fi 6E to identify and certify IEEE 802.11ax devices that support this new band, which is also used by Wi-Fi 7 (IEEE 802.11be).
Initialisms (precise definition below):
LPI: low-power indoor
VLP: very-low-power
United States
On 23 April 2020, the FCC voted on and ratified a Report and Order to allocate 1.2 GHz of unlicensed spectrum in the 6 GHz band (5.925–7.125 GHz) for Wi-Fi use.
Standard power
Standard-power access points are permitted indoors and outdoors at a maximum EIRP of 36 dBm in the U-NII-5 and U-NII-7 sub-bands with automatic frequency coordination (AFC).
Low-power indoor (LPI) operation
Note: Partial channels indicate channels that span UNII boundaries, which is permitted in 6 GHz LPI operation. Under the proposed channel numbers, the U-NII-7/U-NII-8 boundary is spanned by channels 185 (20 MHz), 187 (40 MHz), 183 (80 MHz), and 175 (160 MHz). The U-NII-6/U-NII-7 boundary is spanned by channels 115 (40 MHz), 119 (80 MHz), and channel 111 (160 MHz).
For use in indoor environments, access points are limited to a maximum EIRP of 30 dBm and a maximum power spectral density of 5 dBm/MHz. They can operate in this mode on all four U-NII bands (5, 6, 7, 8) without the use of automatic frequency coordination. To help ensure they are used only indoors, these types of access points are not permitted to have connectors for external antennas, to be weather-resistant, or to run on battery power.
Very-low-power devices
The FCC may issue a ruling in the future on a third class of very low power devices such as hotspots and short-range applications.
Canada
In November 2020, the Innovation, Science and Economic Development (ISED) of Canada published "Consultation on the Technical and Policy Framework for Licence-Exempt Use in the 6 GHz Band". They proposed to allow licence-exempt operations in the 6 GHz spectrum for three classes of radio local area networks (RLANs):
Standard power
For indoor and outdoor use. Maximum EIRP of 36 dBm and maximum power spectral density (PSD) of 23 dBm/MHz. Should employ Automated Frequency Coordination (AFC) control.
Low-power indoor (LPI)
For indoor use only. Maximum EIRP of 30 dBm and maximum PSD of 5 dBm/MHz.
Very-low-power (VLP)
For indoor and outdoor use. Maximum EIRP of 14 dBm and maximum PSD of -8 dBm/MHz.
Europe
ECC Decision (20)01 from 20 November 2020 allocated the frequency band from 5945 to 6425 MHz (corresponding almost to the US U-NII-5 band) for use by low-power indoor and very-low-power devices for Wireless Access Systems/Radio Local Area Networks (WAS/RLAN), with a portion specifically reserved for rail networks and intelligent transport systems.
United Kingdom
Since July 2020, the UK's Ofcom has permitted unlicensed use of the lower 6 GHz band (5945 to 6425 MHz, corresponding to the US U-NII-5 band) by Low-Power Indoor and by Very-Low-Power indoor and mobile outdoor devices.
Australia
In April 2021, Australia's ACMA opened consultations for the 6 GHz band. The lower 6 GHz band (5925 to 6425 MHz, corresponding to the US U-NII-5 band) was approved for 250 mW EIRP indoors and 25 mW outdoors on March 4, 2022. Further consideration is also being given to releasing the upper 6 GHz band (6425 to 7125 MHz) for WLAN use as well, although nothing has been officially proposed at this time. In March 2024, it was reported that the ACMA had begun industry consultation to lay the groundwork to release the upper 6 GHz band in the near future. As of August 2024, the ACMA had published the proposed options for the use of the upper 6 GHz band.
Japan
In September 2022, the Ministry of Internal Affairs and Communications announced amendments to the ministerial order and notices related to the Radio Act.
Low-power indoor (LPI)
For indoor use only. Maximum EIRP of 200 mW.
Very-low-power (VLP)
For indoor and outdoor use. Maximum EIRP of 25 mW.
Russia
In December 2022, Russian State Commission for Radio Frequencies authorised 6 GHz operation for low-power indoor (LPI) use with transmitter power control (TPC) limited to maximum EIRP of 200 mW and maximum PSD of 10 mW/MHz, and very low power (VLP) indoor and mobile outdoor use with maximum EIRP of 25 mW and maximum PSD of 1.3 mW/MHz.
Singapore
In May 2023, Singapore's IMDA amended its regulations to allocate the radio frequency spectrum 5,925–6,425 MHz for Wi-Fi use in Singapore.
Philippines
On May 23, 2024, the Philippines' National Telecommunications Commission (NTC) proposed allowing use of the 5925 MHz to 6425 MHz frequency band indoors with an effective radiated power (ERP) not exceeding 250 mW, and outdoors with an ERP not exceeding 25 mW. On July 5, 2024, the NTC released Memorandum Circular No. 002-07-2024, allowing 6 GHz Wi-Fi use, with the added restriction that use on unmanned aircraft systems is prohibited.
45 GHz (802.11aj)
The 802.11aj standard, an adaptation of 802.11ad (WiGig) for China, operates in the 45 GHz spectrum.
60 GHz (802.11ad/aj/ay)
The 802.11ad/aj/ay standards, also known as WiGig, operate in the V band unlicensed ISM band spectrum.
Indonesia
Indonesia allows the use of the band with maximum EIRP of (), and maximum bandwidth of , for indoor use.
See also
2.4 GHz radio use
High-speed multimedia radio
Notes
References
IEEE 802.11
Radio-related lists
Wi-Fi | List of WLAN channels | [
"Technology"
] | 4,256 | [
"Wireless networking",
"Wi-Fi"
] |
8,716,956 | https://en.wikipedia.org/wiki/The%20pig%20%28tool%29 | The pig is a specialty firefighting tool used mainly for roof ventilation, forcible entry and wall breaching. Invented by a member of the Austin Fire Department, the tool combines the butt-end of a flat head axe on one side and a pick on the other. The pig can be married with a Halligan to create a forcible entry system as an alternative to the classic axe and Halligan combination.
References
External links
Hand tools
Firefighter tools | The pig (tool) | [
"Engineering"
] | 94 | [
"Human–machine interaction",
"Hand tools"
] |
8,718,494 | https://en.wikipedia.org/wiki/HD%2095109 | HD 95109 (U Carinae) is a Classical Cepheid variable, a type of variable star, in the constellation Carina. Its apparent magnitude is 6.86.
U Car is a δ Cepheid variable with a period of 38.7681 days. Isaac Roberts discovered in 1891 that the star's brightness varies, and it was one of the earliest Cepheids to be discovered. It also has one of the longest periods, and hence is one of the most luminous stars in the class. There are still only a few Cepheids with longer periods, including RS Puppis, SV Vulpeculae, and the unusual S Vulpeculae.
The brightness variation in U Car is caused by fundamental-mode pulsations. The radius and temperature both vary during each cycle, and the temperature variation causes the spectral type to vary between F6 and G7.
References
Carina (constellation)
G-type supergiants
Carinae, U
095109
4276
052589
Classical Cepheid variables
F-type supergiants
Durchmusterung objects | HD 95109 | [
"Astronomy"
] | 238 | [
"Carina (constellation)",
"Constellations"
] |
8,718,541 | https://en.wikipedia.org/wiki/W%20Carinae | The designations W Carinae and w Carinae are distinct and refer to two different stars:
W Carinae, a classical Cepheid variable in the constellation Vela that is now called V Velorum
w Carinae (V520 Carinae), a red giant in the constellation of Carina
Carina (constellation)
Carinae, w | W Carinae | [
"Astronomy"
] | 73 | [
"Carina (constellation)",
"Constellations"
] |
8,718,556 | https://en.wikipedia.org/wiki/Y%20Carinae | Y Carinae (Y Car) is a Classical Cepheid variable, a type of variable star, in the constellation Carina. Its apparent magnitude varies from 7.53 to 8.48.
In 1893, Alexander W. Roberts discovered that the brightness of the star varies.
The primary Cepheid pulsation period is 3.6 days, but it also pulsates with a secondary period of 2.56 days. It is known as a double-mode Cepheid, or a beat Cepheid since the two periods interfere to produce slow variations at a beat frequency.
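For illustration (the arithmetic is ours, using the two periods quoted above), the standard beat relation gives the period of the slow modulation:

\[ P_{\text{beat}} = \left(\frac{1}{P_2} - \frac{1}{P_1}\right)^{-1} = \left(\frac{1}{2.56\ \text{d}} - \frac{1}{3.6\ \text{d}}\right)^{-1} \approx 8.9\ \text{days}. \]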
The variable primary star is in a triple system with a very close pair of hot main-sequence stars. The period of the outer pair is 2.76 years. The inner pair is constrained to orbit in less than 31 days, but the exact nature of the orbit is unknown. The existence of the close binary pair throws into doubt previous calculations of the mass of the pulsating star. The high incidence of triple systems among short-period Cepheids suggests that at least some of the short-period Cepheids may have formed by mergers.
References
Carina (constellation)
F-type giants
Carinae, Y
091595
Classical Cepheid variables
051653
Durchmusterung objects
J10331084-5829550
Triple star systems | Y Carinae | [
"Astronomy"
] | 283 | [
"Carina (constellation)",
"Constellations"
] |
8,718,658 | https://en.wikipedia.org/wiki/David%20Musser | David "Dave" Musser is a professor emeritus of computer science at the Rensselaer Polytechnic Institute in Troy, New York, United States.
He is known for his work in generic programming, particularly as applied to C++, and his collaboration with Alexander Stepanov. Their work together includes coining the term "generic programming" in 1989, and led to the creation of the C++ Standard Template Library (STL).
In 1997, he developed the sorting algorithm called introsort (also known as introspective sort), and the related selection algorithm called introselect, to provide algorithms that are both efficient and have optimal worst-case performance, for use in the STL.
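A compact Python sketch of the idea follows (it is not Musser's original C++ reference implementation; the depth limit of roughly 2·log₂ n and the small-range cutoff of 16 are conventional choices):

```python
import heapq
import math

def introsort(a):
    """Introspective sort: quicksort that falls back to heapsort when
    recursion gets too deep, and to insertion sort on small ranges."""
    if a:
        _introsort(a, 0, len(a), 2 * int(math.log2(len(a)) + 1))

def _introsort(a, lo, hi, depth):
    while hi - lo > 16:
        if depth == 0:
            # Quicksort is degenerating: sort the subrange via a heap,
            # guaranteeing O(n log n) worst-case behaviour.
            part = a[lo:hi]
            heapq.heapify(part)
            a[lo:hi] = [heapq.heappop(part) for _ in range(len(part))]
            return
        depth -= 1
        p = _partition(a, lo, hi)
        _introsort(a, p + 1, hi, depth)  # recurse on the right part,
        hi = p                           # loop on the left part
    _insertion_sort(a, lo, hi)           # small ranges: insertion sort

def _partition(a, lo, hi):
    # Median-of-three pivot selection; the median ends up at a[hi - 1].
    mid = (lo + hi) // 2
    if a[mid] < a[lo]:
        a[mid], a[lo] = a[lo], a[mid]
    if a[hi - 1] < a[lo]:
        a[hi - 1], a[lo] = a[lo], a[hi - 1]
    if a[hi - 1] < a[mid]:
        a[hi - 1], a[mid] = a[mid], a[hi - 1]
    a[mid], a[hi - 1] = a[hi - 1], a[mid]
    pivot, i = a[hi - 1], lo
    for j in range(lo, hi - 1):          # Lomuto partition
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    return i

def _insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
```

The heapsort fallback is what bounds the worst case: however adversarial the input, no more than about 2·log₂ n quicksort levels are ever attempted.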
In 2007 he retired from Rensselaer.
Selected publications
References
External links
David Musser's home page
Year of birth missing (living people)
Living people
Computer scientists
Rensselaer Polytechnic Institute faculty | David Musser | [
"Technology"
] | 182 | [
"Computer science",
"Computer scientists"
] |
8,718,695 | https://en.wikipedia.org/wiki/Nikto%20%28vulnerability%20scanner%29 | Nikto is a free software command-line vulnerability scanner that scans web servers for dangerous files or CGIs, outdated server software and other problems. It performs generic and server type specific checks. It also captures and prints any cookies received. The Nikto code itself is free software, but the data files it uses to drive the program are not. Version 1.00 was released December 27, 2001.
Features
Nikto can detect over 6700 potentially dangerous files or CGIs, checks for outdated versions of over 1250 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files and HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.
Variations
There are some variations of Nikto, one of which is MacNikto. MacNikto is an AppleScript GUI shell script wrapper built in Apple's Xcode and Interface Builder, released under the terms of the GPL. It provides easy access to a subset of the features available in the command-line version, which is installed along with the MacNikto application.
References
External links
CIRT Nikto page
Computer security software
Security compliance
Free security software
Security testing tools | Nikto (vulnerability scanner) | [
"Engineering"
] | 256 | [
"Cybersecurity engineering",
"Computer security software"
] |
8,719,288 | https://en.wikipedia.org/wiki/Conservation%20form | Conservation form or Eulerian form refers to an arrangement of an equation or system of equations, usually representing a hyperbolic system, that emphasizes that a property represented is conserved, i.e. a type of continuity equation. The term is usually used in the context of continuum mechanics.
General form
Equations in conservation form take the form

\[ \frac{\partial u}{\partial t} + \nabla \cdot \mathbf{f}(u) = 0 \]

for any conserved quantity \(u\), with a suitable flux function \(\mathbf{f}\). An equation of this form can be transformed into an integral equation

\[ \frac{\mathrm{d}}{\mathrm{d}t} \int_V u \,\mathrm{d}V = -\oint_{\partial V} \mathbf{f}(u) \cdot \mathbf{n} \,\mathrm{d}S \]

using the divergence theorem. The integral equation states that the rate of change of the integral of the quantity \(u\) over an arbitrary control volume \(V\) is given by the flux through the boundary \(\partial V\) of the control volume, with \(\mathbf{n}\) being the outer surface normal through the boundary. \(u\) is neither produced nor consumed inside of \(V\) and is hence conserved. A typical choice for \(\mathbf{f}\) is \(\mathbf{f}(u) = u\mathbf{v}\), with velocity \(\mathbf{v}\), meaning that the quantity \(u\) flows with a given velocity field.
The integral form of such equations is usually the physically more natural formulation, and the differential equation arises from differentiation. Since the integral equation can also have non-differentiable solutions, the equality of both formulations can break down in some cases, leading to weak solutions and severe numerical difficulties in simulations of such equations.
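To make the integral form concrete, here is a minimal Python sketch of a first-order finite-volume (upwind) scheme for the scalar advection law \(u_t + (au)_x = 0\) with periodic boundaries; the grid and step sizes are illustrative choices satisfying the CFL condition \(a\,\Delta t/\Delta x \le 1\):

```python
import numpy as np

def upwind_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=100):
    """Each cell average changes by the net flux through its two edges,
    exactly as the integral form above prescribes. Here a > 0, so the
    upwind flux at an edge is a times the value in the cell to its left."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        flux = a * u                                 # flux leaving each cell to the right
        u -= (dt / dx) * (flux - np.roll(flux, 1))   # outflow minus inflow
    return u
```

Because the update uses only cell averages and edge fluxes, it remains meaningful for the weak (non-differentiable) solutions mentioned above; this is the starting point of the finite-volume methods treated in the LeVeque text listed below.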
Example
An example of a set of equations written in conservation form are the Euler equations of fluid flow:

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0 \]

\[ \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u} + p \mathbf{I}) = 0 \]

\[ \frac{\partial E}{\partial t} + \nabla \cdot \left( \mathbf{u} (E + p) \right) = 0 \]

Each of these represents the conservation of mass, momentum, and energy, respectively.
See also
Conservation law
Lagrangian and Eulerian specification of the flow field
Further reading
Randall J. LeVeque: Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, Cambridge 2002 (Cambridge Texts in Applied Mathematics).
Algebra
Conservation equations | Conservation form | [
"Physics",
"Mathematics"
] | 320 | [
"Symmetry",
"Conservation laws",
"Mathematical objects",
"Equations",
"Conservation equations",
"Algebra",
"Physics theorems"
] |
8,719,641 | https://en.wikipedia.org/wiki/RELIKT-1 | RELIKT-1 from (sometimes RELICT-1) was a Soviet cosmic microwave background anisotropy experiment launched on board the Prognoz 9 satellite on 1 July 1983. It operated until February 1984. It was the first CMB satellite (followed by the Cosmic Background Explorer in 1989) and measured the CMB dipole, the Galactic plane, and gave upper limits on the quadrupole moment.
A follow-up, RELIKT-2, would have been launched around 1993, and a RELIKT-3 was proposed, but neither took place due to the dissolution of the Soviet Union.
Launch and observations
RELIKT-1 was launched on board the Prognoz-9 satellite on 1 July 1983. The satellite was in a highly eccentric orbit, with perigee around 1,000 km and apogee around 750,000 km, and an orbital period of 26 days.
RELIKT-1 observed at 37 GHz (8 mm), with a bandwidth of 0.4 GHz and an angular resolution of 5.8°. It used a superheterodyne, Dicke-type modulation radiometer with an automatic balancer for the two input levels with a 30-second time constant. The noise in 1 second was 31 mK, with a system temperature of 300 K and a receiver temperature of 110 K. The signal was sampled twice a second, and the noise was correlated between samples.
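As a rough consistency check (our arithmetic, not from the mission reports), the quoted 1-second noise matches the standard Dicke radiometer equation:

\[ \Delta T \approx \frac{2\,T_{\mathrm{sys}}}{\sqrt{\Delta\nu\,\tau}} = \frac{2 \times 300\ \mathrm{K}}{\sqrt{0.4 \times 10^{9}\ \mathrm{Hz} \times 1\ \mathrm{s}}} = 30\ \mathrm{mK}, \]

close to the reported 31 mK; the factor of 2 is the usual sensitivity penalty for Dicke switching.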
The receiver used two corrugated horn antennas, one pointing parallel to the spacecraft spin axis, the other pointing at a parabolic antenna so as to observe 90° from the spin axis. The satellite rotated every 120 seconds. The experiment consumed 50 W of power.
The radiometer was calibrated to 5% accuracy before launch, as was an internal noise source (which was used every four days during observations). Additionally the moon was used as a calibrator, as it was observed twice a month, and the in-flight system temperatures were measured to vary by 4% on a weekly basis.
The satellite rotation axis was kept constant for a week, giving 5040 scans of a great circle, after which it was changed to a new axis. The signal was recorded onto a tape recorder, and transmitted to Earth every four days. It observed for 6 months, giving 31 different scans that covered the whole sky, all of which intersected at the ecliptic poles. The experiment ceased observations in February 1984, after collecting 15 million measurements.
Results
It measured the CMB dipole, the Galactic plane, and reported constraints on the quadrupole moment.
The first dipole measurement was reported in 1984, while the telescope was still observing, at 2.1±0.5mK, and upper limits on the quadrupole of 0.2mK. It also detected brighter-than-expected Galactic plane emission from compact HII regions.
A reanalysis of the data by Strukov et al. in 1992 found a quadrupole between and at the 90% confidence level, and also reported a negative anomaly at l=150°, b=−70° at a 99% confidence level.
Another reanalysis of the data by Klypin, Stukov and Skulachev in 1992 found a dipole of 3.15±0.12mK, with a direction of 11h17m±10m and -7.5°±2.5°. It placed a limit on the CMB quadrupole of with a 95% confidence level, assuming a Harrison-Zeldovich spectrum, or without assuming a model. The results were close to those measured by the Cosmic Background Explorer and the Tenerife Experiment.
RELIKT-2
The second RELIKT satellite would have been launched in mid-1993. It would have had five channels to observe at 21.7 (13.8 mm), 24.5 (8.7 mm), 59.0 (5.1 mm), 83.0 (3.6 mm) and 193 GHz (1.6 mm), using degenerate parametric amplifiers. It would have had corrugated horns to give a resolution of 7°, and a more distant orbit to avoid contamination from the Moon and Sun, with a mission duration around 2 years, to give a better sensitivity than COBE. It would have been cooled to 100 K. It was constructed, and was undergoing tests in 1992. It would have been launched as the Libris satellite on a Molniya rocket. The launch was put back to 1996, with expanded plans to observe with 1.5–3° resolution from two spacecraft in 1995, but ultimately never took place because of the Soviet Union's break-up and lack of funding.
A RELIKT-3 was also planned, which would have observed at 34–90 GHz with a resolution around 1°.
Notes
References
1983 in spaceflight
1983 in the Soviet Union
Cosmic microwave background experiments
Space telescopes
Soviet space observatories
Spacecraft launched in 1983 | RELIKT-1 | [
"Astronomy"
] | 1,016 | [
"Space telescopes",
"Soviet space observatories"
] |
8,720,014 | https://en.wikipedia.org/wiki/%CE%92-Cryptoxanthin | β-Cryptoxanthin is a natural carotenoid pigment. It has been isolated from a variety of sources including the fruit of plants in the genus Physalis, orange rind, winter squashes such as butternut, papaya, egg yolk, butter, apples, and bovine blood serum.
Chemistry
In terms of structure, β-cryptoxanthin is closely related to β-carotene, with only the addition of a hydroxyl group. It is a member of the class of carotenoids known as xanthophylls.
In a pure form, β-cryptoxanthin is a red crystalline solid with a metallic luster. It is freely soluble in chloroform, benzene, pyridine, and carbon disulfide.
Biology and medicine
In the human body, β-cryptoxanthin is converted to vitamin A (retinol) and is, therefore, considered a provitamin A. As with other carotenoids, β-cryptoxanthin is an antioxidant and may help prevent free radical damage to cells and DNA, as well as stimulate the repair of oxidative damage to DNA.
Recent findings of an inverse association between β-cryptoxanthin and lung cancer risk in several observational epidemiological studies suggest that β-cryptoxanthin could potentially act as a chemopreventive agent against lung cancer. On the other hand, in the Grade IV histology group of adult patients diagnosed with malignant glioma, moderate to high intake of β-cryptoxanthin (for second tertile and for highest tertile compared to lowest tertile, in all cases) was associated with poorer survival.
Other uses
β-Cryptoxanthin is also used as a substance to colour food products (INS number 161c). It is not approved for use in the EU or USA; however, it is approved for use in Australia and New Zealand.
References
Carotenoids
Tetraterpenes
Cyclohexenes | Β-Cryptoxanthin | [
"Biology"
] | 423 | [
"Biomarkers",
"Carotenoids"
] |
8,720,264 | https://en.wikipedia.org/wiki/History%20of%20the%20battery | Batteries provided the main source of electricity before the development of electric generators and electrical grids around the end of the 19th century. Successive improvements in battery technology facilitated major electrical advances, from early scientific studies to the rise of telegraphs and telephones, eventually leading to portable computers, mobile phones, electric cars, and many other electrical devices.
Students and engineers developed several commercially important types of battery. "Wet cells" were open containers that held liquid electrolyte and metallic electrodes. When the electrodes were completely consumed, the wet cell was renewed by replacing the electrodes and electrolyte. Open containers are unsuitable for mobile or portable use. Wet cells were used commercially in the telegraph and telephone systems. Early electric cars used semi-sealed wet cells.
One important classification for batteries is by their life cycle. "Primary" batteries can produce current as soon as assembled, but once the active elements are consumed, they cannot be electrically recharged. The development of the lead-acid battery and subsequent "secondary" or "chargeable" types allowed energy to be restored to the cell, extending the life of permanently assembled cells. The introduction of nickel and lithium based batteries in the latter half of the 20th century made the development of innumerable portable electronic devices feasible, from powerful flashlights to mobile phones. Very large stationary batteries find some applications in grid energy storage, helping to stabilize electric power distribution networks.
Invention
From the mid 18th century on, before there were batteries, experimenters used Leyden jars to store electrical charge. As an early form of capacitor, Leyden jars, unlike electrochemical cells, stored their charge physically and would release it all at once. Many experimenters took to hooking several Leyden jars together to create a stronger charge and one of them, the colonial American inventor Benjamin Franklin, may have been the first to call his grouping an "electrical battery", a play on the military term for weapons functioning together.
Based on some findings by Luigi Galvani, Alessandro Volta, a friend and fellow scientist, believed observed electrical phenomena were caused by two different metals joined by a moist intermediary. He verified this hypothesis through experiments and published the results in 1791. In 1800, Volta invented the first true battery, storing and releasing a charge through a chemical reaction instead of physically, which came to be known as the voltaic pile. The voltaic pile consisted of pairs of copper and zinc discs piled on top of each other, separated by a layer of cloth or cardboard soaked in brine (i.e., the electrolyte). Unlike the Leyden jar, the voltaic pile produced continuous electricity and stable current, and lost little charge over time when not in use, though his early models could not produce a voltage strong enough to produce sparks. He experimented with various metals and found that zinc and silver gave the best results.
Volta believed the current was the result of two different materials simply touching each other – an obsolete scientific theory known as contact tension – and not the result of chemical reactions. As a consequence, he regarded the corrosion of the zinc plates as an unrelated flaw that could perhaps be fixed by changing the materials somehow. However, no scientist ever succeeded in preventing this corrosion. In fact, it was observed that the corrosion was faster when a higher current was drawn. This suggested that the corrosion was actually integral to the battery's ability to produce a current. This, in part, led to the rejection of Volta's contact tension theory in favor of the electrochemical theory. Volta's illustrations of his Crown of Cups and voltaic pile have extra metal disks, now known to be unnecessary, on both the top and bottom. The figure associated with this section, of the zinc-copper voltaic pile, has the modern design, an indication that "contact tension" is not the source of electromotive force for the voltaic pile.
Volta's original pile models had some technical flaws, one of them involving the electrolyte leaking and causing short-circuits due to the weight of the discs compressing the brine-soaked cloth. A Scotsman named William Cruickshank solved this problem by laying the elements in a box instead of piling them in a stack. This was known as the trough battery. Volta himself invented a variant that consisted of a chain of cups filled with a salt solution, linked together by metallic arcs dipped into the liquid. This was known as the Crown of Cups. These arcs were made of two different metals (e.g., zinc and copper) soldered together. This model also proved to be more efficient than his original piles, though it did not prove as popular.
Another problem with Volta's batteries was short battery life (an hour's worth at best), which was caused by two phenomena. The first was that the current produced electrolyzed the electrolyte solution, resulting in a film of hydrogen bubbles forming on the copper, which steadily increased the internal resistance of the battery (this effect, called polarization, is counteracted in modern cells by additional measures). The other was a phenomenon called local action, wherein minute short-circuits would form around impurities in the zinc, causing the zinc to degrade. The latter problem was solved in 1835 by the English inventor William Sturgeon, who found that amalgamated zinc, whose surface had been treated with some mercury, did not suffer from local action.
Despite its flaws, Volta's batteries provide a steadier current than Leyden jars, and made possible many new experiments and discoveries, such as the first electrolysis of water by the English surgeon Anthony Carlisle and the English chemist William Nicholson.
First practical batteries
Daniell cell
An English professor of chemistry named John Frederic Daniell found a way to solve the hydrogen bubble problem in the Voltaic Pile by using a second electrolyte to consume the hydrogen produced by the first. In 1836, he invented the Daniell cell, which consists of a copper pot filled with a copper sulfate solution, in which is immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. The earthenware barrier is porous, which allows ions to pass through but keeps the solutions from mixing.
The Daniell cell was a great improvement over the existing technology used in the early days of battery development and was the first practical source of electricity. It provides a longer and more reliable current than the Voltaic cell. It is also safer and less corrosive. It has an operating voltage of roughly 1.1 volts. It soon became the industry standard for use, especially with the new telegraph networks.
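For reference (this is standard electrochemistry, not spelled out in the text above), the net cell reaction behind that roughly 1.1 V is

\[ \mathrm{Zn(s) + Cu^{2+}(aq) \rightarrow Zn^{2+}(aq) + Cu(s)}, \]

with zinc dissolving at the anode and copper plating onto the cathode.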
The Daniell cell was also used as the first working standard for definition of the volt, which is the unit of electromotive force.
Bird's cell
A version of the Daniell cell was invented in 1837 by the Guy's Hospital physician Golding Bird who used a plaster of Paris barrier to keep the solutions separate. Bird's experiments with this cell were of some importance to the new discipline of electrometallurgy.
Porous pot cell
The porous pot version of the Daniell cell was invented by John Dancer, a Liverpool instrument maker, in 1838. It consists of a central zinc anode dipped into a porous earthenware pot containing a zinc sulfate solution. The porous pot is, in turn, immersed in a solution of copper sulfate contained in a copper can, which acts as the cell's cathode. The use of a porous barrier allows ions to pass through but keeps the solutions from mixing.
Gravity cell
In the 1860s, a Frenchman named Callaud invented a variant of the Daniell cell called the gravity cell. This simpler version dispensed with the porous barrier. This reduces the internal resistance of the system and, thus, the battery yields a stronger current. It quickly became the battery of choice for the American and British telegraph networks, and was widely used until the 1950s.
The gravity cell consists of a glass jar, in which a copper cathode sits on the bottom and a zinc anode is suspended beneath the rim. Copper sulfate crystals are scattered around the cathode and then the jar is filled with distilled water. As the current is drawn, a layer of zinc sulfate solution forms at the top around the anode. This top layer is kept separate from the bottom copper sulfate layer by its lower density and by the polarity of the cell.
The zinc sulfate layer is clear in contrast to the deep blue copper sulfate layer, which allows a technician to measure the battery life with a glance. On the other hand, this setup means the battery can be used only in a stationary appliance, or else the solutions mix or spill. Another disadvantage is that a current has to be continually drawn to keep the two solutions from mixing by diffusion, so it is unsuitable for intermittent use.
Poggendorff cell
The German scientist Johann Christian Poggendorff overcame the problems with separating the electrolyte and the depolariser using a porous earthenware pot in 1842. In the Poggendorff cell, sometimes called Grenet Cell due to the works of Eugene Grenet around 1859, the electrolyte is dilute sulphuric acid and the depolariser is chromic acid. The two acids are physically mixed together, eliminating the porous pot. The positive electrode (cathode) is two carbon plates, with a zinc plate (negative or anode) positioned between them. Because of the tendency of the acid mixture to react with the zinc, a mechanism is provided to raise the zinc electrode clear of the acids.
The cell provides 1.9 volts. It was popular with experimenters for many years due to its relatively high voltage, its greater ability to produce a consistent current, and its lack of any fumes, but the relative fragility of its thin glass enclosure and the necessity of raising the zinc plate when the cell was not in use eventually saw it fall out of favour. The cell was also known as the 'chromic acid cell', but principally as the 'bichromate cell'. This latter name came from the practice of producing the chromic acid by adding sulphuric acid to potassium dichromate, even though the cell itself contains no dichromate.
The Fuller cell was developed from the Poggendorff cell. Although the chemistry is principally the same, the two acids are once again separated by a porous container and the zinc is treated with mercury to form an amalgam.
Grove cell
The Welshman William Robert Grove invented the Grove cell in 1839. It consists of a zinc anode dipped in sulfuric acid and a platinum cathode dipped in nitric acid, separated by porous earthenware. The Grove cell provides a high current and nearly twice the voltage of the Daniell cell, which made it the favoured cell of the American telegraph networks for a time. However, it gives off poisonous nitric oxide fumes when operated. The voltage also drops sharply as the charge diminishes, which became a liability as telegraph networks grew more complex. Platinum was and still is very expensive.
Dun cell
Alfred Dun's cell of 1885 used nitro-muriatic acid (aqua regia, a mixture of muriatic and nitric acids) with iron and carbon electrodes. Dun described it as follows:
In the new element there can be used advantageously as exciting-liquid in the first case such solutions as have in a concentrated condition great depolarizing-power, which effect the whole depolarization chemically without necessitating the mechanical expedient of increased carbon surface. It is preferred to use iron as the positive electrode, and as exciting-liquid nitro-muriatic acid, the mixture consisting of muriatic and nitric acids. The nitro-muriatic acid, as explained above, serves for filling both cells. For the carbon-cells it is used strong or very slightly diluted, but for the other cells very diluted, (about one-twentieth, or at the most one-tenth). The element containing in one cell carbon and concentrated nitro-muriatic acid and in the other cell iron and dilute nitro-muriatic acid remains constant for at least twenty hours when employed for electric incandescent lighting.
Rechargeable batteries and dry cells
Lead-acid
Up to this point, all existing batteries would be permanently drained when all their chemical reactants were spent. In 1859, Gaston Planté invented the lead–acid battery, the first-ever battery that could be recharged by passing a reverse current through it. A lead-acid cell consists of a lead anode and a lead dioxide cathode immersed in sulfuric acid. Both electrodes react with the acid to produce lead sulfate, but the reaction at the lead anode releases electrons whilst the reaction at the lead dioxide consumes them, thus producing a current. These chemical reactions can be reversed by passing a reverse current through the battery, thereby recharging it.
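In modern notation, the discharge chemistry can be summarized as follows (a textbook summary; recharging runs both reactions in reverse):
$$\mathrm{Pb + HSO_4^{-} \to PbSO_4 + H^{+} + 2e^{-}} \quad \text{(lead anode)}$$
$$\mathrm{PbO_2 + HSO_4^{-} + 3H^{+} + 2e^{-} \to PbSO_4 + 2H_2O} \quad \text{(lead dioxide cathode)}$$
for an overall reaction of $\mathrm{Pb + PbO_2 + 2H_2SO_4 \to 2PbSO_4 + 2H_2O}$, yielding about 2 volts per cell.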
Planté's first model consisted of two lead sheets separated by rubber strips and rolled into a spiral. His batteries were first used to power the lights in train carriages while stopped at a station. In 1881, Camille Alphonse Faure invented an improved version that consists of a lead grid lattice into which is pressed a lead oxide paste, forming a plate. Multiple plates can be stacked for greater performance. This design is easier to mass-produce.
Compared to other batteries, Planté's is rather heavy and bulky for the amount of energy it can hold. However, it can produce remarkably large currents in surges, because it has very low internal resistance, meaning that a single battery can be used to power multiple circuits.
The lead-acid battery is still used today in automobiles and other applications where weight is not a big factor. The basic principle has not changed since 1859. In the early 1930s, a gel electrolyte (instead of a liquid) produced by adding silica to a charged cell was used in the LT battery of portable vacuum-tube radios. In the 1970s, "sealed" versions became common (commonly known as a "gel cell" or "SLA"), allowing the battery to be used in different positions without failure or leakage.
Today cells are classified as "primary" if they produce a current only until their chemical reactants are exhausted, and "secondary" if the chemical reactions can be reversed by recharging the cell. The lead-acid cell was the first "secondary" cell.
Leclanché cell
In 1866, Georges Leclanché invented a battery that consists of a zinc anode and a manganese dioxide cathode wrapped in a porous material, dipped in a jar of ammonium chloride solution. The manganese dioxide cathode has a little carbon mixed into it as well, which improves conductivity and absorption. It provided a voltage of 1.4 volts. This cell achieved very quick success in telegraphy, signaling, and electric bell work.
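The cell chemistry is more complicated than in earlier cells and depends on discharge conditions; a commonly quoted simplified overall discharge reaction is:
$$\mathrm{Zn + 2MnO_2 + 2NH_4Cl \to 2MnO(OH) + Zn(NH_3)_2Cl_2}$$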
The dry cell form was used to power early telephones, usually from an adjacent wooden box built to hold the batteries, before telephones could draw power from the telephone line itself. The Leclanché cell could not provide a sustained current for very long: in lengthy conversations, the battery would run down, rendering the conversation inaudible. This is because certain chemical reactions in the cell increase the internal resistance and, thus, lower the voltage.
Zinc-carbon cell, the first dry cell
Many experimenters tried to immobilize the electrolyte of an electrochemical cell to make it more convenient to use. The Zamboni pile of 1812 is a high-voltage dry battery but capable of delivering only minute currents. Various experiments were made with cellulose, sawdust, spun glass, asbestos fibers, and gelatine.
In 1886, Carl Gassner obtained a German patent on a variant of the Leclanché cell, which came to be known as the dry cell because it does not have a free liquid electrolyte. Instead, the ammonium chloride is mixed with plaster of Paris to create a paste, with a small amount of zinc chloride added in to extend the shelf life. The manganese dioxide cathode is dipped in this paste, and both are sealed in a zinc shell, which also acts as the anode. In November 1887, he obtained a U.S. patent for the same device.
Unlike previous wet cells, Gassner's dry cell is more solid, does not require maintenance, does not spill, and can be used in any orientation. It provides a potential of 1.5 volts. The first mass-produced model was the Columbia dry cell, first marketed by the National Carbon Company in 1896. The NCC improved Gassner's model by replacing the plaster of Paris with coiled cardboard, an innovation that left more space for the cathode and made the battery easier to assemble. It was the first convenient battery for the masses and made portable electrical devices practical, and led directly to the invention of the flashlight.
The zinc–carbon battery (as it came to be known) is still manufactured today.
In parallel, in 1887 Wilhelm Hellesen developed his own dry cell design. It has been claimed that Hellesen's design preceded that of Gassner.
In 1887, a dry battery was developed by Sakizō Yai (屋井 先蔵) of Japan and patented in 1892. In 1893, Sakizō Yai's dry battery was exhibited at the World's Columbian Exposition, where it attracted considerable international attention.
NiCd, the first alkaline battery
In 1899, a Swedish scientist named Waldemar Jungner invented the nickel–cadmium battery, a rechargeable battery that has nickel and cadmium electrodes in a potassium hydroxide solution; the first battery to use an alkaline electrolyte. It was commercialized in Sweden in 1910 and reached the United States in 1946. The first models were robust and had significantly better energy density than lead-acid batteries, but were much more expensive.
20th century: new technologies and ubiquity
Nickel-iron
Waldemar Jungner patented a nickel–iron battery in 1899, the same year as his NiCd battery patent, but found it to be inferior to its cadmium counterpart and, as a consequence, never bothered to develop it. It produced considerably more hydrogen gas when being charged, meaning it could not be sealed, and the charging process was less efficient (it was, however, cheaper).
Seeing a way to make a profit in the already competitive lead-acid battery market, Thomas Edison worked in the 1890s on developing an alkaline-based battery that he could patent. Edison thought that if he produced a lightweight and durable battery, electric cars would become the standard, with his firm as their main battery vendor. After many experiments, and probably borrowing from Jungner's design, he patented an alkaline-based nickel–iron battery in 1901. However, customers found his first model to be prone to leakage, leading to short battery life, and it did not outperform the lead-acid cell by much either. Although Edison was able to produce a more reliable and powerful model seven years later, by this time the inexpensive and reliable Model T Ford had made gasoline-engine cars the standard. Nevertheless, Edison's battery achieved great success in other applications, such as electric and diesel-electric rail vehicles, backup power for railroad crossing signals, and power for miners' lamps.
Common alkaline batteries
Until the late 1950s, the zinc–carbon battery continued to be a popular primary cell, but its relatively short battery life hampered sales. The Canadian engineer Lewis Urry, working for Union Carbide (first at the National Carbon Co. in Ontario and, by 1955, at the National Carbon Company Parma Research Laboratory in Cleveland, Ohio), was tasked with finding a way to extend the life of zinc–carbon batteries. Building on earlier work by Edison, Urry decided instead that alkaline batteries held more promise; until then, longer-lasting alkaline batteries had been unfeasibly expensive. Urry's battery consists of a manganese dioxide cathode and a powdered zinc anode with an alkaline electrolyte; using powdered zinc gives the anode a greater surface area. These batteries were put on the market in 1959.
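The overall discharge reaction usually quoted for the alkaline manganese dioxide cell is, in simplified form (the detailed chemistry involves several intermediate manganese oxidation states):
$$\mathrm{Zn + 2MnO_2 \to ZnO + Mn_2O_3}$$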
Nickel–hydrogen and nickel–metal hydride
The nickel–hydrogen battery entered the market as an energy-storage subsystem for commercial communication satellites.
The first consumer grade nickel–metal hydride batteries (NiMH) for smaller applications appeared on the market in 1989 as a variation of the 1970s nickel–hydrogen battery. NiMH batteries tend to have longer lifespans than NiCd batteries (and their lifespans continue to increase as manufacturers experiment with new alloys) and, since cadmium is toxic, NiMH batteries are less damaging to the environment.
Alkali metal-ion batteries
Lithium is the alkali metal with the lowest density and with the greatest electrochemical potential and energy-to-weight ratio. The low atomic weight and small size of its ions also speed its diffusion, suggesting it would be an ideal battery material. Experimentation with lithium batteries began in 1912 under American physical chemist Gilbert N. Lewis, but commercial lithium batteries did not come to market until the 1970s, in the form of primary lithium metal cells. Three-volt lithium primary cells such as the CR123A type and three-volt button cells are still widely used, especially in cameras and very small devices.
Three important developments regarding lithium batteries occurred in the 1980s. In 1980, the American chemist John B. Goodenough discovered the LiCoO2 (lithium cobalt oxide) cathode (positive electrode), and the Moroccan research scientist Rachid Yazami discovered the graphite anode (negative electrode) with a solid electrolyte. In 1981, Japanese chemists Tokio Yamabe and Shizukuni Yata discovered a novel nano-carbonaceous PAS (polyacene) material and found that it was very effective as an anode in a conventional liquid electrolyte. This led a research team managed by Akira Yoshino of Asahi Chemical, Japan, to build the first lithium-ion battery prototype in 1985, a rechargeable and more stable version of the lithium battery; Sony commercialized the lithium-ion battery in 1991. In 2019, John Goodenough, Stanley Whittingham, and Akira Yoshino were awarded the Nobel Prize in Chemistry for their development of lithium-ion batteries.
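In outline, such a cell works by shuttling lithium ions between two intercalation hosts; for the LiCoO2/graphite chemistry the overall reaction is commonly written as (charging drives it to the right):
$$\mathrm{LiCoO_2 + C_6 \;\rightleftharpoons\; Li_{1-x}CoO_2 + Li_xC_6}$$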
In 1997, the lithium polymer battery was released by Sony and Asahi Kasei. These batteries hold their electrolyte in a solid polymer composite instead of in a liquid solvent, and the electrodes and separators are laminated to each other. The latter difference allows the battery to be encased in a flexible wrapping instead of in a rigid metal casing, which means such batteries can be specifically shaped to fit a particular device. This advantage has favored lithium polymer batteries in the design of portable electronic devices such as mobile phones and personal digital assistants, and of radio-controlled aircraft, as such batteries allow for a more flexible and compact design. They generally have a lower energy density than normal lithium-ion batteries.
High costs and concerns about mineral extraction associated with lithium chemistry have renewed interest in sodium-ion battery development, with early electric vehicle product launches in 2023.
Solid-state batteries
As of 2024, solid-state batteries represent a significant technological step forward, offering several advantages over traditional lithium-ion batteries. Unlike lithium-ion batteries, which use liquid or gel electrolytes, solid-state batteries use solid electrolytes. This key difference enhances safety, as solid electrolytes are less likely to catch fire or leak. Solid-state batteries can also achieve higher energy densities, and therefore longer run times, than traditional lithium-ion batteries.
The automotive industry is keenly interested in the technology, as it promises safer and more efficient vehicles. Companies such as Toyota, Ford, and QuantumScape have invested heavily in the development of solid-state batteries.
See also
Baghdad Battery, an artifact that has similar properties to a modern battery
Memory effect
Comparison of commercial battery types
History of electrochemistry
List of battery sizes
List of battery types
Search for the Super Battery, a 2017 PBS film
Burgess Battery Company
Notes and references
“Advances in solid-state batteries: Materials, interfaces, characterizations, and devices.” MRS Bulletin, 16 Jan. 2024, link.springer.com/article/10.1557/s43577-023-00649-7.
Volle, Adam. “Solid-state battery | Definition, History, & Facts.” Britannica, www.britannica.com/technology/solid-state-battery.
Electric battery
Battery
Alessandro Volta
Battery | History of the battery | [
"Technology"
] | 5,079 | [
"Science and technology studies",
"History of science and technology",
"History of technology"
] |
8,720,712 | https://en.wikipedia.org/wiki/Multiplication%20theorem | In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises.
Finite characteristic
The multiplication theorem takes two common forms. In the first case, a finite number of terms are added or multiplied to give the relation. In the second case, an infinite number of terms are added or multiplied. The finite form typically occurs only for the gamma and related functions, for which the identity follows from a p-adic relation over a finite field. For example, the multiplication theorem for the gamma function follows from the Chowla–Selberg formula, which follows from the theory of complex multiplication. The infinite sums are much more common, and follow from characteristic zero relations on the hypergeometric series.
The following tabulates the various appearances of the multiplication theorem for finite characteristic; the characteristic zero relations are given further down. In all cases, n and k are non-negative integers. For the special case of n = 2, the theorem is commonly referred to as the duplication formula.
Gamma function–Legendre formula
The duplication formula and the multiplication theorem for the gamma function are the prototypical examples. The duplication formula for the gamma function is
$$\Gamma(z)\,\Gamma\left(z+\tfrac{1}{2}\right) = 2^{1-2z}\,\sqrt{\pi}\,\Gamma(2z).$$
It is also called the Legendre duplication formula or Legendre relation, in honor of Adrien-Marie Legendre. The multiplication theorem is
$$\Gamma(z)\,\Gamma\left(z+\tfrac{1}{k}\right)\,\Gamma\left(z+\tfrac{2}{k}\right)\cdots\Gamma\left(z+\tfrac{k-1}{k}\right) = (2\pi)^{(k-1)/2}\,k^{1/2-kz}\,\Gamma(kz)$$
for integer k ≥ 1, and is sometimes called Gauss's multiplication formula, in honour of Carl Friedrich Gauss. The multiplication theorem for the gamma functions can be understood to be a special case, for the trivial Dirichlet character, of the Chowla–Selberg formula.
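The formula is easy to verify numerically; a minimal sketch using only the Python standard library (the sample values of z and k are arbitrary):

```python
import math

# Gauss's multiplication formula:
# prod_{n=0}^{k-1} Gamma(z + n/k) == (2*pi)^((k-1)/2) * k^(1/2 - k*z) * Gamma(k*z)
z, k = 0.3, 3
lhs = math.prod(math.gamma(z + n / k) for n in range(k))
rhs = (2 * math.pi) ** ((k - 1) / 2) * k ** (0.5 - k * z) * math.gamma(k * z)
print(lhs, rhs)  # both print the same value, up to floating-point rounding
```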
Sine function
Formally similar duplication formulas hold for the sine function, which are rather simple consequences of the trigonometric identities. Here one has the duplication formula
$$\sin(\pi x)\,\sin\left(\pi\left(x+\tfrac{1}{2}\right)\right) = \tfrac{1}{2}\,\sin(2\pi x)$$
and, more generally, for any integer k, one has
$$\prod_{n=0}^{k-1} \sin\left(\pi\left(x+\tfrac{n}{k}\right)\right) = 2^{1-k}\,\sin(\pi k x)$$
Polygamma function, harmonic numbers
The polygamma function is the logarithmic derivative of the gamma function, and thus, the multiplication theorem becomes additive, instead of multiplicative:
$$k^{m+1}\,\psi^{(m)}(kz) = \sum_{n=0}^{k-1} \psi^{(m)}\left(z+\tfrac{n}{k}\right)$$
for $m \ge 1$, and, for $m = 0$, one has the digamma function:
$$k\left[\psi(kz) - \log k\right] = \sum_{n=0}^{k-1} \psi\left(z+\tfrac{n}{k}\right)$$
The polygamma identities can be used to obtain a multiplication theorem for harmonic numbers.
Hurwitz zeta function
The Hurwitz zeta function generalizes the polygamma function to non-integer orders, and thus obeys a very similar multiplication theorem:
$$k^s\,\zeta(s) = \sum_{n=1}^{k} \zeta\left(s,\tfrac{n}{k}\right)$$
where $\zeta(s)$ is the Riemann zeta function. This is a special case of
$$k^s\,\zeta(s,kz) = \sum_{n=0}^{k-1} \zeta\left(s,\,z+\tfrac{n}{k}\right)$$
and
$$\zeta(s,kz) = \sum_{n=0}^{\infty} \binom{s+n-1}{n}\,(1-k)^n\,z^n\,\zeta(s+n,z).$$
Multiplication formulas for the non-principal characters may be given in the form of Dirichlet L-functions.
Periodic zeta function
The periodic zeta function is sometimes defined as
$$F(s;q) = \sum_{m=1}^{\infty} \frac{e^{2\pi i m q}}{m^s} = \operatorname{Li}_s\!\left(e^{2\pi i q}\right)$$
where $\operatorname{Li}_s(z)$ is the polylogarithm. It obeys the duplication formula
$$2^{1-s}\,F(s;2q) = F(s;q) + F\!\left(s;q+\tfrac{1}{2}\right).$$
As such, it is an eigenvector of the Bernoulli operator with eigenvalue $2^{1-s}$. The multiplication theorem is
$$k^{1-s}\,F(s;kq) = \sum_{n=0}^{k-1} F\!\left(s;q+\tfrac{n}{k}\right).$$
The periodic zeta function occurs in the reflection formula for the Hurwitz zeta function, which is why the relation that it obeys, and the Hurwitz zeta relation, differ by the interchange of s → 1−s.
The Bernoulli polynomials may be obtained as a limiting case of the periodic zeta function, taking s to be an integer, and thus the multiplication theorem there can be derived from the above. Similarly, substituting $z = e^{2\pi i q}$ leads to the multiplication theorem for the polylogarithm.
Polylogarithm
The duplication formula takes the form
$$2^{1-s}\,\operatorname{Li}_s(z^2) = \operatorname{Li}_s(z) + \operatorname{Li}_s(-z).$$
The general multiplication formula is in the form of a Gauss sum or discrete Fourier transform:
$$k^{1-s}\,\operatorname{Li}_s(z^k) = \sum_{n=0}^{k-1} \operatorname{Li}_s\!\left(z\,e^{2\pi i n/k}\right).$$
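For instance, the duplication formula (the k = 2 case) follows directly by splitting the defining series into even and odd terms:
$$\operatorname{Li}_s(z) + \operatorname{Li}_s(-z) = \sum_{m=1}^{\infty} \frac{1+(-1)^m}{m^s}\,z^m = 2\sum_{j=1}^{\infty} \frac{z^{2j}}{(2j)^s} = 2^{1-s}\,\operatorname{Li}_s(z^2).$$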
These identities follow from that on the periodic zeta function, taking $z = e^{2\pi i q}$.
Kummer's function
The duplication formula for Kummer's function is
and thus resembles that for the polylogarithm, but twisted by i.
Bernoulli polynomials
For the Bernoulli polynomials, the multiplication theorems were given by Joseph Ludwig Raabe in 1851:
$$B_n(kx) = k^{n-1} \sum_{j=0}^{k-1} B_n\!\left(x+\tfrac{j}{k}\right)$$
and for the Euler polynomials,
$$E_n(kx) = k^{n} \sum_{j=0}^{k-1} (-1)^j\,E_n\!\left(x+\tfrac{j}{k}\right) \qquad \text{for odd } k$$
and
$$E_n(kx) = \frac{-2}{n+1}\,k^{n} \sum_{j=0}^{k-1} (-1)^j\,B_{n+1}\!\left(x+\tfrac{j}{k}\right) \qquad \text{for even } k.$$
The Bernoulli polynomials may be obtained as a special case of the Hurwitz zeta function, and thus the identities follow from there.
Bernoulli map
The Bernoulli map is a certain simple model of a dissipative dynamical system, describing the effect of a shift operator on an infinite string of coin-flips (the Cantor set). The Bernoulli map is a one-sided version of the closely related Baker's map. The Bernoulli map generalizes to a k-adic version, which acts on infinite strings of k symbols: this is the Bernoulli scheme. The transfer operator $\mathcal{L}_k$ corresponding to the shift operator on the Bernoulli scheme is given by
$$[\mathcal{L}_k f](x) = \frac{1}{k} \sum_{n=0}^{k-1} f\!\left(\frac{x+n}{k}\right).$$
Perhaps not surprisingly, the eigenvectors of this operator are given by the Bernoulli polynomials. That is, one has that
$$\mathcal{L}_k B_n = \frac{1}{k^n}\,B_n.$$
It is the fact that the eigenvalues $k^{-n} < 1$ that marks this as a dissipative system: for a non-dissipative measure-preserving dynamical system, the eigenvalues of the transfer operator lie on the unit circle.
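As a concrete check, take $B_1(x) = x - \tfrac{1}{2}$:
$$[\mathcal{L}_k B_1](x) = \frac{1}{k} \sum_{n=0}^{k-1} \left(\frac{x+n}{k} - \frac{1}{2}\right) = \frac{x}{k} + \frac{k-1}{2k} - \frac{1}{2} = \frac{1}{k}\left(x - \frac{1}{2}\right) = \frac{1}{k}\,B_1(x),$$
confirming the eigenvalue $1/k$.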
One may construct a function obeying the multiplication theorem from any totally multiplicative function. Let $f(n)$ be totally multiplicative; that is, $f(mn) = f(m)\,f(n)$ for any integers m, n. Define its Fourier series as
$$g(x) = \sum_{n=1}^{\infty} f(n)\,\exp(2\pi i n x).$$
Assuming that the sum converges, so that g(x) exists, one then has that it obeys the multiplication theorem; that is, that
$$\frac{1}{k} \sum_{n=0}^{k-1} g\!\left(\frac{x+n}{k}\right) = f(k)\,g(x).$$
That is, g(x) is an eigenfunction of the Bernoulli transfer operator, with eigenvalue f(k). The multiplication theorem for the Bernoulli polynomials then follows as a special case of the multiplicative function $f(n) = n^{-s}$. The Dirichlet characters are fully multiplicative, and thus can be readily used to obtain additional identities of this form.
Characteristic zero
The multiplication theorem over a field of characteristic zero does not close after a finite number of terms, but requires an infinite series to be expressed. Examples include that for the Bessel function $J_\nu(z)$:
$$\lambda^{-\nu}\,J_\nu(\lambda z) = \sum_{n=0}^{\infty} \frac{1}{n!} \left(\frac{(1-\lambda^2)\,z}{2}\right)^{n} J_{\nu+n}(z),$$
where $\lambda$ and $\nu$ may be taken as arbitrary complex numbers. Such characteristic-zero identities follow generally from one of many possible identities on the hypergeometric series.
Notes
References
Milton Abramowitz and Irene A. Stegun, eds. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, (1972) Dover, New York. (Multiplication theorems are individually listed chapter by chapter)
C. Truesdell, "On the Addition and Multiplication Theorems for the Special Functions", Proceedings of the National Academy of Sciences, Mathematics, (1950) pp. 752–757.
Special functions
Zeta and L-functions
Gamma and related functions
Mathematical theorems | Multiplication theorem | [
"Mathematics"
] | 1,397 | [
"Special functions",
"Combinatorics",
"nan",
"Mathematical problems",
"Mathematical theorems"
] |
8,721,272 | https://en.wikipedia.org/wiki/PHI-base | The Pathogen-Host Interactions database (PHI-base) is a biological database that contains manually curated information on genes experimentally proven to affect the outcome of pathogen-host interactions. The database has been maintained by researchers at Rothamsted Research and external collaborators since 2005.
PHI-base has been part of the UK node of ELIXIR, the European life-science infrastructure for biological information, since 2016.
Background
The Pathogen-Host Interactions database was developed to utilise the growing number of verified genes that mediate an organism's ability to cause disease and/or trigger host responses.
The web-accessible database catalogues experimentally verified pathogenicity, virulence, and effector genes from bacterial, fungal, and oomycete pathogens which infect animal, plant, and fungal hosts. PHI-base was the first online resource devoted to the identification and presentation of information on fungal and oomycete pathogenicity genes and their host interactions. PHI-base is a resource for the discovery of candidate targets in medically and agronomically important fungal and oomycete pathogens for intervention with synthetic chemistries and natural products (fungicides).
Each entry in PHI-base is curated by domain experts and supported by strong experimental evidence (gene disruption experiments) as well as literature references in which the experiments are described. Each gene in PHI-base is presented with its nucleotide and deduced amino acid sequence as well as a detailed structured description of the predicted protein's function during the host infection process. To facilitate data interoperability, genes are annotated using controlled vocabularies (Gene Ontology terms, EC Numbers, etc.), and links to other external data sources such as UniProt, EMBL, and the NCBI taxonomy services.
Current developments
Version 4.17 (May 2024) of PHI-base provides information on 9,973 genes from 296 pathogens and 249 hosts, covering 22,415 interactions, as well as efficacy information on roughly 20 drugs and their target sequences in the pathogen. PHI-base currently focuses on plant-pathogenic and human-pathogenic organisms, including fungi, oomycetes, and bacteria. The entire contents of the database can be downloaded in a tab-delimited format. Since the launch of version 4, PHI-base has also been searchable using the PHIB-BLAST search tool, which uses the BLAST algorithm to compare a user's sequence against the sequences available from PHI-base. The database providers recently announced the launch of PHI-base 5, a new gene-centric version of PHI-base, through a press release on the Rothamsted Research website. A summary of the improvements made is also available.
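Because each release can be downloaded as a single tab-delimited file, local filtering is straightforward. Below is a minimal, hypothetical sketch in Python using pandas; the file name and the column header `Pathogen_species` are illustrative assumptions, as the actual headers may vary between releases:

```python
import pandas as pd

# Load a locally downloaded PHI-base release (tab-delimited).
# The file name and column names below are assumptions, not the official schema.
phi = pd.read_csv("phi-base_v4-17.tsv", sep="\t", low_memory=False)

# Count curated entries for one pathogen of interest.
mask = phi["Pathogen_species"].str.contains("Fusarium graminearum", na=False)
print(f"{mask.sum()} curated entries for Fusarium graminearum")
```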
In 2016 the plant portion of PHI-base was used to establish a Semantic PHI-base search tool.
PHI-base has been aligned with Ensembl Genomes since 2011, FungiDB since 2016, and Global Biotic Interactions (GloBI) since 2018. All new PHI-base releases are integrated by these independent databases.
PHI-base is a resource for many applications including:
The discovery of conserved genes in medically and agronomically important pathogens, which may be potential targets for chemical intervention
Comparative genome analyses
Annotation of newly sequenced pathogen genomes
Functional interpretation of RNA sequencing and microarray experiments
The rapid cross-checking of phenotypic differences between pathogenic species when writing articles for peer review
PHI-base use has been cited in over 900 peer-reviewed articles.
Since 2015, the website has linked to an online literature curation tool called PHI-Canto, enabling community-driven literature curation for various pathogenic species. PHI-Canto provides a community curation framework that offers not only a curation tool but also a phenotype ontology and controlled vocabularies, giving a unified language and set of rules for describing pathogen-host interaction experiments. The central concept of this framework is the 'metagenotype', which allows phenotypes to be annotated and assigned to specific pathogen mutant-host interactions. PHI-Canto extends the single-species curation tool developed for PomBase (https://www.pombase.org), the model organism database for fission yeast.
Funding
PHI-base is a National Capability funded by the Biotechnology and Biological Sciences Research Council (BBSRC), a UK research council.
References
External links
PHI-base
Biological interactions
Genetics databases
Genetics in the United Kingdom
Genomics
Online databases
Pathogenic microbes
Plant pathogens and diseases
Rothamsted Experimental Station
Science and technology in Hertfordshire | PHI-base | [
"Biology"
] | 940 | [
"Behavior",
"Plant pathogens and diseases",
"Plants",
"Biological interactions",
"nan",
"Ethology"
] |