Ozires Silva
https://en.wikipedia.org/wiki/Ozires%20Silva

Ozires Silva (born 8 January 1931) is a Brazilian entrepreneur and the founder of Embraer.
Ozires was born in Bauru, São Paulo state. He graduated from the Escola de Aeronáutica at Campo dos Afonsos (Rio de Janeiro) as a military pilot, then served in the Brazilian Air Force for four years in the Amazon rainforest region.
In 1962, Ozires graduated from the Aeronautics Technological Institute (ITA) as an aeronautical engineer. He was immediately hired by the Brazilian General Command for Aerospace Technology (CTA), where he joined the Instituto de Pesquisas e Desenvolvimento (IPD, now the Instituto de Aeronáutica e Espaço, IAE). He soon became the lead engineer of the Bandeirante project.
The first Bandeirante prototype flew on October 26, 1968. After that, Ozires attempted to convince private industries to produce the Bandeirante in series, without success. His efforts, however, contributed to the creation of a government-owned aircraft manufacturer, Embraer, of which he became president on July 29, 1969.
In 1986, Ozires left Embraer. He was president of Petrobras for a short time, and served as Minister of Infrastructure from March 15, 1990, to March 27, 1991. He returned to Embraer from 1991 to 1995 to conduct the privatization process, and was also president of Varig from 2000 to 2002.
Ozires Silva was one of the first witnesses to report the May 19, 1986 Brazilian UFO incident, which he observed while flying in a Xingu executive turboprop.
Ozires has also served (as of 2004) as Director of Technology of AVAMAX, Executive Vice-President of the Academia Brasileira de Estudos Avançados "Dr. Adolfo Bezerra de Menezes Cavalcanti", President of Pele Nova, a biotechnology company, and President of the World Trade Center Advisory Board.
He is currently dean of , a private university in Brazil.
Published books
A Decolagem de um Sonho - História da Criação da Embraer - Lemos Editorial - 1998
Cartas a um Jovem Empreendedor - Elsevier Editora - 2006
A Decolagem de um Grande Sonho - Elsevier Editora - 2008
Ethanol - A Brazilian revolution - 2008
References
Ozires Silva biography (in Portuguese)
History of Brazil Aeronautics Industry (in Portuguese)
External links
Embraer
Varig
IAE - Aeronautics and Space Institute (in Portuguese)
Petrobras (in Portuguese)
Academia Brasileira de Estudos Avançados "Dr. Adolfo Bezerra de Menezes Cavalcanti" (in Portuguese)
Pele Nova (in Portuguese)
1931 births
Living people
People from Bauru
Aerospace engineers
Brazilian military aviators
Brazilian chief executives
Brazilian Air Force personnel
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
Government ministers of Brazil
Jiayang Coal Railway
https://en.wikipedia.org/wiki/Jiayang%20Coal%20Railway

The Jiayang Coal Railway, also known as the Shixi–Huangcunjing Railway or Shibanxi Railway, is a narrow-gauge railway in Qianwei County, near Leshan in Sichuan, China.
History
A coal mine was established at Bajiaogou (芭蕉沟) in 1938. The coal was initially transported in manually propelled trucks on a narrow-gauge track to Mamio, where it was loaded onto barges on the Mabian River.
To make transport more efficient, construction of a narrow-gauge line to Shixi, with a gauge of 600 mm, began in 1958. The line was formally inaugurated on 12 July 1959, and in 1960 it was re-gauged to 762 mm (2 ft 6 in). The station at Mifengyan lies on a zig zag.
Initially, coal wagons were used to carry both coal and passengers. In the 1960s, trains with separate coal and passenger rolling stock became available. In 1975, six pairs of passenger-only trains operated daily, independently from the coal transport, but that was reduced to four passenger trains per day over the years.
Additional mines near Jiaoba and Yuejin were also connected to the railway line. In 2000, the Yuejin–Shixi section was electrified at 550 V DC. The transport of coal on the remainder of the line was discontinued in 2003, but passenger trains still operated, due to the lack of a road connection. In May 2004, the local government requested that the line be maintained as a heritage railway, as more and more tourists became interested in it. In 2010 it was listed as national cultural heritage. Tickets for tourists cost ¥80, while locals are charged only ¥0.5.
Rolling stock
Initially, converted lorries were used, and later steam locomotives (probably of the RJ type) were added. After re-gauging, larger ZM16-4-C2 steam locomotives were introduced; these are still used on the non-electrified section. In 2001, numbers 07, 08, 09, 10 and 14 were still operational.
Two SJ380A diesel locomotives, numbered 01 and 02, were purchased and commissioned in 1991, but were withdrawn from use as early as 1996. For the electrified part of the line between Shixi and Yuejin, three four-wheeled ZL 14-7 electric locomotives were purchased in 2000.
Coal was initially transported in wagons with a bamboo superstructure, and later with a wooden superstructure; today, two-axle steel wagons are used. Two-axle carriages with open windows are used for passenger service, and four-axle carriages are available for checked-in luggage and tourist trains. Their windows are covered by transparent plastic, and some are painted brown.
Operation
Four pairs of passenger trains per day run the full length of the line, taking about 75 minutes in each direction. Additionally, one or two pairs of trains are used daily for coal transport between Huangcunjing and Shixi. Special trains for tourists are available on request; for these, only tourist carriages are used. One or two tourist carriages are also added to normal passenger trains. All of these trains are pulled by ZM16-4 steam locomotives, even on the electrified section of the line. The locomotives are refilled at Yuejin station on their way to Huangcunjing.
Coal is transported between Yuejin and Shixi by electric locomotives; a single ZL 14-7 shuttles back and forth between the loading and unloading stations.
Literature
Zhang Xiang: Jiayang Narrow Gauge Steam. 2011.
Gao Luchuan: Jiayang Narrow Gauge Steam - A Living Fossil of the Industrial Revolution 1959–2009. China Photographic Publishing House, 2009.
David Akast: A steam-powered taste of old China. South China Morning Post, 2014. https://amp.scmp.com/magazines/post-magazine/article/1400541/blasts-past-steam-powered-taste-old-china
External links
Guide for Jiayang Steam Train
Short description with links to other reports
China's last steam train (BBC Travel)
China's Jiayang Railway: Journey back in time on the world's last passenger steam train service
Travel report (German, the 6th part provides links to previous parts)
Coal in China
Rail transport in Sichuan
Mining railways
2 ft 6 in gauge railways in China
600 mm gauge railways in China
Railways with Zig Zags
Chemical process of decomposition
https://en.wikipedia.org/wiki/Chemical%20process%20of%20decomposition

Decomposition in animals is a process that begins immediately after death and involves the destruction of soft tissue, leaving behind skeletonized remains. The chemical process of decomposition is complex and involves the breakdown of soft tissue as the body passes through the sequential stages of decomposition. Autolysis and putrefaction also play major roles in the disintegration of cells and tissues.
The human body is composed of approximately: 64% water, 20% protein, 10% fat, 1% carbohydrate, 5% minerals. The decomposition of soft tissue is characterized by the breakdown of these macromolecules, and thus a large proportion of the decomposition products should reflect the amount of protein and fat content initially present in the body. As such, the chemical process of decomposition involves the breakdown of proteins, carbohydrates, lipids, nucleic acids, and bone.
Protein degradation
Proteins make up a variety of different tissues within the body, which may be classified as soft or hard tissue proteins. As such, proteins within the body are not degraded at a uniform rate.
Proteolysis
Proteolysis is the process that breaks down proteins. It is regulated by moisture, temperature, and bacteria. This process does not occur at a uniform rate and thus some proteins are degraded during early decomposition, while others are degraded during later stages of decomposition. During the early stages of decomposition, soft tissue proteins are broken down. These include proteins that:
line the gastrointestinal tract and pancreatic epithelium
form the brain, liver, and kidneys
During later stages of decomposition, more resistant tissue proteins are degraded by the effects of putrefaction. These include:
reticulin
muscle protein
collagen (a hard tissue protein), which survives even longer than the former tissue proteins
Keratin is a protein which is found in skin, hair, and nails. It is most resistant to the enzymes involved in proteolysis and must be broken down by special keratinolytic microorganisms. This is the reason that hair and nails are commonly found with skeletal remains.
Proteolysis products
In general, proteolysis breaks down proteins into:
proteoses
peptones
polypeptides
amino acids
Continuing proteolysis leads to the production of phenolic substances. In addition, the following gases will also be produced:
carbon dioxide
hydrogen sulphide, which is highly toxic
ammonia
methane
The sulfur-containing amino acids cysteine and methionine undergo bacterial decomposition to yield:
ammonia
thiols (decomposition gases known for their foul odours)
pyruvic acid
sulphides
hydrogen sulphide gas
If iron is present, ferrous sulphide will be produced, which can be seen as a black precipitate.
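These reactions can be sketched as follows (equations added here for illustration; they are standard chemistry rather than text from the original article):

\mathrm{HS{-}CH_2{-}CH(NH_2){-}COOH + H_2O \rightarrow CH_3{-}CO{-}COOH + NH_3 + H_2S} (desulfhydration of cysteine to pyruvic acid, ammonia, and hydrogen sulphide)

\mathrm{Fe^{2+} + S^{2-} \rightarrow FeS} (ferrous sulphide, the black precipitate)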
Two common decarboxylation products of protein associated with decomposition are putrescine and cadaverine. These compounds are toxic at high levels and have distinctive, foul odours. It is believed that they are components of the characteristic odours of decomposition commonly detected by cadaver dogs.
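For example (equations added for illustration), decarboxylation of the amino acids ornithine and lysine yields putrescine and cadaverine, respectively:

\mathrm{H_2N(CH_2)_3CH(NH_2)COOH \rightarrow H_2N(CH_2)_4NH_2 + CO_2} (ornithine → putrescine)

\mathrm{H_2N(CH_2)_4CH(NH_2)COOH \rightarrow H_2N(CH_2)_5NH_2 + CO_2} (lysine → cadaverine)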
A summary of the protein degradation products can be found in Table 1 below.
Nitrogen release
Nitrogen is a component of amino acids and is released upon deamination. It is typically released in the form of ammonia, which may be used by plants or microbes in the surrounding environment, converted to nitrate, or can accumulate in soil (if the body is located on top of or within soil). It has been suggested that the presence of nitrogen in soil may enhance nearby plant growth.
In acidic soil conditions, ammonia will be converted to ammonium ions, which can be used by plants or microbes. Under alkaline conditions, some of the ammonium ions entering soil may be converted back to ammonia. Any remaining ammonium in the environment can undergo nitrification and denitrification to yield nitrate and nitrite. In the absence of nitrifying bacteria, or organisms capable of oxidizing ammonia, ammonia will accumulate in the soil.
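These nitrogen transformations can be summarized as follows (equations added for clarity; they are standard soil nitrogen chemistry rather than text from the original article):

\mathrm{NH_3 + H^+ \rightleftharpoons NH_4^+} (acidic conditions favour the ammonium ion)

\mathrm{2\,NH_4^+ + 3\,O_2 \rightarrow 2\,NO_2^- + 4\,H^+ + 2\,H_2O} (first nitrification step, to nitrite)

\mathrm{2\,NO_2^- + O_2 \rightarrow 2\,NO_3^-} (second nitrification step, to nitrate)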
Phosphorus release
Phosphorus can be released from various components of the body, including proteins (especially those making up nucleic acids), sugar phosphate, and phospholipids. The route phosphorus takes once it is released is complex and relies on the pH of the surrounding environment. In most soils, phosphorus exists as insoluble inorganic complexes, associated with iron, calcium, magnesium, and aluminum. Soil microorganisms can also transform insoluble organic complexes into soluble ones.
Carbohydrate degradation
Early in decomposition, carbohydrates will be broken down by microorganisms. The process begins with the breakdown of glycogen into glucose monomers. These sugar monomers can be completely decomposed to carbon dioxide and water or incompletely decomposed to various organic acids and alcohols, or other oxygenated species, such as ketones, aldehydes, esters and ethers.
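As a sketch (equation added here), the complete aerobic decomposition of a glucose monomer is:

\mathrm{C_6H_{12}O_6 + 6\,O_2 \rightarrow 6\,CO_2 + 6\,H_2O}

Incomplete decomposition instead stops at the intermediate organic acids, alcohols, and other oxygenated species described below.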
Depending on the availability of oxygen in the environment, sugars will be decomposed by different organisms and into different products, although both routes may occur simultaneously. Under aerobic conditions, fungi and bacteria will decompose sugars into the following organic acids:
glucuronic acid
citric acid
oxalic acid
Under anaerobic conditions, bacteria will decompose sugars into:
lactic acid
butyric acid
acetic acid
which are collectively responsible for the acidic environment commonly associated with decomposing bodies.
Other bacterial fermentation products include alcohols, such as butyl and ethyl alcohol, acetone, and gases, such as methane and hydrogen.
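Two of these fermentation routes, written as overall equations (added for illustration; standard fermentation stoichiometry):

\mathrm{C_6H_{12}O_6 \rightarrow 2\,CH_3CH(OH)COOH} (lactic acid fermentation)

\mathrm{C_6H_{12}O_6 \rightarrow 2\,C_2H_5OH + 2\,CO_2} (ethanol fermentation, with carbon dioxide gas)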
A summary of the carbohydrate degradation products can be found in Table 1 below.
Lipid degradation
Lipids in the body are mainly contained in adipose tissue, which is made up of about 5–30% water, 2–3% protein, and 60–85% lipids by weight, of which 90–99% are triglycerides. Adipose tissue is largely composed of neutral lipids, a term that collectively refers to triglycerides, diglycerides, phospholipids, and cholesterol esters, of which triglycerides are the most common. The fatty acid content of the triglycerides varies from person to person, but contains oleic acid in the greatest amount, followed by linoleic, palmitoleic, and palmitic acids.
Neutral lipid degradation
Neutral lipids are hydrolyzed by lipases shortly after death, freeing the fatty acids from their glycerol backbone and creating a mixture of saturated and unsaturated fatty acids. Under the right conditions (when sufficient water and bacterial enzymes are present), neutral lipids are degraded completely, until they are reduced to fatty acids. Under suitable conditions, the fatty acids can be transformed into adipocere. Alternatively, fatty acids may react with sodium and potassium ions present in tissue to produce salts of fatty acids. When the body lies on or within soil, the sodium and potassium ions can be replaced by calcium and magnesium ions to form soaps of saturated fatty acids, which can also contribute to the formation of adipocere.
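Schematically (equation added for illustration, with R standing for the fatty-acid chains), the lipase-catalysed hydrolysis of a triglyceride is:

\mathrm{C_3H_5(OOCR)_3 + 3\,H_2O \rightarrow C_3H_5(OH)_3 + 3\,RCOOH} (triglyceride + water → glycerol + free fatty acids)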
Fatty acid degradation
The fatty acids resulting from hydrolysis can undergo one of two routes of degradation, depending on the availability of oxygen. It is possible, however, for both routes to take place at the same time in different areas of the body.
Anaerobic degradation
Anaerobic bacteria dominate within a body following death, which promote the anaerobic degradation of fatty acids by hydrogenation. The process of hydrogenation transforms unsaturated bonds (double and triple bonds) into single bonds. This essentially increases the amounts of saturated fatty acids, while decreasing the proportion of unsaturated fatty acids. Therefore, hydrogenation of oleic and palmitoleic acids, for example, will yield stearic, and palmitic acids, respectively.
\underset{\text{unsaturated fatty acid}}{\mathrm{HOOC{-}(CH_2)_7{-}CH{=}CH{-}(CH_2)_5{-}CH_3}} + \mathrm{H_2} \xrightarrow{\text{bacterial enzymes}} \underset{\text{saturated fatty acid}}{\mathrm{HOOC{-}(CH_2)_7{-}CH_2{-}CH_2{-}(CH_2)_5{-}CH_3}}
Aerobic degradation
In the presence of oxygen, the fatty acids will undergo oxidation. Lipid oxidation is a chain reaction process in which oxygen attacks the double bond in a fatty acid, to yield peroxide linkages. Eventually, the process will produce aldehydes and ketones.
Initiation
\mathrm{RH + O_2 \rightarrow R^\bullet + HOO^\bullet}
Propagation
\mathrm{R^\bullet + O_2 \rightarrow ROO^\bullet}
\mathrm{ROO^\bullet + RH \rightarrow ROOH + R^\bullet}
Termination
\mathrm{ROO^\bullet + R^\bullet \rightarrow ROOR}
\mathrm{R^\bullet + R^\bullet \rightarrow R{-}R}
A summary of the lipid degradation products can be found in Table 1 below.
Nucleic acid degradation
The breakdown of nucleic acids produces nitrogenous bases, phosphates, and sugars. These three products are further broken down by degradation pathways of other macromolecules. The nitrogen from the nitrogenous bases will be transformed in the same way that it is in proteins. Similarly, phosphates will be released from the body and undergo the same changes as those released from proteins and phospholipids. Finally, sugars, also known as carbohydrates, will be degraded based on the availability of oxygen.
Bone degradation
Bone is a composite tissue that is made up of three main fractions:
a protein fraction that mainly consists of collagen (a hard tissue protein that is more resistant to degradation than other tissue proteins), which serves as support
a mineral fraction that consists of hydroxyapatite (the mineral that contains the calcium and phosphorus in bone), which stiffens the protein structure
a ground substance made of other organic compounds
The collagen and hydroxyapatite are held together by a strong protein-mineral bond that provides bone with its strength and its ability to remain long after the soft tissue of a body has been degraded.
The process that degrades bone is referred to as diagenesis. The first step in the process involves the elimination of the organic collagen fraction by the action of bacterial collagenases. These collagenases break down protein into peptides. The peptides are subsequently reduced to their constituent amino acids, which can be leached away by groundwater. Once the collagen has been removed from bone, the hydroxyapatite content is degraded by inorganic mineral weathering, meaning that important ions, such as calcium, are lost to the environment. The strong protein-mineral bond that provided bone with its strength will become compromised by this degradation, leading to an overall weakened structure, which will continue to weaken until full disintegration of bone occurs.
Factors affecting bone degradation
Bone is quite resistant to degradation but will eventually be broken down by physical breaking, decalcification, and dissolution. The rate at which bone is degraded, however, is highly dependent on its surrounding environment. When soil is present, its destruction is influenced by both abiotic (water, temperature, soil type, and pH) and biotic (fauna and flora) agents.
Abiotic factors
Water accelerates the process by leaching essential organic minerals from bone. As such, soil type plays a role, because it will affect the water content of the environment. For example, some soils, like clay soils, retain water better than others, like sandy or silty soils. Further, acidic soils are better able to dissolve the inorganic matrix of hydroxyapatite than basic soils, thus accelerating the disintegration of bone.
Biotic factors
Microorganisms, mainly bacteria and fungi, play a role in bone degradation. They are capable of invading bone tissue and causing minerals to leach into the surrounding environment, leading to disturbances in its structure. Small and large mammals often disturb bones by removing them from grave sites or gnawing on them, which contributes to their destruction. Finally, plant roots located above burial sites can be extremely destructive to bone. Fine roots can travel through the tissue and split long bones, while larger roots can produce openings in bones that may be mistaken for fractures.
References
Biodegradation
Chemical reactions
Biostratinomy
Mechanical similarity
https://en.wikipedia.org/wiki/Mechanical%20similarity

In classical mechanics, a branch overlapping physics and applied mathematics, mechanical similarity occurs when the potential energy is a homogeneous function of the positions of the particles, with the result that the trajectories of the particles in the system are geometrically similar paths, differing in size but retaining shape.
Consider a system of any number of particles and assume that the interaction energy between any pair of particles has the form

U \propto r^k,

where r is the distance between the two particles and k is the degree of homogeneity. In such a case the solutions to the equations of motion are a series of geometrically similar paths, and the times of motion t at corresponding points on the paths are related to the linear size l of the path by

\frac{t'}{t} = \left(\frac{l'}{l}\right)^{1 - k/2}.
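A quick way to see this (a standard scaling argument in the spirit of the Landau & Lifshitz reference below, added here for completeness): rescale all particle coordinates by a factor \lambda and time by a factor \mu. A potential energy homogeneous of degree k acquires a factor \lambda^k, while the kinetic energy acquires \lambda^2/\mu^2. The motion is unchanged when the whole Lagrangian is multiplied by one common constant, which requires

\lambda^k = \frac{\lambda^2}{\mu^2}, \qquad \text{i.e.} \qquad \mu = \lambda^{1 - k/2}.

Identifying \lambda = l'/l and \mu = t'/t gives the relation above; for k = −1 it yields (t'/t)^2 = (l'/l)^3, which is Kepler's third law.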
Examples
The period of small oscillations (k = 2) is independent of their amplitude.
The time of free fall under gravity (k = 1) is proportional to the square root of the initial altitude.
The square of the time of revolution of the planets (k = −1) is proportional to the cube of the orbital size.
See also
Virial theorem
References
Landau LD and Lifshitz EM (1976). Mechanics, §10, 3rd ed. Pergamon Press.
Classical mechanics
Piecewise algebraic space
https://en.wikipedia.org/wiki/Piecewise%20algebraic%20space

In mathematics, a piecewise algebraic space is a generalization of a semialgebraic set, introduced by Maxim Kontsevich and Yan Soibelman. The motivation was the proof of Deligne's conjecture on Hochschild cohomology. Robert Hardt, Pascal Lambrechts, Victor Turchin, and Ismar Volić later developed the theory.
References
Maxim Kontsevich and Yan Soibelman. “Deformations of algebras over operads and the Deligne conjecture”. In: Conférence Moshé Flato 1999, Vol. I (Dijon). Vol. 21. Math. Phys. Stud. Dordrecht: Kluwer Acad. Publ., 2000, pp. 255–307. arXiv: math/0001151.
Algebraic geometry
Community genetics
https://en.wikipedia.org/wiki/Community%20genetics

Community genetics is a recently emerged field in biology that fuses elements of community ecology, evolutionary biology, and molecular and quantitative genetics. Antonovics first articulated the vision for such a field, and Whitham et al. formalized its definition as "the study of the genetic interactions that occur between species and their abiotic environment in complex communities." The field aims to bridge the gaps in the study of evolution and ecology within the multivariate community context in which ecological and evolutionary features are embedded. The documentary film A Thousand Invisible Cords provides an introduction to the field and its implications.
To date, the primary focus of most community genetics studies has been on the influences of genetic variation in plants on foliar arthropod communities. In a wide variety of ecosystems, different plant genotypes often support different compositions of associated foliar arthropod communities. Such community phenotypes have been observed in natural hybrid complexes, among genotypes and sibling families within a single species, and among different plant populations. To understand the broader impacts of differences among plant genotypes on biodiversity as a whole, researchers have begun to examine the response of other organisms, such as foliar endophytes, mycorrhizal fungi, soil microbes, litter-dwelling arthropods, herbaceous plants and epiphytes. These effects are frequently examined with foundation species in temperate ecosystems, which structure ecosystems by modulating and stabilizing resources and ecosystem processes. The emphasis on foundation species allows researchers to focus on the likely most important players in a system without becoming overwhelmed by the complexity of all the genetically variable interactions occurring at the same time. However, unique effects of plant genotypes have also been found with non-foundation species, and can occur in tropical, boreal and alpine systems.
The vision for the field of community genetics extends beyond documentation of different communities on different genotypes of a focal species. Other aspects of this field include:
understanding how species interactions within a community are modulated by host genotype,
implications of host genotype on the fitness and evolution of community members, and
selection on hosts influencing associated communities.
Future progress in the field of community genetics is strongly dependent on breakthroughs in modern molecular DNA-based technology, such as genome sequencing. The application of a community genetics approach to understanding how species and communities of interacting organisms are reacting to rapid changes in climate, as well as informing restoration, are two important applied aspects of community genetics.
References
Community ecology
Evolutionary biology
Molecular genetics
Laser-hybrid welding
https://en.wikipedia.org/wiki/Laser-hybrid%20welding

Laser-hybrid welding is a welding process that combines the principles of laser beam welding and arc welding.
The combination of laser light and an electrical arc into an amalgamated welding process has existed since the 1970s, but has only recently been used in industrial applications. There are three main types of hybrid welding process, depending on the arc used: TIG, plasma arc or MIG augmented laser welding. While TIG-augmented laser welding was the first to be researched, MIG is the first to go into industry and is commonly known as hybrid laser welding.
Whereas in the early days laser sources still had to prove their suitability for industrial use, today they are standard equipment in many manufacturing enterprises.
The combination of laser welding with another weld process is called a "hybrid welding process". This means that a laser beam and an electrical arc act simultaneously in one welding zone, influencing and supporting each other.
Laser
Laser welding not only requires high laser power but also a high-quality beam to obtain the desired "deep-weld effect". The higher beam quality can be exploited either to obtain a smaller focus diameter or a larger focal distance. A variety of laser types are used for this process, in particular Nd:YAG lasers, whose light can be transmitted via a water-cooled glass fiber; the beam is then projected onto the workpiece by collimating and focusing optics. Carbon dioxide lasers can also be used, with the beam transmitted via lenses or mirrors.
Laser-hybrid process
For welding metallic objects, the laser beam is focused to obtain intensities of more than 1 MW/cm2. When the laser beam hits the surface of the material, this spot is heated up to vaporization temperature, and a vapor cavity is formed in the weld metal due to the escaping metal vapor. This is known as a keyhole. The extraordinary feature of the weld seam is its high depth-to-width ratio. The energy-flow density of the freely burning arc is slightly more than 100 kW/cm2. Unlike a dual process where two separate weld processes act in succession, hybrid welding may be viewed as a combination of both weld processes acting simultaneously in one and the same process zone. Depending on the kind of arc or laser process used, and depending on the process parameters, the two systems will influence each other in different ways.
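As a rough worked example (the beam power and spot size here are assumed for illustration and are not taken from the article): a 1 kW laser beam focused to a spot of diameter 0.3 mm produces an intensity of

I = \frac{P}{\pi r^2} = \frac{10^3\ \mathrm{W}}{\pi\,(0.015\ \mathrm{cm})^2} \approx 1.4\ \mathrm{MW/cm^2},

which is above the roughly 1 MW/cm² needed for the deep-weld (keyhole) effect, while a typical arc at around 100 kW/cm² remains an order of magnitude below it.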
The combination of the laser process and the arc process results in an increase in both weld penetration depth and welding speed (as compared to each process alone). The metal vapor escaping from the vapor cavity acts upon the arc plasma. Absorption of the laser radiation in the processing plasma remains negligible. Depending on the ratio of the two power inputs, the character of the overall process may be mainly determined either by the laser or by the arc.
Absorption of the laser radiation is substantially influenced by the temperature of the workpiece surface. Before the laser welding process can start, the initial reflectance must be overcome, especially on aluminum surfaces. This can be achieved by preheating the material. In the hybrid process, the arc heats the metal, helping the laser beam to couple in. After the vaporisation temperature has been reached, the vapor cavity is formed, and nearly all radiation energy can be put into the workpiece. The energy required for this is thus determined by the temperature-dependent absorption and by the amount of energy lost by conduction into the rest of the workpiece. In laser-hybrid welding, using MIG, vaporisation takes place not only from the surface of the workpiece but also from the filler wire, so that more metal vapor is available to facilitate the absorption of the laser radiation.
Fatigue behavior
Over the years a great deal of research has been done to understand fatigue behavior, particularly for new techniques like laser-hybrid welding, but knowledge is still limited. Laser-hybrid welding is an advanced welding technology that creates narrow deep welds and offers greater freedom to control the weld surface geometry. Therefore, fatigue analysis and life prediction of hybrid weld joints has become more important and is the subject of ongoing research.
References
See also
List of laser articles
Welding
Myocardial infarction
https://en.wikipedia.org/wiki/Myocardial%20infarction

A myocardial infarction (MI), commonly known as a heart attack, occurs when blood flow decreases or stops in one of the coronary arteries of the heart, causing infarction (tissue death) of the heart muscle. The most common symptom is retrosternal chest pain or discomfort that classically radiates to the left shoulder, arm, or jaw. The pain may occasionally feel like heartburn. MI is the most dangerous type of acute coronary syndrome.
Other symptoms may include shortness of breath, nausea, feeling faint, a cold sweat, feeling tired, and decreased level of consciousness. About 30% of people have atypical symptoms. Women more often present without chest pain and instead have neck pain, arm pain or feel tired. Among those over 75 years old, about 5% have had an MI with little or no history of symptoms. An MI may cause heart failure, an irregular heartbeat, cardiogenic shock or cardiac arrest.
Most MIs occur due to coronary artery disease. Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol intake. The complete blockage of a coronary artery caused by a rupture of an atherosclerotic plaque is usually the underlying mechanism of an MI. MIs are less commonly caused by coronary artery spasms, which may be due to cocaine, significant emotional stress (often known as Takotsubo syndrome or broken heart syndrome) and extreme cold, among others. Many tests are helpful with diagnosis, including electrocardiograms (ECGs), blood tests and coronary angiography. An ECG, which is a recording of the heart's electrical activity, may confirm an ST elevation MI (STEMI), if ST elevation is present. Commonly used blood tests include troponin and less often creatine kinase MB.
Treatment of an MI is time-critical. Aspirin is an appropriate immediate treatment for a suspected MI. Nitroglycerin or opioids may be used to help with chest pain; however, they do not improve overall outcomes. Supplemental oxygen is recommended in those with low oxygen levels or shortness of breath. In a STEMI, treatments attempt to restore blood flow to the heart and include percutaneous coronary intervention (PCI), where the arteries are pushed open and may be stented, or thrombolysis, where the blockage is removed using medications. People who have a non-ST elevation myocardial infarction (NSTEMI) are often managed with the blood thinner heparin, with the additional use of PCI in those at high risk. In people with blockages of multiple coronary arteries and diabetes, coronary artery bypass surgery (CABG) may be recommended rather than angioplasty. After an MI, lifestyle modifications, along with long-term treatment with aspirin, beta blockers and statins, are typically recommended.
Worldwide, about 15.9 million myocardial infarctions occurred in 2015. More than 3 million people had an ST elevation MI, and more than 4 million had an NSTEMI. STEMIs occur about twice as often in men as women. About one million people have an MI each year in the United States. In the developed world, the risk of death in those who have had a STEMI is about 10%. Rates of MI for a given age have decreased globally between 1990 and 2010. In 2011, an MI was one of the top five most expensive conditions during inpatient hospitalizations in the US, with a cost of about $11.5 billion for 612,000 hospital stays.
Terminology
Myocardial infarction (MI) refers to tissue death (infarction) of the heart muscle (myocardium) caused by ischemia, the lack of oxygen delivery to myocardial tissue. It is a type of acute coronary syndrome, which describes a sudden or short-term change in symptoms related to blood flow to the heart. Unlike the other type of acute coronary syndrome, unstable angina, a myocardial infarction occurs when there is cell death, which can be detected by a blood test for biomarkers (such as the cardiac protein troponin). When there is evidence of an MI, it may be classified as an ST elevation myocardial infarction (STEMI) or a non-ST elevation myocardial infarction (NSTEMI) based on the results of an ECG.
The phrase "heart attack" is often used non-specifically to refer to myocardial infarction. An MI is different from—but can cause—cardiac arrest, where the heart is not contracting at all or so poorly that all vital organs cease to function, thus leading to death. It is also distinct from heart failure, in which the pumping action of the heart is impaired. However, an MI may lead to heart failure.
Signs and symptoms
Chest pain that may or may not radiate to other parts of the body is the most typical and significant symptom of myocardial infarction. It might be accompanied by other symptoms such as sweating.
Pain
Chest pain is one of the most common symptoms of acute myocardial infarction and is often described as a sensation of tightness, pressure, or squeezing. Pain radiates most often to the left arm, but may also radiate to the lower jaw, neck, right arm, back, and upper abdomen. The pain most suggestive of an acute MI, with the highest likelihood ratio, is pain radiating to the right arm and shoulder. Similarly, chest pain similar to a previous heart attack is also suggestive. The pain associated with MI is usually diffuse, does not change with position, and lasts for more than 20 minutes. It might be described as pressure, tightness, or a knifelike, tearing, or burning sensation (all of which can also occur in other diseases). It could be felt as an unexplained anxiety, and pain might be absent altogether. Levine's sign, in which a person localizes the chest pain by clenching one or both fists over their sternum, has classically been thought to be predictive of cardiac chest pain, although a prospective observational study showed it had a poor positive predictive value.
Typically, chest pain because of ischemia, be it unstable angina or myocardial infarction, lessens with the use of nitroglycerin, but nitroglycerin may also relieve chest pain arising from non-cardiac causes.
Other
Chest pain may be accompanied by sweating, nausea or vomiting, and fainting, and these symptoms may also occur without any pain at all. Dizziness or lightheadedness is common and occurs due to reduced oxygen and blood supply to the brain. In females, the most common symptoms of myocardial infarction include shortness of breath, weakness, and fatigue. Females are more likely to have unusual or unexplained tiredness and nausea or vomiting as symptoms. Females having heart attacks are more likely to have palpitations, back pain, labored breathing, vomiting, and left arm pain than males, although the studies showing these differences had high variability. Females are less likely to report chest pain during a heart attack and more likely to report nausea, jaw pain, neck pain, cough, and fatigue, although these findings are inconsistent across studies. Females with heart attacks also had more indigestion, dizziness, loss of appetite, and loss of consciousness. Shortness of breath is a common, and sometimes the only, symptom, occurring when damage to the heart limits the output of the left ventricle, with breathlessness arising either from low oxygen in the blood or pulmonary edema.
Other less common symptoms include weakness, light-headedness, palpitations, and abnormalities in heart rate or blood pressure. These symptoms are likely induced by a massive surge of catecholamines from the sympathetic nervous system, which occurs in response to pain and, where present, low blood pressure. Loss of consciousness can occur in myocardial infarctions due to inadequate blood flow to the brain and cardiogenic shock, and sudden death, frequently due to the development of ventricular fibrillation. When the brain was without oxygen for too long due to a myocardial infarction, coma and persistent vegetative state can occur. Cardiac arrest, and atypical symptoms such as palpitations, occur more frequently in females, the elderly, those with diabetes, in people who have just had surgery, and in critically ill patients.
Absence
"Silent" myocardial infarctions can happen without any symptoms at all. These cases can be discovered later on electrocardiograms, using blood enzyme tests, or at autopsy after a person has died. Such silent myocardial infarctions represent between 22 and 64% of all infarctions, and are more common in the elderly, in those with diabetes mellitus and after heart transplantation. In people with diabetes, differences in pain threshold, autonomic neuropathy, and psychological factors have been cited as possible explanations for the lack of symptoms. In heart transplantation, the donor heart is not fully innervated by the nervous system of the recipient.
Risk factors
The most prominent risk factors for myocardial infarction are older age, actively smoking, high blood pressure, diabetes mellitus, and total cholesterol and high-density lipoprotein levels. Many risk factors of myocardial infarction are shared with coronary artery disease, the primary cause of myocardial infarction, with other risk factors including male sex, low levels of physical activity, a past family history, obesity, and alcohol use. Risk factors for myocardial disease are often included in risk factor stratification scores, such as the Framingham Risk Score. At any given age, men are more at risk than women for the development of cardiovascular disease. High levels of blood cholesterol is a known risk factor, particularly high low-density lipoprotein, low high-density lipoprotein, and high triglycerides.
Many risk factors for myocardial infarction are potentially modifiable, with the most important being tobacco smoking (including secondhand smoke). Smoking appears to be the cause of about 36% and obesity the cause of 20% of coronary artery disease. Lack of physical activity has been linked to 7–12% of cases. Less common causes include stress-related causes such as job stress, which accounts for about 3% of cases, and chronic high stress levels.
Diet
There is varying evidence about the importance of saturated fat in the development of myocardial infarctions. Eating polyunsaturated fat instead of saturated fats has been shown in studies to be associated with a decreased risk of myocardial infarction, while other studies find little evidence that reducing dietary saturated fat or increasing polyunsaturated fat intake affects heart attack risk. Dietary cholesterol does not appear to have a significant effect on blood cholesterol and thus recommendations about its consumption may not be needed. Trans fats do appear to increase risk. Acute and prolonged intake of high quantities of alcoholic drinks (3–4 or more daily) increases the risk of a heart attack.
Genetics
Family history of ischemic heart disease or MI, particularly if one has a male first-degree relative (father, brother) who had a myocardial infarction before age 55 years, or a female first-degree relative (mother, sister) less than age 65 increases a person's risk of MI.
Genome-wide association studies have found 27 genetic variants that are associated with an increased risk of myocardial infarction. The strongest association of MI has been found with chromosome 9 on the short arm p at locus 21, which contains genes CDKN2A and 2B, although the single nucleotide polymorphisms that are implicated are within a non-coding region. The majority of these variants are in regions that have not been previously implicated in coronary artery disease. The following genes have an association with MI: PCSK9, SORT1, MIA3, WDR12, MRAS, PHACTR1, LPA, TCF21, MTHFDSL, ZC3HC1, CDKN2A, 2B, ABO, PDGF0, APOA5, MNF1ASM283, COL4A1, HHIPC1, SMAD3, ADAMTS7, RAS1, SMG6, SNF8, LDLR, SLC5A3, MRPS6, KCNE2.
Other
The risk of having a myocardial infarction increases with older age, low physical activity, and low socioeconomic status. Heart attacks appear to occur more commonly in the morning hours, especially between 6AM and noon. Evidence suggests that heart attacks are at least three times more likely to occur in the morning than in the late evening. Shift work is also associated with a higher risk of MI. One analysis has found an increase in heart attacks immediately following the start of daylight saving time.
Women who use combined oral contraceptive pills have a modestly increased risk of myocardial infarction, especially in the presence of other risk factors. The use of non-steroidal anti-inflammatory drugs (NSAIDs), even for as short a time as a week, increases risk.
Endometriosis in women under the age of 40 is an identified risk factor.
Air pollution is also an important modifiable risk. Short-term exposure to air pollution such as carbon monoxide, nitrogen dioxide, and sulfur dioxide (but not ozone) has been associated with MI and other acute cardiovascular events. For sudden cardiac deaths, every increment of 30 units in Pollutant Standards Index correlated with an 8% increased risk of out-of-hospital cardiac arrest on the day of exposure. Extremes of temperature are also associated.
A number of acute and chronic infections including Chlamydophila pneumoniae, influenza, Helicobacter pylori, and Porphyromonas gingivalis among others have been linked to atherosclerosis and myocardial infarction. Myocardial infarction can also occur as a late consequence of Kawasaki disease.
Calcium deposits in the coronary arteries can be detected with CT scans. Calcium seen in coronary arteries can provide predictive information beyond that of classical risk factors. High blood levels of the amino acid homocysteine is associated with premature atherosclerosis; whether elevated homocysteine in the normal range is causal is controversial.
In people without evident coronary artery disease, possible causes for the myocardial infarction are coronary spasm or coronary artery dissection.
Mechanism
Atherosclerosis
The most common cause of a myocardial infarction is the rupture of an atherosclerotic plaque on an artery supplying heart muscle. Plaques can become unstable, rupture, and additionally promote the formation of a blood clot that blocks the artery; this can occur in minutes. Blockage of an artery can lead to tissue death in tissue being supplied by that artery. Atherosclerotic plaques are often present for decades before they result in symptoms.
The gradual buildup of cholesterol and fibrous tissue in plaques in the wall of the coronary arteries or other arteries, typically over decades, is termed atherosclerosis. Atherosclerosis is characterized by progressive inflammation of the walls of the arteries. Inflammatory cells, particularly macrophages, move into affected arterial walls. Over time, they become laden with cholesterol products, particularly LDL, and become foam cells. A cholesterol core forms as foam cells die. In response to growth factors secreted by macrophages, smooth muscle and other cells move into the plaque and act to stabilize it. A stable plaque may have a thick fibrous cap with calcification. If there is ongoing inflammation, the cap may be thin or ulcerate. Exposed to the pressure associated with blood flow, plaques, especially those with a thin lining, may rupture and trigger the formation of a blood clot (thrombus). The cholesterol crystals have been associated with plaque rupture through mechanical injury and inflammation.
Other causes
Atherosclerotic disease is not the only cause of myocardial infarction, but it may exacerbate or contribute to other causes. A myocardial infarction may result from a heart with a limited blood supply subject to increased oxygen demands, such as in fever, a fast heart rate, hyperthyroidism, too few red blood cells in the bloodstream, or low blood pressure. Damage or failure of procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass grafts (CABG) may cause a myocardial infarction. Spasm of coronary arteries, such as Prinzmetal's angina may cause blockage.
Tissue death
If impaired blood flow to the heart lasts long enough, it triggers a process called the ischemic cascade; the heart cells in the territory of the blocked coronary artery die (infarction), chiefly through necrosis, and do not grow back. A collagen scar forms in their place. When an artery is blocked, cells lack oxygen, needed to produce ATP in mitochondria. ATP is required for the maintenance of electrolyte balance, particularly through the Na/K ATPase. This leads to an ischemic cascade of intracellular changes, necrosis and apoptosis of affected cells.
Cells in the area with the worst blood supply, just below the inner surface of the heart (endocardium), are most susceptible to damage. Ischemia first affects this region, the subendocardial region, and tissue begins to die within 15–30 minutes of loss of blood supply. The dead tissue is surrounded by a zone of potentially reversible ischemia that progresses to become a full-thickness transmural infarct. The initial "wave" of infarction can take place over 3–4 hours. These changes are seen on gross pathology and cannot be predicted by the presence or absence of Q waves on an ECG. The position, size and extent of an infarct depends on the affected artery, totality of the blockage, duration of the blockage, the presence of collateral blood vessels, oxygen demand, and success of interventional procedures.
Tissue death and myocardial scarring alter the normal conduction pathways of the heart and weaken affected areas. The size and location put a person at risk of abnormal heart rhythms (arrhythmias) or heart block, aneurysm of the heart ventricles, inflammation of the heart wall following infarction, and rupture of the heart wall that can have catastrophic consequences.
Injury to the myocardium also occurs during re-perfusion. This might manifest as ventricular arrhythmia. The re-perfusion injury is a consequence of the calcium and sodium uptake from the cardiac cells and the release of oxygen radicals during reperfusion. No-reflow phenomenon—when blood is still unable to be distributed to the affected myocardium despite clearing the occlusion—also contributes to myocardial injury. Topical endothelial swelling is one of many factors contributing to this phenomenon.
Diagnosis
Criteria
A myocardial infarction, according to current consensus, is defined by elevated cardiac biomarkers with a rising or falling trend and at least one of the following:
Symptoms relating to ischemia
Changes on an electrocardiogram (ECG), such as ST segment changes, new left bundle branch block, or pathologic Q waves
Changes in the motion of the heart wall on imaging
Demonstration of a thrombus on angiogram or at autopsy.
Types
A myocardial infarction is usually clinically classified as an ST-elevation MI (STEMI) or a non-ST elevation MI (NSTEMI). These are based on ST elevation, a change in a portion of the heartbeat graphically recorded on an ECG. STEMIs make up about 25–40% of myocardial infarctions. A more explicit classification system, based on international consensus in 2012, also exists. This classifies myocardial infarctions into five types:
Type 1: spontaneous MI related to plaque erosion and/or rupture, fissuring, or dissection
Type 2: MI related to an ischemic imbalance, such as from increased oxygen demand or decreased supply, e.g., coronary artery spasm, coronary embolism, anemia, arrhythmias, high blood pressure, or low blood pressure
Type 3: sudden unexpected cardiac death, including cardiac arrest, where symptoms may suggest MI, an ECG may be taken with suggestive changes, or a blood clot is found in a coronary artery by angiography and/or at autopsy, but where blood samples could not be obtained, or were taken before the appearance of cardiac biomarkers in the blood
Type 4: MI associated with coronary angioplasty or stents: Type 4a, associated with percutaneous coronary intervention (PCI), and Type 4b, associated with stent thrombosis as documented by angiography or at autopsy
Type 5: MI associated with coronary artery bypass grafting (CABG)
MI may also be associated with spontaneous coronary artery dissection, which occurs mostly in young, fit women.
Cardiac biomarkers
There are many different biomarkers used to determine the presence of cardiac muscle damage. Troponins, measured through a blood test, are considered to be the best, and are preferred because they have greater sensitivity and specificity for measuring injury to the heart muscle than other tests. A rise in troponin occurs within 2–3 hours of injury to the heart muscle, and peaks within 1–2 days. The level of the troponin, as well as its change over time, is useful in measuring and diagnosing or excluding myocardial infarctions, and the diagnostic accuracy of troponin testing is improving over time. A single high-sensitivity cardiac troponin measurement can rule out a heart attack, as long as the ECG is normal.
Other tests, such as CK-MB or myoglobin, are discouraged. CK-MB is not as specific as troponins for acute myocardial injury, and may be elevated with past cardiac surgery, inflammation or electrical cardioversion; it rises within 4–8 hours and returns to normal within 2–3 days. Copeptin may be useful to rule out MI rapidly when used along with troponin.
Electrocardiogram
Electrocardiograms (ECGs) are a series of leads placed on a person's chest that measure electrical activity associated with contraction of the heart muscle. The taking of an ECG is an important part of the workup of an AMI, and ECGs are often not just taken once but may be repeated over minutes to hours, or in response to changes in signs or symptoms.
ECG readouts produce a waveform with different labeled features. In addition to a rise in biomarkers, a rise in the ST segment, changes in the shape or flipping of T waves, new Q waves, or a new left bundle branch block can be used to diagnose an AMI. In addition, ST elevation can be used to diagnose an ST segment myocardial infarction (STEMI). To qualify, the elevation must be new: ≥2 mm (0.2 mV) in leads V2 and V3 for males, ≥1.5 mm (0.15 mV) for females, or ≥1 mm (0.1 mV) in two other adjacent chest or limb leads. ST elevation is associated with infarction, and may be preceded by changes indicating ischemia, such as ST depression or inversion of the T waves. Abnormalities can help differentiate the location of an infarct, based on the leads that are affected by changes. Early STEMIs may be preceded by peaked T waves. Other ECG abnormalities relating to complications of acute myocardial infarctions may also be evident, such as atrial or ventricular fibrillation.
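The lead-specific thresholds above lend themselves to a simple check. Below is a minimal sketch of that logic (an illustration only, not a clinical tool; the contiguous-lead groupings and the input format are assumptions made for this example):

```python
# Minimal sketch of the ST-elevation thresholds described above.
# Illustration only, NOT a clinical tool; lead groupings are assumed.

CONTIGUOUS_LEAD_GROUPS = [
    ["II", "III", "aVF"],      # inferior leads
    ["I", "aVL", "V5", "V6"],  # lateral leads
    ["V1", "V2", "V3", "V4"],  # anterior/septal leads
]

def meets_st_elevation_criteria(st_mm, male):
    """st_mm: dict mapping lead name -> new ST elevation in millimetres."""
    v2_v3_threshold = 2.0 if male else 1.5  # >=2 mm males, >=1.5 mm females
    # Criterion 1: new elevation in both V2 and V3.
    if st_mm.get("V2", 0.0) >= v2_v3_threshold and st_mm.get("V3", 0.0) >= v2_v3_threshold:
        return True
    # Criterion 2: >=1 mm in at least two other adjacent leads.
    for group in CONTIGUOUS_LEAD_GROUPS:
        if sum(1 for lead in group if st_mm.get(lead, 0.0) >= 1.0) >= 2:
            return True
    return False

# Example: an inferior-lead elevation pattern satisfies criterion 2.
print(meets_st_elevation_criteria({"II": 1.2, "III": 1.5, "aVF": 1.1}, male=True))  # True
```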
Imaging
Noninvasive imaging plays an important role in the diagnosis and characterisation of myocardial infarction. Tests such as chest X-rays can be used to explore and exclude alternative causes of a person's symptoms. Echocardiography may assist in modifying clinical suspicion of ongoing myocardial infarction in patients in whom MI can be neither ruled in nor ruled out after initial ECG and troponin testing. Myocardial perfusion imaging has no role in the acute diagnostic algorithm; however, it can confirm a clinical suspicion of chronic coronary syndrome when the patient's history, physical examination (including cardiac examination), ECG, and cardiac biomarkers suggest coronary artery disease.
Echocardiography, an ultrasound scan of the heart, is able to visualize the heart, its size, shape, and any abnormal motion of the heart walls as they beat that may indicate a myocardial infarction. The flow of blood can be imaged, and contrast dyes may be given to improve image. Other scans using radioactive contrast include SPECT CT-scans using thallium, sestamibi (MIBI scans) or tetrofosmin; or a PET scan using Fludeoxyglucose or rubidium-82. These nuclear medicine scans can visualize the perfusion of heart muscle. SPECT may also be used to determine viability of tissue, and whether areas of ischemia are inducible.
Medical societies and professional guidelines recommend that the physician confirm a person is at high risk for Chronic Coronary Syndrome before conducting diagnostic non-invasive imaging tests to make a diagnosis, as such tests are unlikely to change management and result in increased costs. Patients who have a normal ECG and who are able to exercise, for example, most likely do not merit routine imaging.
Differential diagnosis
There are many causes of chest pain, which can originate from the heart, lungs, gastrointestinal tract, aorta, and other muscles, bones and nerves surrounding the chest. In addition to myocardial infarction, other causes include angina, insufficient blood supply (ischemia) to the heart muscles without evidence of cell death, gastroesophageal reflux disease; pulmonary embolism, tumors of the lungs, pneumonia, rib fracture, costochondritis, heart failure and other musculoskeletal injuries. Rarer severe differential diagnoses include aortic dissection, esophageal rupture, tension pneumothorax, and pericardial effusion causing cardiac tamponade. The chest pain in an MI may mimic heartburn. Causes of sudden-onset breathlessness generally involve the lungs or heart – including pulmonary edema, pneumonia, allergic reactions and asthma, and pulmonary embolus, acute respiratory distress syndrome and metabolic acidosis. There are many different causes of fatigue, and myocardial infarction is not a common cause.
Prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarction, because of shared risk factors and an aim to reduce atherosclerosis affecting heart vessels. The influenza vaccine also appears to protect against myocardial infarction, with a benefit of 15 to 45%.
Primary prevention
Lifestyle
Physical activity can reduce the risk of cardiovascular disease, and people at risk are advised to engage in 150 minutes of moderate or 75 minutes of vigorous intensity aerobic exercise a week. Keeping a healthy weight, drinking alcohol within the recommended limits, and quitting smoking reduce the risk of cardiovascular disease.
Substituting unsaturated fats such as olive oil and rapeseed oil instead of saturated fats may reduce the risk of myocardial infarction, although there is not universal agreement. Dietary modifications are recommended by some national authorities, with recommendations including increasing the intake of wholegrain starch, reducing sugar intake (particularly of refined sugar), consuming five portions of fruit and vegetables daily, consuming two or more portions of fish per week, and consuming 4–5 portions of unsalted nuts, seeds, or legumes per week. The dietary pattern with the greatest support is the Mediterranean diet. Vitamins and mineral supplements are of no proven benefit, and neither are plant stanols or sterols.
Public health measures may also act at a population level to reduce the risk of myocardial infarction, for example by reducing unhealthy diets (excessive salt, saturated fat, and trans-fat) including food labeling and marketing requirements as well as requirements for catering and restaurants and stimulating physical activity. This may be part of regional cardiovascular disease prevention programs or through the health impact assessment of regional and local plans and policies.
Most guidelines recommend combining different preventive strategies. A 2015 Cochrane Review found some evidence that such an approach might help with blood pressure, body mass index and waist circumference. However, there was insufficient evidence to show an effect on mortality or actual cardio-vascular events.
Medication
Statins, drugs that act to lower blood cholesterol, decrease the incidence and mortality rates of myocardial infarctions. They are often recommended in those at an elevated risk of cardiovascular diseases.
Aspirin has been studied extensively in people considered at increased risk of myocardial infarction. Based on numerous studies in different groups (e.g. people with or without diabetes), there does not appear to be a benefit strong enough to outweigh the risk of excessive bleeding. Nevertheless, many clinical practice guidelines continue to recommend aspirin for primary prevention, and some researchers feel that those with very high cardiovascular risk but low risk of bleeding should continue to receive aspirin.
Secondary prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarction. Recommendations include stopping smoking, a gradual return to exercise, eating a healthy diet low in saturated fat and cholesterol, drinking alcohol within recommended limits, and trying to achieve a healthy weight. Exercise is both safe and effective even if people have had stents or heart failure, and is recommended to start gradually after 1–2 weeks. Counselling should be provided relating to medications used and warning signs of depression. Previous studies suggested a benefit from omega-3 fatty acid supplementation, but this has not been confirmed.
Medications
Following a heart attack, nitrates, when taken for two days, and ACE inhibitors decrease the risk of death. Other medications include:
Aspirin is continued indefinitely, as well as another antiplatelet agent such as clopidogrel or ticagrelor ("dual antiplatelet therapy" or DAPT) for up to twelve months. If someone has another medical condition that requires anticoagulation (e.g. with warfarin) this may need to be adjusted based on risk of further cardiac events as well as bleeding risk. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Beta blocker therapy such as metoprolol or carvedilol is recommended to be started within 24 hours, provided there is no acute heart failure or heart block. The dose should be increased to the highest tolerated. Contrary to most guidelines, the use of beta blockers does not appear to affect the risk of death, possibly because other treatments for MI have improved. When beta blocker medication is given within the first 24–72 hours of a STEMI, no lives are saved. However, 1 in 200 people were prevented from a repeat heart attack, and another 1 in 200 from having an abnormal heart rhythm. Additionally, in 1 of every 91 people the medication causes a temporary decrease in the heart's ability to pump blood.
ACE inhibitor therapy should be started within 24 hours and continued indefinitely at the highest tolerated dose. This is provided there is no evidence of worsening kidney failure, high potassium, low blood pressure, or known narrowing of the renal arteries. Those who cannot tolerate ACE inhibitors may be treated with an angiotensin II receptor antagonist.
Statin therapy has been shown to reduce mortality and subsequent cardiac events and should be commenced to lower LDL cholesterol. Other medications, such as ezetimibe, may also be added with this goal in mind.
Aldosterone antagonists (spironolactone or eplerenone) may be used if there is evidence of left ventricular dysfunction after an MI, ideally after beginning treatment with an ACE inhibitor.
Other
A defibrillator, an electric device connected to the heart and surgically inserted under the skin, may be recommended, particularly if there are ongoing signs of heart failure, with a low left ventricular ejection fraction and a New York Heart Association grade II or III, more than 40 days after the infarction. Defibrillators detect potentially fatal arrhythmia and deliver an electrical shock to the person to depolarize a critical mass of the heart muscle.
First aid
Taking aspirin helps to reduce the risk of mortality in people with myocardial infarction.
Management
A myocardial infarction requires immediate medical attention. Treatment aims to preserve as much heart muscle as possible and to prevent further complications. Treatment depends on whether the myocardial infarction is a STEMI or NSTEMI. Treatment in general aims to unblock blood vessels, reduce blood clot enlargement, reduce ischemia, and modify risk factors with the aim of preventing future MIs. In addition, the main treatments for myocardial infarctions with ECG evidence of ST elevation (STEMI) include thrombolysis or percutaneous coronary intervention, although PCI is also ideally conducted within 1–3 days for NSTEMI. In addition to clinical judgement, risk stratification may be used to guide treatment, such as with the TIMI and GRACE scoring systems.
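To make the scoring-system idea concrete, the sketch below implements the seven equally weighted criteria of the published TIMI risk score for UA/NSTEMI in Python. It is a minimal illustration of the arithmetic only, not clinical software; the function and argument names are this example's own.

```python
# Minimal sketch of an additive cardiac risk score, following the seven
# equally weighted criteria of the published TIMI score for UA/NSTEMI.
# Illustrative only; not clinical software.

def timi_ua_nstemi_score(age_65_or_older: bool,
                         three_or_more_cad_risk_factors: bool,
                         known_coronary_stenosis_50pct: bool,
                         aspirin_use_in_past_7_days: bool,
                         two_or_more_angina_episodes_in_24h: bool,
                         st_deviation_at_least_0_5_mm: bool,
                         elevated_cardiac_markers: bool) -> int:
    """Each criterion contributes one point; higher totals indicate higher risk."""
    return sum([age_65_or_older, three_or_more_cad_risk_factors,
                known_coronary_stenosis_50pct, aspirin_use_in_past_7_days,
                two_or_more_angina_episodes_in_24h, st_deviation_at_least_0_5_mm,
                elevated_cardiac_markers])

# Example: a 70-year-old with ST deviation and elevated troponin scores 3 of 7.
print(timi_ua_nstemi_score(True, False, False, False, False, True, True))
```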
Pain
The pain associated with myocardial infarction is often treated with nitroglycerin, a vasodilator, or opioid medications such as morphine. Nitroglycerin (given under the tongue or injected into a vein) may improve blood supply to the heart. It is an important part of therapy for its pain-relieving effects, though there is no proven benefit to mortality. Morphine or other opioid medications may also be used, and are effective for the pain associated with STEMI. There is little evidence that morphine improves overall outcomes, and there is some evidence of potential harm.
Antithrombotics
Aspirin, an antiplatelet drug, is given as a loading dose to reduce the clot size and reduce further clotting in the affected artery. It is known to decrease mortality associated with acute myocardial infarction by at least 50%. P2Y12 inhibitors such as clopidogrel, prasugrel and ticagrelor are given concurrently, also as a loading dose, with the dose depending on whether further surgical management or fibrinolysis is planned. Prasugrel and ticagrelor are recommended in European and American guidelines, as they are active more quickly and consistently than clopidogrel. P2Y12 inhibitors are recommended in both NSTEMI and STEMI, including in PCI, with evidence also to suggest improved mortality. Heparins, particularly in the unfractionated form, act at several points in the clotting cascade, help to prevent the enlargement of a clot, and are also given in myocardial infarction, owing to evidence suggesting improved mortality rates. In very high-risk scenarios, inhibitors of the platelet glycoprotein αIIbβ3a receptor such as eptifibatide or tirofiban may be used.
There is varying evidence on the mortality benefits in NSTEMI. A 2014 review of P2Y12 inhibitors such as clopidogrel found they do not change the risk of death when given to people with a suspected NSTEMI prior to PCI, nor do heparins change the risk of death. They do decrease the risk of having a further myocardial infarction.
Angiogram
Primary percutaneous coronary intervention (PCI) is the treatment of choice for STEMI if it can be performed in a timely manner, ideally within 90–120 minutes of contact with a medical provider. Some recommend it is also done in NSTEMI within 1–3 days, particularly when considered high-risk. A 2017 review, however, did not find a difference between early versus later PCI in NSTEMI.
PCI involves small probes, inserted through peripheral blood vessels such as the femoral artery or radial artery, into the blood vessels of the heart. The probes are then used to identify and clear blockages, either with small balloons that are dragged through the blocked segment to pull away the clot, or by the insertion of stents. Coronary artery bypass grafting is only considered when the affected area of heart muscle is large and PCI is unsuitable, for example with difficult cardiac anatomy. After PCI, people are generally placed on aspirin indefinitely and on dual antiplatelet therapy (generally aspirin and clopidogrel) for at least a year.
Fibrinolysis
If PCI cannot be performed within 90 to 120 minutes in STEMI, then fibrinolysis, preferably within 30 minutes of arrival to hospital, is recommended. If a person has had symptoms for 12 to 24 hours, the evidence for the effectiveness of thrombolysis is weaker, and if they have had symptoms for more than 24 hours it is not recommended. Thrombolysis involves the administration of medication that activates the enzymes that normally dissolve blood clots. These medications include tissue plasminogen activator, reteplase, streptokinase, and tenecteplase. Thrombolysis is not recommended in a number of situations, particularly when associated with a high risk of bleeding or the potential for problematic bleeding, such as active bleeding, past strokes or bleeds into the brain, or severe hypertension. Situations in which thrombolysis may be considered, but with caution, include recent surgery, use of anticoagulants, pregnancy, and proclivity to bleeding. Major risks of thrombolysis are major bleeding and intracranial bleeding. Pre-hospital thrombolysis reduces time to thrombolytic treatment, based on studies conducted in higher-income countries; however, it is unclear whether this has an impact on mortality rates.
Other
In the past, high flow oxygen was recommended for everyone with a possible myocardial infarction. More recently, no evidence was found for routine use in those with normal oxygen levels and there is potential harm from the intervention. Therefore, oxygen is currently only recommended if oxygen levels are found to be low or if someone is in respiratory distress.
If despite thrombolysis there is significant cardiogenic shock, continued severe chest pain, or less than a 50% improvement in ST elevation on the ECG recording after 90 minutes, then rescue PCI is indicated emergently.
Those who have had cardiac arrest may benefit from targeted temperature management with evaluation for implementation of hypothermia protocols. Furthermore, those with cardiac arrest and ST elevation at any time should usually have angiography. Aldosterone antagonists appear to be useful in people who have had a STEMI and do not have heart failure.
Rehabilitation and exercise
Cardiac rehabilitation benefits many who have experienced myocardial infarction, even if there has been substantial heart damage and resultant left ventricular failure. It should start soon after discharge from the hospital. The program may include lifestyle advice, exercise, social support, as well as recommendations about driving, flying, sports participation, stress management, and sexual intercourse. Returning to sexual activity after myocardial infarction is a major concern for most patients, and is an important area to be discussed in the provision of holistic care.
In the short term, exercise-based cardiovascular rehabilitation programs may reduce the risk of a myocardial infarction, reduce a large number of hospitalizations from all causes, reduce hospital costs, improve health-related quality of life, and have a small effect on all-cause mortality. Longer-term studies indicate that exercise-based cardiovascular rehabilitation programs may reduce cardiovascular mortality and myocardial infarction.
Prognosis
The prognosis after myocardial infarction varies greatly depending on the extent and location of the affected heart muscle, and the development and management of complications. Prognosis is worse with older age and social isolation. Anterior infarcts, persistent ventricular tachycardia or fibrillation, development of heart blocks, and left ventricular impairment are all associated with poorer prognosis. Without treatment, about a quarter of those affected by MI die within minutes and about forty percent within the first month. Morbidity and mortality from myocardial infarction have, however, improved over the years due to earlier and better treatment: in those who have a STEMI in the United States, between 5 and 6 percent die before leaving the hospital and 7 to 18 percent die within a year.
It is unusual for babies to experience a myocardial infarction, but when they do, about half die. In the short-term, neonatal survivors seem to have a normal quality of life.
Complications
Complications may occur immediately following the myocardial infarction or may take time to develop. Disturbances of heart rhythms, including atrial fibrillation, ventricular tachycardia and fibrillation and heart block can arise as a result of ischemia, cardiac scarring, and infarct location. Stroke is also a risk, either as a result of clots transmitted from the heart during PCI, as a result of bleeding following anticoagulation, or as a result of disturbances in the heart's ability to pump effectively as a result of the infarction. Regurgitation of blood through the mitral valve is possible, particularly if the infarction causes dysfunction of the papillary muscle. Cardiogenic shock as a result of the heart being unable to adequately pump blood may develop, dependent on infarct size, and is most likely to occur within the days following an acute myocardial infarction. Cardiogenic shock is the largest cause of in-hospital mortality. Rupture of the ventricular dividing wall or left ventricular wall may occur within the initial weeks. Dressler's syndrome, a reaction following larger infarcts and a cause of pericarditis is also possible.
Heart failure may develop as a long-term consequence, with an impaired ability of heart muscle to pump, scarring, and an increase in the size of the existing muscle. Aneurysm of the left ventricular myocardium develops in about 10% of MIs and is itself a risk factor for heart failure, ventricular arrhythmia, and the development of clots.
Risk factors for complications and death include age, hemodynamic parameters (such as heart failure, cardiac arrest on admission, systolic blood pressure, or Killip class of two or greater), ST-segment deviation, diabetes, serum creatinine, peripheral vascular disease, and elevation of cardiac markers.
Epidemiology
Myocardial infarction is a common presentation of coronary artery disease. The World Health Organization estimated in 2004 that 12.2% of worldwide deaths were from ischemic heart disease, making it the leading cause of death in high- or middle-income countries and second only to lower respiratory infections in lower-income countries. Worldwide, more than 3 million people have STEMIs and 4 million have NSTEMIs a year. STEMIs occur about twice as often in men as women.
Rates of death from ischemic heart disease (IHD) have slowed or declined in most high-income countries, although cardiovascular disease still accounted for one in three of all deaths in the US in 2008. For example, rates of death from cardiovascular disease decreased by almost a third between 2001 and 2011 in the United States.
In contrast, IHD is becoming a more common cause of death in the developing world. For example, in India, IHD had become the leading cause of death by 2004, accounting for 1.46 million deaths (14% of total deaths) and deaths due to IHD were expected to double during 1985–2015. Globally, disability adjusted life years (DALYs) lost to ischemic heart disease are predicted to account for 5.5% of total DALYs in 2030, making it the second-most-important cause of disability (after unipolar depressive disorder), as well as the leading cause of death by this date.
Social determinants of health
Social determinants such as neighborhood disadvantage, immigration status, lack of social support, social isolation, and access to health services play an important role in myocardial infarction risk and survival. Studies have shown that low socioeconomic status is associated with an increased risk of poorer survival. There are well-documented disparities in myocardial infarction survival by socioeconomic status, race, education, and census-tract-level poverty.
Race: In the U.S., African Americans have a greater burden of myocardial infarction and other cardiovascular events. On a population level, there is a higher overall prevalence of risk factors that are unrecognized and therefore not treated, which places these individuals at a greater likelihood of experiencing adverse outcomes and therefore potentially higher morbidity and mortality. Similarly, South Asians (including South Asians who have migrated to other countries around the world) experience higher rates of acute myocardial infarction at younger ages, which can be largely explained by a higher prevalence of risk factors at younger ages.
Socioeconomic status: Among individuals who live in low-socioeconomic-status (SES) areas, which comprise close to 25% of the US population, myocardial infarctions (MIs) occurred twice as often as among people who live in higher-SES areas.
Immigration status: In 2018 many lawfully present immigrants who were eligible for coverage remained uninsured because immigrant families faced a range of enrollment barriers, including fear, confusion about eligibility policies, difficulty navigating the enrollment process, and language and literacy challenges. Uninsured undocumented immigrants are ineligible for coverage options due to their immigration status.
Health care access: Lack of health insurance and financial concerns about accessing care were associated with delays in seeking emergency care for acute myocardial infarction which can have significant, adverse consequences on patient outcomes.
Education: Researchers found that compared to people with graduate degrees, those with lower educational attainment appeared to have a higher risk of heart attack, dying from a cardiovascular event, and overall death.
Society and culture
Depictions of heart attacks in popular media often include collapsing or loss of consciousness which are not common symptoms; these depictions contribute to widespread misunderstanding about the symptoms of myocardial infarctions, which in turn contributes to people not getting care when they should.
Legal implications
At common law, in general, a myocardial infarction is a disease but may sometimes be an injury. This can create coverage issues in the administration of no-fault insurance schemes such as workers' compensation. In general, a heart attack is not covered; however, it may be a work-related injury if it results, for example, from unusual emotional stress or unusual exertion. In addition, in some jurisdictions, heart attacks had by persons in particular occupations such as police officers may be classified as line-of-duty injuries by statute or policy. In some countries or states, a person having had an MI may be prevented from participating in activity that puts other people's lives at risk, for example driving a car or flying an airplane.
References
Sources
Further reading
External links
American Heart Association's Heart Attack web site — Information and resources for preventing, recognizing, and treating a heart attack.
TIMI Score for UA/NSTEMI and STEMI
HEART Score for Major Cardiac Events
Aging-associated diseases
Causes of death
Ischemic heart diseases
Medical emergencies
Articles containing video clips
Wikipedia medicine articles ready to translate
Acute pain
Wikipedia emergency medicine articles ready to translate | Myocardial infarction | Biology | 10,054 |
25,381,974 | https://en.wikipedia.org/wiki/Vague%20torus | In classical mechanics, a vague torus is a region in phase space that is characterized by approximate constants of motion, as opposed to an actual torus defined by exact constants of motion.
The concept of vague tori is used to describe regular (quasiperiodic) segments of otherwise chaotic trajectories.
References
Dynamical systems | Vague torus | Physics,Mathematics | 70 |
70,690,897 | https://en.wikipedia.org/wiki/Polystyrene%20%28drug%20delivery%29 | Polystyrene is a synthetic hydrocarbon polymer that is widely adaptable and can be used for a variety of purposes in drug delivery. These methods include polystyrene microspheres, nanoparticles, and solid foams. In the biomedical engineering field, these methods assist researchers in drug delivery, diagnostics, and imaging strategies.
A common group of medications that utilize a combination of polystyrene and sulfonate functional groups are the polystyrene sulfonates. These are primarily used to treat hyperkalemia, a condition that results from an increased blood potassium level. FDA-approved equivalents of polystyrene sulfonates are KIONEX, KALEXATE, and SPS. While these are the only current FDA-approved drugs that utilize polystyrene, polystyrene sees a number of applications in other pharmacological contexts with nanoparticles and microspheres.
Drug Delivery Applications
Solid foams
Polystyrene integrated solid foams are not commonly used in biomedical applications but have shown promise as a new drug delivery vehicle. The manipulation of the porous foam networks is a fundamental component in solid foam dosing – affecting variables such as dissolution, adsorption, and drug diffusion. Solid foam structures are particularly attractive due to the predictability in drug release profiles through the highly tunable porosity and high surface area of these foams.
The process of creating these structures is typically laborious, requiring multi-step processes to synthesize a foam with the desired properties. However, polystyrene solid foams have been created through simpler methods such as extrusion from a blowing agent or polystyrene bead expansion. While these methods are typically utilized for insulation or similar industrial uses, this production method has also seen use in drug delivery applications. Polystyrene solid foams can also be produced through emulsions. An emulsion can be created through the combination of two immiscible liquids. While many methods are used to create emulsions, Canal et al. used a unique method known as phase inversion temperature (PIT). PIT utilizes phase transitions to produce highly concentrated amounts of emulsion quickly. Through changes in temperature, solubility, and low interfacial tension, PIT is able to efficiently promote emulsification. The porosity of these solid foams can be fine-tuned, showing promise for osteogenic and therapeutic applications. For example, proposed osteogenic applications include the promotion of bone integration. The study conducted by Canal et al. utilized polystyrene solid foams as a drug delivery method to evaluate the drug release profile of ketoprofen. Researchers have stated that understanding the release profile for various drugs with polystyrene solid foams could significantly improve treatment outcomes for many disease states.
Nanoparticles
Nanoparticles have been used in drug delivery for applications such as diagnosis and treatment of diseases, with polymeric nanoparticles gaining significant traction as carriers of drugs or biomolecules over the last few decades. These structures are extremely small, having a diameter < 100 nm. The high surface-to-volume ratio allows nanoparticles to display properties in biological systems that differ from those of their bulk material. These properties are the primary reason for their use in physiological environments. While the structure of nanoparticles is straightforward, the efficacy of nanoparticles is affected by variables such as size and surface modifications, which determine their overall biocompatibility and biological interaction.
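To make the size dependence concrete, the sketch below computes the surface-area-to-volume ratio of idealized spherical particles; for a sphere of diameter d the ratio is 6/d, so a tenfold reduction in diameter gives a tenfold increase in relative surface area. The diameters used are illustrative.

```python
import math

# Surface-area-to-volume ratio of an idealized spherical particle.
# For a sphere of diameter d: SA/V = (pi * d**2) / (pi * d**3 / 6) = 6 / d.

def surface_to_volume_ratio(diameter_nm: float) -> float:
    radius = diameter_nm / 2
    surface_area = 4 * math.pi * radius ** 2      # nm^2
    volume = (4 / 3) * math.pi * radius ** 3      # nm^3
    return surface_area / volume                  # 1/nm

for d in (50, 100, 500):  # illustrative nanoparticle diameters in nm
    print(f"{d} nm sphere: SA/V = {surface_to_volume_ratio(d):.3f} nm^-1")
```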
Size and Nanoparticle Internalization
Polystyrene nanoparticles are the model nanoparticle used for drug delivery applications because they are easy to synthesize in varying sizes. Size is an important factor in cellular uptake rates, particularly for specific routes such as the endocytic pathway. In a study conducted by Rejman et al., researchers were able to show that polystyrene nanoparticles with diameters of 50 nm and 100 nm were internalized faster than nanoparticles with diameters of 200 nm and 500 nm. Internalization is vital in understanding the impact the designed nanoparticles are having on the target cells. Nanoparticle internalization depends on key factors such as nanoparticle size, cell type, and time. Nanoparticles of larger size are typically internalized through processes such as phagocytosis or macropinocytosis. Smaller nanoparticles are typically internalized through processes such as macropinocytosis, phagocytosis, clathrin-mediated endocytosis, caveolae-mediated endocytosis, and clathrin- and caveolae-independent pathways. The diversity in pathways is one of the greatest challenges with utilizing these nanoparticles, since a case-by-case approach is typically required to maximize the entry pathways. To measure nanoparticle internalization, techniques such as fluorescence-activated cell sorting/scanning (FACS), inductively coupled plasma (ICP) mass spectroscopy, confocal laser scanning microscopy (CLSM), and imaging flow cytometry (IFC) are utilized, each offering its own advantages and disadvantages.
Biocompatibility and Biological Integration
The main advantage of polystyrene nanoparticles is their biocompatibility, which allows them to be used broadly for biomedical devices and the study of bio-nano interactions. Furthermore, their resistance to degradation in cellular environments proves to be an asset in biomedical applications. A unique property of polystyrene nanoparticles, like some other polymers, is their ability to bind proteins. When proteins bind to the surface of the nanoparticle, a protein corona is formed. The protein corona establishes the biological identity of the nanoparticle, and the properties of the corona can be manipulated based on the physical properties of the nanoparticle. The corona can be defined as "soft" or "hard" depending on bonding strength and surface-bound protein exchange rate. As such, a soft protein corona is defined by proteins that are loosely bound and easily exchangeable. In contrast, a hard protein corona has proteins that are tightly bound and not as easily exchangeable. These kinetics are vital in understanding how nanoparticles will respond in biological fluid. The hardness of the protein corona plays a role in the Vroman effect, a principle that describes how proteins with higher affinities replace proteins of lower affinity. The Vroman effect is influenced by protein concentration relative to the surface area and by diffusion coefficients. Overall, this affects the protein surface binding affinity. For example, Ehrenburg et al. have shown that on polystyrene nanoparticles bearing functional groups such as COOH and CH3, a lower-affinity protein such as albumin adsorbs first and is later replaced by fibrinogen, whose presence then rapidly declines in turn. Overall, polymeric nanoparticles that can bind proteins have a significant advantage over other polymeric nanoparticles due to this versatility in biological interaction.
Surface Modifications
Certain properties of polystyrene nanoparticles can be modified depending on the scenario. For instance, the surface of polystyrene nanoparticles can be manipulated by surface oxidation, which creates a surface that is highly receptive to cell cultures. These surface-level modifications also express a lower polydispersity index and can create stable colloids in biological liquids. Similarly, the surface of these nanoparticles can be treated with ethylene oxide or UV irradiation for sterilization purposes. Due to the emphasis on biocompatibility, Loos et al. have utilized polystyrene nanoparticles as a model to analyze how different surface properties affect biomedical variables. Overall, it was determined that a strong understanding of surface properties is vital to manipulate parameters such as pharmacokinetics, biocompatibility, and tissue and cell affinity. In a study conducted by Lundqvist et al., the protein corona was studied with three surface-modified polystyrene nanoparticles (plain, carboxyl-modified, and amine-modified) of two different sizes (50 nm and 100 nm). This study ultimately showed that corona properties are affected by both size and surface composition.
Current Applications
Polystyrene nanoparticles have been used in various applications such as cancer treatment. The primary issue associated with treating cancer is that many chemotherapies suffer from poor penetration into tumor cells. In a study conducted by Larina et al., researchers utilized polystyrene nanoparticles in conjunction with ultrasound radiation to induce tumor regression. They proposed a method of utilizing ultrasound-induced cavitation to enhance drug delivery to cancer cells. Nanoparticles have typically been used in these applications because they are able to accumulate in tumor sites actively or passively. For this application, since cavitation is an important factor, polystyrene nanoparticles were used because their presence allows cavitation to occur at lower pressure intensities. Within mouse models, their study found that ultrasound irradiation and polystyrene nanoparticles in combination with 5-FU injections showed strong levels of tumor inhibition and total tumor regression.
The effect of polystyrene nanoparticles on various cell lines have also been researched. Application with human gastric adenocarcinoma cell (AGS) lines has been studied due to these cells being the first line of contact with nanoparticles from ingestion. The goal of the study by Forte et al. was a further understanding of nanoparticle interaction with biological systems by studying the kinetic uptake of polystyrene nanoparticle uptake by AGS cells. Just as previous studies have shown, it was concluded that the primary factors that influence drug delivery strategies are the size and concentration of these nanoparticles.
Polystyrene nanoparticle composites have also been the focus of literature due to their adaptability. Composites are useful since the properties of the constituent materials can be combined in a way that is unlike the original components. This is extremely relevant in drug delivery applications to fine-tune specific parameters case by case. In a study conducted by Lim et al., a composite of monodisperse Fe3O4 and polystyrene nanoparticles was utilized for cardiac myocyte treatment via magnetic targeting. Other polystyrene composites have been created with silica nanoparticles. These materials are attractive for a number of reasons, such as having low toxicity, controllable particle size, strong chemical and thermal stability, biocompatibility, and degradability in physiological environments. Since many of these properties are already present in polystyrene nanoparticles (i.e., biocompatibility and particle size), these structures only enhance their effect in biological environments. As a result, composites such as these have seen increased use as a mode of drug delivery.
Microspheres
Microspheres (or microparticles) are a group of small spherical particles that typically have a diameter ranging from 1 μm to 1000 μm. While microspheres can be created from natural or synthetic materials, synthetic polymer microspheres offer useful advantages over other options. The most common types of polymeric microspheres are polyethylene and polystyrene; however, polystyrene microspheres are especially useful in biomedical applications because they are able to actively facilitate cell sorting and immunoprecipitation. This results in proteins and ligands adsorbing readily, similar to polystyrene nanoparticles. Polystyrene microparticles are also hydrophobic, meaning that they will not swell when exposed to a biological environment. Microspheres are applicable to a myriad of drug delivery routes (e.g., ophthalmic, gene, intra-tumoral, local, oral, nasal, gastrointestinal, peroral, vaginal, transdermal, and colonic drug delivery). Polystyrene microspheres have also seen use in magnetic and radiolabeled microspheres. Similarly, model microspheres such as carboxylated polystyrene microspheres have been used for many studies due to high ligand conjugation through carbodiimide chemistry.
Microsphere Synthesis
The way that microspheres are prepared can influence their physical properties. Preparation methods such as precipitation polymerization, seed polymerization, microemulsion, and dispersion polymerization have been used in the past to create polystyrene microspheres. Precipitation polymerization is a robust method of polymer synthesis in which a monomer and initiator are dissolved in a solvent. This method is advantageous due to low viscosities, clean surfaces, low solid content, and irregular geometries, factors which are beneficial in physiological environments. Seed polymerization is a preparation method used to create core-shell emulsions. These structures have good stability and narrow particle size distribution; however, due to a long and complex preparation process, there is a high likelihood of monomers becoming embedded inside the particles. Microemulsion is a method of creating emulsions through an emulsifier. By creating particles with microbubbles, this method can create particles that have similar particle size and stability. Dispersion polymerization is a method of creating particles of similar size with the advantage of being easy to perform and operate. With this method, particle size can easily be modified by manipulating the concentrations of stabilizer, co-monomer, and water. For these reasons, dispersion polymerization has become one of the primary methods of polystyrene microsphere synthesis. Each of these methods offers its own advantages and disadvantages and is chosen for microsphere synthesis accordingly.
Current Applications
Polystyrene microspheres have previously been used for serological tests (e.g., tests for rheumatoid arthritis, disseminated lupus erythematosus, and pregnancy). Saravanan et al. have shown that polystyrene microspheres can be used for controlled drug delivery applications with ibuprofen. One of the biggest limitations associated with drug delivery is that intravenously injected drug carriers (e.g., microspheres and liposomes) become trapped by mononuclear phagocyte system (MPS) cells. This limitation is important to overcome for the progression of treatment outcomes for diseases such as AIDS and tuberculosis, which primarily rely on the macrophage response system. A study by Makino et al. examined the size and surface modifications required for alveolar macrophages to take up polystyrene microspheres. It was shown that microspheres with a softer surface were more accessible to alveolar macrophages. Moreover, primary amine groups were also shown to be more effective than carboxyl groups. As a result, polystyrene microspheres have seen increased use as a mode of drug delivery.
Polystyrene Toxicity
One of the most important factors to consider is the toxicity of the polystyrene particles. Many in vitro studies have been conducted to understand how these structures can affect reactive oxygen species generation and cell viability. Overall, these studies showed that polystyrene nanoparticles did not affect cell viability.
Similarly, it is important to consider polystyrene toxicity in human models. The use of polystyrene has been under scrutiny by various international and local agencies due to the effects of polystyrene on the environment. As a result, there has always been cause for concern about how polystyrene can affect human health. The Environmental Protection Agency (EPA) and studies conducted by Mutti et al. place the chronic toxicity level of styrene at 300 ppm (1,000 μg/m3). Within the polymer industry, exposure levels typically do not exceed 20 ppm. Furthermore, the FDA reports that the acceptable daily intake (ADI) is 90,000 μg/person/day.
References
Drug delivery devices | Polystyrene (drug delivery) | Chemistry | 3,364 |
1,018,614 | https://en.wikipedia.org/wiki/Tool-assisted%20speedrun | A tool-assisted speedrun or tool-assisted superplay (TAS) is generally defined as a speedrun or playthrough composed of precise inputs recorded with tools such as video game emulators. Tool-assisted speedruns are generally created with the goal of creating theoretically perfect playthroughs. This may include the fastest possible route to complete a game or showcasing new optimizations to existing world records.
TAS requires research into the theoretical limits of the games and their respective competitive categories. The fastest categories have no restrictions and often involve a level of gameplay impractical or impossible for a human player, and those made according to real-time attack rules serve to research the limits of human players.
The TAS developer has full control over the game's inputs, frame by frame, to record a sequence of fully precise inputs. Other tools include save states and branches, rewriting recorded inputs, splicing together the best sequences, and macros and scripts to automate gameplay actions. These tools grant TAS creators precision and accuracy beyond a human player.
History
The term was coined during early Doom speedrunning. When Andy "Aurican" Kempling released a modified version of the Doom source code that made it possible to record demos in slow motion and in several sessions, players were able to record tool-assisted demos for the first time. A few months later, in June 1999, the Finn Esko Koskimaa, the Swede Peo Sjöblom, and the Israeli Yonatan Donner opened the first site to share these demos, "Tools-Assisted Speedruns".
In 2003, a video of a Japanese player named Morimoto completing the NES game Super Mario Bros. 3 in 11 minutes while performing stunts started floating around the Internet. The video was controversial, because not many people knew about tool-assisted speedruns, especially for the Nintendo Entertainment System. The video was not clearly labeled as such, so many people considered the use of an emulator to be cheating. It inspired Joel "Bisqwit" Yliluoma to start the NESvideos website for NES TASes, which was later renamed TASVideos.
Tool-assisted speedruns have been made for some ROM hacks and for published games. In 2014, the speedrunning application TASBot was developed, capable of direct controller input.
Method
Creating a tool-assisted speedrun is the process of finding the optimal set of inputs to fulfill a given criterion — usually completing a game as fast as possible. No limits are imposed on the tools used for this search, but the result has to be a set of timed key-presses that, when played back on the actual console, achieves the target criterion. The basic method used to construct such a set of inputs is to record one's input while playing the game on an emulator, all the while saving and loading the emulator's state repeatedly to test out various possibilities and only keep the best result. To make this more precise, the game is slowed down. Initially, it was common to slow down to some low fraction of normal speed. However, due to advances in the field, it is now expected that the game is paused during recording, with emulation advanced one frame at a time to eliminate any mistakes made due to the urgency.
The use of savestates facilitates luck manipulation, which uses player input as entropy to produce favorable outcomes. Examples include making the ideal piece drop in Tetris, or getting a rare item drop from a defeated enemy.
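As a minimal sketch of how luck manipulation works, the code below models a toy game whose random number generator is stepped once per frame and perturbed by button presses, so the timing of inputs changes later "random" outcomes; the search replays candidate input sequences from the same initial state (as a TASer would from a savestate) until one yields the rare drop. All names and mechanics here are hypothetical, not taken from any real game or emulator.

```python
import itertools

# Toy model of luck manipulation: the game's RNG is a simple linear
# congruential generator whose state is perturbed by the button pressed on
# each frame, so the timing of inputs changes later "random" outcomes.

def rng_step(state: int, button: int) -> int:
    return (1103515245 * state + 12345 + button) % 2**31

def item_drop(inputs) -> str:
    state = 0                          # deterministic power-on state (savestate)
    for button in inputs:
        state = rng_step(state, button)
    return "rare" if state % 16 == 0 else "common"   # a 1-in-16 drop

# Brute-force search over all 8-frame input sequences, keeping the first
# sequence of presses (1) and waits (0) that manipulates the rare drop.
for inputs in itertools.product((0, 1), repeat=8):
    if item_drop(inputs) == "rare":
        print("manipulated inputs:", inputs)
        break
```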
Re-recording emulators
Tool-assisted speedrunning relies on the same series of inputs being played back at different times always giving the same results. The emulation must be deterministic with regard to the saved inputs, and random seeds must not change. Otherwise, a speedrun that was optimal on one playback might not even complete the game on a second playback. This desynchronization occurs when the state of the emulated machine at a particular time index no longer corresponds with that which existed at the same point in the movie's production. Desyncs can also be caused by incomplete savestates, which cause the emulated machine to be restored in a state different from that which existed when it was saved. Desyncs can also occur when a user attempts to play back inputs from an input file downloaded from TASVideos and fails to match the correct enemy reactions due to bad AI or RNG.
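As a sketch of the determinism requirement, the fragment below stands in for an emulator core with a toy state-update function and checks that two playbacks of the same input sequence from the same initial state produce identical machine states; any mismatch would indicate a desync. The emulate function and its update rule are invented for this illustration, not a real emulator API.

```python
import hashlib

# Determinism check: replaying identical inputs from an identical initial
# state must reproduce an identical machine state. emulate() is a toy
# stand-in for a real emulator core.

def emulate(initial_state: int, inputs) -> int:
    state = initial_state
    for button in inputs:
        state = (6364136223846793005 * state + button + 1) % 2**64  # toy CPU step
    return state

def state_hash(state: int) -> str:
    return hashlib.sha256(state.to_bytes(8, "little")).hexdigest()[:16]

inputs = [0, 1, 1, 0, 1, 0, 0, 1]
first = state_hash(emulate(0, inputs))
second = state_hash(emulate(0, inputs))
assert first == second, "desync: playback diverged from the recording"
print("playback verified, state hash:", first)
```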
Verification
Some players have fraudulently recorded speedruns, either by creating montages of other speedruns or by altering the playing time, posting them as TAS or RTA runs. Because tool-assisted speedruns can account for all aspects of the game code, including its inner workings, and press buttons precisely and accurately, they can be used to help verify whether an unassisted speedrun record is legitimate.
One of the best-known cases is Billy Mitchell, whose Donkey Kong and Pac-Man Guinness records were revoked in 2018, because he used the emulator MAME.
In 2018, the world record for Dragster by Todd Rogers was removed from Twin Galaxies and Guinness records after an experiment showed that his 5.51 second time was impossible to achieve even with a TAS.
Examples
In Super Mario Bros., the current Famicom and NES human-theory world record, created by Maru, stands at 4:57.54 (4:54.265 in RTA timing). In Super Mario Bros. 3, arbitrary code execution along with a credits warp allows injecting a hack that simulates a Unix-like console, providing extra features to Mario. The current TAS, standing at 216 milliseconds (13 frames), was performed by exploiting a small bug in the Famicom and NES hardware in which the CPU makes many extra "read" requests from one of the controller inputs, registering many more button presses than have occurred; the A button is mashed at a rate of 8 kilohertz (8,000 times per second), performing the credits warp glitch. In Super Mario World, arbitrary code execution allows injection of playable versions of Flappy Bird, Pong, Snake, and Super Mario Bros.
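As a small arithmetic check of the timing quoted above, a TAS length in frames converts to wall-clock time by dividing by the console's frame rate; the NTSC NES/Famicom runs at roughly 60.0988 frames per second.

```python
# Convert a TAS length in frames to milliseconds at the NTSC NES/Famicom
# frame rate of roughly 60.0988 frames per second.

NES_FPS = 60.0988

def frames_to_ms(frames: int) -> float:
    return frames / NES_FPS * 1000

# The 13-frame Super Mario Bros. 3 TAS quoted above:
print(f"{frames_to_ms(13):.0f} ms")   # ~216 ms
```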
See also
Time attack — a mode which allows the player to finish a game (or a part of it) as fast as possible, saving record times.
Score attack — the attempt to reach a record logged point value in a game.
Electronic sports — video games that are played as competitive sports.
Piano roll
Meta Runner — a web series inspired by tool-assisted speedruns.
References
External links
TASVideos tool-assisted speedruns and resources
Speedrunning
Video game terminology
Cheating in video games | Tool-assisted speedrun | Technology | 1,362 |
685,311 | https://en.wikipedia.org/wiki/Experimental%20physics | Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as Galileo's experiments, to more complicated ones, such as the Large Hadron Collider.
Overview
Experimental physics is a branch of physics that is concerned with data acquisition, data-acquisition methods, and the detailed conceptualization (beyond simple thought experiments) and realization of laboratory experiments. It is often contrasted with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with acquiring empirical data.
Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relationship. The former provides data about the universe, which can then be analyzed in order to be understood, while the latter provides explanations for the data and thus offers insight into how to better acquire data and set up experiments. Theoretical physics can also offer insight into what data is needed in order to gain a better understanding of the universe, and into what experiments to devise in order to obtain it.
The tension between experimental and theoretical aspects of physics was expressed by James Clerk Maxwell as "It is not till we attempt to bring the theoretical part of our training into contact with the practical that we begin to experience the full effect of what Faraday has called 'mental inertia' - not only the difficulty of recognizing, among the concrete objects before us, the abstract relation which we have learned from books, but the distracting pain of wrenching the mind away from the symbols to the objects, and from the objects back to the symbols. This however is the price we have to pay for new ideas."
History
As a distinct field, experimental physics was established in early modern Europe, during what is known as the Scientific Revolution, by physicists such as Galileo Galilei, Christiaan Huygens, Johannes Kepler, Blaise Pascal and Sir Isaac Newton. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories, which is the key idea in the modern scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship (as a moving frame) and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum.
Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton (1643–1727). The Principia detailed two comprehensive and successful physical laws: Newton's laws of motion, from which arise classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. Both laws agreed well with experiment. The Principia also included several theories in fluid dynamics.
From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Robert Boyle, Thomas Young, and many others. In 1733, Daniel Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Benjamin Thompson (Count Rumford) demonstrated the conversion of mechanical work into heat, and in 1847 James Prescott Joule stated the law of conservation of energy, in the form of heat as well as mechanical energy. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom.
It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that magnetic fields and electricity could generate each other. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries. By the 19th century, the sciences had segmented into multiple fields with specialized researchers and the field of physics, although logically pre-eminent, no longer could claim sole ownership of the entire field of scientific research.
Current experiments
Some examples of prominent experimental physics projects are:
Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions (it was the first heavy-ion collider) and protons; it is located at Brookhaven National Laboratory, on Long Island, USA.
HERA, which collides electrons or positrons and protons, and is part of DESY, located in Hamburg, Germany.
LHC, or the Large Hadron Collider, which completed construction in 2008 but suffered a series of setbacks. It began operations in 2008 but was shut down for repairs until the summer of 2009. Upon completion it became the world's most energetic collider; it is located at CERN, on the French-Swiss border near Geneva. The collider became fully operational on March 29, 2010, a year and a half later than originally planned.
LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Currently two LIGO observatories exist: LIGO Livingston Observatory in Livingston, Louisiana, and LIGO Hanford Observatory near Richland, Washington.
JWST, or the James Webb Space Telescope, launched in 2021. It is the successor to the Hubble Space Telescope and surveys the sky in the infrared region. The main goals of the JWST are to understand the initial stages of the universe, galaxy formation, the formation of stars and planets, and the origins of life.
Mississippi State Axion Search (completed 2016), a Light Shining Through a Wall (LSW) experiment; EM source: 0.7 m, 50 W continuous radio-wave emitter
Method
Experimental physics uses two main methods of experimental research, controlled experiments, and natural experiments. Controlled experiments are often used in laboratories as laboratories can offer a controlled environment. Natural experiments are used, for example, in astrophysics when observing celestial objects where control of the variables in effect is impossible.
Famous experiments
Famous experiments include:
Bell test experiments
Cavendish experiment
Chicago Pile-1
Cowan–Reines neutrino experiment
Davisson–Germer experiment
Delayed-choice quantum eraser
Double-slit experiment
Eddington experiment
Eötvös experiment
Fizeau experiment
Foucault pendulum
Franck–Hertz experiment
Geiger–Marsden experiment
Gravity Probe A and Gravity Probe B
Hafele–Keating experiment
Homestake experiment
Kite experiment
Oil drop experiment
Michelson–Morley experiment
Rømer's determination of the speed of light
Stern–Gerlach experiment
Torricelli's experiment
Wu experiment
Experimental techniques
Some well-known experimental techniques include:
Crystallography
Ellipsometry
Faraday cage
Interferometry
NMR
Laser cooling
Laser spectroscopy
Raman spectroscopy
Signal processing
Spectroscopy
STM
Vacuum technique
X-ray spectroscopy
Inelastic neutron scattering
Prominent experimental physicists
Famous experimental physicists include:
Archimedes (c. 287 BC – c. 212 BC)
Alhazen (965–1039)
Al-Biruni (973–1043)
Al-Khazini (fl. 1115–1130)
Galileo Galilei (1564–1642)
Evangelista Torricelli (1608–1647)
Robert Boyle (1627–1691)
Christiaan Huygens (1629–1695)
Robert Hooke (1635–1703)
Isaac Newton (1643–1727)
Ole Rømer (1644–1710)
Stephen Gray (1666–1736)
Daniel Bernoulli (1700-1782)
Benjamin Franklin (1706–1790)
Laura Bassi (1711–1778)
Henry Cavendish (1731–1810)
Joseph Priestley (1733–1804)
William Herschel (1738–1822)
Alessandro Volta (1745–1827)
Pierre-Simon Laplace (1749–1827)
Benjamin Thompson (1753–1814)
John Dalton (1766–1844)
Thomas Young (1773–1829)
Carl Friedrich Gauss (1777–1855)
Hans Christian Ørsted (1777–1851)
Humphry Davy (1778–1829)
Augustin-Jean Fresnel (1788–1827)
Michael Faraday (1791–1867)
James Prescott Joule (1818–1889)
William Thomson, Lord Kelvin (1824–1907)
James Clerk Maxwell (1831–1879)
Ernst Mach (1838–1916)
John William Strutt (3rd Baron Rayleigh) (1842–1919)
Wilhelm Röntgen (1845–1923)
Karl Ferdinand Braun (1850–1918)
Henri Becquerel (1852–1908)
Albert Abraham Michelson (1852–1931)
Heike Kamerlingh Onnes (1853–1926)
J. J. Thomson (1856–1940)
Heinrich Hertz (1857–1894)
Jagadish Chandra Bose (1858–1937)
Pierre Curie (1859–1906)
William Henry Bragg (1862–1942)
Marie Curie (1867–1934)
Robert Andrews Millikan (1868–1953)
Ernest Rutherford (1871–1937)
Lise Meitner (1878–1968)
Max von Laue (1879–1960)
Clinton Davisson (1881–1958)
Hans Geiger (1882–1945)
C. V. Raman (1888–1970)
William Lawrence Bragg (1890–1971)
James Chadwick (1891–1974)
Arthur Compton (1892–1962)
Pyotr Kapitsa (1894–1984)
Charles Drummond Ellis (1895–1980)
John Cockcroft (1897–1967)
Patrick Blackett (Baron Blackett) (1897–1974)
Ukichiro Nakaya (1900–1962)
Enrico Fermi (1901–1954)
Ernest Lawrence (1901–1958)
Walter Houser Brattain (1902–1987)
Pavel Cherenkov (1904–1990)
Abraham Alikhanov (1904–1970)
Carl David Anderson (1905–1991)
Felix Bloch (1905–1983)
Ernst Ruska (1906–1988)
John Bardeen (1908–1991)
William Shockley (1910–1989)
Dorothy Hodgkin (1910–1994)
Luis Walter Alvarez (1911–1988)
Chien-Shiung Wu (1912–1997)
Willis Lamb (1913–2008)
Charles Hard Townes (1915–2015)
Rosalind Franklin (1920–1958)
Owen Chamberlain (1920–2006)
Nicolaas Bloembergen (1920–2017)
Vera Rubin (1928–2016)
Mildred Dresselhaus (1930–2017)
Rainer Weiss (1932–)
Carlo Rubbia (1934–)
Barry Barish (1936–)
Samar Mubarakmand (1942–)
Serge Haroche (1944–)
Anton Zeilinger (1945–)
Alain Aspect (1947–)
Gerd Binnig (1947–)
Steven Chu (1948–)
Wolfgang Ketterle (1957–)
Andre Geim (1958–)
Lene Hau (1959–)
Timelines
See the timelines below for listings of physics experiments.
Timeline of atomic and subatomic physics
Timeline of classical mechanics
Timeline of electromagnetism and classical optics
Timeline of gravitational physics and relativity
Timeline of nuclear fusion
Timeline of particle discoveries
Timeline of particle physics technology
Timeline of states of matter and phase transitions
Timeline of thermodynamics
See also
Physics
Engineering
Experimental science
Measuring instrument
Pulse programming
References
Further reading
External links | Experimental physics | Physics | 2,510 |
2,151,693 | https://en.wikipedia.org/wiki/Cassegrain%20reflector | The Cassegrain reflector is a combination of a primary concave mirror and a secondary convex mirror, often used in optical telescopes and radio antennas, the main characteristic being that the optical path folds back onto itself, relative to the optical system's primary mirror entrance aperture. This design puts the focal point at a convenient location behind the primary mirror and the convex secondary adds a telephoto effect creating a much longer focal length in a mechanically short system.
In a symmetrical Cassegrain both mirrors are aligned about the optical axis, and the primary mirror usually contains a hole in the center, thus permitting the light to reach an eyepiece, a camera, or an image sensor. Alternatively, as in many radio telescopes, the final focus may be in front of the primary. In an asymmetrical Cassegrain, the mirror(s) may be tilted to avoid obscuration of the primary or to avoid the need for a hole in the primary mirror (or both).
The classic Cassegrain configuration uses a parabolic reflector as the primary while the secondary mirror is hyperbolic. Modern variants may have a hyperbolic primary for increased performance (for example, the Ritchey–Chrétien design); and either or both mirrors may be spherical or elliptical for ease of manufacturing.
The Cassegrain reflector is named after a published reflecting telescope design that appeared in the April 25, 1672 Journal des sçavans, which has been attributed to Laurent Cassegrain. Similar designs using convex secondary mirrors have been found in Bonaventura Cavalieri's 1632 writings describing burning mirrors and Marin Mersenne's 1636 writings describing telescope designs. James Gregory's 1662 attempts to create a reflecting telescope included a Cassegrain configuration, judging by a convex secondary mirror found among his experiments.
The Cassegrain design is also used in catadioptric systems.
Cassegrain designs
"Classic" Cassegrain telescopes
The "classic" Cassegrain has a parabolic primary mirror and a hyperbolic secondary mirror that reflects the light back down through a hole in the primary. Folding the optics makes this a compact design. On smaller telescopes, and camera lenses, the secondary is often mounted on an optically flat, optically clear glass plate that closes the telescope tube. This support eliminates the "star-shaped" diffraction effects caused by a straight-vaned support spider. The closed tube stays clean, and the primary is protected, at the cost of some loss of light-gathering power.
It makes use of the special properties of parabolic and hyperbolic reflectors. A concave parabolic reflector will reflect all incoming light rays parallel to its axis of symmetry to a single point, the focus. A convex hyperbolic reflector has two foci and will reflect all light rays directed at one of its two foci towards its other focus. The mirrors in this type of telescope are designed and positioned so that they share one focus and so that the second focus of the hyperbolic mirror will be at the same point at which the image is to be observed, usually just outside the eyepiece.
In most Cassegrain systems, the secondary mirror blocks a central portion of the aperture. This ring-shaped entrance aperture significantly reduces a portion of the modulation transfer function (MTF) over a range of low spatial frequencies, compared to a full-aperture design such as a refractor or an offset Cassegrain. This MTF notch has the effect of lowering image contrast when imaging broad features. In addition, the support for the secondary (the spider) may introduce diffraction spikes in images.
The radii of curvature of the primary and secondary mirrors, respectively, in the classic configuration are

R_1 = -\frac{2DF}{F - B}

and

R_2 = -\frac{2DB}{F - B - D}

where

F is the effective focal length of the system,
B is the back focal length (the distance from the secondary to the focus),
D is the distance between the two mirrors and
M is the secondary magnification.

If, instead of B and D, the known quantities are the focal length of the primary mirror, f_1, and the distance to the focus behind the primary mirror, b, then D = f_1(F - b)/(F + f_1) and B = D + b.

The conic constant of the primary mirror is that of a parabola, K_1 = -1. Thanks to that there is no spherical aberration introduced by the primary mirror. The secondary mirror, however, is of a hyperbolic shape with one focus coinciding with that of the primary mirror and the other focus being at the back focal length B. Thus, the classical Cassegrain has ideal focus for the chief ray (the center spot diagram is one point). Since the conic constants do not depend on scaling, the conic constant of the secondary can be expressed as a function of the secondary magnification alone:

K_2 = -\left(\frac{M + 1}{M - 1}\right)^2, \qquad M = \frac{F}{f_1} = \frac{F - B}{D}.
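As a quick numerical check of these relations (a sketch only; the example values F = 2000 mm, f_1 = 500 mm and b = 150 mm are arbitrary assumptions, not taken from the text), the layout of a classic Cassegrain can be computed in Python:

# Classic Cassegrain layout from the effective focal length F, the primary
# focal length f1 and the back focus b behind the primary (all in mm).
F = 2000.0    # effective focal length of the system (assumed example value)
f1 = 500.0    # focal length of the primary mirror (assumed example value)
b = 150.0     # distance from the primary mirror to the final focus (assumed)

D = f1 * (F - b) / (F + f1)    # separation of the two mirrors
B = D + b                      # back focal length (secondary to focus)
M = (F - B) / D                # secondary magnification, also equal to F / f1

R1 = -2 * D * F / (F - B)            # radius of curvature of the primary
R2 = -2 * D * B / (F - B - D)        # radius of curvature of the secondary
K1 = -1.0                            # conic constant of the parabolic primary
K2 = -((M + 1) / (M - 1)) ** 2       # conic constant of the hyperbolic secondary

print(f"D = {D:.1f} mm, B = {B:.1f} mm, M = {M:.2f}")
print(f"R1 = {R1:.1f} mm, R2 = {R2:.1f} mm, K1 = {K1}, K2 = {K2:.3f}")

For these numbers the secondary magnification comes out as M = 4 and the primary radius as R1 = -1000 mm = -2 f_1, as expected for a parabolic primary of 500 mm focal length.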
Ritchey-Chrétien
The Ritchey-Chrétien is a specialized Cassegrain reflector which has two hyperbolic mirrors (instead of a parabolic primary). It is free of coma and spherical aberration at a flat focal plane, making it well suited for wide field and photographic observations. It was invented by George Willis Ritchey and Henri Chrétien in the early 1910s. This design is very common in large professional research telescopes, including the Hubble Space Telescope, the Keck Telescopes, and the Very Large Telescope (VLT); it is also found in high-grade amateur telescopes.
Dall-Kirkham
The Dall-Kirkham Cassegrain telescope design was created by Horace Dall in 1928 and took on the name in an article published in Scientific American in 1930 following discussion between amateur astronomer Allan Kirkham and Albert G. Ingalls, the magazine's astronomy editor at the time. It uses a concave elliptical primary mirror and a convex spherical secondary. While this system is easier to polish than a classic Cassegrain or Ritchey–Chrétien system, the off-axis coma is significantly worse, so the image degrades quickly off-axis. Because this is less noticeable at longer focal ratios, Dall-Kirkhams are seldom faster than f/15.
Off-axis configurations
An unusual variant of the Cassegrain is the Schiefspiegler telescope ("skewed" or "oblique reflector"; also known as the "Kutter telescope" after its inventor, Anton Kutter) which uses tilted mirrors to avoid the secondary mirror casting a shadow on the primary. However, while eliminating diffraction patterns this leads to several other aberrations that must be corrected.
Several different off-axis configurations are used for radio antennas.
Another off-axis, unobstructed design and variant of the Cassegrain is the 'Yolo' reflector invented by Arthur Leonard. This design uses a spherical or parabolic primary and a mechanically warped spherical secondary to correct for off-axis induced astigmatism. When set up correctly the Yolo can give uncompromising unobstructed views of planetary objects and non-wide field targets, with no lack of contrast or image quality caused by spherical aberration. The lack of obstruction also eliminates the diffraction associated with Cassegrain and Newtonian reflector astrophotography.
Catadioptric Cassegrains
Catadioptric Cassegrains use two mirrors, often with a spherical primary mirror to reduce cost, combined with refractive corrector element(s) to correct the resulting aberrations.
Schmidt-Cassegrain
The Schmidt-Cassegrain was developed from the wide-field Schmidt camera, although the Cassegrain configuration gives it a much narrower field of view. The first optical element is a Schmidt corrector plate. The plate is figured by placing a vacuum on one side, and grinding the exact correction required to correct the spherical aberration caused by the spherical primary mirror. Schmidt-Cassegrains are popular with amateur astronomers. An early Schmidt-Cassegrain camera was patented in 1946 by artist/architect/physicist Roger Hayward, with the film holder placed outside the telescope.
Maksutov-Cassegrain
The Maksutov-Cassegrain is a variation of the Maksutov telescope named after the Soviet optician and astronomer Dmitri Dmitrievich Maksutov. It starts with an optically transparent corrector lens that is a section of a hollow sphere. It has a spherical primary mirror, and a spherical secondary that is usually a mirrored section of the corrector lens.
Argunov-Cassegrain
In the Argunov-Cassegrain telescope all optics are spherical, and the classical Cassegrain secondary mirror is replaced by a sub-aperture corrector consisting of three air spaced lens elements. The element farthest from the primary mirror is a Mangin mirror, which acts as a secondary mirror.
Klevtsov-Cassegrain
The Klevtsov-Cassegrain, like the Argunov-Cassegrain, uses a sub-aperture corrector consisting of a small meniscus lens and a Mangin mirror as its "secondary mirror".
Cassegrain radio antennas
Cassegrain designs are also utilized in satellite telecommunication earth station antennas and radio telescopes, ranging in size from 2.4 metres to 70 metres. The centrally located sub-reflector serves to focus radio frequency signals in a similar fashion to optical telescopes.
An example of a Cassegrain radio antenna is the 70-meter dish at JPL's Goldstone antenna complex. For this antenna, the final focus is in front of the primary, at the top of the pedestal protruding from the mirror.
See also
Catadioptric system
Celestron (Schmidt–Cassegrains, Maksutov Cassegrains)
List of telescope types
Meade Instruments (Schmidt–Cassegrains, Maksutov Cassegrains)
Questar (Maksutov Cassegrains)
Refracting telescope
Vixen (Cassegrains, Klevtsov–Cassegrain)
References
External links
Antennas (radio)
Radio frequency propagation
Radio frequency antenna types
Telescope types | Cassegrain reflector | Physics | 2,054 |
2,968,782 | https://en.wikipedia.org/wiki/Weil%20reciprocity%20law | In mathematics, the Weil reciprocity law is a result of André Weil holding in the function field K(C) of an algebraic curve C over an algebraically closed field K. Given functions f and g in K(C), i.e. rational functions on C, then
f((g)) = g((f))
where the notation has this meaning: (h) is the divisor of the function h, or in other words the formal sum of its zeroes and poles counted with multiplicity; and a function applied to a formal sum means the product (with multiplicities, poles counting as a negative multiplicity) of the values of the function at the points of the divisor. With this definition there must be the side-condition, that the divisors of f and g have disjoint support (which can be removed).
In the case of the projective line, this can be proved by manipulations with the resultant of polynomials.
To remove the condition of disjoint support, for each point P on C a local symbol
(f, g)_P
is defined, in such a way that the statement given is equivalent to saying that the product over all P of the local symbols is 1. When f and g both take the values 0 or ∞ at P, the definition is essentially in limiting or removable singularity terms, by considering (up to sign)
f^a g^b
with a and b such that the function has neither a zero nor a pole at P. This is achieved by taking a to be the multiplicity of g at P, and −b the multiplicity of f at P. The definition is then
(f, g)_P = (−1)^{ab} (f^a g^b)(P).
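For rational functions on the projective line the local symbol can be computed mechanically from this definition. The sketch below is illustrative only: it uses SymPy, and the choice f = g = x evaluated at P = 0 is an assumed example, not one taken from the text.

import sympy as sp

x = sp.symbols('x')

def multiplicity(poly, p):
    # Number of times (x - p) divides the polynomial expression poly.
    m = 0
    q, r = sp.div(poly, x - p, x)
    while r == 0:
        m += 1
        poly = q
        q, r = sp.div(poly, x - p, x)
    return m

def ord_at(h, p):
    # Order of the rational function h at x = p (zeros count positively, poles negatively).
    num, den = sp.fraction(sp.cancel(h))
    return multiplicity(num, p) - multiplicity(den, p)

def local_symbol(f, g, p):
    # Weil local symbol (f, g)_P at the finite point P = p, following the definition above.
    a = ord_at(g, p)        # a is the multiplicity of g at P
    b = -ord_at(f, p)       # -b is the multiplicity of f at P
    value = sp.limit(f**a * g**b, x, p)   # f^a g^b has neither a zero nor a pole at P
    return sp.simplify(sp.Integer(-1)**(a * b) * value)

print(local_symbol(x, x, 0))   # prints -1: the symbol of x with itself at the origin

The matching symbol at infinity is also −1, so the product over all points of the line equals 1, as the reciprocity law requires.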
See for example Jean-Pierre Serre, Groupes algébriques et corps de classes, pp. 44–46, for this as a special case of a theory on mapping algebraic curves into commutative groups.
There is a generalisation due to Serge Lang to abelian varieties (Lang, Abelian Varieties).
References
André Weil, Oeuvres Scientifiques I, p. 291 (in Lettre à Artin, a 1942 letter to Artin, explaining the 1940 Comptes Rendus note Sur les fonctions algébriques à corps de constantes finis)
for a proof in the Riemann surface case
Algebraic curves
Theorems in algebraic geometry | Weil reciprocity law | Mathematics | 501 |
21,475,852 | https://en.wikipedia.org/wiki/Camcon%20Technology | Camcon Technology (also Camcon Auto) is a Cambridge-based company focused on the core research and development of the Camcon Binary Actuator, a new class of digital valve technology. It develops high-speed and low-energy control of liquid and gas used in the healthcare industry.
About Camcon
Camcon is based in the UK's Silicon Fen and was founded by Wladyslaw Wygnanski in 2000 as a vehicle to support the development and commercialisation of a new class of binary actuating technology. The Intellectual Property company aims to make the Camcon Binary Actuator a worldwide standard and 32 worldwide patents have already been granted.
The company is developing products based on the Camcon Binary Actuator in a number of industrial markets, including oil & gas, medical, automotive and aviation, where its unique characteristics are deemed to offer the largest financial and technological return.
Camcon is funded by Hit & Run Music Publishing, the management team behind the band Genesis, and ACUS Managing Partners, an active-management venture capital firm that specialises in funding early-stage technology companies.
In 2008, Lord Young of Graffham took on the role of chairman and invested the capital that the company required to complete its current development programme and see the introduction of Camcon products into the market.
References
External links
Official website
Actuators
Companies based in Cambridge
Digital technology
Gas technologies | Camcon Technology | Technology | 281 |
10,300,881 | https://en.wikipedia.org/wiki/Amanita%20brunnescens | Amanita brunnescens, also known as the brown American star-footed amanita or cleft-footed amanita is a native North American mushroom of the large genus Amanita. It differs from A. phalloides (the death cap) by its fragile volva and tendency to bruise brown.
Taxonomy
Originally presumed to be the highly toxic Amanita phalloides (the death cap) by renowned American mycologist Charles Horton Peck, it was described and named by George F. Atkinson of Cornell University. He named it after the fact that it bruised brown.
Description
Amanita brunnescens has a mostly brown cap, with possible tones of olive, grey, or red. At maturity the cap is often around wide. The cap margins lack universal veil remnants. The shape of the cap can be bell-shaped to convex, becoming planar as it matures. The flesh within the cap is mostly white or cream and can bruise brown. The characteristic Amanita gills are free from the stipe and white. The stipe is also white, with a smooth basal bulb that distinctly splits into a "cleft-foot". It stains reddish-brown on the lower half, especially when handled, and averages about 9 cm tall. A partial veil is present, often white with possible brown coloration. There is no volva, but white to brownish volval remnants may be found if the fruiting body is excavated carefully.
The odor, if present, is of raw potatoes; a piece of the stipe may need to be cut in order to detect the faint scent.
The spore print is white, and "the spores measure (7.0-) 8.0–9.2 (-9.5) × (6.5-) 7.2–8.5 (-9.2) μm and are globose to subglobose (occasionally broadly ellipsoid) and amyloid. Clamps are absent from bases of basidia."
Variations
Amanita brunnescens var. pallida is almost identical to the description above, but with a white cap color.
Similar species
A. brunnescens' most distinguishing features are the characteristic Amanita stature with warts and partial veil, as well as the cleft foot and reddening base.
Successful mushroom identification relies on the collection of mature, healthy and undamaged specimens. Amanitas can be separated from other agarics by their (usually) tall and slender stature, presence of universal veil remnants and/or a volva, and sometimes a partial veil. Other genera with volvas include Volvariella and Volvopluteus which will never have a partial veil or universal veil remnants on the cap. Those genera also have pink spore prints instead of Amanita's white.
Section Validae, to which A. brunnescens belongs, can be distinguished from other Amanita sections via the smooth bulbous base without a volva, stipe with a partial veil, and no veil remnants hanging off of cap margin. Other Amanita sections have either a conspicuous volva or a concentric/scaly stipe base.
Amanita amerirubescens has an indistinct (sometimes a little swollen) stipe base, a brassy yellow to reddish cap, and when young, yellowish warts on cap. This species also reddens.
Amanita flavorubens is similar to A. amerirubescens, but has a yellower cap and warts that retain their yellow color for longer.
Amanita asteropus is the European version of A. brunnescens.
Toxicity
It is of unknown edibility and may be poisonous.
See also
List of Amanita species
References
brunnescens
Fungi of North America
Fungi described in 1918
Fungus species | Amanita brunnescens | Biology | 797 |
52,007,945 | https://en.wikipedia.org/wiki/NGC%20285 | NGC 285 is a lenticular galaxy in the constellation Cetus. It was discovered on October 2, 1886, by Francis Leavenworth.
References
External links
0285
18861002
Cetus
Lenticular galaxies
Discoveries by Francis Leavenworth
003141 | NGC 285 | Astronomy | 52 |
19,880,509 | https://en.wikipedia.org/wiki/Great%20Rebuilding | A Great Rebuilding is a period in which a heightened level of construction work, architectural change, or rebuilding occurred.
More specifically, W. G. Hoskins defined the term "The Great Rebuilding" in England as the period from the mid-16th century until 1640. Hoskins' initial theory held that during this period, improved economic conditions in England led to the expansion, rebuilding or architectural improvement of a large number of rural buildings.
The precise time period, extent and impact of "The Great Rebuilding" is contested. Ronald Brunskill accepts that in much of England it spanned the period 1570–1640, but that the period varied both by region and by social class. It was earliest in South East England, later in South West England and Cornwall, about 1670–1720 in Northern England and later still in Wales. In each region it affected higher-income social classes first and then progressed to lower-income classes.
References
Sources and further reading
Architectural history
Vernacular architecture | Great Rebuilding | Engineering | 197 |
46,703,786 | https://en.wikipedia.org/wiki/Deeper%20Fishfinder | Deeper Smart Sonar is a wireless, castable echo-sounder compatible with iOS and Android smartphones and tablets. Its Wi-Fi connection allows a range of up to 330 ft / 100 m between the sounder and the device and a depth range of up to 260 ft / 80 m. The scanning frequency allows the device to capture fast-moving objects, and the scanning resolution allows it to resolve small objects.
Usage
Deeper sonar can be cast to any spot in the water. While floating on the water surface it connects to a smart device and transmits data used for finding fish, reading depth, exploring bottom contour and vegetation, and measuring water temperature.
Operation
Operation of Deeper is based on echolocation and Wi-Fi technologies. Echolocation is a method for detecting and locating objects submerged in water. When a sound signal is produced, the time it takes for the signal to reach an object and for its echo to return is used to calculate the distance between the sonar and the object. Wi-Fi allows the sonar readings to be transferred to a smartphone or tablet from up to 330 ft / 100 m away.
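As a rough sketch of the echo-ranging arithmetic (not manufacturer code; the sound speed of about 1,480 m/s in fresh water and the echo time used are assumed example values), the depth follows from half the round-trip time of the pulse:

# Echo ranging: the pulse travels to the bottom and back, so the distance
# is half of (speed of sound in water) x (round-trip time).
speed_of_sound = 1480.0   # m/s in fresh water (approximate, assumed value)
echo_time = 0.027         # s between emitting the ping and receiving the echo (assumed)

depth_m = speed_of_sound * echo_time / 2
print(f"Estimated depth: {depth_m:.1f} m ({depth_m * 3.28084:.1f} ft)")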
Technical specifications
Deeper sonar can be cast from varying heights and positions.
App
The Deeper app is compatible with devices running iOS 8.0 and Android 4.0 up to the latest iOS and Android versions.
Features:
Real-time mapping & offline maps
Unlimited data history
Ice fishing mode
Solunar calendar, notes, camera, social media sharing
Sleep feature to pause battery use when in water (low power-consumption mode)
See also
Fishfinder
References
External links
Sonar
Fishing equipment
Wireless
Lithuanian brands | Deeper Fishfinder | Engineering | 317 |
22,399,444 | https://en.wikipedia.org/wiki/VistA | The Veterans Health Information Systems and Technology Architecture (VistA) is the system of record for the clinical, administrative and financial operations of the Veterans Health Administration. VistA consists of over 180 clinical, financial, and administrative applications integrated within a single shared lifelong database.
The Veterans Health Administration (VHA) is the largest integrated national healthcare delivery system in the United States, providing care for nearly 9 million veterans by 180,000 medical professionals.
VistA received the Computerworld Smithsonian Award for best use of Information Technology in Medicine, and more recently received the highest overall satisfaction rating by physician users of EHRs in the U.S.
In May 2018, the VA awarded a contract to modernize VistA by implementing a commercial EHR, with projected completion by 2028. By March 2023, halfway through the program, only 5 of the 150 VA medical centers (3%) had piloted the new system. Numerous reports of safety and reliability problems had emerged at the commercial EHR sites, and four veterans had suffered premature deaths. As a result, in April 2023 the House Veterans Affairs Committee for Health IT issued a bill to terminate the commercial EHR contract.
Clinical Functions
Financial-Administrative Functions
Infrastructure Functions
Patient Web Portal Functions
Achievements
For its development of VistA, the United States Department of Veterans Affairs (VA) / Veterans Health Administration (VHA) was named the recipient of the Innovations in American Government Award presented by the Ash Institute of the John F. Kennedy School of Government at Harvard University in July, 2006.
The adoption of VistA has allowed the VA to achieve a pharmacy prescription accuracy rate of 99.997%, and the VA outperforms most public sector hospitals on many other quality metrics, all attributable to VistA.
Hospitals using VistA are one of only a few healthcare systems in the U.S. that have achieved the highest level of electronic health record integration HIMSS Stage 7, while a non-VA hospital using VistA is one of only 42 US hospitals that has achieved HIMSS stage 6.
Licensing and dissemination
The VistA system is public domain software, available through the Freedom Of Information Act directly from the VA website or through a growing network of distributors, such as the OSEHRA VistA-M.git tree.
VistA modules and projects
Database backend
VistA was developed using the M or MUMPS integrated application database. The VA currently runs its VistA systems on a proprietary version of MUMPS called Caché, but an open source MUMPS database engine, called GT.M, for Linux and Unix systems has also been developed.
Patient Web Portal
MyHealtheVet is a web portal that allows veterans to access and update their personal health record, refill prescriptions, and schedule appointments. This also allows veterans to port their health records to institutions outside the VA health system or keep a personal copy of their health records, a Personal Health Record (PHR).
VistA Imaging
The Veterans Administration developed VistA Imaging, a PACS (radiology imaging) system for integrating image-based information, such as X-rays, CAT scans, EKGs, pathology slides, and scanned documents, into the VistA electronic medical records system. Integration of images into a medical record is critical to efficient high-quality patient care.
Deployments and uses
Role in development of a national healthcare network
The VistA electronic healthcare record has been widely credited for reforming the VA healthcare system, improving safety and efficiency substantially. The results have spurred a national impetus to adopt electronic medical records similar to VistA nationwide.
A Clinical Data Repository (CDR) / Health Data Repository (HDR) interface, known as CHDR, allows interoperability between the DoD's Clinical Data Repository (CDR) and the VA's Health Data Repository (HDR). This is accomplished through the Bidirectional Health Information Exchange (BHIE). Bidirectional real-time exchange of pharmacy, allergy, demographic and laboratory data occurred in phase 1. Phase 2 involved additional drug–drug interaction and allergy checking. Initial deployment of the system was completed in March 2007 at the El Paso, Augusta, Pensacola, Puget Sound, Chicago, San Diego, and Las Vegas facilities.
VistA has been interfaced with commercial off-the-shelf products. Standards and protocols used by VA are consistent with current industry standards and include HL7, DICOM, and other protocols.
Tools for CCR/CCD support have been developed for VistA, allowing VistA to communicate with other EHRs using these standardized information exchange protocols. This includes the Mirth open source cross platform HL7 interface and NHIN Connect, the open source health information exchange adaptor.
The VistA EHR has been used by the VA in combination with telemedicine to provide surgical care to rural areas in Nebraska and western Iowa.
Usage in non-governmental hospitals
Under the Freedom of Information Act (FOIA), the VistA system, the CPRS graphical interface, and unlimited ongoing updates (500–600 per year) are provided as public domain software.
This was done by the U.S. government in an effort to make VistA available as a low cost Electronic Health Record (EHR) for non-governmental hospitals and other healthcare entities.
The VA has produced a version of VistA that runs on GT.M in a Linux operating system, and which was suitable for use in private settings. VistA has since been adapted by companies such as Medsphere for hundreds of hospitals and clinics in the private sector. VistA has been deployed internationally, running the healthcare information systems of entire national healthcare systems, such as that of the Kingdom of Jordan. Some United States universities, such as UC Davis and Texas Tech, have implemented VistA. The non-profit organization WorldVistA was established to extend and collaboratively improve the VistA electronic health record and health information system for use outside the VA, in the private and public sectors, throughout the U.S. and internationally.
VistA (and other derivative EMR/EHR systems) can be interfaced with healthcare databases not initially used by the VA system, including billing software, lab databases, and image databases (radiology, for example).
VistA implementations have been deployed (or are currently being deployed) in non-VA healthcare facilities in Texas, Arizona, Florida, Hawaii, New Jersey, Oklahoma, West Virginia, California, New York, and Washington, D.C.
In one state, the cost of a multiple hospital VistA-based EHR network was implemented for one tenth the price of a commercial EHR network in another hospital network in the same state ($9 million versus $90 million for 7–8 hospitals each). (Both VistA and the commercial system used the MUMPS database).
VistA has even been adapted into a Health Information System (VMACS) at the veterinary medical teaching hospital at UC Davis.
International deployments
VistA software modules have been installed around the world, or are being considered for installation, in healthcare institutions such as the World Health Organization, and in countries such as Mexico, American Samoa, Kurdistan, Iraq, Finland, Jordan, Germany, Kenya, Nigeria, Egypt, Malaysia, India, Brazil, Pakistan, and Denmark.
In September 2009, Dell Computer bought Perot Systems, the company installing VistA in Jordan (the Hakeem project).
History
The name "VistA" (Veterans Health Information Systems and Technology Architecture) was adopted by VA in 1994, when the Under Secretary for Health of the U.S. Department of Veterans Affairs (VA), Dr. Ken Kizer, renamed what was previously called the Decentralized Hospital Computer Program (DHCP).
Both Dr. Robert Kolodner (National Health Information Technology Coordinator) and George Timson (an architect of VistA who has been involved with it since the early years) date VistA's actual architecture genesis, then, to 1977. The program was launched in 1978 with the deployment of the initial modules in about twenty VA Medical Centers. The program was named the Decentralized Hospital Computer Program (DHCP) in 1981.
In December 1981, Congressman Sonny Montgomery of Mississippi arranged for the Decentralized Hospital Computer Program (DHCP) to be written into law as the medical-information systems development program of the VA. VA Administrator Robert P. Nimmo signed an Executive Order in February 1982 describing how the DHCP was to be organized and managed within the VA's Department of Medicine and Surgery.
In conjunction with the VA's DHCP development, the (IHS) Indian Health Service deployed a system built on and augmenting DHCP throughout its Federal and Tribal facilities as the Resource and Patient Management System (RPMS). This implementation emphasized the integration of outpatient clinics into the system, and many of its elements were soon re-incorporated into the VA system (through a system of technology sharing). Subsequent VistA systems therefore included elements from both RPMS and DHCP. Health IT sharing between VA and IHS continues to the present day.
The U.S. Department of Defense (DoD) then contracted with Science Applications International Corporation (SAIC) for a heavily modified and extended form of the DHCP system for use in DoD healthcare facilities, naming it the Composite Health Care System (CHCS).
Meanwhile, in the early 1980s, major hospitals in Finland were the first institutions outside of the United States to adopt and adapt the VistA system to their language and institutional processes, creating a suite of applications called MUSTI and Multilab. (Since then, institutions in Germany, Egypt, Nigeria, and other nations abroad have adopted and adapted this system for their use, as well.)
The four major adopters of VistA – VA (VistA), DoD (CHCS), IHS (RPMS), and the Finnish Musti consortium – each took VistA in a different direction, creating related but distinct "dialects" of VistA. VA VistA and RPMS exchanged ideas and software repeatedly over the years, and RPMS periodically folded back into its code base new versions of the VA VistA packages. These two dialects are therefore the most closely related. The Musti software drifted further away from these two but retained compatibility with the infrastructure of RPMS and VA VistA (while adding additional GUI and web capabilities to improve function). Meanwhile, the CHCS code base diverged from that of the VA's VistA in the mid-eighties and has never been reintegrated. The VA and the DoD had been instructed for years to improve the sharing of medical information between the two systems, but for political reasons made little progress toward bringing the two dialects back together. More recently, CHCS's development was brought to a complete stop by continued political opposition within the DoD, and it has now been supplanted by a related, but different, system called AHLTA. While AHLTA is the new system for DoD, the core systems beneath AHLTA (for Computerized Physician Order Entry, appointing, referral management, and creation of new patient registrations) remain those of the underlying CHCS system. (While some ongoing development has occurred for CHCS, the majority of funds are consumed by the AHLTA project.) Thus, the VistA code base was split four ways.
Many VistA professionals then informally banded together as the "Hardhats" (a name the original VistA programmers used for themselves) to promote that the FOIA (Freedom of Information Act) release of VA VistA (that allows it to be in the public domain) be standardized for universal usage.
WorldVistA was formed from this group and was incorporated in March 2003 as a non-profit corporation. This allowed the WorldVistA board of directors to pursue certain activities (obtaining grants, creating contracts, and making formal alliances) that they otherwise could not pursue as an informal organization. It is, however, an organization independent of the VA system and its version of VistA therefore differs from that of the VA's. Nevertheless, it maintains as an objective that its public version be compatible (interoperable) with the VA's official version. It has developed packages of WorldVistA for multiple operating systems, including Linux (Debian/Ubuntu and Red Hat) -based and Microsoft Windows-based operating systems. Co-operation with the maintainers and vendors of OpenVistA, another widely deployed open source public version of VistA, helps maintain interoperability and a standardized framework.
In 2011 the Open Source Electronic Health Record Agent (OSEHRA) project was started (in cooperation with the Department of Veterans Affairs) to provide a common code repository for VistA (and other EHR and health IT) software. On February 10, 2020 the Open Source Electronic Health Record Alliance (OSEHRA) announced that they would cease operations on February 14 of 2020.
In summary, it was through the joint collaboration of thousands of clinicians and systems experts from the United States and other nations, many of them volunteers, that the VistA system developed.
Supporters of VistA
There have been many champions of VistA as the electronic healthcare record system for a universal healthcare plan. VistA can act as a standalone system, allowing self-contained management and retention of healthcare data within an institution. Combined with HIE (or other data exchange protocol) it can be part of a peer-to-peer model of universal healthcare. It is also scalable to be used as a centralized system (allowing regional or even national management of healthcare records).
In addition to the unwavering support of congressional representatives such as Congressman Sonny Montgomery of Mississippi, numerous IT specialists, physicians, and other healthcare professionals have donated significant amounts of time in adapting the VistA system for use in non-governmental healthcare settings.
The ranking member of the House Veterans Affairs Committee's Oversight and Investigation Subcommittee, Rep. Ginny Brown-Waite of Florida, recommended that the Department of Defense (DOD) adopt VA's VistA system following accusations of inefficiencies in the DOD healthcare system. The DOD hospitals use Armed Forces Health Longitudinal Technology Application (AHLTA) which has not been as successful as VistA and has not been adapted to non-military environments (as has been done with VistA).
In November 2005, the U.S. Senate passed the Wired for Health Care Quality Act, introduced by Sen. Enzi of Wyoming with 38 co-sponsors, that would require the government to use the VA's technology standards as a basis for national standards allowing all health care providers to communicate with each other as part of a nationwide health information exchange. The legislation would also authorize $280 million in grants, which would help persuade reluctant providers to invest in the new technology. There has been no action on the bill since December 2005. Two similar House bills were introduced in late 2005 and early 2006; no action has been taken on either of them, either.
In late 2008, House Ways and Means Health Subcommittee Chair Congressman Pete Stark (D-CA) introduced the Health-e Information Technology Act of 2008 (H.R. 6898) that calls for the creation of a low-cost public IT system for those providers who do not want to invest in a proprietary one.
In April 2009, Sen. John D. Rockefeller of West Virginia introduced the Health Information Technology Public Utility Act of 2009 calling for the government to create an open source electronic health records solution and offer it at little or no cost to safety-net hospitals and small rural providers.
Detractors of VistA
The main complaint about VistA and CPRS is the outdated and inefficient user interface, which resembles information systems designed in the 1990s. Given the complexity of medical care, the burden of navigating an archaic system places a significant load on healthcare providers, contributing to inefficiency and provider burnout.
VistA Derivatives
WorldVistA or WorldVistA EHR
OpenVistA (Medsphere)
vxVistA (Document Storage Systems, Inc.)
Astronaut VistA
See also
Electronic health record
Health informatics
MUMPS
Veterans Health Administration
United States Department of Veterans Affairs
FileMan
VA Kernel
GNUmed
GNU Health
References
External links
Vistapedia: the WorldVistA Wiki
Hardhats – a VistA user community
Hardhats Google Group – a forum to discuss installation of WorldVistA
VistA Monograph wiki (OLPC project)
VistA Software Alliance (VistA Software Vendor Trade Organization)
VistA Imaging overview (Department of Veterans Affairs)
– Ash Institute News Release
VistA Glossary LiuTiu Medical Administrative Lexicon (Brokenly translated into English from Russian)
Ubuntu Doctors Guild Information about implementing VistA and other open source medical applications in Ubuntu Linux
A 40-year 'conspiracy' at the VA Politico, 2017
Videos about VistA:
History of Vista Architecture Interview with Tom Munnecke
Interview with Rob Kolodner regarding VistA's potential for the National Health Information Network
Impact of VistA Interview with Dr. Ross Fletcher
Interview with Philip Longman
Events leading up to the development of VistA Interview with Henry Heffernan
History of Vista Interview with Ruth Dayhoff
Early development of the Decentralized Hospital Computer Program Interview with Marty Johnson
Early days of the VA "Underground Railroad" Interview with Tom Munnecke
United States Department of Veterans Affairs
Electronic health records
Government software
Public-domain software | VistA | Technology | 3,471 |
9,688,913 | https://en.wikipedia.org/wiki/Kapustinskii%20equation | The Kapustinskii equation calculates the lattice energy U_L for an ionic crystal, which is experimentally difficult to determine. It is named after Anatoli Fedorovich Kapustinskii, who published the formula in 1956.

U_L = -K \cdot \frac{\nu \cdot |z^+| \cdot |z^-|}{r^+ + r^-} \cdot \left(1 - \frac{d}{r^+ + r^-}\right)

where

K = 1.20200 × 10⁻⁴ J·m·mol⁻¹,
d = 3.45 × 10⁻¹¹ m,
ν is the number of ions in the empirical formula,
z⁺ and z⁻ are the numbers of elementary charge on the cation and anion, respectively, and
r⁺ and r⁻ are the radii of the cation and anion, respectively, in meters.
The lattice energy calculated in this way is a good approximation to that given by the Born–Landé equation; the values differ in most cases by less than 5%.
Furthermore, one is able to determine the ionic radii (or more properly, the thermochemical radius) using the Kapustinskii equation when the lattice energy is known. This is useful for rather complex ions like sulfate (SO4^2−) or phosphate (PO4^3−).
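As an illustration of how the equation is applied (a sketch only; the ionic radii of 102 pm for Na+ and 181 pm for Cl− are assumed textbook values, not taken from this article), the lattice energy of sodium chloride can be estimated in a few lines of Python:

# Kapustinskii estimate of the lattice energy of NaCl.
K = 1.202e-4   # J·m·mol^-1
d = 3.45e-11   # m

nu = 2                                # ions per formula unit of NaCl
z_plus, z_minus = 1, 1                # charge numbers of Na+ and Cl-
r_plus, r_minus = 102e-12, 181e-12    # ionic radii in m (assumed textbook values)

r_sum = r_plus + r_minus
U_L = -K * nu * z_plus * z_minus / r_sum * (1 - d / r_sum)
print(f"Estimated lattice energy of NaCl: {U_L / 1000:.0f} kJ/mol")

The result, roughly -750 kJ/mol, is close to values obtained from Born–Haber cycles.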
Derivation from the Born–Landé equation
Kapustinskii originally proposed the following simpler form, which he faulted as "associated with antiquated concepts of the character of repulsion forces":

U_L = -K' \cdot \frac{\nu \cdot |z^+| \cdot |z^-|}{r^+ + r^-}

Here, K′ = 1.079 × 10⁻⁴ J·m·mol⁻¹. This form of the Kapustinskii equation may be derived as an approximation of the Born–Landé equation, below.
Kapustinskii replaced r_0, the measured distance between ions, with the sum of the corresponding ionic radii. In addition, the Born exponent, n, was assumed to have a mean value of 9. Finally, Kapustinskii noted that the Madelung constant, M, was approximately 0.88 times the number of ions in the empirical formula. The derivation of the later form of the Kapustinskii equation followed similar logic, starting from the quantum chemical treatment in which the final term is 1 − d/r_0, where d is as defined above. Replacing r_0 as before yields the full Kapustinskii equation.
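Written out explicitly (a sketch of the substitution only, in standard Born–Landé notation, keeping the Madelung constant per ion M/ν as an empirical constant):

U_L = -\frac{N_A M |z^+||z^-| e^2}{4\pi\varepsilon_0 r_0}\left(1 - \frac{1}{n}\right)
\;\approx\;
-\underbrace{\left[\frac{M}{\nu}\left(1 - \frac{1}{9}\right)\frac{N_A e^2}{4\pi\varepsilon_0}\right]}_{K'} \cdot \frac{\nu\,|z^+||z^-|}{r^+ + r^-}

With M/ν ≈ 0.87–0.88, the bracketed factor evaluates to about 1.08 × 10⁻⁴ J·m·mol⁻¹, consistent with the value of K′ quoted above.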
See also
Born–Haber cycle
References
Literature
A. F. Kapustinskii; Zhur. Fiz. Khim. Nr. 5, 1943, pp. 59 ff.
Chemical bonding
Crystallography
Eponymous equations of physics
Soviet inventions | Kapustinskii equation | Physics,Chemistry,Materials_science,Engineering | 506 |
1,726,293 | https://en.wikipedia.org/wiki/Telop | A TELOP (TELevision OPtical Slide Projector) was the trademark name of a multifunction, four-channel "project-all" slide projector developed by the Gray Research & Development Company for television usage, introduced in 1949. It was best remembered in the industry as an opaque slide projector for title cards.
Before Telop
In the early days of television, there were two types of slides for broadcast—a transparent slide or transparency, and an opaque slide, or Balop (a genericized trademark of Bausch & Lomb's Balopticon projectors). Transparency slides were prepared as 2-inch square cards mounted in cardboard or glass, or film, surrounded by a half-inch of masking on all four sides. Opaque, "Balop" slides were cards on stock or larger (always maintaining the 4:3 aspect ratio) that were photographed by the use of a Balopticon. Opaque cards were popular as shooting a card on a stand often caused keystoning problems. A fixed size and axis would ensure no geometric distortion.
Telop models
The Gray Company introduced the original Telop in 1949 (with additional models, later to be known as the Telop I). The dual projector unit offered both transparent and opaque projection. The standard sizes were used for both transparent and card slides, but they could also be made on 35mm film, on glass, or on film cards. The third and fourth units on the Telop were attachments with a vertical ticker-tape type roll strip that could be typed on and a horizontal unit similar to a small teleprompter used for title "crawls."
By 1952, when the Telop II was introduced, CBS and NBC were using Telop machines in combination with TV cameras to permit instant fading from one object to another by superimposition. The Telop II was a smaller version of the original model for new TV stations on a budget and featured two openings rather than the original four. At the same time, Gray also developed a Telojector, a gun-turret style slide projector for slides which had two projectors, facilitating easier lap dissolves between the two.
The Telop III, introduced in 1954, was a refinement of the previous models. Reduced to a single-channel version of the Telop, the Telop III added a remote control for switching up to 50 cards and added heat filters for opaque cards. Automation was the emphasis for this model, as was a tie-in with the Polaroid Camera Co., in which Polaroid instant photographs could be used in a Telop for on-site use.
Genericized trademark
Because of its popularity, the Telop became a catch-all term for large-format slide projectors and opaque cards, even after the Gray Company stopped manufacturing Telop projectors.
The term later also came to be used to indicate text superimposed on a screen, such as captions, subtitles, or scrolling tickers.
References
External links
Television technology
Television terminology | Telop | Technology | 610 |
26,301,668 | https://en.wikipedia.org/wiki/Brennand%20Farm | Brennand Farm is often claimed to be the true centre of Great Britain.
The centre, as calculated by Ordnance Survey as the centroid of the two-dimensional shape of Great Britain, including all its islands, is at Whitendale Hanging Stones, north of the farm and north-west of the village of Dunsop Bridge, which has the nearest BT phone box to the 'true centre'. A plaque reads "You are calling from the BT payphone that marks the centre of Great Britain".
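The centroid of such an outline can be computed from its boundary polygon with the standard shoelace formulas; for a shape made of a mainland and islands, the area-weighted average of the individual centroids is taken. The sketch below is purely illustrative: the triangular "coastline" is an assumed toy example, not Ordnance Survey data.

# Centroid of a simple (non-self-intersecting) polygon given as (x, y) vertices,
# using the shoelace formulas for the signed area and the centroid.
def polygon_centroid(vertices):
    area = 0.0
    cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return cx / (6 * area), cy / (6 * area)

# Toy "coastline": a triangle whose centroid is at (1, 1).
print(polygon_centroid([(0, 0), (3, 0), (0, 3)]))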
Haltwhistle claims to be the centre by a different method.
References
Geography of Ribble Valley
Geographical centres | Brennand Farm | Physics,Mathematics | 127 |
24,115,807 | https://en.wikipedia.org/wiki/Cellphone%20overage%20charges | Overage charges are incurred when a mobile phone (cellphone) is used for more time than the quota fixed under a post-payment plan.
Payment plans and overage charges
Billing for mobile phones can take the form of either a pre-paid plan or a post-paid plan. In a pre-paid plan, the cell phone user pays for the minutes before using them. This kind of plan is popular in many Asian, South American, and some European countries.
In post-paid plans, the cell phone user pays at the end of the month for the minutes used during that month. Post-paid plans are common in North America and are catching up in other countries. The cell phone providers (that is, the wireless carriers) typically charge a monthly fee for the post-paid plans. In turn, users get a monthly quota of minutes. When a user goes over the minutes allowed under the particular post-paid cell phone plan, they are charged separately for the extra minutes. In North America, this fee for the extra minutes is called overage fees or overage charges.
For example, suppose that a mobile phone user signs up for a post-paid cell phone plan that costs $40 per month and is allowed a quota of 700 minutes under that plan. If this user were to end up using 750 minutes in a month, then they would be charged an overage fee for the extra 50 minutes.
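The arithmetic of such an example can be written out directly; in the sketch below the per-minute overage rate of $0.45 is an assumed figure, since the text does not quote one:

# Post-paid bill with overage: the base fee covers a fixed quota of minutes,
# and each extra minute is billed at the overage rate.
monthly_fee = 40.00     # USD, covers the plan quota
quota_minutes = 700
overage_rate = 0.45     # USD per extra minute (assumed example rate)

minutes_used = 750
extra_minutes = max(0, minutes_used - quota_minutes)
total_bill = monthly_fee + extra_minutes * overage_rate
print(f"{extra_minutes} overage minutes -> total bill ${total_bill:.2f}")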
Mobile networks may offer the ability to check how much of a quota has been used, or users may sign up to third-party monitoring services. Overage charges are being phased out by some major providers.
Overage charges in Germany
In Germany as of 2015, most post-paid cell phone plans are (for calls within Germany) either true flat rates (monthly rates between 17 and 40 EUR, depending on how much data and how many SMS are included) or include a more limited amount of included minutes (100-200 Minutes), with a modest overage of 7 to 21 Euro-cent per minute. The time of days is not accounted for.
Overage charges in North American countries
North American post-paid cell phone plans typically divide the minutes used in various categories such as peak minutes, evening and night minutes (also called off-peak minutes), weekend minutes, mobile-to-mobile minutes, and so on. The peak minutes also go by the name of "anytime minutes" or "whenever minutes".
The peak minutes allowed under most post-paid plans are usually limited (such as 750 peak minutes allowed in a month for $40 per month) and minutes in all other categories are either free, or come with a large monthly quota (such as 5000 weekend and weeknight minutes). Therefore, in North American countries, the overage charges typically mean "peak overage charges", that is, a cell phone user gets charged separately for the extra peak minutes used in a month.
See also
History of mobile phones
Mobile phone operator
References
Mobile telecommunications | Cellphone overage charges | Technology | 601 |
38,529,941 | https://en.wikipedia.org/wiki/Personal%20safety%20app | A personal safety app or SOS app is a mobile application designed to provide individuals with additional security and assistance in various situations. These apps offer a range of features and functionalities that users can utilize to enhance their personal safety. Common features include emergency alerts, location sharing, safety tips, SOS buttons, audible alarms, and community safety reporting. Users can employ these apps to quickly send emergency alerts, share their real-time location with trusted contacts, and access safety-related information and resources.
Features
While most personal safety apps are offered as freeware, some are distributed as freemium apps with paid features that can be unlocked through in-app purchases, supported through advertising, or marketed as paid applications. Typical features include sending text messages, e-mails, IMs, or even Tweets to close friends (containing the user's approximate location), or emitting a loud intermittent "shrill whistle" in the manner of a rape alarm. Additional features include geofencing and preventive alerts. Some apps allow users to customize the alert message sent and the ringtone that signals the reception of a new alert.
They normally include different triggering mechanisms to cope with different emergency situations. Common triggering mechanisms include pressing and holding the phone's switch button for a few seconds, shaking the phone vigorously, tapping an alert button, and even a loud scream that the app can detect. When the alert is triggered, these apps automatically go to work, sending text messages and emails with the exact location of the user to the emergency contacts listed in the app.
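A minimal sketch of such a trigger-to-alert flow is shown below. It is purely illustrative: detect_trigger, current_location and send_message are hypothetical placeholder functions, not the API of any particular app or platform.

# Hypothetical skeleton of a personal safety alert: watch for a trigger
# (button press, shake, scream detection), then notify the emergency contacts.
EMERGENCY_CONTACTS = ["+1-555-123-0001", "friend@example.com"]   # assumed examples

def detect_trigger():
    """Placeholder: return a trigger name ('button', 'shake', 'scream') or None."""
    return None

def current_location():
    """Placeholder: return (latitude, longitude) from the phone's GPS."""
    return (51.5074, -0.1278)

def send_message(contact, text):
    """Placeholder: deliver the alert by SMS, e-mail or IM."""
    print(f"to {contact}: {text}")

def check_and_alert():
    trigger = detect_trigger()
    if trigger:
        lat, lon = current_location()
        alert = f"SOS ({trigger}): I need help. My approximate location is {lat:.4f}, {lon:.4f}."
        for contact in EMERGENCY_CONTACTS:
            send_message(contact, alert)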
In April 2016, the Indian government mandated that all cellphones sold in the country must contain a panic button function by 2017, activated through either a dedicated button or pressing the power key three times.
Security Concerns
Many users have expressed concerns about the safety of personal safety apps and have raised questions regarding how the data collected by these apps is processed and used. These concerns are often related to issues of privacy and data security. Users worry that the personal information, location data, and emergency alerts they share with these apps could be mishandled or accessed by unauthorized parties. Additionally, there is a growing awareness of potential risks associated with the storage and transmission of sensitive data. As a result, some users are cautious about using personal safety apps and may choose to limit the information they share or thoroughly review the privacy policies and data handling practices of the apps they use. These concerns underscore the importance of transparent data practices and robust security measures within the personal safety app industry to address and alleviate user apprehensions.
Inbuilt SOS Features in Mobile Operating Systems
In recent years, major technology companies like Apple and Google have incorporated inbuilt SOS features directly into their mobile operating systems. These features are designed to provide users with quick and efficient methods of seeking assistance during emergencies. For instance, Apple's iOS includes an Emergency SOS feature that allows users to rapidly call emergency services and notify designated contacts by pressing a specific combination of hardware buttons or using the device's touchscreen. Similarly, Google's Android operating system offers an Emergency Information feature that enables users to input vital medical and emergency contact information accessible even when the device is locked. These inbuilt SOS features offer an additional layer of safety and convenience for users, complementing standalone personal safety apps and reinforcing the importance of technology in enhancing personal security.
References
Alarms
Mobile software | Personal safety app | Technology | 689 |
56,303,902 | https://en.wikipedia.org/wiki/2-Methyleneglutaronitrile | 2-Methylene glutaronitrile is a dimerization product of acrylonitrile and a starting material for di- and triamines, for the biocide 2-bromo-2-(bromomethyl)pentanedinitrile and for heterocycles, such as 3-cyanopyridine.
Preparation
2-Methylene glutaronitrile is a side-product in the production of hexanedinitrile which is used (after hydrogenation to 1,6-diaminohexane) as a key component for engineering polymers such as the polyamides (PA 66) or polyurethanes. Hexanedinitrile can be industrially produced by electrochemical hydrodimerisation or by catalytic dimerization of acrylonitrile.
A catalytic tail-tail dimerization of two acrylonitrile molecules forms hexanedinitrile:
Head-to-tail dimerization can also occur in the process. In the presence of tricyclohexylphosphine (PCy3) a yield of up to 77% 2-methylene glutaronitrile can be obtained:
Metal halides (such as zinc chloride or aluminium chloride) are used with tertiary amines (such as triethylamine) as catalysts for the dimerization. Crude yields of up to 84% are achieved. Often, significant amounts of product are lost during the work-up (e. g. extraction and distillation) because of the tendency to polymerization of 2-methylene glutaronitrile.
In addition to the linear dimerization products 1,4-dicyano-2-butene and 1,4-dicyano-3-butene (obtained as cis–trans isomer mixtures), other oligomers (and polymers) of acrylonitrile are usually also formed. During the electrochemical hydrooligomerization of acrylonitrile, these are trimers, such as 1,3,6- and 1,3,5-tricyanohexane, or tetramers, such as 1,3,6,8- and 1,3,5,8-tetracyanooctane. The reaction of acrylonitrile with tributylphosphine affords 2-methyleneglutaronitrile in a modest yield of about 10% after fractional distillation. The DABCO-catalyzed dimerization of acrylonitrile to 2,4-dicyano-1-butene is similarly inefficient, giving about 40% yield after 10 days at room temperature.
Use
The earlier patent literature describes processes for the isomerization of 2-methylene glutaronitrile to 1,4-dicyanobutenes as hexanedinitrile precursors, which became obsolete with the optimization of the electrochemical hydrodimerization of acrylonitrile to hexanedinitrile.
The electrochemical hydrodimerization of 2-methylene glutaronitrile produces 1,3,6,8-tetracyanooctane.
In the hydrogenation of 2-methylene glutaronitrile in the presence of palladium on carbon, hydrogen is attached to the double bond and 2-methylglutaronitrile is obtained in virtually quantitative yield.
The hydrogenation of the nitrile groups requires more severe conditions and the presence of ammonia or amines to suppress the formation of secondary amines. This second hydrogenation step is carried out with Raney-cobalt as the hydrogenation catalyst to give 2-methyl-1,5-pentanediamine in 80% yield.
Hydrogenation of 2-methylene glutaronitrile in the presence of ammonia with manganese-containing sodium oxide-doped cobalt catalyst (at 80 to 100 °C and pressures of 200 atm in a tubular reactor) leads to the addition of ammonia to the double bond and directly converts the compound to 2-aminomethyl-1,5-pentanediamine with yields of 66%.
The branched triamine can be used in epoxides and polyurethanes.
2-Methyleneglutaronitrile reacts with methanamide upon catalysis with 4-(dimethylamino)pyridine (DMAP) at 60 °C in 47% yield to give 1-(N-methanoylamino)-2,4-dicyanobutane, from which α-aminomethylglutaric acid is formed by subsequent hydrolysis.
Heating 2-methyleneglutaronitrile with an alkaline ion exchanger, pyridine and water to 150 °C in an autoclave yields the lactam 5-cyano-2-piperidone in 80% yield.
2-Methylene glutaronitrile can be polymerized to various homo- and copolymers via anionic polymerization with sodium cyanide, sodium in liquid ammonia or with butyllithium. However, the polymers are formed only in low yields and show unsatisfactory properties such as intrinsic viscosities and poor mechanical properties.
The main use of 2-methyleneglutaronitrile is as starting material for the broad-spectrum biocide 2-bromo-2-(bromomethyl)pentanedinitrile (methyldibromo-glutaronitrile), which is formed in virtually quantitative yield by the addition of bromine to the double bond.
From the chlorine-analogous 2-chloro-2-(chloromethyl)pentanedinitrile, 3-cyanopyridine is obtained by heating to 150 °C with tin(IV) chloride.
References
Vinylidene compounds
Nitriles | 2-Methyleneglutaronitrile | Chemistry | 1,231 |
22,446,276 | https://en.wikipedia.org/wiki/Hebeloma%20aestivale | Hebeloma aestivale is a European species of mushroom in the family Hymenogastraceae.
Description
Hebeloma aestivale is characterised by elongated, slightly clubbed cheilocystidia and dextrinoid spores where the outer perispore layer becomes loose.
Taxonomy
It is in the Velutipes section of the genus Hebeloma.
Distribution and habitat
Originally described from Denmark, H. aestivale has also been described in the United Kingdom, where it is one of the more commonly documented Hebeloma species.
See also
List of Hebeloma species
References
aestivale
Fungi described in 1995
Fungi of Europe
Fungus species | Hebeloma aestivale | Biology | 134 |
5,381,102 | https://en.wikipedia.org/wiki/Ammonium%20ferric%20citrate | Ammonium ferric citrate (also known as ferric ammonium citrate or ammoniacal ferrous citrate) has the formula . The iron in this compound is trivalent. All three carboxyl groups and the central hydroxyl group of citric acid are deprotonated. A distinguishing feature of this compound is that it is very soluble in water, in contrast to ferric citrate which is not very soluble.
In its crystal structure each moiety of citric acid has lost four protons. The deprotonated hydroxyl group and two of the carboxylate groups ligate to the ferric center, while the third carboxylate group coordinates with the ammonium.
Uses
Ammonium ferric citrate has a range of uses, including:
As a food ingredient, it has an INS number 381, and is used as an acidity regulator. Most notably used in the Scottish beverage Irn-Bru.
Water purification
As a reducing agent of metal salts of low activity like gold and silver
With potassium ferricyanide as part of the cyanotype photographic process
Used in Kligler's Iron Agar (KIA) test to identify Enterobacteriaceae by observing their metabolism of different sugars and their production of hydrogen sulfide
In medical imaging, ammonium ferric citrate is used as a contrast medium.
As a hematinic
See also
Food additive
List of food additives
References
Ammonium compounds
Citrates
Iron(III) compounds
MRI contrast agents
Photographic chemicals
Double salts | Ammonium ferric citrate | Chemistry | 326 |
31,002,280 | https://en.wikipedia.org/wiki/Electrochemical%20energy%20conversion | Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion, including fuel cells and photoelectrochemical cells. The field also covers electrical storage devices such as batteries and supercapacitors. It is increasingly important in the context of automotive propulsion systems, where more powerful, longer-running batteries allow longer run times for electric vehicles; such propulsion systems can include the fuel cells and photoelectrochemical cells mentioned above.
See also
Bioelectrochemical reactor
Chemotronics
Electrochemical cell
Electrochemical engineering
Electrochemical reduction of carbon dioxide
Electrofuels
Electrohydrogenesis
Electromethanogenesis
Enzymatic biofuel cell
Photoelectrochemical cell
Photoelectrochemical reduction of CO2
Notes
External links
International Journal of Energy Research
MSAL
NIST
scientific journal article
Georgia tech
Electrochemistry
Electrochemical engineering
Energy engineering
Energy conversion
Biochemical engineering | Electrochemical energy conversion | Chemistry,Engineering,Biology | 184 |
11,421,211 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20F1/F2/snoR5a | In molecular biology, Small nucleolar RNA F1/F2/snoR5a refers to a group of related non-coding RNA (ncRNA) molecules which function in the biogenesis of other small nuclear RNAs (snRNAs). These small nucleolar RNAs (snoRNAs) are modifying RNAs and are usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis.
These three snoRNAs, identified in rice (Oryza sativa) and called F1, F2 and snoR5a, belong to the H/ACA box class of snoRNAs, as they have the predicted hairpin-hinge-hairpin-tail structure and the conserved H/ACA-box motifs. The majority of H/ACA box snoRNAs are involved in guiding the modification of uridine to pseudouridine in other RNAs.
References
External links
Small nuclear RNA | Small nucleolar RNA F1/F2/snoR5a | Chemistry | 203 |
15,778,762 | https://en.wikipedia.org/wiki/Aplaviroc | Aplaviroc (INN, codenamed AK602 and GSK-873140) is a CCR5 entry inhibitor that belongs to a class of 2,5-diketopiperazines developed for the treatment of HIV infection. It was developed by GlaxoSmithKline.
In October 2005, all studies of aplaviroc were discontinued due to liver toxicity concerns. Some authors have claimed that evidence of poor efficacy may have contributed to termination of the drug's development; the ASCENT study, one of the discontinued trials, showed aplaviroc to be under-effective in many patients even at high concentrations.
See also
CCR5 receptor antagonist
References
Further reading
Abandoned drugs
Benzoic acids
Diketopiperazines
Entry inhibitors
Hepatotoxins
Spiro compounds
Diphenyl ethers
Butyl compounds | Aplaviroc | Chemistry | 174 |
57,367,053 | https://en.wikipedia.org/wiki/Mobile%20metering | Mobile metering (recording of data using a mobile meter) is a technology which enables mobile recording of metering data. While railway companies such as the German Deutsche Bahn have been using this technology for years in their trains, it is now also being used for recording the charging transactions of electric vehicles (EVs).
In the latter case, a mobile electricity meter is integrated either into the vehicle itself or into the respective charging cable. This, together with the necessary communication technology (SIM card), makes it possible to transmit charging data (down to the kWh) to a matching backend. Lean, switchable system sockets suffice for charging – they serve as outlets for the power grid. These system sockets can be reduced to a technical minimum, as the vehicle or the cable, respectively, already carry the necessary billing and communication technology. This makes these sockets especially affordable and avoids running costs compared to conventional charging infrastructure, such as costs for maintenance or meter point operation.
As a result, precise metering, secure data transmission and efficient billing fulfill all preconditions for a comprehensive and future-proof charging and billing solution for electric mobility.
Development
The mobile meter was developed in the projects „On Board Metering I & II“, that kicked off in March 2003, sponsored by the German ministry for economic affairs and technology. Participants of the project were:
ITF-EDF Fröschl Ltd. (Specialist for control centers)
PTB, The National Metrology Institute of Germany
VOLTARIS Ltd. (Specialists for metering and energy services)
ubitricity Gesellschaft für verteilte Energiesysteme mbH (Mobile electricity provider / project leader)
Aims of the project
The project did not approach electric mobility as an isolated challenge. The aim was rather to make a significant contribution to the energy transition by taking electric mobility one step further.
For this to happen, the EV was to become a system-relevant factor as an energy storage device. The goal was to create as many power grid connection points as possible while at the same time safeguarding exact metering and billing of electricity. This way, the vehicle could gain access to the grid anytime it is parked (a ratio of grid connection points to EVs greater than one). Up to that date, charging infrastructure for electric cars had been thought of as stationary, similar to conventional gas stations for combustion cars.
For this, a shift of technology from the infrastructure to the vehicle side (or the cable, respectively) was needed. Such a network of ubiquitous charging spots was only to be realized with charging infrastructure that would cause comparably low costs over longer periods of time.
Potentials of the model
The disruptive approach of mobile metering has by now opened up new possibilities and business models for electric mobility.
Contribution to grid stability. Electric cars can be integrated into the grid as intelligent storage devices. This makes it easier to incorporate the fluctuating electricity production from renewable resources into the overall grid architecture.
The technology shift from the infrastructure to the vehicle/cable reduces the cost of charging spots. A ubiquitous roll-out of charging infrastructure can be executed with much lower capital investments.
Free choice of electricity provider and exact billing. Thanks to the mobile meter, the electricity contract can be concluded for the vehicle or cable, and the respective provider can be chosen freely.
References
Bibliography
Automotive technologies
Charging stations
Electric power distribution
Electric vehicles
Electrical grid
Electricity economics
Energy storage
Grid energy storage
Metrology
Mobile technology
Renewable energy economics | Mobile metering | Technology | 700 |
36,196,484 | https://en.wikipedia.org/wiki/C24H23NO | {{DISPLAYTITLE:C24H23NO}}
The molecular formula C24H23NO (molar mass: 341.44 g/mol, exact mass: 341.1780 u) may refer to:
JWH-018, also known as 1-pentyl-3-(1-naphthoyl)indole or AM-678
JWH-148
Molecular formulas | C24H23NO | Physics,Chemistry | 88 |
29,262,539 | https://en.wikipedia.org/wiki/Fold%20number | Fold number refers to how many double folds that are required to cause rupture of a paper test piece under standardized conditions. Fold number is defined in ISO 5626:1993 as the antilogarithm of the mean folding endurance:
where f is the fold number, Fi is the folding endurance for each test piece and n is total number of test pieces used.
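Since folding endurance is the base-10 logarithm of the number of double folds withstood, the fold number is the geometric mean of the individual double-fold counts. A minimal Python sketch of the calculation (the sample counts are invented):

```python
import math

double_folds = [850, 1200, 940, 1100]  # hypothetical double-fold counts

# Folding endurance of each piece: log10 of its double-fold count (ISO 5626).
endurances = [math.log10(d) for d in double_folds]

# Fold number: antilog of the mean folding endurance,
# i.e. the geometric mean of the double-fold counts.
fold_number = 10 ** (sum(endurances) / len(endurances))
print(round(fold_number))  # ~1013 for this sample
```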
In the introduction of ISO 5626:1993 it is emphasized that fold number, as defined in that very International Standard, does not equal the mean number of double folds observed. The latter is however still the definition used in some countries. If the numerical value of the folding endurance is not rounded off, these will however be equal.
In the former Swedish standard SS 152005 ("Pappersordlista") from 1992, with paper related terms defined in Swedish and English, fold number is explained as "the number of double folds which a test strip withstands under specified conditions before a break occurs in the strip"; that is, not the antilogarithm of the mean folding endurance.
See also
Folding endurance
Double fold
References
Paper
Materials testing | Fold number | Materials_science,Engineering | 228 |
801,785 | https://en.wikipedia.org/wiki/Laundry%20room | A laundry room or utility room is a room where clothes are washed, and sometimes also dried. In a modern home, laundry rooms are often equipped with an automatic washing machine and clothes dryer, and often a large basin, called a laundry tub, for hand-washing of delicate clothing articles such as sweaters, as well as an ironing board. Laundry rooms may also include storage cabinets, countertops for folding clothes, and, space permitting, a small sewing machine.
The term utility room is more commonly used in British English, while Australian English and North American English generally refer to this room as a laundry room, except in the American Southeast. "Utility" refers to an item which is designed for usefulness or practical use, so in turn most of the items kept in this room have functional attributes, i.e. "form follows function".
History
The utility room is a modern descendant of the scullery, the room in England where important kitchen items were kept; by around the 14th century the term scullery denoted the household department where kitchen items were taken care of. The term utility room was mentioned in 1760, when a cottage was built in a rural location in the United Kingdom accessible through Penarth and Cardiff; a general-purpose utility room could also serve as a guest room in case of immediate need. A 1944 Scottish housing and planning report recommended that new state-built homes for families could provide a utility room as a general-purpose workroom for the home (for washing clothes, cleaning boots and jobbing repairs). The American newspaper the Pittsburgh Post-Gazette reported on July 24, 1949 that utility rooms had become more popular than basements in new constructions. On June 28, 1959, in a report on a typical American house being built in Moscow, Russia, the house was described as having a utility room immediately to the right of the entrance. The Chicago Tribune reported on September 30, 1970 that the laundry room was by then commonly referred to as the utility room.
Uses
The utility room has several uses but typically functions as an area to do laundry. This room contains laundry equipment such as a washing machine, tumble dryer, ironing boards and clothes iron. The room is also used for closet organization and storage. It would normally contain a second coat closet, used to store seasonal clothing such as winter coats or clothing no longer worn daily. Storage spaces may hold other appliances that would generally be kept in the kitchen if they were in daily use. Furnaces and the water heater are sometimes incorporated into the room as well. Shelving and trash bins may also be placed in this area so as not to congest the other parts of the house.
Location
In older homes, the laundry is typically located in the basement, but in many modern homes, the laundry room might be found on the main floor near the kitchen or, less often, upstairs near the bedrooms.
Another typical location is adjacent to the garage and the laundry room serves as a mudroom for the entrance from the garage. As the garage is often at a different elevation (or grade) from the rest of the house, the laundry room serves as an entrance from the garage that may be sunken from the rest of the house. This prevents or reduces the need for stairs between the garage and the house.
Most houses in the United Kingdom do not have laundry rooms; as such, the washing machine and dryer are usually located in the kitchen or garage.
In Hungary, some older apartment buildings and most workers' hostels have communal laundry rooms, called mosókonyha (lit. "washing kitchen") in Hungarian. In the former, when residents started to all own individual washing machines in their apartments, the obsolete laundry rooms were sometimes converted into small apartments, shops or workshops (e.g. a shoemaker's) or used simply for storage.
In Sweden, laundry rooms, called a "tvättstuga", are found in almost all older apartment buildings. Swedish laundry rooms are often located in the basements of the buildings, but can also be found in detached buildings. In the 1980s, analogue booking boards with locking cylinders were introduced to regulate washing times and create a booking system. Some of these have been replaced in the 2000s by electronic counterparts with electronic keys or tokens.
See also
Drying room
Furnace room
Lavoir, a public place for the washing of clothes
Mechanical room
Root cellar
Scullery, a room used for washing up dishes and laundering clothes, or as an overflow kitchen
Storage room
Technical room
References
External links
Rooms
room | Laundry room | Engineering | 920 |
66,010,066 | https://en.wikipedia.org/wiki/List%20of%20female%20nominees%20for%20the%20Nobel%20Prize | The Nobel Prize () is a set of five different prizes that, according to its benefactor Alfred Nobel, in his 1895 will, must be awarded "to those who, during the preceding year, have conferred the greatest benefit to humankind". The five prizes are awarded in the fields of Physics, Chemistry, Physiology or Medicine, Literature, and Peace.
As of 2023, 65 Nobel Prizes and Memorial Prizes in Economic Sciences have been awarded to 64 women, and since 1901, when the awarding of the prizes began, hundreds of women have been nominated and carefully shortlisted in each field.
The first woman to win a Nobel Prize was Marie Curie, who won the Nobel Prize in Physics in 1903 with her husband, Pierre Curie, and Henri Becquerel. Curie is also the only woman to have won multiple Nobel Prizes; in 1911, she won the Nobel Prize in Chemistry. Curie's daughter, Irène Joliot-Curie, won the Nobel Prize in Chemistry in 1935, making the two the only mother-daughter pair to have won Nobel Prizes. Of the currently revealed female nominees both in physics and chemistry, the notable scientists Henrietta Swan Leavitt, Astrid Cleve, Harriet Brooks, Alice Ball, Mileva Marić, Inge Lehmann, Cecilia Payne-Gaposchkin, Leona Woods and Helen Parsons were not included.
In 1912, Mary Edwards Walker became the first woman ever nominated for the prize in physiology or medicine, but her nomination was later declared invalid by the Nobel Committee because her nominator was not invited to nominate that year. Hence Cécile Vogt-Mugnier, first nominated in 1922, became the official first female nominee, though she never won despite numerous recommendations. She was followed by Maud Slye, nominated in 1923, who likewise never won. Only in 1947 was the Nobel Prize in Physiology or Medicine finally awarded to a woman, Gerty Cori, who shared it with her husband Carl Ferdinand Cori. Of the currently revealed female nominees, the physiologists Nettie Stevens, Frieda Robscheit-Robbins, Rosalind Franklin, Miriam Michael Stimson, Louise Pearce, Virginia Apgar, Hattie Alexander and Alice Catherine Evans were not included.
The largest number of female nominees was in the field of literature. The first woman to be nominated was the German memoirist Malwida von Meysenbug, for the year 1901. She was nominated by the French historian Gabriel Monod but did not win the prize. Her nomination was followed by those of Émilie Lerou and Selma Lagerlöf for the year 1904. Lagerlöf would later become the first woman to win the prize, in 1909. Of the 77 currently revealed female nominees for the literature category, the celebrated authors Kate Chopin, Delmira Agustini, Edith Nesbit, Alfonsina Storni, Marina Tsvetaeva, Virginia Woolf, Simone Weil, Gertrude Stein, Willa Cather, Emma Orczy, Zora Neale Hurston, Edith Hamilton, Flannery O'Connor, Fannie Hurst, Clarice Lispector, Hannah Arendt and Agatha Christie were not included.
The first women nominated for the Nobel Peace Prize were Belva Ann Lockwood and Bertha von Suttner, who would eventually be awarded in 1905. The latter was considered for authoring Lay Down Your Arms! and contributing to the creation of the Prize. Of the 57 currently revealed female nominees, the famous Susan B. Anthony, Florence Nightingale, Clara Barton, Harriet Tubman, Mary Harris Jones, Olive Schreiner, Aletta Jacobs, Emmeline Pankhurst, Ida B. Wells, Käthe Kollwitz, Muriel Lester, Katharine Drexel, Helene Schweitzer, Marie Stopes, Vera Brittain, Ava Helen Pauling, Golda Meir, Rachel Carson and Rosa Parks were not included.
Physics
From 1902 to 1970, 11 women were nominated for the Nobel Prize in Physics, and three of them were subsequently awarded it.
Chemistry
From 1911 to 1970, 15 women were nominated for the Nobel Prize in Chemistry, and three of them were subsequently awarded it.
Physiology or Medicine
From 1922 to 1953, 15 women were nominated for the Nobel Prize in Physiology or Medicine; of these, one nomination was declared invalid, one was purportedly recommended and one nominee was subsequently awarded the prize.
Literature
From 1901 to 1974, 81 women were nominated for the Nobel Prize in Literature, and eight of them were subsequently awarded it.
Peace
From 1901 to 1974, 60 women were nominated for the Nobel Peace Prize, and five of them were subsequently awarded it. The Nobel archives have currently revealed nominations from 1901 to 1973; the other women listed were verified as nominees through public and private news agencies.
Economic Sciences
From 1969 to 1971, three women were nominated for the Nobel Memorial Prize in Economic Sciences, but none of them was subsequently awarded it.
Motivations
See also
List of Nobel laureates
List of female Nobel laureates
List of women writers
List of women's rights activists
List of female scientists in the 20th century
Matilda effect
References
External links
Nobel
Female
Nobel
Nobel Prize | List of female nominees for the Nobel Prize | Technology | 1,043 |
34,508,195 | https://en.wikipedia.org/wiki/Hexafluoroplatinate | A hexafluoroplatinate is a chemical compound which contains the hexafluoroplatinate anion. It is produced by combining substances with platinum hexafluoride.
Examples of hexafluoroplatinates
Dioxygenyl hexafluoroplatinate (O2PtF6), containing the rare dioxygenyl oxycation.
Xenon hexafluoroplatinate ("XePtF6"), the first noble gas compound ever synthesised. (The Xe+ ion in XePtF6 is unstable, being a radical; as a result, XePtF6 itself is unstable and quickly disproportionates into XeFPtF5, XeFPt2F11, and Xe2F3PtF6.)
See also
Hexachloroplatinate
References
Anions
Fluorometallates | Hexafluoroplatinate | Physics,Chemistry | 189 |
23,475,106 | https://en.wikipedia.org/wiki/Richard%20Dedekind | Julius Wilhelm Richard Dedekind (; 6 October 1831 – 12 February 1916) was a German mathematician who made important contributions to number theory, abstract algebra (particularly ring theory), and
the axiomatic foundations of arithmetic. His best known contribution is the definition of real numbers through the notion of Dedekind cut. He is also considered a pioneer in the development of modern set theory and of the philosophy of mathematics known as logicism.
Life
Dedekind's father was Julius Levin Ulrich Dedekind, an administrator of Collegium Carolinum in Braunschweig. His mother was Caroline Henriette Dedekind (née Emperius), the daughter of a professor at the Collegium. Richard Dedekind had three older siblings. As an adult, he never used the names Julius Wilhelm. He was born in Braunschweig (often called "Brunswick" in English), which is where he lived most of his life and died. His body rests at Braunschweig Main Cemetery.
He first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, Dedekind was taught number theory by professor Moritz Stern. Gauss was still teaching, although mostly at an elementary level, and Dedekind became his last student. Dedekind received his doctorate in 1852, for a thesis titled Über die Theorie der Eulerschen Integrale ("On the Theory of Eulerian integrals"). This thesis did not display the talent evident in Dedekind's subsequent publications.
At that time, the University of Berlin, not Göttingen, was the main facility for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were both awarded the habilitation in 1854. Dedekind returned to Göttingen to teach as a Privatdozent, giving courses on probability and geometry. He studied for a while with Peter Gustav Lejeune Dirichlet, and they became good friends. Because of lingering weaknesses in his mathematical knowledge, he studied elliptic and abelian functions. Yet he was also the first at Göttingen to lecture concerning Galois theory. About this time, he became one of the first people to understand the importance of the notion of groups for algebra and arithmetic.
In 1858, he began teaching at the Polytechnic school in Zürich (now ETH Zürich). When the Collegium Carolinum was upgraded to a Technische Hochschule (Institute of Technology) in 1862, Dedekind returned to his native Braunschweig, where he spent the rest of his life, teaching at the Institute. He retired in 1894, but did occasional teaching and continued to publish. He never married, instead living with his sister Julia.
Dedekind was elected to the Academies of Berlin (1880) and Rome, and to the French Academy of Sciences (1900). He received honorary doctorates from the universities of Oslo, Zurich, and Braunschweig.
Work
While teaching calculus for the first time at the Polytechnic school, Dedekind developed the notion now known as a Dedekind cut (German: Schnitt), today a standard definition of the real numbers. The idea of a cut is that an irrational number divides the rational numbers into two classes (sets), with all the numbers of one class (greater) being strictly greater than all the numbers of the other (lesser) class. For example, the square root of 2 puts the negative numbers and the nonnegative numbers whose squares are less than 2 into the lesser class, and the positive numbers whose squares are greater than 2 into the greater class. Every location on the number line continuum contains either a rational or an irrational number. Thus there are no empty locations, gaps, or discontinuities. Dedekind published his thoughts on irrational numbers and Dedekind cuts in his pamphlet "Stetigkeit und irrationale Zahlen" ("Continuity and irrational numbers"); the property he described is known in modern terminology as Vollständigkeit, completeness.
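As an illustrative sketch in modern terms (not Dedekind's own notation), the lesser class of the cut defining the square root of 2 can be expressed as a membership test on the rationals:

```python
from fractions import Fraction

def in_lesser_class(q: Fraction) -> bool:
    """Membership test for the lesser class of the cut defining sqrt(2):
    all negative rationals, plus the nonnegative rationals whose square
    is less than 2."""
    return q < 0 or q * q < 2

# Rationals straddling sqrt(2) land on opposite sides of the cut.
print(in_lesser_class(Fraction(7, 5)))   # True:  (7/5)^2 = 49/25 < 2
print(in_lesser_class(Fraction(3, 2)))   # False: (3/2)^2 = 9/4  > 2
```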
Dedekind defined two sets to be "similar" when there exists a one-to-one correspondence between them. He invoked similarity to give the first precise definition of an infinite set: a set is infinite when it is "similar to a proper part of itself," in modern terminology, is equinumerous to one of its proper subsets. Thus the set N of natural numbers can be shown to be similar to the subset of N whose members are the squares of every member of N, (N → N2):
N   1   2   3   4   5   6   7   8   9   10  ...
    ↓   ↓   ↓   ↓   ↓   ↓   ↓   ↓   ↓   ↓
N²  1   4   9   16  25  36  49  64  81  100 ...
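A quick Python rendering of this correspondence: the map n ↦ n² is one-to-one, so N is "similar" to a proper subset of itself.

```python
# The map n -> n*n pairs N one-to-one with the set of square numbers,
# a proper subset of N, exhibiting N as Dedekind-infinite.
pairs = [(n, n * n) for n in range(1, 11)]
print(pairs)   # [(1, 1), (2, 4), (3, 9), ..., (10, 100)]
```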
Dedekind's work in this area anticipated that of Georg Cantor, who is commonly considered the founder of set theory. Likewise, his contributions to the foundations of mathematics anticipated later works by major proponents of logicism, such as Gottlob Frege and Bertrand Russell.
Dedekind edited the collected works of Lejeune Dirichlet, Gauss, and Riemann. Dedekind's study of Lejeune Dirichlet's work led him to his later study of algebraic number fields and ideals. In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory").
The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory. (The word "Ring", introduced later by Hilbert, does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers, composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether. Ideals generalize Ernst Eduard Kummer's ideal numbers, devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem. (Thus Dedekind can be said to have been Kummer's most important disciple.) In an 1882 article, Dedekind and Heinrich Martin Weber applied ideals to Riemann surfaces, giving an algebraic proof of the Riemann–Roch theorem.
In 1888, he published a short monograph titled Was sind und was sollen die Zahlen? ("What are numbers and what are they good for?" Ewald 1996: 790), which included his definition of an infinite set. He also proposed an axiomatic foundation for the natural numbers, whose primitive notions were the number one and the successor function. The next year, Giuseppe Peano, citing Dedekind, formulated an equivalent but simpler set of axioms, now the standard ones.
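In modern notation, these axioms (in the simplified form Peano gave, with 1 and the successor function S primitive) read:

```latex
\begin{align*}
&1 \in \mathbb{N} \\
&n \in \mathbb{N} \Rightarrow S(n) \in \mathbb{N} \\
&\forall n \in \mathbb{N}:\ S(n) \neq 1 \\
&S(m) = S(n) \Rightarrow m = n \\
&\text{if } 1 \in A \text{ and } \forall n\,(n \in A \Rightarrow S(n) \in A),
 \text{ then } \mathbb{N} \subseteq A
\end{align*}
```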
Dedekind made other contributions to algebra. For instance, around 1900, he wrote the first papers on modular lattices. In 1872, while on holiday in Interlaken, Dedekind met Georg Cantor. Thus began an enduring relationship of mutual respect, and Dedekind became one of the first mathematicians to admire Cantor's work concerning infinite sets, proving a valued ally in Cantor's disputes with Leopold Kronecker, who was philosophically opposed to Cantor's transfinite numbers.
Bibliography
Primary literature in English:
1890. "Letter to Keferstein" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 98–103.
1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover. Contains English translations of Stetigkeit und irrationale Zahlen and Was sind und was sollen die Zahlen?
1996. Theory of Algebraic Integers. Stillwell, John, ed. and trans. Cambridge Uni. Press. A translation of Über die Theorie der ganzen algebraischen Zahlen.
Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Uni. Press.
1854. "On the introduction of new functions in mathematics," 754–61.
1872. "Continuity and irrational numbers," 765–78. (translation of Stetigkeit...)
1888. What are numbers and what should they be?, 787–832. (translation of Was sind und...)
1872–82, 1899. Correspondence with Cantor, 843–77, 930–40.
Primary literature in German:
Gesammelte mathematische Werke (Complete mathematical works, Vol. 1–3). Retrieved 5 August 2009.
See also
List of things named after Richard Dedekind
Dedekind cut
Dedekind domain
Dedekind eta function
Dedekind-infinite set
Dedekind number
Dedekind psi function
Dedekind sum
Dedekind zeta function
Ideal (ring theory)
Notes
References
Further reading
Edwards, H. M., 1983, "Dedekind's invention of ideals," Bull. London Math. Soc. 15: 8–17.
Gillies, Douglas A., 1982. Frege, Dedekind, and Peano on the foundations of arithmetic. Assen, Netherlands: Van Gorcum.
Ferreirós, José, 2007. Labyrinth of Thought: A history of set theory and its role in modern mathematics. Basel: Birkhäuser, chap. 3, 4 and 7.
Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots 1870–1940. Princeton Uni. Press.
There is an online bibliography of the secondary literature on Dedekind. Also consult Stillwell's "Introduction" to Dedekind (1996).
External links
Dedekind, Richard, Essays on the Theory of Numbers. Open Court Publishing Company, Chicago, 1901. at the Internet Archive
Dedekind's Contributions to the Foundations of Mathematics http://plato.stanford.edu/entries/dedekind-foundations/.
1831 births
1916 deaths
19th-century German mathematicians
19th-century German philosophers
20th-century German mathematicians
Academic staff of ETH Zurich
Academic staff of the Technical University of Braunschweig
University of Göttingen alumni
Academic staff of the University of Göttingen
Humboldt University of Berlin alumni
German number theorists
Algebraists
Scientists from Braunschweig
People from the Duchy of Brunswick
Members of the French Academy of Sciences
Philosophers of mathematics
Mathematicians from the German Empire | Richard Dedekind | Mathematics | 2,170 |
8,296,985 | https://en.wikipedia.org/wiki/General%20tau%20theory | General tau theory deals with the guidance of bodily movements. It was developed from work on J. J. Gibson's notion of ecological invariants in the visual flow-field during a perception-in-action event, and subsequently generalised by David N. Lee in the late 1990s to an amodal theory of perceptuomotor control.
The theory considers the organism acting as a unified whole in dynamic relations with its environment, rather than conceiving of the organism as a complex mechanical device reducible into analysable parts. The theory is embedded in ecological thinking, paying attention to both organism and environment, and drawing information from their forms of interaction. It was developed by thinking about the relational, or ecological invariants in engagements between organism and environment. This whole-systems approach offers insight into the nature of living and offers pragmatic, human benefits in both designing the constructed world (e.g. in cockpit design) and in therapy of movement disorders (e.g. Parkinson's disease).
References
External links
Perception Movement Action Research Consortium
Related
Tau effect
Motor skills | General tau theory | Biology | 223 |
63,241,541 | https://en.wikipedia.org/wiki/9%20Cygni | 9 Cygni is a binary star system in the northern constellation of Cygnus. 9 Cygni is its Flamsteed designation. The two stars have a combined magnitude of 5.39, so it can be seen with the naked eye under good viewing conditions. Parallax measurements made by Gaia put the star at a distance of around () away.
The two stars of 9 Cygni are a G-type giant and an A-type star. Both stars are over twice as massive as the Sun. They orbit once every 4.56 years, separated with a semi-major axis of . However, the eccentricity is high, at 0.82. The primary is a red clump giant, a star on the cool end of the horizontal branch fusing helium in its core. The secondary star has begun to evolve off the main sequence; it is sometimes classified as a giant star and sometimes as a main-sequence star.
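The stated elements allow a rough consistency check via Kepler's third law in solar units, a³ = (M₁ + M₂)P², with a in astronomical units, P in years and masses in solar masses. The combined mass below is an illustrative assumption (the text says only that each star is over twice as massive as the Sun):

```python
# Kepler's third law in solar units: a^3 = (M1 + M2) * P^2.
P_years = 4.56    # orbital period from the text
M_total = 4.5     # assumed combined mass in solar masses (illustrative)

a_au = (M_total * P_years**2) ** (1 / 3)
print(f"semi-major axis ~ {a_au:.1f} AU")  # ~4.5 AU with these assumptions
```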
References
See also
Spectroscopic binary
G-type giants
A-type main-sequence stars
Binary stars
Cygnus (constellation)
Durchmusterung objects
Cygni, 9
184759
096302
7441 | 9 Cygni | Astronomy | 237 |
22,538,963 | https://en.wikipedia.org/wiki/Stain-blocking%20primer | Stain-blocking primers are used to cover stains such as watermarks, nicotine (actually tar), markers, smoke, and prevent them bleeding through newly applied layers of paint. They also provide adhesion over problematic surfaces, giving better film leveling, and durability. Commonly used stain-blocking paints include acrylic and alkyd.
Volatile organic compounds
Low volatile organic compound (VOC) formulations partially or completely eliminate odor, making them safer for the environment.
However, in the United States, solvent-based products with high VOC levels still represent approximately 25% of the total market volume for interior stain-blocking primers. They continue to maintain this significant market share even though many national, regional or local legislations and initiatives concerning the reduction of VOCs have been recently established.
Since their introduction to the US market in 1997, low VOC, odorless stain-blocking primers have become known for their unique combination of highly effective stain blocking technologies. These properties are traditionally associated with solvent-based products. They also display comfortable application and low odor – characteristics commonly associated with water-based products. A good primer has to be compatible with a wide variety of substrates that may be encountered in an interior situation such as: drywall, cement, concrete, plaster and spackling, wood, paneling, old paint, metals, fiberboard, etc. Very frequently, particularly in renovation work, the surfaces encountered will be covered with a variety of stains such as water-soluble types from water leaks, smoke, nicotine, inks, etc., as well as solvent soluble stains such as tar, tannin, and others.
Recently, many states decreased the VOC limit from 450 g/L to 350 g/L for the specialty primers, sealers and undercoats category. As state regulatory agencies continue to introduce stricter legislation concerning VOCs, this will pose an even tougher challenge to manufacturers of solvent-based stain-blocking primers. In order to decrease the VOC of a coating, it is necessary to remove solvent from the formulation. This has the effect of increasing the relative solids content (both pigment and binder) in the formulation, which has a negative effect on viscosity. Increasing resin content primarily increases high-shear viscosity, which is important for sprayability. Increasing pigment content primarily affects low and medium shear viscosities, which are important for flow, leveling, and sag resistance. When formulating a low-odor, low-VOC primer, if the plan is to increase the binder, then resins with lower viscosity profiles and solubility in low-odor, isoparaffinic solvents are needed.
Low-odor, isoparaffinic solvent, in addition to being lower in odor than solvents typically used in solvent-based systems, also has a higher margin of safety versus stronger solvents. The following discussion on Air Change Index (ACI) illustrates that less air turnover is needed when using low-odor, isoparaffinic solvent in comparison to other solvent-based systems such as alkyds.
Air Change Index (ACI)
The indoor use of solvent-based primer results in increased solvent exposure due to build-up of fumes in the air. One way to reduce the amount of solvent in the air is to use fans to renew the air. The Air Change Index (ACI) of a solvent is an indication of the margin of safety of a solvent. The lower the ACI, the less air turnover needed.
The ACI is calculated based on the amount of solvent in the primer, the volume of the room to be primed, the volatility of the solvent and the amount of primer to be applied. The calculations were made for a room primed at a coverage of 375 ft²/gallon. The ACIs below were calculated for a solvent-based primer containing 350 g/L of solvents and a water-based primer containing 50 g/L of butyl glycol.
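The exact formula behind these figures is not given here, but the listed inputs correspond to a standard dilution-ventilation estimate. The Python sketch below is an illustrative assumption, not the method used for the published values; every input is made up, and the result depends entirely on them.

```python
# Illustrative dilution-ventilation sketch -- NOT the standard's method.
# Air changes per hour = fresh air needed per hour / room volume.

voc_g_per_l = 350.0        # solvent content of the primer (g/L)
primer_l = 3.785           # one US gallon of primer applied
room_volume_m3 = 30.0      # assumed room volume
limit_mg_m3 = 1200.0       # assumed acceptable airborne concentration
hours_to_evaporate = 4.0   # assumed evaporation period

emission_mg_per_h = voc_g_per_l * primer_l * 1000.0 / hours_to_evaporate
fresh_air_m3_per_h = emission_mg_per_h / limit_mg_m3
aci = fresh_air_m3_per_h / room_volume_m3
print(f"{aci:.1f} air changes per hour")   # ~9.2 with these assumptions
```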
The calculated values indicate that the use of mineral spirits based primers would need 6 to 7 air changes per hour, requiring significant ventilation. The use of low-odor, isoparaffinic solvent-based primers would only need 1 to 2 air changes per hour. Finally, the use of water-based primers would need less than 1 air change per hour. Another way to interpret this information is that the margin of safety is 3 to 4 times higher with low-odor, isoparaffinic solvent than with mineral spirits, and 5 to 6 times higher than with toluene. However, proper ventilation is still good practice with any type of primer.
Stain-blocking primers with low-odor, isoparaffinic solvent are considered to have less of a negative impact on the health of paint contractors and building residents. Despite the lower levels of VOCs, application, coverage and overall performance are comparable to their high VOC counterparts.
References
Paints | Stain-blocking primer | Chemistry | 1,022 |
13,772,472 | https://en.wikipedia.org/wiki/%CE%A9-logic | In set theory, Ω-logic is an infinitary logic and deductive system proposed by as part of an attempt to generalize the theory of determinacy of pointclasses to cover the structure . Just as the axiom of projective determinacy yields a canonical theory of , he sought to find axioms that would give a canonical theory for the larger structure. The theory he developed involves a controversial argument that the continuum hypothesis is false.
Analysis
Woodin's Ω-conjecture asserts that if there is a proper class of Woodin cardinals (for technical reasons, most results in the theory are most easily stated under this assumption), then Ω-logic satisfies an analogue of the completeness theorem. From this conjecture, it can be shown that, if there is any single axiom which is comprehensive over $H_{\omega_2}$ (in Ω-logic), it must imply that the continuum is not $\aleph_1$. Woodin also isolated a specific axiom, a variation of Martin's maximum, which states that any Ω-consistent $\Pi_2$ sentence (over $H_{\omega_2}$) is true; this axiom implies that the continuum is $\aleph_2$.
Woodin also related his Ω-conjecture to a proposed abstract definition of large cardinals: he took a "large cardinal property" to be a property of an ordinal α which implies that α is strongly inaccessible, and which is invariant under forcing by sets of cardinality less than α. Then the Ω-conjecture implies that if there are arbitrarily large models containing a large cardinal, this fact will be provable in Ω-logic.
The theory involves a definition of Ω-validity: a statement is an Ω-valid consequence of a set theory T if it holds in every model of T having the form $V_\alpha^{\mathbb{B}}$ for some ordinal $\alpha$ and some forcing notion $\mathbb{B}$. This notion is clearly preserved under forcing, and in the presence of a proper class of Woodin cardinals it will also be invariant under forcing (in other words, Ω-satisfiability is preserved under forcing as well). There is also a notion of Ω-provability; here the "proofs" consist of universally Baire sets and are checked by verifying that for every countable transitive model of the theory, and every forcing notion in the model, the generic extension of the model (as calculated in V) contains the "proof", restricted to its own reals. For a proof-set A the condition to be checked here is called "A-closed". A complexity measure can be given on the proofs by their ranks in the Wadge hierarchy. Woodin showed that this notion of "provability" implies Ω-validity for sentences which are $\Pi_2$ over V. The Ω-conjecture states that the converse of this result also holds. In all currently known core models, it is known to be true; moreover the consistency strength of the large cardinals corresponds to the least proof-rank required to "prove" the existence of the cardinals.
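In symbols, the definition of Ω-validity just given reads:

```latex
T \models_{\Omega} \varphi \iff
\text{for all ordinals } \alpha \text{ and all forcing notions } \mathbb{B}:\;
V_{\alpha}^{\mathbb{B}} \models T \,\Rightarrow\, V_{\alpha}^{\mathbb{B}} \models \varphi
```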
Notes
References
External links
W. H. Woodin, Slides for 3 talks
Set theory
Systems of formal logic | Ω-logic | Mathematics | 613 |
8,910,528 | https://en.wikipedia.org/wiki/Zonal%20polynomial | In mathematics, a zonal polynomial is a multivariate symmetric homogeneous polynomial. The zonal polynomials form a basis of the space of symmetric polynomials. Zonal polynomials appear in special functions with matrix argument which on the other hand appear in matrixvariate distributions such as the Wishart distribution when integrating over compact Lie groups. The theory was started in multivariate statistics in the 1960s and 1970s in a series of papers by Alan Treleven James and his doctorial student Alan Graham Constantine.
They appear as zonal spherical functions of the Gelfand pairs $(S_{2n}, H_n)$ (here, $H_n$ is the hyperoctahedral group) and $(\mathrm{GL}_d(\mathbb{R}), \mathrm{O}_d)$, which means that they describe canonical bases of the double coset algebras $\mathbb{C}[H_n \backslash S_{2n} / H_n]$ and $\mathbb{C}[\mathrm{O}_d(\mathbb{R}) \backslash \mathrm{GL}_d(\mathbb{R}) / \mathrm{O}_d(\mathbb{R})]$.
The zonal polynomials are the $\alpha = 2$ case of the C normalization of the Jack function.
References
Literature
Robb Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons, Inc., New York, 1984.
Homogeneous polynomials
Symmetric functions
Multivariate statistics | Zonal polynomial | Physics,Mathematics | 193 |
2,002,950 | https://en.wikipedia.org/wiki/Boron%20trioxide | Boron trioxide or diboron trioxide is the oxide of boron with the formula . It is a colorless transparent solid, almost always glassy (amorphous), which can be crystallized only with great difficulty. It is also called boric oxide or boria. It has many important industrial applications, chiefly in ceramics as a flux for glazes and enamels and in the production of glasses.
Structure
Boron trioxide has three known forms, one amorphous and two crystalline.
Amorphous form
The amorphous form (g-B2O3) is by far the most common. It is thought to be composed of boroxol rings, which are six-membered rings of alternating 3-coordinate boron and 2-coordinate oxygen.
Because of the difficulty of building disordered models at the correct density with many boroxol rings, this view was initially controversial, but such models have recently been constructed and exhibit properties in excellent agreement with experiment. It is now recognized, from experimental and theoretical studies, that the fraction of boron atoms belonging to boroxol rings in glassy B2O3 is somewhere between 0.73 and 0.83, with 0.75 = 3/4 corresponding to a 1:1 ratio between ring and non-ring units. The number of boroxol rings decays in the liquid state with increasing temperature.
Crystalline α form
The crystalline form (α-B2O3) is exclusively composed of BO3 triangles. Its crystal structure was initially believed to belong to the enantiomorphic space groups P31 (#144) and P32 (#145), like γ-glycine, but was later revised to the enantiomorphic space groups P3121 (#152) and P3221 (#154) in the trigonal crystal system, like α-quartz.
Crystallization of α-B2O3 from the molten state at ambient pressure is strongly kinetically disfavored (compare liquid and crystal densities). It can be obtained by prolonged annealing of the amorphous solid at ~200 °C under at least 10 kbar of pressure.
Crystalline β form
The trigonal network undergoes a coesite-like transformation to monoclinic β-B2O3 at several gigapascals (9.5 GPa).
Preparation
Boron trioxide is produced by treating borax with sulfuric acid in a fusion furnace. At temperatures above 750 °C, the molten boron oxide layer separates out from sodium sulfate. It is then decanted, cooled and obtained in 96–97% purity.
Another method is heating boric acid above ~300 °C. Boric acid will initially decompose into steam (H2O(g)) and metaboric acid (HBO2) at around 170 °C, and further heating above 300 °C will produce more steam and diboron trioxide. The reactions are:
H3BO3 → HBO2 + H2O
2 HBO2 → B2O3 + H2O
Boric acid goes to anhydrous microcrystalline B2O3 in a heated fluidized bed. A carefully controlled heating rate avoids gumming as water evolves.
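As a worked stoichiometry example, the overall decomposition 2 H3BO3 → B2O3 + 3 H2O fixes the theoretical yield of boron trioxide from a given mass of boric acid; a quick check in Python (standard atomic masses):

```python
# Theoretical B2O3 yield from boric acid via 2 H3BO3 -> B2O3 + 3 H2O.
M_H, M_B, M_O = 1.008, 10.81, 16.00      # atomic masses, g/mol
M_H3BO3 = 3 * M_H + M_B + 3 * M_O        # 61.83 g/mol
M_B2O3 = 2 * M_B + 3 * M_O               # 69.62 g/mol

mass_acid = 100.0                        # g of boric acid heated
moles_acid = mass_acid / M_H3BO3
mass_oxide = (moles_acid / 2) * M_B2O3   # 2 mol acid -> 1 mol oxide
print(f"{mass_oxide:.1f} g B2O3")        # ~56.3 g from 100 g acid
```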
Boron oxide will also form when diborane (B2H6) reacts with oxygen in the air or trace amounts of moisture:
2B2H6(g) + 3O2(g) → 2B2O3(s) + 6H2(g)
B2H6(g) + 3H2O(g) → B2O3(s) + 6H2(g)
Reactions
Molten boron oxide attacks silicates. Containers can be passivated internally with a graphitized carbon layer obtained by thermal decomposition of acetylene.
Applications
Major component of borosilicate glass
Fluxing agent for glass and enamels
An additive used in glass fibres (optical fibres)
The inert capping layer in the Liquid Encapsulation Czochralski process for the production of gallium arsenide single crystal
As an acid catalyst in organic synthesis
As a starting material for the production of other boron compounds, such as boron carbide
See also
Boron suboxide
Boric acid
Sassolite
Tris(2,2,2-trifluoroethyl) borate
References
External links
National Pollutant Inventory: Boron and compounds
Australian Government information
US NIH hazard information. See NIH.
Material Safety Data Sheet
CDC - NIOSH Pocket Guide to Chemical Hazards - Boron oxide
Boron compounds
Acidic oxides
Glass compositions
Sesquioxides | Boron trioxide | Chemistry | 926 |
6,133,331 | https://en.wikipedia.org/wiki/Application-level%20gateway | An application-level gateway (ALG, also known as application-layer gateway, application gateway, application proxy, or application-level proxy) is a security component that augments a firewall or NAT employed in a mobile network. It allows customized NAT traversal filters to be plugged into the gateway to support address and port translation for certain application layer "control/data" protocols such as FTP, BitTorrent, SIP, RTSP, file transfer in IM applications. In order for these protocols to work through NAT or a firewall, either the application has to know about an address/port number combination that allows incoming packets, or the NAT has to monitor the control traffic and open up port mappings (firewall pinholes) dynamically as required. Legitimate application data can thus be passed through the security checks of the firewall or NAT that would have otherwise restricted the traffic for not meeting its limited filter criteria.
Functions
An ALG may offer the following functions:
allowing client applications to use dynamic ephemeral TCP/UDP ports to communicate with the known ports used by the server applications, even though a firewall configuration may allow only a limited number of known ports. In the absence of an ALG, either the ports would get blocked or the network administrator would need to explicitly open up a large number of ports in the firewall — rendering the network vulnerable to attacks on those ports.
converting the network layer address information found inside an application payload between the addresses acceptable by the hosts on either side of the firewall/NAT. This aspect introduces the term 'gateway' for an ALG.
recognizing application-specific commands and offering granular security controls over them
synchronizing between multiple streams/sessions of data between two hosts exchanging data. For example, an FTP application may use separate connections for passing control commands and for exchanging data between the client and a remote server. During large file transfers, the control connection may remain idle. An ALG can prevent the control connection getting timed out by network devices before the lengthy file transfer completes.
Deep packet inspection of all the packets handled by ALGs over a given network makes this functionality possible. An ALG understands the protocol used by the specific applications that it supports.
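As a minimal illustration of the payload parsing involved, consider FTP's active-mode PORT command (RFC 959), which embeds an IP address and TCP port in the application payload as six decimal numbers. The Python sketch below shows the kind of rewrite an FTP ALG performs; a real ALG would additionally adjust sequence numbers and plumb the matching NAT port mapping.

```python
import re

PORT_RE = re.compile(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)")

def rewrite_port(line: str, public_ip: str, public_port: int) -> str:
    """Rewrite the address/port inside an FTP PORT command, as an ALG
    does when translating a private address to the NAT's public one."""
    if not PORT_RE.match(line):
        return line
    h = public_ip.replace(".", ",")
    p1, p2 = divmod(public_port, 256)   # port encoded as p1*256 + p2
    return f"PORT {h},{p1},{p2}"

# Client behind NAT announced 192.168.1.10; the ALG substitutes the
# public mapping 203.0.113.5:60001 before the command reaches the server.
print(rewrite_port("PORT 192,168,1,10,199,73", "203.0.113.5", 60001))
# -> PORT 203,0,113,5,234,97
```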
For instance, for Session Initiation Protocol (SIP) Back-to-Back User agent (B2BUA), an ALG can allow firewall traversal with SIP. If the firewall has its SIP traffic terminated on an ALG then the responsibility for permitting SIP sessions passes to the ALG instead of the firewall. An ALG can solve another major SIP headache: NAT traversal. Basically a NAT with a built-in ALG can rewrite information within the SIP messages and can hold address bindings until the session terminates. A SIP ALG will also handle SDP in the body of SIP messages (which is used ubiquitously in VoIP to set up media endpoints), since SDP also contains literal IP addresses and ports that must be translated.
It is common for the SIP ALG on some equipment to interfere with other technologies that try to solve the same problem, and various providers recommend turning it off.
An ALG is very similar to a proxy server, as it sits between the client and real server, facilitating the exchange. There seems to be an industry convention that an ALG does its job without the application being configured to use it, by intercepting the messages. A proxy, on the other hand, usually needs to be configured in the client application. The client is then explicitly aware of the proxy and connects to it, rather than the real server.
Microsoft Windows
The Application Layer Gateway service in Microsoft Windows provides support for third-party plugins that allow network protocols to pass through the Windows Firewall and work behind it and Internet Connection Sharing. ALG plugins can open ports and change data that is embedded in packets, such as ports and IP addresses. Windows Server 2003 also includes an ALG FTP plugin. The ALG FTP plugin is designed to support active FTP sessions through the NAT engine in Windows. To do this, the ALG FTP plugin redirects all traffic that passes through the NAT and that is destined for port 21 (FTP control port) to a private listening port in the 3000–5000 range on the Microsoft loopback adapter. The ALG FTP plugin then monitors/updates traffic on the FTP control channel so that the FTP plugin can plumb port mappings through the NAT for the FTP data channels.
Linux
The Linux kernel's Netfilter framework, which implements NAT in Linux, has features and modules for several NAT ALGs:
Amanda protocol
FTP
IRC
SIP
TFTP
IPsec
H.323
PPTP
L2TP
See also
Session border controller
References
External links
DNS Application Level Gateway (DNS_ALG)
Computer network security
Internet Protocol based network software
Application Layer Gateway | Application-level gateway | Engineering | 1,001 |
64,971,289 | https://en.wikipedia.org/wiki/Katja%20Loos | Katja Loos is professor at the Zernike Institute for Advanced Materials of the University of Groningen, The Netherlands holding the chair of Macromolecular Chemistry and New Polymeric Materials.
She currently serves as the President of the European Polymer Federation (EPF).
Biography
Katja Loos studied chemistry at the Johannes Gutenberg Universität in Mainz, Germany and graduated in 1996. During her graduate studies she focused her studies on Organic Chemistry and Polymer Chemistry. In 1992 and 1993 she was an international exchange student at the University of Massachusetts in Amherst, USA.
In 2001, she received her PhD in Macromolecular Chemistry from the University of Bayreuth, Germany. Her thesis was focused on hybrid materials bearing amylose using enzymatic polymerizations. During her PhD research she worked in 1997 as an international exchange researcher at the Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil.
In 2001, she received a Feodor Lynen research fellowship of the Alexander von Humboldt Foundation to conduct postdoctoral research at the Polytechnic University in Brooklyn, NY, USA, where she worked on fundamentals of self-assembled monolayers and immobilization supports for biocatalysts.
In 2003, she started an independent research group at the University of Groningen, the Netherlands.
Katja Loos worked as guest professor at the Technical University of Catalonia, Barcelona, Spain in 2006 and at the Technical University Dresden, Germany in 2016.
Research
The research of Loos is focused on enzymatic polymerization, especially the biocatalytic synthesis of saccharides, polyamides and furan based polymers, as well as the synthesis and self-assembly of block copolymers using supramolecular motifs and containing ferroelectric blocks.
Loos has published over 270 scholarly peer-reviewed publications, various patents and book chapters. Her publications are frequently included in special themed collections of scientific journals, such as "Women in Polymer Science" from Wiley and "Women at the Forefront of Chemistry" from the American Chemical Society.
She is the editor of the only currently available textbook in the field of Enzymatic Polymerizations.
She is an editor of the scientific journal Polymer and has guest-edited special issues of various scientific journals.
Since 2017 she has been a member of the board of the Zernike Institute for Advanced Materials of the University of Groningen. She serves as vice-chair of the program council Chemistry of Advanced Materials of ChemistryNL, is a member of the board of the MaterialenNL Platform, and is a member of the board of the Dutch national postgraduate research school Polymer Technology Netherlands (PTN).
Katja Loos is the national representative of the Netherlands to the European Polymer Federation (EPF).
In addition to her research, Katja Loos advocates for diversity in science and open access publishing.
Awards and honours
Katja Loos was awarded two travel scholarships of the German Academic Exchange Service (DAAD) for research stays at the University of Massachusetts in Amherst, USA, in 1992 and 1993 and at Universidade Federal do Rio Grande do Sul, Porto Alegre, Brasil, in 1997.
In 2001, she received a Feodor Lynen Fellowship award of the Alexander von Humboldt Foundation to conduct her postdoctoral research.
The Netherlands Organisation for Scientific Research (NWO) awarded her a VIDI innovational research grant in 2009 and a VICI innovational research grant in 2014.
In 2016 the Technical University Dresden and the German Research Council (DFG) within the scope of its excellency initiative awarded her the Eleonore Trefftz guest professorship.
The Alexander von Humboldt Foundation awarded Katja Loos in 2019 the Friedrich Wilhelm Bessel Research Award.
In 2019, she was named "Topper of the year" by Science Guide.
She is one of the recipients of the IUPAC 2021 Distinguished Women in Chemistry or Chemical Engineering award.
In 2022 she won the Team Science Award of the Dutch Research Council (NWO) with her research group HyBRit.
In 2023 she received the title of Knight of the Order of the Netherlands Lion, the prestigious Dutch order of chivalry founded by King William I in 1815.
Katja Loos is a Fellow of the Dutch Polymer Institute (DPI) and the Royal Society of Chemistry (FRSC).
References
External links
Website research group
Information on university website
Profile on NARCIS
Profile on AcademiaNet
1971 births
Living people
Academic staff of the University of Groningen
21st-century Dutch chemists
21st-century German chemists
Johannes Gutenberg University Mainz alumni
University of Bayreuth alumni
21st-century Dutch inventors
Women inventors
Scientists from Frankfurt
Polymer scientists and engineers
Polytechnic Institute of New York University alumni
21st-century Dutch women scientists | Katja Loos | Chemistry,Materials_science | 952 |
56,941,249 | https://en.wikipedia.org/wiki/Cast%20saw | A cast saw is an oscillating saw used to remove orthopedic casts. Instead of a rotating blade, cast saws use a sharp, small-toothed blade rapidly oscillating or vibrating back and forth over a minimal angle to cut material and are therefore not circular saws. This device is often used with a cast spreader.
The patient's skin frequently comes into contact with the cast saw blade without being cut, although the saw can cause lacerations when used over bony prominences. The design enables the saw to cut rigid materials such as plaster or fiberglass. In contrast, soft tissues such as skin move back and forth with the blade, dissipating the shear forces, and preventing injury.
Modern cast saws date back to the plaster cast cutting saw which was submitted for a patent on April 2, 1945, by Homer H. Stryker, an orthopedic surgeon from Kalamazoo, Michigan.
Cast removal procedures result in complications in less than 1% of patients. These complications include skin abrasions or thermal injuries from friction between the saw and cast. Temperatures exceeding have been recorded during the removal of fiberglass casts. The proper use of the saw is to perforate (instead of cutting) the cast, which can then be separated using a cast spreader.
Alternatives include cast cutting shears which were patented in 1950 by Neil McKay.
See also
Multi-tool (power tool)
References
External links
Demonstration of a cast saw on:
bare skin
plaster material
Power tools
Orthopedic treatment
Saws | Cast saw | Physics | 313 |
32,447,379 | https://en.wikipedia.org/wiki/TetGen | TetGen is a mesh generator developed by Hang Si which is designed to partition any 3D geometry into tetrahedrons by employing a form of Delaunay triangulation whose algorithm was developed by the author.
TetGen has since been incorporated into other software packages such as Mathematica and Gmsh.
Version 1.6 introduced improvements in speed and mesh quality.
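TetGen itself is a C++ library and command-line tool. As a conceptual sketch only, SciPy's Qhull-based routine performs the same kind of Delaunay partition of a 3D point set into tetrahedra (this uses SciPy, not TetGen, and omits TetGen's boundary conformity and mesh-quality constraints):

```python
import numpy as np
from scipy.spatial import Delaunay

# Eight corners of a unit cube plus its centre point.
corners = np.indices((2, 2, 2)).reshape(3, -1).T.astype(float)
points = np.vstack([corners, [[0.5, 0.5, 0.5]]])

tets = Delaunay(points)        # 3D input yields tetrahedral cells
print(tets.simplices.shape)    # (n_tetrahedra, 4) -- vertex indices
```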
See also
Gmsh
Salome (software)
References
External links
Weierstrass Institute: Hang Si's personal homepage
Numerical analysis software for Linux
Cross-platform software
Mesh generators
Numerical analysis software for macOS
Numerical analysis software for Windows
Free mathematics software
Free software programmed in C++
Cross-platform free software | TetGen | Mathematics | 141 |
355,559 | https://en.wikipedia.org/wiki/Mobbing | Mobbing, as a sociological term, refers either to bullying in any context, or specifically to that within the workplace, especially when perpetrated by a group rather than an individual.
Psychological and health effects
Victims of workplace mobbing frequently suffer from: adjustment disorders, somatic symptoms, psychological trauma (e.g., trauma tremors or sudden onset selective mutism), post-traumatic stress disorder (PTSD), or major depression.
In mobbing targets with PTSD, Leymann notes that the "mental effects were fully comparable with PTSD from war or prison camp experiences." Some patients may develop alcoholism or other substance abuse disorders. Family relationships routinely suffer and victims sometimes display acts of aggression towards strangers in the street. Workplace targets and witnesses may even develop brief psychotic episodes, generally with paranoid symptoms. Leymann estimated that 15% of suicides in Sweden could be directly attributed to workplace mobbing.
Development of the concept
Konrad Lorenz, in his book entitled On Aggression (1966), first described mobbing among birds and other animals, attributing it to instincts rooted in the Darwinian struggle to thrive (see animal mobbing behavior). In his view, most humans are subject to similar innate impulses but capable of bringing them under rational control. Lorenz's explanation for his choice of the English word "mobbing" was omitted in the English translation by Marjorie Kerr Wilson. According to Kenneth Westhues, Lorenz chose the word "mobbing" because he recalled, in connection with the collective attack by birds, the old German term hassen auf, which means "to hate after" or "to put a hate on"; this emphasised "the depth of antipathy with which the attack is made" rather than the collective aspect of the attack emphasised by the English word "mobbing".
In the 1970s, the Swedish physician Peter-Paul Heinemann applied Lorenz's conceptualization to the collective aggression of children against a targeted child. In the 1980s, professor and practising psychologist Heinz Leymann applied the term to ganging up in the workplace. In 2011, anthropologist Janice Harper suggested that some anti-bullying approaches effectively constitute a form of mobbing by using the label "bully" to dehumanize, encouraging people to shun and avoid people labeled bullies, and in some cases sabotage their work or refuse to work with them, while almost always calling for their exclusion and termination from employment.
Cause
Janice Harper followed her Huffington Post essay with a series of essays in both The Huffington Post and in her column "Beyond Bullying: Peacebuilding at Work, School and Home" in Psychology Today that argued that mobbing is a form of group aggression innate to primates, and that those who engage in mobbing are not necessarily "evil" or "psychopathic", but responding in a predictable and patterned manner when someone in a position of leadership or influence communicates to the group that someone must go. For that reason, she indicated that anyone can and will engage in mobbing, and that once mobbing gets underway, just as in the animal kingdom it will almost always continue and intensify as long as the target remains with the group. She subsequently published a book on the topic in which she explored animal behavior, organizational cultures and historical forms of group aggression, suggesting that mobbing is a form of group aggression on a continuum of structural violence with genocide as the most extreme form of mob aggression.
Online
Social networking sites and blogs have enabled anonymous groups to coordinate and attack other people. The victims of these groups can be targeted by various attacks and threats, sometimes causing the victims to use pseudonyms or go offline to avoid them.
In the workplace
British anti-bullying researchers Andrea Adams and Tim Field have used the expression "workplace bullying" instead of what Leymann called "mobbing" in a workplace context. They identify mobbing as a particular type of bullying that is not as apparent as most, defining it as "an emotional assault. It begins when an individual becomes the target of disrespectful and harmful behavior. Through innuendo, rumors, and public discrediting, a hostile environment is created in which one individual gathers others to willingly, or unwillingly, participate in continuous malevolent actions to force a person out of the workplace."
Adams and Field believe that mobbing is typically found in work environments that have poorly organised production or working methods and incapable or inattentive management and that mobbing victims are usually "exceptional individuals who demonstrated intelligence, competence, creativity, integrity, accomplishment and dedication".
In contrast, Janice Harper suggests that workplace mobbing is typically found in organizations where there is limited opportunity for employees to exit, whether through tenure systems or contracts that make it difficult to terminate an employee (such as universities or unionized organizations), and/or where finding comparable work in the same community makes it difficult for the employee to voluntarily leave (such as academic positions, religious institutions, or the military). In these settings, efforts to eliminate the worker will intensify to push the worker out against his or her will through shunning, sabotage, false accusations and a series of investigations and poor reviews. Other forms of employment where workers are mobbed are those that require the use of uniforms or other markers of group inclusion (law enforcement, fire fighting, the military), and organizations where a single gender has predominated but another gender is beginning to enter (STEM fields, fire fighting, the military, nursing, teaching, and construction). Finally, she suggests that organizations where there are limited opportunities for advancement can be prone to mobbing because those who do advance are more likely to view challenges to their leadership as threats to their precarious positions. Harper further challenges the idea that workers are targeted for their exceptional competence. In some cases, she suggests, exceptional workers are mobbed because they are viewed as threatening to someone, but some workers who are mobbed are not necessarily good workers. Rather, Harper contends, some mobbing targets are outcasts or unproductive workers who cannot easily be terminated, and are thus treated inhumanely to push them out. While Harper emphasizes the cruelty and damaging consequences of mobbing, her organizational analysis focuses on the structural, rather than moral, nature of the organization. Moreover, she views the behavior itself, which she terms workplace aggression, as grounded in group psychology rather than individual psychosis; even when the mobbing is initiated due to a leader's personal psychosis, the dynamics of group aggression will transform the leader's bullying into group mobbing: two vastly distinct psychological and social phenomena.
Shallcross, Ramsay and Barker consider workplace "mobbing" to be a generally unfamiliar term in some English speaking countries. Some researchers claim that mobbing is simply another name for bullying. Workplace mobbing can be considered as a "virus" or a "cancer" that spreads throughout the workplace via gossip, rumour and unfounded accusations. It is a deliberate attempt to force a person out of their workplace by humiliation, general harassment, emotional abuse and/or terror. Mobbing can be described as being "ganged up on." Mobbing is executed by a leader (who can be a manager, a co-worker, or a subordinate). The leader then rallies others into a systematic and frequent "mob-like" behaviour toward the victim.
Mobbing as "downward bullying" by superiors is also known as "bossing", and "upward bullying" by colleagues as "staffing", in some European countries, for instance, in German-speaking regions.
At school
Following on from the work of Heinemann, Elliot identifies mobbing as a common phenomenon in the form of group bullying at school. It involves "ganging up" on someone using tactics of rumor, innuendo, discrediting, isolating, intimidating, and above all, making it look as if the targeted person is responsible (victim blaming). It is to be distinguished from normal conflicts (between pupils of similar standing and power), which are an integral part of everyday school life.
In academia
Kenneth Westhues' study of mobbing in academia found that vulnerability was increased by personal differences such as being a foreigner or of a different sex; by working in fields such as music or literature which have recently come under the sway of less objective and more post-modern scholarship; financial pressure; or having an aggressive superior. Other factors included envy, heresy and campus politics.
Checklists
Sociologists and authors have created checklists and other tools to identify mobbing behaviour. Common approaches to assessing mobbing behavior are quantifying the frequency of mobbing behaviors based on a given definition of the behavior, or quantifying what respondents themselves believe encompasses mobbing behavior. These are referred to as "self-labeling" and "behavior experience" methods respectively.
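As a rough illustration of the behavior experience approach just described, the sketch below scores a respondent's reported frequency of negative acts and applies a simple cut-off. The item wording, the five-point frequency scale and the weekly cut-off are illustrative assumptions in the spirit of Leymann's operational definition, not the published scoring rules of any particular instrument.

```python
# Illustrative sketch of a "behavior experience" style scoring scheme.
# Items, scale and cut-off are assumptions, not a published instrument.

FREQUENCY_SCALE = {"never": 0, "now and then": 1, "monthly": 2, "weekly": 3, "daily": 4}

def exposure_score(responses):
    """Sum the frequency ratings over all negative-act items for one respondent."""
    return sum(FREQUENCY_SCALE[answer] for answer in responses.values())

def possible_target(responses, weekly_items_required=1):
    """Flag a respondent when at least `weekly_items_required` items are
    reported weekly or more often (an assumed operational cut-off)."""
    weekly_or_more = sum(1 for a in responses.values() if FREQUENCY_SCALE[a] >= 3)
    return weekly_or_more >= weekly_items_required

example = {
    "information withheld": "weekly",
    "rumours spread about you": "daily",
    "being ignored or excluded": "now and then",
}
print(exposure_score(example))   # 8
print(possible_target(example))  # True
```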
Limitations of some mobbing examination tools are:
Participant exhaustion due to examination length
Limited sample exposure resulting in limited result generalizability
Confounding with constructs that result in the same affect as mobbing but are not purposely harmful
Common tools used to measure mobbing behavior are:
Leymann Inventory of Psychological Terror (LIPT)
Negative Acts Questionnaire – Revised (NAQ-R)
Luxembourg Workplace Mobbing Scale (LWMS)
Counteracting
From an organizational perspective, it has been suggested that mobbing behavior can be curtailed by acknowledging specific behaviors as mobbing and by recognizing that such behaviors result in harm and/or negative consequences. Precise definitions of these behaviors are critical, because ambiguity about which behaviors are acceptable and which are not can lead to unintentional mobbing. Mobbing can be further attenuated by developing policies that explicitly address specific behaviors that are culturally understood to cause harm or negative affect. Such policies provide a framework from which mobbing victims can respond. Lacking such a framework, each instance of mobbing may be treated on an individual basis with no means of prevention, and the absence of policy may even signal that such behaviors are warranted and fall within the realm of acceptable conduct in the organization. Direct responses to mobbing-related grievances handled outside of a courtroom, and training programs outlining anti-bullying countermeasures, have also been shown to reduce mobbing behavior.
See also
Persecutory delusions
References
Further reading
Davenport NZ, Schwartz RD & Elliott GP, Mobbing: Emotional Abuse in the American Workplace, 3rd ed., Civil Society Publishing, Ames, IA, 2005.
Shallcross L., Ramsay S. & Barker M., "Workplace Mobbing: Expulsion, Exclusion, and Transformation" (2008), blind peer-reviewed paper, Australia and New Zealand Academy of Management Conference (ANZAM).
Westhues K, Eliminating Professors: A Guide to the Dismissal Process, Lewiston, New York: Edwin Mellen Press.
Westhues K, The Envy of Excellence: Administrative Mobbing of High-Achieving Professors, Lewiston, New York: Edwin Mellen Press.
Westhues K, "At the Mercy of the Mob", OHS Canada, Canada's Occupational Health & Safety Magazine (18:8), pp. 30–36.
Institute for education of works councils Germany – Information about Mobbing, Mediation and conflict resolution (German)
Zapf D. & Einarsen S. (2005), "Mobbing at Work: Escalated Conflicts in Organizations", in Fox S. & Spector P. E. (eds.), Counterproductive Work Behavior: Investigations of Actors and Targets, Washington, DC: American Psychological Association, p. vii.
Abuse
Aggression
Harassment and bullying
Interpersonal conflict
Injustice
Persecution
Group processes
Occupational health psychology
Stalking
1960s neologisms
Majority–minority relations | Mobbing | Biology | 2,354 |
5,082,711 | https://en.wikipedia.org/wiki/Zeta%20Cassiopeiae | Zeta Cassiopeiae, Latinized from ζ Cassiopeiae, and officially named Fulu, is a variable star in the constellation of Cassiopeia. It has a blue-white hue and is classified as a B-type subgiant with an apparent magnitude of +3.66. Based upon parallax measurements, it is approximately 590 light-years from the Sun.
Nomenclature
ζ Cassiopeiae (Latinised to Zeta Cassiopeiae) is the star's Bayer designation.
In Chinese astronomy, Zeta Cassiopeiae is called 附路 (Pinyin: Fùlù), meaning Auxiliary Road, because this star stands alone as the sole member of the Auxiliary Road asterism within the Legs lunar mansion (see Chinese constellations). 附路 (Fùlù) was westernized into Foo Loo, but that name was also applied to Eta Cassiopeiae by R.H. Allen, with the meaning of "a by-path". In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Fulu for Zeta Cassiopeiae on 30 June 2017 and it is now included in the List of IAU-approved Star Names.
Properties
Zeta Cassiopeiae is a B2 subgiant, indicating that it has exhausted its core hydrogen and started to evolve away from the main sequence. It has a temperature of over 20,000 K, is about eight times the mass of the sun, and is 5,500 times as luminous.
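As a rough consistency check, the quoted temperature and luminosity are tied to the stellar radius by the Stefan–Boltzmann law, L = 4πR²σT⁴. The short sketch below, which assumes the rounded figures given above and a solar effective temperature of about 5772 K, implies a radius of roughly six times that of the Sun.

```python
import math

# Consistency check via the Stefan-Boltzmann law in solar units:
#   L / L_sun = (R / R_sun)^2 * (T / T_sun)^4
L_over_Lsun = 5500.0   # luminosity quoted above, in solar luminosities
T_star = 20000.0       # effective temperature quoted above, in kelvin
T_sun = 5772.0         # solar effective temperature, in kelvin

R_over_Rsun = math.sqrt(L_over_Lsun) / (T_star / T_sun) ** 2
print(f"Implied radius: about {R_over_Rsun:.1f} solar radii")
```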
Variability
Zeta Cassiopeiae is a probable member of an unusual group of variable stars known as "Slowly Pulsating B" (SPB) stars. It shows a pulsation frequency of 0.64 per day (or once every 1.56 days) and displays a weak magnetic field with a strength of roughly , which varies with a period of 5.37 days. This likely matches the rotation rate of the star, which, when combined with the low projected rotational velocity, indicates the star may be seen nearly pole-on. Zeta Cassiopeiae is a candidate magnetic Bp star that shows an overabundance of helium. The star contains a randomly oriented fossil magnetic field, which impacts the outflow of the stellar wind. Collisions between streams from this stellar wind creates a shock front, with cooling particles settling toward a co-rotating disk.
References
Cassiopeiae, Zeta
Cassiopeiae, 17
B-type subgiants
Cassiopeia (constellation)
Slowly pulsating B-type stars
0153
003360
002920
BD+53 0105 | Zeta Cassiopeiae | Astronomy | 547 |
35,508,402 | https://en.wikipedia.org/wiki/WR%20124 | WR 124 is a Wolf–Rayet star in the constellation of Sagitta surrounded by a ring nebula of expelled material known as M1-67. It is one of the fastest runaway stars in the Milky Way with a radial velocity around . It was discovered by Paul W. Merrill in 1938, identified as a high-velocity Wolf–Rayet star. It is listed in the General Catalogue of Variable Stars as QR Sagittae with a range of 0.08 magnitudes. NASA's James Webb Space Telescope has captured detailed infrared images of WR 124, revealing significant dust production and offering new insights into the life cycles of massive stars and their contributions to the cosmic dust budget.
Distance
A 2010 study of WR 124 directly measured the expansion rate of the M1-67 nebula expelled from the star using Hubble Space Telescope camera images taken 11 years apart, and compared that to the expansion velocity measured from the Doppler shift of the nebular emission lines. This yielded a distance of , which is smaller than in previous studies, and the resulting luminosity of 150,000 times that of the Sun is much lower than previously calculated. The luminosity is also lower than predicted by models for a star of this spectral class. Previous studies had found distances of to , with corresponding luminosities of , as expected for a typical WN8h star, which is a very young star just moving away from the main sequence. The distance to WR 124 calculated from the parallax published in Gaia Data Release 2 is . Gaia Early Data Release 3 gives a similar parallax, which would suggest a distance .
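The two approaches mentioned above reduce to simple relations: the geometric "expansion parallax" compares the nebula's angular expansion rate with its Doppler expansion velocity, while an astrometric parallax gives the distance as its reciprocal. The sketch below illustrates both relations; the numerical inputs are placeholders for illustration, not the published measurements.

```python
# Illustrative sketch of the two distance methods described above.
# Input numbers are placeholders, not the published measurements.

KM_PER_S_PER_AU_PER_YR = 4.74  # 1 AU/yr expressed in km/s

def expansion_parallax_pc(v_expansion_km_s, angular_rate_mas_per_yr):
    """Geometric distance in parsecs: a transverse speed v [km/s] corresponds
    to an angular rate mu [arcsec/yr] at distance d [pc] via v = 4.74 * mu * d."""
    mu_arcsec_per_yr = angular_rate_mas_per_yr / 1000.0
    return v_expansion_km_s / (KM_PER_S_PER_AU_PER_YR * mu_arcsec_per_yr)

def parallax_distance_pc(parallax_mas):
    """Astrometric distance in parsecs as the reciprocal of the parallax."""
    return 1000.0 / parallax_mas

print(expansion_parallax_pc(v_expansion_km_s=45.0, angular_rate_mas_per_yr=3.0))
print(parallax_distance_pc(parallax_mas=0.2))
```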
Physical characteristics
With an assumed visual absolute magnitude of −7.22 and 3.1 magnitudes of extinction, WR 124 would be away. The temperature of around means that most of its energy is emitted at ultraviolet wavelengths, the bolometric luminosity is and the radius is . The mass is calculated from evolutionary models to be .
WR 124 is measured to still be about 15% hydrogen with most of the remaining mass being helium. A young highly massive and luminous WN8h star would still be burning hydrogen in its core, but a less luminous and older star would be burning helium in its core. The result of modelling the star purely from its observed characteristics is a luminosity of and a mass of , corresponding to a relatively young hydrogen-burning star at around . In either case, it has only a few hundred thousand years before it explodes as a type Ib or Ic supernova.
The mass loss rate is – per year, depending on the distance and properties determined for the star.
Nebula
WR 124 is surrounded by an intensely hot nebula formed from the star's extreme stellar wind. The nebula M1-67 is expanding at a rate of over and is nearly 6 light-years across, leading to the dynamical age of 20,000 years. M1-67 has little internal structure, though large clumps of material have been detected, some of which have 30 times the mass of Earth and stretch out up to . If placed in the Solar System, one of these clumps would span the distance from the Sun to Saturn.
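The dynamical age quoted above is simply the nebula's radius divided by its expansion velocity. The sketch below shows the calculation, taking the radius as half of the roughly 6-light-year diameter and using an assumed expansion velocity as a placeholder rather than the measured value.

```python
# Dynamical age of an expanding nebula: age = radius / expansion velocity.
# The expansion velocity here is an assumed placeholder, not the measured value.

KM_PER_LIGHT_YEAR = 9.461e12
SECONDS_PER_YEAR = 3.156e7

radius_ly = 6.0 / 2.0        # half of the ~6 light-year diameter quoted above
v_expansion_km_s = 45.0      # assumed placeholder expansion velocity

age_years = (radius_ly * KM_PER_LIGHT_YEAR / v_expansion_km_s) / SECONDS_PER_YEAR
print(f"Dynamical age: about {age_years:,.0f} years")  # of order 20,000 years
```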
External links
http://apod.nasa.gov/apod/ap981109.html
http://hubblesite.org/newscenter/archive/releases/1998/38/image/a
References
Wolf–Rayet stars
Runaway stars
Sagitta
Sagittae, QR
Merrill's star
094289 | WR 124 | Astronomy | 714 |
12,878,579 | https://en.wikipedia.org/wiki/Animal%20testing%20on%20invertebrates | Most animal testing involves invertebrates, especially Drosophila melanogaster, a fruit fly, and Caenorhabditis elegans, a nematode. These animals offer scientists many advantages over vertebrates, including their short life cycle, simple anatomy and the ease with which large numbers of individuals may be studied. Invertebrates are often cost-effective, as thousands of flies or nematodes can be housed in a single room.
With the exception of some cephalopods in the European Union, invertebrate species are not protected under most animal research legislation, and therefore the total number of invertebrates used remains unknown.
Main uses
Research on invertebrates is the foundation for current understanding of the genetics of animal development. C. elegans is especially valuable as the precise lineage of all the organism's 959 somatic cells is known, giving a complete picture of how this organism goes from a single fertilized egg cell to an adult animal. The genome of this nematode has also been fully sequenced, and almost any of its genes can easily be inactivated through RNA interference, typically by feeding the worms bacteria that express the corresponding double-stranded RNA. A major success in the work on C. elegans was the discovery that particular cells are programmed to die during development, leading to the discovery that programmed cell death is an active process under genetic control. The simple nervous system of this nematode allows the effects of genetics on the development of nerves to be studied in detail. However, the lack of an adaptive immune system and the simplicity of its organs prevent C. elegans from being used in medical research such as vaccine development.
The fly D. melanogaster is the most widely used animal in genetic studies. This comes from the simplicity of breeding and housing the flies, which allows large numbers to be used in experiments. Molecular biology is relatively simple in these organisms and a huge variety of mutant and genetically modified flies have been developed. Fly genetics has been vital in the study of development, the cell cycle, behavior, and neuroscience. The similarities in the basic biochemistry of all animals allows flies to be used as simple systems to investigate the genetics of conditions such as heart disease and neurodegenerative disease. However, like nematodes, D. melanogaster is not widely used in applied medical research, as the fly immune system differs greatly from that found in humans, and diseases in flies can be very different from diseases in humans.
Other uses of invertebrates include studies on social behavior.
See also
Animal testing on non-human primates
Animal testing on rodents
Testing cosmetics on animals
Pain in invertebrates
References
Further reading
General
Lawrence PA. "The Making of a Fly: The Genetics of Animal Design." Blackwell Publishing Limited (March 1, 1992)
Demerec M. "Biology of Drosophila" Macmillan Pub Co. (January 2000)
Hall, DH. "C. Elegans Atlas" Cold Spring Harbor Laboratory Press (November 30, 2007)
Practical
Goldstein LSB, (Ed) Fryberg EA. "Methods in Cell Biology: Drosophila Melanogaster : Practical Uses in Cell and Molecular Biology" Academic Press (January 1995)
Epstein HF, (Ed), Shakes DC. "Methods in Cell Biology: Caenorhabditis Elegans : Modern Biological Analysis of an Organism" Academic Press (October 1995)
External links
FlyBase Main Drosophila research database.
WormBase Main C. elegans research database.
Animal rights
Invertebrates | Animal testing on invertebrates | Chemistry | 708 |
44,166,309 | https://en.wikipedia.org/wiki/Ceratocystis%20cacaofunesta | Ceratocystis cacaofunesta is an ascomycete fungus that causes a wilt disease in cacao trees. It has led to significant economic losses in Latin America.
Taxonomy
Once considered to be a form of Ceratocystis fimbriata, the fungus was described as a new species in 2005. The specific epithet "cacaofunesta" means "cacao-killing". Two closely related sublineages exist within this species, one centered in western Ecuador and the other containing isolates from Brazil, Colombia and Costa Rica.
Ceratocystis wilt of cacao
The disease known as "Ceratocystis wilt of cacao" (or "Mal de machete") is a serious disease of the cocoa tree (Theobroma cacao) in Latin America. The fungus is indigenous to Central and South America.
The fungus penetrates cacao trees through stem wounds caused either by insects or by infected cutting tools. Wounds made by harvesting pods, removing stem sprouts or weeding may become infected. The disease is a systemic infection that damages the entire plant. The fungus spreads through the xylem, causing a deep staining of the wood and obstructing the transport of water and nutrients. Eventually, the plant turns yellow and then brown, leading to wilting and the sudden death of the tree.
The disease has been of major importance in Costa Rica, Trinidad and Tobago, Ecuador, parts of Colombia and Venezuela. In the 1990s, C. cacaofunesta was introduced to the southern region of Bahia, which is the largest Brazilian cacao-producing state. This disease is responsible for reductions in the cacao population in plantation areas, which has resulted in great economic losses in the affected regions. The fungus has killed as many as half of the trees in some locations.
References
External links
Fungal tree pathogens and diseases
Microascales
Fungi described in 2005
Fungus species | Ceratocystis cacaofunesta | Biology | 406 |
6,040,409 | https://en.wikipedia.org/wiki/Long-period%20fiber%20grating | A long-period fiber grating (LPFG) couples light from a guided core mode into forward-propagating cladding modes, where it is lost through absorption and scattering. Because the coupling from the guided mode to the cladding modes is wavelength dependent, the grating produces a spectrally selective loss.
It is an optical fiber structure with the properties periodically varying along the fiber, such that the conditions for the interaction of several copropagating modes are satisfied. The period of such a structure is of the order of a fraction of a millimeter. In contrast to the fiber Bragg gratings, LPFGs couple copropagating modes with close propagation constants; therefore, the period of such a grating can considerably exceed the wavelength of radiation propagating in the fiber. Because the period of an LPFG is much larger than the wavelength, LPFGs are relatively simple to manufacture. Since LPFGs couple copropagating modes, their resonances can only be observed in transmission spectra. The transmission spectrum has dips at the wavelengths corresponding to resonances with various cladding modes (in a single-mode fiber).
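The positions of these transmission dips follow from a phase-matching condition: the guided core mode and the m-th co-propagating cladding mode are coupled when the grating period Λ matches their beat length, so the resonance wavelengths are approximately λ_m = (n_eff,core − n_eff,clad,m)·Λ. The sketch below evaluates this relation for a few effective-index values that are illustrative assumptions, not data for a specific fibre.

```python
# Phase-matching condition of a long-period fiber grating (LPFG):
#   lambda_m = (n_eff_core - n_eff_clad_m) * period
# Effective indices and period below are illustrative assumptions.

def lpfg_resonance_wavelength_um(n_eff_core, n_eff_clad, period_um):
    """Resonance wavelength (micrometres) for one co-propagating cladding mode."""
    return (n_eff_core - n_eff_clad) * period_um

period_um = 400.0                                 # a fraction of a millimetre
n_eff_core = 1.4500                               # assumed core-mode effective index
cladding_mode_indices = [1.4475, 1.4470, 1.4465]  # assumed cladding-mode indices

for m, n_clad in enumerate(cladding_mode_indices, start=1):
    lam = lpfg_resonance_wavelength_um(n_eff_core, n_clad, period_um)
    print(f"cladding mode {m}: resonance near {lam:.2f} um")
```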
Depending on the symmetry of the perturbation that is used to write the LPFG, modes of different symmetries may be coupled. For instance, cylindrically symmetric gratings couple symmetric LP0m modes of the fiber. Microbend gratings, which are antisymmetric with respect to the fiber axis, create a resonance between the core mode and the asymmetric LP1m modes of the core and the cladding.
Long-period gratings have a wide variety of applications, including band-rejection filters, gain-flattening filters and sensors.
Various gratings with complex structures have been designed: gratings combining several LPFGs, LPFGs with superstructures, chirped gratings, and gratings with apodization. Various LPFG-based devices have been developed: filters, sensors, fiber dispersion compensators, etc.
References
S.W. James, R.P. Tatam, "Optical fibre long-period grating sensors: characteristics and application," Meas. Sci. Technol. 14, R49–R61 (2003).
O.V. Ivanov, S.A. Nikitov, Yu.V. Gulyaev, "Cladding modes of optical fibers: properties and applications," Physics-Uspekhi 49, 175-202 (2006).
T. Erdogan, "Cladding-mode resonances in short- and long-period fiber grating filters", J. Opt. Soc. Am. A, 14, pp. 1760–1773, 1997.
External links
Theory of LPFGs (Fiber Optic Research Center)
Fiber optics
Diffraction | Long-period fiber grating | Physics,Chemistry,Materials_science | 577 |
42,563,941 | https://en.wikipedia.org/wiki/Seminar%20for%20Applied%20Mathematics | The Seminar for Applied Mathematics (SAM; from 1948 to 1969 Institute for Applied Mathematics) was founded in 1948 by Prof. Eduard Stiefel. It is part of the Department of Mathematics (D-MATH) of the Swiss Federal Institute of Technology, ETH Zurich. The Seminar consists of four regular professorships (as of 2014), two assistant professorships, two permanent senior scientists, approximately 14 positions for assistants which are either filled by senior assistants, postdoctoral fellows or Ph.D. students, as well as secretarial staff and a systems administrator.
It is represented by the Head of SAM. The SAM is a centre for research and teaching in numerical mathematics, mathematical modelling and computing in science and technology within the Department of Mathematics (D-MATH) of ETH Zurich.
Mission
To conduct fundamental research in the development and mathematical analysis of efficient discretizations for problems in engineering and the sciences as well as their implementation on supercomputers
To provide education in applied mathematics, numerics and scientific computing on all levels
To help bridge gaps between computational directions in engineering and the sciences and those in the mathematical community
To provide a consulting service in all areas of numerical mathematics to ETH as a whole, and also to government agencies and industry
References
Numerical Analysis in Zurich – 50 Years Ago by Martin H. Gutknecht of ETH Zurich.
External links
Applied mathematics
ETH Zurich | Seminar for Applied Mathematics | Mathematics | 277 |
73,548,756 | https://en.wikipedia.org/wiki/Animals%20in%20Meitei%20culture | Animals () have significant roles in different elements of Meitei culture, including but not limited to Meitei cuisine, Meitei dances, Meitei festivals, Meitei folklore, Meitei folktales, Meitei literature, Meitei mythology, Meitei religion, etc.
Deer in Meitei culture
In one of the epic cycles of incarnations in Moirang, Kadeng Thangjahanba hunted and brought a lovely Sangai deer alive from a hunting ground called "Torbung Lamjao" as a gift of love for his girlfriend, Lady Tonu Laijinglembi.
However, when he heard that his sweetheart had married King Laijing Ningthou Punsiba of ancient Moirang during his absence, he was deeply disappointed and saddened. In his grief he sensed the deer's own sorrow at being separated from its mate, and so he released it into the wild of Keibul Lamjao (the present-day Keibul Lamjao National Park area). Since then, the Sangai is said to have made the Keibul Lamjao region its natural habitat.
Dogs in Meitei culture
Dogs are mentioned as friends or companions of human beings in many ancient tales and texts. In many cases, when dogs died, they were honoured with elaborate death ceremonies equal to those of human beings.
When the goddess Konthoujam Tampha Lairembi saw smoke in her native place, she became restless. She came down to earth from heaven to find out who had died. On reaching the place, her mother told her as follows:
Elephants in Meitei culture
In the Meitei epic of Khamba and Thoibi, the crown prince Chingkhu Akhuba of ancient Moirang and Kongyamba planned to kill Khuman Khamba.
Kongyamba and his accomplices threatened Khamba to make him give up Moirang Thoibi, which Khamba refused. Then they fought, and Khamba beat all of them and was about to kill Kongyamba, but the bystanders, friends of Kongyamba, dragged Khamba off and bound him with ropes to the elephant of the crown prince. Then they goaded the elephant, but the god Thangching stayed it so that it did not move. Finally, Kongyamba lost patience and pricked the elephant with a spear so that it moved in pain, yet it still did not harm Khamba, who appeared to be dead. Meanwhile, the goddess Panthoibi came to Thoibi in a dream and told her everything that was happening, so Thoibi rushed to the spot and saved Khamba from the elephant torture.
Fishes in Meitei culture
Horses in Meitei culture
Lions in Meitei culture
Kanglā Shā
In Meitei mythology and religion, , also spelled as , is a guardian dragon lion, described as a creature with a lion's body, a dragon's head and two horns. Besides being sacred in Meitei cultural heritage, it is frequently portrayed in the royal emblem of the Meitei royalty (the Ningthouja dynasty).
The best-known colossal statues of the "Kangla Sha" stand inside the Kangla Fort in Imphal.
In traditional Meitei race competitions, winners are declared only after symbolically touching the statue of the dragon "Kangla Sha". This tradition is mentioned in the story of the marathon competition between Khuman Khamba and Nongban in the epic saga of Khamba and Thoibi of ancient Moirang.
Nongshāba
In Meitei religion and mythology, Nongshaba () is a lion god and a king of the gods. He produced light in the primordial universe and is often addressed as the "maker of the sun". He is worshipped by the Meitei people, specifically by those of the Ningthouja and Moirang clans; the Moirang clan venerated him as an ancestral lineage god.
He is the chief of all the in Ancient Kangleipak (early Manipur).
Pakhangba
In Meitei culture, Pakhangba is a very powerful dragon, known as a protector and ruler of the universe and as a son of Mother Earth. Pakhangba is one of the most distinctive and fearsome of dragons, bearing loose resemblances and equivalences to Typhon of the Greeks, Bahamut of the Arabians, the Nagas of the Hindus, and Quetzalcoatl of the Native Americans. His identity is the subject of numerous stories, some of which even combine him with significant historical figures.
Monkeys in Meitei culture
The Meitei folktale of , also known as the , is about the story of an old couple who were tricked by a gang of monkeys.
In the story, a childless old couple treat a group of monkeys from the nearby forest kindly, like their own children. The monkeys give the old couple advice about planting taro in their kitchen garden. Following their suggestion, the couple boil the tubers in a pot until soft, let them cool, wrap them in banana leaves, and plant them in the garden. At midnight, the monkeys secretly steal and eat all the cooked taro and plant inedible giant wild taro in its place. The next day, the old couple find the fully grown taro. They immediately cook and eat it and suffer an allergic reaction, which is relieved only after they take the hentak medicine. Realising they have been tricked, the old couple plan their revenge. The old man pretends to be dead, and the old woman cries out loudly so that the monkeys hear her. When the monkeys come and ask her what happened, she tells them that the old man died after eating the taro and asks them to help her carry his body out to the lawn. As soon as the monkeys enter the house, the old man takes up his stick and beats them, and, frightened, they all run away. Knowing that the monkeys will come back, the old couple climb into the attic and hide. When the monkeys return in a larger gang to take revenge, the attic breaks and falls on them, and they flee. Knowing that they might come back again, the old couple hide inside a large pot. When the monkeys return, the couple fart continuously, and the sound scares the monkeys, who flee and never return.
Pythons in Meitei culture
In Meitei mythology, Poubi Lai () was an ancient python. It lived in the deep waters of the Loktak Lake.
It is also referred to as the "Loch Ness Monster of Manipur".
Rodents in Meitei culture
In Meitei mythology, Shapi Leima () is one of the three favorite daughters of the sky god, and the mistress and queen of all rodents.
Tigers in Meitei culture
Tigers are among the most mentioned animals in different elements of Meitei culture.
Keibu Keioiba
In the Meitei mythology and folklore, Keibu Keioiba (), also known as Kabui Keioiba (), is a mythical creature with the head of a tiger and the body of a human. He is often described as half man and half tiger.
He was once a skilful priest named Kabui Salang Maiba. With his witchcraft, he transfigured himself into the form of a ferocious tiger. As punishment for his pride (divine retribution), he could not completely turn back into his original human form.
Khoirentak tiger
In the Meitei folktale of Khamba and Thoibi, Khuman Khamba and Nongban were in conflict over princess Moirang Thoibi, as both men wanted to marry her. Of the two suitors, the princess had already chosen Khamba, but Nongban did not give up easily. The matter was set before the King of ancient Moirang in his court, and he ordered them to settle it by a trial by ordeal with the spear. However, an old woman said that there was a tiger in the forest hard by that attacked the people, so the King chose the tiger hunt to serve as both the witness and the ordeal: whichever of the two killed the tiger would win Princess Thoibi as his wife. On the next day, the King and his ministers gathered there in stages, and so many people gathered at the spot that it seemed like a white cloth spread on the ground. Then the two went into the forest, where the tiger was found near the body of a freshly killed girl. Nongban tried to spear the tiger but missed his target; the tiger sprang upon them and bit Nongban. Khamba wounded the beast and drove it off, then carried Nongban to the gallery. Khamba entered the forest once again and found the tiger crouching in a hollow, half hidden by the forest but in full view of the King's gallery.
Tiger of Goddess Panthoibi
Tortoises and turtles in Meitei culture
Tortoise/Turtle in the story of Sandrembi and Chaisra
In the Meitei folktale of Sandrembi and Chaisra, Sandrembi's mother transformed herself into a tortoise/turtle; after some time, she was killed by Sandrembi's stepmother, who was her co-wife and rival. Following instructions received in a dream, Sandrembi took the tortoise from the lake and kept it inside a pitcher, for she had been told that her mother could re-assume her human form only if kept inside a pitcher for five consecutive days without any disturbance. However, before the five days were complete, Chaisra discovered the tortoise and insisted that her mother force Sandrembi to cook the tortoise meat for her. Poor Sandrembi was thus forced to boil her own mother in tortoise form. On hearing her tortoise mother's cries of pain from the boiling pot, Sandrembi tried to take away the fuel stick, but her stepmother forced her to put the fuel back in. Thus Sandrembi could not save her tortoise mother from being killed.
See also
Hills and mountains in Meitei culture
Plants in Meitei culture
Birds in Meitei culture
Notes
References
Animals in art
Animals in culture
Animals in entertainment
Works about animals
Animals in mythology
Animals in popular culture
Animals in religion
Meitei culture
Meitei folklore
Meitei literature
Meitei mythology
Sanamahism | Animals in Meitei culture | Biology | 2,288 |
7,041,409 | https://en.wikipedia.org/wiki/Basidiobolomycosis | Basidiobolomycosis is a fungal disease caused by Basidiobolus ranarum. It may appear as one or more painless firm nodules in the skin which become purplish, with an edge that appears to be slowly growing outwards. A serious but less common type affects the stomach and intestine, and usually presents with abdominal pain, fever and a mass.
B. ranarum can be found in soil and decaying vegetable matter, and has been isolated from insects, some reptiles, amphibians, and mammals. The disease results from direct entry of the fungus through broken skin, such as an insect bite or other trauma, or from eating contaminated food. It generally affects otherwise healthy people.
Diagnosis is by medical imaging, biopsy, microscopy, culture and histopathology. Treatment usually involves amphotericin B and surgery.
Although B. ranarum is found around the world, the disease Basidiobolomycosis is generally reported in tropical and subtropical areas of Africa, South America, Asia and Southwestern United States. It is rare. The first case in a human was reported from Indonesia in 1956 as a skin infection.
Signs and symptoms
Basidiobolomycosis may appear as a firm nodule in the skin which becomes purplish with an edge that appears to be slowly growing outwards. It is generally painless but may feel itchy or burning. There can be one lesion or several, and usually on the arms or legs of children. Pus may be present if a bacterial infection also occurs. The infection can spread to nearby structures such as muscles, bones and lymph nodes.
A serious but less common type affects the stomach and intestine, and usually presents with abdominal pain, fever and a mass. Lymphoedema may occur.
Mechanism
Basidiobolomycosis is a type of entomophthoromycosis, the other being conidiobolomycosis, and is caused by Basidiobolus ranarum, a fungus belonging to the order Entomophthorales. B. ranarum has been found in soil and decaying vegetable matter, and has been isolated from insects, some reptiles, amphibians, and mammals. The disease results from direct entry of the fungus through broken skin such as an insect bite or trauma, or eating contaminated food. Diabetes may be a risk factor. The exact way in which infection results is not completely understood.
Diagnosis
Diagnosis is by culture and biopsy.
A review in 2015 showed that the most common findings on imaging of the abdomen were a mass in the bowel, the liver, or multiple sites, and bowel wall thickening. Initially, many patients were thought to have either a cancer of the bowel or Crohn's disease.
Treatment
Treatment usually involves itraconazole or amphotericin B, combined with surgical debridement. Bowel involvement may be better treated with voriconazole.
Epidemiology
The condition is rare but emerging. Males and children are affected more often than females. The disease is generally reported in tropical and subtropical areas of Africa, South America and Asia, with several cases reported in the Southwestern United States.
History
The first case in a human was reported from Indonesia as a skin infection in 1956. In 1964, the first case involving stomach and intestine was reported.
Society and culture
Cases among gardeners in Arizona, US, may indicate an occupational hazard, but this is unproven.
Other animals
Basidiobolomycosis has been reported in a dog.
References
External links
Animal fungal diseases
Fungal diseases | Basidiobolomycosis | Biology | 728 |
49,277,442 | https://en.wikipedia.org/wiki/Stone%20carving%20in%20Odisha | Stone carving in Odisha is the ancient practice, in the Indian state of Odisha, of sculpting stone into art and utilitarian objects. Stone carving is practiced by artisans mainly in Puri, Bhubaneswar and Lalitgiri in the Cuttack district, though some carvings can be found in Khiching in the Mayurbhanj District. Stone carving is one of the major handcrafts of Odisha. The art form primarily consists of custom carved works, with the Sun Temple of Konark, with its intricate sculpture and delicate carvings in vivid red sandstone, exemplifying the practice. Other noteworthy monuments include the stupas of Udayagiri and Ratnagiri and the Jagannath, Lingaraj and Mukteshwar temples, as well as other temples in the region.
Stones
Sandstone, soapstone, serpentinite, Makrana marble, and granite were used in Konark stone carving. Skillful artists may use the soft, white soapstone, , or the slightly harder greenish chlorite or . Rocks such as the pinkish , or and the hardest of all, black granite and are commonly used.
Procedure
An outline of sorts is first drawn on the cut-to-size stone. Once the outline is engraved, the final figure is brought out by removing the unwanted portions. For the harder stones, this is done by chiseling out the extra material. With softer stones, it is done by scraping out the extra material with a sharp flat-edged iron tool. Hammers and chisels of various sizes are used (e.g., the , , , and ).
Products
Subjects are often traditional images, including mythological figures. Utilitarian items like candle stands, pen stands, paperweights, bookends, lamp bases and stoneware utensils are also created. Turning and polishing with a wooden lathe called Kunda, the craftsmen produce beautiful polished plates (thali), containers (gina, pathuri), cups and glasses. These are used for , ritual worships and for daily eating. Stoneware containers are particularly good for storing curd as they do not react to acid. They are also filled with water and used for holding the legs of wooden almirahs to keep out the ants.
References
Geographical indications in Odisha
Stonemasonry | Stone carving in Odisha | Engineering | 483 |
56,872,763 | https://en.wikipedia.org/wiki/Tina%20van%20de%20Flierdt | Tina van de Flierdt (born 1973) is a Professor of Isotope Geochemistry at Imperial College London.
Education
Van de Flierdt grew up in rural western Germany. In 2000 van de Flierdt completed a diploma in Geology at the University of Bonn. She earned a PhD at ETH Zurich in 2003, working with Alexander Halliday.
Career
Van de Flierdt is interested in the marine-terminating sector of the East Antarctic Ice Sheet during past warm periods. Her research aims to develop new geochemical and isotopic tracers in marine geochemistry, paleoceanography and paleoclimate, with a particular focus on radiogenic isotopes. She is co-lead of the MAGIC Isotope group in the Department of Earth Sciences at Imperial College London. She is also a researcher at the Lamont–Doherty Earth Observatory at Columbia University.
She is part of the international Geotraces program. Part of the Geotraces program is to ensure results for trace elements and isotopes collected on different cruises by different laboratories can be compared in a meaningful way. Van de Flierdt is building a global database of neodymium in the oceans and researching the implications for paleoceanography research.
In 2012 she won a Leverhulme Trust grant to research deep-sea corals. She was part of the Natural Environment Research Council (NERC) project SWEET (Super-Warm Early Eocene Temperatures and climate). She has led several major NERC grants as principal investigator, totalling well over £1,000,000. Van de Flierdt is a member of the Royal Society's International Exchange Committee. She is an editor of Geochimica et Cosmochimica Acta. She has appeared on the podcast Forecast: Climate Conversations.
References
External links
1973 births
Living people
Geochemists
21st-century German chemists
21st-century German geologists
Women geochemists
Academics of Imperial College London
ETH Zurich alumni
University of Bonn alumni
German women chemists
Rare earth scientists
21st-century German women scientists
Lamont–Doherty Earth Observatory people | Tina van de Flierdt | Chemistry | 415 |
76,515,535 | https://en.wikipedia.org/wiki/Parsaclisib | Parsaclisib is an investigational drug that is being evaluated for the treatment of B-cell malignancies. It is an inhibitor of PI3Kδ (phosphoinositide 3-kinase delta).
References
Pyrazolopyrimidines
Pyrrolidones
Chloroarenes
Fluoroarenes
Ethoxy compounds
Amines | Parsaclisib | Chemistry | 77 |
20,389,237 | https://en.wikipedia.org/wiki/Engineering%20informatics | The term Engineering Informatics may refer to information engineering (information processing understood in various ways), computer engineering (development of computer hardware-software systems), or computational engineering (development of software for engineering purposes), among other meanings. The term is used in different contexts in different countries. In general, some assume that the central area of interest in informatics is information processing within man-made, artificial (engineered) systems, also called computational or computer systems. The focus on artificial systems separates informatics from psychology and cognitive science, which focus on information processing within natural systems (primarily people). However, these fields nowadays have areas where they overlap, e.g. in the field of affective computing.
Computer engineering as a discipline and field of study
Computer-aided design (CAD), intelligent CAD, engineering analysis, collaborative design support, computer-aided engineering, and product life-cycle management are some of the terms that have emerged over the past decades of computing in engineering. Codification and automation of engineering knowledge and methods have had a major impact on engineering practice. The use of computers by engineers has consistently tracked advancements in computer and information sciences. Computing, algorithms, computational methods, and engineering have become increasingly intertwined as developments in theory and practice in both disciplines influence each other. Therefore, it is now time to begin using the term "engineering informatics" to cover the science of the information that flows through these processes.
Informatics, with origins in the German word "Informatik" referring to automated information processing, has evolved to its current broad definition. The rise of the term informatics can be attributed to the breadth of disciplines that are now accepted and envisioned as contributing to the field of computing and information sciences. A common definition of informatics adopted by many departments/schools of informatics comes from the University of Edinburgh: "the study of the structure, behavior, and interactions of natural and artificial computational systems that store, process and communicate information.” Informatics includes the science of information, the practice of information processing, and the engineering of information systems.
The history of engineering and computers shows a trend of increasing sophistication in the type of engineering problems being solved. Early CAD was primarily geometry driven (using mathematics and computer science). Then came the engineering use of AI, driven by theories of cognitive science and computational models of cognition (logic and pattern based). More recently, models of collaboration and representation and acquisition of collective knowledge have been introduced, driven by fields of social sciences (ethnography, sociology of work) and philosophy.
Information technology and sciences have both created the need for, and play a role in, facilitating the management of complex sociotechnical processes. Information is context specific and its engineering is an integral part of any exchange among people and machines. Thus, informatics is the process of:
creating and codifying the linguistic worlds (representational structures) represented by the object worlds in the relevant domain, and
managing the attendant meanings through their contexts of use and accumulation through synthesis and classification.
Engineering informatics is a reflective task beyond the software/hardware that supports engineering; it is a cross-disciplinary perspective on the nature of collective intellectual work. It thereby becomes critical that a consciousness of the use of languages and their implications in the storage and retrieval of information in a work community be addressed as part of any information engineering task.
The role informatics plays in engineering products and services has become significant in the past decades. Most of the development has happened in an ad hoc manner, as can be expected. Techniques appeared in computer science and in programming practice; these techniques get used in engineering as is. Early computing in engineering was limited due to the capacities of computers. Computational power and telecommunications systems have started to converge, resulting in the possibilities of untethered connections and exchange of information that was just a distant dream in the early computing days. These developments have made the problems of distance less onerous and allow for global design, manufacturing, and supply chains. However, the problem of managing a global supply chain still is a daunting task with numerous incompatibilities in information exchange and coordination.
The problem of integrating entire sets of industries in a flexible and ad hoc manner is still a dream especially for small-scale industries within the larger global environment. For this dream to become a reality, standards become critical. With technology evolving continuously, the task of creating information standards for varieties of exchanges from the syntactic to the semantic is a challenge yet to be resolved.
Computer scientists or engineers by themselves cannot solve engineering informatics problems or design the processes required to manage information in the context of engineered systems; it has to be a collaborative effort. The lack of skills among computer scientists in engineering and among engineers in computing has led to problems in bridging the disciplines. What pedagogical stance can help prepare students to deal with the complexities that are inherent in the task of engineering informatics? The culture of learning has to encourage the appreciation of diversity while at the same time looking for the core essence and canonical nature of the experiences. While the products of today are increasingly designed for variety, we still have not mastered this process conceptually, let alone prepared our students for it.
Nowadays, we are entering an era of networks in which different infrastructural networks can be connected through information networks. The information network can connect the manufacturing network to the design and supply chain network in almost real time using information systems that include sensors and ID tags. The integrative power of information networks is limited only by imagination. It is in this new, complex world that we need to teach students, among other things, the ability to reflect on the information they use and how to handle it, what it means to use (or not use) computational tools, the need to create tools at different scales of inquiry and across disciplines, and how to view one's own discipline from an engineering informatics point of view.
Engineering technology areas
It encompasses engineering technology areas in:
Neural Network Engineering and Intelligent System Application
Decision Support System and Information Modelling System
Reverse Software Engineering and Reusable Software Engineering
The application of Cryptography in Computer Security System
Enterprise Architectural Framework and Application
Distributed Engineering and Business Services
Sensing, Monitoring, Control and Structural Dynamics
Human and Social Modelling for Design Simulations
Computational Engineering
Virtual Office and Optimization
Networking computing for Engineering
IT Applications in Engineering
Systems and Network Technologies
Interactive Media and Internet Development
Supply Chain and Logistics Management
etc.
Universities and institutions offering Engineering Informatics
Engineering Informatics is a field of undergraduate study in some universities and polytechnics:
Argentina
Universidad Argentina de la Empresa, Buenos Aires, Argentina
Czech Republic
University of Chemistry and Technology, Prague, Czech Republic
Tomas Bata University in Zlín, Zlín, Czech Republic
Egypt
Information Technology Institute, Smart Village, 6th of October City, Egypt (Post-Graduate Diploma for Engineers)
Germany
Otto-von-Guericke University Magdeburg, Magdeburg, Germany
Hochschule für Technik und Wirtschaft Berlin, Berlin, Germany
Technische Universität Ilmenau, Ilmenau, Germany
Georgia
Georgian Technical University, Tbilisi, Georgia
Guatemala
Universidad Mesoamericana, Guatemala City, Guatemala
Greece
University of Western Macedonia, Kozani, Greece
International Hellenic University, Thessaloniki, Greece
Technological Educational Institute of Central Macedonia, Serres, Greece
Technological Educational Institute of West Macedonia, Kastoria, Greece
Hungary
Budapest University of Technology and Economics, Budapest, Hungary
Óbuda University, Budapest, Hungary
University of Pannonia - Faculty of Information Technology, Veszprém, Hungary
Dennis Gabor College, Budapest, Hungary
Indonesia
Trisakti University, Jakarta, Indonesia
Pancasila University, Jakarta, Indonesia
University of Bunda Mulia, Jakarta, Indonesia
Gunadarma University, Bekasi, Indonesia
University of Amikom Yogyakarta, Indonesia
Duta Wacana Christian University, Yogyakarta, Indonesia
University of Muhammadiyah Malang, Malang, Indonesia
Bandung Institute of Technology, Bandung, Indonesia
Sepuluh Nopember Institute of Technology, Surabaya, Indonesia
State Islamic University Sunan Gunung Djati Bandung, Bandung, Indonesia
Japan
Waseda University, Shinjuku, Tokyo, Japan
University of Tokyo, Bunkyo, Tokyo, Japan
Lithuania
Vilnius Gediminas Technical University, Vilnius, Lithuania
Mexico
Instituto Politécnico Nacional, - UPIICSA, Mexico
Paraguay
Universidad Nacional de Asunción, San Lorenzo, Paraguay
Autonomous University of Asuncion
Universidad del Norte, Asunción, Paraguay
Universidad de la Integración de las Américas, Asunción, Paraguay
Universidad Nacional de Caaguazú, Caaguazú, Paraguay
Universidad Internacional Tres Fronteras, Ciudad del Este, Alto Paraná, Paraguay
Universidad Privada del Este, Presidente Franco, Alto Paraná, Paraguay
Catholic University of Asunción
American University, Asunción, Paraguay
National University of Itapua, Encarnación, Itapúa, Paraguay
Portugal
University of Madeira, Funchal, Portugal
University of Minho, Braga, Portugal
NOVA University Lisbon, Lisbon, Portugal
ISCTE – University Institute of Lisbon, Lisbon, Portugal
University of Algarve - Faculty of Sciences and Technology, Algarve, Portugal
University of Aveiro, Aveiro, Portugal
University of Beira Interior, Covilhã, Portugal
University of Coimbra - Faculty of Sciences and Technology, Coimbra, Portugal
University of Évora - School of Sciences and Technology, Évora, Portugal
University of Lisbon - Faculty of Sciences, Lisbon, Portugal
University of Trás-os-Montes and Alto Douro - School of Sciences and Technology, Vila Real, Portugal
Instituto Politécnico do Porto - Instituto Superior de Engenharia do Porto, Porto, Portugal
Singapore
Nanyang Polytechnic, Ang Mo Kio, Singapore
Taiwan
Chung Hua University, Hsinchu, Taiwan
United Kingdom
Newcastle University, Newcastle upon Tyne, North East England, United Kingdom
University of Cambridge, Cambridge, England, United Kingdom
Cardiff University, Cardiff, Wales, United Kingdom
United States
Columbia University, Manhattan, New York City, United States
Harvard University, Manhattan, Cambridge, Massachusetts, United States
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Princeton University, New Jersey, United States
Stanford University, Stanford, California, United States
University of California, Berkeley, California, United States
Venezuela
Universidad Centroccidental Lisandro Alvarado, Barquisimeto, Venezuela
Andrés Bello Catholic University, Caracas, Venezuela
Alejandro de Humboldt University, Caracas, Venezuela
Universidad de Oriente, Anzoátegui, Venezuela
Universidad Nacional Experimental de Guayana, Ciudad Guayana
Universidad Nacional Experimental del Táchira, San Cristóbal, Venezuela
Universidad Politécnica Territorial de Mérida, Mérida, Venezuela
Universidad Politécnica Territorial del Estado Aragua, Aragua, Venezuela
Publications
Advanced Engineering Informatics is a journal publication in the field of engineering informatics.
The Need for a Science of Engineering Informatics. Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AI-EDAM), 2007, 21:1(23–26).
JCISE Special Issue, March 2008. This special issue has a guest editorial and a few research papers in the Engineering Informatics domain.
Special Issue on “Engineering Informatics”, by Eswaran Subrahmanian and Sudarsan Rachuri, J. Comput. Inf. Sci. Eng. 8(1), 010301 (Feb 28, 2008).
Research
Engineering Informatics Group, a research group at Stanford University, USA
References
Engineering disciplines
Information science by discipline
Computational fields of study | Engineering informatics | Technology,Engineering | 2,378 |
405,980 | https://en.wikipedia.org/wiki/Cockade | A cockade is a knot of ribbons, or other circular- or oval-shaped symbol of distinctive colours which is usually worn on a hat or cap. The word cockade derives from the French cocarde, from Old French coquarde, feminine of coquard (vain, arrogant), from coc (cock), of imitative origin. The earliest documented use was in 1709.
The first cockades were introduced in Europe in the 15th century. The armies of the European states used them to signal the nationality of their soldiers to discern allies from enemies. These first cockades were inspired by the distinctive coloured bands and ribbons that were used in the Late Middle Ages by knights, both in war and in tournaments, which had the same purpose, namely to distinguish the opponent from the fellow soldier.
The cockade later became a revolutionary symbol par excellence during the insurrectionary uprisings of the 18th and 19th centuries. Its main characteristics were that it was clearly visible, making it possible to identify unequivocally the political ideas of its wearer, and that, in case of need, it could be hidden more easily than, for example, a flag.
18th century
In the 18th and 19th centuries, coloured cockades were used in Europe to show the allegiance of their wearers to some political faction, or to show their rank or to indicate a servant's livery. Because individual armies might wear a variety of differing regimental uniforms, cockades were used as an effective and economical means of national identification.
A cockade was pinned on the side of a man's tricorne or cocked hat, or on his lapel. Women could also wear it on their hat or in their hair.
In pre-revolutionary France, the cockade of the Bourbon dynasty was all white. In the Kingdom of Great Britain supporters of a Jacobite restoration wore white cockades, while the recently established Hanoverian monarchy used a black cockade. The Hanoverians also accorded the right to all German nobility to wear the black cockade in the United Kingdom.
During the 1780 Gordon Riots in London, the blue cockade became a symbol of anti-government feelings and was worn by most of the rioters.
During the American Revolution, the Continental Army initially wore cockades of various colors as an ad hoc form of rank insignia, as General George Washington wrote:
Before long, however, the Continental Army reverted to wearing the black cockade it had inherited from the British. Later, when France became an ally of the United States, the Continental Army pinned the white cockade of the French Ancien Régime onto their old black cockade; the French reciprocally pinned the black cockade onto their white cockade as a mark of the French-American alliance. The black-and-white cockade thus became known as the "Union Cockade".
At the Storming of the Bastille, Camille Desmoulins initially encouraged the revolutionary crowd to wear green, but this colour was later rejected as it was associated with the Count of Artois. Instead, revolutionaries wore cockades in the traditional colours of the arms of Paris: red and blue. The Bourbon white was later added to this cockade, producing the original cockade of France. Later still, distinctive colours and styles of cockade would indicate the wearer's faction, although the meanings of the various styles were not entirely consistent and varied somewhat by region and period.
The cockade of Italy is one of the national symbols of the country and is composed of the three colours of the Italian flag, with green in the centre, white immediately outside it and red on the edge. The cockade, a revolutionary symbol, was a protagonist of the uprisings that characterized Italian unification, being pinned in its tricolour form on the jackets or hats of many patriots of this period of Italian history. The Italian tricolour cockade appeared for the first time in Genoa on 21 August 1789, and with it the three Italian national colours. Seven years later, the first tricolour military banner was adopted by the Lombard Legion in Milan on 11 October 1796, and eight years later the flag of Italy had its origins, on 7 January 1797, when it became for the first time the national flag of a sovereign Italian state, the Cispadane Republic.
European military
From the 15th century, various European monarchies used cockades to denote the nationalities of their militaries. Their origin goes back to the distinctive coloured bands or ribbons worn by late medieval armies or jousting knights on their arms or headgear to distinguish friend from foe on the field of battle. Ribbon-style cockades were later worn, as by the French, upon helmets and brimmed hats or tricornes and bicornes, and also on cocked hats and shakoes. Coloured metal cockades were worn at the right side of helmets, while small button-type cockades were worn at the front of kepis and peaked caps. In addition to the significance of these symbols in denoting loyalty to a particular monarch, the coloured cockade provided a common and economical field sign at a time when the colours of uniform coats might vary widely between regiments in a single army.
During the Napoleonic wars, the armies of France and Russia had the imperial French cockade or the larger cockade of St. George pinned on the front of their shakos.
The Second German Empire (1871–1918) used two cockades on each army headgear: one (black-white-red) for the empire; the other for one of the monarchies of which the empire was composed, which had used their own colors long before. The only exceptions were the Kingdoms of Bavaria and Württemberg, which had preserved the right to keep their own armed forces, not integrated into the Imperial Army. Their only cockades were either white-blue-white (Bavaria) or black-red-black (Württemberg).
The Weimar Republic (1919–1933) removed these, as they might promote separatism which would lead to the dissolution of the German nation-state into regional countries again.
When the Nazis came to power, they rejected the democratic German colours of black-red-gold used by the Weimar Republic. Nazis reintroduced the imperial colours (in German: die kaiserlichen Farben or Reichsfarben) of black on the outside, white next, and a red center. The Nazi government used black-white-red on all army caps. These colours represented the biggest and the smallest countries of the Reich: large Prussia (black and white) and the tiny Hanseatic League city states of Hamburg, Bremen and Lübeck (white and red).
France began the first Air Service in 1909 and soon picked the traditional French cockade as the first national emblem, now usually termed a roundel, on military aircraft. During World War I, other countries adopted national cockades and used these coloured emblems as roundels on their military aircraft. These designs often bear an additional central device or emblem to further identify national aircraft, those from the French navy bearing a black anchor within the French cockade.
Hungarian revolutionaries wore cockades during the Hungarian revolution of 1848 and during the 1956 revolution. Because of this, Hungarians traditionally wear cockades on 15 March.
Confederate States
Echoing their use when Americans rebelled against Britain, cockades – usually made with blue ribbons and worn on clothing or hats – were widespread tokens of Southern support for secession preceding the American Civil War of 1861–1865.
List of national cockades
Below is a list of national cockades (colors listed from center to ring):
Component states of the German Empire (1871–1918)
The German Empire had, besides the national cockade, also cockades for several of its states, seen in the following table:
See also
Cap badge
Rosette (politics)
Roundel
References
External links
Hats
Ceremonial clothing
Symbols | Cockade | Mathematics | 1,611 |
28,231,199 | https://en.wikipedia.org/wiki/Rubroboletus%20rhodoxanthus | Rubroboletus rhodoxanthus is a species of bolete in the family Boletaceae, native to Europe. Previously known as Boletus rhodoxanthus, it was transferred in 2014 to the newly erected genus Rubroboletus, based on DNA data.
It produces large, colourful fruit bodies with pink patches on the cap, red pores in the hymenial surface and has a robust stem decorated in a dense, red-coloured network pattern. When longitudinally sliced, its flesh is distinctly bright yellow in the stem and discolours blue only in the cap, an excellent diagnostic feature distinguishing it from similar species.
The fungus is more widespread in warm broad-leaved forests of southern Europe, where it grows in mycorrhizal symbiosis with trees of the family Fagaceae, particularly oak (Quercus) and beech (Fagus). However, it is rare in northern regions and regarded as critically endangered or extinct in some countries.
Rubroboletus rhodoxanthus is generally regarded as inedible and may cause adverse gastrointestinal symptoms if consumed.
Taxonomy and phylogeny
The fungus was first described in 1836 by Czech mycologist Julius Vincenz von Krombholz, who considered it to be a variety of Boletus sanguineus. In 1925, it was recombined as a distinct species by German mycologist Franz Joseph Kallenbach, and the fungus remained in genus Boletus until 2014. The species epithet is derived from the Ancient Greek words ρόδο (rhódo, "rose" or "pink") and ξανθός (xanthós, "blonde" or "fair").
The first extensive phylogenetic studies on Boletaceae in 2006 and 2013, indicated that Boletus was not monophyletic and hence an artificial arrangement. A 2014 study by Wu and colleagues recognised 22 generic clades within Boletaceae, concluding that Boletus dupainii and some closely related red-pored species belong to a distinct clade, distant from the core clade of Boletus (comprising Boletus edulis and allied taxa). The new genus Rubroboletus was therefore described to accommodate species in this clade and B. rhodoxanthus was transferred to this genus. The placement of the species in genus Suillellus, following an online recombination by Blanco-Dios, was not supported by molecular data and has been subsequently rejected by later authors.
Description
The cap is at first hemispherical, gradually becoming convex to almost flat as the fungus expands, with a diameter of , but can sometimes grow up to . It is at first slightly velvety and coloured mostly whitish-grey, but soon becomes smooth, pinkish-grey, pinkish-beige or pinkish-red, especially towards the margin or when handled.
The tubes are adnate to emarginate, long and initially yellow, becoming somewhat olivaceous-yellow in very mature fruit bodies and staining blue when cut. The pores (tube mouths) are orange to deep red and instantly bluing when handled.
The stem is long by wide, bulbous or clavate when young, becoming more elongated and cylindrical at maturity. It is orange or orange-yellow at the top (apex), gradually becoming orange-red to carmine-red in the lower part and bears a dense, orange-red to carmine-red reticulation (network pattern).
The flesh is distinctly bright yellow and unchanging in the stem, but paler and turning blue when cut only in the cap. It has a mild taste.
The spores are olive-brown in mass. When viewed under the microscope they are ellipsoid to fusiform (spindle-shaped), measuring 10–15.5 by 4–5.5 μm. The cap cuticle is a trichodermium of septate cylindrical hyphae, sometimes finely incrusted.
Similar species
Rubroboletus legaliae is very similar, but has a distinctive smell of chicory or hay and whitish flesh that stains blue in the cap, as well as the stem when cut.
Rubroboletus satanas has a whitish cap without flushes of pink and whitish flesh that usually stains pale blue also in the stem when cut.
Rubroboletus rubrosanguineus is mycorrhizal with spruce (Picea) or fir (Abies) and has pale yellow flesh that stains blue throughout.
Rubroboletus demonensis, so far known only from southern Italy (Calabria and Sicily), has usually brighter colours on the cap ranging from pale grey to blood-red or purple, has yellow flesh that stains weakly to moderately blue throughout and microscopically has smaller spores, measuring 12.5–14 × 4.5–5 μm.
Imperator rhodopurpureus differs by its pinkish-red to crimson-red cap that has a roughened or "hammered" appearance and stains blue when handled, but also by its flesh that stains intensely dark blue throughout.
Distribution and habitat
Regarded as a rare species in northern Europe, Rubroboletus rhodoxanthus is more frequently encountered in warm, southern regions. It forms ectomycorrhizal associations with members of the Fagaceae, particularly oak (Quercus) and beech (Fagus), but sometimes also with chestnut (Castanea). Molecular phylogenetic testing has confirmed its presence in France, Italy, Portugal and the islands of Cyprus and Sardinia, but it is probably widespread throughout most of the Mediterranean region.
It has been reported as locally frequent on the island of Cyprus, where it appears in seasons with early rainfall, growing on serpentine soil under the endemic golden oak (Quercus alnifolia). In contrast, it is considered critically endangered in the Czech Republic and reported as extinct in England. In the British Isles it is known only from Northern Ireland.
Toxicity
Rubroboletus rhodoxanthus is generally regarded as inedible or even poisonous, and can cause an adverse gastrointestinal reaction if eaten. In the Colour Atlas of Poisonous Fungi, Bresinsky and Besl claim that the fungus might be edible if thoroughly cooked, but warn against collecting it because of its rarity and possibility of confusion with R. satanas.
References
rhodoxanthus
Fungi described in 1836
Fungi of Europe
Poisonous fungi
Fungus species | Rubroboletus rhodoxanthus | Biology,Environmental_science | 1,340 |
6,603,892 | https://en.wikipedia.org/wiki/Fiber%20%28mathematics%29 | In mathematics, the fiber (US English) or fibre (British English) of an element y under a function f is the preimage of the singleton set {y}, that is
f⁻¹({y}) = {x : f(x) = y}.
As an example of abuse of notation, this set is often denoted as f⁻¹(y), which is technically incorrect since the inverse relation f⁻¹ of f is not necessarily a function.
Properties and applications
In naive set theory
If X and Y are the domain and image of f, respectively, then the fibers of f are the sets in
{ f⁻¹({y}) : y ∈ Y },
which is a partition of the domain set X. Note that y must be restricted to the image set Y of f, since otherwise f⁻¹({y}) would be the empty set, which is not allowed in a partition. The fiber containing an element x is the set f⁻¹({f(x)}).
For example, let f be the function from R² to R that sends point (a, b) to a + b. The fiber of 5 under f consists of all the points on the straight line with equation a + b = 5. The fibers of f are that line and all the straight lines parallel to it, which form a partition of the plane R².
More generally, if f is a linear map from some vector space X to some other vector space Y, the fibers of f are affine subspaces of X, which are all the translated copies of the null space of f.
If f is a real-valued function of several real variables, the fibers of the function are the level sets of f. If f is also a continuous function and c is in the image of f, the level set f⁻¹(c) will typically be a curve in 2D, a surface in 3D, and, more generally, a hypersurface in the domain of f.
The fibers of f are the equivalence classes of the equivalence relation ~ defined on the domain of f such that x ~ x′ if and only if f(x) = f(x′).
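For a concrete finite illustration, the following minimal Python sketch (the map x mod 3 and its domain are arbitrary choices for the example) collects the fibers of a function and shows that they partition the domain into the equivalence classes just described:

```python
from collections import defaultdict

def fibers(f, domain):
    """Map each value y in the image of f to its fiber,
    the set {x in domain : f(x) == y}."""
    result = defaultdict(set)
    for x in domain:
        result[f(x)].add(x)
    return dict(result)

# Example: f(x) = x mod 3 on {0, ..., 8}. The three fibers are the
# residue classes, which partition the domain.
print(fibers(lambda x: x % 3, range(9)))
# {0: {0, 3, 6}, 1: {1, 4, 7}, 2: {2, 5, 8}}
```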
In topology
In point set topology, one generally considers functions from topological spaces to topological spaces.
If f : X → Y is a continuous function and if Y (or more generally, the image set f(X)) is a T1 space, then every fiber is a closed subset of X. In particular, if f is a local homeomorphism from X to Y, each fiber of f is a discrete subspace of X.
A function f between topological spaces is called monotone if every fiber is a connected subspace of its domain. A function f : R → R is monotone in this topological sense if and only if it is non-increasing or non-decreasing, which is the usual meaning of "monotone function" in real analysis.
A function f between topological spaces is (sometimes) called a proper map if every fiber is a compact subspace of its domain. However, many authors use other non-equivalent competing definitions of "proper map", so it is advisable to always check how a particular author defines this term. A continuous closed surjective function whose fibers are all compact is called a perfect map.
A fiber bundle is a function between topological spaces and whose fibers have certain special properties related to the topology of those spaces.
In algebraic geometry
In algebraic geometry, if f : X → Y is a morphism of schemes, the fiber of a point p in Y is the fiber product of schemes
X ×_Y Spec k(p),
where k(p) is the residue field at p.
See also
Fibration
Fiber bundle
Fiber product
Preimage theorem
Zero set
References
Basic concepts in set theory
Mathematical relations | Fiber (mathematics) | Mathematics | 596 |
40,149,992 | https://en.wikipedia.org/wiki/Burstiness | In statistics, burstiness is the intermittent increases and decreases in activity or frequency of an event.
One measure of burstiness is the Fano factor—a ratio between the variance and mean of counts.
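As a rough illustration, the following Python sketch (assuming NumPy is available) estimates the Fano factor from a sequence of per-window event counts; for a Poisson process the ratio is close to 1, while bursty, overdispersed counts give a ratio above 1:

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio of per-window event counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=5, size=10_000)
print(fano_factor(poisson_counts))   # close to 1 for a Poisson process
```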
Burstiness is observable in natural phenomena, such as natural disasters, or other phenomena, such as network/data/email network traffic or vehicular traffic. Burstiness is, in part, due to changes in the probability distribution of inter-event times. Distributions of bursty processes or events are characterised by heavy, or fat, tails.
Burstiness of inter-contact time between nodes in a time-varying network can decidedly slow spreading processes over the network. This is of great interest for studying the spread of information and disease.
Burstiness score
One relatively simple measure of burstiness is the burstiness score. The burstiness score of a subset T of a time period D relative to an event e is a measure of how often e appears in T compared to its occurrences in D as a whole. It is defined by
b_T(e) = n_T(e)/n_D(e) − |T|/|D|,
where n_T(e) is the total number of occurrences of event e in the subset T, n_D(e) is the total number of occurrences of e in D, and |T| and |D| are the durations of T and D.
The burstiness score can be used to determine if T is a "bursty period" relative to D. A positive score says that e occurs more often during the subset T than it does on average over the total time D, making T a bursty period. A negative score implies otherwise.
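A direct transcription of the score into Python might look as follows; the event timestamps and windows are illustrative, and the formula is the count-fraction-minus-time-fraction form given above:

```python
def burstiness_score(events, window, total):
    """Burstiness score of window T relative to the full period D.

    events -- timestamps of the occurrences of event e within D
    window -- (start, end) of the subset T
    total  -- (start, end) of the whole period D
    """
    events = list(events)
    t0, t1 = window
    d0, d1 = total
    n_T = sum(t0 <= t < t1 for t in events)   # occurrences of e in T
    return n_T / len(events) - (t1 - t0) / (d1 - d0)

# Ten events, eight of them packed into the first tenth of the period:
ev = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 5.0, 9.0]
print(burstiness_score(ev, window=(0, 1), total=(0, 10)))  # 0.8 - 0.1 = 0.7
```

The positive score marks the window (0, 1) as a bursty period for this event stream.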
See also
Burst transmission
Poisson clumping
Time-varying network
References
Markov models
Applied statistics | Burstiness | Mathematics | 284 |
410,596 | https://en.wikipedia.org/wiki/Alaska%20Airlines%20Flight%20261 | Alaska Airlines Flight 261 was an Alaska Airlines flight of a McDonnell Douglas MD-80 series aircraft that crashed into the Pacific Ocean on January 31, 2000, roughly north of Anacapa Island, California, following a catastrophic loss of pitch control, killing all 88 on board: 5 crew and 83 passengers. The flight was a scheduled international passenger flight from Licenciado Gustavo Díaz Ordaz International Airport in Puerto Vallarta, Jalisco, Mexico, to Seattle–Tacoma International Airport near Seattle, Washington, United States, with an intermediate stop at San Francisco International Airport near San Francisco, California.
The subsequent investigation by the National Transportation Safety Board (NTSB) determined that inadequate maintenance led to excessive wear and eventual failure of a critical flight control system during flight. The probable cause was stated to be "a loss of airplane pitch control resulting from the in-flight failure of the horizontal stabilizer trim system jackscrew assembly's Acme nut threads." For their efforts to save the plane, both pilots were posthumously awarded the Air Line Pilots Association Gold Medal for Heroism.
Background
Aircraft
The aircraft involved in the accident was a McDonnell Douglas MD-83, serial number 53077, and registered as N963AS. The MD-83 was a longer-range version of the original MD-80 (itself an improved version of the DC-9) with higher weight allowances, increased fuel capacity, and more powerful Pratt & Whitney JT8D-219 engines. The aircraft had logged 26,584 flight hours and 14,315 cycles since it was delivered in 1992.
Crew
The pilots of Flight 261 were both highly experienced. Captain Ted Thompson, 53, had accrued 17,750 flight hours, and had more than 4,000 hours of experience flying MD-80s. First Officer William "Bill" Tansky, 57, had accumulated 8,140 hours as first officer on the MD-80. Thompson had flown for Alaska Airlines for 18 years and Tansky for 15; neither pilot had been involved in an accident or incident prior to the crash. Both pilots lived in the Greater Los Angeles area and had previous military experience — Thompson in the U.S. Air Force and Tansky in the U.S. Navy. Three Seattle-based flight attendants were also on board, completing the five-person crew.
Passengers
The five crew members and 47 of the passengers on board the plane were bound for Seattle. Of the remaining passengers, 30 were traveling to San Francisco; three were bound for Eugene, Oregon; and three passengers were headed for Fairbanks, Alaska. Of the passengers, one was Mexican and one was British, with all others being U.S. citizens.
At least 35 occupants of Flight 261 were connected in some manner with Alaska Airlines or its sister carrier Horizon Air, including 12 people directly employed by the company. As is common practice among airlines, employees can sit in seats that would otherwise have been left empty. Employees can also grant the same privilege to their family members or friends. Bouquets of flowers started arriving at the company's headquarters in SeaTac, Washington, the day after the crash.
Notable passengers
Jean Gandesbery, author of the book Seven Mile Lake: Scenes from a Minnesota Life, died alongside her husband, Robert.
Cynthia Oti, an investment broker and financial talk show host at San Francisco's KSFO-AM, was killed.
Tom Stockley, wine columnist for The Seattle Times, died alongside his wife Margaret.
Morris Thompson, commissioner of the Bureau of Indian Affairs in Alaska from 1973 to 1976, died alongside his wife Thelma and daughter Sheryl.
Accident flight
Alaska Airlines Flight 261 departed from Puerto Vallarta's Licenciado Gustavo Díaz Ordaz International Airport at 13:37 PST (21:37 UTC), and climbed to its intended cruising altitude of flight level 310 (). The plane was scheduled to land at San Francisco International Airport (SFO) less than 4 hours later. Sometime before 15:49 (23:49 UTC), the flight crew contacted the airline's dispatch and maintenance-control facilities in SeaTac, Washington, on a company radio frequency shared with operations and maintenance facilities at Los Angeles International Airport (LAX), to discuss a jammed horizontal stabilizer and a possible diversion to LAX. The jammed stabilizer prevented the operation of the trim system, which would normally make slight adjustments to the flight control surfaces to keep the plane stable in flight. At their cruising altitude and speed, the position of the jammed stabilizer required the pilots to pull on their yokes with about of force to keep level. Neither the flight crew nor company maintenance could determine the cause of the jam. Repeated attempts to overcome the jam with the primary and alternate trim systems were unsuccessful.
During this time, the flight crew had several discussions with the company dispatcher about whether to divert to LAX or continue on as planned to SFO. Ultimately, the pilots chose to divert. Later, the NTSB found that while "the flight crew's decision to divert the flight to Los Angeles... was prudent and appropriate", "Alaska Airlines dispatch personnel appear to have attempted to influence the flight crew to continue to San Francisco... instead of diverting to Los Angeles". Cockpit voice recorder (CVR) transcripts indicate that the dispatcher was concerned about the effect on the schedule ("flow"), should the flight divert.
At 16:09 (00:09 UTC), the flight crew successfully used the primary trim system to unjam the stuck horizontal stabilizer. Upon being freed, however, it quickly moved to an extreme "nose-down" position, forcing the aircraft into an almost vertical nosedive. The plane dropped from about to between in around 80 seconds. Both pilots struggled together to regain control of the aircraft, and only by pulling with 130 to 140 lb (580 to 620 N) on the controls did the flight crew stop the descent of the aircraft and stabilize the MD-83 at roughly .
Alaska 261 informed air traffic control (ATC) of their control problems. After the flight crew stated their intention to land at LAX, ATC asked whether they wanted to proceed to a lower altitude in preparation for the approach. The captain replied: "I need to get down to about ten, change my configuration, make sure I can control the jet and I'd like to do that out here over the bay if I may." Later, during the public hearings into the accident, the request by the pilot not to fly over populated areas was mentioned. During this time, the flight crew considered, and rejected, any further attempts to correct the runaway trim. They descended to a lower altitude and started to configure the aircraft for landing at LAX.
Beginning at 16:19 (00:19 UTC), the CVR recorded the sounds of at least four distinct "thumps", followed 17 seconds later by an "extremely loud noise", as the overstrained jackscrew assembly failed completely and the jackscrew separated from the acme nut holding it in place. As a result, the horizontal stabilizer failed at and the aircraft rapidly pitched over into a dive while rolling to the left. The crippled plane had been given a block altitude, and several aircraft in the vicinity had been alerted by ATC to maintain visual contact with the stricken jet. These aircraft immediately contacted the controller. One pilot radioed, "That plane has just started to do a big huge plunge." Another reported, "Yes sir, ah, I concur. He is, uh, definitely in a nose down, uh, position, descending quite rapidly." ATC then tried to contact the plane. The crew of a SkyWest airliner reported, "He's, uh, definitely out of control." Although the CVR captured the co-pilot saying "mayday", no radio communications were received from the flight crew during the final event.
The CVR transcript reveals the pilots' constant attempts for the duration of the dive to regain control of the aircraft. After the jackscrew failed, the plane pitched to -70° and was rolling over to the left. Performing an upset recovery maneuver, the captain commanded to "push and roll, push and roll," managing to increase the pitch to -28°. He stated, "ok, we are inverted...and now we gotta get it." Over the next minute, completely inverted and still diving at a -9 degree pitch, the crew struggled to roll the plane, with the captain calling to "push push push...push the blue side up," "ok now let's kick rudder...left rudder left rudder", to which the copilot responded, "I can't reach it". The captain then replied, "ok right rudder...right rudder," followed 18 seconds later by "gotta get it over again...at least upside down we're flying."
Despite the attempt to fly the plane inverted, which almost entirely arrested its descent, the aircraft had lost too much altitude in the dive and was far beyond recovery. A few seconds before 16:21 (00:21 UTC), Flight 261 hit the Pacific Ocean at high speed between the coastal city of Port Hueneme, California, and Anacapa Island. At this time, pilots from aircraft flying in the vicinity reported in, with one pilot saying, "and he's just hit the water." Another reported, "Ah, yes sir, he, ah, he, ah, hit the water. He's, ah, down." The aircraft was destroyed by the impact forces, and all occupants on board were killed by blunt-force impact trauma.
Investigation
Wreckage recovery and analysis
The USS Cleveland (LPD-7) assisted in recovery operations.
Using side-scan sonar, remotely operated vehicles, and a commercial fishing trawler, workers recovered about 85% of the fuselage (including the tail section) and a majority of the wing components. In addition, both engines, as well as the flight data recorder (FDR) and CVR were retrieved. All wreckage recovered from the crash site was unloaded at the Seabees' Naval Construction Battalion Center Port Hueneme, California, for examination and documentation by NTSB investigators. Both the horizontal stabilizer trim system jackscrew (also referred to as "acme screw") and the corresponding acme nut, through which the jackscrew turns, were found. The jackscrew was constructed from case-hardened steel and is long and in diameter. The acme nut was constructed from a softer copper alloy containing aluminum, nickel, and bronze. As the jackscrew rotates, it moves up or down through the (fixed) acme nut, and this linear motion moves the horizontal stabilizer for the trim system. Upon subsequent examination, the jackscrew was found to have metallic filaments wrapped around it which were later determined to be the remains of the acme nut thread.
The later analysis estimated that 90% of the thread in the acme nut had already worn away previously and that it had finally stripped out during the flight while en route to San Francisco. Once the thread had failed, the horizontal stabilizer assembly was subjected to aerodynamic forces that it was not designed to withstand, leading to the complete failure of the stabilizer assembly. Based on the time since the last inspection of the jackscrew assembly, the NTSB determined that the acme nut thread had deteriorated at per 1,000 flight hours, much faster than the expected wear of per 1,000 flight‑hours. Over the course of the investigation, the NTSB considered a number of potential reasons for the substantial amount of deterioration of the nut thread on the jackscrew assembly, including the substitution by Alaska Airlines (with the approval of the aircraft manufacturer McDonnell Douglas) of Aeroshell 33 grease instead of the previously approved lubricant, Mobilgrease 28. The use of Aeroshell 33 was found to be not a factor in this accident. Insufficient lubrication of the components was also considered as a reason for the wear. Examination of the jackscrew and acme nut revealed that no effective lubrication was present on these components at the time of the accident. Ultimately, the lack of lubrication of the acme-nut thread and the resultant excessive wear were determined to be the direct causes of the accident. Both of these circumstances resulted from Alaska Airlines' attempts to cut costs.
Identification of passengers
Due to the extreme impact forces, only a few bodies were found intact, and none were visually identifiable. All passengers were identified using fingerprints, dental records, tattoos, personal items, and anthropological examination.
Inadequate lubrication and end-play checks
The investigation then proceeded to examine why scheduled maintenance had failed to adequately lubricate the jackscrew assembly. In interviews with the Alaska Airlines mechanic at SFO, who last performed the lubrication, the task was shown to take about one hour, whereas the aircraft manufacturer estimated the task should take four hours. This and other evidence suggested to the NTSB that "the SFO mechanic who was responsible for lubricating the jackscrew assembly in September 1999 did not adequately perform the task." Laboratory tests indicated that the excessive wear of the jackscrew assembly could not have accumulated in just the four-month period between the September 1999 maintenance and the accident flight. Therefore, the NTSB concluded, "more than just the last lubrication was missed or inadequately performed."
A periodic maintenance inspection called an "end-play check" was used to monitor wear on the jackscrew assembly. The NTSB examined why the last end-play check on the accident aircraft in September 1997 did not uncover excessive wear. The investigation found that Alaska Airlines had fabricated tools to be used in the end-play check that did not meet the manufacturer's requirements. Testing revealed that the nonstandard tools ("restraining fixtures") used by Alaska Airlines could result in inaccurate measurements and that if accurate measurements had been obtained at the time of the last inspection, these measurements possibly would have indicated the excessive wear and the need to replace the affected components.
Extension of maintenance intervals
Between 1985 and 1996, Alaska Airlines progressively increased the period between both jackscrew lubrication and end-play checks, with the approval of the Federal Aviation Administration (FAA). Since each lubrication or end-play check subsequently not conducted had represented an opportunity to adequately lubricate the jackscrew or detect excessive wear, the NTSB examined the justification of these extensions. In the case of extended lubrication intervals, the investigation could not determine what information, if any, was presented by Alaska Airlines to the FAA prior to 1996. Testimony from an FAA inspector regarding an extension granted in 1996 was that Alaska Airlines submitted documentation from McDonnell Douglas as justification for their extension.
End-play checks were conducted during a periodic comprehensive airframe overhaul process called a C-check. Testimony from the director of reliability and maintenance programs of Alaska Airlines was that a data-analysis package based on the maintenance history of five sample aircraft was submitted to the FAA to justify the extended period between C-checks. Individual maintenance tasks (such as the end-play check) were not separately considered in this extension. The NTSB found, "Alaska Airlines' end-play check interval extension should have been, but was not, supported by adequate technical data to demonstrate that the extension would not present a potential hazard."
FAA oversight
A special inspection conducted by the NTSB in April 2000 of Alaska Airlines uncovered widespread significant deficiencies that "the FAA should have uncovered earlier." The investigation concluded, "FAA surveillance of Alaska Airlines had been deficient for at least several years." The NTSB noted that in July 2001, an FAA panel determined that Alaska Airlines had corrected the previously identified deficiencies. However, several factors led the board to question "the depth and effectiveness of Alaska Airlines corrective actions" and "the overall adequacy of Alaska Airlines' maintenance program."
Systemic problems were identified by the investigation into the FAA's oversight of maintenance programs including inadequate staffing, its approval process of maintenance interval extensions, and the aircraft certification requirements.
Aircraft design and certification issues
The jackscrew assembly was designed with two independent threads, each of which was strong enough to withstand the forces placed on it. Maintenance procedures such as lubrication and end-play checks were to catch any excessive wear before it progressed to a point of failure of the system. The aircraft designers assumed that at least one set of threads would always be present to carry the loads placed on it; therefore, the effects of catastrophic failure of this system were not considered, and no "fail-safe" provisions were needed.
For this design component to be approved ("certified") by the FAA without any fail-safe provision, a failure had to be considered "extremely improbable". This was defined as "having a probability on the order of 1×10−9 or less each flight hour". The accident showed that certain wear mechanisms could affect both sets of threads and that the wear might not be detected. The NTSB determined that the design of "the horizontal stabilizer jackscrew assembly did not account for the loss of the acme-nut threads as a catastrophic single-point failure mode".
Jackscrew design improvement
In 2001, the National Aeronautics and Space Administration (NASA) recognized the risk to its hardware (such as the Space Shuttle) attendant upon the use of similar jackscrews. An engineering fix developed by engineers of NASA and United Space Alliance promised to make progressive failures easy to see and thus complete failures of a jackscrew less likely.
John Liotine
In 1998, an Alaska Airlines mechanic named John Liotine, who worked in the Alaska Airlines maintenance center in Oakland, California, told the FAA that supervisors were approving records of maintenance that they were not allowed to approve or that indicated work had been completed when, in fact, it had not. Liotine began working with federal investigators by secretly audio recording his supervisors. On December 22, 1998, federal authorities raided an Alaska Airlines property and seized maintenance records. In August 1999, Alaska Airlines put Liotine on paid leave, and in 2000, Liotine filed a libel suit against the airline. The crash of AS261 became a part of the federal investigation against Alaska Airlines, because, in 1997, Liotine had recommended that the jackscrew and gimbal nut of the accident aircraft be replaced, but had been overruled by another supervisor. In December 2001, federal prosecutors stated that they were not going to file criminal charges against Alaska Airlines. Around that time, Alaska Airlines agreed to settle the libel suit by paying about $500,000; as part of the settlement, Liotine resigned.
Conclusions
In addition to the probable cause, the NTSB found these contributing factors:
Alaska Airlines extended its lubrication interval for its McDonnell Douglas MD-80 horizontal stabilizer components based on McDonnell Douglas's recommendation, and the FAA approved the extended schedule. This increased the likelihood that a missed or inadequate lubrication would result in the near complete deterioration of the jackscrew-assembly acme-nut threads. The extended lubrication interval was a direct cause of the excessive wear and contributed to the Alaska Airlines Flight 261 accident.
Alaska Airlines extended the end-play check interval and the FAA approved the change. This allowed the acme-nut threads to deteriorate to the point of failure without the opportunity for detection.
The absence on the McDonnell Douglas MD-80 of a fail-safe mechanism to prevent the catastrophic effects of total acme nut loss.
During the course of the investigation, and later in its final report, the NTSB issued 24 safety recommendations, covering maintenance, regulatory oversight, and aircraft design issues. More than half of these were directly related to jackscrew lubrication and end-play measurement. Also included was a recommendation that pilots were to be instructed that in the event of a flight-control system malfunction, they should not attempt corrective procedures beyond those specified in the checklist procedures, and in particular, in the event of a horizontal stabilizer trim-control system malfunction, the primary and alternate trim motors should not be activated, and if unable to correct the problem through the checklists, they should land at the nearest suitable airport.
In NTSB board member John J. Goglia's statement for the final report, with which the other three board members concurred, he wrote:
Aftermath
After the crash, Alaska Airlines management said that it hoped to handle the aftermath in a manner similar to that conducted by Swissair after the Swissair Flight 111 accident. They wished to avoid the mistakes made by Trans World Airlines in the aftermath of the TWA Flight 800 accident, in other words, TWA's failure to provide timely information and compassion to the families of the victims.
Steve Miletich of The Seattle Times wrote that the western portion of Washington, "had never before experienced such a loss from a plane crash".
Liability
Both Boeing (who had acquired McDonnell Douglas through a merger in 1997) and Alaska Airlines eventually accepted liability for the crash, and all but one of the lawsuits brought by surviving family members were settled out of court before going to trial. While the financial terms of settlements had not been officially disclosed, The Seattle Times reported the total amount to be in excess of US$300 million, covered entirely by insurance. According to other sources, the individual settlements were "anywhere from a couple million dollars up to $20 million", purportedly "among the largest ever in an air disaster". Candy Hatcher of the Seattle Post-Intelligencer wrote: "Many lost faith in Alaska Airlines, a homegrown company that had taken pride in its safety record and billed itself as a family airline."
Flight number
As of 2023, flight number 261 is no longer used, and Alaska Airlines no longer operates the Puerto Vallarta–San Francisco–Seattle/Tacoma route. Alaska Airlines now flies Puerto Vallarta–Seattle/Tacoma nonstop with Flights 1380 and 1411 and Puerto Vallarta–San Francisco nonstop with Flights 1369 and 1370. The airline retired the last of its MD-80s in 2008 and now uses Boeing 737s for these routes.
Memorials
Captain Thompson and First Officer Tansky were both posthumously awarded the Air Line Pilots Association Gold Medal for Heroism, in recognition of their actions during the emergency. The Ted Thompson/Bill Tansky Scholarship Fund was named in memory of the two pilots.
The victims' families approved the construction of a memorial sundial, designed by Santa Barbara artist James "Bud" Bottoms, which was placed at Port Hueneme on the California coast. The names of each of the victims are engraved on individual bronze plates mounted on the perimeter of the dial. The sundial casts a shadow on a memorial plaque at 16:22 each January 31.
Many residents of Seattle had been deeply affected by the disaster. As part of a memorial vigil in 2000, a column of light was beamed from the top of the Space Needle. Students and faculty at the John Hay Elementary School in Queen Anne, Seattle, held a memorial for four Hay students who were killed in the crash. In April 2001, John Hay Elementary dedicated the "John Hay Pathway Garden" as a permanent memorial to the students and their families who were killed on Flight 261. The City of Seattle public park Soundview Terrace was renovated in honor of the four Pearson and six Clemetson family members who were killed on board Flight 261 from the same Seattle neighborhood of Queen Anne. The park's playground was named "Rachel's Playground", in memory of six-year-old Rachel Pearson, who was on board the MD-83 and who was often seen playing at the park.
Attempted fraud
Two victims were falsely named in paternity suits as the fathers of children in Guatemala in an attempt to gain insurance and settlement money. Subsequent DNA testing proved these claims to be false.
The crash has appeared in various advance-fee fraud ("419") email scams, in which a scammer uses the name of someone who died in the crash to lure unsuspecting victims into sending money to the scammer by claiming the crash victim left huge amounts of unclaimed funds in a foreign bank account. The names of Morris Thompson and Ronald and Joyce Lake were used in schemes unrelated to them.
In popular culture
In the Canadian TV series Mayday, the flight was featured in the season-one (2003) "Cutting Corners" episode (called Air Emergency and Air Disasters in the U.S. and Air Crash Investigation in the UK and elsewhere around the world). The dramatization was broadcast in the United States with the title "Fatal Error". The flight was also included in a Mayday season-six (2007) Science of Disaster special titled "Fatal Flaw", which was called "Fatal Fix" in the United Kingdom, Australia, and Asia. The crash was covered again (with an entirely new cast) in season 22, episode 5 of Mayday, titled "Pacific Plunge".
The film drama Flight (2012) featured an airplane crash of an aircraft resembling an MD-83, which flies inverted and ultimately crash lands, though the film's version recorded just six fatalities (four passengers, two crew) of the 102 persons aboard. In the film, NTSB investigators determine the probable cause of this crash to be the fatigue of a jackscrew due to excess wear and poor maintenance. The final seconds of the CVR of Flight 261 indicates the plane stabilized and was flying inverted shortly before the crash, an event depicted in the film. Screenwriter John Gatins later explained that the film's featured crash was "loosely inspired" by the events of Flight 261.
Maps
See also
Aeroflot Flight 8641 – another accident resulting from a jackscrew failure
Emery Worldwide Flight 17
Air Moorea Flight 1121
United Airlines Flight 585
Lion Air Flight 610
Japan Air Lines Flight 123
Ethiopian Airlines Flight 302
Delta Air Lines Flight 1080 – another case where an improperly greased stabilizer jammed
China Airlines Flight 611 – another accident caused by similar improper maintenance as Flight 261 leading to inflight break-up 2 years later
References
External links
NTSB Final Report
NTSB investigation docket
Alaska Airlines news reports about 261 (Archive)
Cockpit voice recorder transcript and accident summary
Families of Alaska Airlines Flight 261
"Navy expands search for debris at Alaska Airlines Flight 261 crash scene", United States Navy (Archive)
Applying Lessons Learned from Accidents, Alaska Airlines Flight 261 – Informative analysis at faa.gov, with technical diagrams and photos (Archive)
Federal Aviation Administration – Lessons Learned Home: Alaska Airlines Flight 261
Seattle Post-Intelligencer special report
Short Bios of Flight 261 Passengers
261
Airliner accidents and incidents caused by maintenance errors
Airliner accidents and incidents caused by mechanical failure
Aviation accidents and incidents in the United States in 2000
2000 in California
Accidents and incidents involving the McDonnell Douglas MD-83
Airliner accidents and incidents in California
Articles containing video clips
January 2000 events in the United States
Aviation accidents and incidents caused by loss of control | Alaska Airlines Flight 261 | Materials_science | 5,528 |
16,174,280 | https://en.wikipedia.org/wiki/Isoparametric%20manifold | In Riemannian geometry, an isoparametric manifold is a type of (immersed) submanifold of Euclidean space whose normal bundle is flat and whose principal curvatures are constant along any parallel normal vector field. The set of isoparametric manifolds is stable under the mean curvature flow.
Examples
A straight line in the plane is an obvious example of an isoparametric manifold. Any affine subspace of Euclidean n-dimensional space is another example, since the principal curvatures of any of its shape operators are zero.
Another simple example of an isoparametric manifold is a sphere in Euclidean space.
Another example is as follows. Suppose that G is a Lie group and G/H is a symmetric space with canonical decomposition
g = h ⊕ p
of the Lie algebra g of G into a direct sum (orthogonal with respect to the Killing form) of the Lie algebra h of H and a complementary subspace p. Then a principal orbit of the adjoint representation of H on p is an isoparametric manifold in p. Non-principal orbits are examples of the so-called submanifolds with constant principal curvatures. Actually, by Thorbergsson's theorem, any complete, full and irreducible isoparametric submanifold of codimension > 2 is an orbit of an s-representation, i.e. an H-orbit as above, where the symmetric space G/H has no flat factor.
The theory of isoparametric submanifolds is deeply related to the theory of holonomy groups. Actually, any isoparametric submanifold is foliated by the holonomy tubes of a submanifold with constant principal curvatures i.e. a focal submanifold. The paper "Submanifolds with constant principal curvatures and normal holonomy groups" is a very good introduction to such theory. For more detailed explanations about holonomy tubes and focalizations see the book Submanifolds and Holonomy.
References
See also
Isoparametric function
Riemannian geometry
Manifolds | Isoparametric manifold | Mathematics | 410 |
113,087 | https://en.wikipedia.org/wiki/System%20of%20linear%20equations | In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables.
For example,
3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple
(x, y, z) = (1, −2, −2),
since it makes all three equations valid.
Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry.
Elementary examples
Trivial example
The system of one equation in one unknown
2x = 4
has the solution
x = 2.
However, most interesting linear systems have at least two equations.
Simple nontrivial example
The simplest kind of nontrivial linear system involves two equations and two variables:
2x + 3y = 6
4x + 9y = 15.
One method for solving such a system is as follows. First, solve the top equation for x in terms of y:
x = 3 − (3/2)y.
Now substitute this expression for x into the bottom equation:
4(3 − (3/2)y) + 9y = 15.
This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
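In practice such a system is usually handed to a linear-algebra routine. A minimal NumPy sketch, using the coefficients of the example system as reconstructed above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
#   2x + 3y = 6
#   4x + 9y = 15
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])

print(np.linalg.solve(A, b))   # [1.5 1. ]  i.e. x = 3/2, y = 1
```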
General form
A general system of m linear equations with n unknowns and coefficients can be written as
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm
where x1, x2, ..., xn are the unknowns, a11, a12, ..., amn are the coefficients of the system, and b1, b2, ..., bm are the constant terms.
Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
Vector equation
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination:
x1 a1 + x2 a2 + ... + xn an = b,
where aj = (a1j, a2j, ..., amj) is the column vector of coefficients attached to the unknown xj and b = (b1, b2, ..., bm).
This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.
Matrix equation
The vector equation is equivalent to a matrix equation of the form
Ax = b
where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries.
The number of vectors in a basis for the span is now expressed as the rank of the matrix.
Solution set
A solution of a linear system is an assignment of values to the variables
x1 = s1, x2 = s2, ..., xn = sn
such that each of the equations is satisfied. The set of all possible solutions is called the solution set.
A linear system may behave in any one of three possible ways:
The system has infinitely many solutions.
The system has a unique solution.
The system has no solution.
Geometric interpretation
For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set.
For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the entire line passing through these points.
For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n.
General behavior
In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations.
In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.
In general, a system with the same number of equations and unknowns has a single unique solution.
In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system.
In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
[Three pictures, captioned "One equation", "Two equations", and "Three equations", each showing the equations as lines in the xy-plane.]
The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.
It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).
A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns.
Properties
Independence
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.
For example, the equations
3x + 2y = 6 and 6x + 4y = 12
are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
For a more complicated example, the equations
x − 2y = −1
3x + 5y = 8
4x + 3y = 7
are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
Consistency
A linear system is inconsistent if it has no solution, and otherwise it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, one that may always be rewritten as the statement 0 = 1.
For example, the equations
3x + 2y = 6 and 3x + 2y = 12
are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations
x + y = 1
2x + y = 1
3x + 2y = 3
are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations.
In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1.
Equivalence
Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set.
Solving a linear system
There are several algorithms for solving a system of linear equations.
Describing the solution
When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example x = 3, y = −2, z = 6. When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like (3, −2, 6) for the previous example.
To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.
For example, consider the following system:
x + 3y − 2z = 5
3x + 5y + 6z = 7.
The solution set to this system can be described by the following equations:
x = −1 − 7z and y = 2 + 3z.
Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y.
Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or higher-dimensional set.
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:
y = 11/7 − (3/7)x and z = −1/7 − (1/7)x.
Here x is the free variable, and y and z are dependent.
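Computer algebra systems can produce such parameterized descriptions directly. A small sketch with SymPy, applied to the two-equation system above (as reconstructed here), returns the solution set with z left free:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
system = [sp.Eq(x + 3*y - 2*z, 5),
          sp.Eq(3*x + 5*y + 6*z, 7)]

# linsolve returns the solution set parameterized by the free variable z
print(sp.linsolve(system, x, y, z))   # {(-7*z - 1, 3*z + 2, z)}
```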
Elimination of variables
The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:
In the first equation, solve for one of the variables in terms of the others.
Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown.
Repeat steps 1 and 2 until the system is reduced to a single linear equation.
Solve this equation, and then back-substitute until the entire solution is found.
For example, consider the following system (the original coefficients were lost in extraction; the system below is reconstructed to be consistent with the solution steps and the final answer given in this section):
x + y − z = −9
x + 2y + z = 3
2x + 3y + z = −4.
Solving the first equation for x gives x = −9 − y + z, and plugging this into the second and third equations yields
y = 12 − 2z
y = 14 − 3z.
Since the left-hand sides of both of these equations equal y, equating their right-hand sides gives 12 − 2z = 14 − 3z. We now have:
z = 2.
Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple (x, y, z) = (−15, 8, 2).
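Since this system is square with an invertible coefficient matrix, the result can be checked numerically. A quick NumPy verification of the worked example above (using the coefficients as reconstructed there):

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0],
              [1.0, 2.0,  1.0],
              [2.0, 3.0,  1.0]])
b = np.array([-9.0, 3.0, -4.0])

print(np.linalg.solve(A, b))   # [-15.   8.   2.]
```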
Row reduction
In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix; for the example above:
[ 1  1 -1 | -9 ]
[ 1  2  1 |  3 ]
[ 2  3  1 | -4 ]
This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:
Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.
Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above, clearing the first, second and third columns in turn:
[ 1  1 -1 |  -9 ]   [ 1  1 -1 |  -9 ]   [ 1  0 -3 | -21 ]   [ 1  0  0 | -15 ]
[ 1  2  1 |   3 ] → [ 0  1  2 |  12 ] → [ 0  1  2 |  12 ] → [ 0  1  0 |   8 ]
[ 2  3  1 |  -4 ]   [ 0  1  3 |  14 ]   [ 0  0  1 |   2 ]   [ 0  0  1 |   2 ]
The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
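For illustration, a minimal sketch of Gauss–Jordan elimination in Python; the helper name rref and the zero-pivot tolerance are arbitrary choices.

```python
def rref(m, eps=1e-12):
    """Reduce an augmented matrix to reduced row echelon form."""
    m = [row[:] for row in m]  # work on a copy
    rows, cols = len(m), len(m[0])
    pr = 0  # next pivot row
    for col in range(cols - 1):  # the last column holds the constants
        # Type 1: swap in the row with the largest entry in this column.
        best = max(range(pr, rows), key=lambda r: abs(m[r][col]))
        if abs(m[best][col]) < eps:
            continue  # no usable pivot in this column
        m[pr], m[best] = m[best], m[pr]
        # Type 2: scale the pivot row so the pivot becomes 1.
        p = m[pr][col]
        m[pr] = [v / p for v in m[pr]]
        # Type 3: subtract multiples of the pivot row from every other row.
        for r in range(rows):
            if r != pr:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pr])]
        pr += 1
    return m

for row in rref([[1, 3, -2, 5], [3, 5, 6, 7], [2, 4, 3, 8]]):
    print(row)  # -> [1, 0, 0, -15], [0, 1, 0, 8], [0, 0, 1, 2]
```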
Cramer's rule
Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system

ax + by = e
cx + dy = f

is given by

x = (ed − bf) / (ad − bc),   y = (af − ec) / (ad − bc).
For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.)
Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
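For illustration, a sketch of Cramer's rule using NumPy determinants; the helper name cramer is made up, and as noted above this approach is only sensible for small systems.

```python
import numpy as np

def cramer(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    det_a = np.linalg.det(a)  # determinant of the coefficient matrix
    if abs(det_a) < 1e-12:
        raise ValueError("coefficient matrix is singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        ai = a.copy()
        ai[:, i] = b  # replace column i with the vector of constants
        x[i] = np.linalg.det(ai) / det_a
    return x

# The same system as in the elimination example: solution (-15, 8, 2).
print(cramer([[1, 3, -2], [3, 5, 6], [2, 4, 3]], [5, 7, 8]))
```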
Matrix solution
If the equation system is expressed in the matrix form Ax = b, the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n = m columns) and has full rank (all m rows are independent), then the system has a unique solution given by

x = A⁻¹b

where A⁻¹ is the inverse of A. More generally, regardless of whether m = n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A⁺, as follows:

x = A⁺b + (I − A⁺A)w

where w is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 satisfy Ax = b — that is, that AA⁺b = b. If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A⁺ simply equals A⁻¹ and the general solution equation simplifies to

x = A⁻¹b

as previously stated, where w has completely dropped out of the solution, leaving only a single solution. In other cases, though, w remains and hence an infinitude of potential values of the free-parameter vector w give an infinitude of solutions of the equation.
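For illustration, a sketch of the general solution above using NumPy's pseudoinverse; the matrix, right-hand side, and free-parameter vector are made-up values.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # 2x3, rank 2: more unknowns than equations
b = np.array([6.0, 15.0])

A_pinv = np.linalg.pinv(A)  # Moore-Penrose inverse A+
# Consistency test from the text: a solution exists iff A A+ b = b.
assert np.allclose(A @ A_pinv @ b, b), "system is inconsistent"

w = np.array([1.0, 0.0, -1.0])  # any free-parameter vector
x = A_pinv @ b + (np.eye(3) - A_pinv @ A) @ w  # general solution formula
print(np.allclose(A @ x, b))  # -> True: x solves the system
```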
Other methods
While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.
If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.
A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. One example of an iterative method is the Jacobi method, where the matrix A is split into its diagonal component D and its non-diagonal component R = A − D. An initial guess x(0) is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation:

x(k+1) = D⁻¹(b − Rx(k))
When the difference between successive guesses x(k) and x(k+1) is sufficiently small, the algorithm is said to have converged on the solution.
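For illustration, a minimal sketch of the Jacobi method in Python, assuming a matrix for which the iteration converges (for example, a strictly diagonally dominant one); the function name and tolerance are arbitrary choices.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)          # diagonal component of A
    R = A - np.diagflat(D)  # non-diagonal component of A
    x = np.zeros_like(b)    # initial guess x(0)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D  # x(k+1) = D^-1 (b - R x(k))
        if np.linalg.norm(x_new - x) < tol:
            return x_new         # successive guesses close: converged
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Strictly diagonally dominant example with solution (1, 2):
print(jacobi([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0]))
```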
There is also a quantum algorithm for linear systems of equations.
Homogeneous systems
A system of linear equations is homogeneous if all of the constant terms are zero:
A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
Homogeneous solution set
Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:
If u and v are two vectors representing solutions to a homogeneous system, then the vector sum is also a solution to the system.
If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.
These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A.
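For illustration, these closure properties can be checked numerically; the sketch below uses SciPy's orthonormal null-space basis on a made-up matrix.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
N = null_space(A)  # columns form a basis of the solution set of A x = 0
print(np.allclose(A @ N, 0))  # -> True

# Sums and scalar multiples of solutions are again solutions:
u = N[:, 0]
print(np.allclose(A @ (u + 2.5 * u), 0))  # -> True
```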
Relation to nonhomogeneous systems
There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:
Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as

{p + v : v is any solution to Ax = 0}.

Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p.
This reasoning only applies if the system has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A.
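For illustration, a sketch of this translation on a made-up consistent system: one particular solution plus any null-space vector again solves the system.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([6.0, 15.0])  # consistent: b = A @ (1, 1, 1)

p = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution
N = null_space(A)                         # basis of the homogeneous solutions
for t in (-1.0, 0.0, 2.0):
    x = p + N @ np.array([t])             # translate along the subspace
    print(np.allclose(A @ x, b))          # -> True for every t
```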
See also
Arrangement of hyperplanes
Simultaneous equations
References
Bibliography
Further reading
External links
Equations
Linear algebra
Numerical linear algebra | System of linear equations | Mathematics | 4,182 |
33,966,580 | https://en.wikipedia.org/wiki/Human%20placentophagy | Human placentophagy, or consumption of the placenta, is defined as "the ingestion of a human placenta postpartum, at any time, by any person, either in raw or altered (e.g., cooked, dried, steeped in liquid) form". Placentophagy can be divided into two categories, maternal placentophagy and non-maternal placentophagy.
While there are several anecdotes of different cultures practicing placentophagy in varying contexts, maternal placentophagy started in the US in the 1970s, with little to no evidence of its practice in any traditional or historic culture. Midwives and alternative-health advocates in the U.S. are the primary groups encouraging post-partum maternal placentophagy.
Maternal placentophagy has a small following in Western cultures, fostered by celebrities like January Jones. The placenta has high protein, rich iron and nutrient content, but there is inconclusive scientific evidence about any health benefit to its consumption. The risks of human placentophagy are also still unclear, but there has been one confirmed case of an infant needing hospitalization due to a group B strep blood infection tied to their mother's consumption of placenta capsules.
Maternal placentophagy
Maternal placentophagy is defined as "a mother's ingestion of her own placenta postpartum, in any form, at any time". Of the more than 4000 species of placental mammals, most, including herbivores, regularly engage in maternal placentophagy, thought to be an instinct to hide any trace of childbirth from predators in the wild. The exceptions to placentophagy mainly include humans, pinnipeds, sirenians, cetaceans, perissodactyls, and camelids.
Non-maternal placentophagy
Non-maternal placentophagy is defined as "the ingestion of the placenta by any person other than the mother, at any time". Such instances of placentophagy have been attributed to the following: a shift toward carnivorousness at parturition, specific hunger, and general hunger. In most eutherian mammals, the placenta is consumed postpartum by the mother. Historically, humans have more commonly consumed the placenta of another woman under special circumstances.
Historical occurrences
In a 1979 article in the Bulletin of the New York Academy of Medicine, William Ober evaluated the possibility that certain ancient cultures that practiced human sacrifice also practiced human placentophagy. However, a 2010 survey of 179 societies found that none practices placentophagy regularly. A 2007 study similarly found that placentophagy has never been described as a culturally normative practice in any historical source.
Placentophagy might have occurred during the Siege of Jerusalem (587 BC), due to the famine experienced by the Judeans, according to scholar Jack Miles in his Pulitzer Prize-winning God: A Biography.
Decline of maternal placentophagy in humans
From an evolutionary perspective, it appears that the human species must have stopped practicing maternal placentophagy at a fairly early stage, since there is no evidence that it has ever been common. One hypothesis that has been offered is that the smoke of firewood caused environmental toxins to accumulate in the placenta, leading to harmful health outcomes for prehistoric mothers who stayed close to the community hearth and ate their placentas. However, there is no direct evidence for a taboo against placentophagy in human myth. The shift away from placentophagy may have occurred over one million years before present. It may have been the consequence of a more aquatic lifestyle, in agreement with the absence of placentophagy in aquatic mammals (cetacea, pinnipeds and sirenia).
Traditional medicine
Human placenta has been used traditionally in Chinese medicine, though the mother is not identified as the recipient of these treatments. A sixteenth-century Chinese medical text, the Compendium of Materia Medica, states in a section on medical uses of the placenta that, "when a woman in Liuqiu has a baby, the placenta is eaten", and that in Bagui, "the placenta of a boy is specially prepared and eaten by the mother’s family and relatives." Another Chinese medical text, the Great Pharmacopoeia of 1596, recommends placental tissue mixed with human milk to help overcome the effects of qi exhaustion. Dried, powdered placenta would be stirred into three wine-cups of milk to make a Connected Destiny Elixir. The elixir would be warmed in sunlight, then taken as treatment. It is not known exactly how traditional this remedy was, nor exactly how far back it dates.
In Jamaica, bits of placental membranes were put into an infant's tea to prevent convulsions caused by ghosts. In ancient Egypt, as well, pieces of placenta were soaked in milk and fed to the infant to test for infant mortality.
The Chaga of Tanganyika place the placenta in a receptacle for two months to dry. Once dry, it is ground into flour from which a porridge is made. The porridge is served to old women of the family as a way of preserving the child's life.
In Central India, women of the Kol Tribe eat placenta to aid reproductive function. It is believed that consumption of placenta by a childless woman "may dispel the influences that keep her barren".
The Kurtachi of the Solomon Islands mixed placenta into the mother's supply of powdered lime for chewing with the areca nut.
In the Maremma region of Italy it was at one time common to mix pieces of placenta into the food of a new mother without her knowledge, to promote a healthy flow of milk.
Cultural and spiritual beliefs
Beliefs behind the practices of consuming the placenta, whether in part or in whole, commonly reflect acknowledgment for the vast work of this organ for the baby in utero, serving as its 'protector' and providing critical vital functions for the baby before birth. The placenta can be seen as the Tree of Life, as a genetic 'twin' to the fetus, an angel, and reasons for ingesting the placenta may reflect spiritual beliefs as much as the pragmatic ones listed above. Traditional practices to revere and honor the placenta that do not include consumption may include placenta burial, such as in Saudi Arabia. Such traditions reflect human birthing practices wherein umbilical cords may not have been severed while the cord is still pulsing, avoiding blood loss and infection, and may include practices that retain the placental connection until after it has been delivered and the baby is already nursing.
Modern placentophagy
Modern practice of placentophagy is rare, as most contemporary human cultures do not promote its consumption. Placentophagy did receive popular culture attention in 2012, however, when American actress January Jones credited eating her placenta as helping her get back to work on the set of Mad Men after just six weeks.
Instances of placentophagy have been recorded among certain modern cultures. In the 1960s "male and female Vietnamese nurses and midwives of Chinese and Thai background consum[ed] the placentas of their young, healthy patients" for reasons unspecified, as reported by a Czechoslovakian medical officer at the Hospital of Czechoslovak-Vietnamese Friendship in Haiphong. Placentas were stripped of their membranous parts and fried with onions before being eaten.
A more recent cross-cultural ethnographic study by researchers at the University of Nevada, Las Vegas surveyed 179 contemporary human societies, and identified only one culture (Chicano, or Mexican-American) that mentioned the practice of maternal placentophagy. This account, centering on Chicano and Anglo midwifery in San Antonio, Texas, stated, "cooking and eating part of the placenta has…been reported by a couple of midwives. One Anglo mother ... was reported to have roasted the placenta." This instance, however, may not be indicative of any larger cultural trends, as no other records of placentophagy were found in the Chicano culture. This same study also recorded three references of non-maternal placentophagy:
Traditional Gullah medicine dictates that when a baby is born with a caul, with amniotic membranes over the face at birth, the placenta is made into a tea and then consumed by the child to "prevent them from seeing spirits that would otherwise haunt [them]".
Practice of paternal placentophagy was identified in the Malekula of Melanesia. "In Espiritu Santo, the new father [eats] a pudding made from the cooked placenta and blood."
Oral administration of the placenta was reported in Sino-Vietnamese medicine to aid the recovery of those suffering from tuberculosis.
In a follow-up study, the UNLV researchers were joined by colleagues at the University of South Florida, and surveyed women who had engaged in maternal placentophagy previously. Of the 189 placentophagic women surveyed, the researchers found that 95 percent of participants had "positive" or "very positive" subjective experiences from eating their own placenta, citing beliefs of "improved mood", "increased energy", and "improved lactation". The authors themselves, however, state that "exceedingly little research has been conducted to assess these claims and no systematic analysis has been performed to evaluate the experiences of women who engage in this behavior." In the United States as many as 30% of women who planned community births may consume the placenta, often citing avoidance of postpartum depression as the reason.
Current beliefs among placentophagists
During pregnancy, women often become iron deficient because iron is transported across the placenta to the fetus. Because low levels of iron are known to negatively affect mood, researchers are exploring the possible link between iron status and PPD. Placentophagy advocates claim that the placenta provides an excellent source of dietary iron, and may therefore improve maternal postpartum iron status. However, a recent randomized, double-blind, placebo-controlled pilot study conducted by researchers at UNLV found that consuming a commonly recommended daily intake of encapsulated placenta (approximately 3,000 mg per day) only provides about one-quarter of the RDA for iron for lactating women. The study found no differences in maternal iron status over a three-week postpartum period between women consuming 3300 mg/day of cooked, encapsulated placenta, and study participants taking a beef "placebo".
Preparation
In many areas placenta encapsulation specialists can be found to professionally prepare the placenta for consumption. Also, many online alternative health sources give instructions for preparing it personally. One common method of preparation is encapsulation. The encapsulation process can be one of two ways: steamed or raw. With the steamed encapsulation process, the placenta is gently steamed with various herbs (ginger, lemon, frankincense, myrrh, etc.), then fully dehydrated, ground into a fine powder, and put into capsules. The raw method does not involve steaming first. The placenta will be fully dehydrated, then ground and put into capsules.
Controversy
Many researchers remain skeptical of whether the practice of placentophagy is of value to humans. A 2015 review found that while a minority of women in western countries perceive placentophagy as reducing the risk of postpartum depression and enhancing recovery, there is no evidence that this is the case. The same study also found inconclusive evidence that placentophagy was of any benefit to facilitating uterine contraction, resumption of normal cyclic estrogen cycle, and milk production. As well, the authors stated that the risks of placentophagy also warrant more investigation.
A researcher who had previously researched why animals eat their placentas stated in 2007 that "people can believe what they want, but there's no research to substantiate claims of human benefit. The cooking process will destroy all the protein and hormones. Drying it out or freezing it would destroy other things." UNLV researchers found that some essential minerals and steroid hormones remained in human placenta that was cooked and processed for encapsulation and consumption.
Although human placentophagy entails the consumption of human tissue by a human or humans, its status as cannibalism is debated.
See also
Fetal cannibalism
Medical cannibalism
References
External links
Food and drink introduced in the 1970s
Placenta
Cultural anthropology
Alternative medicine
Carnivory
Cannibalism | Human placentophagy | Biology | 2,622 |
78,421,468 | https://en.wikipedia.org/wiki/Corsehill%20%28stone%29 | Corsehill stone is a type of building stone, extracted from Corsehill Quarry in Annandale, Dumfries and Galloway, Scotland. It is a red sandstone of Triassic age, used extensively for buildings in the 19th and 20th centuries.
Quarry
On November 8, 1993, the United States Senate passed a resolution calling for the construction of a memorial to honour the victims of the Lockerbie Bombing. Blocks of red sandstone from the Corsehill Quarry were used to build the Lockerbie Bombing cairn in Arlington National Cemetery.
References
Building materials | Corsehill (stone) | Physics,Engineering | 112 |
24,505,448 | https://en.wikipedia.org/wiki/Gymnopilus%20velutinus | Gymnopilus velutinus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus velutinus at Index Fungorum
velutinus
Fungus species | Gymnopilus velutinus | Biology | 55 |
4,299,762 | https://en.wikipedia.org/wiki/Eyes%20Galaxies | The Eyes Galaxies (NGC 4435-NGC 4438, also known as Arp 120) are a pair of galaxies about 52 million light-years away in the constellation Virgo. The pair are members of the string of galaxies known as Markarian's Chain.
NGC 4435
NGC 4435 is a barred lenticular galaxy currently interacting with NGC 4438. Studies of the galaxy by the Spitzer Space Telescope revealed a relatively young (190 million years) stellar population within the galaxy's nucleus, which may have originated through the interaction with NGC 4438 compressing gas and dust in that region, triggering a starburst. It also appears to have a long tidal tail possibly caused by the interaction; however, other studies suggest the apparent tail is actually foreground galactic cirrus within the Milky Way unrelated to NGC 4435.
NGC 4438
NGC 4438 is the most curious interacting galaxy in the Virgo Cluster, due to the uncertainty surrounding the energy mechanism that heats the nuclear source; this energy mechanism may be a starburst region, or a black hole-powered active galactic nucleus (AGN). Both hypotheses are currently under investigation by astronomers.
This galaxy shows a highly distorted disk, including long tidal tails due to the gravitational interactions with other galaxies in the cluster and its companion. The aforementioned features explain why sources differ as to its classification, defining it either as a lenticular or spiral galaxy. NGC 4438 also shows signs of a past, extended - but modest - starburst, a considerable deficiency of neutral hydrogen, as well as a displacement of the components of its interstellar medium - atomic hydrogen, molecular hydrogen, interstellar dust, and hot gas - in the direction of NGC 4435. This observation suggests both a tidal interaction with NGC 4435 and the effects of ram-pressure stripping as NGC 4438 moves at high speed through Virgo's intracluster medium, increased by the encounter between both galaxies.
As interacting galaxies
While there is evidence to suggest that the environmental damage to the interstellar medium of NGC 4438 may have been caused by an off-center collision with NGC 4435 millions of years ago, a recent discovery of several filaments of ionized gas links NGC 4438 with the large neighboring elliptical galaxy Messier 86, in addition to a discovery of gas and dust within M86 that may have been stripped from NGC 4438 during a past encounter between the two. Given the high density of galaxies in the center of the Virgo galaxy cluster, it is possible that the three galaxies, NGC 4435, NGC 4438, and M86, have had past interactions.
In popular culture
In the 2014 film Interstellar, "NGC 4438" along with specific observation data can be seen in Murphy Cooper (Jessica Chastain)'s notepad during the film's climactic sequence. As the presence of a supermassive black hole in the AGN of NGC 4438 is one of two leading theories, the galaxy is potentially the one accessed by the wormhole in the film.
References
External links
NGC 4438
Virgo (constellation)
Virgo Cluster
Interacting galaxies
NGC objects
120 | Eyes Galaxies | Astronomy | 638 |
45,046,380 | https://en.wikipedia.org/wiki/Sinch%20AB | Sinch AB, formerly CLX Communications, is a communications platform as a service (CPaaS) company which powers messaging, voice, and email communications between businesses and their customers. Headquartered in Stockholm, Sweden, the company employs over 4000 people in more than 60 countries.
History
CLX Communications
CLX Communications, better known as CLX, was founded in 2008 by Johan Hedberg, Robert Gerstmann, Kristian Männik, Henrik Sandell, Björn Zethraeus and Kjell Arvidsson, as a telecommunications and cloud communications platform as a service (PaaS) company. CLX acquired numerous companies in the industry between 2009 and 2018, and continued to do so under the brand name Sinch from 2019.
In 2014, CLX acquired Voltari's mobile messaging business in the US and Canada.
After announcing its intention to proceed with an initial public offering (IPO) in September 2015, CLX completed the IPO of its shares and began trading on Nasdaq Stockholm in October 2015, with the introduction price set at SEK 59. The company is listed on Nasdaq Stockholm, on the Mid Cap list under the Technology sector.
In 2016 CLX acquired Mblox for US$117 million. At the time, Mblox was one of the largest messaging service providers in the world, delivering 7 billion messages in 2015.
In 2017, CLX acquired German provider Xura Secure Communications GmbH in February 2017, and UK-based Dialogue in May 2017.
Also in 2017, CLX entered into a strategic partnership with Google "to provide the next generation of messaging services to brand marketers using Rich Communications Services (RCS) standard embedded directly into consumers' native messaging apps". Part of Google's Early Access Program (EAP), CLX enabled enterprises to build with RCS and was one of the first companies to offer an upgraded messaging experience.
In March 2018, CLX entered into a definitive agreement to acquire Danish company Unwire Communication ApS for DKK 148 million. Upon completion, CLX became the largest CPaaS provider in the Nordic region. The following month, they announced that they had acquired Seattle-based company Vehicle for US$8 million.
Symsoft
Symsoft was founded in Stockholm in 1989, to supply charging and messaging services to mobile operators. The company was initially founded as a consultancy company but shifted focus in the late 1990s to sell products. Formerly listed on Nasdaq Stockholm, it provided mobile communications in the areas of Real-Time BSS, SMS and other messaging services, with SS7 security for mobile network operators (MNOs), mobile virtual network operators (MVNOs) and mobile virtual network enablers (MVNEs).
The first deployment of the Symsoft SMS real-time charging was with TeliaMobile, now Telia Company, in Sweden in 1999. Building on the Telia case and other early projects relating to mobile messaging and real-time charging Symsoft launched the Symsoft Service Delivery Platform (formerly known as the Nobill platform).
In 2009, CLX acquired Symsoft, which then formed the Operator Division of CLX Communications. The division offered software and services to customers in the areas of IoT platforms, real-time BSS, VAS, fraud prevention solutions for revenue retention and full-service solutions for virtual operators.
As of 2016, Symsoft had experience in upgrading and replacing legacy systems in the areas of BSS, Value Added Services and Security. The company serves mobile operators, MVNOs and MVNEs such as America Móvil, Virgin Mobile, Polkomtel, Saudi Telecom, Simfonics Telefónica, Telia Company, Unify Mobile and 3 in 30+ countries.
Mblox
Mblox Inc. was a global company that provided a Carrier-grade SaaS-based mobile messaging platform to enterprises, including global one-way and two-way SMS, MMS, push notifications, ring-tones, shortcodes and virtual mobile numbers. Mblox Ltd. was founded in 1999, by Andrew Bud in London, England. In 2003, MBlox, Ltd. was acquired by a newly restructured and capitalized Mobilesys, Inc. Founded by David Coelho, the company assumed the name MBlox, Inc. at Andrew Bud's insistence and operated dual headquarters in Sunnyvale, California, and London, England. This transaction was spearheaded by Q-Advisors, a boutique investment bank founded by Michael Quinn in 2001 and located in Denver, Colorado. The CEO of mBlox, Inc. was William "Chip" Hoffman, who architected and executed the transaction with the help of Gary Cuccio (Chairman), and Jay Emmet (COO). The new company - Mblox Inc. – became the world leader in SMS technology use and carrier clearing and billing, growing from $1.75 million in 2002 to over $2 billion in pro forma revenue by April 2004 (19 months). This growth was primarily organic with geographic expansion into over 30 countries, including London, San Francisco, Atlanta, Stockholm, Paris, Munich, Madrid, Singapore, Manila, and Sydney.
In 2003, the company acquired carrier-grade message delivery capabilities through the purchase of an SMSC from Comverse. After stepping down in mid-2004, William "Chip" Hoffman handed the reins to then-Chairman Gary Cuccio, former Senior Executive with Pacific Telesis, and COO of the wildly successful Omnipoint Wireless. Cuccio added the CEO role to his chairmanship.
From 2005 to 2006, the company opened offices in Singapore and Australia. The year 2009 was marked by the opening of the Milan office, Italy.
2010: Acquired Mashmobile (Sweden)
2011: Added direct Latin America two-way messaging services
2012: Added push and rich-push messaging capability
2013: Opened offices in Brazil and South Africa
2014: Acquired Zoove and CardBoardFish; increasing reach and product offerings
2015: Mblox acquired 4INFO SMS Services
2016: Sold Zoove
2016: Mblox was acquired by CLX Communications AB (Publ)
In December 2017, the parent company CLX Communications retired the Mblox and CardBoardFish brands.
Public criticism and fines
Starting in 2006, with the release of the globally successful "Crazy Frog" ring-tone, MBlox became a target for regulators. Since phone bills from some carriers attribute charges to the company doing the billing, rather than to the business that actually sold and provided the service, Mblox had been accused in Internet forums of enabling a process called cramming, or automatically signing mobile customers up for unsolicited services and billing them accordingly. As of 2008, the company had been fined 22 times for cramming-related offences, totaling hundreds of thousands of dollars. Since Mblox had never had any involvement, direct or indirect, in creating or promoting such services, its responsibility was to try to prevent its customers abusing the SMS-based billing services it provided them. Following the sudden spate of problems in 2008–09, in 2010 the UK regulator Phonepay Plus commended Mblox for "a significant investment in new technology, personnel and resources to aid compliance and prevent further harm occurring to consumers from services operating over its platform." In the UK, Mblox had not been cited in any case since December 2011, at which time the regulator described its actions as "exemplary". On March 19, 2013, Mblox sold its PSMS business to OpenMarket as it chose to focus exclusively on enterprise-to-consumer mobile messaging. This brought an end to its involvement in mobile payments.
Sinch
Sinch was initially founded in May 2014 by Andreas Bernström in Stockholm and San Francisco. Originally the technology behind Rebtel, Sinch was spun-out with $12 million in funding, focusing on the mobile first, app developer market. Sinch launched its Voice and Instant Messaging products in May 2014, and quickly launched their SMS API product at the end of 2014.
In 2016, CLX also acquired Sinch for a total consideration of SEK 138.9 million on a debt-free basis.
Brand Unification
In February 2019, it was announced that CLX had "launched a new corporate brand and visual identity" to unify all its business units under the same name. After acquiring Sinch in 2016, they have now leveraged the brand name to encompass the entire company to "more accurately and immediately [depict] its current offerings and mission".
Sinch effectively has inherited the history of CLX Communications and is considered a different brand to the one launched in 2014.
CLX completed an initial public offering (IPO) and were listed on Nasdaq Stockholm in October 2015. Since February 2019, the company can be found on the Mid Cap list under the Technology sector under the ticker 'SINCH'.
In September 2019, Sinch acquired myElefant, a Software-as-a-Service platform for rich interactive messaging, for an upfront cash consideration of EUR 18.5 million, with an additional cash earnout of up to three million within two years if certain gross profit targets were met.
In October 2019, Sinch acquired TWW do Brasil S.A., one of the largest SMS connectivity providers in Brazil, for an enterprise value of BRL 180.75 million.
In March 2020, Sinch entered into a definitive agreement to acquire Chatlayer BV, a cloud-based chatbot and voicebot platform, for an enterprise value of EUR 6.9 million.
Also in March 2020, Sinch acquired Brazilian messaging provider Wavy for a total cash consideration of BRL 355 million and 1,534,582 new shares in Sinch. This corresponded to an enterprise value of SEK 1,187 million.
In May 2020, Sinch acquired SAP Digital Interconnect (SDI) — a unit within SAP offering services in programmable communications, carrier messaging, and enterprise solutions — for a total cash consideration of EUR 225 million.
In June, 2020, Sinch acquired ACL Mobile Ltd (ACL), a vendor of communications services in India and Southeast Asia, for a total consideration of INR 5,350 million (approximately SEK 655 million).
In February 2021, Sinch acquired Inteliquent, an interconnection provider for voice communications, for $1.14 billion.
In June 2021, Sinch acquired MessageMedia for a total enterprise value of USD 1.3 billion, with a total cash consideration of USD 1.1 billion and 1,128,487 new shares in Sinch. MessageMedia provides mobile messaging solutions for small and medium-sized businesses in the United States and Australia, New Zealand, and Europe.
In September 2021, Sinch entered into a definitive agreement to acquire MessengerPeople (now Sinch Engage), a SaaS-based conversational messaging platform, for a total enterprise value of EUR 48 million, with a total cash consideration of EUR 33.6 million and EUR 14.4 million paid in the form of new shares in Sinch.
In September 2021, Sinch acquired cloud-based email delivery platform Pathwire (now Sinch Mailgun, Sinch Mailjet, and Sinch Email on Acid) for a cash consideration of USD 925 million and 51 million new shares in Sinch, which corresponds to an enterprise value of approximately USD 1.9 billion.
Competitors
Sinch's main competitors are Twilio, Infobip, and Vonage.
Controversy
In May 2022, the Federal Trade Commission (FTC) of the United States of America sent a cease-and-desist letter to the Inteliquent division of Sinch for hosting illegal robocall campaigns. In the cease-and-desist letter, the FTC cited robocalls from Social Security Administration imposters, AT&T/DirecTV imposters, utility disconnection/rebate/rate reduction scams, auto warranty robocalls, and credit card interest rate reduction robocalls.
Awards
CEO and founder Andreas Bernström was named one of The 9 Most Innovative People in VoIP in 2014.
See also
Cloud communications
GSMA
Short Messaging Service
Telecommunications
References
Cloud computing providers
Mobile telecommunication services
Mobile technology
Telecommunications companies of Sweden
Cloud communication platforms
Telecommunications companies established in 2008
Swedish companies established in 2008
2016 mergers and acquisitions
Companies based in Stockholm
Companies listed on Nasdaq Stockholm
Companies in the OMX Stockholm 30 | Sinch AB | Technology | 2,543 |
37,464,576 | https://en.wikipedia.org/wiki/Russula%20silvicola | Russula silvicola is a species of agaric fungus in the family Russulaceae. Found in North America, it was described as new to science in 1975. It is considered inedible. It has a strong peppery flavor.
See also
List of Russula species
References
External links
silvicola
Fungi described in 1975
Fungi of North America
Inedible fungi
Fungus species | Russula silvicola | Biology | 81 |
23,974,743 | https://en.wikipedia.org/wiki/C10H21N | {{DISPLAYTITLE:C10H21N}}
The molecular formula C10H21N (molar mass: 155.28 g/mol, exact mass: 155.1674 u) may refer to:
Levopropylhexedrine (Eventin)
Pempidine
Propylhexedrine (Benzedrex) | C10H21N | Chemistry | 75 |
26,840,918 | https://en.wikipedia.org/wiki/The%20New%20Media%20Reader | The New Media Reader is a new media textbook edited by Noah Wardrip-Fruin and Nick Montfort and published through The MIT Press. The reader features essays from a variety of contributors such as Lev Manovich, Richard Stallman, and Alan Turing. It is currently in use at multiple college campuses including Brown University
", Duke University, and the University of California at Santa Cruz.
The purpose of this book, as described by the authors, is to articulate what has often gone unheard and under the radar. They made an extra effort to include illustrations that were originally intended but often neglected in subsequent printings. This book hopes to "assemble a representative collection of critical thoughts, events, and developments...as a new medium, or enabling a new media." By new media they are not referring to new media at this given moment in time, but rather media that was new and original at the time of its introduction. They mention that many of these ideas seemed radical and unorthodox at the time but have paved the way for many modern ideas, and the authors hope to "provide understanding and offer fuel for inspiration".
References
External links
The New Media Reader on Amazon.com
New Media from Borges to HTML - Lev Manovich
New media
Media studies textbooks
2003 non-fiction books
Books about the Digital Revolution
Electronic literature works
Electronic literature criticism | The New Media Reader | Technology | 275 |
6,415,314 | https://en.wikipedia.org/wiki/Chromosome%20abnormality | A chromosomal abnormality, chromosomal anomaly, chromosomal aberration, chromosomal mutation, or chromosomal disorder is a missing, extra, or irregular portion of chromosomal DNA. These can occur in the form of numerical abnormalities, where there is an atypical number of chromosomes, or as structural abnormalities, where one or more individual chromosomes are altered. Chromosome mutation was formerly used in a strict sense to mean a change in a chromosomal segment, involving more than one gene. Chromosome anomalies usually occur when there is an error in cell division following meiosis or mitosis. Chromosome abnormalities may be detected or confirmed by comparing an individual's karyotype, or full set of chromosomes, to a typical karyotype for the species via genetic testing.
Sometimes chromosomal abnormalities arise in the early stages of an embryo, sperm, or infant. They can be caused by various environmental factors. The implications of chromosomal abnormalities depend on the specific problem and may have quite different ramifications. Some examples are Down syndrome and Turner syndrome.
Numerical abnormality
An abnormal number of chromosomes is known as aneuploidy, and occurs when an individual is either missing a chromosome from a pair (resulting in monosomy) or has more than two chromosomes of a pair (trisomy, tetrasomy, etc.). Aneuploidy can be full, involving a whole chromosome missing or added, or partial, where only part of a chromosome is missing or added. Aneuploidy can occur with sex chromosomes or autosomes.
Rather than having monosomy, or only one copy, the majority of aneuploid people have trisomy, or three copies of one chromosome. An example of trisomy in humans is Down syndrome, which is a developmental disorder caused by an extra copy of chromosome 21; the disorder is therefore also called "trisomy 21".
An example of monosomy in humans is Turner syndrome, where the individual is born with only one sex chromosome, an X.
Sperm aneuploidy
Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, risk of aneuploidy is increased by tobacco smoking, and occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa.
Structural abnormalities
When the chromosome's structure is altered, this can take several forms:
Deletions: A portion of the chromosome is missing or has been deleted. Known disorders in humans include Wolf–Hirschhorn syndrome, which is caused by partial deletion of the short arm of chromosome 4; and Jacobsen syndrome, also called the terminal 11q deletion disorder.
Duplications: A portion of the chromosome has been duplicated, resulting in extra genetic material. Known human disorders include Charcot–Marie–Tooth disease type 1A, which may be caused by duplication of the gene encoding peripheral myelin protein 22 (PMP22) on chromosome 17.
Inversions: A portion of the chromosome has broken off, turned upside down, and reattached, therefore the genetic material is inverted.
Insertions: A portion of one chromosome has been deleted from its normal place and inserted into another chromosome.
Translocations: A portion of one chromosome has been transferred to another chromosome. There are two main types of translocations:
Reciprocal translocation: Segments from two different chromosomes have been exchanged.
Robertsonian translocation: An entire chromosome has attached to another at the centromere - in humans, these only occur with chromosomes 13, 14, 15, 21, and 22.
Rings: A portion of a chromosome has broken off and formed a circle or ring. This happens with or without the loss of genetic material.
Isochromosome: Formed by the mirror image copy of a chromosome segment including the centromere.
Chromosome instability syndromes are a group of disorders characterized by chromosomal instability and breakage. They often lead to an increased tendency to develop certain types of malignancies.
Inheritance
Most chromosome abnormalities occur as an accident in the egg cell or sperm, and therefore the anomaly is present in every cell of the body. Some anomalies, however, can happen after conception, resulting in Mosaicism (where some cells have the anomaly and some do not). Chromosome anomalies can be inherited from a parent or be "de novo". This is why chromosome studies are often performed on parents when a child is found to have an anomaly. If the parents do not possess the abnormality it was not initially inherited; however, it may be transmitted to subsequent generations.
Acquired chromosome abnormalities
Most cancers, if not all, could cause chromosome abnormalities, with either the formation of hybrid genes and fusion proteins, deregulation of genes and overexpression of proteins, or loss of tumor suppressor genes (see the "Mitelman Database" and the Atlas of Genetics and Cytogenetics in Oncology and Haematology). Furthermore, certain consistent chromosomal abnormalities can turn normal cells into a leukemic cell, such as the translocation of a gene resulting in its inappropriate expression.
DNA damage during spermatogenesis
During the mitotic and meiotic cell divisions of mammalian gametogenesis, DNA repair is effective at removing DNA damages. However, in spermatogenesis the ability to repair DNA damages decreases substantially in the latter part of the process as haploid spermatids undergo major nuclear chromatin remodeling into highly compacted sperm nuclei. As reviewed by Marchetti et al., the last few weeks of sperm development before fertilization are highly susceptible to the accumulation of sperm DNA damage. Such sperm DNA damage can be transmitted unrepaired into the egg where it is subject to removal by the maternal repair machinery. However, errors in maternal DNA repair of sperm DNA damage can result in zygotes with chromosomal structural aberrations.
Melphalan is a bifunctional alkylating agent frequently used in chemotherapy. Meiotic inter-strand DNA damages caused by melphalan can escape paternal repair and cause chromosomal aberrations in the zygote by maternal misrepair. Thus both pre- and post-fertilization DNA repair appear to be important in avoiding chromosome abnormalities and assuring the genome integrity of the conceptus.
Detection
Depending on the information one wants to obtain, different techniques and samples are needed.
For the prenatal diagnosis of a fetus, amniocentesis, chorionic villus sampling or circulating foetal cells would be collected and analysed in order to detect possible chromosomal abnormalities.
For the preimplantational diagnosis of an embryo, a blastocyst biopsy would be performed.
For a lymphoma or leukemia screening the technique used would be a bone marrow biopsy.
Nomenclature
The International System for Human Cytogenomic Nomenclature (ISCN) is an international standard for human chromosome nomenclature, which includes band names, symbols and abbreviated terms used in the description of human chromosome and chromosome abnormalities. Abbreviations include a minus sign (-) for chromosome deletions, and del for deletions of parts of a chromosome.
See also
Aneuploidy
Chromosome segregation
Genetic disorder
List of genetic disorders
Gene therapy
Nondisjunction
Obstetrical complications
References
External links
Chromosomal abnormalities
Cytogenetics
Genetics concepts | Chromosome abnormality | Biology | 1,549 |
70,730,840 | https://en.wikipedia.org/wiki/Salbostatin | Salbostatin is an antibiotic and trehalase inhibitor with the molecular formula C13H23O8. Salbostatin is produced by the bacterium Streptomyces albus.
See also
Pseudouridimycin
References
Further reading
Salbostatin
Polyols
Secondary amines
Amino sugars | Salbostatin | Chemistry,Biology | 65 |
53,160,070 | https://en.wikipedia.org/wiki/Bini%20the%20Bunny | Bini the Bunny is a rabbit, known for a series of videos posted on the internet. Bini refers to two bunnies: Bini the Bunny Senior and his younger brother, Bini Junior. Bini Junior, who is 2 years old as of 2023, has learned various tricks from his older sibling, including how to play basketball. Both brothers are known for their unique and entertaining skills.
Bini Sr. is referred to by the media and fans as the only rabbit in the world who can paint, play basketball, play the guitar and piano, and comb and style hair.
As of 2017, Bini and his owner, Shai (Asor) Lighter are Guinness Book of World Record holders for the most slam dunks by a rabbit in one minute.
Bini's most popular video with over 20 million views was created in 2016, titled "When Your Bunny is Addicted to Arcade Games". Bini's social pages have more than 1 million followers. Bini was featured in various TV shows including The Tonight Show with Jimmy Fallon, two Netflix originals, and participated in America's Got Talent.
Background
Bini has starred in more than 60 videos where his talents and intelligence are exemplified. He first became recognizable through his 2013 YouTube video "Funny bunny plays basketball -Bini the bunny". In 2016, Bini and his owner Shai moved from Israel to Los Angeles, California.
Bini the Bunny's tricks include playing basketball, painting with acrylic paints on canvas, combing and brushing humans' hair, jumping, dancing and spinning on command. In April 2020, Bini the Bunny learned to play the guitar and shared it in his first ever guitar-playing video, in which he plays Señorita by Shawn Mendes and Camila Cabello.
Media appearances
His tricks have been documented on video on various media outlets, and he has been featured in articles by Huffington Post, AOL, USA Today, Fox News, and Sky News, among others. In 2016, Bini the Bunny was featured on LittleThings.com by Jessica Rothhaar and on the German TV network RTL Germany on the variety show Best Of.
In July 2017, Bini the Bunny and Shai Lighter were featured on the UK Channel E4 Rude Tube show, Season 11, Episode 9
In July 2017, Bini was on US Weekly magazine
In September 2017, Bini was featured on the cover of Guinness World Records: Amazing Animals
Bini the Bunny and Shai Lighter appeared on The Gong Show on ABC on July 26, 2018. The Gong Show starring Mike Myers, and featured celebrity judges: Will Arnett, Alyson Hannigan, and Lil Rel Howery
In 2018, Bini was invited to the Red Carpet Premiere for the Peter Rabbit movie as a "celebrity rabbit".
On September 27, 2018, Bini the Bunny's first short-feature film was released to the world, titled Rabbit Home Alone which features Shai Lighter, and an array of actors as well as Bini. The movie is a parody of 1990 feature film Home Alone.
In 2019, Bini was featured in National Geographic's book - Weird But True! USA.
In June 2019, Bini the Bunny and his owner Shai Lighter were interviewed on Australia's The Morning Show on Channel 7.
In June 2020, Bini was featured in a new National Geographic's book - Pet Records. The book describes Bini's slam-dunks record.
In April 2021, Bini was featured in the Netflix reality show, Pet Stars (Season 1, episodes 3 and 5).
In June 2021, Bini was featured on The Tonight Show with Jimmy Fallon and showed Jimmy his basketball skill.
In July 2021, Bini the Bunny was featured in the book Little Bunny Dreamer.
In July 2021, Bini the Bunny auditioned and passed the first round in America's Got Talent, he performed in front of the celebrity judges- Simon Cowell, Sofía Vergara, Heidi Klum and Howie Mandel (season 16, episode 8).
In November 2021, Bini the Bunny competed and was featured in World Pet Games on Fox - interspecies competitions.
In January 2022, Bini was featured in a documentary show about intelligent animals - The Secret Life Of Our Pets - Episode 2 on the ITV channel.
In February 2022, Bini and Shai Asor Lighter were featured in the PEOPLE magazine and TV Show - (FEB 14, 2022 EDITION).
In June 2022, Bini and Shai were featured and interviewed in the Netflix documentary show, The Hidden Lives of Pets (a documentary about extraordinary pets).
Little Bunny Dreamer
In July 2021, Shai Lighter released his kid's book Little Bunny Dreamer, based on the real Bini the Bunny and his talents.
Hoppy Brush
In February 2020, Shai Lighter released a mobile game based on Bini the Bunny's life.
The game is called Hoppy Brush, where players match colors to create paintings. In the game, Bini sells these paintings to help him rescue his mother.
All profits from the game go to the real-life Bini and Shai Lighter.
Awards
In October 2016, Bini the Bunny was recognized and awarded a record by Guinness World Records for most slam dunks by a rabbit in 60 seconds.
In 2017, the Facebook page for Bini the Bunny was recognized as "official" by Facebook. Bini has more than 200,000 Facebook fans.
In 2018, YouTube awarded Bini and Shai the Silver Creator Award for reaching and passing 100,000 subscribers.
In 2021, Bini became the first rabbit to compete in America's Got Talent and pass the Judges Auditions level.
In 2021, Bini competed on World Pet Games on Fox - interspecies competitions - Bini's category was Dunk-Off, he won the bronze medal and broke his own record.
References
Further reading
Individual rabbits
2012 animal births
Animals on the Internet
Male mammals
Visual arts by animals
Animal world record holders | Bini the Bunny | Biology | 1,222 |
12,381,914 | https://en.wikipedia.org/wiki/Cartouche%20%28design%29 | A cartouche (also cartouch) is an oval or oblong design with a slightly convex surface, typically edged with ornamental scrollwork. It is used to hold a painted or low-relief design. Since the early 16th century, the cartouche is a scrolling frame device, derived originally from Italian . Such cartouches are characteristically stretched, pierced and scrolling.
Another cartouche figures prominently in the 16th-century title page of Giorgio Vasari's Lives of the Most Excellent Painters, Sculptors, and Architects, framing a minor vignette with a pierced and scrolling papery cartouche.
The engraved trade card of the London clockmaker Percy Webster shows a vignette of the shop in a scrolling cartouche frame of Rococo design that is composed entirely of scrolling devices.
History
Antiquity
Cartouches are found on buildings, funerary steles and sarcophagi. The cartouche is generally rectangular, delimited by a molding or one or more incised lines, with two symmetrical trapezoids on the lateral edges.
Chinese
From the Renaissance to Art Deco
The Renaissance brought back elements of Greco-Roman culture, including ornaments like the cartouche. Compared to their ancient ancestors, the ones from the Renaissance are usually much more complex. Cartouches continue to be used in styles that succeed the Renaissance. Most have the usual look of a symmetrical oval with scrolls developed during the Renaissance and Baroque periods, but some are highly stylized, showing the diversity of styles popular over time. They were used constantly, and were one of the main motifs of Rococo and Beaux Arts architecture.
Their use started to fade in Art Deco, a style created as a collective effort of multiple French designers to make a new modern style around 1910. This is because artists of this movement tried to create new ornaments for their time, most often stylizing motifs used before, or coming up with completely new ones. Art Deco also followed the principle of simplicity, another reason for the rarity of complex ornaments like cartouches or mascarons in Art Deco.
Postmodernism and Retro resuses
At the end of WW2, the rise in popularity of the International Style, characterized by the complete lack of any ornamentation, led to the complete abandonment of ornaments, including cartouches.
They reappeared later in some Postmodern work. Postmodernism, a movement that questioned Modernism (the status quo after WW2), promoted the inclusion of elements of historic styles in new designs. An early text questioning Modernism was by architect Robert Venturi, Complexity and Contradiction in Architecture (1966), in which he recommended a revival of the 'presence of the past' in architectural design. He tried to include in his own buildings qualities that he described as 'inclusion, inconsistency, compromise, accommodation, adaptation, superadjacency, equivalence, multiple focus, juxtaposition, or good and bad space.' Venturi encouraged 'quotation', which means reusing elements of the past in new designs. Part manifesto, part architectural scrapbook accumulated over the previous decade, the book represented the vision for a new generation of architects and designers who had grown up with Modernism but who felt increasingly constrained by its perceived rigidities. Multiple Postmodern architects and designers put simplified reinterpretations of elements found in Classical decoration on their creations. However, these were in most cases highly simplified, more reinterpretations than true reuses of the original elements. Because of their complexity, cartouches were only rarely used in Postmodern architecture and design.
Cartouches enjoyed more popularity in the Retro style of the 21st century, through designs inspired mainly by the 18th and 19th centuries.
See also
Tondo (art): round (circular)
Medallion (architecture): round or oval
Architectural sculpture
Cartouche (cartography)
Cartouche
Resist: a technique in ceramics to highlight cartouches, etc.
Console (heraldry)
Footnotes
Works cited
External links
Ornaments
Decorative arts
Architectural elements
Ornaments (architecture) | Cartouche (design) | Technology,Engineering | 830 |
20,332,189 | https://en.wikipedia.org/wiki/European%20Programme%20for%20Critical%20Infrastructure%20Protection | The European Programme for Critical Infrastructure Protection (EPCIP) is the doctrine and programmes created to identify and protect critical infrastructure that, in case of fault, incident or attack, could seriously impact both the country where it is hosted and at least one other European Member State.
History
The EPCIP came about as a result of a consultation in 2004 by the European Council, seeking a programme to protect critical infrastructure through its 'Communication on Critical Infrastructure Protection in the Fight against Terrorism'. In December 2004 it endorsed the intention of the European Commission to propose a European Programme for Critical Infrastructure Protection (EPCIP) and agreed to the creation of a European Critical Infrastructure Warning Information Network (CIWIN).
In December 2006 the European Commission issued its finalised design as EU COM(2006) 786; this obliged all member states to adopt the components of the EPCIP into their national statutes. It applied not only to the European Union itself but also to the wider European Economic Area.
EPCIP also identified National Critical Infrastructure (NCI) where its disruption would only affect a single Member State. It set the responsibility for protecting items of NCI on its owner/operators and on the relevant Member State, and encouraged each Member State to establish its own National CIP programme.
References
EU COM(2006) 786 EU Communication from the Commission on a European Programme for Critical Infrastructure Protection
Council DIR 2008/114/EC Council Directive 2008/114/EC of 8 December 2008 on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection
European Union transport policy
Security engineering
Infrastructure
National security institutions
Infrastructure in Europe | European Programme for Critical Infrastructure Protection | Engineering | 333 |
75,608,730 | https://en.wikipedia.org/wiki/Heat%20Flow%20Experiment | The Heat Flow Experiment was a NASA lunar science experiment that aimed to measure the rate of heat loss from the surface of the Moon. Four instruments were flown on board Apollo missions. Two were successfully deployed, as part of Apollo 15 and Apollo 17. The instrument on Apollo 16 was deployed, but the cable from it to the ALSEP central station was broken and the experiment was rendered inoperable. A heat flow experiment was carried on board Apollo 13, but the mission was aborted in flight and the instrument never reached the surface.
Background
Establishing some of the thermal properties of the Moon's surface was already feasible by the time of the Apollo missions. Measuring infrared emissions via telescope and measuring microwave emission spectra from the Moon were already possible from the surface of the Earth. These techniques had already established some of the characteristics of the Moon's surface, including temperature, thermal conductivity and heat capacity. The degree to which these properties could be determined was limited, however, by the low levels of IR emission and by long wavelengths restricting data resolution, and such remote measurements could not establish how the Moon's thermal properties vary with depth.
The proposal to measure heat flow from the Moon cannot be attributed to any one person, given the large number of proposals NASA sought from academia, industry and science groups at NASA itself; several of these proposed such an experiment. The result was that a small committee was formed to explore how thermal measurements of the Moon could be taken. The committee decided that any thermal experiment should focus on heat flow from the Moon's interior.
The committee considered several approaches that included multiple probes and another that included "blankets". The blanket technique was initially ruled out due to the complexity of matching the thermal albedo of the blanket probes with that of the Moon's surface. The method that became the basis for the instrument was a cylindrical heater paired with a temperature sensor a set distance away. Further work by this group established that the probe would need to be inserted into the subsurface to avoid large temperature fluctuations caused by the day-night cycle at the surface. Bendix Corporation was selected as the principal contractor for the instrument and Arthur D. Little was the sub-contractor. Gulton Industries Inc. was selected to develop the electronic circuitry.
Due to the need for the probe to be placed at a depth below the regolith surface, it was known that a drill to penetrate the lunar surface would be required. Development of the drill was led by Martin Marietta, who had previous experience developing tools for NASA.
Instrument
The instrument package consists of two probes, each consisting of two long sections. Each section end has a gradient thermometer that can measure at two points, so each section can measure temperatures at four points in all. The cables that connect the probes to the experiment's electronics housing also carry four thermocouples spaced above the topmost gradient sensor. Each section end also contains a heater to enable the measurement of material conductivity. Each heater had two power settings, 0.002 W and 0.5 W, allowing exploration across a range of possible material conductivities. Readings from the experiment were taken either every 7.1 minutes or every 54 seconds, depending on the heater mode. The probe sections were emplaced using the lunar surface drill, ideally well below the surface.
Missions
Apollo 13
The heat flow experiment was originally planned to be carried out on Apollo 13, but the mission was aborted and the instrument burnt up in Earth's atmosphere while still on board the Lunar Module. There was not sufficient time to add the HFE to Apollo 14.
The failure of Apollo 13 was perceived by the experiment's principal investigator to have had an impact on the collection of science. The planned landing site for Apollo 13 was later found to have a substantial presence of long-lived radioisotopes. The principal investigator believed that if the Apollo 13 instrument deployment had been attempted, it would have led to better mitigations on later missions for the problems experienced with the drill and the compacted regolith.
Apollo 15
Drilling of the holes on Apollo 15 was undertaken by David Scott, the mission's commander. Partway down, the drill started to become ineffective, but despite a number of challenges Scott managed to continue deeper. By this point Scott was having to apply his full weight, and the decision was made to insert the first probe to prove out functionality. A second drill hole was started, but difficulties with drilling were experienced immediately and its completion was delayed until the mission's second EVA. The second drill hole only reached a depth of 100 cm, and the probe was not fully below the lunar surface. Despite these difficulties, the probes were able to take readings.
The cause of the difficulty was that the deeper levels of the lunar soil had not been disturbed for at least half a billion years. This resulted in extreme compaction, meaning further compression of the material could not occur without large amounts of force.
Apollo 16
On Apollo 16 the holes for the probes were drilled by Charles Duke, who managed to drill down below the surface. The drill on Apollo 16 had been modified to rectify the issues experienced on the prior mission, Apollo 15. The experiment came to an end before it started when John Young tripped over the cable connecting the experiment to the ALSEP central station. The cabling was designed to resist tensile strain from being tugged, but it was not designed to resist tearing motions. A repair was considered but rejected, as it would have needed several hours of surface time.
Apollo 17
Both of the Apollo 17 boreholes were drilled and both probes installed without problems; the probes continued to operate for several years.
Science
The experiment found that heat transfer in the very near surface of the lunar regolith, the top few centimeters, was dominated by radiative transfer. This is primarily because the material is fairly loose, with limited soil-particle contact reducing conductive transfer. During the lunar noon, 70% of all heat transfer was radiative. Below this loose upper layer, soil compaction increases and density rises from 1.1–1.2 g/cc to 1.75–2.1 g/cc, resulting in substantially increased conductivity.
The HFE found a thermal gradient of between 1.5–2.0 K/m with a heat flow of around 17 mW/m2. When accounting for measurement uncertainty, this aligned well with seismic and magnetic data, and would imply temperatures relatively close to melting at depth.
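The quoted figures combine via Fourier's law of conduction, q = k · dT/dz; the short sketch below computes the regolith conductivity they imply. The heat-flow and gradient values are the ones quoted above, treated as exact for illustration, and the resulting few-mW/(m·K) conductivity is consistent with the loose-then-compacted soil picture described earlier.

```python
# Fourier's law for steady one-dimensional conduction: q = k * dT/dz,
# so the regolith conductivity implied by the quoted HFE numbers is
# k = q / (dT/dz). Values are the figures quoted above, taken at face
# value for illustration only.

q = 17e-3                            # heat flow, W/m^2 (~17 mW/m^2)

for gradient in (1.5, 2.0):          # thermal gradient, K/m
    k = q / gradient                 # conductivity, W/(m*K)
    print(f"dT/dz = {gradient} K/m  ->  k = {k * 1e3:.1f} mW/(m*K)")
```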
See also
Heat Flow and Physical Properties Package
References
Lunar science
Physics experiments
Apollo 13
Apollo 15
Apollo 16
Apollo 17
Apollo program hardware | Heat Flow Experiment | Physics | 1,358 |
77,967,710 | https://en.wikipedia.org/wiki/AR%20Cephei | AR Cephei (AR Cep) is a variable star in the constellation Cepheus. It is classified as a semiregular star, and has a maximum apparent magnitude of +7.32.
Distance
AR Cephei is located approximately 1,114 light-years (350 parsecs) from the Solar System and has a radial velocity of −14.58 ± 0.48 km/s, meaning that it is moving toward the Sun at roughly 14.6 kilometers per second.
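As a unit check on the quoted figures (a sketch only; note the exact conversion of 350 parsecs gives roughly 1,142 light-years, so the quoted pair involves rounding):

```python
# Unit bookkeeping for the quoted distance and radial velocity. The
# parsec-to-light-year factor follows from the IAU definitions; the
# quoted values are taken at face value for illustration.

LY_PER_PC = 3.26156                  # light-years per parsec

distance_pc = 350
print(f"{distance_pc} pc ~ {distance_pc * LY_PER_PC:,.0f} light-years")

v_r = -14.58                         # km/s; negative sign = approaching
km_per_year = abs(v_r) * 365.25 * 24 * 3600
print(f"approach rate: {km_per_year:.2e} km per year")
```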
History
Aernout de Sitter discovered the star in 1933. It was given its variable star designation, AR Cephei, in 1939.
See also
List of variable stars
T Cephei
References
Semiregular variable stars
Cephei, AR
Cepheus (constellation)
Variable stars | AR Cephei | Astronomy | 161 |
56,145,576 | https://en.wikipedia.org/wiki/Vestronidase%20alfa | Vestronidase alfa, sold under brand name Mepsevii, is a medication for the treatment of Sly syndrome. It is a recombinant form of the human enzyme beta-glucuronidase. It was approved in the United States in November 2017, to treat children and adults with an inherited metabolic condition called mucopolysaccharidosis type VII (MPS VII), also known as Sly syndrome. MPS VII is an extremely rare, progressive condition that affects most tissues and organs.
The most common side effects after treatment with vestronidase alfa include infusion site reactions, diarrhea, rash (urticaria) and anaphylaxis (sudden, severe allergic reaction).
The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. It was approved for use in the European Union in August 2018.
Medical uses
Mepsevii is indicated for the treatment of non-neurological manifestations of Mucopolysaccharidosis VII (MPS VII; Sly syndrome).
History
The safety and efficacy of vestronidase alfa were established in a clinical trial and expanded-access protocols enrolling a total of 23 participants ranging from five months to 25 years of age. Participants received treatment with vestronidase alfa at doses up to 4 mg/kg once every two weeks for up to 164 weeks. Efficacy was primarily assessed via the six-minute walk test in ten participants who could perform the test. After 24 weeks of treatment, the mean difference in distance walked relative to placebo was 18 meters. Additional follow-up for up to 120 weeks suggested continued improvement in three participants and stabilization in the others. Two participants in the vestronidase alfa development program experienced marked improvement in pulmonary function. Overall, the results observed would not have been anticipated in the absence of treatment. The effect of vestronidase alfa on the central nervous system manifestations of MPS VII has not been determined.
The FDA approved vestronidase alfa-vjbk based primarily on evidence from one clinical trial (NCT02230566) of 12 participants with mucopolysaccharidosis VII. The trial was conducted at four sites in the United States.
The benefit and side effects of vestronidase alfa were assessed primarily in one trial. Participants were randomly assigned to four groups. Three groups received placebo before starting vestronidase alfa treatment and one group received vestronidase alfa only. Vestronidase alfa or placebo was given once every two weeks as an intravenous (IV) infusion. Neither participants nor healthcare providers knew which treatment was given until after the trial was completed.
The benefit of 24 weeks of vestronidase alfa treatment was primarily evaluated by the 6-minute walking test (6MWT) and compared to placebo treatment in ten participants who could perform the test. The 6MWT measured the distance a patient could walk on a flat surface in 6 minutes. An additional follow-up using 6MWT was done for up to 120 weeks.
The application for vestronidase alfa was granted fast track designation, orphan drug designation, and a rare pediatric disease priority review voucher. This was the twelfth rare pediatric disease priority review voucher issued.
The US Food and Drug Administration (FDA) granted approval of Mepsevii to Ultragenyx Pharmaceutical, Inc, and required the manufacturer to conduct a post-marketing study to evaluate the long-term safety of the product.
References
External links
Orphan drugs
Recombinant proteins | Vestronidase alfa | Biology | 729 |
5,916,509 | https://en.wikipedia.org/wiki/NGC%201032 | NGC 1032 is a spiral galaxy that is about 117 million light-years away in the constellation Cetus. It was discovered on 18 December 1783 by German-British astronomer William Herschel.
According to the SIMBAD database, NGC 1032 is an active galactic nucleus (AGN) candidate, i.e. it has a compact region at its center that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by stars.
One supernova has been observed in NGC 1032. In January 2005, SN 2005E was discovered, initially classified as a type Ib or type Ic. However, later analysis determined that it was instead a calcium-rich supernova, a (then) new type of astronomical transient.
References
External links
Spiral galaxies
Cetus
1032
02147
10060
Astronomical objects discovered in 1783
Discoveries by William Herschel | NGC 1032 | Astronomy | 182 |
13,235,454 | https://en.wikipedia.org/wiki/International%20Mass%20Spectrometry%20Foundation | The International Mass Spectrometry Foundation (IMSF) is a non-profit scientific organization in the field of mass spectrometry. It operates the International Mass Spectrometry Society, which consists of 37 member societies and sponsors the International Mass Spectrometry Conference that is held once every two years.
Aims
The foundation has four aims:
organizing international conferences and workshops in mass spectrometry
improving mass spectrometry education
standardizing terminology in the field
aiding in the dissemination of mass spectrometry through publications
Conferences
Before the formation of the IMSF, the first International Mass Spectrometry Conference was held in London in 1958 and 41 papers were presented. Since then, conferences were held every three years until 2012, and every two years since. Conference proceedings are published in a book series, Advances in Mass Spectrometry, which is the oldest continuous series of publications in mass spectrometry. The International Mass Spectrometry Society evolved from this series of International Mass Spectrometry Conferences. The IMSF was officially registered in the Netherlands in 1998 following an agreement at the 1994 conference.
Past meetings have been held in cities around the world.
Awards
The society sponsors several awards, including the Curt Brunnée Award for achievements in instrumentation by a scientist under 45 years of age and the Thomson Medal Award for achievements in mass spectrometry, as well as travel awards and student paper awards.
See also
American Society for Mass Spectrometry
British Mass Spectrometry Society
Canadian Society for Mass Spectrometry
List of female mass spectrometrists
References
External links
Chemistry societies
Mass spectrometry
Organisations based in Gelderland
Scientific organisations based in the Netherlands | International Mass Spectrometry Foundation | Physics,Chemistry | 349 |
826,507 | https://en.wikipedia.org/wiki/CDC%20Cyber | The CDC Cyber range of mainframe-class supercomputers were the primary products of Control Data Corporation (CDC) during the 1970s and 1980s. In their day, they were the computer architecture of choice for scientific and mathematically intensive computing. They were used for modeling fluid flow, material science stress analysis, electrochemical machining analysis, probabilistic analysis, energy and academic computing, radiation shielding modeling, and other applications. The lineup also included the Cyber 18 and Cyber 1000 minicomputers. Like their predecessor, the CDC 6600, they were unusual in using the ones' complement binary representation.
Models
The Cyber line included five different series of computers:
The 70 and 170 series based on the architecture of the CDC 6600 and CDC 7600 supercomputers, respectively
The 200 series based on the CDC STAR-100—released in the 1970s.
The 180 series developed by a team in Canada—released in the 1980s (after the 200 series)
The Cyberplus or Advanced Flexible Processor (AFP)
The Cyber 18 minicomputer based on the CDC 1700
Primarily aimed at large office applications instead of the traditional supercomputer tasks, some of the Cyber machines nevertheless included basic vector instructions for added performance in traditional CDC roles.
Cyber 70 and 170 series
The Cyber 70 and 170 architectures were successors to the earlier CDC 6600 and CDC 7600 series and therefore shared almost all of the earlier architecture's characteristics. The Cyber-70 series is a minor upgrade from the earlier systems. The Cyber-73 was largely the same hardware as the CDC 6400 - with the addition of a Compare and Move Unit (CMU). The CMU instructions sped up the comparison and movement of non-word-aligned 6-bit character data. The Cyber-73 could be configured with either one or two CPUs. The dual-CPU version replaced the CDC 6500. As was the case with the CDC 6200, CDC also offered a Cyber-72. The Cyber-72 had identical hardware to a Cyber-73, but added additional clock cycles to each instruction to slow it down. This allowed CDC to offer a lower-performance version at a lower price point without the need to develop new hardware. It could also be delivered with dual CPUs. The Cyber 74 was an updated version of the CDC 6600. The Cyber 76 was essentially a renamed CDC 7600. Neither the Cyber-74 nor the Cyber-76 had CMU instructions.
The Cyber-170 series represented CDC's move from discrete electronic components and core memory to integrated circuits and semiconductor memory. The 172, 173, and 174 use integrated circuits and semiconductor memory, whereas the 175 uses high-speed discrete transistors. The Cyber-170/700 series is a late-1970s refresh of the Cyber-170 line.
The central processor (CPU) and central memory (CM) operated in units of 60-bit words. In CDC lingo, the term "byte" referred to 12-bit entities (which coincided with the word size used by the peripheral processors). Characters were six bits, operation codes were six bits, and central memory addresses were 18 bits. Central processor instructions were either 15 bits or 30 bits.
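A sketch of the character packing this implies: ten 6-bit characters fit exactly into one 60-bit word. The character codes below are arbitrary placeholders, not the actual CDC display code assignment.

```python
# Packing up to ten 6-bit character codes into one 60-bit word, high-order
# characters first. The codes used are placeholders; the real CDC display
# code assignment is not reproduced here.

def pack60(codes):
    assert len(codes) <= 10 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c          # shift in 6 bits per character
    word <<= 6 * (10 - len(codes))      # left-justify (zero fill is a
    return word                         # simplification for this sketch)

w = pack60([0o01, 0o02, 0o03])
print(f"{w:020o}")                      # 60 bits print as 20 octal digits
```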
The 18-bit addressing inherent to the Cyber 170 series imposed a limit of 262,144 (256K) words of main memory, which is semiconductor memory in this series. The central processor has no I/O instructions, relying upon the peripheral processor (PP) units to do I/O.
A Cyber 170-series system consists of one or two CPUs that run at either 25 or 40 MHz, and is equipped with 10, 14, 17, or 20 peripheral processors (PP), and up to 24 high-performance channels for high-speed I/O. Due to the relatively slow memory reference times of the CPU (in some models, memory reference instructions were slower than floating-point divides), the higher-end CPUs (e.g., Cyber-74, Cyber-76, Cyber-175, and Cyber-176) are equipped with eight or twelve words of high-speed memory used as an instruction cache. Any loop that fit into the cache (which is usually called in-stack) runs very fast, without referencing main memory for instruction fetch. The lower-end models do not contain an instruction stack. However, since up to four instructions are packed into each 60-bit word, some degree of prefetching is inherent in the design.
As with predecessor systems, the Cyber 170 series has eight 18-bit address registers (A0 through A7), eight 18-bit index registers (B0 through B7), and eight 60-bit operand registers (X0 through X7). Seven of the A registers are tied to their corresponding X register. Setting A1 through A5 reads that address and fetches it into the corresponding X1 through X5 register. Likewise, setting register A6 or A7 writes the corresponding X6 or X7 register to central memory at the address written to the A register. A0 is effectively a scratch register.
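A behavioural toy model of the A/X coupling just described may help; this is an illustrative sketch only, not a cycle-accurate simulator, showing the implicit load on setting A1 through A5 and the implicit store on setting A6 or A7.

```python
# Toy model of the Cyber A/X register coupling: writing an address into
# A1..A5 implicitly loads the matching X register from central memory,
# while writing A6 or A7 implicitly stores X6/X7. Illustrative sketch only.

class CyberRegs:
    def __init__(self, memory):
        self.mem = memory                    # central memory: addr -> word
        self.A = [0] * 8                     # 18-bit address registers
        self.X = [0] * 8                     # 60-bit operand registers

    def set_A(self, i, addr):
        self.A[i] = addr & 0o777777          # keep 18 bits
        if 1 <= i <= 5:                      # implicit memory read
            self.X[i] = self.mem.get(addr, 0)
        elif i in (6, 7):                    # implicit memory write
            self.mem[addr] = self.X[i]

mem = {0o100: 42}
r = CyberRegs(mem)
r.set_A(1, 0o100)                            # loads X1 from octal 100
r.X[6] = 99
r.set_A(6, 0o200)                            # stores X6 to octal 200
print(r.X[1], mem[0o200])                    # -> 42 99
```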
The higher-end CPUs consisted of multiple functional units (e.g., shift, increment, floating add) which allowed some degree of parallel execution of instructions. This parallelism allows assembly programmers to minimize the effects of the system's slow memory fetch time by pre-fetching data from central memory well before that data is needed. By interleaving independent instructions between the memory fetch instruction and the instructions manipulating the fetched operand, the time occupied by the memory fetch can be used for other computation. With this technique, coupled with the handcrafting of tight loops that fit within the instruction stack, a skilled Cyber assembly programmer can write extremely efficient code that makes the most of the power of the hardware.
The peripheral processor subsystem uses a technique known as barrel and slot to share the execution unit; each PP has its own memory and registers, but the processor (the slot) itself executes one instruction from each PP in turn (the barrel). This is a crude form of hardware multiprogramming. The peripheral processors have 4096 words of 12-bit memory and an 18-bit accumulator register. Each PP has access to all I/O channels and all of the system's central memory (CM) in addition to the PP's own memory. The PP instruction set lacks, for example, extensive arithmetic capabilities and does not run user code; the peripheral processor subsystem's purpose is to process I/O and thereby free the more powerful central processor unit(s) for running user computations.
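The barrel-and-slot scheme amounts to fixed round-robin time-slicing of one execution unit over the PPs' private states; a minimal sketch follows, with placeholder "instructions" standing in for the real PP instruction set.

```python
# Minimal barrel-and-slot sketch: one execution unit (the slot) advances
# each peripheral processor's private state by one instruction per barrel
# revolution. The PP "programs" are placeholders, not real PP code.

from itertools import cycle

class PP:
    def __init__(self, program):
        self.program, self.pc, self.acc = program, 0, 0

    def step(self):                          # one instruction per slot turn
        if self.pc < len(self.program):
            self.acc += self.program[self.pc]
            self.pc += 1

pps = [PP([i + 1] * 3) for i in range(10)]   # ten PPs, as in a base system
barrel = cycle(pps)
for _ in range(30):                          # 30 slot cycles = 3 per PP
    next(barrel).step()
print([pp.acc for pp in pps])                # every PP made equal progress
```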
A feature of the lower Cyber CPUs is the Compare Move Unit (CMU). It provides four additional instructions intended to aid text-processing applications. In an unusual departure from the rest of the 15- and 30-bit instructions, these are 60-bit instructions (three actually use all 60 bits; the fourth uses 30 bits, but its alignment requires 60 bits to be allotted). The instructions are: move a short string, move a long string, compare strings, and compare a collated string. They operate on six-bit fields (numbered 1 through 10) in central memory. For example, a single instruction can specify "move the 72-character string starting at word 1000 character 3 to location 2000 character 9". The CMU hardware is not included in the higher-end Cyber CPUs, because hand-coded loops could run as fast or faster than the CMU instructions.
Later systems typically ran CDC's NOS (Network Operating System). Version 1 of NOS continued to be updated until about 1981; NOS version 2 was released early in 1982, with the final version, 2.8.7 PSR 871, delivered in December 1997; it continues to receive minor unofficial bug fixes, Y2K mitigation, etc., in support of DtCyber. Besides NOS, the only other operating systems commonly used on the 170 series were NOS/BE and its predecessor SCOPE, a product of CDC's Sunnyvale division. These operating systems provide time-sharing of batch and interactive applications. The predecessor to NOS was Kronos, which was in common use until about 1975. Due to the strong dependency of developed applications on the particular installation's character set, many installations chose to run the older operating systems rather than convert their applications. Other installations would patch newer versions of the operating system to use the older character set to maintain application compatibility.
Cyber 180 series
Cyber 180 development began in the Advanced Systems Laboratory, a joint CDC/NCR development venture started in 1973 and located in Escondido, California. The machine family was originally called Integrated Product Line (IPL) and was intended to be a virtual memory replacement for the NCR 6150 and CDC Cyber 70 product lines. The IPL system was also called the Cyber 80 in development documents. The Software Writer's Language (SWL), a high-level Pascal-like language, was developed for the project with the intent that all languages and the operating system (IPLOS) were going to be written in SWL. SWL was later renamed PASCAL-X and eventually became Cybil. The joint venture was abandoned in 1976, with CDC continuing system development and renaming the Cyber 80 as Cyber 180. The first machines of the series were announced in 1982 and the product announcement for the NOS/VE operating system occurred in 1983.
As the computing world standardized to an eight-bit byte size, CDC customers started pushing for the Cyber machines to do the same. The result was a new series of systems that could operate in both 60- and 64-bit modes. The 64-bit operating system was called NOS/VE, and supported the virtual memory capabilities of the hardware. The older 60-bit operating systems, NOS and NOS/BE, could run in a special address space for compatibility with the older systems.
The true 180-mode machines are microcoded processors that can support both instruction sets simultaneously. Their hardware is completely different from the earlier 6000/70/170 machines. The small 170-mode exchange package was mapped into the much larger 180-mode exchange package; within the 180-mode exchange package, there is a virtual machine identifier (VMID) that determines whether the 8/16/64-bit two's complement 180 instruction set or the 12/60-bit ones' complement 170 instruction set is executed.
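Since the two instruction sets differ even in how negative numbers are represented, a small sketch of the contrast follows: 60-bit ones' complement on the 170 side versus 64-bit two's complement on the 180 side.

```python
# Ones'-complement negation (as on the 170 side, 60-bit words) versus
# two's-complement negation (as on the 180 side, 64-bit words). Note that
# ones' complement has distinct +0 and -0 bit patterns.

W60 = (1 << 60) - 1                    # 60-bit all-ones mask

def neg_ones(x):
    return x ^ W60                     # negation is bitwise complement

def neg_twos(x, bits=64):
    return (-x) & ((1 << bits) - 1)

print(oct(neg_ones(1)))                # 0o...76: ones'-complement -1
print(oct(neg_ones(0)))                # 0o...77: "negative zero"
print(hex(neg_twos(1)))                # 0xffff...: two's-complement -1
```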
There were three true 180s in the initial lineup, codenamed P1, P2, P3. P2 and P3 were larger water-cooled designs. The P2 was designed in Mississauga, Ontario, by the same team that later designed the smaller P1, and the P3 was designed in Arden Hills, Minnesota. The P1 was a novel air-cooled, 60-board cabinet designed by a group in Mississauga; the P1 ran on 60 Hz current (no motor-generator sets needed). A fourth high-end 180 model 990 (codenamed THETA) was also under development in Arden Hills.
The 180s were initially marketed as 170/8xx machines with no mention of the new 8/64-bit system inside. However, the primary control program is a 180-mode program known as Environmental Interface (EI). The 170 operating system (NOS) used a single, large, fixed page within the main memory. There were a few clues that an alert user could pick up on, such as the "building page tables" message that flashed on the operator's console at startup and deadstart panels with 16 (instead of 12) toggle switches per PP word on the P2 and P3.
The peripheral processors in the true 180s are always 16-bit machines with the sign bit determining whether a 16/64 bit or 12/60 bit PP instruction is being executed. The single word I/O instructions in the PPs are always 16-bit instructions, so at deadstart the PPs can set up the proper environment to run both EI plus NOS and the customer's existing 170-mode software. To hide this process from the customer, earlier in the 1980s CDC had ceased distribution of the source code for its Deadstart Diagnostic Sequence (DDS) package and turned it into the proprietary Common Tests & Initialization (CTI) package.
The initial 170/800 lineup was: 170/825 (P1), 170/835 (P2), 170/855 (P3), 170/865 and 170/875. The 825 was released initially after some delay loops had been added to its microcode; it seemed the design folks in Toronto had done a little too well and it was too close to the P2 in performance. The 865 and 875 models were revamped 170/760 heads (one or two processors with 6600/7600-style parallel functional units) with larger memories. The 865 used normal 170 memory; the 875 took its faster main processor memory from the Cyber 205 line.
A year or two after the initial release, CDC announced the 800-series' true capabilities to its customers, and the true 180s were relabeled as the 180/825 (P1), 180/835 (P2), and 180/855 (P3). At some point, the model 815 was introduced with the delayed microcode and the faster microcode was restored to the model 825. Eventually the THETA was released as the Cyber 990.
Cyber 200 series
In 1974, CDC introduced the STAR architecture. The STAR is an entirely new 64-bit design with virtual memory and vector-processing instructions added for high performance on a certain class of math tasks. The STAR's vector pipeline is a memory-to-memory pipe, which supports vector lengths of up to 65,536 elements. The latencies of the vector pipeline are very long, so peak speed is approached only when very long vectors are used. The scalar processor was deliberately simplified to provide room for the vector processor and is relatively slow in comparison to the CDC 7600. As such, the original STAR proved to be a great disappointment when it was released (see Amdahl's Law). Best estimates claim that three STAR-100 systems were delivered.
It appeared that all of the problems in the STAR were solvable. In the late 1970s, CDC addressed some of these issues with the Cyber 203. The new name kept with their new branding, and perhaps to distance itself from the STAR's failure. The Cyber 203 contains redesigned scalar processing and loosely coupled I/O design, but retains the STAR's vector pipeline. Best estimates claim that two Cyber 203s were delivered or upgraded from STAR-100s.
In 1980, the successor to the Cyber 203, the Cyber 205, was announced. The UK Meteorological Office at Bracknell, England, was the first customer, receiving its Cyber 205 in 1981. The Cyber 205 replaces the STAR vector pipeline with redesigned vector pipelines; both the scalar and vector units use ECL gate-array ICs and are cooled with Freon. Cyber 205 systems were available with two or four vector pipelines, with the four-pipe version theoretically delivering 400 64-bit MFLOPS and 800 32-bit MFLOPS. These speeds are rarely seen in practice other than in handcrafted assembly language. The ECL gate-array ICs contain 168 logic gates each, with the clock-tree networks tuned by hand-crafted coax length adjustment. The instruction set would be considered V-CISC (very complex instruction set) among modern processors: many specialized operations facilitate hardware searches and matrix mathematics, and special instructions enable decryption.
The original Cyber 205 was renamed the Cyber 205 Series 400 in 1983 when the Cyber 205 Series 600 was introduced. The Series 600 differs in memory technology and packaging but is otherwise the same. A single four-pipe Cyber 205 was installed; all other sites appear to have been two-pipe installations, with the final count to be determined.
The Cyber 205 architecture evolved into the ETA10 as the design team spun off into ETA Systems in September 1983. A final development was the Cyber 250, which was scheduled for release in 1987 priced at $20 million; it was later renamed the ETA30 after ETA Systems was absorbed back into CDC.
CDC CYBER 205
Architecture: ECL/LSI logic
20 ns cycle time (or 50 MHz)
Up to 800 MFLOPS FP32 and 400 MFLOPS FP64 (see the peak-rate sketch after this list)
1, 2, 4, 8 or 16 million 64-bit words of memory, with 25.6 or 51.2 gigabits/second of bandwidth
8 I/O ports (up to 16), each at 200 Mbits/second
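The quoted peak rates are consistent with a simple model in which each vector pipe completes one linked multiply-add (two flops) per 20 ns cycle in 64-bit mode, with 32-bit operands doubling the rate. This model is inferred from the figures above, not taken from CDC documentation.

```python
# Back-of-envelope check of the quoted Cyber 205 peak rates. Assumptions
# (inferred from the figures, not from CDC documentation): one linked
# multiply-add (2 flops) per pipe per 20 ns cycle in 64-bit mode; 32-bit
# operands double the throughput.

cycle_ns = 20
clock_hz = 1 / (cycle_ns * 1e-9)             # 50 MHz

pipes = 4
fp64_peak = pipes * clock_hz * 2             # flops/second, 64-bit mode
fp32_peak = 2 * fp64_peak

print(f"FP64 peak: {fp64_peak / 1e6:.0f} MFLOPS")   # 400
print(f"FP32 peak: {fp32_peak / 1e6:.0f} MFLOPS")   # 800
```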
Cyberplus or Advanced Flexible Processor (AFP)
Each Cyberplus (aka Advanced Flexible Processor, AFP) is a 16-bit processor with optional 64-bit floating point capabilities and has 256 K or 512 K words of 64-bit memory. The AFP was the successor to the Flexible Processor (FP), whose design development started in 1972 under black-project circumstances targeted at processing radar and photo image data. The FP control unit had a hardware network for conditional microinstruction execution, with four mask registers and a condition-hold register; three bits in the microinstruction format select among nearly 50 conditions for determining execution, including result sign and overflow, I/O conditions, and loop control.
At least 21 Cyberplus multiprocessor installations were operational in 1986. These parallel processing systems include from 1 to 256 Cyberplus processors providing 250 MFLOPS each, which are connected to an existing Cyber system via a direct memory interconnect architecture (MIA), this was available on NOS 2.2 for the Cyber 170/835, 845, 855 and 180/990 models.
Physically, each Cyberplus processor unit was of typical mainframe module size, similar to the Cyber 180 systems, with the exact width dependent on whether the optional FPU was installed, and weighed approximately 1 tonne.
Software that was bundled with the Cyberplus was:
System software
FORTRAN cross compiler
MICA (Machine Instruction Cross Assembler)
Load File Builder Utility
ECHOS (simulator)
Debug facility
Dump utility
Dump analyzer utility
Maintenance software
Some sites using the Cyberplus were the University of Georgia and the Gesellschaft für Trendanalysen (GfTA) (Association for Trend Analyses) in Germany.
A fully configured 256 processor Cyberplus system would have a theoretical performance of 64 GFLOPS, and weigh around 256 tonnes. A nine-unit system was reputedly capable of performing comparative analysis (including pre-processing convolutions) on 1 megapixel images at a rate of one image pair per second.
Cyber 18
The Cyber 18 is a 16-bit minicomputer which was a successor to the CDC 1700 minicomputer. It was mostly used in real-time environments. One noteworthy application is as the basis of the 2550—a communications processor used by CDC 6000 series and Cyber 70/Cyber 170 mainframes. The 2550 was a product of CDC's Communications Systems Division, in Santa Ana, California (STAOPS). STAOPS also produced another communication processor (CP), used in networks hosted by IBM mainframes. This M1000 CP, later renamed C1000, came from an acquisition of Marshall MDM Communications. A three-board set was added to the Cyber 18 to create the 2550.
The Cyber 18 was generally programmed in Pascal and assembly language; FORTRAN, BASIC, and RPG II were also available. Operating systems included RTOS (Real-Time Operating System), MSOS 5 (Mass Storage Operating System), and TIMESHARE 3 (time-sharing system).
"Cyber 18-17" was just a new name for the System 17, based on the 1784 processor. Other Cyber 18s (Cyber 18-05, 18-10, 18-20, and 18-30) had microprogrammable processors with up to 128K words of memory, four additional general registers, and an enhanced instruction set. The Cyber 18-30 had dual processors. A special version of the Cyber 18, known as the MP32, that was 32-bit instead of 16-bit was created for the National Security Agency for crypto-analysis work. The MP32 had the Fortran math runtime library package built into its microcode. The Soviet Union tried to buy several of these systems and they were being built when the U.S. Government cancelled the order. The parts for the MP32 were absorbed into the Cyber 18 production. One of the uses of the Cyber 18 was monitoring the Alaskan Pipeline.
Cyber 1000
The M1000 / C1000, later renamed Cyber 1000, was a message store-and-forward system used by the Federal Reserve System. A version of the Cyber 1000 with its hard drive removed was used by Bell Telephone. This was a RISC (reduced instruction set computer) processor. An improved version, known as the Cyber 1000-2, added a Line Termination Sub-System with 256 Zilog Z80 microprocessors. The Bell Operating Companies purchased large numbers of these systems in the mid-to-late 1980s for data communications. In the late 1980s the XN10 was released, with an improved processor (a direct memory access instruction was added) as well as a size reduction from two cabinets to one. The XN20 was an improved version of the XN10 with a much smaller footprint; its Line Termination Sub-System was redesigned to use the improved Z180 microprocessor (the Buffer Controller card, Programmable Line Controller card and two Communication Line Interface cards were incorporated onto a single card). The XN20 was in the pre-production stage when the Communication Systems Division was shut down in 1992.
Jack Ralph was the chief architect of the Cyber 1000-2, XN-10 and XN-20 systems. Dan Nay was the chief engineer of the XN-20.
Cyber 2000
See also
CDC 6000 series—contains several predecessors to the Cyber 70 series
Explanatory notes
References
External links
CDC IBM front end bows (Article in Network World, December 1, 1986, page 4)
Cyber documentation at bitsavers.org
Cyber
Control Data Corporation mainframe computers
Supercomputers | CDC Cyber | Technology | 4,571 |
5,439,545 | https://en.wikipedia.org/wiki/Tungsten%28VI%29%20oxytetrabromide | Tungsten(VI) oxytetrabromide is the inorganic compound with the formula WOBr4. This red-brown, hygroscopic solid sublimes at elevated temperatures. It forms adducts with Lewis bases. The solid consists of weakly associated square pyramidal monomers. The related tungsten(VI) oxytetrachloride has been more heavily studied. The compound is usually classified as an oxyhalide.
References
Tungsten(VI) compounds
Oxobromides | Tungsten(VI) oxytetrabromide | Chemistry | 104 |
34,167,136 | https://en.wikipedia.org/wiki/Neuroleptic-induced%20deficit%20syndrome | Neuroleptic-induced deficit syndrome (NIDS) is a psychopathological syndrome that develops in some patients who take high doses of an antipsychotic for an extended time. It is most often caused by high-potency typical antipsychotics, but can also be caused by high doses of many atypicals, especially those closer in profile to typical ones (that have higher D2 dopamine receptor affinity and relatively low 5-HT2 serotonin receptor binding affinity), like paliperidone and amisulpride.
Symptoms
Neuroleptic-induced deficit syndrome is principally characterized by the same symptoms that constitute the negative symptoms of schizophrenia: emotional blunting, apathy, hypobulia, anhedonia, indifference, difficulty or total inability in thinking, difficulty or total inability in concentrating, lack of initiative, attention deficits, and desocialization. This can easily lead to misdiagnosis and mistreatment: instead of decreasing the antipsychotic, the doctor may increase the dose to try to "improve" what is perceived to be negative symptoms of schizophrenia rather than antipsychotic side effects. The concept of neuroleptic-induced deficit syndrome was initially described for schizophrenia, and it has rarely been associated with other mental disorders. In recent years, atypical neuroleptics have more often been administered to patients with bipolar disorder, so some studies of neuroleptic-induced deficit syndrome in bipolar disorder patients are now available.
There are significant difficulties in the differential diagnosis of primary negative symptoms and neuroleptic deficiency syndrome (secondary negative symptoms), as well as depression.
Case
A Japanese man being treated for schizophrenia exhibited neuroleptic-induced deficit syndrome and obsessive–compulsive symptoms. His symptoms improved remarkably after antipsychotics were discontinued and the antidepressant fluvoxamine was introduced. He had been misdiagnosed with schizophrenia; the real diagnosis was obsessive–compulsive disorder.
References
Adverse effects of psychoactive drugs
Antipsychotics
Psychopathological syndromes | Neuroleptic-induced deficit syndrome | Chemistry | 445 |
66,973,081 | https://en.wikipedia.org/wiki/Praseodymium%28III%29%20nitrate | Praseodymium(III) nitrate is any inorganic compound with the chemical formula Pr(NO3)3·(H2O)n. These salts are used in the extraction and purification of praseodymium from its ores. The hexahydrate has been characterized by X-ray crystallography.
Praseodymium nitrate can be prepared by treating praseodymium oxide with nitric acid; for the sesquioxide, for example:
Pr2O3 + 6 HNO3 → 2 Pr(NO3)3 + 3 H2O
References
Praseodymium(III) compounds
Nitrates
Phosphors and scintillators | Praseodymium(III) nitrate | Chemistry | 100 |
26,136,898 | https://en.wikipedia.org/wiki/Locally%20closed%20subset | In topology, a branch of mathematics, a subset $E$ of a topological space $X$ is said to be locally closed if any of the following equivalent conditions are satisfied:
$E$ is the intersection of an open set and a closed set in $X.$
For each point $x \in E,$ there is a neighborhood $U$ of $x$ such that $E \cap U$ is closed in $U.$
$E$ is open in its closure $\overline{E}.$
The set $\overline{E} \setminus E$ is closed in $X.$
$E$ is the difference of two closed sets in $X.$
$E$ is the difference of two open sets in $X.$
The second condition justifies the terminology locally closed and is Bourbaki's definition of locally closed. To see that the second condition implies the third, use the facts that for subsets $A \subseteq B,$ $A$ is closed in $B$ if and only if $A = \overline{A} \cap B,$ and that for a subset $E$ and an open subset $U,$ $\overline{E} \cap U = \overline{E \cap U} \cap U.$
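Spelled out, the step from the second condition to the third runs as follows (a routine point-set argument, written out here for convenience):

```latex
% Condition 2 implies condition 3: take x in E and an open neighborhood
% U of x with E \cap U closed in U. Then
\begin{align*}
\overline{E} \cap U
  &= \overline{E \cap U} \cap U  && \text{(valid since $U$ is open)} \\
  &= E \cap U                    && \text{($E \cap U$ is closed in $U$)} \\
  &\subseteq E ,
\end{align*}
% so each point of E has an open neighborhood U with
% \overline{E} \cap U \subseteq E; that is, E is open in \overline{E}.
```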
Examples
The interval $(0, 1] = (0, 2) \cap [0, 1]$ is a locally closed subset of $\mathbb{R}.$ For another example, consider the relative interior $D$ of a closed disk in $\mathbb{R}^3.$ It is locally closed since it is an intersection of the closed disk and an open ball.
On the other hand, $\{(x, y) : x \neq 0\} \cup \{(0, 0)\}$ is not a locally closed subset of $\mathbb{R}^2.$
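Checking these two examples directly against the third condition ("open in its closure") gives a short verification:

```latex
\overline{(0,1]} = [0,1],
\qquad
(0,1] = (0,2) \cap [0,1] \text{ is open in } [0,1],
% so (0,1] is locally closed in \mathbb{R}. By contrast, for
% E = \{(x,y) : x \neq 0\} \cup \{(0,0)\}, the closure is
% \overline{E} = \mathbb{R}^2, yet every neighborhood of the origin
% contains points (0, y) with y \neq 0 that are missing from E, so E is
% not open in \overline{E} and hence is not locally closed.
```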
Recall that, by definition, a submanifold $E$ of an $n$-manifold $M$ is a subset such that for each point $x$ in $E,$ there is a chart $\varphi : U \to \mathbb{R}^n$ around it such that $\varphi(E \cap U) = \mathbb{R}^k \cap \varphi(U).$ Hence, a submanifold is locally closed.
Here is an example in algebraic geometry. Let $U$ be an open affine chart on a projective variety $X$ (in the Zariski topology). Then each closed subvariety $Y$ of $U$ is locally closed in $X$; namely, $Y = U \cap \overline{Y},$ where $\overline{Y}$ denotes the closure of $Y$ in $X.$ (See also quasi-projective variety and quasi-affine variety.)
Properties
Finite intersections and the pre-image under a continuous map of locally closed sets are locally closed. On the other hand, a union and a complement of locally closed subsets need not be locally closed. (This motivates the notion of a constructible set.)
Especially in stratification theory, for a locally closed subset $E,$ the complement $\overline{E} \setminus E$ is called the boundary of $E$ (not to be confused with the topological boundary). If $E$ is a closed submanifold-with-boundary of a manifold $M,$ then the relative interior (that is, the interior as a manifold) of $E$ is locally closed in $M,$ and its boundary as a manifold is the same as its boundary as a locally closed subset.
A topological space is said to be submaximal if every subset is locally closed. See Glossary of topology#S for more on this notion.
See also
Notes
References
External links
General topology | Locally closed subset | Mathematics | 476 |
14,680,518 | https://en.wikipedia.org/wiki/Polyphosphate%20kinase | In enzymology, a polyphosphate kinase, or polyphosphate polymerase, is an enzyme that catalyzes the formation of polyphosphate from ATP, with chain lengths of up to a thousand or more orthophosphate moieties.
ATP + (phosphate)n ⇌ ADP + (phosphate)n+1
Thus, the two substrates of this enzyme are ATP and polyphosphate [(phosphate)n], whereas its two products are ADP and polyphosphate extended by one phosphate moiety [(phosphate)n+1].
This enzyme is a membrane protein and goes through an intermediate stage during the reaction where it is autophosphorylated with a phosphate group covalently linked to a basic amino acyl residue through an N-P bond.
Several enzymes catalyze polyphosphate polymerization. Some of these enzymes couple phosphotransfer to transmembrane transport. These enzyme/transporters are categorized in the Transporter Classification Database (TCDB) under the Polyphosphate Polymerase/YidH Superfamily (TC# 4.E.1) and are transferases that transfer phosphoryl groups (phosphotransferases) with polyphosphate as the acceptor. The systematic name of this enzyme class is ATP:polyphosphate phosphotransferase. This enzyme is also called polyphosphoric acid kinase.
Families
The Polyphosphate Polymerase Superfamily (TC# 4.E.1) includes the following families:
4.E.1 - The Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family
9.B.51 - The Uncharacterized DUF202/YidH (YidH) Family
The Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family
Eukaryotes contain inorganic polyphosphate (polyP) and acidocalcisomes, which sequester polyP and store amino acids and divalent cations. Gerasimaitė et al. showed that polyP produced in the cytosol of yeast is toxic. Reconstitution of polyP translocation with purified vacuoles, the acidocalcisomes of yeast, showed that cytosolic polyP cannot be imported whereas polyP produced by the vacuolar transporter chaperone (VTC) complex, an endogenous vacuolar polyP polymerase, is efficiently imported and does not interfere with growth. PolyP synthesis and import require an electrochemical gradient, probably as a (partial) driving force for polyP translocation. VTC exposes its catalytic domain to the cytosol and has nine vacuolar transmembrane segments (TMSs). Mutations in the VTC transmembrane regions, which may constitute the translocation channel, block not only polyP translocation but also synthesis. Since these mutations are far from the cytosolic catalytic domain of VTC, this suggests that the VTC complex obligatorily couples synthesis of polyP to its vesicular import in order to avoid toxic intermediates in the cytosol. The process therefore conforms to the classical definition of Group Translocation, where the substrate is modified during transport. Sequestration of otherwise toxic polyP may be one reason for the existence of this mechanism in acidocalcisomes. The vacuolar polyphosphate kinase (polymerase) is described in TCDB with family TC# 4.E.1.
Function
CYTH-like superfamily enzymes, which include polyphosphate polymerases, hydrolyze triphosphate-containing substrates and require metal cations as cofactors. They have a unique active site located at the center of an eight-stranded antiparallel beta barrel tunnel (the triphosphate tunnel). The name CYTH originated from the gene designation for bacterial class IV adenylyl cyclases (CyaB), and from thiamine triphosphatase (THTPA). Class IV adenylate cyclases catalyze the conversion of ATP to 3',5'-cyclic AMP (cAMP) and PPi. Thiamine triphosphatase is a soluble cytosolic enzyme which converts thiamine triphosphate to thiamine diphosphate. This domain superfamily also contains RNA triphosphatases, membrane-associated polyphosphate polymerases, tripolyphosphatases, nucleoside triphosphatases, nucleoside tetraphosphatases and other proteins with unknown functions.
The generalized reaction catalyzed by the vectorial polyphosphate polymerases is:
ATP + (phosphate)n in the cytoplasm → ADP + (phosphate)n+1 in the vacuolar lumen
Structure
VTC2 has three recognized domains: an N-terminal SPX domain, a large central CYTH-like domain and a smaller transmembrane VTC1 (DUF202) domain. The SPX domain is found in Syg1, Pho81, XPR1 (SPX), and related proteins. This domain is found at the amino termini of a variety of proteins. In the yeast protein, Syg1, the N-terminus directly binds to the G-protein beta subunit and inhibits transduction of the mating pheromone signal. Similarly, the N-terminus of the human XPR1 protein binds directly to the beta subunit of the G-protein heterotrimer, leading to increased production of cAMP. Thus, this domain is involved in G-protein associated signal transduction. The N-termini of several proteins involved in the regulation of phosphate transport, including the putative phosphate level sensors, Pho81 from Saccharomyces cerevisiae and NUC-2 from Neurospora crassa, have this domain.
The SPX domains of the S. cerevisiae low-affinity phosphate transporters, Pho87 and Pho90, auto-regulate uptake and prevent efflux. This SPX-dependent inhibition is mediated by a physical interaction with Spl2. NUC-2 contains several ankyrin repeats. Several members of this family are annotated as XPR1 proteins: the xenotropic and polytropic retrovirus receptor confers susceptibility to infection with xenotropic and polytropic murine leukaemia viruses (MLV). Infection by these retroviruses can inhibit XPR1-mediated cAMP signaling and result in cell toxicity and death. The similarity between Syg1 phosphate regulators and XPR1 sequences has been noted, as has the additional similarity to several predicted proteins of unknown function, from Drosophila melanogaster, Arabidopsis thaliana, Caenorhabditis elegans, Schizosaccharomyces pombe, S. cerevisiae, and many other diverse organisms.
As of 2015, several structures had been solved for this class of enzymes.
The Uncharacterized DUF202/YidH (YidH) Family
Members of the YidH Family are found in bacteria, archaea and eukaryotes. Members of this family include YidH of E. coli (TC# 9.B.51.1.1) which has 115 amino acyl residues and 3 TMSs of α-helical nature. The first TMS has a low level of hydrophobicity, the second has a moderate level of hydrophobicity, and the third has very hydrophobic character. These traits appear to be characteristic of all members of this family. A representative list of proteins belonging to this family can be found in the Transporter Classification Database. In fungi, a long homologue of 351 aas has a similar 3 TMS DUF202 domain at its extreme C-terminus.
References
Further reading
EC 2.7.4
Enzymes of known structure
Membrane proteins | Polyphosphate kinase | Biology | 1,706 |
1,810,132 | https://en.wikipedia.org/wiki/Nitroguanidine | Nitroguanidine - sometimes abbreviated NGu - is a colorless, crystalline solid that melts at 257 °C and decomposes at 254 °C. Nitroguanidine is an extremely insensitive but powerful high explosive. Wetting it with > 20 wt.-% water effects desensitization from HD 1.1 down to HD 4.1 (flammable solid).
Nitroguanidine is used as an energetic material, i.e., propellant or high explosive, precursor for insecticides, and for other purposes.
Manufacture
Nitroguanidine is produced worldwide on a large scale starting with the reaction of dicyandiamide (DCD) with ammonium nitrate to afford the salt guanidinium nitrate, which is then nitrated by treatment with concentrated sulfuric acid at low temperature.
The guanidinium nitrate intermediate may also be produced via the Boatright–Mackay–Roberts (BMR) process, in which molten urea is reacted with molten ammonium nitrate in the presence of silica gel. This process had been commercialized because of its attractive economic features.
Uses
Explosives
Nitroguanidine has been in use since the 1930s as an ingredient in triple-base gun propellants in which it reduces flame temperature, muzzle flash, and erosion of the gun barrel but preserves chamber pressure due to high nitrogen content. Its extreme insensitivity combined with low cost has made it a popular ingredient in insensitive high explosive formulations (e.g AFX-453, AFX-760, IMX-101, AL-IMX-101, IMX-103, etc.).
The first triple-base propellant, featuring 20–25% nitroguanidine and 30–45% nitroglycerine, was developed at the Dynamite Nobel factory at Avigliana and patented by its director Dr. Modesto Abelli (1859–1911) in 1905.
Nitroguanidine's explosive decomposition is given by the following equation:
H4N4CO2 (s) → 2 H2O (g) + 2 N2 (g) + C (s)
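A standard figure of merit that follows from the elemental composition is the oxygen balance; the sketch below computes it for CH4N4O2 under the usual CO2 convention, using standard atomic masses. The resulting value of about −30.7% is an illustrative calculation, not a figure quoted by this article.

```python
# Oxygen balance of nitroguanidine (CH4N4O2) under the common CO2
# convention: OB% = -1600 * (2*C + H/2 - O) / M. Standard atomic masses;
# illustrative calculation only.

masses = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 1, "H": 4, "N": 4, "O": 2}

M = sum(masses[el] * n for el, n in formula.items())          # g/mol
deficit = 2 * formula["C"] + formula["H"] / 2 - formula["O"]  # O atoms short
ob = -1600 * deficit / M

print(f"M = {M:.2f} g/mol, oxygen balance = {ob:.1f}%")       # ~ -30.7%
```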
Pesticides
Nitroguanidine derivatives are used as insecticides, having a comparable effect to nicotine. Derivatives include clothianidin, dinotefuran, imidacloprid, and thiamethoxam.
Biochemistry
The nitrosoylated derivative, nitrosoguanidine, is often used to mutagenize bacterial cells for biochemical studies.
Structure
Following several decades of debate, it could be confirmed by NMR spectroscopy, and both x-ray and neutron diffraction that nitroguanidine exclusively exists as the nitroimine tautomer both in solid state and solution.
References
Explosive chemicals
Propellants | Nitroguanidine | Chemistry | 586 |
6,885,256 | https://en.wikipedia.org/wiki/EF-Tu | EF-Tu (elongation factor thermo unstable) is a prokaryotic elongation factor responsible for catalyzing the binding of an aminoacyl-tRNA (aa-tRNA) to the ribosome. It is a G-protein, and facilitates the selection and binding of an aa-tRNA to the A-site of the ribosome. As a reflection of its crucial role in translation, EF-Tu is one of the most abundant and highly conserved proteins in prokaryotes. It is found in eukaryotic mitochondria as TUFM.
As a family of elongation factors, EF-Tu also includes its eukaryotic and archaeal homolog, the alpha subunit of eEF-1 (EF-1A).
Background
Elongation factors are part of the mechanism that synthesizes new proteins through translation in the ribosome. Transfer RNAs (tRNAs) carry the individual amino acids that become integrated into a protein sequence, and have an anticodon for the specific amino acid that they are charged with. Messenger RNA (mRNA) carries the genetic information that encodes the primary structure of a protein, and contains codons that code for each amino acid. The ribosome creates the protein chain by following the mRNA code and integrating the amino acid of an aminoacyl-tRNA (also known as a charged tRNA) to the growing polypeptide chain.
There are three sites on the ribosome for tRNA binding. These are the aminoacyl/acceptor site (abbreviated A), the peptidyl site (abbreviated P), and the exit site (abbreviated E). The P-site holds the tRNA connected to the polypeptide chain being synthesized, and the A-site is the binding site for a charged tRNA with an anticodon complementary to the mRNA codon associated with the site. After binding of a charged tRNA to the A-site, a peptide bond is formed between the growing polypeptide chain on the P-site tRNA and the amino acid of the A-site tRNA, and the entire polypeptide is transferred from the P-site tRNA to the A-site tRNA. Then, in a process catalyzed by the prokaryotic elongation factor EF-G (historically known as translocase), the coordinated translocation of the tRNAs and mRNA occurs, with the P-site tRNA moving to the E-site, where it dissociates from the ribosome, and the A-site tRNA moves to take its place in the P-site.
Biological functions
Protein synthesis
EF-Tu participates in the polypeptide elongation process of protein synthesis. In prokaryotes, the primary function of EF-Tu is to transport the correct aa-tRNA to the A-site of the ribosome. As a G-protein, it uses GTP to facilitate its function. Outside of the ribosome, EF-Tu complexed with GTP (EF-Tu • GTP) complexes with aa-tRNA to form a stable EF-Tu • GTP • aa-tRNA ternary complex. EF-Tu • GTP binds all correctly-charged aa-tRNAs with approximately identical affinity, except those charged with initiation residues and selenocysteine. This can be accomplished because although different amino acid residues have varying side-chain properties, the tRNAs associated with those residues have varying structures to compensate for differences in side-chain binding affinities.
The binding of an aa-tRNA to EF-Tu • GTP allows for the ternary complex to be translocated to the A-site of an active ribosome, in which the anticodon of the tRNA binds to the codon of the mRNA. If the correct anticodon binds to the mRNA codon, the ribosome changes configuration and alters the geometry of the GTPase domain of EF-Tu, resulting in the hydrolysis of the GTP associated with the EF-Tu to GDP and Pi. As such, the ribosome functions as a GTPase-activating protein (GAP) for EF-Tu. Upon GTP hydrolysis, the conformation of EF-Tu changes drastically and dissociates from the aa-tRNA and ribosome complex. The aa-tRNA then fully enters the A-site, where its amino acid is brought near the P-site's polypeptide and the ribosome catalyzes the covalent transfer of the polypeptide onto the amino acid.
In the cytoplasm, the deactivated EF-Tu • GDP is acted on by the prokaryotic elongation factor EF-Ts, which causes EF-Tu to release its bound GDP. Upon dissociation of EF-Ts, EF-Tu is able to complex with GTP due to the 5- to 10-fold higher concentration of GTP than GDP in the cytoplasm, resulting in reactivated EF-Tu • GTP, which can then associate with another aa-tRNA.
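The cycle described in the last three paragraphs can be summarized as a small state machine. The state and transition labels below are informal names chosen for this sketch, and the rejection and proofreading branches are omitted.

```python
# Schematic of the EF-Tu cycle described above, as a linear walk through
# informal state labels. Real kinetics include rejection and proofreading
# branches that are omitted from this sketch.

CYCLE = [
    ("EF-Tu:GDP",       "EF-Ts binds and displaces GDP"),
    ("EF-Tu:EF-Ts",     "GTP (in excess over GDP) binds; EF-Ts leaves"),
    ("EF-Tu:GTP",       "aa-tRNA binds"),
    ("ternary complex", "codon recognition at the ribosomal A-site"),
    ("A site, GTP",     "ribosome-stimulated GTP hydrolysis; EF-Tu exits"),
]

for state, transition in CYCLE:
    print(f"{state:16s} --[{transition}]-->")
print("EF-Tu:GDP (released; the cycle repeats)")
```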
Maintaining translational accuracy
EF-Tu contributes to translational accuracy in three ways. In translation, a fundamental problem is that near-cognate anticodons have similar binding affinity to a codon as cognate anticodons, such that anticodon-codon binding in the ribosome alone is not sufficient to maintain high translational fidelity. This is addressed by the ribosome not activating the GTPase activity of EF-Tu if the tRNA in the ribosome's A-site does not match the mRNA codon, thus preferentially increasing the likelihood for the incorrect tRNA to leave the ribosome. Additionally, regardless of tRNA matching, EF-Tu also induces a delay after freeing itself from the aa-tRNA, before the aa-tRNA fully enters the A-site (a process called accommodation). This delay period is a second opportunity for incorrectly charged aa-tRNAs to move out of the A-site before the incorrect amino acid is irreversibly added to the polypeptide chain. A third mechanism is the less well understood function of EF-Tu to crudely check aa-tRNA associations and reject complexes where the amino acid is not bound to the correct tRNA coding for it.
Other functions
EF-Tu has been found in large quantities in the cytoskeletons of bacteria, co-localizing underneath the cell membrane with MreB, a cytoskeletal element that maintains cell shape. Defects in EF-Tu have been shown to result in defects in bacterial morphology. Additionally, EF-Tu has displayed some chaperone-like characteristics, with some experimental evidence suggesting that it promotes the refolding of a number of denatured proteins in vitro. EF-Tu has been found to moonlight on the cell surface of the pathogenic bacteria Staphylococcus aureus, Mycoplasma pneumoniae, and Mycoplasma hyopneumoniae, where EF-Tu is processed and can bind to a range of host molecules. In Bacillus cereus, EF-Tu also moonlights on the surface, where it acts as an environmental sensor and binds to substance P.
Structure
EF-Tu is a monomeric protein with molecular weight around 43 kDa in Escherichia coli. The protein consists of three structural domains: a GTP-binding domain and two oligonucleotide-binding domains, often referred to as domain 2 and domain 3. The N-terminal domain I of EF-Tu is the GTP-binding domain. It consists of a six beta-strand core flanked by six alpha-helices. Domains II and III of EF-Tu, the oligonucleotide-binding domains, both adopt beta-barrel structures.
The GTP-binding domain I undergoes a dramatic conformational change upon GTP hydrolysis to GDP, allowing EF-Tu to dissociate from aa-tRNA and leave the ribosome. Reactivation of EF-Tu is achieved by GTP binding in the cytoplasm, which leads to a significant conformational change that reactivates the tRNA-binding site of EF-Tu. In particular, GTP binding to EF-Tu results in a ~90° rotation of domain I relative to domains II and III, exposing the residues of the tRNA-binding active site.
Domain 2 adopts a beta-barrel structure, and is involved in binding to charged tRNA. This domain is structurally related to the C-terminal domain of EF2, to which it displays weak sequence similarity. This domain is also found in other proteins such as translation initiation factor IF-2 and tetracycline-resistance proteins. Domain 3 represents the C-terminal domain, which adopts a beta-barrel structure, and is involved in binding to both charged tRNA and to EF1B (or EF-Ts).
Evolution
The GTP-binding domain is conserved in both EF-1alpha/EF-Tu and also in EF-2/EF-G and thus seems typical for GTP-dependent proteins which bind non-initiator tRNAs to the ribosome. The GTP-binding translation factor family also includes the eukaryotic peptide chain release factor GTP-binding subunits and prokaryotic peptide chain release factor 3 (RF-3); the prokaryotic GTP-binding protein lepA and its homologue in yeast (GUF1) and Caenorhabditis elegans (ZK1236.1); yeast HBS1; rat Eef1a1 (formerly "statin S1"); and the prokaryotic selenocysteine-specific elongation factor selB.
Disease relevance
Along with the ribosome, EF-Tu is one of the most important targets for antibiotic-mediated inhibition of translation. Antibiotics targeting EF-Tu can be categorized into one of two groups, depending on the mechanism of action, and one of four structural families. The first group includes the antibiotics pulvomycin and GE2270A, and inhibits the formation of the ternary complex. The second group includes the antibiotics kirromycin and enacyloxin, and prevents the release of EF-Tu from the ribosome after GTP hydrolysis.
See also
Prokaryotic elongation factors
EF-Ts (elongation factor thermo stable)
EF-G (elongation factor G)
EF-P (elongation factor P)
eEF-1
EFR (EF-Tu receptor)
References
External links
Protein biosynthesis
Protein domains | EF-Tu | Chemistry,Biology | 2,302 |
11,570,463 | https://en.wikipedia.org/wiki/Fomes%20lamaensis | Fomes lamaensis is a plant pathogen of the tea plant.
See also
List of tea diseases
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tea diseases
Polyporaceae
Fungus species | Fomes lamaensis | Biology | 43 |
531,749 | https://en.wikipedia.org/wiki/Software%20Engineering%202004 | The Software Engineering 2004 (SE2004), formerly known as Computing Curriculum Software Engineering (CCSE), is a document that provides recommendations for undergraduate education in software engineering. SE2004 was initially developed by a steering committee between 2001 and 2004. Its development was sponsored by the Association for Computing Machinery and the IEEE Computer Society. Important components of SE2004 include the Software Engineering Education Knowledge, a list of topics that all graduates should know, as well as a set of guidelines for implementing curricula and a set of proposed courses.
External links
SE2004 Home Page
Software Engineering 2004 - Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering
Software engineering papers
Computer science education | Software Engineering 2004 | Technology,Engineering | 136 |
56,897,087 | https://en.wikipedia.org/wiki/Agglomerated%20food%20powder | Agglomerated food powder is the product of agglomeration, a unit operation during which native particles are assembled to form bigger agglomerates in which the original particles can still be distinguished. Agglomeration can be achieved through processes that use liquid as a binder (wet methods) or methods that do not involve any binder (dry methods).
Description
The liquid used in wet methods can be added directly to the product or via a humid environment. Using a fluidized bed dryer and multiple step spray drying are two examples of wet methods while roller compacting and extrusion are two examples of dry methods.
Advantages of agglomeration for food include:
Dust reduction: Dust reduction is achieved when the smallest particles (or "fines") in the product are combined into larger particles.
Improved flow: Flow improvement occurs as the larger, and sometimes more spherical, particles more easily pass over each other than the smaller or more irregularly-shaped particles in the original material.
Improved dispersion and/or solubility: Improved dispersion and solubility is sometimes achieved with instantization, in which the solubility of a product allows it to dissolve instantly upon its addition to water. For a powder to be considered instant, it should wet, sink, disperse, and dissolve within a few seconds. Non-fat dry milk and high-quality protein powders are good examples of instant powders.
Optimized bulk density: Consistent bulk density is important in accurate and consistent filling of packaging.
Improved product characteristics: Increased homogeneity of the finished product, reducing segregation of fine particles (such as powdered vitamins or spray-dried flavors) from larger particles (such as granulated sugars or acids). As a powder is agitated, smaller particles fall to the bottom and larger ones rise to the top. Agglomeration can reduce the range of particle sizes present in the product, reducing segregation.
Disadvantages of food agglomeration:
Extra cost. Agglomeration adds processing cost, although the benefits of handling an agglomerate often outweigh it.
Additional processing time. Agglomeration of a finished blend is an additional step after blending.
Particle size distribution is an important parameter to monitor in agglomerated food products. In both wet and dry agglomeration, particles of undesired sizes must be removed to ensure the best possible finished product performance. High-powered cyclones are the most common way to separate undesired fine particles (or "fines") from larger agglomerates (or "overs"). Cyclones utilize the combination of wind power and the different densities of the two products to pull the fines out of the mix. The fines can then be reworked through the agglomeration process to reduce yield loss. In contrast, shaker screens are often used to separate out the overs from the rest of the product. The overs can be reworked into the process by first being broken into smaller particles.
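As a simple illustration of how a measured particle size distribution might be split into fines, product, and overs, the sketch below classifies a sieve analysis using two size cutoffs. The cutoff values, sieve data, and function name are hypothetical, chosen only for illustration; real specifications depend on the product and process.

# Illustrative only: hypothetical cutoffs and sieve data, not values from the article.
def classify_psd(sieve_data, fines_cutoff_um=100.0, overs_cutoff_um=800.0):
    """Split a sieve analysis into fines, product, and overs mass fractions.
    sieve_data: list of (mean_particle_size_um, mass_g) tuples."""
    total = sum(mass for _, mass in sieve_data)
    fines = sum(mass for size, mass in sieve_data if size < fines_cutoff_um)
    overs = sum(mass for size, mass in sieve_data if size > overs_cutoff_um)
    product = total - fines - overs
    return {"fines": fines / total, "product": product / total, "overs": overs / total}

# Hypothetical sieve analysis of an agglomerated powder (size in micrometres, mass in grams).
example = [(50, 12.0), (150, 40.0), (300, 55.0), (600, 30.0), (1000, 8.0)]
print(classify_psd(example))
# Fines would typically be recycled through the agglomeration step; overs would be milled and re-screened.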
Wet agglomeration methods
Wet agglomeration is a process that introduces a liquid binder to develop adhesion forces between the dry particles to be agglomerated. Mixing disperses the liquid over the particles evenly and promotes growth of the aggregate to the desired size. A final drying step is required to stabilize the agglomerates.
In all wet agglomeration methods, the first step is wetting the particles. This initiates adhesion forces between the particles. The next step, nucleation, is the process by which the native particles come together and are held with liquid bridges and capillary forces. Then, through coalescence or the growth phase, these small groups of particles come together to create larger particles until the particles are the desired size. Consolidation occurs as the agglomerates increase in density and strength through drying and collisions with other particles. Mixing as the powder dries also causes some particles to break and erode, creating smaller particles and fines. To achieve the correct particle size, erosion and growth must be balanced. The last step in wet agglomeration is the final stabilization through drying. The agglomerated particles are dried to less than 5% water content, and cooled to below their glass transition temperature.
Wet agglomeration falls into two categories based on method of agitation: Mechanical mixing and pneumatic mixing.
Pneumatic mixing
Steam-jet agglomeration: A continuous process wherein fine powders are exposed to steam to provide the necessary adhesion properties. Agglomeration is controlled by particle size distribution in the raw materials, gas and steam flow conditions and the adhesion forces between the particles. After the steam section the particles are exposed to warm air flowing upwards and countercurrent to the particles, which solidifies the liquid bridges formed between the particles. Advantages: used for many years in the food industry, a continuous process
Spray drying: Spray drying starts with a liquid raw material, which is sprayed as fine droplets into heated air, causing the droplets to dry into fine particles. To agglomerate, fully dried particles (collected from the dry air outlet) are re-introduced at the point where the particles are partly dried and still sticky, to collide and create porous agglomerates. Spray drying agglomeration creates irregularly shaped, highly porous particles with excellent instant properties.
Fluid bed agglomeration uses an air stream both to agitate the particles and to dry the agglomerates. Fluidized bed dryers are hot-air dryers in which the air is forced up through a layer of product. The air is evenly distributed along the chamber to maintain a uniform velocity and prevent areas of higher velocities. These dryers can be either batch or continuous. Batch processing methods tend to have more uniform moisture content throughout the product after drying, whereas continuous dryers vary more throughout the process and may require a finishing step. When used for agglomeration, fluidized bed dryers have spray nozzles located at various heights within the chamber, allowing a spray of water or a solution of other ingredients (the binding solution) to be sprayed on the particles as they are fluidized. This encourages the particles to stick together, and can impart other properties to the finished product. Examples of binding solutions are a water/sugar solution, or lecithin. In this method, particle wetting, growth, consolidation, erosion and drying all occur at the same time.
Mechanical mixing
Pan or disk agglomerators (rarely used for food powders): Pan or disk agglomerators use the rotation of a disk or bowl to agitate the powder as it is sprayed with water. This type of agglomeration creates agglomerates with higher density than agglomerates produced in fluidized beds.
Drum agglomerators: use a drum to agitate the powder as liquid is added via spraying along the drum. This is a continuous process, and the agglomerates are spherical due to the rotation of the drum. Advantages: can successfully agglomerate powders with a wide particle size distribution, and have lower energy needs than fluidized bed agglomerators. Drum agglomerators can handle very large capacities, but this is not an advantage in the food industry. Disadvantage: broad particle size distribution.
Mixer agglomeration: Mixer agglomerators agitate the powder with the use of a blade inside a bowl. The geometry of mixer agglomerators varies widely. The blade can be oriented vertically, horizontally or obliquely. Shear is variable, and the wetting solution is sprayed over the bulk of the powder as it is mixing. Advantages: can work with powders with a wide particle size distribution, and allow for good distribution of viscous liquids. Equipment is straightforward and common. This type of agglomeration results in relatively dense and spherical agglomerates.
Dry agglomeration methods
Dry agglomeration is agglomeration performed without water or binding liquids, instead using compression only.
Roller compaction
Roller compaction is a process in which powders are forced between two rolls, which compress the powders into dense sticks or sheets. These sticks or sheets are then ground into granules. Material properties will affect the mechanical properties of the resulting granules. Food particles with crystalline structures will deform plastically under pressure, and amorphous materials will deform viscoelastically. Roller compaction is more commonly used on individual ingredients of a finished powdered food product, than on a blend of ingredients producing a granulated finished product.
Some advantages of roller compaction are
No need for binding solution
Dust-free powders are possible
Consistent granule size and bulk density.
Good option for moisture- and heat-sensitive materials, as no drying is required.
Disadvantages:
May not be easily soluble in cold water due to high density and low porosity
High amount of fines which need to be re-worked may be produced
Specialized equipment is required, typically with large batch sizes (> 3k pounds)
Examples of agglomerated food powders: Sucrose, sodium chloride, monosodium glutamate and fibers.
Extrusion
Extrusion is executed by mixing the powder with liquid, additives, or dispersants and then compressing the mixture and forcing it through a die. The product is then dried and broken down to the desired particle size. Extruded powders are dense. Extrusion is typically used for ingredients such as minerals and highly-hygroscopic products which benefit from reduced surface area, as well as products that are subject to oxidation. Extrusion for agglomeration should not be confused with the more common food extrusion process that involves creating a dough that is cooked and expands as it passes through the die.
See also
Granulation (process)
Particle size
Particle-size distribution
Pelletizing
References
External links
Welchdry.com Roller Compaction
www.glatt.com Spray Agglomeration
Patent US7998505B2
Food processing
Particle technology
Dried foods
Powders | Agglomerated food powder | Physics,Chemistry,Engineering | 2,064 |
50,365,519 | https://en.wikipedia.org/wiki/Light-dark%20box%20test | The light-dark box test (LDB) is a popular animal model used in pharmacology to assay unconditioned anxiety responses in rodents. The extent to which behavior in the LDB measures anxiety is controversial.
Overview
The LDB apparatus has two compartments. The light compartment makes up 2/3 of the box and is brightly lit and open. The dark compartment makes up the remaining 1/3 and is covered and dark. A 7 cm opening connects the two compartments.
Rodents prefer darker areas over lighter areas. However, when placed in a novel environment, rodents have a tendency to explore. These two conflicting drives lead to observable anxiety-like behavior.
Rodents typically spend more time in the dark compartment than in the light compartment (scototaxis). If rodents are injected with anxiolytic drugs, the percentage of time spent in the light compartment increases. Locomotion and rearing (standing up on the hind legs, a sign of exploration) in the dark compartment also increase. When injected with anxiogenic drugs, rodents spend more time in the dark compartment.
The LDB does not require any prior training. Animals are not deprived of food or water, and only natural stressors such as light are used.
Procedure
Rodents are placed in the light compartment of the apparatus and are allowed to move around. Typically rodents will move around the periphery of the compartment until they find the door, a process that can take between 7 and 12 seconds. All four paws must be placed into the opposite chamber for the move to be counted as an entry.
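As a rough illustration of how the primary measures might be computed from scored observations, the following sketch (in Python) tallies time in the light compartment and the number of light-compartment entries from a hypothetical list of timestamped crossings. The event format, session length, and example values are assumptions for illustration only, not part of any standard scoring protocol or software.

# Illustrative sketch with a made-up event log; real scoring is done from video or photobeam data.
def light_compartment_stats(crossings, session_s=300.0):
    """crossings: list of (time_s, compartment) tuples, recorded whenever all four paws
    enter the named compartment. The animal is assumed to start in the light compartment at t = 0."""
    events = [(0.0, "light")] + sorted(crossings) + [(session_s, None)]
    time_in_light = 0.0
    for (t0, comp), (t1, _) in zip(events, events[1:]):
        if comp == "light":
            time_in_light += t1 - t0
    light_entries = sum(1 for _, comp in crossings if comp == "light")
    return {"percent_time_light": 100.0 * time_in_light / session_s,
            "light_entries": light_entries}

# Hypothetical 5-minute session: the animal finds the door after ~10 s, then shuttles back and forth.
example_crossings = [(10.2, "dark"), (95.0, "light"), (130.5, "dark"), (250.0, "light")]
print(light_compartment_stats(example_crossings))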
Controversy
As with all animal models, there is significant controversy about the LDB. The only drug class that has shown consistent results is the benzodiazepines. When other anti-anxiety drugs such as SSRIs have been tested, contradictory observations have been reported, calling the model's validity into question.
Different labs attribute variable results to differences in the type and severity of external stressors. Baseline levels in the control group, which are key in determining exploratory activity, can vary with strain, age, and weight.
The LDB has not been validated for female rodents.
The LDB can give too many false positives. A drug that has both anxiolytic and locomotion-increasing properties cannot be accurately tested in the LDB, because increased locomotion can affect the percentage of time spent in the compartments and the amount of rearing. Preliminary screening for locomotor effects is needed to identify false positive results.
Because of the flaws in animal modeling, the LDB is best used as part of a battery of anxiety tests with high predictive validity to determine the potential therapeutic value of a novel agent.
See also
Elevated plus maze
Open Field Test
Marble Burying
Morris Water Navigation Task
Animal Models of Depression
References
Animal testing techniques
Psychology experiments | Light-dark box test | Chemistry | 566 |
1,687,858 | https://en.wikipedia.org/wiki/Hotline%20Communications | Hotline Communications Limited (HCL) was a software company founded in 1997, based in Toronto, Canada, with employees also in the United States and Australia. Hotline Communications' main activity was the publishing and distribution of a multi-purpose client/server communication software product named Hotline Connect, informally called, simply, Hotline. Initially, Hotline Communications sought a wide audience for its products, and organizations as diverse as Avid Technology, Apple Computer Australia, and public high schools used Hotline. At its peak, Hotline received millions of dollars in venture capital funding, grew to employ more than fifty people, served millions of users, and won accolades at trade shows and in newspapers and computer magazines around the world.
Hotline eventually attracted more of an "underground" community, which saw it as an easier-to-use successor to Internet Relay Chat (IRC). In 2001 Hotline Communications lost the bulk of its VC funding, and went out of business later that year. All of its assets were acquired in 2002 by Hotsprings, Inc., a new company formed by some ex-employees and shareholders. Hotsprings Inc. has since also abandoned development of the Hotline Connect software suite; the last iteration of Hotline Connect was released in December 2003.
Since 2008 some Hotliners have been slowly purchasing the defunct URLs, so that they now have the main server, tracker, and BigRedH.com. This has allowed a revival of sorts: many open-source clients have appeared over the last few years, and active projects have brought the main tracker (HLTracker.com) back to over 30 listed servers.
All clients, new and old, have been preserved on the new wiki site (moved from advertisement-based wiki hosting to hlwiki.com). The community has since flourished, and new servers appear regularly, with new clients supporting modern computers while older clients still support 20+ year old machines.
History
Hotline was designed in 1996, originally under the name "hotwire", by Australian programmer Adam Hinkley (known online by his username, "Hinks"), then 17 years old, as a classic Mac OS application. The source code for the Hotline applications was based on a class library, "AppWarrior" (AW), which Hinkley wrote. AppWarrior would later become the subject of litigation, as Hinkley had written parts of it while employed by an Australian company, Redrock Holdings. Six other fans of Hotline, David Murphy, Alex King, Phil Hilton, Jason Roks, David Bordin, and Terrance Gregory, joined Adam Hinkley's efforts to promote and market the Hotline programs, working day and night and using the company's own products to stay in touch from across the US, Canada, and Australia. Eventually, Canadian Jason Roks approached Adam Hinkley and encouraged him to move to Toronto, where Hotline Communications, Ltd. was incorporated. In 1997, Hotline won a "Best of the Show" award from one of the award ceremonies concurrent with the Boston MacWorld Expo. It received accolades in computer magazines and the mainstream press, from Macworld Sweden (which awarded it a "Golden Mouse Award") to the Los Angeles Times, which called it one of the "best kept secrets on the internet", as well as a short article in Wired Magazine's September 1997 issue. At the time, the company's main objective was to release a stable Windows-compatible version to reach a wider audience.
However, a few months after Hinkley moved to Canada, he and his colleagues at Hotline Communications got into a major disagreement and Hinkley left the firm, encrypting source files for Hotline on Hotline Communications' computers, thus crippling the company. Lawsuits against Hinkley were filed by both Hotline Communications and Redrock, and Hinkley lost copyright of his "AppWarrior" library as well as rights over the "Hotline" software. The legal battle and Hinkley's case drew some media attention, especially on the Internet.
In the late 1990s, when Hotline's popularity was at its peak, thousands of Hotline servers were available, catering to a wide variety of user interests, including pornography, MP3s, and pirated software; it was these underground purposes that drew the majority of media attention. To access these files most Hotline servers required a login and password. The password was often obtainable by visiting a website provided by the server admin and clicking on a banner advertisement. Server admins would earn money per click and in return offer pirated software, music, movies and pornography to users.
At the end of the 1990s, the by-then-outdated Hotline software started to fade gradually as peer-to-peer systems like Gnutella and Kazaa became increasingly popular. Many early Hotline users felt sympathy for Hinkley and viewed Hotline Communications unfavorably, and the Hotline Connect suite did not sell well. In September 2001, Hotline Communications announced that development of version 2.0 of the Hotline suite, whose beta versions had not been well received by the community, had been stopped, and laid off most of its employees. In mid-October of the same year, the company announced the re-hire of its engineering team "in anticipation of the release of Hotline 2.0" on its website (offline as of May 2006). However, no stable build of Hotline 2.0 was ever released.
As of June 2024, hltracker.com and tracked.stickytack.com continue to provide tracker services for Hotline clients.
Hotline Connect software suite
The Hotline applications were distributed as shareware, combined chat, message board and file transfer capabilities, and operated using a client/server (not peer-to-peer) model. Hotline predates the Napster and Gnutella file sharing products. The Hotline protocol was a binary protocol, which accounted for its fast, efficient transfers in the days when most internet users were still on dial-up modems. The protocol was reverse engineered by the internet community, leading to a wide variety of third-party clients being written in RealBasic, one of the few easy-to-use development environments available on the Macintosh at the time.
Hotline Connect consisted of three applications, distributed separately (via Internet download or on promotional CDs):
Hotline Client: an application used to access Hotline servers set up by users running the Hotline Server software. Hotline Connect users with a client installed could connect to servers they knew using the host's IP address.
Hotline Server: an easy-to-configure server application.
Hotline Tracker: a name server, used to keep track of the IP addresses of several Hotline servers.
Hotline successors
A company named Haxial Software (rumored to have been founded by Adam Hinkley) released a Hotline-like product named KDX.
In 1998, Jörn and Mirko Hartmann released Carracho, similar software deliberately kept Mac-only, which is still used today by a small, tight-knit group of users. There have also been several third-party applications that implement the Hotline protocol.
There have been several open-source versions of the Hotline Client and Server suite, which were not based on the official source code, and provide several protocol enhancements (also known as HOPE - HOtline Protocol Extension).
Some versions also support an IRC bridge or KDX bridge. Most of the work on the Hotline enhancements has been done by r0r (HOPE, KDX), kang (IRC) and Devin Teske. See Darknet for details.
Hotline's largest community, Mixed Blood, started by Prime Chuck and SAINT in 1998, is still active to this day on Wired. Wired is similar to Hotline and is developed by Zanka software.
Another very large Hotline server of the same era, Not All Mine (notallmine.net), converted to HTML format and went private around 2006.
Modern clients (constantly being updated)
Obsession(QT4) - An Open Source client for Windows (Other OS Supported)
Hotline(SwiftUI) - A modern Open Source client for the most recent Mac OS X
Pitbull Pro - A remake of the original OS9/X client (Windows)
Hotstuff! - A more recently made Classic client (Mac OS 7.6-Mac OS 10.5)
Open source clients
Fidelio - open-source Hotline client, not updated since 2002 (Linux/Unix)
mhxd - open-source Hotline server and client
synhxd - open-source Hotline server and client
gtkhx - open-source Hotline client, not updated since 2001 (Linux/Unix)
References
Company and product history
"Wayback Machine search results for 'bigredh.com'", Internet Archive Wayback Machine. Accessed on April 7, 2005. — A chronicle of the different versions of the bigredh.com website. "BigRedH.com" was Hotline Communication's website and e-commerce platform from June 2001 on. E-commerce "service provider" Digital River designed the original website (Digital River press release).
"Wayback Machine search results for 'hotspringsinc.com'", Internet Archive Wayback Machine. Accessed on April 7, 2005. — A chronicle of the different versions of the hotspringsinc.com website.
"Macworld Hotline Revisited Article
"Hotline – The Glory Days of P2P"
Legal battle
"Hotline's Civil War", Salon.com Archive. Accessed on February 26, 1999.
"Employment Law, Intellectual Property Law and the New Economy" (radio program transcript), The Law Report (website), Australian Broadcasting Corporation Radio National (website). Accessed April 7, 2005. — Discussion of the Hotline vs. Hinkley legal battle in the broader context of intellectual property law in Australia.
"Adam Hinkley's IP Hindsights", Slashdot.org. Accessed on April 7, 2005. — Slashdot discussion about original Hotline programmer Adam Hinkley's legal battle against his former Hotline Communications, Ltd. associates.
"Redrock Holdings Pty Ltd & Hotline Communications Ltd & Ors v Hinkley (2001) VSC 91 (4 April 2001)", Text of the 2001 Victoria Supreme Court judgement assigning ownership of Hotline Connect to Hotline Communications.
Defunct software companies of Canada
Online chat
Internet forums
File sharing communities
File sharing networks
1997 establishments in Ontario
Canadian companies established in 1997 | Hotline Communications | Technology | 2,206 |
22,940,609 | https://en.wikipedia.org/wiki/Soil%20gas | Soil gases (soil atmosphere) are the gases found in the air space between soil components. The spaces between the solid soil particles, if they do not contain water, are filled with air. The primary soil gases are nitrogen, carbon dioxide and oxygen. Oxygen is critical because it allows for respiration of both plant roots and soil organisms. Other natural soil gases include nitric oxide, nitrous oxide, methane, and ammonia. Some environmental contaminants below ground produce gas which diffuses through the soil such as from landfill wastes, mining activities, and contamination by petroleum hydrocarbons which produce volatile organic compounds.
Gases fill soil pores in the soil structure as water drains or is removed from a soil pore by evaporation or root absorption. The network of pores within the soil aerates, or ventilates, the soil. This aeration network becomes blocked when water enters soil pores. Both soil air and soil water are very dynamic parts of soil, and the two are often inversely related.
Composition
The composition of gases present in the soil's pores, commonly referred to as the soil atmosphere or atmosphere of the soil, is similar to that of the Earth's atmosphere. Compared with the atmosphere, however, soil gas composition is less stable, owing to the various chemical and biological processes taking place in the soil. The resulting changes in composition can be characterized by their timescale (daily versus seasonal). Despite this spatial and temporal fluctuation, soil gases typically show higher concentrations of carbon dioxide and water vapor than the atmosphere. Concentrations of other gases, such as methane and nitrous oxide, are relatively minor yet significant in determining greenhouse gas flux and anthropogenic impact on soils.
Processes
Gas molecules in soil are in continuous thermal motion according to the kinetic theory of gases, and they also collide with one another in a random walk process. In soil, a concentration gradient causes net movement of molecules from high concentration to low concentration, which gives the movement of gas by diffusion. Quantitatively, this is described by Fick's law of diffusion (a general form is given below). Soil gas migration, specifically that of hydrocarbon species with one to five carbons, can also be caused by microseepage.
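As a point of reference, a minimal one-dimensional form of Fick's first law, as commonly applied to gas transport in porous media, can be written as:

J = -D_e \frac{\partial C}{\partial z}

where J is the gas flux, C is the gas concentration, z is depth, and D_e is an effective diffusion coefficient that accounts for the soil's air-filled porosity and tortuosity. The notation follows common convention rather than anything specific to this article, and treating the soil with a single effective coefficient is one standard simplification among several.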
The soil atmosphere's variable composition and constant motion can be attributed to processes such as diffusion, decomposition, and, in some regions of the world, thawing, among others. Diffusive exchange between soil air and the atmosphere causes the preferential replacement of soil gases with atmospheric air. More significantly, variation in soil gas composition due to seasonal, or even daily, changes in temperature and moisture can influence the rate of soil respiration.
According to the USDA, soil respiration refers to the quantity of carbon dioxide released from soil. This carbon dioxide is produced by the decomposition of organic material by microbial organisms in the presence of oxygen. Given the importance of both gases to soil life, significant fluctuation of carbon dioxide and oxygen can change the rate of decay, while changes in microbial abundance can, in turn, influence soil gas composition.
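Soil respiration is commonly quantified with closed-chamber measurements; the sketch below shows how a measured rate of CO2 rise in a chamber might be converted to an areal flux under the ideal gas law. The chamber dimensions, temperature, pressure, and readings are hypothetical illustrations, not values from this article or from the USDA definition.

# Minimal static-chamber sketch; chamber dimensions, conditions and readings are hypothetical.
R = 8.314  # universal gas constant, J mol-1 K-1

def chamber_co2_flux(dC_dt_ppm_per_s, volume_m3, area_m2, temp_K=293.15, pressure_Pa=101325.0):
    """Convert the rate of CO2 rise inside a closed chamber into a soil-surface flux
    in micromoles of CO2 per square metre per second, assuming ideal gas behaviour."""
    return dC_dt_ppm_per_s * pressure_Pa * volume_m3 / (R * temp_K * area_m2)

# Example: CO2 rising at 0.05 ppm/s in a 10 L chamber covering 0.03 m2 of soil.
print(chamber_co2_flux(0.05, volume_m3=0.010, area_m2=0.03))  # roughly 0.7 umol m-2 s-1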
In regions of the world where soil freezing or drought is common, soil thawing and rewetting due to seasonal or meteorological changes influence soil gas flux. Both processes hydrate the soil and increase nutrient availability, leading to an increase in microbial activity. This results in greater soil respiration and influences the composition of soil gases.
Studies and research
Soil gases have been used in multiple scientific studies to explore topics such as microseepage, earthquakes, and gaseous exchange between the soil and the atmosphere. Microseepage refers to the limited release of hydrocarbons at the soil surface and can be used to look for petroleum deposits, based on the assumption that hydrocarbons migrate vertically to the soil surface in small quantities. Migration of soil gases, specifically radon, can also be examined as an earthquake precursor. Furthermore, for processes such as soil thawing and rewetting, large sudden changes in soil respiration can cause increased flux of soil gases such as carbon dioxide and methane, which are greenhouse gases. These fluxes and interactions between soil gases and atmospheric air can be further analyzed as a function of distance from the soil surface.
References
Soil physics
Geochemistry
Soil
Environmental science
Environmental chemistry | Soil gas | Physics,Chemistry,Environmental_science | 881 |