Digital channel election
https://en.wikipedia.org/wiki/Digital%20channel%20election

A digital channel election was the process by which television stations in the United States chose which physical radio-frequency TV channel they would permanently use after the analog shutdown in 2009. The process was managed and mandated by the Federal Communications Commission (FCC) for all full-power TV stations. Low-power television (LPTV) stations went through a somewhat different process, and were also allowed to flash-cut to digital.
Process
Stations could choose to keep their initial digital TV channel allocation, do a flash-cut on their former analog TV channel, or attempt to select another channel, often an analog channel or pre-transition digital channel that another station had orphaned. Stations on channels 52 to 69 did not have the first option, as the FCC and then the U.S. Congress removed those channels from the bandplan.
Many stations have chosen to keep their new channels permanently, after being forced to buy all new transmitters and television antennas. In some cases where the station's current analog tower could not handle the stress of the new digital antenna's weight and wind load, station owners had to construct entirely new broadcast towers in order to comply with the FCC's DTV mandate.
Most broadcasters were bitter at having to purchase digital equipment and broadcast a digital signal when very few households had digital television sets. The FCC allowed broadcasters to petition for special temporary authority (STA) to operate their digital facilities at low power, giving them additional time in which to purchase their full-power digital facilities. However, the FCC set a firm July 2006 deadline for all full-power television stations to replicate at least 80% of their analog coverage area, or run the risk of losing protection from encroachment by other stations.
Most stations made an election in the first round, and most of those received their requested channels. Applicants whose choices conflicted with those of neighboring stations had to request a different channel in the second round. The third and final round occurred in May 2006.
Some stations requested that the FCC assign the best available channel.
Considerations
Aside from the practical considerations above, there are also technical considerations which are based on the physics of the radio spectrum. These affect the radio propagation of DTV just as with other signals.
The low VHF channels from 2 to 6, while requiring the lowest power (up to 100 kW analog video or 20 kW digital), are prone to electromagnetic interference. The ATSC digital TV system has severe problems with susceptibility to impulse noise, bursts of interference which briefly render the entire channel unusable, due to its inability to instantaneously determine where in a video frame to resume display when the signal returns. The result is macroblocking and pixelation of the entire picture whenever impulse noise sources (such as motors, appliances or electrical storms) are active. These channels are also the lowest in frequency and therefore the longest in wavelength, requiring the largest antennas both to transmit and receive, and they are prone to atmospheric ducting, especially at night when the ground (and the air near it) cools rapidly. Because of the antenna size (a properly-sized channel 2 dipole spans approximately eight feet, or 2.4 meters) and the fact that there are only five channels in this band, most set-top antennas are designed to receive the higher TV bands.
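To see how wavelength drives antenna size, the half-wave dipole length can be worked out directly as L = c/(2f) at a channel's center frequency. The sketch below is a back-of-the-envelope illustration (the channel edge frequencies are standard U.S. allocations; the script itself is not from the article):

```python
# Half-wave dipole length L = c / (2 * f), evaluated at channel center frequency.
# U.S. channel edges: ch 2 = 54-60 MHz, ch 7 = 174-180 MHz, ch 14 = 470-476 MHz.
C = 299_792_458  # speed of light, m/s

channels = {2: (54e6, 60e6), 7: (174e6, 180e6), 14: (470e6, 476e6)}

for ch, (lo, hi) in channels.items():
    f_center = (lo + hi) / 2
    length_m = C / (2 * f_center)
    print(f"channel {ch:2d}: half-wave dipole = {length_m:.2f} m "
          f"({length_m * 3.281:.1f} ft)")

# channel  2: 2.63 m (8.6 ft)  <- why low-VHF set-top antennas are so large
# channel  7: 0.85 m (2.8 ft)
# channel 14: 0.32 m (1.0 ft)
```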
Furthermore, channel 6 abuts the FM broadcast band at 88 MHz, so it can both cause interference to and receive interference from adjacent FM stations. (The FCC refused to remove this band from the bandplan, because auctioning the high UHF channels instead would bring in more money. This also contradicts what has been done in every other country that has forced a DTV transition, all of which gave up the VHF bands.) A completely unaddressed issue is the use of HD Radio on 88.1 FM, where the lower sideband overlaps the far upper sideband of digital TV channel 6.
The upper VHF band (band III), comprising channels 7 to 13, suffers less from the above problems, but is still not as good as the UHF band. Keeping these channels for TV also prevents the use of the band for Digital Audio Broadcasting, as is done with local radio stations in Europe.
The UHF band contains 55 usable channels from 14 to 69 (channel 37 is excluded in the U.S., being reserved for radio astronomy). Channels 52 to 69 are permanently unavailable for digital TV, leaving only 37 channels. Stations generally try to choose a lower frequency, which causes some crowding, and therefore election conflicts, on the lowest channels. Still, the UHF band has great advantages over VHF, in large part because of its propagation characteristics and lack of impulse noise. The shorter wavelength also means that smaller antennas are needed, an advantage for both the broadcaster and the viewer. Another advantage is that the great majority of stations use this band, so only one type of antenna (and sometimes amplifier) is needed to receive all of those stations. Key disadvantages of UHF operation include the need for greater transmitter power and the reduced coverage area; the edge diffraction of signals around terrestrial obstacles degrades rapidly as frequency is increased.
Effects
Channel elections generally will not affect consumers in the long run, because virtual channel numbering keeps stations appearing on their original analog channel numbers, except when a station has trouble transmitting PSIP metadata.
However, most ATSC tuners must re-scan for stations that change their RF channel. On some, this is as simple as manually punching in the new RF channel, at which point the decoder will read the PSIP data and re-map to the proper virtual channel number. However, this may not delete the original mapping, leaving the original "dead" channels interleaved with the new ones (such as 5.1 old, 5.1 new, 5.2 old, 5.2 new), or possibly confusing the receiver (and the user). In many cases, a receiver will not automatically add the new mapping at all if an old one exists. Completely re-scanning will normally solve this, but may not pick up stations that are weak or temporarily off the air during the scan; those must then be entered manually (if the receiver even allows it).
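The stale-mapping behavior can be pictured with a toy model of a receiver's channel table. Everything below (the table layout, function names, and call letters) is invented for illustration and does not correspond to any actual ATSC receiver firmware:

```python
# A receiver's channel table maps virtual channel -> RF channel.
# Keying the table by (virtual, rf) pairs reproduces the "dead entry" bug:
# after a station moves, both the old and new RF mappings survive a naive scan.

def naive_rescan(table, found):
    """Add newly found (virtual, rf) pairs without pruning stale ones."""
    table.update(found)
    return table

def full_rescan(found):
    """Rebuild the table from scratch; stale entries vanish, but so do
    stations that were weak or off-air during the scan."""
    return dict(found)

table = {("5.1", 5): "WXXX-DT", ("5.2", 5): "WXXX-D2"}    # before the move
found = {("5.1", 41): "WXXX-DT", ("5.2", 41): "WXXX-D2"}  # station moved to RF 41

print(sorted(naive_rescan(dict(table), found)))
# [('5.1', 5), ('5.1', 41), ('5.2', 5), ('5.2', 41)]  <- old and new interleaved
print(sorted(full_rescan(found)))
# [('5.1', 41), ('5.2', 41)]                           <- clean, if received
```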
Where stations are moving to a different frequency band (such as UHF to VHF), this will affect antenna selection. Many antennas marketed for HDTV use are UHF-only or perform poorly on VHF, while many 82-channel VHF/UHF antennas are a compromise design strongly favoring VHF channels.
References
Digital television
Broadcast engineering
Alvan Clark & Sons
https://en.wikipedia.org/wiki/Alvan%20Clark%20%26%20Sons

Alvan Clark & Sons was an American maker of optics that became famous for crafting lenses for some of the largest refracting telescopes of the 19th and early 20th centuries. It was founded in 1846 in Cambridgeport, Massachusetts, by Alvan Clark (1804–1887), a descendant of Cape Cod whalers who started out as a portrait painter, and his sons George Bassett Clark (1827–1891) and Alvan Graham Clark (1832–1897). On five occasions, the firm built the largest refracting telescope in the world. The Clark firm gained "worldwide fame and distribution", wrote one author on astronomy in 1899.
The Dearborn telescope (housed successively at the University of Chicago, Northwestern University and the Adler Planetarium) was commissioned in 1856 by the University of Mississippi, but the outbreak of the Civil War prevented the university from ever taking ownership. As a result, the telescope was still being tested in Cambridgeport when Alvan Graham Clark observed Sirius B through it in 1862.
In 1873 they built the objective lens for the refractor at the United States Naval Observatory. In 1883 they built the telescope for the Pulkovo Observatory in Russia; the objective for the refractor at Lick Observatory was made in 1887; and the lens for the Yerkes Observatory refractor followed in 1897, exceeded in size only by the lens made for the Great Paris Exhibition Telescope of 1900.
The company also built a number of smaller instruments, which are still highly prized among collectors and amateur astronomers.
The company's assets were acquired by the Sprague-Hathaway Manufacturing Company in 1933, but the firm continued to operate under the Clark name. In 1936, Sprague-Hathaway moved the Clark shop to a new location in West Somerville, Massachusetts, where manufacturing continued in association with the Perkin-Elmer Corporation, another maker of precision instruments. Most of Clark's equipment was disposed of as scrap during World War II, and Sprague-Hathaway itself was liquidated in 1958.
See also
Chabot Space & Science Center, Oakland, California
Charles Sumner Tainter
References
Defunct technology companies of the United States
Telescope manufacturers
Companies established in 1846
Instrument-making corporations
Companies based in Cambridge, Massachusetts
Defunct companies based in Massachusetts
1846 establishments in Massachusetts
Protein filament
https://en.wikipedia.org/wiki/Protein%20filament

In biology, a protein filament is a long chain of protein monomers, such as those found in hair, muscle, or in flagella. Protein filaments come together to form the cytoskeleton of the cell. They are often bundled together to provide support, strength, and rigidity to the cell. The three major classes of protein filaments that make up the cytoskeleton are actin filaments, microtubules, and intermediate filaments.
Cellular types
Microfilaments
Compared to the other components of the cytoskeleton, microfilaments are the thinnest filaments, with a diameter of approximately 7 nm. Microfilaments are the part of the cytoskeleton composed of a protein called actin. Two strands of actin intertwine to form a filamentous structure that allows for the movement of motor proteins. Actin can occur either as monomeric G-actin or as filamentous F-actin. Microfilaments are important for the overall organization of the plasma membrane. Actin filaments are both helical and flexible; they are composed of many actin monomers chained together, which adds to their flexibility. They are found in several places in the body, including microvilli, contractile rings, stress fibers, and the cell cortex. In a contractile ring, actin helps with cell division, while in the cell cortex it helps maintain the structural integrity of the cell.
Microfilament Polymerization
Microfilament polymerization is divided into three steps. Nucleation is the first step; it is the rate-limiting and slowest step of the process. Elongation is the next step, the rapid addition of actin monomers at both the plus and minus ends of the microfilament. The final step is the steady state, at which the addition of monomers equals the loss of monomers, so the microfilament no longer grows; the monomer concentration at which this occurs is known as the critical concentration of actin. Several toxins are known to interfere with actin polymerization. Cytochalasin binds to the growing end of the actin polymer so that it can no longer bind incoming actin monomers; actin already in the polymer still leaves the microfilament, causing depolymerization. Phalloidin binds to actin and locks the filament in place: monomers neither add to nor leave the polymer, which stabilizes the molecule. Latrunculin is similar to cytochalasin, but it binds to actin monomers, preventing them from adding onto the actin polymer; this causes depolymerization of the actin polymer in the cell.
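A minimal way to see why a critical concentration emerges is to write the elongation rate at a filament end as a balance, rate = k_on·[G] - k_off, which crosses zero at [G] = k_off/k_on. The rate constants below are invented for illustration; this is a sketch, not a quantitative model of actin:

```python
# Toy elongation model for one filament end: growth rate = k_on*[G] - k_off.
# The steady state (critical concentration) is where the rate crosses zero.
k_on = 10.0   # subunit additions per second per micromolar (assumed value)
k_off = 5.0   # subunit losses per second (assumed value)

critical_conc = k_off / k_on  # 0.5 uM: below this the filament shrinks
print(f"critical concentration = {critical_conc} uM")

for g_actin in (0.2, 0.5, 1.0):  # free G-actin concentration, uM
    rate = k_on * g_actin - k_off
    trend = "shrinks" if rate < 0 else "steady" if rate == 0 else "grows"
    print(f"[G] = {g_actin} uM -> net rate {rate:+.1f} subunits/s ({trend})")
```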
Actin-Based Motor Protein: Myosin
There are several different proteins that interact with actin in the body; one of the best known motor proteins is myosin. Myosin binds to actin filaments and moves along them, and this movement of myosin along the microfilament can drive muscle contraction, membrane association, endocytosis, and organelle transport. In muscle, the sarcomere, the structural unit of a myofibril in which actin and myosin interact, is described in terms of three bands and one disk. The A band is the region where myosin is present and binds to actin during muscle contraction. The I band is the region of actin not bound to myosin, though it still moves during muscle contraction. The H zone is the space between two adjacent stretches of actin, which shrinks when the muscle begins to contract. The Z disk marks the end of each side of the sarcomere.
Proteins Limiting Microfilaments
These microfilaments can be regulated by several factors or proteins. Tropomodulin is a protein that caps the ends of actin filaments, stabilizing the structure. Nebulin is another protein that can bind along the sides of actin filaments, preventing the attachment of myosin; this stabilizes the actin and limits muscle contraction. Titin is another such protein, but it binds to myosin rather than to the actin microfilament. Titin helps stabilize the contraction and the myosin-actin structure.
Microtubules
Microtubules are the largest type of filament in the cytoskeleton, with a diameter of 25 nm. A single microtubule consists of 13 linear protofilaments. Unlike microfilaments, microtubules are composed of a protein called tubulin. Tubulin occurs as dimers, named "αβ-tubulin" or "tubulin dimers", which polymerize to form the microtubules. Structurally, microtubules fall into three main groups: singlets, doublets, and triplets. Singlets are the microtubule structures found in the cytoplasm. Doublets are found in cilia and flagella. Triplets are found in basal bodies and centrioles. There are two main populations of microtubules: unstable, short-lived microtubules that assemble and disassemble rapidly, and stable, long-lived microtubules that remain polymerized for longer periods of time and can be found in flagella, red blood cells, and nerve cells. Microtubules play a significant role in the organization of organelles and vesicles, the beating of cilia and flagella, nerve and red blood cell structure, and the alignment and separation of chromosomes during mitosis and meiosis.
Orientation in Cells
When a cell is in interphase, its microtubules tend to all orient the same way: their minus end lies close to the nucleus of the cell, while their plus end is oriented away from the cell body. The basal body found within the cell helps the microtubules orient in this specific fashion. Mitotic cells show a similar orientation, with the plus end oriented away from the microtubule organizing center (MTOC) and the minus end toward it. The plus ends of these microtubules attach to the kinetochores on the chromosomes, allowing for cell division when applicable. Nerve cells differ from these two cases. In an axon, microtubules arrange with their minus end toward the cell body and their plus end away from it. In dendrites, however, microtubules can have mixed orientation: some have their plus end toward the cell body and their minus end away from it.
Drugs Disrupting Microtubules
Colchicine is an example of a drug that is used as a microtubule inhibitor. It binds to the α and β tubulin of dimers in microtubules. At low concentrations this can stabilize microtubules, but at high concentrations it can lead to their depolymerization. Taxol is another drug, often used to help treat breast cancer by targeting microtubules. Taxol binds to the side of a microtubule and can disrupt cell division.
Role in Cellular Division
There are three main types of microtubules involved in cell division. Astral microtubules extend out of the centrosome toward the cell cortex; they can connect to the plasma membrane via cortical landmark deposits, which are positioned by polarity cues, growth and differentiation factors, or adhesion contacts. Polar microtubules extend toward the middle of the cell and overlap at the equator where the cell is dividing. Kinetochore microtubules extend to and bind the kinetochores on the chromosomes, assisting in the division of the cell; these microtubules attach to the kinetochore at their plus end. NDC80 is a protein found at this binding point that helps stabilize the interaction during cell division. During division the overall microtubule length does not change; instead a treadmilling effect is produced that can drive the separation of the chromosomes.
Intermediate filaments
Intermediate filaments are part of the cytoskeleton found in most eukaryotic cells. An example of an intermediate filament is a neurofilament, which provides support for the structure of the axon and is a major part of its cytoskeleton. Intermediate filaments have an average diameter of 10 nm, smaller than that of microtubules but larger than that of microfilaments. These 10 nm filaments are made up of elongated polypeptide chains belonging to the intermediate filament protein family. Unlike microtubules and microfilaments, intermediate filaments are not involved in the direct movement of cells. Intermediate filaments can play a role in cell communication in a process known as crosstalk; this crosstalk can contribute to mechanosensing, which can help protect the cell during cellular migration within the body. They can also help link actin and microtubules into the cytoskeleton, contributing to the eventual movement and division of cells. Lastly, intermediate filaments can help regulate vascular permeability by organizing continuous adherens junctions through plectin cross-linking.
Classification of Intermediate Filaments
Unlike microfilaments and microtubules, which are composed primarily of actin and tubulin, intermediate filaments are composed of several different proteins. These proteins have been classified into six major types based on shared characteristics. Type 1 and type 2 intermediate filaments are composed of keratins and are mainly found in epithelial cells. Type 3 intermediate filaments contain vimentin and are found in a variety of cells, including smooth muscle cells, fibroblasts, and white blood cells. Type 4 intermediate filaments are the neurofilaments found in neurons, supporting many motor axons. Type 5 intermediate filaments are composed of the nuclear lamins found in the nuclear envelope of many eukaryotic cells, where they assemble into an orthogonal network on the nuclear membrane. Type 6 intermediate filaments are composed of nestin and are found in the stem cells of the central nervous system.
References
Protein structure
166P/NEAT
https://en.wikipedia.org/wiki/166P/NEAT

166P/NEAT is a periodic comet and centaur in the outer Solar System. It was discovered by the Near Earth Asteroid Tracking (NEAT) project in 2001 and initially classified as a comet, with the provisional designation P/2001 T4 (NEAT), as it was apparent from the discovery observations that the body exhibited a cometary coma. It is one of the few known bodies with centaur-like orbits that display a coma, along with 60558 Echeclus, 2060 Chiron, 165P/LINEAR and 167P/CINEOS. It is also one of the reddest centaurs.
166P/NEAT has a perihelion distance of 8.56 AU, and is a Chiron-type comet (defined by $T_{\mathrm{Jupiter}} > 3$ and $a > a_{\mathrm{Jupiter}}$).
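For context, the Tisserand parameter with respect to Jupiter used in this classification is computed from the comet's semi-major axis $a$, eccentricity $e$, and inclination $i$; the formula below is the standard definition from celestial mechanics rather than something stated in the original article:

```latex
T_{\mathrm{Jupiter}} \;=\; \frac{a_{\mathrm{Jupiter}}}{a}
  \;+\; 2\cos i\,\sqrt{\frac{a}{a_{\mathrm{Jupiter}}}\left(1-e^{2}\right)}
```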
References
External links
Orbital simulation from JPL (Java) / Ephemeris
166P on Seiichi Yoshida's comet list
Chiron-type comets
Periodic comets
Antioxidant effect of polyphenols and natural phenols
https://en.wikipedia.org/wiki/Antioxidant%20effect%20of%20polyphenols%20and%20natural%20phenols

A polyphenol antioxidant is a hypothesized type of antioxidant studied in vitro. Numbering over 4,000 distinct chemical structures, mostly from plants, such polyphenols have not been demonstrated to be antioxidants in vivo.
In vitro, at high experimental doses, polyphenols may affect cell-to-cell signaling, receptor sensitivity, inflammatory enzyme activity or gene regulation. None of these hypothetical effects has been confirmed in humans by high-quality clinical research.
Sources of polyphenols
The main source of polyphenols is dietary, since they are found in a wide array of phytochemical-bearing foods. For example, honey; most legumes; fruits such as apples, blackberries, blueberries, cantaloupe, pomegranate, cherries, cranberries, grapes, pears, plums, raspberries, aronia berries, and strawberries (berries in general have high polyphenol content) and vegetables such as broccoli, cabbage, celery, onion and parsley are rich in polyphenols. Red wine, chocolate, black tea, white tea, green tea, olive oil and many grains are sources. Ingestion of polyphenols occurs by consuming a wide array of plant foods.
Biochemical theory
The regulation theory considers the ability of polyphenols to scavenge free radicals and to up-regulate certain metal chelation reactions. Various reactive oxygen species, such as singlet oxygen, peroxynitrite and hydrogen peroxide, must be continually removed from cells to maintain healthy metabolic function. Diminishing the concentrations of reactive oxygen species can have several benefits possibly associated with ion transport systems, and so may affect redox signaling. There is no substantial evidence, however, that dietary polyphenols have an antioxidant effect in vivo.
The "deactivation" of oxidant species by polyphenolic antioxidants (PhOH) is based, with regard to food systems that are deteriorated by peroxyl radicals (ROO•), on the donation of hydrogen, which interrupts chain reactions:
ROO• + PhOH → ROOH + PhO•
Phenoxyl radicals (PhO•) generated by this reaction may be stabilized through resonance and/or intramolecular hydrogen bonding, as proposed for quercetin, or may combine to yield dimerisation products, thus terminating the chain reaction:
PhO• + PhO• → PhO-OPh
Potential biological consequences
Dietary polyphenols have been evaluated for biological activity in vitro, but there is no evidence from high-quality clinical research that they have effects in vivo. Preliminary research has been conducted, and regulatory status was reviewed in 2009 by the U.S. Food and Drug Administration (FDA), with no recommended intake values established, indicating an absence of proof of nutritional value. Other effects might result from consumption of foods rich in polyphenols, but they have not been proved scientifically in humans; accordingly, health claims on food labels are not allowed by the FDA.
Difficulty in analyzing effects of specific chemicals
It is difficult to evaluate the physiological effects of specific natural phenolic antioxidants, since such a large number of individual compounds may occur even in a single food and their fate in vivo cannot be measured.
Other more detailed chemical research has elucidated the difficulty of isolating individual phenolics. Because significant variation in phenolic content occurs among various brands of tea, there are possible inconsistencies among epidemiological studies implying beneficial health effects of phenolic antioxidants of green tea blends. The Oxygen Radical Absorbance Capacity (ORAC) test is a laboratory indicator of antioxidant potential in foods and dietary supplements. However, ORAC results cannot be confirmed to be physiologically applicable and have been designated as unreliable.
Practical aspects of dietary polyphenols
There is debate regarding the total body absorption of dietary intake of polyphenolic compounds. While some indicate potential health effects of certain specific polyphenols, most studies demonstrate low bioavailability and rapid excretion of polyphenols, indicating their potential roles only in small concentrations in vivo. More research is needed to understand the interactions between a variety of these chemicals acting in concert within the human body.
Topical application of polyphenols
There is no substantial evidence that reactive oxygen species play a role in the process of skin aging. The skin is exposed to various exogenous sources of oxidative stress, including ultraviolet radiation whose spectral components may be responsible for the extrinsic type of skin aging, sometimes termed photoaging. Controlled long-term studies on the efficacy of low molecular weight antioxidants in the prevention or treatment of skin aging in humans are absent.
Combination of antioxidants in vitro
Experiments on linoleic acid subjected to 2,2′-azobis(2-amidinopropane) dihydrochloride-induced oxidation with different combinations of phenolics show that binary mixtures can lead to either a synergistic effect or an antagonistic effect.
Antioxidant levels of purified anthocyanin extracts were much higher than expected from their anthocyanin content, indicating a synergistic effect of anthocyanin mixtures.
Antioxidant capacity tests
Oxygen radical absorbance capacity (ORAC)
Ferricyanide reducing power
2,2-diphenyl-1-picrylhydrazyl radical scavenging activity
See also
List of phytochemicals in food
List of antioxidants in food
Health effects of polyphenols
Free-radical theory
Nitric oxide
Resveratrol
Astaxanthin
References
Angiology
Chemopreventive agents
Antioxidants
Ethnolichenology
https://en.wikipedia.org/wiki/Ethnolichenology

Ethnolichenology is the study of the relationship between lichens and people. Lichens have been, and are still being, used for many different purposes by human cultures across the world. The most common human use of lichens is for dye, but they have also been used for medicine, food and other purposes.
Lichens for dye
Lichens are a common source of natural dyes. The dye is usually extracted from the lichen either in boiling water or by ammonia fermentation. Although usually called ammonia fermentation, this method is not actually a fermentation; it involves letting the lichen steep in ammonia (traditionally urine) for at least two to three weeks.
In North America the most significant lichen dye is Letharia vulpina. Indigenous people through most of this lichen's range in North America traditionally make a yellow dye from this lichen by boiling it in water.
Many of the traditional dyes of the Scottish Highlands were made from lichens including red dyes from the cudbear lichen, Lecanora tartarea, the common orange lichen, Xanthoria parietina, and several species of leafy Parmelia lichens. Brown or yellow lichen dyes (called crottle or crotal), made from Parmelia saxatilis scraped off rocks, and red lichen dyes (called corkir) were used extensively to produce tartans.
Purple dyes from lichens were historically very important throughout Europe from the 15th to 17th centuries. They were generally extracted from Roccella spp. lichens imported from the Canary Islands, Cape Verde Islands, Madagascar, or India. These lichens, and the dye extracted from them, are called orchil (variants archil, orchilla). The same dye was also produced from Ochrolechia spp. lichens in Britain and was called cudbear. Both Roccella spp. and Ochrolechia spp. contain the lichen substance orcin, which converts into the purple dye orcein in the ammonia fermentation process.
Litmus, a water-soluble pH indicator dye mixture, is extracted from Roccella species.
Lichens for medicine
Many lichens have been used medicinally across the world. A lichen's usefulness as a medicine is often related to the lichen secondary compounds that are abundant in most lichen thalli. Different lichens produce a wide variety of these compounds, most of which are unique to lichens and many of which are antibiotic. It has been estimated that 50% of all lichen species have antibiotic properties. Many lichen extracts have been found to be effective in killing Gram-positive bacteria, including species that cause boils, scarlet fever, and pneumonia.
One of the most potent lichen antibiotics is usnic acid; as a result, Usnea spp. are commonly used in traditional medicines. Usnea was used in the United States as a fungal remedy for the mouth, stomach, intestines, anus, vagina, nose, ear, and skin, and in Finland it was used to treat wounds, skin eruptions, and athlete's foot. In Russia a preparation of the sodium salt of usnic acid was sold under the name Binan for the treatment of varicose and trophic ulcers, second- and third-degree burns, and for plastic surgery.
Other lichens commonly featured in folk medicines include Iceland moss and Lungwort.
Lichens for poisons
Only a few lichens are truly poisonous, with species of Letharia and Vulpicida being the primary examples. These lichens are yellow because they have high concentrations of the bright yellow toxin vulpinic acid.
Wolf lichen (Letharia vulpina) was used in Scandinavia to poison wolves. The process began by adding the lichen to baits such as reindeer blood and other meats, sometimes mixing the concoction with ground glass or strychnine. Wolves that ate the concoction were reported to succumb in less than 24 hours. The Achomawi people of northern California used Letharia to poison arrowheads: the arrowheads would be soaked in the lichen for a year, sometimes with the addition of rattlesnake venom. Although toxic, wolf lichens were used to treat sores and inflammation by indigenous people in northern California and southern British Columbia, and were even taken internally as medicine.
Lichens for food
There are records of lichens being used as food by many different human cultures across the world. Lichens are eaten by people in North America, Europe, Asia, and Africa, and perhaps elsewhere. Often lichens are merely famine foods eaten in times of dire needs, but in some cultures lichens are a staple food or even a delicacy. They are also a source of vitamin D.
In the past, Iceland moss (Cetraria islandica) was an important human food in northern Europe and Scandinavia, and was cooked in many different ways, such as bread, porridge, pudding, soup, or salad. Bryoria fremontii was an important food in parts of North America, where it was usually pitcooked; it is even featured in a Secwepemc story. Reindeer lichen (Cladonia spp.) is a staple food of reindeer and caribou in the Arctic. Northern peoples in North America and Siberia traditionally eat the partially digested lichen after they remove it from the rumen of caribou that have been killed; it is often called 'stomach icecream'. Rock tripe (Umbilicaria spp. and Lasalia spp.) is a lichen that has frequently been used as an emergency food in North America. One species of Umbilicaria, iwa-take (U. esculenta), is used in a variety of traditional Korean and Japanese foods; it is quite expensive, and is collected off the sides of cliffs. In India, Parmotrema perlatum lichen is a popular ingredient of many spice mixes, such as garam masala, kaala masala, goda masala, bhojwar masala from Hyderabad and potli masala of Uttar Pradesh. In India, the Middle East, and Niger, Rimelia reticulata, Ramalina conduplicans, and Parmotrema tinctorum are used as spices and flavor enhancers.
Very few lichens are poisonous. Poisonous lichens include those high in vulpinic acid or usnic acid. Most (but not all) lichens that contain vulpinic acid are yellow, so any yellow lichen should be considered to be potentially poisonous.
Lichens for embalming
Embalming began in Ancient Egypt around 5,000 years ago. The use of lichens in embalming dates to the 18th Dynasty, from which period Pseudevernia furfuracea has been found in an Egyptian vase. The process began with a slit in the abdomen; the organs and viscera were removed, wrapped in separate linen packets, and either replaced in the body or put in canopic jars between the legs. The body cavity was then packed with lichen, sawdust, bruised myrrh, cassia, and other spices. Pseudevernia furfuracea was employed for its preservative and aromatic qualities, and also simply as a highly absorbent, lightweight packing material. It also contains antibiotic substances; these qualities helped inhibit bacterial decay of the mummies. The Egyptians would also grind Pseudevernia furfuracea and mix it with their flour for bread, which was then placed with the mummy and thought to be the first meal for the mummy in its afterlife. Pseudevernia furfuracea was imported by the shipload from the Grecian archipelago to Alexandria. Today, embalming fluids are colored with cudbear, a product of the lichen dye orchil, illustrating how a historical procedure can influence later practices.
Other human uses of lichens
Lichens have been and are still being used for many other purposes, including
Alcohol production (for fermentable carbohydrates, as catalysts, and/or as flavour/preservatives)
Cosmetics (for hair, and/or sweet smelling powders)
Perfumes (see Oakmoss)
Decorations (including costumes and artwork)
Fibre (clothing, housing, cooking, sanitation)
Animal feed (both fodder and forage)
Fuel
Industrial purposes (production of acid, antibiotic, carbohydrate, litmus)
Tanning
Hunting/fishing (to find prey, or to lure them in)
Navigation
Insect repellent/insecticide
Preservatives (for food or beer)
Rituals
Tobacco
Narcotics
Hallucinogens (see Dictyonema)
References
External links
Sylvia Sharnoff's ethnolichenology bibliographical database
Branches of botany
Ethnobiology
Lichenology
Symbiosis
Branches of mycology
Lichens and humans
Miroslav Fiedler
https://en.wikipedia.org/wiki/Miroslav%20Fiedler

Miroslav Fiedler (7 April 1926 – 20 November 2015) was a Czech mathematician known for his contributions to linear algebra, graph theory and algebraic graph theory.
His article "Algebraic Connectivity of Graphs", published in the Czechoslovak Mathematical Journal in 1973, established the use of the eigenvalues of the Laplacian matrix of a graph to create tools for measuring algebraic connectivity in algebraic graph theory. Fiedler is honored in the names of these quantities: the Fiedler eigenvalue is the second smallest eigenvalue of the graph Laplacian, and its associated eigenvector is the Fiedler vector. Since Fiedler's original contribution, this structure has become essential to large areas of research in network theory, flocking, distributed control, clustering, multi-robot applications and image segmentation.
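As a concrete illustration of these quantities (a sketch, not taken from Fiedler's paper), the Fiedler eigenvalue and eigenvector of a small graph can be computed with NumPy, and the eigenvector's signs used for a spectral bisection:

```python
import numpy as np

# Adjacency matrix of a 6-node graph: two triangles joined by one bridge edge.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1

L = np.diag(A.sum(axis=1)) - A   # graph Laplacian L = D - A
vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order

fiedler_value = vals[1]          # algebraic connectivity (> 0 iff connected)
fiedler_vector = vecs[:, 1]
print("algebraic connectivity:", round(fiedler_value, 4))

# The sign pattern of the Fiedler vector suggests a natural bipartition:
print("partition:", [int(x > 0) for x in fiedler_vector])
# the two triangles fall on opposite sides of the cut
```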
References
External links
Home page at the Academy of Sciences of the Czech Republic.
1926 births
2015 deaths
Mathematicians from Prague
Czech mathematicians
Graph theorists
Recipients of Medal of Merit (Czech Republic)
Combinatorialists
Charles University alumni
Excluded point topology
https://en.wikipedia.org/wiki/Excluded%20point%20topology

In mathematics, the excluded point topology is a topology where exclusion of a particular point defines openness. Formally, let $X$ be any non-empty set and $p \in X$. The collection

$$T = \{ S \subseteq X : p \notin S \} \cup \{ X \}$$

of subsets of $X$ is then the excluded point topology on $X$. There are a variety of cases which are individually named:
If X has two points, it is called the Sierpiński space. This case is somewhat special and is handled separately.
If X is finite (with at least 3 points), the topology on X is called the finite excluded point topology
If X is countably infinite, the topology on X is called the countable excluded point topology
If X is uncountable, the topology on X is called the uncountable excluded point topology
A generalization is the open extension topology: if $X \setminus \{p\}$ has the discrete topology, then the open extension topology on $X$ is the excluded point topology.
This topology is used to provide interesting examples and counterexamples.
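Because the definition is purely combinatorial, it can be checked mechanically on a small example. The following sketch (illustrative only, not part of the article) enumerates the excluded point topology on a three-point set and verifies the topology axioms by brute force:

```python
from itertools import combinations, product

X = frozenset({"a", "b", "p"})
p = "p"

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Open sets: every subset missing p, plus the whole space.
topology = {S for S in powerset(X) if p not in S} | {X}
print(sorted(tuple(sorted(S)) for S in topology))
# [(), ('a',), ('a', 'b'), ('a', 'b', 'p'), ('b',)]

# Axioms: contains the empty set and X; closed under unions and intersections
# (finite checks suffice here since the whole space is finite).
assert frozenset() in topology and X in topology
for S, T in product(topology, repeat=2):
    assert S | T in topology and S & T in topology
print("topology axioms verified")
```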
Properties
Let $X$ be a space with the excluded point topology with special point $p$.
The space is compact, as the only neighborhood of $p$ is the whole space.
The topology is an Alexandrov topology. The smallest neighborhood of $p$ is the whole space $X$; the smallest neighborhood of a point $x \neq p$ is the singleton $\{x\}$. These smallest neighborhoods are compact. Their closures are respectively $X$ and $\{x, p\}$, which are also compact. So the space is locally relatively compact (each point admits a local base of relatively compact neighborhoods) and locally compact in the sense that each point has a local base of compact neighborhoods. But points other than $p$ do not admit a local base of closed compact neighborhoods.
The space is ultraconnected, as every nonempty closed set contains the point $p$. Therefore the space is also connected and path-connected.
See also
Finite topological space
Fort space
List of topologies
Particular point topology
References
Topological spaces
Borel hierarchy
https://en.wikipedia.org/wiki/Borel%20hierarchy

In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory.
One common use of the Borel hierarchy is to prove facts about the Borel sets using transfinite induction on rank. Properties of sets of small finite ranks are important in measure theory and analysis.
Borel sets
The Borel algebra in an arbitrary topological space is the smallest collection of subsets of the space that contains the open sets and is closed under countable unions and complementation. It can be shown that the Borel algebra is closed under countable intersections as well.
A short proof that the Borel algebra is well-defined proceeds by showing that the entire powerset of the space is closed under complements and countable unions, and thus the Borel algebra is the intersection of all families of subsets of the space that have these closure properties. This proof does not give a simple procedure for determining whether a set is Borel. A motivation for the Borel hierarchy is to provide a more explicit characterization of the Borel sets.
Boldface Borel hierarchy
The Borel hierarchy or boldface Borel hierarchy on a space $X$ consists of classes $\mathbf{\Sigma}^0_\alpha$, $\mathbf{\Pi}^0_\alpha$, and $\mathbf{\Delta}^0_\alpha$ for every countable ordinal $\alpha$ greater than zero. Each of these classes consists of subsets of $X$. The classes are defined inductively from the following rules:
A set is in $\mathbf{\Sigma}^0_1$ if and only if it is open.
A set is in $\mathbf{\Pi}^0_\alpha$ if and only if its complement is in $\mathbf{\Sigma}^0_\alpha$.
A set $A$ is in $\mathbf{\Sigma}^0_\alpha$ for $\alpha > 1$ if and only if there is a sequence of sets $A_1, A_2, \ldots$ such that each $A_i$ is in $\mathbf{\Pi}^0_{\alpha_i}$ for some $\alpha_i < \alpha$ and $A = \bigcup_i A_i$.
A set is in $\mathbf{\Delta}^0_\alpha$ if and only if it is both in $\mathbf{\Sigma}^0_\alpha$ and in $\mathbf{\Pi}^0_\alpha$.
The motivation for the hierarchy is to follow the way in which a Borel set could be constructed from open sets using complementation and countable unions.
A Borel set is said to have finite rank if it is in $\mathbf{\Sigma}^0_\alpha$ for some finite ordinal $\alpha$; otherwise it has infinite rank.
If $X$ is a metrizable space, the hierarchy can be shown to have the following properties:
For every $\alpha$, $\mathbf{\Sigma}^0_\alpha \cup \mathbf{\Pi}^0_\alpha \subseteq \mathbf{\Delta}^0_{\alpha+1}$. Thus, once a set is in $\mathbf{\Sigma}^0_\alpha$ or $\mathbf{\Pi}^0_\alpha$, that set will be in all classes in the hierarchy corresponding to ordinals greater than $\alpha$.
$\bigcup_{\alpha < \omega_1} \mathbf{\Sigma}^0_\alpha = \bigcup_{\alpha < \omega_1} \mathbf{\Pi}^0_\alpha = \bigcup_{\alpha < \omega_1} \mathbf{\Delta}^0_\alpha$. Moreover, a set is in this union if and only if it is Borel.
If $X$ is an uncountable Polish space, it can be shown that $\mathbf{\Sigma}^0_\alpha$ is not contained in $\mathbf{\Pi}^0_\alpha$ for any $\alpha < \omega_1$, and thus the hierarchy does not collapse.
Borel sets of small rank
The classes of small rank are known by alternate names in classical descriptive set theory.
The $\mathbf{\Sigma}^0_1$ sets are the open sets. The $\mathbf{\Pi}^0_1$ sets are the closed sets.
The $\mathbf{\Sigma}^0_2$ sets are countable unions of closed sets, and are called $F_\sigma$ sets. The $\mathbf{\Pi}^0_2$ sets are the dual class, and can be written as countable intersections of open sets. These sets are called $G_\delta$ sets.
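As a concrete worked example at rank 2 (standard, though not stated in the original text), consider the rationals inside $\mathbb{R}$:

```latex
\mathbb{Q} = \bigcup_{q \in \mathbb{Q}} \{q\}
  \quad\text{is } F_\sigma,\ \text{so } \mathbb{Q} \in \mathbf{\Sigma}^0_2;
\qquad
\mathbb{R} \setminus \mathbb{Q} = \bigcap_{q \in \mathbb{Q}} \big(\mathbb{R} \setminus \{q\}\big)
  \quad\text{is } G_\delta,\ \text{so } \mathbb{R} \setminus \mathbb{Q} \in \mathbf{\Pi}^0_2.
```

By the Baire category theorem, $\mathbb{Q}$ is not $G_\delta$, so these rank-2 classes contain genuinely new sets beyond the open and closed ones.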
Lightface hierarchy
The lightface Borel hierarchy (also called the effective Borel hierarchy) is an effective version of the boldface Borel hierarchy. It is important in effective descriptive set theory and recursion theory. The lightface Borel hierarchy extends the arithmetical hierarchy of subsets of an effective Polish space. It is closely related to the hyperarithmetical hierarchy.
The lightface Borel hierarchy can be defined on any effective Polish space. It consists of classes $\Sigma^0_\alpha$, $\Pi^0_\alpha$ and $\Delta^0_\alpha$ for each nonzero countable ordinal $\alpha$ less than the Church–Kleene ordinal $\omega_1^{\mathrm{CK}}$. Each class consists of subsets of the space. The classes, and codes for elements of the classes, are inductively defined as follows:
A set is $\Sigma^0_1$ if and only if it is effectively open, that is, an open set which is the union of a computably enumerable sequence of basic open sets. A code for such a set is a pair (0, e), where e is the index of a program enumerating the sequence of basic open sets.
A set is $\Pi^0_\alpha$ if and only if its complement is $\Sigma^0_\alpha$. A code for one of these sets is a pair (1, c), where c is a code for the complementary set.
A set $A$ is $\Sigma^0_\alpha$ for $\alpha > 1$ if there is a computably enumerable sequence of codes for a sequence of sets $A_1, A_2, \ldots$ such that each $A_i$ is $\Pi^0_{\alpha_i}$ for some $\alpha_i < \alpha$ and $A = \bigcup_i A_i$. A code for a $\Sigma^0_\alpha$ set is a pair (2, e), where e is an index of a program enumerating the codes of the sequence $A_i$.
A code for a lightface Borel set gives complete information about how to recover the set from sets of smaller rank. This contrasts with the boldface hierarchy, where no such effectivity is required. Each lightface Borel set has infinitely many distinct codes. Other coding systems are possible; the crucial idea is that a code must effectively distinguish between effectively open sets, complements of sets represented by previous codes, and computable enumerations of sequences of codes.
It can be shown that for each $\alpha < \omega_1^{\mathrm{CK}}$ there are sets in $\Sigma^0_\alpha \setminus \Pi^0_\alpha$, and thus the hierarchy does not collapse. No new sets would be added at stage $\omega_1^{\mathrm{CK}}$, however.
A famous theorem due to Spector and Kleene states that a set is in the lightface Borel hierarchy if and only if it is at level $\Delta^1_1$ of the analytical hierarchy. These sets are also called hyperarithmetic. Additionally, for all natural numbers $n$, the classes $\Sigma^0_n$ and $\Pi^0_n$ of the effective Borel hierarchy are the same as the classes $\Sigma^0_n$ and $\Pi^0_n$ of the arithmetical hierarchy of the same name.
The code for a lightface Borel set $A$ can be used to inductively define a tree whose nodes are labeled by codes. The root of the tree is labeled by the code for $A$. If a node is labeled by a code of the form (1, c) then it has a child node whose code is c. If a node is labeled by a code of the form (2, e) then it has one child for each code enumerated by the program with index e. If a node is labeled with a code of the form (0, e) then it has no children. This tree describes how $A$ is built from sets of smaller rank. The ordinals used in the construction of $A$ ensure that this tree has no infinite path, because any infinite path through the tree would have to include infinitely many codes starting with 2, and thus would give an infinite decreasing sequence of ordinals. Conversely, if an arbitrary subtree of $\omega^{<\omega}$ has its nodes labeled by codes in a consistent way, and the tree has no infinite paths, then the code at the root of the tree is a code for a lightface Borel set. The rank of this set is bounded by the order type of the tree in the Kleene–Brouwer order. Because the tree is arithmetically definable, this rank must be less than $\omega_1^{\mathrm{CK}}$. This is the origin of the Church–Kleene ordinal in the definition of the lightface hierarchy.
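To illustrate the tree-of-codes idea in miniature, here is a toy sketch in which the children of a union node are embedded directly as a finite list; real lightface codes instead store the index of a program enumerating the children, so this is only a finite caricature of the construction:

```python
# Toy Borel codes: ("open", name) is a leaf; ("compl", code) has one child;
# ("union", [codes]) has one child per member. Real lightface codes replace
# the explicit lists with indices of programs that enumerate them.

def rank(code):
    """Rank of a finite, well-founded code tree (here just a natural number)."""
    tag = code[0]
    if tag == "open":
        return 1                                   # effectively open: rank 1
    if tag == "compl":
        return rank(code[1])                       # complement keeps the rank
    if tag == "union":
        return 1 + max(rank(c) for c in code[1])   # a union climbs one level
    raise ValueError(f"bad code tag: {tag}")

# F_sigma-style code: a union node over complements of open sets.
fsigma = ("union", [("compl", ("open", f"B{i}")) for i in range(3)])
print(rank(fsigma))  # 2: built from closed (rank-1) pieces by one union
```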
Relation to other hierarchies
See also
Projective hierarchy
Wadge hierarchy
Veblen hierarchy
References
Descriptive set theory
Mathematical logic hierarchies
Ether‐à‐go‐go potassium channel
https://en.wikipedia.org/wiki/Ether%E2%80%90%C3%A0%E2%80%90go%E2%80%90go%20potassium%20channel

An ether‐à‐go‐go potassium channel is a potassium channel which is inwardly rectifying and voltage-gated.
They are named after the ether‐à‐go‐go gene, which codes for one such channel in the fruit fly Drosophila melanogaster.
Examples include hERG, KCNH6, and KCNH7.
References
Potassium channels
Membrane glycoproteins
https://en.wikipedia.org/wiki/Membrane%20glycoproteins

Membrane glycoproteins are membrane proteins which help in cell recognition; they include fibronectin, laminin and osteonectin.
See also
Glycocalyx, a glycoprotein-rich covering which surrounds the membranes of bacterial, epithelial and other cells
External links
Glycoproteins
Azinphos-methyl
https://en.wikipedia.org/wiki/Azinphos-methyl

Azinphos-methyl (Guthion) (also spelled azinophos-methyl) is a broad-spectrum organophosphate insecticide manufactured by Bayer CropScience, Gowan Co., and Makhteshim Agan. Like other pesticides in this class, it owes its insecticidal properties (and human toxicity) to the fact that it is an acetylcholinesterase inhibitor (the same mechanism is responsible for the toxic effects of the V-series nerve agent chemical weapons). It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
History and uses
Azinphos-methyl (AzM) is a neurotoxin derived from nerve agents developed during World War II. It was first registered in the US in 1959 as an insecticide and is also used as the active ingredient in organophosphate (OP) pesticides. It is not registered for consumer or residential use. It has been linked to health problems of farmers who apply it, and the U.S. Environmental Protection Agency (EPA) considered a denial of reregistration, citing "concern to farm workers, pesticide applicators, and aquatic ecosystems". The use of AzM has been fully banned in the USA since 30 September 2013, ending a phase-out period of twelve years.
Azinphos-methyl has been banned in the European Union since 2006 and in Turkey since 2013.
The New Zealand Environmental Risk Management Authority decided to phase out azinphos-methyl over a five-year period starting from 2009. In 2014, it was still used in Australia and, to a limited extent, in New Zealand.
Available forms
AzM is often used as the active ingredient in organophosphate pesticides such as Guthion, Gusathion (GUS), Gusathion-M, Crysthyron, Cotnion, Cotnion-methyl, Metriltrizotion, Carfene, Bay 9027, Bay 17147, and R-1852. This is why Guthion is often used as a nickname for AzM.
Studies have shown that pure AzM is less toxic than GUS. This increased toxicity can be explained by the interactions between the different compounds in the mixture.
Synthesis
The synthesis (in this case, of carbon-14-labelled material) can be seen in figure 1. In the first step, o-nitroaniline (compound 1) is purified through dissolution in a hot water-ethanol mixture in a 2:1 ratio. Activated carbon is added and the result is filtered for clarification. The filtrate is chilled while being stirred to generate crystals, usually at 4 °C, but if needed it can also be cooled to -10 °C. The crystals are then collected, washed and dried. If the product is pure enough, it is used for the following steps, which take place at 0 to 5 °C.
To produce o-nitrobenzonitrile-14C (compound 2), o-nitroaniline and (concentrated reagent grade) hydrochloric acid are combined with ice and water. Sodium nitrite, dissolved in water, is added to this thin slurry. After the formation of a pale-yellow solution, which indicates the completion of the diazotization reaction, the pH is adjusted to 6. The solution is then introduced to a mixture of cuprous cyanide and toluene. At room temperature the toluene layer is removed. The aqueous layer is washed and dried and the purified product is isolated by crystallization.
The third product is anthranilamide-14C (compound 3). It is formed from o-nitrobenzonitrile-14C, which is first dissolved in ethanol and hydrazine hydrate. The solution is subsequently heated and treated, in a well-ventilated hood, with small periodic charges (smaller than 10 mg) of Raney nickel. Under a nitrogen atmosphere the ethanolic solution is clarified and dried.
The next step is to form 1,2,3-benzotriazin-4(3H)-one-14C (compound 4). Sodium nitrite dissolved in water is added to anthranilamide and hydrochloric acid in ice water. Because this is again a diazotization reaction, the solution turns pale-yellow. The pH is then adjusted to 8.5, which causes ring closure to form 1,2,3-benzotriazin-4(3H)-one-14C. This results in a sodium salt slurry that can be treated with hydrochloric acid, lowering the pH to between 2 and 4. The 1,2,3-benzotriazin-4(3H)-one-14C is collected, washed and dried.
In the following step, 1,2,3-benzotriazin-4-(3-chloromethyl)-one-14C is formed. To this end, 1,2,3-benzotriazin-4(3H)-one-14C and paraformaldehyde are added to ethylene dichloride and heated to 40 °C. Thionyl chloride is then added and the mixture is further heated to 65 °C. After four hours of heating, the solution is cooled to room temperature. Water is added and the solution is neutralized. The ethylene dichloride layer is removed and combined with the extract of the washed aqueous layer. The solvent is filtered and dried. The last step is the actual synthesis of azinphos-methyl. Ethylene dichloride is added to the compound resulting from the fifth step, 1,2,3-benzotriazin-4-(3-chloromethyl)-one-14C. This mixture is heated to 50 °C, and sodium bicarbonate and the sodium salt of O,O-dimethyl phosphorodithioate in water are added. The ethylene dichloride layer is removed, re-extracted with ethylene dichloride and purified by filtration. The pure filtrate is dried. This product is once again purified by recrystallization from methanol. What is left is pure azinphos-methyl in the form of white crystals.
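For reference, the end product can be inspected with a cheminformatics toolkit. The snippet below is a hedged illustration: the SMILES string is written out here from the compound's systematic name, O,O-dimethyl S-[(4-oxo-1,2,3-benzotriazin-3(4H)-yl)methyl] phosphorodithioate, and should be checked against an authoritative database before any real use:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Assumed SMILES for azinphos-methyl (verify against PubChem before relying on it).
smiles = "COP(=S)(OC)SCN1N=Nc2ccccc2C1=O"
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))  # expected: C10H12N3O3PS2
print(round(Descriptors.MolWt(mol), 2))      # roughly 317.3 g/mol
```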
Absorption
Azinphos-methyl can enter the body via inhalation, ingestion and dermal contact. Ingestion is responsible for the low-dose exposure of a large part of the population, due to the presence of AzM as residues in food and drinking water; after ingestion it is absorbed from the digestive tract. AzM can also enter the body through the skin, and dermal absorption is responsible for occupational exposure to relatively high doses, mainly in agricultural workers.
Mechanism of toxicity
Once azinphos-methyl is absorbed it can cause neurotoxic effects, like other organophosphate insecticides. At high concentrations AzM itself can act as an acetylcholinesterase (AChE) inhibitor, but its toxicity is mainly due to bioactivation by a cytochrome P450 (CYP450)-mediated desulfuration to its phosphate triester, or oxon (gutoxon) (see figure 2). Gutoxon can react with a serine hydroxyl group at the active site of AChE; the active site is then blocked and AChE is inactivated. Under normal circumstances acetylcholinesterase rapidly and efficiently degrades the neurotransmitter acetylcholine (ACh), thereby terminating its biological activity. Inhibition of AChE results in an immediate accumulation of free unbound ACh at the endings of all cholinergic nerves, which leads to overstimulation of the nervous system.
Efficacy and side effects
Cholinergic nerves play an important role in the normal function of the central nervous, endocrine, neuromuscular, immunological, and respiratory systems. As all cholinergic fibers contain high concentrations of ACh and AChE at their terminals, inhibition of AChE can impair their function, so exposure to azinphos-methyl may disturb many important systems and have various effects.
In the autonomic nervous system, accumulation of acetylcholine leads to the overstimulation of muscarinic receptors of the parasympathetic nervous system. This can affect exocrine glands (increased salivation, perspiration, lacrimation), the respiratory system (excessive bronchial secretions, tightness of the chest, and wheezing), the gastrointestinal tract (nausea, vomiting, diarrhea), the eyes (miosis, blurred vision) and the cardiovascular system (decrease in blood pressure, and bradycardia). Overstimulation of the nicotinic receptors in the para- or sympathetic nervous system may also cause adverse effects on the cardiovascular system, such as pallor, tachycardia and increased blood pressure. In the somatic nervous system, accumulation of acetylcholine may cause muscle fasciculation, paralysis, cramps, and flaccid or rigid tone. Overstimulation of the nerves in the central nervous system, specifically in the brain, may result in drowsiness, mental confusion and lethargy. More severe effects on the central nervous system include a state of coma without reflexes, cyanosis and depression of the respiratory centers. Thus the inhibition of the enzyme AChE may have a lot of different effects.
Detoxification
To prevent the toxic effects, AzM can be biotransformed.
Although AzM (in figure 2 named guthion) can be bioactivated by a cytochrome P450 (CYP450)-mediated desulfuration to its phosphate triester, or oxon (gutoxon), it may also be detoxified by CYP450 itself (reaction 2 in figure 2): CYP450 is able to catalyze the oxidative cleavage of the P-S-C bond in AzM to yield DMTP and MMBA.
Another detoxification pathway involves glutathione (GSH)-mediated dealkylation via cleavage of the P-O-CH3 bond, which forms mono-demethylated AzM and GS-CH3 (reaction 3 in figure 2). This mono-demethylated AzM may be further demethylated to di-demethylated AzM, again yielding GS-CH3 (reaction 4 in figure 2).
AzM may also undergo glutathione-catalyzed dearylation, which forms DMPDT and glutathione-conjugated mercaptomethyl benzazimide (reaction 5 in figure 2).
Gutoxon, the compound mainly responsible for the toxicity of AzM, can also be detoxified, again with the help of CYP450: CYP450 catalyzes the oxidative cleavage of gutoxon, which yields DMP and MMBA (reaction 6 in figure 2). Other detoxification pathways of gutoxon are glutathione-mediated dealkylation, via cleavage of the P-O-CH3 bond, to form demethylated AzM and GS-CH3 (reaction 7 in figure 2), and glutathione-catalyzed dearylation to yield DMTP and glutathione-conjugated mercaptomethyl benzazimide (reaction 8 in figure 2).
Treatment
There are two main treatment strategies for intoxication with AzM: one is to treat the patient before exposure to AzM, the other is to treat the patient after poisoning.
Competitive, reversible inhibitors of AChE can be used for pre-treatment; they can reduce the mortality caused by exposure to AzM. These inhibitors bind temporarily to the catalytic site of the enzyme. Because of this binding, AzM cannot phosphorylate the enzyme, and the enzyme is inhibited for a shorter time.
Treatment after exposure aims to block muscarinic receptor activation. Anticonvulsants are used to control the seizures, and oximes are used to reactivate the inhibited AChE: oximes bind to the phosphoryl group attached to the active site of AChE and remove it.
The compounds found most efficacious in AzM poisoning are the oxime K-27 and the carbamate physostigmine.
These two approaches are also used together: some patients are treated with atropine (a competitive antagonist of muscarinic acetylcholine receptors) combined with reactivating oximes. When patients are resistant to atropine, they can be treated with low doses of anisodamine, a cholinergic and alpha-1 adrenergic antagonist, to achieve a shorter recovery time.
Treatment with a combination of different alkaloids, or with alkaloids acting synergistically with atropine, is safer than using high doses of atropine, which can be toxic.
Another possibility is to use membrane bioreactor technology. When this technology is used, no other chemical compounds need to be added.
In general, pretreatment is much more efficient than post-treatment.
Indications (biomarkers)
The most common biomarker for exposure to AzM is the inhibition of AChE. Other esterases, such as carboxylesterase (CaE) and butyrylcholinesterase (BChE), are also inhibited by AzM. In general, AzM exposure is detected better through AChE inhibition than through CaE inhibition. In amphibians, as well as in zebrafish, AChE is the more sensitive biomarker at low AzM exposure levels.
As already mentioned in the section "Detoxification", AzM can be metabolized into nontoxic dimethylated alkylphosphates (APs) with the help of CYP450 and glutathione. These APs are dimethylphosphate (DMP), dimethylthiophosphate (DMTP) and dimethyldithiophosphate (DMDTP). The three metabolites may be excreted in the urine and can be used as reliable biomarkers of exposure to AzM. However, these metabolites are not specific to AzM, because other organophosphate pesticides may also be metabolized into the same three alkylphosphates.
The amount of erythrocyte acetylcholinesterase (RBC-AChE) in the blood can also be used as a biomarker of effect for AzM. According to Zavon (1965), RBC-AChE is the best indicator of AChE activity at the nerve synapse, because it closely parallels the level of AChE in the CNS and PNS. A depression of RBC-AChE correlates with effects caused by a rapid depression of the AChE found in other tissues, since both enzymes can be inhibited by AzM.
Environmental degradation
AzM is very stable when dissolved in acidic, neutral or slightly alkaline water, but above pH 11 it is rapidly hydrolyzed to anthranilic acid, benzamide and other chemicals. In natural water-rich environments, microorganisms and sunlight cause AzM to break down faster; the half-life is highly variable depending on the conditions, ranging from several days to several months. Under normal conditions, biodegradation and evaporation are the main routes of disappearance; after evaporation, AzM is more exposed to UV light, which causes photodecomposition. With little biological activity and no exposure to UV light, its half-life can reach roughly a year.
Effect on animals
Possible effects on animals are endocrine disruption, reproductive and immune dysfunction and cancer.
A remarkable phenomenon demonstrated in numerous animal studies is that repeated exposure to organophosphates makes mammals less susceptible to the toxic effects of AChE inhibitors, even though cholinesterase activities remain abnormal. The phenomenon is caused by the excess of agonist (ACh) within the synapse, which ultimately leads to a down-regulation of cholinergic receptors. Consequently, fewer receptors are available, so a given concentration of ACh within the synapse produces a smaller response.
Studies have shown that the AChEs in fish brains are more susceptible to organophosphates than those in amphibian brains. This can be explained by the enzymes' affinity for AzM and their rate of phosphorylation: frog brain AChE, for example, has a lower affinity for AzM and a slower rate of phosphorylation than fish brain AChE.
The effects on amphibians are “reduced size, notochord bending, abnormal pigmentation, defective gut and gills, swimming in circles, body shortening, and impaired growth”.
In sea urchins, specifically the Paracentrotus lividus, AzM modifies the cytoskeleton assembly at high concentrations and can alter the deposition of the skeleton of the larva at low concentrations.
In mice, AzM causes weight loss, inhibits brain cholinesterase (ChE) and lowers food consumption. A decrease of 45-50% in brain ChE is lethal in mice. AzM also decreases AChE activity in earthworms and rats.
Rather than treating each in detail here, the following animal studies and their references may be consulted:
Zebrafish
The amphipod Hyalella curvispina and the earthworm Eisenia andrei
Tilapia Oreochromis mossambicus
Frog Pseudacris regilla and salamander Ambystoma gracile
Toad Rhinella arenarum
Rainbow trout Oncorhynchus mykiss
Comparison between the toad Rhinella arenarum and the rainbow trout Oncorhynchus mykiss
Comparison between the mysid Mysidopsis bahia and the fish Cyprinodon variegatus
See also
Azinphos-ethyl
Colony collapse disorder
References
External links
Compendium of Pesticide Common Names
EPA's Azinphos-methyl Page
CDC - NIOSH Pocket Guide to Chemical Hazards - Azinphos-methyl
Extoxnet - Azinphos-methyl
Acetylcholinesterase inhibitors
Pesticides
Organophosphate insecticides
Phosphorodithioates
Benzotriazines | Azinphos-methyl | [
"Chemistry",
"Biology",
"Environmental_science"
] | 3,940 | [
"Toxicology",
"Pesticides",
"Functional groups",
"Phosphorodithioates",
"Biocides"
] |
996,699 | https://en.wikipedia.org/wiki/Green%20economy | A green economy is an economy that aims at reducing environmental risks and ecological scarcities, and that aims for sustainable development without degrading the environment. It is closely related with ecological economics, but has a more politically applied focus. The 2011 UNEP Green Economy Report argues "that to be green, an economy must not only be efficient, but also fair. Fairness implies recognizing global and country level equity dimensions, particularly in assuring a Just Transition to an economy that is low-carbon, resource efficient, and socially inclusive."
A feature distinguishing it from prior economic regimes is the direct valuation of natural capital and ecological services as having economic value (see The Economics of Ecosystems and Biodiversity and Bank of Natural Capital) and a full cost accounting regime in which costs externalized onto society via ecosystems are reliably traced back to, and accounted for as liabilities of, the entity that does the harm or neglects an asset.
Green sticker and ecolabel practices have emerged as consumer facing indicators of friendliness to the environment and sustainable development. Many industries are starting to adopt these standards as a way to promote their greening practices in a globalizing economy. Also known as sustainability standards, these standards are special rules that guarantee the products bought do not hurt the environment and the people that make them. The number of these standards has grown recently and they can now help build a new, greener economy. They focus on economic sectors like forestry, farming, mining or fishing, among others; concentrate on environmental factors like protecting water sources and biodiversity, or reducing greenhouse gas emissions; support social protections and workers' rights; and home in on specific parts of production processes.
Green economists and economics
Green economics is loosely defined as any theory of economics by which an economy is considered to be component of the ecosystem in which it resides (after Lynn Margulis). A holistic approach to the subject is typical, such that economic ideas are commingled with any number of other subjects, depending on the particular theorist. Proponents of feminism, postmodernism, the environmental movement, peace movement, Green politics, green anarchism and anti-globalization movement have used the term to describe very different ideas, all external to mainstream economics.
According to Büscher, the increasing liberalisation of politics since the 1990s has meant that biodiversity must 'legitimise itself' in economic terms. Many non-governmental organisations, governments, banks, companies and so forth have started to claim the right to define and defend biodiversity, in a distinctly neoliberal manner that subjects the concept's social, political, and ecological dimensions to their value as determined by capitalist markets.
Some economists view green economics as a branch or subfield of more established schools. For instance, it can be regarded as classical economics where the traditional land is generalized to natural capital and has some attributes in common with labor and physical capital (since natural capital assets like rivers directly substitute for human-made ones such as canals). Or, it can be viewed as Marxist economics with nature represented as a form of Lumpenproletariat, an exploited base of non-human workers providing surplus value to the human economy, or as a branch of neoclassical economics in which the price of life for developing vs. developed nations is held steady at a ratio reflecting a balance of power and that of non-human life is very low.
An increasing commitment by the UNEP (and national governments such as the UK) to the ideas of natural capital and full cost accounting under the banner 'green economy' could blur distinctions between the schools and redefine them all as variations of "green economics". As of 2010 the Bretton Woods institutions (notably the World Bank and International Monetary Fund (via its "Green Fund" initiative) responsible for global monetary policy have stated a clear intention to move towards biodiversity valuation and a more official and universal biodiversity finance.
The UNEP 2011 Green Economy Report informs that "based on existing studies, the annual financing demand to green the global economy was estimated to be in the range US$1.05 to US$2.59 trillion. To place this demand in perspective, it is about one-tenth of total global investment per year, as measured by global Gross Capital Formation."
At COP26, the European Investment Bank announced a set of just transition common principles agreed upon with multilateral development banks, which also align with the Paris Agreement. The principles refer to focusing financing on the transition to net zero carbon economies, while keeping socioeconomic effects in mind, along with policy engagement and plans for inclusion and gender equality, all aiming to deliver long-term economic transformation.
The African Development Bank, Asian Development Bank, Islamic Development Bank, Council of Europe Development Bank, Asian Infrastructure Investment Bank, European Bank for Reconstruction and Development, New Development Bank, and Inter-American Development Bank are among the multilateral development banks that have vowed to uphold the principles of climate change mitigation and a Just Transition. The World Bank Group also contributed.
Definition
Karl Burkart defined a green economy as based on six main sectors:
Renewable energy
Green buildings
Sustainable transport
Water management
Waste management
Land management
The International Chamber of Commerce (ICC), representing global business, defines the green economy as "an economy in which economic growth and environmental responsibility work together in a mutually reinforcing fashion while supporting progress on social development".
In 2012, the ICC published the Green Economy Roadmap, containing contributions from international experts consulted bi-yearly. The Roadmap represents a comprehensive and multidisciplinary effort to clarify and frame the concept of "green economy". It highlights the role of business in bringing solutions to global challenges. It sets out the following 10 conditions which relate to business/intra-industry and collaborative action for a transition towards a green economy:
Open and competitive markets
Metrics, accounting, and reporting
Finance and investment
Awareness
Life cycle approach
Resource efficiency and decoupling
Employment
Education and skills
Governance and partnership
Integrated policy and decision-making
Finance and investing
Green growth
Approximately 57% of businesses responding to a survey are investing in energy efficiency, 64% in reducing and recycling trash, and 32% in new, less polluting industries and technologies. Roughly 40% of businesses made investments in energy efficiency in 2021.
Ecological measurements
Measuring economic output and progress is done through the use of economic index indicators. Green indices emerged from the need to measure human ecological impact, efficiency sectors like transport, energy, buildings and tourism, as well as the investment flows targeted to areas like renewable energy and cleantech innovation.
2016 - 2022 Green Score City Index is an ongoing study measuring the anthropogenic impact human activity has on nature.
2010 - 2018 Global Green Economy Index™ (GGEI), published by the consultancy Dual Citizen LLC, is in its 6th edition. It measures green economic performance and perceptions of it in 130 countries along four main dimensions: leadership & climate change, efficiency sectors, markets & investment, and the environment.
2009 - 2013 Circles of Sustainability project scored 5 cities in 5 separate countries.
2009 - 2012 Green City Index A global study commissioned by Siemens
Ecological footprint measurements are a way to gauge anthropogenic impact and are another standard used by municipal governments.
Green energy issues
Green economies require a transition to green energy generation based on renewable energy to replace fossil fuels as well as energy conservation and efficient energy use. Renewables, like solar energy and wind energy, may eliminate the use of fossil fuels for electricity by 2035 and replace fossil fuel usage altogether by 2050.
The market failure to respond to environmental protection and climate protection needs can be attributed to high external costs and high initial costs for research, development, and marketing of green energy sources and green products. The green economy may need government subsidies as market incentives to motivate firms to invest and produce green products and services. The German Renewable Energy Act, legislations of many other member states of the European Union and the American Recovery and Reinvestment Act of 2009, all provide such market incentives. However, other experts argue that green strategies can be highly profitable for corporations that understand the business case for sustainability and can market green products and services beyond the traditional green consumer.
In the United States, it seemed as though the nuclear industry was coming to an end by the mid-1990s. Until 2013, there had been no new nuclear power facilities built since 1977. One reason was the economic reliance on fossil fuel-based energy sources. Additionally, there was public fear of nuclear energy due to the Three Mile Island accident and the Chernobyl disaster. The Bush administration passed the 2005 Energy Bill, which granted the nuclear industry around 10 million dollars to encourage research and development efforts. With the increasing threat of climate change, nuclear energy has been highlighted as an option to help decarbonize the atmosphere and reverse climate change. Nuclear power forces environmentalists and citizens around the world to weigh the pros and cons of using nuclear power as a renewable energy source. The controversial nature of nuclear power has the potential to split the green economy movement into two branches: anti-nuclear and pro-nuclear.
According to a European climate survey, 63% of EU residents, 59% of Britons, 50% of Americans and 60% of Chinese respondents are in favor of switching to renewable energy. As of 2021, 18% of Americans are in favor of natural gas as a source of energy. For Britons and EU citizens nuclear energy is a more popular energy alternative.
After the COVID-19 pandemic, Eastern European and Central Asian businesses fall behind their Southern European counterparts in terms of the average quality of their green management practices, notably in terms of specified energy consumption and emissions objectives.
External variables, such as consumer pressure and energy taxes, are more relevant than firm-level features, such as size and age, in influencing the quality of green management practices. Firms with less financial limitations and stronger green management practices are more likely to invest in a bigger variety of green initiatives. Energy efficiency investments are good to both the bottom line and the environment.
The shift to greener energy and the adoption of more climate regulations are expected to have a 30% positive impact on businesses, mostly through new business prospects, and a 30% negative impact, according to businesses that took part in a survey in 2022. A little over 40% of the same businesses do not anticipate that the transition to greener alternatives will alter their operations.
Criticism
A number of organisations and individuals have criticised aspects of the 'Green Economy', particularly the mainstream conceptions of it based on using price mechanisms to protect nature, arguing that this will extend corporate control into new areas from forestry to water. Venezuelan professor Edgardo Lander says that the UNEP's report, Towards a Green Economy, while well-intentioned "ignores the fact that the capacity of existing political systems to establish regulations and restrictions to the free operation of the markets – even when a large majority of the population call for them – is seriously limited by the political and financial power of the corporations."
Ulrich Hoffmann, in a paper for UNCTAD, also says that the focus on Green Economy and "green growth" in particular, "based on an evolutionary (and often reductionist) approach will not be sufficient to cope with the complexities of climate change" and "may rather give much false hope and excuses to do nothing really fundamental that can bring about a U-turn of global greenhouse gas emissions". Clive Spash, an ecological economist, has criticised the use of economic growth to address environmental losses, and argued that the Green Economy, as advocated by the UN, is not a new approach at all and is actually a diversion from the real drivers of environmental crisis. He has also criticised the UN's project on the economics of ecosystems and biodiversity (TEEB), and the basis for valuing ecosystems services in monetary terms.
See also
References
External links
Green Growth Knowledge Platform
Green Economy Coalition
UNEP – Green Economy
Schools of economic thought
Industrial ecology
Natural resources
Resource economics
Economy by field | Green economy | [
"Chemistry",
"Engineering"
] | 2,415 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
996,739 | https://en.wikipedia.org/wiki/Frances%20Power%20Cobbe | Frances Power Cobbe (4 December 1822 – 5 April 1904) was an Anglo-Irish writer, philosopher, religious thinker, social reformer, anti-vivisection activist and leading women's suffrage campaigner. She founded a number of animal advocacy groups, including the National Anti-Vivisection Society (NAVS) in 1875 and the British Union for the Abolition of Vivisection (BUAV) in 1898, and was a member of the executive council of the London National Society for Women's Suffrage.
Life
Frances Power Cobbe was a member of the prominent Cobbe family, descended from Archbishop Charles Cobbe, Primate of Ireland. She was born in Newbridge House in the family estate in present-day Donabate, County Dublin.
Cobbe was educated mainly at home by governesses with a brief period at a school in Brighton. She studied English literature, French, German, Italian, music, and the Bible. She then read heavily in the family library especially in religion and theology, joined several subscription libraries, and studied Greek and geometry with a local clergyman. She organised her own study schedule and ended up very well educated.
In the late 1830s Cobbe went through a crisis of faith. The humane theology of Theodore Parker, an American transcendentalist and abolitionist, restored her faith (she went on later to edit Parker's collected writings). She began to set out her ideas in what became an Essay on True Religion. Her father disapproved and for a while expelled her from the home. She kept studying and writing anyway and eventually revised the Essay into her first book, the Essay on Intuitive Morals. The first volume came out anonymously in 1855.
In 1857 Cobbe's father died and left her an annuity. She took the chance to travel on her own around parts of Europe and the Near East. This took her to Italy, where she met a community of similarly independent women: Isa Blagden, with whom she went on briefly to share a house; the sculptor Harriet Hosmer; the poet Elizabeth Barrett Browning; the painter Rosa Bonheur; the scientist Mary Somerville; and the Welsh sculptor who became her partner, Mary Lloyd. In letters and published writing, Cobbe referred to Lloyd alternately as "husband," "wife," and "dear friend." Cobbe also formed a lasting attachment to Italy and went there regularly. She contributed many newspaper and journal articles on Italy, some of which became her 1864 book Italics.
Returning to England Cobbe tried working at the Red Lodge Reformatory and living with the owner, Mary Carpenter, from 1858 to 1859. The turbulent relationship between the two meant that Cobbe left the school and moved out.
Cobbe now focused on writing and began to publish her first articles in Victorian periodicals. She quickly became very successful and was able to support herself by writing. She and Lloyd began to live together in London.
Cobbe kept up a steady stream of journal essays, many of them reissued as books. She became a leader writer for the London newspaper The Echo. Cobbe became involved in feminist campaigns for the vote, for women to be admitted to study at university on the same terms as men, and for married women's property rights. She was on the executive council of the London National Society for Women's Suffrage. Her 1878 essay Wife-Torture in England influenced the passage of the 1878 Matrimonial Causes Act, which gave the wives of violent husbands the right to a legal separation.
Cobbe became very concerned about the rise of animal experimentation or vivisection and founded the Victoria Street Society, which later became the National Anti-Vivisection Society, in 1875. The organisation campaigned for laws to regulate vivisection. She and her allies had already prepared a draft bill, Henniker's Bill, presented to parliament in 1875. They proposed regular inspections of licensed premises and that experimenters must always use anaesthetics except under time-limited personal licences. In response Charles Darwin, Thomas Henry Huxley, John Burdon Sanderson and others drafted a rival Playfair's Bill which proposed a lighter system of regulation. Ultimately the Cruelty to Animals Act, 1876 introduced a compromise system. Cobbe found it so watered-down that she gave up on regulation and began to campaign for the abolition of vivisection. The anti-vivisection movement became split between the abolitionists and the moderates. Cobbe later came to think the Victoria Street Society had become too moderate and started the British Union for the Abolition of Vivisection in 1898.
In 1884, Cobbe and Lloyd retired to Hengwrt in Wales. Cobbe stayed there after Lloyd died in 1896. Cobbe continued to publish and campaign right up until her death. However her friend, the writer Blanche Atkinson, wrote, "The sorrow of Miss Lloyd's death changed the whole aspect of existence for Miss Cobbe. The joy of life had gone. It had been such a friendship as is rarely seen – perfect in love, sympathy, and mutual understanding." They are buried together at Saint Illtyd Church Cemetery, Llanelltyd, Gwynedd, Wales.
In her will, Cobbe bequeathed all the copyrights of her works to Atkinson.
Thought and ideas
In Cobbe's first book An Essay on Intuitive Morals, vol. 1, she combined Kantian ethics, theism, and intuitionism. She had encountered Kant in the early 1850s. She argued that the key concept in ethics is duty, that duties presuppose a moral law, and a moral law presupposes an absolute moral legislator - God. She argued that we know by intuition what the law requires us to do. We can trust our intuition because it is "God's tuition". We can do what the law requires because we have noumenal selves as well as being in the world of phenomena. She rejected eudaimonism and utilitarianism.
Cobbe applied her moral theory to animal rights, first in The Rights of Man and the Claims of Brutes from 1863. She argued that humans may do harm to animals in order to satisfy real wants but not from mere "wantonness". For example, humans may eat meat but not kill birds for feathers to decorate hats. The harm or pain inflicted must be the minimum possible. For Cobbe this set limits to vivisection; for example, it must always be done under anaesthesia.
Cobbe engaged with Darwinism. She had met the Darwin family in 1868. Emma Darwin liked her, saying "Miss Cobbe was very agreeable." Cobbe persuaded Charles Darwin to read Immanuel Kant's Metaphysics of Morals. Darwin had a review copy of Descent of Man sent to her (as well as to Alfred Russel Wallace and St. George Jackson Mivart). This led to her critique of Darwin, Darwinism in Morals, in The Theological Review in April 1871. Cobbe thought morality could not be explained by evolution and needed reference to God. Darwin could show why we do feel sympathy for others, but not why we ought to feel it.
However, the debate with Darwin led Cobbe to revise her views about duties to animals. She started to think that sympathy was central and we must above all treat animals in ways that show sympathy for them. Vivisection violated this. She also introduced a distinction between sympathy and what she called heteropathy, similar to hostility or cruelty. She thought we naturally have cruel instincts that found an outlet in vivisection. Religion in contrast cultivated sympathy, but science was undermining it. This became part of a wide-ranging account of the direction of European civilisation.
These were just some of the huge range of philosophical topics on which Cobbe wrote. They included aesthetics, philosophy of mind, philosophy of religion, history, pessimism, life after death, and many more. Her books included The Pursuits of Women (1863), Essays New and Old on Ethical and Social Subjects (1865), Darwinism in Morals, and other Essays (1872), The Hopes of the Human Race (1874), The Duties of Women (1881), The Peak in Darien, with some other Inquiries touching concerns of the Soul and the Body (1882), The Scientific Spirit of the Age (1888) and The Modern Rack: Papers on Vivisection (1889), as well as her autobiography.
Legacy
In the late nineteenth century Cobbe was very well known for her philosophical views. For example, Margaret Oliphant in The Victorian Age of English Literature, when discussing philosophy, said "There are few ladies to be found among these ranks, but the name of Miss Frances Power Cobbe may be mentioned as that of a clear writer and profound thinker".
A portrait of her is included in a mural by Walter P. Starmer unveiled in 1921 in the church of St Jude-on-the-Hill in Hampstead Garden Suburb, London.
Her name and picture (and those of 58 other women's suffrage supporters) are on the plinth of the statue of Millicent Fawcett in Parliament Square, London, unveiled in 2018.
Her name is listed (as F. Power Cobbe) on the Reformers’ Memorial in Kensal Green Cemetery in London.
The Animal Theology professorship at the Graduate Theological Foundation is named after Cobbe.
Her philosophical contribution is now being rediscovered as part of the recovery of women in the history of philosophy.
Bibliography
An Essay on Intuitive Morals: Theory of Morals, 1855
Essays on the pursuits of Woman, 1863
The red flag in John bull's eyes, 1863
The cities of the past, 1864
Broken Lights: an Inquiry into the Present Condition and Future Prospects of Religious Faith, 1864
Religious duty, 1864
The confessions of a lost Dog, 1867
Dawning Lights : an Inquiry Concerning the Secular Results of the New Reformation, 1867
Criminals, Idiots, Women, and Minors, 1869
Alone to the Alone: Prayers for Theists, 1871
Darwinism in Morals, and Other Essays, 1872
The Hopes of the Human Race, 1874
The Moral Aspects of Vivisection, 1875
The Age of Science: A Newspaper of the Twentieth Century, 1877
The Duties of Women, 1881
The Peak in Darien, 1882
Life of Frances Power Cobbe as told by herself. Vol. I; Vol. II, 1894
See also
Brown Dog affair
Lizzy Lind af Hageby
Caroline Earle White
List of animal rights advocates
Women and animal advocacy
References
Further reading
Frances Power Cobbe, The Modern Rack: Papers on Vivisection. London: Swan Sonnenschein, 1889.
Buettinger, Craig. "Women and antivivisection in late nineteenth century America", Journal of Social History, Vol. 30, No. 4 (Summer, 1997), pp. 857–872.
Caine, Barbara. Victorian feminists. Oxford 1992
Hamilton, Susan. Frances Power Cobbe and Victorian Feminism. Palgrave Macmillan, 2006.
Mitchell, Sally. Frances Power Cobbe: Victorian Feminist, Journalist, Reformer. University of Virginia Press, 2004.
Rakow, Lana and Kramarae, Cheris. The Revolution in Words: Women's Source Library. London, Routledge 2003
Stone, Alison. Entries on Cobbe's philosophical thought, Encyclopedia of Concise Concepts by Women in Philosophy Encyclopedia of Concise Concepts by Women Philosophers - History Of Women Philosophers
Stone, Alison (2022). Frances Power Cobbe. Cambridge University Press.
Lori Williamson, Power and protest: Frances Power Cobbe and Victorian society. 2005. A 320-page biography.
Victorian feminist, social reformer and anti-vivisectionist, discussion on BBC Radio 4's Woman's Hour, 27 June 2005
State University of New York – Frances Power Cobbe (1822–1904)
The archives of the British Union for the Abolition of Vivisection (ref U DBV) are held at the Hull History Centre. Details of holdings are on its online catalogue.
External links
Frances Power Cobbe archives at the National Library of Wales
1822 births
1904 deaths
British anti-vivisectionists
Feminist writers
Irish animal rights activists
Irish feminists
Irish non-fiction writers
Irish women non-fiction writers
Irish suffragists
LGBTQ feminists
LGBTQ philosophers
Irish lesbian writers
Non-Darwinian evolution
People from Fingal
Women of the Victorian era
Irish women writers
British social reformers
British women philosophers
British philosophers
Irish women's rights activists
19th-century Irish women writers
Irish women philosophers
19th-century Irish philosophers
19th-century British women writers
Irish anti-vivisectionists
Activists from Fingal | Frances Power Cobbe | [
"Biology"
] | 2,566 | [
"Non-Darwinian evolution",
"Biology theories"
] |
996,772 | https://en.wikipedia.org/wiki/George%20Combe | George Combe (21 October 1788 – 14 August 1858) was a Scottish lawyer and a spokesman of the phrenological movement for over 20 years. He founded the Edinburgh Phrenological Society in 1820 and wrote The Constitution of Man (1828). After marriage in 1833, Combe devoted his later years to promoting phrenology internationally.
Early life
George Combe was born at Livingston's Yards, Edinburgh, the son of Marion (née Newton, died 1819) and George Combe, a prosperous brewer in the city. His younger brother was the physician Andrew Combe. After attending the High School of Edinburgh, he studied law at the University of Edinburgh, entered a lawyer's office in 1804, and in 1812 began a solicitor's practice at 11 Bank Street.
In 1820 Combe moved his office to Mylnes Court on the Royal Mile and moved house to 8 Hermitage Place in Stockbridge. In 1825 he moved with Andrew to 2 Brown Square off the Grassmarket. The Combe brothers lived together in a large dwelling at 25 Northumberland Street in the New Town from 1829.
Phrenological Society
In 1815, the Edinburgh Review contained an article on the system of "craniology" devised by Franz Joseph Gall and Johann Gaspar Spurzheim, denouncing it as "a piece of thorough quackery from beginning to end". When Spurzheim came to Edinburgh in 1816, Combe was invited to a friend's house, where he watched Spurzheim dissect a human brain. Impressed by the demonstration, he attended a second series of Spurzheim's lectures. On investigating the subject for himself, he became satisfied that the fundamental principles of phrenology were true: "that the brain is the organ of mind; that the brain is an aggregate of several parts, each subserving a distinct mental faculty; and that the size of the cerebral organ is, caeteris paribus, an index of power or energy of function."
His first essay on phrenology was published in The Scots Magazine in 1817 and was followed by a series of papers in the Literary and Statistical Magazine. The writings were collected and published in book form in 1819 as Essays on Phrenology and, in later editions, as A System of Phrenology. In 1820, Combe helped to found the Phrenological Society of Edinburgh, which in 1823 established a Phrenological Journal. His lectures and writings also drew attention to phrenology in Europe and the United States.
Debate with Hamilton
Combe began to lecture at Edinburgh in 1822. He published a Manual, Elements of Phrenology, in June 1824. He took private tuition in elocution; contemporaries described him as clever and opinionated. Combe's discussions had an air of confidentiality and theatrical urgency. Converts came in, societies sprang up and controversies began.
A second edition of Elements, 1825, was attacked by Francis Jeffrey in the Edinburgh Review of September 1825. Combe replied in a pamphlet and the journal. The phrenologists were attacked again in 1826 and 1827 by Sir William Hamilton in addresses to the Royal Society of Edinburgh. The sharp controversy included challenges to public disputes and mutual charges of misrepresentation, in which Spurzheim took part. The correspondence appeared in the fourth and fifth volumes of the Phrenological Journal.
Social interests: schools, prisons and asylums
In 1836, Combe stood for the chair of Logic at the University of Edinburgh against two other candidates: Sir William Hamilton and Isaac Taylor. Hamilton won by 18 votes against 14 for Taylor. In 1838 Combe visited the United States to study the treatment of criminals there. He initiated a programme of public education in chemistry, physiology, history and moral philosophy.
Combe sought to improve public education through a national, non-sectarian system. He helped to set up a school in Edinburgh run on the principles of William Ellis, and did some teaching there in phrenology and physiology. It was prompted by the London Birkbeck School, which had opened on 17 July 1848. Combe was strongly behind the view that the state should be involved in the education system. In this he was backed by William Jolly, an inspector of schools, and noted by Frank Pierrepont Graves.
Combe was much concerned about prison reform. He and William A. F. Browne opened a debate on introducing humane treatment of psychiatric patients in publicly funded asylums.
Later life
John Ramsay L'Amy, son of James L'Amy, trained under Combe at his offices at 25 Northumberland Street in Edinburgh's New Town.
In 1842, Combe gave a course of 22 lectures on phrenology at the Ruprecht Karl University of Heidelberg and travelled about Europe enquiring into management of schools, prisons and asylums.
On retiring, Combe took a substantial terraced townhouse, 45 Melville Street, in Edinburgh's West End. He was revising the 9th edition of the Constitution of Man when he died at Moor Park, Farnham in August 1858. He lies under a simple headstone in the Dean Cemetery, Edinburgh, against the north wall of the original section. His wife Cecilia Siddons is buried with him.
Works
In 1817, Combe's first essay on phrenology in The Scots Magazine was followed by a series of papers on the subject in the Literary and Statistical Magazine. These appeared in book form in 1819 as Essays on Phrenology, entitled A System of Phrenology in later editions.
Combe's most popular work, The Constitution of Man, appeared in 1828 but was widely denounced as materialist and atheist. He argued in it: "Mental qualities are determined by the size, form and constitution of the brain; and these are transmitted by hereditary descent."
Combe was one of an active Edinburgh scene of people thinking about the nature of heredity and its possible malleability, as Lamarck proposed. Combe himself was no Lamarckian, but in the decades before Darwin's Origin of Species was published, the Constitution was probably the single most important vehicle for disseminating naturalistic progressivism in the English-speaking world.
Combe's 1838 Answers to the Objections Urged Against Phrenology was followed in 1840 by Moral Philosophy and in 1841 by Notes on the United States of North America. Phrenology Applied to Painting and Sculpture ensued in 1855. The culmination of Combe's autobiographical philosophy appeared in "On the Relation between Science and Religion", first publicly issued in 1857. Combe moved into the economic arena with a pamphlet on The Currency Question (1858). A fuller phrenological approach to political economy was set out later by William Ballantyne Hodgson.
Family
In 1833, Combe married Cecilia Siddons, daughter of the actress Sarah Siddons and sister of Henry Siddons, author of Practical Illustrations of Rhetorical Gesture and Action (1807). She brought him a fortune and a happy, though childless marriage, preceded by a phrenological check for compatibility. A few years later, he retired from the law in comfortable circumstances.
Bibliography
George Combe (1828), The Constitution of Man Considered in Relation to External Objects. J. Anderson jun. (reissued by Cambridge University Press, 2009; )
George Combe (1830), A System of Phrenology Edinburgh: J Anderson. Full Text Available at archive.org
George Combe (1857), On the Relation Between Science and Religion. Maclachlan and Stewart (reissued by Cambridge University Press, 2009; )
Notes
Attribution:
External links
Articles on Phrenological practice by George Combe, Andrew Combe, and other early Phrenologists.
1788 births
1858 deaths
Scientists from Edinburgh
Phrenology
Phrenologists
Scottish non-fiction writers
Mental health professionals
Burials at the Dean Cemetery
Mental health activists
Alumni of the University of Edinburgh School of Law
Kemble family | George Combe | [
"Biology"
] | 1,596 | [
"Phrenology",
"Biology theories",
"Obsolete biology theories"
] |
996,828 | https://en.wikipedia.org/wiki/Orbiting%20body | In astrodynamics, an orbiting body is any physical body that orbits a more massive one, called the primary body. The orbiting body is properly referred to as the secondary body (m2), which is less massive than the primary body (m1).
Thus, m2 < m1, or equivalently m1 > m2.
Under standard assumptions in astrodynamics, the barycenter of the two bodies is a focus of both orbits.
An orbiting body may be a spacecraft (i.e. an artificial satellite) or a natural satellite, such as a planet, dwarf planet, moon, moonlet, asteroid, or comet.
A system of two orbiting bodies is modeled by the two-body problem and a system of three orbiting bodies by the three-body problem. These problems can be generalized to an N-body problem. While the n-body problem has analytical solutions only in a few special cases, it can be reduced to a two-body system as long as the secondary body stays out of other bodies' spheres of influence and remains within the primary body's sphere of influence.
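To illustrate when this two-body reduction is reasonable, the patched-conic method compares a spacecraft's distance from each body with that body's sphere of influence. Below is a minimal sketch in Python; the formula r_SOI ≈ a·(m/M)^(2/5) is the standard sphere-of-influence approximation, and the Earth–Sun numbers are rounded illustrative values.

```python
def soi_radius(a, m_body, m_primary):
    """Approximate sphere-of-influence radius of a body of mass m_body
    orbiting a primary of mass m_primary at semi-major axis a,
    using the standard approximation r_SOI ~ a * (m / M)**(2/5)."""
    return a * (m_body / m_primary) ** (2.0 / 5.0)

# Rounded values for the Earth orbiting the Sun.
AU = 1.496e11        # semi-major axis of Earth's orbit, meters
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

r_soi = soi_radius(AU, M_EARTH, M_SUN)
print(f"Earth's sphere of influence: ~{r_soi / 1e9:.2f} million km")
# ~0.92 million km: inside this radius, a spacecraft's motion is well
# approximated by a two-body problem with the Earth as the primary.
```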
See also
Barycenter
Double planet
Primary (astronomy)
Satellite
Two-body problem
Three-body problem
N-body problem
References
Orbits
Astrodynamics
Physical objects | Orbiting body | [
"Physics",
"Astronomy",
"Engineering"
] | 244 | [
"Astrodynamics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Physical objects",
"Aerospace engineering",
"Matter"
] |
996,886 | https://en.wikipedia.org/wiki/NGC%203603 | NGC 3603 is a nebula situated in the Carina–Sagittarius Arm of the Milky Way around 20,000 light-years away from the Solar System. It is a massive H II region containing a very compact open cluster (probably a super star cluster) HD 97950.
Observations
NGC 3603 was observed by John Herschel on 14 March 1834 during his visit to South Africa, who remarked that it was "a very remarkable object...perhaps a globular cluster". Herschel catalogued it as nebula 3334 in his Results of Astronomical Observations made at the Cape of Good Hope, published in 1847. In 1864 the Royal Society published his General Catalogue of Nebulae and Clusters, where he listed it as number 2354. It was subsequently incorporated by J. L. E. Dreyer into the New General Catalogue as NGC 3603.
The central cluster was catalogued as the star HD 97950, but has long been recognised as nebulous or multiple. It was also noted for having an unusual emission spectrum and the spectral type was given as Oe in the Henry Draper Catalogue. This was later refined to WN5 + O as the emission was recognised as characteristic of a Wolf–Rayet star. Eventually, the cluster would be resolved and found to contain three of the most massive and most luminous stars known, as well as a number of luminous O class stars and many fainter stars.
Features
NGC 3603 is the most massive visible cloud of glowing gas and plasma, known as a H II region, in the Milky Way. The central star cluster is the densest concentration of very massive stars known in the galaxy. Strong ultraviolet radiation and stellar winds have cleared the gas and dust, giving an unobscured view of the cluster.
Three prominent Wolf–Rayet stars have been detected within the cluster, all originally unresolved and known as the single star HD 97950. The brightest of the three, HD 97950A1 (or NGC 3603-A1), is actually a pair of Wolf–Rayet stars that orbit around each other once every 3.77 days. The primary has an estimated mass of 120 solar masses, while its companion is around 92 solar masses. The star designated HD 97950B is a single star more massive and more luminous than either of the individual members of HD 97950A1. It is 2,880,000 times as luminous as the Sun and 132 times as massive.
NGC 3603 is visible through a telescope as a small, rather insignificant nebulosity with a yellowish tinge due to the effects of interstellar absorption. In the mid-1960s, optical studies combined with radio astronomical observations showed it to be an extremely strong thermal radio source. Later observations of other galaxies introduced the concept of starburst regions, in some cases whole galaxies, of extremely rapid star formation. NGC 3603 is now considered to be such a region, and it has been compared by some authors to the larger cluster 30 Doradus, in the Large Magellanic Cloud.
Sher 25, the B class supergiant on the outskirts of NGC 3603, is surrounded by ejected material in an hourglass shape similar to that found for the supernova 1987A, and this has aroused intense interest in the future evolution of stars such as Sher 25.
Two of the most luminous young stars known are found within NGC 3603, but outside the central cluster. WR 42e and NGC 3603 MTT 58 both have a spectral type of O2If*/WN6 indicating an extremely massive young star. WR 42e is a possible runaway from a three-body encounter, while MTT 58 appears to still be embedded within its parental cocoon and is in a possible binary with an O3If star.
References
External links
Hubble Space Telescope: Star Cluster Bursts into Life in New Hubble Image
European Southern Observatory: The Stars behind the Curtain
Carina (constellation)
H II regions
3603
Star-forming regions | NGC 3603 | [
"Astronomy"
] | 801 | [
"Carina (constellation)",
"Constellations"
] |
996,933 | https://en.wikipedia.org/wiki/Style%20sheet%20%28web%20development%29 | A web style sheet is a form of separation of content and presentation for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style). Instead, the style is defined in an external style sheet file using a style sheet language such as CSS or XSLT. This design approach is identified as a "separation" because it largely supersedes the antecedent methodology in which a page's markup defined both style and structure.
The philosophy underlying this methodology is a specific case of separation of concerns.
Benefits
Separation of style and content has advantages, but has only become practical after improvements in popular web browsers' CSS implementations.
Speed
Overall, a user's experience of a site that uses style sheets will generally be faster than of one that does not. "Overall" because the first page will probably load more slowly, since both the style sheet and the content need to be transferred. Subsequent pages will load faster because no style information needs to be downloaded again – the CSS file will already be in the browser's cache.
Maintainability
Holding all the presentation styles in one file can reduce maintenance time and the chance of error, thereby improving presentation consistency. For example, the font color associated with a type of text element may be specified – and therefore easily modified – throughout an entire website simply by changing one short string of characters in a single file. The alternative approach, using styles embedded in each individual page, would require a cumbersome, time-consuming, and error-prone edit of every file.
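For instance, a single rule in a shared style sheet can control the colour of every heading on a site, so one edit restyles every page that links the sheet. A minimal sketch (the file name and selector choices are illustrative only):

```css
/* site.css — referenced from every page with
   <link rel="stylesheet" href="site.css"> */
h1, h2, h3 {
  color: #8b0000;              /* edit this one value to recolor all headings */
  font-family: Georgia, serif;
}
```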
Accessibility
Sites that use CSS with either XHTML or HTML are easier to tweak so that they appear similar in different browsers (Chrome, Internet Explorer, Mozilla Firefox, Opera, Safari, etc.).
Sites using CSS "degrade gracefully" in browsers unable to display graphical content, such as Lynx, or those so very old that they cannot use CSS. Browsers ignore CSS that they do not understand, such as CSS 3 statements. This enables a wide variety of user agents to be able to access the content of a site even if they cannot render the style sheet or are not designed with graphical capability in mind. For example, a browser using a refreshable braille display for output could disregard layout information entirely, and the user would still have access to all page content.
Customization
If a page's layout information is stored externally, a user can decide to disable the layout information entirely, leaving the site's bare content still in a readable form. Site authors may also offer multiple style sheets, which can be used to completely change the appearance of the site without altering any of its content.
Most modern web browsers also allow the user to define their own style sheet, which can include rules that override the author's layout rules. This allows users, for example, to bold every hyperlink on every page they visit. Browser extensions like Stylish and Stylus have been created to facilitate management of such user style sheets.
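For instance, the bold-hyperlink customization mentioned above needs only one rule in the user's style sheet; the !important flag is what lets a user declaration take precedence over the author's styles in the CSS cascade:

```css
/* user style sheet, applied by the browser on top of author styles */
a {
  font-weight: bold !important;  /* embolden every hyperlink on every site */
}
```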
Consistency
Because the semantic file contains only the meanings an author intends to convey, the styling of the various elements of the document's content is very consistent. For example, headings, emphasized text, lists and mathematical expressions all receive consistently applied style properties from the external style sheet. Authors need not concern themselves with the style properties at the time of composition. These presentational details can be deferred until the moment of presentation.
Portability
The deferment of presentational details until the time of presentation means that a document can be easily re-purposed for an entirely different presentation medium with merely the application of a new style sheet already prepared for the new medium and consistent with elemental or structural vocabulary of the semantic document. A carefully authored document for a web page can easily be printed to a hard-bound volume complete with headers and footers, page numbers and a generated table of contents simply by applying a new style sheet.
Practical disadvantages today
As of 2006, specifications (for example, XHTML, XSL, CSS) and the software tools implementing these specifications are only reaching the early stages of maturity. So there are some practical issues facing authors who seek to embrace this method of separating content and style.
Narrow adoption without the parsing and generation tools
While the style specifications are quite mature and still maturing, the software tools have been slow to adapt. Most of the major web development tools still embrace a mixed presentation-content model. So authors and designers looking for GUI based tools for their work find it difficult to follow the semantic web method. In addition to GUI tools, shared repositories for generalized style sheets would probably aid adoption of these methods.
See also
Separation of concerns
References
External links
CSS Zen Garden: A site which challenges designers to create new page layouts without touching the XHTML source. Includes dozens of layouts. CSS source can be viewed for every layout.
Web development | Style sheet (web development) | [
"Engineering"
] | 1,032 | [
"Software engineering",
"Web development"
] |
996,950 | https://en.wikipedia.org/wiki/Starburst%20region | A starburst region is a region of space that is undergoing a large amount of star formation. A starburst is an astrophysical process that involves star formation occurring at a rate that is large compared to the rate that is typically observed. This starburst activity will consume the available interstellar gas supply over a timespan that is much shorter than the lifetime of the galaxy. For example, the nebula NGC 6334 has a star formation rate estimated to be 3600 solar masses per million years compared to the star formation rate of the entire Milky Way of about seven million solar masses per million years. Due to the high amount of star formation a starburst is usually accompanied by much higher gas pressure and a larger ratio of hydrogen cyanide to carbon monoxide emission-lines than are usually observed.
Starbursts can occur in entire galaxies or just regions of space. For example, the Tarantula Nebula is a nebula in the Large Magellanic Cloud which has one of the highest star formation rates in the Local Group. By contrast, a starburst galaxy is an entire galaxy that is experiencing a very high star formation rate. One notable example is Messier 82, in which the gas pressure is 100 times greater than in the local neighborhood and which is forming stars at about the same rate as the entire Milky Way in a region only about 600 parsecs across. At this rate M82 will consume its 200 million solar masses of atomic and molecular hydrogen in 100 million years (its free-fall time).
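The 100-million-year figure is just the gas reservoir divided by the star formation rate; a quick check in Python (taking a rate of about 2 solar masses per year, the round value implied by the quoted depletion time — published estimates of the Milky Way's rate vary):

```python
# Gas depletion time: how long a starburst can sustain its current rate.
gas_mass = 2.0e8        # solar masses of atomic + molecular hydrogen in M82
sfr = 2.0               # assumed star formation rate, solar masses per year

depletion_time = gas_mass / sfr
print(f"Depletion time: {depletion_time / 1e6:.0f} million years")  # 100
```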
Starburst regions can occur in different shapes; for example, in Messier 94 the inner ring is a starburst region. Messier 82 has a starburst core of about 600 parsecs in diameter. Starbursts are common during galaxy mergers, such as in the Antennae Galaxies. In the case of mergers, the starburst can be either local or galaxy-wide, depending on the galaxies and how they are merging.
See also
References
Stellar astronomy | Starburst region | [
"Astronomy"
] | 399 | [
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
996,955 | https://en.wikipedia.org/wiki/Characteristic%20energy | In astrodynamics, the characteristic energy (C3) is a measure of the excess specific energy over that required to just barely escape from a massive body. The units are length²·time⁻², i.e. velocity squared, or energy per unit mass.
Every object in a 2-body ballistic trajectory has a constant specific orbital energy ε equal to the sum of its specific kinetic and specific potential energy:
ε = v²/2 − μ/r = constant = C3/2,
where μ = GM is the standard gravitational parameter of the massive body with mass M, and r is the radial distance from its center. As an object in an escape trajectory moves outward, its kinetic energy decreases as its potential energy (which is always negative) increases, maintaining a constant sum.
Note that C3 is twice the specific orbital energy of the escaping object.
Non-escape trajectory
A spacecraft with insufficient energy to escape will remain in a closed orbit (unless it intersects the central body), with
C3 = −μ/a < 0,
where
μ is the standard gravitational parameter,
a is the semi-major axis of the orbit's ellipse.
If the orbit is circular, of radius r, then C3 = −μ/r.
Parabolic trajectory
A spacecraft leaving the central body on a parabolic trajectory has exactly the energy needed to escape and no more:
C3 = 0.
Hyperbolic trajectory
A spacecraft that is leaving the central body on a hyperbolic trajectory has more than enough energy to escape:
C3 = μ/|a| > 0,
where
μ is the standard gravitational parameter,
a is the semi-major axis of the orbit's hyperbola (which may be negative in some convention, in which case C3 = −μ/a covers all conic orbits).
Also,
C3 = v∞²,
where v∞ is the asymptotic velocity at infinite distance. The spacecraft's velocity approaches v∞ as it moves further away from the central object's gravity.
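The relations above can be sketched in a few lines of Python: given a speed and a radial distance (plus the primary's gravitational parameter), compute C3 and classify the trajectory. The Earth parameters and the example state are rounded illustrative values.

```python
import math

MU_EARTH = 3.986e5  # km^3/s^2, Earth's standard gravitational parameter

def characteristic_energy(v, r, mu=MU_EARTH):
    """C3 = v^2 - 2*mu/r, i.e. twice the specific orbital energy."""
    return v**2 - 2.0 * mu / r

def classify(c3):
    if c3 < 0:
        return "closed (elliptical) orbit"
    if c3 == 0:
        return "parabolic escape trajectory"
    return f"hyperbolic escape, v_infinity = {math.sqrt(c3):.2f} km/s"

# A spacecraft at 200 km altitude (r ~ 6578 km) moving at 11.5 km/s:
c3 = characteristic_energy(11.5, 6578.0)
print(f"C3 = {c3:.2f} km^2/s^2 -> {classify(c3)}")  # C3 = 11.06, hyperbolic
```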
History of the notation
According to Chauncey Uphoff, the ultimate source of the notation C3 is Forest Ray Moulton's textbook An Introduction to Celestial Mechanics. In the second edition (1914) of this book, Moulton solves the problem of the motion of two bodies under an attractive gravitational force in chapter 5. After reducing the problem to the relative motion of the bodies in the plane, he defines the constant of the motion c₃ by the equation
ẋ² + ẏ² = 2k²M/r + c₃,
where M is the total mass of the two bodies and k² is Moulton's notation for the gravitational constant. He defines c₁, c₂, and c₄ to be other constants of the motion. The notation C3 probably became popularized via the JPL technical report TR-32-30 ("Design of Lunar and Interplanetary Ascent Trajectories", Victor C. Clarke, Jr., March 15, 1962), which used Moulton's terminology.
Examples
MAVEN, a Mars-bound spacecraft, was launched into a trajectory with a characteristic energy of 12.2 km²/s² with respect to the Earth. When simplified to a two-body problem, this would mean MAVEN escaped Earth on a hyperbolic trajectory, slowly decreasing its speed towards v∞ = √12.2 ≈ 3.5 km/s. However, since the Sun's gravitational field is much stronger than Earth's, the two-body solution is insufficient. The characteristic energy with respect to the Sun was negative, and MAVEN – instead of heading to infinity – entered an elliptical orbit around the Sun. But the maximal velocity on the new orbit could be approximated as 33.5 km/s by assuming that it reached practical "infinity" at 3.5 km/s and that such Earth-bound "infinity" also moves with Earth's orbital velocity of about 30 km/s.
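The numbers in this example follow directly from the relations above; a quick check in Python (taking the usual rounded 30 km/s for Earth's orbital speed, and simply adding the speeds, which assumes the departure asymptote is aligned with Earth's motion):

```python
import math

c3 = 12.2                  # km^2/s^2, MAVEN's characteristic energy wrt Earth
v_inf = math.sqrt(c3)      # hyperbolic excess speed wrt Earth
v_earth = 30.0             # km/s, Earth's heliocentric speed (rounded)

print(f"v_infinity ~ {v_inf:.1f} km/s")                    # ~3.5 km/s
print(f"heliocentric speed ~ {v_earth + v_inf:.1f} km/s")  # ~33.5 km/s
```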
The InSight mission to Mars launched with a C3 of 8.19 km²/s². The Parker Solar Probe (via Venus) plans a maximum C3 of 154 km²/s².
Typical ballistic C3 values (km²/s²) to get from Earth to various planets: Mars 8–16, Jupiter 80, Saturn or Uranus 147. Reaching Pluto (with its orbital inclination) needs about 160–164 km²/s².
See also
Specific orbital energy
Orbit
Parabolic trajectory
Hyperbolic trajectory
References
Footnotes
Astrodynamics
Orbits
Energy (physics) | Characteristic energy | [
"Physics",
"Mathematics",
"Engineering"
] | 819 | [
"Astrodynamics",
"Physical quantities",
"Quantity",
"Energy (physics)",
"Aerospace engineering",
"Wikipedia categories named after physical quantities"
] |
996,960 | https://en.wikipedia.org/wiki/Starburst%20%28symbol%29 | A starburst is graphic design or typographical element that resembles diverging rays of light or consists of a star-like image with rays emanating from it. One is notably used as the current logo of the American retailer Walmart.
In Unicode, there are various star and asterisk symbols. The ones most commonly associated with the idea of a starburst are the "sixteen pointed asterisk" U+273A (✺) and the "combining Cyrillic millions" character U+0489 ( ҉ ).
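As a brief illustration, both code points can be produced from their Unicode escapes (shown here in Python); since U+0489 is a combining character, it must follow a base character to render:

```python
print("\u273A")    # ✺ SIXTEEN POINTED ASTERISK
print("A\u0489")   # 'A' followed by the COMBINING CYRILLIC MILLIONS SIGN
```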
References
Visual motifs | Starburst (symbol) | [
"Mathematics"
] | 115 | [
"Symbols",
"Visual motifs"
] |
996,973 | https://en.wikipedia.org/wiki/Magnesium%20fluoride | Magnesium fluoride is an ionically bonded inorganic compound with the formula MgF2. The compound is a colorless to white crystalline salt and is transparent over a wide range of wavelengths, with commercial uses in optics, including in space telescopes. It occurs naturally as the rare mineral sellaite.
Production
Magnesium fluoride is prepared by treating magnesium oxide with sources of hydrogen fluoride such as ammonium bifluoride:
MgO + (NH4)HF2 → MgF2 + NH3 + H2O
Related metathesis reactions are also feasible.
Structure
The compound crystallizes as tetragonal birefringent crystals. The structure of magnesium fluoride is similar to that of rutile, featuring octahedral cations and 3-coordinate anions.
In the gas phase, monomeric molecules adopt a linear molecular geometry.
Uses
Optics
Magnesium fluoride is transparent over an extremely wide range of wavelengths. Windows, lenses, and prisms made of this material can be used over the entire range of wavelengths from 0.120 μm (vacuum ultraviolet) to 8.0 μm (infrared). High-quality, synthetic magnesium fluoride is one of two materials (the other being lithium fluoride) that will transmit in the vacuum ultraviolet range at 121 nm (Lyman alpha). Lower-grade magnesium fluoride is inferior to calcium fluoride in the infrared range.
Magnesium fluoride is tough and polishes well but is slightly birefringent and should therefore be cut with the optic axis perpendicular to the plane of the window or lens. Due to its suitable refractive index of 1.37, magnesium fluoride is commonly applied in thin layers to the surfaces of optical elements as an inexpensive anti-reflective coating. Its Verdet constant is 0.00810 arcmin·G⁻¹·cm⁻¹ at 632.8 nm.
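The benefit of that 1.37 refractive index can be illustrated with the textbook quarter-wave coating formula; this sketch is not from this article, and the glass index of 1.52 is an assumed typical value:

# Normal-incidence reflectance of a single quarter-wave MgF2 layer on glass,
# compared with bare glass (standard single-layer thin-film optics).
def reflectance_bare(n0, ns):
    return ((n0 - ns) / (n0 + ns)) ** 2

def reflectance_quarter_wave(n0, nc, ns):
    # A quarter-wave layer of index nc transforms the substrate index to nc**2/ns.
    return ((n0 * ns - nc**2) / (n0 * ns + nc**2)) ** 2

n_air, n_mgf2, n_glass = 1.00, 1.37, 1.52
print(f"bare glass : {reflectance_bare(n_air, n_glass):.1%}")                  # ~4.3%
print(f"MgF2-coated: {reflectance_quarter_wave(n_air, n_mgf2, n_glass):.1%}")  # ~1.1%

The coating does not reach zero reflectance because the ideal index for this glass would be √1.52 ≈ 1.23, but 1.37 is among the lowest indices available in a durable coating material.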
Safety
Chronic exposure to magnesium fluoride may affect the skeleton, kidneys, central nervous system, respiratory system, eyes and skin, and may cause or aggravate attacks of asthma.
References
External links
A java applet showing the effect of MgF2 on a lens
Infrared windows at Lawrence Berkeley National Laboratory
National Pollutant Inventory - Fluoride and compounds fact sheet
Crystran Data Crystran MSDS
Fluorides
Magnesium compounds
Alkaline earth metal halides
Optical materials | Magnesium fluoride | [
"Physics",
"Chemistry"
] | 473 | [
"Salts",
"Materials",
"Optical materials",
"Fluorides",
"Matter"
] |
997,021 | https://en.wikipedia.org/wiki/Asymptotic%20gain%20model | The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation:
G = G∞ · T / (1 + T) + G0 / (1 + T),
where T is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), G∞ is the asymptotic gain and G0 is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain.
Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem.
As follows directly from limiting cases of the gain expression, the asymptotic gain G∞ is simply the gain of the system when the return ratio approaches infinity:
G∞ = lim T→∞ G,
while the direct transmission term G0 is the gain of the system when the return ratio is zero:
G0 = lim T→0 G.
Advantages
This model is useful because it completely characterizes feedback amplifiers, including loading effects and the bilateral properties of amplifiers and feedback networks.
Often feedback amplifiers are designed such that the return ratio T is much greater than unity. In this case, and assuming the direct transmission term G0 is small (as it often is), the gain G of the system is approximately equal to the asymptotic gain G∞.
The asymptotic gain is (usually) only a function of passive elements in a circuit, and can often be found by inspection.
The feedback topology (series-series, series-shunt, etc.) need not be identified beforehand as the analysis is the same in all cases.
Implementation
Direct application of the model involves these steps:
Select a dependent source in the circuit.
Find the return ratio for that source.
Find the gain G∞ directly from the circuit by replacing the circuit with one corresponding to T = ∞.
Find the gain G0 directly from the circuit by replacing the circuit with one corresponding to T = 0.
Substitute the values for T, G∞ and G0 into the asymptotic gain formula.
These steps can be implemented directly in SPICE using the small-signal circuit of hand analysis. In this approach the dependent sources of the devices are readily accessed. In contrast, for experimental measurements using real devices or SPICE simulations using numerically generated device models with inaccessible dependent sources, evaluating the return ratio requires special methods.
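The final substitution step lends itself to a direct numeric check. Below is a minimal Python sketch (not from the source; the function name and the example values of T, G∞ and G0 are illustrative assumptions):

def asymptotic_gain(T, G_inf, G0=0.0):
    # Asymptotic gain relation: G = G_inf * T/(1+T) + G0 * 1/(1+T)
    return G_inf * T / (1.0 + T) + G0 / (1.0 + T)

# For a large return ratio the overall gain approaches the asymptotic gain:
print(asymptotic_gain(T=1e6, G_inf=-100.0, G0=0.5))   # ~ -100.0
# With the loop disabled (T = 0) only the direct transmission term remains:
print(asymptotic_gain(T=0.0, G_inf=-100.0, G0=0.5))   # 0.5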
Connection with classical feedback theory
Classical feedback theory neglects feedforward (G0). If feedforward is dropped, the gain from the asymptotic gain model becomes
G = G∞ · T / (1 + T),
while in classical feedback theory, in terms of the open loop gain A, the gain with feedback (closed loop gain) is:
G = A / (1 + βFB A).
Comparison of the two expressions indicates the feedback factor βFB is:
βFB = 1 / G∞,
while the open-loop gain is:
A = G∞ · T.
If the accuracy is adequate (usually it is), these formulas suggest an alternative evaluation of T: evaluate the open-loop gain and G∞ and use these expressions to find T. Often these two evaluations are easier than evaluation of T directly.
Examples
The steps in deriving the gain using the asymptotic gain formula are outlined below for two negative feedback amplifiers. The single transistor example shows how the method works in principle for a transconductance amplifier, while the second two-transistor example shows the approach to more complex cases using a current amplifier.
Single-stage transistor amplifier
Consider the simple FET feedback amplifier in Figure 3. The aim is to find the low-frequency, open-circuit, transresistance gain of this circuit G = vout / iin using the asymptotic gain model.
The small-signal equivalent circuit is shown in Figure 4, where the transistor is replaced by its hybrid-pi model.
Return ratio
It is most straightforward to begin by finding the return ratio T, because G0 and G∞ are defined as limiting forms of the gain as T tends to either zero or infinity. To take these limits, it is necessary to know what parameters T depends upon. There is only one dependent source in this circuit, so as a starting point the return ratio related to this source is determined as outlined in the article on return ratio.
The return ratio is found using Figure 5. In Figure 5, the input current source is set to zero. By cutting the dependent source out of the output side of the circuit and short-circuiting its terminals, the output side of the circuit is isolated from the input and the feedback loop is broken. A test current it replaces the dependent source. Then the return current generated in the dependent source by the test current is found. The return ratio is then T = −ir / it. Using this method, and noticing that RD is in parallel with rO, T is determined as:
T = gm (RD ∥ rO) ≈ gm RD,
where the approximation is accurate in the common case where rO >> RD. With this relationship it is clear that the limits T → 0 or T → ∞ are realized if we let the transconductance gm → 0 or gm → ∞.
Asymptotic gain
Finding the asymptotic gain G∞ provides insight, and usually can be done by inspection. To find G∞ we let gm → ∞ and find the resulting gain. The drain current, iD = gm vGS, must be finite. Hence, as gm approaches infinity, vGS also must approach zero. As the source is grounded, vGS = 0 implies vG = 0 as well. With vG = 0 and the fact that all the input current flows through Rf (as the FET has an infinite input impedance), the output voltage is simply −iin Rf. Hence
G∞ = vout / iin = −Rf.
Alternatively G∞ is the gain found by replacing the transistor by an ideal amplifier with infinite gain - a nullor.
Direct feedthrough
To find the direct feedthrough we simply let gm → 0 and compute the resulting gain. The currents through Rf and the parallel combination of RD || rO must therefore be the same and equal to iin. The output voltage is therefore iin (RD || rO).
Hence
G0 = vout / iin = RD ∥ rO ≈ RD,
where the approximation is accurate in the common case where rO >> RD.
Overall gain
The overall transresistance gain of this amplifier is therefore:
G = vout / iin = −Rf · T / (1 + T) + (RD ∥ rO) / (1 + T), with T = gm (RD ∥ rO).
Examining this equation, it appears to be advantageous to make RD large in order to make the overall gain approach the asymptotic gain, which makes the gain insensitive to amplifier parameters (gm and RD). In addition, a large first term reduces the importance of the direct feedthrough factor, which degrades the amplifier. One way to increase RD is to replace this resistor by an active load, for example, a current mirror.
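As a numeric illustration, the expressions above can be evaluated for a plausible set of component values (a minimal Python sketch; the component values are assumptions for illustration, not taken from the source):

def par(a, b):
    # parallel combination of two resistances
    return a * b / (a + b)

gm, RD, rO, Rf = 5e-3, 10e3, 100e3, 50e3    # siemens and ohms (assumed values)
T = gm * par(RD, rO)                        # return ratio, ~45
G_inf = -Rf                                 # asymptotic gain, -50 kOhm
G0 = par(RD, rO)                            # direct feedthrough, ~9.1 kOhm
G = G_inf * T / (1 + T) + G0 / (1 + T)
print(f"T = {T:.1f}, G = {G:.0f} Ohm (asymptote {G_inf:.0f} Ohm)")  # G ~ -48.7 kOhm

With a return ratio near 45, the overall gain already sits within a few percent of the −Rf asymptote, which is the point of the design commentary above.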
Two-stage transistor amplifier
Figure 6 shows a two-transistor amplifier with a feedback resistor Rf. This amplifier is often referred to as a shunt-series feedback amplifier, and analyzed on the basis that resistor R2 is in series with the output and samples output current, while Rf is in shunt (parallel) with the input and subtracts from the input current. See the article on negative feedback amplifier and references by Meyer or Sedra. That is, the amplifier uses current feedback. It frequently is ambiguous just what type of feedback is involved in an amplifier, and the asymptotic gain approach has the advantage/disadvantage that it works whether or not you understand the circuit.
Figure 6 indicates the output node, but does not indicate the choice of output variable. In what follows, the output variable is selected as the short-circuit current of the amplifier, that is, the collector current of the output transistor. Other choices for output are discussed later.
To implement the asymptotic gain model, the dependent source associated with either transistor can be used. Here the first transistor is chosen.
Return ratio
The circuit to determine the return ratio is shown in the top panel of Figure 7. Labels show the currents in the various branches as found using a combination of Ohm's law and Kirchhoff's laws. Resistor R1 = RB // rπ1 and R3 = RC2 // RL. KVL from the ground of R1 to the ground of R2 provides:
KVL provides the collector voltage at the top of RC as
Finally, KCL at this collector provides
Substituting the first equation into the second and the second into the third, the return ratio is found as
Gain G0 with T = 0
The circuit to determine G0 is shown in the center panel of Figure 7. In Figure 7, the output variable is the output current βiB (the short-circuit load current), which leads to the short-circuit current gain of the amplifier, namely βiB / iS:
Using Ohm's law, the voltage at the top of R1 is found as
or, rearranging terms,
Using KCL at the top of R2:
Emitter voltage vE already is known in terms of iB from the diagram of Figure 7. Substituting the second equation in the first, iB is determined in terms of iS alone, and G0 becomes:
Gain G0 represents feedforward through the feedback network, and commonly is negligible.
Gain G∞ with T → ∞
The circuit to determine G∞ is shown in the bottom panel of Figure 7. The introduction of the ideal op amp (a nullor) in this circuit is explained as follows. When T → ∞, the gain of the amplifier goes to infinity as well, and in such a case the differential voltage driving the amplifier (the voltage across the input transistor rπ1) is driven to zero and (according to Ohm's law when there is no voltage) it draws no input current. On the other hand, the output current and output voltage are whatever the circuit demands. This behavior is like a nullor, so a nullor can be introduced to represent the infinite gain transistor.
The current gain is read directly off the schematic:
Comparison with classical feedback theory
Using the classical model, the feed-forward is neglected and the feedback factor βFB is (assuming transistor β >> 1):
and the open-loop gain A is:
Overall gain
The above expressions can be substituted into the asymptotic gain model equation to find the overall gain G. The resulting gain is the current gain of the amplifier with a short-circuit load.
Gain using alternative output variables
In the amplifier of Figure 6, RL and RC2 are in parallel.
To obtain the transresistance gain, say Aρ, that is, the gain using voltage as output variable, the short-circuit current gain G is multiplied by RC2 // RL in accordance with Ohm's law:
Aρ = G · (RC2 // RL).
The open-circuit voltage gain is found from Aρ by setting RL → ∞.
To obtain the current gain when load current iL in load resistor RL is the output variable, say Ai, the formula for current division is used: iL = iout × RC2 / ( RC2 + RL ), and the short-circuit current gain G is multiplied by this loading factor:
Ai = G · RC2 / (RC2 + RL).
Of course, the short-circuit current gain is recovered by setting RL = 0 Ω.
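Both loading conversions are mechanical enough to capture in a short Python sketch (the function names and component values are illustrative assumptions, not from the source):

def transresistance(G, RC2, RL):
    # output voltage per unit input current: G times the parallel load
    return G * (RC2 * RL) / (RC2 + RL)

def loaded_current_gain(G, RC2, RL):
    # current division of the output current between RC2 and RL
    return G * RC2 / (RC2 + RL)

G, RC2, RL = 9.0, 5e3, 1e3                  # assumed values
print(transresistance(G, RC2, RL))          # transresistance in ohms
print(loaded_current_gain(G, RC2, RL))      # loaded current gain
print(loaded_current_gain(G, RC2, 0.0))     # RL = 0 recovers G, per the text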
References and notes
See also
Blackman's theorem
Extra element theorem
Mason's gain formula
Feedback amplifiers
Return ratio
Signal-flow graph
External links
Lecture notes on the asymptotic gain model
Electronic feedback
Electronic amplifiers
Control theory
Signal processing
Analog circuits | Asymptotic gain model | [
"Mathematics",
"Technology",
"Engineering"
] | 2,371 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Applied mathematics",
"Control theory",
"Analog circuits",
"Electronic engineering",
"Electronic amplifiers",
"Amplifiers",
"Dynamical systems"
] |
997,141 | https://en.wikipedia.org/wiki/Areostationary%20orbit | An areostationary orbit, areosynchronous equatorial orbit (AEO), or Mars geostationary orbit is a circular areosynchronous orbit (ASO) approximately 17,032 km in altitude above the Mars equator and following the direction of Mars's rotation.
An object in such an orbit has an orbital period equal to Mars's rotational period, and so to ground observers it appears motionless in a fixed position in the sky. It is the Martian analog of a geostationary orbit (GEO). The prefix areo- derives from Ares, the ancient Greek god of war and counterpart to the Roman god Mars, with whom the planet was identified.
Although it would allow for uninterrupted communication and observation of the Martian surface, no artificial satellites have been placed in this orbit due to the technical complexity of achieving and maintaining one.
Characteristics
The radius of an areostationary orbit can be calculated using Kepler's Third Law:
r = ∛( G M T² / (4π²) )
where G is the gravitational constant, M is the mass of Mars, and T is Mars's sidereal rotational period.
Substituting the mass of Mars for M and the Martian sidereal day for T and solving for the semimajor axis yields a synchronous orbit radius of about 20,428 km from the center of Mars. Subtracting Mars's equatorial radius of about 3,396 km gives an orbital altitude of about 17,032 km above the surface of the Mars equator.
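A minimal Python sketch reproducing this calculation (the gravitational parameter, sidereal day, and equatorial radius of Mars used below are standard published values):

import math

GM_mars = 4.2828e13      # gravitational parameter G*M of Mars, m^3/s^2
T_sid = 88_642.66        # Martian sidereal rotation period, s
R_mars = 3.3962e6        # equatorial radius of Mars, m

# Kepler's third law solved for the semi-major axis of a circular orbit
r = (GM_mars * T_sid**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius   ~ {r / 1e3:,.0f} km")             # ~20,428 km
print(f"orbital altitude ~ {(r - R_mars) / 1e3:,.0f} km")  # ~17,032 km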
Two stable longitudes exist - 17.92°W and 167.83°E. Satellites placed at any other longitude will tend to drift to these stable longitudes over time.
Feasibility
Several factors make placing a spacecraft into an areostationary orbit more difficult than into a geostationary orbit. Since the areostationary orbit lies between Mars's two natural satellites, Phobos (semi-major axis: 9,376 km) and Deimos (semi-major axis: 23,463 km), any satellites in the orbit will suffer increased orbital station-keeping costs due to unwanted orbital resonance effects. Mars's gravity field is also much less spherical than Earth's, due in part to uneven volcanism (e.g., Olympus Mons). This creates additional gravitational disturbances not present around Earth, further destabilizing the orbit. Solar radiation pressure and Sun-based perturbations are also present, as with an Earth-based geostationary orbit. Actually placing a satellite into such an orbit is further complicated by the distance from Earth and the related challenges shared by any attempted Mars mission.
Uses
Satellites in an areostationary orbit would allow greater amounts of data to be relayed back from the Martian surface more easily than with current methods. Satellites in the orbit would also be advantageous for monitoring Martian weather and mapping the Martian surface.
In the early 2000s NASA explored the feasibility of placing communications satellites in an areocentric orbit as part of the Mars Communication Network. In the concept, an areostationary relay satellite would transmit data from a network of landers and smaller satellites in lower Martian orbits back to Earth.
See also
Geostationary orbit
Areosynchronous orbit
List of orbits
References
External links
Mars Network - Marsats - Historic NASA site devoted to proposed communications infrastructure for Mars exploration
Bandwidth available from an areostationary satellite
Mars orbits
Astrodynamics | Areostationary orbit | [
"Engineering"
] | 637 | [
"Astrodynamics",
"Aerospace engineering"
] |
997,178 | https://en.wikipedia.org/wiki/Oxygen%20minimum%20zone | The oxygen minimum zone (OMZ), sometimes referred to as the shadow zone, is the zone in which oxygen saturation in seawater in the ocean is at its lowest. This zone occurs at depths of about 200 to 1,500 m, depending on local circumstances. OMZs are found worldwide, typically along the western coast of continents, in areas where an interplay of physical and biological processes concurrently lower the oxygen concentration (biological processes) and restrict the water from mixing with surrounding waters (physical processes), creating a "pool" of water where oxygen concentrations fall from the normal range of 4–6 mg/L to below 2 mg/L.
Physical and biological processes
Surface ocean waters generally have oxygen concentrations close to equilibrium with the Earth's atmosphere. In general, colder waters hold more oxygen than warmer waters. As water moves out of the mixed layer into the thermocline, it is exposed to a rain of organic matter from above. Aerobic bacteria feed on this organic matter; oxygen is used as part of the bacterial metabolic process, lowering its concentration within the water. Therefore, the concentration of oxygen in deep water is dependent on the amount of oxygen it had when it was at the surface, minus depletion by deep sea organisms.
The downward flux of organic matter decreases sharply with depth, with 80–90% being consumed in the upper water column. The deep ocean thus has higher oxygen because rates of oxygen consumption are low compared with the supply of cold, oxygen-rich deep waters from polar regions. In the surface layers, oxygen is supplied by the photosynthesis of phytoplankton. Depths in between, however, have higher rates of oxygen consumption and lower rates of advective supply of oxygen-rich waters. In much of the ocean, mixing processes enable the resupply of oxygen to these waters (see upwelling).
The distribution of the open-ocean oxygen minimum zones is controlled by large-scale ocean circulation as well as by local physical and biological processes. For example, wind blowing parallel to the coast causes Ekman transport that upwells nutrients from deep water. The increased nutrients support phytoplankton blooms, zooplankton grazing, and an overall productive food web at the surface. The byproducts of these blooms and the subsequent grazing sink in the form of particulate and dissolved nutrients (from phytodetritus, dead organisms, fecal pellets, excretions, shed shells, scales, and other parts). This "rain" of organic matter (see the biological pump) feeds the microbial loop and may lead to bacterial blooms in water below the euphotic zone due to the influx of nutrients. Since oxygen is not being produced as a byproduct of photosynthesis below the euphotic zone, these microbes use up what oxygen is in the water as they break down the falling organic matter, thus creating the lower-oxygen conditions.
Physical processes then constrain the mixing and isolate this low oxygen water from outside water. Vertical mixing is constrained due to the separation from the mixed layer by depth. Horizontal mixing is constrained by bathymetry and boundaries formed by interactions with sub-tropical gyres and other major current systems. Low oxygen water may spread (by advection) from under areas of high productivity up to these physical boundaries to create a stagnant pool of water with no direct connection to the ocean surface even though (as in the Eastern Tropical North Pacific) there may be relatively little organic matter falling from the surface.
Microbes
In OMZs the oxygen concentration drops to levels below 10 nM at the base of the oxycline, and the water column can remain anoxic over a depth range of more than 700 m. This lack of oxygen can be reinforced or increased by physical processes that change the oxygen supply, such as eddy-driven advection, sluggish ventilation, increases in ocean stratification, and increases in ocean temperature, which reduces oxygen solubility.
At a microscopic scale the processes causing ocean deoxygenation rely on microbial aerobic respiration. Aerobic respiration is a metabolic process that microorganisms like bacteria or archaea use to obtain energy by degrading organic matter, consuming oxygen, producing CO2 and obtaining energy in the form of ATP. In the ocean surface photosynthetic microorganisms called phytoplankton use solar energy and CO2 to build organic molecules (organic matter) releasing oxygen in the process. A large fraction of the organic matter from photosynthesis becomes dissolved organic matter (DOM) that is consumed by bacteria during aerobic respiration in sunlit waters. Another fraction of organic matter sinks to the deep ocean forming aggregates called marine snow. These sinking aggregates are consumed via degradation of organic matter and respiration at depth.
At depths in the ocean where no light can reach, aerobic respiration is the dominant process. When the oxygen in a parcel of water is consumed, the oxygen cannot be replaced without the water reaching the surface ocean. When oxygen concentrations drop below 10 nM, microbial processes that are normally inhibited by oxygen can take place, such as denitrification and anammox. Both processes convert fixed nitrogen compounds into elemental nitrogen, which does not stay in solution and escapes as a gas, resulting in a net loss of nitrogen from the ocean.
Bioavailability of oxygen
Oxygen demand
An organism's demand for oxygen is dependent on its metabolic rate. Metabolic rates can be affected by external factors such as the temperature of the water, and internal factors such as the species, life stage, size, and activity level of the organism. The body temperature of ectotherms (such as fishes and invertebrates) fluctuates with the temperature of the water. As the external temperature increases, ectotherm metabolisms increase as well, increasing their demand for oxygen. Different species have different basal metabolic rates and therefore different oxygen demands.
Life stages of organisms also have different metabolic demands. In general, younger stages tend to grow in size and advance in developmental complexity quickly. As the organism reaches maturity, metabolic demands switch from growth and development to maintenance, which requires far fewer resources. Smaller organisms have higher metabolisms per unit of mass, so smaller organisms will require more oxygen per unit mass, while larger organisms generally require more total oxygen. Higher activity levels also require more oxygen.
This is why bioavailability is important in deoxygenated systems: an oxygen quantity which is dangerously low for one species might be more than enough for another species.
Indices and calculations
Several indices to measure bioavailability have been suggested: Respiration Index, Oxygen Supply Index, and the Metabolic Index. The Respiration Index describes oxygen availability based on the free energy available in the reactants and products of the stoichiometric equation for respiration. However, organisms have ways of altering their oxygen intake and carbon dioxide release, so the strict stoichiometric equation is not necessarily accurate. The Oxygen Supply Index accounts for oxygen solubility and partial pressure, along with the Q10 of the organism, but does not account for behavioral or physiological changes in organisms to compensate for reduced oxygen availability. The Metabolic Index accounts for the supply of oxygen in terms of solubility, partial pressure, and diffusivity of oxygen in water, and the organism's metabolic rate. The metabolic index is generally viewed as a closer approximation of oxygen bioavailability than the other indices.
There are two thresholds of oxygen required by organisms:
Pcrit (critical partial pressure)- the oxygen level below which an organism cannot support a normal respiration rate
Pleth (lethal partial pressure)- the oxygen level below which an organism cannot support the minimum respiration rate necessary for survival.
Since bioavailability is specific to each organism and temperature, calculation of these thresholds is done experimentally by measuring activity and respiration rates under different temperature and oxygen conditions, or by collecting data from separate studies.
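As an illustration of how such a threshold can be extracted from respirometry data, the sketch below fits a simple two-segment "broken-stick" model: oxyconforming (linear) below the breakpoint and oxyregulating (flat) above it. This is one common approach, not a method prescribed by this article; all names, units (kPa), and values are illustrative:

import numpy as np

def estimate_pcrit(po2, resp, n_grid=200):
    # Grid-search the breakpoint that minimizes squared error of the
    # two-segment model: flat plateau above Pcrit, linear ramp below it.
    po2, resp = np.asarray(po2, float), np.asarray(resp, float)
    best_pc, best_sse = None, np.inf
    for pc in np.linspace(po2.min(), po2.max(), n_grid)[1:-1]:
        plateau = resp[po2 >= pc].mean()        # regulated rate above Pcrit
        pred = np.where(po2 >= pc, plateau, plateau * po2 / pc)
        sse = ((resp - pred) ** 2).sum()
        if sse < best_sse:
            best_pc, best_sse = pc, sse
    return best_pc

# Synthetic respirometry data with a true breakpoint near 4 kPa:
rng = np.random.default_rng(0)
po2 = np.linspace(0.5, 20, 40)
resp = np.minimum(po2 / 4.0, 1.0) + rng.normal(0, 0.03, po2.size)
print(f"estimated Pcrit ~ {estimate_pcrit(po2, resp):.1f} kPa")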
Life in the OMZ
Despite the low oxygen conditions, organisms have evolved to live in and around OMZs. For those organisms, like the vampire squid, special adaptations are needed to either make do with lesser amounts of oxygen or to extract oxygen from the water more efficiently. For example, the giant red mysid (Gnathophausia ingens) continues to live aerobically (using oxygen) in OMZs. They have highly developed gills with large surface area and thin blood-to-water diffusion distance that enables effective removal of oxygen from the water (up to 90% O2 removal from inhaled water) and an efficient circulatory system with high capacity and high blood concentration of a protein (hemocyanin) that readily binds oxygen.
Another strategy used by some classes of bacteria in the oxygen minimum zones is to use nitrate rather than oxygen, thus drawing down the concentrations of this important nutrient. This process is called denitrification. The oxygen minimum zones thus play an important role in regulating the productivity and ecological community structure of the global ocean. For example, giant bacterial mats floating in the oxygen minimum zone off the west coast of South America may play a key role in the region's extremely rich fisheries, as bacterial mats the size of Uruguay have been found there.
Zooplankton
Decreased oxygen availability results in decreases in many zooplankton species' egg production, food intake, respiration, and metabolic rates. Temperature and salinity in areas of decreased oxygen concentration also affect oxygen availability. Higher temperatures and salinities lower oxygen solubility, decreasing the partial pressure of oxygen. This decreased partial pressure increases organisms' respiration rates, causing their oxygen demand to increase.
In addition to affecting their vital functions, zooplankton alter their distribution in response to hypoxic or anoxic zones. Many species actively avoid low oxygen zones, while others take advantage of their predators' low tolerance for hypoxia and use these areas as a refuge. Zooplankton that exhibit daily vertical migrations to avoid predation and low oxygen conditions also excrete ammonium near the oxycline and contribute to increased anaerobic ammonium oxidation (anammox), which produces N2 gas. As hypoxic regions expand vertically and horizontally, the habitable ranges for phytoplankton, zooplankton, and nekton increasingly overlap, increasing their susceptibility to predation and human exploitation.
Changes
OMZs have changed over time due to effects from numerous global chemical and biological processes. To assess these changes, scientists utilize climate models and sediment samples to understand changes to dissolved oxygen in OMZs. Many recent studies of OMZs have focused on their fluctuations over time and how they may be currently changing as a result of climate change.
In geological time scales
Some research has aimed to understand how OMZs have changed over geological time scales. Throughout the history of Earth's oceans, OMZs have fluctuated on long time scales, becoming larger or smaller depending on multiple variables. The factors that change OMZs are the amount of oceanic primary production resulting in increased respiration at greater depths, changes in the oxygen supply due to poor ventilation, and the amount of oxygen supplied through thermohaline circulation.
Since industrialization
See also
Dead zone (ecology), localized areas of dramatically reduced oxygen levels, often due to human impacts.
Ocean deoxygenation
References
Chemical oceanography
Aquatic ecology
Biological oceanography
Physical oceanography | Oxygen minimum zone | [
"Physics",
"Chemistry",
"Biology"
] | 2,335 | [
"Applied and interdisciplinary physics",
"Chemical oceanography",
"Ecosystems",
"Physical oceanography",
"Aquatic ecology"
] |
997,189 | https://en.wikipedia.org/wiki/Lenovo | Lenovo Group Limited, trading as Lenovo, is a Chinese multinational technology company specializing in designing, manufacturing, and marketing consumer electronics, personal computers, software, servers, converged and hyperconverged infrastructure solutions, and related services. Its global headquarters are in Beijing and Morrisville, North Carolina, United States; it has research centers at these locations, elsewhere in China, in Stuttgart, Germany, and in Yamato, Kanagawa, Japan.
Lenovo originated as an offshoot of a state-owned research institute. Then known as Legend and initially a distributor of foreign IT products, the company was incorporated in Hong Kong by co-founder Liu Chuanzhi in an attempt to raise capital; it was subsequently permitted to build computers in China, with help from the American firm AST Research. Legend listed on the Hong Kong Stock Exchange in 1994 and became the largest PC manufacturer in China and eventually in Asia; it was also a domestic distributor for HP printers, Toshiba laptops, and other foreign brands. After the company rebranded itself as Lenovo, it acquired IBM's PC business, including its ThinkPad line, in 2005, after which it rapidly expanded abroad. In 2013, Lenovo became the world's largest personal computer vendor by unit sales for the first time, a position it still holds as of 2024.
Products manufactured by the company include desktop computers, laptops, tablet computers, smartphones, workstations, servers, supercomputers, data storage devices, IT management software, and smart televisions. Its best-known brands include its ThinkPad business line of notebooks, the IdeaPad, Yoga, LOQ, and Legion consumer lines of notebooks, and the IdeaCentre, LOQ, Legion, and ThinkCentre lines of desktops. Lenovo is also part of a joint venture with NEC, named Lenovo NEC Holdings, that produces personal computers for the Japanese market. The company also operates Motorola Mobility which produces smartphones.
History
1984–1993: Founding and early history
Lenovo was founded in Beijing on 1 November 1984 as Legend by a team of engineers led by Liu Chuanzhi and Danny Lui. Initially specializing in televisions, the company migrated towards manufacturing and marketing computers.
Liu Chuanzhi and his group of ten experienced engineers, teaming up with Danny Lui, officially founded Lenovo in Beijing on 1 November 1984, with 200,000 yuan. The Chinese government approved Lenovo's incorporation on the same day. Jia Xufu (贾续福), one of the founders of Lenovo, indicated that the first meeting in preparation for starting the company was held on October 17 the same year. Eleven people, the entirety of the initial staff, attended. Each of the founders was a member of the Institute of Computing Technology of the Chinese Academy of Sciences (CAS). The 200,000 yuan used as start-up capital was approved by Zeng Maochao (曾茂朝). The name for the company agreed upon at this meeting was the Chinese Academy of Sciences Computer Technology Research Institute New Technology Development Company.
The organizational structure of the company was established in 1985 after the Chinese New Year. It included technology, engineering, administrative, and office departments. The group first attempted to import televisions but failed. It rebuilt itself as a company doing quality checks on computers. It also tried and failed to market a digital watch.
In May 1988, Lenovo placed its first recruitment advertisement on the front page of the China Youth News. Such ads were quite rare in China at the time. Out of the 500 respondents, 280 were selected to take a written employment exam. 120 of these candidates were interviewed in person. Although interviewers initially only had the authority to hire 16 people, 58 were given offers. The new staff included 18 people with graduate degrees, 37 with undergraduate degrees, and three students with no university-level education. Yang Yuanqing, the current chairman and CEO of Lenovo, was among that group.
Liu Chuanzhi received government permission to form a subsidiary in Hong Kong and to move there along with five other employees. Liu's father, already in Hong Kong along with Lui, furthered his son's ambitions through mentoring and facilitating loans. Liu moved to Hong Kong in 1988. To save money during this period, Liu and his co-workers walked instead of taking public transportation. To keep up appearances, they rented hotel rooms for meetings.
In 1990, Lenovo started to manufacture and market computers using its own brand name. Some of the company's early successes included the KT8920 mainframe computer. It also developed a circuit board that allowed IBM-compatible personal computers to process Chinese characters.
1994–1998: IPO, second offerings and bond sales
Lenovo (known at the time as Legend) became publicly traded after a 1994 Hong Kong IPO. Prior to the IPO, many analysts were optimistic about Lenovo. On its first day of trading, the company's share price hit a high and closed above the offer price, suggesting an initial under-valuing of the company. Proceeds from the offering were used to finance sales offices in Europe, North America and Australia, to expand and improve production and research and development, and to increase working capital.
By 1996, Lenovo was the market leader in China and began selling its own laptop. By 1998 it held 43 per cent of the domestic computer market share in China, selling approximately one million computers.
Lenovo released its Tianxi computer in 1998. Designed to make it easy for inexperienced Chinese consumers to use computers and access the internet, one of its most important features was a button that instantly connected users to the internet and opened the Web browser. It was co-branded with China Telecom and bundled with one year of Internet service. The result of two years of research and development, it had a pastel-colored, shell-shaped case and a seven-port USB hub under its screen. As of 2000, the Tianxi was the best-selling computer in Chinese history, selling more than 1,000,000 units in 2000 alone.
1999–2010: IBM purchase and sale of smartphone division
To fund its continued growth, Lenovo issued a secondary offering of 50 million shares on the Hong Kong market in March 2000. It rebranded to the name Lenovo in 2003 and began making acquisitions to expand the company.
Lenovo acquired IBM's personal computer business in 2005, including the ThinkPad laptop and ThinkCentre desktop lines. Lenovo's acquisition of IBM's personal computer division accelerated access to foreign markets while improving Lenovo's branding and technology. Lenovo paid US$1.25 billion for IBM's computer business and assumed an additional US$500 million of IBM's debt. This acquisition made Lenovo the third-largest computer maker worldwide by volume. Lenovo's purchase of the Think line from IBM also led to the creation of the IBM/Lenovo partnership, under which the two companies work together on the Think line of products sold by Lenovo.
On the purchase of IBM's personal computer division, Chuanzhi said in 2012: "We benefited in three ways from the IBM acquisition. We got the ThinkPad brand, IBM's more advanced PC manufacturing technology and the company's international resources, such as its global sales channels and operation teams. These three elements have shored up our sales revenue in the past several years." The employees of the division, including those who developed ThinkPad laptops and ThinkCentre desktops, became employees of Lenovo.
Despite Lenovo acquiring the "Think" brand from IBM, IBM still plays a key indirect, background role in the design and production of the Think line of products. Today, IBM is responsible for overseeing servicing and repair centers, and is considered an authorized distributor and refurbisher of the Think line of products produced by Lenovo.
IBM also acquired an 18.9% share of Lenovo in 2005 as part of Lenovo's purchase of IBM's personal computing division. In the years following the deal, IBM sold their stake in Lenovo, with a final sale in 2011 completing their divestment.
Mary Ma, Lenovo's chief financial officer from 1990 to 2007, was in charge of investor relations. Under her leadership, Lenovo successfully integrated Western-style accountability into its corporate culture. Lenovo's emphasis on transparency earned it a reputation for the best corporate governance among mainland Chinese firms. While Hong Kong-listed firms were only required to issue financial reports twice per year, Lenovo followed the international norm of issuing quarterly reports. Lenovo created an audit committee and a compensation committee with non-management directors. The company started roadshows twice per year to meet institutional investors. Ma organized the first-ever investor relations conference held in mainland China. The conference was held in Beijing in 2002 and televised on China Central Television (CCTV). Liu and Ma co-hosted the conference and both gave speeches on corporate governance.
Lenovo sold its smartphone and tablet division in 2008 for US$100 million in order to focus on personal computers, and then paid US$200 million to buy it back in November 2009. At the time of the buyback, the mobile division ranked third in terms of unit share in China's mobile handset market. Lenovo invested in a fund dedicated to providing seed funding for mobile application development for its LeGarden online app store. As of 2010, LeGarden had more than 1,000 programs available for the LePhone. At the same time, LeGarden counted 2,774 individual developers and 542 developer companies as members.
2011–2013: Re-entering smartphone market and other ventures
On 27 January 2011, Lenovo formed a joint venture to produce personal computers with Japanese electronics firm NEC. The companies said in a statement that they would establish a new company called Lenovo NEC Holdings, to be registered in the Netherlands. NEC received US$175 million in Lenovo stock. Lenovo was to own a 51% stake in the joint venture, while NEC would have 49%. Lenovo has a five-year option to expand its stake in the joint venture.
This joint venture was intended to boost Lenovo's worldwide sales by expanding its presence in Japan, a key market for personal computers. NEC spun off its personal computer business into the joint venture. As of 2010, NEC controlled about 20% of Japan's market for personal computers while Lenovo had a 5% share. Lenovo and NEC also agreed to explore cooperating in other areas such as servers and tablet computers.
Roderick Lappin, chairman of the Lenovo–NEC joint venture, told the press that the two companies will expand their co-operation to include the development of tablet computers.
In June 2011, Lenovo announced that it planned to acquire control of Medion, a German electronics manufacturing company. Lenovo said the acquisition would double its share of the German computer market, making it the third-largest vendor by sales (after Acer and Hewlett-Packard). The deal, which closed in the third quarter of the same year, was described by The New York Times as "the first in which a Chinese company acquired a well-known German company."
The acquisition gave Lenovo 14% of the German computer market. Gerd Brachmann, chairman of Medion, agreed to sell two-thirds of his 60 per cent stake in the company. He was to be paid in cash for 80 per cent of the shares and receive 20 per cent in Lenovo stock, which would give him about one per cent of Lenovo.
In September 2012, Lenovo agreed to acquire the Brazil-based electronics company Digibras, which sells products under the brand-name CCE, for a base price of R$300 million (US$148 million) in a combination of stock and cash. An additional payment of R$400 million was made dependent upon performance benchmarks. Prior to its acquisition of CCE, Lenovo had already established a $30 million factory in Brazil, but Lenovo's management felt that they needed a local partner to maximize regional growth. Lenovo cited its desire to take advantage of increased sales due to the 2014 World Cup that would be hosted by Brazil and the 2016 Summer Olympics, as well as CCE's reputation for quality. Following the acquisition, Lenovo announced that its subsequent acquisitions would be concentrated in software and services.
In September 2012, Lenovo agreed to acquire the United States–based software company Stoneware, its first software acquisition. The transaction was expected to close by the end of 2012; no financial details were disclosed. Lenovo said that the company was acquired in order to gain access to new technology and that Stoneware was not expected to significantly affect earnings. More specifically, Stoneware was acquired to further Lenovo's efforts to improve and expand its cloud-computing services. For the two years prior to its acquisition, Stoneware partnered with Lenovo to sell its software. During this period Stoneware's sales doubled. Stoneware was founded in 2000. As of September 2012, Stoneware was based in Carmel, Indiana, and had 67 employees.
Lenovo re-entered the smartphone market in 2012 and quickly became the largest vendor of smartphones in mainland China. Entry into the smartphone market was paired with a change of strategy from a "one-size-fits-all" approach to a diverse portfolio of devices. These changes were driven by the popularity of Apple's iPhone and Lenovo's desire to increase its market share in mainland China. Lenovo surpassed Apple Inc. to become the No. 2 provider of smartphones in the domestic Chinese market in 2012. However, with approximately 100 smartphone brands sold in China, this only equated to a 10.4% market share.
In May 2012, Lenovo announced an investment of US$793 million in the construction of a mobile phone manufacturing and R&D facility in Wuhan, Hubei.
In 2013, Lenovo created a joint venture with EMC named Iomega. The venture took over Iomega's business, rebranded all of Iomega's products under the LenovoEMC brand, and designed products for small and medium-sized businesses that could not afford enterprise-class data storage. Lenovo has since retired all of the LenovoEMC products; its product page advises that they are no longer available for purchase on lenovo.com.
Since 2014: Purchase of IBM server lines and other acquisitions
IBM sold its x86-based server lines, including System x and BladeCenter, to Lenovo in 2014. Lenovo said it would gain access to more enterprise customers, improve its profit margins, and develop a closer relationship with Intel, the maker of most server processors, through its acquisition of IBM's x86-based server business. On 1 October 2014, Lenovo closed its acquisition of IBM's server division, with the final price put at $2.1 billion. Lenovo said this acquisition came in at a price lower than the previously announced $2.3 billion, partially because of a change in the value of IBM inventories. The deal had already been approved by regulators in Europe and China; per Forbes, the United States Department of the Treasury's Committee on Foreign Investment in the United States (CFIUS) was reportedly the last major hurdle for Lenovo, since the United States has the strictest policies. According to Timothy Prickett-Morgan of Enterprise Tech, the deal at one point still awaited "approval of regulators in China, the European Commission, and Canada".
After closing, Lenovo said that its goal was to become the world's largest maker of servers. Lenovo also announced plans to start integrating IBM's workforce. The acquisition added about 6,500 new employees to Lenovo. Lenovo said that it has no immediate intent to cut jobs. Lenovo said that positions in research and development and customer-facing roles such as marketing would be "100% protected", but expected "rationalization" of its supply chain and procurement.
On 29 January 2014, Google announced it would sell Motorola Mobility to Lenovo for US$2.91 billion. As of February 2014, Google owned about 5.94% of Lenovo's stock. The deal included smartphone lines like the Moto X, Moto G, Droid Turbo, and the future Motorola Mobility product roadmap, while Google retained the Advanced Technologies & Projects unit and all but 2,000 of the company's patents. Lenovo received royalty free licenses to all the patents retained by Google. Lenovo received approval from the European Union for its acquisition of Motorola in June 2014. The acquisition was completed on 30 October 2014. Motorola Mobility remained headquartered in Chicago, and continued to use the Motorola brand, but Liu Jun, president of Lenovo's mobile device business, became the head of the company.
In April 2014, Lenovo purchased a portfolio of patents from NEC related to mobile technology. These included over 3,800 patent families in countries around the world. The purchase included standards-essential patents for 3G and LTE cellular technologies and other patents related to smartphones and tablets.
In May 2015, Lenovo revealed a new logo at Lenovo Tech World in Beijing, with the slogan "Innovation Never Stands Still" (). Lenovo's new logo, created by Saatchi, can be changed by its advertising agencies and sales partners, within restrictions, to fit the context. It has a lounging "e" and is surrounded by a box that can be changed to use a relevant scene, solid color, or photograph. Lenovo's Chief Marketing Officer David Roman said, "When we first started looking at it, it wasn't about just a change in typography or the look of the logo. We asked 'If we really are a net-driven, customer-centric company, what should the logo look like?' We came up with the idea of a digital logo first [...] designed to be used on the internet and adaptable to context."
In early June 2015, Lenovo announced plans to sell up to US$650 million in five-year bonds denominated in Chinese yuan. The bonds were sold in Hong Kong with coupons ranging from 4.95% to 5.05%. This was only the second sale of bonds in Lenovo's history. Financial commentators noted that Lenovo was paying a premium to list the bonds in yuan, given the relatively low costs of borrowing in US dollars.
Lenovo said that its x86 servers would be available to all its channel partners, and planned to cut prices on x86 products in order to gain market share. This aligned with IBM's vision of a future centered on cloud technologies and its own POWER processor architecture.
Lenovo's acquisition of IBM's businesses is arguably one of the greatest case studies on merging massive international enterprises. Though this acquisition in 2005 ultimately resulted in success, the integration of the businesses had a difficult and challenging beginning. Lenovo had employees from different cultures, different backgrounds, and different languages. These differences caused misunderstandings, hampering trust and the ability to build a new corporate culture. At the end of its first two years, Lenovo Group had met many of its original challenges, including integrating two disparate cultures in the newly formed company, maintaining the Think brand image for quality and innovation, and improving supply chain and manufacturing efficiencies. However, Lenovo had failed to meet a key objective of the merger: leveraging the combined strength of the two companies to grow volume and market share. In order to achieve success, Lenovo embraced diversification at multiple levels: business model, culture, and talent. By 2015, Lenovo grew into the world's number 1 PC maker, number 3 smartphone manufacturer and number 3 in the production of tablet computers.
In March 2017, Lenovo announced it was partnering with Fort Lauderdale, Florida–based software storage virtualization company DataCore to add DataCore's parallel I/O-processing software to Lenovo's storage devices. The servers were reportedly designed to outperform storage area network (SAN) arrays.
In 2017, Lenovo formed a joint venture with Fujitsu and the Development Bank of Japan (DBJ). In the joint venture, Fujitsu would sell Lenovo a 51% stake in Fujitsu Client Computing Limited. DBJ would acquire a 5% stake.
In September 2018, Lenovo and NetApp announced a strategic partnership and a joint venture in China. As part of the partnership, Lenovo introduced two new lines of storage systems: the DM-Series and the DE-Series. Both lines combine Lenovo hardware with NetApp software: the DM-Series runs the ONTAP OS and the DE-Series runs the SANtricity OS.
In 2018, Lenovo became the world's largest provider for the TOP500 supercomputers.
In 2020, Lenovo became a preferred data center innovation provider for DreamWorks Animation starting with Trolls World Tour.
On 12 January 2021, Lenovo filed an application to issue Chinese depositary receipts, representing newly issued ordinary shares, and to list them on the Science and Technology Innovation Board of the Shanghai Stock Exchange.
In April 2021, Lenovo was reorganized into three divisions: the Intelligent Devices Group for PCs, smartphones, smart collaboration products, augmented and virtual reality solutions and Internet of Things devices; the Infrastructure Solutions Group (formerly known as the Data Center Group) for smart infrastructure solutions; and the Solutions and Services Group focused on services and industry-specific products. That year, the company hit $60 billion in annual revenues.
On 8 October 2021, Lenovo withdrew its application to list on the Shanghai Stock Exchange just days after it had been accepted by the exchange, citing the possibility of the validity of financial information in its prospectus lapsing as the reason. The price of the company's shares on the Hong Kong Stock Exchange dropped by over 17% following the news, which was its biggest intraday decline in over a decade.
Name
"Lenovo" is a portmanteau of "Le-" (from Legend) and "novo", Latin ablative for "new". The Chinese name () means "association" (as in "association of ideas"), "associative thinking", or "connected thinking". It also implies creativity. "Lianxiang" was first used to refer to a layout of Chinese typewriters in the 1950s organized into groups of common words and phrases rather than the standard dictionary layout.
For the first 20 years of its existence, the company's English name was "Legend". In 2002, Yang Yuanqing decided to abandon the Legend English name to expand beyond the Chinese home market. "Legend" was already in use worldwide by many businesses, making it impossible to register in many jurisdictions outside China. In April 2003, the company publicly announced its new English name, "Lenovo", with an advertising campaign including huge billboards and primetime television ads. Lenovo spent 18 million RMB on an eight-week television advertising campaign. The billboards showed the Lenovo logo against blue sky with a slogan that read, "Transcendence depends on how you think." By the end of 2003, Lenovo had spent a total of 200 million RMB on rebranding.
Products and services
Lenovo is a manufacturer of personal computers, smartphones, televisions, and wearable devices. Some of the company's earliest products included the KT8920 mainframe computer and a circuit board that allowed IBM-compatible personal computers to process Chinese characters. One of its first computers was the Tianxi, released in 1998 in the Chinese market. It became the best-selling computer in Chinese history in 2000.
Personal and business computing
Lenovo markets the ThinkPad, IdeaPad, Yoga, Legion and Xiaoxin (Chinese market only) lines of laptops, as well as the IdeaCentre and ThinkCentre lines of desktops. It expanded significantly in 2005 through its acquisition of IBM's personal computer business, including its ThinkPad and ThinkCentre lines. As of January 2013, shipments of THINK-branded computers had doubled since Lenovo's takeover of the brand, with profit margins thought to be above 5%. Lenovo aggressively expanded the THINK brand away from traditional laptop computers in favor of tablets and hybrid devices such as the ThinkPad Tablet 2, ThinkPad Yoga, ThinkPad 8, ThinkPad Helix, and ThinkPad Twist; the shift came as a response to the growing popularity of mobile devices and the release of Windows 8 in October 2012. Lenovo achieved significant success with this high-value strategy and in 2013 controlled more than 40% of the market for Windows computers priced above $900 in the United States.
ThinkPad
The ThinkPad is a line of business-oriented laptops known for their boxy black design, modeled after a traditional Japanese bento box. The ThinkPad was originally an IBM product developed at the Yamato Facility in Japan by a team led by Arimasa Naitoh; the line has been developed, manufactured and sold by Lenovo since early 2005, following its acquisition of IBM's personal computer division. The ThinkPad has been used in space and was the only laptop model certified for use on the International Space Station until 2016.
ThinkCentre
The ThinkCentre is a line of business-oriented desktop computers introduced in 2003 by IBM and produced and sold by Lenovo since 2005. ThinkCentre computers typically include mid-range to high-end processors, options for discrete graphics cards, and multi-monitor support. As with the ThinkPad line, there have been budget lines of ThinkCentre-branded computers in the past, for example the M55e, A50, and M72 series. These "budget" lines are typically thin clients, however, meaning they are not standalone computers but rather access points to a central server.
ThinkServer, followed by ThinkSystem
The ThinkServer product line began with the TS100 from Lenovo. The server was developed under an agreement with IBM, by which Lenovo would produce single-socket and dual-socket servers based on IBM's xSeries technology. An additional feature of the server design was a support package aimed at small businesses. The focus of this support package was to provide small businesses with software tools to ease the process of server management and reduce dependence on IT support.
On 20 June 2017, Lenovo's Data Center Group relaunched the ThinkServer product line as ThinkSystem, a catalog of 17 new machine-type models in form factors including tower, 1U/2U rack, blade, dense, and 4U mission-critical Intel-based servers. The relaunch also included a portfolio of storage arrays and of Fibre Channel SAN switches and directors. To broaden its portfolio, Lenovo struck an agreement with the processor company AMD to supply customers with a choice between Intel- and AMD-powered appliances. In August 2019, the first two ThinkSystem platforms containing a single AMD EPYC processor were introduced to the market, the SR635 (1U) and the SR655 (2U). In May 2020, Lenovo DCG further expanded its AMD offerings with two-processor systems, the SR645 and the SR665.
ThinkStation
Lenovo ThinkStations are workstations designed for high-end computing. In 2008, IBM/Lenovo expanded the focus of its THINK brand to include workstations, with the ThinkStation S10 being the first model released.
ThinkVision
High-end monitors are marketed under the ThinkVision name. ThinkVision displays share a common design language with other THINK devices such as the ThinkPad line of laptop computers and ThinkCentre line of desktop computers. At the 2014 International CES, Lenovo announced the ThinkVision Pro2840m, a 28-inch 4K display aimed at professionals. Lenovo also announced another 28-inch 4K touch-enabled device running Android that can function as an all-in-one PC or an external display for other devices.
At the 2016 International CES, Lenovo announced two displays with both USB-C and DisplayPort connectivity. The ThinkVision X24 Pro monitor is a 24-inch 1920 by 1080 pixel thin-bezel display that uses an IPS LCD panel. The ThinkVision X1 is a 27-inch 3840 by 2160 pixel thin-bezel display that uses a 10-bit panel with 99% coverage of the sRGB color gamut. The X24 includes a wireless charging base for mobile phones. The X1 is the first monitor to receive the TUV Eye-Comfort certification. Both monitors have HDMI 2.0 ports, support charging laptops, mobile phones, and other devices, and have Intel RealSense 3D cameras in order to support facial recognition. Both displays have dual-array microphones and 3-watt stereo speakers.
IdeaPad
The IdeaPad line of consumer-oriented laptops was introduced in January 2008. The IdeaPad is the result of Lenovo's own research and development; unlike the ThinkPad line, its design and branding were not inherited from IBM.
The IdeaPad's design language differs markedly from the ThinkPad and has a more consumer-focused look and feel.
On 21 September 2016, Lenovo confirmed that its Yoga series is not meant to be compatible with Linux operating systems, that it knows it is impossible to install Linux on some models, and that Linux is not supported. This came in the wake of media coverage of problems users had while trying to install Ubuntu on several Yoga models, including the 900 ISK2, 900 ISK For Business, 900S, and 710. The problems were traced back to Lenovo disabling and removing support for the AHCI storage mode for the device's solid-state drive in the computer's BIOS, in favor of a RAID mode supported only by the Windows 10 drivers that ship with the system. Lenovo has since released alternative firmware that restores AHCI mode on the drive controller, allowing installation of Linux operating systems.
IdeaCentre
All IdeaCentres are all-in-one machines, combining processor and monitor into a single unit. The desktops were described by HotHardware as being "uniquely designed". The first IdeaCentre desktop, the IdeaCentre K210, was announced by Lenovo on 30 June 2008. While the IdeaCentre line consists only of desktops, it shares design elements and features with the IdeaPad line. One such feature was Veriface facial recognition technology.
At CES 2011, Lenovo announced the launch of four IdeaCentre desktops: the A320, B520, B320, and C205. In the autumn of 2012, the firm introduced the more powerful IdeaCentre A720, with a 27-inch touchscreen display and running Windows 8. With a TV tuner and HDMI in, the A720 can also serve as a multimedia hub or home theater PC.
In 2013, Lenovo added a table computer to the IdeaCentre line. The Lenovo IdeaCentre Horizon, introduced at the 2013 Consumer Electronics Show, is a 27-inch touchscreen computer designed to lie flat for simultaneous use by multiple people. Thanks to its use of Windows 8, the Horizon can also serve as a desktop computer when set upright.
Legion and LOQ
Legion is a series of laptops and tablets from Lenovo targeting gaming performance. The first Legion-brand laptops, the Legion Y520 and the Legion Y720, were revealed at CES 2017. On 6 June 2017, a high-performance model, the Legion Y920, equipped with Intel's seventh-generation quad-core i7-7820HK and Nvidia GTX 1070 discrete graphics, was launched.
At E3 2018, Lenovo announced three new laptops with redesigned chassis: the Y530, Y730 and Y7000.
In 2020, Lenovo launched the Legion 3, 5, and 7 lines, with the Legion 7 as the highest-specification model in the series.
In 2021, Lenovo launched the Legion 5 Pro with AMD Ryzen 5000 series CPUs and Nvidia GeForce RTX 30 series GPUs.
In March 2023, Lenovo launched the LOQ gaming sub-brand which is aimed towards budget and new-to-gaming markets.
Smartphones
As of January 2013, Lenovo only manufactured phones that use the Android operating system from Google. Numerous press reports indicated that Lenovo planned to release a phone running Windows Phone 8. According to J. D. Howard, a vice president at Lenovo's mobile division, the company would release a Windows Phone product if there were market demand.
Lenovo has implemented an aggressive strategy to replace Samsung Electronics as the top smartphone vendor in mainland China. It has spent $793.5 million in Wuhan to build a plant that can produce 30 to 40 million phones per year. Data from Analysys International shows that Lenovo experienced considerable growth in smartphone sales in China during 2012. Specifically, its market share increased to 14.2% in the third quarter of 2012, up from 4.8% in the same quarter of 2011. IDC analysts said that Lenovo's success is due to its "aggressive ramping-up and improvements in channel partnerships". Analysys International analyst Wang Ying wrote, "Lenovo possesses an obvious advantage over rivals in terms of sales channels." The company's CEO, Yang Yuanqing, said, "Lenovo does not want to be the second player ... we want to be the best. Lenovo has the confidence to outperform Samsung and Apple, at least in the Chinese market."
According to IHS iSuppli, Lenovo was a top-three smartphone maker in China with a 16.5% market share in the first quarter of 2012. According to a May report released by IDC, Lenovo ranked fourth in the global tablet market by volume. As of November 2012, Lenovo was the second largest seller of mobile phones in China when measured by volume.
In May 2013, Lenovo CEO Yang Yuanqing indicated that the company had aimed to release smartphones in the United States within the next year. Later in October, Lenovo expressed interest in acquiring the Canadian smartphone maker BlackBerry Limited. However, its attempt was reportedly blocked by the Government of Canada, citing security concerns due to the use of BlackBerry devices by prominent members of the government. An official stated that "we have been pretty consistent that the message is Canada is open to foreign investment and investment from China in particular but not at the cost of compromising national security".
In January 2014, Lenovo announced a proposed deal to acquire Motorola Mobility to bolster its plans for the U.S. market. Microsoft officially announced that Lenovo had become the hardware partner of Windows Phone platform at the Mobile World Congress 2014. In January 2016, Lenovo announced at CES that the company would be producing the first Tango phone.
Lenovo plus Motorola was the third largest producer of smartphones by volume in the world between 2011 and 2014. Since Lenovo's acquisition of Motorola Mobility, the combined global market share of Lenovo plus Motorola has fallen from 7.2% in 2014 to 3.9% in the third quarter of 2016. A number of factors have been cited as causes of the reduced demand: Lenovo relied heavily on carriers to sell its phones; its phones lacked strong branding and unique features to distinguish them in the competitive Chinese market, where a weak economy and a saturated market were slowing demand; and there was a culture clash between a more hierarchical PC company and the nimbleness needed to sell rapidly evolving smartphones. In response to the weak sales, Lenovo announced in August 2015 that it would lay off 3,200 employees, mostly in its Motorola smartphone business.
In the reorganization which followed, Lenovo was uncertain how to brand its Motorola smartphones. In November 2015, members of Lenovo management made statements that Lenovo would use the Motorola brand for all its smartphones. Then, in January 2016, Lenovo announced that it would be eliminating the Motorola brand in favor of "Moto by Lenovo". The company reversed course in March 2017 and announced that the Motorola brand name would be used in all regions in future products. "In 2016, we just finished transforming ourselves," Motorola Chairman and President Aymar de Lencquesaing said in an interview, "We have clarity on how we present ourselves."
Smart televisions
In November 2011, Lenovo said it would soon unveil a Smart TV product called LeTV, expected for release in the first quarter of 2012. "The PC, communications and TV industries are currently undergoing a 'smart' transformation. In the future, users will have many smart devices and will desire an integrated experience of hardware, software and cloud services," said Liu Jun, president of Lenovo's mobile-Internet and digital-home-business division. In June 2013 Lenovo announced a partnership with Sharp to produce smart televisions. In March 2014, Lenovo announced that it projected smart television sales surpassing one million units for 2014. The same month Lenovo released its flagship S9 Smart TV.
Wearables
Rumors that Lenovo was developing a wearable device were confirmed in October 2014 after the company submitted a regulatory filing to the Federal Communications Commission. The device, branded a "Smartband", has a battery life of seven days. It has an optical heart-rate monitor and can be used to track distance, time spent running, and calories burned; it can also notify the user of incoming calls and texts, and can unlock computers without the use of a password. The Smartband went on sale in October 2014, with Lenovo offering the device on its website without a formal product announcement.
IoT / Smart Home
In 2015, Lenovo launched a strategic cooperation with IngDan, a subsidiary of the Chinese electronics e-commerce company Cogobuy Group, to enter the intelligent-hardware sector. Lenovo wanted to procure high-tech hardware in the then newly emerging Internet of Things (IoT) economy and formed a strategic partnership with Cogobuy, from which it had previously bought mainly IC components. Lenovo used Cogobuy's supply chain to procure consumer devices and bridge gaps in its own hardware and software development. At IFA 2018, Lenovo launched several home automation products.
Lenovo Connect
At the Mobile World Congress in 2016, Lenovo introduced Lenovo Connect, a wireless roaming service. This service works across devices, networks, and borders for customers in China and EMEA (Europe, the Middle East and Africa). Lenovo Connect eliminates the need to buy new SIM cards when crossing borders. Lenovo Connect started service for phones and select ThinkPad laptops in China in February 2016.
Operations
Lenovo has operations in over 60 countries, and sells its products in around 180 countries. Lenovo's principal facilities are in Beijing, Singapore, and Morrisville, North Carolina, United States, with research centers in Beijing, Singapore, Morrisville, Shanghai, Shenzhen, Xiamen, Chengdu, Nanjing, Wuhan and Yamato (Kanagawa Prefecture, Japan). Lenovo operates manufacturing facilities in Chengdu and Hefei in China, and in Japan. A global flagship opened in Beijing in February 2013.
Lenovo's manufacturing operations are a departure from the usual industry practice of outsourcing to contract manufacturers. Lenovo instead focuses on vertical integration in order to avoid excessive reliance on original equipment manufacturers and to keep down costs. Speaking on this topic, Yang Yuanqing said, "Selling PCs is like selling fresh fruit. The speed of innovation is very fast, so you must know how to keep up with the pace, control inventory, to match supply with demand and handle very fast turnover." Lenovo benefited from its vertical integration after flooding affected hard-drive manufacturers in Thailand in 2011, as the company could continue manufacturing operations by shifting production towards products for which hard drives were still available.
Lenovo began to emphasize vertical integration after a meeting in 2009 in which CEO Yang Yuanqing and the head of Lenovo's supply chain analyzed the costs versus the benefits of in-house manufacturing, and decided to make at least 50% of Lenovo's manufacturing in-house. Lenovo Chief Technology Officer George He said that vertical integration is having an important role in product development. He stated that if you look at the industry trends, most innovations for PCs, smartphones, tablets and smart TVs "are related to innovation of key components—display, battery and storage. Differentiation of key parts is so important. So we started investing more ... and working very closely with key parts suppliers." Previously, lack of integration due to numerous foreign acquisitions and an excessive number of "key performance indicators" (KPIs) was making Lenovo's expansion expensive and creating unacceptably slow delivery times to end customers. Lenovo responded by reducing the number of KPIs from 150 to 5, offering intensive training to managers, and working to create a global Lenovo culture. Lenovo also doubled down on vertical integration and manufacturing near target markets in order to cut costs at a time when its competitors were making increased use of outsourcing and offshoring. By 2013, Lenovo ranked 20th on Gartner's list of top 50 supply chains, whereas in 2010 the company was unranked.
In 2012, Lenovo partially moved production of its ThinkPad line of computers to Japan, where ThinkPads are produced by NEC in Yamagata Prefecture. The president of Lenovo Japan said, "As a Japanese, I am glad to see the return to domestic production and the goal is to realize full-scale production as this will improve our image and make the products more acceptable to Japanese customers."
In October 2012, Lenovo announced that it would start assembling computers in Whitsett, North Carolina. Production of desktop and laptop computers, including the ThinkPad Helix, began in January 2013, with 115 workers employed at the facility. Lenovo has been in Whitsett since 2008, where it also has centers for logistics, customer service, and return processing.
In 2015, Lenovo and Hong Kong Cyberport Management Company Limited, a government-sponsored business park for technology firms, reached a deal to "jointly build a cloud service and product research and development center". Lenovo's Asia Pacific data center will also be housed in Cyberport.
Lenovo assembles smartphones in Chennai, India through a contract manufacturing agreement with Flex. In November 2015, Lenovo announced that it would start manufacturing computers in Pondicherry.
Accusations of slave labor by supplier
In August 2020, The Intercept reported that Lenovo had imported about 258,000 laptops from the Chinese manufacturer Hefei Bitland Information Technology, one of several companies accused by the Australian Strategic Policy Institute of using Uyghur forced labor. In July 2020, the United States Commerce Department added 11 companies implicated in human rights abuses, including Hefei Bitland, to the Entity List. Lenovo took some shipments out of distribution, but others were distributed to consumers.
In late July, Lenovo informed its customers it had stopped manufacturing with Bitland and was moving production of related devices to other suppliers.
Corporate affairs
Headquarters
Alongside Beijing, the company has operational centres in Lorong Chuan, Singapore, and Morrisville, North Carolina (near Raleigh in the Research Triangle metropolitan area) in the United States. As of October 2012, the Morrisville facility has about 2,000 employees. Lenovo identifies its facilities in Beijing, Singapore, and Morrisville as its "key location addresses", where its principal operations occur. The company's registered office is on the 23rd floor of the Lincoln House building of the Taikoo Place in Quarry Bay, Hong Kong.
Previously the company's U.S. headquarters were in Purchase, Harrison, New York. About 70 people worked there. In 2006, Lenovo announced that it was consolidating its U.S. headquarters, a logistics facility in Boulder, Colorado, and a call center in Atlanta, to a new facility in Morrisville. The company received offers of over $11 million in incentive funds from the local Morrisville, North Carolina, area and from the State of North Carolina on the condition that the company employs about 2,200 people. In early 2016, Lenovo carried out a comprehensive restructuring of its business units.
Financials and market share
In the third quarter of 2020, Lenovo commanded a leading market share of 25.7 percent of all PCs sold worldwide.
In March 2013, Lenovo was included as a constituent stock in the Hang Seng Index. Lenovo replaced the unprofitable Aluminum Corporation of China Limited, a state-owned enterprise, on the list of 50 key companies on the Hong Kong stock exchange that constitute the Hang Seng Index. The inclusion of Lenovo and Tencent, China's largest internet firm, significantly increased the weight of the technology sector on the index. Being added to the Hang Seng Index was a significant boon for Lenovo and its shareholders as it widened the pool of investors willing to purchase Lenovo's stock. For instance, index funds pegged to the Hang Seng and pension funds that consider index inclusion now have the opportunity to invest in Lenovo. In November 2013 Lenovo reported that they had achieved double-digit market share in the United States for the first time.
Ownership
In 2009, China Oceanwide Holdings Group, a private investment company based in Beijing, bought 29% of Legend Holdings, the parent company of Lenovo, for ¥2.76 billion. After the purchase, 65% of Lenovo stock was held by the general public, 29% by Legend Holdings, 5.8% by Yang Yuanqing, and 0.2% by other directors.
Responding to claims that Lenovo is a state-owned enterprise, CEO Yang Yuanqing said, "Our company is a 100% market oriented company. Some people have said we are a state-owned enterprise. It's 100% not true. In 1984 the Chinese Academy of Sciences only invested $25,000 in our company. The purpose of the Chinese Academy of Sciences to invest in this company was that they wanted to commercialize their research results. The Chinese Academy of Sciences is a pure research entity in China, owned by the government. From this point, you could say we're different from state-owned enterprises. Secondly, after this investment, this company is run totally by the founders and management team. The government has never been involved in our daily operation, in important decisions, strategic direction, nomination of the CEO and top executives and financial management. Everything is done by our management team."
As of 2014, the Chinese Academy of Sciences owned 11.7% of Lenovo and IBM owned 37.8%.
In early 2006, the U.S. State Department was harshly criticized for purchasing 16,000 computers from Lenovo. Critics argued that Lenovo was controlled by the Chinese government and a potential vehicle for espionage against the United States. Yang spoke out forcefully and publicly to defend Lenovo. He said, "We are not a government-controlled company." He pointed out that Lenovo pioneered China's transition to a market economy and that in the early 1990s had fought and beaten four state-owned enterprises that dominated the Chinese computer market. Those firms had the full backing of the state while Lenovo received no special treatment. The State Department deal went through. Yang worried that fears about Lenovo's supposed connections to the Chinese government would be an ongoing issue in the United States. Yang worked to ease worries by communicating directly with Congress.
Yang dramatically increased his ownership stake by acquiring 797 million shares in 2011. As of June 2011, Yang owned an 8 per cent stake in Lenovo. He previously owned only 70 million shares. In a statement, Yang said, "While the transaction is a personal financial matter, I want to be very clear that my decision to make this investment is based on my strong belief in the company's very bright future. Our culture is built on commitment and ownership – we do what we say, and we own what we do. My decision to increase my holdings represents my steadfast belief in these principles."
Corporate culture
Lenovo's senior executives rotate between the three head offices at Beijing, Singapore, and Morrisville, as well as Lenovo's research and development center in Yamato, Japan.
Leadership
Yang Yuanqing
Yang Yuanqing is the chairman and chief executive officer of Lenovo. One of his major achievements was leading Lenovo to become the best-selling personal computer brand in China since 1997. In 2001, Bloomberg Businessweek named him one of Asia's rising stars in business. Yang was president and CEO of Lenovo until 2004, when Lenovo closed its acquisition of IBM's PC division, after which Yang was succeeded as Lenovo CEO by IBM's Stephen M. Ward Jr. Ward was succeeded by William Amelio on 20 December 2005. In February 2009, Yang replaced Amelio as CEO and has served in that capacity ever since. Yang was chairman of Lenovo's board from 2004 to 2008, and returned as chairman in 2012 alongside his role as CEO.
In 2012, Yang received a $3 million bonus as a reward for record profits, which he in turn redistributed to about 10,000 of Lenovo's employees. According to Lenovo spokesman Jeffrey Shafer, Yang felt that it would be the right thing to "redirect [the money] to the employees as a real tangible gesture for what they [had] done." Shafer also said that Yang, who owns about eight per cent of Lenovo's stock, "felt that he was rewarded well simply as the owner of the company". The bonuses were mostly distributed among staff working in positions such as production and reception; the average payout was almost equivalent to a month's salary for an average worker in China. Yang made a similar gift again in 2013.
Lenovo's annual report detailed Yang's earnings, including bonuses, for the fiscal year that ended in March 2012.
In 2013, Barron's named Yang one of the "World's Best CEOs".
Liu Chuanzhi
Liu Chuanzhi is the founder and former chairman of Lenovo. Liu was trained as an engineer at a military college and later went on to work at the Chinese Academy of Sciences. Like many young people during the Cultural Revolution, Liu was denounced and sent to the countryside where he worked as a laborer on a rice farm. Liu claims Hewlett-Packard as a key source of inspiration. In an interview with The Economist he stated that "Our earliest and best teacher was Hewlett-Packard." For more than ten years, Lenovo was Hewlett-Packard's distributor in China. In reference to Lenovo's later acquisition of IBM's personal computer unit Liu said, "I remember the first time I took part in a meeting of IBM agents. I was wearing an old business suit of my father's and I sat in the back row. Even in my dreams, I never imagined that one day we could buy the IBM PC business. It was unthinkable. Impossible."
Board of directors
In early 2013, Lenovo announced the addition of Yahoo! founder Jerry Yang to its board. Lenovo's CEO Yang Yuanqing said, "Jerry's appointment as an observer to our board furthers Lenovo's reputation as a transparent international company." Just prior to the appointment of Jerry Yang, Tudor Brown, co-founder of the British semiconductor design firm ARM, was also appointed to Lenovo's board. Speaking of both men, Yang Yuanqing said, "We believe that they will add a great deal to our strategic thinking, long-term direction and, ultimately, our ability to achieve our aspirations in the PC plus era."
Marketing and sponsorships
In 2009, Lenovo became the first personal computer manufacturer to divide countries into emerging markets and mature markets, developing a different set of strategies for each category; its competitors have since widely adopted the same approach. In 2012, Lenovo made a major effort to expand its market share in developing economies such as Brazil and India through acquisitions and increased budgets for marketing and advertising.
Celebrity sponsorships and endorsements
In October 2013, Lenovo announced that it had hired American actor Ashton Kutcher as a product engineer and spokesman. David Roman, Lenovo's chief marketing officer, said, "His partnership goes beyond traditional bounds by deeply integrating him into our organization as a product engineer. Ashton will help us break new ground by challenging assumptions, bringing a new perspective and contributing his technical expertise to Yoga Tablet and other devices." Kobe Bryant became an official ambassador for Lenovo smartphones in China and Southeast Asia in early 2013. Bryant appeared in a social campaign titled "The Everyday Kobe Challenge" for the launch of Lenovo IdeaPhone K900 in Malaysia, Thailand, Indonesia and the Philippines in the same year.
Sporting sponsorship
Lenovo was an official computer sponsor of the 2006 Winter Olympics in Turin, Italy, and the 2008 Summer Olympics in Beijing. When asked about Lenovo's brand Yang Yuanqing said, "The Beijing Olympics were very good for brand awareness in countries like the US and Argentina, but not good enough." The NFL has been a Lenovo customer since 2007. In July 2012, Lenovo and the National Football League (NFL) announced that Lenovo had become the NFL's "Official Laptop, Desktop and Workstation Sponsor." Lenovo said that this was its largest sponsorship deal ever in the United States. NFL stars Jerry Rice, DeAngelo Williams, and Torry Holt were on hand for the announcement and a celebration with 1,500 Lenovo employees. Lenovo's sponsorship will last at least three years.
Lenovo has been a technology partner of Ducati Corse in Grand Prix motorcycle racing since 2018, and for the 2021 MotoGP season it became the title sponsor of the Bologna-based team.
Lenovo is also an official partner of the NHL's Carolina Hurricanes, who play in nearby Raleigh, North Carolina. In 2024, Lenovo bought the naming rights to their arena, renaming it Lenovo Center.
In 2019, Lenovo and FC Internazionale signed a multi-year sponsorship agreement making Lenovo the Global Technology Partner of the Nerazzurri. In May 2021, Lenovo and Motorola Mobility celebrated Inter winning their 19th Scudetto with a limited edition of the Motorola Razr, fully customized and produced in a numbered run of 2021 pieces. With the launch of Inter's home shirt for the 2021–22 season in July 2021, Lenovo was introduced as a sponsor on the back of the shirt. In October 2024, Lenovo was named the official technology partner of FIFA.
Television, internet, and other media
Lenovo used a short film entitled The Pursuit in its "For Those Who Do" campaign launched in 2011. The film depicted a mysterious young woman using the IdeaPad Yoga 13 to stay one step ahead of her evil pursuers. Martin Campbell, who previously worked on action movies and James Bond films such as GoldenEye and the remake of Casino Royale, shot the film. Lenovo was the first Chinese company to make use of such marketing techniques.
In May 2015, Lenovo hosted its first ever "Tech World" conference in Beijing. ZUK Mobile, a separate company formed by Lenovo in 2014, announced several products at Tech World. These included slim power banks, 3D printers that can print food such as chocolate, an outdoor sound box, and a Wi-Fi based control system for home automation.
China
In its home market China, Lenovo has a vast distribution network designed to make sure that there is at least one shop selling Lenovo computers within 50 kilometers of nearly all consumers. Lenovo has also developed close relationships with its Chinese distributors, who are granted exclusive territories and only carry Lenovo products.
As of July 2013, Lenovo believed that urbanization initiatives pushed by premier Li Keqiang would allow it to sustain sales growth in China for the foreseeable future. Speaking at Lenovo's annual general meeting in Hong Kong in 2013, Yang Yuanqing said: "I believe urbanisation will help us further increase the overall [domestic] PC market." Yang also stressed the opportunity presented by China's relatively low penetration rate of personal computers. Lenovo previously benefited from the Chinese government's rural subsidies, part of a wider economic stimulus initiative designed to increase purchases of appliances and electronics. That program, which Lenovo joined in 2004, ended in 2011. Lenovo enjoys consistent price premiums over its traditional competitors in rural markets and a stronger local sales and service presence.
India
Lenovo has gained significant market share in India through bulk orders to large companies and government agencies. For example, the government of Tamil Nadu ordered a million ThinkPads from IBM/Lenovo in 2012 and single-handedly made the firm a market leader. Lenovo distributes most of the personal computers it sells in India through five national distributors such as Ingram Micro and Redington.
Given that most smartphones and tablets are sold to individuals, Lenovo is pursuing a different strategy for these devices, making use of many small state-centric distributors. Amar Babu, Lenovo's managing director for India, said, "To reach out to small towns and the hinterland, we have tied up with 40 regional distributors. We want our distributors to be exclusive to us. We will, in turn, ensure they have exclusive rights to distribute Lenovo products in their catchment area." As of 2013, Lenovo had about 6,000 retailers selling smartphones and tablets in India. In February 2013, Lenovo established a relationship with Reliance Communications to sell smartphones. The smartphones carried by Reliance have dual-SIM capability and support both GSM and CDMA. Babu claims that the relatively low penetration of smartphones in India represents an opportunity for Lenovo.
Lenovo has assembled a team of senior managers familiar with the Indian market, launched mobile phones at all price points there, and worked on branding to build market share. As of February 2014, Lenovo claims that its sales of smartphones in India have been increasing 100% per quarter while the market is only growing 15–20% over the same period. Lenovo did marketing tests of its smartphones in November 2012 in Gujarat and some southern cities, where Lenovo already had a strong presence. Lenovo's strategy has been to create awareness, maintain a broad selection of phones at all price points, and develop distribution networks. Lenovo partnered with two national distributors and over 100 local distributors. As of February 2014, more than 7,000 retail outlets in India sold Lenovo smartphones. Lenovo has also partnered with HCL in order to set up 250 service centers in 110 cities.
In India, Lenovo grants distributors exclusive territories but allows them to sell computers from other companies. Lenovo uses its close relationships with distributors to gain market intelligence and speed up product development.
Lenovo reported a year-on-year increase of about 951% in tablet sales in India for the first quarter of 2014. Canalys, a market research firm, said Lenovo took market share away from Apple and Samsung in the country.
Africa
Lenovo first started doing business in Africa in South Africa, establishing a sales office there, and then expanded to East African markets such as Kenya, Tanzania, Ethiopia, Uganda, and Rwanda. West Africa followed when Lenovo set up a Nigerian legal office and then expanded to Ghana, Zimbabwe, Mozambique and Botswana.
According to Lenovo's general manager for Africa, Graham Braum, Lenovo's strategy is to put "great emphasis on products that sell well in Africa" and roll out "products alongside different African governments' rolling out of wireless technology". Products such as the Lenovo Yoga series are popular in Africa because of their long battery life, as many areas have unreliable electrical supply. Other popular products include the Lenovo netbooks, which were introduced in 2008.
Lenovo picked Nigeria in 2013 to release its smartphone because unlike South Africa and other African countries, there is no requirement to partner with a local telecom firm to sell its phones.
In the long term, according to Braum, "Lenovo in Africa will focus on continuing to consistently supply personal computer products and allow this market to grow, while moving into new territory such as mobile and enterprise."
Singapore
Lenovo has had a presence in Singapore since as early as its founding, and with a focus on the Southeast Asia region, Singapore hosts one of Lenovo's three head offices. Registered as Lenovo (Singapore) Pte. Ltd., it is located at the New Tech Park in the Lorong Chuan district of the North-East Region of Singapore.
United States
In the United States, Lenovo began the "For Those Who Do" marketing campaign in 2010, created by the ad agency Saatchi & Saatchi. It was part of Lenovo's first-ever global branding campaign, beyond its domestic market in China. "For Those Who Do" was designed to appeal to young consumers in the 18- to 25-year-old demographic by stressing its utility to creative individuals that Lenovo's advertising refers to as "doers". One of Lenovo's operational centers is located in North Carolina, United States. Lenovo also started manufacturing products in the United States in 2012.
Goodweird
Lenovo launched a multi-year advertising campaign called "Goodweird" in the second half of 2015. Goodweird is designed to convey the idea that designs that seem strange at first often become familiar and widely accepted. The Goodweird campaign includes a video with famous images of early attempts to fly with the aid of homemade wings and a bicycle that transitions to a modern-day shot of a man soaring across mountains in a wingsuit before transitioning again to a shot of a stealth bomber. Lenovo worked with three agencies on Goodweird: London-based DLKW Lowe, We Are Social, and Blast Radius. Goodweird is part of Lenovo's wider strategy to appeal to millennials with an emphasis on design trendsetters. A portion of the funding for Goodweird is being directed to prominent YouTubers and Viners. BuzzFeed has been engaged to create relevant content.
Security and privacy incidents
Superfish
In February 2015, Lenovo became the subject of controversy for having bundled software identified as malware on some of its laptops. The software, Superfish Visual Discovery, is a web browser add-on that injects price-comparison advertising into search engine results pages. To intercept HTTPS-encrypted communications, the software also installed a self-signed public key certificate. Because the same private key was used across all installations of the software, users were left vulnerable to security exploits utilizing the key once it was compromised. Lenovo earned relatively little from its deal with Superfish. In 2017, Lenovo agreed to a settlement with the US Federal Trade Commission and announced an apology to its customers and stockholders.
The head of Superfish responded to security concerns by saying the vulnerability was "inadvertently" introduced by Komodia, which built the application. In response to the criticism, Lenovo said that it would cease further distribution and use of the Superfish software, and offered affected customers free six-month subscriptions to the McAfee LiveSafe software. Lenovo also promised to reduce the amount of software bloat it bundles with its Windows 10 devices, promising to include only Lenovo software, security software, drivers, and "certain applications customarily expected by users". Salon tech writer David Auerbach compared the Superfish incident to the Sony DRM rootkit scandal, and argued that "installing Superfish is one of the most irresponsible mistakes an established tech company has ever made."
Lenovo Service Engine
From October 2014 through June 2015, the UEFI firmware on certain Lenovo models contained software known as "Lenovo Service Engine", which Lenovo says automatically sent non-identifiable system information to Lenovo the first time Windows is connected to the internet, and, on laptops, automatically installed the Lenovo OneKey Optimizer program (considered software bloat) as well. This process occurred even on clean installations of Windows. The program was installed automatically using a then-new Windows 8 feature, Windows Platform Binary Table, which allows executable files to be stored within UEFI firmware for execution on startup and is meant to "allow critical software to persist even when the operating system has changed or been reinstalled in a 'clean' configuration"; specifically, anti-theft security software. The software was discontinued after it was found that aspects of it had security vulnerabilities and that it did not comply with revised guidelines for appropriate usage of WPBT. On 31 July 2015, Lenovo released instructions and UEFI firmware updates meant to remove Lenovo Service Engine.
Lenovo Customer Feedback program
For a third time in 2015, criticism arose that Lenovo might have installed suspicious-looking software, this time on its commercial Think PC lines. The software was discovered by Computerworld writer Michael Horowitz, who had purchased several Think systems with the Customer Feedback program installed, which seemed to log usage data and metrics. Further analysis by Horowitz revealed, however, that the program was mostly harmless: it logged the usage only of some pre-installed Lenovo programs, not usage in general, and only if the user allowed the data to be collected. Horowitz also criticized other media outlets for quoting his original article and saying that Lenovo preinstalled spyware; he never used that term in this case and said that he did not consider the software he found to be spyware.
Lenovo Accelerator
As of June 2016, a Duo Labs report stated that Lenovo was still installing bloatware, some of which leads to security vulnerabilities as soon as the user turns on their new PC. Lenovo advised users to remove the offending app, "Lenovo Accelerator". According to Lenovo, the app, designed to "speed up the loading" of Lenovo applications, created a man-in-the-middle security vulnerability.
U.S. Marine network security breach
In February 2021, Bloomberg Businessweek reported that U.S. investigators found in 2008 that military units in Iraq were using Lenovo laptops in which the hardware had been altered. According to a testimony from the case in 2010, "A large amount of Lenovo laptops were sold to the U.S. military that had a chip encrypted on the motherboard that would record all the data that was being inputted into that laptop and send it back to China". Lenovo was unaware of the testimony and the U.S. military did not inform the company of any security concerns. A Lenovo spokesperson stated that "we have no way to assess the allegations you cite or whether security concerns may have been triggered by third-party interference."
See also
List of computer system manufacturers
List of companies of China
References
Further reading
External links
lenovo.com
Chinese brands
Chinese companies established in 1984
1994 initial public offerings
Companies based in Beijing
Companies listed on the Hong Kong Stock Exchange
Computer companies established in 1984
Computer companies of China
Computer hardware companies
Computer systems companies
Consumer electronics brands
Display technology companies
Electronics companies established in 1984
Mobile phone companies of China
Mobile phone manufacturers
Multinational companies headquartered in China
Netbook manufacturers
Supercomputing in China
Videotelephony
Zhongguancun | Lenovo | ["Technology"] | 13,949 | ["Computer hardware companies", "Computer systems companies", "Computers", "Computer systems"] |
997,260 | https://en.wikipedia.org/wiki/Specific%20angular%20momentum | In celestial mechanics, the specific relative angular momentum (often denoted $\vec{h}$ or $\mathbf{h}$) of a body is the angular momentum of that body divided by its mass. In the case of two orbiting bodies it is the vector product of their relative position and relative linear momentum, divided by the mass of the body in question.
Specific relative angular momentum plays a pivotal role in the analysis of the two-body problem, as it remains constant for a given orbit under ideal conditions. "Specific" in this context indicates angular momentum per unit mass. The SI unit for specific relative angular momentum is square meter per second.
Definition
The specific relative angular momentum is defined as the cross product of the relative position vector $\mathbf{r}$ and the relative velocity vector $\mathbf{v}$: $\mathbf{h} = \mathbf{r} \times \mathbf{v} = \frac{\mathbf{L}}{m}$
where $\mathbf{L}$ is the angular momentum vector, defined as $\mathbf{L} = \mathbf{r} \times m\mathbf{v} = m\left(\mathbf{r} \times \mathbf{v}\right)$.
The vector $\mathbf{h}$ is always perpendicular to the instantaneous osculating orbital plane, which coincides with the instantaneous perturbed orbit. It is not necessarily perpendicular to the average orbital plane over time.
Proof of constancy in the two body case
Under certain conditions, it can be proven that the specific angular momentum is constant. The conditions for this proof include:
The mass of one object is much greater than the mass of the other one ($m_1 \gg m_2$).
The coordinate system is inertial.
Each object can be treated as a spherically symmetrical point mass.
No other forces act on the system other than the gravitational force that connects the two bodies.
Proof
The proof starts with the two-body equation of motion, derived from Newton's law of universal gravitation: $\ddot{\mathbf{r}} + \frac{G m_1}{r^2} \frac{\mathbf{r}}{r} = 0$
where:
$\mathbf{r}$ is the position vector from $m_1$ to $m_2$, with scalar magnitude $r$.
$\ddot{\mathbf{r}}$ is the second time derivative of $\mathbf{r}$ (the acceleration).
$G$ is the gravitational constant.
The cross product of the position vector with the equation of motion is: $\mathbf{r} \times \ddot{\mathbf{r}} + \mathbf{r} \times \frac{G m_1}{r^2} \frac{\mathbf{r}}{r} = 0$
Because the second term vanishes ($\mathbf{r} \times \mathbf{r} = 0$): $\mathbf{r} \times \ddot{\mathbf{r}} = 0$
It can also be derived that: $\frac{d}{dt}\left(\mathbf{r} \times \dot{\mathbf{r}}\right) = \dot{\mathbf{r}} \times \dot{\mathbf{r}} + \mathbf{r} \times \ddot{\mathbf{r}} = \mathbf{r} \times \ddot{\mathbf{r}}$
Combining these two equations gives: $\frac{d}{dt}\left(\mathbf{r} \times \dot{\mathbf{r}}\right) = 0$
Since the time derivative is equal to zero, the quantity $\mathbf{r} \times \dot{\mathbf{r}}$ is constant. Using the velocity vector $\mathbf{v}$ in place of the rate of change of position, and $\mathbf{h}$ for the specific angular momentum: $\mathbf{h} = \mathbf{r} \times \mathbf{v}$ is constant.
This is different from the normal construction of angular momentum, $\mathbf{L} = \mathbf{r} \times \mathbf{p}$, because it does not include the mass of the object in question.
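The constancy of $\mathbf{r} \times \mathbf{v}$ can also be observed numerically. The sketch below is illustrative only: the gravitational parameter, initial state, and step size are assumed values, and semi-implicit Euler is used because, for a central force, it conserves angular momentum exactly (up to floating-point rounding).

```python
import numpy as np

mu = 398600.4418  # Earth's standard gravitational parameter, km^3/s^2 (assumed)

r = np.array([7000.0, 0.0, 0.0])  # initial position, km (assumed)
v = np.array([0.0, 8.0, 1.0])     # initial velocity, km/s (assumed)
dt = 1.0                          # time step, s

h0 = np.cross(r, v)               # specific angular momentum at t = 0
for _ in range(10_000):           # ~2.8 hours of semi-implicit Euler steps
    v = v - mu * r / np.linalg.norm(r) ** 3 * dt  # kick: two-body acceleration
    r = r + v * dt                                # drift
h1 = np.cross(r, v)

print(h0)  # [    0. -7000. 56000.]
print(h1)  # identical to within floating-point rounding
```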
Kepler's laws of planetary motion
Kepler's laws of planetary motion can be proved almost directly with the above relationships.
First law
The proof starts again with the equation of the two-body problem. This time the cross product is taken with the specific relative angular momentum: $\ddot{\mathbf{r}} \times \mathbf{h} = -\frac{G m_1}{r^3}\,\mathbf{r} \times \mathbf{h}$
The left hand side is equal to the derivative $\frac{d}{dt}\left(\dot{\mathbf{r}} \times \mathbf{h}\right)$, because the angular momentum is constant.
After some steps (which include using the vector triple product and defining the scalar $\dot{r}$ to be the radial velocity, as opposed to the norm of the vector $\dot{\mathbf{r}}$) the right hand side becomes: $-\frac{G m_1}{r^3}\left(\mathbf{r} \times \mathbf{h}\right) = G m_1 \frac{d}{dt}\left(\frac{\mathbf{r}}{r}\right)$
Setting these two expressions equal and integrating over time leads to (with the constant of integration $\mathbf{C}$): $\dot{\mathbf{r}} \times \mathbf{h} = G m_1 \frac{\mathbf{r}}{r} + \mathbf{C}$
Now this equation is multiplied (dot product) with $\mathbf{r}$ and rearranged: $\mathbf{r} \cdot \left(\dot{\mathbf{r}} \times \mathbf{h}\right) = h^2 = G m_1 r + r C \cos\theta$
Finally one gets the orbit equation $r = \frac{h^2 / (G m_1)}{1 + \left(C / (G m_1)\right)\cos\theta}$,
which is the equation of a conic section in polar coordinates with semi-latus rectum $p = h^2 / (G m_1)$ and eccentricity $e = C / (G m_1)$.
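As a small numeric check of the orbit equation (the gravitational parameter, angular momentum, and eccentricity below are sample values of my own choosing, not from the article):

```python
import math

mu = 398600.4418  # G*m1, km^3/s^2 (assumed, Earth)
h = 60000.0       # specific relative angular momentum, km^2/s (assumed)
e = 0.3           # eccentricity (assumed)

p = h**2 / mu           # semi-latus rectum, km
r_peri = p / (1 + e)    # radius at theta = 0 (periapsis)
r_apo = p / (1 - e)     # radius at theta = pi (apoapsis)
print(round(p), round(r_peri), round(r_apo))  # 9032, 6947, 12902 km
```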
Second law
The second law follows instantly from the expression for the absolute value of the specific relative angular momentum in polar coordinates, $h = r^2 \dot{\theta}$.
If one connects this form of the equation with the relationship $dA = \tfrac{1}{2} r^2\, d\theta$ for the area of a sector with an infinitesimally small angle (a triangle with one very small side), the equation $\frac{dA}{dt} = \frac{h}{2} = \text{constant}$ follows: equal areas are swept out in equal times.
Third law
Kepler's third law is a direct consequence of the second law. Integrating over one revolution gives the orbital period $T = \frac{2\pi a b}{h}$
for the area $\pi a b$ of an ellipse. Replacing the semi-minor axis with $b = \sqrt{a p}$ and the specific relative angular momentum with $h = \sqrt{G m_1 p}$, one gets $T = 2\pi \sqrt{\frac{a^3}{G m_1}}$.
There is thus a relationship between the semi-major axis and the orbital period of a satellite that can be reduced to a constant of the central body: $\frac{T^2}{a^3} = \frac{4\pi^2}{G m_1}$.
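A quick numeric illustration of the last relationship (the gravitational parameter of the Sun and the semi-major axis of Earth's orbit are assumed round values, not from the article):

```python
import math

mu_sun = 1.32712440018e11  # G*m1 for the Sun, km^3/s^2 (assumed)
a = 1.496e8                # semi-major axis of Earth's orbit, km (assumed)

T = 2 * math.pi * math.sqrt(a**3 / mu_sun)  # orbital period, s
print(round(T / 86400, 1))  # ~365.3 days, as expected for Earth
```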
See also
Specific orbital energy, another conserved quantity in the two-body problem.
References
Angular momentum
Astrodynamics
Orbits | Specific angular momentum | ["Physics", "Mathematics", "Engineering"] | 761 | ["Astrodynamics", "Physical quantities", "Quantity", "Aerospace engineering", "Angular momentum", "Momentum", "Moment (physics)"] |
997,387 | https://en.wikipedia.org/wiki/Specific%20orbital%20energy | In the gravitational two-body problem, the specific orbital energy (or vis-viva energy) of two orbiting bodies is the constant sum of their mutual potential energy ($\varepsilon_p$) and their kinetic energy ($\varepsilon_k$), divided by the reduced mass. According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time: $\varepsilon = \varepsilon_k + \varepsilon_p = \frac{v^2}{2} - \frac{\mu}{r} = -\frac{1}{2}\frac{\mu^2}{h^2}\left(1 - e^2\right) = -\frac{\mu}{2a}$
where
$v$ is the relative orbital speed;
$r$ is the orbital distance between the bodies;
$\mu = G(m_1 + m_2)$ is the sum of the standard gravitational parameters of the bodies;
$h$ is the specific relative angular momentum in the sense of relative angular momentum divided by the reduced mass;
$e$ is the orbital eccentricity;
$a$ is the semi-major axis.
It is typically expressed in MJ/kg (megajoules per kilogram) or km²/s² (square kilometers per square second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy.
Equation forms for different orbits
For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to: $\varepsilon = -\frac{\mu}{2a}$
where
$\mu$ is the standard gravitational parameter;
$a$ is the semi-major axis of the orbit.
For a parabolic orbit this equation simplifies to $\varepsilon = 0$.
For a hyperbolic trajectory this specific orbital energy is either given by $\varepsilon = \frac{\mu}{2a}$
or the same as for an ellipse, depending on the convention for the sign of $a$.
In this case the specific orbital energy is also referred to as characteristic energy and is equal to the excess specific energy compared to that for a parabolic orbit.
It is related to the hyperbolic excess velocity $v_\infty$ (the orbital velocity at infinity) by $2\varepsilon = C_3 = v_\infty^2$.
It is relevant for interplanetary missions.
Thus, if the orbital position vector ($\mathbf{r}$) and orbital velocity vector ($\mathbf{v}$) are known at one position, and $\mu$ is known, then the energy can be computed and from that, for any other position, the orbital speed.
Rate of change
For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is $\frac{d\varepsilon}{da} = \frac{\mu}{2a^2}$
where
$\mu$ is the standard gravitational parameter;
$a$ is the semi-major axis of the orbit.
In the case of circular orbits, this rate is one half of the gravitation at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy.
Additional energy
If the central body has radius $R$, then the additional specific energy of an elliptic orbit compared to being stationary at the surface is $-\frac{\mu}{2a} + \frac{\mu}{R} = \frac{\mu(2a - R)}{2aR}$.
The quantity $2a - R$ is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth, and $a$ just a little more than $R/2$, the additional specific energy is approximately $g(2a - R)$; this is the kinetic energy of the horizontal component of the velocity, i.e. $\tfrac{1}{2}V^2$ with $V = \sqrt{2g(2a - R)}$.
Examples
ISS
The International Space Station has an orbital period of 91.74 minutes (5,504 s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738 km.
The specific orbital energy associated with this orbit is −29.6 MJ/kg: the potential energy is −59.2 MJ/kg, and the kinetic energy 29.6 MJ/kg. Compared with the potential energy at the surface, which is −62.6 MJ/kg, the extra potential energy is 3.4 MJ/kg, and the total extra energy is 33.0 MJ/kg. The average speed is 7.7 km/s, the net delta-v to reach this orbit is 8.1 km/s (the actual delta-v is typically 1.5–2.0 km/s more for atmospheric drag and gravity drag).
The increase per meter would be 4.4 J/kg; this rate corresponds to one half of the local gravity of 8.8 m/s².
For an altitude of 100 km (radius 6,471 km):
The energy is −30.8 MJ/kg: the potential energy is −61.6 MJ/kg, and the kinetic energy 30.8 MJ/kg. Compare with the potential energy at the surface, which is −62.6 MJ/kg. The extra potential energy is 1.0 MJ/kg, the total extra energy is 31.8 MJ/kg.
The increase per meter would be 4.8 J/kg; this rate corresponds to one half of the local gravity of 9.5 m/s². The speed is 7.8 km/s, the net delta-v to reach this orbit is 8.0 km/s.
Taking into account the rotation of the Earth, the delta-v is up to 0.46 km/s less (starting at the equator and going east) or more (if going west).
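The ISS figures above can be reproduced in a few lines. This is a sketch under assumed values (Earth's gravitational parameter; the period quoted above), using the identity 1 km²/s² = 1 MJ/kg:

```python
import math

mu = 398600.4418  # Earth's standard gravitational parameter, km^3/s^2 (assumed)
T = 5504.0        # orbital period, s (from the text)

a = (mu * T**2 / (4 * math.pi**2)) ** (1 / 3)  # Kepler's third law -> km
eps = -mu / (2 * a)                            # specific orbital energy
v = math.sqrt(mu / a)                          # circular-orbit speed

print(round(a), round(eps, 1), round(v, 1))    # 6738 km, -29.6 MJ/kg, 7.7 km/s
```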
Voyager 1
For Voyager 1, with respect to the Sun:
$\mu$ = 132,712,440,018 km³⋅s⁻² is the standard gravitational parameter of the Sun
r = 17 billion kilometers
v = 17.1 km/s
Hence: $\varepsilon = \frac{v^2}{2} - \frac{\mu}{r} = \frac{(17.1\ \mathrm{km/s})^2}{2} - \frac{132{,}712{,}440{,}018\ \mathrm{km^3\,s^{-2}}}{17 \times 10^9\ \mathrm{km}} \approx 146.2 - 7.8 = 138.4\ \mathrm{km^2\,s^{-2}}$
Thus the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is given by $v_\infty = \sqrt{2\varepsilon} \approx 16.6\ \mathrm{km/s}$
However, Voyager 1 does not have enough velocity to leave the Milky Way. The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun.
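The same arithmetic for Voyager 1, using the article's round figures for $r$ and $v$ (a sketch, not a precise ephemeris):

```python
import math

mu_sun = 1.32712440018e11  # standard gravitational parameter of the Sun, km^3/s^2
r = 17e9                   # distance from the Sun, km (round figure from the text)
v = 17.1                   # heliocentric speed, km/s (round figure from the text)

eps = v**2 / 2 - mu_sun / r  # specific orbital energy, km^2/s^2
v_inf = math.sqrt(2 * eps)   # hyperbolic excess velocity, km/s
print(round(eps, 1), round(v_inf, 1))  # 138.4 km^2/s^2, 16.6 km/s
```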
Applying thrust
Assume:
a is the acceleration due to thrust (the time-rate at which delta-v is spent)
g is the gravitational field strength
v is the velocity of the rocket
Then the time-rate of change of the specific energy of the rocket is $\mathbf{v} \cdot \mathbf{a}$: an amount $\mathbf{v} \cdot (\mathbf{a} + \mathbf{g})$ for the kinetic energy and an amount $-\mathbf{v} \cdot \mathbf{g}$ for the potential energy.
The change of the specific energy of the rocket per unit change of delta-v is $\frac{\mathbf{v} \cdot \mathbf{a}}{|\mathbf{a}|}$,
which is $|\mathbf{v}|$ times the cosine of the angle between $\mathbf{v}$ and $\mathbf{a}$.
Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v, and when |v| is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag. When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis. Such a maneuver is called an Oberth maneuver or powered flyby.
When applying delta-v to decrease specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v, and again when |v| is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis.
If $\mathbf{a}$ is in the direction of $\mathbf{v}$: $\Delta\varepsilon = \int v \, d(\Delta v)$
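The payoff of burning where $|\mathbf{v}|$ is large (the Oberth effect described above) can be seen with a toy calculation; the burn size and speeds below are arbitrary assumptions:

```python
dv = 0.1  # prograde burn, km/s (assumed)

for v in (2.0, 10.0):  # speed at the moment of the burn, km/s (assumed)
    d_eps = ((v + dv) ** 2 - v ** 2) / 2  # exact change in specific kinetic energy
    print(f"v = {v} km/s -> delta-eps = {d_eps:.3f} km^2/s^2")
# 0.205 at 2 km/s versus 1.005 at 10 km/s: ~5x more energy for the same delta-v
```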
See also
Specific energy change of rockets
Characteristic energy C3 (Double the specific orbital energy)
References
Astrodynamics
Orbits
Physical quantities
Mass-specific quantities | Specific orbital energy | ["Physics", "Mathematics", "Engineering"] | 1,576 | ["Physical phenomena", "Astrodynamics", "Physical quantities", "Quantity", "Mass", "Intensive quantities", "Aerospace engineering", "Mass-specific quantities", "Physical properties", "Matter"] |
997,397 | https://en.wikipedia.org/wiki/Weldon%20process | The Weldon process is a process developed in 1866 by Walter Weldon for recovering manganese dioxide for re-use in chlorine manufacture. Commercial operations started at the Gamble works in St. Helens in 1869. The process is described in considerable detail in the book The Alkali Industry by J. R. Partington, D.Sc.
The common method of manufacturing chlorine at the time was to react manganese dioxide (and related oxides) with hydrochloric acid to give chlorine:
MnO2 + 4 HCl → MnCl2 + Cl2 + 2 H2O
Weldon's contribution was to develop a process to recycle the manganese. The waste manganese(II) chloride solution is treated with lime, steam and oxygen, producing calcium manganite(IV):
2 MnCl2 + 3 Ca(OH)2 + O2 → CaO·2MnO2 + 3 H2O + 2 CaCl2
The resulting calcium manganite can be reacted with HCl as in related processes:
CaO·2MnO2 + 10 HCl → CaCl2 + 2 MnCl2 + 2 Cl2 + 5 H2O
The manganese(II) chloride can be recycled, while the calcium chloride is a waste byproduct.
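Taking the two equations above together, each pass of the cycle yields 2 Cl2 while consuming 3 Ca(OH)2 and discarding 3 CaCl2 (one from the chlorine step, two from the regeneration step). A minimal mass-balance sketch (my own illustration, not from the article; molar masses rounded):

```python
M_CL2, M_LIME, M_CACL2 = 70.90, 74.09, 110.98  # g/mol

def per_kg_cl2(n_mol: float, molar_mass: float) -> float:
    """Mass (kg) of a species per kg of Cl2, given its moles per 2 mol Cl2."""
    return n_mol * molar_mass / (2 * M_CL2)

print(round(per_kg_cl2(3, M_LIME), 2))   # ~1.57 kg lime consumed per kg Cl2
print(round(per_kg_cl2(3, M_CACL2), 2))  # ~2.35 kg CaCl2 waste per kg Cl2
```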
The Weldon process was first replaced by the Deacon process and later by the chloralkali process.
References
Further reading
Chemical processes
Chlorine
| Weldon process | ["Chemistry"] | 314 | ["Chemical process engineering", "Chemical processes", "nan"] |
997,409 | https://en.wikipedia.org/wiki/Deacon%20process | The Deacon process, invented by Henry Deacon, is a process used during the manufacture of alkalis (the initial end product was sodium carbonate) by the Leblanc process. Hydrogen chloride gas was converted to chlorine gas, which was then used to manufacture a commercially valuable bleaching powder, and at the same time the emission of waste hydrochloric acid was curtailed. To some extent this technically sophisticated process superseded the earlier manganese dioxide process.
Process
The process was based on the oxidation of hydrogen chloride:
4 HCl + O2 → 2 Cl2 + 2 H2O
The reaction takes place at about 400 to 450 °C in the presence of a variety of catalysts, including copper chloride (CuCl2). Three companies developed commercial processes for producing chlorine based on the Deacon reaction:
The Kel-Chlor process developed by the M. W. Kellogg Company, which utilizes nitrosylsulfuric acid.
The Shell-Chlor process developed by the Shell Oil Company, which utilizes copper catalysts.
The MT-Chlor process developed by the Mitsui Toatsu Company, which utilizes chromium-based catalysts.
The Deacon process is now outdated technology. Most chlorine today is produced by using electrolytic processes. New catalysts based on ruthenium(IV) oxide have been developed by Sumitomo.
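For a sense of the stoichiometry, the theoretical chlorine yield per unit of hydrogen chloride follows directly from 4 HCl + O2 → 2 Cl2 + 2 H2O. A minimal sketch (my own illustration, not from the article; molar masses rounded):

```python
M_HCL, M_CL2 = 36.46, 70.90  # g/mol

# 4 mol HCl give 2 mol Cl2 at full conversion
kg_cl2_per_kg_hcl = (2 * M_CL2) / (4 * M_HCL)
print(round(kg_cl2_per_kg_hcl, 3))  # ~0.972 kg Cl2 per kg HCl
```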
Leblanc-Deacon process
The Leblanc-Deacon process is a modification of the Leblanc process. The Leblanc process was notoriously environmentally unfriendly, and resulted in some of the first air and water pollution acts. In 1874, Henry Deacon devised a process to reduce HCl emissions as mandated by the Alkali Act. In this process, hydrogen chloride is oxidized by oxygen over a copper chloride catalyst, resulting in the production of chlorine. This was widely used in the paper and textile industries as a bleaching agent; as a result, sodium carbonate was no longer the primary product of these plants and was henceforth sold at a loss.
See also
Alkali act
Leblanc process
Hydrochloric acid
Chlorine production
References
External links
http://www.che.lsu.edu/COURSES/4205/2000/Lim/paper.htm
http://www.electrochem.org/dl/interface/fal/fal98/IF8-98-Pages32-36.pdf
Deacon chemistry revisited: new catalysts for chlorine recycling. ETH (2013). https://dx.doi.org/10.3929/ethz-a-010055281
Chemical processes
Inorganic reactions
Chlorine | Deacon process | ["Chemistry"] | 560 | ["Chemical process engineering", "Chemical processes", "Inorganic reactions", "nan"] |
997,416 | https://en.wikipedia.org/wiki/FTP%20bounce%20attack | FTP bounce attack is an exploit of the FTP protocol whereby an attacker is able to use the PORT command to request access to ports indirectly through the use of the victim machine, which serves as a proxy for the request, similar to an open mail relay using SMTP.
This technique can be used to port scan hosts discreetly, and potentially to bypass a network's access-control list to reach specific ports that the attacker cannot access through a direct connection, for example with the nmap port scanner.
Nearly all modern FTP server programs are configured by default to refuse commands that would connect to any host but the originating host, thwarting FTP bounce attacks.
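To illustrate the mechanism, the PORT argument encodes an arbitrary IP address and port as h1,h2,h3,h4,p1,p2, where the port number is p1×256 + p2. The sketch below is hypothetical (the server name, credentials, and target address are placeholders; as noted above, virtually all modern servers will reject a PORT command naming an address other than the client's):

```python
import socket

def port_argument(ip: str, port: int) -> str:
    """Encode ip:port as the FTP PORT argument h1,h2,h3,h4,p1,p2."""
    return ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

def cmd(sock: socket.socket, line: str) -> bytes:
    sock.sendall(line.encode() + b"\r\n")
    return sock.recv(1024)  # read the server's reply code

s = socket.create_connection(("ftp.example.com", 21))  # hypothetical server
s.recv(1024)  # consume the welcome banner
cmd(s, "USER anonymous")
cmd(s, "PASS guest@example.com")
# Ask the server to open its data connection to a third host and port; the
# reply to the subsequent LIST hints whether 203.0.113.5 port 25 is reachable.
cmd(s, "PORT " + port_argument("203.0.113.5", 25))
print(cmd(s, "LIST"))
```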
See also
Confused deputy problem
References
External links
CERT Advisory on FTP Bounce Attack
CERT Article on FTP Bounce Attack
Original posting describing the attack
File Transfer Protocol
Computer network security | FTP bounce attack | ["Technology", "Engineering"] | 171 | ["Cybersecurity engineering", "Computer network stubs", "Computer networks engineering", "Computer network security", "Computing stubs"] |
997,476 | https://en.wikipedia.org/wiki/Night%20sky | The night sky is the nighttime appearance of celestial objects like stars, planets, and the Moon, which are visible in a clear sky between sunset and sunrise, when the Sun is below the horizon.
Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. Aurorae light up the skies above the polar circles. Occasionally, a large coronal mass ejection from the Sun or simply high levels of solar wind may extend the phenomenon toward the Equator.
The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the status of the night sky as a calendar to determine when to plant crops. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities.
The history of astrology has generally been based on the belief that relationships between heavenly bodies influence or explain events on Earth. The scientific study of objects in the night sky takes place in the context of observational astronomy.
Visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of sky brightness. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Optical filters and modifications to light fixtures can help to alleviate this problem, but for optimal views, both professional and amateur astronomers seek locations far from urban skyglow.
Brightness
The fact that the sky is not completely dark at night, even in the absence of moonlight and city lights, can be easily observed, since if the sky were absolutely dark, one would not be able to see the silhouette of an object against the sky.
The intensity of the sky brightness varies greatly over the day, and the primary cause differs as well. During daytime, when the Sun is above the horizon, direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. In twilight, the transition periods between day and night, the situation is more complicated and a further differentiation is required. Twilight is divided into three segments of 6° according to how far the Sun is below the horizon.
After sunset, civil twilight sets in, ending when the Sun drops more than 6° below the horizon. This is followed by nautical twilight, when the Sun is between 6° and 12° below the horizon, after which comes astronomical twilight, defined as the period from −12° to −18°. When the Sun drops more than 18° below the horizon, the sky generally attains its minimum brightness.
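As a minimal illustration, the twilight bands described above can be encoded directly as thresholds on the Sun's elevation. This is a sketch only; computing the actual solar elevation from time and location requires a proper astronomical library.

```python
# Classify the sky phase from solar elevation in degrees, using the 6°
# twilight bands described above.
def sky_phase(sun_elevation_deg: float) -> str:
    if sun_elevation_deg > 0:
        return "day"
    if sun_elevation_deg > -6:
        return "civil twilight"
    if sun_elevation_deg > -12:
        return "nautical twilight"
    if sun_elevation_deg > -18:
        return "astronomical twilight"
    return "night"  # sky near its minimum brightness

print(sky_phase(-10))  # nautical twilight
```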
Several sources can be identified as the source of the intrinsic brightness of the sky, namely airglow, indirect scattering of sunlight, scattering of starlight, and artificial light pollution.
Visual presentation
Depending on local sky cloud cover, pollution, humidity, and light pollution levels, the stars visible to the unaided naked eye appear as hundreds, thousands or tens of thousands of white pinpoints of light in an otherwise near black sky together with some faint nebulae or clouds of light. In ancient times the stars were often assumed to be equidistant on a dome above the Earth because they are much too far away for stereopsis to offer any depth cues. Visible stars range in color from blue (hot) to red (cold), but with such small points of faint light, most look white because they stimulate the rod cells without triggering the cone cells. If it is particularly dark and a particularly faint celestial object is of interest, averted vision may be helpful.
The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field.
Because stargazing is best done from a dark place away from city lights, dark adaptation is important to achieve and maintain. It takes several minutes for eyes to adjust to the darkness necessary for seeing the most stars, and surroundings on the ground are hard to discern. A red flashlight can be used to illuminate star charts and telescope parts without undoing the dark adaptation.
Constellations
Star charts are produced to aid stargazers in identifying constellations and other celestial objects. Constellations are prominent because their stars tend to be brighter than other nearby stars in the sky. Different cultures have created different groupings of constellations based on differing interpretations of the more-or-less random patterns of dots in the sky. Constellations were identified without regard to distance to each star, but instead as if they were all dots on a dome.
Orion is among the most prominent and recognizable constellations. The Big Dipper (which has a wide variety of other names) is helpful for navigation in the northern hemisphere because it points to Polaris, the north star.
The pole stars are special because they are approximately in line with the Earth's axis of rotation so they appear to stay in one place while the other stars rotate around them through the course of a night (or a year).
Planets
Planets, named for the Greek word for 'wanderer', process through the starfield a little each day, executing loops with time scales dependent on the length of the planet's year or orbital period around the Sun. Planets, to the naked eye, appear as points of light in the sky with variable brightness. Planets shine due to sunlight reflecting or scattering from the planets' surface or atmosphere. Thus, the relative Sun-planet-Earth positions determine the planet's brightness. With a telescope or good binoculars, the planets appear as discs demonstrating finite size, and it is possible to observe orbiting moons which cast shadows onto the host planet's surface. Venus is the most prominent planet, often called the "morning star" or "evening star" because it is brighter than the stars and often the only "star" visible near sunrise or sunset, depending on its location in its orbit. Because of its brightness, Venus can sometimes be seen after sunrise. Mercury, Mars, Jupiter and Saturn are also visible to the naked eye in the night sky.
The Moon
The Moon appears as a grey disc in the sky with cratering visible to the naked eye. It spans, depending on its exact location, 29–33 arcminutes – which is about the size of a thumbnail at arm's length, and is readily identified. Over 29.53 days on average, the moon goes through a full cycle of lunar phases. People can generally identify phases within a few days by looking at the Moon. Unlike stars and most planets, the light reflected from the Moon is bright enough to be seen during the day.
Some of the most spectacular moons come during the full moon phase near sunset or sunrise. The Moon on the horizon benefits from the Moon illusion which makes it appear larger. The Sun's light reflected from the Moon traveling through the atmosphere also appears to color the Moon orange and/or red.
Comets
Comets come to the night sky only rarely. Comets are illuminated by the Sun, and their tails extend away from the Sun. A comet with a visible tail is quite unusual – a great comet appears about once a decade. They tend to be visible only shortly before sunrise or after sunset because those are the times they are close enough to the Sun to show a tail.
Clouds
Clouds obscure the view of other objects in the sky, though varying thicknesses of cloud cover have differing effects. A very thin cirrus cloud in front of the moon might produce a rainbow-colored ring around the moon. Stars and planets are too small or dim to take on this effect and are instead only dimmed (often to the point of invisibility). Thicker cloud cover obscures celestial objects entirely, making the sky black or reflecting city lights back down. Clouds are often close enough to afford some depth perception, though they are hard to see without moonlight or light pollution.
Other objects
On clear dark nights in unpolluted areas, when the Moon appears thin or below the horizon, the Milky Way, a band of what looks like white dust, can be seen.
The Magellanic Clouds of the southern sky are easily mistaken for Earth-based clouds (hence the name), but are in fact collections of stars found outside the Milky Way, known as dwarf galaxies.
Zodiacal light is a glow that appears near the points where the Sun rises and sets, and is caused by sunlight interacting with interplanetary dust.
Gegenschein is a faint bright spot in the night sky centered at the antisolar point, caused by the backscatter of sunlight by interplanetary dust.
Shortly after sunset and before sunrise, artificial satellites often look like stars – similar in brightness and size – but move relatively quickly. Those that fly in low Earth orbit cross the sky in a couple of minutes. Some satellites, including space debris, appear to blink or have a periodic fluctuation in brightness because they are rotating. Satellite flares can appear brighter than Venus, with notable examples including the International Space Station (ISS) and Iridium Satellites.
Meteors streak across the sky infrequently. During a meteor shower, they may average one a minute at irregular intervals, but otherwise their appearance is a random surprise. The occasional meteor will make a bright, fleeting streak across the sky, and they can be very bright in comparison to the night sky.
Aircraft are also visible at night, distinguishable at a distance from other objects because their navigation lights blink.
Sky map
Future and past
Besides Solar System objects changing position over the course of their orbits (and the Moon, whose orbit around Earth slowly expands, making it appear smaller over time), the night sky also changes over the years: stars shift position through proper motion and change in brightness, whether because they are variable stars, because their distance increases, or because of other celestial events such as supernovae.
Over a timescale of tens of billions of years, the night sky in the Local Group will change significantly when the Andromeda Galaxy and the Milky Way merge into a single elliptical galaxy.
See also
Amateur astronomy
Asterism (astronomy)
Astrology
Astronomical object
Constellation
Earth's shadow
Olbers' paradox
Planetarium
References
External links
A virtual panorama of winter night. Pokljuka, Slovenia. Burger.si. Accessed 28 February 2011.
Observational astronomy
Articles containing video clips
Sky | Night sky | [
"Astronomy"
] | 2,145 | [
"Time in astronomy",
"Night",
"Observational astronomy",
"Astronomical sub-disciplines"
] |
997,483 | https://en.wikipedia.org/wiki/Feller%20buncher | A feller buncher is a type of harvester used in logging. It is a motorized vehicle with an attachment that can rapidly gather and cut a tree before felling it.
Feller is a traditional name for someone who cuts down trees, and bunching is the skidding and assembly of two or more trees. A feller buncher performs both of these harvesting functions and consists of a standard heavy equipment base with a tree-grabbing device furnished with a chain-saw, circular saw or a shear—a pinching device designed to cut small trees off at the base. The machine then places the cut tree on a stack suitable for a skidder, forwarder, or yarder for transport to further processing such as delimbing, bucking, loading, or chipping.
Some wheeled feller bunchers lack an articulated arm, and must drive close to a tree to grasp it.
In cut-to-length logging a harvester performs the tasks of a feller buncher and additionally does delimbing and bucking.
Components and Felling attachment
A feller buncher is either tracked or wheeled, may have a self-levelling cabin, and can be matched with different felling heads. On steep terrain, tracked feller bunchers are used because they provide a high level of traction on the slope as well as high stability. On flat terrain, wheeled feller bunchers are more efficient than tracked ones. Levelling cabins are commonly fitted to both wheeled and tracked feller bunchers for steep terrain, as they improve operator comfort and help maintain the standard of tree-felling production. The size and type of trees determine which type of felling head is used.
Types of felling heads
Disc Saw Head – Provides high-speed cutting as the head is pushed against the tree; the clamp arms then grip the tree as the cut nears completion. It can cut and gather multiple trees in the felling head. A disc saw head combined with good ground speed gives high production, enough to keep more than one skidder working continuously.
Shear Blade Head – The head is placed against the tree and the clamp arms hold the tree firmly; the blade then activates and cuts the tree. As with the disc saw head, it can hold multiple trees before placing them on the ground.
Chain Saw Head – The floppy head provides minimal control when placing trees on the ground, and it is less suited to collecting cut trees or gathering multiple stems in the felling head.
Cost-effectiveness
The purchase cost of a feller buncher is around US$180,000, and its fuel and lubricant consumption is high compared with other mechanical harvesting equipment. The feller buncher also has the highest hourly cost, around $99.5, compared with equipment such as harvesters and grapple skidders. Although its total cost is high overall, its unit production cost is the lowest, which is why the feller buncher is considered the most cost-effective harvesting equipment. The average unit cost of a feller buncher is $12.1/m3, while that of a harvester is $16.5/m3. The unit cost of the feller buncher is primarily affected by tree size and tree volume: the unit felling cost falls as tree size increases. For example, a tree of 5 inches DBH has a unit cost of $70, while a tree of 15 inches DBH has a unit cost of $12. Because the cost of a feller buncher is high, only a large tree volume produces enough profit to cover the high average cost. In terms of stump height, a lower stump height maximises the use of natural resources and prevents wood waste. Mechanical felling with a feller buncher can prevent the 30% value loss caused by high stumps.
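The unit-cost relation underlying these figures is simply hourly machine cost divided by hourly production. The sketch below illustrates this; the production rate is a hypothetical value chosen to reproduce the $12.1/m3 figure, not a number from the cited sources.

```python
# Unit cost = hourly machine cost / hourly production (m3/h).
HOURLY_COST_USD = 99.5        # feller buncher hourly cost from the text
PRODUCTION_M3_PER_H = 8.2     # hypothetical production rate (assumption)

unit_cost = HOURLY_COST_USD / PRODUCTION_M3_PER_H
print(f"unit cost: ${unit_cost:.1f}/m3")  # ~$12.1/m3, matching the text
```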
Maintenance
A feller buncher requires daily maintenance before operation, while some components require only periodic maintenance. Maintenance helps ensure the safety of operators and all workers around the operation. If a damaged or faulty machine is operated, further damage to the machine can result, which can be more expensive to repair.
Daily or Every 8 hours
Lubrication
The felling head is considered one of the hardest-working parts of the feller buncher, and it is necessary to apply lubricant to every joint during daily maintenance. It is suggested to lubricate the saw head clamps, wrist attachment and driveshaft bearings at every maintenance. The grease used should meet the extreme-pressure performance standard and contain 3% molybdenum disulphide (MoS2); MoS2 prevents wear where metal-to-metal contact occurs.
Fuel
It is also important to check that there is enough fuel for the operation. Feller bunchers use diesel fuel to generate power. In most cases the fuel should have a cetane number greater than 50 (minimum 40), which suits operation at temperatures below −20 °C (−4 °F) or elevations above 1500 m (5000 ft). The cloud point of the fuel should be at least 5 °C (9 °F) lower than the expected low temperature. The sulphur content of the fuel should not exceed 0.5%, as higher sulphur content can halve the service interval for the engine oil and filter.
Engine coolant
Operators have the responsibility to check the engine coolant level of the feller buncher before starting the engine. The coolant prevents cylinder liner erosion and pitting, and provides protection at extremely low temperatures down to −37 °C (−34 °F). It is recommended to use coolants for heavy-duty engines with a relatively low-silicate ethylene glycol base. Coolants come in two forms: pre-diluted or concentrate. Concentrated coolant must be diluted with water at an approximate ratio of 50:50. A supplemental coolant additive may also be required in the concentrated coolant to provide protection against corrosion. Distilled, deionised, or demineralised water is suggested for mixing the concentrated coolant, because some water compositions can form a precipitate when mixed with other substances, causing damage or blockage in the engine.
Risk management approach
During maintenance, common working hazards relate to two main areas: the working environment and the exhaust system. When working on the exhaust system, be aware of hot components around the engine; workers should wear personal protective equipment such as safety spectacles, heat-proof gloves and safety boots. When a feller buncher is elevated for service or maintenance, falls from height can occur. Related injuries can be avoided by keeping all walking surfaces dry, wiping up any oils or other liquids on the floor, and ensuring the feller buncher is parked on level, stable ground during maintenance. When getting in and out of the machine, workers should keep three points of contact, with two hands holding the handrails and one foot on a step. It is also important to provide sufficient lighting for all working sites at all times of service.
Safety
Logging is considered one of the most dangerous occupations, because many loggers are injured by falling objects that are large and heavy. “Struck by object” injuries are the most commonly reported in the logging industry, owing to the manual use of equipment during logging procedures. There is evidence that using mechanized harvesting equipment reduces the rate of “struck by” injuries. One study indicates that total injury claims fell by 14.2%, and “struck by” injuries by 8.2%, comparing the periods before and after the introduction of the feller buncher. The significant decline in “struck by” injuries after logging companies adopted the feller buncher supports the statement that mechanized harvesting equipment lessens overall injuries. The same evidence found that the injury rate in logging companies not using feller bunchers increased slightly over five years, from 14.5% to 17.5%. In terms of felling-related fatalities, areas with lower levels of mechanization in harvesting had higher fatality rates. For instance, in Eastern areas of the United States, research comparing conventional and mechanized logging operations indicated that the number of injuries using the conventional approach was three times greater than when using mechanized equipment such as a feller buncher. However, machine-related injuries can rise accordingly, especially when performing machine maintenance or repair; these kinds of injuries can be serious and also costly.
Limitations
A feller buncher can be highly productive and cost-effective, but there are several limitations. It is less beneficial on very rough and relatively steep ground. For example, in the Appalachian hardwood region, trees have heavy crowns and grow on steep slopes, which requires tracked feller bunchers. Although tracked feller bunchers allow operation on steep slopes, their cost-effectiveness there is not well studied, and manual felling can operate on steeper slopes than feller bunchers can. Feller bunchers are also cost-effective only when there is a high volume of trees in the operation; if there is not enough timber to harvest, the unit cost can be expensive, especially when the majority of the operation site is steep slopes. A 2013 University of Maine study suggests that the use of feller bunchers can cause a medium to high level of stand damage, from 7% to 25%; however, in comparison with other equipment such as harvesters, the damage caused by feller bunchers is less severe.
See also
Woodchipper
Harvester
Skidder
Logging truck
Stump grinder
References
External links
Logging
Forestry equipment
Mechanical engineering
Forest management | Feller buncher | [
"Physics",
"Engineering"
] | 2,040 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
997,557 | https://en.wikipedia.org/wiki/Master%20of%20Architecture | The Master of Architecture (M.Arch or MArch) is a professional degree in architecture qualifying the graduate to move through the various stages of professional accreditation (internship, exams) that result in receiving a license.
Overview
The degree is earned through several possible paths of study, depending on both a particular program's construction, and the candidate's previous academic experience and degrees. M.Arch degrees vary in kind, so they are frequently given names such as "M.Arch I" and "M.Arch II" to distinguish them. All M.Arch degrees are professional degrees in architecture. There are, however, other master's degrees offered by architecture schools that are not accredited in any way.
Many schools offer several possible tracks of architectural education. Including study at the bachelor's and master's level, these tracks range up to 7.5 years in duration.
One possible route is what is commonly referred to as the "4+2" course. This path entails completing a four-year, accredited, pre-professional Bachelor of Arts in architecture or a Bachelor of Science in architecture, followed by a two-year (sometimes three-year, depending on the nature and quality of the undergraduate study and the master's program's evaluation of it) Master of Architecture program. This route offers several advantages: the first four years are somewhat looser, allowing the inclusion of some liberal arts study; students can attend two different institutions for undergraduate and graduate study, giving a more varied architectural education and letting them choose the best place to complete the thesis (since the program with the right focus may only become clear when it is time for thesis study); and graduates of the 4+2 course finish with a master's degree that provides the career option of teaching architecture at the collegiate level.
The second route to obtaining an accredited master's degree begins in graduate school, with a 3- or 3.5-year master's degree (commonly called an "M.Arch I"). The advantage of this route is that the student can study another field of interest at the undergraduate level. Because students come from different undergraduate backgrounds, the breadth of knowledge and experience in the student body of an M.Arch I program is often considered an advantage. One possible disadvantage is that the total time in school is longer (7 or 7.5 years including an undergraduate degree). Another disadvantage is that the student has a very short time to cover the extremely broad scope of subject areas of which architects are expected to have a working knowledge. Nevertheless, major schools of architecture including MIT and Harvard often offer a 3.5-year program to students who already have a strong architectural background, fostering a competitive and productive academic environment.
A third possible route is what schools are calling a "post-professional" master's degree. It is research-based and often a stepping-stone to a Doctor of Philosophy in Architecture. Schools include Cornell, Harvard, Princeton, MIT, and RISD.
Some institutions offer a 5-year professional degree program. Depending on the school and course of study, this could be either a Bachelor of Architecture (B.Arch) or an M.Arch; in the U.S., it is typically a 5-year B.Arch. Either degree qualifies those who complete it to sit for the ARE (the Architect Registration Examination, the architecture equivalent of the bar exam), which leads to an architect's license in the U.S. One disadvantage of the B.Arch degree is that it is rarely considered sufficient qualification for teaching architecture at the university/college level in the U.S. (though there are many exceptions). Many architects who wish to teach and have only received a B.Arch choose to pursue a 3-semester master's degree (not an M.Arch) to obtain further academic qualification.
Graduate-level architecture programs consist of course work in design, building science, structural engineering, architectural history, theory, professional practice, and elective courses. For those without any prior knowledge of the field, coursework in calculus, physics, computers, statics and strengths of materials, architectural history, studio, and building science is usually required. Some architecture programs allow students to specialize in a specific aspect of architecture, such as architectural technologies or digital media. A thesis or final project is usually required to graduate.
In the United States, The National Architectural Accrediting Board (NAAB) is the sole accrediting body for professional degree programs in architecture. Since most state registration boards in the United States require any applicant for licensure to have graduated from an NAAB-accredited program, obtaining such a degree is an essential aspect of preparing for the professional practice of architecture. First time students matriculating with a 5-year B.Arch degree can also qualify for registration without obtaining a master's degree. Some programs offer a concurrent learning model, allowing students the opportunity to work in the profession while they earn their degree, so that they can test for licensure immediately upon graduation.
In Canada, Master of Architecture degrees may be accredited by the Canadian Architectural Certification Board (CACB), allowing the recipient to qualify for both the ARE and the Examination for Architects in Canada (ExAC).
As of June 2022, there were 120 accredited Master of Architecture programs in the United States, including Puerto Rico.
Master's degree programs
United States
Canada
Colleges and universities in Canada with accredited Master of Architecture degree programs are listed below:
University of British Columbia
University of Calgary
Carleton University
Université Laval
McGill University
University of Manitoba
Université de Montréal
University of Guelph (Only Master of Landscape Architecture)
University of Toronto
Dalhousie University
University of Waterloo, School of Architecture
Ryerson University
Australia and New Zealand
Universities in Australia and New Zealand with Master of Architecture degree programs recognised by the Architects Accreditation Council of Australia are listed below:
Curtin University
Griffith University
Deakin University
Monash University
Queensland University of Technology (QUT)
RMIT University
University of Adelaide
University of Canberra
University of Melbourne
University of Newcastle
University of New South Wales
University of Queensland
University of South Australia (UniSA)
University of Sydney
University of Tasmania
University of Technology, Sydney (UTS)
University of Western Australia
University of Auckland
Unitec New Zealand
Victoria University of Wellington
Hong Kong
Only two universities in Hong Kong offer a Master of Architecture accredited by the HKIA (Hong Kong Institute of Architects), the CAA (Commonwealth Association of Architects) and the RIBA (Royal Institute of British Architects) for professional registration as an architect:
The Chinese University of Hong Kong, School of Architecture, Hong Kong, founded in 1992
The University of Hong Kong, Faculty of Architecture, Department of Architecture, Hong Kong, founded in 1950
China
Tsinghua University, Beijing
Beijing University of Civil Engineering and Architecture, Beijing
Tongji University, Shanghai
Southeast University, Nanjing
Xi'an Jiaotong-Liverpool University, starting fall 2014, language: English
Hunan University, Changsha
Singapore
National University of Singapore
Singapore University of Technology and Design
Mexico
In Mexico, an officially recognized Bachelor of Architecture is sufficient for practice.
Faculty of Architecture at the National Autonomous University of Mexico
Monterrey Institute of Technology and Higher Education
Universidad Autónoma Benito Juárez de Oaxaca
Universidad Autónoma de Guadalajara
Universidad Autónoma de Nuevo León
Universidad Autónoma de San Luis Potosí
Africa
University of Pretoria
University of Cape Town
University of the Witwatersrand
University of Johannesburg
Tshwane University of Technology
University of Nigeria, Enugu Campus
University of Carthage
Uganda Martyrs University
University of the Free State
Nelson Mandela Metropolitan University
University of Nairobi
Caleb University
Bells University of Technology
Ardhi University, Tanzania
Kwame Nkrumah University of Science and Technology, Ghana
Ahmadu Bello University, Zaria
Federal University of Technology, Akure
Federal University of Technology, Minna. Nigeria.
India
In India, the Council of Architecture regulates architectural education and maintains a registry of higher education institutions approved to offer the 2-year Master of Architecture degree. While the 5-year Bachelor of Architecture degree allows a person to register with the Council of Architecture as an architect and practice architecture in India, a Master of Architecture is often required for teaching architecture at the collegiate level.
Iran
Some universities in Iran with accredited Master of Architecture degree programs are listed below:
Tehran University
Shahid Beheshti University (SBU)
Iran University of Science and Technology
Tarbiat Modares University (TMU)
Tabriz Islamic Art University
Yazd University
University of Shahrood
Islamic Azad University
Sooreh University
Shiraz University
Schools and Universities in Europe
Austria
Academy of Fine Arts Vienna, Institute for Art and Architecture (B.Arch. and M.Arch. language: German and English) (Austria)
Bosnia and Herzegovina
University of Sarajevo Faculty of Architecture (B.Arch. and M.Arch. language: Bosnian and English)
Belgium
WENK Gent Brussels (Sint Lucas Institute of Architecture) Sint Lucas Ghent Brussels in Belgium (language: English)
Denmark
Royal Danish Academy of Fine Arts (M.A. Professional Degree, language: English)(Denmark)
Finland
University of Oulu (M.S. Professional Degree, language: English)(Finland)
University of Tampere (M.S. Professional Degree, language: English)(Finland)
Aalto University (M.S. Professional Degree, language: English)(Finland)
Germany
DIA Dessau (Dessau International Architecture) at the Hochschule Anhalt / Bauhaus Dessau in Germany (language: English)
Hochschule Wismar (language: German and English) in Wismar, Germany
Ireland
Cork Centre for Architectural Education (University College Cork/Munster Technological University)
Technological University Dublin
University College Dublin
Italy
Politecnico di Torino - I Facoltà di Architettura I (Italy)
Politecnico di Torino - II Facoltà di Architettura (Italy)
Liechtenstein
Hochschule Liechtenstein (candidate for accreditation, language: English)
Netherlands
TU Delft Faculty of Architecture (M.S. Professional Degree, language: English)
Academy of Architecture at the Amsterdam School of Art
Artez Academy of Architecture in Arnhem
Academie van Bouwkunst Groningen
Academie van bouwkunst Maastricht
The Rotterdam Academy of Architecture and Urban Design
TU Eindhoven Faculty of Architecture, Building and Planning (M.S. Professional Degree, language: English)
Poland
Warsaw University of Technology Architecture and Urban Planning with specialisation Architecture for Society of Knowledge (M.Arch. language: English) (Poland)
Cracow University of Technology Department of Architecture with specialisation Architecture and Urban Planning (M.Arch. RIBA accredited) (Poland)
Wroclaw University of Science and Technology Faculty of Architecture
(M.Arch. language: English) (Poland)
Serbia
University of Belgrade Architecture and Urban Planning (M.Arch. RIBA accredited)(M.Arch. language: Serbian, English)
University of Novi Sad Architecture (M.Arch. language: Serbian, English)
Slovenia
University of Ljubljana Architecture and Urban Planning (M.Arch. language: English) (M.I.A. Language: Slovenian)
Spain
Universidad de Navarra Department of Architecture (M.D.A. language: Spanish and English) (Spain)
The University of the Basque Country The University of the Basque Country (M.D.A. Language: Basque or Spanish) (Basque Country, Spain)
Switzerland
Joint Master of Architecture in Berne, Fribourg and Geneva (languages: English and French) (Switzerland)
Accademia di Architettura di Mendrisio (Switzerland)
Academie van Bouwkunst Tilburg (the Netherlands)
United Kingdom
All M.Arch courses listed below hold RIBA and ARB accreditation, satisfying RIBA's Part 2 stage before Part 3 and registration as an architect.
England
University of Bath, Department of Architecture and Civil Engineering, Bath, as MArch
Birmingham City University, Birmingham School of Architecture, Birmingham, as MArch
Arts University Bournemouth, Bournemouth, as MArch
University of Brighton, Brighton, as MArch
University of the West of England (UWE Bristol), Bristol, as MArch
University of Cambridge, Department of Architecture, Cambridge as MPhil
The University of Creative Arts, Canterbury School of Architecture, as MArch
The University of Kent (Canterbury), Kent School of Architecture, as MArch
The University of Huddersfield, School of Art, Design and Architecture. as M.Arch or M.Arch (International)
Leeds Beckett University, School of Arts, as MArch or Level 7 Architecture Apprenticeship.
De Montfort University, The Leicester School of Architecture, Leicester, as MArch or Level 7 Architecture Apprenticeship.
University of Lincoln, The Lincoln School of Architecture, Lincoln, as MArch
University of Liverpool, Liverpool School of Architecture, Liverpool, as MArch
Liverpool John Moores University, Liverpool, as MArch
Architectural Association School of Architecture, London, as Final Examination
The London School of Architecture as MArch
The University College of London, The Bartlett School of Architecture, as MArch
The University of Arts, London, Central Saint Martins College of Art and Design, London, as MArch
The University of East London, School of Architecture, Computing and Engineering, as MArch
The University of Greenwich London, School of Architecture, Design and Construction, London, as MArch
Kingston University London, Kingston School of Art, London, as MArch
London Metropolitan University, School of Art, Architecture and Design, as MArch or Level 7 Architect Apprenticeship
Royal College of Art, School of Architecture, as MA
London South Bank University, Engineering, Science and the Built Environment, as MArch or Level 7 Architect Apprenticeship
The University of Westminster, Department of Architecture, as M.Arch
University of Manchester and Manchester Metropolitan University, The Manchester School of Architecture, as MArch or Level 7 Architect Apprenticeship
The University of Newcastle upon Tyne, School of Architecture, Planning and Landscape, Newcastle, as M.Arch
Northumbria University, Architecture Department, School of the Built Environment, Newcastle upon Tyne, as MArch or Level 7 Architect Apprenticeship
The University of Nottingham, Architecture and Built Environment, Nottingham, as MArch
Nottingham Trent University, School of Architecture, Design and the Built Environment. as M.Arch
Oxford Brookes University, School of Architecture, Oxford, as MArchD
University of Central Lancashire (UCLAN), (Preston) The Grenfell-Baines School of Architecture, Construction and Environment, as MArch
RIBA Studio, as Diploma
The University of Plymouth, Plymouth School of Architecture, Design and Environment, Plymouth, as M.Arch
The University of Portsmouth, Portsmouth School of Architecture, Portsmouth, as MArch
The University of Sheffield, Sheffield School of Architecture, Sheffield, as MArch
Sheffield Hallam University, Department of Architecture and Planning, Sheffield, as M.Arch
Northern Ireland
The Queen's University Belfast as MArch
The University of Ulster as MArch
Scotland
University of Dundee as MArch (with Honours)
University of Edinburgh, The Edinburgh College of Art, MArch
University of Strathclyde (Glasgow) as PgDip or MArch
Glasgow School of Art, Mackintosh School of Architecture, as MArch
Robert Gordon University, The Scott Sutherland School of Architecture & Built Environment, via BSc/MArch (Integrated Degree) or MArch
Duncan of Jordanstone College of Art and Design as MArch
Wales
Cardiff University, Welsh School of Architecture, via BSc/MArch (Integrated Degree) or MArch
Schools and Universities in the Middle East
Technion Department of Architecture (M.Arch. language: English) (Israel)
Bezalel Academy of Art and Design Department of Architecture (B.Arch. language: Hebrew) (Israel)
Ariel University Department of Architecture (B.Arch. language: Hebrew) (Israel)
Middle East Technical University Department of Architecture (M.Arch. language: English) (Turkey)
Mimar Sinan Fine Arts University (B.Arch. and M.Arch. language: Turkish) (Turkey)
King Saud University, College of Architecture and Planning (Riyadh, Saudi Arabia) – B.Arch. with main majors in building science and urban design, plus master's and PhD programs (languages: English and Arabic); ranked (according to NAAB, 2012) the #1 architecture school in the Middle East
See also
Bachelor of Architecture
Doctor of Architecture
National Council of Architectural Registration Boards
References
Architecture schools
Architecture
Architectural education | Master of Architecture | [
"Engineering"
] | 3,300 | [
"Architectural education",
"Architecture"
] |
997,579 | https://en.wikipedia.org/wiki/14%20Herculis | 14 Herculis or 14 Her is a K-type main-sequence star in the constellation Hercules. It is also known as HD 145675. With an apparent magnitude of 6.61, the star can only very faintly be seen with the naked eye. As of 2021, 14 Herculis is known to host two exoplanets in orbit around it.
Stellar components
14 Herculis is an orange dwarf star of the spectral type K0V. The star has about 98 percent of the mass, 97 percent of the radius, and only 67 percent of the luminosity of the Sun. Based on its abundance of iron, it appears to be 2.7 times as enriched in elements heavier than hydrogen as the Sun, and it may have been the most metal-rich star known as of 2001.
Planetary system
In 1998, a planet, 14 Herculis b, was discovered orbiting 14 Herculis via the radial velocity method; the discovery was formally published in 2003. The planet has an eccentric orbit with a period of 4.8 years. In 2005, a possible second planet was proposed, designated 14 Herculis c. The parameters of this planet were very uncertain, but an initial analysis suggested that it was in a 4:1 resonance with the inner planet, with an orbital period of almost 19 years at an orbital distance of 6.9 AU. The existence of 14 Herculis c was confirmed in 2021, along with a rough orbit determination.
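As a rough plausibility check on these orbital figures, Kepler's third law (a^3 = M * P^2, with a in AU, P in years, and M in solar masses) relates the quoted period to the quoted distance. The sketch below is illustrative only; the published orbits come from full radial-velocity and astrometric fits.

```python
# Kepler's third law check for 14 Herculis c.
M_star = 0.98      # stellar mass in solar masses (from the text)
P_years = 4.8 * 4  # outer period near the 4:1 resonance of the 4.8 yr inner orbit

a_au = (M_star * P_years**2) ** (1.0 / 3.0)
print(f"semi-major axis ~ {a_au:.1f} AU")  # ~7.1 AU, close to the quoted 6.9 AU
```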
A 2021 study combining radial velocity and astrometry found that the planetary orbits are not coplanar, which may indicate a strong planet-planet scattering event in the past. Subsequent astrometric studies have found differing results; a 2022 study found inclinations consistent with aligned orbits, while a 2023 study again found misaligned orbits. The latter study also found signs of a third candidate planet with a period of about 10 years, but this signal is most likely related to the star's magnetic activity cycle.
Direct imaging of the outer planet 14 Herculis c with the James Webb Space Telescope is planned.
See also
47 Ursae Majoris
List of stars in Hercules
Lists of exoplanets
References
External links
Herculis, 014
Hercules (constellation)
145675
079248
0614
BD+44 2549
K-type main-sequence stars
Planetary systems with two confirmed planets | 14 Herculis | [
"Astronomy"
] | 484 | [
"Hercules (constellation)",
"Constellations"
] |
997,664 | https://en.wikipedia.org/wiki/Used%20good | Used goods, also known as secondhand goods, are any item of personal property offered for sale not as new, including metals in any form except coins that are legal tender, but excluding books, magazines, and postage stamps. Used goods may also be handed down, especially among family or close friends, as a hand-me-down.
Risks
Furniture, especially bedding or upholstered items, may harbour bedbugs if it has not been examined by an expert, and some goods may be of poor quality.
Benefits
Recycling goods through the secondhand market reduces the use of resources in manufacturing new goods and diminishes the waste which must be disposed of, both of which are significant environmental benefits. Another benefit of recycling clothes is the creation of new garments by combining parts of recycled clothes into a whole new piece. Multiple fashion designers have done this recently, and the practice has been growing in recent years.
However, manufacturers who profit from sales of new goods lose corresponding sales. Scientific research shows that buying used goods significantly reduces the associated carbon footprint (including emissions) compared with the complete life cycle of a newly manufactured product. In most cases, the relative carbon footprint of production, raw-material sourcing, and the supply chain (which together make up a great deal of the product's life cycle) is unknown. A scientific methodology has been developed to analyze how much emissions are reduced when buying used goods, such as secondhand computer hardware, versus new hardware.
Quality secondhand goods can be more durable than equivalent new goods.
Types of transfers
Many items that are considered obsolete and worthless in developed countries, such as decade-old hand tools and clothes, are useful and valuable in impoverished communities in the country or in developing countries. Underdeveloped countries like Zambia are extremely welcoming to donated secondhand clothing. At a time when the country's economy was in severe decline, the used goods provided jobs by keeping "many others busy with repairs and alterations." It has created a type of spin-off economy at a time when many Zambians were out of work. The used garments and materials that were donated to the country also allowed for the production of "a wide range of fabrics" whose imports had been previously restricted. The trade is essentially executed by women who operate their small business based on local associations and networks. Not only does this provide self-employment, but it also increases household income and enhances the economy. But while many countries would be welcoming of secondhand goods, it is also true that there are countries in need who refuse donated items. Countries like Poland, the Philippines, and Pakistan have been known to reject secondhand items for "fear of venereal disease and risk to personal hygiene". Similar to these countries, India also refuses the import of secondhand clothing but will accept the import of wool fibers, including mutilated hosiery which is a term meaning "woollen garments shredded by machine in the West prior to export." Through the production of shoddy (recycled wool), most of which is produced in Northern India today, unused clothing can be recycled into fibers that are spun into yarn for reuse in "new" used goods.
There has been concern that export of electronic waste is disguised as trade of used goods, with the equipment ending in poor-country waste dumps.
Types
Used clothing
In developed countries, unwanted used clothing is often donated to charities that sort and sell it. Some of these distribute some of the clothing to people on low incomes for free or at a very low price. Others sell all of the collected clothing in bulk to a commercial used clothing redistributor and then use the raised funds to finance their activities. In the U.S., almost 5 billion pounds of clothing are donated to charity shops each year, only about 10% of which can be re-sold by the charity shops. About a third of the donated clothing is bought, usually in bulk and at a heavy discount, by commercial dealers and fabric recyclers, who export it to other countries. Some of the used clothes are also smuggled into Mexico.
Whereas charity shops dominated the secondhand market from the 1960s to the 1970s, more specialized, profit-oriented shops emerged in the 1980s. These shops catered primarily to the fashionable female demographic and offered women and children designer clothes, and occasionally high-end formal wear for men. Resale boutiques specialized in contemporary high-end used designer fashion (for example, 2nd Take, or Couture Designer Resale), while others (such as Buffalo Exchange and Plato's Closet) specialize in vintage or retro fashion, period fashion, or contemporary basics and one-of-a-kind finds. Still, others cater to specific active sports by specializing in things such as riding equipment and diving gear. The resale business model has now expanded into the athletic equipment, books, and music categories. Secondhand sales migrated to a peer-to-peer platform—effectively cutting out the retailer as the middleman—when websites such as eBay and Amazon introduced the opportunity for Internet users to sell virtually anything online, including designer (or fraudulent) handbags, fashion, shoes, and accessories.
Used clothing unsuitable for sale in an affluent market may still find a buyer or end-user in another market, such as a student market or a less affluent region of a developing country. In developing countries, such as Zambia, secondhand clothing is sorted, recycled, and sometimes redistributed to other nations. Some of the scraps are kept and used to create unique fashions that enable the locals to construct identity. Not only does the trade represent a great source of employment for women as well as men, but it also supports other facets of the economy: the merchants buy timber and other materials for their stands, metal hangers to display clothing, and food and drinks for customers. Carriers also find work as they transport the garments from factories to various locations. The secondhand clothing trade is central to the lives of many citizens dwelling in such countries.
Importation of used clothing is sometimes opposed by the textile industry in developing countries. They are concerned that fewer people will buy the new clothes that they make when it is cheaper to buy imported used clothing. Nearly all the clothes made in Mexico are intended for export, and the Mexican textile industry opposes the importation of used clothes.
Electronics and home appliances
Electronics usually are traded as secondhand goods, and may represent a hazard if disposed of incorrectly. Many of them may still be used despite being possibly outdated; for example, an older television set or computer may be sold or handed down to someone who is in need of one. In some cases, older electronics (such as home audio equipment) may outlast new equipment.
This is also the case for home appliances, from microwave ovens and toaster ovens to refrigerators and kitchen stoves.
Design and furniture
Design items and furniture are also increasingly traded as secondhand goods, with some designer items much sought after in marketplaces. When trading design furniture and items, it is useful to know the original retail price, as most such goods, if kept well, retain their value.
Other items
The Sierra Club, an environmental organization, argues that secondhand purchasing of furniture is the "greenest" way of furnishing a home.
See also
Atomic Ed and the Black Hole, a documentary film about a unique secondhand shop
Auto auction
Car boot sale
Charity shop
Consignment
Fashionphile
Flea market
Freeganism
Give-away shop
Jumble sale
Rebag
Recommerce
Regifting
Regiving
Remanufacturing
Reseller
Reverse engineering
Surplus store
Sustainable clothing
The RealReal
Thrift shop
Upcycling
References
Sustainable design
Sustainable business
Repurposing
Retailing by products and services sold
Waste | Used good | [
"Physics"
] | 1,568 | [
"Materials",
"Waste",
"Matter"
] |
997,690 | https://en.wikipedia.org/wiki/Leiden%20scale | The Leiden scale (°L) is a temperature scale that was used to calibrate low-temperature indirect measurements in the early 20th century, by providing conventional values (in kelvins, then termed "degrees Kelvin") of helium vapour pressure. The scale dates back to around 1894, when Heike Kamerlingh Onnes established his cryogenics laboratory in Leiden, Netherlands. It was used below −183 °C, the starting point of the International Temperature Scale in the 1930s (Awbery 1934). The boiling points of standard hydrogen (−253 °C), consisting of 75% orthohydrogen and 25% parahydrogen, and of oxygen (−183 °C) were used as fixed points, corresponding to zero and 70 on the scale respectively.
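The two fixed points above imply a simple linear relation to Celsius, with one Leiden degree spanning one Celsius degree. The sketch below assumes that linearity, which the description implies but does not state outright.

```python
# Convert Leiden degrees to Celsius, assuming a linear scale anchored at
# 0 °L = -253 °C (hydrogen) and 70 °L = -183 °C (oxygen).
def leiden_to_celsius(t_leiden: float) -> float:
    return t_leiden - 253.0

print(leiden_to_celsius(0))   # -253.0, hydrogen boiling point
print(leiden_to_celsius(70))  # -183.0, oxygen boiling point
```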
See also
Outline of metrology and measurement
References
Berman, A.; Zemansky, M. W.; and Boorse, H. A.; Normal and Superconducting Heat Capacities of Lanthanum, Physical Review, Vol. 109, No. 1 (January 1958), pp. 70–76. Quote:
"The 1955 Leiden scale13 was used to convert helium vapor pressures into temperatures [...] (13) H. van Dijk and M. Durieux, in Progress in Low Temperature Physics II, edited by C. J. Gorter (North-Holland Publishing Company, Amsterdam, 1957), p. 461. In the region of calibration the 1955 Leiden scale, TL55, differs from the Clement scale, T55E, by less than 0.004 deg." (emphasis added)
Grebenkemper, C. J.; and Hagen, John P.; The Dielectric Constant of Liquid Helium, Physical Review, Vol. 80, No. 1 (October 1950), pp. 89–89. Quote:
"The temperature scale used was the 1937 Leiden scale." (emphasis added)
Awbery, J. H.; Heat, Rep. Prog. Phys. 1934, No. 1, pp. 161–197 . Quote:
"It should be mentioned that below −183 °C, the Leiden workers do not entirely agree with some of the other cryogenic laboratories, but use a scale of their own, generally known as the Leiden scale." (emphasis added)
H. van Dijk, M. Durieux.; The Temperature Scale in the Liquid Helium Region, Progress in Low Temperature Physics. - 1957. - Vol. 2. — P. 431–464.
Hubbard, Joanna; Are icebergs made of salt water or fresh water? Archived at the Wayback Machine (07-16-2011)
Obsolete units of measurement
Scales of temperature | Leiden scale | [
"Physics",
"Mathematics"
] | 569 | [
"Scales of temperature",
"Obsolete units of measurement",
"Physical quantities",
"Quantity",
"Units of measurement"
] |
997,696 | https://en.wikipedia.org/wiki/Administrative%20share | Administrative shares are hidden network shares created by the Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system. These shares may not be permanently deleted but may be disabled. Administrative shares cannot be accessed by users without administrative privileges.
Share names
Administrative shares are a collection of automatically shared resources including the following:
Disk volumes: Every disk volume on the system is shared as an administrative share. The name of these shares consists of the drive letter of the shared volume plus a dollar sign ($). For example, a system that has volumes C, D and E has three administrative shares named C$, D$ and E$. (NetBIOS is not case sensitive.) A sketch of the resulting UNC paths appears after this list.
OS folder: The folder in which Windows is installed is shared as admin$
Fax cache: The folder in which faxed pages and cover pages are cached is shared as fax$
IPC shares: This area, which is used for inter-process communication via named pipes and is not part of the file system, is shared as ipc$
Printers folder: This virtual folder, which contains objects that represent installed printers is shared as print$
Domain controller shares: The Windows Server family of operating systems creates two domain controller-specific shares called sysvol and netlogon, which do not have dollar signs ($) appended to their names.
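A minimal sketch of the naming rule for disk-volume shares follows; the host name is a hypothetical example.

```python
# Build the UNC path of an administrative share: drive letter plus "$".
def admin_share_unc(host: str, drive_letter: str, path: str = "") -> str:
    unc = rf"\\{host}\{drive_letter.upper()}$"
    return rf"{unc}\{path}" if path else unc

print(admin_share_unc("SERVER01", "c", r"Windows\Temp"))
# \\SERVER01\C$\Windows\Temp
```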
Characteristics
Administrative shares have the following characteristics:
Hidden: The "$" appended to the end of the share name means that it is a hidden share. Windows will not list such shares among those it defines in typical queries by remote clients to obtain the list of shares. One needs to know the name of an administrative share in order to access it. Not every hidden share is an administrative share; in other words, ordinary hidden shares may be created at user's discretion.
Automatically created: Administrative shares are created by Windows, not a network administrator. If deleted, they will be automatically recreated.
Administrative shares are not created by Windows XP Home Edition.
Management
The administrative shares can be deleted just as any other network share, only to be recreated automatically at the next reboot. It is, however, possible to disable administrative shares.
Disabling administrative shares is not without caveats. Previous Versions for local files, a feature of Windows Vista and Windows 7, requires administrative shares to operate.
Restrictions
Windows XP implements "simple file sharing" (also known as "ForceGuest"), a feature that can be enabled on computers that are not part of a Windows domain. When enabled, it authenticates all incoming access requests to network shares as "Guest", a user account with very limited access rights in Windows. This effectively disables access to administrative shares.
By default, Windows Vista and later use User Account Control (UAC) to enforce security. One of UAC's features denies administrative rights to a user who accesses network shares on the local computer over a network, unless the accessing user is registered on a Windows domain or using the built-in Administrator account. If the computer is not in a Windows domain, it is possible to allow administrative share access to all accounts with administrative permissions by adding the LocalAccountTokenFilterPolicy value to the registry.
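A hedged sketch of that registry change follows, using Python's standard winreg module (Windows only, run with administrative rights). The key path and value name are the documented ones; setting the value weakens UAC's remote-access restriction, so this is illustrative rather than a recommendation.

```python
import winreg

# HKLM key that controls UAC token filtering for remote local accounts.
KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = do not filter tokens: local admin accounts keep administrative
    # rights over the network, re-enabling access to admin shares.
    winreg.SetValueEx(key, "LocalAccountTokenFilterPolicy", 0,
                      winreg.REG_DWORD, 1)
```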
See also
Server Message Block (SMB) – the infrastructure responsible for file and printer sharing in Windows
Distributed File System (DFS) – another infrastructure that makes file sharing possible
My Network Places – Windows graphical user interface for accessing network shares
Network Access Protection (NAP) – a Microsoft network security technology
Conficker – an infamous malware that exploited a combination of weak passwords, security vulnerabilities, administrative negligence and admin$ share to breach a computer over a network and propagate itself
References
Microsoft server technology
Data security
Windows communication and services | Administrative share | [
"Engineering"
] | 772 | [
"Cybersecurity engineering",
"Data security"
] |
997,871 | https://en.wikipedia.org/wiki/Ligne | The ligne ( ), or line or Paris line, is a historic unit of length used in France and elsewhere prior to the adoption of the metric system in the late 18th century, and used in various sciences after that time. The loi du 19 frimaire an VIII (Law of 10 December 1799) states that one metre is equal to exactly 443.296 French lines.
It is vestigially retained today by French and Swiss watchmakers to measure the size of watch casings, in button making and in ribbon manufacture.
Current use
Watchmaking
There are 12 lignes to one French inch (pouce). The standardized conversion for a ligne is 2.2558291 mm (1 mm = 0.443296 ligne), and it is abbreviated with the letter L or represented by the triple prime (‴). One ligne is the equivalent of 0.0888 international inch.
This is comparable in size to the British measurement called "line" (one-twelfth of an English inch), used prior to 1824. (The French inch at that time was slightly larger than the English one, but the system of 12 inches to a foot and 12 lines to an inch was the same in both cases.)
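The watchmaking conversion above is easy to apply in code; the 13-ligne movement below is a hypothetical example.

```python
# 1 ligne = 2.2558291 mm (equivalently, 1 mm = 0.443296 ligne).
MM_PER_LIGNE = 2.2558291

def lignes_to_mm(lignes: float) -> float:
    return lignes * MM_PER_LIGNE

print(f"{lignes_to_mm(13):.2f} mm")  # a 13-ligne movement is about 29.33 mm
```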
Hatmaking
Ligne is used in measuring the width of ribbons in men's hat bands, at 11.26 lignes per international inch.
Button making
The button trade also uses the term ligne (sometimes "line"), but with a substantially different definition.
See also
Notes
References
Obsolete units of measurement
Units of length | Ligne | [
"Mathematics"
] | 319 | [
"Obsolete units of measurement",
"Quantity",
"Units of measurement",
"Units of length"
] |
997,986 | https://en.wikipedia.org/wiki/Entner%E2%80%93Doudoroff%20pathway | The Entner–Doudoroff pathway (ED pathway) is a metabolic pathway that is most notable in Gram-negative bacteria, certain Gram-positive bacteria and archaea. Glucose is the substrate in the ED pathway, and through a series of enzyme-assisted chemical reactions it is catabolized into pyruvate. Entner and Doudoroff (1952) and MacGee and Doudoroff (1954) first reported the ED pathway in the bacterium Pseudomonas saccharophila. While originally thought to be just an alternative to glycolysis (EMP) and the pentose phosphate pathway (PPP), some studies now suggest that the EMP pathway's original role may have been anabolic, repurposed over time for catabolism, which would make the ED pathway the older of the two. Recent studies have also shown that the ED pathway may be more widespread than first predicted, with evidence supporting its presence in cyanobacteria, ferns, algae, mosses, and plants. Specifically, there is direct evidence that Hordeum vulgare uses the Entner–Doudoroff pathway.
Distinct features of the Entner–Doudoroff pathway are that it:
Uses the unique enzymes 6-phosphogluconate dehydratase and 2-keto-3-deoxy-6-phosphogluconate (KDPG) aldolase, along with enzymes common to other metabolic pathways, to catabolize glucose to pyruvate.
In the process of breaking down glucose, a net yield of 1 ATP is formed per glucose molecule processed, as well as 1 NADH and 1 NADPH. In comparison, glycolysis has a net yield of 2 ATP molecules and 2 NADH molecules per glucose molecule metabolized. This difference in energy production may be offset by the difference in the amount of protein needed per pathway.
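The yields stated above can be tallied side by side; the figures are those given in the text, per molecule of glucose.

```python
# Net yields per glucose, as stated above.
yields = {
    "Entner-Doudoroff": {"ATP": 1, "NADH": 1, "NADPH": 1},
    "glycolysis (EMP)": {"ATP": 2, "NADH": 2, "NADPH": 0},
}
for pathway, y in yields.items():
    print(f"{pathway}: {y['ATP']} ATP, {y['NADH']} NADH, {y['NADPH']} NADPH")
```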
Archaeal variations
Archaea have variants of the Entner–Doudoroff pathway. These variants are called the semiphosphorylative ED (spED) and the nonphosphorylative ED (npED):
spED is found in halophilic euryarchaea and Clostridium species.
In spED, the difference is where phosphorylation occurs. In the standard ED, phosphorylation occurs at the first step from glucose to G-6-P. In spED, the glucose is first oxidized to gluconate via a glucose dehydrogenase. Next, gluconate dehydratase converts gluconate into 2-keto-3-deoxy-gluconate (KDG). The next step is where phosphorylation occurs as KDG kinase converts KDG into KDPG. KDPG is then cleaved into glyceraldehyde 3-phosphate (GAP) and pyruvate via KDPG aldolase and follows the same EMP pathway as the standard ED. This pathway produces the same amount of ATP as the standard ED.
npED is found in the thermoacidophilic Sulfolobus, the euryarchaeote Thermoplasma (Tp.) acidophilum, and Picrophilus species.
In npED, there is no phosphorylation at all. The pathway is the same as spED, but instead of phosphorylation occurring at KDG, KDG is cleaved into glyceraldehyde (GA) and pyruvate via KDG aldolase. From here, GA is oxidized via GA dehydrogenase into glycerate. The glycerate is phosphorylated by glycerate kinase into 2-phosphoglycerate (2PG). 2PG then follows the same pathway as ED and is converted into pyruvate via enolase (ENO) and pyruvate kinase (PK). In this pathway, though, no ATP is produced.
Some archaea, such as the crenarchaeote Sulfolobus solfataricus and Thermoproteus tenax, have what is called branched ED. In branched ED, the organism has both spED and npED, which are both operative and work in parallel.
Organisms that use the Entner–Doudoroff pathway
There are several bacteria that use the Entner–Doudoroff pathway for metabolism of glucose and are unable to catabolize it via glycolysis, because they lack essential glycolytic enzymes such as phosphofructokinase (as seen in Pseudomonas). Genera in which the pathway is prominent include the Gram-negative genera listed below, Gram-positive bacteria such as Enterococcus faecalis, and several members of the Archaea, the second distinct branch of the prokaryotes (and the "third domain of life", after the prokaryotic Eubacteria and the eukaryotes). Due to the low energy yield of the ED pathway, anaerobic bacteria seem mainly to use glycolysis, while aerobic and facultative anaerobes are more likely to have the ED pathway. This is thought to be because aerobic and facultative anaerobes have other, non-glycolytic pathways for creating ATP, such as oxidative phosphorylation, so the ED pathway is favored for the lesser amount of protein it requires. Anaerobic bacteria, by contrast, must rely on glycolysis to create a greater percentage of their required ATP, so its 2 ATP yield is favored over the ED pathway's 1 ATP.
Examples of bacteria using the pathway are:
Pseudomonas, a genus of Gram-negative bacteria
Azotobacter, a genus of Gram-negative bacteria
Rhizobium, a plant root-associated and plant differentiation-active genus of Gram-negative bacteria
Agrobacterium, a plant pathogen (oncogenic) genus of Gram-negative bacteria, also of biotechnologic use
Escherichia coli, a Gram-negative bacterium
Enterococcus faecalis, a Gram-positive bacterium
Zymomonas mobilis, a Gram-negative facultative anaerobe
Xanthomonas campestris, a Gram-negative bacterium which uses this pathway as its main pathway for providing energy.
To date there is evidence of eukaryotes using the pathway, suggesting it may be more widespread than previously thought:
Hordeum vulgare, barley, uses the Entner–Doudoroff pathway.
Phaeodactylum tricornutum, a model diatom species, presents functional phosphogluconate dehydratase and deoxyphosphogluconate aldolase genes in its genome
The Entner–Doudoroff pathway is present in many species of Archaea (caveat, see following), whose metabolisms "resemble... in [their] complexity those of Bacteria and lower Eukarya", and often include both this pathway and the Embden-Meyerhof-Parnas pathway of glycolysis, except most often as unique, modified variants.
Catalyzing enzymes
Conversion of glucose to glucose-6-phosphate
The first step in ED is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration low, promoting continuous transport of glucose into the cell through the plasma membrane transporters. In addition, it blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
Conversion of glucose-6-phosphate to 6-phosphogluconolactone
The G6P is then converted to 6-phosphogluconolactone (6PGL) in the presence of the enzyme glucose-6-phosphate dehydrogenase (an oxido-reductase) and the co-enzyme nicotinamide adenine dinucleotide phosphate (NADP+), which is reduced to nicotinamide adenine dinucleotide phosphate hydrogen (NADPH) along with a free hydrogen ion, H+.
Conversion of 6-phosphogluconolactone to 6-phosphogluconic acid
The 6PGL is converted into 6-phosphogluconic acid in the presence of a hydrolase enzyme.
Conversion of 6-phosphogluconic acid to 2-keto-3-deoxy-6-phosphogluconate
The 6-phosphogluconic acid is converted to 2-keto-3-deoxy-6-phosphogluconate (KDPG) in the presence of enzyme 6-phosphogluconate dehydratase; in the process, a water molecule is released to the surroundings.
Conversion of 2-keto-3-deoxy-6-phosphogluconate to pyruvate and glyceraldehyde-3-phosphate
The KDPG is then converted into pyruvate and glyceraldehyde-3-phosphate in the presence of the enzyme KDPG aldolase. For the pyruvate, the ED pathway ends here, and the pyruvate then enters further metabolic pathways (e.g., the TCA cycle and the electron transport chain).
The other product (glyceraldehyde-3-phosphate) is further converted by entering into the glycolysis pathway, via which it, too, gets converted into pyruvate for further metabolism.
Conversion of glyceraldehyde-3-phosphate to 1,3-bisphosphoglycerate
The G3P is converted to 1,3-bisphosphoglycerate in the presence of enzyme glyceraldehyde-3-phosphate dehydrogenase (an oxido-reductase).
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (HPO42−), which dissociates to contribute the extra H+ ion and gives a net charge of -3 on both sides.
Conversion of 1,3-bisphosphoglycerate to 3-phosphoglycerate
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate.
Conversion of 3-phosphoglycerate to 2-phosphoglycerate
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Conversion of 2-phosphoglycerate to phosphoenolpyruvate
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
Cofactors: 2 Mg2+: one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration
Conversion of phosphoenol pyruvate to pyruvate
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
Cofactors: Mg2+
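To make the cofactor bookkeeping of the steps above explicit, here is a minimal Python sketch; the step list mirrors the text, but the accounting scheme itself is mine, not from the article:

```python
from collections import Counter

# One glucose molecule through the standard ED pathway: (enzyme, cofactor deltas).
# Only steps that consume or produce ATP/NADH/NADPH carry a non-empty delta.
ED_STEPS = [
    ("hexokinase",                                {"ATP": -1}),
    ("glucose-6-phosphate dehydrogenase",         {"NADPH": +1}),
    ("hydrolase (6PGL -> 6-phosphogluconic acid)", {}),
    ("6-phosphogluconate dehydratase",            {}),
    ("KDPG aldolase",                             {}),  # yields pyruvate + G3P
    # the single G3P continues through the lower half of glycolysis:
    ("glyceraldehyde-3-phosphate dehydrogenase",  {"NADH": +1}),
    ("phosphoglycerate kinase",                   {"ATP": +1}),
    ("phosphoglycerate mutase",                   {}),
    ("enolase",                                   {}),
    ("pyruvate kinase",                           {"ATP": +1}),
]

totals = Counter()
for _enzyme, delta in ED_STEPS:
    totals.update(delta)

print(dict(totals))   # {'ATP': 1, 'NADPH': 1, 'NADH': 1} -- the net yield stated above
```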
References
Further reading
Bräsen C.; D. Esser; B. Rauch & B. Siebers (2014) "Carbohydrate metabolism in Archaea: current insights into unusual enzymes and pathways and their regulation," Microbiol. Mol. Biol. Rev. 78(1; March), pp. 89–175, DOI 10.1128/MMBR.00041-13, accessed 3 August 2015.
Ahmed, H.; B. Tjaden; R. Hensel & B. Siebers (2004) "Embden–Meyerhof–Parnas and Entner–Doudoroff pathways in Thermoproteus tenax: metabolic parallelism or specific adaptation?," Biochem. Soc. Trans. 32(2; April 1), pp. 303–304, DOI 10.1042/bst0320303, accessed 3 August 2015.
Conway T. (1992) "The Entner–Doudoroff pathway: history, physiology and molecular biology," FEMS Microbiol. Rev., 9(1; September), pp. 1–27, accessed 3 August 2015.
Snyder, L., Peters, J. E., Henkin, T. M., & Champness, W. (2013). Molecular genetics of bacteria. American Society of Microbiology.
Biochemical reactions
Carbohydrate metabolism
Metabolic pathways | Entner–Doudoroff pathway | [
"Chemistry",
"Biology"
] | 2,933 | [
"Carbohydrate metabolism",
"Biochemistry",
"Biochemical reactions",
"Carbohydrate chemistry",
"Metabolic pathways",
"Metabolism"
] |
997,997 | https://en.wikipedia.org/wiki/Vih%C4%81ra | Vihāra generally refers to a Buddhist temple or Buddhist monastery for Buddhist renunciates, mostly in the Indian subcontinent. The concept is ancient and in early Pali texts, it meant any arrangement of space or facilities for dwellings. The term evolved into an architectural concept wherein it refers to living quarters for monks with an open shared space or courtyard, particularly in Buddhism. The term is also found in Jain monastic literature, usually referring to temporary refuge for wandering monks or nuns during the annual Indian monsoons. In modern Jainism, the monks continue to wander from town to town except during the rainy season (chaturmasya), and the term "vihara" refers to their wanderings.
Vihara or vihara hall has a more specific meaning in the architecture of India, especially ancient Indian rock-cut architecture. Here it means a central hall, with small cells connected to it, sometimes with beds carved from the stone. Some have a shrine cell set back at the centre of the back wall, containing a stupa in early examples, or a Buddha statue later. Typical large sites such as the Ajanta Caves, Aurangabad Caves, Karli Caves, and Kanheri Caves contain several viharas. Some included a chaitya or worship hall nearby. The vihara originated as a shelter for monks when it rains.
Etymology and nomenclature
The word means a form of rest house, temple or monastery in the ascetic traditions of India, particularly for a group of monks. It particularly referred to a hall that was used as a temple or where monks met and some walked about. In the context of the performative arts, the term means the theatre, playhouse, convent or temple compound to meet, perform or relax in. Later it referred to a form of temple or monastery construction in Buddhism and Jainism, wherein the design has a central hall and attached separate shrines for residence, either for monks or for deities and sacred figures such as the Tirthankaras or Gautama Buddha. The word means a Jain or Buddhist temple or "dwelling, waiting place" in many medieval-era inscriptions and texts, from vi-har, which means "to construct".
It contrasts with terms meaning "forest". In the medieval era, the term meant any monastery, particularly for Buddhist monks. Matha is another term for monastery in the Buddhist tradition, today normally used for Hindu establishments.
The eastern Indian state of Bihar derives its name from vihāra due to the abundance of Buddhist monasteries in that area. The word has also been borrowed in Malay as biara, denoting a monastery or other non-Muslim place of worship. It is called a wihan in Thai, and vĭhéar in Khmer. In Burmese, wihara means "monastery", but the native Burmese word kyaung is preferred. Monks wandering from place to place preaching and seeking alms often stayed together in the sangha. In the Punjabi language, an open space inside a home is called by a related term.
In Korea, Japan, Vietnam and China, the word for a Buddhist temple or monastery seems to have a different origin. The Japanese word for a Buddhist temple is tera; it was anciently also written phonetically 天良 tera, and it is cognate with the Modern Korean Chǒl from Middle Korean Tiel, the Jurchen Taira and the reconstructed Old Chinese *dɘiaʁ, all meaning "Buddhist monastery". These words are apparently derived from the Aramaic word for "monastery" dērā/dairā/dēr (from the root dwr "to live together"), rather than from the unrelated Indian word for monastery vihara, and may have been transmitted to China by the first Central Asian translators of Buddhist scriptures, such as An Shigao or Lokaksema.
Origins
Viharas as pleasure centers
During the 3rd-century BCE era of Ashoka, vihara yatras were travel stops aimed at enjoyments, pleasures and hobbies such as hunting. These contrasted with dharma yatras which related to religious pursuits and pilgrimage. After Ashoka converted to Buddhism, states Lahiri, he started dharma yatras around mid 3rd century BCE instead of hedonistic royal vihara yatras.
Viharas as monasteries
The early history of viharas is unclear. Monasteries in the form of caves are dated to centuries before the start of the common era, for Ajivikas, Buddhists and Jainas. The rock-cut architecture found in cave viharas from the 2nd-century BCE have roots in the Maurya Empire period. In and around the Bihar state of India are a group of residential cave monuments all dated to be from pre-common era, reflecting the Maurya architecture. Some of these have Brahmi script inscription which confirms their antiquity, but the inscriptions were likely added to pre-existing caves. The oldest layer of Buddhist and Jain texts mention legends of the Buddha, the Jain Tirthankaras or sramana monks living in caves. If these records derived from an oral tradition accurately reflect the significance of monks and caves in the times of the Buddha and the Mahavira, then cave residence tradition dates back to at least the 5th century BCE. According to Allchin and Erdosy, the legend of First Buddhist Council is dated to a period just after the death of the Buddha. It mentions monks gathering at a cave near Rajgiri, and this dates it in pre-Mauryan times. However, the square courtyard with cells architecture of vihara, state Allchin and Erdosy, is dated to the Mauryan period. The earlier monastic residences of Ajivikas, Buddhists, Hindus, and Jains were likely outside rock cliffs and made of temporary materials and these have not survived.
The earliest known gift of immovable property for monastic purposes ever recorded in an Indian inscription is credited to Emperor Ashoka, and it is a donation to the Ajivikas. According to Johannes Bronkhorst, this created competitive financial pressures on all traditions, including the Hindu Brahmins. This may have led to the development of viharas as shelters for monks, and evolution in the Ashrama concept to agraharas or Hindu monasteries. These shelters were normally accompanied by donation of revenue from villages nearby, who would work and support these cave residences with food and services. The Karle inscription dated to the 1st century CE donates a cave and nearby village, states Bronkhorst, "for the support of the ascetics living in the caves at Valuraka [Karle] without any distinction of sect or origin". Buddhist texts from Bengal, dated to centuries later, use the term asrama-vihara or agrahara-vihara for their monasteries.
Buddhist viharas or monasteries may be described as a residence for monks, a centre for religious work and meditation and a centre of Buddhist learning. Reference to five kinds of dwellings (Pancha Lenani) namely, Vihara, Addayoga, Pasada, Hammiya and Guha is found in the Buddhist canonical texts as fit for monks. Of these only the Vihara (monastery) and Guha (Cave) have survived.
At some stage, as in other Indian religious traditions, the wandering monks of the Sangha, dedicated to asceticism and the monastic life, wandered from place to place. During the rainy season (cf. vassa) they stayed in temporary shelters. In Buddhist theology relating to rebirth and merit earning, it was considered an act of merit not only to feed a monk but also to shelter him, and sumptuous monasteries were created by rich lay devotees.
Architecture
The only substantial remains of very early viharas are in the rock-cut complexes, mostly in north India, the Deccan in particular, but this is an accident of survival. Originally structural viharas of stone or brick would probably have been at least as common everywhere, and the norm in the south. By the second century BCE a standard plan for a vihara was established; these form the majority of Buddhist rock-cut "caves". It consisted of a roughly square hall, in rock-cut cases, or probably an open court in structural examples, off which there were a number of small cells. Rock-cut cells are often fitted with rock-cut platforms for beds and pillows. The front wall had one or more entrances, and often a verandah. Later the back wall facing the entrance had a fairly small shrine-room, often reached through an ante-chamber. Initially these held stupas, but later a large sculpted Buddha image, sometimes with reliefs on the walls. The verandah might also have sculpture, and in some cases the walls of the main hall. Paintings were perhaps more common, but these rarely survive, except in a few cases such as Caves 2, 10, 11 and 17 at the Ajanta Caves. As later rock-cut viharas are often of up to three storeys, this was also probably the case with the structural ones.
As the vihara acquired a central image, it came to take over the function of the chaitya worship hall, and eventually these ceased to be built. This was despite the rock-cut vihara shrine room usually offering no path for circumambulation or pradakshina, an important ritual practice.
In early medieval era, Viharas became important institutions and a part of Buddhist Universities with thousands of students, such as Nalanda. Life in "Viharas" was codified early on. It is the object of a part of the Pali canon, the Vinaya Pitaka or "basket of monastic discipline". Shalban Vihara in Bangladesh is an example of a structural monastery with 115 cells, where the lower parts of the brick-built structure have been excavated. Somapura Mahavihara, also in Bangladesh, was a larger vihara, mostly 8th-century, with 177 cells around a huge central temple.
Variants in rock-cut viharas
Usually the standard form as described above is followed, but there are some variants. Two vihara halls, Cave 5 at Ellora and Cave 11 at Kanheri, have very low platforms running most of the length of the main hall. These were probably used as some combination of benches or tables for dining, desks for study, and possibly beds. They are often termed "dining-hall" or the "Durbar Hall" at Kanheri, on no good evidence.
Cave 11 at the Bedse Caves is a fairly small 1st-century vihara, with nine cells in the interior and originally four around the entrance, and no shrine room. It is distinguished by elaborate gavaksha and railing relief carving around the cell-doors, but especially by having a rounded roof and apsidal far end, like a chaitya hall.
History
The earliest Buddhist rock-cut cave abodes and sacred places are found in the western Deccan dating back to the 3rd century BCE. These earliest rock-cut caves include the Bhaja Caves, the Karla Caves, and some of the Ajanta Caves.
Vihara with central shrine containing devotional images of the Buddha, dated to about the 2nd century CE are found in the northwestern area of Gandhara, in sites such as Jaulian, Kalawan (in the Taxila area) or Dharmarajika, which states Behrendt, possibly were the prototypes for the 4th century monasteries such as those at Devnimori in Gujarat. This is supported by the discovery of clay and bronze Buddha statues, but it is unclear if the statue is of a later date. According to Behrendt, these "must have been the architectural prototype for the later northern and western Buddhist shrines in the Ajanta Caves, Aurangabad, Ellora, Nalanda, Ratnagiri and other sites". Behrendt's proposal follows the model that states the northwestern influences and Kushana era during the 1st and 2nd century CE triggered the development of Buddhist art and monastery designs. In contrast, Susan Huntington states that this late nineteenth and early twentieth century model is increasingly questioned by the discovery of pre-Kushana era Buddha images outside the northwestern territories. Further, states Huntington, "archaeological, literary, and inscriptional evidence" such as those in Madhya Pradesh cast further doubts. Devotional worship of Buddha is traceable, for example, to Bharhut Buddhist monuments dated between 2nd and 1st century BCE. The Krishna or Kanha Cave (Cave 19) at Nasik has the central hall with connected cells, and it is generally dated to about the 1st century BCE.
The early stone viharas mimicked the timber construction that likely preceded them.
Inscriptional evidence on stone and copper plates indicate that Buddhist viharas were often co-built with Hindu and Jain temples. The Gupta Empire era witnessed the building of numerous viharas, including those at the Ajanta Caves. Some of these viharas and temples though evidenced in texts and inscriptions are no longer physically found, likely destroyed in later centuries by natural causes or due to war.
Viharas as a source of major Buddhist traditions
As more people joined the Buddhist monastic sangha, the senior monks adopted a code of discipline which came to be known in the Pali Canon as the Vinaya texts. These texts are mostly concerned with the rules of the sangha. The rules are preceded by stories telling how the Buddha came to lay them down, and followed by explanations and analysis. According to the stories, the rules were devised on an ad hoc basis as the Buddha encountered various behavioral problems or disputes among his followers. Each major early Buddhist tradition had its own variant code of discipline for vihara life. Major viharas appointed a vihara-pala, an officer who managed the vihara, settled disputes, determined the sangha's consensus and rules, and held hold-outs to that consensus.
Three early influential monastic fraternities are traceable in Buddhist history. The Mahavihara established by Mahinda was the oldest. Later, in 1st century BCE, King Vattagamani donated the Abhayagiri vihara to his favored monk, which led the Mahavihara fraternity to expel that monk. In 3rd century CE, this repeated when King Mahasena donated the Jetavana vihara to an individual monk, which led to his expulsion. The Mahinda Mahavihara led to the orthodox Theravada tradition. The Abhayagiri vihara monks, rejected and criticized by the orthodox Buddhist monks, were more receptive to heterodox ideas and they nurtured the Mahayana tradition. The Jetavana vihara monks vacillated between the two traditions, blending their ideas.
Viharas of the Pāla era
A range of monasteries grew up during the Pāla period in ancient Magadha (modern Bihar) and Bengal. According to Tibetan sources, five great mahaviharas stood out: Vikramashila, the premier university of the era; Nalanda, past its prime but still illustrious, Somapura, Odantapurā, and Jagaddala. According to Sukumar Dutt, the five monasteries formed a network, were supported and supervised by the Pala state. Each of the five had their own seal and operated like a corporation, serving as centers of learning.
Other notable monasteries of the Pala Empire were Traikuta, Devikota (identified with ancient Kotivarsa, 'modern Bangarh'), and Pandit Vihara. Excavations jointly conducted by the Archaeological Survey of India and University of Burdwan in 1971–1972 to 1974–1975 yielded a Buddhist monastic complex at Monorampur, near Bharatpur via Panagarh Bazar in the Bardhaman district of West Bengal. The date of the monastery may be ascribed to the early medieval period. Recent excavations at Jagjivanpur (Malda district, West Bengal) revealed another Buddhist monastery (Nandadirghika-Udranga Mahavihara) of the ninth century.
Nothing of the superstructure has survived. A number of monastic cells facing a rectangular courtyard have been found. A notable feature is the presence of circular corner cells. It is believed that the general layout of the monastic complex at Jagjivanpur is by and large similar to that of Nalanda. Besides these, scattered references to some monasteries are found in epigraphic and other sources. Among them Pullahari (in western Magadha), Halud Vihara (45 km south of Paharpur), Parikramana vihara and Yashovarmapura vihara (in Bihar) deserve mention. Other important structural complexes have been discovered at Mainamati (Comilla district, Bangladesh). Remains of quite a few viharas have been unearthed here and the most elaborate is the Shalban Vihara. The complex consists of a fairly large vihara of the usual plan of four ranges of monastic cells round a central court, with a temple in cruciform plan situated in the centre. According to a legend on a seal (discovered at the site) the founder of the monastery was Bhavadeva, a ruler of the Deva dynasty.
Southeast Asia
As Buddhism spread in Southeast Asia, monasteries were built by local kings. The term vihara is still sometimes used to refer to the monasteries/temples, also known as wat, but in Thailand it also took on a narrower meaning referring to certain buildings in the temple complex. The wihan is a building, apart from the main ubosot (ordination hall) in which a Buddha image is enshrined. In many temples, the wihan serves as a sermon hall or an assembly hall where ceremonies, such as the kathina, are held. Many of these Theravada viharas feature a Buddha image that is considered sacred after it is formally consecrated by the monks.
Image gallery
See also
List of Buddhist universities across the world
Ashram
Bahal, Nepal
Brahma-vihara
Gal Vihara
Kyaung
Mahavihara
Mahiyangana Raja Maha Vihara
Nava Vihara
Tissamaharama Raja Maha Vihara
Vihara Buddhagaya Watugong
Wat – Buddhist temple in Cambodia, Laos or Thailand.
Notes
References
Harle, J.C., The Art and Architecture of the Indian Subcontinent, 2nd edn. 1994, Yale University Press Pelican History of Art,
Michell, George, The Penguin Guide to the Monuments of India, Volume 1: Buddhist, Jain, Hindu, 1989, Penguin Books,
External links
Lay Buddhist Practice: The Rains Residence – A short article on the meaning of Vassa, and its observation by lay Buddhists.
Mapping Buddhist Monasteries A project aiming to catalogue, crosscheck, verify and interrelate, tag and georeference, chronoreference and map online (using KML markup & Google Maps technology).
V
V
Buddhist architecture
Architectural history
Sanskrit words and phrases | Vihāra | [
"Engineering"
] | 3,854 | [
"Architectural history",
"Architecture"
] |
998,070 | https://en.wikipedia.org/wiki/Node%20%28physics%29 | A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes.
Explanation
Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string.
In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other.
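In symbols (a standard textbook derivation consistent with the description above; the notation is mine, not from the article), two equal counter-propagating waves sum to

```latex
y(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\sin(kx)\cos(\omega t)
```

The amplitude factor 2A sin(kx) vanishes wherever sin(kx) = 0, giving nodes at x = 0, λ/2, λ, …, and is maximal midway between them, giving antinodes at x = λ/4, 3λ/4, …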
In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node.
In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the intricate patterns of lines resulting are called Chladni figures.
In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node.
Nodes are the points of zero displacement, not the points where two constituent waves intersect.
Boundary conditions
Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection:
Fixed boundary: Examples of this type of boundary are the attachment point of a guitar string, the closed end of an open pipe like an organ pipe or a woodwind pipe, the periphery of a drumhead, a transmission line with the end short-circuited, or the mirrors at the ends of a laser cavity. In this type, the amplitude of the wave is forced to zero at the boundary, so there is a node at the boundary, and the other nodes occur at multiples of half a wavelength from it: x = 0, λ/2, λ, 3λ/2, …
Free boundary: Examples of this type are an open-ended organ or woodwind pipe, the ends of the vibrating resonator bars in a xylophone, glockenspiel or tuning fork, the ends of an antenna, or a transmission line with an open end. In this type the derivative (slope) of the wave's amplitude (in sound waves the pressure, in electromagnetic waves the current) is forced to zero at the boundary. So there is an amplitude maximum (antinode) at the boundary, the first node occurs a quarter wavelength from the end, and the other nodes are at half-wavelength intervals from there: x = λ/4, 3λ/4, 5λ/4, …
Examples
Sound
A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in increased pressure and density. During expansion the molecules are forced apart, resulting in decreased pressure and density.
The number of nodes in a specified length is directly proportional to the frequency of the wave.
Occasionally on a guitar, violin, or other stringed instrument, nodes are used to create harmonics. When the finger is placed on top of the string at a certain point, but does not push the string all the way down to the fretboard, a third node is created (in addition to the bridge and nut) and a harmonic is sounded. During normal play when the frets are used, the harmonics are always present, although they are quieter. With the artificial node method, the overtone is louder and the fundamental tone is quieter. If the finger is placed at the midpoint of the string, the first overtone is heard, which is an octave above the fundamental note which would be played, had the harmonic not been sounded. When two additional nodes divide the string into thirds, this creates an octave and a perfect fifth (twelfth). When three additional nodes divide the string into quarters, this creates a double octave. When four additional nodes divide the string into fifths, this creates a double-octave and a major third (17th). The octave, major third and perfect fifth are the three notes present in a major chord.
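The arithmetic behind these fractions follows from standard string physics (notation mine, not from the article): a string of length L fixed at both ends, carrying waves of speed v, vibrates at

```latex
f_n = \frac{n v}{2L} = n f_1, \qquad n = 1, 2, 3, \ldots
```

Touching the string at L/2 forces a node there and leaves the n = 2 mode (one octave up); touching at L/3 leaves n = 3 (an octave plus a fifth); at L/4, n = 4 (a double octave); and at L/5, n = 5 (two octaves and a major third).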
The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument.
Waves in two or three dimensions
In two dimensional standing waves, nodes are curves (often straight lines or circles when displayed on simple geometries.) For example, sand collects along the nodes of a vibrating Chladni plate to indicate regions where the plate is not moving.
In chemistry, quantum mechanical waves, or "orbitals", are used to describe the wave-like properties of electrons. Many of these quantum waves have nodes and antinodes as well. The number and position of these nodes and antinodes give rise to many of the properties of an atom or covalent bond. Atomic orbitals are classified according to the number of radial and angular nodes. A radial node for the hydrogen atom is a sphere that occurs where the wavefunction for an atomic orbital is equal to zero, while the angular node is a flat plane.
Molecular orbitals are classified according to bonding character. Molecular orbitals with an antinode between nuclei are very stable, and are known as "bonding orbitals", which strengthen the bond. In contrast, molecular orbitals with a node between nuclei will not be stable due to electrostatic repulsion and are known as "anti-bonding orbitals", which weaken the bond. Another such quantum mechanical concept is the particle in a box, where the number of nodes of the wavefunction can help determine the quantum energy state: zero nodes corresponds to the ground state, one node corresponds to the first excited state, and so on. In general, if one arranges the eigenstates in order of increasing energies, E1 < E2 < E3 < …, the eigenfunctions likewise fall in the order of increasing number of nodes; the nth eigenfunction has n−1 nodes, between each of which the following eigenfunctions have at least one node.
References
Concepts in physics
Sound
Musical tuning
Waves | Node (physics) | [
"Physics"
] | 1,469 | [
"Waves",
"Physical phenomena",
"Motion (physics)",
"nan"
] |
998,103 | https://en.wikipedia.org/wiki/Bioequivalence | Bioequivalence is a term in pharmacokinetics used to assess the expected in vivo biological equivalence of two proprietary preparations of a drug. If two products are said to be bioequivalent it means that they would be expected to be, for all intents and purposes, the same.
One article defined bioequivalence by stating that, "two pharmaceutical products are bioequivalent if they are pharmaceutically equivalent and their bioavailabilities (rate and extent of availability) after administration in the same molar dose are similar to such a degree that their effects, with respect to both efficacy and safety, can be expected to be essentially the same. Pharmaceutical equivalence implies the same amount of the same active substance(s), in the same dosage form, for the same route of administration and meeting the same or comparable standards."
For The World Health Organization (WHO) "two pharmaceutical products are bioequivalent if they are pharmaceutically equivalent or pharmaceutical alternatives, and their bioavailabilities, in terms of rate (Cmax and tmax) and extent of absorption (area under the curve), after administration of the same molar dose under the same conditions, are similar to such a degree that their effects can be expected to be essentially the same".
The United States Food and Drug Administration (FDA) has defined bioequivalence as, "the absence of a significant difference in the rate and extent to which the active ingredient or active moiety in pharmaceutical equivalents or pharmaceutical alternatives becomes available at the site of drug action when administered at the same molar dose under similar conditions in an appropriately designed study."
Bioequivalence
In determining bioequivalence between two products, such as a commercially available branded product and a potential to-be-marketed generic product, pharmacokinetic studies are conducted in which each of the preparations is administered in a cross-over study (sometimes a parallel study, when a cross-over study is not feasible) to volunteer subjects, generally healthy individuals but occasionally patients. Serum/plasma samples are obtained at prescribed times and assayed for parent drug (or occasionally metabolite) concentration. When blood concentration levels are neither feasible nor suitable for comparing the two products (e.g. inhaled corticosteroids), pharmacodynamic endpoints rather than pharmacokinetic endpoints (see below) are used for comparison. For a pharmacokinetic comparison, the plasma concentration data are used to assess key pharmacokinetic parameters such as area under the curve (AUC), peak concentration (Cmax), time to peak concentration (tmax), and absorption lag time (tlag); a sketch of how these are extracted from sampled data is given below. Testing should be conducted at several different doses, especially when the drug displays non-linear pharmacokinetics.
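As an illustration of extracting the basic parameters from sampled concentration data (a minimal Python sketch; the times and concentrations are invented, and real analyses also add an extrapolated AUC(0–∞) term):

```python
import numpy as np

# illustrative sampling times (h) and plasma concentrations (ng/mL)
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])
c = np.array([0.0, 45, 80, 110, 95, 60, 35, 10])

auc_0_t = np.trapz(c, t)       # AUC(0-t) by the linear trapezoidal rule
cmax = c.max()                 # peak concentration
tmax = t[c.argmax()]           # time of peak concentration

print(f"AUC(0-t) = {auc_0_t:.0f} ng*h/mL, Cmax = {cmax:.0f} ng/mL, tmax = {tmax} h")
```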
In addition to data from bioequivalence studies, other data may need to be submitted to meet regulatory requirements for bioequivalence. Such evidence may include:
analytical method validation
in vitro-in vivo correlation studies (IVIVC)
Regulatory definition
The World Health Organization
The World Health Organization considers two formulations bioequivalent if the 90% confidence interval for the multisource (generic) product/comparator ratio lies within the 80.00–125.00% acceptance range for AUC0–t and Cmax. For highly variable finished pharmaceutical products, the applicable acceptance range for Cmax can be expanded (up to 69.84–143.19%).
Australia
In Australia, the Therapeutic Goods Administration (TGA) considers preparations to be bioequivalent if the 90% confidence intervals (90% CI) of the ratios of Cmax and AUC between the two preparations lie in the range 0.80–1.25. Tmax should also be similar between the products.
There are tighter requirements for drugs with a narrow therapeutic index and/or saturable metabolism – thus no generic products exist on the Australian market for digoxin or phenytoin for instance.
Europe
According to regulations applicable in the European Economic Area two medicinal products are bioequivalent if they are pharmaceutically equivalent or pharmaceutical alternatives and if their bioavailabilities after administration in the same molar dose are similar to such a degree that their effects, with respect to both efficacy and safety, will be essentially the same. This is considered demonstrated if the 90% confidence intervals (90% CI) of the ratios for AUC0–t and Cmax between the two preparations lie in the range 80–125%.
United States
The FDA considers two products bioequivalent if the 90% CIs of the relative means of Cmax, AUC(0–t) and AUC(0–∞) of the test (e.g. generic formulation) to reference (e.g. innovator brand formulation) lie within 80% to 125% in the fasting state (a computational sketch is given below). Although there are a few exceptions, generally a bioequivalence comparison of test to reference formulations also requires administration after an appropriate meal at a specified time before taking the drug, a so-called "fed" or "food-effect" study. A food-effect study requires the same statistical evaluation as the fasting study, described above.
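A minimal sketch of the acceptance test (hypothetical data; a real submission uses an ANOVA on the full crossover design rather than this simplified per-subject log-ratio analysis):

```python
import numpy as np
from scipy import stats

# hypothetical per-subject test/reference AUC ratios from a crossover study
ratios = np.array([1.02, 0.95, 1.10, 0.98, 1.05, 0.91,
                   1.08, 0.99, 1.03, 0.97, 1.06, 0.94])
log_r = np.log(ratios)

n = len(log_r)
se = log_r.std(ddof=1) / np.sqrt(n)        # standard error on the log scale
t_crit = stats.t.ppf(0.95, df=n - 1)       # two-sided 90% CI

lo = np.exp(log_r.mean() - t_crit * se)
hi = np.exp(log_r.mean() + t_crit * se)
print(f"GMR 90% CI: [{lo:.3f}, {hi:.3f}] -> bioequivalent: {0.80 <= lo and hi <= 1.25}")
```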
China
There were no requirements for bioequivalence in generic medications in China until the 2016 Opinion on Conducting Consistent Evaluation of the Quality and Efficacy of Generic Drugs, which established basic rules for future bioequivalence work. Since July 2020, all newly approved generics must pass bioequivalence checks; previously approved drugs may apply to be checked. Since 2019, National Centralized Volume-Based Procurement has used "passes generic-consistency evaluation" as one of the bidding criteria.
The Chinese definition of "bioequivalence" entails having the test drug's geometric mean Cmax, AUC(0–t), and AUC(0–∞) fall into 80%–125% of the reference drug in both fasting and fed states. The reference drug should be preferably the original brand-name drug, then (if not available) an internationally-recognized generic approved by a developed country, then (if still not available) an internationally-recognized generic approved domestically – this is to avoid deviation from the original drug by serial use of generics as reference. If pharmacokinetic values such as Cmax do not apply to the type of drug (e.g. if the drug is not absorbed orally), comparisons can be made using other means such as dose-response curves.
According to Wei et al. (2022), the Consistency Evaluation Policy increased R&D spending for Chinese pharmaceutical companies, especially among private and high-yielding ones. Liu et al. (2023) argues that the Policy increased the innovation quality of the Chinese pharmaceutical industry.
Bioequivalence issues
While the FDA maintains that approved generic drugs are equivalent to their branded counterparts, bioequivalence problems have been reported by physicians and patients for many drugs. Certain classes of drugs are suspected to be particularly problematic because of their chemistry. Some of these include chiral drugs, poorly absorbed drugs, and cytotoxic drugs. In addition, complex delivery mechanisms can cause bioequivalence variances. Physicians are cautioned to avoid switching patients from branded to generic, or between different generic manufacturers, when prescribing anti-epileptic drugs, warfarin, and levothyroxine.
Major issues were raised in the verification of bioequivalence when multiple generic versions of an FDA-approved drug were found not to be equivalent in efficacy and side effect profiles. In 2007, two providers of consumer information on nutritional products and supplements, ConsumerLab.com and The People's Pharmacy, released the results of comparative tests of different brands of bupropion. The People's Pharmacy received multiple reports of increased side effects and decreased efficacy of generic bupropion, which prompted it to ask ConsumerLab.com to test the products in question. The tests showed that some generic versions of Wellbutrin XL 300 mg did not perform the same as the brand-name pill in laboratory tests. The FDA investigated these complaints and concluded that the generic version is equivalent to Wellbutrin XL in regard to bioavailability of bupropion and its main active metabolite hydroxybupropion. The FDA also said that coincidental natural mood variation is the most likely explanation for the apparent worsening of depression after the switch from Wellbutrin XL to Budeprion XL. After several years of denying patient reports, in 2012 the FDA reversed this opinion, announcing that "Budeprion XL 300 mg fails to demonstrate therapeutic equivalence to Wellbutrin XL 300 mg." The FDA did not test the bioequivalence of any of the other generic versions of Wellbutrin XL 300 mg, but requested that the four manufacturers submit data on this question to the FDA by March 2013. As of October 2013, the FDA has determined that some manufacturers' formulations are not bioequivalent.
In 2004, Ranbaxy was revealed to have been falsifying data regarding the generic drugs they were manufacturing. As a result, 30 products were removed from US markets and Ranbaxy paid $500 million in fines. The FDA investigated many Indian drug manufacturers after this was discovered, and as a result at least 12 companies have been banned from shipping drugs to the US.
In 2017, The European Medicines Agency recommended suspension of a number of nationally approved medicines for which bioequivalence studies were conducted by Micro Therapeutic Research Labs in India, due to inspections identifying misrepresentation of study data and deficiencies in documentation and data handling.
See also
Generic drug
Pharmacokinetics
Clinical trial
Abbreviated New Drug Application
References
External links
Hussain AS, et al. The Biopharmaceutics Classification System: Highlights of the FDA's Draft Guidance Office of Pharmaceutical Science, Center for Drug Evaluation and Research, Food and Drug Administration.
Mills D (2005). Regulatory Agencies Do Not Require Clinical Trials To Be Expensive International Biopharmaceutical Association: IBPA Publications.
FDA CDER Office of Generic Drugs – further U.S. information on bioequivalence testing and generic drugs
Proposal to waive in vivo bioequivalence requirements for WHO Model List of Essential Medicines immediate-release, solid oral dosage forms. WHO Technical Report Series, No. 937, 2006, Annex 8.
Guidance for organizations performing in vivo bioequivalence studies (revision). WHO Technical Report Series 996, 2016, Annex 9.
General background notes and list of international comparator pharmaceutical products. WHO Technical Report Series 1003, 2017, Annex 5.
WHO List of International Comparator products (September 2016)
Pharmacokinetics
Clinical research
Life sciences industry | Bioequivalence | [
"Chemistry",
"Biology"
] | 2,243 | [
"Pharmacology",
"Life sciences industry",
"Pharmacokinetics"
] |
998,116 | https://en.wikipedia.org/wiki/Node%20%28networking%29 | In telecommunications networks, a node (Latin nodus, ‘knot’) is either a redistribution point or a communication endpoint.
A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. In data communication, a physical network node may either be data communication equipment (such as a modem, hub, bridge or switch) or data terminal equipment (such as a digital telephone handset, a printer or a host computer).
A passive distribution point such as a distribution frame or patch panel is not a node.
Computer networks
In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer.
If a network is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with Ethernet interface and wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address.
If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes.
Telecommunications
In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location register, gateway GPRS Support Node (GGSN) and serving GPRS support node (SGSN) are examples of nodes. Cellular network base stations are not considered to be nodes in this context.
In cable television systems (CATV), this term has assumed a broader context and is generally associated with a fiber optic node. This can be defined as those homes or businesses within a specific geographic area that are served from a common fiber optic receiver. A fiber optic node is generally described in terms of the number of "homes passed" that are served by that specific fiber node.
Distributed systems
In a distributed system network, the nodes are clients, servers or peers. A peer may sometimes serve as client, sometimes server. In a peer-to-peer or overlay network, nodes that actively route data for the other networked devices as well as themselves are called supernodes.
Distributed systems may sometimes use virtual nodes so that the system is not oblivious to the heterogeneity of the nodes. This issue is addressed with special algorithms, like consistent hashing, as it is the case in Amazon's Dynamo.
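A minimal consistent-hash ring in Python illustrates the idea (an illustrative sketch only; Dynamo layers virtual-node weighting, replication and failure handling on top of this):

```python
import bisect
import hashlib

class HashRing:
    """Map keys to nodes so that adding or removing a node moves few keys."""

    def __init__(self, nodes, vnodes=8):
        self._ring = []                       # sorted (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):           # virtual nodes smooth the load
                h = self._hash(f"{node}#{i}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Return the first node clockwise from the key's position."""
        h = self._hash(key)
        i = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```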
Within a vast computer network, the individual computers on the periphery of the network, those that do not also connect other networks, and those that often connect transiently to one or more clouds are called end nodes. Typically, within the cloud computing construct, the individual user or customer computer that connects into one well-managed cloud is called an end node. Since these computers are a part of the network yet unmanaged by the cloud's host, they present significant risks to the entire cloud. This is called the end node problem. There are several means to remedy this problem but all require instilling trust in the end node computer.
See also
End system
Middlebox
Networking hardware
Terminal (telecommunication)
References
Computer networking
Routing | Node (networking) | [
"Technology",
"Engineering"
] | 785 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
998,156 | https://en.wikipedia.org/wiki/Haze | Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (stubble burning, ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires.
Seen from afar (e.g. an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that leads to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze".
In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appears to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer.
Air pollution
Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concentrate and form a usually low-hanging shroud that impairs visibility and may become a respiratory health threat if excessively inhaled. Industrial pollution can result in dense haze, which is known as smog.
Since 1991, haze has been a particularly acute problem in Southeast Asia. The main source of the haze has been smoke from fires occurring in Sumatra and Borneo which dispersed over a wide area. In response to the 1997 Southeast Asian haze, the ASEAN countries agreed on a Regional Haze Action Plan (1997) as an attempt to reduce haze. In 2002, all ASEAN countries signed the Agreement on Transboundary Haze Pollution, but the pollution is still a problem there today. Under the agreement, the ASEAN secretariat hosts a co-ordination and support unit. During the 2013 Southeast Asian haze, Singapore experienced a record high pollution level, with the 3-hour Pollutant Standards Index reaching a record high of 401.
In the United States, the Interagency Monitoring of Protected Visual Environments (IMPROVE) program was developed as a collaborative effort between the US EPA and the National Park Service in order to establish the chemical composition of haze in National Parks and establish air pollution control measures in order to restore the visibility of the air to pre-industrial levels. Additionally, the Clean Air Act requires that any current visibility problems be addressed and remedied, and future visibility problems be prevented, in 156 Class I Federal areas located throughout the United States. A full list of these areas is available on EPA's website.
In addition to the severe health issues caused by haze from air pollution, dust storm particles, and bush fire smoke, reduction in irradiance is the most dominant impact of these sources of haze and a growing issue for photovoltaic production as the solar industry grows. Smog also lowers agricultural yield and it has been proposed that pollution controls could increase agricultural production in China. These effects are negative for both sides of agrivoltaics (the combination of photovoltaic electricity production and food from agriculture).
International disputes
Transboundary haze
Haze is no longer confined to being a domestic problem; it has become one of the causes of international disputes among neighboring countries. Haze can migrate to adjacent countries on the path of the wind and thereby pollute other countries as well, even if the haze does not first manifest there. One of the most recent problems occurred in Southeast Asia, largely affecting Indonesia, Malaysia and Singapore. In 2013, due to forest fires in Indonesia, Kuala Lumpur and surrounding areas became shrouded in a pall of noxious fumes dispersed from Indonesia, bringing a smell of ash and coal for more than a week, in the country's worst environmental crisis since 1997.
The main sources of the haze are Indonesia's Sumatra Island, Indonesian areas of Borneo, and Riau, where farmers, plantation owners and miners have set hundreds of fires in the forests to clear land during dry weather. Winds blew most of the particulates and fumes across the narrow Strait of Malacca to Malaysia, although parts of Indonesia in the path were also affected. The 2015 Southeast Asian haze was another major air-quality crisis, although other episodes, such as those of 2006 and 2019, were less severe than the three major Southeast Asian hazes of 1997, 2013 and 2015.
Obscuration
Haze causes issues in the area of terrestrial photography and imaging, where the penetration of large amounts of dense atmosphere may be necessary to image distant subjects. This results in the visual effect of a loss of contrast in the subject, due to the effect of light scattering and reflection through the haze particles. For these reasons, sunrise and sunset colors and possibly the sun itself appear subdued on hazy days, and stars may be obscured by haze at night. In some cases, attenuation by haze is so great that, toward sunset, the sun disappears altogether before even reaching the horizon.
Haze can be defined as an aerial form of the Tyndall effect; unlike other atmospheric effects such as cloud, mist and fog, haze is therefore spectrally selective across the electromagnetic spectrum: shorter (blue) wavelengths are scattered more, and longer (red/infrared) wavelengths are scattered less (see the sketch below). For this reason, many super-telephoto lenses often incorporate yellow light filters or coatings to enhance image contrast. Infrared (IR) imaging may also be used to penetrate haze over long distances, with a combination of IR-pass optical filters and IR-sensitive detectors at the intended destination.
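A rough comparison for the small-particle (Rayleigh) limit, where scattered intensity scales as 1/λ⁴ (an order-of-magnitude illustration only; real haze particles span sizes up into the Mie regime, where the wavelength dependence is weaker):

```python
# Rayleigh-regime comparison: assumes particles much smaller than the wavelength
blue, red = 450e-9, 650e-9          # wavelengths in metres (assumed values)
ratio = (red / blue) ** 4           # scattering intensity scales as 1/lambda^4
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")   # ~4.4x
```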
See also
Arctic haze
ASEAN Agreement on Transboundary Haze Pollution
Asian brown cloud
Asian Dust
Coefficient of haze
Convention on Long-Range Transboundary Air Pollution
Fog
Mist
Saharan Air Layer
Southeast Asian haze
Smog
Trail Smelter dispute
Notes
External links
National Pollutant Inventory - Particulate matter fact sheet
Those hazy days of summer
Haze over the central and eastern United States
Chemical Composition of Haze in US National Parks: Views Visibility Database
Visibility
Air pollution
Atmospheric optical phenomena
Psychrometrics
Pollution
Fog | Haze | [
"Physics",
"Mathematics"
] | 1,390 | [
"Visibility",
"Physical phenomena",
"Earth phenomena",
"Fog",
"Physical quantities",
"Quantity",
"Optical phenomena",
"Wikipedia categories named after physical quantities",
"Atmospheric optical phenomena"
] |
998,172 | https://en.wikipedia.org/wiki/Moon%20Shot | Moon Shot: The Inside Story of America's Race to the Moon is a 1994 book written by Mercury Seven astronaut Alan Shepard, with NBC News correspondent Jay Barbree and Associated Press space writer Howard Benedict. Astronaut Donald K. "Deke" Slayton is also listed as an author, although he died before the project was completed and was an author in name only; astronaut Neil Armstrong wrote the introduction.
Miniseries
The book was turned into a four-part television documentary miniseries that aired on TBS in the United States in 1994. The miniseries was narrated by Barry Corbin (as Slayton) and featured interviews with several American astronauts as well as a few Russian cosmonauts. Slayton died in 1993, before the miniseries completed production, and the miniseries is dedicated to his memory.
References
External links
Apollo Lunar Surface Journal Chaikin comments on faked book photo
James Scotti comments on Moon Shot
Apollo Lunar Surface Journal
American non-fiction books
1994 non-fiction books
1990s American television miniseries
Peabody Award–winning television programs
Books about the Apollo program
Neil Armstrong
Alan Shepard
Books by astronauts | Moon Shot | [
"Astronomy"
] | 215 | [
"Outer space stubs",
"Astronomy book stubs",
"Outer space",
"Astronomy stubs"
] |
998,358 | https://en.wikipedia.org/wiki/Galileo%20thermometer | A Galileo thermometer (or Galilean thermometer) is a thermometer made of a sealed glass cylinder containing a clear liquid and several glass vessels of varying density. The individual floats rise or fall in proportion to their respective density and the density of the surrounding liquid as the temperature changes. It is named after Galileo Galilei because he discovered the principle on which this thermometer is based—that the density of a liquid changes in proportion to its temperature.
History
Although named after the 16th–17th-century physicist Galileo, the thermometer was not invented by him. (Galileo did invent a thermometer called Galileo's air thermometer, more accurately called a thermoscope, in or before 1603.)
The instrument now known as a Galileo thermometer was invented by a group of academics and technicians known as the Accademia del Cimento of Florence, who included Galileo's pupil Torricelli and Torricelli's pupil Viviani. Details of the thermometer were published in the Saggi di naturali esperienze fatte nell'Academia del Cimento sotto la protezione del Serenissimo Principe Leopoldo di Toscana e descritte dal segretario di essa Accademia (1666), the academy's main publication. The English translation of this work (1684) describes the device ('The Fifth Thermometer') as 'slow and lazy', a description that is reflected in an alternative Italian name for the invention, the termometro lento (slow thermometer). The outer vessel was filled with 'rectified spirits of wine' (a concentrated solution of ethanol in water); the weights of the glass bubbles were adjusted by grinding a small amount of glass from the sealed end; and a small air space was left at the top of the main vessel to allow 'for the Liquor to rarefie' (i.e. expand).
The device now called the Galileo thermometer was revived in the modern era by the Natural History Museum, London, which started selling a version in the 1990s.
Operation
In the Galileo thermometer, the small glass bulbs are partly filled with different-colored liquids. The composition of these liquids is mostly water; some contain a tiny percentage of alcohol, but that is not important for the functioning of the thermometer; the bulbs merely function as fixed weights, with their colors denoting given temperatures. Once the hand-blown bulbs have been sealed, their effective densities are adjusted using the metal tags hanging from beneath them. Any expansion due to the temperature change of the colored liquid and air gap inside the bulbs does not affect the operation of the thermometer, as these materials are sealed inside a glass bulb of approximately fixed size. The clear liquid in which the bulbs are submerged is not water but an organic liquid (such as ethanol or kerosene) whose density varies with temperature more than water's does. Temperature changes affect the density of the outer clear liquid, and this causes the bulbs to rise or sink accordingly.
As the temperature rises, the bulbs sink one by one according to their individual densities as the density of the clear holding fluid gradually changes around them. Eventually all the bulbs may be at the base of the tube, depending on the temperature of the surroundings and therefore that of the clear holding fluid. As the temperature falls, the reverse happens, until all the bulbs may again be at the top.
The metal tags on each bulb are stamped with a temperature. If a bulb is in the centre of the column, it gives a close approximation of the temperature outside the tube. If there are some bulbs at the top and some at the base but none in between, the average of the temperatures stamped on the lowest bulb at the top and the highest bulb at the base provides that figure.
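The floating-and-sinking logic lends itself to a short numerical sketch. In the Python sketch below, the linear density model for the outer liquid and all of the numbers are illustrative assumptions, not the measured properties of any real thermometer:

def liquid_density(temp_c, rho_20=0.870, alpha=0.00095):
    # Density (g/cm^3) of the clear outer liquid, assumed to fall
    # linearly with temperature around a 20 degree C reference point.
    return rho_20 * (1 - alpha * (temp_c - 20))

# Effective density of each tagged bulb, fixed at manufacture so that
# the bulb is neutrally buoyant at the temperature stamped on its tag.
bulbs = {18: 0.8717, 20: 0.8700, 22: 0.8683, 24: 0.8667, 26: 0.8650}

def read_thermometer(temp_c):
    rho = liquid_density(temp_c)
    floating = [t for t, d in sorted(bulbs.items()) if d < rho]
    sunk = [t for t, d in sorted(bulbs.items()) if d >= rho]
    # Average the lowest floating tag and the highest sunken tag.
    reading = (min(floating) + max(sunk)) / 2 if floating and sunk else None
    return floating, sunk, reading

print(read_thermometer(23.0))   # ([24, 26], [18, 20, 22], 23.0)

At an assumed 23 degrees C, the 24- and 26-degree bulbs float while the others sink, and averaging across the gap recovers the ambient temperature, exactly the reading rule described above.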
Gallery
See also
References
Thermometers
Science education materials | Galileo thermometer | [
"Technology",
"Engineering"
] | 796 | [
"Thermometers",
"Measuring instruments"
] |
998,456 | https://en.wikipedia.org/wiki/Chloralkali%20process | The chloralkali process (also chlor-alkali and chlor alkali) is an industrial process for the electrolysis of sodium chloride (NaCl) solutions. It is the technology used to produce chlorine and sodium hydroxide (caustic soda), which are commodity chemicals required by industry. Thirty-five million tons of chlorine were prepared by this process in 1987. By 2022, this had increased to about 97 million tonnes. The chlorine and sodium hydroxide produced in this process are widely used in the chemical industry.
Usually the process is conducted on a brine (an aqueous solution of concentrated NaCl), in which case sodium hydroxide (NaOH), hydrogen, and chlorine result. When using calcium chloride or potassium chloride, the products contain calcium or potassium instead of sodium. Related processes are known that use molten NaCl to give chlorine and sodium metal or condensed hydrogen chloride to give hydrogen and chlorine.
The process has a high energy consumption, around 2,500 kWh of electricity per tonne of sodium hydroxide produced. Because the process yields equivalent amounts of chlorine and sodium hydroxide (two moles of sodium hydroxide per mole of chlorine), it is necessary to find a use for these products in the same proportion. For every mole of chlorine produced, one mole of hydrogen is produced. Much of this hydrogen is used to produce hydrochloric acid, ammonia, or hydrogen peroxide, or is burned for power and/or steam production.
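Because the product ratio is fixed by stoichiometry, the relative tonnages are also fixed. A minimal Python sketch of the arithmetic (molar masses are standard rounded values):

# Products per the overall reaction 2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH:
# two moles of NaOH and one mole of H2 per mole of Cl2.
M_CL2, M_NAOH, M_H2 = 70.90, 40.00, 2.016   # molar masses, g/mol

def coproducts(tonnes_cl2):
    """Tonnes of NaOH and H2 made alongside a given tonnage of Cl2."""
    naoh = tonnes_cl2 * 2 * M_NAOH / M_CL2
    h2 = tonnes_cl2 * 1 * M_H2 / M_CL2
    return naoh, h2

naoh, h2 = coproducts(1.0)
print(f"per tonne of Cl2: {naoh:.3f} t NaOH and {h2:.4f} t H2")
# per tonne of Cl2: 1.128 t NaOH and 0.0284 t H2

So roughly 1.13 tonnes of caustic soda must find a market for every tonne of chlorine sold, which is the proportion problem described above.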
History
The chloralkali process has been in use since the 19th century and is a primary industry in the United States, Western Europe, and Japan. It has become the principal source of chlorine during the 20th century. The diaphragm cell process and the mercury cell process have been used for over 100 years but are environmentally unfriendly through their use of asbestos and mercury, respectively. The membrane cell process, which was only developed in the past 60 years, is a superior method with its improved energy efficiency and lack of harmful chemicals.
Although the first formation of chlorine by the electrolysis of brine was attributed to chemist William Cruikshank in 1800, it was 90 years later that the electrolytic method was used successfully on a commercial scale. Industrial scale production began in 1892. In 1833, Faraday formulated the laws that governed the electrolysis of aqueous solutions, and patents were issued to Cook and Watt in 1851 and to Stanley in 1853 for the electrolytic production of chlorine from brine.
Process systems
Three production methods are in use. While the mercury cell method produces chlorine-free sodium hydroxide, the use of several tonnes of mercury leads to serious environmental problems. In a normal production cycle a few hundred pounds of mercury per year are emitted, which accumulate in the environment. Additionally, the chlorine and sodium hydroxide produced via the mercury-cell chloralkali process are themselves contaminated with trace amounts of mercury. The membrane and diaphragm methods use no mercury, but the sodium hydroxide contains chlorine, which must be removed.
Membrane cell
The most common chloralkali process involves the electrolysis of aqueous sodium chloride (a brine) in a membrane cell. A membrane, such as Nafion, Flemion or Aciplex, is used to prevent the reaction between the chlorine and hydroxide ions.
Saturated brine is passed into the first chamber of the cell. Due to the higher concentration of chloride ions in the brine, the chloride ions are oxidised at the anode, losing electrons to become chlorine gas (A in figure):
2Cl− → Cl2 + 2e−
At the cathode, positive hydrogen ions pulled from water molecules are reduced by the electrons provided by the electrolytic current, to hydrogen gas, releasing hydroxide ions into the solution (C in figure):
2H2O + 2e− → H2 + 2OH−
The ion-permeable ion-exchange membrane at the center of the cell allows only the sodium ions (Na+) to pass to the second chamber where they react with the hydroxide ions to produce caustic soda (NaOH) (B in figure):
Na+ + OH− → NaOH
The overall reaction for the electrolysis of brine is thus:
2NaCl + 2H2O → Cl2 + H2 + 2NaOH
Diaphragm cell
In the diaphragm cell process, there are two compartments separated by a permeable diaphragm, often made of asbestos fibers. Brine is introduced into the anode compartment and flows into the cathode compartment. Similarly to the membrane cell, chloride ions are oxidized at the anode to produce chlorine, and at the cathode, water is split into caustic soda and hydrogen. The diaphragm prevents the reaction of the caustic soda with the chlorine. A diluted caustic brine leaves the cell. The caustic soda must usually be concentrated to 50% and the salt removed. This is done using an evaporative process with about three tonnes of steam per tonne of caustic soda. The salt separated from the caustic brine can be used to saturate diluted brine. The chlorine contains oxygen and must often be purified by liquefaction and evaporation.
Mercury cell
In the mercury-cell process, also known as the Castner–Kellner process, a saturated brine solution floats on top of a thin layer of mercury. The mercury is the cathode, where sodium is produced and forms an amalgam with the mercury. The amalgam is continuously drawn out of the cell and reacted with water which decomposes the amalgam into sodium hydroxide, hydrogen and mercury. The mercury is recycled into the electrolytic cell. Chlorine is produced at the anode and bubbles out of the cell. Mercury cells are being phased out due to concerns about the high toxicity of mercury and mercury poisoning from mercury cell pollution such as occurred in Canada (see Ontario Minamata disease) and Japan (see Minamata disease).
Unpartitioned cell
The initial overall reaction produces hydroxide and also hydrogen and chlorine gases:
2 NaCl + 2 H2O → 2 NaOH + H2 + Cl2
Without a membrane, the OH− ions produced at the cathode are free to diffuse throughout the electrolyte. As the electrolyte becomes more basic due to the production of OH−, less Cl2 emerges from the solution as it begins to disproportionate to form chloride and hypochlorite ions at the anode:
Cl2 + 2 NaOH → NaCl + NaClO + H2O
The more opportunity the Cl2 has to interact with NaOH in the solution, the less Cl2 emerges at the surface of the solution and the faster the production of hypochlorite progresses. This depends on factors such as solution temperature, the amount of time the Cl2 molecule is in contact with the solution, and concentration of NaOH.
Likewise, as hypochlorite increases in concentration, chlorates are produced from them:
3 NaClO → NaClO3 + 2 NaCl
This reaction is accelerated at temperatures above about 60 °C. Other reactions occur, such as the self-ionization of water and the decomposition of hypochlorite at the cathode, the rate of the latter depends on factors such as diffusion and the surface area of the cathode in contact with the electrolyte.
If current is interrupted while the cathode is submerged, cathodes that are attacked by hypochlorites, such as those made from stainless steel, will dissolve in unpartitioned cells.
If producing hydrogen and oxygen gases is not a priority, the addition of 0.18% sodium or potassium chromate to the electrolyte will improve the efficiency of producing the other products.
Electrodes
Due to the corrosive nature of chlorine production, the anode (where the chlorine is formed) must be non-reactive and has been made from materials such as platinum metal, graphite (called plumbago in Faraday's time), or platinized titanium. A mixed metal oxide clad titanium anode (also called a dimensionally stable anode) is the industrial standard today. Historically, platinum, magnetite, lead dioxide, manganese dioxide, and ferrosilicon (13–15% silicon) have also been used as anodes. Platinum alloyed with iridium is more resistant to corrosion from chlorine than pure platinum. Unclad titanium cannot be used as an anode because it anodizes, forming a non-conductive oxide and passivates. Graphite will slowly disintegrate due to internal electrolytic gas production from the porous nature of the material and carbon dioxide forming due to carbon oxidation, causing fine particles of graphite to be suspended in the electrolyte that can be removed by filtration. The cathode (where hydroxide forms) can be made from unalloyed titanium, graphite, or a more easily oxidized metal such as stainless steel or nickel.
Manufacturer associations
The interests of chloralkali product manufacturers are represented at regional, national and international levels by associations such as Euro Chlor and The World Chlorine Council.
See also
Electrochemical engineering
Gas diffusion electrode
Solvay process, a similar industrial method of making sodium carbonate from calcium carbonate and sodium chloride
References
Further reading
Bommaraju, Tilak V.; Orosz, Paul J.; Sokol, Elizabeth A.(2007). "Brine Electrolysis." Electrochemistry Encyclopedia. Cleveland: Case Western Reserve University.
External links
Animation showing the membrane cell process
Animation showing the diaphragm cell process
Chemical processes
Electrolysis
Industrial gases | Chloralkali process | [
"Chemistry"
] | 2,061 | [
"Chemical processes",
"Electrochemistry",
"Industrial gases",
"nan",
"Electrolysis",
"Chemical process engineering"
] |
998,595 | https://en.wikipedia.org/wiki/Ferrite%20bead | A ferrite bead (also called a ferrite block, ferrite core, ferrite ring, EMI filter, or ferrite choke) is a type of choke that suppresses high-frequency electronic noise in electronic circuits.
Ferrite beads employ high-frequency current dissipation in a ferrite ceramic to build high-frequency noise suppression devices.
Use
Ferrite beads prevent electromagnetic interference (EMI) in two directions: from a device or to a device. A conductive cable acts as an antenna – if the device produces radio-frequency energy, this can be transmitted through the cable, which acts as an unintentional radiator. In this case, the bead is required for regulatory compliance to reduce EMI. Conversely, if there are other sources of EMI, such as household appliances, the bead prevents the cable from acting as an antenna and receiving interference from these other devices. This is particularly common on data cables and medical equipment.
Large ferrite beads are commonly seen on external cabling. In addition, various smaller ferrite beads are used internally in circuits—on conductors or around the pins of small circuit-board components, such as transistors, connectors, and integrated circuits.
Beads can block low-level unintended radio frequency energy on wires intended to be DC conductors by acting as a low-pass filter. For example, on unbalanced coax transmission lines (such as video cables), the cable is designed to contain the signal, and beads can be used to block stray common mode current from using the cable as an antenna while not interfering with the signal carried inside the cable. In this use, the bead is a simple form of a balun.
Ferrite beads are one of the simplest and least expensive interference filters to install on preexisting electronic cabling. For a simple ferrite ring, the wire is wrapped around the core through the center, typically five or seven times. Clamp-on cores are also available, which attach without wrapping the wire: this type of ferrite core is usually designed so that the wire passes only once through it. If the fit is not snug enough, the core can be secured with cable ties, or if the center is large enough, the cabling can loop through one or more times. (However, although each loop increases the impedance to high frequencies, it also shifts the frequency of the highest impedance to a lower frequency.) Small ferrite beads can be slipped over component leads to suppress parasitic oscillation.
Surface-mount ferrite beads are available. Like any other surface-mount inductor, these are soldered into a gap in the printed circuit board trace. Inside the bead component, a coil of wire runs between layers of ferrite to form a multi-turn inductor around the high-permeability core.
Theory of operation
Ferrite beads function as passive low-pass filters that, by design, dissipate radio frequency (RF) energy as heat.
Ideal inductors, on the other hand, have no resistance and hence do not dissipate energy as heat. Ideal inductors only have inductive reactance, which reduces the flow of high-frequency signals by returning some of their energy back towards the signal source (possibly reducing the amount of power drawn) rather than dissipating that energy as heat (as done by the resistance in ferrite beads). While an inductor's reactance may commonly be referred to simply as impedance, impedance generally can be any combination of resistance and reactance.
The geometry and electromagnetic properties of coiled wire over the ferrite bead result in an impedance for high-frequency signals, attenuating high-frequency EMI/RFI electronic noise. The energy is either reflected back up the cable or dissipated as low-level heat. Only in extreme cases is the heat noticeable.
A ferrite bead can be added to an inductor to improve, in two ways, its ability to block unwanted high-frequency noise. First, the ferrite concentrates the magnetic field, increasing inductance and, therefore, reactance, which filters out the noise. Second, if the ferrite is so designed, it can produce an additional loss in the form of resistance in the ferrite itself. The ferrite creates an inductor with a very low Q factor. This loss heats the ferrite, generally by a negligible amount: although the noise may be large enough to cause interference or undesirable effects in sensitive circuits, the energy blocked is typically small. Depending on the application, the resistive loss characteristic of the ferrite may or may not be desired.
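A common first-order way to picture this behavior is to model the bead as a parallel R-L-C network: inductance from the winding, resistance from core loss, and a small stray capacitance. The Python sketch below uses this model with invented component values (not taken from any manufacturer's datasheet) to show the impedance rising with frequency, with the resistive part dominating near resonance:

import math

# Illustrative first-order model of a ferrite bead as a parallel
# R-L-C network: L from the winding, R from ferrite core loss, C from
# stray winding capacitance. All values are made-up assumptions.
R, L, C = 1000.0, 1.0e-6, 1.0e-12   # ohms, henries, farads

def bead_impedance(freq_hz):
    """Complex impedance of the parallel R-L-C bead model."""
    w = 2 * math.pi * freq_hz
    admittance = 1 / R + 1 / (1j * w * L) + 1j * w * C
    return 1 / admittance

for f in (1e6, 1e7, 1e8, 1e9):
    z = bead_impedance(f)
    print(f"{f/1e6:7.0f} MHz: |Z| = {abs(z):7.1f} ohm, "
          f"resistive part = {z.real:7.1f} ohm")

At low frequency the bead is nearly transparent; around resonance the resistive term dominates, so noise energy is dissipated as heat rather than reflected, which is the behavior that distinguishes a bead from an ideal inductor.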
A design that uses a ferrite bead to improve noise filtering must consider specific circuit characteristics and the frequency range to block. Different ferrite materials have different properties concerning frequency, and the manufacturer's literature helps select the most effective material for the frequency range.
See also
Braid-breaker
Balun
Electromagnetic interference
Magnetic core
Toroidal inductors and transformers
Unintentional radiator
Decoupling (electronics)
Fuse (electrical)
Zero-ohm resistor
References
External links
Ferrite bead inductor usage in electronic circuits
Electromagnetic radiation
Wireless tuning and filtering
Ferrites | Ferrite bead | [
"Physics",
"Engineering"
] | 1,102 | [
"Wireless tuning and filtering",
"Physical phenomena",
"Radio electronics",
"Electromagnetic radiation",
"Radiation"
] |
998,824 | https://en.wikipedia.org/wiki/Atom%20%28order%20theory%29 | In the mathematical field of order theory, an element a of a partially ordered set with least element 0 is an atom if 0 < a and there is no x such that 0 < x < a.
Equivalently, one may define an atom to be an element that is minimal among the non-zero elements, or alternatively an element that covers the least element 0.
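For a concrete finite example, consider the divisors of 60 ordered by divisibility, a poset whose least element is 1. The Python sketch below finds atoms by a brute-force covering check (function names are illustrative):

n = 60
elements = [d for d in range(1, n + 1) if n % d == 0]

def leq(a, b):
    # a <= b in the divisibility order iff a divides b
    return b % a == 0

def atoms(elements, leq, bottom):
    # An atom is an element a > bottom such that no x lies strictly
    # between bottom and a in the given order.
    result = []
    for a in elements:
        if a == bottom or not leq(bottom, a):
            continue
        between = any(x != bottom and x != a and
                      leq(bottom, x) and leq(x, a) for x in elements)
        if not between:
            result.append(a)
    return result

print(atoms(elements, leq, 1))                       # [2, 3, 5]
print(atoms(elements, lambda a, b: leq(b, a), 60))   # [12, 20, 30]

The atoms come out as the prime divisors 2, 3 and 5; running the same check with the order reversed and 60 as the bottom element yields 12, 20 and 30, the coatoms in the dual sense defined below.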
Atomic orderings
Let <: denote the covering relation in a partially ordered set.
A partially ordered set with a least element 0 is atomic if every element b > 0 has an atom a below it, that is, there is some a such that b ≥ a :> 0. Every finite partially ordered set with 0 is atomic, but the set of nonnegative real numbers (ordered in the usual way) is not atomic (and in fact has no atoms): for any x > 0, the element x/2 satisfies 0 < x/2 < x, so no element covers 0.
A partially ordered set is relatively atomic (or strongly atomic) if for all a < b there is an element c such that a <: c ≤ b or, equivalently, if every interval [a, b] is atomic. Every relatively atomic partially ordered set with a least element is atomic. Every finite poset is relatively atomic.
A partially ordered set with least element 0 is called atomistic (not to be confused with atomic) if every element is the least upper bound of a set of atoms. The linear order with three elements is not atomistic (see Fig. 2).
Atoms in partially ordered sets are abstract generalizations of singletons in set theory (see Fig. 1). Atomicity (the property of being atomic) provides an abstract generalization in the context of order theory of the ability to select an element from a non-empty set.
Coatoms
The terms coatom, coatomic, and coatomistic are defined dually. Thus, in a partially ordered set with greatest element 1, one says that
a coatom is an element covered by 1,
the set is coatomic if every b < 1 has a coatom c above it, and
the set is coatomistic if every element is the greatest lower bound of a set of coatoms.
References
External links
Order theory | Atom (order theory) | [
"Mathematics"
] | 433 | [
"Order theory"
] |
998,826 | https://en.wikipedia.org/wiki/Wikispecies | Wikispecies is a wiki-based online project supported by the Wikimedia Foundation. Its aim is to create a comprehensive open content catalogue of all species; the project is directed at scientists, rather than at the general public. Jimmy Wales stated that editors are not required to fax in their degrees, but that submissions will have to pass muster with a technical audience. Wikispecies is available under the GNU Free Documentation License and CC BY-SA 4.0.
Started in September 2004, with biologists around the world invited to contribute, the project had grown to a framework encompassing the Linnaean taxonomy with links to Wikipedia articles on individual species by April 2005.
History
Benedikt Mandl coordinated the efforts of several people who were interested in getting involved with the project and contacted potential supporters in the early summer of 2004. Databases were evaluated and their administrators contacted; some of them agreed to provide their data for Wikispecies. Mandl defined two major tasks:
Figure out how the contents of the data base would need to be presented—by asking experts, potential non-professional users and comparing that with existing databases
Figure out how to do the software, which hardware is required and how to cover the costs—by asking experts, looking for fellow volunteers and potential sponsors
Advantages and disadvantages were widely discussed on the wikimedia-l mailing list. The board of directors of the Wikimedia Foundation voted 4 to 0 in favor of the establishment of Wikispecies. The project was launched in August 2004 and is hosted at species.wikimedia.org. It officially became a sister project of the Wikimedia Foundation on September 14, 2004.
On October 10, 2006, the project exceeded 75,000 articles.
On May 20, 2007, the project exceeded 100,000 articles.
On September 8, 2008, the project exceeded 150,000 articles.
On October 23, 2011, the project reached 300,000 articles.
On June 16, 2014, the project reached 400,000 articles.
On January 7, 2017, the project reached 500,000 articles.
On October 30, 2018, the project reached 600,000 articles, and a total of 1.12 million pages.
On December 8, 2019, the project reached 700,000 articles, and a total of 1.33 million pages.
On January 8, 2021, the project reached 750,000 articles, and a total of 1.5 million pages.
On April 16, 2022, the project reached 800,000 articles, and a total of 1.67 million pages.
On September 17, 2023, the project reached 850,000 articles, and a total of 1.87 million pages.
As a database for taxonomy and nomenclature, Wikispecies comprises taxon pages, and additionally pages about synonyms, taxon authorities, taxonomical publications, type material, and institutions or repositories holding type specimens.
Policies
Wikispecies has disabled local upload and asks users to use images from Wikimedia Commons. Wikispecies does not allow the use of content that does not conform to a free license.
See also
All Species Foundation
Catalogue of Life
Encyclopedia of Life
Tree of Life Web Project
List of online encyclopedias
The Plant List
References
External links
Species Community Portal
The Wikispecies Charter, written by Wales.
Biodiversity databases
Biology websites
Botanical nomenclature
Internet properties established in 2004
Multilingual websites
Phylogenetics
Taxonomy (biology)
Wikimedia projects
Zoological nomenclature
Creative Commons-licensed websites | Wikispecies | [
"Biology",
"Environmental_science"
] | 715 | [
"Zoological nomenclature",
"Botanical nomenclature",
"Botanical terminology",
"Biological nomenclature",
"Taxonomy (biology)",
"Bioinformatics",
"Biodiversity",
"Environmental science databases",
"Phylogenetics",
"Biodiversity databases"
] |
998,835 | https://en.wikipedia.org/wiki/Data%20hierarchy | Data hierarchy refers to the systematic organization of data, often in a hierarchical form. Data organization involves characters, fields, records, files and so on. This concept is a starting point when trying to see what makes up data and whether data has a structure. For example, how does a person make sense of data such as 'employee', 'name', 'department', 'Marcy Smith', 'Sales Department' and so on, assuming that they are all related? One way to understand them is to see these terms as smaller or larger components in a hierarchy. One might say that Marcy Smith is one of the employees in the Sales Department, or an example of an employee in that Department. The data we want to capture about all our employees, and not just Marcy, is the name, ID number, address etc.
Purpose of the data hierarchy
"Data hierarchy" is a basic concept in data and database theory and helps to show the relationships between smaller and larger components in a database or data file. It is used to give a better sense of understanding about the components of data and how they are related.
It is particularly important in databases with referential integrity, third normal form, or a perfect key. A proper data hierarchy results from arranging data without redundancy; avoiding redundancy, in turn, produces a hierarchy that represents the relationships between data items and reveals their relational structure.
Components of the data hierarchy
The components of the data hierarchy are listed below.
A data field holds a single fact or attribute of an entity. Consider a date field, e.g. "19 September 2004". This can be treated as a single date field (e.g. birthdate), or three fields, namely, day of month, month and year.
A record is a collection of related fields. An Employee record may contain a name field(s), address fields, birthdate field and so on.
A file is a collection of related records. If there are 100 employees, then each employee would have a record (e.g. called Employee Personal Details record) and the collection of 100 such records would constitute a file (in this case, called Employee Personal Details file).
Files are integrated into a database. This is done using a Database Management System. If there are other facets of employee data that we wish to capture, then other files such as Employee Training History file and Employee Work History file could be created as well.
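The field-record-file-database progression described above can be sketched directly in code. In the Python sketch below, the class and field names (EmployeeRecord and so on) are illustrative choices, not a standard schema:

from dataclasses import dataclass

@dataclass
class EmployeeRecord:          # a record: a collection of related fields
    emp_id: int                # each attribute is a single data field
    name: str
    department: str
    birthdate: str             # could equally be three fields: day/month/year

# A "file" in this sense is a collection of related records.
employee_personal_details_file = [
    EmployeeRecord(1, "Marcy Smith", "Sales Department", "19 September 2004"),
    EmployeeRecord(2, "Jeffrey Tan", "Human Resources", "5 March 1990"),
]

# Several such files, managed together, constitute a database.
database = {"Employee Personal Details": employee_personal_details_file}

print(database["Employee Personal Details"][0].name)   # Marcy Smith

Each dataclass attribute plays the role of a field, each instance is a record, the list is a file, and the dictionary of files stands in for the database managed by a DBMS.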
Illustration of the data hierarchy
An illustration of the above description is shown in this diagram below:
The following terms are for better clarity. With reference to the example in the above diagram:
Data field label = Employee Name or EMP_NAME
Data field value = Jeffrey Tan
The above description is a view of data as understood by a user e.g. a person working in Human Resource Department.
The above structure can be seen in the hierarchical model, which is one way to organize data in a database.
In terms of data storage, data fields are made of bytes and these in turn are made up of bits.
See also
References
Data modeling | Data hierarchy | [
"Engineering"
] | 629 | [
"Data modeling",
"Data engineering"
] |
998,893 | https://en.wikipedia.org/wiki/Kinetochore | A kinetochore is a disc-shaped protein structure associated with duplicated chromatids in eukaryotic cells where the spindle fibers attach during cell division to pull sister chromatids apart. The kinetochore assembles on the centromere and links the chromosome to microtubule polymers from the mitotic spindle during mitosis and meiosis. The term kinetochore was first used in a footnote in a 1934 Cytology book by Lester W. Sharp and commonly accepted in 1936. Sharp's footnote reads: "The convenient term kinetochore (= movement place) has been suggested to the author by J. A. Moore", likely referring to John Alexander Moore who had joined Columbia University as a freshman in 1932.
Monocentric organisms, including vertebrates, fungi, and most plants, have a single centromeric region on each chromosome which assembles a single, localized kinetochore. Holocentric organisms, such as nematodes and some plants, assemble a kinetochore along the entire length of a chromosome.
Kinetochores start, control, and supervise the striking movements of chromosomes during cell division. During mitosis, which occurs after the amount of DNA is doubled in each chromosome (while maintaining the same number of chromosomes) in S phase, two sister chromatids are held together by a centromere. Each chromatid has its own kinetochore, which face in opposite directions and attach to opposite poles of the mitotic spindle apparatus. Following the transition from metaphase to anaphase, the sister chromatids separate from each other, and the individual kinetochores on each chromatid drive their movement to the spindle poles that will define the two new daughter cells. The kinetochore is therefore essential for the chromosome segregation that is classically associated with mitosis and meiosis.
Structure
The kinetochore contains two regions:
an inner kinetochore, which is tightly associated with the centromere DNA and assembled in a specialized form of chromatin that persists throughout the cell cycle;
an outer kinetochore, which interacts with microtubules; the outer kinetochore is a very dynamic structure with many identical components, which are assembled and functional only during cell division.
Even the simplest kinetochores consist of more than 19 different proteins. Many of these proteins are conserved between eukaryotic species, including a specialized histone H3 variant (called CENP-A or CenH3) which helps the kinetochore associate with DNA. Other proteins in the kinetochore attach it to the microtubules (MTs) of the mitotic spindle. There are also motor proteins, including both dynein and kinesin, which generate forces that move chromosomes during mitosis. Other proteins, such as Mad2, monitor the microtubule attachment as well as the tension between sister kinetochores and activate the spindle checkpoint to arrest the cell cycle when either of these is absent. The actual set of genes essential for kinetochore function varies from one species to another.
Kinetochore functions include anchoring of chromosomes to MTs in the spindle, verification of anchoring, activation of the spindle checkpoint and participation in the generation of force to propel chromosome movement during cell division. On the other hand, microtubules are metastable polymers made of α- and β-tubulin, alternating between growing and shrinking phases, a phenomenon known as dynamic instability. MTs are highly dynamic structures, whose behavior is integrated with kinetochore function to control chromosome movement and segregation. It has also been reported that kinetochore organization differs between mitosis and meiosis, and that the integrity of the meiotic kinetochore is essential for meiosis-specific events such as the pairing of homologous chromosomes, sister-kinetochore mono-orientation, protection of centromeric cohesin, and spindle-pole body cohesion and duplication.
In animal cells
The kinetochore is composed of several layers, observed initially by conventional fixation and staining methods of electron microscopy (reviewed by C. Rieder in 1982), and more recently by rapid freezing and substitution.
The deepest layer in the kinetochore is the inner plate, which is organized on a chromatin structure containing nucleosomes presenting a specialized histone (named CENP-A, which substitutes for histone H3 in this region), auxiliary proteins, and DNA. DNA organization in the centromere (satellite DNA) is one of the least understood aspects of vertebrate kinetochores. The inner plate appears as a discrete heterochromatin domain throughout the cell cycle.
External to the inner plate is the outer plate, which is composed mostly of proteins. This structure is assembled on the surface of the chromosomes only after the nuclear envelope breaks down. The outer plate in vertebrate kinetochores contains about 20 anchoring sites for MTs (+) ends (named kMTs, after kinetochore MTs), whereas a kinetochore's outer plate in yeast (Saccharomyces cerevisiae) contains only one anchoring site.
The outermost domain in the kinetochore forms a fibrous corona, which can be visualized by conventional microscopy, yet only in the absence of MTs. This corona is formed by a dynamic network of resident and temporary proteins implicated in the spindle checkpoint, in microtubule anchoring, and in the regulation of chromosome behavior.
During mitosis, each sister chromatid forming the complete chromosome has its own kinetochore. Distinct sister kinetochores can be observed at first at the end of G2 phase in cultured mammalian cells. These early kinetochores show a mature laminar structure before the nuclear envelope breaks down. The molecular pathway for kinetochore assembly in higher eukaryotes has been studied using gene knockouts in mice and in cultured chicken cells, as well as using RNA interference (RNAi) in C. elegans, Drosophila and human cells, yet no simple linear route can describe the data obtained so far.
The first protein to be assembled on the kinetochore is CENP-A (Cse4 in Saccharomyces cerevisiae). This protein is a specialized isoform of histone H3. CENP-A is required for incorporation of the inner kinetochore proteins CENP-C, CENP-H and CENP-I/MIS6. The relation of these proteins in the CENP-A-dependent pathway is not completely defined. For instance, CENP-C localization requires CENP-H in chicken cells, but it is independent of CENP-I/MIS6 in human cells. In C. elegans and metazoa, the incorporation of many proteins in the outer kinetochore depends ultimately on CENP-A.
Kinetochore proteins can be grouped according to their concentration at kinetochores during mitosis: some proteins remain bound throughout cell division, whereas some others change in concentration. Furthermore, they can be recycled in their binding site on kinetochores either slowly (they are rather stable) or rapidly (dynamic).
Proteins whose levels remain stable from prophase until late anaphase include constitutive components of the inner plate and the stable components of the outer kinetochore, such as the Ndc80 complex, KNL/KBP proteins (kinetochore-null/KNL-binding protein), MIS proteins and CENP-F. Together with the constitutive components, these proteins seem to organize the nuclear core of the inner and outer structures in the kinetochore.
The dynamic components that vary in concentration on kinetochores during mitosis include the molecular motors CENP-E and dynein (as well as their target components ZW10 and ROD), and the spindle checkpoint proteins (such as Mad1, Mad2, BubR1 and Cdc20). These proteins assemble on the kinetochore in high concentrations in the absence of microtubules; however, the higher the number of MTs anchored to the kinetochore, the lower the concentrations of these proteins. At metaphase, CENP-E, Bub3 and Bub1 levels diminish by a factor of about three to four as compared with free kinetochores, whereas dynein/dynactin, Mad1, Mad2 and BubR1 levels are reduced by a factor of more than 10 to 100.
Whereas the spindle checkpoint protein levels present in the outer plate diminish as MTs anchor, other components such as EB1, APC and proteins in the Ran pathway (RanGap1 and RanBP2) associate with kinetochores only when MTs are anchored. This may be part of a kinetochore mechanism for recognizing the microtubule plus-end (+), ensuring proper anchoring and regulating the dynamic behavior of MTs as they remain anchored.
A 2010 study used a complex method (termed "multiclassifier combinatorial proteomics" or MCCP) to analyze the proteomic composition of vertebrate chromosomes, including kinetochores. Although this study does not include a biochemical enrichment for kinetochores, the data obtained include all the centromeric subcomplexes, with peptides from all 125 known centromeric proteins. According to this study, there are still about one hundred unknown kinetochore proteins, which would double the known composition during mitosis, confirming the kinetochore as one of the most complex cellular substructures. Consistently, a comprehensive literature survey indicated that at least 196 human proteins had already been experimentally shown to localize at kinetochores.
Function
The number of microtubules attached to one kinetochore is variable: in Saccharomyces cerevisiae only one MT binds each kinetochore, whereas in mammals there can be 15–35 MTs bound to each kinetochore. However, not all the MTs in the spindle attach to one kinetochore. There are MTs that extend from one centrosome to the other (and they are responsible for spindle length) and some shorter ones are interdigitated between the long MTs. Professor B. Nicklas (Duke University) showed that, if one breaks down the MT-kinetochore attachment using a laser beam, chromatids can no longer move, leading to an abnormal chromosome distribution. These experiments also showed that kinetochores have polarity, and that kinetochore attachment to MTs emanating from one or the other centrosome will depend on its orientation. This specificity guarantees that only one chromatid will move to each spindle side, thus ensuring the correct distribution of the genetic material. Thus, one of the basic functions of the kinetochore is the MT attachment to the spindle, which is essential to correctly segregate sister chromatids. If anchoring is incorrect, errors may ensue, generating aneuploidy, with catastrophic consequences for the cell. To prevent this from happening, there are mechanisms of error detection and correction (such as the spindle assembly checkpoint), whose components also reside on the kinetochores. The movement of one chromatid towards the centrosome is produced primarily by MT depolymerization at the binding site with the kinetochore. These movements also require force generation, involving molecular motors likewise located on the kinetochores.
Chromosome anchoring to MTs in the mitotic spindle
Capturing MTs
During the synthesis phase (S phase) of the cell cycle, the centrosome starts to duplicate. Just at the beginning of mitosis, both centrioles in each centrosome reach their maximal length, centrosomes recruit additional material and their nucleation capacity for microtubules increases. As mitosis progresses, the two centrosomes separate to establish the mitotic spindle. In this way, the spindle in a mitotic cell has two poles emanating microtubules. Microtubules are long proteinaceous filaments with asymmetric ends: a "minus" (-) end that is relatively stable next to the centrosome, and a "plus" (+) end undergoing alternating phases of growth and shrinkage as it explores the center of the cell. During this search process, a microtubule may encounter and capture a chromosome through the kinetochore. Microtubules that find and attach a kinetochore become stabilized, whereas microtubules remaining free are rapidly depolymerized. As chromosomes have two kinetochores associated back-to-back (one on each sister chromatid), when one of them becomes attached to the microtubules generated by one of the cellular poles, the kinetochore on the sister chromatid becomes exposed to the opposite pole; for this reason, most of the time the second kinetochore becomes attached to the microtubules emanating from the opposing pole, so that the chromosome is now bi-oriented, a fundamental configuration (also termed amphitelic) that ensures the correct segregation of both chromatids when the cell divides.
When just one microtubule is anchored to one kinetochore, it starts a rapid movement of the associated chromosome towards the pole generating that microtubule. This movement is probably mediated by the minus (-) end-directed motor activity of the motor protein cytoplasmic dynein, which is very concentrated at kinetochores not anchored to MTs. The movement towards the pole slows down as kinetochores acquire kMTs (MTs anchored to kinetochores) and the movement becomes directed by changes in kMT length. Dynein is released from kinetochores as they acquire kMTs and, in cultured mammalian cells, it is required for spindle checkpoint inactivation, but not for chromosome congression at the spindle equator, kMT acquisition or anaphase A during chromosome segregation. In higher plants and in yeast there is no evidence of dynein, but other minus (-) end-directed kinesins might compensate for the lack of dynein.
Another motor protein implicated in the initial capture of MTs is CENP-E; this is a high molecular weight kinesin associated with the fibrous corona at mammalian kinetochores from prometaphase until anaphase. In cells with low levels of CENP-E, chromosomes lack this protein at their kinetochores, which quite often are defective in their ability to congress at the metaphase plate. In this case, some chromosomes may remain chronically mono-oriented (anchored to only one pole), although most chromosomes may congress correctly at the metaphase plate.
It is widely accepted that the kMTs fiber (the bundle of microtubules bound to the kinetochore) is originated by the capture of MTs polymerized at the centrosomes and spindle poles in mammalian cultured cells. However, MTs directly polymerized at kinetochores might also contribute significantly. The manner in which the centromeric region or kinetochore initiates the formation of kMTs and the frequency at which this happens are important questions, because this mechanism may contribute not only to the initial formation of kMTs, but also to the way in which kinetochores correct defective anchoring of MTs and regulate the movement along kMTs.
Role of Ndc80 complex
MTs associated with kinetochores have special features: compared to free MTs, kMTs are much more resistant to cold-induced depolymerization, high hydrostatic pressure or calcium exposure. Furthermore, kMTs are recycled much more slowly than astral MTs and spindle MTs with free (+) ends, and if kMTs are released from kinetochores using a laser beam, they rapidly depolymerize.
Once it was clear that neither dynein nor CENP-E is essential for kMT formation, other molecules had to be responsible for kMT stabilization. Pioneering genetic work in yeast revealed the relevance of the Ndc80 complex in kMT anchoring. In Saccharomyces cerevisiae, the Ndc80 complex has four components: Ndc80p, Nuf2p, Spc24p and Spc25p. Mutants lacking any of the components of this complex show loss of the kinetochore-microtubule connection, although kinetochore structure is not completely lost. Yet mutants in which kinetochore structure is lost (for instance Ndc10 mutants in yeast) are deficient both in the connection to microtubules and in the ability to activate the spindle checkpoint, probably because kinetochores work as a platform on which the components of the response are assembled.
The Ndc80 complex is highly conserved and it has been identified in S. pombe, C. elegans, Xenopus, chicken and humans. Studies on Hec1 (highly expressed in cancer cells 1), the human homolog of Ndc80p, show that it is important for correct chromosome congression and mitotic progression, and that it interacts with components of the cohesin and condensin complexes.
Different laboratories have shown that the Ndc80 complex is essential for stabilization of the kinetochore-microtubule anchoring, required to support the centromeric tension implicated in the establishment of correct chromosome congression in higher eukaryotes. Cells with impaired Ndc80 function (generated using RNAi, gene knockout, or antibody microinjection) have abnormally long spindles, lack tension between sister kinetochores, have chromosomes unable to congress at the metaphase plate, and show few or no associated kMTs.
There is strong support for the ability of the Ndc80 complex to associate directly with microtubules and form the core conserved component of the kinetochore-microtubule interface. However, formation of robust kinetochore-microtubule interactions may also require the function of additional proteins. In yeast, this connection requires the presence of the Dam1-DASH-DDD complex. Some members of this complex bind directly to MTs, whereas others bind to the Ndc80 complex. This means that the Dam1-DASH-DDD complex might be an essential adapter between kinetochores and microtubules. However, an equivalent complex has not been identified in animals, and this question remains under intense investigation.
Verification of kinetochore–MT anchoring
During S-Phase, the cell duplicates all the genetic information stored in the chromosomes, in the process termed DNA replication. At the end of this process, each chromosome includes two sister chromatids, which are two complete and identical DNA molecules. Both chromatids remain associated by cohesin complexes until anaphase, when chromosome segregation occurs. If chromosome segregation happens correctly, each daughter cell receives a complete set of chromatids, and for this to happen each sister chromatid has to anchor (through the corresponding kinetochore) to MTs generated in opposed poles of the mitotic spindle. This configuration is termed amphitelic or bi-orientation.
However, during the anchoring process some incorrect configurations may also appear:
monotelic: only one of the chromatids is anchored to MTs, while the second kinetochore is not anchored; in this situation, there is no centromeric tension, and the spindle checkpoint is activated, delaying entry into anaphase and allowing time for the cell to correct the error. If it is not corrected, the unanchored chromatid might randomly end up in either of the two daughter cells, generating aneuploidy: one daughter cell would have chromosomes in excess and the other would lack some chromosomes.
syntelic: both chromatids are anchored to MTs emanating from the same pole; this situation does not generate centromeric tension either, and the spindle checkpoint will be activated. If it is not corrected, both chromatids will end up in the same daughter cell, generating aneuploidy.
merotelic: at least one chromatid is anchored simultaneously to MTs emanating from both poles. This situation generates centromeric tension, and for this reason the spindle checkpoint is not activated. If it is not corrected, the chromatid bound to both poles will remain as a lagging chromosome at anaphase and will finally be broken into two fragments, distributed between the daughter cells, generating aneuploidy.
Both the monotelic and the syntelic configurations fail to generate centromeric tension and are detected by the spindle checkpoint. In contrast, the merotelic configuration is not detected by this control mechanism. However, most of these errors are detected and corrected before the cell enters anaphase. A key factor in the correction of these anchoring errors is the chromosomal passenger complex, which includes the kinase protein Aurora B, its target and activating subunit INCENP and two other subunits, Survivin and Borealin/Dasra B (reviewed by Adams and collaborators in 2001). Cells in which the function of this complex has been abolished by dominant-negative mutants, RNAi, antibody microinjection or the use of selective drugs accumulate errors in chromosome anchoring. Many studies have shown that Aurora B is required to destabilize incorrect kinetochore-MT anchoring, favoring the generation of amphitelic connections. The Aurora B homolog in yeast (Ipl1p) phosphorylates some kinetochore proteins, such as the constitutive protein Ndc10p and members of the Ndc80 and Dam1-DASH-DDD complexes. Phosphorylation of Ndc80 complex components destabilizes kMT anchoring. It has been proposed that Aurora B localization is important for its function: as it is located in the inner region of the kinetochore (in the centromeric heterochromatin), when centromeric tension is established the sister kinetochores separate, and Aurora B cannot reach its substrates, so that kMTs are stabilized. Aurora B is frequently overexpressed in several cancer types, and it is currently a target for the development of anticancer drugs.
Spindle checkpoint activation
The spindle checkpoint, or SAC (for spindle assembly checkpoint), also known as the mitotic checkpoint, is a cellular mechanism responsible for detection of:
correct assembly of the mitotic spindle;
attachment of all chromosomes to the mitotic spindle in a bipolar manner;
congression of all chromosomes at the metaphase plate.
When just one chromosome (for any reason) remains lagging during congression, the spindle checkpoint machinery generates a delay in cell cycle progression: the cell is arrested, allowing time for repair mechanisms to solve the detected problem. After some time, if the problem has not been solved, the cell will be targeted for apoptosis (programmed cell death), a safety mechanism to avoid the generation of aneuploidy, a situation which generally has dramatic consequences for the organism.
Whereas structural centromeric proteins (such as CENP-B) remain stably localized throughout mitosis (including during telophase), the spindle checkpoint components are assembled on the kinetochore in high concentrations in the absence of microtubules, and their concentrations decrease as the number of microtubules attached to the kinetochore increases.
At metaphase, CENP-E, Bub3 and Bub1 levels decrease three- to four-fold compared to the levels at unattached kinetochores, whereas the levels of dynein/dynactin, Mad1, Mad2 and BubR1 decrease more than 10- to 100-fold. Thus at metaphase, when all chromosomes are aligned at the metaphase plate, all checkpoint proteins are released from the kinetochore. The disappearance of the checkpoint proteins from the kinetochores indicates the moment when the chromosome has reached the metaphase plate and is under bipolar tension. At this moment, the checkpoint proteins that bind to and inhibit Cdc20 (Mad1-Mad2 and BubR1) release Cdc20, which binds and activates APC/CCdc20, and this complex triggers sister chromatid separation and consequently anaphase entry.
Several studies indicate that the Ndc80 complex participates in regulating the stable association of Mad1-Mad2 and dynein with kinetochores. Yet the kinetochore-associated proteins CENP-A, CENP-C, CENP-E, CENP-H and BubR1 are independent of Ndc80/Hec1. The prolonged arrest in prometaphase observed in cells with low levels of Ndc80/Hec1 depends on Mad2, although these cells show low levels of Mad1, Mad2 and dynein on kinetochores (<10–15% relative to unattached kinetochores). However, if both Ndc80/Hec1 and Nuf2 levels are reduced, Mad1 and Mad2 completely disappear from the kinetochores and the spindle checkpoint is inactivated.
Shugoshin (Sgo1, MEI-S332 in Drosophila melanogaster) are centromeric proteins which are essential to keep cohesin bound to centromeres until anaphase. The human homolog, hsSgo1, associates with centromeres during prophase and disappears when anaphase starts. When Shugoshin levels are reduced by RNAi in HeLa cells, cohesin cannot remain on the centromeres during mitosis, and consequently the sister chromatids separate synchronously before anaphase initiates, which triggers a long mitotic arrest.
On the other hand, Dasso and collaborators have found that proteins involved in the Ran cycle can be detected on kinetochores during mitosis: RanGAP1 (a GTPase-activating protein which stimulates the conversion of Ran-GTP into Ran-GDP) and the Ran-binding protein called RanBP2/Nup358. During interphase, these proteins are located at the nuclear pores and participate in nucleo-cytoplasmic transport. Kinetochore localization of these proteins seems to be functionally significant, because some treatments that increase the levels of Ran-GTP inhibit the kinetochore release of Bub1, Bub3, Mad2 and CENP-E.
Orc2 (a protein that belongs to the origin recognition complex (ORC) implicated in DNA replication initiation during S phase) also localizes to kinetochores during mitosis in human cells; in agreement with this localization, some studies indicate that Orc2 in yeast is implicated in sister chromatid cohesion, and when it is eliminated from the cell, spindle checkpoint activation ensues. Some other ORC components (such as Orc5 in S. pombe) have also been found to participate in cohesion. However, ORC proteins seem to participate in a molecular pathway that is additive to the cohesin pathway, and this pathway remains mostly unknown.
Force generation to propel chromosome movement
Most chromosome movements relative to the spindle poles are associated with the lengthening and shortening of kMTs. One of the features of kinetochores is their capacity to modify the state of their associated kMTs (around 20) from a depolymerization state at the (+) end to a polymerization state. This allows the kinetochores of cells in prometaphase to show "directional instability", switching between persistent phases of movement towards the pole (poleward) or away from it (anti-poleward), which are coupled with alternating states of kMT depolymerization and polymerization, respectively. This kinetochore bi-stability seems to be part of a mechanism to align the chromosomes at the equator of the spindle without losing the mechanical connection between kinetochores and spindle poles. It is thought that kinetochore bi-stability is based upon the dynamic instability of the kMT (+) end, and it is partially controlled by the tension present at the kinetochore. In mammalian cultured cells, low tension at kinetochores promotes a change towards kMT depolymerization, and high tension promotes a change towards kMT polymerization.
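The switching behavior described here can be caricatured in a few lines of code. The Python sketch below is a toy stochastic model in which every parameter (speed, switching rates, the tension curve) is an invented illustration rather than a measured value; it merely shows how tension-biased switching produces oscillation about a set point:

import math, random

# Toy model of kinetochore "directional instability": the kinetochore
# alternates between poleward movement (kMT depolymerization) and
# anti-poleward movement (kMT polymerization), with low tension
# favoring the former and high tension the latter, per the text above.

def tension(pos):
    # Invented tension proxy in (0, 1); rises as the kinetochore moves
    # toward its pole (pos > 0) and the link to its sister stretches.
    return 1.0 / (1.0 + math.exp(-4.0 * pos))

random.seed(1)
pos, state = 0.0, "poleward"
SPEED, BASE, GAIN = 0.02, 0.01, 0.25

trace = []
for _ in range(5000):
    pos += SPEED if state == "poleward" else -SPEED
    t = tension(pos)
    # Probability of reversing direction this step, biased by tension.
    p_switch = BASE + GAIN * (t if state == "poleward" else 1.0 - t)
    if random.random() < p_switch:
        state = "anti-poleward" if state == "poleward" else "poleward"
    trace.append(pos)

print(f"mean {sum(trace)/len(trace):+.2f}, "
      f"range [{min(trace):+.2f}, {max(trace):+.2f}]  (oscillation)")

Because high tension makes the poleward state likely to switch and low tension makes the anti-poleward state likely to switch, the simulated kinetochore oscillates around the position where the two biases balance, the qualitative behavior attributed to congressing chromosomes above.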
Kinetochore proteins and proteins binding to the MT (+) end (collectively called +TIPs) regulate kinetochore movement through the regulation of kMT (+) end dynamics. However, the kinetochore-microtubule interface is highly dynamic, and some of these proteins seem to be bona fide components of both structures. Two groups of proteins seem to be particularly important: kinesins which work as depolymerases, such as KinI kinesins; and proteins bound to MT (+) ends, +TIPs, which promote polymerization, perhaps antagonizing the depolymerase effect.
KinI kinesins are named "I" because they have an internal motor domain, which uses ATP to promote depolymerization of the tubulin polymer, the microtubule. In vertebrates, the most important KinI kinesin controlling the dynamics of (+) end assembly is MCAK. However, it seems that other kinesins are also implicated.
There are two groups of +TIPs with kinetochore functions.
The first one includes the protein adenomatous polyposis coli (APC) and the associated protein EB1, which need MTs to localize on the kinetochores. Both proteins are required for correct chromosome segregation. EB1 binds only to MTs in polymerizing state, suggesting that it promotes kMTs stabilization during this phase.
The second group of +TIPs includes proteins that can localize to kinetochores even in the absence of MTs. Two proteins in this group have been widely studied: CLIP-170 and its associated proteins, the CLASPs (CLIP-associated proteins). The role of CLIP-170 at kinetochores is unknown, but the expression of a dominant-negative mutant produces a prometaphase delay, suggesting that it has an active role in chromosome alignment. CLASP proteins are required for chromosome alignment and maintenance of a bipolar spindle in Drosophila, humans and yeast.
References
External links
Cell biology
Organelles
Mitosis
Meiosis | Kinetochore | [
"Biology"
] | 6,399 | [
"Cell biology",
"Meiosis",
"Molecular genetics",
"Cellular processes",
"Mitosis"
] |
998,941 | https://en.wikipedia.org/wiki/BlueJ | BlueJ is an integrated development environment (IDE) for the Java programming language, developed mainly for educational purposes, but also suitable for small-scale software development. It runs with the help of the Java Development Kit (JDK).
BlueJ was developed to support the learning and teaching of object-oriented programming, and its design differs from other development environments as a result. The main screen graphically shows the class structure of an application under development (in a UML-like diagram), and objects can be interactively created and tested. This interaction facility, combined with a clean, simple user interface, allows easy experimentation with objects under development. Object-oriented concepts (classes, objects, communication through method calls) are represented visually in the interface and in its interaction design.
History
The development of BlueJ was started in 1999 by Michael Kölling and John Rosenberg at Monash University, as a successor to the Blue system. Blue was an integrated system with its own programming language and environment, and was a relative of the Eiffel language. BlueJ implements the Blue environment design for the Java programming language.
In March 2009, the BlueJ project became free and open source software, and licensed under GPL-2.0-or-later with the Classpath exception.
BlueJ is currently being maintained by a team at King's College London, England, where Kölling works.
Supported language
BlueJ supports programming in Java and in Stride. Java support has been provided in BlueJ since its inception, while Stride support was added in 2017.
See also
Greenfoot
DrJava
Educational programming language
References
Bibliography
External links
BlueJ textbook
Integrated development environments
Free integrated development environments
Cross-platform free software
Free software programmed in Java (programming language)
Java development tools
Java platform
Linux programming tools
Software development kits
MacOS programming tools
Programming tools for Windows
Linux software
Educational programming languages
Pedagogic integrated development environments | BlueJ | [
"Technology"
] | 395 | [
"Computing platforms",
"Java platform"
] |
999,006 | https://en.wikipedia.org/wiki/Amination | Amination is the process by which an amine group is introduced into an organic molecule. This type of reaction is important because organonitrogen compounds are pervasive.
Reactions
Aminase enzymes
Enzymes that catalyse this reaction are termed aminases. Amination can occur in a number of ways, including reaction with ammonia or another amine, as in alkylation, reductive amination, and the Mannich reaction.
Acid-catalysed hydroamination
Many alkyl amines are produced industrially by the amination of alcohols using ammonia in the presence of solid acid catalysts. Illustrative of acid-catalysed hydroamination is the production of tert-butylamine from isobutene:
NH3 + CH2=C(CH3)2 → H2NC(CH3)3
The Ritter reaction of isobutene with hydrogen cyanide is not useful in this case because it produces too much waste.
Electrophilic amination
Usually, the amine reacts as the nucleophile with another organic compound acting as the electrophile. This sense of reactivity may be reversed for some electron-deficient amines, including oxaziridines, hydroxylamines, oximes, and other N–O substrates. When the amine is used as an electrophile, the reaction is called electrophilic amination. Electron-rich organic substrates that may be used as nucleophiles for this process include carbanions and enolates.
Miscellaneous methods
Alpha hydroxy acids can be converted into amino acids directly using aqueous ammonia solution, hydrogen gas and a heterogeneous metallic ruthenium catalyst.
Metal-catalyzed hydroamination
In hydroamination, amines add to alkenes. When substituted amines add, the result is alkene carboamination.
See also
Alkylation, the addition of an alkyl group
Acylation, the addition of an acyl group (-C(O)R)
Deamination
References
Organic reactions | Amination | [
"Chemistry"
] | 412 | [
"Organic reactions"
] |
999,479 | https://en.wikipedia.org/wiki/Pussy%20willow | Pussy willow is a name given to many of the smaller species of the genus Salix (willows and sallows) when their furry catkins are young in early spring. These species include (among many others):
Goat willow or goat sallow (Salix caprea), a small tree native to northern Europe and northwest Asia.
Grey willow or grey sallow (Salix cinerea), a small tree native to northern Europe.
American pussy willow (Salix discolor), native to northern North America.
Before the male catkins of these species come into full flower they are covered in fine, greyish fur, leading to a fancied likeness to tiny cats, also known as "pussies". The catkins appear before the leaves, and are one of the earliest signs of spring. At other times of year trees of most of these species are usually known by their ordinary names.
Cultural traditions
Asia
The many buds of the pussy willow make it a favourite flower for Lunar New Year. The fluffy white blossoms of the pussy willow resemble silk, and they soon give forth young shoots the colour of green jade. In Chinese tradition, this represents the coming of prosperity. Towards the Lunar New Year period in spring, stalks of the plant may be bought from wet market vendors or supermarkets.
Once unbundled within one's residence, the stalks are frequently decorated with gold and red ornaments—ornaments with colours and textures that signify prosperity and happiness. Felt pieces of red, pink, and yellow are also a common decoration in Southeast Asia.
Xie Daoyun's comparison of snow and willow catkins is a famous line of poetry and is used to refer to precocious young female poets.
Europe
The flowering shoots of pussy willow are used both in Europe and America for spring religious decoration on Palm Sunday, as a replacement for palm branches, which do not grow that far north.
Ukrainian and Russian Orthodox; Ruthenian, Polish, Romanian, Bulgarian, Czech, Slovak, Bavarian, and Austrian Roman Catholics; Finnish and Baltic Lutherans and Orthodox; and various other Eastern European peoples carry pussy willows on Palm Sunday instead of palm branches. This custom has continued to this day among Ukrainian Orthodox Church, Romanian Orthodox, Russian Orthodox, Ruthenian Catholic, Ukrainian Catholic, Kashubian Catholic and Polish Catholic émigrés to North America. Sometimes, on Palm Sunday they will bless both palms and pussy willows in church. The branches will often be preserved throughout the year in the family's icon corner.
Pussy willow also plays a prominent role in Polish Dyngus Day (Easter Monday) observances, continued also among Polish-Americans, especially in the Buffalo, New York, area.
Middle East
In Greater Iran it may be part of the decoration on the Haft-Seen table during the new year celebration of Nowruz on the first day of spring, and its distilled flower is used in traditional medicine.
References
Easter traditions
Plant common names
Salix
"Biology"
] | 619 | [
"Plants",
"Plant common names",
"Common names of organisms"
] |
999,516 | https://en.wikipedia.org/wiki/Gustav%20Zeuner | Gustav Anton Zeuner (30 November 1828 – 17 October 1907) was a German physicist, engineer and epistemologist, considered the founder of technical thermodynamics and of the Dresden School of Thermodynamics.
Life
University and Revolution
Zeuner was born in Chemnitz, Saxony. His first training in the subject of engineering was at the Chemnitz Königliche Gewerbeschule (Royal Vocational School), today Chemnitz University of Technology, where he studied from 1843 to 1848.
In 1848 he moved the short distance to the Bergakademie (Mining Academy) in Freiberg, today also a university of technology, where he studied mining and metallurgy. He developed close links with one of his professors, the famous mineralogist Albin Julius Weisbach, with whom he worked on several projects.
The university course was disrupted, however, during the revolutions which took place all over Germany. Large popular assemblies and mass demonstrations took place, primarily demanding freedom of the press, freedom of assembly, arming of the people, and a national German parliament. Zeuner joined the revolutionaries on the barricades in Dresden during the May Uprising in 1849. Unlike many of his compatriots, some of whom were sentenced to death or sent to the workhouse, Zeuner was pardoned. He was able to complete his course, and even completed his PhD at the University of Leipzig in 1853, but was banned from ever teaching at any Saxon university.
Escape to Zürich
In 1853, Zeuner took over as the editor of the engineering magazine "Der Civilingenieur. Zeitschrift für das Ingenieurwesen", the first German magazine specialising in mechanics, which ran until 1896. He continued in this position until 1857, even after moving to Zürich in 1855 to work as a professor of technical mechanics at the ETH Zürich, the Swiss Federal Institute of Technology in Zürich. There he worked alongside famous engineers such as Franz Reuleaux. Other Dresden revolutionaries had also fled their home country for Zürich (Richard Wagner, Gottfried Semper, Theodor Mommsen).
It was in Zürich that Zeuner made his model of a locomotive front end in 1858; he recognised its potential for creating momentum but was only interested in the theory and did not develop the design any further. Also in Zürich (in 1869) Zeuner invented the three-dimensional population graph now sometimes known as a Zeuner diagram but more often as a Lexis diagram after Wilhelm Lexis, who modified the idea slightly.
From 1859 Zeuner worked as the stand-in director of the ETH Zürich, and in May 1865 he took over the position officially. His former professor, Albin Weisbach, commemorated his friend's acquisition of the post by naming a mineral after him - the transparent green crystal zeunerite.
Return to Germany
In 1871 Zeuner returned to Germany and was once again able to work with Weisbach when he succeeded his old friend as director of the Freiberg Mining Academy. He also taught there until 1875 as a professor of mechanics and the study of mining machinery. This was now possible, despite the teaching ban which had been placed on him, because of the amnesty granted to all the revolutionaries in 1862.
In 1873, while still director of Freiberg Mining Academy, Zeuner also took on the post of director at the Royal Saxon Polytechnicum in Dresden (now Technische Universität Dresden). Zeuner's efforts there led to the introduction of the humanities; the extension of the range of subjects taught resulted in the polytechnic's rise to a full-scale polytechnic university in 1890.
In 1889, aged 61, Zeuner gave up his position as director of the polytechnic to work as a lecturer until his retirement in 1897. On retiring he was made an emeritus professor. Zeuner died in Dresden in 1907.
Gustav Zeuner Award
Since 1993, the German Association of Engineers (Verein Deutscher Ingenieure or VDI) has presented students with the Gustav Zeuner Award for the best engineering thesis in Germany; Zeuner supported the Dresden branch of the VDI at its foundation in 1897.
Publications
Die Schiebersteuerungen mit besonderer Berücksichtigung der Lokomotivsteuerungen (Slide-valve controls with particular emphasis on locomotive controls) Freiberg 1858
Grundzüge der mechanischen Wärmetheorie (Basics of mechanical heat theory) 1860
Technische Thermodynamik (Technical Thermodynamics) 1887; translated into English in 1907 as Technical Thermodynamics
See also
Piston valve (steam engine)
Zeuner water turbine
References
Further reading
Das Leben und Wirken von Gustav Anton Zeuner by Gerd Grabow, published 1984 by Deutscher Verlag für Grundstoffanalyse.
German mechanical engineers
19th-century German physicists
People of the Revolutions of 1848
People from the Kingdom of Saxony
Engineers from Chemnitz
Leipzig University alumni
1828 births
1907 deaths
Recipients of German royal pardons
Academic staff of ETH Zurich
Thermodynamicists | Gustav Zeuner | [
"Physics",
"Chemistry"
] | 1,052 | [
"Thermodynamics",
"Thermodynamicists"
] |
999,536 | https://en.wikipedia.org/wiki/Product%20design | Product design is the process of creating new products for businesses to sell to their customers. It involves the generation and development of ideas through a systematic process that leads to the creation of innovative products. Thus, it is a major aspect of new product development.
The product design process is a set of strategic and tactical activities, from idea generation to commercialization, used to create a product design. In a systematic approach, product designers conceptualize and evaluate ideas, turning them into tangible inventions and products. The product designer's role is to combine art, science, and technology to create new products that people can use. Their evolving role has been facilitated by digital tools that now allow designers to communicate, visualize, analyze, 3D-model and produce tangible ideas in a way that would have taken greater human resources in the past.
Product design is sometimes confused with (and certainly overlaps with) industrial design, and has recently become a broad term inclusive of service, software, and physical product design. Industrial design is concerned with bringing artistic form and usability, usually associated with craft design and ergonomics, together in order to mass-produce goods. Other aspects of product design and industrial design include engineering design, particularly when matters of functionality or utility (e.g. problem-solving) are at issue, though such boundaries are not always clear.
Product design process
There are various product design processes, and many focus on different aspects. One example formulation/model of the process is described by Don Koberg and Jim Bagnall in "The Seven Universal Stages of Creative Problem-Solving." The process is usually completed by a group of people with different skills and training—e.g. industrial designers, field experts (prospective users), engineers (for engineering design aspects), depending upon the nature and type of the product involved. The process often involves figuring out what is required, brainstorming possible ideas, creating mock prototypes and then generating the product. However, that is not the end. Product designers would still need to execute the idea, making it into an actual product and evaluating its success (seeing if any improvements are necessary).
The product design process has experienced huge leaps in evolution over the last few years with the rise and adoption of 3D printing. New consumer-friendly 3D printers can produce three-dimensional objects and print upwards with a plastic-like substance, as opposed to traditional printers that spread ink across a page.
The product design process, as expressed by Koberg and Bagnall, typically involves three main aspects:
Analysis
Concept
Synthesis
Depending on the kind of product being designed, the latter two sections are most often revisited (e.g. depending on how often the design needs revision, to improve it or to better fit the criteria). This is a continuous loop, where feedback is the main component. Koberg and Bagnall offer more specifics on the process: in their model, "analysis" consists of two stages, "concept" is only one stage, and "synthesis" encompasses the other four. (These terms notably vary in usage in different design frameworks; here, they are used in the way they are used by Koberg and Bagnall.)
Analysis
Accept Situation: Here, the designers decide on committing to the project and finding a solution to the problem. They pool their resources into figuring out how to solve the task most efficiently.
Analyze: In this stage, everyone in the team begins research. They gather general and specific materials which will help to figure out how their problem might be solved. This can range from statistics, questionnaires, and articles, among many other sources.
Concept
Define: This is where the key issue of the matter is defined. The conditions of the problem become objectives, and restraints on the situation become the parameters within which the new design must be constructed.
Synthesis
Ideate: The designers here brainstorm different ideas, solutions for their design problem. The ideal brainstorming session does not involve any bias or judgment, but instead builds on original ideas.
Select: By now, the designers have narrowed down their ideas to a select few, which can be guaranteed successes and from there they can outline their plan to make the product.
Implement: This is where the prototypes are built, the plan outlined in the previous step is realized and the product starts to become an actual object.
Evaluate: In the last stage, the product is tested, and from there, improvements are made. Although this is the last stage, it does not mean that the process is over. The finished prototype may not work as well as hoped so new ideas need to be brainstormed.
Double Diamond Framework
The Double Diamond Framework is a widely used approach for product discovery, which emphasizes a structured method for problem-solving and solution development, encouraging teams to diverge (broad exploration) before converging (focused decision-making).
The framework is divided into two primary stages: diverging and converging, each with its own steps and considerations.
Diverging Stage:
During the diverging stage, teams explore the problem space broadly without predefined solutions. This phase involves engaging with core personas, conducting open-ended conversations, and gathering unfiltered input from customer-facing teams. The goal is to identify and document various problem areas, allowing themes and key issues to emerge naturally.
Converging Stage:
As insights emerge, teams transition to the converging stage, where they narrow down problem areas and prioritize solutions. This phase involves defining the problem, understanding major pain points, and advocating for solutions within the organization. Effective convergence requires clear articulation of the problem's significance and consideration of business strategies and feasibility.
Iterative Process:
The Double Diamond Framework is iterative, allowing teams to revisit stages as needed based on feedback and outcomes. Moving back to earlier stages may be necessary if solutions fail to address underlying issues or elicit negative user responses. Success lies in the team's ability to adapt and refine their approach over time.
Creative visualization
In design, Creative Visualization refers to the process by which computer generated imagery, digital animation, three-dimensional models, and two-dimensional representations, such as architectural blueprints, engineering drawings, and sewing patterns are created and used in order to visualize a potential product prior to production. Such products include prototypes for vehicles in automotive engineering, apparel in the fashion industry, and buildings in architectural design.
Demand-pull innovation and invention-push innovation
Most product designs fall under one of two categories: demand-pull innovation or invention-push innovation.
Demand-pull happens when there is an opportunity in the market to be explored by the design of a product. This product design attempts to solve a design problem. The design solution may be the development of a new product or developing a product that's already on the market, such as developing an existing invention for another purpose.
Invention-push innovation happens when there is an advancement in intelligence. This can occur through research or it can occur when the product designer comes up with a new product design idea.
Product design expression
Design expression comes from the combined effect of all elements in a product. Colour tone, shape and size should direct a person's thoughts towards buying the product. Therefore, it is in the product designer's best interest to consider the audiences who are most likely to be the product's end consumers. Keeping in mind how consumers will perceive the product during the design process helps steer the product towards success in the market. However, even within a specific audience, it is challenging to cater to each possible personality within that group.
One solution is to create a product that, in its designed appearance and function, expresses a personality or tells a story. Products that carry such attributes are more likely to give off a stronger expression that attracts more consumers. On that note, it is important to keep in mind that design expression does not only concern the appearance of a product, but also its function. For example, as humans, our appearance as well as our actions are subject to people's judgment when they are making a first impression of us. People usually do not appreciate a rude person even if they are good looking. Similarly, a product can have an attractive appearance, but if its function does not follow through, consumer interest will most likely decline. In this sense, designers are like communicators: they use the language of different elements in the product to express something.
Trends in product design
Product designers must consider every detail: how people use and misuse objects, potential flaws in products, errors in the design process, and the ideal ways people wish they could interact with those objects. Many new designs will fail and many won't even make it to market. Some designs eventually become obsolete. The design process itself can be quite frustrating, usually taking five or six tries to get the product design right. A product that fails in the marketplace the first time may be re-introduced to the market two more times. If it continues to fail, the product is then considered dead because the market believes it to be a failure. Most new products fail, even if there's a great idea behind them.
All types of product design are clearly linked to the economic health of manufacturing sectors. Innovation provides much of the competitive impetus for the development of new products, with new technology often requiring a new design interpretation. It only takes one manufacturer to create a new product paradigm to force the rest of the industry to catch up—fueling further innovation. Products designed to benefit people of all ages and abilities—without penalty to any group—accommodate our swelling aging population by extending independence and supporting the changing physical and sensory needs we all encounter as we grow older.
See also
Axiomatic product development lifecycle (APDL)
Industrial design
Sustainable design
Transgenerational design
Virtual product development
Universal design
Inclusive design
References
Design for X | Product design | [
"Engineering"
] | 1,996 | [
"Product design",
"Design",
"Design for X"
] |
999,701 | https://en.wikipedia.org/wiki/Rate%20of%20convergence | In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence.
Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article.
In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis.
For iterative methods, a sequence $(x_k)$ that converges to $L$ is said to have asymptotic order of convergence $q \geq 1$ and asymptotic rate of convergence $\mu$ if

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu.$$
Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. The rate of convergence may also be called the asymptotic error constant, and some authors will use rate where this article uses order. Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series and possibly its order of convergence, also.
Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions $(y_n)$ of some problem that converges to a true solution $S$ with a corresponding sequence of regular grid spacings $(h_n)$ that converge to 0 is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{n \to \infty} \frac{|y_n - S|}{h_n^q} = \mu,$$
where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order power of a discretization scale parameter below.
In general, comparatively, one sequence $(a_k)$ that converges to a limit $L_a$ is said to asymptotically converge more quickly than another sequence $(b_k)$ that converges to a limit $L_b$ if

$$\lim_{k \to \infty} \frac{|a_k - L_a|}{|b_k - L_b|} = 0,$$

and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis.
Asymptotic rates of convergence for iterative methods
Definitions
Suppose that the sequence $(x_k)$ of iterates of an iterative method converges to the limit number $L$ as $k \to \infty$. The sequence is said to converge with order $q$ to $L$ and with a rate of convergence $\mu$ if the limit of quotients of absolute differences of sequential iterates from their limit satisfies

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu$$

for some positive constant $\mu$ if $q > 1$ and $\mu \in (0, 1)$ if $q = 1$. Other more technical rate definitions are needed if the sequence converges but $\lim_{k \to \infty} |x_{k+1} - L| / |x_k - L| = 1$ or the limit does not exist. This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. R-convergence, below, is an appropriate alternative when this limit does not exist.
Sequences with larger orders $q$ converge more quickly than those with smaller order, and those with smaller rates $\mu$ converge more quickly than those with larger rates for a given order. This "smaller rates converge more quickly" behavior among sequences of the same order is standard but it can be counterintuitive. Therefore it is also common to define $-\log_{10} \mu$ as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1.
Integer values of the order $q$ are common and are given common names. Convergence with order $q = 1$ and $\mu \in (0, 1)$ is called linear convergence and the sequence is said to converge linearly to $L$. Convergence with $q = 2$ and any $\mu$ is called quadratic convergence and the sequence is said to converge quadratically. Convergence with $q = 3$ and any $\mu$ is called cubic convergence. However, it is not necessary that $q$ be an integer. For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio φ ≈ 1.618.
The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies $|x_{k+1} - L| = O(|x_k - L|^q)$. These are linear, quadratic, and cubic polynomial expressions when $q$ is 1, 2, and 3, respectively. More precisely, the limits imply the leading order error is exactly $\mu |x_k - L|^q$, which can be expressed using asymptotic small o notation as

$$|x_{k+1} - L| = \mu |x_k - L|^q + o(|x_k - L|^q).$$
In general, when $q > 1$ for a sequence, or for any sequence that satisfies

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|} = 0,$$

those sequences are said to converge superlinearly (i.e., faster than linearly). A sequence is said to converge sublinearly (i.e., slower than linearly) if it converges and

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|} = 1.$$

Importantly, it is incorrect to say that these sublinear-order sequences converge linearly with an asymptotic rate of convergence of 1. A sequence $(x_k)$ converges logarithmically to $L$ if the sequence converges sublinearly and also

$$\lim_{k \to \infty} \frac{|x_{k+1} - x_k|}{|x_k - x_{k-1}|} = 1.$$
R-convergence
The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example $b_k = 4^{-\lfloor k/2 \rfloor}$ detailed below (where $\lfloor \cdot \rfloor$ is the floor function applied to $k/2$). The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit.
In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root." A sequence $(x_k)$ that converges to $L$ is said to converge at least R-linearly if there exists an error-bounding sequence $(\varepsilon_k)$ such that $|x_k - L| \leq \varepsilon_k$ for all $k$ and $(\varepsilon_k)$ converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on.
Any error-bounding sequence provides a lower bound on the rate and order of R-convergence and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders converge more quickly and those with smaller rates converge more quickly for a given order, so these greatest-rate-lower-bound error-upper-bound sequences are those that have the greatest possible $q$ and the smallest possible $\mu$ given that $|x_k - L| \leq \varepsilon_k$ for all $k$.
For the example $b_k = 4^{-\lfloor k/2 \rfloor}$ given above, the tight bounding sequence $\varepsilon_k = 2 \cdot 2^{-k}$ converges Q-linearly with rate 1/2, so $(b_k)$ converges R-linearly with rate 1/2. Generally, for any staggered geometric progression $(r^{\lfloor k/m \rfloor})$, the sequence will not converge Q-linearly but will converge R-linearly with rate $\sqrt[m]{|r|}.$ These examples demonstrate why the "R" in R-linear convergence is short for "root."
Examples
The geometric progression $a_k = 1/2^k$ converges to $L = 0$. Plugging the sequence into the definition of Q-linear convergence (i.e., order of convergence 1) shows that

$$\lim_{k \to \infty} \frac{|1/2^{k+1} - 0|}{|1/2^k - 0|} = \frac{1}{2}.$$

Thus $(a_k)$ converges Q-linearly with a convergence rate of $\mu = 1/2$; see the first plot of the figure below.
More generally, for any initial value $a_0$ in the real numbers and a real number common ratio $r$ between -1 and 1, a geometric progression $(a_0 r^k)$ converges linearly with rate $|r|$, and the sequence of partial sums of a geometric series $(\sum_{n=0}^{k} a_0 r^n)$ also converges linearly with rate $|r|$. The same holds also for geometric progressions and geometric series parameterized by any complex numbers $a_0 \in \mathbb{C}$, $r \in \mathbb{C}$ with $|r| < 1$.
The staggered geometric progression $b_k = 4^{-\lfloor k/2 \rfloor}$, using the floor function $\lfloor \cdot \rfloor$ that gives the largest integer that is less than or equal to its argument, converges R-linearly to 0 with rate 1/2 but does not converge Q-linearly, for the reasons detailed in the R-convergence section above; see the second plot of the figure below.
The sequence

$$c_k = 2^{-2^k}, \quad (c_k) = \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{16}, \tfrac{1}{256}, \ldots$$

converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1, since $c_{k+1} = c_k^2$. It is shown in the third plot of the figure below.
Finally, the sequence

$$d_k = \frac{1}{k + 1}$$

converges to zero Q-sublinearly and logarithmically and its convergence is shown as the fourth plot of the figure below.
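These convergence behaviors can be checked numerically. The following Python sketch (the helper q_ratios is a hypothetical name written for this illustration, not a library function) computes the successive error quotients from the Q-convergence definition for the four example sequences above; the geometric progression's quotients settle at 1/2, the staggered progression's quotients oscillate between 1 and 1/4 (so no Q-linear limit exists), the quadratically convergent sequence's order-2 quotients settle at 1, and the sublinear sequence's quotients approach 1.

def q_ratios(seq, limit, q=1.0):
    # Successive quotients |x_{k+1} - L| / |x_k - L|^q from the Q-convergence definition.
    errs = [abs(x - limit) for x in seq]
    return [e1 / e0 ** q for e0, e1 in zip(errs, errs[1:]) if e0 > 0]

ks = range(20)
geometric = [0.5 ** k for k in ks]                # a_k = 1/2^k: Q-linear, rate 1/2
staggered = [0.25 ** (k // 2) for k in ks]        # b_k = 4^(-floor(k/2)): R-linear only
quadratic = [2.0 ** -(2 ** k) for k in range(5)]  # c_k = 2^(-2^k): Q-quadratic
sublinear = [1.0 / (k + 1) for k in ks]           # d_k = 1/(k+1): Q-sublinear

print(q_ratios(geometric, 0.0)[-3:])      # ~[0.5, 0.5, 0.5]
print(q_ratios(staggered, 0.0)[-4:])      # oscillates: [0.25, 1.0, 0.25, 1.0]
print(q_ratios(quadratic, 0.0, q=2.0))    # [1.0, 1.0, 1.0, 1.0]
print(q_ratios(sublinear, 0.0)[-3:])      # tending to 1.0 from below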
Convergence rates to fixed points of recurrent sequences
Recurrent sequences $x_{k+1} := f(x_k)$, called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior. When $f$ is continuously differentiable, given a fixed point $p$, $f(p) = p$, such that $|f'(p)| < 1$, the fixed point is an attractive fixed point and the recurrent sequence will converge at least linearly to $p$ for any starting value sufficiently close to $p$. If $f'(p) = 0$ and $f''(p) \neq 0$, then the recurrent sequence will converge at least quadratically, and so on. If $|f'(p)| > 1$, then the fixed point is a repulsive fixed point and sequences cannot converge to $p$ from its immediate neighborhoods, though they may still jump to $p$ directly from outside of its local neighborhoods.
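As a concrete illustration, here is a minimal Python sketch (the helper iterate and the quoted fixed-point value are assumptions made for the demonstration): iterating $f(x) = \cos x$ converges Q-linearly because $|f'(p)| = |\sin p| \approx 0.674 < 1$ at its fixed point, while the Newton map $g(x) = (x + 2/x)/2$ for computing $\sqrt{2}$ satisfies $g'(\sqrt{2}) = 0$ and so converges at least quadratically.

import math

def iterate(f, x0, n):
    # Generate the fixed point iteration x0, f(x0), f(f(x0)), ...
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# Attractive fixed point of cos: p ~ 0.739085, with |cos'(p)| ~ 0.674 < 1.
xs = iterate(math.cos, 1.0, 40)
p = 0.7390851332151607  # approximate fixed point, quoted for the check below
print(abs(xs[-1] - p) / abs(xs[-2] - p))      # ~0.674: the Q-linear rate

# Newton map for x^2 = 2: the derivative vanishes at sqrt(2), so order >= 2.
ys = iterate(lambda x: (x + 2.0 / x) / 2.0, 1.0, 5)
print([abs(y - math.sqrt(2.0)) for y in ys])  # errors roughly square each step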
Order estimation
A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to the order $q$:

$$q \approx \frac{\log \left| \dfrac{x_{k+1} - x_k}{x_k - x_{k-1}} \right|}{\log \left| \dfrac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}} \right|}.$$
For the numerical approximation of an exact value through a numerical method of order $q$, see the references.
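The estimate above can be computed directly from the last few iterates. A minimal Python sketch (estimate_order is a hypothetical helper name) applied to the Newton iteration for $\sqrt{2}$ reports an order near 2:

import math

def estimate_order(xs):
    # q ~ log|dx_k / dx_{k-1}| / log|dx_{k-1} / dx_{k-2}|, from the last four iterates.
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))

xs = [1.0]
for _ in range(4):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)  # Newton's method for x^2 = 2
print(estimate_order(xs))  # ~2.0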
Accelerating convergence rates
Many methods exist to accelerate the convergence of a given sequence, i.e., to transform one sequence into a second sequence that converges more quickly to the same limit. Such techniques are in general known as "series acceleration" methods. These may reduce the computational costs of approximating the limits of the original sequences. One example of series acceleration by sequence transformation is Aitken's delta-squared process. These methods in general, and in particular Aitken's method, do not typically increase the order of convergence and thus they are useful only if initially the convergence is not faster than linear: if $(x_k)$ converges linearly, Aitken's method transforms it into a sequence $(a_k)$ that still converges linearly (except for pathologically designed special cases), but faster in the sense that $\lim_{k \to \infty} |a_k - L| / |x_k - L| = 0$. On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement.
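A minimal Python sketch of Aitken's delta-squared process (the function name aitken and the quoted fixed-point value are chosen for this illustration), applied to the linearly convergent cosine fixed-point iteration from above; the transformed sequence approaches the same limit with markedly smaller errors:

import math

def aitken(xs):
    # Aitken's delta-squared transform; the result is two terms shorter than the input.
    return [x0 - (x1 - x0) ** 2 / (x2 - 2.0 * x1 + x0)
            for x0, x1, x2 in zip(xs, xs[1:], xs[2:])]

xs = [0.5]
for _ in range(10):
    xs.append(math.cos(xs[-1]))  # converges linearly with rate ~0.674

p = 0.7390851332151607  # approximate fixed point of cos
print(abs(xs[8] - p))           # error of the original sequence
print(abs(aitken(xs)[8] - p))   # accelerated sequence: much smaller error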
Asymptotic rates of convergence for discretization methods
Definitions
A sequence of discretized approximations $(y_n)$ of some continuous-domain function $S$ that converges to this target, together with a corresponding sequence of discretization scale parameters $(h_n)$ that converge to 0, is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{n \to \infty} \frac{|y_n - S|}{h_n^q} = \mu$$

for some positive constants $\mu$ and $q$, and using $|y_n - S|$ to stand for an appropriate distance metric on the space of solutions, most often either the uniform norm, the absolute difference, or the Euclidean distance. Discretization scale parameters may be spacings of a regular grid in space or in time, the inverse of the number of points of a grid in one dimension, an average or maximum distance between points in a polygon mesh, the single-dimension spacings of an irregular sparse grid, or a characteristic quantum of energy or momentum in a quantum mechanical basis set.
When all the discretizations are generated using a single common method, it is common to discuss the asymptotic rate and order of convergence for the method itself rather than any particular discrete sequences of discretized solutions. In these cases one considers a single abstract discretized solution $y_h$ generated using the method with a scale parameter $h$, and then the method is said to have asymptotic order of convergence $q$ and asymptotic rate of convergence $\mu$ if

$$\lim_{h \to 0} \frac{|y_h - S|}{h^q} = \mu,$$

again for some positive constants $\mu$ and $q$ and an appropriate metric. This implies that the error of a discretization asymptotically scales like the discretization's scale parameter to the $q$th power, or $|y_h - S| = O(h^q)$ using asymptotic big O notation. More precisely, it implies the leading order error is $\mu h^q$, which can be expressed using asymptotic small o notation as

$$|y_h - S| = \mu h^q + o(h^q).$$
In some cases multiple rates and orders for the same method but with different choices of scale parameter may be important, for instance for finite difference methods based on multidimensional grids where the different dimensions have different grid spacings or for finite element methods based on polygon meshes where choosing either average distance between mesh points or maximum distance between mesh points as scale parameters may imply different orders of convergence. In some especially technical contexts, discretization methods' asymptotic rates and orders of convergence will be characterized by several scale parameters at once with the value of each scale parameter possibly affecting the asymptotic rate and order of convergence of the method with respect to the other scale parameters.
Example
Consider the ordinary differential equation

$$\frac{dy}{dx} = -\kappa y$$

with initial condition $y(0) = y_0$. We can approximate a solution to this one-dimensional equation using a sequence $(y_n)$ applying the forward Euler method for numerical discretization using any regular grid spacing $h$ and grid points indexed by $n$ as follows:

$$\frac{y_{n+1} - y_n}{h} = -\kappa y_n,$$

which implies the first-order linear recurrence with constant coefficients

$$y_{n+1} = y_n (1 - h\kappa).$$

Given $y(0) = y_0$, the sequence satisfying that recurrence is the geometric progression

$$y_n = y_0 (1 - h\kappa)^n.$$

The exact analytical solution to the differential equation is $y = f(x) = y_0 e^{-\kappa x}$, corresponding to the following Taylor expansion in $h\kappa$ at the grid points $x_n = nh$:

$$f(x_n) = y_0 e^{-\kappa n h} = y_0 \left(1 - n h \kappa + \frac{n^2 h^2 \kappa^2}{2} - \cdots \right).$$

Therefore the error of the discrete approximation at each discrete point is, to leading order in $h$ for fixed $n$,

$$|y_n - f(x_n)| = \frac{n h^2 \kappa^2}{2} y_0 + O(h^3).$$
For any specific $x > 0$, given a sequence of forward Euler approximations of $f(x)$, each using grid spacings $h$ that divide $x$ so that $x = nh$, one has

$$\lim_{h \to 0} \frac{|y_n - f(x)|}{h} = \frac{\kappa^2 x y_0 e^{-\kappa x}}{2}$$

for any sequence of grids with successively smaller grid spacings $h$. Thus $(y_n)$ converges to $f(x)$ pointwise with a convergence order $q = 1$ and asymptotic error constant $\kappa^2 x y_0 e^{-\kappa x} / 2$ at each point $x$. Similarly, the sequence converges uniformly with the same order, with rate given by the supremum of the pointwise error constants, on any bounded interval of $x$, but it does not converge uniformly on the unbounded set of all positive real values, $(0, \infty)$.
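The first-order convergence derived above can be verified empirically. In the following minimal Python sketch (the helper name euler_error and the parameter choices $\kappa = y_0 = x = 1$ are assumptions made for this illustration), the ratios of error to grid spacing approach the asymptotic error constant $\kappa^2 x y_0 e^{-\kappa x} / 2 \approx 0.1839$:

import math

def euler_error(kappa, y0, x, h):
    # Forward Euler for dy/dx = -kappa*y up to x, compared with the exact solution.
    n = round(x / h)
    y = y0
    for _ in range(n):
        y *= (1.0 - h * kappa)
    return abs(y - y0 * math.exp(-kappa * x))

kappa, y0, x = 1.0, 1.0, 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, euler_error(kappa, y0, x, h) / h)  # ratios tend to ~0.1839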
Comparing asymptotic rates of convergence
Definitions
In asymptotic analysis in general, one sequence $(a_k)$ that converges to a limit $L_a$ is said to asymptotically converge to $L_a$ with a faster order of convergence than another sequence $(b_k)$ that converges to a limit $L_b$ in a shared metric space with distance metric $d$, such as the real numbers or complex numbers with the ordinary absolute difference metrics, if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = 0;$$

the two are said to asymptotically converge to their limits with the same order of convergence if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = \mu$$

for some positive finite constant $\mu$, and the two are said to asymptotically converge to their limits with the same rate and order of convergence if

$$\lim_{k \to \infty} \frac{d(a_k, L_a)}{d(b_k, L_b)} = 1.$$

These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis. For the first two of these there are associated expressions in asymptotic O notation: the first is that $d(a_k, L_a) = o(d(b_k, L_b))$ in small o notation and the second is that $d(a_k, L_a) = \Theta(d(b_k, L_b))$ in Knuth notation. The third is also called asymptotic equivalence, expressed $d(a_k, L_a) \sim d(b_k, L_b)$.
Examples
For any two geometric progressions $(a r^k)$ and $(b s^k)$ with shared limit zero, the two sequences are asymptotically equivalent if and only if both $|a| = |b|$ and $|r| = |s|$. They converge with the same order if and only if $|r| = |s|$; $(a r^k)$ converges with a faster order than $(b s^k)$ if and only if $|r| < |s|$. The convergence of any geometric series to its limit has error terms that are equal to a geometric progression, so similar relationships hold among geometric series as well. Any sequence that is asymptotically equivalent to a convergent geometric sequence may either be said to "converge geometrically" or "converge exponentially" with respect to the absolute difference from its limit, or it may be said to "converge linearly" relative to a logarithm of the absolute difference such as the "number of decimals of precision." The latter is standard in numerical analysis.
For any two sequences of elements proportional to an inverse power of $k$, $(a k^{-n})$ and $(b k^{-m})$, with shared limit zero, the two sequences are asymptotically equivalent if and only if both $|a| = |b|$ and $n = m$. They converge with the same order if and only if $n = m$; $(a k^{-n})$ converges with a faster order than $(b k^{-m})$ if and only if $n > m$.
For any sequence $(x_k)$ with a limit of zero, its convergence can be compared to the convergence of the shifted sequence $(x_{k+1})$, rescalings of the shifted sequence by a constant $\mu$, and scaled $q$-powers of the sequence, $(\mu x_k^q)$. These comparisons are the basis for the Q-convergence classifications for iterative numerical methods as described above: when a sequence of iterate errors from a numerical method $(|x_k - L|)$ is such that the shifted error sequence $(|x_{k+1} - L|)$ is asymptotically equivalent to the exponentiated and rescaled sequence of iterate errors $(\mu |x_k - L|^q)$, it is said to converge with order $q$ and rate $\mu$.
Non-asymptotic rates of convergence
Non-asymptotic rates of convergence do not have the common, standard definitions that asymptotic rates of convergence have. Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior.
For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit. The non-asymptotic rate is then an inverse of that number of iterates or computer time. In practical applications, an iterative method that required fewer steps or less computer time than another to reach target accuracy will be said to have converged faster than the other, even if its asymptotic convergence is slower. These rates will generally be different for different starting points and different error thresholds for defining the neighborhoods. It is most common to discuss summaries of statistical distributions of these single point rates corresponding to distributions of possible starting points, such as the "average non-asymptotic rate," the "median non-asymptotic rate," or the "worst-case non-asymptotic rate" for some method applied to some problem with some fixed error threshold. These ensembles of starting points can be chosen according to parameters like initial distance from the eventual limit in order to define quantities like "average non-asymptotic rate of convergence from a given distance."
For discretized approximation methods, similar approaches can be used with a discretization scale parameter such as an inverse of a number of grid or mesh points or a Fourier series cutoff frequency playing the role of inverse iterate number, though it is not especially common. For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error. In practical applications, when one discretization method gives a desired accuracy with a larger discretization scale parameter than another it will often be said to converge faster than the other, even if its eventual asymptotic convergence is slower.
References
Numerical analysis
Convergence | Rate of convergence | [
"Mathematics"
] | 4,227 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
999,890 | https://en.wikipedia.org/wiki/Meade%20Instruments | Meade Instruments (also shortened to Meade) was an American multinational company headquartered in Watsonville, California, that manufactured, imported, and distributed telescopes, binoculars, spotting scopes, microscopes, CCD cameras, and telescope accessories for the consumer market. It was, at one point, the world's largest manufacturer of telescopes.
Besides selling under its "Meade" brand name, the company sold solar telescopes under the brand "Coronado".
In July 2024, Sky and Telescope magazine reported that Optronic Technologies, the owner of Meade Instruments and Orion Telescopes, had closed its facilities in California and laid off all of its employees. As of July 15, there had been no official announcement from the company, and S&T said they were trying to get more information from their sources. In December 2024, the Sky & Telescope website announced that the assets of Meade, Coronado, and Orion Telescopes and Binoculars would be listed for auction and that these companies were ceasing operations.
Origins and history
Founded in 1972 by John Diebel, Meade started as a mail order seller of small refracting telescopes and telescope accessories manufactured by the Japan-based Towa Optical Manufacturing Company. Meade started manufacturing its own line of products in 1976, introducing 6" and 8" reflecting telescope models in 1977. In 1980, the company ventured into the Schmidt-Cassegrain market that up to that time had been dominated by Celestron Corporation. Meade has a long history of litigation with other companies over infringement of their patents, particularly with its once bitter rival Celestron. In August 2008, Meade modified their line of Schmidt-Cassegrain telescopes with changes to the optical surfaces, in a design they call "Advanced Coma-Free optics" (ACF Optics).
Past production sites include 16542 Millikan Avenue in Irvine, which was used in the 1990s. Meade production was subsequently moved to a newly built plant at 6001 Oak Canyon, also located in Irvine. The Oak Canyon plant was in use for about a decade until 2009, after which production was moved to an expanded plant in Tijuana, Mexico.
In October 2013, Meade Instruments merged with Ningbo Sunny Electronic, a Chinese manufacturer, and Joseph Lupica became CEO of Meade. In February 2015, Victor Aniceto succeeded Lupica as president.
On November 26, 2019, in the United States District Court for the Northern District of California a federal jury found that Ningbo and Meade suppressed competition and fixed prices for consumer telescopes in the United States in violation of federal antitrust laws (case# 16-06370). Optronic Technologies, Inc. was awarded $16.8 million in damages.
On December 4, 2019, Meade Instruments Corp. filed bankruptcy in the United States District Court for the Central District of California as case number 19-14714.
Products
Products produced by Meade include:
Catadioptric Cassegrains
ACF telescopes
ACF (Advanced Coma-Free) is an altered version of Meade's previous Schmidt-Cassegrain telescopes that replaces the traditional spherical Schmidt-Cassegrain secondary mirror with a hyperbolic secondary mirror. In the new design the full-aperture corrector is slightly altered in shape and combined with a spherical primary mirror. Meade's literature describes the ACF as a variation on the Ritchey-Chrétien telescope, although it does not use the two-hyperbolic-mirror combination of that design (being more of an aplanatic design).
Models
LX90-ACF, 8" to 12"
LX200-ACF, a series of LX200 with ACF Optics 8" to 16"
LX400-ACF, 16 to 20" f/8, w/ robotic equatorial mount
LX800
Maksutov telescopes
Meade produces a line of Maksutov telescopes under their ETX series (Everybody's Telescope), first produced as a 90 mm (3-1/2") Maksutov-Cassegrain telescope in 1996. They range in size from 90 mm to 125 mm.
Newtonian telescopes
Schmidt-Newtonian telescopes (6 to 10 inches).
LightBridge Dobsonian telescopes (currently 8, 10, 12 and a 16-inch model)
Meade Model 4504: equatorial reflecting telescope
GoTo telescopes
Many Meade telescope lines are classified by the self-aiming computerized alt-azimuth and equatorial mounts they come on, a technology commonly called a "GoTo" mount.
Models
LXD75, including Newtonian, Schmidt-Newtonian, Advanced Coma-Free, and achromatic refractor telescopes
ETX-LS, 150mm (6 in) and 200mm (8 in) F/10 ACF telescope on a single-fork arm with integral GPS and 'Eclips' camera for self-alignment.
DS-2000 Series, 80mm (3.1") refractor, 114mm (4.5") and 130mm (5.1") reflector on altazimuth Goto mounts
LX80, LX90
ETX-70, ETX-80
LX85, including Newtonian, Schmidt-Newtonian, Advanced Coma-Free, and achromatic refractor telescopes
Solar telescopes
In 2004, Meade acquired Coronado Filters from founder and designer David Lunt; Coronado produces an extensive range of specialty telescopes that allow views of the Sun in hydrogen-alpha and, formerly, at calcium K line wavelengths. The Meade Coronado telescopes are designated "Solarmax 40" or higher, depending on the model.
Other products
Achromatic refractors (5 and 6-inch)
Meade also sells under the "Meade" name imported low- to moderate-cost reflectors and refractors intended for the beginner retail market.
Telescope accessories
Accessories produced by Meade include the Series 5000 eyepieces, which are comparable in construction to Chester, New York-based Tele Vue's "Nagler" (82-degree field of view), "Panoptic" (68-degree field of view), and "Radian" (60-degree field of view) eyepieces. Meade sells Deep Sky and Lunar digital imagers for telescopes. They also market the mySKY and mySKY Plus, multimedia GPS devices that guide users around the sky, similar to the competing Celestron SkyScout.
Litigation
In November 2006, plaintiffs including Star Instruments and RC Optical Systems, manufacturers of traditional Ritchey-Chrétien optics and telescopes, filed a civil lawsuit against Meade, several dealerships, and other individuals in federal court (New York Southern District). The complaint concerned Meade's advertising of its RCX400 and LX200R models as "Ritchey-Chrétien." The plaintiffs claimed these models did not use true Ritchey-Chrétien optics and that Meade and its retailers were therefore committing false advertising and infringing on the plaintiffs' market. In January 2008, Meade settled, with a "small" amount paid to the plaintiffs and the requirement to rename the affected products, not using any initials that might suggest Ritchey-Chrétien.
On September 27, 2006, Finkelstein, Thompson & Loughran filed a class action lawsuit against Meade. The complaint alleged that, throughout the class period, defendants misrepresented and omitted material facts concerning Meade's backdating of stock option grants to two of its officers. A settlement of $2,950,000 was reached in December 2007.
Financial problems
Meade has had financial problems in the past and survived with the help of its founder, John Diebel, purchasing back the company. However, after Diebel sold the company again, Meade ran into another round of financial woes. Steve Muellner, CEO from May 2006, announced a series of setbacks for the company during his tenure. Meade's Irvine, California manufacturing plant was closed, with manufacturing moved to a new plant in Mexico, and a majority of the administrative positions were cut. Meade's customer service line was also affected by the move to Mexico, including shorter operating hours and the elimination of the callback option. Meade also looked at other options for the uncertain future of the company. Whatever the future held for the company, Muellner and some of the board members signed an agreement to cover themselves financially.
In April 2008, Meade sold two of its three non-telescope product brands (Weaver/Redfield) to two companies for a total of $8 million. As compensation for the divestiture of these two brands, outgoing VP of Sales Robert Davis received a $100,000 bonus from the company. On June 13, 2008, Meade sold their last non-telescope brand, Simmons, to Bushnell for $7.25 million. Also in 2008, Meade's stock value fell below one dollar, raising the possibility of Meade being delisted from the stock exchange. On October 3, 2008, Meade eliminated Donald Finkle's Senior Vice President position with the company, providing him with one year of salary as severance and certain other benefits.
Meade announced on January 29, 2009 that it had sold Meade Europe, its European subsidiary, for 12.4 million dollars, relieving much of Meade's debt but greatly reducing the company's assets. Further changes and uncertainty about the company's stability were announced on February 5, 2009, with the resignations of Steve Muellner, chairman of the board Harry Casari, and fellow board member James Chadwick. Former CEO Steven Murdock was reinstated as Meade CEO. On March 5, 2009, the company announced the resignation of CFO Paul Ross and the assumption of the position by John Elwood; with his resignation, Ross received a severance in the lump sum of $260,000. During the summer of 2009, Meade announced a 20:1 reverse stock split in hopes of raising the value of its stock.
By July 8, 2013, Meade Instruments was weighing whether to sell the company to a Chinese company or a San Jose venture capital firm, plow ahead alone, or possibly seek bankruptcy protection. In September 2013, Sunny Optics Inc, a unit of the Chinese firm Ningbo Sunny Electronic Co Ltd, completed the acquisition of the entire share capital of Meade.
In November 2019, Orion Telescopes & Binoculars won a lawsuit against Ningbo Sunny Electronic Co Ltd for price fixing and anti-competitive practices, costing Ningbo Sunny an estimated 20 million dollars in settlement. Meade, under Ningbo Sunny ownership, declared bankruptcy shortly after. On June 1, 2021, Orion Telescopes & Binoculars announced its acquisition of Meade Instruments, following the approval of the United States Bankruptcy Court for the Central District of California.
References
External links
Meade Instruments Then and Now
fundinguniverse.com, Meade Instruments Corporation
The ACF design
Instrument-making corporations
Telescope manufacturers
Retail companies based in California
Companies based in Irvine, California
Manufacturing companies based in Greater Los Angeles
Manufacturing companies established in 1972
Retail companies established in 1972
American companies established in 1972
1972 establishments in California
Companies formerly listed on the Nasdaq
American subsidiaries of foreign companies
1997 initial public offerings
1972 establishments in the United States
2013 mergers and acquisitions | Meade Instruments | [
"Astronomy"
] | 2,303 | [
"Telescope manufacturers",
"People associated with astronomy"
] |
999,954 | https://en.wikipedia.org/wiki/.su | .su is an Internet country code top-level domain (ccTLD) that was designated for the Union of Soviet Socialist Republics (USSR) on 19 September 1990. Even though the Soviet Union itself was dissolved 15 months later, the .su top-level domain remains in use to the present day. It is administered by the Russian Institute for Public Networks (RIPN, or RosNIIROS in Russian transcription).
The .su ccTLD is known for usage by cybercriminals, hackers and scammers.
History
After 1989 a set of new internet domains was created in Europe, including .pl (Poland), .cs (Czechoslovakia), .yu (Yugoslavia) and .dd (East Germany). Among them, there was also a domain for the USSR – .su. Initially, before two-letter ccTLDs became standard, the Soviet Union was to receive a .ussr domain. The .su domain was proposed by the 19-year-old Finnish student Petri Ojala.
On 26 December 1991 the country was dissolved and its constituent republics gained independence, which should have caused the domain to begin a phase-out process, as happened with those of East Germany, Czechoslovakia, and Yugoslavia. Until 1994 there was no assigned top-level domain name for Russia. For this reason the country continued to use the Soviet domain. In 1994, the .ru domain was created, which was supposed to eventually replace the .su domain (domains for the republics other than Russia were created at different times in the mid-nineties). The domain was supposed to be withdrawn by ICANN, but it was kept at the request of the Russian government and Internet users.
In 2001, the managers of the domain stated that they would commence accepting new .su registrations, but it is unclear whether this action was compatible with ICANN policies. ICANN has expressed intentions to terminate the .su domain and IANA states that the domain is being phased out, but lobbyists stated in September 2007 that they had started negotiations with ICANN on retaining the domain. In the first quarter of 2008, .su registrations increased by 45%.
Usage
The domain was intended to be used by Soviet institutions and companies operating in the USSR. The dissolution of the Soviet Union meant that the Soviet TLD was superseded by the new country TLDs of the former Soviet republics. Despite this, .su is still in use. Most .su domains are registered in Russia and the United States. According to RU-CENTER data from May 2010, there were over 93,500 registered domains with the .su TLD (compared with over 2.8 million .ru domains). Some organizations with roots in the former Soviet Union also still use this TLD. The pro-Russian Ukrainian separatist group Donetsk People's Republic has also registered its domain with the TLD. The .su domain also hosts white supremacist websites that have been deplatformed elsewhere, formerly including The Daily Stormer.
The domain has been reported to host many cybercrime activities, owing to its relaxed and outdated terms of use and its low profile (about 2% of the usage of the primary .ru zone). In response, rules for the timely suspension of malicious domains have been in place since 2013.
See also
References
External links
Statistics of registrations under the .su domain
RIPN press release regarding future of .su domain
Computing in the Soviet Union
Country code top-level domains
Internet in Russia
1990 establishments in the Soviet Union
Communications in the Soviet Union
Internet properties established in 1990
| .su | [
"Technology"
] | 736 | [
"Computing in the Soviet Union",
"History of computing"
] |
1,000,060 | https://en.wikipedia.org/wiki/PSR%20B1919%2B21 | PSR B1919+21 is a pulsar with a period of 1.3373 seconds and a pulse width of 0.04 seconds. Discovered by Jocelyn Bell Burnell on 28 November 1967, it is the first discovered radio pulsar. The power and regularity of the signals were briefly thought to resemble an extraterrestrial beacon, leading the source to be nicknamed LGM, later LGM-1 (for "little green men").
The original designation of this pulsar was CP 1919, which stands for Cambridge Pulsar at RA 19h19m. It is also known as PSR J1921+2153 and is located in the constellation of Vulpecula.
Discovery
In 1967, a radio signal was detected using the Interplanetary Scintillation Array of the Mullard Radio Astronomy Observatory in Cambridge, UK, by Jocelyn Bell Burnell. The signal had a 1.3373-second period and a 0.04-second pulse width. It originated at celestial coordinates 19h19m right ascension, +21° declination, and was detected through individual inspection of miles of graphical data traces. Due to its almost perfect regularity, it was at first assumed to be spurious noise, but this hypothesis was promptly discarded. The discoverers jokingly named it little green men 1 (LGM-1), considering that it may have originated from an extraterrestrial civilization, but Bell Burnell soon ruled out extraterrestrial life as a source after discovering a similar signal from another part of the sky.
The original signal turned out to be radio emission from the pulsar CP 1919, and it was the first source recognized as such. Bell Burnell noted that other scientists could have discovered pulsars before her, but their observations were either ignored or disregarded. Researchers Thomas Gold and Fred Hoyle identified this astronomical object as a rapidly rotating neutron star immediately upon the discovery's announcement.
Before the nature of the signal was determined, the researchers, Bell Burnell and her PhD supervisor Antony Hewish, considered the possibility of extraterrestrial life:
We did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission. It is an interesting problem – if one thinks one may have detected life elsewhere in the universe[,] how does one announce the results responsibly? Who does one tell first?
Nobel Prize controversy
When Antony Hewish and Martin Ryle received the Nobel Prize in physics in 1974 for their work in radio astronomy and pulsars, Fred Hoyle, Hewish's fellow astronomer, argued that Jocelyn Bell Burnell should have been a co-recipient of the prize.
In 2018, Bell Burnell won the $3-million Breakthrough Prize in Fundamental Physics for her work.
Cultural references
The English post-punk band Joy Division used an image of CP 1919's radio pulses on the cover of their 1979 debut album, Unknown Pleasures.
German-born British composer Max Richter wrote a piece inspired by the discovery of CP1919 titled Journey (CP1919).
The English indie rock band Arctic Monkeys used a sound based on the pulses in their music video for "Four Out of Five."
See also
Variable star
References
Further reading
External links
Rotation-powered pulsars
Vulpecula
Astronomical objects discovered in 1967
LGM-1 | PSR B1919+21 | [
"Astronomy"
] | 695 | [
"Vulpecula",
"Constellations"
] |
1,000,067 | https://en.wikipedia.org/wiki/Splenocyte | Splenocytes are white blood cells that reside in the spleen and are involved in functions of the spleen, such as filtering blood and the immune response.
Splenocytes consist of a variety of cell populations such as T and B lymphocytes, dendritic cells and macrophages, which have different immune functions.
Overview
Splenocytes are spleen cells and consist of leukocytes like B and T cells, dendritic cells, and macrophages. The spleen is split into red and white pulp regions with the marginal zone separating the two areas. The red pulp is involved with filtering blood and recycling iron, while the white pulp is involved in the immune response.
The red pulp contains macrophages that phagocytose old or damaged red blood cells.
The white pulp contains separate compartments for B and T cells called the B cell zone (BCZ) and the T cell zone (TCZ). B cells make antibodies to fight off bacterial, viral, and fungal infections, and T cells are activated in response to antigens.
The marginal zone (MZ) separates the red and white pulp regions and contains macrophages, B cells, and dendritic cells. MZ macrophages remove some types of blood-borne bacteria and viruses. MZ B and dendritic cells are involved in antigen processing and presentation to lymphocytes in the white pulp.
References
Spleen (anatomy)
Mononuclear phagocytes
Leukocytes
Cell biology | Splenocyte | [
"Biology"
] | 305 | [
"Cell biology"
] |
1,000,175 | https://en.wikipedia.org/wiki/Computer%20security%20policy | A computer security policy defines the goals and elements of an organization's computer systems. The definition can be highly formal or informal. Security policies are enforced by organizational policies or security mechanisms. A technical implementation defines whether a computer system is secure or insecure. These formal policy models can be categorized into the core security principles of Confidentiality, Integrity, and Availability. For example, the Bell-La Padula model is a confidentiality policy model, whereas the Biba model is an integrity policy model.
Formal description
If a system is regarded as a finite-state automaton with a set of transitions (operations) that change the system's state, then a security policy can be seen as a statement that partitions these states into authorized and unauthorized ones.
Given this simple definition, one can define a secure system as one that starts in an authorized state and will never enter an unauthorized state.
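For illustration, here is a minimal sketch of this definition in Python (the state names, transition map, and function name are invented for illustration and are not from any particular security framework). It treats the system as a finite-state automaton and checks, by breadth-first search, that no unauthorized state is reachable from the initial state:

```python
# Minimal sketch: a security policy partitions an automaton's states into
# authorized and unauthorized sets; the system is "secure" if it starts in
# an authorized state and can never reach an unauthorized one.
from collections import deque

def is_secure(initial, transitions, authorized):
    """transitions maps a state to an iterable of successor states."""
    if initial not in authorized:
        return False
    seen, queue = {initial}, deque([initial])
    while queue:
        for nxt in transitions.get(queue.popleft(), ()):
            if nxt not in authorized:
                return False  # an unauthorized state is reachable
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Toy system: the state "leak" is unauthorized and reachable, so not secure.
transitions = {"start": ["read"], "read": ["leak"], "leak": []}
print(is_secure("start", transitions, authorized={"start", "read"}))  # False
```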
Formal policy models
Confidentiality policy model
Bell-La Padula model
Integrity policies model
Biba model
Clark-Wilson model
Hybrid policy model
Chinese Wall (also known as the Brewer and Nash model)
Policy languages
To represent a concrete policy, especially for its automated enforcement, a language representation is needed. Many application-specific languages exist that are closely coupled with the security mechanisms enforcing the policy in that application.
In contrast to these, abstract policy languages, e.g. the Domain Type Enforcement Language, are independent of the concrete mechanism.
See also
Anti-virus
Information Assurance - CIA Triad
Firewall (computing)
Protection mechanisms
References
Clark, D.D. and Wilson, D.R., 1987, April. A comparison of commercial and military computer security policies. In 1987 IEEE Symposium on Security and Privacy (pp. 184–194). IEEE.
Computer security procedures
Computer security models | Computer security policy | [
"Engineering"
] | 360 | [
"Cybersecurity engineering",
"Computer security models",
"Computer security procedures"
] |
589,696 | https://en.wikipedia.org/wiki/ViewSonic | ViewSonic Corporation is an American privately held multinational electronics company with headquarters in Brea, California, United States.
The company was founded in 1987 as Keypoint Technology Corporation by James Chu and was renamed to its present name in 1993, after a brand name of monitors launched in 1990. Today, ViewSonic specializes in visual display hardware—including liquid-crystal displays, projectors, and interactive whiteboards—as well as digital whiteboarding software. The company trades in three key markets: education, enterprise, and entertainment.
Company history
The company was initially founded as Keypoint Technology Corporation in 1987 by James Chu. In 1990 it launched the ViewSonic line of color computer monitors, and shortly afterward the company renamed itself after its monitor brand.
The ViewSonic logo features Gouldian finches, colorful birds native to Australia.
In the mid-1990s, ViewSonic rose to become one of the top-rated makers of computer CRT monitors, alongside Sony, NEC, MAG InnoVision, and Panasonic. ViewSonic soon displaced these competitors to emerge as the largest display manufacturer from America or Japan at the turn of the millennium.
In 2000, ViewSonic acquired the Nokia Display Products' branded business.
In 2005, ViewSonic and Tatung won a British patent lawsuit filed against them by LG Philips in a dispute over which company created technology for rear mounting of LCDs in a mobile PC (U.K. Patent GB2346464B, titled "portable computer").
On July 2, 2007, the company filed with the Securities and Exchange Commission to raise up to $143.8M in an IPO on NASDAQ.
On March 5, 2008, the company filed a withdrawal request with the Securities and Exchange Commission, saying that "terms currently obtainable in the public marketplace are not sufficiently attractive to the Registrant to warrant proceeding with the initial public offering".
In 2017, ViewSonic entered the interactive whiteboard market with its ViewBoard flat panels and myViewBoard software. ViewSonic was named a best-selling collaboration display brand in 2018. By 2019, more than 5,500 elementary and junior high schools in the United States had installed ViewBoards, and ViewSonic ranked third in global interactive display market share, excluding China. ViewSonic became a Google for Education partner in 2019 and a Microsoft Education partner in 2020.
Operations
ViewSonic has its headquarters in Brea, California, United States, and a research & development center in New Taipei City, Taiwan. ViewSonic sells globally, with offices in Canada, Germany, the United Kingdom, France, Russia, Italy, Ukraine, Turkey, Spain, Sweden, Greece, Switzerland, Australia, Taiwan, Malaysia, India, South Korea, United Arab Emirates, Singapore, Japan, and the United States.
Product history
In 1998, ViewSonic announced that two of its Professional Series monitors achieved TCO '99 certification.
In 2000, ViewSonic partnered with AT&T Corporation to offer Internet appliances integrated with the AT&T WorldNet Service, initially targeting the corporate market. The Internet appliances ranged from standalone i-boxes and integrated LCD and CRT devices to web phones and wireless web pads. The units were deemed capable of operating on nearly any operating system, including Windows CE, Linux, QNX and VxWorks.
In 2002, ViewSonic announced a 3840x2400 WQUXGA, 22.2-inch monitor, VP2290.
At the 2007 Consumer Electronics Show, ViewSonic introduced display products, namely a projector, monitors and an HDTV set, capable of being connected directly to a video iPod.
On May 31, 2011, Pocket-Lint reported that the ViewPad 7x had debuted at the Computex computer show in Taipei, Taiwan, as a follow-up to, rather than a replacement for, ViewSonic's existing ViewPad 7 tablet, which runs Android 2.2 (Froyo).
See also
List of computer system manufacturers
References
External links
ViewSonic Corporation at Yahoo! Finance
American companies established in 1987
Companies based in Brea, California
Computer companies established in 1987
Computer companies of Taiwan
Computer companies of the United States
Computer hardware companies
Computer monitors
Defunct computer systems companies
Display technology companies
Manufacturing companies based in Greater Los Angeles
Privately held companies based in California
Technology companies based in Greater Los Angeles | ViewSonic | [
"Technology"
] | 949 | [
"Computer hardware companies",
"Computers"
] |
589,726 | https://en.wikipedia.org/wiki/Strong%20topology | In mathematics, a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on context, and it may refer to:
the final topology on the disjoint union
the topology arising from a norm
the strong operator topology
the strong topology (polar topology), which subsumes all topologies above.
A topology τ is stronger than a topology σ (is a finer topology) if τ contains all the open sets of σ.
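As a simple worked illustration (a standard textbook example, not drawn from this article), take a two-point set: the indiscrete topology is contained in the discrete topology, so the latter is stronger (finer):

```latex
% On X = {a, b}: sigma (indiscrete) versus tau (discrete).
\[
\sigma = \{\varnothing,\, X\}, \qquad
\tau   = \{\varnothing,\, \{a\},\, \{b\},\, X\}, \qquad
\sigma \subseteq \tau,
\]
% so tau is finer than sigma: every sigma-open set is also tau-open.
```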
In algebraic geometry, it usually means the topology of an algebraic variety as a complex manifold or as a subspace of complex projective space, as opposed to the Zariski topology (which is rarely even a Hausdorff space).
See also
Weak topology
Topology | Strong topology | [
"Physics",
"Mathematics"
] | 152 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
589,755 | https://en.wikipedia.org/wiki/Retrocomputing | Retrocomputing is the current use of older computer hardware and software. Retrocomputing is usually classed as a hobby and recreation rather than a practical application of technology; enthusiasts often collect rare and valuable hardware and software for sentimental reasons.
Occasionally, however, an obsolete computer system has to be "resurrected" to run software specific to that system, to access data stored on obsolete media, or to use a peripheral that requires that system.
Hardware retrocomputing
Historic systems
Retrocomputing is part of the history of computer hardware. It can be seen as the analogue of experimental archaeology in computing. Some notable examples include the reconstruction of Babbage's Difference engine (more than a century after its design) and the implementation of Plankalkül in 2000 (more than half a century since its inception).
"Homebrew" computers
Some retrocomputing enthusiasts also consider "homebrewing" (designing and building retro- and retro-styled computers or kits) to be an important aspect of the hobby, giving new enthusiasts an opportunity to experience more fully what the early years of hobby computing were like. There are several different approaches to this end. Some are exact replicas of older systems, some are newer designs based on the principles of retrocomputing, while others combine the two, with old and new features in the same package. Examples include:
A device offered by IMSAI: a modern, updated, yet backward-compatible replica of the original IMSAI 8080, one of the most popular early personal systems;
Several Apple 1 replicas and kits have been sold in limited quantities in recent years, by different builders, such as the "Replica 1", from Briel Computers;
A currently ongoing project that uses old technology in a new design is the Z80-based N8VEM;
The Arduino Retro Computer kit is an open-source, open-hardware kit that you can build yourself, and it includes a BASIC interpreter. There is also a version of the Arduino Retro Computer that can be hooked up to a TV;
There is at least one remake of the Commodore 64 using an FPGA configured to emulate the 6502;
MSX 2/2+ compatible do-it-yourself kit GR8BIT, designed for the hands-on education in electronics, deliberately employing old and new concepts and devices (high-capacity SRAMs, micro-controllers and FPGA);
The MEGA65 is a Commodore 65 compatible computer;
The Commander X16 is an ongoing project by David Murray that hopes to build a new 8-bit platform inspired by the Commodore 64, using off the shelf modern parts.
The C256 Foenix and its different versions are a new retro computer family based on the WDC65C816. FPGAs are used to simulate CBM custom chips, and its graphics and sound capabilities give it power comparable to an Amiga.
Grant Searle collection of homebrew 8-bit projects.
Software retrocomputing
As old computer hardware becomes harder to maintain, there has been increasing interest in computer simulation. This is especially the case with old mainframe computers, which have largely been scrapped and have space, power, and environmental requirements unaffordable for the average user. The memory size and speed of current systems enable simulations of many old systems that run faster than the original hardware.
One popular simulator, SIMH, offers simulations of over 50 historic systems, from the 1950s through the present. The Hercules emulator simulates the IBM System/360 family from System/360 to 64-bit System/z. A simulator is also available for the Honeywell Multics system.
Much software for older systems was never copyrighted or has since been released as open source, so there is a wide variety of software available to run on these simulators.
Some emulations are used by businesses, as running production software in a simulator is usually faster, cheaper, and more reliable than running it on original hardware.
In popular culture
In an interview with Conan O'Brien in May 2014, George R. R. Martin revealed that he writes his books using WordStar 4.0, an MS-DOS application dating back to 1987.
US-based streaming video provider Netflix released an interactive film in its Black Mirror series, called Bandersnatch. The protagonist is a teenage programmer working under contract to deliver a video-game adaptation of a fantasy novel for an 8-bit computer in 1984. The branching storylines revolve around the emotions and mental health issues resulting from a reality-perception mismatch between a new generation of computer-savvy teenagers and twenty-somethings and their caregivers.
Education
Due to their low complexity together with other technical advantages, 8-bit computers are frequently rediscovered for education, especially for introductory programming classes in elementary schools. 8-bit computers turn on directly into a programming environment; there are no distractions, and no need for other features or additional connectivity. BASIC is a simple-to-learn programming language that has access to the entire system without having to load libraries for sound, graphics, math, etc. The focus of the language is on immediacy; in particular, one command does one thing right away (e.g. turns the screen green).
Reception
Retrocomputing (with retrogaming as one aspect) has been described in one paper as a preservation activity and as an aspect of remix culture.
See also
History of computing hardware
Vintage Computer Festival
Computer History Museum
Computer Conservation Society
Living Computers: Museum + Labs
References
External links
Retro Computer Museum, a computer museum in Leicestershire, UK with regular "come and play" open days
Retrocomputing Museum for re-implementations of old programming languages
RETRO German paper mag about digital culture
The Centre for Computing History The Centre for Computing History UK Computer Museum
Living Computer Museum Request a Login from the LCM to interact with vintage computers over the internet.
bitsavers Software and PDF Document archive about older computers
Vintage Computing Resources Active resources for retrocomputing hobbyists
Learning to code in a “retro” programming environment
Beginning Programming Using Retro Computing
LOAD ZX Spectrum Museum, a retro computing museum in Portugal mostly focused on the Sinclair line of computers
History of computing
Nostalgia | Retrocomputing | [
"Technology"
] | 1,269 | [
"Computing and society",
"Computers",
"Computing culture",
"History of computing"
] |
589,782 | https://en.wikipedia.org/wiki/Swingometer | The swingometer is a graphics device that shows the effects of the swing from one party to another on British election results programmes. It is used to estimate the number of seats that will be won by different parties, given a particular national swing (in percentage points) in the vote towards or away from a given party, and assuming that that percentage change in the vote will apply in each constituency. The device was invented by Peter Milne, and later refined by David Butler and Robert McKenzie.
The first outing on British television was during a regional output from the BBC studios in Bristol during the 1955 general election (the first UK general election to be televised) and was used to show the swing in the two constituencies of Southampton Itchen and Southampton Test.
Following this use in 1955, the BBC adopted the swingometer on a national basis and it was unveiled in the national broadcasts for the 1959 general election. This swingometer merely showed the national swing in Britain but not the implications on that swing on the composition of parliament. These issues were not addressed until the 1964 general election.
The swingometer for that election showed not only the national swing, but also the implications of that national swing. So for instance, a 3.5% swing to Labour would see Labour become a majority government whilst any swing to the Conservatives would see Sir Alec Douglas-Home reelected as Prime Minister with a huge parliamentary majority. In the end the result was a Labour overall majority of 4, and so when the 1966 general election came around, a new element had to be added (namely the prospect of a hung parliament).
At the 1970 general election, the swingometer entered the age of colour television and showed the traditional party colours of red for Labour and blue for Conservative and had to be extended due to the success of the Conservative party at that election.
However, following the success of the Liberals in the by-elections held between the 1970 and February 1974 general elections, the swingometer was reduced in scale to just a small standby as the computers used by the BBC were deemed more reliable. As the Liberal Party reduced in importance the swingometer was brought back for the 1979 general election but for the 1983 and 1987 general elections computers were introduced to show changes in support in both map and graphic form.
The swingometer was brought back for the 1992 general election, covering the whole side of the election studio; it had to be manhandled by at least four technicians as well as Peter Snow, who had taken over the election graphics role following the death of Bob McKenzie. This swingometer proved too big for comfort, and from 1997 it began shrinking and was changed from a physical swingometer to a virtual-reality construct. For the 2001 general election the graphic was reduced further. Following experiments in the 2003 and 2004 United Kingdom local elections, the swingometer for the 2005 election was rendered on virtual struts, alongside dedicated swingometers for the Labour and Liberal Democrat parties.
An online version of the swingometer, featuring Labour and the Conservatives only, was introduced on the BBC News website at the 2001 general election. In 2005 the online swingometer was substantially redesigned to include versions featuring the Liberal Democrats, plus information on specific constituencies (including "VIP" seats) won or lost on different swings. For the 2010 general election, the swingometer was placed in a completely virtual environment and repositioned to appear on the back wall of the virtual studio, with named constituencies as opposed to virtual MPs. All three swingometers (Con/Lab, Con/Lib Dem, Lab/Lib Dem) were updated in this manner.
3D swingometer
The 3D swingometer is used to illustrate the shift in election results from the previous election in a three-party system. It is similar to the "2D" swingometer used in two-party system elections, but uses the extra dimension to allow swings to occur among three parties.
The sum of all the swings between parties must equal zero. In a three party system, the most complicated swings will involve a major swing either to or from one political party, with this swing being made up of two components from each of the other two parties. For instance there may be a 3-point swing towards the Purple party, consisting of a 2-point swing from the Orange party and a 1-point swing from the Brown party. Alternatively, there may be a 5-point swing from the Orange party, of which 3 points are towards the Brown party and 2 towards the Purple party.
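A rough sketch of this zero-sum bookkeeping in Python (party names and vote shares are invented for illustration, chosen to match the Purple/Orange/Brown example above):

```python
# Hypothetical sketch of the zero-sum bookkeeping in a three-party
# swingometer: per-party swings are the changes in vote share between
# two elections, and because shares sum to 100% each time, the swings
# must sum to zero.

def swings(previous, current):
    """previous, current: dicts of party -> vote share in percent."""
    s = {p: current[p] - previous[p] for p in previous}
    assert abs(sum(s.values())) < 1e-9, "shares must sum to 100% both times"
    return s

prev = {"Purple": 30.0, "Orange": 40.0, "Brown": 30.0}
curr = {"Purple": 33.0, "Orange": 38.0, "Brown": 29.0}
print(swings(prev, curr))
# {'Purple': 3.0, 'Orange': -2.0, 'Brown': -1.0} -- a 3-point swing to
# Purple made up of 2 points from Orange and 1 point from Brown.
```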
It is possible to split the swing space up into different regions indicating what the result would be if the swing indicated occurred linearly across the electorate. This gives rise to four regions: one each indicating overall control for each party, and a fourth region indicating no overall control.
Where there are swings directly from one party to a second party with the third party's vote remaining unchanged, the 3D swingometer clearly indicates that the third party also benefits slightly from the reduction in vote of the first party.
The three dimensions consist of the two used to create the swing space and the third for the pendulum to swing in.
Parodies
During the 2010 UK general election race, the Slapometer website allowed voters to slap along to the live TV debates between party leaders Gordon Brown, David Cameron and Nick Clegg. Rather than showing a swing in votes, it merely gave feedback on the number of slaps each politician was receiving each second.
References
External links
Sultan of swingometers
The oldest swingometer in town
Labour and Conservatives in the 2005 UK general election
Conservatives and LibDems in the 2005 UK general election
Labour and LibDems in the 2005 UK general election
Electoral College Swingometer in the 2008 US Presidential election
Slapometer website asked people to Vote with the back of your hand
BBC Archive - Swingometer
Television in the United Kingdom
Television terminology
Television technology
Elections
Politics of the United Kingdom
British inventions
1955 introductions
1959 introductions
1959 in British television | Swingometer | [
"Technology"
] | 1,186 | [
"Information and communications technology",
"Television technology"
] |
589,876 | https://en.wikipedia.org/wiki/Ymir%20%28moon%29 | Ymir , or Saturn XIX, is the second-largest retrograde irregular moon of Saturn. It was discovered by Brett J. Gladman, et al. in 2000, and given the temporary designation S/2000 S 1. It was named in August 2003 after Ymir, who in Norse mythology is the ancestor of all the Jotuns or frost giants.
It takes 3.6 Earth years to complete an orbit around Saturn. Of the moons that take more than 3 Earth years to orbit Saturn, Ymir is the largest; it is also the second-largest member of the Norse group, after Phoebe.
Spectral measurements from Cassini–Huygens show that Ymir is reddish in color, unlike Phoebe's gray, suggesting a separate origin for this moon. It shows a light curve similar to Siarnaq's and has a triangular shape, rotating in a retrograde direction about once every 11.9 hours.
Notes
References
External links
MPEC 2000-Y15: S/2000 S 1, S/2000 S 2, S/2000 S 7, S/2000 S 8, S/2000 S 9 (2000 Dec. 19 ephemeris)
Ephemeris IAU-NSES
Saturn's Known Satellites (by Scott S. Sheppard)
Astronomical objects discovered in 2000
Discoveries by Brett J. Gladman
Irregular satellites
Moons of Saturn
Norse group
Ymir
Moons with a retrograde orbit | Ymir (moon) | [
"Astronomy"
] | 293 | [
"Astronomy stubs",
"Planetary science stubs"
] |
589,883 | https://en.wikipedia.org/wiki/Trinculo%20%28moon%29 | Trinculo is a retrograde irregular satellite of Uranus. It was discovered by a group of astronomers led by Holman, et al. on 13 August 2001, and given the temporary designation S/2001 U 1.
Confirmed as Uranus XXI, it was named after the drunken jester Trinculo in William Shakespeare's play The Tempest. Trinculo is the second-smallest of Uranus's 28 moons, after Ferdinand, and is approximately 18 km wide.
See also
Uranus' natural satellites
References
External links
David C. Jewitt pages
Uranus' Known Satellites (by Scott S. Sheppard)
MPC: Natural Satellites Ephemeris Service
Moons of Uranus
Irregular satellites
20010813
Moons with a retrograde orbit
The Tempest | Trinculo (moon) | [
"Astronomy"
] | 153 | [
"Astronomy stubs",
"Planetary science stubs"
] |
589,968 | https://en.wikipedia.org/wiki/Gel%20electrophoresis%20of%20proteins | Protein electrophoresis is a method for analysing the proteins in a fluid or an extract. The electrophoresis may be performed with a small volume of sample in a number of alternative ways with or without a supporting medium, namely agarose or polyacrylamide. Variants of gel electrophoresis include SDS-PAGE, free-flow electrophoresis, electrofocusing, isotachophoresis, affinity electrophoresis, immunoelectrophoresis, counterelectrophoresis, and capillary electrophoresis. Each variant has many subtypes with individual advantages and limitations. Gel electrophoresis is often performed in combination with electroblotting or immunoblotting to give additional information about a specific protein.
Denaturing gel methods
SDS-PAGE
SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis, describes a collection of related techniques to separate proteins according to their electrophoretic mobility (a function of the molecular weight of a polypeptide chain) while in the denatured (unfolded) state. In most proteins, the binding of SDS to the polypeptide chain imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis.
SDS is a strong detergent used to denature native proteins into unfolded, individual polypeptides. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. In this process, the intrinsic charges of the polypeptides become negligible compared to the negative charges contributed by SDS. The polypeptides thus become rod-like structures possessing a uniform charge density, that is, the same net negative charge per unit length. The electrophoretic mobilities of these proteins are then a linear function of the logarithms of their molecular weights.
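This log-linear relationship is what makes molecular-weight estimation from a gel possible: one fits the log of molecular weight against migration distance for marker proteins of known size, then reads unknowns off the calibration line. A rough sketch in Python (the marker sizes, relative migration values, and the unknown band are invented for illustration):

```python
# Hypothetical sketch of the log-linear sizing used with SDS-PAGE gels:
# log10(molecular weight) is fitted against relative migration distance
# (Rf) for marker proteins, then an unknown band's weight is read off
# the calibration line.
import math

markers = [(0.20, 116_000), (0.40, 66_000), (0.60, 45_000), (0.80, 25_000)]

# Least-squares fit of log10(MW) = a * Rf + b over the marker bands.
n = len(markers)
sx = sum(rf for rf, _ in markers)
sy = sum(math.log10(mw) for _, mw in markers)
sxx = sum(rf * rf for rf, _ in markers)
sxy = sum(rf * math.log10(mw) for rf, mw in markers)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def estimate_mw(rf):
    """Estimated molecular weight of a band with relative migration rf."""
    return 10 ** (a * rf + b)

print(round(estimate_mw(0.50)))  # unknown band halfway down the gel, ~54 kDa
```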
Native gel methods
Native gels, also known as non-denaturing gels, analyze proteins that are still in their folded state. Thus, the electrophoretic mobility depends not only on the charge-to-mass ratio, but also on the physical shape and size of the protein.
Blue native PAGE
BN-PAGE is a native PAGE technique, where the Coomassie brilliant blue dye provides the necessary charges to the protein complexes for the electrophoretic separation. The disadvantage of Coomassie is that in binding to proteins it can act like a detergent, causing complexes to dissociate. Another drawback is the potential quenching of chemiluminescence (e.g. in subsequent western blot detection or activity assays) or of the fluorescence of proteins with prosthetic groups (e.g. heme or chlorophyll) or labelled with fluorescent dyes.
Clear native PAGE
CN-PAGE (commonly referred to as Native PAGE) separates acidic water-soluble and membrane proteins in a polyacrylamide gradient gel. It uses no charged dye so the electrophoretic mobility of proteins in CN-PAGE (in contrast to the charge shift technique BN-PAGE) is related to the intrinsic charge of the proteins. The migration distance depends on the protein charge, its size and the pore size of the gel. In many cases this method has lower resolution than BN-PAGE, but CN-PAGE offers advantages whenever Coomassie dye would interfere with further analytical techniques, for example it has been described as a very efficient microscale separation technique for FRET analyses. Additionally, as CN-PAGE does not require the harsh conditions of BN-PAGE, it can retain the supramolecular assemblies of membrane protein complexes that would be dissociated in BN-PAGE.
Preparative native PAGE
The folded protein complexes of interest separate cleanly and predictably, without the risk of denaturation, due to the specific properties of the polyacrylamide gel, electrophoresis buffer solution, electrophoretic equipment and standardized parameters used. The separated proteins are continuously eluted into a physiological eluent and transported to a fraction collector. In each of four to five PAGE fractions, the different metal cofactors can be identified and absolutely quantified by high-resolution ICP-MS. The associated structures of the isolated metalloproteins in these fractions can then be determined by solution NMR spectroscopy.
Buffer systems
Most protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band. The formation of the ion gradient is achieved by choosing a pH value at which the ions of the buffer are only moderately charged compared to the SDS-coated proteins. These conditions provide an environment in which Kohlrausch's reactions determine the molar conductivity. As a result, SDS-coated proteins are concentrated severalfold into a thin zone on the order of 19 μm within a few minutes. At this stage all proteins migrate at the same migration speed by isotachophoresis. This occurs in a region of the gel that has larger pores so that the gel matrix does not retard the migration during the focusing or "stacking" event. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins. At the same time, the separating part of the gel also has a pH value in which the buffer ions on average carry a greater charge, causing them to "outrun" the SDS-covered proteins and eliminate the ion gradient and thereby the stacking effect.
A very widespread discontinuous buffer system is the tris-glycine or "Laemmli" system that stacks at a pH of 6.8 and resolves at a pH of ~8.3-9.0. A drawback of this system is that these pH values may promote disulfide bond formation between cysteine residues in the proteins because the pKa of cysteine ranges from 8-9 and because reducing agent present in the loading buffer doesn't co-migrate with the proteins. Recent advances in buffering technology alleviate this problem by resolving the proteins at a pH well below the pKa of cysteine (e.g., bis-tris, pH 6.5) and include reducing agents (e.g. sodium bisulfite) that move into the gel ahead of the proteins to maintain a reducing environment. An additional benefit of using buffers with lower pH values is that the acrylamide gel is more stable at lower pH values, so the gels can be stored for long periods of time before use.
SDS gradient gel electrophoresis of proteins
As voltage is applied, the anions (and negatively charged sample molecules) migrate toward the positive electrode (anode) in the lower chamber; the leading ion is Cl− (high mobility and high concentration), while glycinate is the trailing ion (low mobility and low concentration). SDS-protein particles do not migrate freely at the border between the Cl− of the gel buffer and the Gly− of the cathode buffer. Friedrich Kohlrausch found that Ohm's law also applies to dissolved electrolytes. Because of the voltage drop between the Cl− and glycine buffers, proteins are compressed (stacked) into micrometer-thin layers. The boundary moves through a pore gradient and the protein stack gradually disperses due to a frictional resistance increase of the gel matrix. Stacking and unstacking occur continuously in the gradient gel, for every protein at a different position. For complete protein unstacking the polyacrylamide-gel concentration must exceed 16% T. The two-gel system of "Laemmli" is a simple gradient gel. The pH discontinuity of the buffers is of no significance for the separation quality, and a "stacking gel" with a different pH is not needed.
Visualization
The most popular protein stain is Coomassie brilliant blue. It is an anionic dye, which non-specifically binds to proteins. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background.
When a more sensitive method than Coomassie staining is needed, silver staining is usually used. Silver staining is a sensitive procedure for detecting trace amounts of proteins in gels, and it can also visualize nucleic acids or polysaccharides.
Dye-free visualization methods are also available on the market. For example, Bio-Rad Laboratories markets "stain-free" gels for SDS-PAGE gel electrophoresis. Alternatively, reversible fluorescent dyes, such as AzureRed or Azure TotalStain Q from Azure Biosystems, can be used.
As in nucleic acid gel electrophoresis, a tracking dye is often used. Anionic dyes of a known electrophoretic mobility are usually included in the sample buffer. A very common tracking dye is bromophenol blue. This dye is coloured at alkaline and neutral pH and is a small, negatively charged molecule that moves towards the anode. Being a highly mobile molecule, it moves ahead of most proteins.
Medical applications
In medicine, protein electrophoresis is a method of analysing the proteins mainly in blood serum. Before the widespread use of gel electrophoresis, protein electrophoresis was performed as free-flow electrophoresis (on paper) or as immunoelectrophoresis.
Traditionally, two classes of blood proteins are considered: serum albumin and globulin. They are generally present in roughly equal proportion, but albumin as a molecule is much smaller and lightly negatively charged, leading to an accumulation of albumin on the electrophoretic gel. A small band before albumin represents transthyretin (also named prealbumin). Some forms of medication or body chemicals can cause their own band, but it usually is small. Abnormal bands (spikes) are seen in monoclonal gammopathy of undetermined significance and multiple myeloma, and are useful in the diagnosis of these conditions.
The globulins are classified by their banding pattern (with their main representatives):
The alpha (α) band consists of two parts, 1 and 2:
α1 - α1-antitrypsin, α1-acid glycoprotein.
α2 - haptoglobin, α2-macroglobulin, α2-antiplasmin, ceruloplasmin.
The beta (β) band - transferrin, LDL, complement
The gamma (γ) band - immunoglobulin (IgA, IgD, IgE, IgG and IgM). Paraproteins (in multiple myeloma) usually appear in this band.
See also
Affinity electrophoresis
Electroblotting
Electrofocusing
Fast parallel proteolysis (FASTpp)
Gel electrophoresis
Immunoelectrophoresis
Immunofixation
Native gel electrophoresis
Paraprotein
QPNC-PAGE
SDD-AGE
References
External links
Educational resource for protein electrophoresis
Gel electrophoresis of proteins
Electrophoresis
Molecular biology
Protein methods
Laboratory techniques
Blood tests | Gel electrophoresis of proteins | [
"Chemistry",
"Biology"
] | 2,469 | [
"Biochemistry methods",
"Blood tests",
"Instrumental analysis",
"Protein methods",
"Protein biochemistry",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Chemical pathology",
"Electrophoresis"
] |
589,997 | https://en.wikipedia.org/wiki/Silicon%20Fen | Silicon Fen or the Cambridge Cluster is a collective name given to high tech businesses focused on software, electronics, and biotechnology, including Arm and AstraZeneca, in and around the city of Cambridge in England.
The name Silicon Fen originated as an analogy with Silicon Valley in California because Cambridge lies at the southern tip of the Fens. The local growth in technology companies started with Sinclair Research and Acorn Computers.
Business growth
More than 1,000 high-technology companies established offices in the area during the five years preceding 1998. Some early successful businesses were Advanced RISC Machines and Cambridge Display Technology. In 2004, 24% of all UK venture capital, representing 8% of all venture capital in the European Union, was received by Silicon Fen companies, according to the Cambridge Cluster Report 2004 produced by Library House and Grant Thornton.
The so-called Cambridge phenomenon, which gave rise to start-up companies in a town that previously had only light industry in the electrical sector, is usually dated to the founding of the Cambridge Science Park in 1970 as an initiative of Trinity College at the University of Cambridge.
Cambridge is characterized by small companies in sectors such as computer-aided design, spread over an area defined by the CB postcode or the 01223 telephone area code, or, more generously, an area bounded by Ely, Newmarket, Saffron Walden, Royston, and Huntingdon.
In 2000, then Chancellor of the Exchequer Gordon Brown set up a research partnership between MIT and Cambridge University, the Cambridge–MIT Institute, in order to increase international collaboration between the two universities and strengthen the economic success of Silicon Fen.
In February 2006, Cambridge Judge Business School reported estimates that there were approximately 250 active start-ups directly linked to the university, valued at roughly US$6 billion. Several of these companies have grown into multinationals, including Arm, Autonomy Corporation, AVEVA, and Cambridge Silicon Radio.
In 2012, it was reported that strong employment growth in the Silicon Fen hub was being hampered by its heavy concentration on research and development, which limited its competitiveness in manufacturing and on costs.
Cambridge Ahead, the business and academic membership organisation dedicated to the long-term growth of the city and its region, reported that in 2015–16 the growth of Cambridge companies was approximately 7% over one-, three-, and five-year durations. Global turnover of Cambridge companies increased by 7.6% to £35.7bn, up from £33bn the previous year, and global employment grew by 7.6% to 210,292. The number of companies headquartered within 20 miles of Cambridge grew from 22,017 to 24,580.
Area characteristics
The Cambridge Network is an organization facilitating networking in the area.
Other possible factors include the high standard of living available in the county and good transport links, for example to London, with Cambridge Airport offering a full-service business jet centre. Many graduates from the university choose to stay on in the area, giving local companies a rich pool of talent to draw upon. The high-technology industry also faces little competition from other sectors, unlike, say, Oxfordshire, where many other competing industries exist. Cambridgeshire only recently became a high-technology centre, which meant that commercial rents were generally lower than in other parts of the UK, giving companies a head start on those situated in more expensive regions. However, the recent technology boom has changed the situation, and Cambridgeshire now has one of the highest costs of living in the UK outside London, which is home to an even bigger technology centre.
People and companies associated with Silicon Fen
People
Companies
See also
Silicon Valley
List of places with 'Silicon' names
List of city nicknames in the United Kingdom
Oxford-Cambridge Arc
References
The Cambridge Cluster Report 2007, Library House, 2007
The Cambridge Phenomenon: The Growth of High Technology Industry in a University Town, Segal Quince & Partners, 1985
The Cambridge Phenomenon Revisited – a synopsis of the new report by Segal Quince Wicksteed, Segal Quince & Partners, 2000
The Cambridge Cluster Report 2003, Library House, 2003
The Cambridge Cluster Report 2004, Library House in association with Grant Thornton, 2004
The Cambridge Cluster Report 2006, Library House, 2006
The Cambridge Technopole Report 2006: An overview of the UK's leading high tech cluster, St John's Innovation Centre, 2006
The Impact of the University of Cambridge on the UK Economy and Society: a high-level study commissioned by EEDA and the Cambridge Network in 2006
What is Silicon Fen? (Insights & Research), bidwells.co.uk
External links
Cambridge Corporate Gateway
Cambridge Technopole
Economy of Cambridge
High-technology business districts in the United Kingdom
History of Cambridge
Information technology places
Science and technology in Cambridgeshire | Silicon Fen | [
"Technology"
] | 952 | [
"Information technology",
"Information technology places"
] |
590,057 | https://en.wikipedia.org/wiki/Sandoz | Sandoz Group AG is a Swiss company that focuses on generic pharmaceuticals and biosimilars. Prior to October 2023, it was part of a division of Novartis that was established in 2003, when Novartis united all of its generics businesses under the name Sandoz. Before this, the company existed as an independent pharmaceutical manufacturer until 1996, when it was merged with Ciba-Geigy to form Novartis. Prior to the merger, it specialized in medicines used in organ transplants, such as Sandimmune, and various antipsychotics and migraine medicines. Its headquarters were in Holzkirchen, Germany and after the spin-off from Novartis, the headquarters moved to Basel, Switzerland. Sandoz is one of the leading global generics businesses.
History
1886–1995: Formation and initial growth
The company was founded in 1886 by Alfred Kern (1850–1893) and Edouard Sandoz (1853–1928) in Basel (Switzerland) under the name Chemiefirma Kern und Sandoz. Initially the company focused on the production of dyes, namely alizarin blue and auramine. After Kern died, the company changed its name to Chemische Fabrik vormals Sandoz in 1895 and began producing pharmaceuticals the same year: the first pharmaceutical substance, antipyrine, was produced to reduce fever. In 1899 the company started producing saccharin.
In 1917, Sandoz entered pharmaceutical research when Arthur Stoll (1887–1971) was hired, and in 1929 Calcium Sandoz was introduced, laying the foundation for research into modern calcium therapy.
In 1918, Arthur Stoll isolated ergotamine from ergot; the substance was eventually used to treat migraine and headaches and was introduced under the trade name Gynergen in 1921.
In 1938 Albert Hofmann produced the synthetic substance lysergic acid diethylamide, better known as LSD. The psychoactive properties of this preparation were nevertheless not discovered until 1943, when Hofmann accidentally ingested a small amount. From 1947 to the mid-60s, LSD was sold by Sandoz under the name Delysid. It was marketed as a treatment for a wide variety of mental ailments, ranging from alcoholism to sexual deviancy. Sandoz suggested in its marketing literature that psychiatrists take LSD themselves, to gain a better subjective understanding of the schizophrenic experience, and many did exactly that, as did other scientific researchers. The Sandoz product received mass publicity as early as 1954, in a Time magazine feature. Research on LSD peaked in the 1950s and early 1960s. The CIA purchased quantities of LSD from Sandoz for use in its illegal human experimentation program known as MKUltra. Sandoz withdrew the drug from the market in 1965. The drug became a cultural novelty of the 1960s after psychologist Timothy Leary at Harvard University began to promote its use for recreational and spiritual experiences among the general public.
In 1939, Kern & Sandoz became Sandoz Ltd., a name it operated under for nearly sixty years.
In 1963, Sandoz acquired Biochemie GmbH, which was producing and supplying scarce, urgently needed acid-resistant penicillin.
In 1967, Sandoz merged with Wander AG and diversified into the dietetics business with Ovomaltine and Isostar.
In 1972, Sandoz acquired Delmark, and in 1982 the Swedish crisp bread producer Wasabröd (maker of the Wasa brand).
In 1986, Velsicol Chemical Corporation acquired the agrochemicals division of Sandoz.
In 1994, Sandoz bought Gerber Products Company, expanding its research into biopharmaceuticals.
In 1995, the specialty chemicals division became an independent company under the name Clariant, based in Muttenz.
1996–2023: Merger with Ciba-Geigy and developments under Novartis
On December 20, 1996, the merger of Sandoz and Ciba-Geigy led to the creation of Novartis. The Sandoz brand name was then only used in the pharmaceutical business for over-the-counter medicines.
The former company name Sandoz was reactivated in May 2003, when Novartis merged its variously named generics companies under the uniform brand name Sandoz. In addition to the name, the company logo used before the 1996 merger was also adopted.
In 2002, Sandoz acquired Lek Pharmaceuticals d.d., Slovenia's largest pharmaceutical company.
In 2003, Novartis united its global generics businesses under a single global brand, reestablishing the name Sandoz as a division of Novartis. The Amifarma S.L. production plant in Palafolls, located near Barcelona, Spain was also acquired.
In February 2005, Sandoz took over Hexal AG and Eon Labs. The integration into Sandoz created the second-largest generics group in the world and the largest in the German market, with annual sales of 7.6 billion US dollars (2008) and over 23,000 employees in 130 countries. The headquarters have been in Holzkirchen since 2005. Sandoz's Swiss administrative headquarters are in Rotkreuz (ZG), in the municipality of Risch in the canton of Zug.
In 2006, Omnitrope, a recombinant human growth hormone, was approved by the European Medicines Agency (EMA) and also became the first biosimilar to receive approval from the FDA.
In 2007, the first complex biosimilar, Binocrit was approved in the EU.
In 2009, Sandoz acquired EBEWE Pharma's specialty generic injectables division and in 2010, acquired Oriel Therapeutics.
In 2012, Sandoz acquired Fougera Pharmaceuticals, entering the generic (topical) dermatology business.
In November 2018, it was announced that Novartis would convert Sandoz into an independent entity over the next two years. In March 2019, it was announced that CEO Richard Francis had resigned for personal reasons and that Francesco Balestrieri, Sandoz's European head, had taken over management ad interim. Richard Saynor was appointed as CEO later in 2019.
In August 2022, Novartis said the spin-off of Sandoz into a standalone company would be completed by the end of 2023. As part of the spin-off, Sandoz announced in June 2023 it would move its headquarters from Holzkirchen, Germany to Basel, Switzerland.
In July 2023, Sandoz launched a biosimilar version of AbbVie Inc's Humira, under the label, Hyrimoz.
2023–present: Return to a standalone company
In September 2023, Novartis announced that the spin-off had been approved by its shareholders and that it would be completed by the next month, resulting in Novartis shareholders receiving one Sandoz share for every five Novartis shares. Sandoz was listed on the SIX Swiss Exchange with a market capitalization between $18 billion and $25 billion.
On October 4, 2023, Novartis completed the spin-off of Sandoz as a stand-alone company.
In January 2024, Sandoz announced it would acquire Cimerli, a biosimilar ophthalmology drug, from Coherus BioSciences for $170 million. The acquisition was completed in March 2024.
In February 2024, Sandoz US and its subsidiary Fougera Pharmaceuticals Inc. (indirect subsidiaries of Sandoz Group AG) reached a USD 265 million settlement agreement in the US related to a generics direct-purchaser class action lawsuit.
In April 2024, Sandoz reached an agreement with Amgen to resolve all patent disputes between the two companies relating to the US Food and Drug Administration (FDA)-approved Sandoz denosumab biosimilars.
In November 2024, Sandoz inaugurated the new headquarters in Basel, Switzerland.
1986 Sandoz warehouse fire in Schweizerhalle
On November 1, 1986, a major fire broke out in a warehouse containing chemicals at what was then the Sandoz site in Schweizerhalle. The thick smoke, the stench and the unknown composition of the combustion gases caused the authorities in the neighboring communities to alert the population early in the morning with a general siren alarm, and a curfew of several hours was imposed. No people suffered acute harm, with the exception of three people with pre-existing asthma who required hospitalization. However, the toxins found their way into the Rhine via the extinguishing water, where they caused a large number of fish to die off.
On November 11, 1986, the analysis of water samples proved that at the same time as the Rhine was being polluted by the contaminated extinguishing water from the Sandoz area, 400 kg of atrazine, a herbicide, had been discharged into the Rhine from the neighboring chemical company Ciba-Geigy.
The official investigation report came to the conclusion (only "on the basis of theoretical considerations") that when pallets were packed with Prussian blue, incorrect handling of a hot air blower led to a hot spot, which could be the cause. Subsequent trials, however, resulted in no conviction. The plant now belongs to Clariant.
To this day, the landfill left after the fire continues to pollute the groundwater in Muttenz and is actively monitored by Novartis, as the legal successor to Sandoz, and the environmental authorities of the Canton of Basel-Landschaft.
To commemorate the spill, there is a sculpted market table by Bettina Eichin in the cloister of the Basel Münster.
Literature
Ernst Brandl: Zur Entdeckungsgeschichte des Penicillin V in Kundl (Tirol) [On the history of the discovery of penicillin V in Kundl, Tyrol]. In: Veröffentlichungen des Tiroler Landesmuseums Ferdinandeum, vol. 71, Innsbruck 1991, pp. 5–16 (a history of Biochemie in Kundl; zobodat.at).
See also
Pharmaceutical industry in Switzerland
References
External links
1996 mergers and acquisitions
Biotechnology companies of Switzerland
Manufacturing companies based in Basel
Companies listed on the SIX Swiss Exchange
Corporate spin-offs
Life sciences industry
Pharmaceutical companies of Switzerland
Swiss brands
Zug | Sandoz | [
"Biology"
] | 2,105 | [
"Life sciences industry"
] |
590,214 | https://en.wikipedia.org/wiki/Trend%20Micro | Trend Micro Inc. is a global cyber security software company. The company has globally dispersed R&D in 16 locations across every continent. The company develops enterprise security software for servers, containers, and cloud computing environments, networks, and end points. Its cloud and virtualization security products provide automated security for customers of VMware, Amazon AWS, Microsoft Azure, and Google Cloud Platform.
Eva Chen is a co-founder and has been chief executive officer since 2005. She succeeded founding CEO Steve Chang, who is now chairman.
Kevin Simzer is the COO; he runs the company's global operations and frequently speaks about the company in the media.
History
Founding
The company was founded in 1988 in Los Angeles by Steve Chang, his wife, Jenny Chang, and her sister, Eva Chen. The company was established with proceeds from Steve Chang's previous sale of a copy protection dongle to the United States–based Rainbow Technologies. Shortly after establishing the company, its founders moved its headquarters to Taipei.
In 1992, Trend Micro took over a Japanese software firm to form Trend Micro Devices and established headquarters in Tokyo. It then made an agreement with CPU maker Intel, under which it produced an anti-virus product for local area networks (LANs) for sale under the Intel brand. Intel paid royalties to Trend Micro for sales of LANDesk Virus Protect in the United States and Europe, while Trend paid royalties to Intel for sales in Asia. In 1993, Novell began bundling the product with its network operating system. In 1996, the two companies agreed to a two-year continuation of the agreement in which Trend was allowed to globally market the ServerProtect product under its own brand alongside Intel's LANDesk brand.
Trend Micro was listed on the Tokyo Stock Exchange in 1998 under the ticker 4704. The company began trading on the United States–based NASDAQ stock exchange in July 1999.
Acquisitions
KelKea – 2005
HijackThis – 2007
Provilla – 2007
TippingPoint – 2015
Identum – 2008
Third Brigade – 2009
InterMute – 2005
Humyo – 2010
Mobile Armor – 2010
AffirmTrust – 2012
IndusGuard – 2012
Broadweb – 2012
IMMUNIO – 2017
Notable Partnerships
AWS
INTERPOL
Microsoft
Google
NVIDIA
IBM
CapGemini
Paris Peace Forum
Cybersecurity Tech Accord
HITRUST
Snyk
2000s
In 2004, founding chief executive officer Steve Chang decided to split the responsibilities of CEO and chairman of the company. Company co-founder Eva Chen succeeded Chang as chief executive officer of Trend Micro in January 2005. Chen had been the company's chief technology officer since 1996 and, before that, executive vice president since the company's founding in October 1989. Chang retained his position as company chairman. In May 2005, Trend Micro acquired US-based antispyware company InterMute for $15 million. Trend Micro had fully integrated InterMute's SpySubtract antispyware program into its antispyware product offerings by the end of that year.
In 2008, Trend Micro released the Smart Protection Network, a cloud-based global threat intelligence infrastructure backed by more than 1,000 security experts worldwide and operating 24/7.
2010s
In 2014, Trend Micro was first to market with security designed for both the AWS and Microsoft Azure public clouds.
In 2018, Trend Micro released the first AI-powered writing-style analysis to halt email fraud, a new layer of protection against business email compromise (BEC) attacks which uses AI to "blueprint" a user's style of writing.
In 2019, Trend Micro released a Cloud Workload Protection Platform (CWPP) for AWS, Azure, and GCP, delivering what the company described as the industry's broadest range of security capabilities in a single platform.
2020s
In 2024, Trend Micro launched what it described as the world's first security solutions for consumer AI PCs, tailored to safeguard against emerging threats in the era of AI PCs.
Also in 2024, Trend Micro announced plans to demo a new data center solution, using NVIDIA technology, for security-conscious business and government customers harnessing the power of AI.
Adversary, Attack, and Campaign Intelligence
Trend Micro describes its adversary, attack, and campaign intelligence as the broadest in the industry, with 500,000 commercial customers and over 17 million consumer customers worldwide. Its intelligence gathering documents on average 2.5 trillion events per day by monitoring customers' endpoints, web traffic, cloud, servers, and more.
Trend Micro handled over 6.5 trillion threat queries in 2023 and blocked 1.6 billion threats the same year. The company has been a leader in vulnerability disclosure since 2007.
Trend ZDI
Trend Micro's Zero Day Initiative (ZDI) is an international software vulnerability initiative that was started in 2005 by TippingPoint, a division of 3Com. The program was acquired by Trend Micro as a part of the HP TippingPoint acquisition in 2015.
ZDI buys various software vulnerabilities from independent security researchers, and then discloses these vulnerabilities to their original vendors for patching before making such information public.
See also
Antivirus software
Cloud security
Comparison of antivirus software
Comparison of computer viruses
References
External links
1988 establishments in California
Software companies based in Tokyo
Computer security software companies
Computer security companies specializing in botnets
Computer forensics
Computer companies of the United States
Computer hardware companies
Software companies of Japan
Software companies of Taiwan
Software companies established in 1988
Companies formerly listed on the Nasdaq
Companies listed on the Tokyo Stock Exchange
Companies in the Nikkei 225
Japanese brands
Taiwanese brands | Trend Micro | [
"Technology",
"Engineering"
] | 1,117 | [
"Computer hardware companies",
"Computers",
"Cybersecurity engineering",
"Computer forensics"
] |
590,260 | https://en.wikipedia.org/wiki/Arak%20%28drink%29 | Arak or araq is a distilled Levantine spirit of the anise drinks family. It is translucent and unsweetened.
Composition
Arak is traditionally made of grapes and aniseed (the seeds of the anise plant); when crushed, their oil provides arak with a slight licorice taste. Dates, figs, and other fruits are sometimes added.
Typically, arak is a minimum of 50% alcohol by volume (ABV), and can be up to 70% ABV (126 proof). A 53% ABV is considered typical.
Etymology
The word arak comes from the Arabic ʿaraq, meaning 'perspiration'. Its pronunciation varies with the regional varieties of Arabic.
Production and consumption
Arak is a traditional alcoholic beverage of the Levant and Eastern Mediterranean. It is distilled and consumed across a wide area in the Levant, including in Lebanon, Syria, Jordan, Egypt, Israel and Palestine.
Arak is a strongly flavored liquor, usually mixed in proportions of approximately one part arak to two parts water in a traditional Eastern Mediterranean water vessel called an ibrik, from Middle Persian or Parthian *ābrēz. The mixture is then poured into ice-filled cups, usually small ones, though it can also be consumed in regular-sized cups. This dilution causes the clear liquor to turn a translucent milky-white color because anethole, the essential oil of anise, is soluble in alcohol but not in water. The result is an emulsion whose fine droplets scatter the light and turn the liquid translucent, a phenomenon known as the ouzo effect.
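As a back-of-the-envelope illustration (using the typical 53% ABV figure above and assuming ideal mixing, i.e. ignoring the slight volume contraction of ethanol–water mixtures), the traditional one-part-arak-to-two-parts-water serving works out to

\[ \mathrm{ABV}_{\text{serve}} \approx \frac{1 \times 53\%}{1 + 2} \approx 17.7\%, \]

roughly the strength of a fortified wine.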
Arak is often served with meze, which may include dozens of small traditional dishes, as well as with grilled meat. It is also commonly served as an apéritif.
In Lebanon
Arak is often called the national drink of Lebanon. Often made from the Marawi and Obaideh grape varieties, it is produced above all in the Bekaa Valley, particularly at the Kefraya, Ksara, Domaine des Tourelles, and Massaya vineyards. Zahlé, where Arak Zahlawi is produced, is considered a capital of arak. The water used in the production of Arak Zahlawi is traditionally drawn from the Berdawni River.
In Syria
In Syria, arak is common. Before the outbreak of the Syrian Civil War in 2011, production was dominated by two state-run firms, Al-Rayan (based in the Druze city of Sweida) and Al-Mimas (based in a Christian settlement near Homs). Together, the two companies held about 85% of Syria's market share in arak. Since the civil war, however, the companies' profits and the price of arak have declined, with their combined market share falling to under half. Low-quality counterfeits have also proliferated, using pure alcohol (rather than fermented grapes) and an aniseed substitute (rather than aniseed).
In Iraq
Iraq formerly manufactured arak, including in Bashiqa in northern Iraq, but most arak production facilities shut down in the 2010s. Arak is distilled and consumed by Iraq's Yazidi and Christian minorities, although many members of these groups fled after ISIL seized control of large portions of northern Iraq in 2014. Amid a rise in Islamic conservatism, the Iraqi parliament passed a ban on the import, manufacture, and sale of alcoholic beverages in 2016, prompting protests from Iraqi non-Muslims and rights activists. The ban was not enforced until it was officially gazetted in 2023, triggering border crackdowns. The ban is not enforced in Iraq's autonomous Kurdistan Region.
In Israel
During the age of austerity in the early years of the State of Israel, arak was locally made, with few imports. The core market for arak was among older, working-class Israelis, and the drink was disfavored among younger and modern Israelis. In the first two decades of the 21st century, however, arak had a resurgence of popularity in Israel. Arak also continues to be popular among Moroccan Jews in Israel, some of whom regard arak as having folk medicine properties.
Israeli tax reforms in 2013 substantially increased the alcohol tax, and this led to consolidation of the arak market. The most popular producer is Joseph Gold & Sons, a winery established in 1824 in Haifa by the Gold family, which formerly made vodka in Ukraine before establishing an arak distillery in Israel. The winery, which later moved to Tirat Carmel outside Haifa, produces different arak brands, including Elite Arak, Alouf Arak and Amir Arak. Other major arak producers include Barkan Wine Cellars (which produces Arak Ashkelon) and Kawar Distillery (which produces Arak Kawar, Arak Yuda, and Arak Noah). After the Israeli withdrawal from southern Lebanon in 2000, some former South Lebanon Army members who settled in Israel began to produce arak using Lebanese (Zahle) methods.
In Palestine
Arak is locally produced by Palestinian Christians. The West Bank city of Ramallah is a center of arak distillation. Imports of Palestinian arak to the U.S. increased after imports of Syrian arak were disrupted by the Syrian civil war.
Outside the Levant
Several arak brands are produced outside of the eastern Mediterranean. The Sudanese araqi is a similar drink. Arak is also produced in north Africa. The Arak Carmel brand is produced in Spain, while the Arak Julenar brand is produced by an Iraqi in Greece.
Arak was once produced in Iran, until it was banned following the 1979 Iranian Revolution. Iranian Armenians locally manufacture black-market arak in Iran, and some foreign brands are also smuggled in the country. A locally made Iranian arak moonshine, aragh sagi, is made from fermented raisins; in 2020, it sold on the black market for about US$10 for 1.5 liters.
The Persian Empire Distillery, established in 2006 by a Shiraz-born Persian Canadian entrepreneur, distills an arak brand, Arak Saggi, at its distillery in Peterborough, Ontario.
Arak has achieved popularity among consumers in the North Caucasus area of Russia.
Similar drinks
Arak is very similar to other anise-based spirits, including the Turkish rakı, the Greek ouzo and tsikoudia, the Italian sambuca and anisette, the Bulgarian and Macedonian mastika, and the Spanish anís. However, it is unrelated to the similarly named arrack, a sugarcane-based Indonesian liquor.
Preparation
Manufacturing begins with the vineyards, and quality grapevines are the key to making good arak. The vines should be very mature and usually of a golden color. Instead of being irrigated, the vineyards are left to the care of the Mediterranean climate and make use of the natural rain and sun. The grapes, which are harvested in late September and early October, are crushed and put in barrels together with the juice (in Arabic el romeli) and left to ferment for three weeks. Occasionally the whole mix is stirred to release the CO2.
Both pot stills and column stills are used. Stills are usually made of stainless steel or copper. Copper stills with a Moorish shape are the most sought after.
The alcohol collected in the first distillation undergoes a second distillation, but this time it is mixed with aniseed. The ratio of alcohol to aniseed may vary and it is one of the major factors in the quality of the final product. The finished product is produced during a final distillation which takes place at the lowest possible temperature. For a quality arak, the finished spirit is then aged in clay amphoras to allow the angel's share to evaporate. The liquid remaining after this step is the most suitable for consumption.
See also
Boukha (Tunisian drink)
Mahia (drink) (Moroccan drink)
Korean soju
Moonshine by country
Zivania (Cypriot drink)
References
External links
Adulteration
Anise liqueurs and spirits
Arabic drinks
Distilled drinks
Iraq distilled drinks
Israeli alcoholic drinks
Jordanian distilled drinks
Lebanese distilled drinks
Levantine cuisine
Mediterranean cuisine
Palestinian distilled drinks
Syrian distilled drinks | Arak (drink) | [
"Chemistry"
] | 1,733 | [
"Adulteration",
"Distillation",
"Drug safety",
"Distilled drinks"
] |
590,280 | https://en.wikipedia.org/wiki/List%20of%20Hewlett-Packard%20products | The following is a partial list of products manufactured under the Hewlett-Packard brand.
Printers
HP categories of printers as of November 2014 are:
Black and white laser printers
Color laser printers
Laser multifunction printers
Inkjet all-in-one printers
Specialty Photo inkjet printers
Business ink printers
Color inkjet printers
HP Designjet large format printers
HP Indigo Digital Presses
HP Inkjet Digital Web Press
HP latex printers
HP Scitex large format printers
Network print servers
Black and white laser printers
(Current Line: November 2014)
High-volume black and white laser printers
LaserJet 700 printer
LaserJet M806 printer
Office black and white laser printers
LaserJet 400 printer
LaserJet 600 printer
LaserJet P2000 printer
LaserJet P3000 printer
Color Laser printers
(As of November 2014)
Laser multifunction printers
(As of November 2014)
Discontinued models
Inkjet all-in-one printers
(As of November 2014)
Specialty photo inkjet printers
(As of November 2014)
Compact photo printers
Photosmart A310 Printer
Photosmart A430 Portable Photo Studio Series
Business ink printers
(Current Line: November 2014)
Business ink multifunction printers
Officejet Enterprise Color X585 Multifunction Printer
Officejet Pro X476/X576 Multifunction Printer
Page wide array printers
Officejet Enterprise Color X555 Printer
Officejet Pro X451 Printer
Officejet Pro X551 Printer
Color inkjet printers
(Current Line: November 2014)
Discontinued models
Designjet printers
(Current Line: November 2014)
Discontinued models
HP Indigo Digital Presses
(Current Line: November 2014)
HP Inkjet Digital Web Press
(Current Line: November 2014)
Inkjet Digital Web Press
T300 Inkjet Web Press series
HP latex printers
Current Line: (June 2015)
HP Scitex large format printers
Current Line: (June 2015)
Network print servers
Current Line: (November 2014)
Printer Notes:
In HP printers introduced since ca. 2006, alpha suffix codes indicate product groupings and optional features; for example, a trailing "d" indicates duplex printing and "n" indicates network capability.
HP software products
HP Cloud Services Print App series
HP Connected Music
HP Connected Photo
HP Instant Ink series
HP Link Reader
HP Live Photo
HP Photo Creations Software
HP Scan and Capture Application
HP Smart Web Printing Software
HP SureSupply Software
HP Touch point Manager
HP Update Software
HP WallArt Solution
HP converged cloud products
HP Public Cloud
HP CloudSystem
Digital cameras
Original line
HP E-series
HP M-series
HP R-series
Scanners
ScanJet series
Film scanners
Tablet computers
HP 7 1800
HP Slate
Slate 6
Slate 7
Slate 8 Plus
Slate 10 Plus
HP TouchPad
HP Omni 10
HP Stream 7
HP Stream 8
HP Envy 8 Note
HP 408
HP 608
HP 612
HP ElitePad
Mobile phones
Palm Prē, Prē Plus, Prē 2, Prē 3
HP Veer
Palm Pixi, Pixi Plus
HP Elite x3
Pocket computer
HP-75 BASIC hand-held 1982
LX series
OmniGo series
Jornada
iPAQ
Originally made by Compaq, acquired by HP in 2002 following the merger.
Source: HP Handheld/Pocket/Palmtop PCs
Desktop calculators and computers
HP 9800 series desktop computers as follows:
Computer terminals
Plotters
Pocket calculators
Calculator wristwatches:
HP-01
Business desktops
Compaq Evo
The Compaq Evo line of business desktops and laptops were originally made by Compaq in 2001 and was rebranded HP Compaq after the 2002 merger (see HP Business Desktops for recent products).
HP X-Terminal
See HP X-Terminals
HP TouchSmart PC
HP Brio
HP Vectra
HP e-PC (e-Vectra)
HP Compaq desktops
See HP Business Desktops
HP Pro/ProDesk
HP Elite/EliteDesk
Thin clients
Blade System
Thin client
See also HP Mobile Thin Clients
Personal desktops
Compaq Presario desktops
A series of desktop computers made by Compaq under the Compaq Presario brand since 1993. Acquired by HP in 2002, discontinued in 2013.
HP Pavilion
HP Slimline PC
HP Pavilion Media Center TV
HP Pavilion Elite
HP Pavilion Elite m9000 series - m9040n
HP Blackbird 002
HP Blackbird 002
HP Black wired keyboards
HP 434820-167 PS2 Keyboard
Business notebooks
HP OmniBook
HP's line of business-oriented notebook computers since 1993. In chronological order of release:
Following HP's acquisition of Compaq in 2002, this series of notebooks was discontinued, replaced with the HP Pavilion, HP Compaq, and Compaq Presario notebooks.
The OmniBook name would later be repurposed for a line of consumer-oriented notebooks in 2024, replacing the old Pavilion and Spectre series of notebooks.
Compaq Evo
The Compaq Evo line of business desktops and laptops were originally made by Compaq and was rebranded HP Compaq after the 2002 merger (see below for recent products).
HP Compaq laptops
HP Mini
HP ProBook
HP EliteBook
See the HP EliteBook article for more details.
First generation — The xx30 generation comprised the following models:
Second generation — The xx40 series comprised the following models:
Third generation — The xx60 series, announced on February 23, 2011, comprised the following models:
Fourth generation — The fourth generation, announced on May 9, 2012, comprised the following models:
Mobile thin client
Rugged notebooks
Personal notebooks
Compaq Presario laptops
A series of notebook computers made by Compaq under the Compaq Presario brand since 1996. Acquired by HP in 2002, replacing the HP OmniBook that year; discontinued in 2013.
HP Pavilion notebooks
A series of multimedia notebooks. Some models had the HP-developed QuickPlay software, which enabled booting to a Linux-based DVD/music player held on a separate partition.
HP Envy
HP G series
HP Mini
HP OmniBook
A series of notebooks introduced in 2024 to succeed the HP Pavilion laptops. The name was originally used for a line of business-oriented laptops and notebooks made by Hewlett-Packard from 1993–2002.
Workstations
PA-RISC based
Itanium based
Alpha based (from DEC, via Compaq)
x86 based
Blade Workstations
Servers
x86 (Intel & AMD Opteron) based
Entry-level servers
Entry-level servers used either the NetServer or ProLiant brands. The NetServer line of servers were discontinued following the merger with Compaq in 2002, with the ProLiant line of servers succeeding it. The ProLiant line of servers was then acquired by Hewlett Packard Enterprise in 2015 after HP split up into two separate companies.
Despite the ProLiant name being used on some of these entry-level servers listed below, they are based on HP's former NetServer line of servers from 1993–2002 (more specifically the tc series) and as such do not come with Compaq's SmartStart or Insight Management Agents. This is especially applicable to the later, post-merger ProLiant models made by HP, as earlier, pre-merger models made by Compaq under the ProLiant brand did come with the aforementioned management tools.
NetServer
HP NetServer LPr
HP NetServer LP1000R (retired)
HP NetServer LP2000R (retired)
HP NetServer LH3 (retired)
HP NetServer LH3R (retired)
HP NetServer LH4 (retired)
HP NetServer LH4R (retired)
HP NetServer LH3000 (retired)
HP NetServer LH6000 (retired)
HP NetServer LHX8000 (retired)
HP NetServer LHX8500 (retired)
ProLiant ML
These are in a tower form factor.
G1 (retired)
Compaq ML330
Compaq ML330e
Compaq ML350
Compaq ML370
Compaq ML530
Compaq ML570
Compaq ML750
G2 (retired)
Marketed as Compaq (pre-merger)
Compaq ML330
Compaq ML350
Compaq ML370
Marketed as HP (post-merger)
HP ML110
HP ML150
HP ML370
HP ML530
HP ML570
G3 (retired)
HP ML110
HP ML150
HP ML310
HP ML330
HP ML350
HP ML370
HP ML570
G4 (retired)
ML 100 series
HP ML110
HP ML110 storage server
HP ML115
HP ML150
ML 300 series
HP ML310
HP ML330
HP ML350
HP ML350 storage server
HP ML370
G5 (retired)
ML 100 series
HP ML110
HP ML110 storage server
HP ML115
HP ML150
ML 300 series
HP ML310
HP ML350
HP ML350 storage server
HP ML370
G6 (retired)
ML100 series
HP ML110
HP ML150
ML300 series
HP ML330
HP ML350
HP ML370
G7 (retired)
ML100 series
HP ML110
Gen8 (retired)
ML300 series
HP ML310e
HP ML350e
HP ML350p
Gen9 (retired)
Marketed as HP (pre-split)
HP ML10
HP ML10 V2
HP ML30
HP ML110
HP ML150
HP ML350
Marketed as HPE (post-split)
HPE ML10
HPE ML30
HPE ML110
HPE ML350
Gen10
HPE ML30
HPE ML110
HPE ML350
ProLiant DL
These are in a rack mount form factor.
ProLiant
A series of servers under the ProLiant brand, originally made by Compaq. The ProLiant brand was acquired by HP in 2002 during their merger with Compaq and later acquired by Hewlett Packard Enterprise in 2015.
Unlike with the NetServer-based ProLiant entry-level servers made by HP, these servers listed below are based on Compaq's former ProLiant line from 1993–2002 and do come with SmartStart and Compaq's Insight Management Agents, especially for the earlier pre-merger ProLiant models. The later, post-merger models made by HP under the ProLiant brand do not come with these aforementioned management tools following HP's acquisition of Compaq in 2002.
ProLiant ML Series
These are in a tower form factor.
'e' indicates 'essential' and 'p' indicates 'performance' variants.
Compaq ProLiant ML310
Compaq ProLiant ML330
Compaq ProLiant ML350
Compaq ProLiant ML370
Compaq ProLiant ML570
ProLiant ML570 G2 (retired)
ProLiant DL Series
These are in a rack mount form factor.
Compaq ProLiant DL320 (1U, single processor server)
Compaq ProLiant DL360 (1U, 2-processor server, 2 hot-swap Compaq universal hard disks)
ProLiant DL365 (retired)
Compaq ProLiant DL380
ProLiant DL385
ProLiant DL560
DL560 G1
DL560 Gen8
ProLiant DL580
ProLiant DL585 (supports two or four dual-core AMD Opteron)
ProLiant DL740 (retired)
Compaq ProLiant DL760 (retired)
ProLiant DL760 G2 (retired)
ProLiant DL785 (supports up to eight quad-core AMD Opteron)
ProLiant DL785 G6
ProLiant DL980 G7 (supports up to 8 Intel Xeon E7-4800 and 7500 series processors)
ProLiant BLp blades
These are in a blade form factor.
ProLiant BL20p
ProLiant BL25p
ProLiant BL30p
ProLiant BL35p
ProLiant BL40p
ProLiant BL45p
ProLiant BLc blades
ProLiant BL2x220c
ProLiant BL260c (G5 only)
ProLiant BL280c (G6 only)
ProLiant BL460c
ProLiant BL465c
ProLiant BL480c
ProLiant BL490c
Proliant BL495c
Proliant BL660c (G8)
ProLiant BL680c
ProLiant BL685c
Itanium based
HPE Integrity Servers
rx1600 series – 1U
rx1600
rx1620
rx2600 series – 2U
rx2600
rx2620
rx2660
rx3600 – 4U
rx4610 – 7U
rx4640 – 4U
rx5670 – 7U
rx6600 – 7U
rx7600 series – 10U
rx7610
rx7620
rx7640
rx8600 series – 17U
rx8620
rx8640
HP Superdome
SX1000 based – SX2000 based
Integrity BL blades
Compaq ProLiant
Compaq ProLiant DL590/64 (retired)
Alpha based
PA-RISC based
Scalable servers and supercomputer nodes
Apollo series
SGI series
HPE SGI 8600
Enterprise storage
HP StorageWorks XP storage array
StorageWorks EVA storage array (from Compaq)
HP AutoRAID storage array (retired)
HP VA storage array (retired)
HP Jamaica storage enclosure (retired)
"StorageWorks" Storage element managers
Command View XP
Command View AE
Command View EVA
Command View SDM.
StorageWorks Command View TL
Storage area management
HP Storage Essentials
OpenView Storage Area Manager
ProCurve
ProCurve Networking by HP is the networking division of HP.
Telepresence and videoconferencing
HP Halo, a high-end immersive telepresence system, was sold to Polycom on June 1, 2011.
External hard disk drives
HP External Hard Drive (1 TB, USB 3.0)
HP Portable Hard Drive (1 TB, USB 3.0)
HP USB Flash Drive (16 GB)
External optical disk drives
HP DVD-R Drive
External tape drives and libraries
HP SureStore series
Tape libraries
All sold in either the DLT 8K or Ultrium 230 format.
Autoloaders
SureStore 1/9 (Ultrium 230, DLT-1, or DLT 8K)
Docking Stations
HP xb3000
See also
List of Palm OS devices
List of Dell PowerEdge Servers
References
Hewlett-Packard products
Videotelephony
Hewlett-Packard | List of Hewlett-Packard products | [
"Technology"
] | 3,097 | [
"Computing-related lists"
] |
590,456 | https://en.wikipedia.org/wiki/57P/du%20Toit%E2%80%93Neujmin%E2%80%93Delporte | 57P/du Toit–Neujmin–Delporte is the designation of a periodic comet. It is a member of the Jupiter family of comets whose orbits and evolution are strongly influenced by the giant planet. In 2002 it was discovered to have broken up into at least 20 fragments. At the time of their discovery, these shed fragments were spread out along the orbital path subtending an angle of 27 arcminutes from the comet's surviving head.
Discovery
The comet has many co-discoverers and a complicated discovery history due to unreliable communications during World War II. Daniel du Toit discovered the comet (retrospectively designated as P/1941 O1) on July 18, 1941, working at Boyden Station, South Africa. His cabled message about the comet did not reach his employer, Harvard College Observatory, until July 27. During a routine asteroid search, Grigory N. Neujmin (Simeis Observatory, Soviet Union) found the comet on a photographic plate exposed July 25. He confirmed his own observation on July 29, but the radiogram from Moscow took 20 days to reach Harvard. The official announcement of the new comet finally happened on August 20, 1941. A few days later, it became known that Eugène Joseph Delporte at the Royal Observatory, Belgium, also had found the comet on August 19, so he was added to the list of discoverers.
A few weeks later, news from Paul Ahnert at Sonneberg, Thuringia, Germany, reached Harvard that he also observed the new comet on July 22, but it was too late to recognize his contribution.
Fragment A was last observed in 2002.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
57P/du Toit-Neujmin-Delporte – Seiichi Yoshida @ aerith.net
57P/du Toit–Neujmin–Delporte at Gary W. Kronk's Cometography
Periodic comets
0057
057P
19410718 | 57P/du Toit–Neujmin–Delporte | [
"Astronomy"
] | 422 | [
"Astronomy stubs",
"Comet stubs"
] |
590,600 | https://en.wikipedia.org/wiki/Blohm%20%26%20Voss%20BV%20141 | The Blohm & Voss BV 141 (originally the Ha 141) was a World War II German tactical reconnaissance aircraft, notable for its uncommon structural asymmetry. Although the Blohm & Voss BV 141 performed well, it was never ordered into full-scale production, for reasons that included the unavailability of the preferred engine and competition from another tactical reconnaissance aircraft, the Focke-Wulf Fw 189.
Development
In 1937, the Reichsluftfahrtministerium (RLM, the German Aviation Ministry) issued a specification for a single-engine reconnaissance aircraft with optimal visual characteristics. The preferred contractor was Arado with the Ar 198, but the prototype proved unsuccessful. The eventual winner was the Focke-Wulf Fw 189 Uhu, even though its twin-boom design using two smaller engines did not match the requirement for a single-engined aircraft. Blohm & Voss (Hamburger Flugzeugbau), although not invited to participate, pursued something far more radical as a private venture. The proposal of chief designer Dr. Richard Vogt was the uniquely asymmetric BV 141.
Design
The Plexiglas-glazed crew gondola on the starboard side strongly resembled that found on the Fw 189, and housed the pilot, observer and rear gunner, while the fuselage on the port side led smoothly from the BMW 132N radial engine to a tail unit.
At first glance, the placement of weight would seem to have induced a tendency to roll, but the weight was evenly supported by lift from the wings.
Countering the yaw induced by the asymmetry of thrust and drag was a more complicated matter. At low airspeed it was calculated to be largely alleviated by the phenomenon known as P-factor, while at normal airspeed it proved easy to control with trimming.
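The trim argument can be pictured as a simple static moment balance (an illustrative sketch with generic symbols, not figures from the original design documents): in steady flight the yawing moments about the centre of gravity must cancel,

\[ N = T\,y_T - D\,y_D - Y_v\,l_v \approx 0, \]

where \(T\) is the thrust acting at lateral offset \(y_T\), \(D\) the drag of the offset crew gondola at offset \(y_D\), and \(Y_v\) the side force the trimmed fin generates at moment arm \(l_v\). Trimming simply adjusts \(Y_v\) until the net moment \(N\) vanishes, which is modest work at cruise speeds.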
The tailplane was symmetrical at first, but in the 141B it became asymmetrical – starboard tailplane virtually removed – to improve the rear gunner's fields of view and fire.
Operational history
The first prototype, the BV 141 V1 (D-ORJE), first flew on 25 February 1938, using an 865 hp BMW 132N engine. Three prototypes and an evaluation batch of five BV 141As were produced, backed personally by Ernst Udet, but the RLM decided on 4 April 1940 that they were underpowered, although it was also noted that they otherwise exceeded the requirements. By the time a batch of 12 BV 141Bs had been built with the more powerful BMW 801 engine, they were too late to make an impression, as the RLM had already decided to put the Fw 189 into production.
An urgent need for BMW 801 engines for use in the Fw 190 fighter aircraft reduced the chance of the BV 141B being produced in quantity. The BV 141 was never operational, though the B-02 (V10) was evaluated in autumn 1941 by Aufklärungsschule 1 (reconnaissance school). Vogt came up with several other asymmetric designs, including the piston-jet P.194.01, but none of these were actually built. Several wrecked BV 141s were found by advancing Allied forces. One was captured by British forces and sent to England for examination. No examples survive today.
Variants
All 20 of the BV 141Bs that were ordered were produced and delivered. A complete record of BV 141 production exists, each aircraft identified by either a German civil registration number or a pre-military, four-letter Stammkennzeichen factory radio code.
Prototypes
Ha 141-0 - D-ORJE; original designation of the first aircraft completed with the stepped cockpit nacelle. Became the BV 141 V2.
BV 141 V1 - WNr 141-00-0171; D-OTTO then BL+AU, damaged
BV 141 V2 - WNr 141-00-0172; D-ORJE then PC+BA; chronologically, the first one built and the only one known under old "Ha" designation as "Ha 141"
BV 141 V3 - WNr 141-00-0359; D-OLGA then BL+AA
Pre-series BV 141A-0
BV 141A-01
(V4); WNr 01010360; D-OLLE; damaged
BV 141A-02
(V5); WNr 01010361; BL+AB
BV 141A-03
(V6); WNr 01010362; BL+AC
BV 141A-04
(V7); WNr 01010363; BL+AD
BV 141A-05
(V8); WNr 01010364; BL+AE
Pre-series BV 141B-0
The first to have BMW 801 engine. About 2 m longer and 2 m wider than A-05.
B-01 (V9) - WNr 0210001; NC+QZ; first flown 9 January 1941, had severe structural problem
B-02 (V10) - WNr 0210002; NC+RA; first flown 1 June 1941
B-03 (V11) - WNr 0210003; NC+RB
B-04 (V12) - WNr 0210004; NC+RC
B-05 (V13) - WNr 0210005; NC+RD
B-06 (V14) - WNr 0210006; NC+RE
B-07 (V15) - WNr 0210007; NC+RF
B-08 (V16) - WNr 0210008; NC+RG
B-09 (V17) - WNr 0210009; NC+RH
B-10 (V18) - WNr 0210010; NC+RI
Series BV 141B-1
WNr 0210011; GK+GA
WNr 0210012; GK+GB
WNr 0210013; GK+GC
WNr 0210014; GK+GD
WNr 0210015; GK+GE
WNr 0210016; GK+GF
WNr 0210017; GK+GG
WNr 0210018; GK+GH
WNr 0210019; GL+AG; rebuilt D-OTTO
WNr 0210020; GL+AH; rebuilt D-OLLE
Specifications (BV 141B-02 [V10])
See also
References
Notes
Citations
Bibliography
Green, William. Warplanes of the Third Reich. London: Macdonald and Jane's, 1979, pp. 81–86.
Smith, J. Richard and Anthony Kay. German Aircraft of the Second World War. London: Putnam & Co, 1978, third impression, pp. 66–71.
Taylor, Michael. The World's Strangest Aircraft. London: Grange, 1999.
Wood, Anthony and Bill Gunston. Hitler's Luftwaffe: A Pictorial History and Technical Encyclopedia of Hitler's Air Power in World War II. London: Salamander, 1977, p. 135.
Further reading
External links
Blohm & Voss BV 141, VR Curassow.
Single-engined tractor aircraft
1930s German military reconnaissance aircraft
Asymmetrical aircraft
BV 141
Aircraft first flown in 1938 | Blohm & Voss BV 141 | [
"Physics"
] | 1,525 | [
"Asymmetrical aircraft",
"Symmetry",
"Asymmetry"
] |
590,687 | https://en.wikipedia.org/wiki/Cobblestone | Cobblestone is a natural building material based on cobble-sized stones, and is used for pavement roads, streets, and buildings. Setts, also called Belgian blocks, are often referred to as "cobbles", although a sett is distinct from a cobblestone by being quarried and shaped into a regular form, while cobblestones are naturally occurring forms less uniform in size.
It has been used across various cultures for millennia, particularly in Europe, and became especially prominent during the medieval and early modern periods. Today, cobblestone streets are often associated with historic preservation and are used in many cities to maintain the historical character of certain neighborhoods.
History as road surface
During the medieval period, cobblestone streets became common in many European towns and cities. Cobblestones were readily available, as they were often naturally occurring stones found in riverbeds and fields. Their rounded shape made them easy to lay, and their durability was well-suited to the needs of growing urban centers. Cobblestones are typically either set in sand or similar material, or are bound together with mortar. Paving with cobblestones allows a road to be heavily used all year long. It prevents the build-up of ruts often found in dirt roads. It has the additional advantage of immediately draining water, and not getting muddy in wet weather or dusty in dry weather. Shod horses are also able to get better traction on stone cobbles, pitches or setts than tarmac or asphalt. Cobblestones set in sand have the environmental advantage of being permeable paving, and of moving rather than cracking with movements in the ground. The fact that carriage wheels, horse hooves and even modern automobiles make a lot of noise when rolling over cobblestone paving might be thought a disadvantage, but it has the advantage of warning pedestrians of their approach. In England, the custom was to strew straw over the cobbles outside the house of a sick or dying person to dampen the sound. In rural areas, cobblestones were sometimes used to pave important roads, particularly those leading to and from major cities.
In England, it was commonplace since ancient times for flat stones with a flat narrow edge to be set on edge to provide an even paved surface. This was known as a 'pitched' surface and was common all over Britain, as it did not require rounded pebbles. Pitched surfaces predate the use of regularly-sized granite setts by more than a thousand years. Such pitched paving is quite distinct from that formed from rounded stones, although both forms are commonly referred to as 'cobbled' surfaces. Most surviving genuinely old 'cobbled' areas are in reality pitched surfaces. A cobbled area is known as a "causey", "cassay" or "cassie" in Scots (probably from causeway).
In the early modern period, cities like Paris, London, and Amsterdam began to pave their streets with cobblestones to manage the increased traffic from carts, carriages, and pedestrians.
Cobblestones were largely replaced by quarried granite setts (also known as Belgian block) in the nineteenth century. Cobblestoned and "setted" streets gradually gave way to macadam roads and later to tarmac, and finally to asphalt concrete at the beginning of the 20th century. However, cobblestones are often retained in historic areas, even for streets with modern vehicular traffic. Many older villages and cities in Europe are still paved with cobblestones or pitched.
Use today
With the advent of asphalt and concrete in the 20th century, the use of cobblestones declined. These newer materials were cheaper and easier to install, leading to the replacement of many cobblestone streets. However, cobblestone streets have been preserved in many historic districts around the world, valued for their historical significance and aesthetic charm. In recent decades, cobblestones have become a popular material for paving newly pedestrianised streets in Europe. In this case, the noisy nature of the surface is an advantage as pedestrians can hear approaching vehicles. The visual cues of the cobblestones also clarify that the area is more than just a normal street. The use of cobblestones/setts is also considered to be a more "upmarket" roadway solution, having been described as "unique and artistic" compared to the normal asphalt road environment.
In older U.S. cities such as Philadelphia, Boston, Pittsburgh, New York City, Chicago, San Francisco, New Castle, Portland (Maine), Baltimore, Charleston, and New Orleans, many of the older streets are paved in cobblestones and setts (mostly setts); however, many such streets have been paved over with asphalt, which can crack and erode away due to heavy traffic, thus revealing the original stone pavement.
In some places such as Saskatoon, Saskatchewan, Canada, as late as the 1990s some busy intersections still showed cobblestones through worn down sections of pavement. In Toronto streets using setts were used by streetcar routes and disappeared by the 1980s, but are still found in the Distillery District.
Many cities in Latin America, such as Buenos Aires, Argentina; Zacatecas and Guanajuato, in Mexico; Old San Juan, Puerto Rico; Vigan, Philippines; and Montevideo, Uruguay, are well known for their many cobblestone streets, which are still operational and in good condition. They are still maintained and repaired in the traditional manner, by placing and arranging granite stones by hand.
In the Czech Republic, there are old cobblestone paths with colored marbles and limestones. The design with three colors (red/limestone, black/limestone, white/marble) has a long tradition in Bohemia. The cubes of these old pavements are handmade.
Use in architecture
In the Finger Lakes Region of New York State, the retreat of the glaciers during the last ice age left numerous small, rounded cobblestones available for building. Pre-Civil War architecture in the region made heavy use of cobblestones for walls. Today, the fewer than 600 remaining cobblestone buildings are prized as historic locations, most of them private homes. Ninety percent of the cobblestone buildings in America can be found within a 75-mile radius of Rochester, New York. There is also a cluster of cobblestone buildings in the Town of Paris, Ontario. In addition to homes, cobblestones were used to build barns, stagecoach taverns, smokehouses, stores, churches, schools, factories, and cemetery markers.
The only public cobblestone building in the US is the Alexander Classical School, located in Alexander, New York.
Implications for disabled people
Cobblestone may not be accessible for disabled people, particularly wheelchair users. Wheelchair users and other disabled people may opt to avoid streets and sidewalks made with cobblestone. Some European cities, such as Breda in the Netherlands, have tried to preserve their historic aesthetic while also making cobblestone pavement more accessible for disabled people by slicing the cobblestone to be flat on the surface.
The United States Access Board does not specify which materials a sidewalk must be made of in order to be ADA compliant, but does state that "cobblestones can significantly impede wheelchair movement" and that sidewalks must not have surface variances of greater than one inch. Due to the accessibility challenges of cobblestone, the Federal Highway Administration recommends against the use of cobblestone and bricks in its accessibility guide for sidewalks and crosswalks.
See also
Calade, a harmonious, decorative arrangement of medium-sized pebbles, fixed to the ground
Flagstone
List of cobblestone buildings
List of cobblestone streets
Portuguese pavement
References
External links
The Cobblestone Society & Museum - Albion, New York
Building stone
Natural materials
Pavements
Stone (material) | Cobblestone | [
"Physics"
] | 1,577 | [
"Natural materials",
"Materials",
"Matter"
] |
590,787 | https://en.wikipedia.org/wiki/Henrietta%20Swan%20Leavitt | Henrietta Swan Leavitt (July 4, 1868 – December 12, 1921) was an American astronomer. Her discovery of how to effectively measure vast distances to remote galaxies led to a shift in the understanding of the scale and nature of the universe. A nomination of Leavitt for the Nobel Prize had to be halted because of her death.
A graduate of Radcliffe College, she worked at the Harvard College Observatory as a human computer, tasked with measuring photographic plates to catalog the positions and brightness of stars. This work led her to discover the relation between the luminosity and the period of Cepheid variables. Leavitt's discovery provided astronomers with the first standard candle with which to measure the distance to other galaxies.
Before Leavitt discovered the period-luminosity relationship for Cepheid variables (sometimes referred to as Leavitt's Law), the only techniques available to astronomers for measuring the distance to a star were based on stellar parallax. Such techniques can only be used for measuring distances out to several hundred light years. Leavitt's great insight was that while no one knew the distance to the Small Magellanic Cloud, all its stars must be roughly the same distance from Earth. Therefore, a relationship she discovered in it, between the period of certain variable stars (Cepheids) and their apparent brightness, reflected a relationship in their absolute brightness. Once calibrated by measuring the distance to a nearby star of the same type via parallax, her discovery became a measuring stick with vastly greater reach.
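For context, the parallax scale the text contrasts with Leavitt's method follows the standard relation (textbook form, not from Leavitt's paper)

\[ d = \frac{1}{p}, \]

with \(d\) in parsecs and the parallax angle \(p\) in arcseconds; even a good ground-based parallax of \(p = 0.01''\) corresponds to only \(d = 100\ \text{pc} \approx 326\) light-years, which is why the method gave out at a few hundred light-years.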
After Leavitt's death, Edwin Hubble found Cepheids in several nebulae, including the Andromeda Nebula, and, using Leavitt's Law, calculated that their distance was far too great to be part of the Milky Way and were separate galaxies in their own right. This settled astronomy's Great Debate over the size of the universe. Hubble later used Leavitt's Law, together with galactic redshifts, to establish that the universe is expanding (see Hubble's law).
Early life and education
Henrietta Swan Leavitt was born in Lancaster, Massachusetts, the daughter of Henrietta Swan Kendrick and Congregational church minister George Roswell Leavitt. She was a descendant of Deacon John Leavitt, an English Puritan tailor, who settled in the Massachusetts Bay Colony in the early seventeenth century. (In the early Massachusetts records the family name was spelled "Levett".) Henrietta Leavitt remained deeply religious and committed to her church throughout her life.
Leavitt attended Oberlin College for two years before transferring to Harvard University's Society for the Collegiate Instruction of Women (later Radcliffe College), where she received a bachelor's degree in 1892. At Oberlin and Harvard, Leavitt studied a broad curriculum that included Latin and classical Greek, fine arts, philosophy, analytic geometry, and calculus. It was not until her fourth year of college that Leavitt took a course in astronomy, in which she earned an A−.
Leavitt also began working as volunteer assistant, one of the "computers" at the Harvard College Observatory. In 1902, she was hired by the director of the observatory, Edward Charles Pickering, to measure and catalog the brightness of stars as they appeared in the observatory's photographic plate collection. (In the early 1900s, women were not allowed to operate telescopes, but the scientific data were on the photographic plates.)
In 1893, Leavitt obtained credits toward a graduate degree in astronomy for her work at the Harvard College Observatory, but due to chronic illness, she never completed that degree. In 1898, she became a member of the Harvard staff. Leavitt left the observatory to make two trips to Europe and completed a stint as an art assistant at Beloit College in Wisconsin. At this time, she contracted an illness that led to progressive hearing loss.
Astronomical career
Leavitt returned to the Harvard College Observatory in 1903. Because Leavitt was financially independent, Pickering initially did not have to pay her; later, she received a small hourly wage. She was reportedly "hard-working, serious-minded …, little given to frivolous pursuits and selflessly devoted to her family, her church, and her career." At the Harvard Observatory, Leavitt worked alongside Annie Jump Cannon, who also was deaf.
Pickering assigned Leavitt to study variable stars of the Small and Large Magellanic Clouds, as recorded on photographic plates taken with the Bruce Astrograph of the Boyden Station of the Harvard Observatory in Arequipa, Peru. She identified 1,777 variable stars. In 1908, Leavitt published the results of her studies in the Annals of the Astronomical Observatory of Harvard College, noting that the brighter variables had the longer period.
In a 1912 paper, Leavitt examined the relationship between the periods and the brightness of a sample of 25 of the Cepheid variables in the Small Magellanic Cloud. The paper was communicated and signed by Edward Pickering, but the first sentence indicates that it was "prepared by Miss Leavitt".
Leavitt then used the simplifying assumption that all of the Cepheids within the Small Magellanic Cloud were at approximately the same distance, so that their intrinsic brightness could be deduced from their apparent brightness as registered in the photographic plates, up to a scale factor, since the distance to the Magellanic Clouds were as yet unknown. She expressed the hope that parallaxes to some Cepheids would be measured. This soon occurred, allowing her period-luminosity scale to be calibrated. This reasoning allowed Leavitt to establish that the logarithm of the period is linearly related to the logarithm of the star's average intrinsic optical luminosity (the amount of power radiated by the star in the visible spectrum).
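In modern notation (a standard textbook formulation rather than Leavitt's original symbols), her result is a straight line in absolute magnitude against log-period,

\[ M = \alpha + \beta \log_{10} P, \]

where \(P\) is the pulsation period in days; the constants \(\alpha\) and \(\beta\) can be fixed only once the distance to at least one Cepheid is known independently, which is exactly the calibrating role the parallax measurements played.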
Leavitt found that Delta Cephei was the "standard candle" that had long been sought by astronomers. She found a similar five-day Cepheid variable in the Small Magellanic Cloud to be about one ten-thousandth as bright as the five-day Delta Cephei. Using the inverse-square law, she calculated that the Small Magellanic Cloud was 100 times as far away as Delta Cephei, thus discovering a way to calculate the distance to another galaxy.
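The distance step here uses nothing more than the inverse-square law. For two sources of equal intrinsic luminosity \(L\), the apparent flux is \(F = L/(4\pi d^2)\), so

\[ \frac{d_{\text{SMC}}}{d_{\delta\,\mathrm{Cep}}} = \sqrt{\frac{F_{\delta\,\mathrm{Cep}}}{F_{\text{SMC}}}} = \sqrt{10^{4}} = 100, \]

reproducing the factor of 100 quoted above for a brightness ratio of one ten-thousandth.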
Leavitt also developed, and continued to refine, the Harvard Standard for photographic measurements, a logarithmic scale that orders stars by brightness over 17 magnitudes. She initially analyzed 299 plates from 13 telescopes to construct her scale, which was accepted by the International Committee of Photographic Magnitudes in 1913.
In 1913, Leavitt discovered T Pyxidis, a recurrent nova in the constellation Pyxis, and one of the most frequent recurrent novae in the sky, with eruptions observed in 1890, 1902, 1920, 1944, 1967, and 2011.
Leavitt was a member of Phi Beta Kappa, the American Association of University Women, the American Astronomical and Astrophysical Society, the American Association for the Advancement of Science, and an honorary member of the American Association of Variable Star Observers. In 1921, when Harlow Shapley took over as director of the observatory, Leavitt was made head of stellar photometry. By the end of that year she had died from cancer.
Scientific impact
According to science writer Jeremy Bernstein, "variable stars had been of interest for years, but when she was studying those plates, I doubt Pickering thought she would make a significant discovery—one that would eventually change astronomy." The period–luminosity relationship for Cepheids, now known as "Leavitt's law", made the stars the first "standard candle" in astronomy, allowing scientists to compute the distances to stars too remote for stellar parallax observations to be useful. One year after Leavitt reported her results, Ejnar Hertzsprung determined the distance of several Cepheids in the Milky Way; with this calibration, the distance to any Cepheid could be determined accurately.
Cepheids were soon detected in other galaxies, such as Andromeda (notably by Edwin Hubble in 1923–24), and they became an important part of the evidence that "spiral nebulae" are independent galaxies located far outside of the Milky Way. Thus, Leavitt's discovery would forever change humanity's picture of the universe, as it prompted Harlow Shapley to move the Sun from the center of the galaxy in the "Great Debate" and Edwin Hubble to move the Milky Way galaxy from the center of the universe.
Leavitt's discovery of an accurate way to measure distances on an intergalactic scale paved the way for modern astronomy's understanding of the structure and scale of the universe. The accomplishments of Edwin Hubble, the American astronomer who established that the universe is expanding, were also made possible by Leavitt's groundbreaking research.
Hubble often said that Leavitt deserved the Nobel Prize for her work. Mathematician Gösta Mittag-Leffler, a member of the Swedish Academy of Sciences, tried to nominate her for that prize in 1925, only to learn that she had died of cancer three years earlier. (The Nobel Prize is not awarded posthumously.)
Cepheid variables allow astronomers to measure distances up to about 60 million light years. Even greater distances can now be measured by using the theoretical maximum mass of white dwarfs calculated by Subrahmanyan Chandrasekhar.
Illness and death
Leavitt's scientific work at Harvard was frequently interrupted by illness and family obligations. Her early death at the age of 53, from stomach cancer, was seen as a tragedy by her colleagues for reasons that went beyond her scientific achievements. Her colleague Solon I. Bailey wrote in his obituary for Leavitt that "she had the happy, joyful, faculty of appreciating all that was worthy and lovable in others, and was possessed of a nature so full of sunshine that, to her, all of life became beautiful and full of meaning."
She was buried in the Leavitt family plot at Cambridge Cemetery in Cambridge, Massachusetts. "Sitting at the top of a gentle hill", writes George Johnson in his biography of Leavitt, "the spot is marked by a tall hexagonal monument, on top of which sits a globe cradled on a draped marble pedestal. Her uncle Erasmus Darwin Leavitt and his family also are buried there, along with other Leavitts." A plaque memorializing Henrietta and her two siblings, Mira and Roswell, is mounted on one side of the monument. Nearby are the graves of Henry and William James.
Posthumous honors
The asteroid 5383 Leavitt and the crater Leavitt on the Moon are named after her to honor deaf men and women who have worked as astronomers.
One of the ASAS-SN telescopes, located in the McDonald Observatory in Texas, is named in her honor.
In popular culture
Anna Von Mertens designed a book-based work of art, Attention Is Discovery: The Life and Legacy of Henrietta Leavitt. The pages weave Von Mertens's artistic interpretations of Leavitt's work with photos and descriptions of the work of Leavitt and her fellow Harvard Computers.
George Johnson wrote a 2005 biography, Miss Leavitt's Stars, which showcases the triumphs of women's progress in science through the story of Leavitt.
Robert Burleigh wrote the 2013 biography Look Up!: Henrietta Leavitt, Pioneering Woman Astronomer for a younger audience. It is written for four- to eight-year-olds.
Lauren Gunderson wrote a 2015 play, Silent Sky, which followed Leavitt's journey from her acceptance at Harvard to her death.
Theo Strassell wrote a play, The Troubling Things We Do, an absurdist piece that details the life of Henrietta Leavitt, among other scientists from her era.
Dava Sobel's book The Glass Universe chronicles the work of the women analyzing images taken of the stars at the Harvard College Observatory.
The BBC included Leavitt in their Missed Genius series designed to celebrate individuals from diverse backgrounds who have had a profound effect on our world.
Central Square Theater commissioned a play, The Women Who Mapped The Stars, by Joyce Van Dyke, as part of the Brit D'Arbeloff Women in Science Production Series, staged by the Nora Theatre Company. The play features Leavitt's story, among others.
See also
List of female scientists
List of deaf people
Women in science
Timeline of women in science
Human computer
Cepheids
References
Further reading
Sources
External links
Women in Astronomy Bibliography from the Astronomical Society of the Pacific
Periods of 25 Variable Stars in the Small Magellanic Cloud. Edward C. Pickering, March 3, 1912; credits Leavitt.
Henrietta Swan Leavitt: a Star of the Brightest Magnitude ACS/Women Chemists Committee's biography with several links
Henrietta Swan Leavitt, Tim Hunter (astronomer), The Grasslands Observatory
Henrietta Swan Leavitt's genealogy
Henrietta Swan Leavitt – Lady of Luminosity from the Woman Astronomer website
1868 births
1921 deaths
Astrometry
American women astronomers
Cepheid variables
History of women in the United States
Harvard Computers
Harvard University staff
Radcliffe College alumni
People from Lancaster, Massachusetts
Leavitt family
American Congregationalists
Deaths from cancer in Massachusetts
19th-century American astronomers
20th-century American astronomers
19th-century American women scientists
20th-century American women scientists
American deaf people
Oberlin College alumni
American scientists with disabilities | Henrietta Swan Leavitt | [
"Astronomy"
] | 2,752 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
590,852 | https://en.wikipedia.org/wiki/Xen | Xen is a free and open-source type-1 hypervisor, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. It was originally developed by the University of Cambridge Computer Laboratory and is now being developed by the Linux Foundation with support from Intel, Citrix, Arm Ltd, Huawei, AWS, Alibaba Cloud, AMD, Bitdefender and EPAM Systems.
The Xen Project community develops and maintains Xen Project as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Xen Project is currently available for the IA-32, x86-64 and ARM instruction sets.
Software architecture
Xen Project runs in a more privileged CPU state than any other software on the machine, except for firmware.
Responsibilities of the hypervisor include memory management and CPU scheduling of all virtual machines ("domains"), and launching the most privileged domain ("dom0"), the only virtual machine which by default has direct access to hardware. From dom0 the hypervisor can be managed, and unprivileged domains ("domU") can be launched.
The dom0 domain is typically a version of Linux or BSD. User domains may either be traditional operating systems, such as Microsoft Windows under which privileged instructions are provided by hardware virtualization instructions (if the host processor supports x86 virtualization, e.g., Intel VT-x and AMD-V), or paravirtualized operating systems whereby the operating system is aware that it is running inside a virtual machine, and so makes hypercalls directly, rather than issuing privileged instructions.
Xen Project boots from a bootloader such as GNU GRUB, and then usually loads a paravirtualized host operating system into the host domain (dom0).
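As a concrete sketch of this arrangement (illustrative only: the guest name, file paths, and device names below are placeholders, and option spellings vary somewhat between Xen toolstack versions), a paravirtualized guest is typically described by a short configuration file kept in dom0:

    # guest1.cfg -- minimal PV guest definition (illustrative values)
    name    = "guest1"
    type    = "pv"                            # paravirtualized guest
    kernel  = "/boot/vmlinuz-guest"           # PV-capable kernel supplied from dom0
    ramdisk = "/boot/initrd-guest"
    memory  = 1024                            # MiB of guest RAM
    vcpus   = 2
    disk    = ['phy:/dev/vg0/guest1,xvda,w']  # dom0 block device exposed to the guest as xvda
    vif     = ['bridge=xenbr0']               # virtual NIC attached to a bridge in dom0

Running xl create guest1.cfg from dom0 asks the hypervisor to build and start the domain; xl list then shows the running domains, with dom0 always present.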
History
Xen originated as a research project at the University of Cambridge led by Ian Pratt, a senior lecturer in the Computer Laboratory, and his PhD student Keir Fraser. The first public release of Xen was made in 2003, with v1.0 following in 2004. Soon after, Pratt and Fraser along with other Cambridge alumni including Simon Crosby and founding CEO Nick Gault created XenSource Inc. to turn Xen into a competitive enterprise product.
To support embedded systems such as smartphones and IoT devices with relatively scarce hardware computing resources, the Secure Xen ARM architecture on an ARM CPU was exhibited at Xen Summit on April 17, 2007, held at IBM T. J. Watson. The first public release of the Secure Xen ARM source code was made at Xen Summit on June 24, 2008, by Sang-bum Suh, a Cambridge alumnus, at Samsung Electronics.
On October 22, 2007, Citrix Systems completed its acquisition of XenSource, and the Xen Project moved to the xen.org domain. This move had started some time previously, and made public the existence of the Xen Project Advisory Board (Xen AB), which had members from Citrix, IBM, Intel, Hewlett-Packard, Novell, Red Hat, Sun Microsystems and Oracle. The Xen Advisory Board advises the Xen Project leader and is responsible for the Xen trademark, which Citrix has freely licensed to all vendors and projects that implement the Xen hypervisor. Citrix also used the Xen brand itself for some proprietary products unrelated to Xen, including XenApp and XenDesktop.
On April 15, 2013, it was announced that the Xen Project was moved under the auspices of the Linux Foundation as a Collaborative Project. The Linux Foundation launched a new trademark for "Xen Project" to differentiate the project from any commercial use of the older "Xen" trademark. A new community website was launched at xenproject.org as part of the transfer. Project members at the time of the announcement included: Amazon, AMD, Bromium, CA Technologies, Calxeda, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon. The Xen project itself is self-governing.
Since version 3.0 of the Linux kernel, Xen support for dom0 and domU exists in the mainline kernel.
Release history
Uses
Internet hosting service companies use hypervisors to provide virtual private servers. Amazon EC2 (from August 2006 to November 2017), IBM SoftLayer, Liquid Web, Fujitsu Global Cloud Platform, Linode, OrionVM and Rackspace Cloud use Xen as the primary VM hypervisor for their product offerings.
Virtual machine monitors (also known as hypervisors) also often operate on mainframes and large servers running IBM, HP, and other systems.
Server virtualization can provide benefits such as:
Consolidation leading to increased utilization
Rapid provisioning
Dynamic fault tolerance against software failures (through rapid bootstrapping or rebooting)
Hardware fault tolerance (through migration of a virtual machine to different hardware)
Secure separations of virtual operating systems
Support for legacy software as well as new OS instances on the same computer
Xen's support for virtual machine live migration from one host to another allows load balancing and the avoidance of downtime.
Virtualization also has benefits when working on development (including the development of operating systems): running the new system as a guest avoids the need to reboot the physical computer whenever a bug occurs. Sandboxed guest systems can also help in computer-security research, allowing study of the effects of some virus or worm without the possibility of compromising the host system.
Finally, hardware appliance vendors may decide to ship their appliance running several guest systems, so as to be able to execute various pieces of software that require different operating systems.
Types of virtualization
Xen offers five approaches to running the guest operating system:
PV (paravirtualization): Virtualization-aware Guest and devices.
HVM (hardware virtual machine): Fully hardware-assisted virtualization with emulated devices.
HVM with PV drivers: Fully hardware-assisted virtualization with PV drivers for IO devices.
PVHVM (paravirtualization with hardware virtualization): PV supported hardware-assisted virtualization with PV drivers for IO devices.
PVH (PV in an HVM container): Fully paravirtualized Guest accelerated by hardware-assisted virtualization where available.
Xen provides a form of virtualization known as paravirtualization, in which guests run a modified operating system. The guests are modified to use a special hypercall ABI, instead of certain architectural features. Through paravirtualization, Xen can achieve high performance even on its host architecture (x86), which has a reputation for non-cooperation with traditional virtualization techniques. Xen can run paravirtualized guests ("PV guests" in Xen terminology) even on CPUs without any explicit support for virtualization. Paravirtualization avoids the need to emulate a full set of hardware and firmware services, which makes a PV system simpler to manage and reduces the attack surface exposed to potentially malicious guests. On 32-bit x86, the Xen host kernel code runs in Ring 0, while the hosted domains run in Ring 1 (kernel) and Ring 3 (applications).
CPUs that support virtualization make it possible to run unmodified guests, including proprietary operating systems (such as Microsoft Windows). This is known as hardware-assisted virtualization; in Xen, however, it is known as hardware virtual machine (HVM). HVM extensions provide additional execution modes, with an explicit distinction between the most-privileged modes used by the hypervisor with access to the real hardware (called "root mode" in x86) and the less-privileged modes used by guest kernels and applications with "hardware" accesses under complete control of the hypervisor (in x86, known as "non-root mode"; both root and non-root mode have Rings 0–3). Both Intel and AMD have contributed modifications to Xen to exploit their respective Intel VT-x and AMD-V architecture extensions. Use of ARM v7A and v8A virtualization extensions came with Xen 4.3.

HVM extensions also often offer new instructions to allow direct calls by a paravirtualized guest/driver into the hypervisor, typically used for I/O or other operations needing high performance. These allow HVM guests with suitable minor modifications to gain many of the performance benefits of paravirtualized I/O. In current versions of Xen (up to 4.2) only fully virtualized HVM guests can make use of hardware facilities for multiple independent levels of memory protection and paging. As a result, for some workloads, HVM guests with PV drivers (also known as PV-on-HVM) provide better performance than pure PV guests.

Xen HVM has device emulation based on the QEMU project to provide I/O virtualization to the virtual machines. The system emulates hardware via a patched QEMU "device manager" (qemu-dm) daemon running as a backend in dom0. This means that the virtualized machines see an emulated version of a fairly basic PC. In a performance-critical environment, PV-on-HVM disk and network drivers are used during normal guest operation, so that the emulated PC hardware is mostly used for booting.
Features
Administrators can "live migrate" Xen virtual machines between physical hosts across a LAN without loss of availability. During this procedure, the LAN iteratively copies the memory of the virtual machine to the destination without stopping its execution. The process requires a stoppage of around 60–300 ms to perform final synchronization before the virtual machine begins executing at its final destination, providing an illusion of seamless migration. Similar technology can serve to suspend running virtual machines to disk, "freezing" their running state for resumption at a later date.
Xen can scale to 4095 physical CPUs, 256 VCPUs per HVM guest, 512 VCPUs per PV guest, 16 TB of RAM per host, and up to 1 TB of RAM per HVM guest or 512 GB of RAM per PV guest.
Availability
The Xen hypervisor has been ported to a number of processor families:
Intel: IA-32, IA-64 (before version 4.2), x86-64
PowerPC: previously supported under the XenPPC project, no longer active after Xen 3.2
ARM: previously supported under the XenARM project for older versions of ARM without virtualization extensions, such as the Cortex-A9. Currently supported since Xen 4.3 for newer versions of the ARM with virtualization extensions, such as the Cortex-A15.
MIPS: XLP832 experimental port
Hosts
Xen can be shipped in a dedicated virtualization platform, such as XCP-ng or XenServer (formerly Citrix Hypervisor, and before that Citrix XenServer, and before that XenSource's XenEnterprise).
Alternatively, Xen is distributed as an optional configuration of many standard operating systems. Xen is available for and distributed with:
Alpine Linux offers a minimal dom0 system (Busybox, UClibc) that can be run from removable media, like USB sticks.
Arch Linux provides the necessary packages with detailed setup instructions on their Wiki.
Debian Linux (since version 4.0 "etch") and many of its derivatives;
FreeBSD 11 includes experimental host support.
Gentoo has the necessary packages available to support Xen, along with instructions on their Wiki.
Mageia (since version 4);
NetBSD can function as domU and dom0.
OpenSolaris-based distributions can function as dom0 and domU from Nevada build 75 onwards.
openSUSE 10.x to 12.x: only 64-bit hosts are supported since 12.1;
Qubes OS uses Xen to isolate applications for a more secure desktop.
SUSE Linux Enterprise Server (since version 10);
Solaris (since 2013 with Oracle VM Server for x86, before with Sun xVM);
Ubuntu (since 8.04 "Hardy Heron", but no dom0-capable kernel in 8.10 "Intrepid Ibex" until 12.04 "Precise Pangolin".)
Guests
Guest systems can run fully virtualized (which requires hardware support), paravirtualized (which requires a modified guest operating system), or fully virtualized with paravirtualized drivers (PVHVM). Most operating systems which can run on PCs can run as a Xen HVM guest. The following systems can operate as paravirtualized Xen guests:
Linux
FreeBSD in 32-bit, or 64-bit through PVHVM;
OpenBSD, through PVHVM;
NetBSD
MINIX
GNU Hurd (gnumach-1-branch-Xen-branch)
Plan 9 from Bell Labs
Xen version 3.0 introduced the capability to run Microsoft Windows as a guest operating system unmodified if the host machine's processor supports hardware virtualization provided by Intel VT-x (formerly codenamed Vanderpool) or AMD-V (formerly codenamed Pacifica). During the development of Xen 1.x, Microsoft Research, along with the University of Cambridge Operating System group, developed a port of Windows XP to Xen — made possible by Microsoft's Academic Licensing Program. The terms of this license do not allow the publication of this port, although documentation of the experience appears in the original Xen SOSP paper. James Harper and the Xen open-source community have started developing free software paravirtualization drivers for Windows. These provide front-end drivers for the Xen block and network devices and allow much higher disk and network performance for Windows systems running in HVM mode. Without these drivers all disk and network traffic has to be processed through QEMU-DM. Subsequently, Citrix has released under a BSD license (and continues to maintain) PV drivers for Windows.
Management
Third-party developers have built a number of tools (known as Xen Management Consoles) to facilitate the common tasks of administering a Xen host, such as configuring, starting, monitoring and stopping of Xen guests. Examples include the following (a minimal scripted sketch appears after the list):
The OpenNebula cloud management toolkit
On openSUSE, YaST and virt-manager offer graphical VM management
OpenStack natively supports Xen as a Hypervisor/Compute target
Apache CloudStack also supports Xen as a Hypervisor
Novell's PlateSpin Orchestrate also manages Xen virtual machines for Xen shipping in SUSE Linux Enterprise Server.
Xen Orchestra for both XCP-ng and Citrix Hypervisor platforms
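For comparison with the consoles above, the following is a minimal sketch of the same kind of administration done from a script, using the libvirt Python bindings (one widely used way to talk to a Xen host; the connection URI is an assumption for the example, and libvirt-python must be installed).

```python
import libvirt

# Connect to the local Xen host through libvirt's Xen (libxl) driver.
conn = libvirt.open("xen:///system")
try:
    for dom in conn.listAllDomains():
        # info() returns [state, maxMemKiB, memKiB, vcpus, cpuTimeNs].
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name():<20} active={bool(dom.isActive())} "
              f"vcpus={vcpus} mem={mem // 1024} MiB")
finally:
    conn.close()
```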
Commercial versions
XCP-ng (Open Source, within the Linux Foundation and Xen Project, originally a fork of XenServer)
XenServer (Formerly Citrix Hypervisor until 2023 and formerly Citrix XenServer until 2019)
Huawei FusionSphere
Oracle VM Server for x86
Thinsy Corporation
Virtual Iron (discontinued by Oracle)
Crucible (hypervisor) by Star Lab Corp.
The Xen hypervisor is covered by the GNU General Public License, so all of these versions contain a core of free software with source code. However, many of them contain proprietary additions.
See also
CloudStack
Kernel-based Virtual Machine (KVM)
OpenStack
Virtual disk image
tboot, a TXT-based integrity system for the Linux kernel and Xen hypervisor
VMware ESXi
Qubes OS
References
Further reading
Paul Venezia (April 13, 2011) Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware. The leading server virtualization contenders tackle InfoWorld's ultimate virtualization challenge, InfoWorld
External links
2003 software
Citrix Systems
Cross-platform free software
Free virtualization software
History of computing in the United Kingdom
Linux Foundation projects
University of Cambridge Computer Laboratory
Virtualization software for Linux | Xen | [
"Technology"
] | 3,354 | [
"History of computing",
"History of computing in the United Kingdom"
] |
590,920 | https://en.wikipedia.org/wiki/Hemodialysis | Hemodialysis, also spelled haemodialysis, or simply dialysis, is a process of filtering the blood of a person whose kidneys are not working normally. This type of dialysis achieves the extracorporeal removal of waste products such as creatinine and urea and free water from the blood when the kidneys are in a state of kidney failure. Hemodialysis is one of three renal replacement therapies (the other two being kidney transplant and peritoneal dialysis). An alternative method for extracorporeal separation of blood components such as plasma or cells is apheresis.
Hemodialysis can be an outpatient or inpatient therapy. Routine hemodialysis is conducted in a dialysis outpatient facility, either a purpose-built room in a hospital or a dedicated, stand-alone clinic. Less frequently hemodialysis is done at home. Dialysis treatments in a clinic are initiated and managed by specialized staff made up of nurses and technicians; dialysis treatments at home can be self-initiated and managed or done jointly with the assistance of a trained helper who is usually a family member.
Medical uses
Hemodialysis is the choice of renal replacement therapy for patients who need dialysis acutely, and for many patients as maintenance therapy. It provides excellent, rapid clearance of solutes.
A nephrologist (a medical kidney specialist) decides when hemodialysis is needed and the various parameters for a dialysis treatment. These include frequency (how many treatments per week), length of each treatment, and the blood and dialysis solution flow rates, as well as the size of the dialyzer. The composition of the dialysis solution is also sometimes adjusted in terms of its sodium, potassium, and bicarbonate levels. In general, the larger the body size of an individual, the more dialysis they will need. In North America and the UK, 3–4 hour treatments (sometimes up to 5 hours for larger patients) given 3 times a week are typical. Twice-a-week sessions are limited to patients who have a substantial residual kidney function. Four sessions per week are often prescribed for larger patients, as well as patients who have trouble with fluid overload. Finally, there is growing interest in short daily home hemodialysis, which involves 1.5–4 hour sessions given 5–7 times per week, usually at home. There is also interest in nocturnal dialysis, which involves dialyzing a patient, usually at home, for 8–10 hours per night, 3–6 nights per week. Nocturnal in-center dialysis, 3–4 times per week, is also offered at a handful of dialysis units in the United States.
Adverse effects
Disadvantages
Restricts independence, as people undergoing this procedure must stay close to their supplies and equipment, making travel difficult
Requires more supplies such as high water quality and electricity
Requires reliable technology like dialysis machines
The procedure is complicated and requires that caregivers have more knowledge
Requires time to set up and clean dialysis machines, and involves expense for the machines and associated staff
Complications
Fluid shifts
Hemodialysis often involves fluid removal (through ultrafiltration), because most patients with renal failure pass little or no urine. Side effects caused by removing too much fluid and/or removing fluid too rapidly include low blood pressure, fatigue, chest pains, leg-cramps, nausea and headaches. These symptoms can occur during the treatment and can persist post treatment; they are sometimes collectively referred to as the dialysis hangover or dialysis washout. The severity of these symptoms is usually proportionate to the amount and speed of fluid removal. However, the impact of a given amount or rate of fluid removal can vary greatly from person to person and day to day. These side effects can be avoided and/or their severity lessened by limiting fluid intake between treatments or increasing the dose of dialysis e.g. dialyzing more often or longer per treatment than the standard three times a week, 3–4 hours per treatment schedule.
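As a back-of-the-envelope illustration of why both the amount and the speed of removal matter, fluid removal can be expressed as a rate normalized to body weight. The sketch below is a minimal calculation only, not a clinical tool; the example volume, weight and session lengths are assumptions.

```python
def ultrafiltration_rate(fluid_removed_ml, weight_kg, session_hours):
    """Fluid-removal rate normalized to body weight, in mL/kg/h."""
    return fluid_removed_ml / (weight_kg * session_hours)

# Removing the same 3 L from a 70 kg patient is considerably gentler over a
# full-length session than over a shortened one:
print(round(ultrafiltration_rate(3000, 70, 4.0), 1))  # 10.7 mL/kg/h
print(round(ultrafiltration_rate(3000, 70, 2.5), 1))  # 17.1 mL/kg/h
```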
Access-related
Since hemodialysis requires access to the circulatory system, patients undergoing hemodialysis may expose their circulatory system to microbes, which can lead to bacteremia, an infection affecting the heart valves (endocarditis) or an infection affecting the bones (osteomyelitis). The risk of infection varies depending on the type of access used (see below). Bleeding may also occur, again the risk varies depending on the type of access used. Infections can be minimized by strictly adhering to infection control best practices.
Venous needle dislodgement
Venous needle dislodgement (VND) is a fatal complication of hemodialysis where the patient experiences rapid blood loss due to a faltering attachment of the needle to the venous access point.
Anticoagulation-related
Unfractionated heparin (UFH) is the most commonly used anticoagulant in hemodialysis, as it is generally well tolerated and can be quickly reversed with protamine sulfate. Low-molecular-weight heparin (LMWH) is, however, becoming increasingly popular and is now the norm in western Europe. Compared to UFH, LMWH has the advantage of an easier mode of administration and reduced bleeding, but the effect cannot be easily reversed. Heparin can infrequently cause a low platelet count due to a reaction called heparin-induced thrombocytopenia (HIT). The risk of HIT is lower with LMWH than with UFH. In such patients, alternative anticoagulants may be used. Even though HIT causes a low platelet count, it can paradoxically predispose to thrombosis. When comparing UFH to LMWH for the risk of adverse effects, the evidence is uncertain as to which anticoagulation approach has the fewest side effects and what the ideal strategy is for preventing blood clots during hemodialysis. In patients at high risk of bleeding, dialysis can be done without anticoagulation.
First-use syndrome
First-use syndrome is a rare but severe anaphylactic reaction to the artificial kidney. Its symptoms include sneezing, wheezing, shortness of breath, back pain, chest pain, or sudden death. It can be caused by residual sterilant in the artificial kidney or the material of the membrane itself. In recent years, the incidence of first-use syndrome has decreased, due to an increased use of gamma irradiation, steam sterilization, or electron-beam radiation instead of chemical sterilants, and the development of new semipermeable membranes of higher biocompatibility. New methods of processing previously acceptable components of dialysis must always be considered. For example, in 2008, a series of first-use type of reactions, including deaths, occurred due to heparin contaminated during the manufacturing process with oversulfated chondroitin sulfate.
Cardiovascular
Long term complications of hemodialysis include hemodialysis-associated amyloidosis, neuropathy and various forms of heart disease. Increasing the frequency and length of treatments has been shown to improve fluid overload and enlargement of the heart that is commonly seen in such patients.
Vitamin deficiency
Folate deficiency can occur in some patients having hemodialysis.
Electrolyte imbalances
Although a dialysate fluid, which is a solution containing diluted electrolytes, is employed for the filtration of blood, haemodialysis can cause an electrolyte imbalance. These imbalances can derive from abnormal concentrations of potassium (hypokalemia, hyperkalemia), and sodium (hyponatremia, hypernatremia). These electrolyte imbalances are associated with increased cardiovascular mortality.
Mechanism and technique
The principle of hemodialysis is the same as other methods of dialysis; it involves diffusion of solutes across a semipermeable membrane. Hemodialysis utilizes counter current flow, where the dialysate is flowing in the opposite direction to blood flow in the extracorporeal circuit. Counter-current flow maintains the concentration gradient across the membrane at a maximum and increases the efficiency of the dialysis.
Fluid removal (ultrafiltration) is achieved by altering the hydrostatic pressure of the dialysate compartment, causing free water and some dissolved solutes to move across the membrane along a created pressure gradient.
The dialysis solution that is used may be a sterilized solution of mineral ions and is called dialysate. Urea and other waste products, including potassium and phosphate, diffuse into the dialysis solution. However, concentrations of sodium and chloride are similar to those of normal plasma to prevent loss. Sodium bicarbonate is added in a higher concentration than plasma to correct blood acidity. A small amount of glucose is also commonly used. The concentration of electrolytes in the dialysate is adjusted depending on the patient's status before the dialysis. If a high concentration of sodium is added to the dialysate, the patient can become thirsty and end up accumulating body fluids, which can lead to heart damage. Conversely, low concentrations of sodium in the dialysate solution have been associated with lower blood pressure and lower interdialytic weight gain, which are markers of improved outcomes. However, the benefits of using a low concentration of sodium have not yet been demonstrated, since these patients can also develop cramps, intradialytic hypotension and low serum sodium, symptoms associated with a high mortality risk.
Note that this is a different process from the related technique of hemofiltration.
Access
Three primary methods are used to gain access to the blood for hemodialysis: an intravenous catheter, an arteriovenous fistula (AV) and a synthetic graft. The type of access is influenced by factors such as the expected time course of a patient's renal failure and the condition of their vasculature. Patients may have multiple access procedures, usually because an AV fistula or graft is maturing and a catheter is still being used. The placement of a catheter is usually done under light sedation, while fistulas and grafts require an operation.
Types
There are three types of hemodialysis: conventional hemodialysis, daily hemodialysis, and nocturnal hemodialysis. Below is an adaptation and summary from a brochure of The Ottawa Hospital.
Conventional hemodialysis
Conventional hemodialysis is usually done three times per week, for about three to four hours for each treatment (sometimes five hours for larger patients), during which the patient's blood is drawn out through a tube at a rate of 200–400 mL/min. The tube is connected to a 15, 16, or 17 gauge needle inserted in the dialysis fistula or graft, or connected to one port of a dialysis catheter. The blood is then pumped through the dialyzer, and then the processed blood is pumped back into the patient's bloodstream through another tube (connected to a second needle or port). During the procedure, the patient's blood pressure is closely monitored, and if it becomes low, or the patient develops any other signs of low blood volume such as nausea, the dialysis attendant can administer extra fluid through the machine. During the treatment, the patient's entire blood volume (about 5 L) circulates through the machine every 15 minutes. During this process, the dialysis patient is exposed to a week's worth of water for the average person.
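The every-15-minutes figure follows directly from the numbers above; a quick check using the article's values (about 5 L of blood, draw rates of 200–400 mL/min):

```python
def minutes_per_full_pass(pump_rate_ml_min, blood_volume_ml=5000):
    """Time for the entire blood volume to pass through the circuit once."""
    return blood_volume_ml / pump_rate_ml_min

for rate in (200, 300, 400):
    print(f"{rate} mL/min -> {minutes_per_full_pass(rate):.1f} min per pass")
# 200 -> 25.0 min, 300 -> 16.7 min, 400 -> 12.5 min
```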
Daily hemodialysis
Daily hemodialysis is typically used by those patients who do their own dialysis at home. It is less stressful (more gentle) but does require more frequent access. This is simple with catheters, but more problematic with fistulas or grafts. The "buttonhole technique" can be used for fistulas requiring frequent access, but not for grafts. Daily hemodialysis is usually done for 2 hours six days a week.
Nocturnal hemodialysis
The procedure of nocturnal hemodialysis is similar to conventional hemodialysis except it is performed three to six nights a week and between six and ten hours per session while the patient sleeps.
Equipment
The hemodialysis machine pumps the patient's blood and the dialysate through the dialyzer. The newest dialysis machines on the market are highly computerized and continuously monitor an array of safety-critical parameters, including blood (QB) and dialysate (QD) flow rates; dialysis solution conductivity, temperature, and pH; and analysis of the dialysate for evidence of blood leakage or presence of air. Any reading that is out of normal range triggers an audible alarm to alert the patient-care technician who is monitoring the patient. Manufacturers of dialysis machines include companies such as Nipro, Fresenius, Gambro, Baxter, B. Braun, NxStage and Bellco. QB and QD are typically kept at a 1:2 ratio, with QB set around 250 mL/min and QD around 500 mL/min, to ensure good dialysis efficiency.
Water system
An extensive water purification system is critical for hemodialysis. Since dialysis patients are exposed to vast quantities of water, which is mixed with dialysate concentrate to form the dialysate, even trace mineral contaminants or bacterial endotoxins can filter into the patient's blood. Because the damaged kidneys cannot perform their intended function of removing impurities, molecules introduced into the bloodstream from improperly purified water can build up to hazardous levels, causing numerous symptoms or death. Aluminum, chlorine and or chloramines, fluoride, copper, and zinc, as well as bacterial fragments and endotoxins, have all caused problems in this regard.
For this reason, water used in hemodialysis is carefully purified before use, typically through a multi-stage purification system.
The water is first softened. Next the water is run through a tank containing activated charcoal to adsorb organic contaminants, and chlorine and chloramines. The water may then be temperature-adjusted if needed. Primary purification is then done by forcing water through a membrane with very tiny pores, a so-called reverse osmosis membrane. This lets the water pass, but holds back even very small solutes such as electrolytes. Final removal of leftover electrolytes is done in some water systems by passing the water through an electrodeionization (EDI) device, which removes any leftover anions or cations and replaces them with hydroxyl and hydrogen ions, respectively, leaving ultrapure water.
Even this degree of water purification may be insufficient. The trend lately is to pass this final purified water (after mixing with dialysate concentrate) through an ultrafiltration membrane or absolute filter. This provides another layer of protection by removing impurities, especially those of bacterial origin, that may have accumulated in the water after its passage through the original water purification system.
Dialysate
Purified water is mixed with dialysate concentrate (also called dialysis fluid concentrate), consisting of sodium, potassium, calcium, magnesium and dextrose in an acid solution, together with a chemical buffer. This forms the dialysate solution, which contains the basic electrolytes found in human blood. The dialysate solution contains charged ions that conduct electricity. During dialysis, the conductivity of the dialysis solution is continuously monitored to ensure that the water and dialysate concentrate are being mixed in the proper proportions. Both excessively concentrated dialysis solution and excessively dilute solution can cause severe clinical problems.
Chemical buffers such as bicarbonate or lactate can alternatively be added to regulate the pH of the dialysate. Both buffers can stabilize the pH of the solution at a physiological level with no negative impacts on the patient. There is some evidence of a reduction in the incidence of heart and blood problems and high blood pressure events when using bicarbonate as the pH buffer compared to lactate. However, the mortality rates after using both buffers do not show a significant difference.
Dialyzer
The dialyzer is the piece of equipment that filters the blood. Almost all dialyzers in use today are of the hollow-fiber variety. A cylindrical bundle of hollow fibers, whose walls are composed of semi-permeable membrane, is anchored at each end into potting compound (a sort of glue). This assembly is then put into a clear plastic cylindrical shell with four openings. One opening or blood port at each end of the cylinder communicates with each end of the bundle of hollow fibers. This forms the "blood compartment" of the dialyzer. Two other ports are cut into the side of the cylinder. These communicate with the space around the hollow fibers, the "dialysate compartment." Blood is pumped via the blood ports through this bundle of very thin capillary-like tubes, and the dialysate is pumped through the space surrounding the fibers. Pressure gradients are applied when necessary to move fluid from the blood to the dialysate compartment.
Membrane and flux
Dialyzer membranes come with different pore sizes. Those with smaller pore size are called "low-flux" and those with larger pore sizes are called "high-flux." Some larger molecules, such as beta-2-microglobulin, are not removed at all with low-flux dialyzers; lately, the trend has been to use high-flux dialyzers. However, such dialyzers require newer dialysis machines and high-quality dialysis solution to control the rate of fluid removal properly and to prevent backflow of dialysis solution impurities into the patient through the membrane.
Dialyzer membranes used to be made primarily of cellulose (derived from cotton linter). The surface of such membranes was not very biocompatible, because exposed hydroxyl groups would activate complement in the blood passing by the membrane. Therefore, the basic, "unsubstituted" cellulose membrane was modified. One change was to cover these hydroxyl groups with acetate groups (cellulose acetate); another was to mix in some compounds that would inhibit complement activation at the membrane surface (modified cellulose). The original "unsubstituted cellulose" membranes are no longer in wide use, whereas cellulose acetate and modified cellulose dialyzers are still used. Cellulosic membranes can be made in either low-flux or high-flux configuration, depending on their pore size.
Another group of membranes is made from synthetic materials, using polymers such as polyarylethersulfone, polyamide, polyvinylpyrrolidone, polycarbonate, and polyacrylonitrile. These synthetic membranes activate complement to a lesser degree than unsubstituted cellulose membranes. However, they are in general more hydrophobic which leads to increased adsorption of proteins to the membrane surface which in turn can lead to complement system activation. Synthetic membranes can be made in either low- or high-flux configuration, but most are high-flux.
Nanotechnology is being used in some of the most recent high-flux membranes to create a uniform pore size. The goal of high-flux membranes is to pass relatively large molecules such as beta-2-microglobulin (MW 11,600 daltons), but not to pass albumin (MW ~66,400 daltons). Every membrane has pores in a range of sizes. As pore size increases, some high-flux dialyzers begin to let albumin pass out of the blood into the dialysate. This is thought to be undesirable, although one school of thought holds that removing some albumin may be beneficial in terms of removing protein-bound uremic toxins.
Membrane flux and outcome
Whether using a high-flux dialyzer improves patient outcomes is somewhat controversial, but several important studies have suggested that it has clinical benefits. The NIH-funded HEMO trial compared survival and hospitalizations in patients randomized to dialysis with either low-flux or high-flux membranes. Although the primary outcome (all-cause mortality) did not reach statistical significance in the group randomized to use high-flux membranes, several secondary outcomes were better in the high-flux group. A recent Cochrane analysis concluded that benefit of membrane choice on outcomes has not yet been demonstrated. A collaborative randomized trial from Europe, the MPO (Membrane Permeabilities Outcomes) study, comparing mortality in patients just starting dialysis using either high-flux or low-flux membranes, found a nonsignificant trend to improved survival in those using high-flux membranes, and a survival benefit in patients with lower serum albumin levels or in diabetics.
Membrane flux and beta-2-microglobulin amyloidosis
High-flux dialysis membranes and/or intermittent internal on-line hemodiafiltration (iHDF) may also be beneficial in reducing complications of beta-2-microglobulin accumulation. Because beta-2-microglobulin is a large molecule, with a molecular weight of about 11,600 daltons, it does not pass at all through low-flux dialysis membranes. Beta-2-M is removed with high-flux dialysis, but is removed even more efficiently with iHDF. After several years (usually at least 5–7), patients on hemodialysis begin to develop complications from beta-2-M accumulation, including carpal tunnel syndrome, bone cysts, and deposits of this amyloid in joints and other tissues. Beta-2-M amyloidosis can cause very serious complications, including spondyloarthropathy, and often is associated with shoulder joint problems. Observational studies from Europe and Japan have suggested that using high-flux membranes in dialysis mode, or iHDF, reduces beta-2-M complications in comparison to regular dialysis using a low-flux membrane. (KDOQI Clinical Practice Guidelines for Hemodialysis Adequacy, 2006 Updates. CPR 5.)
Dialyzers and efficiency
Dialyzers come in many different sizes. A larger dialyzer with a larger membrane area (A) will usually remove more solutes than a smaller dialyzer, especially at high blood flow rates. This also depends on the membrane permeability coefficient K0 for the solute in question. So dialyzer efficiency is usually expressed as the K0A – the product of permeability coefficient and area. Most dialyzers have membrane surface areas of 0.8 to 2.2 square meters, and values of K0A ranging from about 500 to 1500 mL/min. K0A, expressed in mL/min, can be thought of as the maximum clearance of a dialyzer at very high blood and dialysate flow rates.
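The article defines K0A only as the limiting clearance at very high flow rates. To show how a given K0A translates into delivered clearance at realistic flows, the sketch below assumes the classical counter-current exchanger relation (a standard engineering model, but an assumption here, not something stated in this article).

```python
import math

def dialyzer_clearance(k0a, qb, qd):
    """Estimated solute clearance (mL/min) of a counter-current dialyzer.

    Assumed model: with N = K0A/Qb, R = Qb/Qd and Z = exp(N * (1 - R)),
    K = Qb * (Z - 1) / (Z - R). As Qb and Qd grow large, K approaches
    K0A, consistent with the description of K0A above.
    """
    n = k0a / qb
    if math.isclose(qb, qd):
        return qb * n / (1 + n)  # limiting form when Qb == Qd
    r = qb / qd
    z = math.exp(n * (1 - r))
    return qb * (z - 1) / (z - r)

# A mid-range dialyzer (K0A = 1000 mL/min) at typical flows:
print(round(dialyzer_clearance(1000, qb=300, qd=500), 1))  # ~262 mL/min
```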
Reuse of dialyzers
The dialyzer may either be discarded after each treatment or be reused. Reuse requires an extensive procedure of high-level disinfection. Reused dialyzers are not shared between patients. There was an initial controversy about whether reusing dialyzers worsened patient outcomes. The consensus today is that reuse of dialyzers, if done carefully and properly, produces similar outcomes to single use of dialyzers.
Dialyzer reuse is a practice that has been around since the invention of the product. The practice involves cleaning a used dialyzer so that it can be reused multiple times for the same patient. Dialysis clinics reuse dialyzers to become more economical and reduce the high costs of "single-use" dialysis, which can be extremely expensive and wasteful. Single-use dialyzers are used just once and then thrown out, creating a large amount of bio-medical waste and forgoing any cost savings. If done right, dialyzer reuse can be very safe for dialysis patients.
There are two ways of reusing dialyzers, manual and automated. Manual reuse involves cleaning a dialyzer by hand. The dialyzer is semi-disassembled, then flushed repeatedly before being rinsed with water. It is then stored with a liquid disinfectant (PAA) for 18+ hours until its next use. Although many clinics outside the USA use this method, some clinics are switching toward a more automated/streamlined process as the dialysis practice advances. The newer method of automated reuse is achieved by means of medical devices introduced in the early 1980s. These devices are beneficial to dialysis clinics that practice reuse – especially for large dialysis clinical entities – because they allow for several back-to-back cycles per day. The dialyzer is first pre-cleaned by a technician, then automatically cleaned by machine through a step-cycle process until it is eventually filled with liquid disinfectant for storage. Although automated reuse is more effective than manual reuse, newer technology has sparked even more advancement in the process of reuse. When reused over 15 times with current methodology, the dialyzer can lose B2m and middle-molecule clearance along with fiber pore structure integrity, which has the potential to reduce the effectiveness of the patient's dialysis session. Currently, as of 2010, newer, more advanced reprocessing technology has demonstrated the ability to eliminate the manual pre-cleaning process altogether and the potential to regenerate (fully restore) all functions of a dialyzer to levels that are approximately equivalent to single-use for more than 40 cycles. As medical reimbursement rates begin to fall even more, many dialysis clinics are continuing to operate effectively with reuse programs, especially since the process is easier and more streamlined than before.
Epidemiology
Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population). This was an increase of 68 percent from 1997, when there were 473,000 stays. It was the fifth most common procedure for patients aged 45–64 years.
History
Many have played a role in developing dialysis as a practical treatment for renal failure, starting with Thomas Graham of Glasgow, who first presented the principles of solute transport across a semipermeable membrane in 1854. The artificial kidney was first developed by Abel, Rountree, and Turner in 1913, the first hemodialysis in a human being was by Haas (February 28, 1924) and the artificial kidney was developed into a clinically useful apparatus by Kolff in 1943 to 1945. This research showed that life could be prolonged in patients dying of kidney failure.
Willem Kolff was the first to construct a working dialyzer in 1943. The first successfully treated patient was a 67-year-old woman in uremic coma who regained consciousness after 11 hours of hemodialysis with Kolff's dialyzer in 1945. At the time of its creation, Kolff's goal was to provide life support during recovery from acute renal failure. After World War II ended, Kolff donated the five dialyzers he had made to hospitals around the world, including Mount Sinai Hospital, New York. Kolff gave a set of blueprints for his hemodialysis machine to George Thorn at the Peter Bent Brigham Hospital in Boston. This led to the manufacture of the next generation of Kolff's dialyzer, a stainless steel Kolff-Brigham dialysis machine.
According to McKellar (1999), a significant contribution to renal therapies was made by Canadian surgeon Gordon Murray with the assistance of two doctors, an undergraduate chemistry student, and research staff. Murray's work was conducted simultaneously and independently from that of Kolff. Murray's work led to the first successful artificial kidney built in North America in 1945–46, which was successfully used to treat a 26-year-old woman out of a uraemic coma in Toronto. The less-crude, more compact, second-generation "Murray-Roschlau" dialyser was invented in 1952–53, whose designs were stolen by German immigrant Erwin Halstrup, and passed off as his own (the "Halstrup–Baumann artificial kidney").
By the 1950s, Willem Kolff's invention of the dialyzer was used for acute renal failure, but it was not seen as a viable treatment for patients with stage 5 chronic kidney disease (CKD). At the time, doctors believed it was impossible for patients to have dialysis indefinitely for two reasons. First, they thought no man-made device could replace the function of kidneys over the long term. In addition, a patient undergoing dialysis developed damaged veins and arteries, so that after several treatments, it became difficult to find a vessel to access the patient's blood.
The original Kolff kidney was not very useful clinically, because it did not allow for removal of excess fluid. Swedish professor Nils Alwall encased a modified version of this kidney inside a stainless steel canister, to which a negative pressure could be applied, in this way effecting the first truly practical application of hemodialysis, which was done in 1946 at the University of Lund. Alwall also was arguably the inventor of the arteriovenous shunt for dialysis. He reported this first in 1948 where he used such an arteriovenous shunt in rabbits. Subsequently, he used such shunts, made of glass, as well as his canister-enclosed dialyzer, to treat 1500 patients in renal failure between 1946 and 1960, as reported to the First International Congress of Nephrology held in Evian in September 1960. Alwall was appointed to a newly created Chair of Nephrology at the University of Lund in 1957. Subsequently, he collaborated with Swedish businessman Holger Crafoord to found one of the key companies that would manufacture dialysis equipment in the past 50 years, Gambro. The early history of dialysis has been reviewed by Stanley Shaldon.
Belding H. Scribner, working with the biomechanical engineer Wayne Quinton, modified the glass shunts used by Alwall by making them from Teflon. Another key improvement was to connect them to a short piece of silicone elastomer tubing. This formed the basis of the so-called Scribner shunt, perhaps more properly called the Quinton-Scribner shunt. After treatment, the circulatory access would be kept open by connecting the two tubes outside the body using a small U-shaped Teflon tube, which would shunt the blood from the tube in the artery back to the tube in the vein.
In 1962, Scribner started the world's first outpatient dialysis facility, the Seattle Artificial Kidney Center, later renamed the Northwest Kidney Centers. Immediately the problem arose of who should be given dialysis, since demand far exceeded the capacity of the six dialysis machines at the center. Scribner decided that he would not make the decision about who would receive dialysis and who would not. Instead, the choices would be made by an anonymous committee, which could be viewed as one of the first bioethics committees.
For a detailed history of successful and unsuccessful attempts at dialysis, including pioneers such as Abel and Roundtree, Haas, and Necheles, see this review by Kjellstrand.
See also
Aluminium toxicity in people on dialysis
Dialysis disequilibrium syndrome
References
External links
Your Kidneys and How They Work – (American) National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), NIH.
Treatment Methods for Kidney Failure – (American) National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), NIH.
Treatment Methods for Kidney Failure: Hemodialysis – (American) National Kidney and Urologic Diseases Information Clearinghouse, NIH.
Membrane technology
Renal dialysis
Toxicology treatments | Hemodialysis | [
"Chemistry",
"Environmental_science"
] | 6,671 | [
"Toxicology treatments",
"Membrane technology",
"Toxicology",
"Separation processes"
] |
590,951 | https://en.wikipedia.org/wiki/Fenfluramine/phentermine | The drug combination fenfluramine/phentermine, usually called fen-phen, is an anti-obesity medication that is no longer widely available. It was sold in the early 1990s, and utilized two anorectics. Fenfluramine was marketed by American Home Products (later known as Wyeth) as Pondimin, but was shown to cause potentially fatal pulmonary hypertension and heart valve problems, which eventually led to its withdrawal in 1997 and legal damages of over $13 billion. Phentermine was not shown to have harmful effects.
Fenfluramine acts as a serotonin releasing agent, phentermine as primarily a norepinephrine releasing agent. Phentermine also induces the release of serotonin and dopamine, although to a far lesser extent than it induces the release of norepinephrine.
History
Fenfluramine as a single drug was first introduced in the 1970s, but was not popular because it only temporarily reduced weight. A 1984 study found a weight loss of 7.5 kg on average in 24 weeks, as compared to 4.4 kg under placebo. It sold modestly until the 1990s, when it was combined with phentermine and heavily marketed.
Testing on children in New York City
The New York Psychiatric Institute, associated with Columbia University, the Research Foundation of the City University of New York, and Mount Sinai Medical Center tested fenfluramine intravenously on more than 100 Black and Hispanic boys between the ages of 6 and 10, with delinquent older brothers, to test the theory that delinquent behavior could be predicted by serotonin levels. These studies were conducted before the drug was pulled from the market in September 1997. In 1998, CNN reported that these organizations were under "evaluation" by the Office for Protection from Research Risks, an arm of the National Institutes of Health. An article in Nature reports that these tests were published as a study in Archives of General Psychiatry in 1997 and that "The New York trial, funded largely by the Lowenstein Foundation, with some support from the National Institute of Mental Health, was halted in 1995, two years before the drug was withdrawn." In 1999, the New York Times reported that the Mount Sinai School of Medicine and the Research Foundation of the City University of New York were officially faulted by federal research-ethics officials for conducting these tests. The article reports that the yearlong investigation found no misconduct by the New York State Psychiatric Institute for these tests. This article reports the number of children involved in the study as 150 and states that none were harmed.
Harm and litigation
A similar drug, aminorex, had caused severe lung damage and "provided reason to worry that similar drugs ... could increase the risk of a rare but often fatal lung disease, pulmonary hypertension." In 1994, Wyeth official Fred Wilson expressed concerns about fenfluramine's labeling containing only four cases of pulmonary hypertension when a total of 41 had been observed, but no action was taken until 1996. In 1995, Wyeth introduced dexfenfluramine (the dextro isomer, marketed as Redux), which it hoped would cause fewer adverse effects. However, the medical officer of the Food and Drug Administration (FDA), Leo Lutwak, insisted upon a black box warning of pulmonary hypertension risks. After Lutwak refused to approve the drug, FDA management had James Milton Bilstad, an FDA senior drug evaluator, sign the approval instead, and the drug was approved for marketing in 1996 with no black box warning. European regulators required a major warning of pulmonary hypertension risks.
In 1996, a 30-year-old woman developed heart problems after a month of using fenfluramine/phentermine; when she died in February 1997, the Boston Herald devoted a front-page article to her. In August 1997, a paper in the New England Journal of Medicine (NEJM) from the Mayo Clinic discussed clinical findings in 24 people who had taken fen-phen. The authors noted that their findings suggested a possible correlation between mitral valve dysfunction and the use of these anorectic agents. The FDA alerted medical practitioners that it had received nine additional reports of the same type and requested all health care professionals to report any such cases to the agency’s MedWatch program, or to their respective pharmaceutical manufacturers. The FDA subsequently received over a hundred additional reports of valvular heart disease in people taking fen-phen, fenfluramine alone, or dexfenfluramine alone. The FDA requested that the manufacturers of fenfluramine and dexfenfluramine stress the potential risk to the heart in the drugs' labeling and in package inserts. The FDA continued to receive reports in 1997 of valvular heart disease in people who had taken these drugs. This disease typically involves the aortic and mitral valves.
After reports of valvular heart disease and pulmonary hypertension, primarily in women who had been undergoing treatment with fen-phen or (dex)fenfluramine, the FDA requested its withdrawal from the market in September 1997. The action was based on findings from doctors who had evaluated people taking these two drugs with echocardiograms, a procedure that can test the functioning of heart valves. The findings indicated that approximately 30 percent of people who had taken the combination for up to 24 months had abnormal echocardiograms, even though they had no symptoms. This percentage of abnormal test results was much higher than would be expected from a sample of the population who had not been exposed to either fenfluramine or dexfenfluramine. Follow-up studies showed that for people who took the combination for 3 months or less, the rate of heart valve complications was less than 3%.
Aftermath
Upon the release of the information regarding fen-phen's cardiac risks, the Association of Trial Lawyers of America formed a large trial lawyer group to seek damages from American Home Products, the distributor of fenfluramine and dexfenfluramine.
Fen-phen is no longer widely available. In April 2005, American Lawyer magazine ran a cover story on the wave of fen-phen litigation, reporting that more than 50,000 product liability lawsuits had been filed by alleged fen-phen victims. Total liability was estimated to be as high as $14 billion. Wyeth was still in negotiations with injured parties in February 2005, offering settlements of $5,000 to $200,000 to some of those who had sued, and stating they might offer more to those who were most seriously injured. One plaintiff's attorney said that "the payments [were] not going to be large enough to cover medical expenses." Thousands of injured persons rejected these offers. At the time, Wyeth announced it had set aside $21.1 billion (U.S.) to cover the cost of the lawsuits.
Possible uses
Obesity
In 1984, researchers at the University of Rochester Medical Center reported that they had performed a double-blind, controlled clinical trial comparing phentermine alone, fenfluramine alone, a combination of phentermine and fenfluramine, and placebo, for weight loss in humans. Weight loss in those receiving the fen-phen combination was significantly greater (8.4±1.1 kg) than in those receiving placebo (4.4±0.9 kg) and equivalent to that of those receiving fenfluramine (7.5±1.2 kg) or phentermine alone (10.0±1.2 kg). This amounts to an additional weight loss of 4±2 kg over the course of 24 weeks. Adverse effects were less frequent with the combination regimen than with the other active (non-placebo) treatments. The authors felt that combining fenfluramine and phentermine capitalized on their pharmacodynamic differences, resulting in equivalent weight loss, fewer adverse effects, and better appetite control.
Addiction remission
The term fen-phen was coined in 1994, when Pietr Hitzig and Richard B. Rothman reported that this combination could presumptively remit alcohol and cocaine craving. The authors suggested that other combined dopamine and serotonin agonists or precursors might share this therapeutic potential. Subsequent experiments in rats supported these preliminary reports. In 2006 it was confirmed that the combination of phentermine and the serotonin precursor 5-hydroxytryptophan (5-HTP), in place of fenfluramine, significantly decreased alcohol withdrawal seizures in rats.
Intramural National Institutes of Health (NIH) double-blind protocols to demonstrate the efficacy of fen-phen in alcohol and cocaine addiction were designed, but never performed.
Adverse effects of serotonin
The findings on fen-phen, specifically fenfluramine, causing valvular heart disease and pulmonary hypertension prompted a renewed interest in the deleterious effects of systemic serotonin. It had already been known for decades that two of the major side-effects of the carcinoid syndrome, in which excessive serotonin is produced endogenously, are valvular disease and pulmonary hypertension. Several centers were able to note a relationship to an excessive activation of the serotonin receptor subtype 5-HT2B.
See also
Semaglutide, another weight-loss drug that gained mass popularity
References
External links
Frontline: Dangerous prescriptions – Interview with Leo Lutwak, in which he discusses the side effects of fenfluramine (Pondimin), its successor dexfenfluramine (Redux), and the fen-phen combination.
U.S. FDA fen-phen information
Anorectics
Cardiotoxins
Combination anti-obesity drugs
Respiratory toxins
Withdrawn anti-obesity drugs | Fenfluramine/phentermine | [
"Chemistry"
] | 2,014 | [
"Respiratory toxins",
"Cellular respiration"
] |
590,956 | https://en.wikipedia.org/wiki/American%20Association%20of%20Variable%20Star%20Observers | The American Association of Variable Star Observers (AAVSO) is an international nonprofit organization. Founded in 1911, the organization focuses on coordinating, analyzing, publishing, and archiving variable star observations made largely by amateur astronomers. The AAVSO creates records that establish light curves depicting the variation in brightness of a star over time. The AAVSO makes these records available to professional astronomers, researchers, and educators.
Professional astronomers do not have the resources to monitor every variable star. Hence, astronomy is one of the few sciences where
amateurs can make significant contributions to research. In 2011, the 100th year of the AAVSO's existence, the twenty-millionth variable star observation was received into their database. The AAVSO International Database (AID) has stored over thirty-five million observations as of 2019. The organization receives nearly 1,000,000 observations annually from an estimated 2,000 professional and amateur observers, and is quoted regularly in scientific journals. The International Variable Star Index (VSX) website, maintained by the AAVSO, cataloged 2,277,999 variable stars as of November 2023.
The AAVSO is also very active in education and public outreach. They routinely hold training workshops for citizen science and publish papers with amateurs as co-authors. In the 1990s, the AAVSO developed the Hands-On Astrophysics curriculum, now known as Variable Star Astronomy (with support from the National Science Foundation [NSF]). In 2009, the AAVSO was awarded a three-year $800,000 grant from the NSF to run Citizen Sky, a pro-am collaboration project examining the 2009-2011 eclipse of the star epsilon Aurigae.
The AAVSO headquarters was originally located at the residence of its founder William T. Olcott in Norwich, Connecticut.
Minor Planet (8900) AAVSO is named after the organization.
History
After AAVSO's incorporation in 1918, it unofficially moved to Harvard College Observatory, which later served as the official AAVSO headquarters (1931–1953). Thereafter, it moved around Cambridge before its first building, the Clinton B. Ford Astronomical Data and Research Center, was purchased in 1985. In 2007, the AAVSO purchased and moved into the recently vacated premises of Sky & Telescope magazine.
As of September 16, 2022, the Executive Director of the AAVSO is Brian Kloppenborg. Before he assumed this role, Kathy Spirer worked in this capacity for nine months, following the resignation of Styliani ("Stella") Kafka, who led the organization from February 2015 until late 2021. Kafka had succeeded Arne Henden. The previous director of the AAVSO for many decades was Janet Mattei, who died in March 2004 of leukemia.
Current and former members
Recorders and Directors
Presidents
Other members
The AAVSO currently has over 2,000 members and observers, with approximately half of them from outside the United States. This list only consists of those with Wikipedia pages.
Publications
AAVSO Alert Notice.
Journal of the American Association of Variable Star Observers (JAAVSO).
AAVSO Circular was published from 1970 until 2000 and edited by John E. Bortle.
See also
List of astronomical societies
References
External links
AAVSO website
The International Variable Star Index (VSX)
History of the AAVSO
Amateur Astronomy Reaches New Heights Space.com, June 28, 2000
A New Foundation for the AAVSO article in the January 2007 issue of Sky & Telescope magazine
Red Hot News… Possible Nova in Sagittarius! Universe Today, August 9, 2009
100 Years of Citizen Science (1 December 2010)
Harvard University
Amateur astronomy organizations
Astronomy organizations
Variable stars
1911 establishments in the United States
Scientific organizations established in 1911 | American Association of Variable Star Observers | [
"Astronomy"
] | 771 | [
"Amateur astronomy organizations",
"Astronomy organizations"
] |
590,971 | https://en.wikipedia.org/wiki/Haversine%20formula | The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. Important in navigation, it is a special case of a more general formula in spherical trigonometry, the law of haversines, that relates the sides and angles of spherical triangles.
The first table of haversines in English was published by James Andrew in 1805, but Florian Cajori credits an earlier use by José de Mendoza y Ríos in 1801. The term haversine was coined in 1835 by James Inman.
These names follow from the fact that they are customarily written in terms of the haversine function, given by hav(θ) = sin²(θ/2). The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in 19th- and early 20th-century navigation and trigonometric texts. These days, the haversine form is also convenient in that it has no coefficient in front of the sin² function.
Formulation
Let the central angle between any two points on a sphere be:
where
is the distance between the two points along a great circle of the sphere (see spherical distance),
is the radius of the sphere.
The haversine formula allows the haversine of to be computed directly from the latitude (represented by ) and longitude (represented by ) of the two points:
where
, are the latitude of point 1 and latitude of point 2,
, are the longitude of point 1 and longitude of point 2,
, .
Finally, the haversine function , applied above to both the central angle and the differences in latitude and longitude, is
The haversine function computes half the versine of the angle, or equivalently the square of half the chord subtended by the angle on a unit circle (sphere).
To solve for the distance , apply the archaversine (inverse haversine) to or use the arcsine (inverse sine) function:
or more explicitly:
where
.
When using these formulae, one must ensure that does not exceed 1 due to a floating point error ( is real only for ). only approaches 1 for antipodal points (on opposite sides of the sphere)—in this region, relatively large numerical errors tend to arise in the formula when finite precision is used. Because is then large (approaching , half the circumference) a small error is often not a major concern in this unusual case (although there are other great-circle distance formulas that avoid this problem). (The formula above is sometimes written in terms of the arctangent function, but this suffers from similar numerical problems near .)
As described below, a similar formula can be written using cosines (sometimes called the spherical law of cosines, not to be confused with the law of cosines for plane geometry) instead of haversines, but if the two points are close together (e.g. a kilometer apart, on the Earth) one might end up with , leading to an inaccurate answer. Since the haversine formula uses sines, it avoids that problem.
Either formula is only an approximation when applied to the Earth, which is not a perfect sphere: the "Earth radius" varies from 6356.752 km at the poles to 6378.137 km at the equator. More importantly, the radius of curvature of a north-south line on the earth's surface is 1% greater at the poles (≈6399.594 km) than at the equator (≈6335.439 km)—so the haversine formula and law of cosines cannot be guaranteed correct to better than 0.5%. More accurate methods that consider the Earth's ellipticity are given by Vincenty's formulae and the other formulas in the geographical distance article.
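To make the computation concrete, here is a minimal Python sketch of the haversine formula described above (an illustration, not part of the original article); the function name, the 6371 km mean Earth radius, and the example coordinates are assumptions, and the clamp guards against the floating-point issue near antipodal points noted earlier:

```python
from math import radians, sin, cos, sqrt, asin

def haversine_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points via the haversine formula.

    Coordinates are decimal degrees; radius_km is an assumed mean Earth
    radius (the Earth is not a perfect sphere -- see caveats above).
    """
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlambda = radians(lon2 - lon1)

    # hav(d/R) built from the latitude and longitude differences:
    # sin^2(dphi/2) + cos(phi1) * cos(phi2) * sin^2(dlambda/2)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlambda / 2) ** 2

    # Clamp so floating-point error near antipodal points cannot push
    # the argument of asin outside its domain [0, 1].
    h = min(1.0, h)
    return 2 * radius_km * asin(sqrt(h))

# Example: Paris (48.8566 N, 2.3522 E) to New York (40.7128 N, -74.0060 W)
print(round(haversine_distance(48.8566, 2.3522, 40.7128, -74.0060)))  # ~5837 km
```

Using asin(sqrt(h)) keeps the computation well-conditioned for small separations, unlike the spherical law of cosines discussed above.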
The law of haversines
Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points , , and on the sphere. If the lengths of these three sides are (from to ), (from to ), and (from to ), and the angle of the corner opposite is , then the law of haversines states:
Since this is a unit sphere, the lengths , , and are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius of the sphere).
In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where is the north pole, while and are the two points whose separation is to be determined. In that case, and are the co-latitudes, is the longitude separation , and is the desired . Noting that , the haversine formula immediately follows.
To derive the law of haversines, one starts with the spherical law of cosines:
As mentioned above, this formula is an ill-conditioned way of solving for when is small. Instead, we substitute the identity that , and also employ the addition identity , to obtain the law of haversines, above.
Proof
One can prove the formula:
by transforming the points given by their latitude and longitude into cartesian coordinates, then taking their dot product.
Consider two points on the unit sphere, given by their latitude and longitude :
These representations are very similar to spherical coordinates, however latitude is measured as angle from the equator and not the north pole. These points have the following representations in cartesian coordinates:
From here we could directly attempt to calculate the dot product and proceed, however the formulas become significantly simpler when we consider the following fact: the distance between the two points will not change if we rotate the sphere along the z-axis. This will in effect add a constant to . Note that similar considerations do not apply to transforming the latitudes - adding a constant to the latitudes may change the distance between the points. By choosing our constant to be , and setting , our new points become:
With denoting the angle between and , we now have that:
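The proof above can be checked numerically by computing the central angle both from the haversine form and from the arccosine of the dot product of the Cartesian unit vectors. A small sketch (all names and the sample coordinates are illustrative):

```python
import math

def central_angle_haversine(lat1, lon1, lat2, lon2):
    # All angles in radians; well-conditioned for small separations.
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(min(1.0, h)))

def central_angle_dot_product(lat1, lon1, lat2, lon2):
    # Cartesian unit vectors, with latitude measured from the equator.
    p = (math.cos(lat1) * math.cos(lon1), math.cos(lat1) * math.sin(lon1), math.sin(lat1))
    q = (math.cos(lat2) * math.cos(lon2), math.cos(lat2) * math.sin(lon2), math.sin(lat2))
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(max(-1.0, min(1.0, dot)))

# Two points roughly a kilometre apart on the Earth:
a = (math.radians(50.0), math.radians(6.0))
b = (math.radians(50.009), math.radians(6.0))
print(central_angle_haversine(*a, *b))
print(central_angle_dot_product(*a, *b))
```

For points this close the dot product is extremely near 1, so acos loses significant digits, while the haversine route does not; this is the ill-conditioning of the spherical law of cosines discussed above.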
See also
Sight reduction
Vincenty's formulae
Cosine distance
References
Further reading
U.S. Census Bureau Geographic Information Systems FAQ (content has been moved to "What is the best way to calculate the distance between 2 points?")
R. W. Sinnott, "Virtues of the Haversine", Sky and Telescope 68 (2), 159 (1984).
W. Gellert, S. Gottwald, M. Hellwich, H. Kästner, and H. Küstner, The VNR Concise Encyclopedia of Mathematics, 2nd ed., ch. 12 (Van Nostrand Reinhold: New York, 1989).
External links
Implementations of the haversine formula in 91 languages at rosettacode.org and in 17 languages on codecodex.com
Other implementations in C++, C (MacOS), Pascal, Python, Ruby, JavaScript, PHP, Matlab, MySQL
Spherical trigonometry
Geodesy
Distance | Haversine formula | [
"Physics",
"Mathematics"
] | 1,505 | [
"Distance",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Size",
"Space",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Geodesy"
] |
590,995 | https://en.wikipedia.org/wiki/Intermodulation | Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. The intermodulation between frequency components will form additional components at frequencies that are not just at harmonic frequencies (integer multiples) of either, like harmonic distortion, but also at the sum and difference frequencies of the original frequencies and at sums and differences of multiples of those frequencies.
Intermodulation is caused by non-linear behaviour of the signal processing (physical equipment or even algorithms) being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, or more approximately by a Taylor series.
Practically all audio equipment has some non-linearity, so it will exhibit some amount of IMD, though this may be low enough to be imperceptible to humans. Due to the characteristics of the human auditory system, a given percentage of IMD is perceived as more bothersome than the same amount of harmonic distortion.
Intermodulation is also usually undesirable in radio, as it creates unwanted spurious emissions, often in the form of sidebands. For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference, which can reduce audio clarity or increase spectrum usage.
IMD is only distinct from harmonic distortion in that the stimulus signal is different. The same nonlinear system will produce both total harmonic distortion (with a solitary sine wave input) and IMD (with more complex tones). In music, for instance, IMD is intentionally applied to electric guitars using overdriven amplifiers or effects pedals to produce new tones at subharmonics of the tones being played on the instrument. See Power chord#Analysis.
IMD is also distinct from intentional modulation (such as a frequency mixer in superheterodyne receivers) where signals to be modulated are presented to an intentional nonlinear element (multiplied). See non-linear mixers such as mixer diodes and even single-transistor oscillator-mixer circuits. However, while the intermodulation products of the received signal with the local oscillator signal are intended, superheterodyne mixers can, at the same time, also produce unwanted intermodulation effects from strong signals near in frequency to the desired signal that fall within the passband of the receiver.
Causes of intermodulation
A linear time-invariant system cannot produce intermodulation. If the input of a linear time-invariant system is a signal of a single frequency, then the output is a signal of the same frequency; only the amplitude and phase can differ from the input signal.
Non-linear systems generate harmonics in response to sinusoidal input, meaning that if the input of a non-linear system is a signal of a single frequency, then the output is a signal which includes a number of integer multiples of the input frequency signal; (i.e. some of ).
Intermodulation occurs when the input to a non-linear system is composed of two or more frequencies. Consider an input signal that contains three frequency components at, , and ; which may be expressed as
where the and are the amplitudes and phases of the three components, respectively.
We obtain our output signal, , by passing our input through a non-linear function :
will contain the three frequencies of the input signal, , , and (which are known as the fundamental frequencies), as well as a number of linear combinations of the fundamental frequencies, each in the form
where , , and are arbitrary integers which can assume positive or negative values. These are the intermodulation products (or IMPs).
In general, each of these frequency components will have a different amplitude and phase, which depends on the specific non-linear function being used, and also on the amplitudes and phases of the original input components.
More generally, given an input signal containing an arbitrary number of frequency components , the output signal will contain a number of frequency components, each of which may be described by
where the coefficients are arbitrary integer values.
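As an illustrative (non-normative) numerical experiment, one can pass a two-tone signal through a simple memoryless nonlinearity and inspect the spectrum; the cubic coefficient, tone frequencies, and detection threshold below are arbitrary choices for the example:

```python
import numpy as np

fs = 8000.0                        # sample rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)      # 1 second => 1 Hz FFT bins
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 554 * t)

# A memoryless cubic nonlinearity standing in for a mildly non-linear amplifier.
y = x + 0.2 * x**3

spectrum = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spectrum > 0.01])
```

Besides the fundamentals at 440 and 554 Hz, this prints the odd-order products the cubic term creates: 326 and 668 Hz (2f1 − f2 and 2f2 − f1), 1434 and 1548 Hz (2f1 + f2 and 2f2 + f1), and the third harmonics 1320 and 1662 Hz.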
Intermodulation order
The order of a given intermodulation product is the sum of the absolute values of the coefficients,
For example, in our original example above, third-order intermodulation products (IMPs) occur where :
In many radio and audio applications, odd-order IMPs are of most interest, as they fall within the vicinity of the original frequency components, and may therefore interfere with the desired behaviour. For example, intermodulation distortion from the third order (IMD3) of a circuit can be seen by looking at a signal that is made up of two sine waves, one at and one at . When you cube the sum of these sine waves you will get sine waves at various frequencies including and . If and are large but very close together then and will be very close to and .
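The bookkeeping of orders can be made explicit in code. A small sketch (the function name and choices are illustrative) that enumerates the product frequencies of a set of tones up to a given order:

```python
from itertools import product

def intermodulation_products(freqs, max_order=3):
    """Enumerate intermodulation product frequencies up to max_order.

    Each product is sum(k_i * f_i) with integer coefficients k_i, and
    its order is sum(|k_i|).  Returns {frequency: set of coefficient tuples}.
    """
    out = {}
    coeff_range = range(-max_order, max_order + 1)
    for coeffs in product(coeff_range, repeat=len(freqs)):
        order = sum(abs(k) for k in coeffs)
        if 2 <= order <= max_order:       # skip fundamentals and the trivial term
            f = sum(k * f0 for k, f0 in zip(coeffs, freqs))
            if f > 0:
                out.setdefault(f, set()).add(coeffs)
    return out

for f, coeffs in sorted(intermodulation_products([100, 110]).items()):
    print(f, sorted(coeffs))
```

For two tones at 100 and 110 (arbitrary units), this reports the third-order products at 90 (coefficients 2, -1) and 120 (coefficients -1, 2) that fall near the fundamentals, as discussed above; note the sum-of-absolute-coefficients rule also counts pure harmonics such as 200 and 220.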
Passive intermodulation (PIM)
As explained in a previous section, intermodulation can only occur in non-linear systems. Non-linear systems are generally composed of active components, meaning that the components must be biased with an external power source which is not the input signal (i.e. the active components must be "turned on").
Passive intermodulation (PIM), however, occurs in passive devices (which may include cables, antennas etc.) that are subjected to two or more high power tones. The PIM product is the result of the two (or more) high power tones mixing at device nonlinearities such as junctions of dissimilar metals or metal-oxide junctions, such as loose corroded connectors. The higher the signal amplitudes, the more pronounced the effect of the nonlinearities, and the more prominent the intermodulation that occurs — even though upon initial inspection, the system would appear to be linear and unable to generate intermodulation.
The requirement for "two or more high power tones" need not be discrete tones. Passive intermodulation can also occur between different frequencies (i.e. different "tones") within a single broadband carrier. These PIMs would show up as sidebands in a telecommunication signal, which interfere with adjacent channels and impede reception.
Passive intermodulation is a major concern in modern communication systems in cases where a single antenna is used for both high-power transmit signals and low-power receive signals (or where a transmit antenna is in close proximity to a receive antenna). Although the power in the passive intermodulation signal is typically many orders of magnitude lower than the power of the transmit signal, it is often on the same order of magnitude as (and possibly higher than) the power of the receive signal. Therefore, if a passive intermodulation product finds its way into the receive path, it cannot be filtered or separated from the receive signal, which is then swamped by the passive intermodulation signal.
Sources of passive intermodulation
Ferromagnetic materials are the most common materials to avoid and include ferrites, nickel, (including nickel plating) and steels (including some stainless steels). These materials exhibit hysteresis when exposed to reversing magnetic fields, resulting in PIM generation.
Passive intermodulation can also be generated in components with manufacturing or workmanship defects, such as cold or cracked solder joints or poorly made mechanical contacts. If these defects are exposed to high radio frequency currents, passive intermodulation can be generated. As a result, radio frequency equipment manufacturers perform factory PIM tests on components, to eliminate passive intermodulation caused by these design and manufacturing defects.
Passive intermodulation can also be inherent in the design of a high power radio frequency component where radio frequency current is forced to narrow channels or restricted.
In the field, passive intermodulation can be caused by components that were damaged in transit to the cell site, installation workmanship issues and by external passive intermodulation sources. Some of these include:
Contaminated surfaces or contacts due to dirt, dust, moisture or oxidation.
Loose mechanical junctions due to inadequate torque, poor alignment or poorly prepared contact surfaces.
Loose mechanical junctions caused during transportation, shock or vibration.
Metal flakes or shavings inside radio frequency connections.
Inconsistent metal-to-metal contact between radio frequency connector surfaces caused by any of the following:
Trapped dielectric materials (adhesives, foam, etc.), cracks or distortions at the end of the outer conductor of coaxial cables, often caused by overtightening the back nut during installation, solid inner conductors distorted in the preparation process, hollow inner conductors excessively enlarged or made oval during the preparation process.
Passive intermodulation can also occur in connectors, or when conductors made of two galvanically unmatched metals come in contact with each other.
Nearby metallic objects in the direct beam and side lobes of the transmit antenna including rusty bolts, roof flashing, vent pipes, guy wires, etc.
Passive intermodulation testing
IEC 62037 is the international standard for passive intermodulation testing and gives specific details as to passive intermodulation measurement setups. The standard specifies the use of two +43 dBm (20 W) tones for the test signals for passive intermodulation testing. This power level has been used by radio frequency equipment manufacturers for more than a decade to establish PASS / FAIL specifications for radio frequency components.
Intermodulation in electronic circuits
Slew-induced distortion (SID) can produce intermodulation distortion (IMD) when the first signal is slewing (changing voltage) at the limit of the amplifier's power bandwidth product. This induces an effective reduction in gain, partially amplitude-modulating the second signal. If SID only occurs for a portion of the signal, it is called "transient" intermodulation distortion.
Measurement
Intermodulation distortion in audio is usually specified as the root mean square (RMS) value of the various sum-and-difference signals as a percentage of the original signal's root mean square voltage, although it may be specified in terms of individual component strengths, in decibels, as is common with radio frequency work. Audio system measurements (Audio IMD) include SMPTE standard RP120-1994 where two signals (at 60 Hz and 7 kHz, with 4:1 amplitude ratios) are used for the test; many other standards (such as DIN, CCIF) use other frequencies and amplitude ratios. Opinion varies over the ideal ratio of test frequencies (e.g. 3:4, or almost — but not exactly — 3:1 for example).
After feeding the equipment under test with low distortion input sinewaves, the output distortion can be measured by using an electronic filter to remove the original frequencies, or spectral analysis may be made using Fourier transformations in software or a dedicated spectrum analyzer, or when determining intermodulation effects in communications equipment, may be made using the receiver under test itself.
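The spectral-analysis approach can be sketched for the SMPTE-style two-tone stimulus mentioned above; here the soft "device under test", the sample rate, and the choice of which sidebands to sum are assumptions for the example, not part of any standard:

```python
import numpy as np

fs = 96000                          # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)       # 1 second => 1 Hz FFT bins
# SMPTE-style stimulus: 60 Hz and 7 kHz mixed at a 4:1 amplitude ratio.
x = 4.0 * np.sin(2 * np.pi * 60 * t) + 1.0 * np.sin(2 * np.pi * 7000 * t)

# Hypothetical device under test: a weakly non-linear transfer curve.
y = x + 0.05 * x**2 + 0.02 * x**3

spec = np.abs(np.fft.rfft(y)) / len(t)

def amplitude(f_hz):
    # 1 Hz bins, so the bin index equals the frequency in hertz.
    return 2 * spec[int(round(f_hz))]

carrier = amplitude(7000)
# RMS-sum the first sum-and-difference sidebands around the 7 kHz tone.
sidebands = [amplitude(7000 + n * 60) for n in (-2, -1, 1, 2)]
imd = np.sqrt(sum(a**2 for a in sidebands)) / carrier
print(f"IMD = {100 * imd:.1f}%")
```

Real measurements integrate more sideband pairs and apply the weighting of the relevant standard; the sketch only shows the principle of comparing sideband energy around the high tone to the tone itself.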
In radio applications, intermodulation may be measured as adjacent channel power ratio. Intermodulation signals in the GHz range generated by passive devices (PIM: passive intermodulation) are hard to test. Manufacturers of scalar PIM instruments include Summitek and Rosenberger. The newest developments are PIM instruments that also measure the distance to the PIM source. Anritsu offers a radar-based solution with low accuracy, and Heuermann offers a frequency-converting vector network analyzer solution with high accuracy.
See also
Beat (acoustics)
Audio system measurements
Second-order intercept point (SOI)
Third-order intercept point (TOI), a metric of an amplifier or system related to intermodulation
Luxemburg–Gorky effect
References
Further reading
Audio amplifier specifications
Waves
Radio electronics | Intermodulation | [
"Physics",
"Engineering"
] | 2,464 | [
"Radio electronics",
"Physical phenomena",
"Waves",
"Motion (physics)",
"Electronic engineering",
"Audio engineering",
"Audio amplifier specifications"
] |
591,021 | https://en.wikipedia.org/wiki/Computational%20irreducibility | Computational irreducibility suggests certain computational processes cannot be simplified such that the only way to determine the outcome of such a process is to go through each step of its computation. It is one of the main ideas proposed by Stephen Wolfram in his 2002 book A New Kind of Science, although the concept goes back to studies from the 1980s.
The idea
Many physical systems are complex enough that they cannot be effectively measured. Even simple programs can exhibit a great diversity of behavior. Therefore no model can predict, using only initial conditions, exactly what will occur in a given physical system before an experiment is conducted. Because of this problem of undecidability in the formal language of computation, Wolfram terms this inability to "shortcut" a system (or "program"), or otherwise describe its behavior in a simple way, "computational irreducibility". The idea implies that there are cases where a theory's predictions are effectively impossible to obtain. Wolfram states that several phenomena are normally computationally irreducible.
Computational irreducibility explains why many natural systems are hard to predict or simulate. The Principle of Computational Equivalence implies these systems are as computationally powerful as any designed computer.
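A standard illustration (our example, not from the article) is an elementary cellular automaton such as Wolfram's Rule 30: as far as is known, there is no shortcut to its long-term behavior, so the sketch below simply runs every generation in turn; the grid width and step count are arbitrary choices:

```python
def rule30_step(cells):
    """One step of the Rule 30 elementary cellular automaton (wrap-around)."""
    n = len(cells)
    # Rule 30 in boolean form: new cell = left XOR (center OR right).
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Start from a single live cell.  To learn what generation 40 looks like,
# there is no known way around computing generations 1..39 first.
width = 81
row = [0] * width
row[width // 2] = 1
for _ in range(40):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running this prints the familiar chaotic Rule 30 triangle; the point of computational irreducibility is that stepping through the rule like this is, so far as anyone knows, the only general way to obtain row n.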
Implications
There is no easy theory for any behavior that seems complex.
Complex behavior features can be captured with models that have simple underlying structures.
An overall system's behavior based on simple structures can still exhibit behavior indescribable by reasonably "simple" laws.
Analysis
Navot Israeli and Nigel Goldenfeld found that some less complex systems behaved simply and predictably (thus, they allowed approximations). However, more complex systems were still computationally irreducible and unpredictable. It is unknown what conditions would allow complex phenomena to be described simply and predictably.
Compatibilism
Marius Krumm and Markus P. Müller tie computational irreducibility to compatibilism. They refine the concept via an intermediate requirement, a new notion called "computational sourcehood", which demands an essentially full and almost-exact representation of the features associated with the problem or process represented, together with a full, no-shortcut computation. The approach simplifies conceptualization of the issue via the no-shortcuts metaphor. This may be analogized to cooking, where all the ingredients in a recipe are required, and the cooking schedule must be followed, to obtain the desired end product. This parallels the profound distinctions between similarity and identity.
See also
Chaos theory
Gödel's Theorem
Computation
Principle of Computational Equivalence
Artificial intelligence
Robert Rosen
Emergent behaviour
External links and references
Weisstein, Eric W., et al., "Computational irreducibility". MathWorld—A Wolfram Web Resource.
Wolfram, Stephen, "A New Kind of Science". Wolfram Media, Inc., May 14, 2002.
Wolfram, Stephen, "Computational irreducibility". A New Kind of Science.
Wolfram, Stephen, "History of computational irreducibility". A New Kind of Science.
Wolfram, Stephen, "History of computational irreducibility notes". A New Kind of Science.
Wolfram, Stephen, "Undecidability and intractability in theoretical physics". Physical Review Letters, 1985.
Israeli, Navot, and Nigel Goldenfeld, "On computational irreducibility and the predictability of complex physical systems". Physical Review Letters, 2004.
"
Berger, David, "Stephen Wolfram, A New Kind of Science". Serendip's Bookshelves.
"Complexity is Elusive". Physical Review Letters, March 4, 2004.
Tomasson, Gunnar, "Scientific Theory and Computational Irreducibility". A New Kind of Science: The NKS Forum.
References
Information theory
Theoretical computer science
Emergence | Computational irreducibility | [
"Mathematics",
"Technology",
"Engineering"
] | 759 | [
"Telecommunications engineering",
"Theoretical computer science",
"Applied mathematics",
"Computer science",
"Information theory"
] |
591,099 | https://en.wikipedia.org/wiki/David%20H.%20Levy | David Howard Levy (born May 22, 1948) is a Canadian amateur astronomer, science writer and discoverer of comets and minor planets, who co-discovered Comet Shoemaker–Levy 9 in 1993, which collided with the planet Jupiter in 1994.
Biography
Levy was born in Montreal, Quebec, Canada, in 1948. He developed an interest in astronomy at an early age. However, he pursued and received bachelor's and master's degrees in English literature.
Levy went on to discover 23 comets, either independently or with Gene and Carolyn Shoemaker. He has written 34 books, mostly on astronomical subjects, such as The Quest for Comets, a biography of Pluto-discoverer Clyde Tombaugh in 2006, and his tribute to Gene Shoemaker in Shoemaker by Levy. He has provided periodic articles for Sky and Telescope magazine, as well as Parade Magazine, Sky News and, most recently, Astronomy Magazine.
Periodic comets that Levy co-discovered include 118P/Shoemaker–Levy, 129P/Shoemaker–Levy, 135P/Shoemaker–Levy, 137P/Shoemaker–Levy, 138P/Shoemaker–Levy, 145P/Shoemaker–Levy, and 181P/Shoemaker–Levy. In addition, Levy is the sole discoverer of two periodic comets: 255P/Levy and P/1991 L3.
On February 28, 2011, Levy was awarded a Ph.D. from the Hebrew University of Jerusalem for his successful completion of his thesis "The Sky in Early Modern English Literature: A Study of Allusions to Celestial Events in Elizabethan and Jacobean Writing, 1572–1620."
Starting in 2015, Levy has been donating his observing logs, which he has kept continuously since 1956, his personal journals since 1958, and his comet search records since 1965, to the Linda Hall Library in Kansas City. The observing records are also on-line at the website of the Royal Astronomical Society of Canada.
He lives in Vail, Arizona and was married to Wendee Levy from 1997 until her death in 2022. Levy and his wife hosted a weekly internet radio talk show on astronomy, which ended on February 3, 2011, with a planned "Final Show". Show archives are available in WMA and MP3 formats. Levy is President of the National Sharing the Sky Foundation and a Master of Astronomy with DeTao Masters Academy (DTMA).
Levy's autobiography, "A Nightwatchman's Journey: The Road Not Taken", was published in June 2019 by the Royal Astronomical Society of Canada.
Awards
The main-belt asteroid 3673 Levy was named in his honour. Levy was awarded the C.A. Chant Medal of the Royal Astronomical Society of Canada in 1980. Levy was recipient of the 1990 G. Bruce Blair Medal. In 1993 he won the Amateur Achievement Award of the Astronomical Society of the Pacific. In 2007, Levy received the Smithsonian Astrophysical Observatory's Edgar Wilson Award for the discovery of comets. In 2008, a special edition telescope, "The Comet Hunter", was co-designed by Levy.
Together with Martyn Ives, David Taylor, and Benjamin Woolley, Levy won a 1998 News & Documentary Emmy Award in the "Individual Achievement in a Craft, Writer" category for the script of the documentary 3 Minutes to Impact produced by York Films for the Discovery Channel.
Discoveries
Comets
Visual
Comet Levy-Rudenko, 1984t, C/1984 V1, Nov 14, 1984
Comet Levy, 1987a, C/1987 A1, January 5, 1987
Comet Levy, 1987y, C/1987 T1, October 11, 1987
Comet Levy, 1988e, C/1988 F1, March 19, 1988
Comet Okazaki-Levy-Rudenko, 1989r, C/1989 Q1, August 25, 1989
Comet Levy, 1990c, C/1990 K1, May 20, 1990
Periodic Comet Levy, P/1991 L3, June 14, 1991
Comet Takamizawa-Levy, C/1994 G1, April 15, 1994
Periodic Comet 255P/Levy, October 2, 2006
Photographic, as part of team of Eugene and Carolyn Shoemaker and David Levy
Periodic Comet Shoemaker-Levy 1, 1990o, P/1990 V1
Periodic Comet Shoemaker-Levy 2, 1990p, 137 P/1990 UL3
Comet Shoemaker-Levy, 1991d C/1991 B1
Periodic Comet Shoemaker-Levy 3, 1991e, 129P/1991 C1
Periodic Comet Shoemaker-Levy 4, 1991f, 118P/1991 C2
Periodic Comet Shoemaker-Levy 5, 1991z, 145P/1991 T1
Comet Shoemaker-Levy, 1991a1, C/1991 T2
Periodic Comet Shoemaker-Levy 6, 1991b1, P/1991 V1
Periodic Comet Shoemaker-Levy 7, 1991d1, 138P/1991 V2
Periodic Comet Shoemaker-Levy 8, 1992f, 135P/1992 G2
Periodic Comet Shoemaker–Levy 9, 1993e, D/1993 F2
Comet Shoemaker-Levy, 1993h, C/1993 K1
Comet Shoemaker-Levy, 1994d C/1994 E2
Comet Jarnac, P/2010 E2 (David Levy, Wendee Levy, Tom Glinos)
Other
Nova Cygni 1975, August 30, 1975 (independent discovery)
Nova Cygni 1978, September 12, 1978 (independent discovery)
Comet Hartley-IRAS (P/1983 V1), November 30, 1983 (independent discovery)
Comet Shoemaker 1992y, C/1992 U1 (aided in discovery)
Periodic Comet Shoemaker 4, 1994k, P/1994 J3 (aided in discovery)
Asteroid (5261) Eureka, the first Martian Trojan asteroid, with Henry E. Holt, June 1990
Established the recurrent cataclysmic variable nature of TV Corvi (Tombaugh's Star), August 1990
Minor planets
See also
Carolyn S. Shoemaker
Eugene Merle Shoemaker
List of minor planet discoverers
References
External links
David Levy's Home Page
1948 births
20th-century Canadian astronomers
Acadia University alumni
Anglophone Quebec people
Jewish Canadian writers
Discoverers of asteroids
Discoverers of comets
Jewish Canadian scientists
Living people
Scientists from Montreal
Writers from Montreal
Queen's University at Kingston alumni
Hebrew University of Jerusalem alumni
Jewish astronomers
Canadian emigrants to the United States | David H. Levy | [
"Astronomy"
] | 1,287 | [
"Astronomers",
"Jewish astronomers"
] |
591,135 | https://en.wikipedia.org/wiki/Timeline%20of%20architecture | This is a timeline of architecture, indexing the individual year in architecture pages. Notable events in architecture and related disciplines including structural engineering, landscape architecture, and city planning. One significant architectural achievement is listed for each year.
Articles for each year (in bold text, below) are summarized here with a significant event as a reference point.
2020s
2026 – The Sagrada Família is expected to be finished.
2024 – The Arch of Reunification in North Korea is demolished.
2022 – 1915 Çanakkale Bridge in Turkey, the longest suspension bridge in the world, is completed.
2021 – Central Park Tower in New York City, the tallest residential building in the world, is completed.
2020 – Torres Obispado in Monterrey, Mexico, the tallest skyscraper in Latin America, is completed.
2010s
2019 – Notre-Dame fire
2017 – Apple's new headquarters Apple Park, designed by Norman Foster, opened in Cupertino, California.
2016 – MahaNakhon opens in Bangkok; Zaha Hadid dies.
2015 – Shanghai Tower in Shanghai, the tallest building in China and the second-tallest building in the world, is completed; Charles Correa dies.
2014 – One World Trade Center opens in New York City.
2013 – Gran Torre Santiago is completed in Santiago.
2012 – The Tokyo Skytree opens in Tokyo; the Queen Elizabeth Olympic Park opens in London for the 2012 Summer Olympics; Oscar Niemeyer dies.
2011 – Al Hamra Tower, the tallest skyscraper in Kuwait, is completed.
2010 – Burj Khalifa became the tallest man-made structure in the world, at 828 metres (2,717 ft).
2000s
2009 – CityCenter opens on the Las Vegas Strip in Paradise, Nevada. This project is the largest privately funded construction project in the history of the United States.
2008 – "Water Cube", "Bird's Nest", South railway station, and other buildings in Beijing, completed for the 2008 Summer Olympics.
2007 – Tarald Lundevall completes the Oslo Opera House in Oslo, Norway.
2006 – Construction begins on the Freedom Tower, on the site of the former World Trade Center.
2005 – Casa da Música opens in Porto, Portugal, designed by the Dutch architect Rem Koolhaas with Office for Metropolitan Architecture.
2004 – 30 St Mary Axe (also known as "the Gherkin" and the Swiss Re Building), designed by Norman Foster, completed in the City of London.
2003 – Taipei 101, designed by C.Y. Lee & Partners, the world's tallest building from 2004 to 2010, is topped out.
2002 – Simmons Hall dormitory, designed by architect Steven Holl, completed at the Massachusetts Institute of Technology.
2001 – Jewish Museum Berlin designed by Daniel Libeskind opens to the public.
2000 – The Emirates Towers are both completed in Dubai; the London Eye opens in London.
1990s
1999 – Jewish Museum Berlin, designed by Daniel Libeskind is completed.
1998 – Petronas Twin Towers, Kuala Lumpur, Malaysia, designed by César Pelli, completed (world's tallest building 1998–2004). Kiasma Museum of Contemporary Art by Steven Holl opens to public.
1997 – Guggenheim Museum Bilbao designed by Frank Gehry. Sky Tower (Auckland) completed.
1996 – Oscar Niemeyer completes the Niterói Contemporary Art Museum in Brazil.
1996 – Aronoff Center for Design and Art, University of Cincinnati completed by Peter Eisenman.
1995 – Steven Holl Architects begin construction of St. Ignatius Chapel at Seattle University.
1994 – Building of the Basel Signal Box by Herzog and de Meuron
1993 – The Umeda Sky Building in Osaka City, Japan is completed.
1992 – The Bank of America Corporate Center in Charlotte, North Carolina is completed.
1991 – Stansted Airport terminal building in Essex, England, designed by Norman Foster, is completed.
1990 – Frederick Weisman Museum of Art, University of Minnesota completed by Frank Gehry.
1980s
1989 – I. M. Pei's pyramid addition to the Louvre is opened.
1988 – MOMA Exhibition called Deconstructivist architecture opens.
1987 – The Riga Radio & TV Tower in Riga, Latvia is completed.
1986 – The Lloyd's Building in London, designed by Richard Rogers, is completed.
1985 – The HSBC Headquarters Building in Hong Kong, China by Norman Foster, is completed.
1984 – Philip Johnson's AT&T Building opens in New York City
1983 – Xanadu House in Kissimmee opened.
1982 – Design competition is held for the Parc de la Villette in Paris.
1981 – Richard Serra installs Tilted Arc in the Federal Plaza in New York City. The sculpture is removed in 1989.
1980 – Santa Monica Place constructed by Frank Gehry.
1970s
1979 – Charles Moore designs the Piazza d'Italia in New Orleans.
1978 – United Nations City in Vienna, Austria is completed.
1977 – The Centre Georges Pompidou in Paris, designed by Renzo Piano, Richard Rogers and Gianfranco Franchini, is opened.
1976 – The Barbican Estate, designed by Chamberlin, Powell and Bon, opens in the City of London.
1976 – The CN Tower in Toronto opens as the tallest freestanding structure on land.
1975 – Completion of the Seoul Tower in Seoul, South Korea.
1974 – National Assembly Building in Dhaka, Bangladesh is completed.
1973 – The World Trade Center towers, designed by Minoru Yamasaki, are opened in New York.
1972 – The Transamerica Pyramid in San Francisco, California, designed by William Pereira, is completed.
1971 – Rothko Chapel in Houston, Texas, designed by Mark Rothko and Philip Johnson is completed.
1970 – Construction begins on the Sears Tower in Chicago, designed by Bruce Graham and Fazlur Khan (of Skidmore, Owings & Merrill).
1960s
1969 – Fernsehturm Berlin opens. Ludwig Mies van der Rohe and Walter Gropius die.
1968 – Mies van der Rohe's New National Gallery in Berlin finished.
1967 – Expo 67 in Montreal features the American pavilion, a geodesic dome designed by Buckminster Fuller, and the Habitat 67 housing complex designed by Moshe Safdie.
1966 – The Gateway Arch by Eero Saarinen is finished in St. Louis, Missouri.
1965 – NASA's Cape Canaveral VAB, the Niagara Skylon Tower, Philadelphia's LOVE Park, the Tel Aviv Shalom Meir tower and the Salk Institute all open.
1964 – The Unisphere heads New York World's Fair.
1963 – The Palace of Assembly at Chandigarh, India, is finished.
1962 – Orinda House by Charles W. Moore is completed.
1962 – Seattle Space Needle & TWA Terminal by Saarinen at JFK are opened.
1961 – Louis Kahn finishes the Richards Medical Building at the University of Pennsylvania in Philadelphia.
1960 – Lucio Costa & Oscar Niemeyer plan buildings of Brasília, new capital of Brazil. The Television Centre for the BBC is opened in London.
1950s
1959 – Frank Lloyd Wright's Guggenheim Museum in New York City is finished after 16 years of work on the project.
1958 – The Seagram Building in New York designed by Ludwig Mies van der Rohe and Philip Johnson is completed.
1957 – The Interbau 57 exposition in Berlin features structures by Alvar Aalto, Walter Gropius and his The Architects' Collaborative (TAC), and an unité by Le Corbusier.
1956 – Crown Hall at the Illinois Institute of Technology, Chicago, designed by Mies van der Rohe, is finished.
1955 – Completion of Le Corbusier's Notre Dame du Haut chapel at Ronchamp, France and Disneyland (the world's first theme park) in Anaheim, California.
1954 – Louis Kahn finishes his Yale University Art Gallery in New Haven, Connecticut, US.
1953 – Completion of the United Nations Headquarters in New York by a design team headed by Wallace Harrison and Max Abramowitz.
1952 – Le Corbusier completes his Unité d'Habitation in Marseilles.
1951 – Mies van der Rohe's Lake Shore Drive Apartments completed in Chicago.
1950 – Eames House completed in Santa Monica, California, designed by Charles and Ray Eames.
1940s
1949 – Glass House in New Canaan, Connecticut designed by Philip Johnson.
1948 – Pietro Belluschi completes the Equitable Building in Portland, Oregon.
1947 – Alvar Aalto builds the Baker House dormitories at the Massachusetts Institute of Technology.
1946 – Le Corbusier draws up plans for La Rochelle-La Pallice, while his efforts to redesign Saint-Dié-des-Vosges (both cities in France) are foiled.
1945 – John Entenza launches the Case Study Houses Program through his post as editor of Arts & Architecture magazine.
1944 – Frank Lloyd Wright builds the research tower for his Johnson Wax Headquarters in Racine, Wisconsin.
1943 – Oscar Niemeyer completes his Pampulha project in Brazil.
1942 – Vichy rejects Le Corbusier's Obus E plan for Algiers.
1941 – Australian War Memorial in Canberra, Australia, completed.
1940 – Peter Behrens dies.
1930s
1939 – The 1939 World's Fair in New York includes the Finnish Pavilion by Alvar Aalto and the Brazilian Pavilion by Lucio Costa and Oscar Niemeyer.
1938 – Frank Lloyd Wright purchases land 26 miles from Phoenix and begins to build Taliesin West, his winter home, in Scottsdale, Arizona, US
1937 – Wright completes his house Fallingwater, at Bear Run, Pennsylvania.
1936 – Frank Lloyd Wright designs his monumental inward-looking Johnson Wax Headquarters in Racine, Wisconsin, US.
1935 – Cass Gilbert's United States Supreme Court Building is posthumously finished.
1934 – Frank Lloyd Wright draws up plans for his Broadacre City, a decentralized urban metropolis.
1933 – The Bauhaus closes under Nazi pressure.
1932 – The Museum of Modern Art (MoMA) in New York holds its exhibition on modern architecture, coining the term "International Style."
1931 – The Empire State Building, designed by Shreve, Lamb and Harmon, becomes the tallest building in the world.
1930 – William Van Alen completes the Chrysler Building, an Art Deco skyscraper in New York City, US.
1920s
1929 – Barcelona Pavilion designed by Ludwig Mies van der Rohe.
1929 – Villa Savoye designed by Le Corbusier.
1928 – Hector Guimard builds his last house in Paris.
1927 – The Weissenhof Estate, an exhibition of apartment houses designed by leading modern architects, held at Stuttgart, Germany.
1926 – Bauhaus Dessau building, designed by Walter Gropius, opened. Antoni Gaudí and Louis Majorelle die.
1925 – Government House of Thailand, in Bangkok, opened
1924 – Gerrit Rietveld completes the Schröder House in Utrecht.
1923 – Le Corbusier publishes Vers une architecture (Toward an Architecture), a summary of his ideas.
1922 – Monument to the Third International designed by Vladimir Tatlin (unbuilt).
1921 – Frank Lloyd Wright completes his Hollyhock House for Aline Barnsdall in Los Angeles, begun in 1917.
1920 – The Einstein Tower in Potsdam, designed by Erich Mendelsohn, is completed.
1910s
1919 – Bauhaus design school founded in Weimar, Germany
1918 – Birth of Jørn Utzon, designer of the Sydney Opera House.
1917 – Georges Biet's Art Nouveau house and apartment building in Nancy, Meurthe-et-Moselle is severely damaged by combat shells, but will be rebuilt nearly exactly as before in 1922.
1916 – De Stijl movement founded in the Netherlands.
1915 – Le Corbusier completes studies for his Dom-ino Houses.
1914 – Walter Gropius designs his Fagus Factory.
1913 – Cass Gilbert completes the Woolworth Building in New York.
1912 – Frank Lloyd Wright begins work on the Avery Coonley Playhouse, Riverside, Illinois.
1911 – Josef Hoffmann completes the Stoclet Palace in Brussels.
1910 – Gaudí finishes the Casa Milà in Barcelona.
1900s
1909 – Frank Lloyd Wright completes the Robie House near Chicago.
1908 – Adolf Loos publishes his essay "Ornament and Crime".
1907 – Gaudí completes the Casa Batlló in Barcelona.
1906 – Lucien Weissenburger completes his own house, a striking example of the Art Nouveau style in Nancy, Meurthe-et-Moselle.
1905 – Wright designs Unity Temple in Oak Park, Illinois.
1904 – Otto Wagner completes his Post Office Savings Bank Building in Vienna.
1903 – Josef Hoffmann finishes the Moser House in Vienna.
1902 – Otto Wagner's Viennese Stadtbahn railway system is completed.
1901 – John McArthur Jr. completes the Second Empire-style Philadelphia City Hall, the world's tallest masonry building.
1900 – The Gare d'Orsay, later the famous Musée d'Orsay, is built in Paris by Victor Laloux.
1890s
1899 – Hector Guimard is commissioned to design the edicules for the Paris Métropolitain, which have become a hallmark of Art Nouveau design.
1898 – Victor Horta designs his own house, later the Horta Museum.
1897 – Hendrik Berlage designs his Amsterdam Stock Exchange.
1896 – Eugène Vallin completes his own house and studio in Nancy (France), which is the first of many Art Nouveau structures built there by the members of the École de Nancy.
1895 – The Biltmore Estate, the largest house in the US, is completed for the Vanderbilt family in Asheville, North Carolina.
1894 – Louis Sullivan builds the Guaranty Building in Buffalo, NY, US.
1893 – Victor Horta builds what is widely considered the first full-fledged Art Nouveau structure, the Hôtel Tassel, in Brussels.
1892 – Birth of Modernist architect Richard Neutra.
1891 – Louis Sullivan completes his Wainwright Building in Saint Louis.
1890 – Louis Sullivan and Dankmar Adler build the Auditorium Building in Chicago.
1880s
1889 – The 1889 Paris exhibition showcases some of the new technologies of iron, steel, and glass, including the Eiffel Tower.
1888 – The Exposición Universal de Barcelona (1888) displays many buildings by Lluís Domènech i Montaner and other Catalan architects.
1887 – H. H. Richardson's Marshall Field Store in Chicago is completed.
1886 – Birth of Ludwig Mies van der Rohe.
1885 – William Le Baron Jenney builds the first metal-frame skyscraper, the Home Insurance Building, in Chicago.
1884 – Gaudí is given the commission for the Sagrada Família church in Barcelona, which he will work on until 1926.
1883 – Antoni Gaudí completes his Casa Vicens in Barcelona.
1881 – The Natural History Museum in London opens.
1880 – Cologne Cathedral is finally completed after 632 years.
1870s
1879 – Louis Sullivan joins Dankmar Adler's firm in Chicago.
1878 – Work begins on the Herrenchiemsee in Bavaria, designed by Georg Dollman. Death of Sir George Gilbert Scott.
1877 – St Pancras railway station in London, by Sir George Gilbert Scott, is completed.
1876 – Construction is finished on the Bayreuth Festspielhaus, designed by Gottfried Semper.
1875 – The Opéra Garnier is completed in Paris.
1874 – Completion of the California State Capitol in Sacramento, California.
1873 – Scots' Church in Melbourne, Australia is finished.
1872 – The Albert Memorial in London, designed by Sir George Gilbert Scott, is opened.
1871 – The Great Chicago Fire destroys most of the city, sparking a building boom there; The Royal Albert Hall is completed in London.
1870 – Birth of Adolf Loos.
1860s
1869 – Birth of Georges Biet.
1868 – Birth of Peter Behrens and Charles Rennie Mackintosh.
1868 – The Gyeongbokgung of Korea is reconstructed.
1867 – Birth of Frank Lloyd Wright. William Le Baron Jenney opens his architectural practice in Chicago.
1866 – Completion of the St Pancras Hotel in London by Sir George Gilbert Scott.
1865 – Birth of French architect Paul Charbonnier.
1864 – Birth of French Art Nouveau architect Jules Lavirotte.
1863 – United States Capitol building dome in Washington, D.C., is completed.
1862 – Construction begins on Henri Labrouste's reading room at the Bibliothèque Nationale de France (site Richelieu).
1861 – Birth of Victor Horta.
1860 – Construction on Longwood, the largest octagonal residence in the US, is begun in Natchez, Mississippi.
1850s
1859 – Birth of Louis Majorelle and Cass Gilbert.
1858 – The competition to design Central Park in New York is won by Frederick Law Olmsted and Calvert Vaux.
1857 – Founding of the American Institute of Architects.
1856 – Louis Sullivan and Eugène Vallin are born.
1855 – The Palais d'Industrie is built for the World's Fair in Paris.
1854 –
1853 – Baron Haussmann becomes prefect of the Seine and begins his vast urban renovations of Paris.
1852 – Birth of Antoni Gaudí.
1851 – The Crystal Palace designed by Joseph Paxton.
1850 – Lluis Domènech í Montaner and John W. Root are born.
1840s
1849 – John Ruskin's The Seven Lamps of Architecture is published.
1848 – Construction begins on the Washington Monument in Washington, D.C., though it will not be completed until 1885.
1847 – 24 August, birth of Charles Follen McKim (died 1909).
1846 – 4 September, birth of Daniel Burnham of the firm Burnham and Root.
1845 – Trafalgar Square in London, designed by Charles Barry and John Nash, is completed.
1844 – Uspensky Cathedral in Kharkiv, Ukraine is completed.
1843 – Construction begins on Henri Labrouste's Bibliothèque Sainte-Geneviève in Paris.
1842 – The Église de la Madeleine is finally consecrated in Paris as a church.
1841 – Birth of Otto Wagner.
1840 – Construction begins on the Houses of Parliament in London, designed by Sir Charles Barry and Augustus Welby Northmore Pugin.
1830s
1839 – Birth of Frank Furness in Philadelphia.
1838 – Rideau Hall is built by Scottish architect Thomas McKay.
1837 – The Royal Institute of British Architects (RIBA) is founded.
1836 – A.W.N. Pugin publishes his Contrasts, a treatise on the morality of Catholic, Gothic architecture.
1835 – The New Orleans Mint, Dahlonega Mint, and Charlotte Mint are all designed by William Strickland and begin producing coins within three years.
1834 – Alfred B. Mullet, designer of both the San Francisco and the Carson City Mints in the US, is born in Britain.
1833 – William Strickland completes the first Philadelphia Mint building.
1832 – Birth of William Le Baron Jenney.
1830 – The Altes Museum in Berlin, designed by Karl Friedrich Schinkel, is completed after seven years of construction.
1820s
1829 – The panopticon-design Eastern State Penitentiary in Philadelphia, designed by John Havilland, opens.
1828 – Completion of the Marble Arch in London, designed by John Nash.
1827 – Birth of British Gothic Revival architect William Burges.
1826 – The Menai Suspension Bridge over the Menai Strait, in Wales, designed by Thomas Telford, is completed.
1825 – The front and rear porticoes of the White House are added to the building.
1824 – The Shelbourne Hotel in Dublin, Ireland is completed.
1823 – Work begins on the British Museum in London, designed by (Sir) Robert Smirke.
1822 – Birth of landscape architect Frederick Law Olmsted.
1821 – Karl Friedrich Schinkel completes his Schauspielhaus in Berlin, and Benjamin Latrobe's Baltimore Basilica is completed.
1820 – Death of Benjamin Latrobe.
1810s
1819 – Birth of English architect Horace Jones.
1818 – Birth of American architect James Renwick Jr.
1817 – Dulwich Picture Gallery in London is designed by Sir John Soane as the first purpose-built art gallery.
1816 – Regent's Bridge, crossing the River Thames in central London, designed by James Walker, is opened.
1815 – Brighton Pavilion is redesigned by John Nash for the future King George IV.
1814 – British troops burn the White House in Washington, D.C., gutting it completely.
1813 – Death of Alexandre-Théodore Brongniart.
1812 – The Egyptian Hall in London, designed by P. F. Robinson, is completed.
1811 – The United States Capitol, designed by Benjamin Latrobe, is completed. Birth of George Gilbert Scott.
1810 – Old Saint Petersburg Stock Exchange, designed by Jean-François Thomas de Thomon, is completed.
1800s
1809 – Birth of city planner Baron Haussmann.
1808 – Construction begins on the Paris Bourse, designed by Alexandre-Théodore Brongniart.
1807 – The Templo de la Virgen del Carmen in Celaya, Guanajuato, Mexico is completed.
1806 – The Arc de Triomphe in Paris, designed by Jean Chalgrin, is commissioned by Napoleon Bonaparte.
1805 – The Ellesmere Canal, designed by Thomas Telford, is completed.
1804 – Completion of the Government House in the Bahamas.
1803 – The Raj Bhavan in Kolkata, West Bengal, India is finished.
1802 – The Temple of Saint Philip Neri in Guadalajara, Jalisco, Mexico is completed.
1801 – Birth of Henri Labrouste.
1800 – The White House in Washington, D.C. is completed by team of client George Washington, planner Pierre L'Enfant, and architect James Hoban.
1790s
1799 – Death of French neoclassicist Étienne-Louis Boullée.
1798 – Karlsruhe Synagogue, usually regarded as the first building of Egyptian Revival architecture, built by Friedrich Weinbrenner in Karlsruhe.
1797 – Ditherington Flax Mill, in Shrewsbury, England, the world's oldest surviving iron-framed building, is completed.
1796 – Somerset House in London, designed by William Chambers, is completed.
1795 – Birth of English architect Charles Barry.
1794 – Hwaseong Fortress in Suwon, Korea, begins.
1793 – Old East, the oldest public university building in the United States, is begun at the University of North Carolina.
1792 – Construction begins on Sir John Soane's house in London, later Sir John Soane's Museum.
1791 – Brandenburg Gate in Berlin is completed
1790 –
1780s
1789 – Jacques-Germain Soufflot's Panthéon in Paris is completed by his student Jean-Baptiste Rondelet.
1788 –
1787 –
1786 – Schloss Bellevue in Berlin, designed by Philipp Daniel Boumann, is completed.
1785 –
1784 –
1783 –
1782 – Alexandre-Théodore Brongniart is named architect and controller-general of the École Militaire in Paris.
1781 –
1780 – 29 August – Death of Jacques-Germain Soufflot (b. 1713).
1770s
1779 – Fridericianum in Kassel (Hesse), designed by Simon Louis du Ry, completed.
1778 – La Scala opera house in Milan (Lombardy), designed by Giuseppe Piermarini, is opened
1777 –
1776 –
1775 –
1774 –
1773 –
1772 –
1771 –
1770 –
1760s
1769 – St Clement's Church, Moscow is completed
1768 – Petit Trianon at Versailles is completed.
1767 – Arg of Karim Khan
1766 – Horace Walpole's Strawberry Hill House in London is completed.
1765 –
1764 – Construction begins on church of La Madeleine, Paris.
1763 –
1762 –
1761 –
1760 –
1750s
1759 – Royal Palace of Riofrío in Spain, designed by Virgilio Rabaglio completed.
1758 – The royal water garden of Taman Sari (Yogyakarta) on Java, designed by Tumenggung Mangundipura, is begun.
1757 – Vorontsov Palace (Saint Petersburg), designed by Francesco Bartolomeo Rastrelli, is completed.
1756 –
1755 – Nuruosmaniye Mosque in Istanbul, designed by Mustafa Ağa and Simeon Kalfa, is completed
1754 – Tomb of Safdar Jang in Delhi is completed.
1753 – The Georgian-Style Pennsylvania State House, (Independence Hall) is completed in Philadelphia, Pennsylvania.
1752 – Valletta Waterfront on Malta is built
1751 –
1750 – Rang Ghar in eastern India.
1740s
1749 – The Radcliffe Camera in Oxford, England, designed by James Gibbs, is opened as a library.
1748 –
1747 –
1746 –
1745 –
1744 –
1743 – Dresden Frauenkirche, Dresden, Germany, completed.
1742 –
1741 –
1740 –
1730s
1739 – Birth of Alexandre-Théodore Brongniart.
1738 –
1737 –
1736 –
1735 – Buckingham Palace built
1734 –
1733 –
1732 –
1731 – Basilica of Superga in the vicinity of Turin built, and designed by Filippo Juvarra
1730 –
1720s
1729 – Christ Church, Spitalfields in London is completed.
1728 –
1727 –
1726 – The remaining ruins of Liverpool Castle are demolished.
1725 –
1724 – The construction of Blenheim Palace is completed.
1723 – Mavisbank House in Loanhead, Scotland is designed. Death of Christopher Wren.
1722 –
1721 –
1720 –
1710s
1719 –
1718 –
1717 –
1716 –
1715 –
1714 –
1713 – Vizianagaram Fort in South India is built.
1712 –
1711 –
1710 –
1700s
1709 –
1708 – St. Paul's Cathedral in London, designed by Christopher Wren, is completed.
1707 –
1706 –
1705 – November: In Williamsburg, capital of the Virginia colony in America, construction of the Capitol building is completed.
1704 – St Magnus-the-Martyr in London is completed.
1703 –
1702 – The Thomaskirche in Leipzig, Germany is completed.
1701 –
1700 –
17th century
1690s – Potala Palace is completed in Tibet.
1690s – The city of Noto, Italy, on Sicily, is devastated by an earthquake (1693), and a rebuilding program begins in the Baroque style.
1680s – Church of Les Invalides, Paris is built by Jules Hardouin-Mansart.
1670s – The Royal Greenwich Observatory in London, designed by Christopher Wren is completed (1676).
1660s – Louis XIV, with the architect Jules Hardouin-Mansart, begins to enlarge the Palace of Versailles (1661); foundation stone of Petersberg Citadel, Erfurt, Germany laid (1665).
1650s – Completion of the church Sant'Agnese in Agone in Rome, designed by Borromini and Carlo Rainaldi.
1640s – Borromini builds the church Sant'Ivo alla Sapienza in Rome.
1630s – Emperor Shah Jahan constructs the Taj Mahal in Agra, India.
1630s – Borromini builds the church San Carlo alle Quattro Fontane in Rome.
1620s – St. Peter's Basilica is completed in Vatican City (1626).
1610s – Mohammadreza Isfahani builds Naghsh-i Jahan Square in Isfahan, Iran.
1600s – 33 pol bridge is constructed in Isfahan, Iran.
16th century
1590s – Bernini and Borromini are born.
1580s –
1570s –
1560s – work begins on Palladio's Villa Capra "La Rotonda".
1550s –
1540s –
1530s – Work begins on Michelangelo's Piazza del Campidoglio (Capitoline Hill).
1520s – Santhome Church was built in Chennai.
1510s – Construction begins on Chateau Chambord.
1500s – Construction begins on St. Peter's Basilica. Birth of Andrea Palladio.
15th century
1490s –
1480s – Vitruvius' treatise De architectura and Leon Battista Alberti's De re aedificatoria were published, having previously existed only in manuscript.
1470s –
1460s –
1450s – Architecture of the Ottoman Empire develops after the capture of Constantinople.
1440s –
1430s –
1420s – The Forbidden City of China is completed
1410s –
1400s – The Changdeokgung of Korea is completed.
14th century
14th-century architecture
13th century
1290s –
1280s –
1270s – St. Augustine's Monastery (Erfurt), Germany begun 1277
1260s – Fakr Ad-Din Mosque is finished in the Sultanate of Mogadishu
1250s –
1240s – The foundation stone of Cologne Cathedral in Cologne is laid.
1230s –
1220s –
1210s –
1200s –
12th century
1190s – Construction of Qutb Minar started in India
1190s – Construction begins on the present form of Chartres Cathedral after a fire.
1180s –
1170s –
1160s –
1150s –
1140s – Abbot Suger supervises the reconstruction of Saint-Denis in the Gothic style
1130s – Work begins on the Basilica of Saint-Denis in France.
1120s –
1110s –
1100s – The Yingzao Fashi, an important set of building standards written by Li Jie, is published during the mid Song dynasty.
11th century
1090s – Durham Cathedral founded; Old Synagogue (Erfurt), Germany, one of the oldest synagogue buildings in Europe (1094)
1080s –
1070s – St Albans Cathedral commenced; built from the ruins of Roman Verulamium.
1060s –
1050s – Greensted Church built, oldest surviving wooden church (extensively repaired) in the world, possibly the oldest wooden building in Europe.
1040s –
1030s – Gangaikonda Cholapuram built by the kingdom of Rajendra Chola I under Chola dynasty.
1020s –
1010s –
1000s – Brihadisvara Temple built by the kingdom of Rajaraja I under Chola dynasty. Construction of stone buildings in Great Zimbabwe begins.
1st millennium AD
900s – Akhtala Monastery built, intended as a fortress.
800s –
880 – The Nea Ekklesia is inaugurated in Constantinople, setting the model for all later cross-in-square Orthodox churches.
848 – San Miguel de Lillo built in the Asturian pre-romanesque style of Spain.
805 – Aachen Cathedral consecrated (major renovations in the 10th century).
700s – Seokguram of Korea is constructed.
605 – Anji Bridge, China, the world's oldest known open-spandrel segmental stone arch bridge, is completed.
600s – St. Hripsime Church, Echmiadzin, one of the world's oldest surviving churches, constructed.
500s – Hagia Sophia built in its present form. Oldest known surviving roof truss in Saint Catherine's Monastery.
495–504 – Basilica of Sant'Apollinare Nuovo, Ravenna.
470 – Basilica of St. Martin, Tours.
432–40 – Santa Maria Maggiore and Santa Sabina in Rome.
400s – Mahabalipuram, an ancient port city of south-east India, flourishes in Tamil Nadu under the Pallava Kingdom; its celebrated rock-cut monuments are built later, in the 7th century, under Mahendravarman I and his son Narasimhavarman I.
391 – Serapeum of Alexandria is destroyed in a conflict between Christians and pagans.
c.330 – Basilica of Saint Paul Outside the Walls, Rome.
325 – Old St. Peter's Basilica, Rome.
320 – Construction of Archbasilica of Saint John Lateran, Rome, begun using standards that will be followed in future basilica designs.
315 – Arch of Constantine in Rome dedicated to the Battle of Milvian Bridge.
312 – Aula Palatina (Basilica of Constantine) at Trier, the brick audience hall, completed.
307–312 – Basilica of Maxentius and Catacomb of the Via Latina in Rome begun.
300s – Nalanda, an ancient center of higher learning, is built in Gupta Empire in India.
296–306 – Baths of Diocletian in Rome.
262 – Arch of Gallienus in Rome completed.
231 – Dura-Europos church in Syria, a house built c.200 converted into a Christian Church.
224 – Dura-Europos synagogue one of the oldest synagogues.
211 – Arch of Drusus in Rome completed.
212–16 – Baths of Caracalla in Rome.
203 – Arch of Septimius Severus in Rome completed.
200s –
200 – Pyramid of the Sun in Teotihuacan is constructed.
193 – Column of Marcus Aurelius dedicated in Rome.
134 – Ponte Sant'Angelo across the Tiber in Rome completed.
118–28 – Pantheon, Rome is completed, an early full dome.
113 – Trajan's Column in Rome dedicated.
104–6 – Alcántara Bridge, a Roman multiple arched bridge over the Tagus River in Spain.
82 – Arch of Titus completed in Rome, commemorating the sack of Jerusalem at the end of the Second Temple period and the beginning of the Jewish Diaspora.
70–80 – Colosseum in Rome built under Emperors Vespasian and Titus.
60–69 – Domus Aurea in Rome begun.
52 – Porta Maggiore (Porta Prenestina) in Rome built. A subterranean Neopythagorean basilica nearby also dates to this century.
47–50 – Romans establish the city of Londinium in Britain.
40 – Lighthouse at Boulogne built.
3 – Gungnae City of Goguryeo completed.
1st millennium BC
1–99 BC – Vitruvius writes De architectura (c. 15 BC). Expansion of Herod the Great's temple begins (c. 20 BC). Pont du Gard, Provence, France (traditionally dated c. 19 BC). Pons Fabricius, oldest functional stone Roman bridge in Rome, Italy (62 BC). Maison Carrée Roman temple is constructed (c. 16 BC). Mausoleum of Augustus is completed (28 BC).
100s – Across the Tiber in Italy: Ponte Milvio is the second bridge at this location (115 BC); Pons Aemilius is the oldest stone Roman bridge in Rome (126 BC).
200s – Lighthouse of Alexandria in Egypt completed and is the tallest man-made structure in existence at the time (c. 246 BC). The city of Djenné-Djenno is first occupied (250 BC). Colossus of Rhodes is completed (280 BC).
300s – University of ancient Taxila, one of the first institutes of learning, is established. Mausoleum at Halicarnassus, one of the Seven Wonders of the Ancient World, is completed (350 BC). Alexander the Great founds the city of Alexandria and plans its layout (331 BC). The city of Antioch is founded (300 BC).
400s – Erechtheion in Athens completed (406 BC). Completion of the final form of the Parthenon in Athens (432 BC). Construction of Pataliputra (modern-day Patna) in the Magadha empire (Indian Subcontinent) begun (490 BC).
500s – Construction of the Temple of Artemis in Ephesus begins (c. 500 BC). Second Temple in Jerusalem completed (February 25, 515 BC). Work begins on Persepolis (515 BC). Temple of Jupiter Optimus Maximus completed in Rome (509 BC).
600s – Port city of Naucratis is founded in Egypt (c. 625 BC). Massalia (modern-day Marseille) is founded (c. 600 BC).
700s – According to legend, the city of Rome is founded (753 BC).
800s –
900s – The earliest Greek temple is built at Samos, with some timber framing based on the Mycenaean megaron.
2nd millennium BC
1000s BC –
1100s –
1200s – Chogha Zanbil built. End of Harappan architecture.
1300s –
1400s –
1500s –
1600s – Final construction of Stonehenge in England
1700s –
1800s – Last Egyptian pyramid built in Hawara
1900s –
3rd millennium BC
2000s BC – Construction of the Ziggurat of Ur takes place.
2100s –
2200s –
2300s –
2400s –
2500s – Great Pyramid of Giza built in Egypt (c. 2560 BC).
2600s – Ancient city of Mohenjo-daro is built in modern-day Pakistan. Pyramid of Djoser built in Egypt.
2700s –
2800s –
2900s – Longshan culture in China (c. 2900–1600 BC); examples in Shandong, Henan, and southern Shaanxi and Shanxi provinces.
Neolithic
4th millennium BC – Harappa ancient city built.
5th millennium BC – (5000–3000 BC) Yangshao culture in China.
6th millennium BC – (6000–2000 BC) Emergence of wooden frames in Chinese architecture including the use of mortise and tenon joinery to build wood beamed houses.
7th millennium BC – Çatalhöyük in Anatolia constructed without streets.
8th millennium BC – Lahuradewa architecture in the Ganges plain of India. Early Mehrgarh settlements are established near the Bolan Pass in Pakistan. Earliest town sites with simple residential neighbourhoods at Jarmo, Jericho, and 'Ain Ghazal in the Fertile Crescent.
10th millennium BC – Göbekli Tepe in Turkey, an ancient structure believed to be the first place of worship.
References
See also
Table of years in architecture
Timeline of architectural styles
Outline of architecture
History of architecture
Architecture
Architecture lists
Architecture | Timeline of architecture | [
"Engineering"
] | 7,653 | [
"Architecture lists",
"Architectural history",
"Architecture"
] |
591,253 | https://en.wikipedia.org/wiki/Kirchhoff%27s%20circuit%20laws | Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis.
Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits.
Kirchhoff's current law
This law, also called Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently:
The algebraic sum of currents in a network of conductors meeting at a point is zero.
Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as: $\sum_{k=1}^{n} I_k = 0$, where $n$ is the total number of branches with currents flowing towards or away from the node.
Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region. This means that the current law relies on the fact that the net charge in the wires and components is constant.
Uses
A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such as SPICE. The current law is used with Ohm's law to perform nodal analysis.
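As a rough illustration of that matrix formulation, the following Python sketch performs nodal analysis on a small hypothetical resistive circuit (topology and component values are invented for the example): the current law at each non-reference node supplies one row of a conductance matrix G in G v = i.

```python
import numpy as np

# Hypothetical 2-node resistive circuit (values invented for illustration):
# a 1 A source drives node 1; R1 = 2 ohm from node 1 to ground, R2 = 4 ohm
# between nodes 1 and 2, R3 = 8 ohm from node 2 to ground. KCL at each
# non-reference node gives one row of the conductance matrix G in G @ v = i.
G1, G2, G3 = 1/2, 1/4, 1/8            # branch conductances (siemens)

G = np.array([[G1 + G2, -G2],         # KCL at node 1
              [-G2,      G2 + G3]])   # KCL at node 2
i = np.array([1.0, 0.0])              # injected currents (amperes)

v = np.linalg.solve(G, i)             # node voltages
print(v)                              # [1.71428571 1.14285714]
```

SPICE-class simulators assemble essentially this kind of system, far more generally (with stamps for sources and nonlinear devices), and solve it at each analysis step.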
The current law is applicable to any lumped network irrespective of the nature of the network; whether unilateral or bilateral, active or passive, linear or non-linear.
Kirchhoff's voltage law
This law, also called Kirchhoff's second law, or Kirchhoff's loop rule, states the following:
The directed sum of the potential differences (voltages) around any closed loop is zero.
Similarly to Kirchhoff's current law, the voltage law can be stated as: $\sum_{k=1}^{n} V_k = 0$.
Here, $n$ is the total number of voltages measured.
Generalization
In the low-frequency limit, the voltage drop around any loop is zero. This includes imaginary loops arranged arbitrarily in space – not limited to the loops delineated by the circuit elements and conductors. In the low-frequency limit, this is a corollary of Faraday's law of induction (which is one of Maxwell's equations).
This has practical application in situations involving "static electricity".
Limitations
Kirchhoff's circuit laws are the result of the lumped-element model and both depend on the model being applicable to the circuit in question. When the model is not applicable, the laws do not apply.
The current law is dependent on the assumption that the net charge in any wire, junction or lumped component is constant. Whenever the electric field between parts of the circuit is non-negligible, such as when two wires are capacitively coupled, this may not be the case. This occurs in high-frequency AC circuits, where the lumped element model is no longer applicable. For example, in a transmission line, the charge density in the conductor may be constantly changing.
On the other hand, the voltage law relies on the fact that the actions of time-varying magnetic fields are confined to individual components, such as inductors. In reality, the induced electric field produced by an inductor is not confined, but the leaked fields are often negligible.
Modelling real circuits with lumped elements
The lumped element approximation for a circuit is accurate at low frequencies. At higher frequencies, leaked fluxes and varying charge densities in conductors become significant. To an extent, it is possible to still model such circuits using parasitic components. If frequencies are too high, it may be more appropriate to simulate the fields directly using finite element modelling or other techniques.
To model circuits so that both laws can still be used, it is important to understand the distinction between physical circuit elements and the ideal lumped elements. For example, a wire is not an ideal conductor. Unlike an ideal conductor, wires can inductively and capacitively couple to each other (and to themselves), and have a finite propagation delay. Real conductors can be modeled in terms of lumped elements by considering parasitic capacitances distributed between the conductors to model capacitive coupling, or parasitic (mutual) inductances to model inductive coupling. Wires also have some self-inductance.
Example
Assume an electric network consisting of two voltage sources and three resistors.
According to the first law: $i_1 - i_2 - i_3 = 0$.
Applying the second law to the closed circuit $s_1$, and substituting for voltage using Ohm's law, gives: $-R_2 i_2 + \mathcal{E}_1 - R_1 i_1 = 0$.
The second law, again combined with Ohm's law, applied to the closed circuit $s_2$ gives: $-R_3 i_3 - \mathcal{E}_2 - \mathcal{E}_1 + R_2 i_2 = 0$.
This yields a system of linear equations in $i_1$, $i_2$, $i_3$,
which is equivalent to
$i_1 - i_2 - i_3 = 0$, $R_1 i_1 + R_2 i_2 = \mathcal{E}_1$, $R_2 i_2 - R_3 i_3 = \mathcal{E}_1 + \mathcal{E}_2$.
Assuming $R_1 = 100\,\Omega$, $R_2 = 200\,\Omega$, $R_3 = 300\,\Omega$, $\mathcal{E}_1 = 3\,\mathrm{V}$ and $\mathcal{E}_2 = 4\,\mathrm{V}$,
the solution is $i_1 = \frac{1}{1100}\,\mathrm{A}$, $i_2 = \frac{4}{275}\,\mathrm{A}$, $i_3 = -\frac{3}{220}\,\mathrm{A}$.
The current $i_3$ has a negative sign, which means the assumed direction of $i_3$ was incorrect and $i_3$ is actually flowing in the direction opposite to the red arrow labeled $i_3$. The current in $R_3$ flows from left to right.
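As a quick numeric check of this example (a sketch assuming the component values reconstructed above), the same three equations can be solved with NumPy:

```python
import numpy as np

# Solving the three equations of the example above. Rows encode the
# current law and the two loop equations; values are those assumed above.
A = np.array([[1.0,  -1.0,   -1.0],    # i1 - i2 - i3 = 0
              [100.0, 200.0,  0.0],    # R1*i1 + R2*i2 = E1
              [0.0,   200.0, -300.0]]) # R2*i2 - R3*i3 = E1 + E2
b = np.array([0.0, 3.0, 7.0])

i1, i2, i3 = np.linalg.solve(A, b)
print(i1, i2, i3)  # 0.000909..., 0.014545..., -0.013636...  (amperes)
```

The third current comes out negative, matching the sign discussion above.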
See also
Duality (electrical circuits)
Faraday's law of induction
Lumped matter discipline
Tellegen's Theorem
References
External links
Divider Circuits and Kirchhoff's Laws chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series
Circuit theorems
Conservation equations
Eponymous laws of physics
Linear electronic circuits
Voltage
1845 in science
Gustav Kirchhoff | Kirchhoff's circuit laws | [
"Physics",
"Mathematics"
] | 1,243 | [
"Equations of physics",
"Physical quantities",
"Electrical systems",
"Conservation laws",
"Quantity",
"Mathematical objects",
"Equations",
"Physical systems",
"Circuit theorems",
"Voltage",
"Conservation equations",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Physi... |
591,280 | https://en.wikipedia.org/wiki/Kirchhoff%27s%20law%20of%20thermal%20radiation | In heat transfer, Kirchhoff's law of thermal radiation refers to wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium, including radiative exchange equilibrium. It is a special case of Onsager reciprocal relations as a consequence of the time reversibility of microscopic dynamics, also known as microscopic reversibility.
A body at temperature $T$ radiates electromagnetic energy. A perfect black body in thermodynamic equilibrium absorbs all light that strikes it, and radiates energy according to a unique law of radiative emissive power for temperature $T$ (Stefan–Boltzmann law), universal for all perfect black bodies. Kirchhoff's law states that: $\frac{E(\nu, T)}{\alpha(\nu, T)} = E_b(\nu, T)$, i.e. for a body of any arbitrary material, the ratio of its emissive power $E$ to its dimensionless coefficient of absorption $\alpha$ is a universal function of frequency and temperature alone, the black-body emissive power $E_b$.
Here, the dimensionless coefficient of absorption (or the absorptivity) $\alpha$ is the fraction of incident light (power) at each spectral frequency that is absorbed by the body when it is radiating and absorbing in thermodynamic equilibrium.
In slightly different terms, the emissive power of an arbitrary opaque body of fixed size and shape at a definite temperature can be described by a dimensionless ratio, sometimes called the emissivity: the ratio of the emissive power of the body to the emissive power of a black body of the same size and shape at the same fixed temperature. With this definition, Kirchhoff's law states, in simpler language: for an arbitrary body emitting and absorbing thermal radiation in thermodynamic equilibrium, the emissivity is equal to the absorptivity.
In some cases, emissive power and absorptivity may be defined to depend on angle, as described below. The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium.
Kirchhoff's law has another corollary: the emissivity cannot exceed one (because the absorptivity cannot, by conservation of energy), so it is not possible to thermally radiate more energy than a black body, at equilibrium. In negative luminescence the angle and wavelength integrated absorption exceeds the material's emission; however, such systems are powered by an external source and are therefore not in thermodynamic equilibrium.
Principle of detailed balance
Kirchhoff's law of thermal radiation has a refinement in that not only is thermal emissivity equal to absorptivity, it is equal in detail. Consider a leaf. It is a poor absorber of green light (around 550 nm), which is why it looks green. By the principle of detailed balance, it is also a poor emitter of green light.
In other words, if a material, illuminated by black-body radiation of temperature $T$, is dark at a certain frequency $\nu$, then its thermal radiation will also be dark at the same frequency $\nu$ and the same temperature $T$.
More generally, all intensive properties are balanced in detail. So for example, the absorptivity at a certain incidence direction, for a certain frequency, of a certain polarization, is the same as the emissivity at the same direction, for the same frequency, of the same polarization. This is the principle of detailed balance.
History
Before Kirchhoff's law was recognized, it had been experimentally established that a good absorber is a good emitter, and a poor absorber is a poor emitter. Naturally, a good reflector must be a poor absorber. This is why, for example, lightweight emergency thermal blankets are based on reflective metallic coatings: they lose little heat by radiation.
Kirchhoff's great insight was to recognize the universality and uniqueness of the function that describes the black body emissive power. But he did not know the precise form or character of that universal function. Attempts were made by Lord Rayleigh and Sir James Jeans in 1900–1905 to describe it in classical terms, resulting in the Rayleigh–Jeans law. This law turned out to be inconsistent with observation, yielding the ultraviolet catastrophe. The correct form of the law was found by Max Planck in 1900, assuming quantized emission of radiation, and is termed Planck's law. This marks the advent of quantum mechanics.
Theory
In a blackbody enclosure that contains electromagnetic radiation with a certain amount of energy at thermodynamic equilibrium, this "photon gas" will have a Planck distribution of energies.
One may suppose a second system, a cavity with walls that are opaque, rigid, and not perfectly reflective to any wavelength, to be brought into connection, through an optical filter, with the blackbody enclosure, both at the same temperature. Radiation can pass from one system to the other. For example, suppose that in the second system the density of photons in a narrow frequency band around wavelength $\lambda$ were higher than that of the first system. If the optical filter passed only that frequency band, then there would be a net transfer of photons, and their energy, from the second system to the first. This is in violation of the second law of thermodynamics, which requires that there can be no net transfer of heat between two bodies at the same temperature.
In the second system, therefore, at each frequency, the walls must absorb and emit energy in such a way as to maintain the black body distribution. Hence absorptivity and emissivity must be equal. The absorptivity $\alpha_\lambda$ of the wall is the ratio of the energy absorbed by the wall to the energy incident on the wall, for a particular wavelength. Thus the absorbed energy is $\alpha_\lambda E_{b\lambda}(\lambda, T)$, where $E_{b\lambda}(\lambda, T)$ is the intensity of black-body radiation at wavelength $\lambda$ and temperature $T$. Independent of the condition of thermal equilibrium, the emissivity of the wall is defined as the ratio of emitted energy to the amount that would be radiated if the wall were a perfect black body. The emitted energy is thus $\varepsilon_\lambda E_{b\lambda}(\lambda, T)$, where $\varepsilon_\lambda$ is the emissivity at wavelength $\lambda$. For the maintenance of thermal equilibrium, these two quantities must be equal, or else the distribution of photon energies in the cavity will deviate from that of a black body. This yields Kirchhoff's law: $\alpha_\lambda = \varepsilon_\lambda$.
By a similar, but more complicated argument, it can be shown that, since black-body radiation is equal in every direction (isotropic), the emissivity and the absorptivity, if they happen to be dependent on direction, must again be equal for any given direction.
Average and overall absorptivity and emissivity data are often given for materials with values which differ from each other. For example, white paint is quoted as having an absorptivity of 0.16, while having an emissivity of 0.93. This is because the absorptivity is averaged with weighting for the solar spectrum, while the emissivity is weighted for the emission of the paint itself at normal ambient temperatures. The absorptivity quoted in such cases is being calculated by:
$\alpha = \dfrac{\int \alpha(\lambda)\, I_{\mathrm{sun}}(\lambda)\, d\lambda}{\int I_{\mathrm{sun}}(\lambda)\, d\lambda}$
while the average emissivity is given by:
$\varepsilon = \dfrac{\int \varepsilon(\lambda)\, I_{\mathrm{paint}}(\lambda)\, d\lambda}{\int I_{\mathrm{paint}}(\lambda)\, d\lambda}$
where $I_{\mathrm{sun}}(\lambda)$ is the emission spectrum of the sun, and $I_{\mathrm{paint}}(\lambda)$ is the emission spectrum of the paint. Although, by Kirchhoff's law, $\alpha(\lambda) = \varepsilon(\lambda)$ in the above equations, the above averages $\alpha$ and $\varepsilon$ are not generally equal to each other. The white paint will serve as a very good insulator against solar radiation, because it is very reflective of the solar radiation, and although it therefore emits poorly in the solar band, its temperature will be around room temperature, and it will emit whatever radiation it has absorbed in the infrared, where its emission coefficient is high.
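A rough numeric sketch of these weighted averages, using an invented step-function spectral absorptivity for the paint and Planck curves as the weighting spectra (all numbers here are hypothetical illustrations, not measured data):

```python
import numpy as np

# The solar spectrum is approximated by a 5778 K Planck curve, the paint's
# own emission by a 300 K Planck curve, and the paint's spectral
# absorptivity by a toy step function. On a uniform wavelength grid the
# d-lambda factors cancel, so plain sums implement the weighted averages.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Black-body spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2*h*c**2 / lam**5) / np.expm1(h*c / (lam*kB*T))

lam = np.linspace(0.3e-6, 50e-6, 20000)          # 0.3-50 micrometres
alpha_lam = np.where(lam < 2.5e-6, 0.16, 0.93)   # toy alpha(lambda) = eps(lambda)

I_sun, I_paint = planck(lam, 5778.0), planck(lam, 300.0)
alpha_avg = (alpha_lam * I_sun).sum() / I_sun.sum()
eps_avg = (alpha_lam * I_paint).sum() / I_paint.sum()
print(alpha_avg, eps_avg)   # low solar-weighted absorptivity, high emissivity
```

Even though the spectral absorptivity equals the spectral emissivity pointwise, the two averages differ sharply because the two weighting spectra barely overlap.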
Planck's derivation
Historically, Planck derived the black body radiation law and detailed balance according to a classical thermodynamic argument, with a single heuristic step, which was later interpreted as a quantization hypothesis.
In Planck's setup, he started with a large Hohlraum at a fixed temperature $T$. At thermal equilibrium, the Hohlraum is filled with a distribution of EM waves at thermal equilibrium with the walls of the Hohlraum. Next, he considered connecting the Hohlraum to a single small resonator, such as a Hertzian resonator. The resonator reaches a certain form of thermal equilibrium with the Hohlraum, when the spectral input into the resonator equals the spectral output at the resonance frequency.
Next, suppose there are two Hohlraums at the same fixed temperature $T$; then Planck argued that the thermal equilibrium of the small resonator is the same when connected to either Hohlraum. For, we can disconnect the resonator from one Hohlraum and connect it to another. If the thermal equilibrium were different, then we have just transported energy from one to another, violating the second law. Therefore, the spectra of all black bodies are identical at the same temperature.
Using a heuristic of quantization, which he gleaned from Boltzmann, Planck argued that a resonator tuned to frequency $\nu$, with average energy $U$, would contain entropy $S = k\left[\left(1 + \tfrac{U}{h\nu}\right)\ln\left(1 + \tfrac{U}{h\nu}\right) - \tfrac{U}{h\nu}\ln\tfrac{U}{h\nu}\right]$ for some constant $h$ (later termed the Planck constant). Then applying $\partial S/\partial U = 1/T$, Planck obtained the black body radiation law.
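That step can be checked symbolically. The following SymPy sketch (variable names are illustrative) substitutes the Planck-form energy into $\partial S/\partial U$ and confirms the result equals $1/T$:

```python
import sympy as sp

# S is Planck's resonator entropy; we check that dS/dU equals 1/T when U
# takes the Planck form h*nu/(exp(h*nu/(k*T)) - 1).
U, h, nu, k, T = sp.symbols('U h nu k T', positive=True)

u = U / (h*nu)
S = k*((1 + u)*sp.log(1 + u) - u*sp.log(u))
dSdU = sp.diff(S, U)

U_planck = h*nu / (sp.exp(h*nu/(k*T)) - 1)
residual = dSdU.subs(U, U_planck) - 1/T
# logcombine merges the logarithms so the residual simplifies to zero:
print(sp.simplify(sp.logcombine(residual, force=True)))   # 0
```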
Another argument that does not depend on the precise form of the entropy function can be given as follows. Suppose we have a material that violates Kirchhoff's law when integrated, such that the total coefficient of absorption is not equal to the coefficient of emission at a certain temperature $T$. Then, if the material at temperature $T$ is placed into a Hohlraum at temperature $T$, it would spontaneously emit more than it absorbs, or conversely, thus spontaneously creating a temperature difference, violating the second law.
Finally, suppose we have a material that violates Kirchhoff's law in detail, such that its coefficient of absorption is not equal to its coefficient of emission at a certain temperature $T$ and a certain frequency $\nu$. Since it does not violate Kirchhoff's law when integrated, there must exist two frequencies $\nu_1$ and $\nu_2$ such that the material absorbs more than it emits at $\nu_1$, and conversely at $\nu_2$. Now, place this material in one Hohlraum. It would spontaneously create a shift in the spectrum, making it higher at $\nu_2$ than at $\nu_1$. However, this then allows us to tap from one Hohlraum with a resonator tuned at $\nu_2$, then detach and attach to another Hohlraum at the same temperature, thus transporting energy from one to another, violating the second law.
We may apply the same argument for polarization and direction of radiation, obtaining the full principle of detailed balance.
Black bodies
Near-black materials
It has long been known that a lamp-black coating will make a body nearly black. Some other materials are nearly black in particular wavelength bands. Such materials do not survive all the very high temperatures that are of interest.
An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining average reflectance of 0.045%.
Opaque bodies
Bodies that are opaque to thermal radiation that falls on them are valuable in the study of heat radiation. Planck analyzed such bodies with the approximation that they be considered topologically to have an interior and to share an interface. They share the interface with their contiguous medium, which may be rarefied material such as air, or transparent material, through which observations can be made. The interface is not a material body and can neither emit nor absorb. It is a mathematical surface belonging jointly to the two media that touch it. It is the site of refraction of radiation that penetrates it and of reflection of radiation that does not. As such it obeys the Helmholtz reciprocity principle. The opaque body is considered to have a material interior that absorbs all and scatters or transmits none of the radiation that reaches it through refraction at the interface. In this sense the material of the opaque body is black to radiation that reaches it, while the whole phenomenon, including the interior and the interface, does not show perfect blackness. In Planck's model, perfectly black bodies, which he noted do not exist in nature, besides their opaque interior, have interfaces that are perfectly transmitting and non-reflective.
Cavity radiation
The walls of a cavity can be made of opaque materials that absorb significant amounts of radiation at all wavelengths. It is not necessary that every part of the interior walls be a good absorber at every wavelength. The effective range of absorbing wavelengths can be extended by the use of patches of several differently absorbing materials in parts of the interior walls of the cavity. In thermodynamic equilibrium the cavity radiation will precisely obey Planck's law. In this sense, thermodynamic equilibrium cavity radiation may be regarded as thermodynamic equilibrium black-body radiation to which Kirchhoff's law applies exactly, though no perfectly black body in Kirchhoff's sense is present.
A theoretical model considered by Planck consists of a cavity with perfectly reflecting walls, initially with no material contents, into which is then put a small piece of carbon. Without the small piece of carbon, there is no way for non-equilibrium radiation initially in the cavity to drift towards thermodynamic equilibrium. When the small piece of carbon is put in, it exchanges energy among the radiation frequencies, so that the cavity radiation comes to thermodynamic equilibrium.
A hole in the wall of a cavity
For experimental purposes, a hole in a cavity can be devised to provide a good approximation to a black surface, but will not be perfectly Lambertian, and must be viewed from nearly right angles to get the best properties. The construction of such devices was an important step in the empirical measurements that led to the precise mathematical identification of Kirchhoff's universal function, now known as Planck's law.
Kirchhoff's perfect black bodies
Planck also noted that the perfect black bodies of Kirchhoff do not occur in physical reality. They are theoretical fictions. Kirchhoff's perfect black bodies absorb all the radiation that falls on them, right in an infinitely thin surface layer, with no reflection and no scattering. They emit radiation in perfect accord with Lambert's cosine law.
Original statements
Gustav Kirchhoff stated his law in several papers in 1859 and 1860, and then in 1862 in an appendix to his collected reprints of those and some related papers.
Prior to Kirchhoff's studies, it was known that for total heat radiation, the ratio of emissive power to absorptivity was the same for all bodies emitting and absorbing thermal radiation in thermodynamic equilibrium. This means that a good absorber is a good emitter. Naturally, a good reflector is a poor absorber. For wavelength specificity, prior to Kirchhoff, the ratio was shown experimentally by Balfour Stewart to be the same for all bodies, but the universal value of the ratio had not been explicitly considered in its own right as a function of wavelength and temperature.
Kirchhoff's original contribution to the physics of thermal radiation was his postulate of a perfect black body radiating and absorbing thermal radiation in an enclosure opaque to thermal radiation and with walls that absorb at all wavelengths. Kirchhoff's perfect black body absorbs all the radiation that falls upon it.
Every such black body emits from its surface with a spectral radiance that Kirchhoff labeled $I$ (for specific intensity, the traditional name for spectral radiance).
The precise mathematical expression for that universal function was very much unknown to Kirchhoff, and it was just postulated to exist, until its precise mathematical expression was found in 1900 by Max Planck. It is nowadays referred to as Planck's law.
Then, at each wavelength, for thermodynamic equilibrium in an enclosure, opaque to heat rays, with walls that absorb some radiation at every wavelength: $\frac{E(\lambda, T)}{a(\lambda, T)} = I(\lambda, T)$, where $E$ is a body's emissive power, $a$ its absorptivity, and $I$ Kirchhoff's universal function, the black-body spectral radiance.
See also
Kirchhoff's laws (disambiguation)
Sakuma–Hattori equation
Wien's displacement law
Stefan–Boltzmann law, which states that the power of emission is proportional to the fourth power of the black body's temperature
References
Citations
Bibliography
Translated:
Reprinted as
General references
Evgeny Lifshitz and L. P. Pitaevskii, Statistical Physics: Part 2, 3rd edition (Elsevier, 1980).
F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill: Boston, 1965).
Heat transfer
Electromagnetic radiation
Eponymous laws of physics
Gustav Kirchhoff
1859 in science | Kirchhoff's law of thermal radiation | [
"Physics",
"Chemistry"
] | 3,340 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Electromagnetic radiation",
"Radiation",
"Thermodynamics"
] |
591,288 | https://en.wikipedia.org/wiki/Father%20Time%20%28Lord%27s%29 | Father Time is a weathervane at Lord's Cricket Ground, London, in the shape of Father Time removing the bails from a wicket. The full weathervane is tall, with the figure of Father Time standing at . It was given to Lord's in 1926 by the architect of the Grandstand, Sir Herbert Baker. The symbolism of the figure derives from Law 12(3) of the Laws of Cricket: "After the call of Time, the bails shall be removed from both wickets." The weathervane is frequently referred to as Old Father Time in television and radio broadcasts, but "Old" is not part of its official title.
Father Time was originally located atop the old Grand Stand. It was wrenched from its position during the Blitz, when it became entangled in the steel cable of a barrage balloon, but was repaired and returned to its previous place. In 1992 it was struck by lightning, and the subsequent repairs were featured on the children's television programme Blue Peter. Father Time was permanently relocated to a structure adjacent to the Mound Stand in 1996, when the Grand Stand was demolished and rebuilt. It was again damaged in March 2015 by the high winds of Cyclone Niklas, which necessitated extensive repair by specialists.
In 1969 Father Time became the subject of a poem, "Lord's Test", by the Sussex and England cricketer John Snow.
Notes
External links
Cricket in London
Meteorological instrumentation and equipment
Lord's
Herbert Baker buildings and structures | Father Time (Lord's) | [
"Technology",
"Engineering"
] | 298 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
591,348 | https://en.wikipedia.org/wiki/National%20Science%20Bowl | The National Science Bowl (NSB) is a high school and middle school science knowledge competition, using a quiz bowl format, held in the United States. A buzzer system similar to those seen on popular television game shows is used to signal an answer. The competition has been organized and sponsored by the United States Department of Energy since its inception in 1991.
Subject areas
Questions are asked in the categories of Biology, Chemistry, Earth and Space Science, Energy (dealing with DOE research), Mathematics, and Physics.
Several categories have been added, dropped, or merged throughout the years. Computer Science was dropped from the list in late 2002. Current Events was in the 2005 competition, but did not make a return. General Science was dropped and Astronomy was merged with Earth Science to create Earth and Space Science in 2011.
Regional competitions
The winning team of each regional Science Bowl competition is invited to participate in the National Science Bowl finals in Washington, D.C., with all expenses paid. As of 2018, there were 65 high school regionals and 48 middle school regionals. These figures include the two "super regional" sites that are permitted to send two teams to the national competition. The two super regionals are the Kansas/Missouri Regional High School Science Bowl and the Connecticut/Northeast Regional High School Science Bowl (The Northeast Regional includes Rhode Island, Connecticut, Massachusetts, New Hampshire, Vermont, and parts of New York).
Typically, any school that meets the eligibility requirements of the National Science Bowl is permitted to register for its regional competition according to its geographic location. No school may compete in multiple regionals. In addition, most regional competitions permit schools to register up to three teams. Since 2017, club teams are no longer able to compete.
Rules
This section first lists the rules for in-person competitions, and finishes with the rules for virtual competitions. The national competition follows all the rules for the in-person competitions. Most in-person regional competitions use the same rules, but they may request to use different rules. Virtual regional competitions must use the rules set for virtual competitions.
General rules
A team consists of 4 or 5 students from a single school. Only 4 students play at any one time, while the 5th is designated as the alternate. Substitutions and switching captains may occur at halftime and between rounds.
Two teams compete against each other in each match. Each student is given a designation: A1, A Captain, A2, A3, B1, B Captain, B2, or B3, according to the position they sit in. In regional competitions, each round consists of 23 questions (that is, 23 toss-ups and 23 corresponding bonuses). At the National Finals, each round consists of 25 questions. The match is over when all the toss-up questions have been read (and any bonuses related to correctly answered toss-ups), or after two halves have elapsed, whichever occurs first. The team with the most points at this time is the winner. At the regional level, all matches consist of two 8-minute halves, separated by a 2-minute break. At the national level for middle schools, all matches consist of two 10-minute halves. For high schools, all round robin and some double elimination matches consist of two 10-minute halves, with the final rounds consisting of two 12-minute halves to accommodate the longer visual bonus questions. A toss-up/bonus cycle that is begun before time expires in a half will be finished under the usual rules before the half ends. A question officially begins once its subject area is completely read.
Toss-ups
Every match begins with a toss-up question. The moderator announces the subject of the question (see "Subject Areas" above), as well as its type (Multiple Choice or Short Answer). Once the moderator completes the reading of the question, students have 5 seconds to buzz in and give an answer. Students may buzz in at any time after the category has been read—there is no need to wait for the moderator to finish. However, there is a penalty for interrupting the moderator and giving an incorrect answer. After buzzing in, a student must wait for an official to verbally recognize them by saying their designation; otherwise it is considered a blurt, resulting in the answer being ignored and the team being disqualified from answering the toss-up. Upon recognition, the student must give their response within a natural pause (up to 2 seconds); otherwise it is considered a stall and ruled incorrect. If a student buzzes in and answers incorrectly, that student's team may not buzz in again on that question, and the opposing team (if still eligible to answer) gets another 5 seconds to buzz in. Quiet nonverbal communication (e.g. in writing or by hand signals) among team members is allowed on toss-ups, but audible communication or mouthing words is not permitted and will disqualify the team from answering the toss-up.
An answer given by a student is ruled correct or incorrect by the moderator. On short answer questions, if the answer given differs from the official one, the moderator uses his or her judgment to make a ruling (which is subject to a challenge by the competitors). On multiple choice questions, students may give the letter answer (W, X, Y, or Z) or the verbal answer. A verbal answer on a multiple choice question is only correct if it matches the official answer exactly. However, when the choices are mathematical expressions that would be conventionally written in symbols, common alternate expressions of the answer shall be accepted. For example, “square root of 2” and “square root 2” would both be accepted.
Bonuses
If a student answers a toss-up question correctly, that student's team receives a bonus question. The bonus question is always in the same category as the corresponding toss-up question, though it may not always relate to the toss-up question. Since only one team has the opportunity to answer the bonus question, there is no need to buzz in to answer it. After the moderator finishes reading the question, the team has 20 seconds to answer. The timekeeper will give a 5-second warning when 5 seconds remain. Conferring between team members is permitted, but the team captain must give the team's final answer.
Visual bonuses were introduced in 2003. They are only included in the final elimination rounds. The team has 30 seconds to answer a question with the aid of a visual displayed on a monitor (for the final matches) or on a distributed worksheet (for earlier elimination matches).
The same rules apply to the judging of responses to bonus questions as apply to responses to toss-up questions. Once the team's answer has been ruled right or wrong, the moderator proceeds to the next toss-up question.
If neither team answers the toss-up question correctly, the bonus question is not read, and the moderator proceeds to the next toss-up question.
Scoring
Correct responses to toss-up questions are worth 4 points each, and correct responses to bonus questions are worth 10 points each.
If a student buzzes in on a toss-up question before the moderator has completely read the question (i.e., interrupts the moderator) and answers incorrectly (or a blurt or audible communication from the interrupting team occurs), then 4 points are awarded to the opposing team, and the question is re-read in its entirety so that the opposing team has an opportunity to buzz in. Should the opposing team interrupt during the rereading of the question and subsequently incur a penalty as in the previous rule, then 4 points are added to the first team's score, and the moderator proceeds to the next toss-up question.
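As a rough illustration only (not an official implementation), the toss-up scoring rules above can be modeled in a few lines of Python; the team labels and sample outcomes are invented:

```python
# 4 points for a correct toss-up; a 10-point bonus chance follows; an
# incorrect interrupted answer awards 4 points to the opposing team.
def score_tossup(scores, team, correct, interrupted):
    """Apply one toss-up outcome; returns True if a bonus is earned."""
    other = 'B' if team == 'A' else 'A'
    if correct:
        scores[team] += 4        # correct toss-up: 4 points
        return True              # team now gets a 10-point bonus question
    if interrupted:
        scores[other] += 4       # incorrect interrupt: 4 points to opponents
    return False

scores = {'A': 0, 'B': 0}
if score_tossup(scores, 'A', correct=True, interrupted=False):
    scores['A'] += 10            # assume team A answers its bonus correctly
score_tossup(scores, 'B', correct=False, interrupted=True)
print(scores)                    # {'A': 18, 'B': 0}
```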
Challenges
Challenges must be made before the moderator begins reading the next question, or 3 seconds after the last question of the half or game. Only the 4 actively competing members may challenge. The fifth team member, coach, and others associated with a team may not become involved in challenges or their discussion. However, beginning in 2020, anyone in the competition room can make the officials aware of scoring or clock management errors, these are known as corrections rather than challenges.
Challenges may be made either to scientific content or the administration of rules. They may not be made to judgment calls by the officials, such as whether a buzz was an interrupt, whether 20 seconds have passed before beginning to answer a bonus, or whether a stall or blurt has happened. Challenges to scientific content are limited to 2 unsuccessful challenges per round. Successful challenges do not count against this limit. Each team has unlimited challenges to administration of rules.
Rules for Virtual Competitions
For the 2024 competition, regional competitions have the option of choosing a virtual format. Additionally, the DOE will host 4 virtual nationwide regionals for schools meeting certain criteria.
Teams do not play head-to-head matches in the virtual competition. Instead, each team is placed in their own Zoom room and competing against all the other teams in the tournament. Each competition begins with two or three preliminary rounds, in which the teams' scores in the rounds are added up, and the teams with the highest totals advance to the elimination rounds. Each regional competition can choose whether to advance 8, 16, 24, or 32 teams to the elimination rounds. During the elimination rounds, only the score for the current round is used to determine the teams advancing to the next round—the scores from the previous rounds are irrelevant. The number of teams left after each elimination round will go in the following order: 24, 16, 8, 4, 2, and finally 1, beginning with the appropriate number based on how many teams initially advanced from the preliminaries.
During each round, the teams are read the same series of 18 toss-up/bonus cycles. There is no game clock. Teams have 7 seconds after the moderator finishes reading for a student to raise their hand and give an answer on a toss-up, and if answered correctly, have 22 seconds after the moderator finishes reading for a student to raise their hand and give an answer on the bonus. If the team misses the toss-up, the bonus is not read. Toss-ups and bonuses may be answered by any of the 4 or 5 team members. Communication of any kind (verbal, via the Zoom chat, or nonverbal) is allowed on both toss-ups and bonuses. There are no penalties for interrupting (but also no reason to interrupt since all the toss-ups would be read in their entirety) or blurting (although the player will be verbally recognized after raising their hand). The rest of the rules, including the point values for toss-ups and bonuses, are the same as the in-person competitions.
Competition format
This section is concerned with the format of the national competition only. As is the case with competition rules, the competition format varies greatly among the different regional competitions.
Regionals typically use round robin, single-elimination, double-elimination, or any combination of these formats.
The national competition always consists of two stages: round-robin and double-elimination.
Round-robin
All competing teams are randomly arranged (each team captain randomly picks a division and position on the first day of the National Finals) into eight round-robin groups of eight or nine teams each for high school and six teams each for middle school. Every team plays every other team in its group once, receiving 2 points for a win, 1 point for a tie, or 0 points for a loss. If a team's opponent has not arrived, that team can practice instead. The rules still apply, though any win or loss is not counted. In previous years, the top two teams from each group advanced to the double-elimination stage. Starting in 2020, four teams from each group will advance.
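As an illustration of the 2/1/0 scheme (team names and results invented, not from any actual competition), standings for a tiny round-robin group could be tallied like this:

```python
from collections import Counter

# Each result is (team, opponent, outcome-for-team): win = 2 points,
# tie = 1 point each, loss = 2 points to the opponent.
results = [('Alpha', 'Beta', 'win'),
           ('Alpha', 'Gamma', 'tie'),
           ('Beta', 'Gamma', 'loss')]

points = Counter({'Alpha': 0, 'Beta': 0, 'Gamma': 0})
for team, opponent, outcome in results:
    if outcome == 'win':
        points[team] += 2
    elif outcome == 'tie':
        points[team] += 1
        points[opponent] += 1
    else:
        points[opponent] += 2

print(points.most_common())  # [('Alpha', 3), ('Gamma', 3), ('Beta', 0)]
```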
Tiebreaks
In the event that two or more teams are tied for one of the top spots in a division, the result of the Division Team Challenge (DTC) is used as a tiebreak. This method is only used for high schools.
For middle schools, there are several tiebreak procedures, applied in the following order:
The head-to-head record of all the tied teams is compared. If this separates a group of two or more teams from the rest of the tied teams, the head-to-head record will be reapplied in the smaller group.
If the top four teams cannot be determined using head-to-head records, the following procedures are used:
If more than two teams are still tied, each team is placed in a separate room and is read five toss-up questions. Each team's score is determined by the number of questions answered correctly minus the number answered incorrectly. The team(s) with the highest score(s) win(s) the tiebreak.
If two teams are still tied, the two teams compete head-to-head, receiving five toss-up questions at 4 points for each correct answer (no bonus questions are used). All the usual toss-up rules are in effect, including the interrupt penalty. The team with the higher score wins the tiebreak.
If a tie still exists after the second step, it is reapplied until the tie is resolved.
Single/Double-elimination
Starting in 2020, 32 teams advance to the double elimination stage. Prior to 2020, approximately 16 teams advanced from the round-robin (depending on the number of round robin groups). In 2006, the teams were seeded into a single-elimination tournament based on their preliminary round-robin results. In previous years, a team's position in the double-elimination tournament was determined by random draw; teams were not seeded in any way. The competition then proceeded (in 2006) like a typical single-elimination tournament. Seeding continued in the 2007 tournament: teams that won their pool were paired against teams that placed second in theirs. Unlike in the round-robin, a match in double-elimination cannot be tied. If a match is tied at the end of regulation, overtime periods of five toss-ups each are played until the tie is broken.
Prizes
The top two high school teams receive trips to one of the National Parks, all-expenses paid.
The top three middle and high school teams receive a trophy, individual medals, and photographs with officials of the Department of Energy.
The top 16 middle and high schools earn a check for their school's science departments. As of 2024, the top 16 schools receive $1,000 and the top 2 schools receive $5,000.
Each team with the best Division Team Challenge result in their division earns a $500 check for their school's science department.
Car competition
For the middle school teams, the DOE also sponsored a car competition challenging competitors to construct a car capable of attaining high speeds, powered by alternative energy sources such as hydrogen fuel cells and solar panels. The winners of the car competition were awarded $500 for their school.
Results of the national competition
Middle school
High school
The winning teams from the years 1991–2001 were:
2001 (61 teams) North Hollywood High School (North Hollywood, California)
2000 (60 teams) duPont Manual High School (Louisville, Kentucky)
1999 (53 teams) Montgomery Blair High School (Silver Spring, Maryland)
1998 (48 teams) Valley High School (West Des Moines, Iowa)
1997 (45 teams) Venice High School (Los Angeles, California)
1996 (53 teams) Venice High School (Los Angeles, California)
1995 (55 teams) Van Nuys High School (Van Nuys, California)
1994 (51 teams) The Westminster Schools (Atlanta, Georgia)
1993 (43 teams) Albany High School (Albany, California)
1992 (29 teams) Lubbock High School (Lubbock, Texas)
1991 (18 teams) Lubbock High School (Lubbock, Texas)
See also
Quiz Bowl
Notes
References
External links
Official National Science Bowl Website
United States Department of Energy
Student quiz competitions
Science competitions
United States Department of Energy
Science events in the United States
Recurring events established in 1991
1991 establishments in the United States | National Science Bowl | [
"Technology"
] | 3,256 | [
"Science and technology awards",
"Science competitions"
] |
591,359 | https://en.wikipedia.org/wiki/Dialetheism | Dialetheism (; from Greek 'twice' and 'truth') is the view that there are statements that are both true and false. More precisely, it is the belief that there can be a true statement whose negation is also true. Such statements are called "true contradictions", dialetheia, or nondualisms.
Dialetheism is not a system of formal logic; instead, it is a thesis about truth that influences the construction of a formal logic, often based on pre-existing systems. Introducing dialetheism has various consequences, depending on the theory into which it is introduced. A common mistake resulting from this is to reject dialetheism on the basis that, in traditional systems of logic (e.g., classical logic and intuitionistic logic), every statement becomes a theorem if a contradiction is true, trivialising such systems when dialetheism is included as an axiom. Other logical systems, however, do not explode in this manner when contradictions are introduced; such contradiction-tolerant systems are known as paraconsistent logics. Dialetheists who do not want to allow that every statement is true are free to favour these over traditional, explosive logics.
Graham Priest defines dialetheism as the view that there are true contradictions. Jc Beall is another advocate; his position differs from Priest's in advocating constructive (methodological) deflationism regarding the truth predicate.
The term was coined by Graham Priest and Richard Sylvan (then Routley).
Motivations
Dialetheism resolves certain paradoxes
The liar paradox and Russell's paradox deal with self-contradictory statements in classical logic and naïve set theory, respectively. Contradictions are problematic in these theories because they cause the theories to explode—if a contradiction is true, then every proposition is true. The classical way to solve this problem is to ban contradictory statements: to revise the axioms of the logic so that self-contradictory statements do not appear (just as with the Russell's paradox). Dialetheists, on the other hand, respond to this problem by accepting the contradictions as true. Dialetheism allows for the unrestricted axiom of comprehension in set theory, claiming that any resulting contradiction is a theorem.
However, self-referential paradoxes, such as the Strengthened Liar can be avoided without revising the axioms by abandoning classical logic and accepting more than two truth values with the help of many-valued logic, such as fuzzy logic or Łukasiewicz logic.
Human reasoning
Ambiguous situations may cause humans to affirm both a proposition and its negation. For example, if John stands in the doorway to a room, it may seem reasonable both to affirm that John is in the room and to affirm that John is not in the room.
Critics argue that this merely reflects an ambiguity in our language rather than a dialetheic quality in our thoughts; if we replace the given statement with one that is less ambiguous (such as "John is halfway in the room" or "John is in the doorway"), the contradiction disappears. The statements appeared contradictory only because of a syntactic play; here, the actual meaning of "being in the room" is not the same in both instances, and thus each sentence is not the exact logical negation of the other: therefore, they are not necessarily contradictory.
Moreover, John appears to occupy the overlap of two conditions: he is partly in the room and partly out of it, rather than wholly in the room and wholly not in the room at the same time (which would be a genuine contradiction). The appearance of contradiction comes from forcing the two claims through a single truth-functional connective, which again shows the recurrent ambiguity of human language that often fails to capture the nature of some logical statements.
Apparent dialetheism in other philosophical doctrines
The Jain philosophical doctrine of anekantavada—non-one-sidedness—states that all statements are true in some sense and false in another. Some interpret this as saying that dialetheia not only exist but are ubiquitous. Technically, however, a logical contradiction is a proposition that is true and false in the same sense; a proposition which is true in one sense and false in another does not constitute a logical contradiction. (For example, although in one sense a man cannot both be a "father" and "celibate"—leaving aside such cases as either a celibate man adopting a child or a man fathering a child and only later adopting celibacy—there is no contradiction for a man to be a spiritual father and also celibate; the sense of the word father is different here. In another example, although at the same time George W. Bush cannot both be president and not be president, he was president from 2001-2009, but was not president before 2001 or after 2009, so in different times he was both president and not president.)
The Buddhist logic system, named "Catuṣkoṭi", similarly implies that a statement and its negation may possibly co-exist.
Graham Priest argues in Beyond the Limits of Thought that dialetheia arise at the borders of expressibility, in a number of philosophical contexts other than formal semantics.
Formal consequences
In classical logics, taking a contradiction (see List of logic symbols) as a premise (that is, taking as a premise the truth of both P and ¬P) allows us to prove any statement Q. Indeed, since P is true, the statement P ∨ Q is true (by generalization). Taking P ∨ Q together with ¬P is a disjunctive syllogism from which we can conclude Q. (This is often called the principle of explosion, since the truth of a contradiction is imagined to make the number of theorems in a system "explode".)
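To make concrete how a paraconsistent logic blocks this argument, here is a small Python sketch of Priest's three-valued Logic of Paradox (LP), one standard contradiction-tolerant system (the encoding is illustrative): with the value B ("both") counting as designated, there is a valuation in which the premises of explosion, and of disjunctive syllogism, are designated while the conclusion is not, so neither inference is valid in LP.

```python
from itertools import product

# LP truth values: T (true only), B (both true and false), F (false only).
# T and B are "designated" (count as true when assessing validity).
order = {'F': 0, 'B': 1, 'T': 2}
neg = {'T': 'F', 'B': 'B', 'F': 'T'}
disj = lambda a, b: max(a, b, key=order.get)  # disjunction takes the better value
designated = {'T', 'B'}

# Explosion: premises P and not-P, conclusion Q.
explosion_fails = [(p, q) for p, q in product('TBF', repeat=2)
                   if p in designated and neg[p] in designated
                   and q not in designated]

# Disjunctive syllogism: premises (P or Q) and not-P, conclusion Q.
ds_fails = [(p, q) for p, q in product('TBF', repeat=2)
            if disj(p, q) in designated and neg[p] in designated
            and q not in designated]

print(explosion_fails)  # [('B', 'F')]: a counterexample, so explosion is invalid
print(ds_fails)         # [('B', 'F')]: disjunctive syllogism also fails in LP
```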
Advantages
The proponents of dialetheism mainly advocate its ability to avoid problems faced by other more orthodox resolutions as a consequence of their appeals to hierarchies. According to Graham Priest, "the whole point of the dialetheic solution to the semantic paradoxes is to get rid of the distinction between object language and meta-language". Another possibility is to utilize dialetheism along with a paraconsistent logic to resurrect the program of logicism advocated by Frege and Russell. This even allows one to prove the truth of otherwise unprovable theorems such as the well-ordering theorem and the falsity of others such as the continuum hypothesis.
There are also dialetheic solutions to the sorites paradox.
Criticisms
One criticism of dialetheism is that it fails to capture a crucial feature about negation, known as absoluteness of disagreement.
Imagine John's utterance of P. Sally's typical way of disagreeing with John is a consequent utterance of ¬P. Yet, if we accept dialetheism, Sally's so uttering does not prevent her from also accepting P; after all, P may be a dialetheia and therefore it and its negation are both true. The absoluteness of disagreement is lost.
A response is that disagreement can be displayed by uttering "¬P and, furthermore, P is not a dialetheia". However, the most obvious codification of "P is not a dialetheia" is ¬(P ∧ ¬P). But this itself could be a dialetheia as well. One dialetheist response is to offer a distinction between assertion and rejection. This distinction might be hashed out in terms of the traditional distinction between logical qualities, or as a distinction between two illocutionary speech acts: assertion and rejection. Another criticism is that dialetheism cannot describe logical consequences, once we believe in the relevance of logical consequences, because of its inability to describe hierarchies.
See also
Catuskoti
Compossibility
Doublethink
Paraconsistent logic
Problem of future contingents
Subvaluationism
Tetralemma
Trivialism
References
Sources
Frege, Gottlob. "Negation." Logical Investigations. Trans. P. Geach and R. H Stoothoff. New Haven, Conn.: Yale University Press, 1977. 31–53.
Parsons, Terence. "Assertion, Denial, and the Liar Paradox." Journal of Philosophical Logic 13 (1984): 137–152.
Parsons, Terence. "True Contradictions." Canadian Journal of Philosophy 20 (1990): 335–354.
Priest, Graham. In Contradiction. Dordrecht: Martinus Nijhoff (1987). (Second Edition, Oxford: Oxford University Press, 2006.)
Priest, Graham. "What Is So Bad About Contradictions?" Journal of Philosophy 95 (1998): 410–426.
External links
JC Beall UCONN Homepage
(Blog & ~Blog)
Paul Kabay on dialetheism and trivialism (includes both published and unpublished works)
Modal metaphysics
Non-classical logic
Theories of deduction
Theories of truth | Dialetheism | [
"Mathematics"
] | 1,837 | [
"Theories of deduction"
] |
591,375 | https://en.wikipedia.org/wiki/Willem%20de%20Sitter | Willem de Sitter (6 May 1872 – 20 November 1934) was a Dutch mathematician, physicist, and astronomer. The De Sitter universe is a cosmological model named after him.
Life and work
Born in Sneek, De Sitter studied mathematics at the University of Groningen and then joined the Groningen astronomical laboratory. He worked at the Cape Observatory in South Africa (1897–1899). Then, in 1908, De Sitter was appointed to the chair of astronomy at Leiden University. He was director of the Leiden Observatory from 1919 until his death.
De Sitter made major contributions to the field of physical cosmology. He co-authored a paper with Albert Einstein in 1932 in which they discussed the implications of cosmological data for the curvature of the universe. He also came up with the concept of the De Sitter space and De Sitter universe, a solution of Einstein's general relativity in which there is no matter and a positive cosmological constant. This results in an exponentially expanding, empty universe. De Sitter was also well known for his research on the motions of the moons of Jupiter, and was invited to give the George Darwin Lecture at the Royal Astronomical Society in 1931.
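For reference, a standard textbook presentation of the de Sitter solution (a sketch in modern notation, not de Sitter's original formulation) writes the metric in flat-slicing coordinates with an exponentially growing scale factor:

```latex
% Flat-slicing form of the de Sitter solution (standard textbook notation):
\[
  ds^2 = -c^2\,dt^2 + e^{2Ht}\left(dx^2 + dy^2 + dz^2\right),
  \qquad
  H = \sqrt{\tfrac{\Lambda c^2}{3}},
\]
% so the scale factor a(t) = e^{Ht} grows exponentially in a universe with
% no matter and a positive cosmological constant Lambda.
```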
Willem de Sitter died after a brief illness in November 1934.
Honours
In 1912, he became a member of the Royal Netherlands Academy of Arts and Sciences.
Awards
James Craig Watson Medal (1929)
Bruce Medal (1931)
Gold Medal of the Royal Astronomical Society (1931)
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1934)
Named after him
The crater De Sitter on the Moon
Asteroid 1686 De Sitter
De Sitter universe
De Sitter space
Anti-de Sitter space
De Sitter invariant special relativity
Einstein–de Sitter universe
De Sitter double star experiment
De Sitter precession
De Sitter–Schwarzschild metric
Family
One of his sons, Ulbo de Sitter (1902–1980), was a Dutch geologist, and one of Ulbo's sons was the Dutch sociologist Ulbo de Sitter (1930–2010).
Another son of Willem, Aernout de Sitter (1905 – 15 September 1944), was the director of the Bosscha Observatory in Lembang, Indonesia (then the Dutch East Indies), where he studied the Messier 4 globular cluster.
Selected publications
On Einstein's theory of gravitation and its astronomical consequences:
See also
De Sitter double star experiment
De Sitter precession
De Sitter relativity
De Sitter space
De Sitter universe
Anti-de Sitter space
The Dreams in the Witch House, a story by H. P. Lovecraft featuring de Sitter, and inspired by his lecture The Size of the Universe
References
External links
P.C. van der Kruit Willem de Sitter (1872 – 1934) in: History of science and scholarship in the Netherlands.
A. Blaauw, Sitter, Willem de (1872–1934), in Biografisch Woordenboek van Nederland.
Bruce Medal page
Awarding of Bruce Medal: PASP 43 (1931) 125
Awarding of RAS gold medal: MNRAS 91 (1931) 422
de Sitter's binary star arguments against Ritz's relativity theory (1913) (four articles)
Obituaries
AN 253 (1934) 495/496 (one line)
JRASC 29 (1935) 1
MNRAS 95 (1935) 343
Obs 58 (1935) 22
PASP 46 (1934) 368 (one paragraph)
PASP 47 (1935) 65
1872 births
1934 deaths
19th-century Dutch astronomers
19th-century Dutch mathematicians
20th-century Dutch astronomers
Dutch relativity theorists
20th-century Dutch mathematicians
Cosmologists
People from Sneek
Academic staff of Leiden University
University of Groningen alumni
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Recipients of the Gold Medal of the Royal Astronomical Society
Presidents of the International Astronomical Union | Willem de Sitter | [
"Astronomy"
] | 813 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
591,394 | https://en.wikipedia.org/wiki/Principle%20of%20explosion | In classical logic, intuitionistic logic, and similar logical systems, the principle of explosion is the law according to which any statement can be proven from a contradiction. That is, from a contradiction, any proposition (including its negation) can be inferred; this is known as deductive explosion.
The proof of this principle was first given by 12th-century French philosopher William of Soissons. Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory.
As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument:
We know that "Not all lemons are yellow", as it has been assumed to be true.
We know that "All lemons are yellow", as it has been assumed to be true.
Therefore, the two-part statement "All lemons are yellow or unicorns exist" must also be true, since the first part of the statement ("All lemons are yellow") has already been assumed, and the use of "or" means that if even one part of the statement is true, the statement as a whole must be true as well.
However, since we also know that "Not all lemons are yellow" (as this has been assumed), the first part is false, and hence the second part must be true to ensure the two-part statement to be true, i.e., unicorns exist (this inference is known as the Disjunctive syllogism).
The procedure may be repeated to prove that unicorns do not exist (hence proving an additional contradiction where unicorns do and do not exist), as well as any other well-formed formula. Thus, there is an explosion of true statements.
In a different solution to the problems posed by the principle of explosion, some mathematicians have devised alternative theories of logic called paraconsistent logics, which allow some contradictory statements to be proven without affecting the truth value of (all) other statements.
Symbolic representation
In symbolic logic, the principle of explosion can be expressed schematically in the following way: for arbitrary statements $P$ and $Q$,

$$P, \lnot P \vdash Q$$

that is, from $P$ together with its negation $\lnot P$, any statement $Q$ can be derived.
Proof
Below is the Lewis argument, a formal proof of the principle of explosion using symbolic logic:

1. $P$ (assumption)
2. $\lnot P$ (assumption)
3. $P \lor Q$ (disjunction introduction, from 1)
4. $Q$ (disjunctive syllogism, from 3 and 2)

This proof was published by C. I. Lewis and is named after him, though versions of it were known to medieval logicians.
This is just the symbolic version of the informal argument given in the introduction, with $P$ standing for "all lemons are yellow" and $Q$ standing for "unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist by disjunctive syllogism.
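The argument can also be checked mechanically. Here is a minimal rendering in Lean 4 (an illustration, not part of the article), first as a one-step appeal to absurd and then as the explicit Lewis-style derivation:

```lean
-- Explosion in one step: from P and ¬P, any Q follows.
theorem explosion (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
  absurd hP hnP

-- The Lewis argument spelled out: disjunction introduction,
-- then disjunctive syllogism (case analysis on P ∨ Q).
theorem explosion_lewis (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
  have hPQ : P ∨ Q := Or.inl hP            -- (3) from (1)
  hPQ.elim (fun hp => absurd hp hnP)       -- the P case contradicts (2)
           (fun hq => hq)                  -- the Q case is the conclusion
```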
Semantic argument
An alternate argument for the principle stems from model theory. A sentence $\psi$ is a semantic consequence of a set of sentences $\Gamma$ only if every model of $\Gamma$ is a model of $\psi$. However, there is no model of the contradictory set $\{\varphi, \lnot\varphi\}$. A fortiori, there is no model of $\{\varphi, \lnot\varphi\}$ that is not a model of $\psi$. Thus, vacuously, every model of $\{\varphi, \lnot\varphi\}$ is a model of $\psi$. Thus $\psi$ is a semantic consequence of $\{\varphi, \lnot\varphi\}$.
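The vacuous entailment can be verified by brute force over classical valuations; a small sketch (the formula encoding and the helper name `entails` are illustrative choices):

```python
from itertools import product

# Formulas are encoded as functions from a valuation (dict of atoms) to bool.
phi     = lambda v: v["p"]
not_phi = lambda v: not v["p"]
psi     = lambda v: v["q"]

def entails(premises, conclusion, atoms=("p", "q")):
    """True iff every model of the premises is a model of the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(prem(v) for prem in premises) and not conclusion(v):
            return False  # found a countermodel
    return True  # no countermodel; vacuously true if the premises have no model

# {φ, ¬φ} has no model, so it entails anything: ψ and ¬ψ alike.
print(entails([phi, not_phi], psi))                    # True
print(entails([phi, not_phi], lambda v: not psi(v)))   # True
```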
Paraconsistent logic
Paraconsistent logics have been developed that allow for subcontrary-forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of $\{\varphi, \lnot\varphi\}$ and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and reductio ad absurdum.
Usage
The metamathematical value of the principle of explosion is that for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, $\varphi \land \lnot\varphi$) is worthless because all its statements would become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless.
Reductions in the proof strength of logics without the principle of explosion are discussed in minimal logic.
See also
Consequentia mirabilis – Clavius' Law
Dialetheism – belief in the existence of true contradictions
Law of excluded middle – every proposition is true or false
Law of noncontradiction – no proposition can be both true and not true
Paraconsistent logic – a family of logics used to address contradictions
Paradox of entailment – a seeming paradox derived from the principle of explosion
Reductio ad absurdum – concluding that a proposition is false because it produces a contradiction
Trivialism – the belief that all statements of the form "P and not-P" are true
Notes
References
Theorems in propositional logic
Classical logic
Principles | Principle of explosion | [
"Mathematics"
] | 1,222 | [
"Theorems in propositional logic",
"Theorems in the foundations of mathematics"
] |
591,470 | https://en.wikipedia.org/wiki/Moisture | Moisture is the presence of a liquid, especially water, often in trace amounts. Moisture is defined as water in the adsorbed or absorbed phase. Small amounts of water may be found, for example, in the air (humidity), in foods, and in some commercial products. Moisture also refers to the amount of water vapor present in the air. The soil also includes moisture.
Moisture control in products
Control of moisture in products can be a vital part of a production process. There is a substantial amount of moisture in what seems to be dry matter. In products ranging from cornflake cereals to washing powders, moisture can play an important role in the final quality of the product. There are two main aspects of concern in moisture control: allowing too much moisture or too little. For example, adding some water to cornflake cereal, which is sold by weight, reduces costs and prevents it from tasting too dry, but adding too much water can affect the crunchiness and freshness of the cereal, because water content contributes to bacterial growth. The water content of some foods is also manipulated to reduce the number of calories.
Moisture has different effects on different products, influencing the final quality of the product. Wood pellets, for instance, are made by taking remainders of wood and grinding them to make compact pellets, which are sold as a fuel. They need to have a relatively low water content for combustion efficiency. The more moisture that is allowed in the pellet, the more smoke that will be released when the pellet is burned.
The need to measure the water content of products has given rise to a new area of science, aquametry. There are many ways to measure moisture in products, such as wave-based measurements (optical and acoustic), electromagnetic fields, capacitive methods, and the more traditional weighing-and-drying technique.
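As a concrete illustration of the traditional weighing-and-drying (loss-on-drying) approach mentioned above, moisture content is computed from the mass lost on drying. A minimal sketch; the function name and the wet-basis convention are assumptions for illustration:

```python
def moisture_content_wet_basis(mass_wet_g: float, mass_dry_g: float) -> float:
    """Percent moisture on a wet basis: mass of water over total wet mass.

    Loss-on-drying: weigh the sample, dry it to constant mass, weigh again.
    (A dry-basis figure would divide by mass_dry_g instead.)
    """
    if mass_wet_g <= 0 or not (0 < mass_dry_g <= mass_wet_g):
        raise ValueError("masses must be positive, with dry mass <= wet mass")
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_wet_g

# A 50 g sample weighing 44 g after drying is 12% moisture (wet basis).
print(moisture_content_wet_basis(50.0, 44.0))  # 12.0
```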
See also
Damp (structural)
Dry matter
Humidor
Water content
References
Hydrology | Moisture | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 396 | [
"Hydrology",
"Environmental engineering"
] |
591,482 | https://en.wikipedia.org/wiki/Automatic%20transmission%20system | An automatic transmission system (ATS) is an automated system designed to keep a broadcast radio or television station's transmitter and antenna system running without direct human oversight or attention for long periods. Such systems are occasionally referred to as automated transmission systems to avoid confusion with the automatic transmission of an automobile.
History
Traditionally, radio and television stations were required to have a licensed operator, technician or electrical engineer available to tend to a transmitter at all times it was operating or capable of operating. Any condition (such as distorted or off-frequency transmission) that could interfere with other broadcast services would require immediate manual intervention to correct the fault or take the transmitter off the air. Facilities also had to be monitored for any fault conditions which could impair the transmitted signal or cause damage to the transmitting equipment.
Because broadcast transmitters were often at a different location from the broadcast studios, attended operation required an operator to be physically located at the transmitter site. In the 1950s and 1960s, remote control systems were introduced to allow an operator at the studio to power the transmitter on or off. At the same time, an early remote control system, the Automon, was developed by RCA engineers in Montréal that included a relay system that automatically detected if the transmitter was operating outside of its allowed parameters. The Automon could send the studio an alarm if the transmitter was out of tolerance and, if contact to the studio was lost, it could automatically power down the transmitter. A similar system was developed in 1953 by Paul Schafer in California, using a rotary telephone to raise or lower transmitter parameters remotely.
As technology improved, transmitters became more reliable, and electromechanical means of checking and later correcting problems became commonplace. Regulations eventually caught up with these advances, allowing unattended operation via an ATS. During the 1970s, the BBC made widespread use of automated systems on its UHF television network to switch from main to standby transmitters in the case of a fault, as well as to alert engineering staff to problems. In 1977, the U.S. Federal Communications Commission loosened operation rules to allow stations in the United States with ATSes to automatically monitor transmitter operation and allow the ATS to automatically adjust modulation or shut down the transmitter if operation was out of tolerance, although the specific rules have continued to evolve with changes to the Emergency Alert System and the introduction of digital radio.
Theory of operation
An ATS monitors conditions such as voltage, current, and temperature within the transmitter cabinet or enclosure, and often has external sensors as well, particularly on the antenna. Some systems have remote monitoring points which report back to the main unit through telemetry links.
Advanced systems can monitor and often correct other problems which are considered mission-critical, such as detecting ice on antenna elements or radomes and turning on heaters to prevent the VSWR (power reflected from a mismatched antenna back into the transmitter) from going too high. High-power stations that use desiccation pumps to put dry nitrogen into their feedline (to displace moisture for increased efficiency) can also monitor the pressure. Generators, batteries, and incoming electricity can also be monitored.
If anything goes wrong which the ATS cannot handle, it can send out calls for help, via pager, telephone voice message, or dedicated telemetry links back to a fixed point such as a broadcast studio. Other than possibly listening for dead air from the studio/transmitter link, an ATS does not cover the programming or the studio equipment like broadcast automation, but rather only the "transmitter plant".
An ATS can also be used to automate scheduled tasks, such as lowering an AM radio station's transmission power at sundown and raising it at sunrise, to meet license requirements for different propagation patterns at day and night.
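To make the monitoring logic concrete, here is a hedged sketch of a single ATS polling pass: sample telemetry, compare against tolerances, correct or shut down, and call for help. All names, limits, and the sensor interface below are hypothetical stand-ins, not a real transmitter API:

```python
# Hypothetical tolerances; real limits come from the station license.
LIMITS = {
    "forward_power_kw": (9.0, 10.5),  # licensed power window (example values)
    "vswr": (1.0, 1.5),               # reflected-power ratio; high VSWR is dangerous
    "pa_temp_c": (0.0, 70.0),         # power-amplifier temperature
}

def read_sensors():
    """Stand-in for telemetry from the transmitter plant."""
    return {"forward_power_kw": 9.8, "vswr": 1.2, "pa_temp_c": 55.0}

def monitor_once(alarm, shutdown):
    """One polling cycle; a real ATS would repeat this continuously."""
    for name, value in read_sensors().items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            alarm(f"{name} out of tolerance: {value}")  # page/phone the engineer
            if name == "vswr":
                shutdown()  # protect the transmitter from reflected power
                return False
    return True

ok = monitor_once(alarm=print, shutdown=lambda: print("transmitter powered down"))
print("all parameters within tolerance" if ok else "fault handled")
```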
References
See also
Broadcast automation and central casting
Broadcast translators and repeaters
Broadcast engineering
Amateur radio | Automatic transmission system | [
"Engineering"
] | 775 | [
"Broadcast engineering",
"Electronic engineering"
] |
591,492 | https://en.wikipedia.org/wiki/Complementary%20good | In economics, a complementary good is a good whose appeal increases with the popularity of its complement. Technically, it displays a negative cross elasticity of demand and that demand for it increases when the price of another good decreases. If is a complement to , an increase in the price of will result in a negative movement along the demand curve of and cause the demand curve for to shift inward; less of each good will be demanded. Conversely, a decrease in the price of will result in a positive movement along the demand curve of and cause the demand curve of to shift outward; more of each good will be demanded. This is in contrast to a substitute good, whose demand decreases when its substitute's price decreases.
When two goods are complements, they experience joint demand - the demand for one good is linked to the demand for the other. Therefore, if a higher quantity is demanded of one good, a higher quantity will also be demanded of the other, and vice versa. For example, the demand for razor blades may depend on the number of razors in use; this is why razors have sometimes been sold as loss leaders, to increase demand for the associated blades. Another example is that a toothbrush is sometimes packaged free with toothpaste. The toothbrush is a complement to the toothpaste; the cost of producing a toothbrush may be higher than that of the toothpaste, but its sales depend on the demand for toothpaste.
All non-complementary goods can be considered substitutes. If $X$ and $Y$ are rough complements in an everyday sense, then consumers are willing to pay more for each marginal unit of good $X$ as they accumulate more of good $Y$. The opposite is true for substitutes: the consumer is willing to pay less for each marginal unit of good $X$ as they accumulate more of good $Y$.
Complementarity may be driven by psychological processes in which the consumption of one good (e.g., cola) stimulates demand for its complements (e.g., a cheeseburger). Consumption of a food or beverage activates a goal to consume its complements: foods that consumers believe would taste better together. Drinking cola increases consumers' willingness to pay for a cheeseburger. This effect appears to be contingent on consumer perceptions of these relationships rather than their sensory properties.
Examples
An example of this would be the demand for cars and petrol. The supply and demand for cars is represented by the figure, with the initial demand $D_1$. Suppose that the initial price of cars is represented by $P_1$, with a quantity demanded of $Q_1$. If the price of petrol were to decrease by some amount, this would result in a higher quantity of cars demanded. This higher quantity demanded would cause the demand curve to shift rightward to a new position $D_2$. Assuming a constant supply curve of cars, the new increased quantity demanded will be at $Q_2$, with a new increased price $P_2$. Other examples include automobiles and fuel, mobile phones and cellular service, printers and cartridges, among others.
Perfect complement
A perfect complement is a good that must be consumed with another good. The indifference curve of a perfect complement exhibits a right angle, as illustrated by the figure. Such preferences can be represented by a Leontief utility function.
Few goods behave as perfect complements. One example is a left shoe and a right; shoes are naturally sold in pairs, and the ratio between sales of left and right shoes will never shift noticeably from 1:1.
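For reference, the Leontief utility function mentioned above has the standard form below; the shoe example corresponds to equal coefficients (a sketch of the usual textbook formulation, not text from the article):

```latex
% Leontief (perfect-complement) utility; goods consumed in the fixed ratio a : b.
u(x, y) \;=\; \min\!\left(\frac{x}{a},\; \frac{y}{b}\right)
% For left and right shoes, a = b = 1, so u(x_L, x_R) = \min(x_L, x_R):
% an unmatched extra shoe adds no utility.
```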
The degree of complementarity, however, does not have to be mutual; it can be measured by the cross price elasticity of demand. In the case of video games, a specific video game (the complement good) has to be consumed with a video game console (the base good). It does not work the other way: a video game console does not have to be consumed with that game.
Example
In marketing, complementary goods give additional market power to the producer. It allows vendor lock-in by increasing switching costs. A few types of pricing strategy exist for a complementary good and its base good:
Pricing the base good at a relatively low price - this approach allows easy entry by consumers (e.g. low-price consumer printer vs. high-price cartridge)
Pricing the base good at a relatively high price to the complementary good - this approach creates a barrier to entry and exit (e.g., a costly car vs inexpensive gas)
Gross complements
Sometimes the complement-relationship between two goods is not intuitive and must be verified by inspecting the cross-elasticity of demand using market data.
Mosak's definition states that "a good $x_j$ is a gross complement of $x_k$ if $\partial f_j / \partial p_k$ is negative, where $f_i(p_1, \dots, p_n, \omega)$ for $i = 1, \dots, n$ denotes the ordinary individual demand for a certain good". In fact, in Mosak's case, $x_j$ is not a gross complement of $x_k$, but $x_k$ is a gross complement of $x_j$. The elasticity does not need to be symmetrical. Thus, $x_k$ can be a gross complement of $x_j$ while $x_j$ is simultaneously a gross substitute for $x_k$.
Proof
The standard Hicks decomposition of the effect on the ordinary demand for a good $x_1$ of a simple price change in a good $x_2$, at utility level $u^*$ and chosen bundle $x^*$, is

$$\frac{\partial x_1}{\partial p_2} \;=\; \frac{\partial h_1}{\partial p_2} \;-\; x_2^{*}\,\frac{\partial x_1}{\partial w},$$

where $h_1$ is the Hicksian (compensated) demand for good 1 and $w$ is wealth.
If $x_1$ is a gross substitute for $x_2$, the left-hand side of the equation and the first term of the right-hand side are positive. By the symmetry of the substitution term, evaluating the corresponding equation for $\partial x_2 / \partial p_1$ (Mosak's case), the first term of the right-hand side stays the same, while in some extreme cases the income term $x_1^{*}\,\partial x_2 / \partial w$ is large enough to make the whole right-hand side negative. In this case, $x_2$ is a gross complement of $x_1$. Overall, the relationship between $x_1$ and $x_2$ is not symmetrical.
Effect of price change of complementary goods
Substitute good
References
Goods (economics)
Utility function types | Complementary good | [
"Physics"
] | 1,130 | [
"Materials",
"Goods (economics)",
"Matter"
] |
591,513 | https://en.wikipedia.org/wiki/Optical%20cavity | An optical cavity, resonating cavity or optical resonator is an arrangement of mirrors or other optical elements that confines light waves similarly to how a cavity resonator confines microwaves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing modes with certain resonance frequencies. Modes can be decomposed into longitudinal modes that differ only in frequency and transverse modes that have different intensity patterns across the cross section of the beam. Many types of optical cavities produce standing wave modes.
Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. Flat mirrors are not often used because of the difficulty of aligning them to the needed precision. The geometry (resonator type) must be chosen so that the beam remains stable, i.e. the size of the beam does not continually grow with multiple reflections. Resonator types are also designed to meet other criteria such as a minimum beam waist or having no focal point (and therefore no intense light at a single point) inside the cavity.
Optical cavities are designed to have a large Q factor, meaning a beam undergoes many oscillation cycles with little attenuation. In the regime of high Q values, this is equivalent to the frequency line width being small compared to the resonant frequency of the cavity.
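To make the high-Q statement concrete, the quality factor of a resonator is conventionally defined as follows (standard optics usage, added for clarity rather than taken from the article):

```latex
% Quality factor at resonant frequency \nu_0 with linewidth \Delta\nu:
Q \;=\; \frac{\nu_0}{\Delta\nu}
  \;=\; 2\pi\nu_0\,\frac{\text{energy stored}}{\text{power dissipated}},
% so a large Q is precisely the condition \Delta\nu \ll \nu_0 described above.
```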
Resonator modes
Light confined in a resonator will reflect multiple times from the mirrors, and due to the effects of interference, only certain patterns and frequencies of radiation will be sustained by the resonator, with the others being suppressed by destructive interference. In general, radiation patterns which are reproduced on every round-trip of the light through the resonator are the most stable. These are known as the modes of the resonator.
Resonator modes can be divided into two types: longitudinal modes, which differ in frequency from each other; and transverse modes, which may differ in both frequency and the intensity pattern of the light. The basic, or fundamental transverse mode of a resonator is a Gaussian beam.
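For a two-mirror standing-wave cavity of optical length L, the longitudinal mode frequencies and their spacing (the free spectral range) take the standard form below, added for concreteness; q denotes the integer mode index:

```latex
\nu_q \;=\; q\,\frac{c}{2L},
\qquad
\Delta\nu_{\mathrm{FSR}} \;=\; \nu_{q+1} - \nu_q \;=\; \frac{c}{2L}.
```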
Resonator types
The most common types of optical cavities consist of two facing plane (flat) or spherical mirrors. The simplest of these is the plane-parallel or Fabry–Pérot cavity, consisting of two opposing flat mirrors. While simple, this arrangement is rarely used in large-scale lasers due to the difficulty of alignment; the mirrors must be aligned parallel within a few seconds of arc, or "walkoff" of the intracavity beam will result in it spilling out of the sides of the cavity. However, this problem is much reduced for very short cavities with a small mirror separation distance (L < 1 cm). Plane-parallel resonators are therefore commonly used in microchip and microcavity lasers and semiconductor lasers. In these cases, rather than using separate mirrors, a reflective optical coating may be directly applied to the laser medium itself. The plane-parallel resonator is also the basis of the Fabry–Pérot interferometer.
For a resonator with two mirrors with radii of curvature R1 and R2, there are a number of common cavity configurations. If the two radii are equal to half the cavity length (R1 = R2 = L / 2), a concentric or spherical resonator results. This type of cavity produces a diffraction-limited beam waist in the centre of the cavity, with large beam diameters at the mirrors, filling the whole mirror aperture. Similar to this is the hemispherical cavity, with one plane mirror and one mirror of radius equal to the cavity length.
A common and important design is the confocal resonator, with mirrors of equal radii to the cavity length (R1 = R2 = L). This design produces the smallest possible beam diameter at the cavity mirrors for a given cavity length, and is often used in lasers where the purity of the transverse mode pattern is important.
A concave-convex cavity has one convex mirror with a negative radius of curvature. This design produces no intracavity focus of the beam, and is thus useful in very high-power lasers where the intensity of the light might be damaging to the intracavity medium if brought to a focus.
Less common resonator types include optical ring resonators and whispering-gallery mode resonators, in which a resonance is formed by waves moving in a closed loop rather than reflecting between two mirrors.
Stability
Only certain ranges of values for R1, R2, and L produce stable resonators in which periodic refocussing of the intracavity beam is produced. If the cavity is unstable, the beam size will grow without limit, eventually growing larger than the size of the cavity mirrors and being lost. By using methods such as ray transfer matrix analysis, it is possible to calculate a stability criterion:

$$0 \;\le\; \left(1 - \frac{L}{R_1}\right)\left(1 - \frac{L}{R_2}\right) \;\le\; 1.$$
Values which satisfy the inequality correspond to stable resonators.
The stability can be shown graphically by defining a stability parameter g for each mirror:

$$g_1 = 1 - \frac{L}{R_1}, \qquad g_2 = 1 - \frac{L}{R_2},$$
and plotting g1 against g2 as shown. Areas bounded by the line g1 g2 = 1 and the axes are stable. Cavities at points exactly on the line are marginally stable; small variations in cavity length can cause the resonator to become unstable, and so lasers using these cavities are in practice often operated just inside the stability line.
A simple geometric statement describes the regions of stability: A cavity is stable if the line segments between the mirrors and their centers of curvature overlap, but one does not lie entirely within the other.
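The criterion is straightforward to evaluate numerically. A minimal sketch (flat mirrors are represented by an infinite radius of curvature; the example values are illustrative):

```python
import math

def g_param(L: float, R: float) -> float:
    """Stability parameter g = 1 - L/R; a flat mirror has R = math.inf."""
    return 1.0 - L / R

def is_stable(L: float, R1: float, R2: float) -> bool:
    """Stable iff 0 <= g1*g2 <= 1; the boundary is only marginally stable."""
    g1, g2 = g_param(L, R1), g_param(L, R2)
    return 0.0 <= g1 * g2 <= 1.0

L = 0.30  # cavity length in metres
print(is_stable(L, R1=0.30, R2=0.30))      # confocal: g1*g2 = 0 (marginal)
print(is_stable(L, R1=0.15, R2=0.15))      # concentric: g1*g2 = 1 (marginal)
print(is_stable(L, R1=math.inf, R2=0.60))  # plane + curved mirror: stable
```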
In the confocal cavity, if a ray is deviated from its original direction in the middle of the cavity, its displacement after reflecting from one of the mirrors is larger than in any other cavity design. This prevents amplified spontaneous emission and is important for designing high power amplifiers with good beam quality.
Practical resonators
If the optical cavity is not empty (e.g., a laser cavity which contains the gain medium), the value of L needs to be adjusted to account for the index of refraction of the medium. Optical elements such as lenses placed in the cavity alter the stability and mode size. In addition, for most gain media, thermal and other inhomogeneities create a variable lensing effect in the medium, which must be considered in the design of the laser resonator.
Practical laser resonators may contain more than two mirrors; three- and four-mirror arrangements are common, producing a "folded cavity". Commonly, a pair of curved mirrors forms one or more confocal sections, with the rest of the cavity being quasi-collimated and using plane mirrors. The shape of the laser beam depends on the type of resonator: the beam produced by stable, paraxial resonators can be well modeled by a Gaussian beam. In special cases the beam can be described as a single transverse mode, and the spatial properties can be well described by the Gaussian beam itself. More generally, the beam may be described as a superposition of transverse modes. Accurate description of such a beam involves expansion over some complete, orthogonal set of functions (over two dimensions) such as Hermite polynomials or the Ince polynomials. Unstable laser resonators, on the other hand, have been shown to produce fractal-shaped beams.
Some intracavity elements are usually placed at a beam waist between folded sections. Examples include acousto-optic modulators for cavity dumping and vacuum spatial filters for transverse mode control. For some low power lasers, the laser gain medium itself may be positioned at a beam waist. Other elements, such as filters, prisms and diffraction gratings often need large quasi-collimated beams.
These designs allow compensation of the cavity beam's astigmatism, which is produced by Brewster-cut elements in the cavity. A Z-shaped arrangement of the cavity also compensates for coma while the 'delta' or X-shaped cavity does not.
Out-of-plane resonators lead to rotation of the beam profile and greater stability. The heat generated in the gain medium leads to frequency drift of the cavity; the frequency can therefore be actively stabilized by locking it to an unpowered cavity. Similarly, the pointing stability of a laser may be further improved by spatial filtering with an optical fibre.
Alignment
Precise alignment is important when assembling an optical cavity. For best output power and beam quality, optical elements must be aligned such that the path followed by the beam is centered through each element.
Simple cavities are often aligned with an alignment laser—a well-collimated visible laser that can be directed along the axis of the cavity. Observation of the path of the beam and its reflections from various optical elements allows the elements' positions and tilts to be adjusted.
More complex cavities may be aligned using devices such as electronic autocollimators and laser beam profilers.
Optical delay lines
Optical cavities can also be used as multipass optical delay lines, folding a light beam so that a long path length may be achieved in a small space. A plane-parallel cavity with flat mirrors produces a flat zigzag light path, but as discussed above, these designs are very sensitive to mechanical disturbances and walk-off. When curved mirrors are used in a nearly confocal configuration, the beam travels on a circular zigzag path. The latter is called a Herriott-type delay line. A fixed insertion mirror is placed off-axis near one of the curved mirrors, and a mobile pickup mirror is similarly placed near the other curved mirror. A flat linear stage with one pickup mirror is used in the case of flat mirrors, and a rotational stage with two mirrors is used for the Herriott-type delay line.
The rotation of the beam inside the cavity alters the polarization state of the beam. To compensate for this, a single-pass delay line is also needed, made of either three or two mirrors in a 3D or 2D retro-reflection configuration, respectively, on top of a linear stage. To adjust for beam divergence, a second carriage on the linear stage with two lenses can be used. The two lenses act as a telescope, producing a flat phase front of a Gaussian beam on a virtual end mirror.
See also
Optical feedback
Multiple-prism grating laser oscillator (or Multiple-prism grating laser cavity)
Coupled mode theory
Vertical-cavity surface-emitting laser
References
Further reading
Koechner, William. Solid-state laser engineering, 2nd ed. Springer Verlag (1988).
An excellent two-part review of the history of optical cavities:
Cavity, optical
Laser science | Optical cavity | [
"Materials_science",
"Engineering"
] | 2,222 | [
"Glass engineering and science",
"Optical devices"
] |