| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
6,928,351 | https://en.wikipedia.org/wiki/Sigma-ring | In mathematics, a nonempty collection of sets is called a σ-ring (pronounced sigma-ring) if it is closed under countable union and relative complementation.
Formal definition
Let $\mathcal{R}$ be a nonempty collection of sets. Then $\mathcal{R}$ is a σ-ring if:
Closed under countable unions: $\bigcup_{n=1}^{\infty} A_n \in \mathcal{R}$ if $A_n \in \mathcal{R}$ for all $n \in \mathbb{N}$
Closed under relative complementation: $A \setminus B \in \mathcal{R}$ if $A, B \in \mathcal{R}$
Properties
These two properties imply:
$\bigcap_{n=1}^{\infty} A_n \in \mathcal{R}$ whenever $A_1, A_2, \ldots$ are elements of $\mathcal{R}$
This is because $\bigcap_{n=1}^{\infty} A_n = A_1 \setminus \bigcup_{n=2}^{\infty} \left(A_1 \setminus A_n\right).$
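As a quick check of this identity (a standard argument, spelled out here only for completeness), each set on the right-hand side is produced by one of the two defining closure properties:

```latex
% A_1 \setminus A_n \in \mathcal{R}                     by relative complementation,
% \bigcup_{n \ge 2} (A_1 \setminus A_n) \in \mathcal{R} by closure under countable unions,
% and subtracting that union from A_1 uses relative complementation once more:
\[
A_1 \setminus \bigcup_{n \ge 2}\bigl(A_1 \setminus A_n\bigr)
  = \bigcap_{n \ge 2}\bigl(A_1 \setminus (A_1 \setminus A_n)\bigr)
  = \bigcap_{n \ge 2}\bigl(A_1 \cap A_n\bigr)
  = \bigcap_{n \ge 1} A_n \;\in\; \mathcal{R}.
\]
```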
Every σ-ring is a δ-ring but there exist δ-rings that are not σ-rings.
Similar concepts
If the first property is weakened to closure under finite union (that is, $A \cup B \in \mathcal{R}$ whenever $A, B \in \mathcal{R}$) but not countable union, then $\mathcal{R}$ is a ring but not a σ-ring.
Uses
σ-rings can be used instead of σ-fields (σ-algebras) in the development of measure and integration theory, if one does not wish to require that the universal set be measurable. Every σ-field is also a σ-ring, but a σ-ring need not be a σ-field.
A σ-ring $\mathcal{R}$ that is a collection of subsets of $X$ induces a σ-field for $X$. Define $\mathcal{A} = \{ E \subseteq X : E \in \mathcal{R} \text{ or } X \setminus E \in \mathcal{R} \}.$ Then $\mathcal{A}$ is a σ-field over the set $X$; to check closure under countable union, recall that a σ-ring is closed under countable intersections. In fact $\mathcal{A}$ is the minimal σ-field containing $\mathcal{R}$, since it must be contained in every σ-field containing $\mathcal{R}$.
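A brief sketch of the closure-under-countable-union check mentioned above (the case split is the usual way the argument is organized, and is not spelled out in the original): take $E_1, E_2, \ldots \in \mathcal{A}$ and separate them into sets $A_i$ that lie in $\mathcal{R}$ and sets $B_j$ whose complements lie in $\mathcal{R}$.

```latex
% If every E_n lies in \mathcal{R}, then \bigcup_n E_n \in \mathcal{R} \subseteq \mathcal{A} directly.
% Otherwise at least one B_j exists, and
\[
X \setminus \bigcup_n E_n
  = \Bigl(\bigcap_j (X \setminus B_j)\Bigr) \setminus \bigcup_i A_i \;\in\; \mathcal{R},
\]
% using closure under countable intersections for the B_j and relative
% complementation to remove the A_i, so \bigcup_n E_n \in \mathcal{A}.
```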
See also
References
Walter Rudin, 1976. Principles of Mathematical Analysis, 3rd ed. McGraw-Hill. The final chapter uses σ-rings in the development of Lebesgue theory.
Measure theory
Families of sets | Sigma-ring | [
"Mathematics"
] | 302 | [
"Basic concepts in set theory",
"Families of sets",
"Combinatorics"
] |
6,928,403 | https://en.wikipedia.org/wiki/Wilkinson%20Award | The Wilkinson Award is an Australian architecture award presented by the New South Wales Chapter of the Australian Institute of Architects; it was first awarded in 1961. The award recognises excellence in residential buildings built in New South Wales, Australia. It is most often given to freestanding houses, but has at times been awarded to multi-residential projects and to alterations and additions.
The medal is presented in memory of the Australian architect and academic Professor Leslie Wilkinson (12 October 1882 – 20 September 1973). Born in New Southgate, London, England, he emigrated to Sydney in 1918 and became the first Dean of Architecture at the University of Sydney School of Architecture.
National Awards
Since 1981, a total of eight Wilkinson Award winners have gone on to win the Robin Boyd Award, regarded as the highest award for residential architecture in Australia, later in the same year at the Australian national architecture awards.
Multiple Winners
Glenn Murcutt has won the award on six occasions, and Harry Seidler and Ken Woolley have each won it on four occasions. Alexander Tzannes and Durbach Block Jaggers have won the award three times each.
List of recipients
See also
Australian Institute of Architects Awards and Prizes
National Award for Enduring Architecture
New South Wales Enduring Architecture Award
Australian Institute of Architects
Victorian Architecture Medal
Robin Boyd Award
Melbourne Prize
References
External links
Example of work by Leslie Wilkinson - 'Markdale' NSW
Architecture awards
Awards established in 1961
Architecture in Australia | Wilkinson Award | [
"Engineering"
] | 270 | [
"Architecture stubs",
"Architecture"
] |
6,928,455 | https://en.wikipedia.org/wiki/Early%20prostate%20cancer%20antigen-2 | Early prostate cancer antigen-2 (EPCA-2) is a protein whose blood levels are elevated in prostate cancer. It appears to provide more accuracy in identifying early prostate cancer than the standard prostate cancer marker, PSA.
"EPCA-2" is not the name of a gene. EPCA-2 gets its name because it is the second prostate cancer marker identified by the research team. This earlier marker was previously known as "EPCA", but is now called "EPCA-1".
EPCA-2 versus PSA
Leman, Getzenberg and colleagues described the performance characteristics of EPCA-2, a novel nuclear protein marker for prostate cancer cells, in the April 2007 issue of Urology. This paper has since been retracted by the publisher.
An initial study suggested that the EPCA-2 protein serum assay exhibits favorable performance characteristics, potentially superior to serum PSA. However, more studies are necessary to determine whether the test retains its sensitivity when used in a screening population.
In September 2008 the industry sponsor of EPCA-2, Onconome, sued Dr Robert Getzenberg, Johns Hopkins University (JHU), and the University of Pittsburgh, his previous institution, claiming that Getzenberg misrepresented and falsified data related to EPCA-2. Onconome had sponsored 13 million dollars of research over five years in Getzenberg's labs at the University of Pittsburgh and Johns Hopkins toward a blood test for prostate cancer, and claimed that the test was "essentially as reliable as flipping a coin". Getzenberg (PhD, JHU, 1992) first developed EPCA-2 as a graduate student with Professor Donald Coffey at Johns Hopkins and later as a faculty member at the University of Pittsburgh. Getzenberg, former professor of urology and director of research of the James Buchanan Brady Urological Institute, left Johns Hopkins University School of Medicine in 2013 for undisclosed reasons.
References
External links
Medical Today - EPCA-2: A Highly Specific Serum Marker For Prostate Cancer
consumeraffairs.com - Hopkins Researchers Find Better Blood Test for Prostate Cancer
Tumor markers
Prostate cancer | Early prostate cancer antigen-2 | [
"Chemistry",
"Biology"
] | 438 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
6,929,277 | https://en.wikipedia.org/wiki/Mohawk%20Airlines%20Flight%2040 | Mohawk Airlines Flight 40 was a scheduled passenger flight between Syracuse, New York and Washington, DC, with an intermediate stop in Elmira, New York. On June 23, 1967, it suffered a loss of control and crashed, killing all 30 passengers and four crew on board. It was the deadliest disaster in the airline's history. A valve in the auxiliary power unit had failed completely, allowing a fire to spread to the tailplane and causing a loss of pitch control.
Aircraft and crew
This particular BAC 1-11 was new, having had its first flight the previous year. Its airframe had accumulated 2,246 hours in total. It was equipped with two model 506-14 Spey engines manufactured by Rolls-Royce. Its registration number was N1116J with the aircraft name Discover America.
The captain was 43-year-old Charles E. Bullock, who had logged 13,875 flight hours, including 603 hours on the BAC 1-11. The first officer was 33-year-old Troy E. Rudesill, who had 4,814 flight hours, with 667 of them on the BAC 1-11.
History of flight
The aircraft, a BAC 1-11, took off from runway 24 at Elmira Corning Regional Airport at approximately 14:39 EDT. It was cleared to climb to a higher altitude five minutes later. Nine minutes after that, several eyewitnesses saw large pieces of the tailplane break away from the aircraft, with flames and smoke coming from the fuselage, as the flight proceeded south from Mansfield, Pennsylvania. The aircraft subsequently lost control and plunged into a heavily wooded area served only by dirt roads. No one on the ground was hurt, but there were no survivors aboard the plane. Thereafter, the air traffic controller at New York Center vectored a Piper Aztec over the area where Flight 40's radar target had disappeared. The pilot of this plane reported observing the burning wreckage of an airplane, which was later identified as Flight 40.
The plane gouged a long strip through the woods. The tail section was thrown clear of the main impact site. Some of the witnesses were workmen at a coal strip mine who immediately took a bulldozer and plowed two roads through to the site, a mile and a half away.
Shortly after the incident, Robert E. Peach, president of Mohawk, demanded an investigation by the Federal Bureau of Investigation. In a telegram to J. Edgar Hoover, director of the F.B.I., Mr. Peach wrote: "Evidence has developed in the course of notification of next of kin of crash victims which leads to strong suggestion of sabotage. Mohawk Airlines formally demands that the F.B.I. investigate the possibility of sabotage." However, Mr. Peach did not make public the nature of the "evidence."
Investigation
The National Transportation Safety Board launched a full investigation. The findings of that investigation are as follows:
A non-return valve in the auxiliary power unit had suffered a complete failure. This allowed bleed air from the engine to flow through the system in the wrong direction. This air exited at the start of the system at sufficient temperatures to ignite components there. The fire quickly spread to the hydraulics in the aircraft, and moved along the hydraulic lines to the rear of the plane. There, it caused heavy damage to the tail, causing a loss of pitch control which sent the airplane diving into the ground.
Aftermath
In July 1967, the National Transportation Safety Board made three safety recommendations to the Federal Aviation Administration, which issued Airworthiness Directive 68-01-01 to prevent heat damage or fire in the airframe plenum of the auxiliary power unit installation. On 23 June 2017, a memorial was erected to honor the victims.
Notes
References
National Transportation Safety Board Summary
National Transportation Safety Board Aircraft Accident Report - April 18, 1968
External links
Text of Airworthiness Directive 68-1-1, issued as a result of the crash
A photo of the accident aircraft
Aviation accidents and incidents in the United States in 1967
1967 in Pennsylvania
Airliner accidents and incidents in Pennsylvania
Airliner accidents and incidents caused by in-flight fires
Airliner accidents and incidents caused by mechanical failure
Airliner accidents and incidents caused by in-flight structural failure
Accidents and incidents involving the BAC One-Eleven
Mohawk Airlines accidents and incidents
Tioga County, Pennsylvania
June 1967 events in the United States | Mohawk Airlines Flight 40 | [
"Materials_science"
] | 886 | [
"Airliner accidents and incidents caused by mechanical failure",
"Mechanical failure"
] |
6,929,747 | https://en.wikipedia.org/wiki/C57BL/6 | C57BL/6, often referred to as "C57 black 6", "B6", "C57" or "black 6", is a common inbred strain of laboratory mouse.
It is the most widely used "genetic background" for genetically modified mice serving as models of human disease, and it is the best-selling mouse strain, owing to the availability of congenic strains, easy breeding, and robustness.
The median lifespan of C57BL/6 mice is 27–29 months and the maximum lifespan is about 36 months.
Origin
The inbred strain of C57BL mice was created in 1921 by C. C. Little at the Bussey Institute for Research in Applied Biology. The substrain "6" was the most popular of the surviving substrains. Little's supervisor William E. Castle had obtained the predecessor strain of C57BL/6, "mouse number 57", from Abbie Lathrop who was breeding inbred strains for mammary tumor research in collaboration with Leo Loeb at the time.
Appearance and behavior
C57BL/6 mice have a dark brown, nearly black coat. They are more sensitive to noise and odours and are more likely to bite than the more docile laboratory strains such as BALB/c. They are good breeders.
Group-housed B6 male mice display barbering behavior, in which the dominant mouse in a cage selectively removes hair from its subordinate cage mates. Mice that have been barbered have large bald patches on their bodies, commonly around the head, snout, and shoulders, although barbering may appear anywhere on the body. Both hair and whiskers may be removed.
C57BL/6 has many unusual characteristics that make it useful for some work and inappropriate for others: It is unusually sensitive to pain and to cold, and analgesic medications are less effective in it. Unlike most mouse strains, it drinks alcoholic beverages voluntarily. It is more susceptible than average to morphine addiction, atherosclerosis, and age-related hearing loss.
Genetics
The C57BL/6 mouse was the second-ever mammalian species to have its entire genome published.
The dark coat makes the mouse strain convenient for creating transgenic mice: it is crossed with a light-furred 129 mouse, and the desirable crosses can be easily identified by their mixed coat colors.
There now exist colonies of mice derived from the original C57BL/6 colony that have been bred in isolation from one another for many hundreds of generations. Owing to genetic drift, these colonies differ widely from one another (and from the original mice isolated at the Bussey Institute). Responsible scientists, including those at accredited repositories, are careful to point out this fact and take pains to distinguish sublines such as C57BL/6J (the established subline at The Jackson Laboratory) from C57BL/6N, etc. But even within these sublines, the potential for drift exists in colonies maintained by individual laboratories that do not have a systematic practice of reestablishing breeders from a centralized, vetted stock.
The mice (as well as NOD and SJL mice) are known to carry the IgG2c allele.
Popularity
By far the most popular laboratory rodent, the C57BL/6 mouse accounts for a large share of all rodents shipped to research laboratories from American suppliers. Its overwhelming popularity is due largely to inertia: it has been widely used and widely studied, and therefore it is used even more.
In 1993 the first C57BL/6 gene targeted knockout mouse was published by a group at Hoffmann-La Roche in Switzerland.
In 2013 C57BL/6 mice were flown into space aboard Bion-M No.1.
In 2015 C57BL/6NTac females provided by Taconic Biosciences were sent to the International Space Station on SpaceX CRS-6.
References
Laboratory mouse strains
Space-flown life | C57BL/6 | [
"Biology"
] | 817 | [
"Space-flown life"
] |
6,930,483 | https://en.wikipedia.org/wiki/Pederin | Pederin is a vesicant toxic amide with two tetrahydropyran rings, found in the haemolymph of beetles of the genus Paederus, including the Nairobi fly, belonging to the family Staphylinidae. It was first characterized by processing 25 million field-collected P. fuscipes. It makes up approximately 0.025% of an insect's weight (for P. fuscipes).
It has been demonstrated that the production of pederin relies on the activities of an endosymbiont (a Pseudomonas sp.) within Paederus.
The manufacture of pederin is largely confined to adult female beetles—larvae and males only store pederin acquired maternally (i.e., through eggs) or by ingestion.
Physical effects
Skin contact with pederin from the coelomic fluid exuded from a female Paederus beetle causes Paederus dermatitis. This is a rash that varies from a slight erythema to severe blistering, depending on the concentration and duration of exposure. Treatment involves washing the irritated area with cool soapy water. Application of a topical steroid is also recommended for more intense exposures. These measures can significantly reduce the physical effects the toxin has on the affected area.
Synthesis
An efficient total synthesis of pederin is known. Beginning with (+)-benzoylselenopederic acid, reduction with Zn(BH4)2 stereoselectively reduces the acyclic ketone. Michael addition of nitromethane is then performed. After several further steps of Moffatt oxidation, phenylselenation, hydrolysis, and reduction, pederic acid is obtained.
In the final steps of the synthesis, pederic acid is added to the protected compound using LiHMDS in THF, producing a 75% yield. The protecting groups are then removed using TBAF followed by a hydrolytic quench, giving an 88% yield.
Mode of action
Pederin blocks mitosis at concentrations as low as 1 ng/ml by inhibiting protein and DNA synthesis without affecting RNA synthesis, thereby preventing cell division, and has been shown to extend the life of mice bearing a variety of tumors. For these reasons, it has garnered interest as a potential anti-cancer treatment.
Uses
Pederin and its derivatives are being researched as anticancer drugs. This family of compounds is able to inhibit protein and DNA biosynthesis, making it useful to slow the division of cancer cells. One derivative of pederin, psymberin, has been found to be highly selective in targeting solid tumor cells.
See also
Psymberin
Paederus dermatitis
Cycloheximide
Christmas eye
References
Acetamides
Ethers
Tetrahydropyrans
Blister agents | Pederin | [
"Chemistry"
] | 599 | [
"Blister agents",
"Chemical weapons",
"Functional groups",
"Organic compounds",
"Ethers"
] |
6,930,834 | https://en.wikipedia.org/wiki/Mohawk%20Airlines%20Flight%20405 | Mohawk Airlines Flight 405, a Fairchild Hiller FH-227 twin-engine turboprop airliner registered N7818M, was a domestic scheduled passenger flight operated by Mohawk Airlines that crashed into a house within the city limits of Albany, New York, on March 3, 1972, on final approach to Albany County Airport (now Albany International Airport), New York, killing 17 people. The intended destination airport lies in the suburban Town of Colonie, about 4 miles north of the crash site.
Flight history
The flight, which originated in New York City, encountered problems during its final approach to runway 01 at Albany. The weather at the airport was reported to the flight crew as "ceiling indefinite, 1,200 feet obscured, 2 miles visibility in light snow, surface winds (from) 360 degrees (north) at 9 knots". As the Fairchild FH227B twin-engine turboprop reached 8.5 miles from the airport, the flight crew contacted Mohawk's operations center via radio and informed them that the left propeller was 'hung up' in the cruise pitch lock, which would prevent normal thrust reduction on that side, needed for landing. At about 5 miles out, the flight crew notified Albany Approach Control that they were trying to perform an emergency 'feathering' of the left propeller. As they continued to descend and struggle with the propeller, they advised the controller that they were going to "land short". The plane subsequently crashed into a house 3.5 miles south of the runway. Of the 3 crew members and 45 passengers, 2 crew members and 14 passengers were killed, as well as one occupant of the house.
Investigation
The National Transportation Safety Board (NTSB) launched a full investigation into the accident, which included a three-day public hearing in Albany on April 25 through April 27, 1972, and a deposition in Washington, D.C., on May 19, 1972. Both the flight data recorder and the cockpit voice recorder were recovered from the wreckage, and their recorded data was found to be intact and usable. The investigation revealed that as the flight crew attempted to reduce thrust on the left engine during the final approach, they were unable to remove the 'cruise pitch lock' mechanism that is used to maintain a cruise thrust setting. When they subsequently attempted to perform an emergency feathering and shutdown procedure on that engine, they were able to shut down the engine but unable to achieve a feathering of the propeller. This eventually resulted in the left propeller creating a high amount of asymmetric drag while windmilling; so much so, that the other engine operating at full power was not able to arrest the resulting uncontrollable descent.
The NTSB, despite investing substantial investigative resources trying to uncover the reasons behind the two unusual and seemingly separate propeller-related malfunctions, was unable to shed light on either one. It was not able to replicate the 'pitch lock stuck' malfunction, nor adequately explain why the crew subsequently failed to effect the standard feathering procedure to properly shut down and reduce the thrust and drag on the left side.
In effect, because the crew was unable to properly secure the left engine, an unwanted condition of high asymmetric thrust turned into an irreversible condition of high asymmetric drag, which resulted in a premature descent and the crash.
In its final report, issued on April 11, 1973, the Board determined the following Probable Cause for the accident:
The inability of the crew to feather the left propeller, in combination with the descent of the aircraft below the prescribed minimum altitudes for the approach. The Board is unable to determine why the left propeller could not be feathered.
The Board also found the following Contributing Factors:
Contributing causal factors for the nonstandard approach were the captain's preoccupation with a cruise pitch lock malfunction, the first officer's failure to adhere to company altitude awareness procedures, and the captain's failure to delegate any meaningful responsibilities to the copilot which resulted in a lack of effective task sharing during the emergency. Also, the Board was unable to determine why the propeller pitch lock malfunctioned during the descent.
In subsequent correspondence between the NTSB and the Federal Aviation Administration (FAA), included in the final report, the NTSB questioned the operating procedures and manuals then available for the aircraft. The NTSB found that there was insufficient guidance to pilots on handling a "cruise pitch lock stuck" condition. For example, it was not clear from existing instructions and guidelines whether a missed approach would be advisable and/or possible under these circumstances, and if so, what the recommended procedure would be to successfully execute the maneuver. Also, the condition of a shut-down but unfeathered engine, i.e. a windmilling propeller with high asymmetric drag and minimum-control implications, which was encountered in this accident, was insufficiently covered, according to the NTSB.
Safety recommendations
As a result of its investigation into the accident and in light of its findings, the NTSB also issued the following safety recommendations:
That shoulder harnesses be provided to and worn by the flight crew
That flight attendant seats be designed for improved G-force tolerance
That emergency lighting switches be armed prior to every flight
That flight crew coordination procedures be reinforced during initial and recurrent training, so that especially during emergency situations, one crew member always flies the aircraft, and making appropriate altitude and airspeed callouts is always clearly assigned to one crew member
See also
List of accidents and incidents involving commercial aircraft
Mohawk Airlines Flight 411
References
External links
Airliners.net Photo of accident aircraft N7818M, one day prior to accident, on March 2, 1972 in La Guardia Airport, New York
Aviation-Safety.net Photo of aircraft N7818M crash aftermath, on March 4, 1972 in Albany, New York
Airlinecolors.com Images and historical overview
NTSB Report - See copy at Embry-Riddle Aeronautical University
Summary NTSB Report
Carol DeMare, "Recalling scenes of death from the sky", Albany Times Union, May 1, 2006
Airliner accidents and incidents caused by pilot error
Airliner accidents and incidents caused by mechanical failure
Airliner accidents and incidents caused by engine failure
Aviation accidents and incidents in the United States in 1972
1972 in New York (state)
Mohawk Airlines accidents and incidents
Accidents and incidents involving the Fairchild F-27
Airliner accidents and incidents in New York (state)
History of Albany, New York
March 1972 events in the United States | Mohawk Airlines Flight 405 | [
"Materials_science"
] | 1,318 | [
"Airliner accidents and incidents caused by mechanical failure",
"Mechanical failure"
] |
6,931,292 | https://en.wikipedia.org/wiki/Modulated%20ultrasound | Modulated ultrasound is a technique for the transmission of audio information, comparable to radio: an ultrasonic carrier is modulated to carry an audio signal, much as radio signals are modulated. It is usually used to carry messages underwater at ranges under five miles.
The received ultrasound signal is decoded into audible sound by a modulated-ultrasound receiver, a device that receives the modulated signal and decodes it for use as sound, navigational-position information, etc. Its function is somewhat like that of a radio receiver.
Applications include use in underwater diving communicators as well as communication with submarines.
Range limitation
Due to the absorption characteristics of seawater, ultrasound (sound at frequencies greater than human hearing, or approximately greater than 20,000 hertz) is not used for long-range underwater communications. The higher the frequency, the faster the sound is absorbed by the seawater, and the more quickly the signal fades. For this reason, most underwater "telephones" operate either in "baseband" mode (at the same frequencies as the voice itself, essentially acting as an underwater loudspeaker), in "UQC-1" mode (as defined in MIL-C-15240D) with a modulated carrier of 7,500 Hz, in "UQC-2" mode (as defined in MIL-C-22509) from around 8,500 hertz to approximately 12,000 hertz, or in the later "WQC-2" mode from 8,500 hertz to approximately 100,000 hertz, with most use around 32,500 hertz. (See also NATO STANAG 1074 Ed. 4 for descriptions of internationally used frequencies.)
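To make the modulation idea concrete, the sketch below amplitude-modulates a 1 kHz audio tone onto a 32,500 Hz carrier (one of the frequencies mentioned above) and recovers it with a simple envelope detector. It is a minimal illustration only: the sample rate, modulation depth, and filter settings are arbitrary assumptions, and real underwater telephones use other modulation schemes and hardware-specific processing that are not shown here.

```python
# Minimal AM-over-ultrasound sketch; all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                               # sample rate high enough for a 32.5 kHz carrier
t = np.arange(0, 0.05, 1 / fs)             # 50 ms of signal

audio = np.sin(2 * np.pi * 1_000 * t)      # 1 kHz stand-in for a voice signal
carrier = np.sin(2 * np.pi * 32_500 * t)   # ultrasonic carrier
modulated = (1 + 0.8 * audio) * carrier    # conventional AM at 80% modulation depth

# Receiver side: rectify, then low-pass filter to keep only the audio-band envelope.
rectified = np.abs(modulated)
b, a = butter(4, 3_000 / (fs / 2))         # 3 kHz cutoff keeps the tone, rejects the carrier
recovered = filtfilt(b, a, rectified)
recovered -= recovered.mean()              # remove the DC offset left by rectification

print(f"correlation with original audio: {np.corrcoef(audio, recovered)[0, 1]:.3f}")
```

Because conventional AM places the audio in the envelope of the carrier, a rectifier followed by a low-pass filter is enough to recover it; systems using single-sideband or other schemes require a proper product detector instead.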
See also
Sound from ultrasound for modulated ultrasound that can make its carried signal audible without needing a receiver set.
References
Ultrasound
Underwater diving equipment components
Submarine tactics | Modulated ultrasound | [
"Technology"
] | 387 | [
"Components",
"Underwater diving equipment components"
] |
6,931,943 | https://en.wikipedia.org/wiki/Blowing%20house | A blowing house or blowing mill was a building used for smelting tin in Cornwall and on Dartmoor in Devon, in South West England. Blowing houses contained a furnace and a pair of bellows that were powered by an adjacent water wheel, and they were in use from the early 14th century until they were gradually replaced by reverberatory furnaces in the 18th century. The remains of over 40 blowing houses have been identified on Dartmoor.
History
The blowing house method of smelting tin was probably introduced early in the 14th century to replace the earliest method of smelting which had to be done in two stages – a first smelting probably took place near to the tinworks and the roughly smelted metal was taken to a stannary town to be smelted again to produce the final refined product. Each of these smeltings was taxed separately until 1303 when they were replaced by a single tax on the finished product. It is likely that this tax change was due to the improved smelting process provided by the blowing houses.
Documentation confirms the existence of blowing houses in Cornwall as early as 1402, but the earliest reference for Dartmoor is not found until the early 16th century, though it is likely that they were in use on the moor earlier. In Devon there are many references to blowing mills throughout the 16th and 17th centuries, reflecting the boom time in tin-mining on Dartmoor. However, by 1730 there were only two blowing mills working in the whole of the county: at Sheepstor and Plympton.
From the beginning of the 18th century, this method was gradually superseded by reverberatory furnace smelting, which used higher temperatures and powdered anthracite as fuel and had the advantage of not requiring a forced draught of air. The smelting house at Eylesbarrow tin mine which was in operation during the first half of the 19th century had two furnaces, one of each type.
Construction
On Dartmoor, blowing houses were rectangular buildings, roughly twice as long as they were wide. They were made of unmortared granite blocks with thick walls and probably had turf or thatch roofs, which were periodically burnt to retrieve the particles of tin that had been driven into the roof by the blast from the bellows. Blowing houses are typically located on or near the bank of a stream where the fall of water was enough for a leat to be built to work a small overshot water wheel, which developed enough power to operate the bellows.
The only contemporary detailed description of a blowing house was provided by a Cornishman named William Pryce in his treatise on Cornish mining of 1778:
According to Crispin Gill, about two tons of charcoal was needed to smelt a ton of metal. The molten metal ran out from the bottom of the furnace into a granite trough or "float" from where it was ladled into stone moulds.
Archaeology
Archaeological investigation of blowing houses started in 1866 when John Kelly examined the lower mill at Yealm Steps. Robert Burnard cleared the interior of the lower mill at Week Ford in the 1880s, but the most notable researcher of the remains on Dartmoor was R. Hansford Worth who made detailed records of over 40 sites. Since then, research on the tin industry in the south west has continued, for instance The Dartmoor Tinworking Interest Group was formed in 1991.
There is evidence for furnaces that match the description given by Pryce (see above) at Upper and Lower Merrivale, Avon Dam and Blacksmith's Shop. Each of these sites has two upright granite slabs set into the floor area of the mill a short distance apart, leaving space behind for the bellows. The Merrivale sites also have a slab at the back, making a basic hearth shape. A photograph of the furnace at the Lower Merrivale blowing house shows the two side slabs with the somewhat displaced float stone between them, and the mouldstone, filled with rainwater, just above and to the right.
The mouldstone is the best field evidence for a blowing house; mouldstones are large blocks of granite with a flat top containing a rectangular hollow recess into which the molten tin was poured to be cast into ingots. The moulds vary in size and shape, the largest known from Dartmoor being that from Upper Merrivale and the smallest a square mould at Longstone. Some mouldstones have additional smaller hollows on their surface; these are traditionally assumed to be for assaying purposes, but some authors have suggested that they may have been for making small ingots for selling illicitly to avoid tin coinage, the tax on white (refined) tin.
The only properly documented find of a tin ingot from Dartmoor has a diagonal hole through it, which matches the supposed practice of placing a stick in the mould when pouring in the molten tin. The stick would burn away, leaving a hole that could be used to lever the solid ingot out of the mould and would later be useful for tying up the ingots for carriage. According to Worth, the ingot fitted precisely into one of the moulds found at the lower blowing house on the River Yealm, although it did not fill it and weighed far less than the average Dartmoor ingot. He speculated that it was the small surplus that remained after the normal ingots had been cast. Cornish ingots were much larger.
One of the best preserved blowing houses on Dartmoor is above Merrivale Bridge on the River Walkham. It has a mould stone close to its entrance and the wheel-pit can be easily traced. Some blowing houses also housed crushing ("knacking") or grinding ("crazing") mills, and at Gobbet Mine on the River Swincombe both the upper and lower grinding stones were found.
See also
Dartmoor tin-mining
Blowing engine
References
Further reading
Bryan Earl, Cornish Mining: The Techniques of Metal Mining in the West of England, Past and Present, 2nd edition, Cornish Hillside Publications, 1994, pp. 97, 97.1, with illustration.
Tin mining
Industrial furnaces
Mining in Cornwall
History of Dartmoor | Blowing house | [
"Chemistry"
] | 1,260 | [
"Metallurgical processes",
"Industrial furnaces"
] |
6,931,945 | https://en.wikipedia.org/wiki/Blood%20parrot%20cichlid | The blood parrot cichlid (Amphilophus citrinellus × Vieja melanurus), or parrot cichlid, is a hybrid fish in the family Cichlidae. The fish was first bred in Taiwan around 1986. Blood parrots should not be confused with other parrot cichlids or with saltwater parrotfish (family Scaridae). Natural colors of the fish are red, yellow, and grey; other colors are produced artificially, for example by breeders injecting dye.
Because this hybrid cichlid has various anatomical deformities, controversy exists over the ethics of creating the blood parrot. One deformity is its mouth, which has only a narrow vertical opening. This makes blood parrots somewhat harder to feed and potentially vulnerable to malnutrition.
The fish is known to be semi-aggressive. Despite its deformity, it can hold its own in a fight, and will prey on any small fish that can fit in its mouth.
Description
Blood parrots are often bright orange in coloration, but there are other colors that they can have naturally, such as red, yellow or gray. Other colors may be produced by dyeing the fish, which can shorten life expectancy. Some fish have been injected with a colored dye by the breeder. Another modification, generally considered inhumane by enthusiasts, involves cutting the tail while small which causes the fish to grow into a heart shape; these are usually sold under the name of "heart parrots". As the press has brought this practice to light, the majority of fish stockists will no longer sell these modified fish. Adult fish can grow to a length of 8 inches (20 centimeters) and reach an age of 10 to 15 years.
Various breeds of blood parrots have been developed, such as the "King Kong parrot", which typically vary in color from red to yellow. They have fully functioning mouths with less of a nuchal deformity and grow larger. They are usually considered more valuable than the traditional blood parrots.
Genetic defects
As a result of hybridization of the parent species, the fish have several anatomical deformities, including a beak-shaped mouth that cannot fully close, which they compensate for by crushing food with the throat muscles, a deformed nuchal hump, and compressed vertebrae. Some commercial foods have been developed specifically to be easy for blood parrots to ingest, and recently some blood parrots have been selectively bred to be able to completely close their mouths. Blood parrots sometimes can have deformed swim bladders, causing an awkward swimming pattern; and unusually large, and often deformed irises.
Breeding
Male blood parrots generally are infertile, but successful breeding has occurred. Normally, a female blood parrot lays eggs on a hard surface, and both parents guard the eggs unless the brood develops fungus, at which time the eggs will be consumed by either the parents or other fish. However, fish farms have begun introducing male blood parrots injected with a hormone to increase fertility. Most female blood parrots are fertile.
Aquarium
Blood parrots are hardy and may be housed singly, in schools, or with complementary species under a variety of conditions. Sufficient lighting can be provided by a variety of compact fluorescent lamps without the use of T5 or halide fixtures. The fish are voracious eaters and generate significant uneaten debris during feeding. High volume filtration and frequent substrate suctioning is recommended to minimize nitrates.
The recommended aquarium size is 55 gallons for one fish, with an additional 20 gallons for each extra fish.
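Expressed as a quick rule-of-thumb calculation (a hypothetical helper, not from the article, and only as reliable as the guideline above):

```python
def recommended_tank_gallons(num_fish: int) -> int:
    """Guideline above: 55 US gallons for the first blood parrot,
    plus 20 gallons for every additional fish."""
    if num_fish < 1:
        raise ValueError("need at least one fish")
    return 55 + 20 * (num_fish - 1)

print(recommended_tank_gallons(3))  # 95 gallons for three fish
```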
References
Cichlasomatinae
Fish hybrids
Intergeneric hybrids | Blood parrot cichlid | [
"Biology"
] | 738 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
6,932,189 | https://en.wikipedia.org/wiki/Neviot | Neviot () is an Israeli mineral water marketing company.
History
Neviot was established in 1989 after geologists discovered that the water of Ein Zahav spring near Kiryat Shmona was suitable for drinking. In 2002, Neviot changed its logo and bottle design.
In 2004, the Podhorzer family, which owned Neviot, sold almost half its shares to the Central Bottling Company (Coca-Cola Israel), which already owned 34.06% of Neviot, bringing its total stake to 78.58%.
See also
Economy of Israel
References
Drink companies of Israel
Bottled water brands
Israeli drinks
Israeli brands
Mineral water
Words and phrases in Modern Hebrew | Neviot | [
"Chemistry"
] | 140 | [
"Mineral water"
] |
6,932,317 | https://en.wikipedia.org/wiki/Conoid | In geometry a conoid is a ruled surface whose rulings (lines) fulfill the additional conditions:
(1) All rulings are parallel to a plane, the directrix plane.
(2) All rulings intersect a fixed line, the axis.
The conoid is a right conoid if its axis is perpendicular to its directrix plane. Hence all rulings are perpendicular to the axis.
Because of (1) any conoid is a Catalan surface and can be represented parametrically by
$\mathbf{x}(u, v) = \mathbf{c}(u) + v\,\mathbf{r}(u)$
Any curve $v \mapsto \mathbf{x}(u_0, v)$ with fixed parameter $u = u_0$ is a ruling, $\mathbf{c}(u)$ describes the directrix and the vectors $\mathbf{r}(u)$ are all parallel to the directrix plane. The planarity of the vectors $\mathbf{r}(u)$ can be represented by
$\det\bigl(\mathbf{r}, \dot{\mathbf{r}}, \ddot{\mathbf{r}}\bigr) = 0$.
If the directrix is a circle, the conoid is called a circular conoid.
The term conoid was already used by Archimedes in his treatise On Conoids and Spheroids.
Examples
Right circular conoid
The parametric representation
$\mathbf{x}(u, v) = (\cos u,\ (1 - v)\sin u,\ v)$, with $0 \le u < 2\pi$,
describes a right circular conoid with the unit circle of the x-y-plane as directrix and a directrix plane which is parallel to the y-z-plane. Its axis is the line $y = 0,\ z = 1$.
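Using this parametrization, the two defining conditions of a conoid can be checked directly (a short verification, not part of the original text):

```latex
% Rulings are the curves with u fixed and v varying:
\[
\mathbf{x}(u, v) = (\cos u,\ \sin u,\ 0) + v\,(0,\ -\sin u,\ 1).
\]
% Every ruling direction (0, -\sin u, 1) has vanishing x-component, hence is
% parallel to the y-z-plane (condition 1); and at v = 1 every ruling passes
% through (\cos u, 0, 1), i.e. it meets the axis y = 0, z = 1 (condition 2).
```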
Special features:
The intersection with a horizontal plane is an ellipse.
$(x^2 - 1)(1 - z)^2 + y^2 = 0$ is an implicit representation. Hence the right circular conoid is a surface of degree 4.
Kepler's rule gives for a right circular conoid with radius $r$ and height $h$ the exact volume $V = \tfrac{\pi}{2} r^2 h$ (see the worked check below).
The implicit representation is fulfilled by the points of the line $y = 0,\ z = 1$, too. For these points there exist no tangent planes. Such points are called singular.
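A short check of the volume statement in the list above (assuming the parametrization given earlier; the derivation itself is not in the original text): at height $z$ the horizontal cross-section is an ellipse with semi-axes $r$ and $r(1 - z/h)$, so the cross-sectional area is linear in $z$ and Kepler's (Simpson's) rule is exact.

```latex
\[
A(z) = \pi\, r \cdot r\Bigl(1 - \tfrac{z}{h}\Bigr),
\qquad
V = \int_0^h A(z)\,dz
  = \frac{h}{6}\Bigl(A(0) + 4\,A\bigl(\tfrac{h}{2}\bigr) + A(h)\Bigr)
  = \frac{h}{6}\bigl(\pi r^2 + 2\pi r^2 + 0\bigr)
  = \frac{\pi}{2}\, r^2 h .
\]
```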
Parabolic conoid
The parametric representation
describes a parabolic conoid with the equation . The conoid has a parabola as directrix, the y-axis as axis and a plane parallel to the x-z-plane as directrix plane. It is used by architects as roof surface (s. below).
The parabolic conoid has no singular points.
Further examples
hyperbolic paraboloid
Plücker conoid
Whitney Umbrella
helicoid
Applications
Mathematics
Many conoids have singular points, and such surfaces are investigated in algebraic geometry.
Architecture
Like other ruled surfaces, conoids are of high interest to architects because they can be built using beams or bars. Right conoids can be manufactured easily: one threads bars onto an axis so that they can rotate around this axis only. The bars are then deflected by a directrix, generating a conoid (see the parabolic conoid above).
External links
mathworld: Plücker conoid
References
A. Gray, E. Abbena, S. Salamon, Modern Differential Geometry of Curves and Surfaces with Mathematica, 3rd ed., Boca Raton, FL: CRC Press, 2006.
Vladimir Y. Rovenskii, Geometry of Curves and Surfaces with MAPLE
Surfaces
Geometric shapes | Conoid | [
"Mathematics"
] | 579 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
6,932,525 | https://en.wikipedia.org/wiki/Richard%20Henderson%20%28biologist%29 | Richard Henderson is a British molecular biologist and biophysicist and a pioneer in the field of electron microscopy of biological molecules. Henderson shared the Nobel Prize in Chemistry in 2017 with Jacques Dubochet and Joachim Frank for developing cryo-electron microscopy, which makes it possible to examine biological molecules in near-atomic detail without destroying the samples.
Education
Henderson was educated at Newcastleton primary school, Hawick High School and Boroughmuir High School. His father was a baker. He went on to study Physics at the University of Edinburgh graduating with a BSc degree in Physics, 1st Class honours in 1966. He then commenced postgraduate study at Corpus Christi College, Cambridge, and obtained his PhD degree from the University of Cambridge in 1969.
Career and research
Research
Henderson worked on the structure and mechanism of chymotrypsin for his doctorate under the supervision of David Mervyn Blow at the MRC Laboratory of Molecular Biology. His interest in membrane proteins led to him working on voltage-gated sodium channels as a post-doctoral researcher at Yale University. Returning to the MRC Laboratory of Molecular Biology in 1975, Henderson worked with Nigel Unwin to study the structure of the membrane protein bacteriorhodopsin by electron microscopy. A seminal paper in Nature by Henderson and Unwin (1975) established a low resolution structural model for bacteriorhodopsin showing the protein to consist of seven transmembrane helices. This paper was important for a number of reasons, not the least of which was that it showed that membrane proteins had well defined structures and that transmembrane alpha-helices could occur. After 1975 Henderson continued to work on the structure of bacteriorhodopsin without Unwin. In 1990 Henderson published an atomic model of bacteriorhodopsin by electron crystallography in the Journal of Molecular Biology. This model was the second ever atomic model of a membrane protein. The techniques Henderson developed for electron crystallography are still in use.
Together with Chris Tate, Henderson helped develop conformational thermostabilisation: a method that allows any protein to be made more stable while still holding a chosen conformation of interest. This method has been critical in crystallising and solving the structures of several G protein–coupled receptors (GPCRs). With help from the charity LifeArc, Henderson and Tate founded the MRC start-up company, Heptares Therapeutics Ltd (HTL) in 2007. HTL continues to develop new drugs targeting medically important GPCRs linked to a wide range of human diseases.
In the last few years, Henderson has returned to hands-on research focusing on single particle electron microscopy. He was an early proponent of the idea that single particle electron microscopy is capable of determining atomic resolution models for proteins, a case he set out in a 1995 paper in Quarterly Reviews of Biophysics. Henderson aims to be able to routinely obtain atomic structures without crystals. He has made seminal contributions to many of the approaches used in single particle electron microscopy, including pioneering the development of direct electron detectors that recently allowed single particle cryo-electron microscopy to achieve its goals.
Post-docs and PhD students
Although Henderson has typically worked independently, he has trained a number of scientists who have gone on to independent research careers. These scientists include:
David Agard, since 1983 at UCSF
Per Bullough, since 1994 at the University of Sheffield
Nikolaus Grigorieff, since 2013 at HHMI Janelia Research Campus
Reinhard Grisshammer, since 2017 at the National Cancer Institute
Edmund Kunji, since 2000 at MRC Mitochondrial Biology Unit, University of Cambridge
Peter Rosenthal, since 2015 at the Francis Crick Institute
John Rubinstein, since 2006 at The Hospital for Sick Children, Toronto
Gebhard Schertler, since 2010 at ETH Paul Scherrer Institute
Christopher Tate, since 1992 at MRC Laboratory of Molecular Biology
Vinzenz Unger, since 2010 at Northwestern University
Other positions
Henderson has worked at the Medical Research Council Laboratory of Molecular Biology (MRC LMB) in Cambridge since 1973, and was its director between 1996 and 2006. He was also a visiting professor at the Miller Institute of the University of California, Berkeley in Spring 1993. He is currently a mentor for the Academy of Medical Sciences Mentoring Scheme. Outside academia, he lists his interests as hill walking in Scotland, kayaking and drinking good wine.
Awards and honours
1978 Awarded the William Bate Hardy Prize
1981 Awarded the Ernst-Ruska Prize for Electron Microscopy
1983 Elected a Fellow of the Royal Society (FRS)
1984 Awarded the Sir Hans Krebs Medal by the Federation of European Biochemical Societies
1991 Awarded the Lewis S. Rosenstiel Award
1993 Awarded the Louis-Jeantet Prize for Medicine
1998 Elected a Foreign Associate of the US National Academy of Sciences
1998 Elected as a founder Fellow of the Academy of Medical Sciences (FMedSci)
1999 Awarded the Gregori Aminoff prize (together with Nigel Unwin)
2003 Honorary Fellow of the Corpus Christi College, Cambridge
2003 Honorary Member of the British Biophysical Society
2005 Awarded Distinguished Scientist Award and Fellow, Microscopy Society of America
2008 Honorary Doctor of Science degree from the University of Edinburgh
2016 Awarded the Copley Medal of the Royal Society
2016 Awarded the Alexander Hollaender Award in Biophysics
2017 Awarded the Wiley Prize
2017 Honorary Fellow of the Royal Society of Chemistry (HonFRSC)
2017 Awarded the Nobel Prize in Chemistry together with Jacques Dubochet and Joachim Frank "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution"
2018 Appointed Member of the Order of the Companions of Honour (CH) in the Queen's Birthday Honours for services to electron microscopy of biological molecules
2018 Awarded the Royal Medal of the Royal Society of Edinburgh
2019 Honorary Doctor of Science degree from the University of Leeds
Interviews
He was interviewed by Jim Al-Khalili for The Life Scientific, first broadcast on BBC Radio 4 in February 2018.
References
External links
including the Nobel Lecture on 8 December 2017 From Electron Crystallography to Single Particle cryoEM
Richard Henderson on The Scientists' Channel
Richard Henderson in Hyde Park Civilization on ČT24 22.7.2023 (moderator Daniel Stach)
1945 births
Alumni of Corpus Christi College, Cambridge
Alumni of the University of Edinburgh
British molecular biologists
Recipients of the Copley Medal
Fellows of Corpus Christi College, Cambridge
Fellows of Darwin College, Cambridge
Fellows of the Royal Society
Living people
Members of the Order of the Companions of Honour
Foreign associates of the National Academy of Sciences
Microscopists
People educated at Hawick High School
People educated at Boroughmuir High School
Scientists from Edinburgh
Nobel laureates in Chemistry
Scottish biochemists
Scottish biologists
Scottish Nobel laureates
British Nobel laureates
Structural biologists
Helen Hay Whitney Foundation fellows | Richard Henderson (biologist) | [
"Chemistry"
] | 1,381 | [
"Structural biologists",
"Microscopists",
"Structural biology",
"Microscopy"
] |
6,932,634 | https://en.wikipedia.org/wiki/Gregori%20Aminoff%20Prize | The Gregori Aminoff Prize is an international prize awarded since 1979 by the Royal Swedish Academy of Sciences in the field of crystallography, rewarding "a documented, individual contribution in the field of crystallography, including areas concerned with the dynamics of the formation and dissolution of crystal structures. Some preference should be shown for work evincing elegance in the approach to the problem."
The prize, which is named in memory of the Swedish scientist and artist Gregori Aminoff (1883–1947), Professor of Mineralogy at the Swedish Museum of Natural History from 1923, was endowed through a bequest by his widow Birgit Broomé-Aminoff. The prize can be shared by several winners. It has been described as the Nobel Prize of crystallography.
Recipients of the Prize
Source: Royal Swedish Academy of Science
See also
List of chemistry awards
List of physics awards
References
Notes
A. The form and spelling of the names in the name column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. Alternative spellings and name forms, where they exist, are given at the articles linked from this column.
B. The information in the country column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. This information may not necessarily reflect the recipient's birthplace or citizenship.
C. The information in the institution column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. This information may not necessarily reflect the recipient's current institution.
D. The citation for each award is quoted (not always in full) from www.kva.se, the official website of the Royal Swedish Academy of Sciences. The links in this column are to articles (or sections of articles) on the history and areas of science for which the awards were presented. The links are intended only as a guide and explanation. For a full account of the work done by each prize winner, please see the biography articles linked from the name column.
Citations
External links
awardee of the Gregori Aminoff Prize
Awards of the Royal Swedish Academy of Sciences
Chemistry awards
Crystallography awards
Physics awards
Awards established in 1979 | Gregori Aminoff Prize | [
"Chemistry",
"Materials_science",
"Technology"
] | 447 | [
"Crystallography awards",
"Chemistry awards",
"Crystallography",
"Science and technology awards",
"Physics awards"
] |
6,932,661 | https://en.wikipedia.org/wiki/BMW%20GT%20101 | The BMW GT 101 was a turboshaft-type gas turbine engine developed from the BMW 003 aviation engine, that was considered for installation in Nazi Germany's Panther tank. The German Army's development division, the Heereswaffenamt (Army Ordnance Board), studied a number of gas turbine engines for use in tanks starting in mid-1944. Although none of these was fitted operationally, the GT 101 (GT for "Gas Turbine") reached a production quality stage of development. Several designs were produced over the lifetime of the program, including the GT 102 and GT 103.
Origins
As early as mid-1943 Adolf Müller, formerly of the Junkers Jumo aircraft powerplant division of the parent Junkers aviation firm in Dessau, and then of Heinkel-Hirth's (Heinkel Strahltriebwerke) jet engine division, proposed the use of a gas turbine for armored vehicle engines. A gas turbine would be so much lighter than the 600-hp-plus class of gasoline-fueled reciprocating piston engines being used in the next-generation tanks, to that time primarily sourced from the Maybach firm for the Wehrmacht Heer's existing armored fighting vehicle designs, that it would considerably improve their power-to-weight ratio and thereby improve cross-country performance, and potentially outright speed. At that time, there were considerable challenges with the use of gas turbine engines in this role, however. In the case of a pure turbojet engine for aviation purposes, the hot exhaust from the turbine is used directly for thrust alone; but in the case of a gas turbine being used for a traction engine, any heat flowing out the exhaust was essentially wasted power. The turbine exhaust was much hotter than that from a piston engine, and pioneering gas turbine designs had atrociously bad fuel economy figures when compared to traditional reciprocating piston-engine designs. On the upside, the use of inexpensive and widely available kerosene as fuel offset this disadvantage at least to some degree, so the overall economics of running the engines might end up being similar. Another problem was that a gas turbine engine only works well near a particular designed operating speed, although at (or near) that speed it can provide a wide range of output torque. More specifically, turbines offer very little torque at low speeds, which is much less of a problem for a piston engine and none at all for an electric motor. In order to use a turbine in the tank role, the design would need an advanced transmission and clutch that allowed the engine to run within a limited range of speeds, or alternately some other method of extracting power. At first the Army was uninterested, and Müller turned to the design of an advanced turbosupercharger for BMW (it is unclear if this design saw use). When this work was completed in January 1944 he once again turned to traction engine designs, and eventually met with the Heereswaffenamt in June 1944 to present a number of proposed designs for a 1,000-horsepower unit. Given the extreme problems Germany had with fuel supplies late in the war, the ability to use low-grade fuels, no matter how much was needed, was actually seen as a major advantage, and was the primary reason the Heereswaffenamt eventually became interested in the design.
Preliminary design
Müller's first detailed design was a simple modification to a traditional jet engine, the core engine being based on the experimental Heinkel HeS 011, of which only 19 complete examples were ever built. In this design a separate turbine and power take-off shaft was bolted onto the exhaust of the engine core, the hot gases of the engine powering the turbine, and thus the tank. Since the engine core was separate from the power take-off, torque was available immediately because the core could be left running at full speed while generating small amounts of power, the unneeded gases being "dumped". This design had a serious problem, however; when the load was removed, during gear shifts for instance, the power turbine was unloaded and could race out of control. Either the power turbine had to be braked during these periods, or the gas flow from the engine core had to be dumped.
Another problem was that the Heereswaffenamt was seriously concerned about the quality of the fuels they could find. Unlike the aviation role where it was expected the fuel would be highly refined, it was considered likely the Army would end up with lower-quality fuels that could expected to contain all sorts of heavy contaminants. This led to the possibility that the fuel would not have time to mix properly in a traditional design, leading to poor combustion. They were particularly interested in having the fuel injectors rotate along with the engine core, which could be expected to lead to much better mixing, with the additional benefit of reducing hot spots on the turbine's stators. Unfortunately Müller's design did not appear to be able to be adapted to use these injectors, and the design was eventually rejected on 12 August 1944.
Müller then turned to designs that removed the separate power turbine and instead required some sort of torque-maintaining transmission. The best solution to the problem would have been to drive an electrical generator and use the power to drive motors for traction (a system Porsche had tried to introduce several times), but a serious shortage of copper by this point in the war — as well as its relatively poor quality throughout the war for electrical use, from copper ore resources that Germany could access — ruled out this solution. Instead some sort of hydraulic transmission was to be used, although not initially specified. Additionally, the new design included the rotating fuel injectors in the combustion chamber that the Heereswaffenamt was interested in. Müller presented the new design on 14 September, and the Heereswaffenamt proved considerably more interested – the deteriorating fuel supply situation at this point may have been a factor as well.
Oddly, they then suggested that any engine core developed for this role should also be suitable for aviation use, which led to the abandonment of the rotating injectors after all, and eventually to the use of a modified BMW 003 core, from a well-proven design. The basic layout had to be modified with the addition of a third bearing near the middle of the engine to help absorb shock loads, and a third turbine stage was added to the end of the engine to harness more torque. Unlike the earlier design, the power-take off could be placed anywhere (not just off the free turbine stage) and was in fact moved to the front of the engine in order to make the design as compatible as possible with existing engine compartments. The basic design was completed in mid-November, and assigned the name GT 101.
Originally they had intended to mount the new engine in the Henschel-designed Tiger tank, but although the engine was smaller in diameter than the V-12 piston engine it replaced, its origins as the axial-compressor-based BMW 003 aviation turbojet meant that it was too long to fit in the Tiger I's engine bay. Attention then turned to the Panther, which by this point in the war was to be the basis of all future tank production anyway (see the Entwicklung series for details). For experimental fitting, Porsche provided one of the prototype Jagdtiger hulls.
Fitting of the GT 101 in the Panther hull took some design effort, but eventually a suitable arrangement was found. The engine exhaust was fitted with a large divergent diffuser to lower the exhaust velocity and temperature, which also allowed for a larger third turbine stage. The entire exhaust area extended out of the rear of the engine compartment into "free air", which made it extremely vulnerable to enemy fire, and it was realized this was not practical for a production system.
A new automatic transmission from Zahnradfabrik Friedrichshafen (ZF) was built for the fitting; it had three clutching levels in the torque converter and twelve speeds. The transmission also included an electrically-operated clutch that mechanically disengaged from the engine completely at 5,000 rpm, below which the engine produced no torque on the output. At full speed, 14,000 rpm, the engine itself also acted in the manner of a huge flywheel, which greatly improved cross-country performance by allowing some of the engine's excess speed to be dumped into the transmission to pull the tank over bumps.
In terms of performance the GT 101 would have been surprisingly effective. It would have produced a total of 3,750 hp, using 2,600 hp to operate the compressor and thus leaving 1,150 hp to power the transmission. The entire engine assembly weighed 450 kg (992 lb), not including the transmission. In comparison, the existing Maybach HL230 P30 it replaced provided 620 hp yet weighed a comparatively huge 1,200 kg (2,646 lb). With the Maybach the Panther had a specific power of about 13.5 hp/ton; with the GT 101 this would have improved to 27 hp/ton, outperforming any tank of WWII by a wide margin (for instance, the T-34 was 16.2 hp/tonne) and nearly matching the modern, turboshaft-powered American M1 Abrams tank's own 26.9 hp/ton top rating. For other reasons, essentially wear and tear, speeds for a GT 101-powered Panther would have been deliberately limited to those of the gasoline-powered Panthers. The only downsides were poor torque at low power settings, and a fuel consumption about double that of the Maybach, which presented problems in finding enough room for fuel tankage; a similar problem also existed with early German gas turbines used for aircraft propulsion.
GT 102
While work on the GT 101 continued, Müller proposed another way to build the free-turbine engine that avoided the problems with his original designs. In December 1944 he presented his plans, which were accepted for development as the GT 102.
The basic idea of the GT 102 was to completely separate the power turbine from the engine itself, using the latter as a gas generator. The core engine was run hot enough to power itself and nothing more; no power was taken from the core to drive the tank. Compressed air from the core's compressor, 30% of the overall airflow, was bled off through a pipe to a completely separate two-stage turbine with its own combustion chamber. This avoided the overspeed problems of the original design; when load was removed, simply shutting off the airflow to the turbine would slow it down. This also meant that the core could be run at full speed while the power turbine ran at low speed, providing significantly improved low-speed torque. The only downside to the design was that the power turbine no longer had the huge spinning mass of the GT 101, and thus did not offer any significant flywheel energy storage.
Since the turbine section of the core engine was no longer being fed all of the air from the compressor, it could be built smaller than in the GT 101. This made the engine shorter overall, allowing it to be installed transversely in the upper portion of the Panther's engine compartment, in the wider area above the tracks. The power turbine was then fitted in the empty space below, mounted at a right angle to the engine. This positioned it in-line with the normal transmission, which was located at the front of the vehicle, driving it via a power shaft. The mounting was considerably more practical than the GT 101, and entirely "under armor" as well. Although the GT 102 had fuel economy about equal to the GT 101, the mounting left considerably more empty room within the engine compartment in the space formerly used by the engine cooling system that could be used for new fuel cells, doubling the overall fuel capacity to 1,400 liters and thus providing equal range to the original gasoline engine.
Most of the design work for the GT 102 was complete by early 1945, and the plans were to have been delivered on 15 February (along with final designs for the GT 101). It appears the plans were not delivered, likely due to the deteriorating war situation.
GT 102 Ausf. 2
In order to further improve the fit of the GT 102 in the Panther, the GT 102 Ausf. 2 design modified several sections of the original gas generator layout to shorten the compressor area and combustion chamber. These were somewhat longer in the GT 102 than they would have been in a comparable aircraft engine in order to allow for better mixing with lower quality fuels. The Ausf. 2 returned these to their original dimensions, and instead re-introduced the rotating fuel injectors from the original pre-GT 101 designs. The compressor was further reduced in length by reducing it from nine to seven stages, but retained the original compression ratio by operating the first stage close to Mach 1. With these reductions in length the engine could be fit lengthwise in the engine compartment, allowing the space above the tracks to be used for fuel storage, as they had originally.
GT 103
Much of the poor fuel economy of the gas turbine in the traction role was due to the hot exhaust, which essentially represented lost energy. In order to reclaim some of this energy, it is possible to use the hot exhaust to pre-heat the air from the compressor before it flows into the combustion chamber, using a heat exchanger. Although not common, these recuperators are used in a number of applications today.
W. Hryniszak of Brown Boveri in Heidelberg designed a recuperator that was added to the otherwise unmodified GT 102 design to produce the GT 103. The heat exchanger used a rotating porous ceramic cylinder fit into a cruciform duct. Air from the gas generator's exhaust entered the duct outside the cylinder at 500 °C, and blew around the cylinder, heating it and then exhausting at about 350 °C. The ceramic cylinder rotated slowly in order to avoid overheating the "hot" side. Compressed air flowing into the power turbine was piped through the middle of the cylinder, entering at about 180 °C and exiting at about 300 °C.
This meant that 120 °C of the 800 °C final temperature of the air did not have to be provided by the fuel, representing a fairly substantial savings. Estimates suggested an improvement of about 30% in fuel consumption. It was also suggested that a second heat exchanger could be used on the gas generator engine core, saving another 30%. This reduced fuel use by half overall, making it similar to the original gasoline engine. These estimates appear unreasonable in retrospect, although General Motors did experiment with these systems throughout the 1960s and 70s.
See also
References
Kay, Antony, German Jet Engine and Gas Turbine Development 1930-1945, Airlife Publishing, 2002,
World War II military equipment of Germany
Gas turbines
Aero-derivative engines
Tank engines | BMW GT 101 | [
"Technology"
] | 3,035 | [
"Aero-derivative engines",
"Engines",
"Gas turbines"
] |
6,932,879 | https://en.wikipedia.org/wiki/Aurora%20kinase%20A | Aurora kinase A also known as serine/threonine-protein kinase 6 is an enzyme that in humans is encoded by the AURKA gene.
Aurora A is a member of a family of mitotic serine/threonine kinases. It is implicated in important processes during mitosis and meiosis whose proper function is integral to healthy cell proliferation. Aurora A is activated by one or more phosphorylations and its activity peaks during the G2 phase to M phase transition in the cell cycle.
Discovery
The aurora kinases were first identified in 1990 during a cDNA screen of Xenopus eggs. The kinase discovered, Eg2, is now referred to as Aurora A. However, Aurora A's meiotic and mitotic significance was not recognized until 1998.
Aurora kinase family
The human genome contains three members of the aurora kinase family: Aurora kinase A, Aurora kinase B and Aurora C kinase. The Xenopus, Drosophila, and Caenorhabditis elegans genomes, on the other hand, contain orthologues only to Aurora A and Aurora B.
In all studied species, the three Aurora mitotic kinases localize to the centrosome during different phases of mitosis. The family members have highly conserved C-terminal catalytic domains. Their N-terminal domains, however, exhibit a large degree of variance in the size and sequence.
Aurora A and Aurora B kinases play important roles in mitosis. The Aurora kinase A is associated with centrosome maturation and separation and thereby regulates spindle assembly and stability. The Aurora kinase B is a chromosome passenger protein and regulates chromosome segregation and cytokinesis.
Although there is evidence to suggest that Aurora C might be a chromosomal passenger protein, the cellular function of it is less clear.
Localization
Aurora A localizes next to the centrosome late in the G1 phase and early in the S phase. As the cell cycle progresses, concentrations of Aurora A increase and the kinase associates with the mitotic poles and the adjacent spindle microtubules. Aurora A remains associated with the spindles through telophase. Right before mitotic exit, Aurora A relocalizes to the mid-zone of the spindle.
Mitosis
During mitosis, a mitotic spindle is assembled by using microtubules to tether the mother centrosome to its daughter. The resulting mitotic spindle is then used to propel apart the sister chromosomes into what will become the two new daughter cells. Aurora A is critical for proper formation of the mitotic spindle. It is required for the recruitment of several different proteins important to spindle formation. Among these target proteins are TACC, a microtubule-associated protein that stabilizes centrosomal microtubules, and Kinesin 5, a motor protein involved in the formation of the bipolar mitotic spindle. γ-tubulin, the base structure from which centrosomal microtubules polymerize, is also recruited by Aurora A. Without Aurora A, the centrosome does not accumulate the quantity of γ-tubulin that normal centrosomes recruit prior to entering anaphase. Though the cell cycle continues even in the absence of sufficient γ-tubulin, the centrosome never fully matures; it organizes fewer aster microtubules than normal.
Furthermore, Aurora A is necessary for the proper separation of the centrosomes after the mitotic spindle has been formed. Without Aurora A, the mitotic spindle, depending on the organism, will either never separate or will begin to separate only to collapse back onto itself. In the case of the former, it has been suggested that Aurora A cooperates with the kinase Nek2 in Xenopus to dissolve the structure tethering the cell's centrosomes together. Therefore, without proper expression of Aurora A, the cell's centrosomes are never able to separate.
Aurora A also ensures proper organization and alignment of the chromosomes during prometaphase. It is directly involved in the interaction of the kinetochore, the part of the chromosome at which the mitotic spindle attaches and pulls, with the mitotic spindle's extended microtubules. It is speculated that Aurora B cooperates with Aurora A to complete this task. In the absence of Aurora A, mad2, a protein that normally dissipates once a proper kinetochore-microtubule connection is made, remains present even into metaphase.
Finally, Aurora A helps orchestrate the exit from mitosis by contributing to the completion of cytokinesis, the process by which the cytoplasm of the parent cell is split into two daughter cells. During cytokinesis the mother centriole returns to the mid-body of the mitotic cell at the end of mitosis and causes the central microtubules to release from the mid-body. The release allows mitosis to run to completion. Though the exact mechanism by which Aurora A aids cytokinesis is unknown, it is well documented that it relocalizes to the mid-body immediately before the completion of mitosis.
Intriguingly, abolishment of Aurora A through RNA interference (RNAi) results in different mutant phenotypes in different organisms and cell types. For example, deletion of Aurora A in C. elegans results in an initial separation of the cell's centrosomes followed by an immediate collapse of the asters. In Xenopus, deletion prevents the mitotic spindle from ever forming. And in Drosophila, flies without Aurora A will effectively form spindles and separate, but the aster microtubules will be dwarfed. These observations suggest that while Aurora A has orthologues in many different organisms, it may play a similar but slightly different role in each.
Meiosis
Aurora A phosphorylation directs the cytoplasmic polyadenylation and translation of mRNAs, such as that encoding the MAP kinase kinase kinase protein MOS, which are vital to the completion of meiosis in Xenopus oocytes. Prior to the first meiotic metaphase, Aurora A induces the synthesis of MOS. The MOS protein accumulates until it exceeds a threshold and then transduces the phosphorylation cascade in the MAP kinase pathway. This signal subsequently activates the kinase RSK, which in turn binds to the protein Myt1. Myt1, in complex with RSK, is now unable to inhibit cdc2. As a consequence, cdc2 permits entry into meiosis. A similar Aurora A-dependent process regulates the transition from meiosis I to meiosis II.
Furthermore, Aurora A has been observed to have a biphasic pattern of activation during progression through meiosis. It has been suggested that the fluctuations, or phases, of Aurora A activation are dependent on a positive-feedback mechanism with a p13SUC1-associated protein kinase.
Protein translation
Aurora A is not only implicated with the translation of MOS during meiosis but also in the polyadenylation and subsequent translation of neural mRNAs whose protein products are associated with synaptic plasticity.
Clinical significance
Aurora A dysregulation has been associated with a high occurrence of cancer. For example, one study showed over-expression of Aurora A in 94 percent of the invasive tissue growth in breast cancer, while surrounding healthy tissues had normal levels of Aurora A expression. Aurora A has also been shown to be involved in the epithelial–mesenchymal transition and neuroendocrine transdifferentiation of prostate cancer cells in aggressive disease.
Dysregulation of Aurora A may lead to cancer because Aurora A is required for the completion of cytokinesis. If the cell begins mitosis and duplicates its DNA, but is then not able to divide into two separate cells, it becomes aneuploid, containing more chromosomes than normal. Aneuploidy is a trait of many cancerous tumors. Ordinarily, Aurora A expression levels are kept in check by the tumor suppressor protein p53.
Mutations of the chromosome region that contains Aurora A, 20q13, are generally considered to have a poor prognosis.
Osimertinib and rociletinib, two anti-cancer drugs for lung cancer, work by shutting off mutant EGFR, which initially kills cancerous tumors, but the tumors rewire themselves and activate Aurora kinase A, becoming cancerous growths again. According to a 2018 study, targeting both EGFR and Aurora prevents the return of drug-resistant tumors.
Interactions
Aurora A kinase has been shown to interact with:
MBD3,
MYCN,
NME1,
P53,
TACC1,
TPX2, and
UBE2N.
References
Further reading
External links
PDBe-KB provides an overview of all the structure information available in the PDB for Human Aurora kinase A
PDBe-KB provides an overview of all the structure information available in the PDB for Mouse Aurora kinase A
Cell cycle
EC 2.7.11
Cancer research | Aurora kinase A | [
"Biology"
] | 1,857 | [
"Cell cycle",
"Cellular processes"
] |
6,933,049 | https://en.wikipedia.org/wiki/General%20frame | In logic, general frames (or simply frames) are Kripke frames with an additional structure, which are used to model modal and intermediate logics. The general frame semantics combines the main virtues of Kripke semantics and algebraic semantics: it shares the transparent geometrical insight of the former, and robust completeness of the latter.
Definition
A modal general frame is a triple F = ⟨W, R, V⟩, where ⟨W, R⟩ is a Kripke frame (i.e., R is a binary relation on the set W), and V is a set of subsets of W that is closed under the following:
the Boolean operations of (binary) intersection, union, and complement,
the operation □, defined by □A = {w ∈ W : ∀v ∈ W (w R v → v ∈ A)}.
They are thus a special case of fields of sets with additional structure. The purpose of V is to restrict the allowed valuations in the frame: a model ⟨W, R, ⊩⟩ based on the Kripke frame ⟨W, R⟩ is admissible in the general frame F, if
{w ∈ W : w ⊩ p} ∈ V for every propositional variable p.
The closure conditions on V then ensure that {w ∈ W : w ⊩ A} belongs to V for every formula A (not only a variable).
A formula A is valid in F, if w ⊩ A for all admissible valuations ⊩ and all points w ∈ W. A normal modal logic L is valid in the frame F, if all axioms (or equivalently, all theorems) of L are valid in F. In this case we call F an L-frame.
A Kripke frame ⟨W, R⟩ may be identified with a general frame in which all valuations are admissible: i.e., ⟨W, R, P(W)⟩, where P(W) denotes the power set of W.
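To make the closure conditions concrete, the following Python sketch (an illustration only, with arbitrarily chosen finite frames) computes the operation □ on a finite Kripke frame and checks whether a given family of subsets qualifies as the set of admissible valuations of a general frame:

```python
from itertools import product

def box(A, W, R):
    """Worlds all of whose R-successors lie in A."""
    return frozenset(w for w in W if all(v in A for v in W if (w, v) in R))

def is_general_frame(W, R, V):
    """Check that V is closed under intersection, union, complement and box."""
    V = {frozenset(A) for A in V}
    if any(A & B not in V or A | B not in V for A, B in product(V, repeat=2)):
        return False
    return all(frozenset(W) - A in V and box(A, W, R) in V for A in V)

W = {0, 1}
R_total = {(a, b) for a in W for b in W}

print(is_general_frame(W, R_total, [set(), W]))            # True: a general frame with only two admissible sets
print(is_general_frame(W, R_total, [set(), {0}, {1}, W]))  # True: the full Kripke frame
print(is_general_frame(W, {(0, 1)}, [set(), W]))           # False: box(empty set) = {1} is not admissible
```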
Types of frames
In full generality, general frames are hardly more than a fancy name for Kripke models; in particular, the correspondence of modal axioms to properties on the accessibility relation is lost. This can be remedied by imposing additional conditions on the set of admissible valuations.
A frame F is called
differentiated, if ∀A ∈ V (w ∈ A ⇔ v ∈ A) implies w = v,
tight, if ∀A ∈ V (w ∈ □A → v ∈ A) implies w R v,
compact, if every subset of V with the finite intersection property has a non-empty intersection,
atomic, if V contains all singletons,
refined, if it is differentiated and tight,
descriptive, if it is refined and compact.
Kripke frames are refined and atomic. However, infinite Kripke frames are never compact. Every finite differentiated or atomic frame is a Kripke frame.
Descriptive frames are the most important class of frames because of the duality theory (see below). Refined frames are useful as a common generalization of descriptive and Kripke frames.
Operations and morphisms on frames
Every Kripke model ⟨W, R, ⊩⟩ induces the general frame ⟨W, R, V⟩, where V is defined as
V = { {w ∈ W : w ⊩ A} : A is a formula }.
The fundamental truth-preserving operations of generated subframes, p-morphic images, and disjoint unions of Kripke frames have analogues on general frames. A frame F′ = ⟨W′, R′, V′⟩ is a generated subframe of a frame F = ⟨W, R, V⟩, if the Kripke frame ⟨W′, R′⟩ is a generated subframe of the Kripke frame ⟨W, R⟩ (i.e., W′ is a subset of W closed upwards under R, and R′ is the restriction of R to W′), and
V′ = { A ∩ W′ : A ∈ V }.
A p-morphism (or bounded morphism) from F to F′ is a function f from W to W′ that is a p-morphism of the Kripke frames ⟨W, R⟩ and ⟨W′, R′⟩, and satisfies the additional constraint
f⁻¹[A] ∈ V for every A ∈ V′.
The disjoint union of an indexed set of frames Fᵢ = ⟨Wᵢ, Rᵢ, Vᵢ⟩, i ∈ I, is the frame ⟨W, R, V⟩, where W is the disjoint union of the sets Wᵢ, R is the union of the relations Rᵢ, and
V = { A ⊆ W : A ∩ Wᵢ ∈ Vᵢ for all i ∈ I }.
The refinement of a frame ⟨W, R, V⟩ is a refined frame ⟨W′, R′, V′⟩ defined as follows. We consider the equivalence relation
w ∼ v if and only if ∀A ∈ V (w ∈ A ⇔ v ∈ A),
and let W′ be the set of equivalence classes of ∼. Then we put
[w] R′ [v] if and only if ∀A ∈ V (w ∈ □A → v ∈ A),
V′ = { {[w] : w ∈ A} : A ∈ V }.
Completeness
Unlike Kripke frames, every normal modal logic L is complete with respect to a class of general frames. This is a consequence of the fact that L is complete with respect to a class of Kripke models ⟨W, R, ⊩⟩: as L is closed under substitution, the general frame induced by ⟨W, R, ⊩⟩ is an L-frame. Moreover, every logic L is complete with respect to a single descriptive frame. Indeed, L is complete with respect to its canonical model, and the general frame induced by the canonical model (called the canonical frame of L) is descriptive.
Jónsson–Tarski duality
General frames bear close connection to modal algebras. Let F = ⟨W, R, V⟩ be a general frame. The set V is closed under Boolean operations, therefore it is a subalgebra of the power set Boolean algebra ⟨P(W), ∩, ∪, −⟩. It also carries the additional unary operation □. The combined structure ⟨V, ∩, ∪, −, ∅, W, □⟩ is a modal algebra, which is called the dual algebra of F, and denoted by F⁺.
In the opposite direction, it is possible to construct the dual frame A₊ = ⟨W, R, V⟩ to any modal algebra A = ⟨A, ∧, ∨, −, 0, 1, □⟩. The Boolean algebra ⟨A, ∧, ∨, −, 0, 1⟩ has a Stone space, whose underlying set W is the set of all ultrafilters of A. The set V of admissible valuations in A₊ consists of the clopen subsets of W, and the accessibility relation R is defined by
x R y if and only if (□a ∈ x implies a ∈ y, for all a ∈ A)
for all ultrafilters x and y.
A frame and its dual validate the same formulas; hence the general frame semantics and algebraic semantics are in a sense equivalent. The double dual (A₊)⁺ of any modal algebra A is isomorphic to A itself. This is not true in general for double duals of frames, as the dual of every algebra is descriptive. In fact, a frame F is descriptive if and only if it is isomorphic to its double dual (F⁺)₊.
It is also possible to define duals of p-morphisms on one hand, and modal algebra homomorphisms on the other hand. In this way the operators (·)⁺ and (·)₊ become a pair of contravariant functors between the category of general frames and the category of modal algebras. These functors provide a duality (called Jónsson–Tarski duality after Bjarni Jónsson and Alfred Tarski) between the categories of descriptive frames and modal algebras. This is a special case of a more general duality between complex algebras and fields of sets on relational structures.
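As a small illustration of the duality (under the simplifying assumption of a finite frame, where every ultrafilter of the power-set algebra is principal, generated by a singleton), the following Python sketch forms the dual algebra of a finite Kripke frame and then the dual frame of that algebra, and checks that the original accessibility relation is recovered:

```python
from itertools import combinations

def subsets(W):
    W = sorted(W)
    return [frozenset(c) for r in range(len(W) + 1) for c in combinations(W, r)]

def box(A, W, R):
    return frozenset(w for w in W if all(v in A for v in W if (w, v) in R))

def double_dual_relation(W, R):
    """The dual algebra of a finite Kripke frame is (P(W), box); its dual frame has the
    ultrafilters of P(W) as points.  In the finite case every ultrafilter is the
    principal filter U_w = {A : w in A}, so the points correspond to worlds again."""
    V = subsets(W)
    U = {w: {A for A in V if w in A} for w in W}
    return {(w, v) for w in W for v in W
            if all(A in U[v] for A in V if box(A, W, R) in U[w])}

W = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}
print(double_dual_relation(W, R) == R)   # True: the double dual recovers the original frame
```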
Intuitionistic frames
The frame semantics for intuitionistic and intermediate logics can be developed in parallel to the semantics for modal logics. An intuitionistic general frame is a triple ⟨W, ≤, V⟩, where ≤ is a partial order on W, and V is a set of upper subsets (cones) of W that contains the empty set, and is closed under
intersection and union,
the operation →, defined by A → B = {w ∈ W : ∀v ≥ w (v ∈ A → v ∈ B)}.
Validity and other concepts are then introduced similarly to modal frames, with a few changes necessary to accommodate the weaker closure properties of the set of admissible valuations. In particular, an intuitionistic frame is called
tight, if ∀A ∈ V (w ∈ A → v ∈ A) implies w ≤ v,
compact, if every subset of V with the finite intersection property has a non-empty intersection.
Tight intuitionistic frames are automatically differentiated, hence refined.
The dual of an intuitionistic frame F = ⟨W, ≤, V⟩ is the Heyting algebra F⁺ = ⟨V, ∩, ∪, →, ∅, W⟩. The dual of a Heyting algebra A = ⟨A, ∧, ∨, →, 0, 1⟩ is the intuitionistic frame A₊ = ⟨W, ≤, V⟩, where W is the set of all prime filters of A, the ordering ≤ is inclusion, and V consists of all subsets of W of the form
{x ∈ W : a ∈ x}, where a ∈ A. As in the modal case, (·)⁺ and (·)₊ are a pair of contravariant functors, which make the category of Heyting algebras dually equivalent to the category of descriptive intuitionistic frames.
It is possible to construct intuitionistic general frames from transitive reflexive modal frames and vice versa, see modal companion.
See also
Neighborhood semantics
References
Alexander Chagrov and Michael Zakharyaschev, Modal Logic, vol. 35 of Oxford Logic Guides, Oxford University Press, 1997.
Patrick Blackburn, Maarten de Rijke, and Yde Venema, Modal Logic, vol. 53 of Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, 2001.
Modal logic
Model theory
Duality theories
Concepts in logic | General frame | [
"Mathematics"
] | 1,520 | [
"Mathematical structures",
"Mathematical logic",
"Category theory",
"Duality theories",
"Geometry",
"Model theory",
"Modal logic"
] |
6,933,099 | https://en.wikipedia.org/wiki/Philip%20Coppens%20%28chemist%29 | Philip Coppens (October 24, 1930 – June 21, 2017) was a Dutch-born American chemist and crystallographer known for his work on charge density analysis using X-rays crystallography and the pioneering work in the field of photocrystallography.
Education and career
The Amersfoort-born Coppens received his B.S. and Ph.D. degrees from the University of Amsterdam in 1954 and 1960, where he was supervised by Carolina MacGillavry. In 1968, following appointments at the Weizmann Institute and Brookhaven National Laboratory, he was appointed in the chemistry department at the State University of New York at Buffalo. He was a SUNY Distinguished Professor and holder of the Henry M. Woodburn Chair of Chemistry. Among the many 3-dimensional structures Coppens characterized is the nitroprusside ion.
Honours and awards
Coppens was elected a corresponding member of the Royal Netherlands Academy of Arts and Sciences in 1979 and a fellow of the American Association for the Advancement of Science in 1993. Additionally, he was awarded the Gregori Aminoff Prize of the Royal Swedish Academy of Sciences in 1996, the Ewald Prize of the International Union of Crystallography in 2005, and the Kołos Medal in 2013.
Bibliography
References
Further reading
Report on the Symposium honoring Coppens on the occasion of his retirement.
External links
Official website
Biographical sketch, Yale University
1930 births
2017 deaths
21st-century American chemists
20th-century Dutch chemists
Dutch emigrants to the United States
American crystallographers
Members of the Royal Netherlands Academy of Arts and Sciences
University of Amsterdam alumni
University at Buffalo faculty
People from Amersfoort
Fellows of the American Association for the Advancement of Science
Presidents of the American Crystallographic Association
Photochemists
Solid state chemists
Presidents of the International Union of Crystallography | Philip Coppens (chemist) | [
"Chemistry"
] | 372 | [
"Solid state chemists",
"Photochemists",
"Physical chemists"
] |
6,933,302 | https://en.wikipedia.org/wiki/Sound%20from%20ultrasound | Sound from ultrasound is the name given here to the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator.
Parametric array
Since the early 1960s, researchers have been experimenting with creating directive low-frequency sound from nonlinear interaction of an aimed beam of ultrasound waves produced by a parametric array using heterodyning. Ultrasound has much shorter wavelengths than audible sound, so that it propagates in a much narrower beam than any normal loudspeaker system using audio frequencies. Most of the work was performed in liquids (for underwater sound use).
The first modern device for air acoustic use was created in 1998, and is now known by the trademark name "Audio Spotlight", a term first coined in 1983 by the Japanese researchers who abandoned the technology as infeasible in the mid-1980s.
A transducer can be made to project a narrow beam of modulated ultrasound that is powerful enough, at 100 to 110 dBSPL, to substantially change the speed of sound in the air that it passes through. The air within the beam behaves nonlinearly and extracts the modulation signal from the ultrasound, resulting in sound that can be heard only along the path of the beam, or that appears to radiate from any surface that the beam strikes. This technology allows a beam of sound to be projected over a long distance to be heard only in a small well-defined area; for a listener outside the beam the sound pressure decreases substantially. This effect cannot be achieved with conventional loudspeakers, because sound at audible frequencies cannot be focused into such a narrow beam.
There are some limitations with this approach. Anything that interrupts the beam will prevent the ultrasound from propagating, like interrupting a spotlight's beam. For this reason, most systems are mounted overhead, like lighting.
Applications
Commercial advertising
A sound signal can be aimed so that only a particular passer-by, or somebody very close, can hear it. In commercial applications, it can target sound to a single person without the peripheral sound and related noise of a loudspeaker.
Personal audio
It can be used for personal audio, either to make sounds audible to only one person, or to deliver sound that a group wants to listen to. The navigation instructions, for example, are only of interest to the driver in a car, not the passengers. Another possibility is future applications for true stereo sound, where one ear does not hear what the other is hearing.
Train signaling device
Directional audio train signaling may be accomplished through the use of an ultrasonic beam which will warn of the approach of a train while avoiding the nuisance of loud train signals on surrounding homes and businesses.
History
This technology was originally developed by the US Navy and Soviet Navy for underwater sonar in the mid-1960s, and was briefly investigated by Japanese researchers in the early 1980s, but these efforts were abandoned due to extremely poor sound quality (high distortion) and substantial system cost. These problems went unsolved until a paper published by Dr. F. Joseph Pompei of the Massachusetts Institute of Technology in 1998 fully described a working device that reduced audible distortion essentially to that of a traditional loudspeaker.
Products
Five devices are known to have been marketed that use ultrasound to create an audible beam of sound.
Audio Spotlight
F. Joseph Pompei of MIT developed technology he calls the "Audio Spotlight", and made it commercially available in 2000 by his company Holosonics, which according to their website claims to have sold "thousands" of their "Audio Spotlight" systems. Disney was among the first major corporations to adopt it for use at the Epcot Center, and many other application examples are shown on the Holosonics website.
Audio Spotlight is a narrow beam of sound that can be controlled with similar precision to light from a spotlight. It uses a beam of ultrasound as a "virtual acoustic source", enabling control of sound distribution.
The ultrasound has wavelengths only a few millimeters long which are much smaller than the source, and therefore naturally travel in an extremely narrow beam.
The ultrasound, which contains frequencies far outside the range of human hearing, is completely inaudible. But as the ultrasonic beam travels through the air, the inherent properties of the air cause the ultrasound to change shape in a predictable way. This gives rise to frequency components in the audible band, which can be predicted and controlled.
HyperSonic Sound
Elwood "Woody" Norris, founder and Chairman of American Technology Corporation (ATC), announced he had successfully created a device which achieved ultrasound transmission of sound in 1996. This device used piezoelectric transducers to send two ultrasonic waves of differing frequencies toward a point, giving the illusion that the audible sound from their interference pattern was originating at that point. ATC named and trademarked their device as "HyperSonic Sound" (HSS). In December 1997, HSS was one of the items in the Best of What's New issue of Popular Science. In December 2002, Popular Science named HyperSonic Sound the best invention of 2002. Norris received the 2005 Lemelson–MIT Prize for his invention of a "hypersonic sound". ATC (now named LRAD Corporation) spun off the technology to Parametric Sound Corporation in September 2010 to focus on their long-range acoustic device (LRAD) products, according to their quarterly reports, press releases, and executive statements.
Mitsubishi Electric Engineering Corporation
Mitsubishi apparently offers a sound-from-ultrasound product named the "MSP-50E", commercially available from Mitsubishi Electric Engineering Corporation.
AudioBeam
German audio company Sennheiser Electronic once listed their "AudioBeam" product for about $4,500. There is no indication that the product has been used in any public applications. The product has since been discontinued.
Literature survey
The first experimental systems were built over 30 years ago, although these first versions only played simple tones. It was not until much later (see above) that the systems were built for practical listening use.
Experimental ultrasonic nonlinear acoustics
A chronological summary of the experimental approaches taken to examine Audio Spotlight systems in the past will be presented here. At the turn of the millennium working versions of an Audio Spotlight capable of reproducing speech and music could be bought from Holosonics, a company founded on Dr. Pompei's work in the MIT Media Lab.
Related topics were researched almost 40 years earlier in the context of underwater acoustics.
The first article consisted of a theoretical formulation of the half pressure angle of the demodulated signal.
The second article provided an experimental comparison to the theoretical predictions.
Both articles were supported by the U.S. Office of Naval Research, specifically for the use of the phenomenon for underwater sonar pulses. The goal of these systems was not high directivity per se, but rather higher usable bandwidth of a typically band-limited transducer.
The 1970s saw some activity in experimental airborne systems, both in air and underwater. Again supported by the U.S. Office of Naval Research, the primary aim of the underwater experiments was to determine the range limitations of sonar pulse propagation due to nonlinear distortion. The airborne experiments were aimed at recording quantitative data about the directivity and propagation loss of both the ultrasonic carrier and demodulated waves, rather than developing the capability to reproduce an audio signal.
In 1983 the idea was again revisited experimentally, but this time with the firm intent to analyze the use of the system in air to form a more complex baseband signal in a highly directional manner. The signal processing used to achieve this was simple DSB-AM with no precompensation, and because of the lack of precompensation applied to the input signal, the THD (total harmonic distortion) levels of this system would probably have been satisfactory for speech reproduction, but prohibitive for the reproduction of music. An interesting feature of the experimental setup was the use of 547 ultrasonic transducers to produce a 40 kHz ultrasonic sound source of over 130 dB at 4 m, which would demand significant safety considerations. Even though this experiment clearly demonstrated the potential to reproduce audio signals using an ultrasonic system, it also showed that the system suffered from heavy distortion, especially when no precompensation was used.
Theoretical ultrasonic nonlinear acoustics
The equations that govern nonlinear acoustics are quite complex and unfortunately they do not have general analytical solutions. They usually require the use of a computer simulation. However, as early as 1965, Berktay performed an analysis under some simplifying assumptions that allowed the demodulated SPL to be written in terms of the amplitude-modulated ultrasonic carrier wave pressure Pc and various physical parameters. Note that the demodulation process is extremely lossy, with a minimum loss in the order of 60 dB from the ultrasonic SPL to the audible wave SPL. A precompensation scheme can be based on Berktay's expression, shown in Equation 1, by taking the square root of the baseband signal envelope E and then integrating twice to invert the effect of the double partial-time derivative. The analogue electronic circuit equivalent of a square root function is simply an op-amp with feedback, and an equalizer is analogous to an integration function. However, these topic areas lie outside the scope of this article.
p₂ = K · Pc² · ∂²E²(t)/∂t²     (Equation 1)

where
p₂ = audible secondary pressure wave
K = misc. physical parameters
Pc = SPL of the ultrasonic carrier wave
E = envelope function (such as DSB-AM)
This equation says that the audible demodulated ultrasonic pressure wave (output signal) is proportional to the twice differentiated, squared version of the envelope function (input signal). Precompensation refers to the trick of anticipating these transforms and applying the inverse transforms on the input, hoping that the output is then closer to the untransformed input.
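The sketch below illustrates this relation and the precompensation idea numerically. It is only an idealised model (the "channel" simply squares the envelope and differentiates twice, per Berktay's expression); the sample rate, test tones and modulation index are arbitrary illustrative choices, not the processing of any actual product:

```python
import numpy as np

fs = 400_000                               # simulation sample rate (Hz), far above the audio band
t = np.arange(0, 0.02, 1 / fs)
audio = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 2600 * t)

def self_demodulate(envelope):
    """Idealised Berktay channel: output proportional to d^2/dt^2 of the squared envelope."""
    return np.gradient(np.gradient(envelope ** 2, t), t)

# Precompensation: equalise by double integration (divide by -w^2 in the frequency
# domain), add a DC offset, then take the square root.
spectrum = np.fft.rfft(audio)
w = 2 * np.pi * np.fft.rfftfreq(len(audio), 1 / fs)
w[0] = 1.0                                 # avoid division by zero at DC (audio has no DC term)
equalised = np.fft.irfft(-spectrum / w ** 2, len(audio))
equalised /= np.max(np.abs(equalised))
envelope = np.sqrt(1 + 0.9 * equalised)    # modulation index 0.9 keeps the envelope non-negative

out = self_demodulate(envelope)
out = out / np.max(np.abs(out))
ref = audio / np.max(np.abs(audio))
print(np.corrcoef(out[100:-100], ref[100:-100])[0, 1])   # ~1.0: distortion-free in this ideal model
```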
By the 1990s, it was well known that the Audio Spotlight could work but suffered from heavy distortion. It was also known that the precompensation schemes placed an added demand on the frequency response of the ultrasonic transducers. In effect the transducers needed to keep up with what the digital precompensation demanded of them, namely a broader frequency response. In 1998 the negative effects on THD of an insufficiently broad frequency response of the ultrasonic transducers were quantified with computer simulations, using a precompensation scheme based on Berktay's expression. In 1999 Pompei's article discussed how a new prototype transducer met the increased frequency-response demands placed on the ultrasonic transducers by the precompensation scheme, which was once again based on Berktay's expression. In addition, impressive reductions in the THD of the output when the precompensation scheme was employed were graphed against the case of using no precompensation.
In summary, the technology that originated with underwater sonar 40 years ago has been made practical for reproduction of audible sound in air by Pompei's paper and device, which, according to his AES paper (1998), demonstrated that distortion had been reduced to levels comparable to traditional loudspeaker systems.
Modulation scheme
The nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies. A DSB (double-sideband) amplitude-modulation scheme with an appropriately large baseband DC offset, to produce the demodulating tone superimposed on the modulated audio spectrum, is one way to generate the signal that encodes the desired baseband audio spectrum. This technique suffers from extremely heavy distortion as not only the demodulating tone interferes, but also all other frequencies present interfere with one another. The modulated spectrum is convolved with itself, doubling its bandwidth by the length property of the convolution. The baseband distortion in the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the DC offset (demodulation tone) superimposed on the signal. A larger tone results in less distortion.
Further distortion is introduced by the second order differentiation property of the demodulation process. The result is a multiplication of the desired signal by the function -ω² in frequency. This distortion may be equalized out with the use of preemphasis filtering (increase amplitude of high frequency signal).
By the time-convolution property of the Fourier transform, multiplication in the time domain is a convolution in the frequency domain. Convolution between a baseband signal and a unity gain pure carrier frequency shifts the baseband spectrum in frequency and halves its magnitude, though no energy is lost. One half-scale copy of the replica resides on each half of the frequency axis. This is consistent with Parseval's theorem.
The modulation depth m is a convenient experimental parameter when assessing the total harmonic distortion in the demodulated signal. It is inversely proportional to the magnitude of the DC offset. THD increases proportionally with m².
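A quick numerical check of this behaviour, using the same idealised square-then-differentiate-twice channel as above and a single test tone (illustrative parameters only; THD is computed here as the ratio of harmonic power to fundamental power):

```python
import numpy as np

fs, f0 = 400_000, 1000                     # sample rate and audio test-tone frequency (Hz)
t = np.arange(0, 0.02, 1 / fs)

def thd_after_demodulation(m):
    """DSB-AM envelope 1 + m*sin(w0*t) through the idealised channel
    (square, then differentiate twice); THD as harmonic power / fundamental power."""
    envelope = 1 + m * np.sin(2 * np.pi * f0 * t)
    out = np.gradient(np.gradient(envelope ** 2, t), t)
    spec = np.abs(np.fft.rfft(out * np.hanning(len(out))))
    freqs = np.fft.rfftfreq(len(out), 1 / fs)

    def power(f):
        return spec[np.argmin(np.abs(freqs - f))] ** 2

    return sum(power(k * f0) for k in range(2, 6)) / power(f0)

for m in (0.2, 0.4, 0.8):
    print(f"m = {m:.1f}  ->  THD ~ {thd_after_demodulation(m):.3f}")   # grows roughly as m**2
```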
These distorting effects may be better mitigated by using another modulation scheme that takes advantage of the differential squaring device nature of the nonlinear acoustic effect. Modulation of the second integral of the square root of the desired baseband audio signal, without adding a DC offset, results in convolution in frequency of the modulated square-root spectrum, half the bandwidth of the original signal, with itself due to the nonlinear channel effects. This convolution in frequency is a multiplication in time of the signal by itself, or a squaring. This again doubles the bandwidth of the spectrum, reproducing the second time integral of the input audio spectrum. The double integration corrects for the -ω² filtering characteristic associated with the nonlinear acoustic effect. This recovers the scaled original spectrum at baseband.
The harmonic distortion process has to do with the high frequency replicas associated with each squaring demodulation, for either modulation scheme. These iteratively demodulate and self-modulate, adding a spectrally smeared-out and time-exponentiated copy of the original signal to baseband and twice the original center frequency each time, with one iteration corresponding to one traversal of the space between the emitter and target. Only sound with parallel collinear phase velocity vectors interfere to produce this nonlinear effect. Even-numbered iterations will produce their modulation products, baseband and high frequency, as reflected emissions from the target. Odd-numbered iterations will produce their modulation products as reflected emissions off the emitter.
This effect still holds when the emitter and the reflector are not parallel, though due to diffraction effects the baseband products of each iteration will originate from a different location each time, with the originating location corresponding to the path of the reflected high frequency self-modulation products.
These harmonic copies are largely attenuated by the natural losses at those higher frequencies when propagating through air.
Attenuation of ultrasound in air
The figure provided in the cited reference gives an estimation of the attenuation that the ultrasound would suffer as it propagated through air. The figures from this graph correspond to completely linear propagation, and the exact effect of the nonlinear demodulation phenomena on the attenuation of the ultrasonic carrier waves in air was not considered. There is an interesting dependence on humidity. Nevertheless, a 50 kHz wave suffers an attenuation level in the order of 1 dB per meter at one atmosphere of pressure.
Safe use of high-intensity ultrasound
For the nonlinear effect to occur, relatively high-intensity ultrasonics are required. The SPL involved was typically greater than 100 dB of ultrasound at a nominal distance of 1 m from the face of the ultrasonic transducer. Exposure to more intense ultrasound over 140 dB near the audible range (20–40 kHz) can lead to a syndrome involving manifestations of nausea, headache, tinnitus, pain, dizziness, and fatigue, but this is around 100 times the 100 dB level cited above, and is generally not a concern. Dr Joseph Pompei of Audio Spotlight has published data showing that their product generates ultrasonic sound pressure levels around 130 dB (at 60 kHz) measured at 3 meters.
The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a 180-page report on the health effects of human exposure to ultrasound and infrasound in 2010. The UK Health Protection Agency (HPA) published their report, which recommended an exposure limit for the general public to airborne ultrasound sound pressure levels (SPL) of 100 dB (at 25 kHz and above).
OSHA specifies a safe ceiling value of ultrasound as 145 dB SPL exposure at the frequency range used by commercial systems in air, as long as there is no possibility of contact with the transducer surface or coupling medium (i.e. submerged). This is several times the highest levels used by commercial Audio Spotlight systems, so there is a significant margin for safety. In a review of international acceptable exposure limits Howard et al. (2005) noted the general agreement among standards organizations, but expressed concern with the decision by United States of America's Occupational Safety and Health Administration (OSHA) to increase the exposure limit by an additional 30 dB under some conditions (equivalent to a factor of 1000 in intensity).
For frequencies of ultrasound from 25 to 50 kHz, a guideline of 110 dB had been recommended by Canada, Japan, the USSR, and the International Radiation Protection Agency, and 115 dB by Sweden in the late 1970s to early 1980s, but these were primarily based on subjective effects. The more recent OSHA guidelines above are based on ACGIH (American Conference of Governmental Industrial Hygienists) research from 1987.
Lawton (2001) reviewed international guidelines for airborne ultrasound in a report published by the United Kingdom's Health and Safety Executive; this included a discussion of the guidelines issued by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1988. Lawton states "This reviewer believes that the ACGIH has pushed its acceptable exposure limits to the very edge of potentially injurious exposure". The ACGIH document also mentioned the possible need for hearing protection.
See also
Directional sound
Infrasound
Further resources
A patent application filed on 17 August 2004 describes an HSS system for using ultrasound to:
Direct distinct 'in-car entertainment' directly to passengers in different positions.
Shape the airwaves in the vehicle to deaden unwanted noises.
References
External links
Holosonics Audio Spotlight
Hypersonic Sound
NextFest
Acoustics
Sound
Ultrasound | Sound from ultrasound | [
"Physics"
] | 3,784 | [
"Classical mechanics",
"Acoustics"
] |
6,934,129 | https://en.wikipedia.org/wiki/Werner%20Erhard%20%28book%29 | Werner Erhard: The Transformation of a Man, The Founding of est is a biography of Werner Erhard by philosophy professor William Warren Bartley, III. The book was published in 1978 by Clarkson Potter. Bartley was a graduate of Erhard Seminars Training and served on its advisory board. Erhard wrote a foreword to the book. The book's structure describes Erhard's education, transformation, reconnection with his family, and the theories of the est training.
The book went through five editions in its first year. Reviewers generally commented that the book was favorable to Erhard, and a number of critics felt that it was unduly so, or lacked objectivity, citing Bartley's close relationship to Erhard. Responses to the writing were mixed; while some reviewers found it well written and entertaining, others felt the tone was too slick, promotional, or hagiographic.
Background
This biography tells Werner Erhard's early life story and the creation of the est Training which he designed to provide people with access to their own transformational experience.
Werner Erhard (born John Paul Rosenberg), a California-based former salesman, training manager and executive in the encyclopedia business, created the Erhard Seminars Training (est) course in 1971. est was a form of Large Group Awareness Training, and was part of the Human Potential Movement. est was a four-day, 60-hour self-help program given to groups of 250 people at a time. The program was very intensive. Participants were taught that they were responsible for their life outcomes.
est was widely ridiculed in the popular press and aroused a great deal of controversy.
In 1985, Werner Erhard and Associates repackaged the course as "The Forum", a seminar focused on "goal-oriented breakthroughs".
In the early 1990s Erhard faced family problems, as well as tax problems that were eventually resolved in his favor. In 1991 a group of his associates formed the company Landmark Education, purchasing The Forum's course "technology" from Erhard.
Author
William Warren Bartley, III, professor of philosophy at California State University, Hayward from 1973, prior to writing his biography on Erhard, had authored The Retreat to Commitment (1962), on the epistemology of Sir Karl Popper; Wittgenstein (1973), a biography of the philosopher Ludwig Wittgenstein; edited (1977) Lewis Carroll's Symbolic Logic of 1896; and authored a book titled Morality and Religion (1971). Bartley was first introduced to and referred to est in March 1972 by a doctor whom he had consulted about his nine-year struggle with insomnia. As a result of his experience in the est training his insomnia was cured. He then became very involved in the est organization, and served for several years as the company's philosophical consultant. He received payments of over US$30,000 in this capacity during the two years he spent writing the book.
He also served on the "Advisory Board" of est. Bartley interviewed a number of individuals who were involved in his subject's life and made use of quotations from a wide array of sources. Bartley commented on his subject in an article on the book in The Evening Independent, stating: "He's not a huckster, although he's a great salesman. I think he's a very good man, a very important man. ... He's a fascinating man. People are interested in him."
Contents
Life story
The book recounts how Erhard's childhood events, job positions and self-education led to the development of the est training. Born Jack Rosenberg, Erhard was an inquisitive child who was close to his mother. In his student years, he read profusely and earned superior grades. As a teenager, Erhard experienced both conflicts with his mother and a growing dissatisfaction with his life. Shortly after graduating from high school he married his girlfriend Pat Campbell, who had become pregnant. Instead of pursuing his plans for higher education, he took on a variety of jobs including meat-packing, heating and plumbing, estimating and selling cars. By the age of 21, Erhard had become the top car salesman at the dealership he worked for. By the time he was 25, Erhard and his wife had four children and he was feeling increasingly restless and constrained. He formed a friendship with a woman named June Bryde, which gradually deepened into an affair. He secretly arranged a flight from Philadelphia, Pennsylvania with June in 1960, leaving behind his wife and their four children, who would not hear from him for twelve years. The couple settled for a time in St Louis, and it was at this time that he changed his name to Werner Erhard with June changing hers to Ellen Erhard. After more work in car sales, Erhard joined the sales staff of Parents Magazine and was rapidly promoted to training manager and eventually appointed vice-president in 1967. During this period Erhard moved frequently to different parts of the US as dictated by the demands of the job, finally settling in San Francisco. When Parents Magazine was sold to the Time-Life group, he was recruited by the Grolier Society as Divisional Manager. According to Grolier vice-president John Wirtz the intention of appointing Erhard was that he would bring "integrity, honesty and straightforwardness" to their sales practices.
Personal search and self-education
Shortly after moving to St. Louis Erhard began to embark on a program of inquiry and self-education. Initially he focused on self-improvement books such as Think and Grow Rich by Napoleon Hill and Psycho-Cybernetics by Maxwell Maltz. From there, he widened his search to Human Potential Movement psychologists such as Abraham Maslow and Carl Rogers, a range of traditional Western philosophers, and Eastern disciplines such as Zen Buddhism, Taoism, Confucianism, Subud and the Martial arts as well as contemporary movements including Mind Dynamics, and Scientology.
Creating the est training
Bartley recounts a revelation that Erhard said he had experienced in March 1971 while driving into San Francisco, California to work at Grolier Society. Erhard described to Bartley what the revelation experience felt like: "What happened had no form. It was timeless, unbounded, ineffable, beyond language." He told Bartley that he realized: "I had to 'clean up' my life. I had to acknowledge and correct the lies in my life. I saw that the lies that I told about others — my wanting my family, or Ellen (his second wife), or anyone else, to be different from the way that they are -- came from lies that I told about myself -- my wanting to be different from the way that I was."
His desire to share this experience led to the plans formed later that year to create the est training. The first promotional seminar was held in September with over one thousand attendees, and the first est training took place in October 1971 in a San Francisco hotel.
In October 1972, while leading an est session in New York, Erhard realized that the time had come to reconnect with his family after an absence of 12 years. Although his long absence from his family caused them feelings of confusion and pain, he re-established cordial and loving relationships with all of them. His brother and sister became est Trainers and took on prominent roles in the business. He also set up a separate business venture for Ellen that gave her the financial freedom to choose how to structure her life and her relationship with him.
Key concepts of the est training as defined by Erhard and described in the book include:
Completion: the acknowledgement of actions or decisions taken in the past, and the taking of steps to bring a resolution.
Rackets: behavior patterns ostensibly involving complaints about people in one's life, but actually resulting in the perpetuation of the complaint and the securing of a payoff such as dominating the other person.
Integrity: being whole and complete, and honoring one's word. In the est context the word is used to depict a matter of workability, rather than with the moral overtones it has in everyday usage.
Stories: the interpretations of experiences which are regarded as reality, leading to conflict with other people who have created differing interpretations of the same events.
Responsibility: the willingness to accept oneself as the source of outcomes in life – whether welcome or unwelcome – rather than blaming others for them.
Intersections
The biographical chapters on Erhard are interspersed with chapters that Bartley refers to as "Intersections". These chapters contain Bartley's scholarly overview and analysis of the various disciplines that Werner Erhard explored before founding the est training.
Reception
The book was 8th place on the Time non-fiction bestseller list of November 20, 1978. Bartley told The Evening Independent in February 1979 that the book had sold a total of 110,000 copies and gone through five editions.
Jonathan Lieberson, writing for The New York Review of Books, described the book as "attractively written, never shrill or unduly proselytizing, careful to avoid the hysteria and tribalism that usually characterize the early years of movements like est", but considered Bartley to have "fallen" for Erhard. Given Bartley's previous work, Lieberson stated, he might have made an ideal interpreter of Erhard, but he found this expectation "disappointed [although] the book is nevertheless instructive". A review of Werner Erhard in Kirkus Reviews similarly concluded, "Too entranced to be truly objective, Bartley is nonetheless an insightfully partial observer." Booklist stated that Bartley, as an est student, had made the "mistake of being too close to his subject to be objective or critical."
In Psychology Today, Morris B. Parloff stated that Bartley had written his biography of Erhard "carefully, lovingly, and well". Kris Jeter, writing in Cults and the Family, commented that "wise researchers know and teach that one should be in love with their research topic", and counted Bartley's book among several in which "this love was highly evident". Steve McNamarra, in the Pacific Sun, said that the book was "clearly written and, while basically sympathetic" was not "an adulatory 'house job'." McNamarra found the sections detailing Erhard's "soap opera", making up three-quarters of the book, the easiest to read, while the "intersections", passages in which Bartley provided concise summaries of the philosophical traditions underpinning Erhard's est training, were tougher but ultimately rewarding.
Kenneth Wayne Thomas, in Intrinsic Motivation at Work, described the book as "somewhat sympathetic" to Erhard and the est philosophy; Steve Jackson, writing in Westword, similarly included it among "books sympathetic to Erhard, est and Landmark", written by an "old friend of Erhard's". Stephen Goldstein, in a Washington Post review, said Bartley had made it "obvious from the start that he cares about his subject and his own est experience" and had told "a rather simple, straightforward story that pretty much lets you draw your own conclusions [about Erhard] or keep the ones you have already reached." A reviewer in Choice: Current Reviews for Academic Libraries stated he was "enthusiastic about this book", praising the "personal quality [of] the narrative, which, though, sometimes becomes overly detailed." He highly recommended the book for general and college libraries focused on the social sciences.
Other commentators felt that the book was unduly favourable to Erhard. A review of the book in The Christian Century stated that Bartley had got "sucked into" writing a "promo on Erhard, founder of one of the pseudo-therapies of the '70s." The Los Angeles Times commented that "[Bartley's] philosophical justification of est as a mishmash of totalitarianism, hucksterism and existentialism makes this book more a public relations product than an objective study." A Chicago Tribune review described the book as a "painstaking ... act of devotion" that nevertheless failed in its mission: "No one reading it is likely to agree with Bartley that the founder of est is a philosopher and spiritual leader of Gandhian magnitude except the already convinced." James R. Fisher, in Six Silent Killers: Management's Greatest Challenge, and Suzanne Snider, writing for The Believer magazine, referred to Bartley's book as a "hagiography", and Rachel Jones of Noseweek considered the book "sycophantic". A review in The Evening Independent described Bartley as Erhard's "friend and admitted booster", telling his "often-sordid story in detail." E. C. Dennis, writing for Library Journal, found that Bartley's work "has a slick tone and more than a trace of hero worship". Dennis acknowledged that the book gave "the full details of Erhard's 'soap opera,' often in his own words," but was critical of Bartley's writing, saying he cast "a Freud's-eye-view on his subject's youthful failings, but after the famous 'transformation' his tone becomes almost reverential." Dennis stated that the book failed to ask important questions, but that large public libraries should carry a copy, given its status as an "authorized" biography.
See also
Getting It: The psychology of est
New age
Outrageous Betrayal
The Book of est
References
Further reading
Book reviews
External links
Werner Erhard: Books and Articles , as cited on official Werner Erhard homepage
1978 non-fiction books
Biographical books
Human Potential Movement
Personal development
New Age books
Werner Erhard
English-language non-fiction books | Werner Erhard (book) | [
"Biology"
] | 2,833 | [
"Personal development",
"Behavior",
"Human behavior"
] |
6,934,687 | https://en.wikipedia.org/wiki/Constant-weight%20code | In coding theory, a constant-weight code, also called an m-of-n code, is an error detection and correction code where all codewords share the same Hamming weight.
The one-hot code and the balanced code are two widely used kinds of constant-weight code.
The theory is closely connected to that of designs (such as t-designs and Steiner systems). Most of the work on this field of discrete mathematics is concerned with binary constant-weight codes.
Binary constant-weight codes have several applications, including frequency hopping in GSM networks.
Most barcodes use a binary constant-weight code to simplify automatically setting the brightness threshold that distinguishes black and white stripes.
Most line codes use either a constant-weight code, or a nearly-constant-weight paired disparity code.
In addition to use as error correction codes, the large space between code words can also be used in the design of asynchronous circuits such as delay insensitive circuits.
Constant-weight codes, like Berger codes, can detect all unidirectional errors.
A(n, d, w)
The central problem regarding constant-weight codes is the following: what is the maximum number of codewords in a binary constant-weight code with length n, Hamming distance d, and weight w? This number is called A(n, d, w).
Apart from some trivial observations, it is generally impossible to compute these numbers in a straightforward way. Upper bounds are given by several important theorems such as the first and second Johnson bounds, and better upper bounds can sometimes be found in other ways. Lower bounds are most often found by exhibiting specific codes, either with use of a variety of methods from discrete mathematics, or through heavy computer searching. A large table of such record-breaking codes was published in 1990, and an extension to longer codes (but only for those values of n and d which are relevant for the GSM application) was published in 2006.
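As a small illustration of the lower-bound searches mentioned above, the following sketch (an illustrative toy, not one of the published record constructions) greedily collects weight-w words of length n with pairwise Hamming distance at least d; the size of the resulting code is a lower bound on A(n, d, w).

```python
from itertools import combinations

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary words stored as integers."""
    return bin(a ^ b).count("1")

def greedy_constant_weight_code(n: int, d: int, w: int) -> list:
    """Greedily build a binary constant-weight code of length n, weight w and
    minimum distance d; len(result) is a lower bound on A(n, d, w)."""
    code = []
    for positions in combinations(range(n), w):        # every weight-w word once
        candidate = sum(1 << i for i in positions)
        if all(hamming(candidate, c) >= d for c in code):
            code.append(candidate)
    return code

# The greedy search finds 7 codewords for n=8, d=4, w=3, while the true maximum
# A(8, 4, 3) = 8 requires a cleverer construction -- which is typical of why the
# record tables are compiled from many different methods.
print(len(greedy_constant_weight_code(8, 4, 3)))
```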
1-of-N codes
A special case of constant weight codes are the one-of-N codes, that encode log2(N) bits in a code-word of N bits. The one-of-two code uses the code words 01 and 10 to encode the bits '0' and '1'. A one-of-four code can use the words 0001, 0010, 0100, 1000 in order to encode two bits 00, 01, 10, and 11. An example is dual rail encoding, and chain link used in delay insensitive circuits. For these codes, d = 2 and w = 1.
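A minimal sketch of a one-hot encoder and decoder (illustrative helper names, not a standard library API), reproducing the one-of-four code listed above:

```python
def one_hot_encode(value: int, n: int) -> str:
    """Encode an integer 0 <= value < n as a 1-of-n codeword, with the single
    set bit in position `value` counted from the right."""
    if not 0 <= value < n:
        raise ValueError("value out of range")
    return format(1 << value, "0{}b".format(n))

def one_hot_decode(word: str) -> int:
    """Recover the value; any word whose weight is not one is rejected,
    which is how single-bit errors are detected."""
    if word.count("1") != 1:
        raise ValueError("not a valid 1-of-N codeword")
    return len(word) - 1 - word.index("1")

print([one_hot_encode(v, 4) for v in range(4)])   # ['0001', '0010', '0100', '1000']
print(one_hot_decode("0100"))                     # 2
```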
Some of the more notable uses of one-hot codes include
biphase mark code uses a 1-of-2 code;
pulse-position modulation uses a 1-of-n code;
address decoder,
etc.
Balanced code
In coding theory, a balanced code is a binary forward error correction code for which each codeword contains an equal number of zero and one bits. Balanced codes were introduced by Donald Knuth; they are a subset of so-called unordered codes, which are codes having the property that the positions of ones in a codeword are never a subset of the positions of the ones in another codeword. Like all unordered codes, balanced codes are suitable for the detection of all unidirectional errors in an encoded message. Balanced codes allow for particularly efficient decoding, which can be carried out in parallel.
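The prefix-complementation idea at the heart of Knuth's scheme can be sketched as follows (a toy illustration only; in the full scheme the prefix length i is itself transmitted in a short balanced check word, which is omitted here):

```python
def balance_by_prefix(word: str):
    """Complement the first i bits of an even-length binary word so that the
    result has equally many zeros and ones, returning (balanced_word, i).

    Such an i always exists: flipping one more bit changes the weight by
    exactly 1, and the weight moves from w at i=0 to n-w at i=n, so it must
    pass through n/2 on the way."""
    n = len(word)
    if n % 2 != 0:
        raise ValueError("word length must be even")
    for i in range(n + 1):
        flipped = "".join("1" if b == "0" else "0" for b in word[:i]) + word[i:]
        if flipped.count("1") == n // 2:
            return flipped, i
    raise AssertionError("unreachable for even-length input")

print(balance_by_prefix("11111100"))   # ('00111100', 2)
```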
Some of the more notable uses of balanced-weight codes include
biphase mark code uses a 1 of 2 code;
6b/8b encoding uses a 4 of 8 code;
the Hadamard code is a 2^(k-1)-of-2^k code (except for the zero codeword),
the three-of-six code;
etc.
The 3-wire lane encoding used in MIPI C-PHY can be considered a generalization of constant-weight code to ternary: each wire transmits a ternary signal, and at any one instant one of the 3 wires is transmitting a low signal, one a middle signal, and one a high signal.
m-of-n codes
An m-of-n code is a separable error detection code with a code word length of n bits, where each code word contains exactly m instances of a "one". A single bit error will cause the code word to have either m + 1 or m - 1 "ones". An example m-of-n code is the 2-of-5 code used by the United States Postal Service.
The simplest implementation is to append a string of ones to the original data until it contains m ones, then append zeros to create a code of length n.
Example: in a 3-of-6 code, the three-bit data word 010 is padded with ones until it contains three ones (01011), and then with zeros until it is six bits long (010110).
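A sketch of that padding scheme (illustrative function name; practical codes such as 2-of-5 use fixed codeword assignments rather than this construction):

```python
def m_of_n_encode(data: str, m: int, n: int) -> str:
    """Pad a binary data word with ones until it contains exactly m ones,
    then with zeros until it is n bits long, as described above."""
    ones_needed = m - data.count("1")
    if ones_needed < 0 or len(data) + ones_needed > n:
        raise ValueError("data word does not fit an m-of-n codeword")
    padded = data + "1" * ones_needed
    return padded + "0" * (n - len(padded))

# The resulting 3-of-6 code for every three-bit data word:
for value in range(8):
    data = format(value, "03b")
    print(data, "->", m_of_n_encode(data, 3, 6))
# 000 -> 000111, 001 -> 001110, 010 -> 010110, 011 -> 011100,
# 100 -> 100110, 101 -> 101100, 110 -> 110100, 111 -> 111000
```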
Some of the more notable uses of constant-weight codes, other than the one-hot and balanced-weight codes already mentioned above, include
Code 39 uses a 3-of-9 code;
bi-quinary coded decimal code uses a 2-of-7 code,
the 2-of-5 code,
etc.
References
External links
Table of lower bounds on A(n, d, w) maintained by Andries Brouwer
Table of upper bounds on A(n, d, w) maintained by Erik Agrell
Information theory
Error detection and correction | Constant-weight code | [
"Mathematics",
"Technology",
"Engineering"
] | 1,035 | [
"Telecommunications engineering",
"Reliability engineering",
"Applied mathematics",
"Error detection and correction",
"Computer science",
"Information theory"
] |
6,935,048 | https://en.wikipedia.org/wiki/Kumi%20Kumi | Kumi Kumi (from Swahili 'kumi' for 'ten') is an illegal liquor brewed in Kenya from sorghum, maize or millet. The cheap, widely brewed drink grows in popularity among the lower classes and disadvantaged of the region, as the economy and the value of the shilling has declined. Kumi Kumi is known for its exceptional alcohol content.
Kumi Kumi is so named for its cheap price, KSh.10/= for a mug, which in 2006 comes to roughly US$0.15. Legal beers usually cost around KSh.65/=.
Health concerns
The brew is often doctored in unsafe and poisonous ways, and its regular abuse frequently has resulted in alcohol poisoning related hospitalizations, blindness, and death.
Notes
Distilled drinks
Alcohol in Kenya
Adulteration | Kumi Kumi | [
"Chemistry"
] | 172 | [
"Adulteration",
"Distillation",
"Drug safety",
"Distilled drinks"
] |
6,935,363 | https://en.wikipedia.org/wiki/Invariant%20differential%20operator | In mathematics and theoretical physics, an invariant differential operator is a kind of mathematical map from some objects to an object of similar type. These objects are typically functions on , functions on a manifold, vector valued functions, vector fields, or, more generally, sections of a vector bundle.
In an invariant differential operator $D$, the term differential operator indicates that the value $Df$ of the map depends only on $f(x)$ and the derivatives of $f$ in $x$. The word invariant indicates that the operator contains some symmetry. This means that there is a group $G$ with a group action on the functions (or other objects in question) and this action is preserved by the operator: $D(g\cdot f) = g\cdot(Df)$ for all $g$ in $G$.
Usually, the action of the group has the meaning of a change of coordinates (change of observer) and the invariance means that the operator has the same expression in all admissible coordinates.
Invariance on homogeneous spaces
Let M = G/H be a homogeneous space for a Lie group G and a Lie subgroup H. Every representation $\rho: H \to \mathrm{Aut}(\mathbb{V})$ gives rise to a vector bundle
$G \times_{H} \mathbb{V} \to M.$
Sections can be identified with
$\Gamma(G \times_{H} \mathbb{V}) = \{\varphi: G \to \mathbb{V} \mid \varphi(gh) = \rho(h^{-1})\varphi(g),\ g \in G,\ h \in H\}.$
In this form the group G acts on sections via
$(g\cdot\varphi)(g') = \varphi(g^{-1}g').$
Now let V and W be two vector bundles over M. Then a differential operator
$D: \Gamma(V) \to \Gamma(W)$
that maps sections of V to sections of W is called invariant if
$D(g\cdot\varphi) = g\cdot(D\varphi)$
for all sections $\varphi$ in $\Gamma(V)$ and elements g in G. All linear invariant differential operators on homogeneous parabolic geometries, i.e. when G is semi-simple and H is a parabolic subgroup, are given dually by homomorphisms of generalized Verma modules.
Invariance in terms of abstract indices
Given two connections $\nabla$ and $\hat{\nabla}$ and a one form $\omega$, we have
$\hat{\nabla}_{a}\omega_{b} = \nabla_{a}\omega_{b} - Q_{ab}{}^{c}\omega_{c}$
for some tensor $Q_{ab}{}^{c}$. Given an equivalence class of connections $[\nabla]$, we say that an operator is invariant if the form of the operator does not change when we change from one connection in the equivalence class to another. For example, if we consider the equivalence class of all torsion free connections, then the tensor Q is symmetric in its lower indices, i.e. $Q_{ab}{}^{c} = Q_{(ab)}{}^{c}$. Therefore we can compute
$\hat{\nabla}_{[a}\omega_{b]} = \nabla_{[a}\omega_{b]} - Q_{[ab]}{}^{c}\omega_{c} = \nabla_{[a}\omega_{b]},$
where brackets denote skew symmetrization. This shows the invariance of the exterior derivative when acting on one forms.
Equivalence classes of connections arise naturally in differential geometry, for example:
in conformal geometry an equivalence class of connections is given by the Levi Civita connections of all metrics in the conformal class;
in projective geometry an equivalence class of connection is given by all connections that have the same geodesics;
in CR geometry an equivalence class of connections is given by the Tanaka-Webster connections for each choice of pseudohermitian structure
Examples
The usual gradient operator acting on real valued functions on Euclidean space is invariant with respect to all Euclidean transformations (a short verification is sketched after this list).
The differential acting on functions on a manifold with values in 1-forms (its expression is $df = \sum_{i} \frac{\partial f}{\partial x_{i}}\, dx_{i}$ in any local coordinates) is invariant with respect to all smooth transformations of the manifold (the action of the transformation on differential forms is just the pullback).
More generally, the exterior derivative that acts on n-forms of any smooth manifold M is invariant with respect to all smooth transformations. It can be shown that the exterior derivative is the only linear invariant differential operator between those bundles.
The Dirac operator in physics is invariant with respect to the Poincaré group (if we choose the proper action of the Poincaré group on spinor valued functions. This is, however, a subtle question and if we want to make this mathematically rigorous, we should say that it is invariant with respect to a group which is a double cover of the Poincaré group)
The conformal Killing equation is a conformally invariant linear differential operator between vector fields and symmetric trace-free tensors.
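For the first example in the list above, the invariance can be checked directly. A short sketch, under the assumption that a Euclidean motion g(x) = Ax + b (A orthogonal) acts on functions by (g·f)(x) = f(g⁻¹x) and on vector fields by rotating their values:

```latex
% Chain rule, using g^{-1}x = A^{T}(x - b):
\begin{align*}
\partial_i\,(g\cdot f)(x)
  &= \sum_j (\partial_j f)\bigl(A^{T}(x-b)\bigr)\,
     \frac{\partial}{\partial x_i}\bigl[A^{T}(x-b)\bigr]_j
   = \sum_j A_{ij}\,(\partial_j f)\bigl(g^{-1}x\bigr),\\
\text{so}\qquad
\nabla(g\cdot f)(x) &= A\,(\nabla f)\bigl(g^{-1}x\bigr) = \bigl(g\cdot \nabla f\bigr)(x),
\end{align*}
% i.e. the gradient of the transformed function is the transformed gradient,
% which is the invariance statement D(g.f) = g.(Df) for D = grad.
```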
Conformal invariance
Given a metric
on , we can write the sphere as the space of generators of the nil cone
In this way, the flat model of conformal geometry is the sphere with and P the stabilizer of a point in . A classification of all linear conformally invariant differential operators on the sphere is known (Eastwood and Rice, 1987).
See also
Differential operators
Laplace invariant
Invariant factorization of LPDOs
Notes
References
Differential geometry
Differential operators | Invariant differential operator | [
"Mathematics"
] | 816 | [
"Mathematical analysis",
"Differential operators"
] |
6,935,971 | https://en.wikipedia.org/wiki/Alpha%20cleavage | Alpha-cleavage (α-cleavage) in organic chemistry refers to the act of breaking the carbon-carbon bond adjacent to the carbon bearing a specified functional group.
Mass spectrometry
Generally this topic is discussed when covering tandem mass spectrometry fragmentation and occurs generally by the same mechanisms.
As an example of an alpha-cleavage mechanism, an electron is knocked off an atom (usually by electron collision) to form a radical cation. Electron removal generally happens in the following order: 1) lone pair electrons, 2) pi bond electrons, 3) sigma bond electrons.
One of the lone pair electrons moves down to form a pi bond with an electron from an adjacent (alpha) bond. The other electron from the bond moves to an adjacent atom (not one adjacent to the lone pair atom) creating a radical. This creates a double bond adjacent to the lone pair atom (oxygen is a good example) and breaks/cleaves the bond from which the two electrons were removed.
In molecules containing carbonyl groups, alpha-cleavage often competes with McLafferty rearrangement.
Photochemistry
In photochemistry, it is the homolytic cleavage of a bond adjacent to a specified group.
See also
Inductive cleavage
References
Organic reactions
Tandem mass spectrometry | Alpha cleavage | [
"Physics",
"Chemistry"
] | 262 | [
"Organic reactions",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Tandem mass spectrometry"
] |
10,715,684 | https://en.wikipedia.org/wiki/Edinburgh%20Concurrent%20Supercomputer | The Edinburgh Concurrent Supercomputer (ECS) was a large Meiko Computing Surface supercomputer. This transputer-based, massively parallel system was installed at the University of Edinburgh during the late 1980s and early 1990s.
History
Following a pilot project involving an early 40-transputer Computing Surface installed in April 1986, funding was obtained from SERC and the DTI for a much larger system using T800 transputers and a MicroVAX fileserver. The Edinburgh Concurrent Supercomputer Project (ECSP) was formed to manage and support the facility, which was commissioned at the end of 1987.
Over the next few years, the system received several upgrades, including more transputers (reaching, at its peak, around 400 processors) and the installation of M²VCS and MeikOS system software, which enabled multi-user access and removed the need for the MicroVAX.
In 1990, the Edinburgh Concurrent Supercomputer Project was succeeded by the Edinburgh Parallel Computing Centre, which consolidated the project with other parallel computing resources and activities within the University. The ECS continued to be used for a variety of academic and commercial research work.
In October 1992 the ECS was reconfigured as a SPARC-hosted Computing Surface with three SPARC "host" processors running SunOS and around 380 T800s. The system was finally decommissioned in August 1994.
References
Wallace, D J. "Supercomputing with Transputers", Computing Systems in Engineering, Volume 1, Issue 1, 1990, Pages 131-141, , Pergamon Press, Inc. Elmsford, NY, USA
Brown, Mike. "The Edinburgh Concurrent Supercomputer: an appreciation", EPCC News, No.24, 1994.
External links
EPCC History page
Supercomputers
University of Edinburgh School of Informatics | Edinburgh Concurrent Supercomputer | [
"Technology"
] | 381 | [
"Supercomputers",
"Computing stubs",
"Supercomputing",
"Computer hardware stubs"
] |
10,715,835 | https://en.wikipedia.org/wiki/Torsion%20%28gastropod%29 | Torsion is a gastropod synapomorphy which occurs in all gastropods during larval development. Torsion is the rotation of the visceral mass, mantle, and shell 180˚ with respect to the head and foot of the gastropod. This rotation brings the mantle cavity and the anus to an anterior position above the head.
In some groups of gastropods (Opisthobranchia) there is a degree of secondary detorsion or rotation towards the original position; this may be only partial detorsion or full detorsion.
The torsion or twisting of the visceral mass of larval gastropods is not the same thing as the spiral coiling of the shell, which is also present in many shelled gastropods.
Development
There are two different developmental stages which cause torsion. The first stage is caused by the development of the asymmetrical velar/foot muscle which has one end attached to the left side of the shell and the other end has fibres attached to the left side of the foot and head. At a certain point in larval development this muscle contracts, causing an anticlockwise rotation of the visceral mass and mantle of roughly 90˚. This process is very rapid, taking from a few minutes to a few hours. After this transformation the second stage of torsion development is achieved by differential tissue growth of the left hand side of the organism compared to the right hand side. This second stage is much slower and rotates the visceral mass and mantle a further 90˚. Detorsion is brought about by reversal of the above phases.
During torsion the visceral mass remains almost unchanged anatomically. There are, however, other important changes to other internal parts of the gastropod. Before torsion the gastropod has an euthyneural nervous system, where the two visceral nerves run parallel down the body. Torsion results in a streptoneural nervous system, where the visceral nerves cross over in a figure of eight fashion. As a result, the parietal ganglions end up at different heights. Because of differences between the left and right hand sides of the body, there are different evolutionary pressures on left and right hand side organs and as a result in some species there are considerable differences. Some examples of this are: in the ctenidia (equivalent to lungs or gills) in some species, one side may be reduced or absent; or in some hermaphrodite species the right hand renal system has been transformed into part of the reproductive system.
Evolutionary roles
The original advantage of torsion for gastropods is unclear. It is further complicated by potential problems that accompany torsion. For example, having the place where wastes are excreted positioned above the head could result in fouling of the mouth and sense organs. Nevertheless, the diversity and success of the gastropods suggests torsion is advantageous, or at least has no strong disadvantages.
One likely candidate for the original purpose of torsion is defence against predators in adult gastropods. By moving the mantle cavity over the head, the gastropod can retract its vulnerable head into its shell. Some gastropods can also close the entrance to their shell with a tough operculum, a door-like structure which is attached to the dorsal surface of their foot. In evolutionary terms, the appearance of an operculum occurred shortly after that of torsion, which suggests a possible link with the role of torsion, though there is not sufficient evidence for or against this hypothesis. The English zoologist Walter Garstang wrote a famous poem in 1928, The Ballad of the Veliger, in which he argued with gentle humour in favour of the defence theory, including the lines
Predaceous foes, still drifting by in numbers unabated,
Were baffled now by tactics which their dining plans frustrated.
Their prey upon alarm collapsed, but promptly turned about,
With the tender morsel safe within and the horny foot without!
Torsion can provide other advantages. For aquatic gastropods, anterior positioning of the mantle cavity may be useful for preventing sediment getting into the mantle cavity, an event which is more likely with posterior positioning because sediment can be stirred up by the motion of the gastropod. Another possible advantage for aquatic species is that moving the osphradium (olfactory sense organs) to an anterior position means they are sampling water the gastropod is entering rather than leaving. This may help the gastropod locate food or avoid predators. In terrestrial species, ventilation is better with anterior positioning. This is due to the back and forth motion of the shell during movement, which would tend to block the mantle opening against the foot if it was in a posterior position. The evolution of an asymmetrical conispiral shell allowed gastropods to grow larger, but resulted in an unbalanced shell. Torsion allows repositioning of the shell, bringing the centre of gravity back to the middle of the gastropod's body, and thus helps prevent the animal or the shell from falling over.
Whatever original advantage resulted in the initial evolutionary success of torsion, subsequent adaptations linked to torsion have provided modern gastropods with further advantages.
References
Sources
Brusca, R.C.; Brusca, G.J. (1990) Invertebrates. Sinauer Associates, Inc. Massachusetts.
Page L. R. (2006) "Modern insights on gastropod development: Reevaluation of the evolution of a novel body plan". Integrative and Comparative Biology 46(2): 134–143. doi:10.1093/icb/icj018.
Ruppert, E.E. et al. (2004) Invertebrate Zoology. Seventh edition. Brooks/Cole – Thompson Learning. Belmont, California.
Phylogenetics
Gastropod anatomy | Torsion (gastropod) | [
"Biology"
] | 1,183 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
10,716,352 | https://en.wikipedia.org/wiki/Hamilton%20Wetland%20Restoration%20Project | The Hamilton Wetland Restoration Project, now known as the Hamilton/Bel Marin Keys Wetlands Restoration, is a wetlands habitat restoration project at the former Hamilton Air Force Base—Hamilton Army Airfield (1930−1988) site and adjacent Bel Marin Keys shoreline, in Marin County, California.
It is located at Whiteside Marsh on the northwestern shore of San Pablo Bay, in and adjacent to the city of Novato in the North Bay region of the San Francisco Bay Area.
Project
The restoration project is a joint venture between two public agencies: the U.S. Army Corps of Engineers is the lead federal agency, with the California Coastal Conservancy as the local sponsoring agency. In addition, the San Francisco Bay Conservation and Development Commission serves as a collaborating partner.
The U.S. Congress authorized the Hamilton Wetland Restoration Project in 1999, and the addition of the Bel Marin Keys property to the project in 2007. The combined project site comprises approximately .
Together, these three agencies are working to restore the Whiteside Marsh section of the closed Hamilton Air Force Base—Hamilton Army Airfield site to its former natural estuary and wetlands condition, and to create valuable endangered species habitat in the urbanized San Francisco Bay Area.
The Hamilton Wetlands Restoration Project "represents an unprecedented opportunity to contribute to the restoration of the San Francisco Bay, which has lost over 85% of its natural wetlands since the 1880s."
External links
San Pablo Bay
Wetlands of the San Francisco Bay Area
Ecological restoration
Estuaries of California
Landforms of Marin County, California
Natural history of Marin County, California
Protected areas of Marin County, California
Protected areas established in 1999
1999 establishments in California
Environment of the San Francisco Bay Area | Hamilton Wetland Restoration Project | [
"Chemistry",
"Engineering"
] | 333 | [
"Ecological restoration",
"Environmental engineering"
] |
10,717,407 | https://en.wikipedia.org/wiki/Peter%20Dodson | Peter Dodson (born August 20, 1946) is an American paleontologist who has published many papers and written and collaborated on books about dinosaurs. An authority on Ceratopsians, he has also authored several papers and textbooks on hadrosaurs and sauropods, and is a co-editor of The Dinosauria, widely considered the definitive scholarly reference on dinosaurs. Dodson described Avaceratops in 1986; Suuwassea in 2004, and many others, while his students have named Paralititan and Auroraceratops. He has conducted field research in Canada, the United States, India, Madagascar, Egypt, Argentina, and China. A professor of vertebrate paleontology and of veterinary anatomy at the University of Pennsylvania, Dodson has also taught courses in geology, history, history and sociology of science, and religious studies. Dodson is also a research associate at the Academy of Natural Sciences. In 2001, two former students named an ancient frog species, Nezpercius dodsoni, after him (as well as after the Native American Nez Perce people). Dodson has also been skeptical to the theory of a dinosaurian origin of birds, but more recently has come down on the side of this theory.
Religious views
Describing himself as a "deeply committed Christian," Dodson is a Roman Catholic who subscribes to theistic evolution and has argued that there is no real conflict between religion and science, writing that: "I have found little if anything to support or necessitate the warlike antagonism between science and religion pictured by Dawkins and like-minded scientists, who are animated by motives other than pure, disinterested science." Dodson has written numerous essays on the topic of religious belief and science, and has served on the Board of Directors for the nonprofit New York City-based Metanexus Institute.
Publications
Books
References
1946 births
American paleontologists
Living people
Theistic evolutionists
University of Alberta alumni
Yale University alumni | Peter Dodson | [
"Biology"
] | 407 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
10,717,514 | https://en.wikipedia.org/wiki/Osmostat | The osmostat is the regulatory center in the hypothalamus that controls the osmolality of the extracellular fluid. The area in the anterior region of the hypothalamus contains the osmoreceptors, cells that control osmolality via the secretion of antidiuretic hormone (ADH).
In neurological conditions such as epilepsy or paraplegia, the osmostat can be pathologically reset, secreting ADH at a lower osmolality, which may cause hyponatremia. A reset osmostat is also a feature of SIADH.
References
Physiology | Osmostat | [
"Biology"
] | 134 | [
"Physiology"
] |
10,717,548 | https://en.wikipedia.org/wiki/NGC%207252 | NGC 7252 is a peculiar galaxy resulting from an interaction between two galaxies that started a billion years ago. It is located 220 million light years away in the constellation Aquarius. It is also called Atoms for Peace Galaxy, a nickname which comes from its loop-like structure, made of stars, that resembles a classic diagram of an electron orbiting an atomic nucleus.
Description
NGC 7252 is located in the southern part of Aquarius. With an apparent magnitude of 12.7, it is bright enough to be seen by amateur astronomers as a faint small fuzzy blob. Large loops of gas and stars around it makes the galaxy quite peculiar. Thus, it is also Arp 226 (the 226th entry in Arp's list of peculiar galaxies).
In December 1953, U.S. President Dwight D. Eisenhower gave the "Atoms for Peace" speech. The speech promoted the use of nuclear power for peaceful purposes instead of nuclear weapons. The name of the speech was later given to this peculiar galaxy: the merging of the two galaxies resembles nuclear fusion, and the galaxy's giant loops resemble a diagram of electrons orbiting the nucleus of an atom.
The galaxy is the result of a collision of two galaxies. This collision is an opportunity for astronomers to study such mergers and to predict the future of our Milky Way after its expected collision with the Andromeda Galaxy.
X-ray emissions were observed in NGC 7252. This suggests the existence of nuclear activity or an intermediate-mass black hole in the galaxy.
Structure
The central region of the galaxy is home to hundreds of massive, ultra-luminous clusters of young stars that appear as bluish knots of light. These young clusters were created in the suspected galaxy merger, that pushed gases into these regions and caused a burst of star formation.
The most conspicuous of them is one known as W3, which has a mass of around 8×10⁷ solar masses. This object, also the most luminous super star cluster known to date, has properties more similar to an ultra-compact dwarf galaxy and differs from those galaxies only in its age (300–500 million years).
A pinwheel-shaped disk, rotating in a direction opposite to that of the galaxy, is found deep inside NGC 7252: it resembles a face-on spiral galaxy, yet it is only 10,000 light years across. It is believed that this pinwheel-shaped structure is a remnant of a collision between two galaxies. Within a few billion years, NGC 7252 will look like an elliptical galaxy with a small inner disk due to the exhaustion of the gases in the galaxy.
In August 2013, F. Schweizer and others published a paper in the Astrophysical Journal titled "The [O III] Nebula of the Merger Remnant NGC 7252: A Likely Faint Ionization Echo". This reports the finding of a Voorwerpje on the outskirts of the well-studied NGC 7252. The abstract states (edited): "We present images and spectra of a ~10 kpc-sized emission-line nebulosity discovered in the prototypical merger remnant NGC 7252 and dubbed the '[O III] nebula' because of its dominant [O III] λ5007 line. This nebula seems to yield the first sign of episodic AGN activity still occurring in the remnant, ~220 Myr after the coalescence of two gas-rich galaxies. Its location and kinematics suggest it belongs to a stream of tidal-tail gas falling back into the remnant." It continues: "This large discrepancy suggests that the nebula is a faint ionization echo excited by a mildly active nucleus that has declined by ~3 orders of magnitude over the past 20,000–200,000 years. In many ways this nebula resembles the prototypical 'Hanny's Voorwerp' near IC 2497, but its size is 3x smaller."
See also
NGC 7727, a similar galaxy, also in Aquarius.
References
External links
ESA homepage for the Hubble Space telescope Pictures and information on NGC 7252
Atoms for Peace galaxy at ESO
Article about Atoms for Peace galaxy
Lenticular galaxies
Peculiar galaxies
Galaxy mergers
Aquarius (constellation)
7252
68612
226 | NGC 7252 | [
"Astronomy"
] | 860 | [
"Constellations",
"Aquarius (constellation)"
] |
10,717,940 | https://en.wikipedia.org/wiki/Simazine | Simazine is an herbicide of the triazine class. The compound is used to control broad-leaved weeds and annual grasses.
Preparation
Simazine may be prepared from cyanuric chloride and a concentrated solution of ethyl amine (at least 50 percent by number) in water. The reaction is highly exothermic and is therefore best carried out below 10 °C.
Cyanuric chloride decomposes at high temperatures into hydrogen chloride and hydrogen cyanide, both of which are highly toxic by inhalation.
Properties and uses
Simazine is an off-white crystalline compound which is sparingly soluble in water. It is a member of the triazine-derivative herbicides, and was widely used as a residual non-selective herbicide, but is now banned in European Union states. Like atrazine, a related triazine herbicide, it acts by inhibiting photosynthesis. It remains active in the soil for two to seven months or longer after application.
See also
Atrazine
References
External links
Simazine, Extoxnet PIP
Herbicides
Triazines
Chloroarenes | Simazine | [
"Biology"
] | 230 | [
"Herbicides",
"Biocides"
] |
10,718,784 | https://en.wikipedia.org/wiki/Mass%20Spectrometry%20Reviews | Mass Spectrometry Reviews (usually abbreviated as Mass Spectrom. Rev.), is a peer-reviewed scientific journal, published since 1982 by John Wiley & Sons. It publishes reviews in selected topics of mass spectrometry and associated scientific disciplines bimonthly.
See also
Journal of Mass Spectrometry
Rapid Communications in Mass Spectrometry
John Wiley & Sons
Mass spectrometry journals
Academic journals established in 1987 | Mass Spectrometry Reviews | [
"Physics",
"Chemistry"
] | 89 | [
"Spectrum (physical sciences)",
"Biochemistry journal stubs",
"Biochemistry stubs",
"Mass spectrometry",
"Mass spectrometry journals"
] |
10,720,277 | https://en.wikipedia.org/wiki/Tom%20Newman%20%28scientist%29 | Tom Newman, a graduate student at Stanford University in 1985, was one of the two people to meet one of a pair of challenges put forth by Nobel Prize-winning physicist Richard Feynman at the annual meeting of the American Physical Society in 1959, in a talk titled "There's Plenty of Room at the Bottom".
In December of that year, Feynman issued two challenges at the meeting, held that year at Caltech, offering a $1000 prize to the first person to solve each of them. Both challenges involved nanotechnology, and the first prize was won by William McLellan.
The second challenge was for anyone who could find a way to inscribe a book page on a surface area 25,000 times smaller than its standard print (a scale at which the entire contents of the Encyclopædia Britannica could fit on the head of a pin).
Newman claimed the prize when he wrote the first page of Charles Dickens' A Tale of Two Cities, at the required scale, on the head of a pin with a beam of electrons. The main problem he had before he could claim the prize was finding the text after he had written it; the head of the pin was a huge empty space compared with the text inscribed on it.
References
External links
APS article — a further account of the prize-winning feat
Caltech article
Nanotechnologists
Living people
Year of birth missing (living people) | Tom Newman (scientist) | [
"Materials_science"
] | 293 | [
"Nanotechnology",
"Nanotechnologists"
] |
10,721,076 | https://en.wikipedia.org/wiki/Grease%20trap | A grease trap (also known as grease interceptor, grease recovery device, grease capsule and grease converter) is a plumbing device (a type of trap) designed to intercept most greases and solids before they enter a wastewater disposal system. Common wastewater contains small amounts of oils which enter into septic tanks and treatment facilities to form a floating scum layer. This scum layer is very slowly digested and broken down by microorganisms in the anaerobic digestion process. Large amounts of oil from food preparation in restaurants can overwhelm a septic tank or treatment facility, causing the release of untreated sewage into the environment. High-viscosity fats and cooking grease such as lard solidify when cooled, and can combine with other disposed solids to block drain pipes.
Grease traps have been in use since the Victorian era; in the late 1800s, Nathaniel Whiting was granted the first patent. The quantity of fats, oils, greases, and solids (FOGS) that enter sewers is decreased by the traps. They consist of boxes within the drain run that flows between the sinks in a kitchen and the sewer system. They have only kitchen wastewater flowing through them and do not serve any other drainage system, such as toilets. They can be made from various materials, such as stainless steel, plastics, concrete and cast iron. They range from 35-liter capacity to 45,000 litres and greater. They can be located above ground, below ground, inside the kitchen, or outside the building.
Types
There are three primary types of devices. The most common are those specified by the American Society of Mechanical Engineers (ASME), utilizing baffles or a proprietary inlet diffuser.
Grease trap sizing is based on the size of the 2- or 3-compartment sink, dishwasher, pot sinks, and mop sinks. Many manufacturers and vendors offer online sizing tools to make these calculations easy. The cumulative flow rates of these devices, as well as overall grease retention capacity (in pounds or kilograms) are considered. Currently, ASME Standard (ASME A112.14.3) is being adopted by both of the national model plumbing codes (International Plumbing Code and Uniform Plumbing Code) that cover most of the US. This standard requires that grease interceptors remove a minimum of 90% of incoming FOGs. It also requires that grease interceptors are third-party tested and certified to 90 days compliance with the standard pumping. This third-party testing must be conducted by a recognized and approved testing laboratory.
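The sink-based sizing that these online tools perform is typically along the following lines; this is only a rough illustrative sketch, with the fill factor, drainage period and the 231 cubic-inches-per-gallon conversion taken as assumptions, and local plumbing codes and manufacturer sizing tables govern in practice.

```python
def sink_flow_rate_gpm(length_in, width_in, depth_in, compartments=3,
                       fill_factor=0.75, drainage_minutes=2.0):
    """Rough estimate of the drainage flow rate (US gallons per minute) of a
    compartment sink: total bowl volume in cubic inches, converted to gallons
    (231 cubic inches per gallon), reduced by an assumed fill factor, and
    assumed to drain over a fixed period."""
    volume_gal = (length_in * width_in * depth_in * compartments) / 231.0
    return volume_gal * fill_factor / drainage_minutes

# A three-compartment sink with 18" x 18" x 12" bowls:
print(round(sink_flow_rate_gpm(18, 18, 12), 1))   # about 18.9 GPM
# An interceptor would then be chosen with a rated flow at or above this figure.
```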
Passive grease traps are generally smaller, point-of-use units used under three-compartment sinks or adjacent to dishwashers in kitchens.
Large in-ground tanks, usually , are also passive grease interceptors. These units, made of concrete, fiberglass, or steel, have greater grease and solid storage capacities for high-flow applications such as a restaurant or hospital store. They are commonly called gravity interceptors. Interceptors require a retention time of 30 minutes to allow the fats, oils, grease, and food solids to settle in the tank. As more wastewater enters the tank, the grease-free water is pushed out of the tank. The rotting brown grease inside a grease trap or grease interceptor must be pumped out on a scheduled basis. The brown grease is not recycled and goes to landfills. On average of brown grease goes to landfill annually from each restaurant.
Passive grease traps and passive grease interceptors must be emptied and cleaned when 25% full. As the passive devices fill with fats, oils, and grease, they become less productive for grease recovery. A full grease trap does not stop any FOG from entering the sanitary sewer system. The emptied contents or "brown grease" is considered hazardous waste in many jurisdictions.
A third system type, hydromechanical grease interceptors (HGIs), has become more popular in recent years as restaurants open in more nontraditional sites. Often, these sites don't have space for a large concrete grease interceptor. HGIs take up less space and hold more grease as a percent of their liquid capacity — often between 70 and 85% of their liquid capacity or even higher as in the case of some "Trapzilla" models. These interceptors are 3rd-party certified to meet efficiency standards. Most are made out of durable plastic or fiberglass, lasting much longer than concrete gravity grease interceptors. They are usually lightweight and easy to install without heavy equipment. Most manufacturers test beyond the minimum standard to demonstrate the full capacity of the unit.
Finally, automatic grease removal devices or recovery units offer an alternative to hydromechanical grease interceptors in kitchens. While their tanks passively intercept grease, they have an automatic, motorized mechanism for removing the grease from the tank and isolating it in a container. These interceptors must meet the same efficiency standards as a passive HGI, but must also meet an additional standard that proves they are capable of skimming the grease effectively.
They are often designed to be installed unobtrusively in a commercial kitchen, in a corner, or under a sink. The upfront cost of these units can be higher, but kitchen staff can handle the minimal maintenance required, avoiding pumping fees. The compact design of these units allows them to fit in tight spaces, and simplifies installation.
Uses
Restaurant and food service kitchens produce waste grease which is present in the drain lines from various sinks, dishwashers and cooking equipment such as combi ovens and commercial woks. Rotisserie ovens have also become big sources of waste grease. If not removed, the grease can clump and cause blockage and back-up in the sewer.
In the US, sewers back up annually an estimated 400,000 times, and municipal sewer overflows on 40,000 occasions. The U.S. Environmental Protection Agency has determined that sewer pipe blockages are the leading cause of sewer overflows, and grease is the primary cause of sewer blockages in the United States. Even if accumulated FOG does not escalate into blockages and sanitary sewer overflows, it can disrupt wastewater utility operations and increase operations and maintenance requirements.
For these reasons, depending on the country, nearly all municipalities require commercial kitchen operations to use some type of interceptor device to collect grease before it enters sewers. Where FOG is a concern in the local wastewater system, communities have established inspection programs to ensure that these grease traps and/or interceptors are being routinely maintained.
It is estimated 50% of all sewer overflows are caused by grease blockages, with over of raw sewage spills annually.
Method of operation
When the outflow from the kitchen sink enters the grease trap, the solid food particles sink to the bottom, while lighter grease and oil float to the top. The relatively grease-free water is then fed into the normal septic system. The food solids at the bottom and floating oil and grease must be periodically removed in a manner similar to septic tank pumping. A traditional grease trap is not a food disposal unit. Unfinished food must be scraped into the garbage or food recycling bin. Gravy, sauces and food solids must be scraped off dishes before entering the sink or dishwasher.
To maintain some degree of efficiency, there has been a trend to specify larger traps. Unfortunately, providing a large tank for the effluent to stand also means that food waste has time to settle to the bottom of the tank, reducing available volume and adding to clean-out problems. Also, rotting food contained within an interceptor breaks down, producing toxic waste (such as sulfur gases); hydrogen sulfide combines with the water present to create sulfuric acid. This attacks mild steel and concrete materials, resulting in "rot out". Polyethylene, on the other hand, has acid-resisting properties. A larger interceptor is not a better interceptor. In most cases, multiple interceptors in series will separate grease much better.
Because it has been in the trap for some time, grease thus collected will be contaminated and is unsuitable for further use. This type of grease is called brown grease.
Brown grease
Waste from passive grease traps and gravity interceptors is called brown grease. Brown grease is rotted food solids in combination with fats, oils, and grease (FOG). Brown grease is pumped from the traps and interceptors by grease pumping trucks. Unlike the collected yellow grease, the majority of brown grease goes to landfill sites. New facilities (2012) and new technology are beginning to allow brown grease to be recycled.
References
External links
A112.14.3 Grease Interceptors Standard and A112.14.6 FOG (Fats, Oils, & Greases) Disposal Systems Standard, American Society of Mechanical Engineers (ASME)
Plumbing
Sewerage infrastructure
Sanitation | Grease trap | [
"Chemistry",
"Engineering"
] | 1,781 | [
"Water treatment",
"Plumbing",
"Sewerage infrastructure",
"Construction"
] |
10,721,277 | https://en.wikipedia.org/wiki/Redshift%20%28theory%29 | Redshift is a techno-economic theory suggesting hypersegmentation of information technology markets based on whether individual computing needs are over or under-served by Moore's law, which predicts the doubling of computing transistors (and therefore roughly computing power) every two years. The theory,
proposed and named by New Enterprise Associates partner and former Sun Microsystems CTO Greg Papadopoulos, categorized a series of high growth markets (redshifting) while predicting slower GDP-driven growth in traditional computing markets (blueshifting). Papadopoulos predicted the result will be a fundamental redesign of components comprising computing systems.
Hypergrowth market segments (redshifting)
According to the Redshift theory, applications "redshift" when they grow dramatically faster than Moore's Law allows, growing quickly in their absolute number of systems. In these markets, customers are running out of datacenter real-estate, power and cooling infrastructure. According to Dell Senior Vice President Brad Anderson, “Businesses requiring hyperscale computing environments – where infrastructure deployments are measured by up to millions of servers, storage and networking equipment – are changing the way they approach IT.”
While various Redshift proponents offer minor alterations on the original presentation, “Redshifting” generally includes:
ΣBW (Sum-of-Bandwidth)
These are companies that drive heavy Internet traffic. This includes popular web-portals like Google, Yahoo, AOL and MSN. It also includes telecoms, multimedia, television over IP, online games like World of Warcraft and others. This segment has been enabled by widespread availability of high-bandwidth Internet connections to consumers through a DSL or cable modem. A simple way to understand this market is that for every byte of content served to a PC, mobile phone or other device over a network, there must exist computing systems to send it over the network.
High performance computing (HPC)
These are companies that do complex simulations that involve (for example) weather, stock markets or drug-design simulations. This is a generally elastic market because businesses frequently spend every "available" dollar budgeted for IT. A common anecdote claims that cutting the cost of computing by half causes customers in this segment to buy at least twice as much, because each marginal IT dollar spent contributes to business advantage.
*prise (or "Star-prise")
These are companies that aggregate traditional computing applications and offer them as services, typically in the form of Software as a Service (SaaS). For example, companies that deploy CRM are over-served by Moore's Law, but companies that aggregate CRM functions and offer them as a service, such as Salesforce.com, grow faster than Moore's Law.
The eBay crisis
A prime example of redshift was a crisis at eBay. In 1999 eBay suffered a database crisis when a single Oracle Database running on the fastest Sun machine available (whose performance was tracking Moore's law in this period) was not enough to cope with eBay's growth. The solution was to massively parallelise its system architecture.
Traditional computing markets (blueshifting)
Redshift theory suggests that traditional computing markets, such as those serving enterprise resource planning or customer relationship management applications, have reached relative saturation in industrialized nations. Thereafter, proponents argued further market growth will closely follow gross domestic product growth, which typically remains under 10% for most countries annually. Given that Moore's Law continues to predict accurately the rate of computing transistor growth, which roughly translates into computing power doubling every two years, the Redshift theory suggests that traditional computing markets will ultimately contract as a percentage of computing expenditures over time.
Functionally, this means “Blueshifting” customers can satisfy computing requirement growth by swapping in faster processors without increasing the absolute number of computing systems.
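The arithmetic behind this claim can be sketched with assumed growth figures (Moore's-law doubling every two years, about 41% per year, against a single-digit demand growth chosen here purely for illustration):

```python
moore_annual = 2 ** 0.5        # per-system capability grows ~41% per year
demand_annual = 1.08           # assumed "blueshift" demand growth of 8% per year

systems_needed = 1.0           # relative size of the installed base
for year in range(10):
    # demand grows more slowly than per-system capability, so fewer systems
    # are needed each year to serve it
    systems_needed *= demand_annual / moore_annual

print(round(systems_needed, 2))   # roughly 0.07 of the original base after 10 years
```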
Consequences and industry commentary
Papadopoulos argued that while traditional computing markets remain the dominant source of revenue through the late 2000s, a shift to hypergrowth markets will inevitably occur. When that shift occurs, he argued computing (but not computers) will become a utility, and differentiation in
the IT market will be based upon a company's ability to deliver computing at massive scale, efficiently and with predictable service levels, much like electricity at that time.
If computing is to be delivered as a utility, Nicholas Carr suggested Papadopoulos' vision compares with Microsoft researcher Jim Hamilton, who both agree that computing is most efficiently generated in shipping containers. Industry analysts are also beginning to quantify Redshifting and Blueshifting markets. According to International Data Corporation vice president Matthew Eastwood, "IDC believes that the IT market is in a period of hyper segmentation... This a class of customers that is Moore's law driven and as price performance gains continue, IDC believes that these organizations will accelerate their consumption of IT infrastructure.”
History and nomenclature
Key portions of Papadopoulos' theory were first presented by Sun Microsystems CEO Jonathan Schwartz in late 2006. Papadopoulos later gave a full presentation on Redshift to Sun's annual Analyst Summit in February 2007. The term Redshift refers to what happens when electromagnetic radiation, usually visible light, moves away from an observer. Papadopoulos chose this term to reflect growth markets because redshift helped cosmologists explain the expansion of the universe.
Papadopoulos originally depicted traditional IT markets as green to represent their revenue base, but later changed them to “blueshift,” which occurs when a light source moves toward an observer, similar to what would happen during a contraction of the universe.
Notes
External links
Greg Papadopoulos: Original Redshift presentation video
Official Greg Papadopoulos biography
InformationWeek feature story on Redshift
Nicholas Carr, "The future of computing demand"
Nicholas Carr, "Showdown in the trailer park II"
Microsoft's Jim Hamilton: paper and presentation
ZDnet Blog on Redshift
Adages
Rules of thumb
Computer industry
Digital media
Futures studies
Technology strategy
Computing culture | Redshift (theory) | [
"Technology"
] | 1,214 | [
"Multimedia",
"Computing culture",
"Digital media",
"Computing and society",
"Computer industry"
] |
10,721,443 | https://en.wikipedia.org/wiki/Journal%20of%20Hydrologic%20Engineering | The Journal of Hydrologic Engineering is a monthly engineering journal, first published by the American Society of Civil Engineers in 1996. The journal provides information on the development of new hydrologic methods, theories, and applications to current engineering problems. It publishes papers on analytical, experimental, and numerical methods with regard to the investigation and modeling of hydrological processes. It also publishes technical notes, book reviews, and forum discussions. Though the journal is based in the United States, articles dealing with subjects from around the world are accepted and published. The journal requires the use of the metric system, but allows for authors to also submit their papers in other systems of measure in addition to the SI system.
The journal is run by an editor-in-chief and a number of associate editors, who are respected professionals in the fields of hydrology and hydraulic engineering. The editors come from both academic and professional backgrounds and are responsible for screening submissions and forwarding articles to journal reviewers. The journal reviewers are subject matter experts who volunteer to review articles in order to determine if they should be published by the journal. The current editor-in-chief is R. S. Govindaraju of Purdue University.
G. V. Loganathan of Virginia Polytechnic Institute and State University (a victim of the Virginia Tech massacre on 16 April 2007) was an associate editor.
Editors
The following individuals have served as the editor-in-chief:
Rao S. Govindaraju (2013 – present)
Vijay P. Singh (2005 – 2013)
M. Levent Kavvas (1996–2005)
Indexes
The journal is indexed in Google Scholar, Baidu, Elsevier (Ei Compendex), Clarivate Analytics (Web of Science), ProQuest, Civil engineering database, TRDI, OCLC (WorldCat), IET/INSPEC, Crossref, Scopus, and EBSCOHost.
See also
List of scientific journals
References
External links
ASCE Library
Journal website
Academic journals established in 1996
Hydrology journals
Hydraulic engineering
Hydrologic Engineering
American Society of Civil Engineers academic journals | Journal of Hydrologic Engineering | [
"Physics",
"Engineering",
"Environmental_science"
] | 421 | [
"Hydrology",
"Hydrology journals",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
10,721,560 | https://en.wikipedia.org/wiki/Haplogroup%20L6 | In human mitochondrial genetics, Haplogroup L6 is a human mitochondrial DNA (mtDNA) haplogroup. It is a small haplogroup local to the Ethiopian highlands and Yemen.
Distribution
This haplogroup has been found only in Yemen and Ethiopia.
Subclades
Tree
This phylogenetic tree of haplogroup L6 subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation and subsequent published research.
L3'4'6
  L6
    L6a
    L6b
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
Human mitochondrial DNA haplogroups
References
External links
General
Mannis van Oven's Phylotree
Haplogroup L6
Ian Logan's Mitochondrial DNA Site: Haplogroup L6
YFull MTree's Haplogroup L6
L6 | Haplogroup L6 | [
"Chemistry",
"Biology"
] | 186 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Bioinformatics",
"Bioinformatics stubs"
] |
10,722,339 | https://en.wikipedia.org/wiki/Architectural%20reprography | Architectural reprography, the reprography of architectural drawings, covers a variety of technologies, media, and supports typically used to make multiple copies of original technical drawings and related records created by architects, landscape architects, engineers, surveyors, mapmakers and other professionals in building and engineering trades.
Within the context of archival preservation, the custodians of architectural records must consider many aspects of identification and care when managing the artifactual nature of these materials. Storage containers, handling, paper and chemical compositions and interactions, ultraviolet light exposure, humidity, mold, and other agents of potential harm all interact to determine the longevity of these documents. As well, architectural reprographic drawings are often in very large formats, making storage and handling decisions especially complex.
History
With the rise of the professionalized practice of western architecture in the second half of the 19th century, the field of architectural reprography—and the corresponding developments of photography and mass-produced wood-pulp paper—saw significant experiments and advances in technology. Beginning with major refinements in blueprinting processes in the 1840s, through the widespread adoption of diazotype printing after World War II, the design profession turned to analog architectural reprography to create accurate, to-scale reproductions of original drawings created on tracing paper, vellum, and linen supports. These copies were typically used throughout the architect's own design process and also for distribution to clients, contractors, governmental agencies, and other interested parties. However, the integration of CAD—or Computer-Aided Design—over the last twenty-five years of design practice has made analog reprography far less common in the profession and more ephemeral in nature. For archivists, curators, librarians and other custodians of architectural records, traditional reprographic formats are now often seen as historic documents, with attendant needs for long-term care and conservation.
Major processes
Both the underlying support—paper or plastic—and the image type are used to identify the specific processes used in architectural reprography. Between the late 19th century and the late 20th century, several processes emerged as the preferred methods, used for decades, while other less common processes were employed for shorter periods of time.
Blueprints
Also called a cyanotype. Developed in the 1840s by John Herschel, blueprinting uses a wet process to produce an image of white lines on a cyan or Prussian blue ground. To make a blueprint, a heavy paper (or more rarely drafting linen) support is impregnated with potassium ferricyanide and ferric ammonium citrate, placed under a translucent original drawing, weighted with glass, and exposed to ultraviolet light. After sufficient light exposure, the glass and original drawing are removed and the blueprint paper is washed to reveal a negative image. This same process, using an intermediary reprographic drawing, could also be used to produce a positive blueprint—blue lines on a white ground—however, this more expensive and time-intensive method was far less commonly employed.
The major disadvantages of the blueprint process, however, included paper distortions caused by the wet process which might render scale drawings less accurately, as well as the inability to make further copies from the blueprints. Nonetheless, for its efficiency and low cost, the blueprint process, further simplified and mechanized by the turn of the 20th century, became the most widely used reprographic process from the mid-19th century through the first half of the 20th century.
In archival settings, because the process involves ammonium, the resulting prints should not be stored in contact with other papers that have a buffered reserve, nor should blueprints be de-acidified, as the resulting chemical interactions can cause irreversible image loss. Blueprints are also highly light-sensitive and should not be exposed to ultraviolet light for long periods of time.
Pellet prints
Invented in 1887 by Henry Pellet, the Pellet process uses a wet process to produce an image of cyan or Prussian blue lines on a white ground. Essentially, this process produces a positive image, while a blueprint produces a negative one. To make a Pellet print, a paper (or more rarely drafting linen) support is coated with ferric salts suspended in a gelatin emulsion, placed under a translucent original drawing, weighted with glass, and exposed to ultraviolet light. As with the blueprint process, after sufficient light exposure, the original drawing is removed, the paper washed in a ferrocyanide bath, and then rinsed in an acidic bath to reveal a positive image. This process required fewer steps than creating a positive blueprint, and was thus more widely employed during the late 19th and early 20th centuries.
In an archival setting, Pellet prints should be treated and stored under the same conditions as blueprints.
Van Dyke prints
The Van Dyke process, invented by F. R. Van Dyke in 1901, created an intermediary print—a white line on a dark brown ground—that could be used in any of several other processes, such as blueprinting, to create a positive print, i.e. a dark line on a light ground.
Using a translucent vellum support, the paper was prepared with a coating of silver salts. The vellum was then united with the original drawing, exposed to ultraviolet light, and later washed in a sodium thiosulfate bath.
In an archival setting, Van Dyke prints are relatively rare, as they were created for temporary purposes and often discarded after the final positive prints were made. Because of the nitrates used in preparing the paper and the preferred thin paper itself, Van Dyke prints are often extremely brittle and susceptible to damage. Van Dyke prints should be stored separately and, when possible, reformatted before the image degrades unacceptably.
Diazotypes
By the middle of the 20th century, wet-process reprographic techniques such as blueprinting, Pellet, and Van Dyke printing were largely superseded by various dry-printing processes. The most common of these is the Diazotype process, refined in the 1920s, which used paper supports sensitized with diazonium salts, a coupling agent, and an acid stabilizer to produce a dark line on a white ground. The Diazo positive print was considered more readable than a negative blueprint, and the dry process eliminated the image distortion of wet paper.
As with other earlier reprographic processes, a translucent original drawing was placed over a sheet of the sensitized paper and exposed to light. However, the next step exposed the paper to ammonia gas. This alkaline gas catalyzed a reaction between the diazo salts and the coupling agent to produce an image that became fixed in the paper over several days. Typically these prints have blue or dark purple lines on a mottled cream-colored background, although line and ground colors can vary.
A related process is the sepia Diazo print, which produced either a positive or negative print in dark brown and light tones. The negative versions of these prints were most often produced as intermediaries, like the earlier Van Dyke process, to allow corrections and revisions without disturbing the original drawing. In the negative printing process, additional resins and oils were sometimes added to the paper support to increase translucency. Positive sepia prints, generally made on opaque paper, were typically used as an alternative to positive blueline Diazo prints.
Both blueline and sepia prints were often poorly and cheaply processed, resulting in undesirable residual chemical content. Off-gassing of sulfurous compounds, image fading, and yellowing of the paper support are common signs of degradation and are not reversible. Diazo prints are also highly light-sensitive and can fade to illegibility within a short period of exposure to ultraviolet light.
In archival practice, Diazo prints are the most common reprographic format encountered in late 20th-century architectural collections. However, their inherent fragility and fugitive images, as compared with blueprints and earlier processes, make their care problematic. Diazos—particularly sepia prints, which readily transfer color to adjacent papers—should be physically segregated from all other types of media. Exposure to light and pollutants in air should be minimized, and wherever possible, original drawings or reformatted prints should be kept for reference.
Other processes
Hectographic prints
Ferrogallic prints
Gel-lithographs
Photostatic prints
Wash-Off prints
Silver halide prints
Electrostatic prints
Cleaning, flattening, and repairing
For large collections of architectural materials, conservation work can address several areas of concern. Consultation with a professional conservator is recommended, although some minor treatments can be accomplished by general caretakers with training. Rolled and folded reprography, once cleaned, can be flattened through humidification. Cleaning may be done with white vinyl erasers, using great care in areas of friable media, such as graphite and colored pencil. Tears, losses, and other surface damage should be treated by a professional conservator. For particularly fragile or frequently handled prints, sheets may be encapsulated in polyester or polypropylene film for additional support and protection. This is not recommended, however, for reprographic prints with annotations in friable media.
Storage
Rolled storage
The most common form of storage for architectural drawings—both for drawings in active professional use and in archival environments—has traditionally been in rolls. While this allows for efficiency in the use of space and ease of retrieval, potentially damaging situations can arise from a casual approach to roll storage. For reprographic drawings on paper supports, rolling can stress paper fibers and make unrolling for examination more difficult. Small rolls can be easily crushed and ends can be creased and torn without additional protective wrapping and support.
Flat storage
In circumstances where fragile, rigid, or otherwise atypical media makes rolled storage unfeasible, storage in flat boxes or flatfile drawers can be the best choice. Acid-free and lignin-free portfolio boxes, ideally no more than four inches deep, can be cost-effective and allow more flexibility in arrangement on shelving. Flatfile furniture should meet the minimum requirements of archivally-sound construction—powder- or enamel-coated steel units with no rust or sharp edges that could damage materials while stored or moved in and out of the drawers.
Drawings should be grouped and identified for ease in retrieval, preferably within folders that are cut to fit the full dimensions of the corresponding container. As with rolled materials, the potentially damaging chemical interactions of print processes should be considered when grouping drawings in folders. Wherever possible, for example, blueprints should be segregated from diazotypes, and sepia diazo prints should be stored alone to the extent possible.
Reformatting
For most drawings, especially those that are oversized or significantly damaged, photographic reproduction remains the best method of accurately reproducing the fine details of a drawing. For drawings that are not significantly damaged or that are encapsulated in a polyester film, digital flat-bed scanning or other mechanical methods may be used.
Professional resources
The Society of American Archivists supports many architectural archivists in their professional responsibilities. In particular, the SAA's Architectural Records Roundtable is a primary forum for discussion of issues of acquisition, identification, description, conservation, and digital preservation of a wide variety of architectural documentation.
References
Further reading
Dessauer, J. H. & Clark, H. E. (1965). Xerography and Related Processes. London and New York: Focal Press.
Kissel, E. & Vigneau, E. (1999). Architectural Photoreproductions: a manual for identification and care. New Castle, Del.: Oak Knoll Press.
Lowell, W. & Nelb, T. R. (2006). Architectural Records: managing design and construction records. Chicago: Society of American Archivists.
Reed, J., Kissel, E., & Vigneau, E. (1995). Photo-Reproductive Processes Used for the Duplication of Architectural and Engineering Drawings: creating guidelines for identification. Book and Paper Group Annual, 14.
Reprographic Guide: technical data and applications of most processes and services performed by reprographic firms. (1981). [Franklin Park, Ill.]: The Association.
Tyrell, A. (1972). Basics of Reprography. London and New York: Focal Press.
Verry, H. R. (1958). Document Copying and Reproduction Processes. London: Fountain Press.
Reprography
Library science | Architectural reprography | [
"Engineering"
] | 2,580 | [
"Construction",
"Architecture"
] |
10,723,118 | https://en.wikipedia.org/wiki/Watershed%20area%20%28medical%29 | Watershed area is the medical term referring to regions of the body, that receive dual blood supply from the most distal branches of two large arteries, such as the splenic flexure of the large intestine. The term refers metaphorically to a geological watershed, or drainage divide, which separates adjacent drainage basins. For example, the watershed area of colon includes the griffith point and sudeck’s point.
During times of blockage of one of the arteries that supply the watershed area, such as in atherosclerosis, these regions are spared from ischemia by virtue of their dual supply. However, during times of systemic hypoperfusion, such as in disseminated intravascular coagulation or heart failure, these regions are particularly vulnerable to ischemia because they are supplied by the most distal branches of their arteries, and thus the least likely to receive sufficient blood.
Watershed areas are found in the brain, where areas are perfused by both the anterior and middle cerebral arteries, and in the intestines, where areas are perfused by both the superior and inferior mesenteric arteries (i.e., splenic flexure). Additionally, the sigmoid colon and rectum form a watershed zone with blood supply from inferior mesenteric, pudendal and iliac circulations. Hypoperfusion in watershed areas can lead to mural and mucosal infarction in the case of ischemic bowel disease. When watershed stroke occurs in the brain, it produces unique focal neurologic symptoms that aid clinicians in diagnosis and localization. For example, a cerebral watershed area is situated in the dorsal prefrontal cortex; when it is affected on the left side, this can lead to transcortical motor aphasia.
References
Circulatory system | Watershed area (medical) | [
"Biology"
] | 375 | [
"Organ systems",
"Circulatory system"
] |
10,723,135 | https://en.wikipedia.org/wiki/Voice%20bangladesh | VOICE is a Bangladesh-based activist, rights based research and advocacy organization working around the issues of corporate globalization.
Campaigns
It critically campaigns on neo-liberal "economic hegemony", the role of international financial institutions (IFIs), WTO, TNCs and privatization. It works around aid conditions, food sovereignty, media, communication rights and information and communication technologies, governance and human rights, policy research and advocacy, both at local and national levels in Bangladesh to raise awareness and shape the public discourse against economic, social and cultural hegemony and injustice including "global capitalism and imperialism."
VOICE is located in Shyamoli in Dhaka, the capital of Bangladesh.
Campaign over monga
VOICE has published Monga: the art of politics of dying and The Politics of Aid: Conditionalities and Challenges during the seventh summit of the World Social Forum in Kenya in January 2007.
Famine-like situation
Monga refers to a famine-like situation observed in several northern districts of Bangladesh, which has been "recurring every year for decades", according to Voice. The two months of the monga season, between September and October, are generally marked by a dire lack of food, stemming from the agricultural off-season coinciding with a lack of non-agricultural work. The periodic famine stems from the neglect and lack of commitment on the part of successive governments of Bangladesh, which have consistently denied the very existence of the phenomenon, according to Voice.
Ahmed Swapan Mahmud is executive director of Voice, based at the Pisciculture Housing Society in the Shyamoli locality of Dhaka.
References
External links
VOICE official site, Bangladesh
APC on VOICE
Information and communication technologies in Asia
Non-profit technology
Non-profit organisations based in Bangladesh | Voice bangladesh | [
"Technology"
] | 346 | [
"Information technology",
"Non-profit technology"
] |
10,723,149 | https://en.wikipedia.org/wiki/Deuterated%20DMSO | Deuterated DMSO, also known as dimethyl sulfoxide-d6, is an isotopologue of dimethyl sulfoxide (DMSO, (CH3)2S=O)) with chemical formula ((CD3)2S=O) in which the hydrogen atoms ("H") are replaced with their isotope deuterium ("D"). Deuterated DMSO is a common solvent used in NMR spectroscopy.
Production
Deuterated DMSO is produced by heating DMSO in heavy water (D2O) with a basic catalyst such as calcium oxide. The reaction does not give complete conversion to the d6 product, and the water produced must be removed and replaced with D2O several times to drive the equilibrium to the fully deuterated product.
Use in NMR spectroscopy
Pure deuterated DMSO shows no peaks in 1H NMR spectroscopy and as a result is commonly used as an NMR solvent. However, commercially available samples are not 100% pure, and a residual DMSO-d5 1H NMR signal is observed at 2.50 ppm (quintet, JHD = 1.9 Hz). The 13C chemical shift of DMSO-d6 is 39.52 ppm (septet).
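The quintet multiplicity of the residual DMSO-d5 signal can be rationalised with the general 2nI + 1 multiplicity rule for coupling to n equivalent nuclei of spin I; the short worked form below is a sketch based on that standard rule rather than on anything specific to this article.

```latex
% Residual proton in DMSO-d5 (CD3-SO-CD2H): the lone 1H couples to the
% n = 2 deuterons (spin I = 1) on the same carbon.
\[
  \text{multiplicity} = 2nI + 1 = 2 \times 2 \times 1 + 1 = 5 \qquad \text{(a quintet)}
\]
```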
References
Deuterated solvents | Deuterated DMSO | [
"Chemistry"
] | 281 | [
"Deuterated solvents",
"Nuclear magnetic resonance"
] |
10,723,489 | https://en.wikipedia.org/wiki/Liviu%20Librescu | Liviu Librescu (; ; August 18, 1930 – April 16, 2007) was a Romanian–American scientist and engineer. A prominent academic in addition to being a survivor of the Holocaust, his major research fields were aeroelasticity and aerodynamics.
Librescu is most widely known for his actions during the Virginia Tech shooting, when he held the doors to his lecture hall closed, allowing all but one of his students enough time to escape through the windows. Shot and killed during the attack, Librescu was posthumously awarded the Order of the Star of Romania, the country's highest civilian honor. Coincidentally, Librescu's act of heroism happened on Nisan 27 in the Jewish lunar calendar. That date is Yom HaShoah, which is Holocaust Remembrance Day in Israel.
At the time of his death, he was Professor of Engineering Science and Mechanics at Virginia Tech.
Life and career
Liviu Librescu was born in 1930 to a Jewish family in the city of Ploiești, Romania. After Romania allied with Nazi Germany in World War II, his family was deported to a labor camp in Transnistria, and later, along with thousands of other Jews, was deported to a ghetto in the Romanian city of Focșani. His wife, Marlena, who is also a Holocaust survivor, told Israeli Channel 10 TV the day after his death, "We were in Romania during the Second World War, and we were Jews there among the Germans, and among the anti-Semitic Romanians." Dorothea Weisbuch, a cousin of Librescu living in Romania, said in an interview to Romanian newspaper Cotidianul: "He was an extraordinarily gifted person and very altruistic. When he was little, he was very curious and knew everything, so that I thought he would become very conceited, but it did not happen so; he was of a rare modesty."
After surviving the Holocaust, Librescu was repatriated to Communist Romania. He studied aerospace engineering at the Polytechnic University of Bucharest, graduating in 1952 and continuing with a Master's degree at the same university. He was awarded a Ph.D. in fluid mechanics in 1969 at the Academia de Științe din România. From 1953 to 1975, he worked as a researcher at the Bucharest Institute of Applied Mechanics, and later at the Institute of Fluid Mechanics and the Institute of Fluid Mechanics and Aerospace Constructions of the Academy of Science of Romania.
His career stalled in the 1970s because he refused to swear allegiance to Nicolae Ceaușescu's government. When Librescu requested permission to emigrate to Israel, the Academy of Science of Romania fired him. In 1976, a smuggled research manuscript that he had published in the Netherlands drew him international attention in the growing field of material dynamics.
After months of government refusal, Israeli Prime Minister Menachem Begin intervened to get the Librescu family an emigration permit by directly asking Romanian President Nicolae Ceaușescu to let them go. They moved to Israel in 1978.
From 1979 to 1986, Librescu was Professor of Aeronautical and Mechanical Engineering at Tel Aviv University and taught at the Technion in Haifa. In 1985, he left on sabbatical for the United States, where he served as Professor at Virginia Tech in its Department of Engineering Science and Mechanics, where he remained until his death. He served as a member on the editorial board of seven scientific journals and was invited as a guest editor of special issues of five other journals. Most recently, he was co-chair of the International Organizing Committee of the 7th International Congress on Thermal Stress, Taipei, Taiwan, June 4–7, 2007, for which he had been scheduled to give the keynote lecture. According to his wife, no Virginia Tech professor has ever published more articles than Librescu.
Fields of research
Librescu's major fields of study included:
Foundation and applications of the modern theory of shells incorporating non-classical effects and composed of advanced composite materials
Foundation of the theory and applications of sandwich type structures
Aeroelastic stability of flight vehicle structures
Nonlinear aeroelasticity of structures in supersonic and hypersonic flow fields
Aeroelastic and structural tailoring
Dynamic response and instability of elastic and viscoelastic laminated composite structures subjected to deterministic and random loading systems
Mechanical and thermal postbuckling of flat and curved shear-deformable elastic panels
Static, dynamic and aeroelastic feedback control of adaptive structures
Unsteady aerodynamics and magnetoaerodynamics of supersonic flows with applications
Optimization problems of aeroelastic structural systems
Theory of composite thin-walled beams and its application in aeronautical and mechanical constructions
Nonlinear structural deformation of compressible composite materials under shear stress
Response and behavior of structures to underwater and in-air explosions
Multifunctional and functionally graded material structures.
Death and legacy
At age 76, Librescu was the oldest of the 32 people who were murdered in the Virginia Tech shooting. On April 16, 2007, Seung-Hui Cho entered the Norris Hall Engineering Building and opened fire on classrooms. Librescu, who was teaching a solid mechanics class in Room 204 of Norris Hall, held the door of his classroom shut while the gunman attempted to enter it and yelled to his students to escape through the windows. While the shooter tried to force the door open, Librescu managed to prevent him from entering until most of his students had escaped through the windows. After kicking open the window screens, the students successfully escaped; some suffered leg injuries while landing on the ground two floors below, while others survived after landing on the shrubbery just below the window and then ran either to ambulances pulling up or to the nearest bus stop. Librescu was shot four times through the door, including once through his wrist watch. Of the 23 registered students in his class, Minal Panchal, a graduate student from Mumbai, India, was the only student in the room who lost her life, while two others, who were injured while taking cover in a corner, made it out alive. After the gunman forced his way into the room, he was reportedly enraged that the majority of the students had escaped. Before leaving the room, Cho confronted Professor Librescu and Panchal, who were lying on the ground next to the door, and fatally shot them in the temple.
A number of Librescu's students have called him a hero because of his actions. Caroline Merrey, a senior, said she and about 20 other students scrambled through the windows as Librescu shouted for them to hurry. Merrey said, "I don't think I would be here if it wasn't for [Librescu]." Librescu's son Joe said he had received e-mails from several students who said he had saved their lives and regarded him as a hero.
Following the murder of Librescu, at the request of his family and with the assistance of Gov. Tim Kaine, his body was released on April 17 and he received a funeral service at an Orthodox Jewish funeral home in Borough Park, Brooklyn, New York. On April 20, he was interred in Israel. In his native Romania, his picture was placed on a table at the Polytechnic University of Bucharest, and a candle was lit. People laid flowers nearby.
The massacre took place on Holocaust Remembrance Day (Yom HaShoah). On April 18, 2007, President of the United States George W. Bush honored Librescu at a memorial service held at the United States Holocaust Memorial Museum, attended by a crowd that included many Holocaust survivors:
That day we saw horror, but we also saw quiet acts of courage. We saw this courage in a teacher named Liviu Librescu. With the gunman set to enter his class, this brave professor blocked the door with his body while his students fled to safety. On the Day of Remembrance, this Holocaust survivor gave his own life so that others may live. And this morning we honor his memory and we take strength from his example.
Honors and awards
Librescu received many academic honors during his work in the Engineering Science and Mechanics Department at Virginia Tech, serving as chair or invited as a keynote speaker of several International Congresses on Thermal Stresses and receiving several honorary degrees. He was elected member of the Academy of Sciences of the Shipbuilding of Ukraine (2000) and Foreign Fellow of the Academy of Engineering of Armenia (1999). He was a recipient of Doctor Honoris Causa of the Polytechnic Institute of Bucharest (2000), of the 1999 Dean's Award for Excellence in Research, College of Engineering at Virginia Tech, and a laureate of the Traian Vuia Prize of the Romanian Academy (1972). He was a member of the Board of Experts of the Italian Ministry of Education, University and Scientific Research. He was awarded the Frank J. Maher Award for Excellence in Engineering Education (2005) and an ASME diploma (2005) expressing "deep appreciation for the valuable services in advancing the engineering profession".
Posthumously, Professor Librescu was commended by Traian Băsescu, the President of Romania, with the Order of the Star of Romania with the rank of Grand Cross, "as a sign of high appreciation and gratitude for the entire scientific and academic activity, as well as for the heroism shown in the course of the tragic events which took place on April 16th, 2007, [...] through which he saved the lives of his students, sacrificing his own life." The Chabad Hasidic Movement named its Jewish Student Center at Virginia Tech after him.
The classroom of the Sara and Sam Schoffer Holocaust Resource Center at Stockton University in Galloway, New Jersey was dedicated to the memory of Liviu Librescu in April 2009 through a donation from The Azeez Family and Foundation of Egg Harbor Township. Jane B. Stark, who is Executive Director of the Sam Azeez Museum of Woodbine Heritage in Woodbine, New Jersey, said "This man, who endured so much during the Holocaust, thought of his students’ safety before his own in a time of crisis. ... He deserves to be remembered for these heroic actions."
The street in front of the U.S. Embassy in Bucharest was named in his honor.
Professor Librescu was also awarded the 2007 Facilitator Award by Stetson University College of Law's Center for Excellence in Higher Education Law and Policy.
A gift to Columbia Law School from alumnus Ira Greenstein '85 honored Professor Librescu's heroism during the Virginia Tech shooting and established a professorship in his name—the "Liviu Librescu Professor of Law." This professorship is awarded at the discretion of the Dean, who seeks to appoint to the Librescu Professorship a member of the faculty with an expertise in national security or social justice. Matthew Waxman currently holds the Librescu Professorship. He is an expert in national security law and international law, including issues such as executive power, international human rights and constitutional rights, military force and armed conflict, terrorism, cybersecurity, and maritime disputes.
Publications
Books authored by Librescu include:
See also
History of the Jews in Romania
Romanian American
Israeli American
References
External links
We Remember, Virginia Tech Remembrance
Librescu Family Condolence Page, Chabad on Campus Foundation
(mirror)
BBC profile
The Librescu Jewish Student Center
News
Complete Coverage: Virginia Tech Shooting , Newsday, April 17, 2007
Heroes in the Midst of Horror: Holocaust Survivor, Students Saved Others by Marcus Baram, ABC News, April 17, 2007
Librescu 'cared only about science' by Judy Siegel-Itzkovich, Jerusalem Post, April 17, 2007
Liviu Librescu, The Times, April 18, 2007
1930 births
2007 deaths
Romanian murder victims
Israeli murder victims
American murder victims
Romanian people murdered abroad
Israeli people murdered abroad
Grand Crosses of the Order of the Star of Romania
Israeli aerospace engineers
American aerospace engineers
Romanian aerospace engineers
Israeli materials scientists
American materials scientists
Israeli Orthodox Jews
American Orthodox Jews
Romanian Orthodox Jews
Israeli scientists
Romanian scientists
Jewish scientists
Jewish refugees
Murdered American Jews
20th-century American Jews
People from Ploiești
Politehnica University of Bucharest alumni
Romanian emigrants to Israel
Romanian refugees
Structural engineers
Survivors of World War II deportations to Transnistria
Virginia Tech faculty
Mass murder victims
Deaths by firearm in Virginia
People murdered in Virginia
20th-century American engineers
Victims of mass shootings in the United States
Virginia Tech shooting | Liviu Librescu | [
"Engineering"
] | 2,498 | [
"Structural engineering",
"Structural engineers"
] |
10,723,887 | https://en.wikipedia.org/wiki/SxS | SxS (S-by-S) is a flash memory standard compliant to the Sony and SanDisk-created ExpressCard standard. According to Sandisk and Sony, the cards have transfer rates of 800 Mbit/s and burst transfer rate of up to 2.5 Gbit/s over the ExpressCard's PCI Express interface. Sony uses these cards as the storage medium for their XDCAM EX line of professional video cameras.
Compatibility
The card can be inserted directly into an ExpressCard slot, available on many notebooks. However, it will only work in Windows and Mac OS X, and only with a Sony device driver installed on the machine. Experimental Linux drivers are also available.
The only universal connectivity for these cards is the Sony SBAC-US10 and Sony SBAC-US20. These external USB adapters will make the cards visible to any system as an external USB hard drive. The Sony SBAC-US20 uses the USB 3.0 interface and has a suggested retail price of US$350.
SxS PRO+
SxS PRO+ is a faster version of SxS designed for the recording of 4K resolution video. SxS Pro+ has a guaranteed minimum recording speed of 1.3 Gbit/s and an interface with a theoretical maximum speed of 8 Gbit/s.
SxS PRO+ media cards are used on three CineAlta cameras: the Sony PMW-F55, the Sony PMW-F5, and the Sony Venice. The XAVC recording format can record 4K resolution at 60 fps with 4:2:2 chroma subsampling at 600 Mbit/s. A 128 gigabyte SxS PRO+ media card can record up to 20 minutes of 4K resolution XAVC video at 60 fps, up to 40 minutes of 4K resolution XAVC video at 30 fps, and up to 120 minutes of 2K resolution XAVC video at 30 fps.
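As a rough, back-of-the-envelope check of the figures above, the sketch below estimates recording time from nominal card capacity and bitrate. It assumes the full 128 GB (decimal gigabytes) is available for video, so the results are theoretical upper bounds; the lower quoted real-world times are consistent with file-system and format overhead. The halved bitrate in the second call is an illustrative assumption, not a figure from this article.

```python
# Rough back-of-the-envelope check of the recording times quoted above.
# Assumes the full nominal 128 GB (decimal gigabytes) is usable for video;
# real-world figures are lower because of file-system and format overhead.

def minutes_of_recording(card_gb: float, bitrate_mbps: float) -> float:
    card_megabits = card_gb * 1000 * 8          # GB -> megabits (decimal units)
    return card_megabits / bitrate_mbps / 60    # seconds -> minutes

print(minutes_of_recording(128, 600))  # ~28 min upper bound at 600 Mbit/s (4K/60 XAVC)
print(minutes_of_recording(128, 300))  # ~57 min if the bitrate were halved (illustrative)
```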
See also
P2 (storage media)
XAVC - A recording format that can be used with SxS PRO+ media cards
References
Computer memory
Solid-state computer storage media | SxS | [
"Technology"
] | 446 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
10,725,167 | https://en.wikipedia.org/wiki/Tadek%20Marek | Tadeusz "Tadek" Marek (1908–1982) was a Polish automobile engineer, known for his Aston Martin engines.
Before the war
Marek was from Kraków and studied engineering at Technische Universität Berlin before working for Fiat in Poland and also for General Motors. Despite a serious racing accident in 1928, he raced the 1937 Monte Carlo Rally in a Fiat 1100 followed by a Lancia Aprilia in 1938 and an Opel Olympia in 1939. Driving a Chevrolet Master sedan, he won the XII Rally Poland (1939) before moving to Great Britain in 1940 to join the Polish Army.
He joined the Meteor engine development for the Centurion tank (1944), but later returned to Germany, working for the United Nations Relief and Rehabilitation Administration.
After the war
In 1949 he joined the Austin Motor Company, and eventually joined Aston Martin (1954). He is notable for his work on three engines, developing the alloy straight six-cylinder engine of the Aston Martin DBR2 racing car (1956), redesigning the company's venerable straight six-cylinder Lagonda (1957), and developing the Aston Martin V8 engine (1968).
The Lagonda engine received a new cast iron block with top seating liners, used in the DB Mark III that debuted in 1957. After modifications, the DBR2 engine was used in the DB4 (1958), DB5 (1963), DB6 (1965) and DBS (1967). It was used for the final time in the 1973 Vantage, a rare crossover following the end of the David Brown era, of which only 71 were produced combining the straight-six engine with the body of the AMV8. The V8 first appeared in the DBS V8 in 1969, going on to power Aston Martins for part of five decades before being retired in 2000. A prototype was fitted in the mid-'60s in a one-off DB5 extended 4" after the doors and driven by Marek personally, and a normally six-cylinder Aston Martin DB7 was equipped with a V8 unit in 1998.
Marek and his wife moved to Italy in 1968, where he died in 1982.
References
Engineers from Kraków
Automotive engineers
Aston Martin people
1908 births
1982 deaths
Polish emigrants to Italy | Tadek Marek | [
"Engineering"
] | 448 | [
"Automotive engineering",
"Automotive engineers"
] |
10,725,984 | https://en.wikipedia.org/wiki/Gunshot%20wound | A gunshot wound (GSW) is a penetrating injury caused by a projectile (e.g. a bullet) shot from a gun (typically a firearm). Damage may include bleeding, bone fractures, organ damage, wound infection, and loss of the ability to move part of the body. Damage depends on the part of the body hit, the path the bullet follows through (or into) the body, and the type and speed of the bullet. In severe cases, although not uncommon, the injury is fatal. Long-term complications can include bowel obstruction, failure to thrive, neurogenic bladder and paralysis, recurrent cardiorespiratory distress and pneumothorax, hypoxic brain injury leading to early dementia, amputations, chronic pain and pain with light touch (hyperalgesia), deep venous thrombosis with pulmonary embolus, limb swelling and debility, and lead poisoning.
Factors that determine rates of gun violence vary by country. These factors may include the illegal drug trade, easy access to firearms, substance misuse including alcohol, mental health problems, firearm laws, social attitudes, economic differences, and occupations such as being a police officer. Where guns are more common, altercations more often end in death.
Before management begins, the area must be verified as safe. This is followed by stopping major bleeding, then assessing and supporting the airway, breathing, and circulation. Firearm laws, particularly background checks and permit to purchase, decrease the risk of death from firearms. Safer firearm storage may decrease the risk of firearm-related deaths in children.
In 2015, about a million gunshot wounds occurred from interpersonal violence. In 2016, firearms resulted in 251,000 deaths globally, up from 209,000 in 1990. Of these deaths, 161,000 (64%) were the result of assault, 67,500 (27%) were the result of suicide, and 23,000 (9%) were accidents. In the United States, guns resulted in about 40,000 deaths in 2017. Firearm-related deaths are most common in males between the ages of 20 and 24 years. Economic costs due to gunshot wounds have been estimated at $140 billion a year in the United States.
Signs and symptoms
Trauma from a gunshot wound varies widely based on the bullet, velocity, mass, entry point, trajectory, affected anatomy, and exit point. Gunshot wounds can be particularly devastating compared to other penetrating injuries because the trajectory and fragmentation of bullets can be unpredictable after entry. Moreover, gunshot wounds typically involve a large degree of nearby tissue disruption and destruction caused by the physical effects of the projectile correlated with the bullet velocity classification.
The immediate damaging effect of a gunshot wound is typically severe bleeding with the potential for a type of shock known as hypovolemic shock, a condition characterized by inadequate delivery of oxygen to vital organs. In the case of traumatic hypovolemic shock, this failure of adequate oxygen delivery is due to blood loss, as blood is the means of delivering oxygen to the body's constituent parts. Besides blood loss, internal bleeding can lead to complications.
Devastating effects can result when a bullet strikes a vital organ such as the heart, lungs, or liver, or damages a component of the central nervous system such as the spinal cord or brain. It can lead to organ failure and death.
Common causes of death following gunshot injury include bleeding, low oxygen caused by pneumothorax, catastrophic injury to the heart and major blood vessels, and damage to the brain or central nervous system. Non-fatal gunshot wounds frequently have mild to severe long-lasting effects, typically some form of major disfigurement such as amputation following a severe bone fracture, and may cause permanent disability. Massive bleeding may occur immediately if a bullet directly damages larger blood vessels, especially arteries.
Pathophysiology
The degree of tissue disruption caused by a projectile is related to the cavitation the projectile creates as it passes through tissue. A bullet with sufficient energy will have a cavitation effect in addition to the penetrating track injury. As the bullet passes through the tissue, initially crushing then lacerating, the space left forms a cavity; this is called the permanent cavity. Higher-velocity bullets create a pressure wave that forces the tissues away, creating not only a permanent cavity the size of the caliber of the bullet but a temporary cavity or secondary cavity, which is often many times larger than the bullet itself. The temporary cavity is the radial stretching of tissue around the bullet's wound track, which momentarily leaves an empty space caused by high pressures surrounding the projectile that accelerate material away from its path. The extent of cavitation, in turn, is related to the following characteristics of the projectile:
Kinetic energy: KE = ½mv² (where m is mass and v is velocity). This helps to explain why wounds produced by projectiles of higher mass and/or higher velocity produce greater tissue disruption than projectiles of lower mass and velocity. The velocity of the bullet is the more important determinant of tissue injury: although both mass and velocity contribute to the overall energy of the projectile, the energy is proportional to the mass but to the square of the velocity. As a result, for constant velocity, if the mass is doubled, the energy is doubled; however, if the velocity of the bullet is doubled, the energy increases four times (a numeric sketch of this relationship follows this list). The initial velocity of a bullet is largely dependent on the firearm. The US military commonly uses 5.56-mm bullets, which have a relatively low mass as compared with other bullets; however, the speed of these bullets is relatively fast. As a result, they produce a larger amount of kinetic energy, which is transmitted to the tissues of the target. The size of the temporary cavity is approximately proportional to the kinetic energy of the bullet and depends on the resistance of the tissue to stress. Muzzle energy, which is based on muzzle velocity, is often used for ease of comparison.
Yaw: Handgun bullets will generally travel in a relatively straight line or make one turn if a bone is hit. Upon travel through deeper tissue, high-energy rounds may become unstable as they decelerate, and may tumble (pitch and yaw) as the energy of the projectile is absorbed, causing stretching and tearing of the surrounding tissue.
Fragmentation: Most commonly, bullets do not fragment, and secondary damage from fragments of shattered bone is a more common complication than bullet fragments.
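The following is a minimal numeric sketch of the kinetic-energy relationship described in the first item above. The 4 g mass and 900 m/s velocity are assumed, illustrative values for a light, fast rifle bullet; they are not figures taken from this article.

```python
# Illustrative sketch: kinetic energy of a projectile, KE = 1/2 * m * v^2.
# The mass (4 g) and velocity (900 m/s) are rough, assumed example values.

def kinetic_energy_joules(mass_kg: float, velocity_ms: float) -> float:
    """Return the kinetic energy of a projectile in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

baseline = kinetic_energy_joules(0.004, 900)           # ~1,620 J
print(kinetic_energy_joules(0.008, 900) / baseline)    # doubling mass     -> 2.0x energy
print(kinetic_energy_joules(0.004, 1800) / baseline)   # doubling velocity -> 4.0x energy
```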
Diagnosis
Classification
Gunshot wounds are classified according to the speed of the projectile using the Gustilo open fracture classification:
Low-velocity: Less than 335 m/s (1,100 ft/s)
Low velocity wounds are typical of small caliber handguns. They do not usually cause extensive soft tissue damage, and in the Gustilo open fracture classification are classified as Type 1 or 2 wounds.
Medium-velocity: Between 360 m/s (1,200 ft/s) and 600 m/s (2,000 ft/s)
These are more typical of shotgun blasts or higher caliber handguns like magnums. The risk of infection from these types of wounds can vary depending on the type and pattern of bullets fired as well as the distance from the firearm.
High-velocity: Between 600 m/s (2,000 ft/s) and 1,000 m/s (3,500 ft/s)
Usually caused by powerful assault or hunting rifles and usually cause Gustilo Type 3 wounds. The risk of infection is especially high due to the large area of injury and destroyed tissue.
Bullets from handguns are sometimes slower than 335 m/s (1,100 ft/s), but with modern pistol loads they are usually slightly above that, while bullets from most modern rifles exceed 600 m/s (2,000 ft/s). One recently developed class of firearm projectile is the hyper-velocity bullet; such cartridges are purpose-built, whether in factories or by amateur hand-loaders, to achieve especially high speeds. Examples of hyper-velocity cartridges include the .220 Swift, .17 Remington and .17 Mach IV cartridges. The US military commonly uses 5.56 mm bullets, which have a relatively low mass as compared with other bullets (2.6–4.0 grams); however, the speed of these bullets is relatively fast, placing them in the high-velocity category. As a result, they produce a larger amount of kinetic energy, which is transmitted to the tissues of the target. High energy transfer results in more tissue disruption, which plays a role in incapacitation, but other factors such as wound size and shot placement are also important.
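As a small illustrative sketch, the velocity bands above can be expressed as a simple lookup. The band edges below follow this section's figures only; velocities falling in the gap between 335 and 360 m/s are reported as between bands rather than forced into one.

```python
# Minimal sketch: map a projectile velocity (m/s) to the velocity bands
# described above. Band edges follow this section's figures; the 335-360 m/s
# gap is reported explicitly rather than assigned to a band.

def velocity_class(v_ms: float) -> str:
    if v_ms < 335:
        return "low-velocity (< 335 m/s)"
    if v_ms < 360:
        return "between the low- and medium-velocity bands (335-360 m/s)"
    if v_ms <= 600:
        return "medium-velocity (360-600 m/s)"
    if v_ms <= 1000:
        return "high-velocity (600-1,000 m/s)"
    return "above the high-velocity band (> 1,000 m/s)"

print(velocity_class(250))  # typical small-calibre handgun round -> low-velocity
print(velocity_class(900))  # typical modern rifle round -> high-velocity
```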
Kronlein shot
The "Kronlein shot" (German: Krönleinschuss) is a distinctive type of headshot wound that can only be created by a high velocity rifle bullet or shotgun slug. In a Kronlein shot, the intact brain is ejected from the skull and deposited some distance from the victim's body. This type of wound is believed to be caused by a hydrodynamic effect. Hydraulic pressure generated within the skull by a high velocity bullet leads to the explosive ejection of the brain from the fractured skull.
Prevention
Interventions have been recommended to reduce the risk of firearm related injury or death. Medical organizations in the United States recommend a criminal background check being held before a person buys a gun and that a person who has convictions for crimes of violence should not be permitted to buy a gun. Safe storage of guns is recommended, as well as better mental health care and removal of guns from those at risk of suicide. Experts recommend that physicians counsel patients regarding safe storage of guns and other injury prevention strategies related to guns as part of routine medical care. Having guns locked and unloaded is associated with a lower risk of gun related injury or death (including a lower risk of suicide) for all household members as compared to guns that are stored loaded and unlocked.
Temporarily removing guns from the home, either voluntarily or by court order (such as with extreme risk protection orders [so called "red flag laws"] in the United States) is recommended for those who are at risk of suicide or violence towards others. Such laws have been associated with a lower risk of suicide using guns in population based studies.
In an effort to prevent mass shootings, greater regulations on guns that can rapidly fire many bullets is recommended.
Management
Initial assessment for a gunshot wound is approached in the same way as other acute trauma using the advanced trauma life support (ATLS) protocol. These include:
A) Airway - Assess and protect airway and potentially the cervical spine
B) Breathing - Maintain adequate ventilation and oxygenation
C) Circulation - Assess for and control bleeding to maintain organ perfusion including focused assessment with sonography for trauma (FAST)
D) Disability - Perform basic neurological exam including Glasgow Coma Scale (GCS)
E) Exposure - Expose entire body and search for any missed injuries, entry points, and exit points while maintaining body temperature
Depending on the extent of injury, management can range from urgent surgical intervention to observation. As such, any history from the scene such as gun type, shots fired, shot direction and distance, blood loss on scene, and pre-hospital vitals signs can be very helpful in directing management. Unstable people with signs of bleeding that cannot be controlled during the initial evaluation require immediate surgical exploration in the operating room. Otherwise, management protocols are generally dictated by anatomic entry point and anticipated trajectory.
Neck
A gunshot wound to the neck can be particularly dangerous because of the high number of vital anatomical structures contained within a small space. The neck contains the larynx, trachea, pharynx, esophagus, vasculature (carotid, subclavian, and vertebral arteries; jugular, brachiocephalic, and vertebral veins; thyroid vessels), and nervous system anatomy (spinal cord, cranial nerves, peripheral nerves, sympathetic chain, brachial plexus). Gunshots to the neck can thus cause severe bleeding, airway compromise, and nervous system injury.
Initial assessment of a gunshot wound to the neck involves non-probing inspection of whether the injury is a penetrating neck injury (PNI), classified by violation of the platysma muscle. If the platysma is intact, the wound is considered superficial and only requires local wound care. If the injury is a PNI, surgery should be consulted immediately while the case is being managed. Of note, wounds should not be explored on the field or in the emergency department given the risk of exacerbating the wound.
Due to the advances in diagnostic imaging, management of PNI has been shifting from a "zone-based" approach, which uses anatomical site of injury to guide decisions, to a "no-zone" approach which uses a symptom-based algorithm. The no-zone approach uses a hard signs and imaging system to guide next steps. Hard signs include airway compromise, unresponsive shock, diminished pulses, uncontrolled bleeding, expanding hematoma, bruits/thrill, air bubbling from wound or extensive subcutaneous air, stridor/hoarseness, neurological deficits. If any hard signs are present, immediate surgical exploration and repair is pursued alongside airway and bleeding control. If there are no hard signs, the person receives a multi-detector CT angiography for better diagnosis. A directed angiography or endoscopy may be warranted in a high-risk trajectory for the gunshot. A positive finding on CT leads to operative exploration. If negative, the person may be observed with local wound care.
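Purely as an illustration of the decision flow just described, and not as clinical guidance, the prose above can be restated as a simple branching function. The step descriptions below are paraphrases of this section, not terminology from any particular guideline.

```python
# Sketch of the "no-zone" decision flow for penetrating neck injury (PNI)
# described above. A simplified restatement of the prose, not clinical guidance.
from typing import Optional

def pni_next_step(platysma_violated: bool,
                  hard_signs: bool,
                  ct_positive: Optional[bool] = None) -> str:
    if not platysma_violated:
        return "superficial wound: local wound care only"
    if hard_signs:
        return "immediate surgical exploration and repair, with airway and bleeding control"
    if ct_positive is None:
        return "multi-detector CT angiography (directed angiography/endoscopy if high-risk trajectory)"
    return "operative exploration" if ct_positive else "observation with local wound care"

print(pni_next_step(platysma_violated=True, hard_signs=False, ct_positive=False))
```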
Chest
Important anatomy in the chest includes the chest wall, ribs, spine, spinal cord, intercostal neurovascular bundles, lungs, bronchi, heart, aorta, major vessels, esophagus, thoracic duct, and diaphragm. Gunshots to the chest can thus cause severe bleeding (hemothorax), respiratory compromise (pneumothorax, hemothorax, pulmonary contusion, tracheobronchial injury), cardiac injury (pericardial tamponade), esophageal injury, and nervous system injury.
Initial assessment as outlined above is particularly important with gunshot wounds to the chest because of the high risk for direct injury to the lungs, heart, and major vessels. Important notes for the initial workup specific for chest injuries are as follows. In people with pericardial tamponade or tension pneumothorax, the chest should be evacuated or decompressed if possible prior to attempting tracheal intubation because the positive pressure ventilation can cause hypotension or cardiovascular collapse. Those with signs of a tension pneumothorax (asymmetric breathing, unstable blood flow, respiratory distress) should immediately receive a chest tube (> French 36) or needle decompression if chest tube placement is delayed. FAST exam should include extended views into the chest to evaluate for hemopericardium, pneumothorax, hemothorax, and peritoneal fluid.
Those with cardiac tamponade, uncontrolled bleeding, or a persistent air leak from a chest tube all require surgery. Cardiac tamponade can be identified on FAST exam. Blood loss warranting surgery is 1–1.5 L of immediate chest tube drainage or ongoing bleeding of 200-300 mL/hr. Persistent air leak is suggestive of tracheobronchial injury which will not heal without surgical intervention. Depending on the severity of the person's condition and if cardiac arrest is recent or imminent, the person may require surgical intervention in the emergency department, otherwise known as an emergency department thoracotomy (EDT).
However, not all gunshot to the chest require surgery. Asymptomatic people with a normal chest X-ray can be observed with a repeat exam and imaging after 6 hours to ensure no delayed development of pneumothorax or hemothorax. If a person only has a pneumothorax or hemothorax, a chest tube is usually sufficient for management unless there is large volume bleeding or persistent air leak as noted above. Additional imaging after initial chest X-ray and ultrasound can be useful in guiding next steps for stable people. Common imaging modalities include chest CT, formal echocardiography, angiography, esophagoscopy, esophagography, and bronchoscopy depending on the signs and symptoms.
Abdomen
Important anatomy in the abdomen includes the stomach, small bowel, colon, liver, spleen, pancreas, kidneys, spine, diaphragm, descending aorta, and other abdominal vessels and nerves. Gunshots to the abdomen can thus cause severe bleeding, release of bowel contents, peritonitis, organ rupture, respiratory compromise, and neurological deficits.
The most important initial evaluation of a gunshot wound to the abdomen is whether there is uncontrolled bleeding, inflammation of the peritoneum, or spillage of bowel contents. If any of these are present, the person should be transferred immediately to the operating room for laparotomy. If it is difficult to evaluate for those indications because the person is unresponsive or incomprehensible, it is up to the surgeon's discretion whether to pursue laparotomy, exploratory laparoscopy, or alternative investigative tools.
Although all people with abdominal gunshot wounds were taken to the operating room in the past, practice has shifted in recent years with the advances in imaging to non-operative approaches in more stable people. If the person's vital signs are stable without indication for immediate surgery, imaging is done to determine the extent of injury. Ultrasound (FAST) can help identify intra-abdominal bleeding, and X-rays can help determine bullet trajectory and fragmentation. However, the best and preferred mode of imaging is high-resolution multi-detector CT (MDCT) with IV, oral, and sometimes rectal contrast. Severity of injury found on imaging will determine whether the surgeon takes an operative or close observational approach.
Diagnostic peritoneal lavage (DPL) has become largely obsolete with the advances in MDCT, with use limited to centers without access to CT to guide requirement for urgent transfer for operation.
Extremities
The four main components of extremities are bones, vessels, nerves, and soft tissues. Gunshot wounds can thus cause severe bleeding, fractures, nerve deficits, and soft tissue damage. The Mangled Extremity Severity Score (MESS) is used to classify the severity of injury and evaluates for severity of skeletal and/or soft tissue injury, limb ischemia, shock, and age. Depending on the extent of injury, management can range from superficial wound care to limb amputation.
Vital sign stability and vascular assessment are the most important determinants of management in extremity injuries. As with other traumatic cases, those with uncontrolled bleeding require immediate surgical intervention. If surgical intervention is not readily available and direct pressure is insufficient to control bleeding, tourniquets or direct clamping of visible vessels may be used temporarily to slow active bleeding. People with hard signs of vascular injury also require immediate surgical intervention. Hard signs include active bleeding, expanding or pulsatile hematoma, bruit/thrill, absent distal pulses and signs of extremity ischemia.
For stable people without hard signs of vascular injury, an injured extremity index (IEI) should be calculated by comparing the blood pressure in the injured limb with that in an uninjured limb in order to further evaluate for potential vascular injury. If the IEI or clinical signs are suggestive of vascular injury, the person may undergo surgery or receive further imaging including CT angiography or conventional arteriography.
In addition to vascular management, people must be evaluated for bone, soft tissue, and nerve injury. Plain films can be used for fractures alongside CTs for soft tissue assessment. Fractures must be debrided and stabilized, nerves repaired when possible, and soft tissue debrided and covered. This process can often require multiple procedures over time depending on the severity of injury.
Epidemiology
In 2015, about a million gunshot wounds occurred from interpersonal violence. Firearms, globally in 2016, resulted in 251,000 deaths up from 209,000 in 1990. Of these deaths 161,000 (64%) were the result of assault, 67,500 (27%) were the result of suicide, and 23,000 were accidents. Firearm related deaths are most common in males between the ages of 20 and 24 years.
The countries with the greatest number of deaths from firearms are Brazil, United States, Mexico, Colombia, Venezuela, Guatemala, Bahamas and South Africa which make up just over half the total. In the United States in 2015, about half of the 44,000 people who died by suicide did so with a gun.
As of 2016, the countries with the highest rates of gun violence per capita were El Salvador, Venezuela, and Guatemala with 40.3, 34.8, and 26.8 violent gun deaths per 100,000 people respectively. The countries with the lowest rates were Singapore, Japan, and South Korea with 0.03, 0.04, and 0.05 violent gun deaths per 100,000 people respectively.
Canada
In 2016, about 893 people died due to gunshot wounds in Canada (2.1 per 100,000). About 80% were suicides, 12% were assaults, and 4% were accidents.
United States
In 2017, there were 39,773 deaths in the United States as a result of gunshot wounds. Of these 60% were suicides, 37% were homicides, 1.4% were by law enforcement, 1.2% were accidents, and 0.9% were from an unknown cause. This is up from 37,200 deaths in 2016 due to a gunshot wound (10.6 per 100,000). With respect to those that pertain to interpersonal violence, it had the 31st highest rate in the world with 3.85 deaths per 100,000 people in 2016. The majority of all homicides and suicides are firearm-related, and the majority of firearm-related deaths are the result of murder and suicide. When sorted by GDP, however, the United States has a much higher violent gun death rate compared to other developed countries, with over 10 times the number of firearms assault deaths than the next four highest GDP countries combined. Gunshot violence is the third most costly cause of injury and the fourth most expensive form of hospitalization in the United States.
History
Until the 1880s, the standard practice for treating a gunshot wound called for physicians to insert their unsterilized fingers into the wound to probe and locate the path of the bullet. Practices such as opening abdominal cavities to repair gunshot wounds, germ theory, and Joseph Lister's technique for antiseptic surgery using diluted carbolic acid had not yet been accepted as standard practice. For example, sixteen doctors attended to President James A. Garfield after he was shot in 1881, and most probed the wound with their fingers or dirty instruments. Historians agree that massive infection was a significant factor in Garfield's death.
At almost the same time, in Tombstone, Arizona Territory, on 13 July 1881, George E. Goodfellow performed the first laparotomy to treat an abdominal gunshot wound. Goodfellow pioneered the use of sterile techniques in treating gunshot wounds, washing the person's wound and his hands with lye soap or whisky, and his patient, unlike the President, recovered. He became America's leading authority on gunshot wounds and is credited as the United States' first civilian trauma surgeon.
Mid-nineteenth-century handguns such as the Colt revolvers used during the American Civil War had muzzle velocities of only around 230 m/s, and their powder and ball predecessors had velocities of 167 m/s or less. Unlike today's high-velocity bullets, nineteenth-century balls produced little or no cavitation and, being slower moving, they were liable to lodge in unusual locations at odds with their trajectory.
Wilhelm Röntgen's discovery of X-rays in 1895 led to the use of radiographs to locate bullets in wounded soldiers.
Survival rates for gunshot wounds improved among US military personnel during the Korean and Vietnam Wars, due in part to helicopter evacuation, along with improvements in resuscitation and battlefield medicine. Similar improvements were seen in US trauma practices during the Iraq War. Military health care providers who return to civilian practice sometimes disseminate military trauma care practices. One such practice is to transfer major trauma cases to an operating theater as soon as possible, to stop internal bleeding. Within the United States, the survival rate for gunshot wounds has increased, leading to declines in the gun death rate in states that have stable rates of gunshot hospitalizations.
See also
Blast injury, an injury that may present similar dangers to a gunshot wound.
Bullet hit squib, a special effect used in the film industry to portray a gunshot wound.
Stab wound, an equivalent penetrating injury caused by a bladed weapon or any other sharp objects.
References
External links
Virtual Autopsy – CT scans of fatal gunshot wounds
Patient.info
Medical emergencies
Causes of death
Injuries
Ballistics
Wikipedia medicine articles ready to translate
Gun violence | Gunshot wound | [
"Physics"
] | 5,180 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
10,727,311 | https://en.wikipedia.org/wiki/Precapillary%20sphincter | A precapillary sphincter is a band of contractile mural cells either classified as smooth muscle or pericytes that adjusts blood flow into capillaries. They were originally described in the mesenteric microcirculation, and were thought to only reside there. At the point where each of the capillaries originates from an arteriole, contractile mural cells encircle the capillary. This is called the precapillary sphincter. The precapillary sphincter has now also been found in the brain, where it regulates blood flow to the capillary bed. The sphincter can open and close the entrance to the capillary, by which contraction causes blood flow in a capillary to change as vasomotion occurs. In some tissues, the entire capillary bed may be bypassed by blood flow through arteriovenous anastomoses or through preferential flow through metarterioles. If the sphincter is damaged or cannot contract, blood can flow into the capillary bed at high pressures. When capillary pressures are high (as per gravity, etc.), fluid passes out of the capillaries into the interstitial space, and edema or fluid swelling is the result.
Dispute over concept
Precapillary sphincters and metarterioles were discovered in the mesenteric circulation in the 1950s. Medical and physiological textbooks, such as those by Guyton, Boron, and Fulton, were quick to claim the existence of these sphincters and metarterioles all over the body, despite a lack of evidence. At least since 1976 there has been considerable debate about the existence of precapillary sphincters and metarterioles. In 2020, precapillary sphincters were identified as a mechanism for controlling cerebral blood flow.
References
Further reading
Angiology
Cardiovascular system anatomy
Circulatory system | Precapillary sphincter | [
"Biology"
] | 403 | [
"Organ systems",
"Circulatory system"
] |
10,727,315 | https://en.wikipedia.org/wiki/NGC%207537 | NGC 7537 is a spiral galaxy located in the equatorial constellation of Pisces, about 1.5° to the NNW of Gamma Piscium. It was first documented by German-born astronomer William Herschel on Aug 30, 1785. J. L. E. Dreyer described it as, "very faint, considerably small, round, brighter middle, southwestern of 2". This galaxy lies at a distance of approximately from the Milky Way, and is a member of the Pegasus I cluster.
This object forms a pair with the nearly edge-on barred spiral galaxy NGC 7541, and the two show signs of interaction. NGC 7537 has a curved tidal tail to the northeast with a length of , while NGC 7541 has two tidal tails. They have a projected separation of .
A Type II supernova designated SN 2002gd was detected by multiple independent observers beginning October 5, 2002. It was positioned east and north of the galactic nucleus of NGC 7537.
References
External links
Unbarred spiral galaxies
Pisces (constellation)
7537
12442
70786 | NGC 7537 | [
"Astronomy"
] | 222 | [
"Pisces (constellation)",
"Constellations"
] |
10,727,643 | https://en.wikipedia.org/wiki/Gelato%20Federation | The Gelato Federation (usually just Gelato) was a "global technical community dedicated to advancing Linux on the Intel Itanium platform through collaboration, education, and leadership." Formed in 2001, membership included more than seventy academic and research organizations around the world, including several that operated Itanium-based supercomputers on the Top500 list. The organization was active in projects to enhance the Linux kernel for Itanium and GCC for Itanium. The organization took its name from the Italian dessert gelato, paying homage to this by naming sub-projects Gelato Vanilla and Gelato Coconut for varieties of the dessert.
History
In late 2001, representatives from seven organizations met with Hewlett-Packard. The institutions were the Bioinformatics Institute, Singapore; Groupe ESIEE, France; Hewlett-Packard Company; National Center for Supercomputing Applications, USA; Tsinghua University, China; University of Illinois at Urbana-Champaign, USA; University of New South Wales, Australia; and University of Waterloo, Canada. These were the founding members of Gelato.
Representatives from these organizations met twice a year. The first few meetings (in Palo Alto, California 2001 and Paris 2002) were primarily a "strategy council meeting" where the by-laws and charter were hammered out.
The Sydney meeting in October 2002 was the first that included a day of technical presentations. These became a regular feature of the meetings, eventually expanded to conferences, and thus the two conferences each year were entirely composed of technical presentations by vendors and members.
The organization apparently ceased operation in 2009. The Itanium processor was discontinued by Intel in 2021.
Membership
The federation grew markedly after its inception. By April 2007, there were more than 70 members and sponsors around the world. Members were institutions, but there were a few individuals who, because of their contribution to IA-64 on Linux or to Gelato, were made Honorary Members. These included Clemens C. J. Roothaan (who contributed to the Itanium math libraries and floating point unit), Brian Lynn (the original HP representative), David Mosberger-Tang (original porter of Linux to IA-64) and Jean-Pol Taffin (ex-general secretary of ESIEE, and very influential in the early days of Gelato).
Institutional members were sponsored by an IA-64 vendor, or came in on their own. Sponsored members typically focused on specific projects.
Conferences
The Gelato ICE: Itanium Conference & Expo alternated between San Jose, California and somewhere else in the world, often in Southeast Asia or Europe. Gelato conferences were where most of the collaboration and cooperation between members were established, and where Intel revealed some of their future strategy for the Itanium-based platform. The last conference was held in Singapore in October 2007.
Other activities
Apart from the Members' activities, Gelato funded a Central Operations (hosted at the University of Illinois at Urbana-Champaign). Central Operations, in addition to running the twice-a-year meetings, tried to coordinate and manage a number of projects. These included:
Gelato GCC on Itanium Workgroup, a group of members and sponsors of the Gelato Federation and the GCC community interested in improving GCC on Itanium processors.
Vanilla, a concerted effort to port and tune software for Itanium, providing both tuned binaries and documentation of the tuning process.
Coconut, a system of access to Itanium machines for members.
The Gelato System Grant program, which provided Itanium systems for members.
Sponsors
Gelato was funded by HP, Intel, BP, Itanium Solutions Alliance, and SGI. Gelato Central Operations was housed at the Coordinated Science Lab at the University of Illinois.
See also
Linaro, a similar project for the ARM architecture
References
Computer science organizations
Information technology organizations
International organizations based in the United States
Very long instruction word computing | Gelato Federation | [
"Technology"
] | 802 | [
"Computer science",
"Information technology organizations",
"Information technology",
"Computer science organizations"
] |
10,727,745 | https://en.wikipedia.org/wiki/NGC%203195 | NGC 3195 (also known as Caldwell 109) is a planetary nebula located in the southern constellation of Chamaeleon. Discovered by Sir John Herschel in 1835, this 11.6 apparent magnitude planetary nebula is slightly oval in shape, with dimensions of 40×35 arc seconds, and can be seen visually in telescopic apertures of at low magnifications.
Spectroscopy reveals that NGC 3195 is approaching Earth at , while the nebulosity is expanding at around . The central star is listed as >15.3V or 16.1B magnitude. An analysis of Gaia data suggests that the central star is a binary system. Distance is estimated at 1.7 kpc.
References
External links
The Hubble European Space Agency Information Centre: pictures and information on NGC 3195
Planetary nebulae
3195
109b
Chamaeleon
18350212
Discoveries by John Herschel | NGC 3195 | [
"Astronomy"
] | 182 | [
"Chamaeleon",
"Constellations"
] |
10,728,972 | https://en.wikipedia.org/wiki/Philanthropreneur | A philanthropreneur, also known as a philanthro-capitalist, is an entrepreneur who applies business approaches to philanthropy; the term is a portmanteau of entrepreneur and philanthropy. The Wall Street Journal used the term in a 1999 article, while a publication entitled The Philanthropreneur Newsletter existed as far back as 1997. Philanthropreneurship is often considered the start of a new era in philanthropy, characterized by the development of the philanthropist's role and the integration of business practices.
The core objective of philanthropreneurship is to increase the philanthropic impact of non-profit organizations through the use of corporations. Traditionally, non-profit organizations solely depended on donations, grants, or other forms of charitable giving. Philanthropreneurship differs by investing rather than donating; there is an expectation of financial profit on top of the social profit traditionally associated with non-profit organizations. Philanthropreneurs aim to achieve social change that is supposed to be both profitable and sustainable.
Description
Philanthropreneurs are interested in effecting positive change in the world and doing so whilst making a profit. Philanthropreneurs are often "driven to do good and have their profit, too," as Stephanie Strom wrote in an article for the New York Times.
Theoretical Framework of Philanthropreneurship
As an emerging field, there is no defined operating model or strategic approach. Still, philanthropreneurship marks the transition from a grant and donation model to a profit model with predefined objectives and constant focus on quantifiable results. This form of “commercial giving” demands measurable return, which is why opportunities are assessed and evaluated according to different criteria. Factors such as profitability and quantifiable performance are fixed requirements for granting support. The shift towards more business-minded professional management has also resulted in a greater focus on long-term goals.
The application of entrepreneurial practices in philanthropy drives the impact of connected non-profit organizations through strategic funding. Traditional philanthropy encouraged the promotion of social welfare exclusively through charitable giving. Philanthropreneurship differs from the traditional non-profit organization setup by prioritizing revenue-generating strategies over donations and social impact. In philanthropreneurship, prosperous ventures require the establishment of recurring income as a means of avoiding depletion of funds and ultimately preventing the organization's dissolution.
Philanthropic buying has a limited reach, which is why philanthropreneurs do not dispose of surplus funds, but tailor investments by actively leveraging their class advantages like wealth, time, business expertise, networks, and reputation. Philanthropreneurship is measured in impact, sustainability and scalability.
Philanthropreneurs include Bill and Melinda Gates, Steve Case, Pierre Omidyar and Bill Clinton. Philanthropreneurship is now supported by emerging new business models and legislation including low-profit limited liability companies (L3Cs), created by a tax attorney experienced in entrepreneurial finance named Marc J. Lane.
Controversies
Non-profit organizations have historically found it challenging to trust and accept the concept of "philanthro-capitalism". Critics note that many metrics of the commercial sectors, such as return on investment, lack applicability to non-profit organizations. Moreover, the inclusion of commercial and enterprise strategies has generated concerns about maintaining the institution's culture and ideology. A particular concern is the risk that the organization's focus will shift away from the social mission and instead towards satisfying the need for profit.
The performance assessment of philanthropreneurial ventures remains an area of concern for many, as there is no precise measurement for social impact. For example, in "impact investing", a core practice of philanthropreneurship, project selection for funding is based on estimated social impact and financial return. From an ethical context, many critics argue that the incorporation of a business model commercializes the nonprofit sector and further increases the risk of distorting the organization's mission and principles, alienating the very people it would help.
Conversely, many supporters point out that traditional philanthropy alone cannot sustain social initiatives because of the shortage in sources of funding. In philanthropreneurship, a dependency on traditional fundraising has been a strong predictor of failure, which is why the need to diversify income sources was introduced through the concept of philanthro-capitalism.
Practitioners of philanthropreneurship
Amr Al-Dabbagh
Steve Case
Bill Clinton
Bill and Melinda Gates Foundation
Pierre Omidyar
See also
Social entrepreneurship
Mutual Aid
Capitalism
Social Work
Impact Investing
References
External links
Fortune article examining entrepreneurial practices for non-profits
Guardian article on philanthropreneurship
New York Times article examining philanthropreneurs
Book on business solutions to the world's top social problems
"Philanthropreneuring for the Rest of Us"
Philanthropy
Business models | Philanthropreneur | [
"Biology"
] | 1,010 | [
"Philanthropy",
"Behavior",
"Altruism"
] |
10,729,341 | https://en.wikipedia.org/wiki/Eosinophilic%20gastroenteritis | Eosinophilic gastroenteritis (EG or EGE), also known as eosinophilic enteritis, is a rare and heterogeneous condition characterized by patchy or diffuse eosinophilic infiltration of gastrointestinal (GI) tissue, first described by Kaijser in 1937. Presentation varies with the location, depth, and extent of bowel wall involvement, and the disease usually runs a chronic relapsing course. It can be classified into mucosal, muscular and serosal types based on the depth of involvement. Any part of the GI tract can be affected, and isolated biliary tract involvement has also been reported.
The stomach is the organ most commonly affected, followed by the small intestine and the colon.
Signs and symptoms
EG typically presents with a combination of chronic nonspecific GI symptoms which include abdominal pain, diarrhea, occasional nausea and vomiting, weight loss and abdominal distension. Approximately 80% have symptoms for several years; a high degree of clinical suspicion is often required to establish the diagnosis, as the disease is extremely rare. The disease does not appear suddenly; it typically takes about 3–4 years to develop, depending on the age of the patient. Occasionally, the disease may manifest itself as an acute abdomen or bowel obstruction.
Mucosal EG (25–100%) is the most common variety, which presents with features of malabsorption and protein losing enteropathy. Failure to thrive and anaemia may also be present. Lower gastrointestinal bleeding may imply colonic involvement.
Muscular EG (13–70%) present with obstruction of gastric outlet or small intestine; sometimes as an obstructing caecal mass or intussusception.
Subserosal EG (4.5% to 9% in Japan and 13% in the US) presents with ascites which is usually exudative in nature, abundant peripheral eosinophilia, and has favourable responses to corticosteroids.
Other documented features are cholangitis, pancreatitis, eosinophilic splenitis, acute appendicitis and giant refractory duodenal ulcer.
Pathophysiology
Peripheral blood eosinophilia and elevated serum IgE are usual but not universal. The damage to the gastrointestinal tract wall is caused by eosinophilic infiltration and degranulation.
As part of the host defense mechanism, eosinophils are normally present in the gastrointestinal mucosa, though their presence in deeper tissue is almost always pathologic. What triggers such dense infiltration in EG is not clear. It is possible that different pathogenetic mechanisms are involved in different subgroups of patients. Food allergy and a variable IgE response to food substances have been observed in some patients, which implies a role for a hypersensitivity response in pathogenesis. Many patients indeed have a history of other atopic conditions such as eczema and asthma.
Eosinophil recruitment into inflammatory tissue is a complex process, regulated by a number of inflammatory cytokines. In EG, the cytokines IL-3, IL-5 and granulocyte macrophage colony stimulating factor (GM-CSF) may drive this recruitment and activation. They have been observed immunohistochemically in the diseased intestinal wall.
In addition eotaxin has been shown to have an integral role in regulating the homing of eosinophils into the lamina propria of stomach and small intestine.
In the allergic subtype of disease, it is thought that food allergens cross the intestinal mucosa and trigger an inflammatory response that includes mast cell degranulation and recruitment of eosinophils.
Diagnosis
Talley et al. suggested three diagnostic criteria which are still widely used:
the presence of gastrointestinal symptoms,
histological demonstration of eosinophilic infiltration in one or more areas of the gastrointestinal tract or presence of high eosinophil count in ascitic fluid (latter usually indicates subserosal variety),
no evidence of parasitic or extraintestinal disease.
Hypereosinophilia, the hallmark of allergic response, may be absent in up to 20% of patients, but hypoalbuminaemia and other abnormalities suggestive of malabsorption may be present. CT scans may show nodular and irregular thickening of the folds in the distal stomach and proximal small bowel, but these findings can also be present in other conditions like Crohn's disease and lymphoma.
The endoscopic appearance in eosinophilic gastroenteritis is nonspecific; it includes erythematous, friable, nodular, and occasional ulcerative changes.
Sometimes diffuse inflammation results in complete loss of villi, involvement of multiple layers, submucosal oedema and fibrosis.
Definitive diagnosis involves histological evidence of eosinophilic infiltration in biopsy slides. Microscopy reveals >20 eosinophils per high power field. Infiltration is often patchy, can be missed and laparoscopic full thickness biopsy may be required.
Radio isotope scan using technetium (99mTc) exametazime-labeled leukocyte SPECT may be useful in assessing the extent of disease and response to treatment but has little value in diagnosis, as the scan does not help differentiating EG from other causes of inflammation.
When eosinophilic gastroenteritis is observed in association with eosinophilic infiltration of other organ systems, the diagnosis of idiopathic hypereosinophilic syndrome should be considered.
Management
Corticosteroids are the mainstay of therapy with a 90% response rate in some studies. Appropriate duration of steroid treatment is unknown and relapse often necessitates long term treatment. Various steroid sparing agents e.g. sodium cromoglycate (a stabilizer of mast cell membranes), ketotifen (an antihistamine), and montelukast (a selective, competitive leukotriene receptor antagonist) have been proposed, centering on an allergic hypothesis, with mixed results. Oral budesonide (an oral steroid) can be useful in treatment, as well. An elimination diet may be successful if a limited number of food allergies are identified. An elemental diet may also be successful in the treatment of children.
In a randomized clinical trial, lirentelimab was found to improve eosinophil counts and symptoms in individuals with eosinophilic gastritis and duodenitis.
Epidemiology
Epidemiology may differ between studies, as the number of cases is small, with approximately 300 EG cases reported in published literature.
EG can present at any age and across all races, with a slightly higher incidence in males. Earlier studies showed higher incidence in the third to fifth decades of life.
Other gastrointestinal conditions associated with allergy
Eosinophilic esophagitis
Eosinophilic ascites
Coeliac disease
Protein losing enteropathy from intolerance to cow's milk protein
Infantile formula protein intolerance
See also
Aeroallergen
Allergy
Gastroenteritis
Malabsorption
References
External links
Gastrointestinal tract disorders
Histopathology
Immune system disorders | Eosinophilic gastroenteritis | [
"Chemistry"
] | 1,555 | [
"Histopathology",
"Microscopy"
] |
10,730,216 | https://en.wikipedia.org/wiki/Calcium%20hexaboride | Calcium hexaboride (sometimes calcium boride) is a compound of calcium and boron with the chemical formula CaB6. It is an important material due to its high electrical conductivity, hardness, chemical stability, and melting point. It is a black, lustrous, chemically inert powder with a low density. It has the cubic structure typical for metal hexaborides, with octahedral units of 6 boron atoms combined with calcium atoms. CaB6 and lanthanum-doped CaB6 both show weak ferromagnetic properties, which is remarkable because neither calcium nor boron is magnetic, nor do they have the inner 3d or 4f electronic shells usually required for ferromagnetism.
Properties
CaB6 has been investigated in the past due to a variety of peculiar physical properties, such as superconductivity, valence fluctuation and Kondo effects. However, the most remarkable property of CaB6 is its ferromagnetism. It occurs at an unexpectedly high temperature (600 K) and with a low magnetic moment (below 0.07 μB per atom). The origin of this high-temperature ferromagnetism has been variously attributed to a ferromagnetic phase of a dilute electron gas, to a linkage with the presumed excitonic state in calcium boride, or to external impurities on the surface of the sample. The impurities might include iron and nickel, probably coming from impurities in the boron used to prepare the sample.
CaB6 is insoluble in H2O, MeOH (methanol), and EtOH (ethanol) and dissolves slowly in acids. Its microhardness is 27 GPa, Knoop hardness is 2600 kg/mm2, Young's modulus is 379 GPa, and electrical resistivity is greater than 2·10^10 Ω·m for pure crystals. CaB6 is a semiconductor with an energy gap estimated as 1.0 eV. The low, semi-metallic conductivity of many CaB6 samples can be explained by unintentional doping due to impurities and possible non-stoichiometry.
Structural information
The crystal structure of calcium hexaboride is a cubic lattice with calcium at the cell centre and compact, regular octahedra of boron atoms linked at the vertices by B–B bonds to give a three-dimensional boron network. Each calcium has 24 nearest-neighbor boron atoms. The calcium atoms are arranged in simple cubic packing so that there are holes between groups of eight calcium atoms situated at the vertices of a cube. The simple cubic structure is expanded by the introduction of the octahedral B6 groups, and the structure is a CsCl-like packing of the calcium atoms and hexaboride groups. Another way of describing calcium hexaboride is as a metal cation and B6^2− octahedral polymeric anions in a CsCl-type structure, where the calcium atoms occupy the Cs sites and the B6 octahedra occupy the Cl sites. The Ca–B bond length is 3.05 Å and the B–B bond length is 1.7 Å.
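A minimal numerical sketch of this cell (added here for illustration; the lattice constant and the boron offset below are approximate, literature-style assumptions, not values from this article) places Ca at the cell centre and a B6 octahedron on the corner site, then checks that the quoted bond lengths come out roughly right:

```python
import itertools
import math

# Illustrative CaB6 cell: CsCl-like cubic cell, Ca at the centre, B6 octahedron
# on the corner site. Lattice constant a and boron offset d are assumed values.
a = 4.15   # angstrom, approximate lattice constant (assumption)
d = 0.30   # approximate fractional offset of each B atom from the octahedron centre

ca = (0.5, 0.5, 0.5)
borons = []
for axis in range(3):
    for sign in (d, -d):
        pos = [0.0, 0.0, 0.0]
        pos[axis] = sign % 1.0          # wrap -d to 1-d so it stays inside the cell
        borons.append(tuple(pos))

def distance(p, q):
    """Minimum-image distance between fractional coordinates in a cubic cell of edge a."""
    return a * math.sqrt(sum(min(abs(pi - qi), 1.0 - abs(pi - qi)) ** 2
                             for pi, qi in zip(p, q)))

shortest_bb = min(distance(p, q) for p, q in itertools.combinations(borons, 2))
shortest_cab = min(distance(ca, b) for b in borons)
print(f"shortest B-B  distance: {shortest_bb:.2f} angstrom")   # ~1.7, as quoted above
print(f"shortest Ca-B distance: {shortest_cab:.2f} angstrom")  # ~3.05, as quoted above
```

With these assumed values the two printed distances land close to the 1.7 Å and 3.05 Å figures quoted above, which is the only point of the sketch.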
43Ca NMR data contain a peak at δpeak = −56.0 ppm and δiso = −41.3 ppm, where δiso is taken as the peak maximum + 0.85 × width; the negative shift is due to the high coordination number.
Raman Data: Calcium hexaboride has three Raman peaks at 754.3, 1121.8, and 1246.9 cm−1 due to the active modes A1g, Eg, and T2g respectively.
Observed vibrational frequencies (cm−1): 1270 (strong) from the A1g stretch; 1154 (medium) and 1125 (shoulder) from the Eg stretch; 526, 520, 485, and 470 from the F1g rotation; 775 (strong) and 762 (shoulder) from the F2g bend; 1125 (strong) and 1095 (weak) from the F1u bend; 330 and 250 from the F1u translation; and 880 (medium) and 779 from the F2u bend.
Preparation
One of the main reactions for industrial production is:
CaO + 3 B2O3 + 10 Mg → CaB6 + 10 MgO
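As a rough, hedged illustration of the mass balance this magnesiothermic reaction implies (the molar masses are standard rounded values; the script itself is not taken from any cited source):

```python
# Mass balance for: CaO + 3 B2O3 + 10 Mg -> CaB6 + 10 MgO
atomic_mass = {"Ca": 40.08, "B": 10.81, "O": 16.00, "Mg": 24.31}  # g/mol, rounded

m_CaO  = atomic_mass["Ca"] + atomic_mass["O"]
m_B2O3 = 2 * atomic_mass["B"] + 3 * atomic_mass["O"]
m_Mg   = atomic_mass["Mg"]
m_CaB6 = atomic_mass["Ca"] + 6 * atomic_mass["B"]

target_kg_CaB6 = 1.0
mol_CaB6 = target_kg_CaB6 * 1000.0 / m_CaB6

# Stoichiometric coefficients per mole of CaB6 produced: 1 CaO : 3 B2O3 : 10 Mg
for reagent, molar_mass, coeff in (("CaO", m_CaO, 1), ("B2O3", m_B2O3, 3), ("Mg", m_Mg, 10)):
    mass_kg = mol_CaB6 * coeff * molar_mass / 1000.0
    print(f"{reagent:5s}: {mass_kg:.2f} kg per {target_kg_CaB6:.0f} kg of CaB6")
```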
Other methods of producing CaB6 powder include:
Direct reaction of calcium or calcium oxide and boron at 1000 °C;
Ca + 6B → CaB6
Reacting Ca(OH)2 with boron in vacuum at about 1700 °C (carbothermal reduction);
Ca(OH)2 +7B → CaB6 + BO(g) + H2O(g)
Reacting calcium carbonate with boron carbide in vacuum at above 1400 °C (carbothermal reduction)
Reacting of CaO and H3BO3 and Mg to 1100 °C.
Low-temperature (500 °C) synthesis
CaCl2 + 6NaBH4 → CaB6 + 2NaCl + 12H2 + 4Na
results in relatively poor quality material.
To produce pure CaB6 single crystals, e.g., for use as cathode material, the thus obtained CaB6 powder is further recrystallized and purified with the zone melting technique. The typical growth rate is 30 cm/h and crystal size ~1x10 cm.
Single-crystal CaB6 Nanowires (diameter 15–40 nm, length 1–10 micrometres) can be obtained by pyrolysis of diborane (B2H6) over calcium oxide (CaO) powders at 860–900 °C, in presence of Ni catalyst.
Uses
Calcium hexaboride is used in the manufacturing of boron-alloyed steel and as a deoxidation agent in production of oxygen-free copper. The latter results in higher conductivity than conventionally phosphorus-deoxidized copper owing to the low solubility of boron in copper. CaB6 can also serve as a high temperature material, surface protection, abrasives, tools, and wear resistant material.
CaB6 is highly conductive, has low work function, and thus can be used as a hot cathode material. When used at elevated temperature, calcium hexaboride will oxidize degrading its properties and shortening its usable lifespan.
CaB6 is also a promising candidate for n-type thermoelectric materials, because its power factor is larger than or comparable to that of common thermoelectric materials Bi2Te3 and PbTe.
CaB6 can also be used as an antioxidant in carbon bonded refractories.
Precautions
Calcium hexaboride is irritating to the eyes, skin, and respiratory system. It should be handled with proper protective eyewear and clothing. Calcium hexaboride should never be put down the drain or have water added to it.
See also
Boride
Calcium
References
Further reading
Borides
Calcium compounds
Deoxidizers
Non-stoichiometric compounds
Ferromagnetic materials | Calcium hexaboride | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,433 | [
"Non-stoichiometric compounds",
"Deoxidizers",
"Ferromagnetic materials",
"Metallurgy",
"Materials",
"Matter"
] |
10,730,229 | https://en.wikipedia.org/wiki/Antistatic%20bag | An antistatic bag is a bag used for storing electronic components, which are prone to damage caused by electrostatic discharge (ESD).
These bags are usually plastic polyethylene terephthalate (PET) and have a distinctive color (silvery for metallised film, pink or black for polyethylene). The polyethylene variant may also take the form of foam or bubble wrap, either as sheets or bags. Multiple layers of protection are often used to protect from both mechanical damage and electrostatic damage. A protected device can be packaged inside a metalized PET film bag, inside a pink polyethylene bubble-wrap bag, which is finally packed inside a rigid black polyethylene box lined with pink poly foam. It is important that the bags only be opened at static-free workstations.
Dissipative antistatic bags, as the name suggests, are made of standard polyethylene with a static dissipative coating or layer on the plastic. This prevents buildup of a static charge on the surface of the bag, as it dissipates the charge to ground (i.e., whatever other surface it is touching). This bridge to ground is achieved with the inclusion of a tallow amine on the bag's surface, which attracts moisture that can conduct the charge to another surface or to the atmosphere itself. In this sense, this type is truly 'antistatic' in that it hinders the formation of static charges. It is not, however, resistant to electrostatic discharge; if something else with a charge touches the bag (such as a person's hand), its charge would easily transfer through the bag and its contents. These bags are usually pink or red in color because of the dissipative chemical layer. Black bags also exist, wherein the polyethylene is manufactured containing trace amounts of carbon, forming a partial shield, though not a complete one.
Conductive antistatic bags are manufactured with a layer of conductive metal, often aluminum, and a dielectric layer of plastic covered in a static dissipative coating. This forms both a shield and a non-conductive barrier, shielding the contents from static charge via the Faraday cage effect. These bags are preferred for more sensitive parts, but they also see use in environments where sparks would be hazardous, such as oxygen-rich areas in aircraft and hospitals. Metalized bags are more fragile than their nonmetal counterparts, however, as any puncture compromises the integrity of the shield. In addition, they have a limited shelf life, as the metal substrate can deteriorate over time. These bags are often gray or silver owing to the metal layer, while still being transparent to some degree.
Foam also exists in both pink (dissipative) and black (conductive) varieties, used for storing individual leaded components by piercing the leads into the foam.
See also
ESD materials
Antistatic device
Antistatic garments
Antistatic agent
Antistatic mat
Antistatic wrist strap
Electromagnetic shielding
Electrostatic sensitive device
Velostat
References
Further reading
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009,
ANSI/ESD S541-2019: Standard for Packaging ESD Susceptible Items
ANSI/ESD STM11.31-2018. Standard test method provides a method for testing and determining the shielding capabilities of electrostatic shielding bags
ANSI/ESD STM11.11-2022. Standard for Protection of Electrostatic Discharge Susceptible Items - Surface Resistance Measurement
Electrostatics
Digital electronics
Bags | Antistatic bag | [
"Engineering"
] | 735 | [
"Electronic engineering",
"Digital electronics"
] |
7,088,907 | https://en.wikipedia.org/wiki/Concrete%20pump | A concrete pump is a machine used for transferring liquid concrete by pumping. There are different types of concrete pumps.
A common type of concrete pump for large scale construction projects is known as a boom concrete pump, because it uses a remote-controlled articulating robotic arm (called a boom) to place concrete accurately. It is attached to a truck or a semi-trailer. Boom pumps are capable of pumping at very high volumes and are less labor intensive to operate when compared to line or other types of concrete pumps.
The second main type of concrete pump, commonly referred to as a "line pump" or trailer-mounted concrete pump, is either mounted on a truck or placed on a trailer.
This pump requires steel or flexible concrete placing hoses to be manually attached to the outlet of the machine and feed the concrete to the place of application. The length of the hoses varies; typical hose lengths are , depending on the diameter. Due to their lower pump volume, line pumps are used for smaller volume concrete placing applications such as swimming pools, sidewalks, single family home concrete slabs and most ground slabs.
There are also skid mounted and rail mounted concrete pumps, but these are uncommon and only used on specialized jobsites such as mines and tunnels.
History
Until the early 20th century, concrete was mixed on the job site and transported from the cement mixer to the formwork, either in wheelbarrows or in buckets lifted by cranes. This required a lot of time and labor. In 1927, the German engineers Max Giese and Fritz Hull came upon the idea of pumping concrete through pipes. They pumped concrete to a height of and a distance of . Shortly after, a concrete pump was patented in the Netherlands in 1932 by Jacobus Cornelius Kooijman. This patent incorporated the developer's previous German patent.
Mechanism
Concrete pump designers face many challenges because concrete is heavy, viscous, abrasive, contains pieces of hard rock, and solidifies if not kept moving.
Usually, piston pumps are used, because they can produce hundreds of atmospheres of pressure. Such piston-style pumps can push cylinders of heterogenous concrete mixes (aggregate + cement). At present, double-piston pumps are predominantly used, which are hydraulically driven by electric or diesel engines using oil pumps. The pressure pistons are hydraulically connected to each other through the drive cylinders and operate in a two-stroke mode.
For lower pressures peristaltic pumps are common.
How it works
As the delivery piston of one pressure cylinder retracts, it creates a vacuum and the medium is drawn from the feed funnel into the cylinder. At the same time, the advancing delivery piston pushes the contents of the other delivery cylinder through the transfer tube into the delivery line. At the end of the stroke, the pump switches over: the transfer tube turns to face the other, now filled, pressure cylinder, and the pressure pistons reverse their direction of movement.
Concrete pump drives are now exclusively hydraulic, so control options vary between individual manufacturers. Each system has certain advantages and disadvantages.
Important performance factors are:
discharge pressure
machine weight
price
system complexity
For these reasons, many options have existed side by side for a long time. Nowadays, fluid pressures of up to and flow rates of up to can be achieved, while using piston-type pumps.
Example of pump performance
To illustrate, below are data for a typical concrete pump, the BRF 42.14 H (a rough throughput estimate based on figures like these is sketched after the list):
Vertical reach of boom:
Horizontal reach of boom:
Pumping rate:
Concrete pressure:
Cylinder length:
Cylinder diameter:
Number of substitutions of strokes per minute: 27
Number of outrigger legs: 4
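A rough throughput estimate for a twin-cylinder piston pump can be sketched from figures like those above. The cylinder dimensions and filling efficiency below are illustrative assumptions, not the pump's actual specifications, so the result is only an order-of-magnitude check:

```python
import math

# Assumed, illustrative cylinder geometry -- not the BRF 42.14 H's actual figures.
cylinder_diameter_m = 0.23       # bore (assumption)
cylinder_stroke_m = 2.1          # stroke length (assumption)
stroke_changeovers_per_min = 27  # from the example data above
filling_efficiency = 0.9         # cylinders rarely fill completely (assumption)

swept_volume_m3 = math.pi / 4 * cylinder_diameter_m ** 2 * cylinder_stroke_m
# Each changeover pushes roughly one cylinder volume into the delivery line.
theoretical_m3_per_h = swept_volume_m3 * stroke_changeovers_per_min * 60
practical_m3_per_h = theoretical_m3_per_h * filling_efficiency

print(f"Swept volume per stroke: {swept_volume_m3:.3f} m^3")
print(f"Theoretical output:      {theoretical_m3_per_h:.0f} m^3/h")
print(f"With filling losses:     {practical_m3_per_h:.0f} m^3/h")
```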
Gallery
See also
High-density solids pump - concrete pump technology in general
Concrete mixing transport trucks
References
Concrete
Construction equipment
Articles containing video clips
German inventions
1928 in Germany
1928 in science | Concrete pump | [
"Engineering"
] | 770 | [
"Structural engineering",
"Construction equipment",
"Construction",
"Concrete",
"Industrial machinery"
] |
7,088,921 | https://en.wikipedia.org/wiki/Abel%27s%20test | In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel, who proved it in 1826. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters.
Abel's test in real analysis
Suppose the following statements are true:
Σ a_n is a convergent series,
(b_n) is a monotone sequence, and
(b_n) is bounded.
Then Σ a_n b_n is also convergent.
It is important to understand that this test is mainly pertinent and useful in the context of series that are not absolutely convergent. For absolutely convergent series, this theorem, albeit true, is almost self-evident.
This theorem can be proved directly using summation by parts.
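As a purely illustrative numerical check (an addition, not part of the original statement), take a_n = (−1)^(n+1)/n, whose series converges, and b_n = 1 + 1/n, which is monotone and bounded; the partial sums of Σ a_n b_n then settle toward a limit (here ln 2 + π²/12), as the test guarantees:

```python
import math

def partial_sum(N):
    """Partial sum of sum_{n=1}^{N} a_n * b_n with a_n = (-1)^(n+1)/n and b_n = 1 + 1/n."""
    total = 0.0
    for n in range(1, N + 1):
        a_n = (-1) ** (n + 1) / n
        b_n = 1 + 1 / n
        total += a_n * b_n
    return total

# The limit splits as sum a_n + sum a_n/n = ln 2 + pi^2/12.
limit = math.log(2) + math.pi ** 2 / 12
for N in (10, 100, 1_000, 10_000, 100_000):
    print(f"N = {N:>6}: partial sum = {partial_sum(N):.6f} (limit ~ {limit:.6f})")
```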
Abel's test in complex analysis
A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence of positive real numbers (a_n) is decreasing monotonically (or at least that for all n greater than some natural number m, we have a_{n+1} ≤ a_n), with
lim_{n→∞} a_n = 0,
then the power series
f(z) = Σ_{n≥0} a_n z^n
converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R. Notice that Abel's test is a generalization of the Leibniz Criterion by taking z = −1.
Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each integer n ≥ 0, we define
f_n(z) := Σ_{k=0}^{n} a_k z^k.
By multiplying this function by (1 − z), we obtain
(1 − z) f_n(z) = Σ_{k=0}^{n} a_k (z^k − z^{k+1}) = a_0 − a_n z^{n+1} + Σ_{k=1}^{n} (a_k − a_{k−1}) z^k.
The first summand is constant, the second converges uniformly to zero (since by assumption the sequence (a_n) converges to zero). It only remains to show that the series Σ_{k=1}^{∞} (a_k − a_{k−1}) z^k converges. We will show this by showing that it even converges absolutely:
Σ_{k=1}^{∞} |a_k − a_{k−1}| · |z|^k ≤ Σ_{k=1}^{∞} (a_{k−1} − a_k) = a_0 − lim_{k→∞} a_k,
where the last sum is a converging telescoping sum. The absolute value vanished because the sequence (a_n) is decreasing by assumption.
Hence, the sequence ((1 − z) f_n(z)) converges (even uniformly) on the closed unit disc. If z ≠ 1, we may divide by (1 − z) and obtain the result.
Another way to obtain the result is to apply Dirichlet's test. Indeed, for z ≠ 1 the partial sums of Σ z^k satisfy |Σ_{k=0}^{n} z^k| = |1 − z^{n+1}| / |1 − z| ≤ 2 / |1 − z|, hence the assumptions of Dirichlet's test are fulfilled.
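A small numerical illustration of the statement above (again an addition, with a_n = 1/n chosen for convenience): the partial sums of Σ z^n/n settle for a point z on the unit circle with z ≠ 1, and approach the closed form −log(1 − z):

```python
import cmath

# a_n = 1/n decreases monotonically to 0, so by Abel's test the series
# sum_{n>=1} z^n / n converges for every |z| = 1 with z != 1.
def partial_sum(z, N):
    return sum(z ** n / n for n in range(1, N + 1))

z = cmath.exp(2.0j)          # a point on the unit circle, z != 1
for N in (100, 1_000, 10_000):
    print(N, partial_sum(z, N))

# For comparison, the closed form of the full series on |z| <= 1, z != 1:
print("-log(1 - z) =", -cmath.log(1 - z))
```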
Abel's uniform convergence test
Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions or an improper integration of functions dependent on parameters. It is related to Abel's test for the convergence of an ordinary series of real numbers, and the proof relies on the same technique of summation by parts.
The test is as follows. Let {gn} be a uniformly bounded sequence of real-valued continuous functions on a set E such that gn+1(x) ≤ gn(x) for all x ∈ E and positive integers n, and let {fn} be a sequence of real-valued functions such that the series Σfn(x) converges uniformly on E. Then Σfn(x)gn(x) converges uniformly on E.
Notes
References
Gino Moretti, Functions of a Complex Variable, Prentice-Hall, Inc., 1964
External links
Proof (for real series) at PlanetMath.org
Convergence tests
Articles containing proofs | Abel's test | [
"Mathematics"
] | 781 | [
"Articles containing proofs",
"Theorems in mathematical analysis",
"Convergence tests"
] |
7,089,536 | https://en.wikipedia.org/wiki/Lacuna%20%28histology%29 | In histology, a lacuna is a small space, containing an osteocyte in bone, or chondrocyte in cartilage.
Bone
The lacunae are situated between the lamellae, and consist of a number of oblong spaces. In an ordinary microscopic section, viewed by transmitted light, they appear as fusiform opaque spots. Each lacuna is occupied during life by a branched cell, termed an osteocyte, bone-cell or bone-corpuscle. Lacunae are connected to one another by small canals called canaliculi. A lacuna never contains more than one osteocyte. Sinuses are examples of lacunae.
Cartilage
The cartilage cells or chondrocytes are contained in cavities in the matrix, called cartilage lacunae; around these, the matrix is arranged in concentric lines as if it had been formed in successive portions around the cartilage cells. This constitutes the so-called capsule of the space. Each lacuna is generally occupied by a single cell, but during the division of the cells, it may contain two, four, or eight cells. Lacunae are found between narrow sheets of calcified matrix that are known as lamellae.
See also
Lacunar stroke
References
External links
Bone histology photomicrographs
Musculoskeletal system | Lacuna (histology) | [
"Biology"
] | 287 | [
"Organ systems",
"Musculoskeletal system"
] |
7,089,667 | https://en.wikipedia.org/wiki/Fatty-acid%20amide%20hydrolase%201 | Fatty-acid amide hydrolase 1 (FAAH) is a member of the serine hydrolase family of enzymes. It was first shown to break down anandamide (AEA), an N-acylethanolamine (NAE), in 1993. In humans, it is encoded by the gene FAAH.
Function
FAAH is an integral membrane hydrolase with a single N-terminal transmembrane domain. In vitro, FAAH has esterase and amidase activity. In vivo, FAAH is the principal catabolic enzyme for a class of bioactive lipids called the fatty acid amides (FAAs). Members of the FAAs include:
Anandamide (N-arachidonoylethanolamine), an endocannabinoid
2-arachidonoylglycerol (2-AG), an endocannabinoid.
Other N-acylethanolamines, such as N-oleoylethanolamine and N-palmitoylethanolamine
The sleep-inducing lipid oleamide
The N-acyltaurines, which are agonists of the transient receptor potential (TRP) family of calcium channels.
FAAH knockout mice display highly elevated (>15-fold) levels of N-acylethanolamines and N-acyltaurines in various tissues. Because of their significantly elevated anandamide levels, FAAH KOs have an analgesic phenotype, showing reduced pain sensation in the hot plate test, the formalin test, and the tail flick test. Finally, because of their impaired ability to degrade anandamide, FAAH KOs also display supersensitivity to exogenous anandamide, a cannabinoid receptor (CB) agonist.
Due to the ability of FAAH to regulate nociception, it is currently viewed as an attractive drug target for the treatment of pain.
Studies in cells and animals and genetic studies in humans have shown that inhibiting FAAH may be a useful strategy to treat anxiety disorders, as inhibition produces analgesic, anxiolytic, neuroprotective, and anti-inflammatory effects through elevated N-acylethanolamines (NAEs) and their activation of cannabinoid receptors.
Inhibitors and inactivators
Activation of the cannabinoid receptor CB1 or CB2 in different tissues, including skin, inhibits FAAH and thereby increases endocannabinoid levels.
Based on the hydrolytic mechanism of fatty acid amide hydrolase, a large number of irreversible and reversible inhibitors of this enzyme have been developed.
Some of the more significant compounds are listed below;
AM374, palmitylsulfonyl fluoride, one of the first FAAH inhibitors developed for in vitro use, but too reactive for research in vivo
ARN2508, derivative of flurbiprofen, dual FAAH / COX inhibitor
BIA 10-2474 (Bial-Portela & Ca. SA, Portugal) has been linked to severe adverse events affecting 5 patients in a drug trial in Rennes, France, and at least one death, in January 2016. Many other pharmaceutical companies have previously taken other FAAH inhibitors into clinical trials without reporting such adverse events.
BMS-469908
CAY-10402
JNJ-245
JNJ-1661010
JNJ-28833155
JNJ-40413269
JNJ-42119779
JNJ-42165279 in clinical trials against social anxiety and depression, trials suspended as a precautionary measure following serious adverse event with BIA 10-2474
LY-2183240
Cannabidiol
MK-3168
MK-4409
MM-433593
OL-92
OL-135
PF-622
PF-750
PF-3845
PF-04862853
Redafamdastat (JZP-150; PF-04457845) – "exquisitely selective" for FAAH over other serine hydrolases, but failed in clinical trials against osteoarthritis
RN-450
SA-47
SA-73
SSR-411298 well tolerated in clinical trials but insufficient efficacy against depression, subsequently trialled against cancer pain as an adjunctive treatment.
ST-4068, reversible inhibitor of FAAH
TK-25
URB524
URB597 (KDS-4103, Kadmus Pharmaceuticals) is an irreversible inactivator with a carbamate-based mechanism, and appears in one report as somewhat selective, though it also inactivates other serine hydrolases (e.g., carboxylesterases) in peripheral tissues.
URB694
URB937
VER-156084 (Vernalis)
V-158866 (Vernalis) in clinical trials for neuropathic pain following spinal injury, and spasticity associated with multiple sclerosis. Structure not revealed though Vernalis holds several patents in the area.
Inhibition and binding
Structural and conformational properties that contribute to enzyme inhibition and substrate binding imply an extended bound conformation, and a role for the presence, position, and stereochemistry of a delta cis double bond.
Enhancement of FAAH activity
Insulin medication increases the production and activity of FAAH.
Genetic variants
rs324420
The FAAH gene contains a single nucleotide polymorphism (SNP) called rs324420. The variant allele, C385A, is associated with a higher sensitivity of FAAH to proteolytic degradation and a shorter half-life compared to the standard C variant. As a result, carriers of the A variant have increased N-acylethanolamine (NAE) levels and anandamide (AEA) signaling at the cannabinoid receptors. The A variant may be responsible for lower levels of the FAAH protein seen in high-performing athletes, providing increased physical and mental fitness. However, among elite Polish athletes, the A variant is under-represented regardless of metabolic characteristics of their sport disciplines; this seems to suggest an opposite role for the A variant.
A 2017 study found a strong correlation between national percentage of very happy people (as measured by the World Values Survey) and the presence of the rs324420 C385A allele in citizens' genetic make-up.
The C385A allele was initially provisionally linked to drug abuse and dependence but this was not borne out in subsequent studies. According to later studies, carriers of the A allele are more likely to try cannabis, but less likely to become dependent.
FAAH-OUT microdeletion
FAAH-OUT is a pseudogene downstream of the FAAH coding region. It expresses a long non-coding RNA (lncRNA) that increases the expression of FAAH. In 2019, a Scottish woman named Jo Cameron was found to have both a previously unreported microdeletion mutation in FAAH-OUT and a rs324420 C385A mutation. The result is extreme disruption of FAAH function leading to elevated anandamide levels. She was immune to anxiety, unable to experience fear, and insensitive to pain. The frequent burns and cuts suffered due to her hypoalgesia healed quicker than average with little or no scarring. Her son, who shares the FAAH-OUT deletion but has no C385A mutation, has a lesser degree of pain insensitivity.
A 2023 study looks further into the functions of FAAH-OUT using transcriptomic analyses of cell models, some created anew using CRISPR-Cas9, others obtained from the 2019 patient. The study confirms that FAAH-OUT increases the expression of FAAH, both via its lncRNA product and through an intronic enhancer called FAAH-AMP. Loss of FAAH-OUT also changes the expression of a wide network of genes beyond FAAH itself. For example, although the pain insensitivity is mostly due to loss of FAAH function (via increased endocannabinoid levels and reduced ACKR3 expression), lack of depression and anxiety is instead due to a non-canonical Wnt pathway upregulating BDNF. The increased wound healing is due to both pathways: loss of FAAH function increases N-acyltaurine levels; the non-canonical Wnt pathway is also beneficial to healing.
Assays
The enzyme is typically assayed making use of a radiolabelled anandamide substrate, which generates free labelled ethanolamine, although alternative LC-MS methods have also been described.
Structures
The first crystal structure of FAAH was published in 2002 (PDB code 1MT5). Structures of FAAH with drug-like ligands were first reported in 2008, and include non-covalent inhibitor complexes and covalent adducts.
Regulation
In slime molds
The slime mold Dictyostelium discoideum produces a semispecific FAAH inhibitor. By controlling the levels of FAAH activity, it modulates its endogenous N-acylethanolamine levels.
Enzyme classification
In the Enzyme Commission numbering scheme, "fatty acid amide hydrolase" is . The number applies to all enzymes that have the chemical activity; in humans it covers both the genes FAAH and FAAH2. The systematic name is "fatty acylamide amidohydrolase". Recorded synonyms include "oleamide hydrolase", "anandamide amidohydrolase".
See also
Endocannabinoid enhancer
Endocannabinoid reuptake inhibitor
Monoacylglycerol lipase
References
External links
Proteopedia FAAH entry - interactive structure (JMOL) of inhibitor-bound FAAH
Fatty acid amide hydrolase (FAAH1) Human Protein Atlas
EC 3.5.1
Integral membrane proteins | Fatty-acid amide hydrolase 1 | [
"Biology"
] | 2,074 | [
"SNPs on chromosome 1",
"Single-nucleotide polymorphisms"
] |
7,090,274 | https://en.wikipedia.org/wiki/Coordinated%20Video%20Timings | Coordinated Video Timings (CVT; VESA-2013-3 v1.2) is a standard by VESA which defines the timings of the component video signal. Initially intended for use by computer monitors and video cards, the standard made its way into consumer televisions.
The parameters defined by the standard include horizontal blanking and vertical blanking intervals, horizontal frequency and vertical frequency (collectively, pixel clock rate or video signal bandwidth), and horizontal/vertical sync polarity.
The standard was adopted in 2002 and superseded the Generalized Timing Formula.
Reduced blanking
CVT timings include the necessary pauses in picture data (known as "blanking intervals") to allow CRT displays to reposition their electron beam at the end of each horizontal scan line, as well as the vertical repositioning necessary at the end of each frame. CVT also specifies a mode ("CVT-R") which significantly reduces these blanking intervals (to a period insufficient for CRT displays to work correctly) in the interests of saving video signal bandwidth when modern displays such as LCD monitors are being used, since such displays typically do not require these pauses in the picture data. This also allows for lower pixel clock rates and higher frame rates.
In revision 1.2, released in 2013, a new "Reduced Blanking Timing Version 2" mode was added which further reduces the horizontal blanking interval from 160 to 80 pixels, increases pixel clock precision from ±0.25 MHz to ±0.001 MHz, and adds the option for a 1000/1001 modifier for ATSC/NTSC video-optimized timing modes (e.g. 59.94 Hz instead of 60.00 Hz or 23.976 Hz instead of 24.000).
CEA-861-H introduced RBv3. RBv3 defines ways to specify different VBLANK and HBLANK duration formulae.
CEA-861-I introduced "Optimized Video Timings" (OVT), a standard timing calculation that covers resolution/refresh rate combinations not supported by CVT.
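To make the effect of reduced blanking concrete, here is a deliberately simplified estimate of the resulting pixel clock. The 460 µs minimum vertical-blanking figure and the overall procedure are assumptions in the spirit of CVT reduced blanking rather than the full VESA algorithm, which adds further constraints (minimum porch lines, clock granularity, and the RBv2/RBv3 refinements described above):

```python
import math

def reduced_blanking_pixel_clock(h_active, v_active, refresh_hz,
                                 h_blank_px=160, min_vblank_us=460.0):
    """Simplified CVT-style reduced-blanking estimate (not the full VESA algorithm)."""
    frame_us = 1e6 / refresh_hz
    # Estimate the line time assuming min_vblank_us of each frame is vertical blanking.
    line_us_estimate = (frame_us - min_vblank_us) / v_active
    vblank_lines = math.floor(min_vblank_us / line_us_estimate) + 1
    v_total = v_active + vblank_lines
    h_total = h_active + h_blank_px
    return h_total * v_total * refresh_hz   # pixel clock in Hz

# 1920x1080 @ 60 Hz with the original 160-pixel reduced horizontal blanking
print(reduced_blanking_pixel_clock(1920, 1080, 60) / 1e6, "MHz")      # ~138.7 MHz
# The same mode with the 80-pixel horizontal blanking of RBv2
print(reduced_blanking_pixel_clock(1920, 1080, 60, h_blank_px=80) / 1e6, "MHz")
```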
Bandwidth
See also
Extended display identification data
References
External links
VESA free standards - includes free CVT 1.2 timings spreadsheet
Video signal
Audiovisual introductions in 2002 | Coordinated Video Timings | [
"Technology"
] | 464 | [
"Computing stubs"
] |
7,090,330 | https://en.wikipedia.org/wiki/Tasklist | In computing, tasklist is a command available in Microsoft Windows and in the AROS shell.
It is equivalent to the ps command in Unix and Unix-like operating systems and can also be compared with the Windows task manager (taskmgr).
Windows NT 4.0, the Windows 98 Resource Kit, the Windows 2000 Support Tools, and ReactOS include the similar tlist command. Additionally, Microsoft provides the similar PsList command as part of Windows Sysinternals.
Usage
Microsoft Windows
On Microsoft Windows, tasklist shows all of the local computer's currently running processes. tasklist may also be used to show the processes of a remote system by using the command: tasklist /S "SYSTEM".
Optionally, the processes can be listed sorted by image name, PID, or the amount of CPU usage; by default, they are listed in chronological order.
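A minimal way to consume this output from a script, assuming a Windows host, is to ask for CSV formatting with the /FO switch and parse the rows; the snippet below is only a sketch, and the column-layout comment reflects typical output rather than a guaranteed contract:

```python
import csv
import subprocess

# Ask tasklist for CSV output so the columns can be parsed reliably.
result = subprocess.run(
    ["tasklist", "/FO", "CSV"],
    capture_output=True, text=True, check=True,
)

rows = csv.reader(result.stdout.splitlines())
header = next(rows)            # typically: Image Name, PID, Session Name, Session#, Mem Usage
for row in rows:
    image_name, pid = row[0], row[1]
    print(f"{pid:>8}  {image_name}")
```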
See also
Task manager
nmon — a system monitor tool for the AIX and Linux operating systems.
pgrep
pstree
top
References
Further reading
External links
tasklist | Microsoft Docs
Windows communication and services
Windows administration
Task managers | Tasklist | [
"Technology"
] | 223 | [
"Windows commands",
"Computing commands"
] |
7,090,419 | https://en.wikipedia.org/wiki/Facadism | Facadism, façadism, or façadomy is the architectural and construction practice where the facade of a building is designed or constructed separately from the rest of a building, or when only the facade of a building is preserved with new buildings erected behind or around it.
There are aesthetic and historical reasons for preserving building facades. Facadism can be the response to the interiors of a building becoming unusable, such as being damaged by fire. In developing areas, however, the practice is sometimes used by property developers seeking to redevelop a site as a compromise with preservationists who wish to preserve buildings of historical or aesthetic interest. It can be regarded as a compromise between historic preservation and demolition and thus has been lauded as well as decried.
There is sometimes a blurred line between renovation, adaptive reuse, reconstruction and facadism. Sometimes buildings are renovated to such an extent that they are "skinned", preserving only the exterior shell, and used for purposes other than those for which they were originally intended. While this is equivalent to facadism, the difference is typically the retention of roof and or floor structures, maintaining a credible link to the original building. In contrast, facadism generally involves retaining only one or two street facing walls for purely aesthetic and decorative purposes. Facadomy is a practice in postmodern architecture reaching its peak in the latter half of the 20th century. The setback or podium architecture technique gives an illusion of integrity to the original building by visually separating the old from the new, helping to mitigate farcical effects such as the floors and windows not lining up or a dramatic clash of styles.
Critics label the practice as architectural sham, some claiming that it sometimes results in part of the building becoming a folly.
Distribution and control measures
Despite being highly controversial and denounced by many preservationists as vandalism, facadism is used where the demand for new development overwhelms community desires for preservation. Facadism often appears in cities where there is strong pressure for new development.
While the controversial practice of facadism is encouraged by governments in some cities (such as Toronto and Brisbane), it is actively discouraged in others (such as Paris and Sydney).
Architectural podiums are often seen by some architects as a solution to this problem and these are allowed for as part of planning frameworks in urban heritage areas.
International policies
The practice of facadism has the potential to conflict with ICOMOS international charters. The Venice Charter, article 7, states that: "A monument is inseparable from the history to which it bears witness and from the setting in which it occurs. The moving of all or part of a monument cannot be allowed except where the safeguarding of that monument demands it or where it is justified by national or international interest of paramount importance".
By country
Australia
In Australia, the Burra Charter, which sets out the principles and procedures to be followed when conserving heritage places, does not have any policies which specifically deal with facadism. However, it requires that all aspects of the significance of a place be understood and retained as far as possible. Many local governments have heritage policies, but while some specifically warn against facadism, others do not.
Sydney
The central city of Sydney sported numerous examples of historic buildings reduced to facades as part of the development boom of the 1980s. The retention of only the two street facades of the 1890s Colonial Mutual Building on Martin Place in 1976 to allow an office tower behind is probably the earliest example of facadism in Australia, and set a precedent for the following decade. The most notorious was the treatment of the 1850s North British Hotel in Loftus Street in 1983, where the bland office building facade rose straight up from the retained facade, and had been given the bonus floors on the basis of the preservation. In 2018, a new development on a larger site saw its complete demolition, noting that it was not officially heritage protected. Many warehouses and industrial buildings in the central city area were facaded in the 1980s, some left propped up for years after the building boom bust in the early 1990s. New heritage controls introduced in NSW allow the listing of interiors, and many now are, and the City of Sydney heritage guidelines assume the retention of the whole of buildings that are heritage listed, and so there have been very few examples of facadism since the mid 1990s.
Melbourne
In the rapidly growing city of Melbourne, facadism has become very common. The Old Commerce building at the University of Melbourne was a very early example dating from the 1930s; however, this was a case of saving an elaborate stone bank facade by relocating it to Melbourne University, where it was reconstructed as part of a new building (which was itself demolished and replaced in 2014, with the facade left in place).
With the introduction of heritage controls in the 1980s, and high-profile heritage battles over several large-scale CBD buildings, a compromise policy avoiding facadism was generally adopted. The front portion of a building, often about 10 metres deep, was retained, with taller development set back behind the retained portion. One Collins Street in 1984 and the Olderfleet group development in 1986 were high-profile examples.
Facadism still occurred in a few examples, notably the T & G Building on Collins Street where 10 storey walls on two street fronts were propped up in 1990 allowing a completely new building behind, but the same height and floor levels as the facades.
In the early 21st century, with development pressures increasing, and policies introduced encouraging high density residential development, new non-specific controls in the central city, and a series of decisions at the Victorian Civil and Administrative Tribunal, retention of only facades has become more and more common. By the early 2010s, industrial buildings, shop rows, and traditional corner pubs across the inner and middle city were routinely reduced to a facade (and usually some side walls to retain the appearance of a "three dimensional building"), to allow new residential construction behind and above. In 2012 the huge Myer Store on Lonsdale Street in the city was reduced to a highly visible propped facade to allow the construction of a shopping center. Concern was expressed by heritage groups in 2013 that the trend has gone too far, and the City of Melbourne started the process for introducing new guidelines for the central city that would curb such practices.
Brisbane
In Brisbane, where heritage controls did not exist until 1992, many historic buildings were lost completely despite public opposition in the 1970s and 80s, and facadism was seen as an acceptable compromise by the Brisbane City Council. Uptown, completed in 1988, was hailed as a heritage 'success', retaining the facades of several buildings including the Hotel Carlton (1885), New York Hotel (1860) and Newspaper House. Another notorious example was the Queensland Country Life Building (1888) which was reduced to a facade in 1991, and left as a remnant for many years until a development was built behind in 2006. Guidelines for places on the state-level Queensland Heritage Register, however, require that interiors be specifically taken into account.
Canada
In Canada, all jurisdictions at the federal, provincial, territorial and municipal level have adopted the Standards & Guidelines for the Conservation of Historic Places in Canada, which – though not explicit – does not recommend facadism as good conservation practice. In general, projects that have used facadism are considered to have lost their integrity and value. Nonetheless, facadism is very common, especially in Toronto: for example in 2017 the facades of the McLaughlin Motor Car Showroom were dismantled and re-erected as part of the Burano tower development.
Gallery
See also
Adaptive reuse
Brusselization
Building restoration
Cultural heritage
Ship of Theseus
Western false front architecture
References
Notes
Bibliography
External links
Building engineering
Urban studies and planning terminology
Architectural history
Historic preservation | Facadism | [
"Engineering"
] | 1,587 | [
"Building engineering",
"Civil engineering",
"Architectural history",
"Architecture"
] |
7,090,506 | https://en.wikipedia.org/wiki/Dielectric%20loss | In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle or the corresponding loss tangent . Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Electromagnetic field perspective
For time-varying electromagnetic fields, the electromagnetic energy is typically viewed as waves propagating either through free space, in a transmission line, in a microstrip line, or through a waveguide. Dielectrics are often used in all of these environments to mechanically support electrical conductors and keep them at a fixed separation, or to provide a barrier between different gas pressures yet still transmit electromagnetic power. Maxwell’s equations are solved for the electric and magnetic field components of the propagating waves that satisfy the boundary conditions of the specific environment's geometry. In such electromagnetic analyses, the parameters permittivity $\varepsilon$, permeability $\mu$, and conductivity $\sigma$ represent the properties of the media through which the waves propagate. The permittivity can have real and imaginary components (the latter excluding $\sigma$ effects, see below) such that
$\varepsilon = \varepsilon' - j\varepsilon''.$
If we assume that we have a wave function such that
$\vec{E} = \vec{E}_0\, e^{j\omega t},$
then Maxwell's curl equation for the magnetic field can be written as:
$\nabla \times \vec{H} = j\omega\varepsilon'\vec{E} + \left(\omega\varepsilon'' + \sigma\right)\vec{E},$
where $\varepsilon''$ is the imaginary component of permittivity attributed to bound charge and dipole relaxation phenomena, which gives rise to energy loss that is indistinguishable from the loss due to the free charge conduction that is quantified by $\sigma$. The component $\varepsilon'$ represents the familiar lossless permittivity, given by the product of the free space permittivity and the relative real permittivity, or
$\varepsilon' = \varepsilon_0 \varepsilon'_r.$
Loss tangent
The loss tangent is then defined as the ratio (or angle in a complex plane) of the lossy reaction to the electric field in the curl equation to the lossless reaction:
$\tan\delta = \frac{\omega\varepsilon'' + \sigma}{\omega\varepsilon'}.$
The solution for the electric field of the electromagnetic wave is
$\vec{E} = \vec{E}_0\, e^{j\omega t}\, e^{-j\frac{2\pi}{\lambda}\sqrt{1 - j\tan\delta}\, z},$
where:
$\omega$ is the angular frequency of the wave, and
$\lambda$ is the wavelength in the dielectric material.
For dielectrics with small loss, the square root can be approximated using only the zeroth- and first-order terms of the binomial expansion, $\sqrt{1 - j\tan\delta} \approx 1 - j\,\tfrac{\tan\delta}{2}$. Also, $\tan\delta \approx \delta$ for small $\delta$.
Since power is proportional to the square of the electric field intensity, it turns out that the power decays with propagation distance $z$ as
$P = P_0\, e^{-2\pi\delta z/\lambda},$
where:
$P_0$ is the initial power.
There are often other contributions to power loss for electromagnetic waves that are not included in this expression, such as losses due to the wall currents of the conductors of a transmission line or waveguide. Also, a similar analysis could be applied to the magnetic permeability, where
$\mu = \mu' - j\mu'',$
with the subsequent definition of a magnetic loss tangent
$\tan\delta_m = \frac{\mu''}{\mu'}.$
The electric loss tangent can be similarly defined:
$\tan\delta_e = \frac{\varepsilon''}{\varepsilon'},$
upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium).
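As a numerical illustration of the relations above, the following Python sketch evaluates the loss tangent and the small-loss power decay for a hypothetical low-loss dielectric; all material values (frequency, permittivity, conductivity, distance) are assumptions chosen for the example, not data from this article.
```python
# Illustrative numbers only: loss tangent and power attenuation for an
# assumed low-loss dielectric at 10 GHz.
import math

f = 10e9                 # frequency, Hz (assumed)
eps0 = 8.854e-12         # vacuum permittivity, F/m
eps_r = 2.1              # relative permittivity (assumed, PTFE-like)
eps_pp_r = 2e-4          # imaginary part of relative permittivity (assumed)
sigma = 1e-6             # effective conductivity, S/m (assumed)

omega = 2 * math.pi * f
eps_p = eps_r * eps0     # real (lossless) permittivity
eps_pp = eps_pp_r * eps0 # bound-charge loss term

# Loss tangent: ratio of the lossy to the lossless reaction in the curl equation
tan_delta = (omega * eps_pp + sigma) / (omega * eps_p)

# Small-loss power decay P = P0 * exp(-2*pi*tan_delta*z/lambda)
c = 2.998e8
lam = c / (f * math.sqrt(eps_r))   # wavelength in the dielectric
z = 0.1                            # propagation distance, m (assumed)
P_ratio = math.exp(-2 * math.pi * tan_delta * z / lam)

print(f"tan(delta) = {tan_delta:.3e}")
print(f"P(z)/P0 after {z} m = {P_ratio:.4f}")
```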
Discrete circuit perspective
A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. One lumped element model of a capacitor includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR). The ESR represents losses in the capacitor. In a low-loss capacitor the ESR is very small (the conduction is high leading to a low resistivity), and in a lossy capacitor the ESR can be large. Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity representing the loss due to both the dielectric's conduction electrons and the bound dipole relaxation phenomena mentioned above. In a dielectric, one of the conduction electrons or the dipole relaxation typically dominates loss in a particular dielectric and manufacturing method. For the case of the conduction electrons being the dominant loss, then
$\mathrm{ESR} = \frac{\sigma}{\varepsilon' \omega^2 C},$
where $C$ is the lossless capacitance.
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's loss tangent is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as can be visualized in a phasor diagram. The loss tangent is then
$\tan\delta = \frac{\mathrm{ESR}}{\left|X_c\right|} = \omega C \cdot \mathrm{ESR}.$
Since the same AC current flows through both ESR and $X_c$, the loss tangent is also the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor. For this reason, a capacitor's loss tangent is sometimes stated as its dissipation factor, or the reciprocal of its quality factor $Q$, as follows:
$\mathrm{DF} = \tan\delta = \frac{1}{Q} = \omega C \cdot \mathrm{ESR}.$
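The following short Python sketch illustrates the circuit-level relations above for assumed (hypothetical) component values; it simply evaluates the loss tangent, dissipation factor and quality factor from an assumed ESR and capacitance.
```python
# Hypothetical values: relation between ESR, loss tangent, dissipation factor
# and quality factor for a capacitor (values are assumptions, not from the text).
import math

f = 1e6            # frequency, Hz (assumed)
C = 100e-9         # capacitance, F (assumed)
ESR = 0.05         # equivalent series resistance, ohms (assumed)

omega = 2 * math.pi * f
Xc = 1 / (omega * C)          # magnitude of the capacitive reactance

tan_delta = ESR / Xc          # = omega * C * ESR
DF = tan_delta                # dissipation factor
Q = 1 / tan_delta             # quality factor

print(f"|Xc| = {Xc:.3f} ohm, tan(delta) = {tan_delta:.4e}, Q = {Q:.1f}")
```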
References
Electromagnetism
Electrical engineering
External links
Loss in dielectrics, frequency dependence | Dielectric loss | [
"Physics",
"Engineering"
] | 982 | [
"Electromagnetism",
"Electrical engineering",
"Physical phenomena",
"Fundamental interactions"
] |
7,090,708 | https://en.wikipedia.org/wiki/NGC%204625 | NGC 4625 is a distorted dwarf galaxy in the constellation Canes Venatici. The galaxy is formally classified as a Sm galaxy, which means that its structure vaguely resembles the structure of spiral galaxies. The galaxy is sometimes referred to as a Magellanic spiral because of its resemblance to the Large Magellanic Cloud's single-arm structure.
Structure
Unlike most spiral galaxies, NGC 4625 has a single spiral arm, which gives the galaxy an asymmetric appearance. It has been hypothesized that this galaxy's asymmetric structure may be the result of a gravitational interaction with NGC 4618. Such asymmetric structure is commonly seen among many interacting galaxies. However, observations of neutral hydrogen gas in NGC 4618 and NGC 4625 show that NGC 4625 does not appear to have been affected by the gravitational interaction. This indicates that the single-arm structure seen in NGC 4625 may be created through intrinsic processes.
Ultraviolet observations of NGC 4625 made by the Galaxy Evolution Explorer (GALEX) show the presence of an extended disk. The newly found spiral disk extends 28,000 light-years from the galaxy center, about four times the optical radius. The hot blue stars in this disk may have formed from the inflow of fresh gas and dust resulting from the interaction with its companions, NGC 4618 and NGC 4625A. The UV-to-optical colors suggest that the bulk of the stars in the disk of NGC 4625 are currently being formed, providing a unique opportunity to study, in the present day, the physics of star formation under conditions similar to those in which the normal disks of spiral galaxies like the Milky Way first formed.
Environment
As mentioned above, NGC 4625 is interacting with NGC 4618.
See also
NGC 5713 - a similar asymmetric spiral galaxy
References
External links
Canes II Group
Peculiar galaxies
Intermediate spiral galaxies
Interacting galaxies
Canes Venatici
4625
IC objects
07861
42607
Magellanic spiral galaxies | NGC 4625 | [
"Astronomy"
] | 396 | [
"Canes Venatici",
"Constellations"
] |
7,090,781 | https://en.wikipedia.org/wiki/Vanillylmandelic%20acid | Vanillylmandelic acid (VMA) is a chemical intermediate in the synthesis of artificial vanilla flavorings and is an end-stage metabolite of the catecholamines (epinephrine, and norepinephrine). It is produced via intermediary metabolites.
Chemical synthesis
VMA synthesis is the first step of a two-step process practiced by Rhodia since the 1970s to synthesize artificial vanilla. Specifically the reaction entails the condensation of guaiacol and glyoxylic acid in an ice cold, aqueous solution with sodium hydroxide.
Biological elimination
VMA is found in the urine, along with other catecholamine metabolites, including homovanillic acid (HVA), metanephrine, and normetanephrine. In timed urine tests the quantity excreted (usually per 24 hours) is assessed along with creatinine clearance, and the quantity of cortisols, catecholamines, and metanephrines excreted is also measured.
Clinical significance
Urinary VMA is elevated in patients with tumors that secrete catecholamines.
These urinalysis tests are used to diagnose an adrenal gland tumor called pheochromocytoma, a tumor of catecholamine-secreting chromaffin cells. These tests may also be used to diagnose neuroblastomas, and to monitor treatment of these conditions.
Norepinephrine is metabolised into normetanephrine and VMA. Norepinephrine is one of the hormones produced by the adrenal glands, which are found on top of the kidneys. These hormones are released into the blood during times of physical or emotional stress, which are factors that may skew the results of the test.
References
Neurochemistry
Alpha hydroxy acids
O-methylated natural phenols
Phenolic human metabolites
Acetic acids
Vanilloids | Vanillylmandelic acid | [
"Chemistry",
"Biology"
] | 419 | [
"Biochemistry",
"Neurochemistry"
] |
7,090,906 | https://en.wikipedia.org/wiki/Agroinfiltration | Agroinfiltration is a method used in plant biology and especially lately in plant biotechnology to induce transient expression of genes in a plant, or isolated leaves from a plant, or even in cultures of plant cells, in order to produce a desired protein. In the method, a suspension of Agrobacterium tumefaciens is introduced into a plant leaf by direct injection or by vacuum infiltration, or brought into association with plant cells immobilised on a porous support (plant cell packs), whereafter the bacteria transfer the desired gene into the plant cells via transfer of T-DNA. The main benefit of agroinfiltration when compared to the more traditional plant transformation is speed and convenience, although yields of the recombinant protein are generally also higher and more consistent.
The first step is to introduce a gene of interest to a strain of Agrobacterium tumefaciens. Subsequently, the strain is grown in a liquid culture and the resulting bacteria are washed and suspended into a suitable buffer solution. For injection, this solution is then placed in a syringe (without a needle). The tip of the syringe is pressed against the underside of a leaf while simultaneously applying gentle counterpressure to the other side of the leaf. The Agrobacterium suspension is then injected into the airspaces inside the leaf through stomata, or sometimes through a tiny incision made to the underside of the leaf.
Vacuum infiltration is another way to introduce Agrobacterium deep into plant tissue. In this procedure, leaf disks, leaves, or whole plants are submerged in a beaker containing the solution, and the beaker is placed in a vacuum chamber. The vacuum is then applied, forcing air out of the intercellular spaces within the leaves via the stomata. When the vacuum is released, the pressure difference forces the "Agrobacterium" suspension into the leaves through the stomata into the mesophyll tissue. This can result in nearly all of the cells in any given leaf being in contact with the bacteria.
Once inside the leaf, the Agrobacterium remains in the intercellular space and transfers the gene of interest, as part of the Ti plasmid-derived T-DNA, in high copy numbers into the plant cells. The gene transfer occurs when plant signals induce the bacterial virulence machinery and physical contact is made between the plant cells and the bacteria. The bacteria form a channel through which the new T-DNA strand is transferred into the plant cell. The T-DNA moves into the nucleus of the plant cell and may begin to integrate into the plant's chromosomes. The gene is then transiently expressed through RNA synthesis from appropriate promoter sequences in all transfected cells (no selection for stable integration is performed). The plant can be monitored for a possible effect on the phenotype, subjected to experimental conditions, or harvested and used for purification of the protein of interest. Many plant species can be processed using this method, but the most common one is Nicotiana benthamiana and, less often, Nicotiana tabacum.
Transient expression in cultured plant cell packs is a new procedure, recently patented by the Fraunhofer Institute IVV, Germany. For this technique, suspension cultured cells of tobacco (e.g.: NT1 or BY2 cell lines of Nicotiana tabacum) are immobilised by filtration onto a porous support to form a well-aerated cell pack, then incubated with recombinant Agrobacterium for a time to allow T-DNA transfer, before refiltration to remove excess bacteria and liquid. Incubation of the cell pack in a humid environment for time periods up to several days allows transient expression of protein. Secreted proteins can be washed out of the cell pack by application of buffer and further filtration.
Silencing suppressors in agroinfiltration
Figure: Agroinfiltration using a promoter::GUS construct in Nicotiana benthamiana with TBSV p19 (right leaf disc) and without TBSV p19 (left leaf disc).
It is quite common to coinfiltrate the Agrobacterium carrying the construct of interest together with another Agrobacterium carrying a silencing-suppressor protein gene, such as the one encoding the p19 protein from the plant-pathogenic Tomato bushy stunt virus (TBSV), or the NSs protein from Tomato spotted wilt virus (TSWV). TBSV was first discovered in 1935 in tomatoes and results in plants with stunted growth and deformed fruits. TSWV was discovered in tomatoes in Australia in 1915, and for many years was the only member of what is now known as the genus Tospovirus, family Bunyaviridae.
In order to defend itself against viruses and other pathogens that introduce foreign nucleic acids into their cells, plants have developed a system of post-transcriptional gene silencing (PTGS) where small interfering RNAs are produced from double-stranded RNA in order to create a sequence specific degradation pathway that efficiently silence non-native genes. Many plant viruses have developed mechanisms that counter the plants PTGS-systems by evolving proteins, such as p19 and NSs, that interfere with the PTGS-pathway at different levels.
Although it is not clear exactly how p19 works to suppress RNA silencing, studies have shown that transiently expressed proteins in Nicotiana benthamiana leaves have an up to 50-fold higher yield when coinfiltrated with TBSV p19.
TSWV and other tospovirus NSs proteins have been shown to be effective as suppressors of both local and systemic silencing, and may be a useful alternative to p19 where the latter has been shown not to be effective. In other studies, p19 from artichoke mottled crinkle virus has been shown to have a similar, although weaker, effect to TBSV p19.
See also
Agrobacterium
Transient expression
References
Biotechnology
Gene expression | Agroinfiltration | [
"Chemistry",
"Biology"
] | 1,275 | [
"Gene expression",
"Biotechnology",
"Molecular genetics",
"Cellular processes",
"nan",
"Molecular biology",
"Biochemistry"
] |
7,091,330 | https://en.wikipedia.org/wiki/Water-fuelled%20car | A water-fuelled car is an automobile that hypothetically derives its energy directly from water. Water-fuelled cars have been the subject of numerous international patents, newspaper and popular science magazine articles, local television news coverage, and websites. The claims for these devices have been found to be pseudoscience and some were found to be tied to investment frauds. These vehicles may be claimed to produce fuel from water on board with no other energy input, or may be a hybrid claiming to derive some of its energy from water in addition to a conventional source (such as gasoline). According to the currently accepted laws of physics, there is no way to extract chemical energy from water alone.
What water-fuelled cars are not
A water-fuelled car is not any of the following:
Water injection, which is a method for cooling the combustion chambers of engines by adding water to the incoming fuel-air mixture, allowing for greater compression ratios and reduced engine knocking (detonation).
The hydrogen car, although it often incorporates some of the same elements. To fuel a hydrogen car from water, electricity is used to generate hydrogen by electrolysis. The resulting hydrogen is an energy carrier that can power a car by reacting with oxygen from the air to create water, either through burning in a combustion engine or catalyzed to produce electricity in a fuel cell.
Hydrogen fuel enhancement, where a mixture of hydrogen and conventional hydrocarbon fuel is burned in an internal combustion engine, usually in an attempt to improve fuel economy or reduce emissions.
The steam car, which uses water (in both liquid and gaseous forms) as a working fluid, not as a fuel.
An electric car charged with or directly powered by hydroelectricity.
Extracting energy from water
According to the currently accepted laws of physics, there is no way to extract chemical energy from water alone. Water itself is highly stable—it was one of the classical elements and contains very strong chemical bonds. Its enthalpy of formation is negative (−68.3 kcal/mol or −285.8 kJ/mol), meaning that energy is required to break those stable bonds, to separate water into its elements, and there are no other compounds of hydrogen and oxygen with more negative enthalpies of formation, meaning that no energy can be released in this manner either.
Most proposed water-fuelled cars rely on some form of electrolysis to separate water into hydrogen and oxygen and then recombine them to release energy. However, the first law of thermodynamics guarantees that the energy required to separate the elements will always be equal to the amount of energy released (assuming no losses), so this cannot be used to produce net energy. The second law of thermodynamics further states that the amount of useful energy released this way is necessarily less than the amount of energy input.
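A minimal worked example of the first-law argument, using the enthalpy of formation quoted above; the sketch assumes an ideal, lossless cycle, which is the most favourable case possible for such a scheme.
```python
# Energy bookkeeping for the electrolysis/recombination cycle, per mole of water.
# Uses the enthalpy of formation quoted above; ideal (lossless) case.
dHf_water = -285.8  # kJ/mol, enthalpy of formation of liquid water

energy_to_split = -dHf_water   # energy that must be supplied to electrolyse 1 mol
energy_from_burn = -dHf_water  # energy released when H2 and O2 recombine to 1 mol

net = energy_from_burn - energy_to_split
print(f"Input: {energy_to_split} kJ, output: {energy_from_burn} kJ, net: {net} kJ")
# Net is zero even before real-world losses, so no energy can be gained.
```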
Claims of functioning water-fuelled cars
Garrett electrolytic carburetor
Charles H. Garrett allegedly demonstrated a water-fuelled car "for several minutes", which was reported on September 8, 1935, in The Dallas Morning News. The car generated hydrogen by electrolysis as can be seen by examining Garrett's patent, issued that same year. This patent includes drawings which show a carburetor similar to an ordinary float-type carburetor but with electrolysis plates in the lower portion, and where the float is used to maintain the level of the water. Garrett's patent fails to identify a new source of energy.
Stanley Meyer's water fuel cell
At least as far back as 1980, Stanley Meyer claimed that he had built a dune buggy that ran on water, although he gave inconsistent explanations as to its mode of operation. In some cases, he claimed that he had replaced the spark plugs with a "water splitter", while in other cases it was claimed to rely on a "fuel cell" that split the water into hydrogen and oxygen. The "fuel cell", which he claimed was subjected to an electrical resonance, would split the water mist into hydrogen and oxygen gas, which would then be combusted back into water vapour in a conventional internal combustion engine to produce net energy. Meyer's claims were never independently verified, and in an Ohio court in 1996 he was found guilty of "gross and egregious fraud". He died of an aneurysm in 1998, although conspiracy theories claim that he was poisoned.
Dennis Klein
In 2002, the firm Hydrogen Technology Applications patented an electrolyser design and trademarked the term "Aquygen" to refer to the hydrogen oxygen gas mixture produced by the device. Originally developed as an alternative to oxyacetylene welding, the company claimed to be able to run a vehicle exclusively on water, via the production of "Aquygen", and invoked an unproven state of matter called "magnegases" and a discredited theory about magnecules to explain their results. Company founder Dennis Klein claimed to be in negotiations with a major US auto manufacturer and that the US government wanted to produce Hummers that used his technology.
At present, the company no longer claims it can run a car exclusively on water, and is instead marketing "Aquygen" production as a technique to increase fuel efficiency, thus making it hydrogen fuel enhancement rather than a water-fuelled car.
Genesis World Energy (GWE)
Also in 2002, Genesis World Energy announced a market ready device which would extract energy from water by separating the hydrogen and oxygen and then recombining them. In 2003, the company announced that this technology had been adapted to power automobiles. The company collected over $2.5 million from investors, but none of their devices were ever brought to market. In 2006, Patrick Kelly, the owner of Genesis World Energy was sentenced in New Jersey to five years in prison for theft and ordered to pay $400,000 in restitution.
Genepax Water Energy System
In June 2008, Japanese company Genepax unveiled a car it claimed ran on only water and air, and many news outlets dubbed the vehicle a "water-fuel car". The company said it "cannot [reveal] the core part of this invention" yet, but it disclosed that the system used an onboard energy generator, which it called a "membrane electrode assembly", to extract the hydrogen using a "mechanism which is similar to the method in which hydrogen is produced by a reaction of metal hydride and water". The hydrogen was then used to generate energy to run the car. This led to speculation that the metal hydride is consumed in the process and is the ultimate source of the car's energy, making it a hydride-fuelled "hydrogen on demand" vehicle rather than water-fuelled as claimed. On the company's website the energy source is explained only with the words "Chemical reaction". The science and technology magazine Popular Mechanics described Genepax's claims as "rubbish". The vehicle Genepax demonstrated to the press in 2008 was a REVAi electric car, which was manufactured in India and sold in the UK as the G-Wiz.
In early 2009, Genepax announced they were closing their website, citing large development costs.
Thushara Priyamal Edirisinghe
Also in 2008, Sri Lankan news sources reported that Thushara Priyamal Edirisinghe claimed to drive a water-fuelled car a substantial distance on a small quantity of water. Like other alleged water-fuelled cars described above, energy for the car was supposedly produced by splitting water into hydrogen and oxygen using electrolysis, and then burning the gases in the engine. Thushara showed the technology to Prime Minister Ratnasiri Wickramanayaka, who "extended the Government’s full support to his efforts to introduce the water-powered car to the Sri Lankan market". Thushara was arrested a few months later on suspicion of investment fraud.
Daniel Dingel
Daniel Dingel, a Filipino inventor, has been claiming since 1969 to have developed technology allowing water to be used as fuel. In 2000, Dingel entered into a business partnership with Formosa Plastics Group to further develop the technology. In 2008, Formosa Plastics successfully sued Dingel for fraud and Dingel, who was 82, was sentenced to 20 years' imprisonment.
Ghulam Sarwar
In December 2011, Ghulam Sarwar claimed he had invented a car that ran only on water. At the time, the car was claimed to use 60% water and 40% diesel or other fuel, with the inventor working to make it run on water alone, reportedly by the end of June 2012. It was further claimed the car "emits only oxygen rather than the usual carbon".
Agha Waqar Ahmad
Pakistani man Agha Waqar Ahmad claimed in July 2012 to have invented a water-fuelled car by installing a "water kit", suitable for all kinds of automobiles, which consists of a cylindrical jar that holds the water, a bubbler, and a pipe leading to the engine. He claimed the kit used electrolysis to convert water into "HHO", which is then used as fuel. The kit requires distilled water to work. Ahmad claimed he had been able to generate more oxyhydrogen than any other inventor because of "undisclosed calculations". He applied for a patent in Pakistan. Some Pakistani scientists said Agha's invention was a fraud that violates the laws of thermodynamics.
Aryanto Misel
Indonesian inventor Aryanto Misel claimed in May 2022 that his invention, called Nikuba, can convert water into hydrogen that can be used as fuel for motorcycles. Aryanto claimed that it required only 1 liter of water to travel a distance of 500 kilometers.
In July 2023, Aryanto claimed that the Italian-based automobile manufacturers Lamborghini, Ducati, and Ferrari were interested in Nikuba. He also claimed that he was willing to sell the device to foreign companies for 15 billion rupiah, while also claiming that he did not need the Indonesian government and the National Research and Innovation Agency, as they had "destroyed" him. Indonesian scientists from the National Research and Innovation Agency stated that the device is theoretically impossible. They also stated that there was no interest from Italian automobile manufacturers in Nikuba, and that Aryanto had been invited by their partners rather than by the automobile manufacturers themselves.
Hydrogen as a supplement
In addition to claims of cars that run exclusively on water, there have also been claims that burning hydrogen or oxyhydrogen together with petrol or diesel increases mileage and efficiency; these claims are debated. A number of websites promote the use of oxyhydrogen, also called "HHO", selling plans for do-it-yourself electrolysers or kits with the promise of large improvements in fuel efficiency. According to a spokesman for the American Automobile Association, "All of these devices look like they could probably work for you, but let me tell you they don't".
Gasoline pill and related additives
Related to the water-fuelled car hoax are claims that additives, often a pill, can convert the water into usable fuel, similar to a carbide lamp, in which a high-energy additive produces the combustible fuel. These claims are all false, and often with fraudulent intent, as water itself cannot contribute any energy to the process.
Hydrogen on demand technologies
A hydrogen on demand vehicle uses a chemical reaction to produce hydrogen from water. The hydrogen is then burned in an internal combustion engine or used in a fuel cell to generate electricity which powers the vehicle. These designs take energy from the chemical that reacts with water; vehicles of this type are not precluded by the laws of nature. Aluminium, magnesium, and sodium borohydride react with water to generate hydrogen and have been used in hydrogen on demand prototypes. Eventually, the chemical runs out and has to be replenished. The energy required to produce such compounds exceeds the energy obtained from their reaction with water.
One example of a hydrogen on demand device, created by scientists from the University of Minnesota and the Weizmann Institute of Science, uses boron to generate hydrogen from water. An article in New Scientist in July 2006 described the power source under the headline "A fuel tank full of water" and quoted Abu-Hamed.
A vehicle powered by the device would take on water and boron instead of petrol, and generate boron trioxide. Elemental boron is difficult to prepare and does not occur naturally. Boron trioxide is an example of a borate, which is the predominant form of boron on earth. Thus, a boron-powered vehicle would require an economical method of preparing elemental boron. The chemical reactions describing the oxidation of boron are:
4B + 6H2O -> 2B2O3 + 6H2 [Hydrogen generation step]
6H2 + 3O2 -> 6H2O [Combustion step]
The balanced chemical equation representing the overall process (hydrogen generation and combustion) is:
4B + 3O2 -> 2B2O3
As shown above, boron trioxide is the only net byproduct, and it could be removed from the car and turned back into boron and reused. Electricity input is required to complete this process, which Abu-Hamed suggests could come from solar panels. Although it is possible to obtain elemental boron by electrolysis, a substantial expenditure of energy is required. The process of converting borates to elemental boron and back might be compared with the analogous process involving carbon: carbon dioxide could be converted to charcoal (elemental carbon), then burnt to produce carbon dioxide.
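As a rough stoichiometric check on the scheme above, the following sketch computes the hydrogen mass yield of the boron–water reaction from the balanced equation given earlier; molar masses are standard values and the result ignores all practical losses.
```python
# Stoichiometric hydrogen yield of the boron reaction 4B + 6H2O -> 2B2O3 + 6H2.
# Molar masses are standard values; the result is a back-of-envelope figure.
M_B = 10.81    # g/mol, boron
M_H2 = 2.016   # g/mol, molecular hydrogen

mol_H2_per_mol_B = 6 / 4                       # from the balanced equation
kg_H2_per_kg_B = mol_H2_per_mol_B * M_H2 / M_B

print(f"{kg_H2_per_kg_B:.3f} kg of H2 per kg of boron")   # roughly 0.28
```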
In popular culture
A water-fuelled car is referred to in the pilot episode of the That '70s Show sitcom, as well as in the twenty-first episode of the fifth season and the series finale.
"Gashole" (2010), a documentary film about the history of oil prices and the future of alternative fuels, mentions multiple stories regarding engines that use water to increase mileage efficiency.
"Like Water for Octane," an episode of The Lone Gunmen, is based on a "water-powered" car that character Melvin Frohike saw with his own eyes back in 1962.
The Water Engine, a David Mamet play made into a television film in 1994, tells the story of Charles Lang inventing an engine that runs using water for fuel. The plot centers on the many obstacles the inventor must overcome to patent his device.
The plot of the 1996 action film Chain Reaction revolves around a technology to turn water (via a type of self-sustaining bubble fusion & electrolysis) into fuel and official suppression of it.
A water-powered car was depicted in a 1997 episode of Team Knight Rider (a spinoff of the original Knight Rider TV series) entitled "Oil and Water". In the episode, the vehicle explodes after a character sabotages it by putting seltzer tablets in the fuel tank. The car shown was actually a Bricklin SV-1.
See also
List of topics characterized as pseudoscience
List of water fuel inventions
Perpetual motion
Water power engine
References
Further reading
Free energy conspiracy theories
Fringe physics | Water-fuelled car | [
"Technology"
] | 3,087 | [
"Free energy conspiracy theories",
"Science and technology-related conspiracy theories"
] |
7,091,771 | https://en.wikipedia.org/wiki/ELETTRA | Elettra Sincrotrone Trieste is an international research center located in Basovizza on the outskirts of Trieste, Italy.
Elettra – Sincrotrone Trieste S.C.p.A. is a multidisciplinary international research center, specialized in generating high quality synchrotron and free-electron laser light and applying it in materials science. Its mission is to promote cultural, social and economic growth through:
Basic and applied research
Technical and scientific training
Transfer of technology and know-how
The main assets of the research centre are two advanced light sources: the Elettra synchrotron (a third-generation electron storage ring working at 2 and 2.4 GeV, in operation since October 1993) and the free-electron laser (FEL) FERMI, operated continuously (24 hours a day) to supply light of the selected energy and quality to more than 30 experimental stations on 28 beamlines. Since 1993, Elettra has undergone several upgrades, which have allowed top-up operation since 2010 at both 2 and 2.4 GeV. The storage ring is formed by twelve groups of magnets arranged in a ring 260 m in circumference. The ring beam current at 2 GeV is normally set to 310 mA, and the top-up operational mode foresees a new injection every 6 minutes: 1 mA of electrons is injected over 4 seconds, keeping the ring current constant to within 3‰. At 2.4 GeV the beam current is set to 140 mA, and the top-up injections occur every 20 minutes: in this case, 1 mA of electrons is injected over 4 seconds, keeping the current constant to within 7‰. Under the present conditions, the spectral brightness available on most beamlines is up to 10¹⁹ photons/s/mm²/mrad²/0.1%bw.
The FERMI FEL works with the SRHG method, operating at 2 GeV and producing coherent femtosecond light pulses with variable polarization in the ultraviolet energy range. The optical pulses produced are characterized by a high peak power (~ GW) delivered to 8 different beamlines. The peak brightness of the FEL sources is expected to reach up to 10³⁰ photons/s/mm²/mrad²/0.1%bw.
These facilities enable the international community of researchers from academia and industry to characterize material properties and functions with sensitivity down to molecular and atomic levels, to pattern and nanofabricate new structures and devices, and to develop new processes. Every year scientists and engineers from more than 50 countries compete by submitting proposals to access and use time on these stations. The proposals are peer-reviewed by panels of international experts on the basis of scientific merit and potential impact, and the winners are granted valuable access time as a contribution to their research. Because of its central location in Europe, Elettra – Sincrotrone Trieste is increasingly attracting users from Central and Eastern European countries, where the demand for synchrotron radiation is in continuous growth, and is part of the primary network for science and technology of the Central European Initiative (CEI). Access by researchers from developing countries has tripled over the last few years, and the Indian research community is one of the largest users.
Elettra – Sincrotrone Trieste has been the coordinator of the EU-supported networks involving synchrotron and free electron lasers in the European area, in the last decade. Such networks promote transnational access, joint research activities and collaborations among the laboratories to improve the overall service offered to European users.
The facility, available for use by the Italian and international scientific communities, houses several ultra bright light sources, which use the synchrotron and free electron laser (FEL) sources to produce light ranging from ultraviolet to X-rays.
The centre also houses the European Storage Ring FEL Project (EUFELE).
References
See also
Signorina Elettra is a recurring fictional character in Donna Leon's Commissario Guido Brunetti novels
Synchrotron radiation facilities
Research institutes in Italy
Trieste | ELETTRA | [
"Materials_science"
] | 845 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
7,092,344 | https://en.wikipedia.org/wiki/Scottish%20painted%20pebbles | Painted pebbles are a class of Early Medieval artifact found in northern Scotland dating from the first millennium CE.
Appearance
They are small rounded beach pebbles made of quartzite, which have been painted with simple designs in a dye which is now dark brown in colour. The size varies from to . It has not proven possible to analyse the dye itself from the stains that remain.
The motifs are carefully executed and the most common are dots and wavy lines. Other motifs are small circles, pentacles, crescents and triangles, showing strong relationships with the Pictish symbol stone motifs.
Experimental archaeology suggests that the designs were likely to have been painted with peat tar.
Distribution
To date, 55 painted pebbles have been found. 11 of these were found in Caithness, 5 in Orkney and 27 in Shetland. Most have come from broch sites which have been shown to have had an extensive post-broch occupation. An ogham-inscribed spindle-whorl was associated with one find at Buckquoy in Orkney (see Buckquoy spindle whorl). Several have been associated with wheelhouses or their outbuildings. An example was found at a Pictish site at Buckquoy in Orkney as reported in 1976. It had the "small ring" type decoration.
Manufacture
No statements can be made about the paints used, as their brown residues have so far not been sufficient for an investigation. Good experimental results were achieved in the early 2010s using a pitch-like material that is produced as a residue from the burning of peat. This can be found widely on the surface of the earth in the north of the British Isles, and its use as a heating material in Shetland has already been proven for the early Iron Age. The material was applied with a straw, and halved quills and the stems of angelica roots were used for the round shapes.
Cultural significance
Scottish painted pebbles have been dated to the period 200 AD to the eighth century AD, the Pictish period. They may have been sling-stones that were thought to be of magical nature by the Picts; however, local traditions suggest that they were "charm-stones", often known as "cold-stones". Such stones were used within living memory to cure sickness in animals and humans.
In the Life of St. Columba it is recorded that he visited King Bridei in Pictland in around the year 565 AD and, taking a white stone pebble from the River Ness, he blessed it and any water it came into contact with would cure sick people. It floated in water and cured the king from a terminal illness. It remained as one of the great treasures of the king and cured many others.
See also
Amulets
Apotrope
Azilian pebbles
Touch pieces
References
Bibliography
External links
The Museum of Scottish Country Life
8th century in Scotland
1st-millennium works
Amulets
Archaeological artefact types
Archaeological discoveries in the United Kingdom
Magic items
Pictish culture
Rock art in Europe
Scottish art
Scottish folklore
Superstitions of Great Britain | Scottish painted pebbles | [
"Physics"
] | 610 | [
"Magic items",
"Physical objects",
"Matter"
] |
7,092,541 | https://en.wikipedia.org/wiki/Materialism%20and%20Empirio-criticism | Materialism and Empirio-criticism (Russian: Материализм и эмпириокритицизм, Materializm i empiriokrititsizm) is a philosophical work by Vladimir Lenin, published in 1909. It was an obligatory subject of study in all institutions of higher education in the Soviet Union, as a seminal work of dialectical materialism, a part of the curriculum called "Marxist–Leninist Philosophy". Lenin argued that human minds are capable of forming representations of the world that portray the world as it is. Thus, Lenin argues, our beliefs about the world can be objectively true; a belief is true when it accurately reflects the facts. According to Lenin, absolute truth is possible, but our theories are often only relatively true. Scientific theories can therefore constitute knowledge of the world.
Lenin formulates the fundamental philosophical contradiction between idealism and materialism as follows: "Materialism is the recognition of 'objects in themselves' or objects outside the mind; the ideas and sensations are copies or images of these objects. The opposite doctrine (idealism) says: the objects do not exist 'outside the mind'; they are 'connections of sensations'."
Background
The book, whose full title is Materialism and Empirio-criticism. Critical Comments on a Reactionary Philosophy, was written by Lenin from February through October 1908 while he was in Geneva and London and was published in Moscow in May 1909 by Zveno Publishers. The original manuscript and preparatory materials have been lost.
Most of the book was written when Lenin was in Geneva, apart from the one month spent in London, where he visited the library of the British Museum to access modern philosophical and natural science material. The index lists in excess of 200 sources for the book.
In December 1908, Lenin moved from Geneva to Paris, where he worked until April 1909 on correcting the proofs. Some passages were edited to avoid tsarist censorship. It was published in Imperial Russia with great difficulty. Lenin insisted on the rapid distribution of the book and stressed that "not only literary but also serious political obligations" were involved in its publication.
The book was written as a reaction and criticism to the three-volume work Empiriomonism (1904–1906) by Alexander Bogdanov, his political opponent within the Party. In June 1909, Bogdanov was defeated at a Bolshevik mini-conference in Paris and expelled from the Central Committee, but he still retained a relevant role in the Party's left wing. He participated in the Russian Revolution and in 1919 he was appointed to the praesidium of the Socialist Academy of Social Sciences.
Materialism and Empirio-criticism was republished in Russian in 1920 with an introduction attacking Bogdanov by Vladimir Nevsky, the Rector of the Sverdlov Communist University. It subsequently appeared in over 20 languages and acquired canonical status in Marxist–Leninist philosophy.
Chapters summary
Chapter I The Epistemology of Empiriocriticism and Dialectical Materialism I
Lenin then discusses the "solipsism" of Mach and Avenarius.
Chapter II The Epistemology of Empiriocriticism and Dialectical Materialism II
Lenin, Chernov and Bazarov confront the views of Ludwig Feuerbach, Joseph Dietzgen and Friedrich Engels and comment on the criterion of practice in epistemology.
Chapter III The Epistemology of Empiriocriticism and Dialectical Materialism III
Lenin seeks to define "matter" and "experience" and addresses the questions of causality and necessity in nature as well as "freedom and necessity" and the "principle of the economy of thought".
Chapter IV The Philosophical Idealists as Collaborators and Successors of Empiriocriticism
Lenin deals with left and right Kant criticism, with the philosophy of immanence, Bogdanov's empiriomonism, and the critique of Hermann von Helmholtz on the "theory of symbols."
Chapter V The Latest Revolution in Science and Philosophical Idealism
Lenin deals with the thesis that in "the crisis of physics" "matter has disappeared". In this context he speaks of a "physical idealism" and notes (on p. 260): "For the only 'property' of matter to whose acknowledgment philosophical materialism is bound is the property of being objective reality, outside of our consciousness."
Chapter VI Empiriocriticism and Historical Materialism
Lenin discusses authors such as Bogdanov, Suvorov, Ernst Haeckel and Ernst Mach.
In an addition to Chapter IV, Lenin addresses the question: "From what side did N. G. Chernyshevsky criticize Kantianism?"
Philosophers and scientists cited
Lenin cites a broad range of philosophers:
Immanentist
Richard Avenarius
Ernst Mach
Richard von Schubert-Soldern
Joseph Petzoldt
Russian Machists
Jakov Berman
Osip Helfond
Sergei Suvorov
Pavel Yushkevich
See also
Anti-Dühring
Empirio-criticism
Vladimir Lenin bibliography
Notes
Further reading
Robert V. Daniels: A Documentary History of Communism in Russia: From Lenin to Gorbachev, 1993, .
External links
Materialism and Empirio-criticism by Vladimir Lenin at the Marxists Internet Archive
Materialism and Empirio-criticism, a PDF version published by Progress Publishers
1909 non-fiction books
Works by Vladimir Lenin
Materialism
Marxism
Academic works about philosophy
Epistemology literature
Dialectical materialism | Materialism and Empirio-criticism | [
"Physics"
] | 1,126 | [
"Materialism",
"Matter"
] |
7,092,764 | https://en.wikipedia.org/wiki/Characteristic%20mode%20analysis | Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalizes operator relating field and induced sources. Under certain conditions, the set of the CM is unique and complete (at least theoretically) and thereby capable of describing the behavior of a studied object in full.
This article deals with characteristic mode decomposition in electromagnetics, a domain in which the CM theory has originally been proposed.
Background
CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. The theory was subsequently generalized by Harrington and Mautz for antennas. Harrington, Mautz and their students also successively developed several other extensions of the theory. Even though some precursors were published back in the late 1940s, the full potential of CM has remained unrecognized for an additional 40 years. The capabilities of CM were revisited in 2007 and, since then, interest in CM has dramatically increased. The subsequent boom of CM theory is reflected by the number of prominent publications and applications.
Definition
For simplicity, only the original form of the CM – formulated for perfectly electrically conducting (PEC) bodies in free space — will be treated in this article. The electromagnetic quantities will solely be represented as Fourier's images in frequency domain. Lorenz's gauge is used.
The scattering of an electromagnetic wave on a PEC body is represented via a boundary condition on the PEC body, namely
$\hat{\vec{n}} \times \left(\vec{E}^\mathrm{i} + \vec{E}^\mathrm{s}\right) = \vec{0},$
with $\hat{\vec{n}}$ representing the unit normal to the PEC surface, $\vec{E}^\mathrm{i}$ representing the incident electric field intensity, and $\vec{E}^\mathrm{s}$ representing the scattered electric field intensity, defined as
$\vec{E}^\mathrm{s} = -j\omega\vec{A} - \nabla\varphi,$
with $j$ being the imaginary unit, $\omega$ being the angular frequency, and $\vec{A}$ being the vector potential
$\vec{A}\left(\vec{r}\right) = \mu_0 \int_\Omega \vec{J}\left(\vec{r}'\right) G\left(\vec{r},\vec{r}'\right) \mathrm{d}S',$
$\mu_0$ being the vacuum permeability, $\varphi$ being the scalar potential
$\varphi\left(\vec{r}\right) = \frac{-1}{j\omega\varepsilon_0} \int_\Omega \nabla'\cdot\vec{J}\left(\vec{r}'\right) G\left(\vec{r},\vec{r}'\right) \mathrm{d}S',$
$\varepsilon_0$ being the vacuum permittivity, $G\left(\vec{r},\vec{r}'\right)$ being the scalar Green's function
$G\left(\vec{r},\vec{r}'\right) = \frac{e^{-jk\left|\vec{r}-\vec{r}'\right|}}{4\pi\left|\vec{r}-\vec{r}'\right|},$
and $k$ being the wavenumber. The integro-differential operator on the left-hand side is the one to be diagonalized via characteristic modes.
The governing equation of the CM decomposition is
$X\left(\vec{J}_n\right) = \lambda_n R\left(\vec{J}_n\right), \qquad (1)$
with $R$ and $X$ being the real and imaginary parts of the impedance operator, respectively: $Z\left(\vec{J}\right) = R\left(\vec{J}\right) + jX\left(\vec{J}\right)$. The operator $Z$ is defined by the tangential component of the scattered field on the PEC surface,
$Z\left(\vec{J}\right) = -\left[\vec{E}^\mathrm{s}\left(\vec{J}\right)\right]_\mathrm{tan}. \qquad (2)$
The outcome of (1) is a set of characteristic modes $\left\{\vec{J}_n\right\}$ accompanied by associated characteristic numbers $\left\{\lambda_n\right\}$. Clearly, (1) is a generalized eigenvalue problem, which, however, cannot be analytically solved (except for a few canonical bodies). Therefore, the numerical solution described in the following paragraph is commonly employed.
Matrix formulation
Discretization of the body of the scatterer into subdomains and the use of a set of linearly independent, piece-wise continuous basis functions $\left\{\vec{\psi}_n\right\}$, $n \in \left\{1,\dots,N\right\}$, allows the current density to be represented as
$\vec{J}\left(\vec{r}\right) \approx \sum_{n=1}^{N} I_n \vec{\psi}_n\left(\vec{r}\right),$
and by applying the Galerkin method, the impedance operator (2) is cast into the impedance matrix
$\mathbf{Z} = \mathbf{R} + j\mathbf{X}, \qquad Z_{mn} = \int_\Omega \vec{\psi}_m \cdot Z\left(\vec{\psi}_n\right) \mathrm{d}S.$
The eigenvalue problem (1) is then recast into its matrix form
$\mathbf{X}\mathbf{I}_n = \lambda_n \mathbf{R}\mathbf{I}_n,$
which can easily be solved using, e.g., the generalized Schur decomposition or the implicitly restarted Arnoldi method, yielding a finite set of expansion coefficients $\mathbf{I}_n$ and associated characteristic numbers $\lambda_n$. The properties of the CM decomposition are investigated below.
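A minimal sketch of this matrix eigensolve is shown below, assuming an impedance matrix has already been assembled by a method-of-moments code; here the matrices are random symmetric placeholders (with the weighting matrix made positive definite), so the numbers are illustrative only.
```python
# Minimal sketch of the characteristic-mode eigensolve in the matrix form above.
# A real MoM code would supply Z = R + jX; R and X below are placeholders.
import numpy as np
from scipy.linalg import eigh

N = 50
rng = np.random.default_rng(0)
B = rng.standard_normal((N, N))
R = B @ B.T + N * np.eye(N)          # placeholder weighting matrix, positive definite
C = rng.standard_normal((N, N))
X = 0.5 * (C + C.T)                  # placeholder reactance matrix, symmetric

# Generalized eigenvalue problem  X I_n = lambda_n R I_n
lam, I = eigh(X, R)                  # real eigenvalues, R-orthogonal eigenvectors

order = np.argsort(np.abs(lam))      # modes nearest resonance (lambda_n ~ 0) first
lam, I = lam[order], I[:, order]

# eigh returns eigenvectors normalized so that I.T @ R @ I is the identity matrix
MS = 1.0 / np.abs(1.0 + 1j * lam)    # modal significance, often reported with lambda_n
print(lam[:5])
print(MS[:5])
```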
Properties
The properties of CM decomposition are demonstrated in its matrix form.
First, recall that the bilinear forms
$P_\mathrm{r} = \frac{1}{2} \mathbf{I}^\mathrm{H} \mathbf{R} \mathbf{I}$
and
$P_\mathrm{react} = \frac{1}{2} \mathbf{I}^\mathrm{H} \mathbf{X} \mathbf{I},$
where superscript $\mathrm{H}$ denotes the Hermitian transpose and where $\mathbf{I}$ represents an arbitrary surface current distribution, correspond to the radiated power and the reactive net power, respectively. The following properties can then be easily distilled:
The weighting matrix $\mathbf{R}$ is theoretically positive definite and $\mathbf{X}$ is indefinite. The Rayleigh quotient
$\lambda_n = \frac{\mathbf{I}_n^\mathrm{H} \mathbf{X} \mathbf{I}_n}{\mathbf{I}_n^\mathrm{H} \mathbf{R} \mathbf{I}_n}$
then spans the range $\left(-\infty, \infty\right)$ and indicates whether the characteristic mode is capacitive ($\lambda_n < 0$), inductive ($\lambda_n > 0$), or in resonance ($\lambda_n = 0$). In reality, the Rayleigh quotient is limited by the numerical dynamic range of the machine precision used, and the number of correctly found modes is limited.
The characteristic numbers evolve with frequency, i.e., $\lambda_n = \lambda_n\left(\omega\right)$; they can cross each other, or they can be the same (in the case of degeneracies). For this reason, the tracking of modes is often applied to get smooth curves $\lambda_n\left(\omega\right)$. Unfortunately, this process is partly heuristic and the tracking algorithms are still far from perfection.
The characteristic modes can be chosen as real-valued functions. In other words, characteristic modes form a set of equiphase currents.
The CM decomposition is invariant with respect to the amplitude of the characteristic modes. This fact is used to normalize the currents so that each mode radiates unit power,
$\frac{1}{2} \mathbf{I}_m^\mathrm{H} \mathbf{R} \mathbf{I}_n = \delta_{mn}, \qquad \frac{1}{2} \mathbf{I}_m^\mathrm{H} \mathbf{Z} \mathbf{I}_n = \left(1 + j\lambda_n\right)\delta_{mn}.$
This last relation presents the ability of characteristic modes to diagonalize the impedance operator (2) and demonstrates far-field orthogonality, i.e., the modal far fields satisfy
$\int_{4\pi} \vec{F}_m^{\,*} \cdot \vec{F}_n \,\mathrm{d}\Omega \propto \delta_{mn}.$
Modal quantities
The modal currents can be used to evaluate antenna parameters in their modal form, for example:
modal far field (as a function of polarization and direction),
modal directivity,
modal radiation efficiency,
modal quality factor,
modal impedance.
These quantities can be used for analysis, feed synthesis, shape optimization of the radiator, or antenna characterization.
Applications and further development
The number of potential applications is enormous and still growing:
antenna analysis and synthesis,
design of MIMO antennas,
compact antenna design (RFID, Wi-Fi),
UAV antennas,
selective excitation of chassis and platforms,
model order reduction,
bandwidth enhancement,
nanotubes and metamaterials,
validation of computational electromagnetics codes.
The prospective topics include
electrically large structures calculated using MLFMA,
dielectrics,
use of Combined Field Integral Equation,
periodic structures,
formulation for arrays.
Software
CM decomposition has recently been implemented in major electromagnetic simulators, namely in FEKO, CST-MWS, and WIPL-D. Other packages are about to support it soon, for example HFSS and CEM One. In addition, there is a plethora of in-house and academic packages which are capable of evaluating CM and many associated parameters.
Alternative bases
CM are useful to understand radiator's operation better. They have been used with great success for many practical purposes. However, it is important to stress that they are not perfect and it is often better to use other formulations such as energy modes, radiation modes, stored energy modes or radiation efficiency modes.
References
Electromagnetism
Electrodynamics
Antennas (radio)
Numerical differential equations
Computational electromagnetics | Characteristic mode analysis | [
"Physics",
"Mathematics"
] | 1,229 | [
"Electromagnetism",
"Physical phenomena",
"Computational electromagnetics",
"Computational physics",
"Fundamental interactions",
"Electrodynamics",
"Dynamical systems"
] |
7,092,991 | https://en.wikipedia.org/wiki/Norm%20%28artificial%20intelligence%29 | Norms can be considered from different perspectives in artificial intelligence to create computers and computer software that are capable of intelligent behaviour.
In artificial intelligence and law, legal norms are represented in computational tools so that they can be reasoned upon automatically. In multi-agent systems (MAS), a branch of artificial intelligence (AI), a norm is a guide for the common conduct of agents, thereby easing their decision-making, coordination and organization.
Since most problems concerning regulation of the interaction of autonomous agents are linked to issues traditionally addressed by legal studies, and since law is the most pervasive and developed normative system, efforts to account for norms in artificial intelligence and law and in normative multi-agent systems often overlap.
Artificial intelligence and law
With the arrival of computer applications into the legal domain, and especially artificial intelligence applied to it, logic has been used as the major tool to formalize legal reasoning, and it has been developed in many directions, ranging from deontic logics to formal systems of argumentation.
The knowledge base of legal reasoning systems usually includes legal norms (such as governmental regulations and contracts), and as a consequence, legal rules are the focus of knowledge representation and reasoning approaches that automate and solve complex legal tasks. Legal norms are typically represented in a logic-based formalism, such as deontic logic.
Artificial intelligence and law applications using an explicit representation of norms range from checking the compliance of business processes and the automatic execution of smart contracts to legal expert systems advising people on legal matters.
Multi-agent systems
Norms in multi-agent systems may appear with different degrees of explicitness, ranging from fully unambiguous written prescriptions to implicit unwritten norms or tacit emerging patterns. Computer scientists’ studies mirror this polarity. Explicit norms are typically investigated in formal logics (e.g. deontic logics and argumentation) to represent and reason upon them, leading eventually to architectures for cognitive agents, while implicit norms are modelled as patterns emerging from repeated interactions amongst agents (typically reinforcement-learning agents). Explicit and implicit norms can be used together to coordinate agents.
Explicit norms are typically represented as deontic statements that aim at regulating the life of software agents and the interactions among them. A norm can be an obligation, a permission or a prohibition, and is often represented with some dialect or extension of deontic logic. By contrast, implicit norms are social norms that are not written down, and they usually emerge from the repetitive interactions of agents.
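As a toy illustration (not taken from any particular normative MAS framework), the following Python sketch represents explicit norms as deontic statements and performs a naive compliance check over a set of observed actions; all class and field names are assumptions made for the example.
```python
# Toy sketch of an explicit norm as a deontic statement plus a naive compliance check.
from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    OBLIGATION = "O"
    PERMISSION = "P"
    PROHIBITION = "F"

@dataclass
class Norm:
    modality: Deontic
    action: str            # the regulated action, e.g. "submit_report"
    condition: str = ""    # context in which the norm applies (free text here)

def violates(norm: Norm, performed_actions: set) -> bool:
    """Obligations are violated by omission, prohibitions by performance."""
    if norm.modality is Deontic.OBLIGATION:
        return norm.action not in performed_actions
    if norm.modality is Deontic.PROHIBITION:
        return norm.action in performed_actions
    return False           # permissions cannot be violated

norms = [Norm(Deontic.OBLIGATION, "submit_report"),
         Norm(Deontic.PROHIBITION, "share_private_data")]
actions = {"share_private_data"}
print([(n.modality.name, n.action) for n in norms if violates(n, actions)])
```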
References
Multi-agent systems
Computer law
Deontic logic | Norm (artificial intelligence) | [
"Technology",
"Engineering"
] | 509 | [
"Artificial intelligence engineering",
"Computer law",
"Computing and society",
"Multi-agent systems"
] |
7,093,304 | https://en.wikipedia.org/wiki/Simulated%20moving%20bed | In manufacturing, the simulated moving bed (SMB) process is a highly engineered process for implementing chromatographic separation. It is used to separate one chemical compound or one class of chemical compounds from one or more other chemical compounds to provide significant quantities of the purified or enriched material at a lower cost than could be obtained using simple (batch) chromatography. It cannot provide any separation or purification that cannot be done by a simple column purification. The process is rather complicated. The single advantage which it brings to a chromatographic purification is that it allows the production of large quantities of highly purified material at a dramatically reduced cost. The cost reductions come about as a result of: the use of a smaller amount of chromatographic separation media stationary phase, a continuous and high rate of production, and decreased solvent and energy requirements. This improved economic performance is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely and allow very high solute loadings to the process.
In the conventional moving bed technique of production chromatography, the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique, instead of moving the bed, the positions of the feed inlet, the solvent or eluent inlet, the desired-product outlet and the undesired-product outlet are moved continuously, giving the impression of a moving bed in which the solid flows continuously countercurrent to the liquid.
Construction
Specifically, an SMB system has two or more identical columns, which are connected to the mobile phase pump, and each other, by a multi-port valve. The plumbing is configured in such a way that:
a) all columns will be connected in series, forming a single continuous loop;
b) typically, between each column there will be provisions for four process streams: incoming feed mixture, exiting purified fast component, exiting purified slow component, and incoming solvent or eluent;
and
c) each process stream (two inlets and two outlets) will advance to the next column position, in the same direction, after a set time (the step time), as sketched below.
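The following Python sketch illustrates the port-switching logic described above: the four process streams advance by one column position, in the same direction, at every step time around a closed loop of columns. The eight-column layout and the initial port positions are illustrative assumptions, not a specific plant design.
```python
# Schematic sketch of simulated-moving-bed port switching: every stream moves one
# column forward each step time, wrapping around the closed loop of columns.
N_COLUMNS = 8

# initial column index of each stream inlet/outlet (assumed starting layout)
ports = {"eluent_in": 0, "extract_out": 2, "feed_in": 4, "raffinate_out": 6}

def advance(ports: dict, n_columns: int) -> dict:
    """Shift every port one column forward, wrapping around the loop."""
    return {name: (pos + 1) % n_columns for name, pos in ports.items()}

for step in range(4):                 # simulate a few step times
    print(f"step {step}: {ports}")
    ports = advance(ports, N_COLUMNS)
```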
Advantages
SMB provides lower production cost by requiring less column volume, less chromatographic separation media ("packing" or "stationary phase"), using less solvent and less energy, and requiring far less labor.
At industrial scale an SMB chromatographic separator is operated continuously, requiring less resin and less solvent than batch chromatography. The continuous operation facilitates operation control and integration into production plants.
Drawbacks
The drawbacks of the SMB are higher investment cost compared to single column operations, a higher complexity, as well as higher maintenance costs. But these drawbacks are effectively compensated by the better yield and a much lower solvent consumption as well as a much higher productivity compared to simple batch separations.
For purifications, in particular the isolation of a single intermediate component or a fraction out of a multicomponent mixture, the SMB is less well suited. Normally, a single SMB will separate only two fractions from each other, but a series or "train" of SMBs can perform multiple cuts and purify one or more products from a multi-component mixture. SMB is not readily suited to solvent gradients, and solvent gradient purification may be preferred for the purification of some biomolecules. A continuous chromatography technique that overcomes the two-fraction limit and allows gradients is multicolumn countercurrent solvent gradient purification (MCSGP).
Applications
In size exclusion chromatography, where the separation process is driven by entropy, it is not possible to increase the resolution attained by a column via temperature or solvent gradients. Consequently, these separations often require SMB, to extend usable retention time differences between the molecules or particles being separated. SMB is also very useful in the pharmaceutical industry, where separation of molecules having different chirality must be done on a very large scale. For the purification of fructose, e.g. in high fructose corn syrup, or amino-acids, biological-acids, etc. on an industrial scale, simulated moving bed chromatography is used in order to improve the economics of the production.
See also
Chromatography
Multicolumn countercurrent solvent gradient purification
References
Chromatography | Simulated moving bed | [
"Chemistry"
] | 909 | [
"Chromatography",
"Separation processes"
] |
7,093,376 | https://en.wikipedia.org/wiki/PVR-resistant%20advertising | PVR (DVR)-resistant advertising is a form of advertising which is designed specifically to remain viewable despite a user skipping through the commercials when using a device such as a TiVo or other digital video recorder. For instance, a black bar with a product's tagline and logo, or the title of a promoted television program or film and its release date, may appear at the top of the screen and remain visible while being fast-forwarded for much longer than a usual commercial would.
This technique was first used by the British version of the cable network FX when advertising Brotherhood.
References
Advertising by medium
Digital video recorders | PVR-resistant advertising | [
"Technology"
] | 122 | [
"Digital video recorders",
"Recording devices"
] |
7,093,932 | https://en.wikipedia.org/wiki/Postponement | Postponement is a business strategy employed in manufacturing and supply chain management which maximizes possible benefit and minimizes risk by delaying further investment into a product or service until the last possible moment, or where a manufacturer produces a generic product, which can be modified at a later stage before the final distribution to the customer. An example of such a strategy is Dell Computers' build-to-order online store. One of the earliest references to the concept was in a paper by Walter Zinn and Donald J. Bowersox in the Journal of Business Logistics in 1988, which highlighted five types: labelling, packaging, assembly, manufacturing and time postponements.
One of the most modern definitions today is the following, suggested by Christopher (2005):
A successful example of postponement – delayed differentiation – is the use of "vanilla boxes". Semi-finished computers are stored in advance of seeing the actual demand for the finished products. Upon seeing the demand, and thus with no residual uncertainty, these "vanilla boxes" are finished by adding (or removing) components. The three key interrelated decisions are: (a) how many different types of vanilla boxes to stock, (b) in what quantities, and (c) how to finish them to meet the order most effectively. Another example is an umbrella manufacturer who does not know what the demand will be for different colored umbrellas. The manufacturer will manufacture all white umbrellas and dye them later, when umbrellas are in season and it is easier to predict demand for each color of umbrella. This way the manufacturer can stock up on white umbrellas early with minimal labor costs, and be sure of the demand before dedicating time and money to predicting it far in the future.
Historical development of postponement
According to various logistics journals, supply chain management books and articles, the postponement concept has three key dates in its development in the 20th century – 1950, 1965 and 1988:
Marketing theorist Alderson in 1950 was the first to create the concept of postponement. He stated that it could reduce costs, from a marketing point of view, by postponing product differentiation to as late as possible. He believed that the closer the product is to its consumer, the more differentiated it becomes due to changes in unique tastes and demands. In this situation, both the consumer and producer benefit, as there is less risk from uncertainty for the producer, while the consumer is left satisfied with the product.
In 1965, L. P. Bucklin argued that Alderson's interpretation needed modification, as it was still unclear how exactly postponement was applied at the channel level, namely distribution. He explained that postponing ownership of goods shifts the risk to another partner in the supply chain, meaning that some institution involved in the chain – be it the consumer, the producer or one of the intermediaries – has to bear the risk. In addition, Bucklin claimed that postponement might make inventories ineffective, since there is no longer a need to commit resources to holding stock. To solve the problem Bucklin developed the speculation concept, with the aim of creating a speculation-postponement strategy. Speculation allowed large quantities of goods to be ordered, which already cuts the costs of transportation and sorting. These goods are then placed into speculative inventories and drawn down as orders arrive. The ideal strategy is to use either speculation or postponement in the distribution channel, depending on competition and potential risk savings.
Zinn and Bowersox in 1988 divided postponement into five different types to improve distribution systems: four form postponements (labeling, packaging, assembly, manufacturing) and time postponement. These strategies were created with the aim of saving costs, and Zinn and Bowersox (1988) therefore created a useful cost model to see how each type of postponement affects costs.
After the development of the concept in the 20th century, researchers started defining postponement differently since 2000 and there are two key developments in 2001 and 2004. In 2001, Remko Van Hoek pointed out that it is important to analyze postponement not just on the marketing and distribution channel levels but also on the supply chain level. He argued that previous theories developed in the 20th century had gaps in their research on postponement, and identified 5 challenges: 1. Postponement as a supply chain concept, 2. Integrating related supply chain concepts, 3. Postponement in the globalizing supply chain, 4. Postponement in the customized supply chain, 5. Methodological upgrading of postponement.
In the first challenge, Van Hoek criticised Bucklin’s and Zinn’s postponement theories as lacking application throughout the whole supply chain, since they only linked their theories to one of its levels (upstream – sourcing & components, midstream - manufacturing, downstream - distribution). Professor Van Hoek states that “specific study should be undertaken to assess what extent postponement is applied at various positions in the supply chain”.
The second challenge states that to cover the entire supply chain in conceptualization of postponement, a researcher would need to engage related concepts, e.g. just-in-time manufacturing and supply, efficient consumer response.
Globalization in postponement is the third challenge. Van Hoek states that there are differences in language and culture across the world, and that postponement is much more widely applied in Western countries than in emerging Asian countries. He therefore advises that these geographical dimensions be analysed when conducting research on postponement.
The fourth challenge discusses the lack of a typology of postponement. Researchers should pay attention not only to manufacturing- and logistics-related postponement but also to service postponement, since the concept applies to services too.
Finally, the fifth challenge states that a solid research plan on postponement should follow a triangulation model: the first step examines how postponement is implemented in a global supply chain; the second step examines where, to what extent and how postponement is applied; and the third step examines the benefits of postponement in the customized supply chain.
Van Hoek made a solid contribution to the development of the postponement concept by setting out these five challenges and by raising interest in postponement; more literature on the topic has since become available.
Yang et al. (2004) arranged the Zinn and Bowersox postponement strategies into more precise groups and explained how each strategy is matched to a type of postponement.
Yang et al. stated that in order to cope with a high level of uncertainty, purchasing postponement (purchasing materials as close to production as possible) and product development postponement may be applied, so that no physical inventory is held. In contrast, to deal with low uncertainty, logistics postponement (reduction of obsolete inventories, just-in-time delivery) and production postponement are used, keeping semi-finished product. With high modularity (when components can be incorporated into products with almost no change), product development and production postponements are used, whereas with low modularity (when customization is required), logistics and purchasing postponements are used. This is exactly what was lacking in the 20th century, because it was not known whether physical inventory, semi-finished or finished products would work best under fluctuating consumer demand. Yang et al. therefore provide a guideline on how to manage this uncertainty.
Terminology
Thacker uses the term "point(s) of mutation" to refer to the potentially postponable stages in a production process when products become more specialised.
See also
Procrastination
References
Further reading
Swaminathan, J. M., & Lee, H. L. (2003), Design for Postponement. Handbooks in Operations Research and Management Science, 11 (Supply Chain Management: Design, Coordination and Operation), 199-226
Business terms
Manufacturing | Postponement | [
"Engineering"
] | 1,631 | [
"Manufacturing",
"Mechanical engineering"
] |
7,093,937 | https://en.wikipedia.org/wiki/Metacomputing | Metacomputing is all computing and computing-oriented activity which involves computing knowledge (science and technology) utilized for the research, development and application of different types of computing. It may also deal with numerous types of computing applications, such as industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranets and other territorially distributed computer networks for special purposes.
Uses
In computer science
Metacomputing, as the computing of computing, includes the organization of large computer networks, the choice of design criteria (for example, a peer-to-peer or centralized solution) and the development of metacomputing software (middleware, metaprogramming). In specific domains, the concept of metacomputing is used to describe software meta-layers, which are networked platforms for the development of user-oriented calculations, for example for computational physics and bio-informatics.
Here, serious scientific problems of system and network complexity emerge, related not only to domain-dependent complexity but also to the systemic meta-complexity of computer network infrastructures.
Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems function as fifth-generation computer languages, which require an underlying metaprocessor software operating system in order to operate. Typically metacomputing occurs in an interpreted or real-time compiling system, since the changing nature of information in processing results may lead to an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform).
In socio-cognitive engineering
From the human and social perspectives, metacomputing is especially focused on human-computer software and cognitive interrelations/interfaces, on the possibilities of developing intelligent computer grids for the cooperation of human organizations, and on ubiquitous computing technologies. In particular, it relates to the development of software infrastructures for the computational modeling and simulation of cognitive architectures for various decision support systems.
In systemics and from philosophical perspective
Metacomputing refers to the general problems of the computationality of human knowledge and to the limits of transforming human knowledge and individual thinking into the form of computer programs. These and similar questions are also of interest to mathematical psychology.
See also
Complex system
Computer
Distributed computing
High-performance computing
Meta-
Meta-knowledge
Meta-mathematics
Metacomputing software
Metaprogramming
Parallel computing
Quantum computing
Supercomputing
References
Further reading
Special Issue on Metacomputing: From Workstation Clusters to Internet computing, Future Generation Computer Systems, Gentzsch W. (editor), No. 15, North Holland (1999)
Metacomputing Project- with DARPA contribution
The Grid: International Efforts in Global Computing, Mark Baker, Rajkumar Buyya and Domenico Laforenza (2005)
Toward the Identification of the Real-World Meta-Complexity, (2004) NEST-IDEA Interdisciplinary Research
Journal of Mathematical Psychology
Classes of computers
Systems theory | Metacomputing | [
"Technology"
] | 632 | [
"Classes of computers",
"Computers",
"Computer systems"
] |
7,094,111 | https://en.wikipedia.org/wiki/Zappa%E2%80%93Sz%C3%A9p%20product | In mathematics, especially group theory, the Zappa–Szép product (also known as the Zappa–Rédei–Szép product, general product, knit product, exact factorization or bicrossed product) describes a way in which a group can be constructed from two subgroups. It is a generalization of the direct and semidirect products. It is named after Guido Zappa (1940) and Jenő Szép (1950) although it was independently studied by others including B.H. Neumann (1935), G.A. Miller (1935), and J.A. de Séguier (1904).
Internal Zappa–Szép products
Let G be a group with identity element e, and let H and K be subgroups of G. The following statements are equivalent:
G = HK and H ∩ K = {e}
For each g in G, there exists a unique h in H and a unique k in K such that g = hk.
If either (and hence both) of these statements hold, then G is said to be an internal Zappa–Szép product of H and K.
Examples
Let G = GL(n,C), the general linear group of invertible n × n matrices over the complex numbers. For each matrix A in G, the QR decomposition asserts that there exists a unique unitary matrix Q and a unique upper triangular matrix R with positive real entries on the main diagonal such that A = QR. Thus G is a Zappa–Szép product of the unitary group U(n) and the group (say) K of upper triangular matrices with positive diagonal entries.
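As a hedged illustration of the uniqueness statement above, the following Python/NumPy sketch normalizes the factorization returned by numpy.linalg.qr (which does not by itself promise positive diagonal entries) so that R lies in the group K of upper triangular matrices with positive real diagonal; the random test matrix and the seed are arbitrary assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)
phases = np.diag(R) / np.abs(np.diag(R))   # unit complex numbers (diagonal is nonzero for a generic A)
D = np.diag(phases)
Q, R = Q @ D, D.conj().T @ R               # A = (Q D)(D* R) still holds

assert np.allclose(Q.conj().T @ Q, np.eye(4))   # Q is unitary
assert np.allclose(np.diag(R).imag, 0)          # diagonal of R is real ...
assert np.all(np.diag(R).real > 0)              # ... and positive
assert np.allclose(Q @ R, A)                    # A = QR recovered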
One of the most important examples of this is Philip Hall's 1937 theorem on the existence of Sylow systems for soluble groups. This shows that every soluble group is a Zappa–Szép product of a Hall p'-subgroup and a Sylow p-subgroup, and in fact that the group is a (multiple factor) Zappa–Szép product of a certain set of representatives of its Sylow subgroups.
In 1935, George Miller showed that any non-regular transitive permutation group with a regular subgroup is a Zappa–Szép product of the regular subgroup and a point stabilizer. He gives PSL(2,11) and the alternating group of degree 5 as examples, and of course every alternating group of prime degree is an example. This same paper gives a number of examples of groups which cannot be realized as Zappa–Szép products of proper subgroups, such as the quaternion group and the alternating group of degree 6.
External Zappa–Szép products
As with the direct and semidirect products, there is an external version of the Zappa–Szép product for groups which are not known a priori to be subgroups of a given group. To motivate this, let G = HK be an internal Zappa–Szép product of subgroups H and K of the group G. For each k in K and each h in H, there exist α(k, h) in H and β(k, h) in K such that kh = α(k, h) β(k, h). This defines mappings α : K × H → H and β : K × H → K which turn out to have the following properties:
α(e, h) = h and β(k, e) = k for all h in H and k in K.
α(k1k2, h) = α(k1, α(k2, h))
β(k, h1h2) = β(β(k, h1), h2)
α(k, h1h2) = α(k, h1) α(β(k, h1), h2)
β(k1k2, h) = β(k1, α(k2, h)) β(k2, h)
for all h1, h2 in H, k1, k2 in K. From these, it follows that
For each k in K, the mapping h → α(k, h) is a bijection of H.
For each h in H, the mapping k → β(k, h) is a bijection of K.
(Indeed, suppose α(k, h1) = α(k, h2). Then h1 = α(k−1k, h1) = α(k−1, α(k, h1)) = α(k−1, α(k, h2)) = h2. This establishes injectivity, and for surjectivity, use h = α(k, α(k−1, h)).)
More concisely, the first three properties above assert that the mapping α : K × H → H is a left action of K on (the underlying set of) H and that β : K × H → K is a right action of H on (the underlying set of) K. If we denote the left action by (k, h) → k·h and the right action by (k, h) → k^h, then the last two properties amount to k·(h1h2) = (k·h1)((k^h1)·h2) and (k1k2)^h = (k1^(k2·h))(k2^h).
Turning this around, suppose H and K are groups (and let e denote each group's identity element) and suppose there exist mappings α : K × H → H and β : K × H → K satisfying the properties above. On the cartesian product H × K, define a multiplication and an inversion mapping by, respectively,
(h1, k1) (h2, k2) = (h1 α(k1, h2), β(k1, h2) k2)
(h, k)−1 = (α(k−1, h−1), β(k−1, h−1))
Then H × K is a group called the external Zappa–Szép product of the groups H and K. The subsets H × {e} and {e} × K are subgroups isomorphic to H and K, respectively, and H × K is, in fact, an internal Zappa–Szép product of H × {e} and {e} × K.
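The construction can be checked mechanically on a small example. The following Python sketch takes G = S3 with H = ⟨(0 1 2)⟩ and K = ⟨(0 1)⟩ (an illustrative choice, and in fact a semidirect product), recovers α and β from the unique factorizations kh = α(k, h)β(k, h), builds the external product on H × K as defined above, and verifies the group axioms by brute force.

from itertools import product

def compose(p, q):            # permutations as tuples: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
H = [e, (1, 2, 0), (2, 0, 1)]            # cyclic subgroup of order 3
K = [e, (1, 0, 2)]                       # subgroup of order 2

def factor(g):                # unique h in H, k in K with g = h*k
    for h, k in product(H, K):
        if compose(h, k) == g:
            return h, k
    raise ValueError("element is not in HK")

def alpha(k, h):              # k*h = alpha(k, h) * beta(k, h)
    return factor(compose(k, h))[0]

def beta(k, h):
    return factor(compose(k, h))[1]

def mul(x, y):                # multiplication on the external product H x K
    (h1, k1), (h2, k2) = x, y
    return (compose(h1, alpha(k1, h2)), compose(beta(k1, h2), k2))

elems = list(product(H, K))
ident = (e, e)
assert all(mul(ident, x) == x == mul(x, ident) for x in elems)
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in elems for y in elems for z in elems)
print("external Zappa-Szep product of H and K is a group of order", len(elems))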
Relation to semidirect and direct products
Let G = HK be an internal Zappa–Szép product of subgroups H and K. If H is normal in G, then the mappings α and β are given by, respectively, α(k, h) = khk−1 and β(k, h) = k. This is easy to see because kh = (khk−1)k, and khk−1 lies in H since H is normal in G. In this case, G is an internal semidirect product of H and K.
If, in addition, K is normal in G, then α(k,h) = h. In this case, G is an internal direct product of H and K.
See also
Complement (group theory)
References
, Kap. VI, §4.
; Edizioni Cremonense, Rome, (1942) 119–125.
Group theory
"Mathematics"
] | 1,528 | [
"Group theory",
"Fields of abstract algebra"
] |
7,094,820 | https://en.wikipedia.org/wiki/DAMAC%20Residenze | DAMAC Residenze, formerly named DAMAC Heights and Ocean Heights 2, is an 85-storey supertall skyscraper in Dubai Marina, Dubai. It is the second supertall project by DAMAC Properties, the first being Ocean Heights, which is also located in Dubai Marina. The building overlooks the Palm Jumeirah.
As of 2022, DAMAC Residenze is the 13th-tallest building in Dubai and the 12th-tallest residential building in the world.
Architecture and design
Damac Residenze is located in the upper part of the marina, the most populated district of Dubai Marina, which contains nine skyscrapers and about ten buildings between 200 m and 300 m tall. The design incorporates elements that increase the field of view, giving the impression of a larger space between the DAMAC and the other towers. According to the architects Aedas, the curvature of the tower is crucial to provide views for the largest number of apartments possible.
The tower was originally planned to be taller, but its height was reduced in February 2013.
In February 2013, the foundation work of DAMAC Residenze was in progress, while the piling had already been completed.
The building was topped out in September 2016.
See also
DAMAC Properties
List of tallest buildings in Dubai
List of tallest buildings in the United Arab Emirates
List of tallest residential buildings
References
DAMAC Canal Heights. Retrieved 9 January 2024.
External links
DAMAC Properties Official website
Property In Dubai
Residential skyscrapers in Dubai
Residential buildings completed in 2018
Buildings and structures completed in 2018
2018 establishments in the United Arab Emirates
Architecture in Dubai
High-tech architecture
Postmodern architecture
Pencil towers | DAMAC Residenze | [
"Engineering"
] | 329 | [
"Postmodern architecture",
"Architecture"
] |
7,094,834 | https://en.wikipedia.org/wiki/Damp%20proofing | Damp proofing in construction is a type of moisture control applied to building walls and floors to prevent moisture from passing into the interior spaces. Dampness problems are among the most frequent problems encountered in residences.
Damp proofing is defined by the American Society for Testing and Materials (ASTM) as a material that resists the passage of water with no hydrostatic pressure. Waterproofing is defined by the ASTM as a treatment that resists the passage of water under pressure. Generally, damp proofing keeps exterior moisture from entering a building; vapor barriers, a separate category, keep interior moisture from getting into walls. Moisture resistance is not necessarily absolute; it is usually stated in terms of acceptable limits based on engineering tolerances and a specific test method.
Methods
Damp proofing is accomplished several ways including:
A damp-proof course (DPC) is a barrier through the structure designed to prevent moisture rising by capillary action, a phenomenon known as rising damp. Rising damp is the effect of water rising from the ground into the property. The damp proof course may be horizontal or vertical. A DPC layer is usually laid below all masonry walls, regardless of whether the wall is a load-bearing wall or a partition wall.
A damp-proof membrane (DPM) is a membrane material applied to prevent moisture transmission. A common example is polyethylene sheeting laid under a concrete slab to prevent the concrete from gaining moisture through capillary action. A DPM may be used for the DPC.
Integral damp proofing in concrete involves adding materials to the concrete mix to make the concrete itself impermeable.
Surface suppressant coating with thin water proof materials such as epoxy resin for resistance to non-pressurized moisture such as rain water or a coating of cement sprayed on such as shotcrete which can resist water under pressure.
Cavity wall construction, such as rainscreen construction, is where the interior walls are separated from the exterior walls by a cavity.
Pressure grouting cracks and joints in masonry materials.
Materials
Materials widely used for damp proofing include:
Flexible materials like butyl rubber, hot bitumen (asphalt), plastic sheets, bituminous felts, sheets of lead, copper, etc.
Semi-rigid materials like mastic asphalt
Rigid materials, like impervious brick, stone, slate, cement mortar, or cement concrete painted with bitumen, etc.
Stones
Mortar with waterproofing compounds
Coarse sand layers under floors
Continuous plastic sheets under floors
Masonry construction
A DPC is a durable, impermeable material such as slate, felt paper, metal, plastic or special engineered bricks bedded into the mortar between two courses of bricks or blocks. It can often be seen as a thin line in the mortar near ground level. To create a continuous barrier, pieces of DPC or DPM may be sealed together. In addition, the DPC may be sealed to the DPM around the outside edges of the ground floor, completely sealing the inside of the building from the damp ground around it.
In a masonry cavity wall, there is usually a DPC in both the outer and inner wall. In the outer wall it is normally set just above ground level (the height of 2–3 brick courses). This allows rain to form puddles and splash up off the ground without saturating the wall above DPC level. The wall below the DPC may become saturated in rainy weather. The DPC in the inner wall is usually below floor level (under a suspended timber floor structure) or, with a solid concrete floor, it is usually found immediately above the floor slab so that it can be linked to the DPM under the floor slab. This enables installation of skirting boards above floor level without fear of puncturing the DPC. Alternatively, instead of fitting separate inner and outer DPCs, it is common in commercial housebuilding to use a one-piece length of rigid plastic (with an angled section) that fits neatly across the cavity and slots into both walls (a cavity tray). This method requires weep vents to enable water to drain from the cavity, otherwise dampness could rise from above the DPC.
Concrete walls and floors
Concrete normally allows moisture to pass through so a vertical vapor barrier is needed. Barriers may be a coating or membrane applied to the exterior of the concrete. The coating may be asphalt, asphalt emulsion, a thinned asphalt called cutback asphalt, or an elastomer. Membranes are rubberized asphalt or EPDM rubber. Rubberized products perform better because concrete sometimes develops cracks and the barrier does not crack with the concrete.
Remedial damp proofing
Until the 20th century, masonry buildings in Europe and North America were generally constructed from highly permeable materials such as stone and lime-based mortars and renders, covered with soft water-based paints, which all allowed any damp to diffuse into the air without damage. The later application of impermeable materials which prevent the natural dispersion of damp – such as tile, linoleum, cement- and gypsum-based materials and synthetic paints – is thought by some to be the most significant cause of damp problems in older buildings.
There are many solutions for dealing with dampness in existing buildings, the choice of which will largely be determined by the types of dampness that are affecting the building, e.g., rising damp, hygroscopic damp, condensation, penetrating damp, etc.
In older buildings, damp stains on internal walls are usually due to external factors such as:
Leaking rainwater gutters
Misdirected rainwater downpipes
Insufficient external drainage
Poor drip details to cills and other protrusions
Bridging of the damp proof course
Health and safety
Some DPC materials may contain asbestos fibres. This was more commonly found in the older, grey sealants as well as flexible tar boards.
References
External links
Building engineering
Moisture protection
Masonry
Horticulture
Landscape architecture
Indoor air pollution | Damp proofing | [
"Engineering"
] | 1,204 | [
"Building engineering",
"Landscape architecture",
"Construction",
"Civil engineering",
"Masonry",
"Architecture"
] |
7,095,290 | https://en.wikipedia.org/wiki/J.%20Howard%20Redfield | John Howard Redfield (June 8, 1879 – April 17, 1944) was an American mathematician, best known for discovery of what is now called Pólya enumeration theorem (PET) in 1927, ten years ahead of similar but independent discovery made by George Pólya. Redfield was a great-grandson of William Charles Redfield, one of the founders and the first president of AAAS.
Solution to MacMahon's conjecture
Redfield's ability is evident in letters exchanged among Redfield, Percy MacMahon, and Sir Thomas Muir, following the publication of Redfield's paper [1] in 1927. Apparently Redfield sent a copy of his paper to MacMahon. In reply (letter of November 19, 1927), MacMahon expresses the view that Redfield has made a valuable contribution to the subject and goes on to mention a conjecture which he himself made in his recently delivered Rouse-Ball memorial lecture. He also says that it is probable that Redfield's work would lead to a proof of it. Such was the case: in a draft reply dated December 26, 1927, Redfield writes:
"I am now able to demonstrate your conjectured expression...".
MacMahon, who had failed to prove it himself and then put the matter before men at both Cambridge and Oxford "without effect", delightedly wrote to Redfield (letter of January 9, 1928):
"when you first wrote to me I formed the opinion that with your powerful handling of the theory of substitutions it would be childs play to you and I was right. I congratulate you and feel sure that your methods will carry you far."
MacMahon urged Redfield to publish his new results and also informed Muir about them. In a letter to Redfield dated December 31, 1931, Muir also encourages him to publish his verification "without waiting for MacMahon's executors" and suggests the Journal of the London Mathematical Society as an appropriate medium. As far as is known, Redfield did not follow up this suggestion, but the proof of MacMahon's conjecture was included in an unpublished manuscript which appears to be a sequel to the paper [3].
Redfield's contemporaries on him
A letter from Professor Cletus Oakley to Frank Harary, dated December 19, 1963, reads in part:
"Howard Redfield was a graduate of Haverford College in the Class of 1899. He was a man of very broad interests and we do not have a continuous record of his doings. Directly after leaving college, he worked as a civil engineer. In college he took a lot of languages and mathematics. (There was no major department in those days.) After graduating from Haverford with a B.S. degree, he took a S.B. degree in M.I.T. and a M.A. and Ph.D. (mathematics) at Harvard. During the year 1907-1908, he studied romance philology at the University of Paris. In 1908-1909, he was an instructor in mathematics at Worcester Polytechnic Institute, Worcester, Massachusetts. In 1910-1911 he taught French at Swarthmore College and from 1912-1914 he was an assistant professor of romance languages at Princeton University. From 1916 onward until his death in 1944, he was a practicing civil engineer in Wayne, Pennsylvania. "
"I knew him from about 1938-1944. Indeed in 1940 he came to Haverford College and gave us some lectures on 'Electronic Digital Computers' (this was slightly before Eckert-Mauchly). Knowing him as I did in those later years, I could well understand how he would not make a great teacher. He was completely off in the clouds at all times. He never looked at you, he spoke softly with his eyes on the floor, he worked with his back to you and wrote on the board. His board work, however, was impeccable. It could have been photographed and printed by photo offset it was so perfect."
"He came to Haverford to talk to our math club many times and always had something new to say..."
Redfield's brother, Alfred, a marine biologist-oceanographer and former Associate Director of the Woods Hole Oceanographic Institution, wrote (letter to E. Keith Lloyd, September 8, 1976):
"During the later years of his life, he turned to mathematics and I usually found him working at it when I called on him. It was evident that this was his true love."
Publications
This publication is based on a manuscript discovered in Redfield's legacy by his daughter. The correspondence found with the manuscript revealed that it had been submitted for publication in the American Journal of Mathematics on October 19, 1940 and was rejected by the editors in a brief letter of January 7, 1941. Redfield answered the objections of the referee in great detail ten days later and asked specific questions, but he never received a reply to his rebuttal. Apparently it was not subsequently resubmitted elsewhere. The significance of this paper is discussed in.
This publication represents a typescript of a lecture delivered by Redfield in 1937. According to Lloyd, “The text of Redfield's lecture is very readable, and anyone wishing to study his work would be well advised to read the lecture before passing on to his 1927 and 1940 papers.”
References
1879 births
1944 deaths
20th-century American mathematicians
Combinatorialists
Haverford College alumni
Massachusetts Institute of Technology School of Science alumni
Harvard Graduate School of Arts and Sciences alumni
Worcester Polytechnic Institute faculty
Princeton University faculty
Swarthmore College faculty | J. Howard Redfield | [
"Mathematics"
] | 1,142 | [
"Combinatorialists",
"Combinatorics"
] |
7,095,521 | https://en.wikipedia.org/wiki/Uranium%20tailings | Uranium tailings or uranium tails are a radioactive waste byproduct (tailings) of conventional uranium mining and uranium enrichment. They contain the radioactive decay products from the uranium decay chains, mainly the U-238 chain, and heavy metals. Long-term storage or disposal of tailings may pose a danger for public health and safety.
Production
Uranium mill tailings are primarily the sandy process waste material from a conventional uranium mill. Milling is the first step in making fuel for nuclear reactors from natural uranium ore. The uranium extract is transformed into yellowcake.
The raw uranium ore is brought to the surface and crushed into a fine sand. The valuable uranium-bearing minerals are then removed via heap leaching with the use of acids or bases, and the remaining radioactive sludge, called "uranium tailings", is stored in huge impoundments. A short ton (907 kg) of ore yields one to five pounds (0.45 to 2.3 kg) of uranium depending on the uranium content of the mineral. Uranium tailings can retain up to 85% of the ore's original radioactivity.
Composition
The tailings contain mainly decay products from the decay chain involving Uranium-238. Uranium tailings contain over a dozen radioactive nuclides, which are the primary hazard posed by the tailings. The most important of these are thorium-230, radium-226, radon-222 (radon gas) and the daughter isotopes of radon decay, including polonium-210. All of those are naturally occurring radioactive materials or "NORM".
Health risks
Tailings contain heavy metals and radioactive radium. Radium then decays over thousands of years and radioactive radon gas is produced. Tailings are kept in piles for long-term storage or disposal and need to be maintained and monitored for leaks over the long term.
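As a rough illustration of the timescale, the following Python sketch evaluates simple exponential decay for radium-226; the half-life of about 1,600 years is a commonly cited figure added here for illustration and is not taken from this article.

HALF_LIFE_RA226_YEARS = 1600.0  # assumed, commonly cited value for Ra-226

def remaining_fraction(years, half_life=HALF_LIFE_RA226_YEARS):
    # N(t) / N0 = 2 ** (-t / t_half)
    return 0.5 ** (years / half_life)

for t in (100, 1000, 10000):
    print(f"after {t:>6} years: {remaining_fraction(t):.1%} of the Ra-226 remains")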
If uranium tailings are stored aboveground and allowed to dry out, the radioactive sand can be carried great distances by the wind, entering the food chain and bodies of water. The danger posed by such sand dispersal is uncertain at best given the dilution effect of dispersal. The majority of tailing mass will be inert rock, just as it was in the raw ore before the extraction of the uranium, but physically altered, ground up, mixed with large amounts of water and exposed to atmospheric oxygen, which can substantially alter chemical behaviour.
An EPA estimate of risk based on uranium tailings deposits existing in the United States in 1983 gave the figure of 500 lung cancer deaths per century if no countermeasures are taken.
See also
List of uranium mines
Uranium Mill Tailings Radiation Control Act
References
Radioactive waste
Uranium mining | Uranium tailings | [
"Physics",
"Chemistry",
"Technology"
] | 539 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
7,095,652 | https://en.wikipedia.org/wiki/Spurious%20wakeup | In computing, a spurious wakeup occurs when a thread wakes up from waiting on a condition variable and finds that the condition is still unsatisfied. It is referred to as spurious because the thread has seemingly been awoken for no reason. Spurious wakeups usually happen because in between the time when the condition variable was signaled and when the awakened thread was finally able to run, another thread ran first and changed the condition again. In general, if multiple threads are awakened, the first one to run will find the condition satisfied, but the others may find the condition unsatisfied. In this way, there is a race condition between all the awakened threads. The first thread to run will win the race and find the condition satisfied, while the other threads will lose the race, and experience a spurious wakeup.
The problem of spurious wakeup can be exacerbated on multiprocessor systems. When several threads are waiting on a single condition variable, the system may decide to wake all of them up when it is signaled. The system treats every signal() intended to wake one thread as a broadcast() that wakes all of them, thus breaking any possibly expected 1:1 relationship between signals and wakeups. If ten threads are waiting, only one will win and the other nine will experience spurious wakeup.
To allow for implementation flexibility in dealing with error conditions and races inside the operating system, condition variables may also be allowed to return from a wait even if not signaled, though it is not clear how many implementations do that. In the Solaris implementation of condition variables, a spurious wakeup may occur without the condition variable having been signaled if the process receives a signal; the wait system call aborts and returns EINTR. The Linux pthread implementation of condition variables guarantees that it will not do that.
Because spurious wakeup can happen, when a thread wakes after waiting on a condition variable, it should always check that the condition it sought is still satisfied. If it is not, it should go back to sleeping on the condition variable, waiting for another opportunity.
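A minimal sketch of this wait-in-a-loop idiom, using Python's threading.Condition (the same pattern applies to pthread_cond_wait in C); the producer/consumer names and the shared list are illustrative assumptions, not part of any particular API.

import threading

condition = threading.Condition()
items = []  # shared state protected by the condition's lock

def consumer():
    with condition:
        # Re-check the predicate every time wait() returns: the wakeup may be
        # spurious, or another thread may already have consumed the item.
        while not items:
            condition.wait()
        item = items.pop()
    print("consumed", item)

def producer():
    with condition:
        items.append("work")
        condition.notify()   # analogous to signal(); notify_all() is analogous to broadcast()

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()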
References
C POSIX library
Threads (computing) | Spurious wakeup | [
"Technology"
] | 437 | [
"Operating system stubs",
"Computing stubs"
] |
7,095,671 | https://en.wikipedia.org/wiki/Gysin%20homomorphism | In the field of mathematics known as algebraic topology, the Gysin sequence is a long exact sequence which relates the cohomology classes of the base space, the fiber and the total space of a sphere bundle. The Gysin sequence is a useful tool for calculating the cohomology rings given the Euler class of the sphere bundle and vice versa. It was introduced by , and is generalized by the Serre spectral sequence.
Definition
Consider a fiber-oriented sphere bundle with total space E, base space M, fiber Sk and projection map π : E → M.
Any such bundle defines a degree k + 1 cohomology class e called the Euler class of the bundle.
De Rham cohomology
Discussion of the sequence is clearest with de Rham cohomology. There cohomology classes are represented by differential forms, so that e can be represented by a (k + 1)-form.
The projection map induces a map in cohomology called its pullback π* : H^n(M) → H^n(E).
In the case of a fiber bundle, one can also define a pushforward map π_* : H^n(E) → H^(n−k)(M)
which acts by fiberwise integration of differential forms on the oriented sphere – note that this map goes "the wrong way": it is a covariant map between objects associated with a contravariant functor.
Gysin proved that the following is a long exact sequence:
⋯ → H^n(E) → H^(n−k)(M) → H^(n+1)(M) → H^(n+1)(E) → ⋯
where the first map is the pushforward π_*, the second is the wedge product ∧ e of a differential form with the Euler class e, and the third is the pullback π*.
Integral cohomology
The Gysin sequence is a long exact sequence not only for the de Rham cohomology of differential forms, but also for cohomology with integral coefficients. In the integral case one needs to replace the wedge product with the Euler class with the cup product, and the pushforward map no longer corresponds to integration.
Gysin homomorphism in algebraic geometry
Let i: X → Y be a (closed) regular embedding of codimension d, Y′ → Y a morphism and i′: X′ = X ×Y Y′ → Y′ the induced map. Let N be the pullback of the normal bundle of i to X′. Then the refined Gysin homomorphism i! refers to the composition
where
σ is the specialization homomorphism, which sends a k-dimensional subvariety V to the normal cone to the intersection of V and X′ in V. The result lies in N, since the normal cone is a closed subcone of N.
The second map is the (usual) Gysin homomorphism induced by the zero-section embedding of X′ into N.
The homomorphism i! encodes intersection product in intersection theory in that one either shows the intersection product of X and V to be given by the formula or takes this formula as a definition.
Example: Given a vector bundle E, let s: X → E be a section of E. Then, when s is a regular section, is the class of the zero-locus of s, where [X] is the fundamental class of X.
See also
Logarithmic form
Wang sequence
Notes
Sources
Algebraic topology | Gysin homomorphism | [
"Mathematics"
] | 605 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
7,095,752 | https://en.wikipedia.org/wiki/John%20Ockendon | John Richard Ockendon (born 1940) is an applied mathematician noted especially for his contribution to fluid dynamics and novel applications of mathematics to real world problems. He is a professor at the University of Oxford and an Emeritus Fellow at St Catherine's College, Oxford, served as the first director of the Oxford Centre for Collaborative Applied Mathematics (OCCAM) and a former director of the Smith Institute for Industrial Mathematics and System Engineering.
Education
Ockendon was privately educated at Dulwich College and the University of Oxford where he was awarded a Doctor of Philosophy degree in 1965 for research on fluid dynamics supervised by Alan B. Tayler.
Research and career
His initial fluid mechanics interests included hypersonic aerodynamics, creeping flow, sloshing and channel flows, leading on to flows in porous media, ship hydrodynamics and models for flow separation.
He moved on to free and moving boundary problems. He pioneered the study of diffusion-controlled moving boundary problems in the 1970s, his involvement centring on models for phase changes and elastic contact problems, all built around the paradigm of the Hele-Shaw free boundary problem. Other industrial collaboration has led to new ideas for lens design, fibre manufacture, extensional and surface-tension-driven flows and glass manufacture, fluidised-bed models, semiconductor device modelling and a range of other problems in mechanics and heat and mass transfer, especially scattering and ray theory, nonlinear wave propagation, nonlinear oscillations, nonlinear diffusion and impact in solids and liquids.
His efforts to promote mathematical collaboration with industry led him to organise annual meetings of the Study Groups with Industry from 1972 to 1989.
Awards and honours
Ockendon was elected Fellow of the Royal Society (FRS) in 1999, and awarded the IMA Gold Medal by the Institute of Mathematics and its Applications in 2006.
Personal life
Ockendon is married to his coauthor and colleague Hilary Ockendon (née Mason).
His Who's Who entry lists his recreations as mathematical modelling, bird watching, Hornby-Dublo model trains and old sports cars.
References
Fellows of the Royal Society
Living people
1940s births
20th-century British mathematicians
21st-century British mathematicians
Fellows of the Society for Industrial and Applied Mathematics
Fellows of St Catherine's College, Oxford
Fluid dynamicists
Alumni of the University of Oxford | John Ockendon | [
"Chemistry"
] | 461 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
7,095,838 | https://en.wikipedia.org/wiki/Steelyard%20balance | A steelyard balance, steelyard, or stilyard is a straight-beam balance with arms of unequal length. It incorporates a counterweight which slides along the longer arm to counterbalance the load and indicate its weight. A steelyard is also known as a Roman steelyard or Roman balance.
Structure
The steelyard comprises a balance beam which is suspended from a lever/pivot or fulcrum which is very close to one end of the beam. The two parts of the beam which flank the pivot are the arms. The arm from which the object to be weighed (the load) is hung is short and is located close to the pivot point. The other arm is longer, is graduated and incorporates a counterweight which can be moved along the arm until the two arms are balanced about the pivot, at which time the weight of the load is indicated by the position of the counterweight.
Mechanism
The steelyard exemplifies the law of the lever, wherein, when balanced, the weight of the object being weighed, multiplied by the length of the short balance arm to which it is attached, is equal to the weight of the counterweight multiplied by the distance of the counterweight from the pivot.
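A small worked example of this balance condition, written as a Python sketch; the arm length and counterweight values are illustrative assumptions, not figures from the article.

def load_weight(counterweight, counterweight_distance, load_arm):
    """Weight of the load when the sliding counterweight balances the beam:
    load * load_arm = counterweight * counterweight_distance."""
    return counterweight * counterweight_distance / load_arm

# A 1 kg counterweight balancing 0.40 m from the pivot, with a 0.05 m load arm:
print(load_weight(counterweight=1.0, counterweight_distance=0.40, load_arm=0.05))  # 8.0 kg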
History
According to Thomas G. Chondros of Patras University, a simple steelyard balance with a lever mechanism first appeared in the ancient Near East over 5,000 years ago. According to Mark Sky of Harvard University, the steelyard was in use among Greek craftsmen of the 5th and 4th centuries BC, even before Archimedes demonstrated the law of the lever theoretically. The Latin name statera comes from the Ancient Greek στατήρ (statḗr). Roman and Chinese steelyards were independently invented around 200 BC. Steelyards dating from AD 100 to 400 have been unearthed in Great Britain. Steelyards and their components have also been excavated from shipwrecks of the Byzantine period in the Mediterranean and the Red Sea, such as the 7th-century wreck at Yassi Ada, Turkey, and the mid-first millennium shipwreck at Black Assarca Island, Eritrea. The Oxford English Dictionary suggests that the name "steelyard" is derived from steel combined with yard, influenced by an allusion to the Steelyard, the main trading base of the Hanseatic League in London in the 14th century.
Large steelyard balances (known as cart balances), both public and private, were a common feature in agricultural areas in England from the eighteenth century forward. An example of a public cart steelyard remains at Soham, Cambridgeshire, and another is at Woodbridge, Suffolk.
Function
Steelyards of different sizes have been used to weigh loads ranging from ounces to tons. A small steelyard could be a foot or less in length and thus conveniently used as a portable device that merchants and traders could use to weigh small ounce-sized items of merchandise. In other cases a steelyard could be several feet long and used to weigh sacks of flour and other commodities. Even larger steelyards were three stories tall and used to weigh fully laden horse-drawn carts.
Scandinavian variant
A Scandinavian steelyard is a variant which consists of a bar with a fixed weight attached to one end, a movable pivot point, and an attachment point for the object to be weighed at the other end. Once the object to be weighed is attached to its end of the bar, the pivot point, which is frequently a loop at the end of a cord or chain, is moved until the bar is balanced. The bar is calibrated so that the object's weight can be read off directly from the position of the pivot. This type is known in Sweden, Denmark, Norway and Finland.
See also
Weigh house
References
External links
The physics of the steelyard
To calibrate a steelyard
Roman steelyard and weight from Caernarfon
Iron steelyard
Steelyard with sliding weight
Steelyard Weight and Hook
A Roman steelyard, 79 AD
The Chinese steelyard dates to 200 B.C.E.
Roman steelyards unearthed in Britain
A three-storey steelyard
Weighing instruments | Steelyard balance | [
"Physics",
"Technology",
"Engineering"
] | 842 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
7,095,986 | https://en.wikipedia.org/wiki/Neighborhood%20operation | In computer vision and image processing a neighborhood operation is a commonly used class of computations on image data which implies that it is processed according to the following pseudo code:
Visit each point p in the image data and do {
N = a neighborhood or region of the image data around the point p
result(p) = f(N)
}
This general procedure can be applied to image data of arbitrary dimensionality. Also, the image data on which the operation is applied does not have to be defined in terms of intensity or color; it can be any type of information which is organized as a function of spatial (and possibly temporal) variables.
The result of applying a neighborhood operation on an image is again something which can be interpreted as an image, and it has the same dimension as the original data. The value at each image point, however, does not have to be directly related to intensity or color. Instead it is an element in the range of the function f, which can be of arbitrary type.
Normally the neighborhood is of fixed size and is a square (or a cube, depending on the dimensionality of the image data) centered on the point p. Also the function f is fixed, but may in some cases have parameters which can vary with p; see below.
In the simplest case, the neighborhood may be only a single point. This type of operation is often referred to as a point-wise operation.
Examples
The most common examples of a neighborhood operation use a fixed function f which in addition is linear, that is, the computation consists of a linear shift-invariant operation. In this case, the neighborhood operation corresponds to the convolution operation. A typical example is convolution with a low-pass filter, where the result can be interpreted in terms of local averages of the image data around each image point. Other examples are computation of local derivatives of the image data.
It is also rather common to use a fixed but non-linear function f. This includes median filtering, and computation of local variances. The Nagao-Matsuyama filter is an example of a complex local neighbourhood operation that uses variance as an indicator of the uniformity within a pixel group. The result is similar to a convolution with a low-pass filter with the added effect of preserving sharp edges.
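The following Python/NumPy sketch is one possible rendering of the pseudo code above for two concrete choices of f – a 3×3 local mean (linear) and a 3×3 median (non-linear); the image values are arbitrary, and border points are simply left at zero (the result is computed only where the neighborhood fits inside the image), one of the border strategies discussed below.

import numpy as np

def neighborhood_op(image, f, radius=1):
    # Apply f to the (2*radius+1)^2 neighborhood N of every interior point p;
    # points whose neighborhood would fall outside the image are left at 0.
    out = np.zeros_like(image, dtype=float)
    for y in range(radius, image.shape[0] - radius):
        for x in range(radius, image.shape[1] - radius):
            N = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = f(N)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
local_mean = neighborhood_op(image, np.mean)      # linear, shift-invariant f
local_median = neighborhood_op(image, np.median)  # fixed but non-linear f
print(local_mean[2, 2], local_median[2, 2])       # both 12.0 for this ramp image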
There is also a class of neighborhood operations in which the function f has additional parameters which can vary with p:
Visit each point p in the image data and do {
N = a neighborhood or region of the image data around the point p
result(p) = f(N, parameters(p))
}
This implies that the result is not shift invariant. Examples are adaptive Wiener filters.
Implementation aspects
The pseudo code given above suggests that a neighborhood operation is implemented in terms of an outer loop over all image points. However, since the results are independent, the image points can be visited in arbitrary order, or can even be processed in parallel. Furthermore, in the case of linear shift-invariant operations, the computation of f(N) at each point implies a summation of products between the image data and the filter coefficients. The implementation of this neighborhood operation can then be made by having the summation loop outside the loop over all image points.
An important issue related to neighborhood operation is how to deal with the fact that the neighborhood becomes more or less undefined for points close to the edge or border of the image data. Several strategies have been proposed:
Compute result only for points for which the corresponding neighborhood is well-defined. This implies that the output image will be somewhat smaller than the input image.
Zero padding: Extend the input image sufficiently by adding extra points outside the original image which are set to zero. The loops over the image points described above visit only the original image points.
Border extension: Extend the input image sufficiently by adding extra points outside the original image which are set to the image value at the closest image point. The loops over the image points described above visit only the original image points.
Mirror extension: Extend the image sufficiently much by mirroring the image at the image boundaries. This method is less sensitive to local variations at the image boundary than border extension.
Wrapping: The image is tiled, so that going off one edge wraps around to the opposite side of the image. This method assumes that the image is largely homogeneous, for example a stochastic image texture without large textons.
References
Computer vision
Image processing | Neighborhood operation | [
"Engineering"
] | 885 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
7,096,085 | https://en.wikipedia.org/wiki/Construction%20aggregate | Construction aggregate, or simply aggregate, is a broad category of coarse- to medium-grained particulate material used in construction. Traditionally, it includes natural materials such as sand, gravel, crushed stone. As with other types of aggregates, it is a component of composite materials, particularly concrete and asphalt.
Aggregates are the most mined materials in the world, being a significant part of 6 billion tons of concrete produced per year.
Aggregate serves as reinforcement to add strength to the resulting material.
Due to the relatively high hydraulic conductivity as compared to most soil types, aggregates are widely used in drainage applications such as foundation and French drains, septic drain fields, retaining wall drains, and roadside edge drains. Aggregates are also used as base material under building foundations, roads, and railroads (aggregate base). It has predictable, uniform properties, preventing differential settling under the road or building.
Aggregates are also used as a low-cost extender that binds with more expensive cement or asphalt to form concrete. Although most kinds of aggregate require a form of binding agent, there are types of self-binding aggregate which require no form of binding agent.
More recently, recycled concrete and geosynthetic materials have also been used as aggregates.
Sources
Sources for these basic materials can be grouped into three main areas: mining of mineral aggregate deposits, including sand, gravel, and stone; use of waste slag from the manufacture of iron and steel; and recycling of concrete, which is itself chiefly manufactured from mineral aggregates. In addition, there are some (minor) materials that are used as specialty lightweight aggregates: clay, pumice, perlite, and vermiculite. Other minerals include:
basalt
dolomite
granite
gravel
limestone
sand
sandstone
Specifications
In Europe, sizing ranges are specified as d/D, where d denotes the smallest and D the largest square-mesh sieve size that the particles can pass. Application-specific preferred sizings are covered in European Standard EN 13043 for road construction, EN 13383 for larger armour stone, EN 12620 for concrete aggregate, EN 13242 for base layers of road construction, and EN 13450 for railway ballast.
The American Society for Testing and Materials publishes an exhaustive listing of specifications including ASTM D 692 and ASTM D 1073 for various construction aggregate products, which, by their individual design, are suitable for specific construction purposes. These products include specific types of coarse and fine aggregate designed for such uses as additives to asphalt and concrete mixes, as well as other construction uses. State transportation departments further refine aggregate material specifications in order to tailor aggregate use to the needs and available supply in their particular locations.
History
People have used sand and stone for foundations for thousands of years. Significant refinement of the production and use of aggregate occurred during the Roman Empire, which used aggregate to build its vast network of roads and aqueducts. The invention of concrete, which was essential to architecture utilizing arches, created an immediate, permanent demand for construction aggregates.
Vitruvius writes in De architectura:
Economy denotes the proper management of materials and of site, as well as a thrifty balancing of cost and common sense in the construction of works. This will be observed if, in the first place, the architect does not demand things which cannot be found or made ready without great expense. For example: it is not everywhere that there is plenty of pit-sand, rubble, fir, clear fir, and marble... Where there is no pit sand, we must use the kinds washed up by rivers or by the sea... and other problems we must solve in similar ways.
Modern production
The advent of modern blasting methods enabled the development of quarries, which are now used throughout the world, wherever competent bedrock deposits of aggregate quality exist. In many places, good limestone, granite, marble or other quality stone bedrock deposits do not exist. In these areas, natural sand and gravel are mined for use as aggregate. Where neither stone, nor sand and gravel, are available, construction demand is usually satisfied by shipping in aggregate by rail, barge or truck. Additionally, demand for aggregates can be partially satisfied through the use of slag and recycled concrete. However, the available tonnages and lesser quality of these materials prevent them from being a viable replacement for mined aggregates on a large scale.
Large stone quarry and sand and gravel operations exist near virtually all population centers due to the high cost of transportation relative to the low value of the product. Trucking aggregate more than 40 kilometers is typically uneconomical. These are capital-intensive operations, utilizing large earth-moving equipment, belt conveyors, and machines specifically designed for crushing and separating various sizes of aggregate, to create distinct product stockpiles.
According to the USGS, 2006 U.S. crushed stone production was 1.72 billion tonnes valued at $13.8 billion (compared to 1.69 billion tonnes valued at $12.1 billion in 2005), of which limestone was 1,080 million tonnes valued at $8.19 billion from 1,896 quarries, granite was 268 million tonnes valued at $2.59 billion from 378 quarries, trap rock was 148 million tonnes valued at $1.04 billion from 355 quarries, and the balance other kinds of stone from 729 quarries. Limestone and granite are also produced in large amounts as dimension stone. The great majority of crushed stone is moved by heavy truck from the quarry/plant to the first point of sale or use. According to the USGS, 2006 U.S. sand and gravel production was 1.32 billion tonnes valued at $8.54 billion (compared to 1.27 billion tonnes valued at $7.46 billion in 2005), of which 264 million tonnes valued at $1.92 billion was used as concrete aggregates. The great majority of this was again moved by truck, instead of by electric train.
In recent years, total U.S. aggregate demand by final market sector has been roughly 30%–35% for non-residential building (offices, hotels, stores, manufacturing plants, government and institutional buildings, and others), 25% for highways, and 25% for housing.
Recycled materials
Recycled materials such as blast furnace and steel furnace slag can be used as aggregate or as a partial substitute for portland cement. Blast furnace and steel slag are either air-cooled or water-cooled. Air-cooled slag can be used as aggregate. Water-cooled slag produces sand-sized glass-like particles (granulated). Adding free lime to the water during cooling gives granulated slag hydraulic cementitious properties.
In 2006, according to the USGS, air-cooled blast furnace slag sold or used in the U.S. was 7.3 million tonnes valued at $49 million, granulated blast furnace slag sold or used in the U.S. was 4.2 million tonnes valued at $318 million, and steel furnace slag sold or used in the U.S. was 8.7 million tonnes valued at $40 million. Air-cooled blast furnace slag sales in 2006 were for use in road bases and surfaces (41%), asphaltic concrete (13%), ready-mixed concrete (16%), and the balance for other uses. Granulated blast furnace slag sales in 2006 were for use in cementitious materials (94%), and the balance for other uses. Steel furnace slag sales in 2006 were for use in road bases and surfaces (51%), asphaltic concrete (12%), for fill (18%), and the balance for other uses.
Recycled glass aggregate crushed to a small size is substituted for many construction and utility projects in place of pea gravel or crushed rock. Glass aggregate is not dangerous to handle. It can be used as pipe bedding—placed around sewer, storm water or drinking water pipes to transfer weight from the surface and protect the pipe. Another common use is as fill to bring the level of a concrete floor even with a foundation. Use of glass aggregate helps close the loop in glass recycling in many places where glass cannot be smelted into new glass.
Aggregates themselves can be recycled as aggregates. Recyclable aggregate tends to be concentrated in urban areas. The supply of recycled aggregate depends on physical decay and demolition of structures. Mobile recycling plants eliminate the cost of transporting the material to a central site. The recycled material is typically of variable quality.
Many aggregate products are recycled for other industrial purposes. Contractors save on disposal costs and less aggregate is buried or piled and abandoned. In Bay City, Michigan, for example, a recycle program exists for unused products such as mixed concrete, block, brick, gravel, pea stone, and other used materials. The material is crushed to provide subbase for roads and driveways, among other purposes.
According to the USGS in 2006, 2.9 million tonnes of Portland cement concrete (including aggregate) worth $21.9 million was recycled, and 1.6 million tonnes of asphalt concrete (including aggregate) worth $11.8 million was recycled, both by crushed stone operations. Much more of both materials are recycled by construction and demolition firms not included in the USGS survey. For sand and gravel, the survey showed that 4.7 million tonnes of cement concrete valued at $32.0 million was recycled, and 6.17 million tonnes of asphalt concrete valued at $45.1 million was recycled. Again, more of both materials are recycled by construction and demolition firms not in this USGS survey. The Construction Materials Recycling Association indicates that there are 325 million tonnes of recoverable construction and demolition materials produced annually.
Organic materials
Many geosynthetic aggregates are made from recycled materials. Recyclable plastics can be reused in aggregates. For example, Ring Industrial Group's EZflow product lines are produced with geosynthetic aggregate pieces that are more than 99.9% recycled polystyrene. This polystyrene, otherwise destined for a landfill, is gathered, melted, mixed, reformulated and expanded to create low density aggregates that maintain high strength properties under compressive loads. Such geosynthetic aggregates replace conventional gravel while simultaneously increasing porosity, increasing hydraulic conductivity and eliminating the fine dust "fines" inherent to gravel aggregates which otherwise serve to clog and disrupt the operation of many drainage applications.
Several groups have attempted to use minced tires as part of concrete aggregate. The result is tougher than regular concrete, because it can bend instead of breaking under pressure. However, tires reduce compressive strength partially because the cement bonds poorly with the rubber. Pores in the rubber fill with water when the concrete is mixed, but become voids as the concrete sets. One group put the concrete under pressure as it sets, reducing pore volumes.
Recycled aggregates in the UK
Recycled aggregate in the UK results from the processing of construction material. To ensure the aggregate is inert, it is manufactured from material tested and characterised under European Waste Codes.
In 2008, 210 million tonnes of aggregate were produced including 67 million tonnes of recycled product, according to the Quarry Products Association. The Waste and Resource Action Programme has produced a Quality Protocol for the regulated production of recycled aggregates.
See also
Aggregate (composite), Aggregate base
Aggregate industry in the United States
Alkali-aggregate reaction
Alkali–silica reaction
Concrete
Crushed stone
Dimension stone – stone recycling and reuse
Hoggin
Interfacial transition zone (ITZ)
Marble
Pozzolanic reaction
Road metal
Saturated-surface-dry
Tumble finishing
References
Citations
Sources
UEPG – The European Aggregates Association
Samscreen International
The National Stone, Sand & Gravel Association
Pit and Quarry University/
"Rock to Road" (Industry publication - Canada)
The American Society for Testing Materials
Gravel Watch Ontario
Oregon Concrete & Aggregate Producers Association
Portland Cement Association
Pavement Interactive article on Aggregates
2006 USGS Minerals Yearbook: Stone, Crushed
2005 USGS Minerals Yearbook: Stone, Crushed
2006 USGS Minerals Yearbook: Construction Sand and Gravel
2005 USGS Minerals Yearbook: Construction Sand and Gravel
Construction Aggregate, in June 2007 Mining Engineering (private membership)
2006 USGS Minerals Yearbook: Iron & Steel Slag
Aggregates from Natural and Recycled Sources-Economic Assessments
Construction Materials Recycling Association
MN DNR Aggregate Resource Mapping Program – Division of Lands and Minerals
Quarrying in Depth Recycling
Recycling Tonnages and Primary aggregate production figures
Alberta Sand and Gravel Association (Canada)
Aggregate (composite)
Building stone
Concrete
Granularity of materials
Pavements
Stone (material)
Quarrying
Industrial minerals | Construction aggregate | [
"Physics",
"Chemistry",
"Engineering"
] | 2,555 | [
"Structural engineering",
"Materials",
"Concrete",
"Particle technology",
"Granularity of materials",
"Matter"
] |
7,096,097 | https://en.wikipedia.org/wiki/Gas%20thermometer | A gas thermometer is a thermometer that measures temperature by the variation in volume or pressure of a gas.
Volume Thermometer
This thermometer functions by means of Charles's Law: at constant pressure, the volume of a gas increases in proportion to its absolute temperature.
Using Charles's Law, the temperature can be inferred from the measured volume of the gas via the formula written below, translated to the graduated scale of the device holding the gas. This works on the same principle as mercury thermometers.
$\frac{V}{T} = k$ or $V = kT$, where
$V$ is the volume,
$T$ is the thermodynamic temperature,
$k$ is the constant for the system.
$k$ is not a fixed constant across all systems and therefore needs to be found experimentally for a given system through testing with known temperature values.
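For illustration, the short Python sketch below shows how such a device could be calibrated from a single known reading and then used to infer temperature; the volumes, temperatures, and function names are invented for the example, not taken from the source.

```python
# Minimal sketch of a volume gas thermometer based on Charles's law (V = k*T).
# All numbers and names here are illustrative, not measurements of a real device.

def calibrate(v_ref_ml: float, t_ref_kelvin: float) -> float:
    """Return the system constant k = V / T from one known (V, T) pair."""
    return v_ref_ml / t_ref_kelvin

def temperature_from_volume(v_ml: float, k: float) -> float:
    """Infer the thermodynamic temperature (K) from a measured volume."""
    return v_ml / k

# Example: the gas occupies 100 ml at 273.15 K (0 degrees Celsius).
k = calibrate(100.0, 273.15)
print(temperature_from_volume(110.0, k))   # ~300.5 K for a 110 ml reading
```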
Pressure Thermometer and Absolute Zero
The constant volume gas thermometer plays a crucial role in understanding how absolute zero could be discovered long before the advent of cryogenics. Consider a graph of pressure versus temperature made not far from standard conditions (well above absolute zero) for three different samples of any ideal gas (a, b, c). To the extent that the gas is ideal, the pressure depends linearly on temperature, and the extrapolation to zero pressure occurs at absolute zero. Note that data could have been collected with three different amounts of the same gas, which would have rendered this experiment easy to do in the eighteenth century.
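A minimal numerical sketch of the extrapolation described above is given below; the pressure readings are synthetic ideal-gas values chosen for illustration, not experimental data.

```python
# Extrapolating a constant-volume gas thermometer's P-T line to zero pressure.
# The pressure readings below are synthetic (ideal-gas) values, for illustration only.
import numpy as np

t_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])       # measured temperatures
p_kpa     = np.array([100.0, 109.2, 118.3, 127.5, 136.6])  # corresponding pressures

# Fit P = a*T + b; absolute zero is where the extrapolated pressure reaches zero.
a, b = np.polyfit(t_celsius, p_kpa, 1)
print(-b / a)   # close to -273 degrees Celsius for ideal-gas-like data
```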
History
See also
Thermodynamic instruments
Boyle's law
Combined gas law
Gay-Lussac's law
Avogadro's law
Ideal gas law
References
Thermometers
Gases
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 348 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Matter",
"Phases of matter",
"Measuring instruments",
"Thermodynamics",
"Thermometers",
"Statistical mechanics",
"Physical chemistry stubs",
"Gases"
] |
7,096,154 | https://en.wikipedia.org/wiki/Tristyly | Tristyly is a rare floral polymorphism that consists of three floral morphs that differ in regard to the length of the stamens and style within the flower. This type of floral mechanism is thought to encourage outcross pollen transfer and is usually associated with heteromorphic self-incompatibility to reduce inbreeding. It is an example of heterostyly and reciprocal herkogamy, like distyly, which is the more common form of heterostyly. Darwin first described tristylous species in 1877 in terms of the incompatibility of these three morphs.
Description
The three floral morphs of tristylous plants are based on the positioning of the male and female reproductive structures, as either long-, mid-, or short-styled morphs. Often this is shortened to L, M and S morphs. There are two different lengths of stamens in each flower morph that oppose the length of the style. For example, in the short-styled morph, the two sets of stamens are arranged in the mid and long position in order to prevent autogamy. In a trimorphic incompatibility system, full seed set is accomplished only with pollination of stigmas by pollen from anthers of the same height. This incompatibility system produces pollen and styles with three different incompatibility phenotypes because of the three style and stamen lengths.
Tristylous species have been found in several angiosperm families including the Oxalidaceae, Pontederiaceae, Amaryllidaceae, Connaraceae, Linaceae and Lythraceae, though several others have been proposed. There is no firm consensus on the specific criteria defining tristyly. In a 1993 review of tristylous evolutionary biology, Barrett proposes three common features for tristylous plants: 1) three floral morphs with differing style and stamen height, 2) a trimorphic incompatibility system, and 3) additional polymorphisms of the stigmas and pollen.
Heteromorphic Incompatibility System
This incompatibility system is a specific mechanism employed by heterostylous species, where incompatibility is based on the positioning of the reproductive structure of the flower. In tristylous species this is based on two loci, S and M with one allele dominant at each loci. For the short-styled morph the dominant allele is in the S locus (Ssmm or SsMm), whereas in the mid-styled morph the dominant allele is at the M locus (ssMm). The S locus is epistatic to the M locus such that the presence of the S allele produces a short-styled flower regardless of the genotype at the M locus. The long-styled morph, on the other hand, is homozygous recessive for both loci (ssmm).
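The epistatic rule described above can be summarized schematically in a few lines of code. The sketch below is only an illustration of the two-locus inheritance model as stated in this section; the genotype strings and function name are hypothetical.

```python
# Schematic two-locus model of tristyly: the S locus is epistatic to the M locus.
# Genotypes are written as four letters, e.g. "Ssmm"; capital letters are dominant alleles.

def floral_morph(genotype: str) -> str:
    s_locus, m_locus = genotype[:2], genotype[2:]
    if "S" in s_locus:        # any dominant S allele -> short-styled, regardless of M
        return "short-styled (S-morph)"
    if "M" in m_locus:        # otherwise a dominant M allele -> mid-styled
        return "mid-styled (M-morph)"
    return "long-styled (L-morph)"   # double recessive (ssmm)

for g in ["Ssmm", "SsMm", "ssMm", "ssmm"]:
    print(g, "->", floral_morph(g))
```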
In tristylous species, incompatibility varies, with some species showing varying degrees of compatibility outside of the reciprocal herkogamy pattern of pollination. Darwin noted weak incompatibility commonly occurring in the M-morph of Lythrum salicaria. Some species have shown weak or absent incompatibility in their mating system; however, self-compatibility in tristylous species is still poorly understood. Research on Eichhornia paniculata found differences in pollen tube growth between intra- and inter-morph pollen, indicating that the incompatibility system is a case of cryptic self-incompatibility.
Evolution
Heterostyly has been found in at least 28 families, while tristyly has only been found in six families. The rarity and complexity of tristyly, coupled with its development in a variety of unrelated plant families, has made its evolution and adaptive significance hard to discern. It might be assumed that distyly is the intermediate stage toward tristyly, but it has also been proposed that distyly originated from tristyly through the loss of one of the floral morphs. However, there are some distylous families with no tristylous species present, so it is possible that these two polymorphisms evolved separately.
The adaptation for structural variation in heterostylous species likely developed out of the need for efficient pollen transfer and simultaneous selection to reduce self-fertilization. The mid-morph, with stamens positioned below and above the stigma, is unique to tristylous species. If this positioning occurred in monomorphic species it would promote self-fertilization, which could be achieved much more easily without different stamen heights, indicating that this positioning in heteromorphic species is meant to encourage cross pollination.
References
Plant morphology
Pollination | Tristyly | [
"Biology"
] | 971 | [
"Plant morphology",
"Plants"
] |
7,096,353 | https://en.wikipedia.org/wiki/Clipped%20tag | The clipped tag is a radio frequency identification (RFID) tag designed to enhance consumer privacy. RFID is an identification technology in which information stored in semiconductor chips contained in RFID tags is communicated by means of radio waves to RFID readers. The most simple passive RFID tags do not have batteries or transmitters. They get their energy from the field of the reader. They transfer their information to the reader by modulating the signal that is reflected back to the reader by the tag. Because tags depend on the reader for power their range is limited, typically up to for UHF RFID tags.
Today, the public uses RFID tags for many applications including electronic toll collection, E-ZPass for example, or the Speedpass which is used as a credit token for the purchase of gasoline. The retail supply chain uses RFID tags to monitor the passage of pallets and cases at loading dock doors. The expectation for the future is for RFID tags to be used for the labelling of items for retail sale. Concerns for individual privacy have been raised because the RFID tags may be read by invisible radio waves without the knowledge of the holder of the tagged item.
The privacy-protecting RFID tag, the “clipped tag” has been suggested by IBM. The clipped tag puts the option of privacy protection in the hands of the consumer. After the point of sale, a consumer may tear off a portion of the tag, much like the way in which a ketchup packet is opened. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling. The clipped tag was listed among The Wall Street Journal Technology Innovation Winners for 2006. Two US patents were issued for this invention in 2007.
Other mechanisms designed to protect privacy for RFID item tagging for retail use are the EPCglobal kill command and the RSA blocker tag.
Clipped tag development
The concept of the clipped tag was first introduced in a paper authored by IBM researchers Paul Moskowitz and Guenter Karjoth in 2005, RFID Journal, November 7, 2005. In their paper, presented at the 2005 ACM Workshop on Privacy in the Electronic Society, the authors suggest that by providing the consumer with a means to shorten the antenna, the read range of the tag may be reduced from many meters to just a few centimeters. Several mechanisms were suggested. The mechanisms included perforating the tag like a sheet of postage stamps to allow the tearing off of a portion of the antenna. Another proposed mechanism was to manufacture the tag antenna with exposed conducting lines which could be scratched off by the consumer.
IBM teamed up with Marnlen RFiD, a manufacturer of RFID labels, and Printronix, a maker of RFID printers, to demonstrate prototypes of the Clipped Tag, Wired News, May 1, 2006. The tag took the form of a garment hang tag with v-shaped notches in the edges and perforations to direct the tearing of the tag. Reactions by RFID privacy experts were favorable to the invention. According to Wired, Robert Atkinson, president of the Information Technology and Innovation Foundation, said "The Clipped Tag shows that IBM is addressing privacy concerns, even those that are unreasonable." Subsequently, IBM and Marnlen RFiD announced that Marnlen had licensed the technology from IBM and was shipping samples to select users, RFID Update, November, 2006.
External links
United States Patent 7,277,016, System and method for disabling RFID tags. USPTO
United States Patent 7,253,734, System and method for altering or disabling RFID tags. USPTO
Realizing Benefits Today in your Retail Environment, Deidre Lenderking, Paul Moskowitz, and Robyn Schwartz, IBM white paper.
A Privacy-Enhancing Radio Frequency Identification Tag: Implementation of the Clipped Tag, Paul Moskowitz, Stephen Morris, Andris Lauris, Fifth Annual IEEE International Conference on Pervasive Computing and Communications Workshops (PerComW'07), pp. 348–351.
A Privacy-Enhancing Radio Frequency Identification Tag: Implementation of the Clipped Tag, Paul Moskowitz, Stephen Morris, Andris Lauris. IEEE PerTec 2007, March 20, 2007 (presentation).
IBM signs first license for Clipped Tag technology, CNNMoney.com, February 13, 2007.
Clipped RFID Tags Protect Consumer Privacy, Guenter Karjoth and Paul Moskowitz, ERCIM News, January 2007.
Can RFID Invade Your Privacy?, Forbes, December 7, 2006.
Pro-Privacy Tearable RFID Tag Becomes a Reality, RFID Update, November 8, 2006, with Clipped Tag video demonstration.
Company adopts "clipped tag" technology, USA TODAY. November 9, 2006
IBM Research's clipped tags among top technology innovations of 2006, September 11, 2006.
Privacy Enhancing Radio Frequency Identification Tag, Clipped Tag White Paper, Paul Moskowitz, Andris Lauris, and Stephen S. Morris, May 1, 2006.
Retail-Safe RFID Unveiled, Wired News, May 1, 2006.
IBM Proposes Privacy-Protecting Tag, RFID Journal, November 7, 2005.
Guenter Karjoth and Paul Moskowitz, Disabling RFID Tags with Visible Confirmation, WPES ’05, Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society, pp. 27–30, ACM Press, 2005.
Automatic identification and data capture
Radio-frequency identification | Clipped tag | [
"Technology",
"Engineering"
] | 1,154 | [
"Radio-frequency identification",
"Radio electronics",
"Data",
"Automatic identification and data capture"
] |
7,096,466 | https://en.wikipedia.org/wiki/Structure%20tensor | In mathematics, the structure tensor, also referred to as the second-moment matrix, is a matrix derived from the gradient of a function. It describes the distribution of the gradient in a specified neighborhood around a point and makes the information invariant to the observing coordinates. The structure tensor is often used in image processing and computer vision.
The 2D structure tensor
Continuous version
For a function $I$ of two variables $p = (x, y)$, the structure tensor is the 2×2 matrix
$S_w(p) = \begin{bmatrix} \int w(r)\,(I_x(p-r))^2\,dr & \int w(r)\,I_x(p-r)\,I_y(p-r)\,dr \\ \int w(r)\,I_x(p-r)\,I_y(p-r)\,dr & \int w(r)\,(I_y(p-r))^2\,dr \end{bmatrix}$
where $I_x$ and $I_y$ are the partial derivatives of $I$ with respect to x and y; the integrals range over the plane $\mathbb{R}^2$; and w is some fixed "window function" (such as a Gaussian blur), a distribution on two variables. Note that the matrix $S_w(p)$ is itself a function of $p = (x, y)$.
The formula above can be written also as $S_w(p) = \int w(r)\,S_0(p-r)\,dr$, where $S_0$ is the matrix-valued function defined by
$S_0(p) = \begin{bmatrix} (I_x(p))^2 & I_x(p)\,I_y(p) \\ I_x(p)\,I_y(p) & (I_y(p))^2 \end{bmatrix}$
If the gradient $\nabla I = (I_x, I_y)^{\mathsf{T}}$ of $I$ is viewed as a 2×1 (single-column) matrix, where $(\cdot)^{\mathsf{T}}$ denotes the transpose operation, turning a row vector to a column vector, the matrix $S_0$ can be written as the matrix product, or tensor or outer product, $(\nabla I)(\nabla I)^{\mathsf{T}}$. Note however that the structure tensor $S_w(p)$ cannot be factored in this way in general except if $w$ is a Dirac delta function.
Discrete version
In image processing and other similar applications, the function $I$ is usually given as a discrete array of samples $I[p]$, where p is a pair of integer indices. The 2D structure tensor at a given pixel is usually taken to be the discrete sum
$S_w[p] = \sum_r w[r] \begin{bmatrix} (I_x[p-r])^2 & I_x[p-r]\,I_y[p-r] \\ I_x[p-r]\,I_y[p-r] & (I_y[p-r])^2 \end{bmatrix}$
Here the summation index r ranges over a finite set of index pairs (the "window", typically $\{-m,\ldots,+m\} \times \{-m,\ldots,+m\}$ for some m), and w[r] is a fixed "window weight" that depends on r, such that the sum of all weights is 1. The values $I_x[p]$, $I_y[p]$ are the partial derivatives sampled at pixel p, which, for instance, may be estimated from $I$ by finite difference formulas.
The formula of the structure tensor can be written also as $S_w[p] = \sum_r w[r]\,S_0[p-r]$, where $S_0$ is the matrix-valued array such that
$S_0[p] = \begin{bmatrix} (I_x[p])^2 & I_x[p]\,I_y[p] \\ I_x[p]\,I_y[p] & (I_y[p])^2 \end{bmatrix}$
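A minimal NumPy sketch of the discrete computation above is given below, using finite differences for the derivatives and a Gaussian window for the averaging; the function name, window width and the synthetic test image are illustrative choices, not part of any standard.

```python
# Minimal sketch: discrete 2D structure tensor of an image with a Gaussian window.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_2d(image: np.ndarray, sigma: float = 1.5):
    """Return the three distinct components (Sxx, Sxy, Syy) of S_w at every pixel."""
    iy, ix = np.gradient(image.astype(float))    # finite-difference derivatives
    sxx = gaussian_filter(ix * ix, sigma)        # window-weighted averages of the
    sxy = gaussian_filter(ix * iy, sigma)        # outer-product components
    syy = gaussian_filter(iy * iy, sigma)
    return sxx, sxy, syy

# Example on a synthetic image with a single dominant orientation.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = np.sin(0.3 * x + 0.1 * y)
sxx, sxy, syy = structure_tensor_2d(img)
```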
Interpretation
The importance of the 2D structure tensor $S_w$ stems from the fact that its eigenvalues $\lambda_1, \lambda_2$ (which can be ordered so that $\lambda_1 \geq \lambda_2 \geq 0$) and the corresponding eigenvectors $e_1, e_2$ summarize the distribution of the gradient $\nabla I$ of $I$ within the window defined by $w$ centered at $p$.
Namely, if $\lambda_1 > \lambda_2$, then $e_1$ (or $-e_1$) is the direction that is maximally aligned with the gradient within the window.
In particular, if $\lambda_1 > \lambda_2 = 0$ then the gradient is always a multiple of $e_1$ (positive, negative or zero); this is the case if and only if $I$ within the window varies along the direction $e_1$ but is constant along $e_2$. This condition of eigenvalues is also called the linear symmetry condition because then the iso-curves of $I$ consist of parallel lines, i.e. there exists a one-dimensional function $g$ which can generate the two-dimensional function $I$ as $I(x, y) = g(d^{\mathsf{T}} p)$ for some constant vector $d = (d_x, d_y)^{\mathsf{T}}$ and the coordinates $p = (x, y)^{\mathsf{T}}$.
If $\lambda_1 = \lambda_2$, on the other hand, the gradient in the window has no predominant direction, which happens, for instance, when the image has rotational symmetry within that window. This condition of eigenvalues is also called the balanced body, or directional equilibrium, condition because it holds when all gradient directions in the window are equally frequent/probable.
Furthermore, the condition $\lambda_1 = \lambda_2 = 0$ happens if and only if the function $I$ is constant ($\nabla I = 0$) within the window.
More generally, the value of $\lambda_k$, for k=1 or k=2, is the $w$-weighted average, in the neighborhood of p, of the square of the directional derivative of $I$ along $e_k$. The relative discrepancy between the two eigenvalues of $S_w$ is an indicator of the degree of anisotropy of the gradient in the window, namely how strongly it is biased towards a particular direction (and its opposite). This attribute can be quantified by the coherence, defined as
$c_w = \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \right)^2$
if $\lambda_1 + \lambda_2 > 0$. This quantity is 1 when the gradient is totally aligned, and 0 when it has no preferred direction. The formula is undefined, even in the limit, when the image is constant in the window ($\lambda_1 = \lambda_2 = 0$). Some authors define it as 0 in that case.
Note that the average of the gradient $\nabla I$ inside the window is not a good indicator of anisotropy. Aligned but oppositely oriented gradient vectors would cancel out in this average, whereas in the structure tensor they are properly added together. This is a reason why the outer product $(\nabla I)(\nabla I)^{\mathsf{T}}$, rather than $\nabla I$ itself, is used in the averaging of the structure tensor to optimize the direction.
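Continuing the earlier sketch, the eigen-analysis and the coherence measure can be computed per pixel in closed form for a 2×2 symmetric matrix. The helper below assumes the (Sxx, Sxy, Syy) arrays from the previous example; the small epsilon guard is an illustrative choice.

```python
# Per-pixel eigenvalues, dominant orientation and coherence of a 2x2 structure tensor.
import numpy as np

def tensor_features(sxx, sxy, syy, eps=1e-12):
    trace = sxx + syy
    root = np.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    lam1 = 0.5 * (trace + root)                              # larger eigenvalue
    lam2 = 0.5 * (trace - root)                              # smaller eigenvalue
    orientation = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)     # angle of e_1 (radians)
    coherence = ((lam1 - lam2) / (trace + eps)) ** 2         # ~1 aligned, ~0 isotropic
    return lam1, lam2, orientation, coherence
```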
By expanding the effective radius of the window function (that is, increasing its variance), one can make the structure tensor more robust in the face of noise, at the cost of diminished spatial resolution. The formal basis for this property is described in more detail below, where it is shown that a multi-scale formulation of the structure tensor, referred to as the multi-scale structure tensor, constitutes a true multi-scale representation of directional data under variations of the spatial extent of the window function.
Complex version
The interpretation and implementation of the 2D structure tensor becomes particularly accessible using complex numbers. The structure tensor consists in 3 real numbers
where , and in which integrals can be replaced by summations for discrete representation. Using Parseval's identity it is clear that the three real numbers are the second order moments of the power spectrum of . The following second order complex moment of the power spectrum of can then be written as
where and is the direction angle of the most significant eigenvector of the structure tensor whereas and are the most and the least significant eigenvalues. From, this it follows that contains both a certainty and the optimal direction in double angle representation since it is a complex number consisting of two real numbers. It follows also that if the gradient is represented as a complex number, and is remapped by squaring (i.e. the argument angles of the complex gradient is doubled), then averaging acts as an optimizer in the mapped domain, since it directly delivers both the optimal direction (in double angle representation) and the associated certainty. The complex number represents thus how much linear structure (linear symmetry) there is in image , and the complex number is obtained directly by averaging the gradient in its (complex) double angle representation without computing the eigenvalues and the eigenvectors explicitly.
Likewise the following second order complex moment of the power spectrum of , which happens to be always real because is real,
can be obtained, with and being the eigenvalues as before. Notice that this time the magnitude of the complex gradient is squared (which is always real).
However, decomposing the structure tensor in its eigenvectors yields its tensor components as
where is the identity matrix in 2D because the two eigenvectors are always orthogonal (and sum to unity). The first term in the last expression of the decomposition, , represents the linear symmetry component of the structure tensor containing all directional information (as a rank-1 matrix), whereas the second term represents the balanced body component of the tensor, which lacks any directional information (containing an identity matrix ). To know how much directional information there is in is then the same as checking how large is compared to .
Evidently, is the complex equivalent of the first term in the tensor decomposition, whereas is the equivalent of the second term. Thus the two scalars, comprising three real numbers,
where is the (complex) gradient filter, and is convolution, constitute a complex representation of the 2D Structure Tensor. As discussed here and elsewhere defines the local image which is usually a Gaussian (with a certain variance defining the outer scale), and is the (inner scale) parameter determining the effective frequency range in which the orientation is to be estimated.
The elegance of the complex representation stems from the fact that the two components of the structure tensor can be obtained as averages and independently. In turn, this means that and can be used in a scale space representation to describe the evidence for presence of unique orientation and the evidence for the alternative hypothesis, the presence of multiple balanced orientations, without computing the eigenvectors and eigenvalues. A functional, such as squaring the complex numbers, has to this date not been shown to exist for structure tensors with dimensions higher than two. In Bigun 91, it has been put forward with due argument that this is because complex numbers are commutative algebras whereas quaternions, the possible candidates for constructing such a functional, constitute a non-commutative algebra.
The complex representation of the structure tensor is frequently used in fingerprint analysis to obtain direction maps containing certainties which in turn are used to enhance them, to find the locations of the global (cores and deltas) and local (minutia) singularities, as well as automatically evaluate the quality of the fingerprints.
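A small sketch of the double-angle averaging idea described in this section is given below, building a complex-valued gradient from finite differences and averaging its square; the Gaussian widths and function name are arbitrary illustrative choices rather than prescribed values.

```python
# Sketch of the complex (double-angle) representation of the 2D structure tensor:
# averaging the squared complex gradient yields orientation and certainty directly.
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_orientation(image: np.ndarray, inner_sigma=1.0, outer_sigma=3.0):
    smoothed = gaussian_filter(image.astype(float), inner_sigma)   # inner scale
    gy, gx = np.gradient(smoothed)
    g = gx + 1j * gy                                               # complex gradient
    squared = g * g                                                # double-angle mapping
    kappa20 = gaussian_filter(squared.real, outer_sigma) \
              + 1j * gaussian_filter(squared.imag, outer_sigma)    # linear-symmetry part
    kappa11 = gaussian_filter(np.abs(g) ** 2, outer_sigma)         # total gradient energy
    angle = 0.5 * np.angle(kappa20)                # optimal orientation (half the argument)
    certainty = np.abs(kappa20) / (kappa11 + 1e-12)                # 1 = perfectly linear
    return angle, certainty
```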
The 3D structure tensor
Definition
The structure tensor can be defined also for a function $I$ of three variables p=(x,y,z) in an entirely analogous way. Namely, in the continuous version we have $S_w(p) = \int w(r)\,S_0(p-r)\,dr$, where
$S_0(p) = \begin{bmatrix} (I_x(p))^2 & I_x(p)\,I_y(p) & I_x(p)\,I_z(p) \\ I_x(p)\,I_y(p) & (I_y(p))^2 & I_y(p)\,I_z(p) \\ I_x(p)\,I_z(p) & I_y(p)\,I_z(p) & (I_z(p))^2 \end{bmatrix}$
where $I_x, I_y, I_z$ are the three partial derivatives of $I$, and the integral ranges over $\mathbb{R}^3$.
In the discrete version, $S_w[p] = \sum_r w[r]\,S_0[p-r]$, where $S_0[p]$ is the analogous 3×3 matrix of products of sampled partial derivatives,
and the sum ranges over a finite set of 3D indices, usually $\{-m,\ldots,+m\}^3$ for some $m$.
Interpretation
As in the two-dimensional case, the eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$ of $S_w[p]$, and the corresponding eigenvectors $e_1, e_2, e_3$, summarize the distribution of gradient directions within the neighborhood of p defined by the window $w$. This information can be visualized as an ellipsoid whose semi-axes are equal to the eigenvalues and directed along their corresponding eigenvectors.
In particular, if the ellipsoid is stretched along one axis only, like a cigar (that is, if $\lambda_1$ is much larger than both $\lambda_2$ and $\lambda_3$), it means that the gradient in the window is predominantly aligned with the direction $e_1$, so that the isosurfaces of $I$ tend to be flat and perpendicular to that vector. This situation occurs, for instance, when p lies on a thin plate-like feature, or on the smooth boundary between two regions with contrasting values.
If the ellipsoid is flattened in one direction only, like a pancake (that is, if $\lambda_3$ is much smaller than both $\lambda_1$ and $\lambda_2$), it means that the gradient directions are spread out but perpendicular to $e_3$, so that the isosurfaces tend to be like tubes parallel to that vector. This situation occurs, for instance, when p lies on a thin line-like feature, or on a sharp corner of the boundary between two regions with contrasting values.
Finally, if the ellipsoid is roughly spherical (that is, if $\lambda_1 \approx \lambda_2 \approx \lambda_3$), it means that the gradient directions in the window are more or less evenly distributed, with no marked preference, so that the function $I$ is mostly isotropic in that neighborhood. This happens, for instance, when the function has spherical symmetry in the neighborhood of p. In particular, if the ellipsoid degenerates to a point (that is, if the three eigenvalues are zero), it means that $I$ is constant (has zero gradient) within the window.
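As a schematic illustration of the cigar/pancake/sphere distinction above, the snippet below classifies a 3×3 structure tensor by its sorted eigenvalues; the ratio threshold and labels are arbitrary example values, not a standard criterion.

```python
# Classify the local 3D structure from the sorted eigenvalues of a 3x3 structure tensor.
import numpy as np

def classify_3d_tensor(S: np.ndarray, ratio: float = 10.0, tiny: float = 1e-9) -> str:
    l1, l2, l3 = sorted(np.linalg.eigvalsh(S), reverse=True)   # l1 >= l2 >= l3 >= 0
    if l1 < tiny:
        return "constant region (all eigenvalues ~ 0)"
    if l1 > ratio * max(l2, tiny):      # cigar: one dominant gradient direction
        return "plate-like feature / smooth boundary"
    if l2 > ratio * max(l3, tiny):      # pancake: gradients spread in a plane
        return "line-like feature / sharp corner"
    return "isotropic neighborhood"     # roughly spherical ellipsoid
```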
The multi-scale structure tensor
The structure tensor is an important tool in scale space analysis. The multi-scale structure tensor (or multi-scale second moment matrix) of a function is in contrast to other one-parameter scale-space features an image descriptor that is defined over two scale parameters.
One scale parameter, referred to as local scale , is needed for determining the amount of pre-smoothing when computing the image gradient . Another scale parameter, referred to as integration scale , is needed for specifying the spatial extent of the window function that determines the weights for the region in space over which the components of the outer product of the gradient by itself are accumulated.
More precisely, suppose that is a real-valued signal defined over . For any local scale , let a multi-scale representation of this signal be given by where represents a pre-smoothing kernel. Furthermore, let denote the gradient of the scale space representation.
Then, the multi-scale structure tensor/second-moment matrix is defined by
Conceptually, one may ask if it would be sufficient to use any self-similar families of smoothing functions and . If one naively would apply, for example, a box filter, however, then non-desirable artifacts could easily occur. If one wants the multi-scale structure tensor to be well-behaved over both increasing local scales and increasing integration scales , then it can be shown that both the smoothing function and the window function have to be Gaussian. The conditions that specify this uniqueness are similar to the scale-space axioms that are used for deriving the uniqueness of the Gaussian kernel for a regular Gaussian scale space of image intensities.
There are different ways of handling the two-parameter scale variations in this family of image descriptors. If we keep the local scale parameter fixed and apply increasingly broadened versions of the window function by increasing the integration scale parameter only, then we obtain a true formal scale space representation of the directional data computed at the given local scale . If we couple the local scale and integration scale by a relative integration scale , such that then for any fixed value of , we obtain a reduced self-similar one-parameter variation, which is frequently used to simplify computational algorithms, for example in corner detection, interest point detection, texture analysis and image matching.
By varying the relative integration scale in such a self-similar scale variation, we obtain another alternative way of parameterizing the multi-scale nature of directional data obtained by increasing the integration scale.
A conceptually similar construction can be performed for discrete signals, with the convolution integral replaced by a convolution sum and with the continuous Gaussian kernel replaced by the discrete Gaussian kernel :
When quantizing the scale parameters and in an actual implementation, a finite geometric progression is usually used, with i ranging from 0 to some maximum scale index m. Thus, the discrete scale levels will bear certain similarities to image pyramid, although spatial subsampling may not necessarily be used in order to preserve more accurate data for subsequent processing stages.
Applications
The eigenvalues of the structure tensor play a significant role in many image processing algorithms, for problems like corner detection, interest point detection, and feature tracking. The structure tensor also plays a central role in the Lucas-Kanade optical flow algorithm, and in its extensions to estimate affine shape adaptation; where the magnitude of is an indicator of the reliability of the computed result. The tensor has been used for scale space analysis, estimation of local surface orientation from monocular or binocular cues, non-linear fingerprint enhancement, diffusion-based image processing, and several other image processing problems. The structure tensor can be also applied in geology to filter seismic data.
Processing spatio-temporal video data with the structure tensor
The three-dimensional structure tensor has been used to analyze three-dimensional video data (viewed as a function of x, y, and time t).
If one in this context aims at image descriptors that are invariant under Galilean transformations, to make it possible to compare image measurements that have been obtained under variations of a priori unknown image velocities
it is, however, from a computational viewpoint preferable to parameterize the components in the structure tensor/second-moment matrix using the notion of Galilean diagonalization
where denotes a Galilean transformation of spacetime and a two-dimensional rotation over the spatial domain,
compared to the abovementioned use of eigenvalues of a 3-D structure tensor, which corresponds to an eigenvalue decomposition and a (non-physical) three-dimensional rotation of spacetime
To obtain true Galilean invariance, however, also the shape of the spatio-temporal window function needs to be adapted, corresponding to the transfer of affine shape adaptation from spatial to spatio-temporal image data.
In combination with local spatio-temporal histogram descriptors,
these concepts together allow for Galilean invariant recognition of spatio-temporal events.
See also
Tensor
Tensor operator
Directional derivative
Gaussian
Corner detection
Edge detection
Lucas-Kanade method
Affine shape adaptation
Generalized structure tensor
References
Resources
Download MATLAB Source
Structure Tensor Tutorial (Original)
Tensors
Feature detection (computer vision) | Structure tensor | [
"Engineering"
] | 3,257 | [
"Tensors"
] |
7,096,967 | https://en.wikipedia.org/wiki/Ground%20source%20heat%20pump | A ground source heat pump (also geothermal heat pump) is a heating/cooling system for buildings that use a type of heat pump to transfer heat to or from the ground, taking advantage of the relative constancy of temperatures of the earth through the seasons. Ground-source heat pumps (GSHPs)or geothermal heat pumps (GHP), as they are commonly termed in North Americaare among the most energy-efficient technologies for providing HVAC and water heating, using less energy than can be achieved by use of resistive electric heaters.
Efficiency is given as a coefficient of performance (CoP), which is typically in the range 3–6, meaning that the devices provide 3–6 units of heat for each unit of electricity used. Setup costs are higher than for other heating systems, owing to the need to install ground loops over large areas or to drill boreholes; hence ground-source systems are often installed when new blocks of flats are built. Air-source heat pumps have lower set-up costs.
Thermal properties of the ground
Ground-source heat pumps take advantage of the difference between the ambient temperature and the temperature at various depths in the ground.
The thermal properties of the ground near the surface can be described as follows:
In the surface layer to a depth of about 1 meter, the temperature is very sensitive to sunlight and weather.
In the shallow layer to a depth of about 8–20 meters (depending on soil type), the thermal mass of the ground causes temperature variation to decrease exponentially with depth until it is close to the local annual average air temperature; it also lags behind the surface temperature, so that the peak temperature is about 6 months after the surface peak temperature.
Below that, in the deeper layer, the temperature is effectively constant, rising about 0.025 °C per metre according to the geothermal gradient.
The "penetration depth" is defined as the depth at which the temperature variable is less than 0.01 of the variation at the surface. This also depends on the type of soil:
History
The heat pump was described by Lord Kelvin in 1853 and developed by Peter Ritter von Rittinger in 1855. Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912.
After experimentation with a freezer, Robert C. Webber built the first direct exchange ground source heat pump in the late 1940s; sources disagree, however, as to the exact timeline of his invention. The first successful commercial project was installed in the Commonwealth Building (Portland, Oregon) in 1948, and has been designated a National Historic Mechanical Engineering Landmark by ASME. Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948.
As a result of the 1973 oil crisis, ground source heat pumps became popular in Sweden and have since grown slowly in worldwide popularity as the technology has improved. Open loop systems dominated the market until the development of polybutylene pipe in 1979 made closed loop systems economically viable.
As of 2004, there are over a million units installed worldwide, providing 12 GW of thermal capacity with a growth rate of 10% per year. About 80,000 units are installed in the US each year (as of 2011) and 27,000 in Sweden (as of 2004). In Finland, a geothermal heat pump was the most common heating system choice for new detached houses between 2006 and 2011, with market share exceeding 40%.
Arrangement
Internal arrangement
A heat pump is the central unit for the building's heating and cooling. It usually comes in two main variants:
Liquid-to-water heat pumps (also called water-to-water) are hydronic systems that carry heating or cooling through the building through pipes to conventional radiators, underfloor heating, baseboard radiators and hot water tanks. These heat pumps are also preferred for pool heating. Heat pumps typically only heat water to about efficiently, whereas boilers typically operate at . The size of radiators designed for the higher temperatures achieved by boilers may be too small for use with heat pumps, requiring replacement with larger radiators when retrofitting a home from boiler to heat pump. When used for cooling, the temperature of the circulating water must normally be kept above the dew point to ensure that atmospheric humidity does not condense on the radiator.
Liquid-to-air heat pumps (also called water-to-air) output forced air, and are most commonly used to replace legacy forced air furnaces and central air conditioning systems. There are variations that allow for split systems, high-velocity systems, and ductless systems. Heat pumps cannot achieve as high a fluid temperature as a conventional furnace, so they require a higher volume flow rate of air to compensate. When retrofitting a residence, the existing ductwork may have to be enlarged to reduce the noise from the higher air flow.
Ground heat exchanger
Ground source heat pumps employ a ground heat exchanger in contact with the ground or groundwater to extract or dissipate heat. Incorrect design can result in the system freezing after a number of years or in very inefficient performance; thus accurate system design is critical to a successful system.
Pipework for the ground loop is typically made of high-density polyethylene pipe and contains a mixture of water and anti-freeze (propylene glycol, denatured alcohol or methanol). Monopropylene glycol has the least damaging potential when it might leak into the ground, and is, therefore, the only allowed anti-freeze in ground sources in an increasing number of European countries.
Horizontal
A horizontal closed loop field is composed of pipes that are arrayed in a plane in the ground. A long trench, deeper than the frost line, is dug and U-shaped or slinky coils are spread out inside the same trench. Shallow horizontal heat exchangers experience seasonal temperature cycles due to solar gains and transmission losses to ambient air at ground level. These temperature cycles lag behind the seasons because of thermal inertia, so the heat exchanger will harvest heat deposited by the sun several months earlier, while being weighed down in late winter and spring, due to accumulated winter cold. Systems in wet ground or in water are generally more efficient than drier ground loops since water conducts and stores heat better than solids in sand or soil. If the ground is naturally dry, soaker hoses may be buried with the ground loop to keep it wet.
Vertical
A vertical system consists of a number of boreholes some deep fitted with U-shaped pipes through which a heat-carrying fluid that absorbs (or discharges) heat from (or to) the ground is circulated. Bore holes are spaced at least 5–6 m apart and the depth depends on ground and building characteristics. Alternatively, pipes may be integrated with the foundation piles used to support the building. Vertical systems rely on migration of heat from surrounding geology, unless recharged during the summer and at other times when surplus heat is available. Vertical systems are typically used where there is insufficient available land for a horizontal system.
Pipe pairs in the hole are joined with a U-shaped cross connector at the bottom of the hole, or the loop comprises two small-diameter high-density polyethylene (HDPE) tubes thermally fused to form a U-shaped bend at the bottom. The space between the wall of the borehole and the U-shaped tubes is usually grouted completely with grouting material or, in some cases, partially filled with groundwater. For illustration, a detached house needing 10 kW (3 ton) of heating capacity might need three boreholes deep.
Radial or directional drilling
As an alternative to trenching, loops may be laid by mini horizontal directional drilling (mini-HDD). This technique can lay piping under yards, driveways, gardens or other structures without disturbing them, with a cost between those of trenching and vertical drilling. This system also differs from horizontal & vertical drilling as the loops are installed from one central chamber, further reducing the ground space needed. Radial drilling is often installed retroactively (after the property has been built) due to the small nature of the equipment used and the ability to bore beneath existing constructions.
Open loop
In an open-loop system (also called a groundwater heat pump), the secondary loop pumps natural water from a well or body of water into a heat exchanger inside the heat pump. Since the water chemistry is not controlled, the appliance may need to be protected from corrosion by using different metals in the heat exchanger and pump. Limescale may foul the system over time and require periodic acid cleaning. This is much more of a problem with cooling systems than heating systems. A standing column well system is a specialized type of open-loop system where water is drawn from the bottom of a deep rock well, passed through a heat pump, and returned to the top of the well. A growing number of jurisdictions have outlawed open-loop systems that drain to the surface because these may drain aquifers or contaminate wells. This forces the use of more environmentally sound injection wells or a closed-loop system.
Pond
A closed pond loop consists of coils of pipe similar to a slinky loop attached to a frame and located at the bottom of an appropriately sized pond or water source. Artificial ponds are used as heat storage (up to 90% efficient) in some central solar heating plants, which later extract the heat (similar to ground storage) via a large heat pump to supply district heating.
Direct exchange (DX)
The direct exchange geothermal heat pump (DX) is the oldest type of geothermal heat pump technology, in which the refrigerant itself is passed through the ground loop. Developed during the 1980s, this approach faced issues with the refrigerant and oil management system, especially after the ban of CFC refrigerants in 1989, and DX systems are now infrequently used.
Installation
Because of the technical knowledge and equipment needed to design and size the system properly (and install the piping if heat fusion is required), a GSHP system installation requires a professional's services. Several installers have published real-time views of system performance in an online community of recent residential installations. The International Ground Source Heat Pump Association (IGSHPA), Geothermal Exchange Organization (GEO), Canadian GeoExchange Coalition and Ground Source Heat Pump Association maintain listings of qualified installers in the US, Canada and the UK. Furthermore, detailed analysis of soil thermal conductivity for horizontal systems and formation thermal conductivity for vertical systems will generally result in more accurately designed systems with a higher efficiency.
Thermal performance
Cooling performance is typically expressed in units of BTU/hr/watt as the energy efficiency ratio (EER), while heating performance is typically reduced to dimensionless units as the coefficient of performance (COP). The conversion factor is 3.41 BTU/hr/watt. Since a heat pump moves three to five times more heat energy than the electric energy it consumes, the total energy output is much greater than the electrical input. This results in net thermal efficiencies greater than 300% as compared to radiant electric heat being 100% efficient. Traditional combustion furnaces and electric heaters can never exceed 100% efficiency. Ground source heat pumps can reduce energy consumption – and corresponding air pollution emissions – up to 72% compared to electric resistance heating with standard air-conditioning equipment.
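A small sketch of the unit conversions mentioned above is given below; the example ratings and electrical inputs are arbitrary illustrative numbers.

```python
# Conversions between COP (dimensionless) and EER (BTU/hr per watt): EER ~ 3.41 * COP.
BTU_PER_HOUR_PER_WATT = 3.412

def cop_to_eer(cop: float) -> float:
    return cop * BTU_PER_HOUR_PER_WATT

def eer_to_cop(eer: float) -> float:
    return eer / BTU_PER_HOUR_PER_WATT

def heat_delivered_kw(electrical_input_kw: float, cop: float) -> float:
    """Heat moved by the pump is COP times the electrical input."""
    return electrical_input_kw * cop

print(cop_to_eer(4.0))              # ~13.6 BTU/hr/W
print(heat_delivered_kw(2.5, 4.0))  # 10 kW of heating from 2.5 kW of electricity
```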
Efficient compressors, variable speed compressors and larger heat exchangers all contribute to heat pump efficiency. Residential ground source heat pumps on the market today have standard COPs ranging from 2.4 to 5.0 and EERs ranging from 10.6 to 30. To qualify for an Energy Star label, heat pumps must meet certain minimum COP and EER ratings which depend on the ground heat exchanger type. For closed-loop systems, the ISO 13256-1 heating COP must be 3.3 or greater and the cooling EER must be 14.1 or greater.
Standards ARI 210 and 240 define Seasonal Energy Efficiency Ratio (SEER) and Heating Seasonal Performance Factors (HSPF) to account for the impact of seasonal variations on air source heat pumps. These numbers are normally not applicable and should not be compared to ground source heat pump ratings. However, Natural Resources Canada has adapted this approach to calculate typical seasonally adjusted HSPFs for ground-source heat pumps in Canada. The NRC HSPFs ranged from 8.7 to 12.8 BTU/hr/watt (2.6 to 3.8 in nondimensional factors, or 255% to 375% seasonal average electricity utilization efficiency) for the most populated regions of Canada.
For the sake of comparing heat pump appliances to each other, independently from other system components, a few standard test conditions have been established by the American Refrigerant Institute (ARI) and more recently by the International Organization for Standardization. Standard ARI 330 ratings were intended for closed-loop ground-source heat pumps, and assume secondary loop water temperatures of for air conditioning and for heating. These temperatures are typical of installations in the northern US. Standard ARI 325 ratings were intended for open-loop ground-source heat pumps, and include two sets of ratings for groundwater temperatures of and . ARI 325 budgets more electricity for water pumping than ARI 330. Neither of these standards attempts to account for seasonal variations. Standard ARI 870 ratings are intended for direct exchange ground-source heat pumps. ASHRAE transitioned to ISO 13256–1 in 2001, which replaces ARI 320, 325 and 330. The new ISO standard produces slightly higher ratings because it no longer budgets any electricity for water pumps.
Soil without artificial heat addition or subtraction and at depths of several metres or more remains at a relatively constant temperature year round. This temperature equates roughly to the average annual air temperature of the chosen location, usually at a depth of in the northern US. Because this temperature remains more constant than the air temperature throughout the seasons, ground source heat pumps perform with far greater efficiency during extreme air temperatures than air conditioners and air-source heat pumps.
Analysis of heat transfer
A challenge in predicting the thermal response of a ground heat exchanger (GHE) is the diversity of the time and space scales involved. Four space scales and eight time scales are involved in the heat transfer of GHEs. The first space scale having practical importance is the diameter of the borehole (~ 0.1 m) and the associated time is on the order of 1 hr, during which the effect of the heat capacity of the backfilling material is significant. The second important space dimension is the half distance between two adjacent boreholes, which is on the order of several meters. The corresponding time is on the order of a month, during which the thermal interaction between adjacent boreholes is important. The largest space scale can be tens of meters or more, such as the half-length of a borehole and the horizontal scale of a GHE cluster. The time scale involved is as long as the lifetime of a GHE (decades).
The short-term hourly temperature response of the ground is vital for analyzing the energy of ground-source heat pump systems and for their optimum control and operation. By contrast, the long-term response determines the overall feasibility of a system from the standpoint of the life cycle.
The main questions that engineers may ask in the early stages of designing a GHE are (a) what the heat transfer rate of a GHE as a function of time is, given a particular temperature difference between the circulating fluid and the ground, and (b) what the temperature difference as a function of time is, given a required heat exchange rate. In the language of heat transfer, the two questions can be expressed as
$T_f - T_0 = q_l \, R(t)$
where Tf is the average temperature of the circulating fluid, T0 is the effective, undisturbed temperature of the ground, ql is the heat transfer rate of the GHE per unit time per unit length (W/m), and R is the total thermal resistance (m·K/W). R(t) is often an unknown variable that needs to be determined by heat transfer analysis. Despite R(t) being a function of time, analytical models exclusively decompose it into a time-independent part and a time-dependent part to simplify the analysis.
Various models for the time-independent and time-dependent R can be found in the references. Further, a thermal response test is often performed to make a deterministic analysis of ground thermal conductivity to optimize the loopfield size, especially for larger commercial sites (e.g., over 10 wells).
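A minimal sketch of how the resistance relation above is used in practice is shown below; the resistance value and specific heat rate are invented example numbers, not design data.

```python
# Using T_f - T_0 = q_l * R(t): the fluid temperature implied by a given extraction rate.
# The resistance and load values below are invented example numbers.

def fluid_temperature(t0_ground_c: float, q_per_metre_w: float, resistance_mk_per_w: float) -> float:
    """Average circulating-fluid temperature for a given specific heat rate (negative = extraction)."""
    return t0_ground_c + q_per_metre_w * resistance_mk_per_w

# Extracting 30 W per metre of borehole from 10 degC ground with R = 0.12 m.K/W:
print(fluid_temperature(10.0, -30.0, 0.12))   # ~6.4 degC average fluid temperature
```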
Seasonal thermal storage
The efficiency of ground source heat pumps can be greatly improved by using seasonal thermal energy storage and interseasonal heat transfer. Heat captured and stored in thermal banks in the summer can be retrieved efficiently in the winter. Heat storage efficiency increases with scale, so this advantage is most significant in commercial or district heating systems.
Geosolar combisystems have been used to heat and cool a greenhouse using an aquifer for thermal storage. In summer, the greenhouse is cooled with cold ground water. This heats the water in the aquifer which can become a warm source for heating in winter. The combination of cold and heat storage with heat pumps can be combined with water/humidity regulation. These principles are used to provide renewable heat and renewable cooling to all kinds of buildings.
Also the efficiency of existing small heat pump installations can be improved by adding large, cheap, water-filled solar collectors. These may be integrated into a to-be-overhauled parking lot, or in walls or roof constructions by installing one-inch PE pipes into the outer layer.
Environmental impact
The US Environmental Protection Agency (EPA) has called ground source heat pumps the most energy-efficient, environmentally clean, and cost-effective space conditioning systems available. Heat pumps offer significant emission reductions potential where the electricity is produced from renewable resources.
GSHPs have unsurpassed thermal efficiencies and produce zero emissions locally, but their electricity supply includes components with high greenhouse gas emissions unless it is a 100% renewable energy supply. Their environmental impact, therefore, depends on the characteristics of the electricity supply and the available alternatives.
The GHG emissions savings from a heat pump over a conventional furnace can be calculated based on the following formula:
HL = seasonal heat load ≈ 80 GJ/yr for a modern detached house in the northern US
FI = emissions intensity of fuel = 50 kg(CO2)/GJ for natural gas, 73 for heating oil, 0 for 100% renewable energy such as wind, hydro, photovoltaic or solar thermal
AFUE = furnace efficiency ≈ 95% for a modern condensing furnace
COP = heat pump coefficient of performance ≈ 3.2 seasonally adjusted for northern US heat pump
EI = emissions intensity of electricity ≈ 200–800 ton(CO2)/GWh, depending on the region's mix of electric power plants (Coal vs Natural Gas vs Nuclear, Hydro, Wind & Solar)
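The formula itself appears to have been lost from the text above; the sketch below reconstructs the calculation purely from the units of the quantities listed (kg CO2/GJ for fuel, ton CO2/GWh for electricity, 1 GWh = 3600 GJ), so it should be read as an interpretation rather than the source's exact expression.

```python
# Units-based estimate of annual GHG savings of a heat pump relative to a furnace.
# savings [kg CO2/yr] = HL * ( FI/AFUE  -  (EI * 1000/3600) / COP )
#   HL  : heat load (GJ/yr)          FI : fuel emissions intensity (kg CO2/GJ)
#   AFUE: furnace efficiency (0-1)   COP: heat pump coefficient of performance
#   EI  : grid emissions intensity (ton CO2/GWh); 1 GWh = 3600 GJ

def ghg_savings_kg_per_year(hl_gj, fi_kg_per_gj, afue, cop, ei_ton_per_gwh):
    furnace = hl_gj * fi_kg_per_gj / afue                        # kg CO2/yr from the furnace
    heat_pump = hl_gj / cop * ei_ton_per_gwh * 1000.0 / 3600.0   # kg CO2/yr from electricity
    return furnace - heat_pump

# Example with the illustrative values quoted in the text (natural gas, low-emitting grid):
print(ghg_savings_kg_per_year(80, 50, 0.95, 3.2, 200))   # ~2800 kg CO2 saved per year
```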
Ground-source heat pumps always produce fewer greenhouse gases than air conditioners, oil furnaces, and electric heating, but natural gas furnaces may be competitive depending on the greenhouse gas intensity of the local electricity supply. In countries like Canada and Russia with low emitting electricity infrastructure, a residential heat pump may save 5 tons of carbon dioxide per year relative to an oil furnace, or about as much as taking an average passenger car off the road. But in cities like Beijing or Pittsburgh that are highly reliant on coal for electricity production, a heat pump may result in 1 or 2 tons more carbon dioxide emissions than a natural gas furnace. For areas not served by utility natural gas infrastructure, however, no better alternative exists.
The fluids used in closed loops may be designed to be biodegradable and non-toxic, but the refrigerant used in the heat pump cabinet and in direct exchange loops was, until recently, chlorodifluoromethane, which is an ozone-depleting substance. Although harmless while contained, leaks and improper end-of-life disposal contribute to enlarging the ozone hole. For new construction, this refrigerant is being phased out in favor of the ozone-friendly but potent greenhouse gas R410A. Open-loop systems (i.e. those that draw ground water as opposed to closed-loop systems using a borehole heat exchanger) need to be balanced by reinjecting the spent water. This prevents aquifer depletion and the contamination of soil or surface water with brine or other compounds from underground.
Before drilling, the underground geology needs to be understood, and drillers need to be prepared to seal the borehole, including preventing penetration of water between strata. The unfortunate example is a geothermal heating project in Staufen im Breisgau, Germany which seems the cause of considerable damage to historical buildings there. In 2008, the city centre was reported to have risen 12 cm, after initially sinking a few millimeters. The boring tapped a naturally pressurized aquifer, and via the borehole this water entered a layer of anhydrite, which expands when wet as it forms gypsum. The swelling will stop when the anhydrite is fully reacted, and reconstruction of the city center "is not expedient until the uplift ceases". By 2010 sealing of the borehole had not been accomplished. By 2010, some sections of town had risen by 30 cm.
Economics
Ground source heat pumps are characterized by high capital costs and low operational costs compared to other HVAC systems. Their overall economic benefit depends primarily on the relative costs of electricity and fuels, which are highly variable over time and across the world. Based on recent prices, ground-source heat pumps currently have lower operational costs than any other conventional heating source almost everywhere in the world. Natural gas is the only fuel with competitive operational costs, and only in a handful of countries where it is exceptionally cheap, or where electricity is exceptionally expensive. In general, a homeowner may save anywhere from 20% to 60% annually on utilities by switching from an ordinary system to a ground-source system.
Capital costs and system lifespan have received much less study until recently, and the return on investment is highly variable. The rapid escalation in system price has been accompanied by rapid improvements in efficiency and reliability. Capital costs are known to benefit from economies of scale, particularly for open-loop systems, so they are more cost-effective for larger commercial buildings and harsher climates. The initial cost can be two to five times that of a conventional heating system in most residential applications, new construction or existing. In retrofits, the cost of installation is affected by the size of the living area, the home's age, insulation characteristics, the geology of the area, and the location of the property. Proper duct system design and mechanical air exchange should be considered in the initial system cost.
Capital costs may be offset by government subsidies; for example, Ontario offered $7000 for residential systems installed in the 2009 fiscal year. Some electric companies offer special rates to customers who install a ground-source heat pump for heating or cooling their building. Where electrical plants have larger loads during summer months and idle capacity in the winter, this increases electrical sales during the winter months. Heat pumps also lower the load peak during the summer due to the increased efficiency of heat pumps, thereby avoiding the costly construction of new power plants. For the same reasons, other utility companies have started to pay for the installation of ground-source heat pumps at customer residences. They lease the systems to their customers for a monthly fee, at a net overall saving to the customer.
The lifespan of the system is longer than conventional heating and cooling systems. Good data on system lifespan is not yet available because the technology is too recent, but many early systems are still operational today after 25–30 years with routine maintenance. Most loop fields have warranties for 25 to 50 years and are expected to last at least 50 to 200 years. Ground-source heat pumps use electricity for heating the house; the higher investment above conventional oil, propane or electric systems may be returned in energy savings in 2–10 years for residential systems in the US. The payback period for larger commercial systems in the US is 1–5 years, even when compared to natural gas. Additionally, because geothermal heat pumps usually have no outdoor compressors or cooling towers, the risk of vandalism is reduced or eliminated, potentially extending a system's lifespan.
Ground source heat pumps are recognized as one of the most efficient heating and cooling systems on the market. They are often the second-most cost-effective solution in extreme climates (after co-generation), despite reductions in thermal efficiency due to ground temperature. (The ground source is warmer in climates that need strong air conditioning, and cooler in climates that need strong heating.) The financial viability of these systems depends on the adequate sizing of ground heat exchangers (GHEs), which generally contribute the most to the overall capital costs of GSHP systems.
Commercial systems maintenance costs in the US have historically been between $0.11 to $0.22 per m2 per year in 1996 dollars, much less than the average $0.54 per m2 per year for conventional HVAC systems.
Governments that promote renewable energy will likely offer incentives for the consumer (residential), or industrial markets. For example, in the United States, incentives are offered both on the state and federal levels of government.
See also
Ground-coupled heat exchanger
Deep water source cooling
Solar thermal cooling
Renewable heat
International Ground Source Heat Pump Association
Glossary of geothermal heating and cooling
Uniform Mechanical Code
References
External links
Geothermal Heat Pumps. (EERE/USDOE)
Cost calculation
Geothermal Heat Pump Consortium
International Ground Source Heat Pump Association
Ground Source Heat Pump Association (GSHPA)
Energy conversion
Building engineering
Heat pumps
Sustainable technologies | Ground source heat pump | [
"Engineering"
] | 5,277 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
7,097,296 | https://en.wikipedia.org/wiki/DMSMS | Diminishing manufacturing sources and material shortages (DMSMS) or diminishing manufacturing sources (DMS) is defined as: "The loss or impending loss of manufacturers of items or suppliers of items or raw materials." DMSMS and obsolescence are terms that are often used interchangeably. However, obsolescence refers to a lack of availability due to statutory or process changes and new designs, whereas DMSMS is a lack of sources or materials.
Impact
Although DMSMS is not strictly limited to electronic systems, much of the effort regarding DMSMS deals with electronic components that have a relatively short lifetime.
Causes
Primary components
DMSMS is a multifaceted problem because there are at least three main components that need to be considered. First, a primary concern is the ongoing improvement in technology. As new products are designed, the technology that was used in their predecessors becomes outdated, making it more difficult to repair the equipment. Second, the mechanical parts may be harder to acquire because fewer are produced as the demand for these parts decreases. Third, the materials required to manufacture a piece of equipment may no longer be readily available.
Product life cycle
It is widely accepted that all electronic devices are subject to the product life cycle. As products evolve into updated versions, they require parts and technology distinct from their predecessors. However, the earlier versions of the product often still need to be maintained throughout their life cycle. As the new product becomes predominant, there are fewer parts available to fix the earlier versions and the technology becomes outdated.
According to EIA-724, there are 6 distinct phases of a product's life cycle: Introduction, Growth, Maturity, Saturation, Decline, and Phase-Out. Although the terms "Introduction", "Growth", and "Decline" are generally accepted without much explanation, the terms "Maturity", "Saturation", and "Phase-Out" are less obvious.
"Maturity" in this case refers to state in the product's life cycle where sales of the product first reach its sales peak and begins to level off. Having survived the Introduction and Growth phases, products in this phase have a low probability of being discontinued.
"Saturation" refers to a state in the product's life cycle where sales have leveled off and, towards the end of this phase, first begin to decline. The term "Saturation" is confusing to many and can be explained in reference to its equivalent in chemistry where a substance can no longer be dissolved in a liquid. A product can be said to have "saturated" its market. The decline at the end of the Saturation phase gives the first indications of the products end of life.
"Phase-out" refers to the final stages of a product's decline ending in the product being altogether discontinued by the supplier.
Mitigation
DMSMS is managed through various risk mitigation efforts, both during the manufacturing of a product and later in the product's life cycle. DMSMS is a particular concern in military supply, where the usable lifetime of an electronic system may far exceed the availability of the components used to produce that system.
Devices in phases 5 and 6 of a product's life cycle require caution on the part of designers and product support engineers to assure that system components are indeed available at the time of production.
Some examples of the signs and symptoms of a DMSMS issue are:
Notification of a part that will be discontinued in the future.
A system that uses a unique part that can only be produced by a single manufacturer.
Dwindling of parts for a system, but no replacements over time.
Planning in a new system design that does not consider future obsolescence problems.
A parts list that contains an end-of-life cycle part before a system has gone into production.
The core methodology for DMSMS analysis has been to make direct contact with the supplier of an item. Direct contact takes the form of phone, e-mail or other communication with a competent supplier representative. This is essential in the management of commercial off-the-shelf products and assemblies. The main items of concern in a DMSMS analysis are:
Is the item an active product?
Is the item a good seller (generates good revenue for the company)?
Is the item slated for obsolescence for any reason (e.g. replaced by a newer version)?
Monitoring
Other methodologies involve subscription to data services which monitor parts lists, known as a Bill of Materials (BOM), for activity on any one part in the user's list. Often both the classic methodology and the data subscription methodology will be used in conjunction to provide a more complete assessment of a part's availability and lifetime.
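As a rough, hypothetical illustration of this kind of monitoring, a parts list can be cross-referenced against a vendor-supplied list of discontinued part numbers with standard Unix text tools (the file names bom.txt and eol.txt are made up for the example):
# bom.txt: one part number per line for the system's bill of materials
# eol.txt: part numbers announced as end-of-life or discontinued
sort -u bom.txt > bom.sorted
sort -u eol.txt > eol.sorted
comm -12 bom.sorted eol.sorted > at_risk.txt   # parts present in both lists
wc -l < at_risk.txt                            # count of at-risk parts
Real DMSMS data services automate this comparison and add life-cycle status codes, but the underlying idea is the same cross-reference.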
Lifetime buy
One strategy used to combat DMS is to buy additional inventory during the production run of a system or part, in quantities sufficient to cover the expected number of failures. This strategy is known as a lifetime buy. An example of this is the many 30- and 40-year-old railway locomotives being run by small operators in the United Kingdom. These operators will often buy more locomotives than they actually require, and keep a number of them stored as a source of spare parts.
Take action
A DMSMS risk management plan helps ensure that parts are available when they are needed. Long-range planning must occur for every key piece of equipment, establishing when and what parts will be replaced or redesigned, and anticipating potential equipment problems. Replacement of obsolete parts and equipment should be considered. New methods of design engineering allow for the open exchange of parts as technology changes. Companies also offer assistance and consultation through seminars and workshops, audits, and the implementation of effective DMSMS processes.
See also
Cannibalization (parts)
Military surplus
Obsolescence
Product life cycle management
Stockout
Supply chain management
References
Further reading
Bjoern Bartels, Ulrich Ermel, Peter Sandborn and Michael G. Pecht: Strategies to the Prediction, Mitigation and Management of Product Obsolescence, 1st. Ed., John Wiley & Sons, Inc., Hoboken, New Jersey, 2012, , online available at google books.
External links
DMSMS Knowledge Sharing Portal
Electronics manufacturing
Obsolescence
Product management
Scarcity | DMSMS | [
"Engineering"
] | 1,269 | [
"Electronic engineering",
"Electronics manufacturing"
] |
7,097,405 | https://en.wikipedia.org/wiki/Nina%20Bari | Nina Karlovna Bari (; 19 November 1901 – 15 July 1961) was a Soviet mathematician known for her work on trigonometric series.<ref name="asc">Biography of Nina Karlovna Bari, by Giota Soublis, Agnes Scott College.</ref> She is also well-known for two textbooks, Higher Algebra and The Theory of Series''.
Early life and education
Nina Bari was born in Russia on 19 November 1901, the daughter of Olga and Karl Adolfovich Bari, a physician. In 1918, she became one of the first women to be accepted to the Department of Physics and Mathematics at the prestigious Moscow State University. She graduated in 1921—just three years after entering the university. After graduation, Bari began her teaching career. She lectured at the Moscow Forestry Institute, the Moscow Polytechnic Institute, and the Sverdlov Communist Institute. Bari applied for and received the only paid research fellowship awarded by the newly created Research Institute of Mathematics and Mechanics. As a student, Bari was drawn to an elite group nicknamed the Luzitania—an informal academic and social organization. She studied trigonometric series and functions under the tutelage of Nikolai Luzin, becoming one of his star students. She presented the main result of her research to the Moscow Mathematical Society in 1922—the first woman to address the society.
In 1926, Bari completed her doctoral work on the topic of trigonometric expansions, winning the Glavnauk Prize for her thesis work. In 1927, Bari took advantage of an opportunity to study in Paris at the Sorbonne and the College de France. She then attended the Polish Mathematical Congress in Lwów, Poland; a Rockefeller grant enabled her to return to Paris to continue her studies. Bari's decision to travel may have been influenced by the disintegration of the Luzitanians. Luzin's irascible, demanding personality had alienated many of the mathematicians who had gathered around him. By 1930, all traces of the Luzitania movement had vanished, and Luzin left Moscow State for the Academy of Science's Steklov Institute of Mathematics. In 1932, she became a professor at Moscow State University and in 1935 was awarded the title of Doctor of Physical and Mathematical Sciences, a more prestigious research degree than a traditional Ph.D. By this time, she had completed foundational work on trigonometric series.
Career and later life
She was a close collaborator with Dmitrii Menshov on a number of research projects. She and Menshov took charge of function theory work at Moscow State during the 1940s. In 1952, she published an important piece on primitive functions, and trigonometric series and their almost everywhere convergence. Bari also posted works at the 1956 Third All-Union Congress in Moscow and the 1958 International Congress of Mathematicians in Edinburgh.
Mathematics was the center of Bari's intellectual life, but she enjoyed literature and the arts. She was also a mountain hiking enthusiast and tackled the Caucasus, Altai, Pamir and Tian Shan mountain ranges in Russia. Bari's interest in mountain hiking was inspired by her husband, Viktor Vladimirovich Nemytskii, a Soviet mathematician, Moscow State professor and an avid mountain explorer. There is no documentation of their marriage available, but contemporaries believe the two married later in life. Bari's last work—her 55th publication—was a 900-page monograph on the state of the art of trigonometric series theory, which is recognized as a standard reference work for those specializing in function and trigonometric series theory.
Death
On 15 July 1961, Bari died after being hit by a train. It was possibly a suicide due to depression caused by Luzin's death eleven years earlier.
References
1901 births
1961 deaths
Soviet mathematicians
Soviet women mathematicians
20th-century Russian mathematicians
Mathematical analysts
Moscow State University alumni
Academic staff of Moscow State University
20th-century women mathematicians
Railway accident deaths in Russia | Nina Bari | [
"Mathematics"
] | 793 | [
"Mathematical analysis",
"Mathematical analysts"
] |
7,097,464 | https://en.wikipedia.org/wiki/Ruth%20Aaronson%20Bari | Ruth Aaronson Bari (November 17, 1917 – August 25, 2005) was an American mathematician known for her work in graph theory and algebraic homomorphisms. She was a professor at George Washington University, beginning in 1966.
Career
The daughter of Polish-Jewish immigrants to the United States, Ruth Aaronson was born November 17, 1917, and grew up in Brooklyn, New York.
She attended Brooklyn College, earning her bachelor's degree in mathematics in 1939. She earned her Master of Arts degree at Johns Hopkins University in 1943, but had originally enrolled in the doctoral program. When the university suggested that women in the graduate program should give up their fellowships so that men returning from World War II could study, Bari acceded. After marrying Arthur Bari, she spent the next two decades devoted to their family. They had three daughters together.
She returned to Johns Hopkins for graduate work, and completed her dissertation, Absolute Reducibility of Maps of at Most 19 Regions, in 1966 at the age of 47. The dissertation explored chromatic polynomials and the Birkhoff–Lewis conjecture. She determined that, "because of the fact that all other cubic maps with fewer than 20 regions contain at least one absolutely reducible configuration, it follows that the Birkhoff-Lewis conjecture holds for all maps with fewer than 20 regions." Her Ph.D. advisor was Daniel Clark Lewis, Jr.
After she received her degree, mathematician William Tutte invited Bari to spend two weeks lecturing on her work in Canada at the University of Waterloo. Bari's work in the areas of graph theory and homomorphisms—and especially chromatic polynomials—has been recognized as influential.
In 1976, two professors relied on computer work to solve the perennial problem of Bari's dissertation, involving the four-color conjecture. When her daughter Martha asked her if she felt cheated by the technological solution, Bari replied, "I'm just grateful that it was solved within my lifetime and that I had the privilege to witness it."
During her teaching career, Bari participated in a class-action lawsuit against George Washington University which protested inequalities in promotion and pay for female faculty members. The protests were successful. Notable students of Bari include Carol Crawford, Steven Kahn, and Lee Lawrence.
Bari retired at the legally mandated age of 70 in 1988 with the distinction of professor emeritus.
Community and personal life
Bari was active in the Washington, DC community. In the early 1970s, Bari used a grant from the National Science Foundation to start a master's degree program in teaching mathematics. She felt that math teachers in DC public schools were not as well prepared as they needed to be.
Her three daughters became influential in their fields. Judi Bari (1949–1997) was a leading labor and environmental activist and feminist, who lived and worked in Northern California. She survived an assassination attempt in 1990. Gina Kolata is a mathematics, health and science journalist for the New York Times. Martha Bari is an art historian at Hood College in Frederick, Maryland.
Bari died on August 25, 2005, from complications of Alzheimer's disease. She had lived in Silver Spring, Maryland since 1963 and was 87 years old at the time of her death. She was survived by her husband of 64 years of marriage, Arthur Bari (1913–2006). In addition to their three daughters, they had two grandchildren, including Lisa Bari.
References
1917 births
2005 deaths
Jewish American scientists
American people of Polish-Jewish descent
20th-century American mathematicians
21st-century American mathematicians
George Washington University faculty
Graph theorists
20th-century American women scientists
20th-century American women mathematicians
21st-century American women mathematicians
Brooklyn College alumni | Ruth Aaronson Bari | [
"Mathematics"
] | 758 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
7,097,494 | https://en.wikipedia.org/wiki/Photocyte | A photocyte is a cell that specializes in catalyzing enzymes to produce light (bioluminescence). Photocytes typically occur in select layers of epithelial tissue, functioning singly or in a group, or as part of a larger apparatus (a photophore). They contain special structures called photocyte granules. These specialized cells are found in a range of multicellular animals, including coelenterates (cnidarians and ctenophores), annelids, arthropods (including insects) and fishes. Although some fungi are bioluminescent, they do not have such specialized cells.
Mechanism of light production
Light production may first be triggered by nerve impulses which stimulate the photocyte to release the enzyme luciferase into a "reaction chamber" of luciferin substrate. In some species, the release occurs continually without the precursor impulse via osmotic diffusion. Molecular oxygen is then actively gated through surrounding tracheal cells which otherwise limit the natural diffusion of oxygen from blood vessels; the resulting reaction of oxygen gas with the luciferase and luciferin produces light energy and a byproduct (usually carbon dioxide). The reaction occurs in the peroxisome of the cell.
Researchers once postulated that ATP was the source of reaction energy for photocytes, but since ATP only produces a fraction of the energy of the luciferase reaction, any resulting light wave-energy would be too small for detection by a human eye. The wavelengths produced by most photocytes fall close to 490 nm, although light as energetic as 250 nm is reportedly possible.
The variations of color seen in different photocytes are usually the result of color filters in other parts of the photophore that alter the wavelength of the light prior to exiting the endoderm. The range of colors varies between bioluminescent species.
The exact combinations of luciferase and luciferin types found among photocytes are specific to the species to which they belong. This appears to be the result of consistent evolutionary divergence.
Anatomy and physiology
Firefly larvae
Light production in Photuris pennsylvanica larvae occurs in the roughly 2,000 photocytes located in the heavily innervated light organ of the insect, which is much simpler than that of the adult organism. The transparent photocytes of the larvae are clearly distinguishable from the opaque dorsal layer cells that cover them. Nervous and intracellular mechanisms contribute to light production in the photocytes. It has been shown that fireflies can modify the amount of oxygen that travels through their tracheal system to the light organ, which plays a role in oxygen availability for light production. They do this by modifying the amount of fluid present within the tracheal system. Because oxygen diffuses more slowly through water than in a gaseous form, this allows fireflies to effectively change the amount of oxygen reaching the photocytes. Spiracles can be opened and closed to control the amount of air that is able to pass through the tracheal system, but this control mechanism is only used as a response to a stressor.
Neural mechanism of light production
Research has shown that applying 5 to 15 volts of electricity for 50 ms to the segmental nerve that innervates the light organ leads to a glow 1.5 seconds later that lasts for five to ten seconds. Stimulation of the segmental nerve has been found to lead to several different nerve impulses, and the frequency of nervous impulses has been found to be proportional to the intensity of the stimulus applied. A high frequency of nervous impulses was found to lead to a constant latency. The light organ is inactive in the absence of nerve impulses. Constant nerve signaling was shown to coincide with constant emission of light from the light organ, with a higher frequency coinciding with a higher amplitude of emitted light up to 30 impulses per second. Impulses beyond this frequency were not found to be associated with a more intense glow. The fact that the frequency of nerve impulses could exceed the frequency corresponding to the maximum intensity of light emission suggests some limitation in the mechanism, arising either from the synapse or from the cell's light-producing process. Additionally, a series of action potentials has been shown to lead to sporadic, discontinuous emission of light. It was also found that a higher frequency of action potentials leads to a higher likelihood of any emission of light. Nerve impulses are associated with a depolarization of the photocyte, which plays a role in its light-emitting mechanism, and greater depolarization events were found to be associated with more intense light emission. The nerve innervating the light organ containing photocytes has only two axons, but they branch repeatedly, allowing the numerous photocytes to be innervated, with each cell being associated with several nerve terminals and each terminal possibly being associated with several synapses.
It was found that the junction at the end of the neuron innervating the light organ differs from the kind of junction found between two different neurons or between neurons and muscles at the neuromuscular junction. The depolarization of the photocyte following nervous stimulation was found to be one hundred times slower than with the other two kinds of junctions, and this slow response cannot be attributed to the rate of diffusion because the synapse between the neuron and the photocyte is relatively small. It has been found that the neurons that control the light mechanism terminate at the tracheal cells rather than the photocytes themselves.
Intracellular mechanism
The resting potential of photocytes was found to lie in a range between 50 and 65 millivolts. It is generally accepted that the emission of light occurs after depolarization of the photocyte membrane, although some have argued that the depolarization follows the emission of light. The depolarization of the membrane results in an increase in the rate of diffusion of ions across it. The depolarization of the photocyte was found to occur 0.5 seconds after the nervous impulse, culminating at one second with the maximum degree of depolarization observed. A higher frequency of nervous stimulation was associated with a smaller depolarization event. Exposure to neurotransmitters including epinephrine, norepinephrine, and synephrine results in the emission of light, but without any corresponding depolarization of the photocyte membrane.
Mnemiopsis leidyi
Photocytes are found distributed unevenly near the plate cilia cells. Gastric cells form a barrier that keeps the photocytes away from the opening of the radial canal, along which they are found.
Porichthys
Light production in Porichthys notatus has been found to be triggered through an adrenergic mechanism. The sympathetic nervous system of the fish is responsible for triggering bioluminescence in the photocytes. In response to stimulation by norepinephrine, epinephrine, or phenylephrine, the photocyte exhibits a quick flash and then emits light that slowly fades in intensity. Stimulation by isoproterenol was found to cause only a slowly fading illumination. The amplitude of the quick flash, referred to as the "fast response", was higher when the concentration of the stimulating neurotransmitter increased. A great deal of variation in luminescence was exhibited in the photocytes of different fish. Variation also existed depending on what time of year the photocytes were collected from the fish. Stimulation with phenylephrine was found to produce a less intense response than that with epinephrine or norepinephrine. Phentolamine was shown to inhibit the effect of stimulation by phenylephrine completely, and that of epinephrine and norepinephrine to a lesser degree. Clonidine was shown to have an inhibitory effect on the fast response but no effect on the slow response. The photocytes of Porichthys are known to be extensively innervated.
Amphiura filiformis
Mechanical stimulation of the spines on the arm can cause Amphiura filiformis to bioluminesce in the blue range. The species has been found to possess a luciferase compound. The luciferase has been localized to clusters of photocytes at the tips of the arms and around the spines. Cells believed to be photocytes have been found near the spine nerve plexus, mucous cells, and what are believed to be pigment cells. It has been found that luminescence is controlled by the animal's nervous system. Acetylcholine is able to stimulate the cells through nicotinic receptors.
Amphipholis squamata
In Amphipholis squamata, bioluminescence has been observed to come from the spines emanating from the arms from photocytes within the spinal ganglia. Acetylcholine has been found to be able to stimulate the photocytes to produce light.
Mollusks
It was discovered that bioluminescent snails are able to exercise a great deal of control over light emission, but the way in which they exercise control over it is still unknown. Phuphania have even been shown to be able to preserve their ability to produce light even after long periods of hibernation. It is currently unknown how these snails are able to maintain their ability to produce light for long periods of time, but theories have been proposed possibly relating it to the way certain fungi are able to maintain their bioluminescence.
Other species of fish
Adrenaline stimulates photocytes to emit light for many species of fish. It is believed that sympathetic nervous impulses provide the stimulus that causes photocytes to emit light.
Embryological development
Mnemiopsis leidyi
For Mnemiopsis leidyi, the ability to produce light is first observed upon the development of the plate cilia cells, and the bioluminescent cells found in the embryo share many characteristics with the photocytes observed in the adult organism. The M macromere lineage of cells are the ones that differentiate into photocytes, and they separate from other lineages of cells in the differential division. The subsequent maturation of the photocytes and intensification of light produced develop rapidly, occurring within ten hours of the first observed instance of bioluminescence. The egg of the organism contains two cytoplasmic regions: cortical and yolky, and the region of cytoplasm that daughter cells receive when the egg divides determine what they differentiate into. It was found that whether cortical cells exhibited bioluminescence or not was dependent on whether they inherited yolk in their cytoplasm with the cells containing yolk producing light and the cells without yolk not producing any light.
Evolution of photocytes
Luciferins have been shown to be largely conserved among different species while luciferases show a greater degree of diversity. Eighty percent of the species that exhibit bioluminescence exist in aquatic habitats.
Etmopterus spinax
Overall, the evolution of light producing cells (photocytes) is believed to have happened twice in sharks through convergence. Evidence suggests that the bioluminescent properties of the shark, Etmopterus spinax, came about as a mechanism of camouflage. It is thought that luminescence has other functions as well due to camouflage not being a logical explanation for the luminescence on the lateral sides of the shark. Bioluminescence is believed to have only evolved in sharks among the cartilaginous fishes. The function of bioluminescence among sharks has not been fully ascertained.
Evolution in fireflies
All five families of luminescent beetle (Phengodidae, Rhagophthalidae, Elateridae, Sinopyrophoridae, and Lampyridae) are categorized into the Lampyroid clade. It has been determined that the luciferase and luciferin expressed in the photocytes of all species of firefly are homologous with those expressed in beetle species within the families Phengodidae, Rhagophthalidae, and Elateridae. In fact, every bioluminescent beetle species studied has been shown to use very similar mechanisms for light production in the photocyte. The beetle family Sinopyrophoridae has been shown to exhibit bioluminescence, although the exact mechanism is not known; it is believed to be homologous with that of other beetle groups, however. The first time the entire genome of a bioluminescent beetle was determined was in 2017, with Pyrocoelia pectoralis, a species of firefly, and in 2018 three more species of bioluminescent beetle had their genomes sequenced. Bioluminescence in beetles has been shown to serve multiple purposes, including the deterrence of predators and the attraction of mates.
The variation in coloring among different species of firefly has been determined to be due to differences in the amino acid sequences of the luciferases expressed in their photocytes. Two luciferase genes have been identified in the genomes of fireflies. They are luc1-type and luc2-type. There is evidence that suggests that Luc1-type evolved from a gene duplication of the gene that encodes for acyl-CoA synthetase. It is hypothesized that the luciferase of click beetles evolved separately from that in fireflies being the result of two gene duplications of the acyl-CoA synthetase gene suggesting analogy instead of homology between the groups. Additional genes have been found to be related to the storage of luciferin.
Amphiura filiformis
Bioluminescence in Amphiura filiformis and other brittle stars is widely believed to function in protection against predators. By attracting predators to one arm and then losing that arm, the brittle star is able to escape predation.
Other species of fish
Fish generally use bioluminescence for camouflage to hide from predators. Endogenous photocytes are more commonly used for bioluminescence than other means like bacteria. Some fish may use the bioluminescence produced by their photocytes as a means of communication.
Mollusks
Bioluminescence has only been observed in three classes of mollusks: Cephalopoda, Gastropoda, and Bivalvia. Bioluminescence is widespread among cephalopods, but much rarer among the other classes of mollusk. Most species of bioluminescent mollusk that have been discovered are found in the ocean, with the exception of the genera Latia and Quantula, found in freshwater and terrestrial habitats respectively; however, more recent research has discovered luminescence in the Phuphania genus. It is hypothesized that terrestrial mollusks that use bioluminescence developed it as a strategy to deter predation. The green color emitted by the mollusk's photocytes is thought to be the most visible color to nocturnal predators.
Structure and organelles
The mitochondria are believed to be important in controlling the supply of oxygen available for making light in fireflies. An increased rate of respiration decreases the intracellular oxygen concentration, which reduces the amount available for light production. The mitochondria of the photocyte lie near the perimeter of the cell, while the peroxisome is typically found closer to the middle of the cell. It is worth noting that not all bioluminescence in the firefly light organ occurs in the granules of the photocyte; some fluorescent protein has been found to exist in the posterior region of the organ.
Organelle targeting
It was found that the luciferase enzyme produced in fireflies is localized to the peroxisome within the photocytes. When mammalian cells were modified to produce the enzyme, it was found that the enzyme was targeted to the mammalian peroxisome as well. Because protein targeting to peroxisomes is not well understood, this finding is valuable for its potential to aid in the determination of peroxisome targeting mechanisms. If the cell produces a large amount of luciferase, some of the protein ends up in the cytoplasm. It is unknown what feature of the luciferase enzyme causes it to be targeted to the peroxisome, since no particular protein sequences related to peroxisome targeting have been discovered.
Arachnocampa luminosa
The photocyte of Arachnocampa luminosa was found to contain a circular nucleus, and large amounts of ribosomes, smooth endoplasmic reticulum, mitochondria, and microtubules. Instead of having photocyte granules, the photocytes of the organism were shown to undergo the luciferase reaction in their cytoplasm. The cells do not have a golgi apparatus or rough endoplasmic reticulum and were found to be 250 micrometers by 120 micrometers overall with a depth of 25 to 30 micrometers.
Renilla köllikeri
The photocytes of Renilla köllikeri were found to have a diameter of eight to ten micrometers. The mitochondria of the photocytes were found to be very large with abnormally organized cristae surrounding the nucleus of the cell. The rough endoplasmic reticulum of the photocytes were found to exist close to the cell membrane. Several small vesicles, on the order of 0.25 micrometers, were found in the cell, and differently shaped granules containing diverse contents were also observed.
Amphipholis squamata
The photocytes present in Amphipholis squamata have been found to contain a Golgi apparatus and rough endoplasmic reticulum. They have also been found to contain up to six different kinds of vesicles within their cytoplasm.
Signal transduction
Signal transduction pathways in the photocyte of the firefly have been hypothesized to play a role in decreasing the activity of the mitochondria to make oxygen available for the production of light in fireflies. Because the neurons that control the lighting mechanism of the photocytes terminate at the tracheal cells instead of the photocytes, there must be some process that mediates the transference of the signal to them. Nitric oxide is believed to play this role partly due to the fact that it has already been implicated in a plethora of signaling roles in tissues among several, diverse clades of animal including insects. In fact, concentrations of nitric oxide on the order of 70 ppm have been found to result in flashing in fireflies, and carboxy-PTIO, a Nitric Oxide scavenger, has been shown to inhibit the response. Additionally, the tracheolar end organ was found to contain a high concentration of the enzyme nitric oxide synthase. Nitric oxide has been implicated with the action of decreasing respiration in the mitochondria. This effect on the mitochondria has been found to be influenced by surrounding light conditions with more light decreasing the action of nitric oxide on the mitochondria and less light increasing its action. In addition to ambient light, the light produced by the photocytes can also play an inhibitory role on the effect of nitric oxide. The photocytes have been described as containing a vacuole that plays a role in signaling with the extracellular environment. It has been found that octopamine triggers an adenylate cyclase which plays a role in triggering bioluminescence in the photocytes in fireflies. A reaction among D-luciferin, luciferase, and ATP has been implicated in the mechanism of light production in firefly photocytes. The fluorescent response was also found to be greater in basic conditions than in acidic conditions.
Granules
The shape of the photocyte granules ranges from more round to more elliptical, and there are three types of photocyte granules. The bioluminescent reaction, and hence the illumination of the photocyte, is confined to the granules. The granules range from 0.6 to 2.5 micrometers in the larval photocytes of Photuris pennsylvanica and between 2.5 and 4.5 micrometers in the adult photocytes of the Asiatic firefly. The size and shape of photocytes can exhibit a great deal of diversity among the species in which they are found. The different types of granules have been observed together within individual photocytes.
Type I
The first type of photocyte granule has been found to contain between two and twelve microtubules. In addition, the matrix of the type I granule lacks a uniform shape or structure with ferritin distributed throughout.
Type II
The second type of photocyte granule contains a large crystal surrounded by several small crystals within a matrix with no definite shape or form. The microtubules in the type II granules are associated with the face of the crystal. In addition, ferritin has been found to be associated with the crystals. Type II granules are hypothesized to exist in Amphiura filiformis photocytes.
Type III
The type III granules are characterized by the fact that they contain several tubules with thick walls. The ferritin present in the granules is associated with filament-like features contained in them.
Identification techniques and culturing
Because the compounds that exhibit bioluminescence are typically fluorescent, fluorescence can be used to identify photocytes in organisms.
References
Bioluminescence | Photocyte | [
"Chemistry",
"Biology"
] | 4,412 | [
"Biochemistry",
"Luminescence",
"Bioluminescence"
] |
7,097,640 | https://en.wikipedia.org/wiki/Input%20Field%20Separators | For many command line interpreters (“shell”) of Unix operating systems, the input field separators or internal field separators or shell variable holds characters used to separate text into tokens.
The value of IFS (in the bash shell) typically includes the space, tab, and newline characters by default. These whitespace characters can be visualized by issuing the "declare" command in the bash shell or by printing with commands like printf %s "$IFS" | od -c, printf "%q\n" "$IFS" or printf %s "$IFS" | cat -A (the latter two commands being only available in some shells and on some systems).
From the Bash, version 4 man page:
The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words on these characters.
If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words.
If IFS has a value other than the default, then sequences of the whitespace characters <space> and <tab> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character).
Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
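As an informal illustration of these rules (a sketch in bash; the sample strings are invented for the example):
data="alpha beta  gamma"
for word in $data; do printf '<%s>\n' "$word"; done
# default IFS: runs of space, tab, and newline act as one delimiter -> <alpha> <beta> <gamma>
IFS=:
record="root:x:0:0"
for field in $record; do printf '[%s]\n' "$field"; done
# with IFS=: each colon delimits a field -> [root] [x] [0] [0]
unset IFS    # restore the default behavior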
IFS abbreviation
According to the Open Group Base Specifications, IFS is an abbreviation for "input field separators." A newer version of this specification mentions that "this name is misleading as the IFS characters are actually used as field terminators." However, IFS is often referred to as "internal field separators."
Exploits
IFS was usable as an exploit in some versions of Unix. A program with root permissions could be fooled into executing user-supplied code if it ran (for instance) system("/bin/mail") and was called with IFS set to "/", in which case it would run the program "bin" (in the current directory and thus writable by the user) with root permissions. This has been fixed by making the shells not inherit the IFS variable.
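The word splitting behind this historical attack can be observed harmlessly in a modern shell; the following only shows how the command string is tokenized, not the privilege escalation itself:
IFS=/
cmd="/bin/mail"
for word in $cmd; do printf 'word: <%s>\n' "$word"; done   # unquoted expansion is split on "/"
unset IFS
# In the affected shells the split yielded the word "bin", which was then looked up
# in the current directory rather than running /bin/mail.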
References
Unix | Input Field Separators | [
"Technology"
] | 500 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
7,097,945 | https://en.wikipedia.org/wiki/GIS%20Day | GIS Day is an annual event celebrating geographic information systems (GIS) based technologies on the third Wednesday of November. The event first took place in 1999. It was initiated by spatial analytics software provider Esri. Esri president and co-founder Jack Dangermond credits Ralph Nader with being the person who inspired the creation of GIS Day. He considered the event a good initiative for people to learn about geography and the many uses of GIS. He wanted GIS Day to be a grassroots effort and open to everyone to participate.
Today, the event provides an international forum for users of GIS technology from across the GIS industry to demonstrate real-world applications that are making a difference in society. GIS technology itself, which originated with the Canada Geographic Information System developed in the 1960s by Roger Tomlinson, is now used worldwide.
Original sponsors of GIS Day included the following organizations:
National Geographic Society
American Association of Geographers (AAG), formerly Association of American Geographers
University Consortium for Geographic Information Science (UCGIS)
United States Geological Survey (USGS)
Library of Congress
Sun Microsystems
Hewlett-Packard
Esri
King Saud University
Additional resources
GeoMentors Program
References
External links
Esri GIS Day Website
Geography Awareness Week
Geographic information systems
November observances
Wednesday observances
Holidays and observances by scheduling (nth weekday of the month) | GIS Day | [
"Technology"
] | 276 | [
"Information systems",
"Geographic information systems"
] |
1,495,134 | https://en.wikipedia.org/wiki/Insular%20cortex | The insular cortex (also insula and insular lobe) is a portion of the cerebral cortex folded deep within the lateral sulcus (the fissure separating the temporal lobe from the parietal and frontal lobes) within each hemisphere of the mammalian brain.
The insulae are believed to be involved in consciousness and play a role in diverse functions usually linked to emotion or the regulation of the body's homeostasis. These functions include compassion, empathy, taste, perception, motor control, self-awareness, cognitive functioning, interpersonal relationships, and awareness of homeostatic emotions such as hunger, pain and fatigue. In relation to these, it is involved in psychopathology.
The insular cortex is divided by the central sulcus of the insula, into two parts: the anterior insula and the posterior insula in which more than a dozen field areas have been identified. The cortical area overlying the insula toward the lateral surface of the brain is the operculum (meaning lid). The opercula are formed from parts of the enclosing frontal, temporal, and parietal lobes.
Structure
The insula is divided into an anterior and a posterior part by the central sulcus of the insula.
Connections
The anterior part of the insula is subdivided by shallow sulci into three or four short gyri.
The anterior insula receives a direct projection from the basal part of the ventral medial nucleus of the thalamus and a particularly large input from the central nucleus of the amygdala. In addition, the anterior insula itself projects to the amygdala.
One study on rhesus monkeys revealed widespread reciprocal connections between the insular cortex and almost all subnuclei of the amygdaloid complex. The posterior insula projects predominantly to the dorsal aspect of the lateral and to the central amygdaloid nuclei. In contrast, the anterior insula projects to the anterior amygdaloid area as well as the medial, the cortical, the accessory basal magnocellular, the medial basal, and the lateral amygdaloid nuclei.
The posterior part of the insula is formed by a long gyrus.
The posterior insula connects reciprocally with the secondary somatosensory cortex and receives input from spinothalamically activated ventral posterior inferior thalamic nuclei. It has also been shown that this region receives inputs from the ventromedial nucleus (posterior part) of the thalamus that are highly specialized to convey homeostatic information such as pain, temperature, itch, local oxygen status, and sensual touch.
A human neuroimaging study using diffusion tensor imaging revealed that the anterior insula is interconnected to regions in the temporal and occipital lobe, opercular and orbitofrontal cortex, triangular and opercular parts of the inferior frontal gyrus. The same study revealed differences in the anatomical connection patterns between the left and right hemisphere.
The circular sulcus of insula (or sulcus of Reil) is a semicircular sulcus or fissure that separates the insula from the neighboring gyri of the operculum in the front, above, and behind.
Cytoarchitecture
The insular cortex has regions of variable cell structure or cytoarchitecture, changing from granular in the posterior portion to agranular in the anterior portion. The insula also receives differential cortical and thalamic input along its length. The anterior insular cortex contains a population of spindle neurons (also called von Economo neurons), identified as characterising a distinctive subregion as the agranular frontal insula.
Development
The insular cortex is considered a separate lobe of the telencephalon by some authorities. Other sources see the insula as a part of the temporal lobe. It is also sometimes grouped with limbic structures deep in the brain into a limbic lobe. As a paralimbic cortex, the insular cortex is considered to be a relatively old structure.
Function
Multimodal sensory processing, sensory binding
Functional imaging studies show activation of the insula during audio-visual integration tasks.
Taste
The anterior insula is part of the primary gustatory cortex. Research in rhesus monkeys has also reported that apart from numerous taste-sensitive neurons, the insular cortex also responds to non-taste properties of oral stimuli related to the texture (viscosity, grittiness) or temperature of food.
Speech
The sensory speech region, Wernicke’s area, and the motor speech region, Broca’s area, are interconnected by a large axonal fiber system known as the arcuate fasciculus which passes directly beneath the insular cortex. On account of this anatomical architecture, ischemic strokes in the insular region can disrupt the arcuate fasciculus. Functional imaging studies on the cerebral correlates of language production also suggest that the anterior insula forms part of the brain network of speech motor control. Moreover, electrical stimulation of the posterior insular can evoke speech disturbances such as speech arrest and reduced voice intensity.
Lesion of the pre-central gyrus of the insula can also cause “pure speech apraxia” (i.e. the inability to speak with no apparent aphasic or orofacial motor impairments). This demonstrates that the insular cortex forms part of a critical circuit for the coordination of complex articulatory movements prior to and during the execution of the motor speech plans. Importantly, this specific cortical circuit is different from those that relate to the cognitive aspects of language production (e.g., Broca’s area on the inferior frontal gyrus). Subvocal, or silent, speech has also been shown to activate right insular cortex, further supporting the theory that the motor control of speech proceeds from the insula.
Interoceptive awareness
There is evidence that, in addition to its base functions, the insula may play a role in certain higher-level functions that operate only in humans and other great apes. The spindle neurons found at a higher density in the right frontal insular cortex are also found in the anterior cingulate cortex, which is another region that has reached a high level of specialization in great apes. It has been speculated that these neurons are involved in cognitive-emotional processes that are specific to primates including great apes, such as empathy and metacognitive emotional feelings. This is supported by functional imaging results showing that the structure and function of the right frontal insula is correlated with the ability to feel one's own heartbeat, or to empathize with the pain of others. It is thought that these functions are not distinct from the lower-level functions of the insula but rather arise as a consequence of the role of the insula in conveying homeostatic information to consciousness. The right anterior insula is engaged in interoceptive awareness of homeostatic emotions such as thirst, pain and fatigue, and the ability to time one's own heartbeat. Moreover, greater right anterior insular gray matter volume correlates with increased accuracy in this subjective sense of the inner body, and with negative emotional experience. It is also involved in the control of blood pressure, in particular during and after exercise, and its activity varies with the amount of effort a person believes he/she is exerting.
The insular cortex also is where the sensation of pain is judged as to its degree. Lesion of the insula is associated with dramatic loss of pain perception and isolated insular infarction can lead to contralateral elimination of pinprick perception. Further, the insula is where a person imagines pain when looking at images of painful events while thinking about their happening to one's own body. Those with irritable bowel syndrome have abnormal processing of visceral pain in the insular cortex related to dysfunctional inhibition of pain within the brain.
Physiological studies in rhesus monkeys have shown that neurons in the insula respond to skin stimulation. PET studies have also revealed that the human insula can also be activated by vibrational stimulation to the skin.
Another perception processed by the right anterior insula is the degree of nonpainful warmth or nonpainful coldness of a skin sensation. Other internal sensations processed by the insula include stomach or abdominal distension. A full bladder also activates the insular cortex.
One brain imaging study suggests that the unpleasantness of subjectively perceived dyspnea is processed in the right human anterior insula and amygdala.
The cerebral cortex processing vestibular sensations extends into the insula, with small lesions in the anterior insular cortex being able to cause loss of balance and vertigo.
Other noninteroceptive perceptions include passive listening to music, laughter and crying, empathy and compassion, and language.
Motor control
In motor control, it contributes to hand-and-eye motor movement, swallowing, gastric motility, and speech articulation. It has been identified as a "central command" centre that ensures that heart rate and blood pressure increase at the onset of exercise. Research upon conversation links it to the capacity for long and complex spoken sentences. It is also involved in motor learning and has been identified as playing a role in the motor recovery from stroke.
Homeostasis
It plays a role in a variety of homeostatic functions related to basic survival needs, such as taste, visceral sensation, and autonomic control. The insula controls autonomic functions through the regulation of the sympathetic and parasympathetic systems. It has a role in regulating the immune system.
Self
The insula has been identified as playing a role in the experience of bodily self-awareness, sense of agency, and sense of body ownership.
Social emotions
The anterior insula processes a person's sense of disgust both to smells and to the sight of contamination and mutilation — even when just imagining the experience. This associates with a mirror neuron-like link between external and internal experiences.
In social experience, it is involved in the processing of norm violations, emotional processing, empathy, and orgasms.
The insula is active during social decision making. Tiziana Quarto et al. measured the emotional intelligence (EI) (the ability to identify, regulate, and process one's own emotions and those of others) of sixty-three healthy subjects. Using fMRI, EI was measured in correlation with left insular activity. The subjects were shown various pictures of facial expressions and tasked with deciding to approach or avoid the person in the picture. The results of the social decision task showed that individuals with high EI scores had left insular activation when processing fearful faces, whereas individuals with low EI scores had left insular activation when processing angry faces.
Emotions
The insular cortex, in particular its most anterior portion, is considered a limbic-related cortex. The insula has increasingly become the focus of attention for its role in body representation and subjective emotional experience. In particular, Antonio Damasio has proposed that this region plays a role in mapping visceral states that are associated with emotional experience, giving rise to conscious feelings. This is in essence a neurobiological formulation of the ideas of William James, who first proposed that subjective emotional experience (i.e., feelings) arise from our brain's interpretation of bodily states that are elicited by emotional events. This is an example of embodied cognition.
In terms of function, the insula is believed to process convergent information to produce an emotionally relevant context for sensory experience. To be specific, the anterior insula is related more to olfactory, gustatory, viscero-autonomic, and limbic function, whereas the posterior insula is related more to auditory-somesthetic-skeletomotor function. Functional imaging experiments have revealed that the insula has an important role in pain experience and the experience of a number of basic emotions, including anger, fear, disgust, happiness, and sadness.
The anterior insular cortex (AIC) is believed to be responsible for emotional feelings, including maternal and romantic love, anger, fear, sadness, happiness, sexual arousal, disgust, aversion, unfairness, inequity, indignation, uncertainty, disbelief, social exclusion, trust, empathy, sculptural beauty, a ‘state of union with God’, and hallucinogenic states.
Functional imaging studies have also implicated the insula in conscious desires, such as food craving and drug craving. What is common to all of these emotional states is that they each change the body in some way and are associated with highly salient subjective qualities. The insula is well-situated for the integration of information relating to bodily states into higher-order cognitive and emotional processes. The insula receives information from "homeostatic afferent" sensory pathways via the thalamus and sends output to a number of other limbic-related structures, such as the amygdala, the ventral striatum, and the orbitofrontal cortex, as well as to motor cortices.
A study using magnetic resonance imaging found that the right anterior insula is significantly thicker in people that meditate. Other research into brain activity and meditation has shown an increase in grey matter in areas of the brain including the insular cortex.
Another study using voxel-based morphometry and MRI on experienced Vipassana meditators was done to extend the findings of Lazar et al., which found increased grey matter concentrations in this and other areas of the brain in experienced meditators.
The strongest evidence against a causative role for the insula cortex in emotion comes from Damasio et al. (2012) which showed that a patient who suffered bilateral lesions of the insula cortex expressed the full complement of human emotions, and was fully capable of emotional learning.
Salience
Functional neuroimaging research suggests the insula is involved in two types of salience. The first is interoceptive information processing that links interoception with emotional salience to generate a subjective representation of the body; this involves the anterior insular cortex together with the pregenual anterior cingulate cortex (Brodmann area 33) and the anterior and posterior mid-cingulate cortices. The second is a general salience network concerned with environmental monitoring, response selection, and skeletomotor body orientation, which involves all of the insular cortex and the mid-cingulate cortex. A related idea is that the anterior insula, as part of the salience network, interacts with the mid-posterior insula to combine salient stimuli with autonomic information, leading to a high state of physiological awareness of salient stimuli.
An alternative or perhaps complementary proposal is that the right anterior insula regulates the interaction between the salience of the selective attention created to achieve a task (the dorsal attention system) and the salience of the arousal created to keep focus on the relevant part of the environment (the ventral attention system). This regulation of salience might be particularly important during challenging tasks, where attention might fatigue and cause careless mistakes, but where too much arousal risks degrading performance by turning into anxiety.
Decision making
Studies have shown that damage or dysfunction in the insular cortex can impair decision-making, emotional regulation, and social behavior. The insula is considered a key brain structure in the neural circuitry underlying complex decision-making processes. It plays a significant role in integrating internal and external cues to facilitate adaptive choices.
Auditory perception
Research indicates that the insular cortex is involved in auditory perception. Responses to sound stimuli were obtained using intracranial EEG recordings acquired from patients with epilepsy. The posterior part of the insula showed auditory responses that resemble those observed in Heschl's gyrus, whereas the anterior part responded to the emotional contents of the auditory stimuli. Clinical data additionally shows that bilateral damage to the insula after ischemic injury or trauma can lead to auditory agnosia. Functional magnetic resonance studies have also demonstrated that the insular cortex participates in many key auditory processes such as tuning into novel auditory stimuli and allocating auditory attention.
Direct recordings from the posterior part of the insula showed responses to unexpected sounds within regular auditory streams, a process known as auditory deviance detection. Researchers observed a mismatch negativity (MMN) potential, a well-known event-related potential, as well as high-frequency activity signals originating from local neurons.
Simple auditory illusions and hallucinations were elicited by electrical functional mapping.
Clinical significance
Progressive expressive aphasia
Progressive expressive aphasia is the deterioration of normal language function that causes individuals to lose the ability to communicate fluently while comprehension of single words and other non-linguistic cognition remain intact. It is found in a variety of degenerative neurological conditions including Pick's disease, motor neuron disease, corticobasal degeneration, frontotemporal dementia, and Alzheimer's disease. It is associated with hypometabolism and atrophy of the left anterior insular cortex.
Addiction
A number of functional brain imaging studies have shown that the insular cortex is activated when drug users are exposed to environmental cues that trigger cravings. This has been shown for a variety of drugs, including cocaine, alcohol, opiates, and nicotine. Despite these findings, the insula has been ignored within the drug addiction literature, perhaps because it is not known to be a direct target of the mesocortical dopamine system, which is central to current dopamine reward theories of addiction. Research published in 2007 has shown that cigarette smokers suffering damage to the insular cortex, from a stroke for instance, have their addiction to cigarettes practically eliminated. These individuals were found to be up to 136 times more likely to undergo a disruption of smoking addiction than smokers with damage in other areas. Disruption of addiction was evidenced by self-reported behavior changes such as quitting smoking less than one day after the brain injury, quitting smoking with great ease, not smoking again after quitting, and having no urge to resume smoking since quitting. The study was conducted on average eight years after the strokes, which opens up the possibility that recall bias could have affected the results. More recent prospective studies, which overcome this limitation, have corroborated these findings. This suggests a significant role for the insular cortex in the neurological mechanisms underlying addiction to nicotine and other drugs, and would make this area of the brain a possible target for novel anti-addiction medication. In addition, this finding suggests that functions mediated by the insula, especially conscious feelings, may be particularly important for maintaining drug addiction, although this view is not represented in any modern research or reviews of the subject.
A recent study in rats by Contreras et al. corroborates these findings by showing that reversible inactivation of the insula disrupts amphetamine conditioned place preference, an animal model of cue-induced drug craving. In this study, insula inactivation also disrupted "malaise" responses to lithium chloride injection, suggesting that the representation of negative interoceptive states by the insula plays a role in addiction. However, in this same study, the conditioned place preference took place immediately after the injection of amphetamine, suggesting that it is the immediate, pleasurable interoceptive effects of amphetamine administration, rather than the delayed, aversive effects of amphetamine withdrawal that are represented within the insula.
A model proposed by Naqvi et al. (see above) is that the insula stores a representation of the pleasurable interoceptive effects of drug use (e.g., the airway sensory effects of nicotine, the cardiovascular effects of amphetamine), and that this representation is activated by exposure to cues that have previously been associated with drug use. A number of functional imaging studies have shown the insula to be activated during the administration of addictive psychoactive drugs. Several functional imaging studies have also shown that the insula is activated when drug users are exposed to drug cues, and that this activity is correlated with subjective urges. In the cue-exposure studies, insula activity is elicited when there is no actual change in the level of drug in the body. Therefore, rather than merely representing the interoceptive effects of drug use as it occurs, the insula may play a role in memory for the pleasurable interoceptive effects of past drug use, anticipation of these effects in the future, or both. Such a representation may give rise to conscious urges that feel as if they arise from within the body. This may make addicts feel as if their bodies need to use a drug, and may result in persons with lesions in the insula reporting that their bodies have forgotten the urge to use, according to this study.
Subjective certainty in ecstatic seizures
A common quality in mystical experiences is a strong feeling of certainty which cannot be expressed in words. Fabienne Picard proposes a neurological explanation for this subjective certainty, based on clinical research of epilepsy.
According to Picard, this feeling of certainty may be caused by a dysfunction of the anterior insula, a part of the brain which is involved in interoception, self-reflection, and in avoiding uncertainty about the internal representations of the world by "anticipation of resolution of uncertainty or risk". This avoidance of uncertainty functions through the comparison between predicted states and actual states, that is, "signaling that we do not understand, i.e., that there is ambiguity." Picard notes that "the concept of insight is very close to that of certainty," and refers to Archimedes' "Eureka!" Picard hypothesizes that during ecstatic seizures the comparison between predicted states and actual states no longer functions, and that mismatches between predicted state and actual state are no longer processed, blocking "negative emotions and negative arousal arising from predictive uncertainty," which will be experienced as emotional confidence. Picard concludes that "[t]his could lead to a spiritual interpretation in some individuals."
Other clinical conditions
The insular cortex has been suggested to have a role in anxiety disorders, emotion dysregulation, and anorexia nervosa.
History
The insula was first described by Johann Christian Reil in the course of his descriptions of the cranial and spinal nerves and plexuses. Henry Gray, in Gray's Anatomy, is responsible for it being known as the Island of Reil. John Allman and colleagues showed that the anterior insular cortex contains spindle neurons.
Additional images
See also
List of regions in the human brain
References
External links
Location and literature citations for the insula
Homeostasis
Human homeostasis
Cerebral cortex | Insular cortex | [
"Biology"
] | 4,669 | [
"Human homeostasis",
"Homeostasis"
] |
1,495,383 | https://en.wikipedia.org/wiki/Gary%20Scavone | Gary Paul Scavone is a computer music researcher and musician.
Scavone is currently an associate professor of music technology at McGill University. Previously, Scavone directed the Center for Computer Research in Music and Acoustics at Stanford University. He, along with Perry Cook, authored the Synthesis Toolkit (STK). After conducting extensive research into the digital modeling of woodwind instruments (the subject of his doctoral dissertation), Scavone turned to the electronic synthesis of such instruments.
Scavone plays saxophone. He studied classical saxophone at the Conservatoire National de Région de Bordeaux, France, with Jean-Marie Londeix in 1989, sponsored by a Fulbright scholarship. In the summers of 1987, 1988 and 1990 he played as a street musician in almost every major European capital together with Dan Gordon.
References
External links
Gary P. Scavone's Home Page
Canadian musicologists
Year of birth missing (living people)
Living people
Stanford University Department of Music faculty
Canadian music academics
Academic staff of McGill University | Gary Scavone | [
"Technology"
] | 203 | [
"Computing stubs",
"Computer specialist stubs"
] |
1,495,421 | https://en.wikipedia.org/wiki/Li%20%28unit%29 | Li or ri (, lǐ, or , shìlǐ), also known as the Chinese mile, is a traditional Chinese unit of distance. The li has varied considerably over time but was usually about one third of an English mile and now has a standardized length of a half-kilometer (). This is then divided into 1,500 chi or "Chinese feet".
The character 里 combines the characters for "field" (田, tián) and "earth" (土, tǔ), since it was considered to be about the length of a single village. As late as the 1940s, a "li" did not represent a fixed measure but could be longer or shorter depending on the effort required to cover the distance.
There is also another li (Traditional: 釐, Simplified: 厘, lí) that indicates a unit of length equal to one-thousandth of a chi, but it is used much less commonly. This li is used in the People's Republic of China as the equivalent of the centi- prefix in metric units, thus limi (厘米, límǐ) for centimeter. The tonal difference makes it distinguishable to speakers of Chinese, but unless specifically noted otherwise, any reference to li will always refer to the longer traditional unit and not to either the shorter unit or the kilometer. This traditional unit, in terms of historical usage and distance proportion, can be considered the East Asian counterpart to the Western league unit, although in English a league commonly means three miles.
Changing values
Like most traditional Chinese measurements, the li was reputed to have been established by the Yellow Emperor at the founding of Chinese civilization around 2600 BC and standardized by Yu the Great of the Xia dynasty six hundred years later. Although the value varied from state to state during the Spring and Autumn period and Warring States periods, historians give a general value to the li of 405 meters prior to the Qin dynasty imposition of its standard in the 3rd century BC.
The basic Chinese traditional unit of distance was the chi. As its value changed over time, so did that of the li. In addition, the number of chi per li was sometimes altered. To add further complexity, under the Qin dynasty, the li was set at 360 "paces" (步, bù) but the number of chi per bu was subsequently changed from 6 to 5, shortening the li by one-sixth. Thus, the Qin li of about 576 meters became (with other changes) the Han li, which was standardized at 415.8 meters.
The basic units of measurement remained stable over the Qin and Han periods. A bronze imperial standard measure, dated AD 9, had been preserved at the Imperial Palace in Beijing and came to light in 1924. This has allowed very accurate conversions to modern measurements, which has provided a new and extremely useful additional tool in the identification of place names and routes. These measurements have been confirmed in many ways including the discovery of a number of rulers found at archaeological sites, and careful measurements of distances between known points. The Han li was calculated by Dubs to be 415.8 metres and all indications are that this is a precise and reliable determination.
Under the Tang dynasty (AD 618–907), the li was approximately 323 meters.
In the late Manchu or Qing dynasty, the number of chi was increased from 1,500 per li to 1,800. This had a value of 2115 feet or 644.6 meters. In addition, the Qing added a longer unit called the tu, which was equal to 150 li (96.7 km).
These changes were undone by the Republic of China under Chiang Kai-shek, which adopted the metric system in 1928. The Republic of China (now also known as Taiwan) does not use the li at all, only the kilometer (Mandarin: 公里, gōnglǐ, lit. "common li").
Under Mao Zedong, the People's Republic of China reinstituted the traditional units as a measure of anti-imperialism and cultural pride before officially adopting the metric system in 1984. A place was made within this for the traditional units, which were restandardized to metric values. A modern li is thus set at exactly half a kilometer (500 meters). However, unlike the jin which is still frequently preferred in daily use over the kilogram, the li is almost never used. Nonetheless, its appearance in many phrases and sayings means that "kilometer" must always be specified by saying gōnglǐ in full.
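For reference, the modern relationships described above can be captured in a few lines of Python. This is an illustrative sketch only; the constant and function names are chosen here for convenience and are not part of any standard library.

```python
# Modern (post-1984) values described above: 1 li = 500 m, divided into 1,500 chi.
M_PER_LI = 500.0
CHI_PER_LI = 1500
M_PER_MILE = 1609.344

def li_to_km(li):
    """Convert modern li to kilometers."""
    return li * M_PER_LI / 1000.0

def li_to_miles(li):
    """Convert modern li to statute miles."""
    return li * M_PER_LI / M_PER_MILE

print(li_to_km(1))                      # 0.5 km
print(round(li_to_miles(1), 3))         # about 0.311 miles
print(round(M_PER_LI / CHI_PER_LI, 4))  # 1 chi = 0.3333 m
```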
Cultural use
As one might expect for the equivalent of "mile", li appears in many Chinese sayings, locations, and proverbs as an indicator of great distances or the exotic:
One Chinese name for the Great Wall is the "Ten-Thousand-Li Long Wall" (萬里長城, Wànlǐ Chángchéng). As in Greek, the number "ten thousand" is used figuratively in Chinese to mean any "immeasurable" value, and this title has never referred to a literal distance of 10,000 li (5,000 km). The actual length of the modern Great Wall is around 42,000 li (about 21,000 km), over 4 times the name's proverbially "immeasurable" length.
The Chinese proverb appearing in chapter 64 of the Tao Te Ching and commonly rendered as "A journey of a thousand miles begins with a single step" in fact refers to a thousand li: 千里之行,始於足下 (Qiānlǐzhīxíng, shǐyúzúxià).
The greatest horses of Chinese history, including Red Hare and Hualiu (驊騮), are all referred to as "thousand-li horses" (千里馬, qiānlǐmǎ), since they could supposedly travel a thousand li in a single day.
Li is sometimes used in location names, for example: Wulipu (Chinese: 五里铺镇), Hubei; Ankang Wulipu Airport (Chinese: 安康五里铺机场), Shaanxi. Sanlitun (三里屯) is an area in Beijing.
Ri in Japan and Korea
The present day Korean ri (리, 里) and Japanese ri (里) are units of measurements that can be traced back to the Chinese li (里).
Although the Chinese unit was unofficially used in Japan since the Zhou dynasty, the countries officially adopted the measurement used by the Tang dynasty (618–907 AD). The ri of an earlier era in Japan was thus true to the Chinese length, corresponding to six chō (500–600 m), but later evolved to denote the distance that a person carrying a load would aim to cover on mountain roads in one hour. Thus, there had been various ri of 36, 40, and 48 chō. In the Edo period, the Tokugawa shogunate defined 1 ri as 36 chō, allowing other variants, and the Japanese government adopted this last definition in 1891. The Japanese ri was, at that time, fixed to the metric system at approximately 3.93 kilometres, or about 2.44 miles. Therefore, one must be careful about the correspondence between chō and ri. See Kujūkuri Beach (99-ri beach) for a case.
In South Korea, the ri currently in use is a unit taken from the Han dynasty (206 BC–220 AD) li. It has a value of approximately 392.72 meters, about one tenth of the Japanese ri. The Aegukga, the national anthem of South Korea, and the Aegukka, the national anthem of North Korea, both mention 3,000 ri, which roughly corresponds to 1,200 km, the approximate longitudinal span of the Korean peninsula.
In North Korea the Chollima Movement, a campaign aimed at improving labour productivity along the lines of the earlier Soviet Stakhanovite movement, gets its name from the word "chollima" which refers to a thousand-ri horse (chŏn + ri + ma in North Korean Romanization).
See also
Chinese units of measurement
Japanese units of measurement
Korean units of measurement
League (unit) for a general discussion of league-style units
Qianlima for more on "thousand-li horse" including North Korean Chollima
Li (short)
References
Citations
Sources
Homer H. Dubs (1938): The History of the Former Han Dynasty by Pan Ku. Vol. One. Translator and editor: Homer H. Dubs. Baltimore. Waverly Press, Inc.
Homer H. Dubs (1955): The History of the Former Han Dynasty by Pan Ku. Vol. Three. Translator and editor: Homer H. Dubs. Ithaca, New York. Spoken Languages Services, Inc.
Hulsewé, A. F. P. (1961). "Han measures". A. F. P. Hulsewé, T'oung pao Archives, Vol. XLIX, Livre 3, pp. 206–207.
Needham, Joseph. (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 3, Civil Engineering and Nautics. Taipei: Caves Books Ltd.
History of science and technology in China
Units of length | Li (unit) | [
"Mathematics"
] | 1,836 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
1,495,467 | https://en.wikipedia.org/wiki/Maximum%20length%20sequence | A maximum length sequence (MLS) is a type of pseudorandom binary sequence.
They are bit sequences generated using maximal linear-feedback shift registers and are so called because they are periodic and reproduce every binary sequence (except the zero vector) that can be represented by the shift registers (i.e., for length-m registers they produce a sequence of length 2^m − 1). An MLS is also sometimes called an n-sequence or an m-sequence. MLSs are spectrally flat, with the exception of a near-zero DC term.
These sequences may be represented as coefficients of irreducible polynomials in a polynomial ring over Z/2Z.
Practical applications for MLS include measuring impulse responses (e.g., of room reverberation or arrival times from towed sources in the ocean). They are also used as a basis for deriving pseudo-random sequences in digital communication systems that employ direct-sequence spread spectrum and frequency-hopping spread spectrum transmission systems, and in the efficient design of some fMRI experiments.
Generation
MLS are generated using maximal linear-feedback shift registers. An MLS-generating system with a shift register of length 4 is shown in Fig. 1. It can be expressed using the following recursive relation:
$$
\begin{aligned}
a_3[n+1] &= a_0[n] \oplus a_1[n] \\
a_2[n+1] &= a_3[n] \\
a_1[n+1] &= a_2[n] \\
a_0[n+1] &= a_1[n]
\end{aligned}
$$

where n is the time index and $\oplus$ represents modulo-2 addition. For bit values 0 = FALSE or 1 = TRUE, this is equivalent to the XOR operation.
As MLS are periodic and shift registers cycle through every possible binary value except the zero vector, registers can be initialized to any nonzero state.
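As a concrete illustration, the following Python sketch generates one period of the length-15 sequence produced by the four-stage register described above. The function name, tap-numbering convention, and seed are choices made here for illustration, not part of any standard library; the taps (1, 2) are picked to match the recurrence a3[n+1] = a0[n] ⊕ a1[n] given above.

```python
def mls(taps, nbits, seed=1):
    """Generate one period (2**nbits - 1 samples) of a maximum length sequence.

    taps  : 1-indexed bit positions, counted from the output bit, whose XOR
            feeds back into the top of the register; (1, 2) reproduces the
            recurrence a3[n+1] = a0[n] XOR a1[n] given above.
    nbits : register length m.
    seed  : any nonzero initial register state.
    """
    period = (1 << nbits) - 1
    state = seed & period
    if state == 0:
        raise ValueError("the all-zero state is not allowed")
    out = []
    for _ in range(period):
        out.append(state & 1)                    # output the lowest bit
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1   # XOR the tapped bits
        state = (state >> 1) | (feedback << (nbits - 1))
    return out

print(mls(taps=(1, 2), nbits=4))
# [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1] -- 15 samples, 8 ones, 7 zeros
```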
Polynomial interpretation
A polynomial over GF(2) can be associated with the linear-feedback shift register. Its degree equals the length of the shift register, and its coefficients are either 0 or 1, corresponding to the taps of the register that feed the XOR gate. For example, the polynomial corresponding to Figure 1 is $x^4 + x + 1$.
A necessary and sufficient condition for the sequence generated by a LFSR to be maximal length is that its corresponding polynomial be primitive.
Implementation
MLS are inexpensive to implement in hardware or software, and relatively low-order feedback shift registers can generate long sequences; a sequence generated using a shift register of length 20 is 2^20 − 1 samples long (1,048,575 samples).
Properties of maximum length sequences
MLS have the following properties, as formulated by Solomon Golomb.
Balance property
The occurrence of 0 and 1 in the sequence should be approximately the same. More precisely, in a maximum length sequence of length 2^m − 1 there are 2^(m−1) ones and 2^(m−1) − 1 zeros. The number of ones equals the number of zeros plus one, since the state containing only zeros cannot occur.
Run property
A "run" is a sub-sequence of consecutive "1"s or consecutive "0"s within the MLS concerned. The number of runs is the number of such sub-sequences.
Of all the "runs" (consisting of "1"s or "0"s) in the sequence :
One half of the runs are of length 1.
One quarter of the runs are of length 2.
One eighth of the runs are of length 3.
... etc. ...
Correlation property
The circular autocorrelation of an MLS is a Kronecker delta function (with DC offset and time delay, depending on implementation). For the ±1 convention, i.e., bit value 1 is assigned $+1$ and bit value 0 is assigned $-1$, mapping XOR to the negative of the product:

$$
R(\ell) = \frac{1}{N} \sum_{n=0}^{N-1} s[n]\, s^*[n+\ell] =
\begin{cases}
1 & \text{if } \ell = 0, \\
-\tfrac{1}{N} & \text{if } 0 < \ell < N,
\end{cases}
$$

where $s^*$ represents the complex conjugate and $[n+\ell]$ represents a circular shift (indices taken modulo the sequence length $N$).
The linear autocorrelation of an MLS approximates a Kronecker delta.
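These properties are easy to check numerically. The short sketch below is illustrative only: it hard-codes the length-15 sequence from the generation example above and verifies the balance property and the circular autocorrelation under the ±1 mapping.

```python
import numpy as np

# One period of the length-15 MLS from the generation example above.
bits = np.array([1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1])
N = len(bits)                             # 2**4 - 1 = 15

# Balance property: 2**(m-1) ones and 2**(m-1) - 1 zeros.
print(int(bits.sum()), int(N - bits.sum()))   # 8 7

# Correlation property: map bit 1 -> +1, bit 0 -> -1 and compute the
# circular autocorrelation; it is 1 at lag 0 and -1/N at every other lag.
s = 2.0 * bits - 1.0
acf = np.array([np.dot(s, np.roll(s, -k)) / N for k in range(N)])
print(np.round(acf, 3))                   # [ 1.    -0.067 -0.067 ... -0.067]
```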
Extraction of impulse responses
If a linear time invariant (LTI) system's impulse response is to be measured using an MLS, the response can be extracted from the measured system output y[n] by taking its circular cross-correlation with the MLS. This is because the autocorrelation of an MLS is 1 at zero lag and nearly zero (−1/N, where N is the sequence length) at all other lags; in other words, the autocorrelation of the MLS approaches a unit impulse function as the MLS length increases.
If the impulse response of a system is h[n] and the MLS is s[n], then

$$y[n] = (h \circledast s)[n],$$

where $\circledast$ denotes (circular) convolution. Taking the cross-correlation with respect to s[n] of both sides,

$$\phi_{sy} = h[n] \circledast \phi_{ss},$$

and assuming that $\phi_{ss}$ is an impulse (valid for long sequences),

$$h[n] = \phi_{sy}[n].$$
Any signal with an impulsive autocorrelation can be used for this purpose, but signals with high crest factor, such as the impulse itself, produce impulse responses with poor signal-to-noise ratio. It is commonly assumed that the MLS would then be the ideal signal, as it consists of only full-scale values and its digital crest factor is the minimum, 0 dB. However, after analog reconstruction, the sharp discontinuities in the signal produce strong intersample peaks, degrading the crest factor by 4-8 dB or more, increasing with signal length, making it worse than a sine sweep. Other signals have been designed with minimal crest factor, though it is unknown if it can be improved beyond 3 dB.
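A rough sketch of the extraction procedure is given below. It is illustrative only: the short length-15 sequence and the impulse response h_true are made up here for demonstration, and a real measurement would use a much longer MLS so that the −1/N floor becomes negligible.

```python
import numpy as np

# One period of the length-15 MLS from the generation example, mapped to +/-1.
bits = np.array([1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1])
s = 2.0 * bits - 1.0
N = len(s)

# Hypothetical impulse response of the system under test (illustration only).
h_true = np.array([1.0, 0.6, 0.25, 0.0, -0.1])

# Output of the system driven by the periodic MLS: circular convolution of h and s.
y = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h_true, N)))

# Circular cross-correlation phi_sy, computed via the FFT and normalised by N.
h_est = np.real(np.fft.ifft(np.conj(np.fft.fft(s)) * np.fft.fft(y))) / N

# h_est approximates h_true; the residual bias (of order sum(h)/N, coming from
# the -1/N autocorrelation floor) shrinks as the sequence length grows.
print(np.round(h_est[:len(h_true)], 2))
```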
Relationship to Hadamard transform
Cohn and Lempel showed the relationship of the MLS to the Hadamard transform. This relationship allows the correlation of an MLS to be computed in a fast algorithm similar to the FFT.
See also
Barker code
Complementary sequences
Federal Standard 1037C
Frequency response
Gold code
Impulse response
Polynomial ring
References
External links
— Short on-line tutorial describing how MLS is used to obtain the impulse response of a linear time-invariant system. Also describes how nonlinearities in the system can show up as spurious spikes in the apparent impulse response.
— Paper describing MLS generation. Contains C-code for MLS generation using up to 18-tap-LFSRs and matching Hadamard transform for impulse response extraction.
A (binaural) room impulse response database generated by means of maximum length sequences.
— Implementing lfsr's in FPGAs includes listing of taps for 3 to 168 bits
Pseudorandomness
Polynomials
Binary sequences | Maximum length sequence | [
"Mathematics"
] | 1,255 | [
"Polynomials",
"Algebra"
] |